Performance – Yes, we care!

It’s already common knowledge that site speed equals conversion. The giants – Amazon, Google, Microsoft and others – have shared their figures on this, and it’s clear that the faster you deliver your content, the happier your users will be, which ultimately comes down to your profit. So how do you keep that load time just under 1 second and still deliver all the high-quality graphics and all the modern bells and whistles? That’s not so easy, and there are several aspects to it. We’ll take a closer look at each of them in a minute.

First, it’s important to realize that the time your site takes to load is made up of several components: starting from the hardware and the time it takes data to travel between your servers and your clients (in both directions!), through the time your web application needs to generate a response, and ending at the speed at which the page is rendered in the viewer’s browser. While the ultimate goal is to minimize the time at every step of the way, it might not always pay off that quickly. Let’s start with some of the low-hanging fruit.


Assets


Modern webpages tend to be bloated with dozens of icons, images, CSS styles, custom fonts, and jQuery plugins. Usually each of these assets lives in a separate file under the URL tree of the website. This has several performance implications:
most browsers have a limit of 6-8 concurrent requests to one web host,
many websites set user cookies, which are then resent by the client along with each request,
each request is prone to the TCP slow-start issue.

Domain sharding

Take a moment and visit Steve Souders’ website to check how many parallel downloads you can get: http://stevesouders.com/hpws/parallel-downloads.php – I get 6 in Chrome.

Parallel download limit at work

That means any request after the first 6 has to wait until the earlier ones finish, and this is clearly visible in the timeline section of the network tab of the Chrome inspector. How does that affect your webpage? Count the number of images, divide by 6, and you’ll get the rough number of request batches that will be made; multiply that by the average load time of an asset and you’ll get a lower bound on your page load time. There’s a relatively easy fix for that – the browser only checks the target host name, so we can get around the limit by applying the so-called domain sharding technique (both the estimate and the technique are sketched in code right after the figure). Here’s how Etsy.com benefits from that:

Handling the parallel connection limit through domain sharding


Zoom in on the host part of the image URLs – it’s img0.etsystatic.com, img1.etsystatic.com, etc.
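To make both the back-of-envelope estimate and the sharding trick concrete, here’s a minimal TypeScript sketch. The shard hostnames and the numbers are illustrative (loosely modeled on Etsy’s imgN-style pattern), not taken from any real configuration; the important property is that the shard choice is a deterministic function of the asset path, so a given image always resolves to the same host and stays cacheable.

```typescript
// Rough lower bound on page load time given the per-host connection limit:
// assets are fetched in batches of `maxParallel`, and each batch takes
// roughly the average asset load time.
function estimateLoadTimeLowerBoundMs(
  assetCount: number,
  avgAssetLoadMs: number,
  maxParallel = 6 // typical per-host limit in desktop browsers
): number {
  const batches = Math.ceil(assetCount / maxParallel);
  return batches * avgAssetLoadMs;
}

// Deterministically map an asset path to one of N shard hostnames.
// Determinism matters: if the same image bounced between hosts on every
// page view, the browser would cache each copy as a different URL.
function shardHost(assetPath: string, shardCount = 4): string {
  let hash = 0;
  for (const ch of assetPath) {
    hash = (hash * 31 + ch.charCodeAt(0)) | 0; // keep it a 32-bit int
  }
  const shard = Math.abs(hash) % shardCount;
  return `img${shard}.example-static.com`; // hypothetical shard host
}

// 30 images at ~200 ms each: 5 batches -> at least ~1000 ms for images alone.
console.log(estimateLoadTimeLowerBoundMs(30, 200));
// Always the same shard for the same path:
console.log(shardHost("/images/listing/1234/photo.jpg"));
```

With four shards the browser can open up to 4 × 6 connections instead of 6, cutting the number of batches roughly fourfold; the usual rule of thumb is to stop at two to four shards, since every extra host adds a DNS lookup and a fresh TCP slow-start.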
This brings us to our second issue – the overhead involved with user cookies. Let’s take a closer look at it.

Cookies (and why you should let Cookie Monster eat them instead of sending them in asset requests)


Cookies

What does this actually mean? Aside from legal issues, it means that each time you request a resource from the web server, you send some data about yourself along. Compare the size of the request (what your web browser sends to fetch an image) with and without cookies (again – courtesy of Etsy.com):

Cookie payload sent with each request


That’s roughly 2 kilobytes of data sent along with a request for an image under the etsy.com domain.

Request to a "cookieless" domain is much shorter

Here a mere 500 bytes are transferred from the client to the server while requesting an image from img1.etsystatic.com.

While cookie data is very important for the application to properly interact with the client, it’s mostly useless for loading static objects like CSS and images. Add to that the fact that the majority of clients use an asymmetric link (they’re quicker to download than to upload), multiply by the number of asset requests you make, and you’ll see the gain you can get from moving static assets away from the main application domain to a so-called cookieless domain, i.e. a domain which doesn’t set any cookies.
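Here’s a quick sanity check on those numbers, plus a sketch of the mechanics, in TypeScript. The request sizes are the ones measured on Etsy above; the hostnames, the asset count, and the handler are hypothetical. The key detail is the cookie’s Domain attribute: a Set-Cookie without a Domain attribute creates a host-only cookie that is never sent to other subdomains, while Domain=.example.com would leak it to every subdomain, including a static one – which is one reason sites like Etsy serve assets from a separate domain (etsystatic.com) altogether.

```typescript
import * as http from "http";

// Back-of-envelope upstream savings (request sizes from the Etsy captures):
const withCookiesBytes = 2000; // ~2 KB per asset request incl. cookies
const cookielessBytes = 500;   // ~500 bytes without cookies
const assetRequests = 60;      // hypothetical page with 60 assets

const savedBytes = (withCookiesBytes - cookielessBytes) * assetRequests;
console.log(`~${savedBytes / 1000} KB less upstream traffic per page view`);
// ~90 KB saved on upload – noticeable on a slow asymmetric uplink.

// Minimal demonstration of cookie scoping. Serve the app from
// www.example.com (hypothetical) and point asset URLs at
// static.example.com: asset requests then stay cookie-free.
const server = http.createServer((_req, res) => {
  res.setHeader(
    "Set-Cookie",
    // No "Domain=" attribute: this is a host-only cookie, sent back only
    // to the exact host that set it, never to static.example.com.
    // ("Domain=.example.com" here would leak it to all subdomains.)
    "session=abc123; Path=/; HttpOnly"
  );
  res.end('<img src="//static.example.com/img/logo.png">');
});

server.listen(8080);
```

The same scoping rule is why sharded hosts (img0, img1, etc.) are usually placed under the static domain as well – every one of them then benefits from cookie-free requests.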