First of all, some required reading: go here and read this.
Essentially, my pitch is this: it is not 20% backend time, it is 20% response-to-initial-request time. That's not quite as catchy, granted, but it depends on how you look at it.
Whilst I accept that 'backend' time is generally defined as "the time it takes for the server to get the first byte back to the client", I don't understand why this terminology should only be applied to the initial request. Every time a page has to download additional content, the application has to carry out some work and deliver that content to the client. This is additional backend time; it's just not the initial backend time.
Of course, sites that generate content dynamically on the fly will carry a more significant overhead, but even sites that serve static content (mostly images, though JS and CSS files aren't usually generated per user either) can incur an element of backend processing time, and this time should be taken into account when discussing the 80/20 rule. I still agree with Steve's quote that "… the longest pole in the tent was frontend performance …", but there also needs to be some perspective around this. If the backend cannot serve the content quickly enough, then any frontend optimisation work will be, if not in vain, then at least not as effective as it could have been.
Furthermore, plenty of systems serve "static" objects (JS, CSS, images etc.) out of a CMS, and it may well take time to locate the content or retrieve it from a database or remote file system. All of this looks like backend time to the client, and should be considered as such, given that it lies within the control of the host.
How best to demonstrate this? With a picture!
This is an excerpt from a waterfall graph for www.amazon.com. The initial request time includes 0.21 seconds of DNS lookup time, 0.07 seconds of connection time and 0.13 seconds of data start time. That’s 0.41 seconds of ‘backend’ time before the content starts to load. Given that the total load time for this page was 4.7 seconds that’s apparently 90/10 in favour of frontend time.
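As a quick sketch of that arithmetic (the per-phase timings are the ones from the waterfall excerpt above; the 4.7-second total is the page load time quoted in the text):

```python
# Backend share of the initial request, using the timings quoted above.
dns = 0.21      # DNS lookup time (s)
connect = 0.07  # TCP connection time (s)
ttfb = 0.13     # data start / time to first byte (s)

backend = dns + connect + ttfb  # 0.41 s of 'backend' time
total_load = 4.7                # total page load time (s)

backend_pct = round(backend / total_load * 100)
frontend_pct = 100 - backend_pct

print(f"{frontend_pct}/{backend_pct}")  # 91/9 -- roughly the 90/10 split
```

On these numbers the initial request's backend phases account for under a tenth of the total load time, which is where the apparent 90/10 comes from.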
However, almost without exception, every other object beneath this initial request also carries elements of connect time and data start time.
Whilst it would be incorrect to simply stitch all of these separate backend times together (many of these objects download in parallel), I can have a go at roughly estimating the processing time by taking the first object from each 'wave' of parallel downloads. The total recorded backend time is now approximately 3.25 seconds.
90/10 has now become more like 70/30 in favour of backend time – the frontend pole suddenly looks a lot shorter!
Here at Site Confidence we often debate the true meaning of the 80/20 rule. If all ‘static’ content served by your website is served how static content was born to be served then brilliant – go sort out your 80!
However, if you're stuck with a legacy CMS, an unusual CDN implementation, or have simply never paid much attention to your strategy for serving static content, you've still got two poles to shorten.