This website-performance blog series focuses on the techniques and tools that can be used to improve a website’s performance.
Optimizing web application performance is all about numbers and metrics, so before delving into optimization techniques, it is essential to understand what can be optimized and how to measure improvements. In this post, we will review the five areas where website performance can be improved, how to establish a performance baseline, and how to measure progress.
Using user-perceived delay as a metric
The overall goal of improving performance is to minimize the delay the user perceives between the moment they click a link and the moment the page is finally displayed. We focus on minimizing user-perceived delay rather than any other metric because, in the end, what matters is improving your visitors’ experience. A user-centric mindset is important because it gives us a clear way to prioritize where to spend our resources and focus our efforts. For example, even if implementing a new caching system seems exciting, it might not be worthwhile if resource-processing time accounts for only 5% of the user-perceived delay. Of course, if your servers are under heavy load, reducing it is important, but I believe that only looking at performance from the user’s point of view gives you the whole picture. For example, perceived delay can be reduced by adding progress indicators and deferred loading, which give the user something to focus on while the rest of the page loads. These kinds of optimizations are not captured by other metrics.
Strategies to reduce the perceived delay
- Reducing the time the browser takes to fetch a given resource: This can be done, for instance, by reducing the server processing time, using browser caching, or using HTTP pipelining.
- Making the loading time appear shorter: Leverage how humans perceive information to make the delay appear less than it really is by adding loading indicators, pre-caching, and deferred loading strategies (the famous AJAX paradigm).
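As a tiny illustration of the browser-caching strategy above, here is a sketch (the function names are mine, for illustration only) of the freshness check a cache performs against the `Cache-Control: max-age` directive a server sends:

```javascript
// Sketch: deciding whether a cached response is still fresh, based on
// the Cache-Control max-age directive. Times are in milliseconds.
function parseMaxAge(cacheControl) {
  const match = /max-age=(\d+)/.exec(cacheControl || "");
  return match ? Number(match[1]) : 0; // max-age is given in seconds
}

function isFresh(cachedAtMs, cacheControl, nowMs) {
  const maxAgeMs = parseMaxAge(cacheControl) * 1000;
  return nowMs - cachedAtMs < maxAgeMs;
}

// A response cached 10 minutes ago with max-age=3600 is still fresh,
// so the browser can skip the network round-trip entirely:
console.log(isFresh(0, "public, max-age=3600", 10 * 60 * 1000)); // true
```

When the check succeeds, the browser serves the resource from its local cache, which removes the fetch time for that resource altogether.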
The lifecycle of a web resource
For each resource the browser fetches, the five steps depicted in the following diagram occur. Note that, for simplicity, I am not taking caching mechanisms into account here, as they will be the subject of an entire post.
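These lifecycle steps correspond roughly to the timestamp fields exposed by the browser’s Resource Timing API. As a sketch, here is how the per-phase durations could be computed from such timing data (the field names are borrowed from that API; the sample values are made up):

```javascript
// Sketch: breaking a resource fetch into its lifecycle phases.
// All timestamps are in milliseconds relative to the start of the fetch.
function phaseDurations(t) {
  return {
    dns: t.domainLookupEnd - t.domainLookupStart,
    connect: t.connectEnd - t.connectStart,
    request: t.responseStart - t.requestStart, // send + server processing
    download: t.responseEnd - t.responseStart,
  };
}

const sample = {
  domainLookupStart: 0, domainLookupEnd: 288, // DNS resolution
  connectStart: 288, connectEnd: 388,         // TCP connection
  requestStart: 388, responseStart: 688,      // request sent, first byte
  responseEnd: 900,                           // body fully downloaded
};

console.log(phaseDurations(sample));
// { dns: 288, connect: 100, request: 300, download: 212 }
```

Summing the phases for every resource on a page quickly shows where the browser actually spends its time.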
How to measure our progress?
Now that we have a clearer idea of what we want to do, we need the right tools to measure our progress. There are many performance and benchmarking tools, each with its own use. For example, recommendation tools such as YSlow, which analyze your page and suggest ways to improve its performance, are very useful and will be covered in an upcoming post. However, to start optimizing your website, you only need two kinds of tools: a browser performance monitor and a resource performance monitor. Here is the short list of those I use daily. I am sure there are plenty of other tools worth mentioning; if you know one, let me know by commenting or tweeting.

A browser performance monitor is the essential tool that lets you understand how the browser spends its time rendering your page. Every major browser has either a built-in monitor (Chrome, Safari, and Internet Explorer 8/9) or an add-on that provides one (Firebug for Firefox). I mainly use Safari’s, which is part of its developer tools, because I find it more responsive than Firebug and better integrated than Chrome’s. Note that the Chrome and Safari developer tools are essentially one and the same, because both are part of WebKit, the rendering engine the two browsers share; only their integration differs. To enable the Safari developer tools, go to Preferences > Advanced and check the “Develop” menu option at the bottom of the pane (see screenshot below).
Regardless of the tool you choose, it will provide you with at least two interesting reports. The first is an overview that summarizes how the browser spent its time while rendering the page; the second is a breakdown by resource. For example, for my blog, the Safari report looks like this:
The WatchMouse diagnostic service shows another interesting piece of information about my site’s performance. As you can see in the screenshot, my resolve time from Hong Kong is 288 ms, which accounts for 10% of the total time when the record is not in the DNS cache. What you don’t see (because I cropped the screenshot) is that the resolve time from Vancouver is 374 ms. Once again, I have bad resolve times from Vancouver and Hong Kong because my DNS is located in France. Clearly, if the goal is to get the page displayed in less than a second, 400 ms for resolving is not going to cut it.
In reality, my DNS problem is even worse than what these graphs report; however, it only became apparent through continuous monitoring of my site. Once again, I am using WatchMouse (free trial) to do this, but there are free alternatives, such as pingdom.com, which has a nice iPhone app. The next screenshot shows how my website performed over a 24-hour period.
This continuous monitoring made it very clear that my DNS hosting (gandi.net) is not reliable. As you can see in this screenshot, it sometimes takes more than 1 second to resolve my domain name. My DNS problem emphasizes how valuable continuous monitoring (from multiple locations) is for pinpointing what needs to be improved. Since DNS resolution is such a big issue for me these days, my next performance post will be about how to deal with DNS issues, and I will report how I solved my problem.
Thanks for reading this (too) long post. Let me know what you think, and don’t forget to share!