We recently talked about reducing HTTP requests. Here’s a quick recap:
- Slow web pages impede your website’s goals;
- 90% of a typical web page’s slowness happens after the HTML has been downloaded;
- Reducing the number of HTTP requests triggered by your page is usually the best first step to making your pages faster;
- We reviewed some specific techniques for reducing the number of HTTP requests in a given page;
- We noted that automation can ease or remove the maintenance burden for more invasive optimization techniques.
Next up on the list: taking advantage of the browser’s capabilities to make your web pages faster and more efficient.
But are they even “pages” anymore?
Modern web pages have outgrown their humble origins and are not really recognizable as “pages” anymore. Except for the simplest and most old-fashioned brochure-ware sites, visiting a website means running a complex application that is distributed, and executed, across the web.
Viewed as such, these web applications comprise many parts: a client (the browser); one or more origin servers (where the site is hosted); CDN nodes (where static assets are cached); reverse proxy nodes (e.g. for next-gen whole-site acceleration services); third-party assets (hosted on various servers); and the networks that connect them all. So it’s time to stop acting like the origin server has to do all the work and the browser can only present the page to the user. The server is just one part of the application, and it’s playing a shrinking role.
Performance-minded website architects are showing an increasing tendency to shift the burden of work from the (overloaded) server to the (powerful, underutilized) client, and with good reason. In this article I’ll review some of the ways you can make your website faster by easing the burden on your server and giving the browser more responsibility.
“Put Me In, Coach, I’m Ready To Play!”
Modern web browsers run on hardware which is staggeringly powerful by historical standards, and which is simply massive overkill for the uses to which most users put them. It is very common for a user to interact with a site without even beginning to strain the RAM or CPU on his or her computer, all while waiting far longer than necessary as an overloaded server (often a poorly configured virtual server on shared hardware in a cheap hosting center) struggles to allocate memory and keep up with the flow of requests without crashing under the load. Distributing more work to the client helps keep the server from getting swamped, can help save on bandwidth and hosting costs, makes the application faster and more responsive, and is generally a better architecture. It’s simply a more efficient allocation of available resources. (And even for less powerful clients, like some mobile devices, the high latency costs of HTTP round trips over mobile connections can still make it worthwhile to offload work from the server to the client.)
But too many web developers continue to treat the browser – the client side of the client-server interaction – as just a simple “view” of the application. It’s better understood as residing at the heart of the application that is the modern web page. The server has its place, but the browser is increasingly where the action is. It’s got tons of under-utilized processing and memory resources, and its capabilities should be respected and used to their fullest.
OK, if you’re ready to leverage the client, the first thing you’ll need to do is clean up your client-tier code. Seriously.
Use web standards.
Using web standards is essential for creating maintainable, accessible, future-proof websites. A great side effect is that it’s also the best foundation for maximizing performance. Use of modern web standards encourages the separation of content (HTML), styling (CSS), and behavior (JavaScript). Of course, what constitutes “standards” is a surprisingly tricky question to answer. Debates rage around the use of vendor prefixes; formal W3C recommendations lag behind the real world by years; religious wars are fought over abstract specifications vs. the de facto standards of what browser manufacturers actually implement… you get the point. But — pedantry aside — in general, strive to write front-end code that validates. And be aware of the places where you trigger warnings or errors.
Recommended validators include http://validator.w3.org/ (for HTML), http://www.jshint.com/ (for JavaScript), and http://jigsaw.w3.org/css-validator/ (for CSS). Read and follow heroes like Jeff Zeldman and Paul Irish and you’ll be well on your way. Leveraging open-source UI frameworks and/or boilerplate templates is a smart path to a solid foundation in standards-based front-end code, too. Using web standards alone won’t make your site fast (though it’ll help), but it will make optimization much more practical and achievable.
Apply MVC in the page.
The venerable “MVC” (Model/View/Controller) design pattern has long been established best practice for web applications. Traditionally, the “model” maps to the structured data you’d put in your database, the “controller” is the application tier on the server that handles requests, applies business logic and generates responses, and the “view” is everything the server sends back to the browser. But what some developers overlook is that this same MVC pattern can properly be applied in the front end of your website’s code too. Think of the HTML (the DOM, really) as the model, the CSS as the view, and the JavaScript as the controller. Adhering to this conceptual separation – keeping the HTML model (“what it is”) separate from the CSS view (“what it looks like”) and separate from the unobtrusive JavaScript controller (“how it behaves”) – tends to make code more efficient and maintainable, and makes many optimization techniques much more practical to apply.
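To make that concrete, here’s a minimal sketch of in-page MVC using hypothetical element and class names (task-list, task, done; none of these come from any particular site): the markup is the model, a CSS rule is the view, and an unobtrusive script is the controller, attached from outside the markup rather than via inline onclick attributes.

```js
// Model: the DOM, e.g. a hypothetical <ul id="task-list"> containing <li class="task"> items
// View: a CSS rule such as  .task.done { text-decoration: line-through; }
// Controller: unobtrusive JavaScript, attached here rather than via inline onclick="" attributes
document.addEventListener('DOMContentLoaded', function () {
  var list = document.getElementById('task-list');
  if (!list) { return; }
  list.addEventListener('click', function (event) {
    if (event.target.classList.contains('task')) {
      // The controller only toggles a class; the CSS view decides what "done" looks like
      event.target.classList.toggle('done');
    }
  });
});
```

Notice that the script never generates markup or touches inline styles; it only flips a class name, which keeps the three concerns cleanly separated and easy to optimize independently.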
Leverage Ajax techniques. Properly.
Don’t refresh the whole page if you don’t have to! Use Ajax. By only requiring small parts of the page to change in response to user actions, you make your site or web application much more responsive and efficient. But be aware that there are different Ajax approaches.
For example, fetching complete, styled HTML fragments via Ajax may be appropriate for implementing a sophisticated “single-page interface” (SPI) [https://en.wikipedia.org/wiki/Single-page_application]. That’s a powerful approach, but don’t take it lightly – serious SEO and usability gotchas abound. If you’re not doing SPI, retrieving chunks of styled HTML from the server is probably not the right thing to do.
For most common use cases, it’s better and faster to just pull pure data from the server. Client-side templating libraries help solve the problem of turning that data into HTML that can be injected into the DOM and displayed. (Here’s a helpful template chooser.) But with or without client-side templates, fetching serialized data is usually the best Ajax approach for performance.
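As a rough illustration of the data-only approach, here’s a sketch that assumes a hypothetical /api/products endpoint returning JSON and an empty <ul id="product-list"> placeholder already in the page (both names are invented for the example):

```js
// Pull serialized data from the server and render it in the client.
// Assumes the endpoint returns JSON like: [{ "name": "Widget", "price": 9.99 }, ...]
fetch('/api/products')
  .then(function (response) { return response.json(); })
  .then(function (products) {
    var list = document.getElementById('product-list');
    list.innerHTML = products.map(function (p) {
      // A client-side templating library would handle escaping and markup for you;
      // plain string concatenation here just keeps the sketch self-contained.
      return '<li>' + p.name + ': $' + p.price.toFixed(2) + '</li>';
    }).join('');
  });
```

The response is a few hundred bytes of JSON instead of a fully styled HTML fragment, and the same endpoint can serve any client that understands the data.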
Validate in the client.
At the risk of insulting you smart readers, I have to mention the most obvious case for pushing work to the client, just because so many sites get it wrong: form validation. Picture a user, taking the time to fill out your signup or order form. They painstakingly complete the form and submit it. And then they wait. They look at a blinding white blank screen while the form is posted to the server… and processed…and a new page is generated… and sent back… and rendered… until finally… yes, they see — an error? What a waste of time! That’s an unhappy user and a likely candidate to bail out, abandon your site and go to a competitor.
Whenever possible, validate the user’s form input from within the page, right where the input is happening. In some cases (such as checking for the availability of a username), doing an async request to the server is appropriate. But in many cases all of the validation rules can be implemented in JavaScript and included with the form in the page. This allows you to give the user instantaneous feedback as they complete the form, and it saves the server a lot of unnecessary work.
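Here’s a minimal sketch of what that can look like, assuming a hypothetical signup form with id="signup" and a required email field named "email" (names invented for the example):

```js
// Validate in the page, before the form ever leaves the browser.
document.getElementById('signup').addEventListener('submit', function (event) {
  var email = this.elements['email'].value.trim();
  // A deliberately simple format check; real rules can be as rich as you need.
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    event.preventDefault(); // no round trip, no blank white screen
    alert('Please enter a valid email address.'); // or, better, show an inline message next to the field
  }
});
```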
Note that, for security reasons, web applications should always also validate on the server side. (Rule #1 of web app security is that user input cannot be trusted.) So, validate in the client for reasons of performance and UX, and validate on the server for security.
Let the browser do the data viz.
One last specific scenario I want to mention is the visual display of quantitative information. Generating charts and graphs — any sort of pretty-looking data visualization — used to be the sole province of the server. Those days are long gone.
Now, it makes much more sense to push just the raw data from the server to the browser, in the initial page request. If the data set is too large to include in the initial view, it can be updated via Ajax, in response to user interaction. With modern client libraries (like Processing, D3, and Flot), you can create all kinds of stunning interactive data visualizations right there in the browser. Their capabilities go way, way beyond sorting table columns or rendering a pie chart.
In this way, many user interactions avoid hitting the server at all. And when they do, it’s a small request and response, consuming the minimum amount of network bandwidth and requiring the least possible amount of work from the poor overworked server.
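For a taste of what that looks like, here’s a minimal sketch of a client-rendered bar chart using D3, assuming D3 v4 or later is loaded and the page contains a hypothetical <svg id="chart" width="300" height="140"> placeholder; the inline data array stands in for JSON fetched via Ajax:

```js
// Render a simple horizontal bar chart entirely in the browser.
var data = [4, 8, 15, 16, 23, 42]; // in a real page this would arrive as JSON via Ajax

var x = d3.scaleLinear()
  .domain([0, d3.max(data)])
  .range([0, 300]); // map values onto the width of the SVG

d3.select('#chart')
  .selectAll('rect')
  .data(data)
  .enter().append('rect')
  .attr('y', function (d, i) { return i * 22; })
  .attr('height', 18)
  .attr('width', function (d) { return x(d); });
```

From here, adding tooltips, transitions, or re-rendering on new Ajax data is all client-side work; the server never sees another request until fresh data is actually needed.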
To recap:
- Web “pages” aren’t really pages anymore; they’re distributed applications
- Pushing work from the server to the client is a great way to make your site faster
- Use best practices (web standards and MVC separation in HTML, CSS and JS)
- Use the right Ajax approach for the job
- Powerful client-side templating libraries and dataviz libraries abound
That’s it for this second article. Next time I’ll dive into another area of web performance optimization. In the meantime I’m always interested in feedback and others’ thoughts on web performance.