Website Performance at Client Level
Monica Macoveiciuc and Constantin Stan
Faculty of Computer Science, Alexandru Ioan Cuza University, Iasi
Abstract. This paper describes the importance of a performant presentation
tier. It presents the easiest way of optimizing the client-side code, providing
source code examples for good practices. It then shows the correct approach
to using CSS and HTML and the impact it has on the website response time.
The Ajax technology is briefly described, emphasizing the role of JavaScript
and presenting methods for improving its performance. In the end, some
popular tools for monitoring and testing web applications are introduced.
Introduction. The Importance of a Performant Presentation Tier
Multi-tier architecture (often referred to as N-tier architecture) is a client-server
architecture in which the presentation, the application processing, and the data
management are logically separate processes. There are many business benefits
to N-Tier Architecture. For example, a small business can begin running all tiers
on a single machine. As traffic and business increases, each tier can be expanded
and moved to its own machine and then clustered. This is just one example of
how N-Tier Architecture improves scalability and supports cost-efficient appli-
cation building.
The presentation tier is the topmost level of the application. It communicates
with other tiers by outputting results to the browser/client tier and all other
tiers in the network.
Client-side programming is based on the idea that the CPU power of the
computer which the client uses to browse the web can also be exploited. Things
like processing simple requests, maintaining state, and the presentation tier are
handled by the web surfer's own computer instead of being handled by some
web server hosting a site.
Web page optimization streamlines the content to maximize display speed. Fast
display speed is the key to success for a website. It increases profits, decreases
costs, and improves customer satisfaction. The front-end is the most accessible
part of a website. Many times, the access to the server is limited and, even if one
has the permissions to modify the web server or the database, improving their
performance requires specialized knowledge.
There is more potential for improvement by focusing on the front-end. Cutting
front-end time in half reduces overall response times by 40% or more, whereas
cutting back-end time in half results in less than a 10% reduction. Front-end improvements
typically require less time and resources than back-end projects (redesigning
application architecture and code, finding and optimizing critical code paths,
adding or modifying hardware, distributing databases, etc.). Optimizing the pre-
sentation level is also inexpensive compared to the other levels of application.
Optimization
The Performance Golden Rule states that only 10 to 20% of the user response
time involves retrieving the requested HTML document, while the rest of it is
spent on dealing with the retrieved content.
Fewer HTTP Requests
A simple way to improve response time is to reduce the number of HTTP re-
quests, by reducing the number of components. There are different techniques
for achieving this: the use of image maps, CSS sprites, inline images, combined
scripts and stylesheets. The increase in speed is noticeable and, depending on
the website, it can exceed 50%.
Image Maps
It is a common practice to use images for displaying navigation bars or buttons.
These images are associated with URLs and, if one uses multiple hyperlinked
images in this way, image maps may be a way to reduce the number of HTTP
requests without changing the page's look and feel. Adjacent images can be
combined into one composite image. An image map associates multiple URLs
with this image and the destination URL is chosen based on where the user
clicks on the image. Instead of multiple HTTP requests, this technique requires
only one. For example, the following HTML code:
<div>
<h4>Two Images, with Two HTTP Requests</h4>
<p>
<img src="img1.jpg" alt="First Image">
<img src="img2.jpg" alt="Second Image">
</p>
</div>
can be optimized by using a client-side image map, as follows:
<div>
<h4>One Combined Image, with One HTTP Request</h4>
<map name="user_map">
<area href="#1" alt="1" title="1" shape="rect"
coords="0,0,100,100">
<area href="#2" alt="2" title="2" shape="rect"
coords="100,0,210,100">
</map>
<img src="combined.jpg" width="210" height="100"
alt="Combined image"
usemap="#user_map" border="0">
</div>
The only disadvantage of this approach is that it can easily lead to errors. Defining
the area coordinates of the image maps, if done manually, is tedious. Furthermore,
it is almost impossible to use any shape other than rectangles.
CSS Sprites
Like image maps, CSS sprites allow you to combine images, but they are much
more flexible. The images in an image map must be contiguous, while the CSS
sprites don't have that limitation. Another advantage of using them is the reduced
download size - the combined image tends to be smaller than the sum of the
separate images as a result of reducing the amount of image overhead (color
tables, formatting information, etc.). Moreover, it results in clean markup and
fewer images to deal with. There are many tools available online that create CSS
sprites from separate images. One of them is http://www.csssprites.com/.
Although it works in most situations, this method has its drawbacks -
in the rare cases in which users have turned off images in their browsers but
retained CSS, a big empty hole will appear in the page where we expect our
images to be placed. The links are still there and clickable, but nothing visually
appears.
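As a sketch (the file name, class names, and offsets below are illustrative, assuming two 100x100 icons combined side by side), a sprite is applied by setting the combined image as a background and shifting it with background-position:

```css
/* nav-sprite.png is assumed to hold two 100x100 icons side by side */
.icon {
  width: 100px;
  height: 100px;
  background-image: url("nav-sprite.png");
}
.icon-home { background-position: 0 0; }       /* shows the left icon  */
.icon-mail { background-position: -100px 0; }  /* shows the right icon */
```

Every element then reuses the same single downloaded image, and the negative offsets select which region is visible.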
Combined Scripts and Stylesheets
Most websites nowadays are built using JavaScript and CSS. There are two
ways of using them: either inline, or from external script and stylesheet files.
Generally, using the latter approach is better for performance, but since there is
a trend of breaking the code into many small files (the idea of modularization), it
might lead to longer response times, since additional HTTP requests are needed.
The solution is using two combined files, one for all the scripts, and the other,
for all the stylesheets. One website that provides compared results for common
practices in building websites is http://stevesouders.com/hpws/rules.php.
The tests show that pages with combined scripts load 38% faster.
Use a Content Delivery Network
A content delivery network (CDN) is a collection of web servers distributed across
multiple locations to deliver content to users more efficiently. This efficiency is
typically discussed as a performance issue, but it can also result in cost savings.
When optimizing for performance, the server selected for delivering content to
a specific user is based on a measure of network proximity. For example, the
CDN may choose the server with the fewest network hops or the server with the
quickest response time. Other benefits include backups, caching, and the ability
to absorb traffic spikes better. Examples of CDNs include Akamai Technologies,
Limelight Networks, SAVVIS, and Panther Express. Smaller and noncommercial
web sites might not be able to afford the cost of these CDN services, but there
are several free CDN services available:
1. Globule (http://www.globule.org) - an Apache module developed at Vrije
Universiteit in Amsterdam;
2. CoDeeN (http://codeen.cs.princeton.edu) - developed at Princeton Uni-
versity on top of PlanetLab;
3. CoralCDN (http://www.coralcdn.org) - developed at New York Univer-
sity.
Add an Expires Header
When a user visits a Web page, the browser downloads and caches the page’s
resources. The next time the user visits the page, the browser checks to see if any
of the resources can be served from its cache, avoiding time-consuming HTTP
requests. The browser bases its decision on the resource’s expiration date. If
there is an expiration date, and that date is in the future, then the resource is
read from disk. If there is no expiration date, or that date is in the past, the
browser issues an HTTP request. Web developers can avoid the delay caused by
the new request by specifying an explicit expiration date in the future.
The HTTP specification defines this header as "the date/time after which the
response is considered stale." It is sent in the HTTP response and looks as
follows:
Expires: Thu, 01 Jan 2015 20:00:00 GMT
If this header is returned for an image in a page, the browser uses the cached
image on subsequent page views, reducing the number of HTTP requests by one.
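On Apache, for example, such headers can be set with the mod_expires module; the fragment below is a sketch (the lifetimes chosen are illustrative) for httpd.conf or .htaccess:

```
# Assumes mod_expires is enabled
ExpiresActive On
ExpiresByType image/png "access plus 1 year"
ExpiresByType text/css  "access plus 1 month"
ExpiresDefault          "access plus 10 days"
```

A far-future expiration date implies that the file name must change whenever the content changes; otherwise returning visitors keep reading the stale cached copy.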
Compress Components
Another way of reducing the response time is by reducing the size of the HTTP
response, which means that fewer packets need to travel from the server to the
client. Many Web servers and Web-hosting services enable compression of HTML
documents by default, but compression shouldn’t stop there. Developers should
also compress other types of text responses, such as scripts, stylesheets, XML,
and JSON, among others. GNU zip (gzip) is the most popular compression
technique. It typically reduces data sizes by 70 percent. Web clients indicate
support for compression with the Accept-Encoding header in the HTTP request:
Accept-Encoding: gzip, deflate
If the web server sees this header in the request, it may compress the response
using one of the methods listed by the client. The web server notifies the web
client of this via the Content-Encoding header in the response:
Content-Encoding: gzip
Correct Approach to Dealing with CSS and Scripts
Progressive rendering is an expression used for pages that load progressively -
the browser displays the content as soon as it is available, even if it is not the
entire content. This is especially important for pages with a lot of content and
for users on slower Internet connections. The importance of giving users visual
feedback is summarized by Jakob Nielsen:
Progress indicators have three main advantages: They reassure the user that
the system has not crashed but is working on his or her problem; they indicate
approximately how long the user can be expected to wait, thus allowing the user to
do other activities during long waits; and they finally provide something for the
user to look at, thus making the wait less painful. This latter advantage should
not be underestimated and is one reason for recommending a graphic progress
bar instead of just stating the expected remaining time in numbers.
Put Stylesheets at the Top
Stylesheets inform the browser how to format elements in the page. If stylesheets
are included lower in the page, the browser might face the situation where it has
available content, but it does not know how to render it. Browsers deal with this
problem differently:
Internet Explorer delays rendering elements in the page until all stylesheets are
downloaded. This causes the page to appear blank for a longer period of time,
giving users the impression that the page is slow.
Firefox renders page elements and redraws them later if the stylesheet changes
the initial formatting. This causes elements in the page to "flash" when they're
redrawn, which is disruptive to the user.
The best answer is to avoid including stylesheets lower in the page and instead
load them in the HEAD of the document.
Put Scripts at the Bottom
External scripts (mainly JavaScript files) have a bigger impact on performance
than do other resources, for two reasons. First, once a browser starts downloading
a script it won’t start any other parallel downloads. Second, the browser won’t
render any elements below a script until the script has finished downloading.
Both of these impacts are felt when scripts are placed near the top of the page,
such as in the HEAD section. Other resources in the page (such as images)
are delayed from being downloaded and elements in the page that already exist
(such as the HTML text in the document itself) aren’t displayed until the earlier
scripts are done. Moving scripts lower in the page avoids these problems.
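The two placement rules combine into the following page skeleton (the file names are illustrative):

```html
<html>
  <head>
    <!-- stylesheets first, so content can render progressively -->
    <link rel="stylesheet" href="styles.css">
  </head>
  <body>
    <p>Page content renders without waiting for scripts.</p>
    <!-- scripts last, so they block neither rendering nor downloads -->
    <script src="combined.js"></script>
  </body>
</html>
```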
Avoid CSS Expressions
CSS expressions are a way to set CSS properties dynamically. They enable setting
a style’s property based on the result of executing JavaScript code embedded
within the style declaration. The issue with CSS expressions is that they are
evaluated more frequently than one might expect, potentially thousands of times
during a single page load. If the JavaScript code is inefficient, it can cause the
page to load more slowly.
Not all browsers support all CSS properties, and one solution for obtaining
the same rendering in all of them is using CSS expressions. The following example
ensures that a page width is always at least 600 pixels, using an expression that
Internet Explorer respects and a static setting honored by other browsers:
width: expression(document.body.clientWidth < 600 ?
"600px" : "auto" );
min-width: 600px;
CSS expressions are re-evaluated when the page changes, such as when it is
resized. This ensures that as the user resizes his browser, the width is adjusted
appropriately. The frequency with which CSS expressions are evaluated is what
makes them work, but it is also what makes CSS expressions bad for performance.
The Benefits of Ajax
Ajax (Asynchronous JavaScript and XML) is a cross-platform set of technologies
that allows developers to create web pages that behave more interactively, like
applications. It uses a combination of Cascading Style Sheets (CSS), XHTML,
JavaScript, and some textual data, usually XML or JavaScript Object Notation
(JSON), to exchange data asynchronously. This allows sectional page updates
in response to user input, reducing server transfers (and resultant wait times)
to a minimum. The goal of Ajax is to increase conversion rates through a faster,
more user-friendly web experience. Unfortunately, unoptimized Ajax can cause
performance lags, the appearance of application fragility, and user confusion.
The improved communication power of the Ajax pattern comes primarily
from the XMLHttpRequest (XHR) object. The object is natively supported in
browsers such as Firefox, Opera, and Safari, and was initially supported as an
ActiveX control under Internet Explorer 6.x and earlier. In IE 7.x, XHRs are
natively supported, but the ActiveX solution is also available.
The following JavaScript function contains the first step of sending an Ajax
request:
function createXHR() {
  // Firefox, Opera, Safari, IE 7.x
  try { return new XMLHttpRequest(); } catch (e) {}
  // IE 6.x and earlier
  try { return new ActiveXObject("Msxml2.XMLHTTP.6.0"); } catch (e) {}
  try { return new ActiveXObject("Msxml2.XMLHTTP.3.0"); } catch (e) {}
  try { return new ActiveXObject("Msxml2.XMLHTTP"); } catch (e) {}
  try { return new ActiveXObject("Microsoft.XMLHTTP"); } catch (e) {}
  // No XHR support
  return null;
}
A simple call creates an XMLHttpRequest object:
var xhr = createXHR( );
The open() method of the XHR object is used to begin forming the request,
specifying the HTTP method, URI, and a boolean value that indicates whether
the request should be synchronous (false) or asynchronous (true):
xhr.open("GET", "test.php", true);
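To complete the exchange started by open(), a handler is registered before the request is sent. The helper below is a sketch (the function name is ours; the readyState 4 / status 200 condition is the usual success check):

```javascript
// Finish an Ajax request begun with xhr.open(): register a handler,
// then send. The callback receives the response body on success.
function sendRequest(xhr, callback) {
  xhr.onreadystatechange = function () {
    // readyState 4: response complete; status 200: HTTP success.
    if (xhr.readyState === 4 && xhr.status === 200) {
      callback(xhr.responseText);
    }
  };
  xhr.send(null);
}

// Typical use, together with a createXHR() helper:
// var xhr = createXHR();
// xhr.open("GET", "test.php", true);
// sendRequest(xhr, function (text) { alert(text); });
```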
Summarized, the advantages of Ajax over classical web-based applications in-
clude:
1. Asynchronous calls - Ajax allows the client to make asynchronous calls
to a web server. This allows the client browser to avoid waiting for all data
to arrive before allowing the user to act once more.
2. Minimal data transfer - By not performing a full postback and sending all
form data to the server, network utilization is minimized and quicker oper-
ations occur. In sites and locations with restricted pipes for data transfer,
this can greatly improve network performance.
3. Limited processing on the server - Along with the fact that only the necessary
data is sent to the server, the server is not required to process all form
elements. By sending only the necessary data, there is limited processing on
the server.
4. Responsiveness - Because Ajax applications are asynchronous on the client,
they are perceived to be very responsive.
5. Context - With a full postback, users may lose the context of where they
are. Users may be at the bottom of a page, hit the Submit button, and be
redirected back to the top of the page. With Ajax there is no full postback.
Clicking the Submit button in an application that uses Ajax will allow users
to maintain their location. The user state is maintained, and the users are
no longer required to scroll down to the location they were at before clicking
Submit.
In spite of all the obvious benefits, one should not abuse Ajax calls. Although
most requests should be made asynchronously so that the user can continue
working without the browser locking up while waiting for a response, synchronous
data transfer is not always an inappropriate choice. The reality is that some
requests must, in fact, be made synchronously because of dependency concerns.
JavaScript Optimization
JavaScript brings all the Ajax technologies together and optimizing the .js code
might be a key action in improving the website performance. Despite this real-
ity, JavaScript has a reasonable claim to being the world’s most misunderstood
programming language. While often considered a toy, beneath its simplicity lie
some powerful language features. Deeper knowledge of this technology is an im-
portant skill for any web developer.
JavaScript has the ability to supply objects that control a web browser and its
Document Object Model (DOM). For example, client-side extensions allow an
application to place elements on an HTML form and respond to user events such
as mouse clicks, form input, and page navigation.
Web browsers can interpret client-side JavaScript statements embedded in an
HTML page. When the browser (or client) requests such a page, the server sends
the full content of the document, including HTML and JavaScript statements,
over the network to the client. The browser reads the page from top to bottom,
displaying the results of the HTML and executing JavaScript statements as they
are encountered.
Since most of the user response time is spent on dealing with the content, op-
timizing JavaScript is very important. There are a few simple rules that can
significantly improve the performance:
1. Remove the comments - most of the time, they just increase the file size.
2. Remove unnecessary whitespace. For example, instead of writing this:
var str = "JavaScript is " +
x +
" times more fun than HTML ";
you can write this:
var str="JavaScript is "+x+" times more fun than HTML";
3. Use JavaScript shorthand -
x = x + 1
should be replaced with
x++
And the code:
var isGreater;
if (x > 10) {
isGreater = true;
}
else {
isGreater = false;
}
can become this:
var isGreater = (x > 10) ? true : false;
4. Use string constant macros - if a message needs to be displayed often, declare
a string variable containing that message.
5. Remap built-in objects - the file size can be reduced by aliasing built-in
objects such as window, document, and navigator. For example,
alert(window.navigator.appName);
alert(window.navigator.appVersion);
alert(window.navigator.userAgent);
could be rewritten as follows:
w=window;n=w.navigator;a=alert;
a(n.appName);
a(n.appVersion);
a(n.userAgent);
6. Lazy-load the code - many JavaScript libraries support the "lazy-loading"
concept - the code is loaded only when necessary.
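A minimal lazy-loading sketch (the function and file names are illustrative, and a browser environment is assumed) injects a script element only when the code is first needed:

```javascript
// Load an external script on demand instead of at page load.
// The callback runs once the injected script has finished loading.
function lazyLoad(src, onReady) {
  var script = document.createElement('script');
  script.src = src;
  script.onload = onReady;
  document.getElementsByTagName('head')[0].appendChild(script);
}

// Hypothetical use: fetch charting code only when a chart is opened.
// lazyLoad('charting.js', initCharts);
```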
Web Site Performance Monitoring and Testing
Continuous monitoring is critical to ensuring that the website and web-based
applications are available and performing with acceptable response times.
There are many tools for monitoring and testing websites, such as Firebug and
Y!Slow for Firefox, or Selenium, which is supported in many browsers.
Firebug
Firebug is a revolutionary Firefox extension that helps web developers and de-
signers test and inspect front-end code.
It includes a powerful JavaScript debugger that allows pausing the execution at
any time. Using the JavaScript profiler, one can measure performance and find
bottlenecks fast. The command line is one of the oldest tools in the programming
toolbox. Firebug includes a command line for JavaScript and provides power-
ful logging functions for all the Ajax request traffic, also allowing developers
to inspect the responses. The tool includes inspectors for HTML and CSS that
provide all the related information about the page’s elements. Users can alter
the HTML and CSS and the effects are seen instantly.
Firebug is free and open source.
Y!Slow and JSLint
Y!Slow is a Yahoo product that analyzes web pages and finds out why they are
slow, based on some rules for high performance. It is integrated with Firebug
and its features include a performance report card, HTTP/HTML summary, the
list of components in the page and some integrated tools, like JSLint. JSLint is
a code quality tool for JavaScript. It takes a source text and scans it. If it finds a
problem, it returns a message describing the problem and an approximate loca-
tion within the source. The problem is not necessarily a syntax error, although
it often is. JSLint looks at some style conventions as well as structural problems.
It does not prove that the program is correct, but it can and does reveal the
code’s problems.
Y!Slow complements Firebug's functionality to make Firefox an unbeatable web
development tool.
Selenium
Selenium is a high-quality open source test automation tool for web application
testing. Selenium runs in Internet Explorer, Mozilla, and Firefox on Windows
and Linux, and in Safari on the Mac. It includes an IDE for Selenium test
scripts, which are portable and can also be run from JUnit. For example, test
scripts written using Selenium IDE in Firefox on Windows can run on Firefox
on Mac or Linux, without changing any code. Selenium tests run directly in
browsers and so match the end-user experience closely.
Selenium provides a rich set of testing functions specifically designed for the
needs of testing a web application. These operations are highly flexible, allowing
many options for locating UI elements and comparing expected test results
against actual application behavior.
References
1. Andrew B. King, "Website Optimization", O'Reilly Media, 2008.
2. Steve Souders, "High Performance Web Sites", O'Reilly Media, 2007.
3. Douglas Crockford, "JavaScript: The Good Parts", O'Reilly Media, 2008.
4. Jakob Nielsen, "Response Times: The Three Important Limits", http://www.useit.com/papers/responsetime.html.
5. Douglas Crockford, "The World's Most Misunderstood Programming Language", http://javascript.crockford.com/javascript.html.
6. Yahoo! Developer Network Blog, http://developer.yahoo.net/blog/archives/2007/03/high_performance.html.