The Easy Stuff
As long as we’re designing our sites with web standards, some of these practices will already be familiar.
- External CSS and JS files
- Avoid CSS expressions and filters
- Lighter markup than table-based layouts (fewer DOM elements)
- Specify a character set early
- Avoid empty tags and duplicate scripts
Other best practices are super easy to implement given clean, efficient, and (mostly) standard coding practices.
- Put CSS at the top, scripts at the bottom
- Combine external CSS and JS files
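As a minimal sketch of the “CSS at the top, scripts at the bottom” rule (the file names here are placeholders), a page skeleton ends up looking like this:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8"> <!-- character set specified early -->
  <link rel="stylesheet" href="combined.css"> <!-- one combined stylesheet, in the head -->
</head>
<body>
  <!-- page content here -->
  <script src="combined.js"></script> <!-- one combined script, just before </body> -->
</body>
</html>
```

The stylesheet in the head lets the browser render progressively; the script at the bottom keeps it from blocking that rendering.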
The Not-So-Easy Stuff
Image Optimization
Unfortunately this isn’t as simple as using “Save for Web” in Photoshop. To really squeeze every byte out of your image bandwidth without quality loss requires a little research into image formats and which one is appropriate for a given situation. Optimization tools like Smush.it are nice too. You can see Google’s suggestions and Yahoo’s suggestions, but I’ll summarize my understanding below.
- JPG and PNG files handle almost everything quite well
- JPG for photos, PNG for everything else (icons, backgrounds, etc.)
- PNG-8 does everything GIF can do except for animations
- The last browser support hurdle for PNG is alpha channel transparency, which is not possible with GIF anyway
Minification
Things start to get a bit more involved at this stage. Even “advanced” image optimization is a fire-and-forget sort of process: you do it right the first time and reap the benefits forever after. Minification, by contrast, effectively adds a compile step to our workflow, and a lot of us webbies aren’t used to that sort of thing.
But it’s not as difficult as it may sound. I keep a non-minified version of the code around to edit and test with, then minify it before putting it into production. I started out using Yahoo’s command-line YUI Compressor. Since Google Page Speed is part of my standard testing routine, it’s trivial to save the minified version straight from its report once I’m satisfied with the changes.
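To make the “compile step” concrete, here is a deliberately crude whitespace-and-comment strip in shell, just to illustrate what minification does to a stylesheet. The sample file is invented for the demo; for real work use a proper tool such as the YUI Compressor invocation shown in the comment (the jar version is a placeholder for whatever you have installed).

```shell
# A small sample stylesheet to minify (contents are illustrative).
cat > sample.css <<'EOF'
/* navigation styles */
.nav {
    margin: 0;
    padding: 10px;
}
EOF

# Crude minification: squeeze runs of whitespace onto one line,
# then drop simple /* ... */ comments. A real minifier does far more,
# e.g.: java -jar yuicompressor-2.4.8.jar --type css -o sample.min.css sample.css
tr -s ' \t\n' ' ' < sample.css | sed 's|/\*[^*]*\*/||g' > sample.min.css

# Compare byte counts before and after.
wc -c sample.css sample.min.css
```

The point isn’t this particular script; it’s that minified output is a build artifact, generated from the readable source you keep editing.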
CSS Sprites
You can find a ton of resources on CSS sprites with a quick Google search, but I prefer the classics. The technique can get quite confusing, and I recommend taking the time to really understand it before implementing it. That being said, we have many online sprite tools to make the job easier. I used SpriteMe and found it useful enough. Just make sure the test page you run it on uses every image you plan to sprite, and you may want to adjust the suggested sprites to minimize white space.
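Stripped down to its core (the class names, file name, and offsets here are made up for illustration), the trick is that every icon lives in one image, and `background-position` slides the right region into view:

```css
/* One combined image holds all the icons, stacked vertically. */
.icon {
  display: inline-block;
  width: 16px;
  height: 16px;
  background-image: url(sprites/icons-v1.png);
  background-repeat: no-repeat;
}

/* Each class shifts the sprite so its own 16x16 region shows through. */
.icon-twitter  { background-position: 0 0; }
.icon-facebook { background-position: 0 -16px; }
.icon-rss      { background-position: 0 -32px; }
```

One HTTP request now serves every icon on the page.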
The downside is a certain loss of agility when changing or adding images to the site. If we want to add LinkedIn to our social media icons in the footer of our site, we’ll have to edit the Icons sprite rather than just uploading a new icon to the server and linking it in separately. But there are also benefits.
- Fewer HTTP requests (reduced by about a third on our site)
- Sprites are often no bigger than the combined file size of the individual files, sometimes even smaller
- Commonly used images are effectively preloaded even if not used on the initial page visited
Be warned that sprites aren’t appropriate for everything. I tried to combine our PNG logo with the JPG RODP logo and the resulting sprite was 27 KB larger than the individual files. I think that was due to things like gradients and shadows in the RODP logo greatly increasing the number of colors required. I kept that one as a separate JPG.
GZIP Compression
Now we’re moving to server configuration. I’m lucky enough to have access to and admin rights on the server, so I could do this stuff myself. In another organization I might have to rely on the server admin group for this.
I used a combination of blogs and official documentation to set this up on our IIS server. It seems to be easier on Apache. Since I’ve never set it up on Apache myself I can’t vouch for the quality of the available documentation, but once again a quick Google search should turn up a wealth of information. On either platform, the important things to remember are:
- Dynamic content (PHP, ASP, etc.) should be freshly compressed with each request — no caching — and not quite as heavily compressed as static content
- Compressing binary files such as images, PDFs, Word documents, etc. costs CPU cycles on the server with little to no benefit — in some cases compressing such files can even make them larger
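On Apache, for example, mod_deflate handles this. A sketch, assuming the module is enabled (adjust the MIME types to your site; the BrowserMatch workarounds for ancient browsers come straight from the Apache documentation):

```apache
# Compress text-based responses only; images, PDFs, Word docs, etc. are left alone.
AddOutputFilterByType DEFLATE text/html text/css text/plain
AddOutputFilterByType DEFLATE application/javascript application/x-javascript

# Work around old browsers with broken or partial gzip support.
BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
```

Because compression is opt-in by content type, the binary formats above are never touched.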
Like image optimization, this is something you set up right the first time and reap the benefits with no repeating costs. It’s worth it to buddy up with the server admins to get this done. Compression has netted us around 70% reduction in transferred file size for both static and dynamic content and our hardware isn’t remotely taxed even when serving up around 10,000 visits per day.
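That ~70% figure is easy to sanity-check from the command line, since the same gzip algorithm the server uses ships as a standard tool. This just compresses some repetitive sample text standing in for a page (real HTML, with its repeated tags, usually compresses at least this well):

```shell
# Generate repetitive text to stand in for an HTML page.
seq 1 2000 > page.txt

# Compress it the way mod_deflate/IIS would before sending it over the wire.
gzip -9 -c page.txt > page.txt.gz

# Compare sizes.
echo "original:   $(wc -c < page.txt) bytes"
echo "compressed: $(wc -c < page.txt.gz) bytes"
```

The CPU cost of that `gzip` call is tiny compared to the transfer time it saves.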
Browser & Proxy Caching
I took this step only after some heavy thought. This involves more server configuration and has a dramatic impact on workflow. Essentially we set up the server to tell the browser to not even bother asking for new copies of static content until a given date or until after X time has passed. This content is loaded from the local cache. The benefit is nearly instant page renders for repeat visitors. Of course, that means if the file has changed since it was saved in the local cache, the user won’t see that change.
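On Apache the mechanism is mod_expires (again a sketch, assuming the module is loaded; IIS exposes the same idea through its client-cache settings). Far-future dates like these are only safe alongside the versioned file names described in the workflow below:

```apache
# Tell browsers and proxies not to re-request static files for a year.
ExpiresActive On
ExpiresByType image/png  "access plus 1 year"
ExpiresByType image/jpeg "access plus 1 year"
ExpiresByType text/css   "access plus 1 year"
ExpiresByType application/javascript "access plus 1 year"
```

Dynamic content gets no such header, so browsers still check it on every request.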
On the Server
The workaround is to treat every changed file as a brand new file. Adding that LinkedIn icon from the earlier example now means:
- Edit sprite image to incorporate the new icon
- Save the updated sprite with a new file name — I recommend version numbering
- Edit un-minified CSS to reference the new image and adjust other styles as appropriate
- Save CSS with a new file name
- Test and debug cycle
- Minify CSS
- Edit the <link> in the universal header include to point to the updated CSS file
After incorporating all these changes, first page loads for first-time visitors are about 2 seconds faster. Subsequent page loads, even for first-time visitors, are about 1 second faster. That means that over a year our users will collectively spend about a month less waiting for our pages to load. Plus the gains that brings to the user experience (a bit harder to quantify). Plus the associated reduction in bandwidth costs (a question for our IT department). Plus possibly gaining favor with search engines.