Chris Ruppel
April 24, 2013

This post is part of our Webperf Wednesday series.

I’d like to demo a simple how-to. There are many, many techniques to make pages load faster, but this post attempts to demonstrate large gains from very small code changes.

People often build beautiful sites with multiple easy-to-use JavaScript libraries. Then, when it comes to addressing frontend performance, suddenly those libraries are an enormous download that the users are forced to bear.

Just one image

Before you worry about minifying every last library or shaving tests out of Modernizr, see whether you can remove just one photo from your design. It will make a bigger difference.

Coined by Adam Sontag, the “one less JPG” idea — nay, MOVEMENT — sums it up perfectly: removing a single image from your design often saves more bytes than any amount of JavaScript micro-optimization.

Real example

Last year we re-launched the Pressflow site. We have some mobile traffic, but it’s likely people just browsing for info, since no one has a good reason to download Pressflow onto a phone or tablet. Let’s keep their attention and make the experience fast.

We have this huge, beautiful mountain on the homepage. It’s great. But it’s also 160K. I tried making it smaller, or splitting the photo off of the background pattern, but it decreased the quality of the photo too much when I lowered the file size. We made a wonderfully small SVG logo, but that’s not an option for a photograph with this kind of detail.

How much impact does it have?

A mountain is a big thing — just like the amount of traffic Pressflow can handle — and the image we chose was meant to convey that vastness. Since it doesn’t really pack the same punch on smaller screens, why include it at all? I decided to use Modernizr and conditionally load the stylesheet that references the mountain. That way it never gets loaded by tiny screens that don’t need it.

Using the Modernizr Drupal module, I added a conditional load into the .info file of my theme:

; Load CSS with Modernizr
modernizr[Modernizr.mq('screen and (min-width: 42em)')][yep][] = css/big.css

This tells Modernizr to output a Modernizr.load() statement with the test I specified. In this case, Modernizr will only load big.css if the test is true. My test checks the width of the window using Modernizr.mq(), which evaluates a media query and returns true if the screen is at least 42em wide, causing the CSS to be fetched. Here’s the JavaScript output:

  Modernizr.load({
    test: Modernizr.mq('screen and (min-width: 42em)'),
    yep : 'css/big.css'
  });
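Under the hood, Modernizr.load() (powered by yepnope.js) picks the yep resources when the test is truthy and the nope resources otherwise. Here’s a minimal sketch of that decision reduced to a pure function — the name pickResources is illustrative, not actual Modernizr internals:

```javascript
// Sketch of Modernizr.load()'s yep/nope decision as a pure function.
// Names here are illustrative, not the Modernizr/yepnope source.
function pickResources({ test, yep, nope }) {
  var chosen = test ? yep : nope;      // truthy test -> yep branch
  if (chosen === undefined) return []; // branch omitted -> load nothing
  return [].concat(chosen);            // accept a single path or an array
}
```

For the .info line above, a big screen resolves to ['css/big.css']; a small screen, with no nope branch declared, resolves to an empty list and downloads nothing.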

So that’s it, instant savings!

…oh, what’s that? Always test your work? Thanks for keeping me honest.

Here’s some data.

I’ve got two network waterfalls here for comparison. They show a pretty stark difference following this one-line change to my code. If a screen isn’t big enough for the mountain, the page takes only 20 HTTP requests and 193KB total. If the screen is big enough, it takes 24 HTTP requests — for the CSS and then the images inside it — totalling 384KB. That’s a savings of 191KB (almost exactly 50%) from a single change to my code. You’d have to remove 19 copies of jQuery 2.0 to achieve this kind of bandwidth savings.
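As a quick sanity check on those numbers, using the request and byte counts from the two waterfalls:

```javascript
// Byte math from the two waterfalls described above.
const smallScreenKB = 193; // 20 requests, no mountain
const bigScreenKB = 384;   // 24 requests: the CSS plus the images inside it

const savedKB = bigScreenKB - smallScreenKB;    // 191 KB saved
const savedPct = (savedKB / bigScreenKB) * 100; // ~49.7%, "almost exactly 50%"

console.log(savedKB + ' KB saved (' + savedPct.toFixed(1) + '%)');
```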

(by the way, didja hear that jQuery 2.0 has small QSA-only custom builds?)

Small screens

Waterfall: Conditional load small

Big screens

Waterfall: Conditional load big

You can see in the second waterfall that the Initiator of big.css is modernizr.min.js, meaning that JavaScript loaded the file after running the test.


I hope this shows how easy it can be to reduce your page weight without worrying about shaving bytes off JavaScript libraries that supply valuable functionality when used correctly.

If you want to know more about the conditional loading API within Modernizr, head over to the yepnope.js documentation and start reading. For more Drupal-specific examples, check out the official documentation for conditional loading with the Modernizr module.


Hi there, I’m planning out some work of this very nature and it’s great to see people writing about it already. I’m curious as to why you used Modernizr for this test. Why wouldn’t you conditionally load your CSS this way in the first place without JavaScript?

Without JavaScript, the only other conditional CSS downloads available would be a <link> tag with a media attribute, and it’s pretty well documented that media attributes do not prevent CSS from downloading in most browsers right now.

Chrome is a notable exception, and hopefully in the future browsers will continue to improve, allowing us to rely on simple mechanisms like media attributes to minimize the amount of stuff a browser downloads. In the meantime, JS is the most reliable way to conditionally load assets.
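For the curious, a dependency-free version of that JS approach could look like the sketch below, assuming window.matchMedia is available. The helper names and the 16px default em size are illustrative; the breakpoint check is split into a pure function so the logic can be verified outside a browser (42em is 42 × 16 = 672px at the default font size):

```javascript
// Load a stylesheet only when a media query matches, without Modernizr.
// Hypothetical sketch; isBigScreen and loadCss are illustrative names.
function isBigScreen(widthPx, pxPerEm) {
  pxPerEm = pxPerEm || 16; // browser default font size
  return widthPx >= 42 * pxPerEm;
}

function loadCss(href) {
  var link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = href;
  document.head.appendChild(link);
}

// Browser-only section, guarded so the sketch is inert elsewhere.
if (typeof window !== 'undefined' && window.matchMedia) {
  if (window.matchMedia('screen and (min-width: 42em)').matches) {
    loadCss('css/big.css'); // same file the post loads via Modernizr
  }
}
```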

But why not use a media query within your stylesheet? To my knowledge, all modern browsers lazy-load resources, which prevents your mountain from being loaded on smaller screens.

As demonstrated by your screenshots, the conditionally loaded assets are ‘sprite.png’, ‘bg-mountain.png’ and ‘big.css’ (twice?). With a media query, neither image would be loaded. If you combined all your CSS files into one, the media queries could live there, eliminating the request for big.css. Heck, it would even eliminate another three requests, because your CSS is currently spread out over four (or five, counting big.css) files.

I’d rather use these mechanisms to lazy-load images than a JavaScript solution for conditionally loading my stylesheets.

Your guess about assets referenced within inline media queries is not true. I already posted links to data (in the comment you replied to) proving that most browsers download inapplicable assets.

As for the double download, that’s a non-issue with yepnope.js (scroll up a bit, the anchor isn’t precise for some reason). Basically, it doesn’t download twice but it looks like it in the waterfall as yepnope downloads the file, then finally executes it by hitting your cache.

My comment wasn’t based on guesses; I tested this in multiple browsers by watching my access logs. In conclusion: the tested browsers — Chrome and Firefox, both on Mac, IE9 on Win7 and Mobile Safari on iOS 6 — all ignore the assets that will not be displayed and load them only after the screen has been resized.

You can find my testcase at, change the background url to check it yourself using the server accesslogs.

You’re testing bleeding edge browsers. I’m trying to be more inclusive. I get that newer stuff does what you want it to do (and I’m really glad!), but most mobile browsers do not behave the way you’re asserting.

The point I’m trying to make is that you cannot compare the referenced conditional loading using a <link> tag with a media query within a stylesheet. The behaviour is just different, and the referenced test results cannot be used as a comparison.

You might be right that it won’t work in certain browsers, but it certainly does in the majority of mobile browsers according to… . Confirmed using the method above, that’s Safari 5 & 6, Android 4.x and IE9, with a combined market share of almost 81%. There’s a good chance that a portion of the other browsers support this method as well, but that I can’t test.

I do agree that your solution probably covers all mobile devices (or at least those supported by Modernizr), but 81%+ is enough to consider not supporting the remainder and eliminating that one additional (or even two, counting Modernizr) HTTP request.

You can use inline media queries to do this, e.g.:

@media screen and (min-width: 42em) {
  body {
    background-image: url(bg-mountain.png);
  }
}

Also, mobile performance is about more than just reducing file size. HTTP requests have much higher latency on mobile and can in many cases cause worse performance than a few extra KB of total page size. Ilya Grigorik’s recent presentation “Breaking the 1000ms Time to Glass Mobile Barrier” is really informative on that subject.

It’s always good to see people advocating improving mobile performance though!

No, you can’t. Not in most browsers today. See the comment above (and the one above that). Things will change eventually, but media attributes and inline media queries don’t prevent downloads in most mobile browsers currently in the wild.

As for the 1000ms time-to-glass presentation, it is awesome enough that I’ve already blogged about it twice, and I’d recommend that everyone check it out!

Apologies for a delayed reply!

When you say most mobile browsers, which ones are you referring to? My iPhone certainly doesn’t download things that are inlined inside stylesheets, so I created a test.

Here’s the waterfall diagram from using Safari to remote-debug it, and another from using Mobitest to simulate Android.

Neither of those contains the large background image… I’m pretty sure Scott Jehl’s tests were only for link elements with media attributes, not for inline styles.

One comment.
What did those waterfalls look like before?
By increasing the dependencies in “big screen”, “big mountain” no longer loads until far after everything else on the page. (Which is very snappy, btw.)
1) We have to load and run Modernizr.
2) We have to get big.css.
3) We have to get big mountain.

All this happens in serial, causing big mountain to load significantly (and noticeably) later than anything else on the page.

Whereas if you had just let it load, the whole page would likely have finished loading 30%-ish faster? It seems like the optimization for mobile has detracted from the experience for non-mobile users. It might be more appropriate to detect the user agent server-side and include the conditional CSS there, which would dramatically speed up the load of “big mountain”.

Also worth considering is how much does allowing big mountain to load really affect the performance of mobile? Maybe hiding it and just letting it load does not significantly impact the mobile user experience.

Just a thought.

Before the code change the waterfalls showed 372K for both, with mobile taking something like 1.4s on wifi, accented by that large background image slowly painting in (or holding things up until it downloaded). So this is a pretty good improvement!

Glad you brought up the thing about server-side UA detection. That’s not an option in my case, because the site in question — although dynamically generated — sits behind Varnish, so anonymous traffic is served through its static cache. It can’t change the HTML for each user; there’s one set of assets which expire every 10 minutes or so.

Regardless of static or dynamic HTML, in my opinion it’s more scalable long-term to assemble sites by relying on feature detection, allowing each browser to decide what it needs to build a page. Obviously there are other approaches in JavaScript, such as window.innerWidth, but I went with Modernizr.mq() this time.

I agree with the general aim behind this article—reducing total page size is always a good idea—but it ignores the way that JavaScript and image files each affect page rendering. Since loading external JavaScript files blocks page rendering, it’s important to get those files downloaded, parsed and executed as fast as possible. In many cases, shaving a few kb off of your JavaScript can have a bigger impact on performance than a reduction in the weight of your images. Total weight isn’t everything.

This is a very common response, and it always comes from people already familiar with the concept of frontend performance. Think of this article as a starting point for people who are just learning how to speed up their pages.

I absolutely agree that there are other approaches to making pages faster, from minification to lazy parsing of files, but in terms of quick, easy wins this is a really basic one that often gets overlooked.

“parsed and executed as fast as possible”
Actually, blocking is a good reason to delay JavaScript until the end by placing it at the bottom of the page. The other option is proper use of the defer or async attributes, unless you’re using document.write.

I don’t think this is true for the small sizes you’ve mentioned. If there were a really large image then parallel loading of megabyte-sized images would be more likely to result in a gain, but I doubt you come out ahead with three 20K chunks.

Actually this would have the opposite effect. You’d be adding http header information for each new request, image header information and meta information on each image, and increasing the total bytes downloaded. Image Slicing is bad for the web.

Google’s PageSpeed module even has a tool to automatically sprite images.

Google and Yahoo! actually suggest the opposite:
- Sprite images as much as possible.
- Crop or resize images to the size they are displayed at (don’t force browsers to resize them).
- Optimize images.

Other recommendations from Google and Yahoo!:
- Serve only one JavaScript file and only one CSS file.
- Minify CSS and JavaScript.
- Set far-future Expires headers on images, JavaScript, and CSS.
- Configure ETags, or turn them off (I typically choose the latter).
- Enable deflate or gzip compression for JavaScript, CSS, rendered HTML, and text files.

Thanks for the article. I think the JS-driven approach is probably the ideal one for most websites, although it needs careful testing on mobile, as the JS dependencies/parsing can have unexpected effects on actual device timings.

This is probably known already, but I wanted to point out that the image in question is actually a PNG, not a JPG as the title suggests. I think for mobile users (and probably desktop too), going with the reduced size of a JPG may have more impact than the barely perceptible extra detail, especially on a background image with a limited palette. Going with 80% quality cuts the size in half; going with 50% (which still looked quite acceptable to me) cuts it down to less than ~35%. JPG would also allow progressive rendering, which reduces the size slightly and results in much faster initial display.

Heh, yeah it’s a PNG. One less photo, whatever it is.

If using a huge JPG were possible in this scenario I might have attempted it instead, but we have a tiling noise background I wanted to retain behind the mountain, and JPG doesn’t have transparency. In the post I mentioned trying to merge the two, but the size of the resulting image was monstrous, since the quality had to be very high to avoid compressing the texture behind the mountain.

While I agree with the premise that shaving other extraneous elements out of the page can achieve good size savings, this seems to be a compromise for the sake of expediency.

The “bigger difference” mentioned in the second paragraph would be properly achieved by removing or optimizing all parts of the page, from the copy to the images to the JavaScript to the CSS, then determining whether a CDN could be used to serve assets faster from more local locations, etc.

Doing bits and pieces gets you part of the way there but nothing more.

“removing or optimizing all parts of the page” and setting up a CDN is a ton of work and not everyone knows how to do that stuff. I’m glad that you do, and that’s great! However, my tip was specifically aimed at people who are just getting started with these types of problems.

Not all solutions have to be comprehensive to be effective.

Just wondering if you’re testing this using remote debugging with the DevTools on an Android browser.

Your site loads in under a second. This is some crazy extra optimization. How many hours did this require? Has it been worth it?


I do remote debugging for regular work but these particular screenshots came out of OSX Chrome using browser overrides in devtools.

Yes, the effort was worth it. The work described in the article took approximately 15 minutes — if you don’t count the time I spent writing the Drupal module ^_^

The goal of reducing download size is great, but why did you choose to use Modernizr rather than just include the style in another CSS file? (Unless you were aiming to defer its load until later in the page.)

Any ideas why big.css is being loaded twice in the second waterfall (second time looks like it’s come from cache)?

It’s alright now; my browser has refreshed and I can see the answers to other people.

I still don’t get why you just didn’t merge all the CSS into one file though, it’ll compress down really well as it’s likely to be quicker than several separate requests.

My guess is that probably the biggest performance optimisation would be removing jQuery, but that brings other compromises.

Is your site down right now? Wanted to check if your image could be optimized using ImageAlpha or CryoPNG.

Our DNS provider got DDoSed over the weekend :(

You can go ahead and check but I’ll tell you now that neither were applied. I know that those tools could save me 1-3 kb per image, but that is basically nothing compared to the optimization I already applied.

The point of this post was to show a tiny code change resulting in massive savings, not to be a comprehensive guide to bandwidth reduction.

Chris Ruppel is a frontend developer who makes websites load fast and shrink on your phone. He currently lives in the beautiful town of Freiburg, Germany.