If the number one rule for making faster websites is to "Minimize HTTP Requests", then let's try it.

On this site, almost all pages are served entirely from memcache: Django renders the template with the database content and the generated HTML is cached. So I thought I'd insert a little post-processing script that converts every <img src="...something..."> into <img src="data:image/png;base64,iVBORw0KGgo...">, which basically means the HTML gets as fat as the sum of all the referenced images combined.
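A minimal sketch of what such a post-processor could look like (the regex, the STATIC_ROOT path and the function name are illustrative, not the actual code on this site):

import base64
import mimetypes
import os
import re

IMG_REGEX = re.compile(r'<img([^>]*?)src="([^"]+)"')
STATIC_ROOT = '/path/to/static'  # wherever the referenced image files live

def inline_images(html):
    # Replace every <img src="..."> with a base64 encoded data URI,
    # keeping the original URL around in data-orig-src for fallbacks.
    def replacer(match):
        attrs, src = match.groups()
        path = os.path.join(STATIC_ROOT, src.lstrip('/'))
        mimetype, _ = mimetypes.guess_type(path)
        with open(path, 'rb') as f:
            data = base64.b64encode(f.read()).decode('ascii')
        return '<img%ssrc="data:%s;base64,%s" data-orig-src="%s"' % (
            attrs, mimetype, data, src)
    return IMG_REGEX.sub(replacer, html)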

It's either 10Kb of HTML followed by (roughly) 10 x 30Kb images, or it's 300Kb of HTML and zero images. The result is here: https://www.peterbe.com/about2 (open it and view the source).

You can read more about the Data URI scheme here if you're not familiar with how it works.
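In short, instead of pointing at an external resource, the URI itself carries the payload. The general form (from RFC 2397) is:

data:[<mediatype>][;base64],<data>

so, for example, data:text/plain,Hello%20World is a complete, self-contained resource.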

The code is a hack but that's after all what a personal web site is all about :)

So, how much slower is it to serve? Well, the actual server-side render time is obviously longer, but that's a step you only have to perform on a tiny fraction of requests since the HTML can be nicely cached.
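For what it's worth, a sketch of the caching side of that, using Django's per-view cache (not necessarily how this site does it):

from django.shortcuts import render
from django.views.decorators.cache import cache_page

@cache_page(60 * 60)  # serve the rendered, image-inlined HTML from cache for an hour
def about(request):
    return render(request, 'about.html')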

Running:
ab -n 1000 -c 10 https://www.peterbe.com/about

BEFORE:

Document Path:          /about
Document Length:        12512 bytes

Concurrency Level:      10
Time taken for tests:   0.314 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      12779000 bytes
HTML transferred:       12512000 bytes
Requests per second:    3181.36 [#/sec] (mean)
Time per request:       3.143 [ms] (mean)
Time per request:       0.314 [ms] (mean, across all concurrent requests)
Transfer rate:          39701.75 [Kbytes/sec] received

AFTER:

Document Path:          /about2
Document Length:        306965 bytes

Concurrency Level:      10
Time taken for tests:   1.089 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      307117000 bytes
HTML transferred:       306965000 bytes
Requests per second:    918.60 [#/sec] (mean)
Time per request:       10.886 [ms] (mean)
Time per request:       1.089 [ms] (mean, across all concurrent requests)
Transfer rate:          275505.06 [Kbytes/sec] received

So, it's basically 292MB transferred instead of 12MB over the whole test, and the requests per second drop to about a third of what they used to be. But that's not too bad. And with web site optimization, what matters is the individual user's impression, not how much (or how little) the server can churn out to many concurrent users.

Next, what do the waterfalls look like?

BEFORE:

[WebPagetest waterfall, before]

[Pingdom Tools waterfall, before]

AFTER:

[WebPagetest waterfall, after]

[Pingdom Tools waterfall, after]

Note! All the images in the "before" version are served individually from a fast CDN. The HTML is served from London, United Kingdom, and the WebPagetest run was done from Virginia, USA.

What can we conclude from this:

  • It worked! There are fewer requests: 18 requests become 6.
  • The "Start Render" happens significantly earlier.
  • The "Document Complete" event happens slightly earlier.
  • The total file size goes from 286Kb to 283Kb!
  • Before: first load takes 2 seconds, repeated view takes 0.4 seconds.
  • After: first load takes 2 seconds, repeated view takes 2 seconds :(
  • Pingdom Tools sums the kilobytes, which gives a rounding error compared to WebPagetest.

Some more thoughts and conclusions:

If you're wondering how the total file size can stay the same as before (the sum of the HTML plus the images), it's because all the images are base64 encoded into one large document, which gzip presumably compresses better as a whole. If there were fewer images I suspect the second version would be slightly bigger in total. Apparently a base64 encoded image, even after gzip, is supposed to be 2-5% bigger than the original JPG/PNG served individually.
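If you want to test that claim on your own images, here's a quick approximation (zlib's deflate is what gzip uses; the file name is hypothetical):

import base64
import zlib

raw = open('photo.png', 'rb').read()
b64 = base64.b64encode(raw)

print('original:      %d bytes' % len(raw))
print('base64 + gzip: %d bytes' % len(zlib.compress(b64, 9)))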

Don't do this at home, kids, if you don't have a good server-side cache and a web server that serves the HTML gzipped.
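If you serve through Nginx, for example, that means making sure something like this is in place (the exact values are just an illustration):

gzip on;
gzip_comp_level 6;     # decent compression without burning too much CPU
gzip_min_length 1024;  # skip tiny responses
gzip_vary on;          # play nice with caching proxies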

Although the code I put in place to make this possible is, right now, pretty ugly, it is after all pretty convenient for the developer because it works like a plugin you just add to the rendering. You don't even notice it going on in the template or in the view code. However...

More work is needed, though, and that is for the IE <= 7 crowd. Internet Explorer 7 and older don't support data URIs at all, so you need a shim for them that looks something like this:


<!--[if lte IE 7]>
<script>
// Swap every inlined image back to its original URL, which the
// post-processor keeps in a data-orig-src attribute.
$('img').each(function() {
  $(this).attr('src', $(this).data('orig-src'));
});
</script>
<![endif]-->
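That assumes the post-processor leaves the original URL behind on each tag, e.g. (made-up paths): <img src="data:image/png;base64,iVBORw0KGgo..." data-orig-src="/static/images/photo.png">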

It would need some love and work but the principle is there and it's sound.

Or, just ignore them. After all, only 3% of my visitors are on IE8 and only 0.5% are on IE7. At least they can read the text. This brutal exclusion isn't always an option. But the shim is.

I think I'm going to keep it. The code needs to be packaged up and made neat before I commit to it. There are a lot more interesting things one can do with this. For example, you could, in a post-processor, optimize the CSS by inspecting the DOM to see which selectors can be dropped.
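One rough sketch of how that selector pruning could work, using lxml and cssselect (entirely hypothetical, not code running on this site):

import re
from lxml import html
from lxml.cssselect import CSSSelector, SelectorError

def used_selectors(css_text, html_text):
    # Keep only the selectors that actually match something in the DOM.
    tree = html.fromstring(html_text)
    keep = []
    # Naive parse: everything before a '{' is a selector group.
    for group in re.findall(r'([^{}]+)\{', css_text):
        for selector in (s.strip() for s in group.split(',')):
            try:
                if CSSSelector(selector)(tree):
                    keep.append(selector)
            except SelectorError:
                # Pseudo-classes like :hover can't be checked statically; keep them.
                keep.append(selector)
    return keep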

UPDATE

Some really valuable comments below have pointed out that using data URIs causes memory bloat in Gecko, which means it might be particularly harmful for people with many open tabs or on mobile devices.

Hmm... back to the drawing board a bit I guess.

Comments

Ami Ganguli

Would it make more sense to combine all the images and use sprites? Then you have one extra HTML request, but all of your HTML can download and render quickly.

You can eliminate the other requests by inlining the css and Javascripts.

(Not that I'd actually do this, but it would be interesting to know how it affects performance.)

Ami Ganguli

Oops, s/one extra HTML request/one extra HTTP request/.

pd

The small gain in initial load time is more than butchered by the awfully longer repeat view time.

Peter Bengtsson

Of course, but most people only view one page. Some select few go on to view more pages.

A large majority of my visitors come from a Twitter link, an HN link or a Google search. They rarely read more than one page.

That first impression is important.

pd

I think your colleagues have good points about the memory usage implications of this technique, and also about the option of using SPDY. I think SPDY is the better solution here, if I understand it correctly, in that SPDY tries to use a single HTTP request for delivering multiple files.

Axel Hecht

Justin Lebar asked gaia devs to not use data uris in https://groups.google.com/d/msg/mozilla.dev.gaia/f96P-ZzJQE8/Le5iaWUC76gJ, as it's bloating the memory required to render the page.

Kyle Huey

Yes, data URIs are very inefficient in memory for the reasons Justin outlines. I would not encourage using them in production.

Russ Ferriday

Nice experiment.

@pb: nit: "always and option" => an

@Axel base64 inflates image files, and gzip compresses out some of this effect. Presumably naïve intermediate buffering for unzipping and decoding base64 demands more RAM than simply rendering directly from PNG?

@Ami careless use of sprites can disappoint on iOS and friends by exceeding image buffering limits. http://bit.ly/VMkmYV Sprites might make sense in this comparison if they are limited to images from only the page in question.

pd

Sorry, serves me right for following the hype and buying/using a 'smartphone' keyboard.

Laughingly though, "pb" is not me, it's "pd" :)

Justin Dolske

*shudder*. An interesting experiment, but I hope the lesson here isn't "use data uris", for the reasons Kyle/Axel mentioned.

I suspect you're digging a local-optimum hole... How does this compare to using SPDY (or even spriting)? How does it compare to other page optimizations (12MB still seems like a lot for what's there)? How does it compare on mobile, where memory/CPU/network are often far more limited and slower?

Andy Davies

Interesting experiment, but I actually wonder if data URIs are a performance anti-pattern in some browsers...

Chrome already prioritises html, css and js over media, and Patrick McManus has landed a patch for Mozilla to do the same - http://bitsup.blogspot.co.uk/2012/12/smarter-network-page-load-for-firefox.html

Image data URIs circumvent this process as they wrap up the media in the HTML/CSS. In the short term it might remain a worthwhile trade-off for smaller images, but with the multiplexing SPDY/HTTP 2.0 brings it might become undesirable.

@andydavies

Anonymous

"Justin Dolske: How's this compare on mobile, where memory/cpu/network are often far more limited and slower?"

If you are curious about the memory/CPU usage, you can save the web page as .mht and open that. The MHT format stores all media as base64.

As for the network: "disable images" is a very effective way to save bandwidth. If you embed images using data URIs, you make it worthless.

The only exception is the favicon, which seems to get downloaded by every graphical browser, even when images are turned off.
