Secs sell: How frickin' fast this site is!
After a lot of optimization work on this website, I now get a score of 98 on YSlow! Phew, finally!

I've managed to get near-perfect scores in the past, but never on something as "big" and mixed and "multimedia" as this, i.e. the home page. The home page on this site contains a lot of content: lots of thumbnails and lots of code.

As always, it really helps if you can control the requirements, meaning you can say "No, we don't want an embedded Flash widget with 30 KB of JavaScript". In my case I didn't want the content to be dynamic per user request, so the underlying HTML can be properly cached. I also don't need any JavaScript on the home page, because all it shows is static content.
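Serving identical, cacheable HTML to every visitor mostly comes down to cache headers. As a rough sketch only (the post doesn't say which web server this site runs on, and the paths and durations below are made-up assumptions), such a setup might look like this in Nginx:

```nginx
# Illustrative only: locations and durations are assumptions,
# not taken from this site's actual configuration.
server {
    listen 80;
    server_name example.com;

    # Static HTML: no per-user state, so browsers and
    # intermediate caches may keep it for an hour.
    location / {
        root /var/www/site;
        add_header Cache-Control "public, max-age=3600";
    }

    # Fingerprinted assets (CSS/JS/images) can be cached
    # essentially forever; a new filename busts the cache.
    location /static/ {
        root /var/www/site;
        expires max;
    }
}
```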

My individual blog pages are the only pages that require JavaScript. What I did there was let Google host a copy of the latest jQuery, and I just add some minified code to handle the AJAX of the comment posting. It's pretty cool that the individual blog post pages get a score of 99 on YSlow even though they contain a decent amount of JavaScript.
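The shape of that setup is roughly the following (a hedged sketch: the endpoint `/post-comment` and the element ids are hypothetical, not this site's actual markup — only the Google-hosted jQuery URL pattern is real):

```html
<!-- Sketch only: /post-comment and the element ids are made up. -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
<script>
$(function () {
  $('#comment-form').submit(function (event) {
    event.preventDefault();
    // Post the comment without a full page reload.
    $.post('/post-comment', $(this).serialize(), function (response) {
      $('#comments').append(response.html);
    });
  });
});
</script>
```

Because the Google CDN URL is identical across every site that uses it, a visitor may already have jQuery cached before they ever arrive.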

What I've also done is move every single image, CSS and JavaScript asset to the Amazon CloudFront CDN. Yes, this costs money, but certainly not much. My web server is located in London, England, which is a good location, but considering that 70% of my visitors are based in North America it makes more sense that 90% of the page content is served near them instead. This is clearly illustrated in this screenshot from Pingdom.
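Pointing assets at a CDN is usually just a matter of rewriting URLs when rendering templates. A minimal sketch (the CloudFront domain below is a placeholder, not this site's real distribution, and `cdnUrl` is a hypothetical helper name):

```javascript
// Hypothetical CloudFront distribution domain -- a placeholder,
// not this site's actual CDN hostname.
var CDN_DOMAIN = 'https://d1234abcd.cloudfront.net';

// Rewrite a local static path to its CDN equivalent.
function cdnUrl(path) {
  // Leave absolute URLs (e.g. third-party scripts) untouched.
  if (/^https?:\/\//.test(path)) {
    return path;
  }
  return CDN_DOMAIN + (path.charAt(0) === '/' ? '' : '/') + path;
}

console.log(cdnUrl('/static/css/main.css'));
// -> https://d1234abcd.cloudfront.net/static/css/main.css
```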
[Screenshot: Pingdom page load test]

I'm quite aware that it's 100 times easier to build a fast website when you can simply disregard certain features such as fat picture galleries and massive blocks of Javascript stuff. But mind you, choosing not to add those features is a large part of making fast websites too. The number one rule of making a request fast is to not make it at all.

I'll soon blog more about how I made these things happen from a technical point of view.

Anonymous - 30 March 2012
thanks !!
mario - 02 April 2012
It is indeed crazy fast! I had a look at your source on github and it looks really clean and proper.
nagisa - 05 April 2012
If you can, don't use jQuery. There are several advantages to dropping it.

First of all, people wouldn't need to download that much garbage which won't be used anyway (you use only some jQuery functions, right?). Prefer libraries suited to your specific needs; in most cases these are lighter (by about 80%) and faster...

Secondly, all that garbage wouldn't be executed – that too drains several microseconds of CPU time.

Thirdly, hand optimized plain JavaScript itself executes faster.
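To make the first point above concrete, here are a few plain-JS equivalents of common jQuery utility helpers. These particular substitutions are my illustration, not from the comment itself (and in 2012 the regex-based trim and a `forEach` polyfill would still have been needed for old IE):

```javascript
// jQuery: $.trim('  hi  ')
var trimmed = '  hi  '.replace(/^\s+|\s+$/g, ''); // 'hi'

// jQuery: $.inArray(2, [1, 2, 3])
var index = [1, 2, 3].indexOf(2); // 1

// jQuery: $.each([1, 2, 3], fn)
var doubled = [];
[1, 2, 3].forEach(function (n) {
  doubled.push(n * 2);
});
// doubled is [2, 4, 6]

// jQuery: $('#foo') -- in a browser, plain JS is:
// document.getElementById('foo')
```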
Peter Bengtsson - 05 April 2012
Will it make a big difference? Obviously a lighter version of jQuery is fewer bytes to download. But then again, there may be no Google CDN to use for it, and because the Google CDN URL for jQuery is the same across different sites, people who visit my site might already have it in their cache.

The question is, does loading the big standard jQuery result in excessive amounts of JavaScript execution? That might really matter on a mobile device, perhaps more so than saving 10 KB.
nagisa - 05 April 2012
I skimmed through your JavaScript code and basically all you do is AJAX requests. Everything else could be written with the same amount of pure JS (and some polyfills for older IEs).
Now, jQuery is ~32 kB gzipped. For AJAX you could use, for example, https://github.com/visionmedia/superagent which is only 3 kB gzipped.

As for library evaluation speed, I put together some test cases (http://jsperf.com/jquery-vs-superagent).
Bear in mind that even though jQuery can be cached, it needs to be re-executed from scratch on EVERY page load. So you can think of every load as computing 2*2 a million times. A waste of processing power, isn't it?

On a side note, it is pretty hard to write JavaScript that IE is willing to execute.
Aidiakapi - 24 June 2012
So because an average computer can only do it 250 times per second, it's an issue...

This is typically one of those cases where optimization isn't necessary. Saving 4 ms on a request isn't going to change the user's experience. In fact, if you have a large number of concurrent users, transmitting that small library from your own servers would probably cause more delay than 50 page loads combined, and it might even hurt concurrency. (Of course, this is just as minor and negligible as the 4 ms it takes to load jQuery.)

Think about the whole picture before talking about optimizing. In fact, there are sites that load slowly because they're built to serve 10K requests per second: if they optimized the experience for each individual visitor, the page might load half a second faster, but only 2K requests could be handled per second.
Peter Bengtsson - 05 April 2012
Here's an interesting back-of-the-envelope calculation:
http://forum.jquery.com/topic/make-a-light-version-of-jquery-jquery-lite#14737000002712094
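In the same spirit, a hedged back-of-the-envelope sketch using the sizes quoted in the thread above (~32 kB for gzipped jQuery vs ~3 kB for superagent); the cache-hit rate is a made-up assumption, not measured data:

```javascript
// Sizes quoted in the comment thread above, in kilobytes (gzipped).
var jquerySize = 32;
var superagentSize = 3;

// Made-up assumption: half of visitors already have the Google CDN
// copy of jQuery in their browser cache.
var cdnCacheHitRate = 0.5;

// Expected transfer per page view, in kB.
var expectedJquery = (1 - cdnCacheHitRate) * jquerySize; // 16 kB
var expectedSuperagent = superagentSize;                 // 3 kB, no shared CDN cache

console.log('jQuery via CDN:', expectedJquery, 'kB on average');
console.log('superagent:    ', expectedSuperagent, 'kB on average');
// With these illustrative numbers the expected saving is 13 kB per load.
```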

