After a lot of optimization work on this website, I now get a score of 98 on YSlow! Phew! Finally!
I've managed to get near-perfect scores in the past, but never on something as "big", mixed and "multimedia" as this, i.e. the home page. The home page on this site contains a lot of content: lots of thumbnails and lots of code.
I'll soon blog more about how I made these things happen from a technical point of view.
Comments
It is indeed crazy fast! I had a look at your source on GitHub and it looks really clean and proper.
If you can – don't use jQuery. There are several advantages to dropping it.
First of all, people wouldn't need to download that much garbage, most of which won't be used anyway (you use only some jQuery functions, right?). Prefer libraries suited to your specific needs – in most cases these are lighter (by about 80%) and faster...
Secondly, all that garbage wouldn't be executed – that too drains several microseconds of CPU time.
Will it make a big difference? Obviously a lighter alternative to jQuery means fewer bytes to download. But then again, a lighter library may not be available on the Google CDN, and because the Google CDN URL for jQuery is the same across different sites, people who visit my site might already have it in cache.
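A common way to get that cache benefit while keeping a safety net is the standard CDN-plus-local-fallback pattern – a minimal sketch, not the actual setup on this site, assuming a hypothetical local copy at /js/jquery.min.js:

```js
// Inline <script>, placed right after the Google CDN tag, e.g.:
//   <script src="//ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
// If the CDN copy loaded, window.jQuery is defined and this line does nothing;
// otherwise it writes a <script> tag pointing at the (hypothetical) local copy.
window.jQuery || document.write('<script src="/js/jquery.min.js"><\/script>');
```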
Now jQuery is ~32kB gzipped. For AJAX you could use, for example, https://github.com/visionmedia/superagent, which is only 3kB gzipped.
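For comparison, a minimal sketch of the same JSON GET in both libraries – the /api/posts endpoint is made up, and superagent's callback signature has varied between versions, so treat this as illustrative:

```js
// jQuery (~32kB gzipped for the whole library):
$.getJSON('/api/posts', function (data) {
  console.log(data);
});

// superagent (~3kB gzipped; assumes the standalone browser build,
// which exposes a global `superagent`):
superagent
  .get('/api/posts')
  .set('Accept', 'application/json')
  .end(function (err, res) {
    if (err) return console.error(err);
    console.log(res.body); // parsed JSON body
  });
```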
As for library evaluation speed, I put together some test cases (http://jsperf.com/jquery-vs-superagent).
Bear in mind that even though jQuery can be cached, it needs to be re-executed from scratch on EVERY page load. You can think of every load as computing 2*2 a million times. A waste of processing power, isn't it?
And since an average computer can only evaluate it about 250 times per second – roughly 4ms per evaluation – it's an issue...
This is typically one of those cases where the optimization isn't necessary. Saving 4ms on a request isn't going to change the user's experience. In fact, if you have a high number of concurrent users, transmitting that small library from your own servers would probably cause more delay than 50 page loads combined, and it might even affect the concurrency. (Of course, this is just as minor and negligible as the 4ms it takes to load jQuery.)
Think about the whole picture before talking about optimizing. In fact, there are sites that load slowly because they're built to serve 10K requests per second. If they optimized the experience for each individual visitor, the page might load half a second faster, but only 2K requests could be handled per second.
Here's an interesting back-of-the-envelope calculation:
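A minimal sketch, using only the figures quoted above (~32kB vs ~3kB gzipped, ~4ms per evaluation) plus a hypothetical 10,000 page loads per day:

```js
// All figures are rough assumptions taken from the thread, not measurements.
var jqueryGzipKB = 32;     // jQuery over the wire, gzipped
var superagentKB = 3;      // superagent over the wire, gzipped
var parseMs      = 4;      // quoted cost of re-evaluating jQuery per load
var loadsPerDay  = 10000;  // hypothetical daily page loads

var transferSavedMB = (jqueryGzipKB - superagentKB) * loadsPerDay / 1024;
var cpuSpentSec     = parseMs * loadsPerDay / 1000;

console.log(transferSavedMB.toFixed(1) + ' MB/day saved by the lighter library');
console.log(cpuSpentSec + ' s/day of client CPU spent re-parsing jQuery');
// => ~283.2 MB/day and 40 s/day – but spread across thousands of visitors,
//    which is why the per-user saving above is called negligible.
```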