Damn lies and a benchmark comparing Apache and Nginx

03 June 2008   7 comments   Linux


Today I moved a bunch of sites over from Apache to Nginx, still keeping Squid in between as an HTTP accelerator (I hope to replace Squid with Varnish soon). I did a quick benchmark of an HTML page that is cached by Squid, four times via Apache and four times via Nginx. The results:
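For reference, a benchmark like this can be run with ApacheBench; a minimal sketch, where the URL, request count, and concurrency level are assumptions rather than the exact values used here:

```shell
# Warm the Squid cache first so both backends serve the same cached page
curl -s -o /dev/null http://localhost/some-page.html

# Then hammer the cached page: 10000 requests, 10 concurrent
ab -n 10000 -c 10 http://localhost/some-page.html
```

The "Time per request ... across all concurrent requests" line in ab's output is the mean divided by the concurrency level, which is why it is a tenth of the plain mean at `-c 10`.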

Apache:

Requests per second:    1601.34 [#/sec] (mean)
Time per request:       6.268 [ms] (mean)
Time per request:       0.627 [ms] (mean, across all concurrent requests)
Transfer rate:          13020.50 [Kbytes/sec] received

Nginx:

Requests per second:    1810.02 [#/sec] (mean)
Time per request:       5.6435 [ms] (mean)
Time per request:       0.5645 [ms] (mean, across all concurrent requests)
Transfer rate:          14591.35 [Kbytes/sec] received

That's "only" 13% faster and I had hoped for a bigger difference, but the test is very simple and depends on how Squid feels. The other important test would be to see how much less CPU and memory Nginx uses during the stress-test period, but that's for another day.

One note: This is Nginx 0.4.3 on Debian Etch. The current stable release is Nginx 0.6.13. I'll need to talk to my sys admins to remedy this. Perhaps it makes a difference on the benchmark, I don't know.


Igor Clark
Hey Peter - long shot, because I don't know your setup or exactly what you're trying to do, but for static content, have you thought of trying it without Squid?

We've been using nginx a lot and get pretty stellar results in comparison to Apache when it comes to straight HTML, particularly when the concurrency goes up. We served 440K ~1MB (!) page impressions in about 5 hours with nginx on a dual-core Xeon running CentOS 5 with 2GB RAM, and the thing didn't break into a sweat; use system sendfile, keep everything in kernel cache RAM and you're laughing. In another project, using a mix of static content and dynamic upstream (FastCGI), nginx served up 38.3 million static files and over 27 million upstream requests in 7 days, and I never once saw the load on the nginx box go above 0.5 - I've seen Apache boxes crying under much, much lighter load than that.

If you're testing both Apache and nginx against Squid, at worst Squid might be the bottleneck, and it also seems you may really be testing the respective upstream modules rather than the serving capacity. If you just want to chuck out flat HTML, do try nginx on its own if you haven't already. I'm just guessing, but intuitively I reckon introducing Squid into the equation might slow nginx down, particularly if it's all on the same box, because of Squid's memory usage and the extra pile of context switches.
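Trying nginx on its own for flat HTML needs very little configuration; a minimal sketch of a static server block, where the server name and document root are placeholders:

```nginx
server {
    listen 80;
    server_name example.com;   # placeholder

    # Serve flat HTML straight off disk
    root /var/www/html;

    sendfile   on;   # let the kernel copy file -> socket, no userspace buffer
    tcp_nopush on;   # with sendfile, fill packets before sending

    location / {
        index index.html;
    }
}
```

With `sendfile on`, repeatedly requested files end up served from the kernel page cache, which is the "keep everything in kernel cache RAM" effect described above.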

Like I say, I don't know anything about your setup so I know I'm just guessing wildly about this, but my experience of nginx has been so overwhelmingly impressive that I'm loath to use Apache at all these days.

Anyway, hope all's well!
Igor Clark
Incidentally, forgot to mention http://code.google.com/p/ncache/ - a Squid replacement based on nginx.
Cliff Wells
I'd also suggest that such a light test didn't really test either server. It's highly unlikely you'll be able to significantly load Nginx with only a single client making requests.

Also, you didn't publish any information on CPU/RAM utilization for each server under the same load. Even if you can't load the servers up enough to create a significant req/s difference you should definitely see a large difference in resource utilization at whatever load you are able to generate.
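One rough way to capture that resource-utilization data is to sample CPU and resident memory of the server's processes while the benchmark runs; a sketch, where the process name argument is an assumption to adjust for your setup:

```shell
#!/bin/sh
# Sample total CPU% and resident memory (RSS) of a server's processes
# once a second for 10 seconds. Usage: ./sample.sh nginx  (or apache2)
PROC="${1:-nginx}"

for i in $(seq 1 10); do
    # ps -C matches by command name; awk sums over all worker processes
    ps -C "$PROC" -o %cpu=,rss= |
        awk '{cpu+=$1; rss+=$2} END {printf "cpu=%.1f%% rss=%dKB\n", cpu, rss}'
    sleep 1
done
```

Running this in one terminal while ab runs in another gives a side-by-side picture of how Apache and nginx behave under the same load.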
Your "sys admins"? lol don't make me laugh. I think you meant to say "the support department of my piss-ant shared hosting company". Just take a look at your blog and then consider how ridiculous you look

P.S. when you put a picture of your face on your site and then spill your pathetic ego all over your pages - people will start imagining how satisfying it would be to punch you in your ugly, stupid, curly-haired face.
Cliff Wells
Wow. Considered decaf lately?
Here are tests comparing Apache, Nginx, Cherokee, G-wan and even a proxy (Varnish):


This should help you find what works best.
Peter Bengtsson
That is awesome! Looking forward to studying G-wan more. I'm surprised they managed to get Varnish that fast.
