The advantage of WebSockets (over AJAX) is basically that there's less HTTP overhead. Once the connection has been established, all future message passing happens over a socket rather than via new HTTP request/response calls. So you'd assume that WebSockets can send and receive many more messages per unit of time. Turns out that's true. But there's a very bitter reality once you add latency into the mix.

So, I created a simple app that uses SockJS and an app that uses jQuery AJAX to see how they would perform under stress. Code is here. All it does, basically, is send a simple data structure to the server, which echoes it back. As soon as the response comes back, it starts over, again and again, until it's done X number of iterations.
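The loop at the heart of both versions can be sketched like this (a simplified, hypothetical reconstruction; the function names are mine, not the repo's). The key property is that each message waits for its echo before the next one goes out, so only the transport differs between the two pages:

```javascript
// Hypothetical sketch of the echo benchmark loop. `sendAndEcho` stands in
// for either a jQuery AJAX round trip or a SockJS send + message event.
function runBenchmark(iterations, sendAndEcho, done) {
  const start = Date.now();
  let count = 0;
  function next() {
    if (count === iterations) {
      const seconds = (Date.now() - start) / 1000;
      done({ seconds: seconds, rate: iterations / seconds });
      return;
    }
    count += 1;
    // Send a simple data structure; the server echoes it back, then we repeat.
    sendAndEcho({ msg: count }, next);
  }
  next();
}

// Example with an in-process "echo server" standing in for the network:
runBenchmark(1000, function (payload, cb) { setImmediate(cb); }, function (result) {
  console.log(result.seconds + " seconds, " + result.rate + " messages/second");
});
```

Because the loop is strictly sequential, total time is roughly iterations times round-trip time, which is exactly what makes latency matter so much below.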

Here's the output when I ran this on localhost here on my laptop:

# /ajaxtest (localhost)
start!
Finished
10 iterations in 0.128 seconds meaning 78.125 messages/second
start!
Finished
100 iterations in 0.335 seconds meaning 298.507 messages/second
start!
Finished
1000 iterations in 2.934 seconds meaning 340.832 messages/second

# /socktest (localhost)
Finished
10 iterations in 0.071 seconds meaning 140.845 messages/second
start!
Finished
100 iterations in 0.071 seconds meaning 1408.451 messages/second
start!
Finished
1000 iterations in 0.466 seconds meaning 2145.923 messages/second

Wow! It's so fast that the rate doesn't even settle down. A back-of-an-envelope calculation tells me the WebSocket version is roughly five times faster. Again: wow!

Now reality kicks in! It's obviously unrealistic to test against localhost because it doesn't take latency into account. That is, it ignores the long distance the data has to travel from the client to the server.
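To put a rough number on that: every iteration costs at least one full network round trip, so the round-trip time (RTT) puts a hard floor on the total, whatever the transport. Assuming a ~150 ms California-to-London RTT (my guess for illustration, not a measured figure):

```javascript
// Lower bound for sequential request/response iterations at a given RTT.
const assumedRttMs = 150;  // assumed round-trip time, not measured
const iterations = 1000;
const floorSeconds = (assumedRttMs * iterations) / 1000;
console.log(floorSeconds + " seconds minimum");  // "150 seconds minimum"
console.log((iterations / floorSeconds).toFixed(2) + " messages/second at best");  // "6.67 messages/second at best"
```

The measured numbers below (~4 messages/second on both transports) are in that ballpark once server time and HTTP overhead get added on top of the raw RTT.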

So, I deployed this test application on my server in London, England and hit it from my Firefox here in California, USA. Same number of iterations, and I ran it a number of times to make sure I wasn't hit by sporadic hiccups on the line. Here are the results:

# /ajaxtest (sockshootout.peterbe.com)
start!
Finished
10 iterations in 2.241 seconds meaning 4.462 messages/second
start!
Finished
100 iterations in 28.006 seconds meaning 3.571 messages/second
start!
Finished
1000 iterations in 263.785 seconds meaning 3.791 messages/second

# /socktest (sockshootout.peterbe.com) 
start!
Finished
10 iterations in 5.705 seconds meaning 1.752 messages/second
start!
Finished
100 iterations in 23.283 seconds meaning 4.295 messages/second
start!
Finished
1000 iterations in 227.728 seconds meaning 4.391 messages/second

Hmm... Not so cool. WebSockets are still slightly faster, but the difference is negligible. WebSockets are roughly 10-20% faster than AJAX. With that small a difference, I'm sure the benchmark is vastly affected by other factors that make it unfair to one or the other, such as quirks in my particular browser or the slightest hiccup on the line.
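Working out the per-message difference from the 1000-iteration runs above makes the point concrete: the two transports differ by roughly 36 ms per message, which is small next to the ~230 ms each round trip costs anyway:

```javascript
// Per-message cost in the transatlantic runs (1000 iterations each).
const ajaxSecondsPerMsg = 263.785 / 1000; // about 0.264 s per round trip
const sockSecondsPerMsg = 227.728 / 1000; // about 0.228 s per round trip
const savedSeconds = ajaxSecondsPerMsg - sockSecondsPerMsg;
console.log(savedSeconds.toFixed(3) + " s saved per message"); // "0.036 s saved per message"
```

Those ~36 ms are plausibly the extra HTTP request/response overhead; once each message already pays a transatlantic round trip, shaving that off barely registers.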

What can we learn from this? Well, latency kills all the fun. It also means that you don't necessarily need to rewrite your already-working AJAX-heavy app just to gain speed: even though WebSockets are ever so slightly faster, the switch from AJAX comes with other risks and challenges, such as authentication cookies, dealing with channel concurrency, load balancing on the server, etc.

Before you say it: yes, I'm aware that WebSocket apps come with other advantages, such as being able to hold on to sockets and push data at will from the server. Those are juicy benefits, but a massive performance boost ain't one.

Also, I bet that writing this means that peeps will come along and punch holes in my code and my argument. Something I welcome with open arms!

David Illsley - 22 April 2012
I may be missing something, but don't websockets have a pretty substantial benefit if you don't wait for the response before sending the next request? Because we don't have http pipelining, the AJAX version would wait for the response before sending the next request even if they're being submitted asynchronously (once we're using max-connections)
Peter Bengtsson - 22 April 2012
You mean we'd bombard the server with X number of messages and count how long it took to send them. Could do that.

Sending stuff back and forth like I do now is, I guess, a bit more realistic.

However, a more common use case would be to bombard the client with messages. E.g.
for i in range(1000):
    self.send({'msg': i})
David Illsley - 22 April 2012
I mean send the requests as fast as possible and time how long it takes to receive all the responses.

I wouldn't expect to see much improved latency otherwise when the network latency dominates the transfer time for the http headers.
tom - 22 April 2012
I guess I'm missing something here, but shouldn't AJAX be slowed down by latency a lot more than WebSockets? AJAX (usually) opens a new TCP connection for every message, while WebSockets establish a connection once and use it for all messages. So shouldn't AJAX need one roundtrip more (SYN/SYN+ACK) than WebSockets and thus take twice the latency (assuming that the payload is small enough to fit in one IP packet)?
Peter Bengtsson - 22 April 2012
Is this perhaps skewed because the server sends a Keep-Alive header?
Andy Davies - 23 April 2012
Would be interesting to compare with and without Keep-Alive and also the difference between GET and POST
Aidiakapi - 24 June 2012
There's not really a difference between GET and POST requests, except how the server handles them.
Andy Davies - 24 June 2012
Need to check up on the behaviour but AJAX POSTs used to take two packets - headers get sent in first and then once the server ACKs the client sends the body of the request.

Yahoo discovered and documented it several years back
Marty Zalega - 31 May 2012
The client-side benefit might not be great but on the server side I'm sure there would be massive gains. I'd be interested to see what the difference would be on the server.
Peter Bengtsson - 31 May 2012
You mean in terms of maximum throughput?
Marty Zalega - 31 May 2012
Yeah, and whether the request overhead might be a lot less than a regular HTTP request
Aidiakapi - 24 June 2012
It all depends on the data. If you're sending 10KB of data, that 500-byte header is negligible.
If you're sending 1KB, the header is already 33% of the data.

Nevertheless, the main cost is opening the connection, and since modern browsers keep connections open when they see fit (and are allowed to by the server), there's not too much to gain.
Peter Bengtsson - 24 June 2012
Note that I'm sending small packets. In fact so small that in the Ajax case, there's more header data than there's payload.
Benny - 15 November 2012
Hmm...
Your experiment literally misses the entire point of using WebSockets over AJAX.

Let me know when your AJAX API is capable of handling a million concurrent connections while responding to 20,000 requests per second.

AJAX starts sucking when scaling becomes an issue; WebSockets beat AJAX at both small AND large scales, and something like socket.io (or a node.js server) is incredibly easy to set up... so why not do it?
Peter Bengtsson - 23 November 2012
Why not? Because 20,000 requests per second is extremely rare.
Patrick - 31 May 2013
Well, I have to say that 20,000/s is rare, but it would give a good statistical result.
It's the same with normal socket connections: they say that until 10,000 simultaneous connections are reached, there's not much of a difference between "socket per thread" and Socket.IO (channels), but once you reach that point you could get in trouble.

So, we could apply the same to WebSockets vs. AJAX and check where we get in trouble.
Valerio - 26 September 2013
Hi Peter,
My 5 cents: if you are looking for low latency and scalability, have you taken a third way into consideration, other than SockJS and classical AJAX/jQuery? Lightstreamer recently published an apples-to-apples data broadcasting comparison with socket.io (messages generated on the server side and sent to over 4 thousand clients, run on two Amazon EC2 machines), and it proved able to scale better than plain WebSockets with socket.io in CPU usage, data latency, and bandwidth consumption, with some other useful features to improve the overall performance. Have a look here: http://blog.lightstreamer.com/2013/05/benchmarking-socketio-vs-lightstreamer.html. The same benchmarking kit has been left on GitHub, so you can get it and test other scenarios. [disclosure: I work for LS]
Peter Bengtsson - 02 October 2013
Interesting stuff! I'll take a closer look when time allows.
Valerio - 03 October 2013
Feel free to drop me a note or ask for clarifications :)
Jason - 04 January 2014
You're cool because you actually did real testing. Thanks for the perspective in narrowing down the approach. Seems best to keep the status quo for now and keep doing socket stuff for real-time needs like chat and the like.
William Beckler - 04 March 2014
I see something else in your numbers about latency. I'm not sure how you are running your test, but it looks like the initial connection setup is taking a big chunk of time for SockJS. In the local version you have 0.07 seconds both for 10 requests and for 100 requests. Maybe that's a typo. Or maybe all the time is sucked up by that socket startup delay and the extra 90 requests happen in negligible time.

More likely it's a typo, but look at the second test. In that case the first 10 requests take 5.7 seconds (0.57 s/request), but according to the next blast, 100 requests take 23.2 seconds, so the last 90 requests only took 0.19 s/request. The only explanation for this sequence is that there is a huge ramp-up time for the very first request out of the gate. This delay of a couple of seconds is huge!
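That ramp-up observation is easy to check against the /socktest numbers quoted above: the first blast of 10 averages far more time per request than the following 90.

```javascript
// Per-request averages from the remote /socktest runs quoted above.
const first10 = 5.705 / 10;            // seconds/request in the first blast of 10
const next90 = (23.283 - 5.705) / 90;  // seconds/request for the remaining 90
console.log(first10.toFixed(2)); // "0.57"
console.log(next90.toFixed(2));  // "0.20"
```

So the steady-state per-message cost of the WebSocket version is indeed well below its average, and a one-off connection-setup delay of a few seconds would explain the gap.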
