About 5 years ago I switched from Apache to Nginx. And with that switch I could practically stop hurting myself with HTTP accelerators like Squid and Varnish, because Nginx serves files from the filesystem both faster and more efficiently than the accelerators do. And it's one less moving part that can go wrong.

Then in late 2010 Amazon introduced Custom Origins on their Amazon CloudFront CDN service. Compared to other competing CDNs, I guess CloudFront loses some benchmarks and wins some others. Nevertheless, network latency is the speed freak's biggest enemy and CDNs are awesome.

With a Custom Origin all you do is tell CloudFront to act as a "proxy". It takes a URL, replaces the domain name, and goes and fetches the original from your own server. For example...

  1. You prepare http://mydomain.com/static/foo.css
  2. You configure CloudFront and get your new domain (aka a "Distribution")
  3. You request http://efac1bef32rf3c.cloudfront.net/static/foo.css
  4. CloudFront fetches the resource from http://mydomain.com/static/foo.css and saves a copy
  5. CloudFront observes which cache headers were used and repeats them. Forever.

So, if I make my Nginx server serve /static/foo.css with:

Expires: Thu, 31 Dec 2037 23:55:55 GMT
Cache-Control: public, max-age=315360000

Then CloudFront will do the same, and it means it will never come back to your Nginx again. In other words, your Nginx server serves each cacheable static asset once, and all other requests are just the usual HTML and JSON and whatever else your backend web server spits out.
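For reference, here's an untested sketch of an nginx location block (the root path is made up) that emits headers along those lines — nginx's `expires max` directive happens to produce exactly that 2037 Expires date plus a ten-year max-age:

```nginx
location /static/ {
    root /var/www/mysite;   # assumed document root

    # Far-future caching; CloudFront will mirror these headers.
    expires max;            # Expires: Thu, 31 Dec 2037 ... + max-age=315360000
    add_header Cache-Control "public";
}
```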

So, what does this mean? It means that we can significantly re-think the way we write code that prepares and builds static assets. Instead of a complex build step or a run-time process that ultimately writes files to the filesystem, we can basically do it all at run-time and not worry about speed. E.g. something like this:


# urls.py
url(r'^static/(.*\.css)$', views.serve_css)

# views.py
import cssmin
from django import http

def serve_css(request, filename):
    response = http.HttpResponse(content_type="text/css")
    response['Cache-Control'] = 'public, max-age=315360000'
    with open(filename) as f:
        content = f.read()
    content = cssmin.cssmin(content)
    content = '/* copyright: you */\n%s' % content
    response.write(content)
    return response

That's untested code that can be vastly improved, but I hope you get the idea. Obviously there are lots more things you can and should do, such as concatenating files.
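As a rough sketch of that "vastly improved" direction (every name here — `minify_css`, `build_bundle`, the in-memory cache — is made up for illustration, and the minifier is a trivial stand-in for something like cssmin): concatenate a list of CSS files, minify once, and memoize the result so the work only happens on the first cold request per process.

```python
_cache = {}  # bundle key -> minified CSS, lives for the process lifetime


def minify_css(source):
    # Stand-in for a real minifier like cssmin: strip blank lines
    # and leading/trailing whitespace from each line.
    return "\n".join(
        line.strip() for line in source.splitlines() if line.strip()
    )


def build_bundle(filenames):
    """Concatenate and minify a list of CSS files, memoized per bundle."""
    key = tuple(filenames)
    if key not in _cache:
        parts = []
        for name in filenames:
            with open(name) as f:
                parts.append(f.read())
        _cache[key] = minify_css("\n".join(parts))
    return _cache[key]
```

A view like serve_css above could then call `build_bundle` with a list of files instead of reading a single one.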

So, what does this also mean? You don't need Nginx. At least not for serving static files faster. I've shown before that something like Nginx + uWSGI is "better" (faster and uses less memory) than something like Apache + mod_wsgi, but oftentimes the difference is negligible.

I for one am not going to re-write all the various code I have for preparing optimal static asset hosting, but I'll definitely keep this stuff in mind. After all, there are other nifty things Nginx can do too.

By the way, here's a really good diagram that explains CloudFront.

UPDATE

Want to read this in Serbian? Thank you Anja Skrba for the translation!

Comments

Alex Clark

Interesting, this sounds a lot like what CloudFlare offers…

Peter Bengtsson

But much more expensive :)
CloudFlare appears to have a nice stats dashboard, though.

TC

CloudFlare has a completely free tier.

Peter Bengtsson

Thanks! I've been so impressed and satisfied with CloudFront that I haven't bothered to look at alternatives.

Thanks Alex and TC.

Matt DeBoard

Which is great if you're only looking for a CDN for HTML content. From CloudFlare's TOS:

"Using an account primarily as an online storage space, including the storage or caching of a disproportionate percentage of pictures, movies, audio files, or other non-HTML content, is prohibited."

So for companies like mine, where images, video and other media are our bread and butter, CloudFlare isn't an option.

Mase B.

Stuff with same–origin policies may not work as predicted.

Web Workers using importScripts require the script be loaded from the same location as the host page.

Last time I deployed a Java Applet, the behavior varied by browser. Some browsers considered the applet "same-origin" if it was loaded off the same domain as the script invoking the embed, while others still required it be loaded from the same origin as the host page. I don't know if this was a bug or expected behavior...

Same story for iframed content. If you need script access between the parent and child frames, they need to share the same protocol and port, at the very least. You can script around the same-domain issue by using CNAMEs and setting the document.domain to a shared superdomain. (So, parent is on www.example.com, iframe docs served off CloudFront via CNAME iframe.example.com, both parent and child frames set document.domain to "example.com" - bam, inter-iframe comm.) Sadly, the moment you want to use SSL, CNAMEs go out the window.

For a crazy example of all of the above things somehow working in tandem, look at the source for AWS's S3 Management Console (and/or Storage Gateway Console) (disclosure: I helped build them). Static assets (including the huge GWT-compiled JS app) served off CF, applets (used for folder upload) and web workers (for MD5 checksums) loaded off normal servers, inter-frame comm for regionalized requests.

I think with the continued development of technologies like CORS and WebSockets, we'll see more and more "smart" client applications - static apps that pull in data via web services and assemble dynamic bits on the client, rather than relying on server-side magic and templating - at which point it'll matter less and less where resources are loaded from, as they themselves will be static. Graphs by region for time-to-first-byte improved pretty significantly after moving to CF, especially for more remote locations where we didn't have AWS data centers, but did have CF PoPs.

However, these types of web applications can be slightly harder to build, as your clients are suddenly hanging on to a bunch of view state and interacting with services directly. We had people leaving the S3 Console open for weeks at a time. That introduces a whole new slew of things you have to think about (rich client side application logging, careful memory management, what to do when an API changes beneath the client, how do you kick a new version of the app to users, etc). New tools and frameworks are starting to come out which alleviate this pain, but they're all still very young.

Once they mature, however, we may find that not only is nginx obsolete, but the whole presentation layer of application servers as well. :)

Peter Bengtsson

Note that both Gmail and Pandora now have a "solution" to the problem of the originally loaded page getting out of date: a little pop-up alert box that says "This page needs to be refreshed. _Click here to reload_" (or something like that; I can't remember exactly).

Yoav Aner

Nice post. You do, however, have to think about some way to invalidate content, otherwise when you change your foo.css file, CloudFront will happily ignore the change and keep serving the cached version... So then you need to start adding some random parameter at the end so it looks like foo.css?d82347jslq ... or rename foo.css to foo_s398ls9cmp.css, and this goes back to some kind of compiling of your static assets.
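The rename-style cache busting Yoav describes can be sketched in a few lines (the function name and the 10-character digest length are arbitrary choices for illustration): embed a hash of the file's content in its name, so any change to the content produces a brand-new URL that CloudFront has never seen.

```python
import hashlib


def hashed_name(filename, content):
    """Turn foo.css plus its bytes into something like foo.d82347jslq.css
    (here a truncated hex MD5 digest rather than a random string)."""
    digest = hashlib.md5(content).hexdigest()[:10]
    base, _, ext = filename.rpartition(".")
    return "%s.%s.%s" % (base, digest, ext)
```

Same content always maps to the same URL (so caches stay warm across deployments), and changed content automatically gets a fresh one.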

Also worth noting that CF doesn't always give you the lowest latency or overall best performance. If you know where your users are, or you're serving a limited geographic area, you might be better off serving it yourself with nginx from a close-by location. I once did a quick test with images. Serving them from CF took about 250ms on average, and from my own nginx server about 70ms, if I remember correctly. The test was carried out from an EC2 machine, so it couldn't have been that hard for Amazon to serve fast, yet my nginx server outside the Amazon network was doing a better job.

Peter Bengtsson

Regarding invalidation: it's a non-problem if you always use unique URLs.

Interesting note there about a local Nginx vs. CF. I'm amazed that the difference was 70ms vs. 250ms. The case is certainly worth keeping in mind, but I suspect it's rare. Most of my sites are hosted in the UK but the audience is mainly in the USA.

Mase B.

Invalidation: we always prefix our assets with something effectively corresponding to a deployment ID (usually a hash of the content being deployed). It's a lot easier than trying to manage it on a per-asset level, and it makes it really easy to clean up old assets.

Response times: you might use something like Gomez to see your response-time waterfall from various locations throughout the world. It's all about the PoPs. Depending on how CF is routing your request, you (specifically) may be going a little farther to hit your content.

N Nguyen

This solution is good... until some dude finds out that you're actually serving static files from Django. He will then make requests again and again for that CSS you're serving, causing your server to slow to a crawl.

Would you want to tag on CDN for help?

Peter Bengtsson

That is indeed a problem. There are, of course, solutions to that too, but they come with their own set of complexities.

Eugene MechanisM

I'm using nginx + a lot of modules in my projects.
Some nginx modules work with Redis, some offer WebSocket or HTTP push, another can stream RTMP, etc. My use case for nginx is not just static files and load balancing.

Peter Bengtsson

What?

Karoly Negyesi

Incredible crap. Someone finds out CDNs exist and immediately jumps to a) let's use Amazon's because it's the cloud! Cloud! Cloud! - Yes! We're all individuals! - You're all different! - Yes, we ARE all different! b) nginx is suddenly obsolete.

CDNs acting as distributed reverse proxies have been around for many, many years. Amazon or not.

Peter Bengtsson

I don't care about buzzwords. I care about delivering something great, and I don't care which buzzwords that involves.

Stealth P

Pretty much the simple truth. You might consider a major revision of the article in all honesty.

Everyone who has access to a CDN and works on major sites should be, or become, aware that using it this way is the [intended] usage of a CDN. If you get down to the nitty-gritty, nginx can be used to handle more temporary caching of JSON... or I guess SOAP... responses, as well as being a good place to keep your content pages fresh.

Ideally, in most situations, you want the application server to be there for one-time requests for updates and to enable a bajillion concurrent connections for your AJAX calls. We mostly have one site mirror/publishing server per cluster, a distributed DB, and the rest are service components using the CDN for primary storage. The server's disk space is for persistent thinking!

Stealth P

Though don't get me wrong, Amazon and similar providers offering these features at the low end DOES enable sites on shared hosting and startups to join in the tech. This is very 'cloudy'...cloudesque?

My preference: spin up extra micro app servers with clouds. Let files be just files on the CDN otherwise ;p

Anonymous

I don't think it is correct to say that cloudfront will only ever go to the origin server once. Just think what kind of storage requirements they would need to have if every max-age'd resource was kept around forever. In reality, they only have so much space, so only very hot and active content will stay on their servers indefinitely. Less active content will fall off eventually and need to be retrieved from the origin again.

Peter Bengtsson

If that's the case, that CF evicts, then it'll be a minuscule amount of traffic. However, I guess it means you simply can't remove the origin resource.

Gilles B.

Hey,

this post makes an obviously wrong assumption. CDNs take the HTTP headers (max-age, ETag, etc.) into account to determine the validity of content. That does not mean CDNs keep valid files in cache forever. CDNs usually run algorithms derived from LFU/LRU to determine which files to delete when they decide to cache new content and their disks are full. Your content will stay in cache forever if it's indeed extremely popular. Otherwise, it will be deleted and potentially put in cache again later when people request it again.

Cheers,
Gilles

Peter Bengtsson

True, but not a problem. Say you have 1,000 hits on a resource over a long time. Then, sure, it'll drop it and re-fetch from the origin when it needs to. So perhaps instead of fetching it just once, it might fetch it 10 times. That's rare enough that you don't need something highly performant on the origin. A web app is fast enough, so it's not an excuse to use Nginx.

Wim Leers

Interestingly, this is *precisely* what I did in the CDN module for Drupal [1].

At the beginning of 2012, I added support for "Far Future expiration". Based on the directory a file lives in and the extension of the file, you can choose a different "unique file identifier" method (mtime, md5, perpetual, deployment ID, Drupal version, custom ones — anything). This ensures unique file URLs.

I then have Drupal serve the files with as optimal headers as possible. The only reason this can work without causing too much load on the server is by having a reverse proxy in front of it — precisely the key point of your article.

To prevent access to files that shouldn't be accessed, each file URL also contains a security token (based on a site's private key and a salt). This also somewhat helps prevent overloading the origin server, even when asking an Origin Pull CDN to get the file, because if the security token isn't valid, it'll bail early.

In case you're interested with which headers I serve files, see cdn_basic_farfuture_download() [2] — if you have suggestions to make it better, please let me know! :)

It also does: CORS, DNS prefetching, auto-balancing files over multiple CDNs/domains (using hashing so each file is always served from the same domain to maximize client-side caching effectiveness).

[1]: http://drupal.org/project/cdn
[2]: "http://drupalcode.org/project/cdn.git/blob/0f19fca6c4c382cdd751ac97346c0e6446df9c14:/cdn.basic.farfuture.inc#l12"

