Progressive CSS rendering with or without data URLs

September 26, 2020
0 comments Web development, Web Performance, JavaScript

You can write your CSS so that it depends on images. Like this:


li.one {
  background-image: url("skull.png");
}

That means that the browser will do its best to style the li.one with what little it has from the CSS. Then, it'll go ahead and download that skull.png URL over the network.

But, another option is to embed the image as a data URL like this:


li.one{background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAIAAAACACAYAAADDPmHL...rkJggg==)}

As a block of CSS, it's much larger but it's one less network call. What if you know that skull.png will be needed? Is it faster to inline it or to leave it as a URL? Let's see!

First of all, I wanted to get a feeling for how much larger images become, in bytes, when you transform them to data URLs. Check out this script's output (file, original size, data URL size, ratio):

▶ ./bin/b64datauri.js src/*.png src/*.svg
src/lizard.png       43,551     58,090     1.3x
src/skull.png        7,870      10,518     1.3x
src/clippy.svg       483        670        1.4x
src/curve.svg        387        542        1.4x
src/dino.svg         909        1,238      1.4x
src/sprite.svg       10,330     13,802     1.3x
src/survey.svg       2,069      2,786      1.3x
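
The actual script isn't included here, but a minimal sketch of how you could compute those numbers with Node.js might look like this (the table formatting is just illustrative):


const fs = require("fs");
const path = require("path");

// Map file extensions to MIME types for the data URL prefix.
const MIME_TYPES = { ".png": "image/png", ".svg": "image/svg+xml" };

for (const filePath of process.argv.slice(2)) {
  const buf = fs.readFileSync(filePath);
  const mime = MIME_TYPES[path.extname(filePath)] || "application/octet-stream";
  const dataUrl = `data:${mime};base64,${buf.toString("base64")}`;
  const ratio = (dataUrl.length / buf.length).toFixed(1);
  console.log(
    `${filePath.padEnd(20)} ${String(buf.length).padStart(9)} ` +
      `${String(dataUrl.length).padStart(9)} ${ratio}x`
  );
}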

Basically, as data URLs, the images become about 1.3x larger. Hopefully, with HTTP/2, the header overhead for each URL downloaded over the network is cheap, but it's not zero. (No idea what the CPU-work multiplier is.)

Experiment assumptions and notes

  • When you first calculate the critical CSS, you know that no url(data:mime/type;base64,....) goes to waste. I.e. you didn't bloat the CSS or HTML file with data URLs for nothing.
  • The images aren't too large. Mostly icons and fluff.
  • If they're SVG images, you should probably inline them natively in the HTML so you can control their style.
  • The HTML is compressed for best results.
  • The server speaks HTTP/2.

It's a fairly commonly known fact that data URLs have a CPU cost. That base64 needs to be decoded before the image can be decoded by the renderer. So let's stick to fairly small images.

The experiment

I made a page that looks like this:


li {
  background-repeat: no-repeat;
  width: 150px;
  height: 150px;
  margin: 20px;
  background-size: contain;
}
li.one {
  background-image: url("skull.png");
}
li.two {
  background-image: url("dino.svg");
}
li.three {
  background-image: url("clippy.svg");
}
li.four {
  background-image: url("sprite.svg");
}
li.five {
  background-image: url("survey.svg");
}
li.six {
  background-image: url("curve.svg");
}

and


<ol>
  <li class="one">One</li>
  <li class="two">Two</li>
  <li class="three">Three</li>
  <li class="four">Four</li>
  <li class="five">Five</li>
  <li class="six">Six</li>
</ol>

See the whole page here

The page also uses Bootstrap to make it somewhat realistic. Then, using minimalcss, the external CSS is combined with the inline CSS to produce a page that is just HTML + 1 <style> tag.

Now, based on that page, the variant is that each url($URL) in the CSS gets converted to url(data:mime/type;base64,blablabla...). The HTML is gzipped (and brotli compressed) and put behind a CDN. The URLs are:

Also, there's this page which is without the critical CSS inlined.
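
The conversion itself can be done as a small build step. Here's a rough sketch of the idea (not the actual tooling used; the regex and paths are simplifications):


const fs = require("fs");
const path = require("path");

// Replace url("file.png") / url(file.svg) references in a CSS string with
// base64 data URLs, assuming the referenced files live in srcDir.
function inlineCssImages(css, srcDir) {
  return css.replace(/url\(["']?([^"')]+\.(png|svg))["']?\)/g, (match, file) => {
    const buf = fs.readFileSync(path.join(srcDir, file));
    const mime = file.endsWith(".svg") ? "image/svg+xml" : "image/png";
    return `url(data:${mime};base64,${buf.toString("base64")})`;
  });
}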

To appreciate what this means in terms of size on the HTML, let's compare:

  • inlined.html with external URLs: 2,801 bytes (1,282 gzipped)
  • inlined-datauris.html with data URLs: 32,289 bytes (17,177 gzipped)

Considering that gzip (accept-encoding: gzip,deflate) is almost always used by browsers, that means the page is 15KB more before it can be fully downloaded. (But, it's streamed so maybe the comparison is a bit flawed)

Analysis

WebPageTest.org results here. I love WebPageTest, but the results are usually a bit too erratic to be good enough for comparisons. Maybe if you could run the visual comparison repeated times, but I don't think you can.

WebPagetest comparison
WebPagetest visual comparison

And the waterfalls...

WebPagetest waterfall, with regular URLs
With regular URLs

WebPagetest waterfall with data URLs
With data URLs

Fairly expected.

  • With external image URLs, the browser will start to display the CSSOM before the images have downloaded. Meaning, the CSS is render-blocking, but the external images are not.

  • The final result comes in sooner with data URLs.

  • With data URLs you have to stare at a white screen longer.

Next up, Google Chrome's Performance devtools panel, set to 6x CPU slowdown and "Fast 3G" network throttling.

I don't know how to demonstrate this other than with screenshots:

Performance with external images
Performance with external images

Performance with data URLs
Performance with data URLs

Those screenshots are rough attempts at showing the point where the images start to display.

Whole Performance tab with external images
Whole Performance tab with external images

Whole Performance tab with data URLs
Whole Performance tab with data URLs

I ran these things 2 times and the results were pretty steady.

  • With external images, fully loaded at about 2.5 seconds
  • With data URLs, fully loaded at 1.9 seconds

I tried Lighthouse but the difference was indistinguishable.

Summary

Yes, inlining your CSS images is faster. But it's with a slim margin and the disadvantages aren't negligible.

This technique costs more CPU because there's a lot more base64 decoding to be done, and what if you have a big fat JavaScript bundle in there that wants a piece of the CPU? So ask yourself, how valuable is it to not hog the CPU? Perhaps someone who understands the browser engines better can tell whether the base64 decoding cost is spread nicely onto multiple CPUs or whether it would stand in the way of the main thread.

What about anti-progressive rendering

When Facebook redesigned www.facebook.com in mid-2020 one of their conscious decisions was to inline the SVG glyphs into the JavaScript itself.

"To prevent flickering as icons come in after the rest of the content, we inline SVGs into the HTML using React rather than passing SVG files to <img> tags."

Although that comment was about SVGs in the DOM, from a JavaScript perspective, the point is nevertheless relevant to my experiment. If you look closely at the screenshots above (or open the URL yourself and hit reload with HTTP caching disabled), the net effect is that the late-loading images do cause a bit of "flicker". It's not flickering as in "now it's here", "now it's gone", "now it's back again". But it's flickering in that things are happening during progressive rendering. Your eyes might get tired and say to your brain, "Wake me up when the whole thing is finished. I can wait."

This topic quickly escalates into perceived performance which is a stratosphere of its own. And personally, I can only estimate and try to speak about my gut reactions.

In conclusion, there are advantages to using data URIs over external images in CSS. But please, first make sure you don't convert the image URLs in a big bloated .css file to data URLs if you're not sure they'll all be needed in the DOM.

Bonus!

If you're not convinced of the power of inlining the critical CSS, check out this WebPageTest run that includes a version referencing the whole bootstrap.min.css, as it was before doing any other optimizations.

With baseline that isn't just the critical CSS
With baseline that isn't just the critical CSS

Lazy-load Firebase Firestore and Firebase Authentication in Preact

September 2, 2020
0 comments Web development, Web Performance, JavaScript, Preact

I'm working on a Firebase app called That's Groce!, based on preact-cli with TypeScript, and I wanted to see how it performs with or without Firestore and Authentication lazy-loaded.

In the root, there's an app.tsx that used to look like this:


import { FunctionalComponent, h } from "preact";
import { useState, useEffect } from "preact/hooks";

import firebase from "firebase/app";
import "firebase/auth";
import "firebase/firestore";

import { firebaseConfig } from "./firebaseconfig";

const app = firebase.initializeApp(firebaseConfig);

const App: FunctionalComponent = () => {
  const [auth, setAuth] = useState<firebase.auth.Auth | null>(null);
  const [db, setDB] = useState<firebase.firestore.Firestore | null>(null);

  useEffect(() => {
    const appAuth = app.auth();
    setAuth(appAuth);
    appAuth.onAuthStateChanged(authStateChanged);

    const db = firebase.firestore();
    setDB(db);
  }, []);

...

While this works, it does make for a really large bundle when both firebase/firestore and firebase/auth are imported in the main bundle. In fact, it looks like this:

▶ ls -lh build/*.esm.js
-rw-r--r--  1 peterbe  staff   510K Sep  1 14:13 build/bundle.0438b.esm.js
-rw-r--r--  1 peterbe  staff   5.0K Sep  1 14:13 build/polyfills.532e0.esm.js

510K is pretty hefty to have to ask the client to download immediately. It's loaded like this (in build/index.html):


<script crossorigin="anonymous" src="/bundle.0438b.esm.js" type="module"></script>
<script nomodule src="/polyfills.694cb.js"></script>
<script nomodule defer="defer" src="/bundle.a4a8b.js"></script>

To lazy-load this

To lazy-load the firebase/firestore and firebase/auth you do this instead:


...

const App: FunctionalComponent = () => {
  const [auth, setAuth] = useState<firebase.auth.Auth | null>(null);
  const [db, setDB] = useState<firebase.firestore.Firestore | null>(null);

  useEffect(() => {
    import("firebase/auth")
      .then(() => {
        const appAuth = app.auth();
        setAuth(appAuth);
        appAuth.onAuthStateChanged(authStateChanged);
      })
      .catch((error) => {
        console.error("Unable to lazy-load firebase/auth:", error);
      });

    import("firebase/firestore")
      .then(() => {
        const db = firebase.firestore();
        setDB(db);
      })
      .catch((error) => {
        console.error("Unable to lazy-load firebase/firestore:", error);
      });
  }, []);

...

Now it looks like this instead:

▶ ls -lh build/*.esm.js
-rw-r--r--  1 peterbe  staff   173K Sep  1 14:24 build/11.chunk.b8684.esm.js
-rw-r--r--  1 peterbe  staff   282K Sep  1 14:24 build/12.chunk.3c1c4.esm.js
-rw-r--r--  1 peterbe  staff    56K Sep  1 14:24 build/bundle.7225c.esm.js
-rw-r--r--  1 peterbe  staff   5.0K Sep  1 14:24 build/polyfills.532e0.esm.js

The total sum of all (relevant) .esm.js files is the same (minus a difference of 430 bytes).

But what does it really look like? The app is already built around this:


const [db, setDB] = useState<firebase.firestore.Firestore | null>(null);

so it knows to wait until db is truthy and it displays a <Loading/> component until it's ready.
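
In practice that gate might look something like this (a sketch, not the app's actual code):


import { FunctionalComponent, h } from "preact";
import firebase from "firebase/app";

const Loading: FunctionalComponent = () => <p>Loading…</p>;

interface Props {
  db: firebase.firestore.Firestore | null;
}

const Main: FunctionalComponent<Props> = ({ db }) => {
  if (!db) {
    // The firebase/firestore chunk hasn't arrived yet (or setDB hasn't run).
    return <Loading />;
  }
  // ...render the real app now that Firestore is available...
  return <div>Connected to {db.app.name}</div>;
};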

To test how it loads I used the Chrome Performance devtools with or without the lazy-loading and it's fairly self-explanatory:

Before
Before, no lazy-loading

After
After, with lazy-loading

Clearly, the lazy-loaded version has a nicer pattern in that it breaks up the work done by the main thread.

Conclusion

It's fairly simple to do and it works. The main bundle becomes lighter and allows the browser to start rendering the Preact component sooner. But it's not entirely obvious that it's that much better. The same amount of JavaScript needs to be downloaded and parsed no matter what. It's clearly working as a pattern, but it's still pretty hard to judge if it's worth it. Now there's more "swapping".

And the whole page is server-side rendered anyway, so in terms of the immediate first render it's probably the same. Hopefully, HTTP/2 loading does the right thing, but it's not yet entirely clear if the complete benefit is there. I certainly hope that this can improve the "Total Blocking Time" and "Time to Interactive".

The other important thing is that not all imports from firebase/* work in Node because they depend on window. It works for firebase/firestore and firebase/auth but not for firebase/analytics and firebase/performance. Now, I can lazy-load those in the client and still have the page rendered in Node for that initial build/index.html.
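
A sketch of how that guard might look (the hook name is made up):


import { useEffect } from "preact/hooks";
import firebase from "firebase/app";

function useBrowserOnlyFirebase() {
  useEffect(() => {
    // These packages reference `window`, so never import them during
    // server-side (Node) rendering.
    if (typeof window === "undefined") return;

    import("firebase/analytics")
      .then(() => firebase.analytics())
      .catch((error) => console.error("Unable to lazy-load firebase/analytics:", error));

    import("firebase/performance")
      .then(() => firebase.performance())
      .catch((error) => console.error("Unable to lazy-load firebase/performance:", error));
  }, []);
}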

Update to speed comparison for Redis vs PostgreSQL storing blobs of JSON

September 30, 2019
2 comments Redis, Nginx, Web Performance, Python, Django, PostgreSQL

Last week, I blogged about "How much faster is Redis at storing a blob of JSON compared to PostgreSQL?". Judging from a lot of comments, people misinterpreted this. (By the way, Redis is persistent). It's no surprise that Redis is faster.

However, it's a fact that I do have a lot of blobs stored and need to present them via the web API as fast as possible. It's rare that I want to do relational or batch operations on the data. But Redis isn't a slam dunk for simple retrieval because I don't know if I trust its integrity with the 3GB worth of data that I both don't want to lose and don't want to load entirely into RAM.

But is it entirely wrong to focus on WHICH database gives the best speed?

Reviewing this corner of Song Search helped me rethink this. PostgreSQL is, in my view, a better database for storing stuff. Redis is faster for individual lookups. But you know what's even faster? Nginx.

Nginx??

The way the application works is that a React web app is requesting the Amazon product data for the sake of presenting an appropriate affiliate link. This is done by the browser essentially doing:


const response = await fetch('https://songsear.ch/api/song/5246889/amazon');

Internally, in the app, what it does is look this up, by ID, on the AmazonAffiliateLookup ORM model. If it isn't there in PostgreSQL, it uses the Amazon Affiliate Product Details API to look it up, and when the results come in it stores a copy in PostgreSQL so we can re-use this URL without hitting rate limits on the Product Details API. Lastly, in a piece of Django view code, it carefully scrubs and repackages this result so that only the fields used by the React rendering code are shipped between the server and the browser. That "scrubbed" piece of data is actually much smaller. Partly because it limits the results to the first/best match and it deletes a bunch of things that are never needed, such as ProductTypeName, Studio, TrackSequence etc. The proportion is roughly 23x. I.e. of the 3GB of JSON blobs stored in PostgreSQL, only 130MB is ever transported from the server to the users.

Again, Nginx?

Nginx has a built-in reverse HTTP proxy cache which is easy to set up but a bit hard to do purges on. The biggest flaw, in my view, is that it's hard to get a handle on how much RAM it's eating up. Well, if the total possible amount of data within the server is 130MB, then that is something I'm perfectly comfortable letting Nginx cache in RAM.

Good HTTP performance benchmarking is hard to do but here's a teaser from my local laptop version of Nginx:

▶ hey -n 10000 -c 10 https://songsearch.local/api/song/1810960/affiliate/amazon-itunes

Summary:
  Total:    0.9882 secs
  Slowest:  0.0279 secs
  Fastest:  0.0001 secs
  Average:  0.0010 secs
  Requests/sec: 10119.8265


Response time histogram:
  0.000 [1] |
  0.003 [9752]  |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.006 [108]   |
  0.008 [70]    |
  0.011 [32]    |
  0.014 [8] |
  0.017 [12]    |
  0.020 [11]    |
  0.022 [1] |
  0.025 [4] |
  0.028 [1] |


Latency distribution:
  10% in 0.0003 secs
  25% in 0.0006 secs
  50% in 0.0008 secs
  75% in 0.0010 secs
  90% in 0.0013 secs
  95% in 0.0016 secs
  99% in 0.0068 secs

Details (average, fastest, slowest):
  DNS+dialup:   0.0000 secs, 0.0001 secs, 0.0279 secs
  DNS-lookup:   0.0000 secs, 0.0000 secs, 0.0026 secs
  req write:    0.0000 secs, 0.0000 secs, 0.0011 secs
  resp wait:    0.0008 secs, 0.0001 secs, 0.0206 secs
  resp read:    0.0001 secs, 0.0000 secs, 0.0013 secs

Status code distribution:
  [200] 10000 responses

10,000 requests across 10 clients at roughly 10,000 requests per second. That includes doing all the HTTP parsing, WSGI stuff, forming of a SQL or Redis query, the deserialization, the Django JSON HTTP response serialization, etc. The cache TTL is controlled by simply setting a Cache-Control HTTP header with something like max-age=86400.

Now, repeated fetches for this are cached at the Nginx level and it means it doesn't even matter how slow/fast the database is. As long as it's not taking seconds, with a long Cache-Control, Nginx can hold on to this in RAM for days or until the whole server is restarted (which is rare).

Conclusion

If the total amount of data that can and will be cached is controlled, putting it in an HTTP reverse proxy cache is probably an order of magnitude faster than fussing over which database to choose.

A React vs. Preact case study for a widget

July 24, 2019
0 comments Web development, React, Web Performance, JavaScript

tl;dr; The previous (React) total JavaScript bundle size was: 36.2K Brotli compressed. The new (Preact) JavaScript bundle size was: 5.9K. I.e. 6 times smaller. Also, it appears to load faster in WebPageTest.

I have this page that is a Django server-side rendered page that has on it a form that looks something like this:


<div id="root">  
  <form action="https://songsear.ch/q/">  
    <input type="search" name="term" placeholder="Type your search here..." />
    <button>Search</button>
  </form>  
</div>

It's a simple search form. But, to make it a bit better for users, I wrote a React widget that renders, into this document.querySelector('#root'), a near-identical <form> but with autocomplete functionality that displays suggestions as you type.
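
The mounting itself is the standard pattern, something like this (SearchWidget is a made-up name for the component):


import React from "react";
import ReactDOM from "react-dom";

import SearchWidget from "./SearchWidget"; // hypothetical autocomplete component

// The server-rendered fallback <form> lives inside #root; once the bundle has
// loaded, the widget takes over that element and renders the enhanced form.
ReactDOM.render(<SearchWidget />, document.querySelector("#root"));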

Anyway, I built that React bundle using create-react-app. I use the yarn run build command that generates...

  • css/main.83463791.chunk.css - 1.4K
  • js/main.ec6364ab.chunk.js - 9.0K (gzip 2.8K, br 2.5K)
  • js/runtime~main.a8a9905a.js - 1.5K (gzip 754B, br 688B)
  • js/2.b944397d.chunk.js - 119K (gzip 36K, br 33K)

Then, in Python, a piece of post-processing code copies the files from the build/static/ directory and inserts them into the rendered HTML file. The CSS gets injected as an inline <style> tag.

It's a simple little widget. No need for any service workers or react-router or any global state stuff. (Actually, it only has a single runtime dependency outside the framework.) I thought, how about moving this to Preact?

In comes preact-cli

The app used a couple of React hooks but they were easy to transform into class components. Now I just needed to run:


npx preact create --yarn widget name-of-my-preact-project
cd name-of-my-preact-project
mkdir src
cp ../name-of-React-project/src/App.js src/
code src/App.js

Then, I slowly moved over the src/App.js from the create-react-app project and, little by little, did the various things you need to do. For example, learning to build with preact build --no-prerender --no-service-worker and how to override the default template.

Long story short, the new built bundles look like this:

  • style.82edf.css - 1.4K
  • bundle.d91f9.js - 18K (gzip 6.4K, br 5.9K)
  • polyfills.9168d.js - 4.5K (gzip 1.8K, br 1.6K)

(The polyfills.9168d.js gets injected as a script tag if window.fetch is falsy)

Admittedly, when I did the move from React to Preact I made some small fixes. During the "migration" I noticed a block of code that was never used, so that gives the Preact build bundle a slight advantage. But I think it's nominal.

In conclusion: The previous total JavaScript bundle size was 36.2K (Brotli compressed). The new JavaScript bundle size was 5.9K (Brotli compressed). I.e. 6 times smaller. But if you worry about the total amount of JavaScript to parse and execute, the size difference uncompressed was 129K vs. 18K. I.e. 7 times smaller. I can only speculate, but I do suspect you need less CPU/battery to process 18K instead of 129K if CPU/battery matters more than (or close to as much as) network I/O.

WebPageTest - Visual Comparison - Mobile Slow 3G

Rendering speed difference

Rendering speed is so darn hard to measure on the web because the app is so small. Plus, there's so much else going on that matters.

However, using WebPageTest I can do a visual comparison with the "Mobile - Slow 3G" preset. It'll be a somewhat decent measurement of the total time of downloading, parsing, and executing. Thing is, the server-side rendered HTML form has a button, but the React/Preact widget that takes over the DOM hides that submit button. So, using the screenshots that WebPageTest provides, I can deduce that the Preact widget completes 0.8 seconds sooner than the React widget. (I.e. instead of 4.4s it became 3.9s.)

Truth be told, I'm not sure how predictable or reproducible this is. I ran that WebPageTest visual comparison more than once and the results can vary significantly. I'm not even sure which run I'm referring to here (in the screenshot), but the React widget version was never faster.

Conclusion and thoughts

Unsurprisingly, Preact is smaller because you simply get less from that framework. E.g. synthetic events. I was lucky: my app uses onChange, which I could easily "migrate" to onInput, and I got it working without much trouble. I'm glad the widget app was so small and that I don't depend on any React-specific third-party dependencies.
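
Roughly the kind of change that involved (a simplified sketch, not the widget's actual code):


// React: the synthetic onChange event fires on every keystroke.
const ReactSearchInput = ({ term, onTerm }) => (
  <input type="search" value={term} onChange={(event) => onTerm(event.target.value)} />
);

// Preact: listen to the native "input" event instead.
const PreactSearchInput = ({ term, onTerm }) => (
  <input type="search" value={term} onInput={(event) => onTerm(event.target.value)} />
);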

But! The WebPageTest Visual Comparison was on "Mobile - Slow 3G", which only represents a small portion of the traffic. Mobile is a huge portion of the traffic, but "Slow 3G" is not. When you do a Desktop comparison, the difference is roughly 0.1s.

Also, in total, that page is made up of 3 major elements

  1. The server-side rendered HTML
  2. The progressive JavaScript widget (what this blog post is about)
  3. A piece of JavaScript initiated banner ad

That HTML controls the "First Meaningful Paint" which takes 3 seconds. And the whole shebang, including the banner ad, takes a total of about 9s. So, all this work of rewriting a React app to Preact saved me 0.8s out of the total of 9s.

Web performance is hard and complicated. Every little bit counts, but keep your eye on the big-ticket items, assuming there's something you can do about them.

At the time of writing, preact-cli uses Preact 8.2 and I'm eager to see how Preact X feels. Apparently, since April 2019, it's in beta. Looking forward to giving it a try!

WebSockets vs. XHR 2019

May 5, 2019
0 comments Web development, Web Performance, JavaScript

Back in 2012, I did an experiment to compare if and/or how much faster WebSockets are compared to AJAX (aka. XHR). It was a "protocol benchmark" to see which way was faster to schlep data back and forth between a server and a browser in total. The conclusion of that experiment was that WebSockets were faster, but when you take latency into account the difference was minimal. Considering the added "complexities" of WebSockets (keeping connections open, results don't come back where the request was made, etc.), it's not worth it.

But, 7 years later browsers are very different. Almost all browsers that support JavaScript also support WebSockets. HTTP/2 might make things better too. And perhaps the WebSocket protocol is just better implemented in the browsers. Who knows? An experiment knows.

So I made a new experiment with similar tech. The gist of it is best explained with some code:


// Inside App.js

loopXHR = async count => {
  const res = await fetch(`/xhr?count=${count}`);
  const data = await res.json();
  const nextCount = data.count;
  if (nextCount) {
    this.loopXHR(nextCount);
  } else {
    this.endXHR();
  }
};

Basically, pick a big number (e.g. 100) and send that integer to the server which does this:


# Inside app.py 

# from the GET querystring "?count=123"
count = self.get_argument("count")   
data = {"count": int(count) - 1}
self.write(json.dumps(data))

So the browser keeps sending the number back to the server that decrements it and when the server returns 0 the loop ends and you look how long the whole thing took.
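
For comparison, here's a sketch of what the WebSocket side of that loop might look like (the method and endpoint names are assumptions; the real code is in the repo linked below):


// Inside App.js (sketch)

loopWS = count => {
  const ws = new WebSocket(`wss://${window.location.host}/ws`);
  ws.onopen = () => {
    ws.send(JSON.stringify({ count }));
  };
  ws.onmessage = event => {
    const nextCount = JSON.parse(event.data).count;
    if (nextCount) {
      // Same round trip as the XHR version, but over the already-open socket.
      ws.send(JSON.stringify({ count: nextCount }));
    } else {
      this.endWS();
      ws.close();
    }
  };
};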

Try It

The code is here: https://github.com/peterbe/sockshootout2019

And the demo app is here: https://sockshootout.app (Just press "Start!", wait and press it 2 or 3 more times)

Location, location, location

What matters is the geographical distance between you and the server. The server used in this experiment is in New York, USA.

What you'll find is that the closer you are to the server (lower latency) the better WebSocket performs. Here's what mine looks like:

My result between South Carolina, USA and New York, USA
My result between South Carolina, USA and New York, USA

Now, when I run the whole experiment all on my laptop the results look very different:

Running all locally
Running all locally

I don't have a screenshot for it, but a friend of mine ran this from his location in Perth, Australia. There was no difference. If there was any difference, it was "noise".

Same Conclusion?

Yes, latency matters most. For performance, the technique itself doesn't matter much.

No matter how fancy you're trying to be, what matters is the path the bytes have to travel. Or rather, the distance the bytes have to travel. If you're far away a large majority of the total time is sending and receiving the data. Not the time it takes the browser (or the server) to process it.

However, if you do have all your potential clients physically near the server, it might be beneficial to use WebSockets.

Thoughts and Conclusions

My original thought was to use WebSockets instead of XHR for an autocomplete widget. At almost every keystroke, you send it to the server and, as search results come in, you update the search result display. Things like that need to be fast and "snappy". But that's not where WebSockets shine. They shine in their ability to actively await results without having a loop that periodically polls. There's nothing wrong with WebSockets and they have their brilliant use cases.

In summary, don't bother just to get a single-digit percentage performance increase if the complexity of the code and infrastructure is non-trivial. Keep building cool stuff with WebSockets but if you expect one result per action, XHR is good enough.

Bonus

The experiment app does collect everyone's results (just the timings and IP) and I hope to find the time to process this and build a graph correlating geographical distance with the difference between the two techniques. Watch this space!

By the way, if you do plan on writing some WebSocket implementation code I highly recommend Sockette. It's solid and easy to use.

KeyCDN vs AWS CloudFront

April 29, 2019
3 comments Web development, Web Performance

Before I commit to KeyCDN for my little blog I wanted to check if CloudFront is better. Why? Because I already have an AWS account set up, I'm familiar with boto3, it's what we use for work, and it's AWS so it's usually pretty good stuff. As an attractive bonus, CloudFront has 44 edge locations (KeyCDN 34).

Price-wise it's hard to compare because the AWS CloudFront pricing page is hard to read; the costs are broken up by region. KeyCDN themselves claim KeyCDN is about 2x cheaper than CloudFront. This seems to be true if you look at cdnoverview.com's comparison too. CloudFront seems to have more extra specific costs. For example, with AWS CloudFront you have to pay to invalidate the cache whereas that's free with KeyCDN.

I also ran a little global latency test comparing the two using Hyperping across 7 global regions. The results are as follows:

KeyCDN on Hyperping.io
KeyCDN on Hyperping.io

CloudFront on Hyperping.io
CloudFront on Hyperping.io

Region            KeyCDN    CloudFront    Winner
London            27 ms     36 ms         KeyCDN
San Francisco     29 ms     46 ms         KeyCDN
Frankfurt         47 ms     1001 ms       KeyCDN
New York City     52 ms     68 ms         KeyCDN
São Paulo         105 ms    162 ms        KeyCDN
Sydney            162 ms    131 ms        CloudFront
Mumbai            254 ms    76 ms         CloudFront

Take these with a pinch of salt because it's only an average for the last 1 hour. Let's agree that they're both faster than your regular Nginx server in a single location.

By the way, both KeyCDN and CloudFront support Brotli compression. For CloudFront, this was added in July 2018 and if your origin can serve according to Content-Encoding you simply tell CloudFront to cache based on that header.

CloudFront does have an API for doing cache invalidation (aka. purging) and you can use boto3 to do it, but I've never tried it. For KeyCDN, here's how you do cache invalidation with the python-keycdn-api:


api = keycdn.Api(settings.KEYCDN_API_KEY)
call = "zones/purgeurl/{}.json".format(settings.KEYCDN_ZONE_ID)
all_urls = [
    'origin.example.com/static/foo.css',
    'origin.example.com/static/foo.cssbr',
    'origin.example.com/images/foo.jpg',
]
params = {"urls": all_urls}
response = api.delete(call, params)
print(response)

I'm not in love with that API but I know it issues the invalidation fast whereas with CloudFront I heard it takes a while to take effect.
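
For reference, a rough sketch of what a CloudFront invalidation could look like with the AWS SDK for JavaScript instead of boto3 (the distribution ID and paths are placeholders; I haven't run this):


const AWS = require("aws-sdk");

const cloudfront = new AWS.CloudFront();

const params = {
  DistributionId: "DISTRIBUTION_ID", // placeholder
  InvalidationBatch: {
    CallerReference: `purge-${Date.now()}`, // must be unique per invalidation
    Paths: {
      Quantity: 2,
      Items: ["/static/foo.css", "/images/foo.jpg"],
    },
  },
};

cloudfront
  .createInvalidation(params)
  .promise()
  .then((response) => console.log(response.Invalidation.Status))
  .catch((error) => console.error(error));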
