React 16.6 with Suspense and lazy loading components with react-router-dom

October 26, 2018
7 comments Web development, JavaScript, React

If you're reading this, the title of this blog post might have triggered one of two thoughts (or both): "Cool buzzwords!" or "Yuck! So many hyped buzzwords!"

Either way, React v16.6 came out a couple of days ago and it brings with it React.lazy: Code-Splitting with Suspense.

React.lazy is React's built-in way of lazy loading components. With Suspense, that lazy loading becomes smart: it knows to render a fallback component (or JSX element) whilst waiting for the slowly loading chunk of the lazy component.

The sample code in the announcement was deliciously simple, but I was curious: how does that work with react-router-dom?

Without further ado, here's a complete demo/example. The gist is an app that has two sub-components loaded with react-router-dom:


<Router>
  <div className="App">
    <Switch>
      <Route path="/" exact component={Home} />
      <Route path="/:id" component={Post} />
    </Switch>
  </div>
</Router>

The idea is that the Home component will list all the blog posts and the Post component will display the full details of that blog post. In my demo, the Post component never bothers to actually do the fetching of the full details to display. It just displays the passed in ID from the react-router-dom match prop. You get the idea.
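
For illustration, a minimal Post component along those lines might look like this (a sketch of my own; the demo's actual file may differ):


// A sketch of a Post component that only renders the ID it gets
// from react-router-dom's match prop (no actual fetching).
import React from "react";

export default function Post({ match }) {
  return <h2>Post {match.params.id}</h2>;
}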

That's standard React with react-router-dom stuff. Next up, lazy loading. Basically, instead of importing the Post component, you make it lazy:


-import Post from "./post";
+const Post = React.lazy(() => import("./post"));

And here comes the magic sauce. Instead of referencing component={Post} in the <Route/> you use this bad boy:


function WaitingComponent(Component) {
  return props => (
    <Suspense fallback={<div>Loading...</div>}>
      <Component {...props} />
    </Suspense>
  );
}

Complete prototype

The final thing looks like this:


import React, { lazy, Suspense } from "react";
import ReactDOM from "react-dom";
import { MemoryRouter as Router, Route, Switch } from "react-router-dom";

import Home from "./home";
const Post = lazy(() => import("./post"));

function App() {
  return (
    <Router>
      <div className="App">
        <Switch>
          <Route path="/" exact component={Home} />
          <Route path="/:id" component={WaitingComponent(Post)} />
        </Switch>
      </div>
    </Router>
  );
}

function WaitingComponent(Component) {
  return props => (
    <Suspense fallback={<div>Loading...</div>}>
      <Component {...props} />
    </Suspense>
  );
}

const rootElement = document.getElementById("root");
ReactDOM.render(<App />, rootElement);

And it totally works! It's hard to show this with the demo but if you don't believe me, you can download the whole codesandbox as a .zip, run yarn && yarn run build && serve -s build and then you can see it doing its magic as if this was the complete foundation of a fully working client-side app.

1. Loading the "Home" page, then click one of the links

2. Lazy loading the Post component

3. Post component lazily loaded once and for all

Bonus

One thing that can happen is that you load the app when the WiFi is hunky-dory, but when you eventually make a click that causes a lazy load to actually go out on the Internet and download that .js file, it might fail. For example, because the file has been removed from the server or your network just fails for some reason. To deal with that, simply wrap the whole <Suspense> component in an error boundary component.
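
A minimal sketch of such an error boundary (the class name and fallback message are my own, not from the demo):


// A minimal error boundary sketch (names and message are my own invention).
// componentDidCatch flips the state so a fallback renders instead of the
// whole app crashing when the chunk download fails.
class ErrorBoundary extends React.Component {
  state = { hasError: false };

  componentDidCatch(error, info) {
    this.setState({ hasError: true });
  }

  render() {
    if (this.state.hasError) {
      return <div>Failed to load this part of the app.</div>;
    }
    return this.props.children;
  }
}

// Then, inside WaitingComponent, wrap the <Suspense> element:
// <ErrorBoundary>
//   <Suspense fallback={<div>Loading...</div>}>
//     <Component {...props} />
//   </Suspense>
// </ErrorBoundary>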

See this demo which is a fork of the main demo but with error boundaries added.

In conclusion

No surprise that it works. React is pretty awesome. I just wasn't sure what it would look like with react-router-dom.

A word of warning, from the v16.6 announcement: "This feature is not yet available for server-side rendering. Suspense support will be added in a later release."

I think lazy loading isn't actually that big of a deal. It's nice that it works, but how likely is it, really, that you have a sub-tree of components so big and slow that you can't just pay for it up front as part of one big fat build? If you really care about great web performance for those people who reach your app rarely and sporadically, the true ticket to success is server-side rendering: ship a gzipped HTML document with all the React client-side code loading in a non-blocking way, so the user can download the HTML and start reading/consuming it immediately, and whilst the user is doing that, download the rest of the .js that is going to be needed once the user clicks around. Start there.

Inline scripts in create-react-app 2.0 and CSP hashes

October 5, 2018
0 comments Web development, JavaScript, React

UPDATE (1)

My understanding of how to generate the CSP nonces was wrong. What I initially posted was a confusion between nonces and hashes. Sorry. The blog post has been updated to use hashing.

UPDATE (2)

Shortly after publishing this I changed my mind entirely. I decided I don't want any inline scripts, no matter how small. Reasons: 1) with HTTP/2 it's cheap to send another file, and thus that critical, precious first HTML document becomes smaller, and 2) when you load it as an external file you have the power to load it async where applicable.

Check out this new script, it's hackish but works: uninline_scripts.js

UPDATE (Oct 18, 2018)

If you build with INLINE_RUNTIME_CHUNK=false yarn run build, no scripts, regardless of size, are inlined. See this pull request for details.

END UPDATES

I have an app that is hosted on GitHub Pages and, because I can't control the Content Security Policy HTTP headers there, I have to do it with a <meta http-equiv="Content-Security-Policy" content="${csp}"> tag in the HTML. That's working fine, and the way I do it is with a script that looks like this:


#!/usr/bin/env node
const fs = require("fs");
const crypto = require("crypto");

const CSP_TEMPLATE = `
default-src 'none';
connect-src 'self' kinto.workon.app peterbecom.auth0.com;
frame-src peterbecom.auth0.com;
img-src 'self' avatars2.githubusercontent.com https://*.googleusercontent.com;
script-src 'self'%SCRIPT_HASHES%;
style-src 'self' 'unsafe-inline';
font-src 'self' data:;
manifest-src 'self'
`.trim();

const htmlFile = process.argv[2];
if (!htmlFile) throw new Error("missing file argument");
let html = fs.readFileSync(htmlFile, "utf8");

let hashes = "";
let csp = CSP_TEMPLATE;
const matches = html.match(/<script>.*<\/script>/g);
if (matches) {
  matches.forEach(scriptTag => {
    const hash = crypto.createHash("sha256");
    hash.update(scriptTag.replace(/<script>/, "").replace("</script>", ""));
    // CSP hash sources must be base64-encoded, so digest straight to base64
    const digest = hash.digest("base64");
    hashes += ` 'sha256-${digest}'`;
  });
}
csp = csp.replace(/%SCRIPT_HASHES%/, hashes);

const metatag = `
  <meta http-equiv="Content-Security-Policy" content="${csp}">
`
  .replace(/\n/g, "")
  .trim();
if (html.search(metatag) > -1)
  throw new Error("already has CSP metatag in HTML");
const anchor = '<meta charset="utf-8">';
const newHtml = html.replace(anchor, `${anchor}${metatag}`);
fs.writeFileSync(htmlFile, newHtml, "utf8");

Laugh all you like at my hurried node scripting, but it works. It finds any <script>ANYTHING</script> tags (which means it disregards any <script src="... tags), calculates a sha256 hash string out of each one, and then puts those into the CSP block.
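
Assuming you save the script as, say, generate-csp.js (a file name I made up), you run it against the built HTML like this:


node generate-csp.js build/index.html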

The output becomes something like this:


<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta 
      http-equiv="Content-Security-Policy" 
      content="default-src 'none';script-src 'self' 'sha256-bb84aa7f904e73495b9e99f08531053f3a86f3c1b2e232e3abbac252bf723f1f';">
  </head>
  <body>
    ...
    <script>....</script>
  </body>
</html>

I don't know if I've done it right but at least what didn't use to work now works; the page loads in my browsers now.

An awesome snippet to web performance test a page programmatically

October 1, 2018
0 comments Web development, JavaScript, Web Performance

I found this in an issue discussing measuring page performance with puppeteer and it's pure gold. Especially because it's so accessible and easy to use.

Here's the code:


const puppeteer = require('puppeteer');

async function run() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.goto('https://www.peterbe.com/');

  console.log('\n==== performance.getEntries() ====\n');
  console.log(
    await page.evaluate(() =>
      JSON.stringify(performance.getEntries(), null, '  ')
    )
  );

  console.log('\n==== performance.toJSON() ====\n');
  console.log(
    await page.evaluate(() => JSON.stringify(performance.toJSON(), null, '  '))
  );

  console.log('\n==== page.metrics() ====\n');
  const perf = await page.metrics();
  console.log(JSON.stringify(perf, null, '  '));

  await browser.close();
}

run();

Network waterfall Google Chrome

To run it, you need a decently up-to-date version of puppeteer installed. When you run it you get output like this: https://gist.github.com/peterbe/afb09bf9277e5fa9242f8d270c687640

I don't claim (far from it, actually!) to understand all the metrics in there, but I believe this is basically what the Network panel in the Google Chrome Dev Tools is built upon. Some details and facts are easy to figure out and use in your analysis. For example, performance.getEntries() lists all the resources that had to be downloaded, in the order they were downloaded. Also, at the end of getEntries() you get the first-paint, which is often a useful metric.
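
If the paint timings are all you care about, a small variation (my own sketch, not from the original issue) can pull out just those entries inside the same run() function:


  // Sketch: extract only the paint entries (first-paint,
  // first-contentful-paint) instead of the full getEntries() dump.
  // This belongs inside run(), after the page.goto(...) call.
  console.log(
    await page.evaluate(() =>
      JSON.stringify(performance.getEntriesByType('paint'), null, '  ')
    )
  );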

Anyway, give it a spin. Wrap this up in a platform and see if you can build something really simple and really tailored to your web projects web performance testing.

Merge two arrays without duplicates in JavaScript

September 20, 2018
0 comments JavaScript

Here's how you do it if you don't care about the order:


const array1 = [1, 2, 3];
const array2 = [2, 3, 4];
console.log([...new Set([...array1, ...array2])]);
// prints [1, 2, 3, 4]

It merges the two arrays first. Then it creates a set out of that merged array, and lastly it converts the set back into an array.
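
Spelled out step by step, the one-liner decomposes into this:


const merged = [...array1, ...array2]; // 1) merge: [1, 2, 3, 2, 3, 4]
const unique = new Set(merged);        // 2) dedupe: Set {1, 2, 3, 4}
const result = [...unique];            // 3) back to an array: [1, 2, 3, 4]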

I searched for a solution and all I found was dated or wrong. This one-liner works, and I'm using it to make it possible to add a list of product versions to another list without mutating the existing arrays, because of React state stuff.

If you want to see the ES5 version, check out this Babel repl.

A darn good search filter function in JavaScript

September 12, 2018
8 comments Web development, JavaScript

Demo here. The demo uses React and a list of blog post titles that get immediately filtered when you type in a search. I.e. you have the whole list but show less when a search term is entered.

That the demo uses React isn't important. What's important is the search function. It looks like this:


function filterList(q, list) {
  function escapeRegExp(s) {
    return s.replace(/[-/\\^$*+?.()|[\]{}]/g, "\\$&");
  }
  const words = q
    .split(/\s+/g)
    .map(s => s.trim())
    .filter(s => !!s);
  const hasTrailingSpace = q.endsWith(" ");
  const searchRegex = new RegExp(
    words
      .map((word, i) => {
        if (i + 1 === words.length && !hasTrailingSpace) {
          // The last word - ok with the word being "startswith"-like
          return `(?=.*\\b${escapeRegExp(word)})`;
        } else {
          // Not the last word - expect the whole word exactly
          return `(?=.*\\b${escapeRegExp(word)}\\b)`;
        }
      })
      .join("") + ".+",
    "gi"
  );
  return list.filter(item => {
    return searchRegex.test(item.title);
  });
}

In action
I use this in a single-page content management app. There's a list of records and a search input. Every character you put into the search bar updates the list of records shown.

What it does is allow you to search texts based on multiple whole words. The key feature is that the last word doesn't have to be whole. For example, it will positively match "This is a blog post about JavaScript" if the search is "post javascript" or "post javasc". But it won't match on "pos blog".

The idea is that if a user has typed in a full word followed by a space, all previous words need to be matched fully. For example, if the input is "java " it won't match "This is a blog post about JavaScript", because the word java, alone, isn't in the search text.
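
To make that concrete, here's a hypothetical usage example (the sample list is made up):


const posts = [{ title: "This is a blog post about JavaScript" }];

filterList("post javasc", posts); // matches: the last word may be a prefix
filterList("pos blog", posts);    // no match: "pos" isn't a whole word
filterList("java ", posts);       // no match: the trailing space means "java" must be whole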

Sure, there are different ways to write this, but I think this functionality is good for this kind of filtering search. A different implementation would have a function that returns the regex, which could then be used both for filtering and for highlighting, as sketched below.
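
A sketch of that refactor (makeSearchRegex is my name for it, not from the demo):


// Sketch: factor the regex construction out of filterList so the same
// regex can also drive highlighting elsewhere.
function makeSearchRegex(q) {
  function escapeRegExp(s) {
    return s.replace(/[-/\\^$*+?.()|[\]{}]/g, "\\$&");
  }
  const words = q
    .split(/\s+/g)
    .map(s => s.trim())
    .filter(s => !!s);
  const hasTrailingSpace = q.endsWith(" ");
  return new RegExp(
    words
      .map((word, i) =>
        i + 1 === words.length && !hasTrailingSpace
          ? `(?=.*\\b${escapeRegExp(word)})`
          : `(?=.*\\b${escapeRegExp(word)}\\b)`
      )
      .join("") + ".+",
    "i"
  );
}

function filterList(q, list) {
  const searchRegex = makeSearchRegex(q);
  return list.filter(item => searchRegex.test(item.title));
}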

Hope it helps.

Replace an item in an array, by number, without mutation in JavaScript (ES6)

August 23, 2018
1 comment JavaScript

Suppose you have an array like this:


const items = ["B", "M", "X"];

And now you want to replace the second item ("J" instead of "M"), and suppose that you already know its position, as opposed to finding it with Array.prototype.findIndex.

Here's how you do it:


const index = 1;
const replacementItem = "J";

const newArray = Object.assign([], items, {[index]: replacementItem});

console.log(items); // ["B", "M", "X"]
console.log(newArray); //  ["B", "J", "X"]

Wasn't immediately obvious to me but writing it down will help me remember.

UPDATE

There's a much faster way and that's to use slice. It actually looks nicer too:


function replaceAt(array, index, value) {
  const ret = array.slice(0);
  ret[index] = value;
  return ret;
}
const newArray = replaceAt(items, index, "J");

See this codepen.

UPDATE (Feb 2019)

Here's a more powerful solution that uses Immer. It looks like this:


const items = ["B", "M", "X"];
const index = 1;
const replacementItem = "J";

const newArray = immer.produce(items, draft => {
  draft[index] = replacementItem;
});

console.log(items); // ["B", "M", "X"]
console.log(newArray); //  ["B", "J", "X"]

See this codepen.

It's more "powerful", because, if the original array (that you don't want to mutate) contains items that are mutable, you don't want to actually mutate them. This codepen demonstrates that subtlety. And this codepen demonstrates how to solve that with Immer.

HTMLMinifier in use on this blog now

August 7, 2018
3 comments Web development, JavaScript, Web Performance

Last week I enabled HTMLMinifier as a post-build step for server-rendered content here on this blog. Basically, after a page is rendered in Django, it's sent to a Celery queue that does things to the index.html file. The first thing it does is extract the stylesheets and replace them with a block of inline CSS. More details in this blog post. Secondly, the background job sends the index.html file to node_modules/.bin/html-minifier. See the code here.

What that does is remove quotation marks where they're not needed (e.g. <div id=foo> instead of <div id="foo">), remove HTML comments, and lastly remove whitespace that is not needed. The result is that the HTML now looks like this:

View source
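
Those options roughly correspond to an html-minifier invocation like this (my approximation of the flags; see the linked code for the exact ones used):


node_modules/.bin/html-minifier \
  --remove-attribute-quotes \
  --remove-comments \
  --collapse-whitespace \
  index.html -o index.html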

I also added a line of logging that spits out a measurement of the HTML size before, before with gzip, after, and after with gzip. Why? Because the optimization of HTML minification is usually insignificant after you gzip. See this blog post about how insignificant space optimization is in comparison to gzip. Look at the sample log lines:

...
Minified before: 38,249 bytes (11,150 gzipped), After: 36,098 bytes (10,875 gzipped), Shaving 2,151 bytes (275 gzipped)
Minified before: 37,698 bytes (10,534 gzipped), After: 35,622 bytes (10,243 gzipped), Shaving 2,076 bytes (291 gzipped)
Minified before: 58,846 bytes (14,623 gzipped), After: 55,540 bytes (14,313 gzipped), Shaving 3,306 bytes (310 gzipped)
...

So this last one saved 3.2KB of HTML document, which is nothing to sneeze at, but since 99% of clients support gzip, it actually only saved 310 bytes. As a matter of fact, I parsed the log lines and calculated the average: it was saving 338 bytes per page.

Worth it? I doubt it. It's not without risks, and now it's slightly harder and weirder to view the source. However, multiplying 338 bytes by the total number of visitors per month, I estimate it saves a total of 161 MB of data from being sent.

To defer or to async JavaScript tags. That's the question.

June 29, 2018
0 comments Web development, JavaScript, Web Performance

tl;dr; async scores slightly better than defer (on script tags) in this experiment using Webpagetest.

Much has been written about the difference between <script defer src="..."> and <script async src="..."> but nothing beats seeing it visually in Webpagetest.

So I took a page off my own blog. Butchered it and cleaned up the 6 <script> tags. It uses HTTP/2 and some jQuery and some other vanilla JavaScript stuff. See the page here: neither.html
Then I copied that HTML file and replaced all <script src="..."> with <script defer src="...">: defer.html. And lastly, the same with async: async.html.

First let's compare all three against each other:

Neither vs defer vs async on Webpagetest.

Clearly, making the JavaScript non-blocking is critical for web performance. That's 1.7 seconds instead of 2.8 seconds.

Second, let's compare just defer vs. async on a 4G connection:

defer vs. async on 4G on Webpagetest. Also, if you like, here's defer vs. async on a desktop browser instead.

Conclusions

  1. Don't allow your JavaScript to block rendering unless it's OK to have your users staring at a white screen till everything has landed.

  2. There's not much difference between defer and async. async has a slight advantage as per these experiments. I'm only capable of guessing, but I suspect it's because it can "spread out" the work better and get some work done in parallel, whilst defer has things that tell it to wait. In particular, with defer the order of the <script> tags is respected. Suppose that the file some.jquery.plugin.js downloads before jquery.min.js; then that file has to be blocked and its execution delayed whilst waiting for jquery.min.js to download, parse, and execute. With async it's more of a wild west of executing whenever you can.

  3. The async.html is busted because of the unpredictable order of execution; these .js files depend on the order. That's another reason to use defer if your scripts have that order-dependency problem.

  4. Consider using a mix of async and defer (see the sketch after this list). async has the advantage that some parsing/execution can be done by the main thread whilst waiting for other blocking resources like images.
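
For illustration, such a mix could look like this (the file names are made up):


<!-- Order-independent script: async lets it execute as soon as it arrives -->
<script async src="analytics.js"></script>
<!-- Order-dependent scripts: defer preserves execution order -->
<script defer src="jquery.min.js"></script>
<script defer src="some.jquery.plugin.js"></script>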

To CDN assets or just HTTP/2

May 17, 2018
3 comments Web development, JavaScript, Web Performance

tl;dr; I see little benefit in using a CDN at this point.

I took two random pages here on my blog. One and Another. It doesn't matter what they say, but it's important to notice that they're extremely similar. No big pictures. Both have 1 banner ad each. Both served with HTTP/2. Neither has any blocking linked assets, i.e. there is no blocking <link rel="stylesheet" href="styles.css"> and the script tags are either async or defer. Both pages reference one little .png that is not deliberately lazy loaded. That's the baseline.

The HTML document, at both URLs, is served with HTTP/2, but it references the lazy-loaded .css and (a bunch of) .js files via a CDN. In other words, it looks like this:


▶ curl -v https://www.peterbe.com/plog/hashin-0.7.0
...
> GET /plog/hashin-0.7.0 HTTP/2
...
< HTTP/2 200
...
<
...
<link rel="preload" href="/static/css/base.min.e8df96d84663.css" 
 as="style" onload="this.onload=null;this.rel='stylesheet'">
...
<script defer src="/static/js/blogitem-post.min.f6c0be691e73.js"></script>
...

So, cdn-2916.kxcdn.com is an awesome CDN, but to a first-time visitor it's going to require a DNS lookup and the creation of a new TCP connection that can be kept alive. The alternative to this is to not put any of the .png, .css or .js assets on a CDN. Basically, instead of <script src="https://mycdn.example.com/foo.js">, just do <script src="/foo.js">.

CDNs are really important since latency is a killer to web performance, and remember that "Use a CDN" is rule number 2 in the, now dated, YSlow ruleset. However, we're entering an era where HTTP/2 is becoming more and more available in mainstream browsers (hint: nearly 100% of visitors to my site support HTTP/2). Buuuuuut, the latency (DNS, connection and SSL negotiation) doesn't matter that much if you have already paid those costs to get to the origin web server (https://www.peterbe.com in this example).

The Experiment

What I'm interested in is seeing whether there is a way to gauge/measure when it's best to use a CDN and when it's best to use the origin web server to serve all assets. My friend @stereobooster suggested: "Webpagetest.org is all you need".

Ok. Let's measure that then with Webpagetest.org and see what we can learn.

Here's a visual comparison of the two URLs when they both use CDN for the static assets.

  • They load pretty equally.
  • The Waterfall View looks almost identical.
  • Confirmed, there are no render blocking resources as it starts to paint already at about 1.5s.

Here's a visual comparison of one that uses a CDN for static assets and one that does not.

  • They load pretty equally (diff by 0.1s).
  • The Waterfall View looks very different.
  • The second one does not have a second "dns - connection - ssl - download" bar.
  • Almost all the .js are downloaded at about 1.8s when there's no CDN.
  • Almost all the .js are downloaded at about 3.0s when using a CDN.
  • Use the little "Waterfall opacity" widget to slide left and right to see the difference.

You can see their webpagetests individually here and here.

Assets over CDN
Two connection prices paid. Downloads individual assets faster but ultimately takes a longer time.

One HTTP/2 connection only
Only 1 connection price paid. ALL assets downloaded sooner, albeit individually slower.

Analysis

My site is served from a highly optimized Nginx server in New York, USA. The two Webpagetest visual comparisons above are both done from Virginia, USA. But the killer feature of a CDN is that latency can be so much better thanks to its edge locations. In particular, KeyCDN has an edge location in Stockholm, Sweden. So what happens when you run the URLs from a Webpagetest machine in Stockholm, Sweden?

They both start to render at the same time (expected, since the HTML document is still in New York, USA), but the (roughly) total time to download all the .css and .js is about 2.6 seconds with a CDN and 1.9 seconds without. In other words, despite the CDN being geographically so much closer, the static assets are still available sooner without a CDN.

It's pretty clear at this point that it's not a good idea to use a CDN for static assets, even if they're not critical. The "First Meaningful Paint" and "Time To Interactive" are about the same, but when HTTP/2 can download all the .js files faster, their JavaScript can start being useful sooner.

What Else

So for my site, it's easiest to host the whole thing on an Nginx server on a DigitalOcean machine. It's easy to invalidate its cache (just delete the file from disk and wait for Django to regenerate it). Another advantage of using plain Nginx is that I serve the HTML with Cache-Control headers and then do some post-processing of the .html file, and since Nginx is disk-based, I don't have to update a CDN.

An alternative would be to put the whole site behind a CDN. That way, the initial HTML document can be served from a CDN edge location, using HTTP/2 and send the rest of the static assets on the same HTTP/2 connection. But this means that every single dynamic URL (e.g. HTTP POSTs or some per-user XHR requests) has to go via a CDN rather than going straight to the Nginx that is connected to the Django web server.

Last but not least, even though my Nginx server is on a decent machine and pretty well tuned, I very much doubt it's as fast and powerful as a KeyCDN or CloudFront or Akamai or Google Cloud CDN. Those servers are beasts! Mind you, the DNS + connection + SSL negotiation, when requesting from Stockholm, Sweden was about 0.75s to my Nginx in New York, USA. For the KeyCDN edge location the DNS + connection + SSL negotiation was about 0.52s. So not a huge difference actually.

Another important aspect is Service Workers. Perhaps I don't know how to hack it, but it doesn't work when you use different domains for the service worker .js file and the URIs it references.

In conclusion: I see little benefit in using a CDN at this point. Perhaps for larger assets like videos, GIFs or high-res images. HTTP/2 changes one of the major web performance rules. End of an era(?)

Webpack Bundle Analyzer for create-react-app

May 14, 2018
0 comments JavaScript, React

webpack-bundle-analyzer is an awesome little program for understanding why and which parts of your bundled .js files are so big. It's a lot more advanced (and pretty) than source-map-explorer.

Thanks to this tip by @trevorwhealy you can now use webpack-bundle-analyzer on a create-react-app bundle. Yay!
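
The gist of that kind of tip is a small script that loads create-react-app's production webpack config and pushes the analyzer plugin onto it. A sketch, assuming react-scripts 1.x (the config path and the analyze.js file name are my assumptions; this is not necessarily @trevorwhealy's exact script):


// analyze.js -- a sketch; the config path varies between react-scripts versions.
process.env.NODE_ENV = "production";
const webpack = require("webpack");
const { BundleAnalyzerPlugin } = require("webpack-bundle-analyzer");
const config = require("react-scripts/config/webpack.config.prod");

// Add the analyzer to the production config and run a build.
config.plugins.push(new BundleAnalyzerPlugin());
webpack(config, (err, stats) => {
  if (err) throw err;
  // webpack-bundle-analyzer now serves its interactive treemap locally
});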

Check out the report I made for the client side code of Songsear.ch:

Webpack bundle analyzed for Songsear.ch

One thing I personally noticed from this is that the .png files take up quite a lot of kilobytes. And I'm quite surprised that the whatwg-fetch polyfill uses 12KB before gzip.