Wouter + Vite is the new create-react-app, and I love it
August 16, 2024
React, Node, Bun
If you've done React for a while, you most likely remember Create React App. It was (and still is) a prepared config that combines React with webpack and eslint. Essentially, you get immediate access to making apps with React in a local dev server, and it produces a complete build artifact that you can upload to a web server to host your SPA (Single Page App). I loved it and blogged about it a lot in the distant past.
The create-react-app project died, and what came onto the scene were tools that solved React rendering configs with SSR (Server Side Rendering). In particular, we now have great frameworks like Gatsby, Next.js, Remix, and Astro. They're great, especially if you want server-side rendering with code-splitting by route and that sweet TypeScript integration between your server (fs, databases, secrets) and your rendering components.
However, I still think there is a place for a super light and simple SPA tool that only adds routing, hot module reloading, and build artefacts. For that, I love Vite + Wouter. At least for now :)
What's so great about it? Speed
Quickstart
❯ npm create vite@latest my-vite-react-ts-app -- --template react-ts
...
Done. Now run:
cd my-vite-react-ts-app
npm install
npm run dev
A single-page app needs routing, so let's add wouter and hook it up in the app entry point:
❯ cd my-vite-react-ts-app
❯ npm install && npm install wouter
And edit the default created src/App.tsx to something like this:
import "./App.css";
import { Routes } from "./routes";
function App() {
// You might need other wrapping components such as theming providers, etc.
return <Routes />;
}
export default App;
And the src/routes.tsx:
import { Route, Router, Switch } from "wouter";
export function Routes() {
return (
<Router>
<Switch>
<Route path="/" component={Home} />
<Route>
<Custom404 />
</Route>
</Switch>
</Router>
);
}
function Custom404() {
return <div>Page not found</div>;
}
function Home() {
return <div>Hello World</div>;
}
That's it! Let's test it with npm run dev and open http://localhost:5173
❯ npm run dev
VITE v5.4.1 ready in 97 ms
➜ Local: http://localhost:5173/
➜ Network: use --host to expose
➜ press h + enter to show help
Hot module reloading works as expected.
Let's build it now:
❯ npm run build
vite v5.4.1 building for production...
✓ 42 modules transformed.
dist/index.html 0.46 kB │ gzip: 0.30 kB
dist/assets/index-DiwrgTda.css 1.39 kB │ gzip: 0.72 kB
dist/assets/index-Ba6YWXXy.js 148.08 kB │ gzip: 48.29 kB
✓ built in 340ms
It says "built in 340ms" but that doesn't include the total time of the whole npm run build execution. To get the total time, use the built-in time command:
❯ /usr/bin/time npm run build
...
✓ built in 340ms
1.14 real 2.28 user 0.13 sys
So about 1.1 seconds as the wall clock goes.
Add basic routing
As you can imagine, adding more routes that point to client-side components is as simple as:
import { Route, Router, Switch } from "wouter";
+import Charts from "./components/charts"
export function Routes() {
return (
<Router>
<Switch>
<Route path="/" component={Home} />
+ <Route path="/charts" component={Charts} />
<Route>
<Custom404 />
</Route>
</Switch>
</Router>
);
}
A stack like Vite and Wouter isn't as feature-packed as Remix or Next.js, but it doesn't have to be; it's pretty near production deployment-worthy as is. Just one thing is missing in my view: lazy route-based code-splitting. Let's build that. Here's the new routes.tsx:
import { lazy, Suspense } from "react";
import { Route, Router, Switch } from "wouter";
type LazyComponentT = React.LazyExoticComponent<() => JSX.Element>;
function LC(Component: LazyComponentT, loadingText = "Loading") {
return () => {
return (
<Suspense fallback={<p>{loadingText}</p>}>
<Component />
</Suspense>
);
};
}
const Charts = LC(lazy(() => import("./components/charts")));
export function Routes() {
return (
<Router>
<Switch>
<Route path="/" component={Home} />
<Route path="/charts" component={Charts} />
<Route>
<Custom404 />
</Route>
</Switch>
</Router>
);
}
function Custom404() {
return <div>Page not found</div>;
}
function Home() {
return <div>Hello World</div>;
}
If you run npm run build now, you'll see something like this:
❯ npm run build
...
dist/index.html 0.46 kB │ gzip: 0.30 kB
dist/assets/index-DiwrgTda.css 1.39 kB │ gzip: 0.72 kB
dist/assets/charts-qo6bqIo2.js 0.12 kB │ gzip: 0.13 kB
dist/assets/index-JUI4kknP.js 149.25 kB │ gzip: 48.81 kB
In particular, there's a new .js file whose name is prefixed with charts-. How easy was that?!
Compared to Next.js
Next.js is wonderful. But it's a bit heavy. Let's build something very similar with npx create-next-app@latest: a SPA with React, TypeScript, and route-based code-splitting.
You just have to make this change to next.config.mjs to make it a SPA:
/** @type {import('next').NextConfig} */
const nextConfig = {
output: "export",
};
export default nextConfig;
Now, npm run build will generate a directory called out which you can upload to a CDN.
But let's add another component to make the comparison fair. Something like this:
// This is src/app/charts/page.tsx
export default function Charts() {
return <div>Charts</div>;
}
So easy! Thank you, Next.js. Run npm run build again and look at the files in out/_next/static:
out/_next/static
├── X6P949yoE-Ou9K46Apwab
│ ├── _buildManifest.js
│ └── _ssgManifest.js
├── chunks
│ ├── 23-bc0704c1190bca24.js
│ ├── app
│ │ ├── _not-found
│ │ │ └── page-05886c10710171db.js
+│ │ ├── charts
+│ │ │ └── page-3bd6b64ccd38c64e.js
│ │ ├── layout-3e7df178500d1502.js
│ │ └── page-121d7018024c0545.js
│ ├── fd9d1056-2821b0f0cabcd8bd.js
│ ├── framework-f66176bb897dc684.js
│ ├── main-app-00df50afc5f6514d.js
│ ├── main-c3a7d74832265c9f.js
│ ├── pages
│ │ ├── _app-6a626577ffa902a4.js
│ │ └── _error-1be831200e60c5c0.js
│ ├── polyfills-78c92fac7aa8fdd8.js
│ └── webpack-879f858537244e02.js
└── css
└── 876d048b5dab7c28.css
(technically I cheated here, because adding another route changes the hashes of some of the other chunks, but you get the point)
Building it with npm run build takes...
❯ /usr/bin/time npm run build
> my-nextjs-ts-app@0.1.0 build
> next build
▲ Next.js 14.2.5
Creating an optimized production build ...
✓ Compiled successfully
✓ Linting and checking validity of types
✓ Collecting page data
✓ Generating static pages (6/6)
✓ Collecting build traces
✓ Finalizing page optimization
Route (app) Size First Load JS
┌ ○ / 140 B 87.3 kB
├ ○ /_not-found 871 B 88 kB
└ ○ /charts 140 B 87.3 kB
+ First Load JS shared by all 87.1 kB
├ chunks/23-bc0704c1190bca24.js 31.6 kB
├ chunks/fd9d1056-2821b0f0cabcd8bd.js 53.6 kB
└ other shared chunks (total) 1.86 kB
○ (Static) prerendered as static content
6.26 real 9.89 user 1.42 sys
So about 6+ seconds.
Comparing the time npm run build takes between Next.js and Vite+Wouter looks like this:
❯ hyperfine "cd my-vite-react-ts-app && npm run build" "cd my-nextjs-ts-app && npm run build"
...
Summary
cd my-vite-react-ts-app && npm run build ran
5.90 ± 0.63 times faster than cd my-nextjs-ts-app && npm run build
In other words, the Vite+Wouter SPA is 6x faster at building than the equivalent Next.js SPA.
Summary
The npm run build time isn't massively important. It's not the kind of operation you do super often, and oftentimes it's something you can kick off and walk away from. Kinda.
Where it matters, to me, is that "instantness" feeling you get when you type npm run dev and can (almost) immediately start to work. It makes for happiness.
To properly compare that experience between Vite+Wouter and Next.js, I wrote a hacky script which spawns npm run dev in the background and then, every 10ms, checks if it can successfully HTTP GET http://localhost:5173 (or http://localhost:3000).
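A minimal sketch of such a poller, assuming Node 18+ for the global fetch; the waitForOk helper, its defaults, and the usage lines are illustrative, not the actual script from this post:

```typescript
// Poll `url` every `intervalMs` until it answers 200 OK; resolve with elapsed ms.
// (Sketch only: error handling and output formatting are simplified.)
export async function waitForOk(
  url: string,
  intervalMs = 10,
  timeoutMs = 30_000
): Promise<number> {
  const start = Date.now();
  for (;;) {
    try {
      const res = await fetch(url);
      if (res.ok) return Date.now() - start;
    } catch {
      // Server not accepting connections yet; keep polling.
    }
    if (Date.now() - start > timeoutMs) {
      throw new Error(`No 200 OK from ${url} within ${timeoutMs}ms`);
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

// Usage sketch: spawn the dev server in the background, measure, clean up.
// import { spawn } from "node:child_process";
// const child = spawn("npm", ["run", "dev"], { stdio: "ignore" });
// console.log(`Getting 200 OK: ${await waitForOk("http://localhost:5173/")}ms`);
// child.kill();
```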
When I run that a couple of times, the numbers I get are:
- Vite + Wouter: Getting 200 OK: 266.6ms
- Next.js: Getting 200 OK: 2.414s
And that matters; to me! Tease me all you like for short attention span, but I often have a thought that I want to code and that flow gets disrupted if there's a sudden pause before I can test the dev server.
Bonus: Bun
If you know me, you know I'm a big fan of Bun and have always thought one of its coolest features is its ability to start up quickly.
Out of curiosity, I used bun create vite and created a replica of the Vite+Wouter app using bun (v1.1.22) instead of node (v20.16). Comparing their build times...
❯ hyperfine "cd my-vite-react-ts-app && npm run build" "cd my-vite-bun-react-ts-app && bun run build"
...
Summary
cd my-vite-bun-react-ts-app && bun run build ran
1.53 ± 1.34 times faster than cd my-vite-react-ts-app && npm run build
I.e. using bun to build the Vite+Wouter app is 1.53 times faster than using node.
Notes on porting a Next.js v14 app from Pages to App Router
March 2, 2024
React, JavaScript
Unfortunately, the app I ported from the Pages Router to the App Router is in a private repo. It's a Next.js static site SPA (Single Page App).
It's built with npm run build and then exported so that the out/ directory is the only thing I need to ship to the CDN, and it just works. There's a home page and a few dynamic routes whose slugs depend on an SQL query. So the SQL (PostgreSQL) connection, using knex, has to be present when running npm run build.
In no particular order, let's look at some differences
Build times
With caching
After running next build a bunch of times, the rough averages are:
- Pages Router: 20.5 seconds
- App Router: 19.5 seconds
Without caching
After running rm -fr .next && next build a bunch of times, the rough averages are:
- Pages Router: 28.5 seconds
- App Router: 31 seconds
Note
I have another SPA app that is built with vite and wouter and uses the heavy mantine UI library. That SPA app does a LOT more in terms of components and pages, etc. That one takes 9 seconds on average.
Static output
If you compare the generated out/_next/static/chunks, there's a strange difference.
Pages Router
360.0 KiB [##########################] /pages
268.0 KiB [################### ] 726-4194baf1eea221e4.js
160.0 KiB [########### ] ee8b1517-76391449d3636b6f.js
140.0 KiB [########## ] framework-5429a50ba5373c56.js
112.0 KiB [######## ] cdfd8999-a1782664caeaab31.js
108.0 KiB [######## ] main-930135e47dff83e9.js
92.0 KiB [###### ] polyfills-c67a75d1b6f99dc8.js
16.0 KiB [# ] 502-394e1f5415200700.js
8.0 KiB [ ] 0e226fb0-147f1e5268512885.js
4.0 KiB [ ] webpack-1b159842bd89504c.js
In total 1.2 MiB across 15 files.
App Router
428.0 KiB [##########################] 142-94b03af3aa9e6d6b.js
196.0 KiB [############ ] 975-62bfdeceb3fe8dd8.js
184.0 KiB [########### ] 25-aa44907f6a6c25aa.js
172.0 KiB [########## ] fd9d1056-e15083df91b81b75.js
164.0 KiB [########## ] ca377847-82e8fe2d92176afa.js
140.0 KiB [######## ] framework-aec844d2ccbe7592.js
116.0 KiB [####### ] a6eb9415-a86923c16860379a.js
112.0 KiB [####### ] 69-f28d58313be296c0.js
108.0 KiB [###### ] main-67e49f9e34a5900f.js
92.0 KiB [##### ] polyfills-c67a75d1b6f99dc8.js
44.0 KiB [## ] /app
24.0 KiB [# ] 1cc5f7f4-2f067a078d041167.js
24.0 KiB [# ] 250-47a2e67f72854c46.js
8.0 KiB [ ] /pages
4.0 KiB [ ] webpack-baa830a732d3dbbf.js
4.0 KiB [ ] main-app-f6b391c808310b44.js
In total 1.7 MiB across 27 files.
Notes
What makes the JS bundle large is most certainly the use of @primer/react, @fullcalendar, and react-chartjs-2.
But why is the difference so large?
Dev start time
The way Next.js works with npm run dev is that it starts a server at localhost:3000 and only compiles something when you request a URL. It's essentially lazy, and that's a good thing: in a bigger app you might have too many different entries, so it'd be silly to wait for all of them to compile if you might not use them all.
Pages Router
❯ npm run dev
...
✓ Ready in 1125ms
○ Compiling / ...
✓ Compiled / in 2.9s (495 modules)
App Router
❯ npm run dev
...
✓ Ready in 1201ms
○ Compiling / ...
✓ Compiled / in 3.7s (1023 modules)
Mind you, it almost always says "Ready in 1201ms" or so, but the other number, like "3.7s" in this example, seems to fluctuate quite wildly. I don't know why.
Conclusion
Was it worth it? Yes and no.
I've never liked next/router. With App Router you instead use next/navigation, which feels much more refined and simple. The old next/router is still there; it exposes a useRouter hook which is still used for doing push and replace.
The getStaticPaths and getStaticProps functions were not really that terrible in Pages Router.
I think the whole point of App Router is that you can get external data not only in getStaticProps (or getServerSideProps) but more freely, in places like layout.tsx, which means less prop-drilling.
There are some nicer APIs with App Router. And it's the future of Next.js and how Vercel is pushing it forward.
Switching from Next.js to Vite + wouter
July 28, 2023
React, Node, JavaScript
Next.js is a full front-end web framework. Vite is a build tool, so they don't easily compare. But if you're building a single-page app ("SPA"), the difference isn't that big, especially if you bolt on a routing library, which is something Next.js has built in.
My SPA is a relatively straightforward one. It's a React app that uses the wonderful Mantine UI framework. The app is a CRM for real-estate agents that I've been hacking on with my wife. SEO is not a concern because you can't do anything until you've signed in. So server-side rendering is not a requirement. In that sense, it's like loading Gmail. Yes, users might want a speedy first load when they open it in a fresh new browser tab, but the static assets are most likely going to be heavily (browser) cached by the few users it has.
With that out of the way, let's skim through some of the differences.
Build times
Immediately, this is a tricky one to compare because Next.js has the ability to cache. You get that .next/cache/ directory, which is black magic to me, but it clearly speeds things up. And it's incremental, so the caching can help partially when only some of the code has changed.
Running npm run build && npm run export a couple of times yields:
Next.js
Without a .next/cache/ directory
Total time to run npm run build && npm run export: 52 seconds
With the .next/cache/ directory left in place before each build
Total time to run npm run build && npm run export: 30 seconds
Vite
Total time to run npm run build: 12 seconds
A curious thing about Vite here is that its output contains a measurement of the time it took. But I ignored that and used /usr/bin/time -h ... instead. This gives me the total time.
I.e. the output of npm run build will say:
✓ built in 7.67s
...but it actually took 12.2 seconds according to /usr/bin/time.
Build artifacts
Perhaps not very important because Next.js automatically code splits in its wonderfully clever way.
Next.js
❯ du -sh out
1.8M out
❯ tree out | rg '\.js|\.css' | wc -l
52
Vite
❯ du -sh dist
960K dist
and
❯ tree dist/assets
dist/assets
├── index-1636ae43.css
└── index-d568dfbf.js
Again, it's probably unfair to compare at this point. Most of the weight of these static assets (particularly the .js files) is due to Mantine components being so heavy.
Routing
This isn't really a judgment in any way. More of a record how it differs in functionality.
Next.js
In the app that I'm switching from Next.js to Vite + wouter, I use the old way of using Next.js, which is a src/pages/* directory. For example, to make a route to the /account/settings page I first create:
// src/pages/account/settings.tsx
import { Settings } from "../../components/account/settings"
const Page = () => {
return <Settings />
}
export default Page
I'm glad I built it this way in the first place. When I now port to Vite + wouter, I don't really have to touch that src/components/account/settings.tsx code because that component kinda assumes it's been invoked by some routing.
Vite + wouter
First I installed the router in src/App.tsx. Abbreviated code:
// src/App.tsx
import { Routes } from "./routes"
export default function App() {
const { myTheme, colorScheme, toggleColorScheme } = useMyTheme()
return (
<ColorSchemeProvider
colorScheme={colorScheme}
toggleColorScheme={toggleColorScheme}
>
<MantineProvider withGlobalStyles withNormalizeCSS theme={myTheme}>
<Routes />
</MantineProvider>
</ColorSchemeProvider>
)
}
By the way, the code for Next.js looks very similar in its src/pages/_app.tsx with all those contexts that Mantine makes you wrap things in.
And here's the magic routing:
// src/routes.tsx
import { Router, Switch, Route } from "wouter"
import { Home } from "./components/home"
import { Authenticate } from "./components/authenticate"
import { Settings } from "./components/account/settings"
import { Custom404 } from "./components/404"
export function Routes() {
return (
<Router>
<Switch>
<Route path="/signin" component={Authenticate} />
<Route path="/account/settings" component={Settings} />
{/* many more lines like this ... */}
<Route path="/" component={Home} />
<Route>
<Custom404 />
</Route>
</Switch>
</Router>
)
}
Redirecting with router
This is a made-up example, but it demonstrates the pattern with wouter compared to Next.js.
Next.js
const { push } = useRouter()
useEffect(() => {
if (user) {
push('/signedin')
}
}, [user])
wouter
const [, setLocation] = useLocation()
useEffect(() => {
if (user) {
setLocation('/signedin')
}
}, [user])
Linking
Next.js
import Link from 'next/link'
// ...
<Link href="/settings" passHref>
<Anchor>Settings</Anchor>
</Link>
wouter
import { Link } from "wouter"
// ...
<Link href="/settings">
<Anchor>Settings</Anchor>
</Link>
Getting a query string value
Next.js
import { useRouter } from "next/router"
// ...
const { query } = useRouter()
if (query.name) {
const name = Array.isArray(query.name) ? query.name[0] : query.name
// ...
}
wouter
import { useSearch } from "wouter/use-location"
// ...
const search = useSearch()
const searchParams = new URLSearchParams(search)
if (searchParams.get('name')) {
const name = searchParams.get('name')
// ...
}
Conclusion
The best thing about Next.js is its momentum. It gets lots of eyes on it. Lots of support opportunities and a great chance of its libraries being maintained well into the future. Vite also has great momentum and adoption. But wouter is less "common".
Comparing apples and oranges is often counter-productive if you don't take all constraints and angles into account, and those are usually quite specific. In my case, I just want to build a single-page app. I don't want a Node server. In fact, my particular app is a Python backend that serves all the API responses to a fetch in the JavaScript app. That Python app also serves the built static files, including the dist/index.html file. That's how my app can serve the app straight away if the current URL is something like /account/settings. A piece of Python code (more or less the only code that doesn't serve /api/* URLs) collapses all initial serving URLs to serve the dist/index.html file. It's a classic pattern and honestly feels a bit dated in 2023. But it works. And what's so great about all of this is that I have a multi-stage Dockerfile that first does the npm run build (and some COPY --from=frontend /home/node/app/dist ./server/out) so I can "lump" together the API backend and the front-end code in just one server (which I host on Digital Ocean).
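A rough sketch of what such a multi-stage Dockerfile can look like; the stage name frontend and the COPY destination come from this post, but every other path, base image, and command here is an assumption, not the actual file:

```dockerfile
# Stage 1: build the Vite SPA (base image and paths are assumptions)
FROM node:20 AS frontend
WORKDIR /home/node/app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: the Python backend that serves /api/* and the built SPA
FROM python:3.11-slim
WORKDIR /server
COPY server/requirements.txt .
RUN pip install -r requirements.txt
COPY server/ .
# Bring the built static files over from the first stage
COPY --from=frontend /home/node/app/dist ./server/out
CMD ["python", "main.py"]
```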
If you had to write a SPA in 2023, what would you use? In particular, if it has to be React. Remix is all about server-side rendering. Create-react-app is completely unsupported. Building it from scratch yourself, rolling your own TypeScript + Eslint + Rollup/esbuild/Parcel/Webpack, does not feel productive unless you have enough time and energy to really get it all right.
In terms of comparing the performance between Next.js and Vite + wouter, the time it takes to build the whole app is actually not that big a deal. It's a rare thing to do. It's something I do after a long coding/debugging session. What's more pressing is how npm run dev works.
With Vite, I type npm run dev and hit Enter. Almost faster than I can notice, I see...
VITE v4.4.6 ready in 240 ms
➜ Local: http://localhost:3000/
➜ Network: use --host to expose
➜ press h to show help
and I'm ready to open http://localhost:3000/ to play. With Next.js, after having typed npm run dev and Enter, there's this slight but annoying delay before it's ready.
The technology behind You Should Watch
January 28, 2023
You Should Watch, React, Firebase, JavaScript
I recently launched You Should Watch, a mobile-friendly web app for keeping a to-watch list of movies and TV shows, as well as being able to quickly share the links if you want to tell someone "you should watch" it.
I'll be honest, much of the motivation of building that web app was to try a couple of newish technologies that I wanted to either improve on or just try for the first time. These are the interesting tech pillars that made it possible to launch this web app in what was maybe 20-30 hours of total time.
All the code for You Should Watch is here: https://github.com/peterbe/youshouldwatch-next
The Movie Database API
The cornerstone that made this app possible in the first place. The API is free for developers who don't intend to earn revenue on whatever project they build with it. More details in their FAQ.
The search functionality is important. The way it works is that you can do a "multisearch", which finds movies, TV shows, or people. Then, when you have each search result's id and media_type, you can fetch a lot more information specifically. For example, that's how the page for a person displays things differently than the page for a movie.
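As a sketch of that two-step flow (the helper and variable names are mine; the /search/multi and per-type details routes are from TMDB's v3 API):

```typescript
const TMDB = "https://api.themoviedb.org/3";

type MediaType = "movie" | "tv" | "person";

// A multisearch result's media_type + id map straight onto a details endpoint.
export function detailsUrl(mediaType: MediaType, id: number): string {
  return `${TMDB}/${mediaType}/${id}`;
}

// Usage sketch (API_KEY is a placeholder for your own TMDB key):
// const res = await fetch(
//   `${TMDB}/search/multi?query=${encodeURIComponent("dune")}&api_key=${API_KEY}`
// );
// const { results } = await res.json();
// for (const r of results) console.log(r.media_type, detailsUrl(r.media_type, r.id));
```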
Next.js and the new App dir
In Next.js 13 you have a choice between the regular pages directory or an app directory where every page (which becomes a URL) has to be called page.tsx.
No judgment here. It was a bit cryptic to rewrap my brain around how this works. In particular, head.tsx is now different from page.tsx, and since both, in server-side rendering, need some async data, I have to duplicate the await getMediaData() call instead of being able to fetch it once and share it with prop-drilling or context.
Vercel deployment
Wow! This was the most pleasant deployment experience I've had in years. So polished and so much "just works". You sign in with your GitHub auth, click to select the GitHub repo (that has a next.config.js and package.json etc.), and you're done. That's it! Now, not only does every merged PR automatically (and fast!) get deployed, but you also get a preview deployment for every PR (which I didn't use).
I'm still using the free hobby tier but god forbid this app gets serious traffic, I'd just bump it up to $20/month which is cheap. Besides, the app is almost entirely CDN cacheable so only the search XHR backend would linearly increase its load with traffic I think.
Well done Vercel!
Playwright and VS Code
Not the first time I used Playwright but it was nice to return and start afresh. It definitely has improved in developer experience.
Previously I used npx and the terminal to run tests, but this time I tried "Playwright Test for VSCode", which was just fantastic! There are some slightly annoying things in that I had to use the mouse cursor more than I'd hoped, but it genuinely helped me be productive. Playwright also has the ability to generate JS code based on me clicking around in a temporary incognito browser window. You do a couple of things in the browser, then paste the generated source code into tests/basics.spec.ts and do some manual tidying up. To start the code generator like that, one simply types pnpm dlx playwright codegen.
pnpm
It seems hip, and a lot of people seem to recommend it, kinda like yarn was hip and often recommended over npm (me included!). Sure it works, and it installs things fast, but is it noticeable? Not really. Perhaps it's 4 seconds when it would have been 5 seconds with npm. Apparently pnpm does clever symlinking to avoid a disk-heavy node_modules/, but does it really matter ...much?
It's still large:
❯ du -sh node_modules
468M node_modules
A disadvantage of pnpm is that GitHub Dependabot currently doesn't support it :(
An advantage of pnpm is that pnpm up -i --latest is a great interactive CLI which works like yarn upgrade-interactive --latest.
just
just is like make but written in Rust. Now I have a justfile in the root of the repo and can type shortcut commands like just dev or just emu[TAB] (to tab-autocomplete).
In hindsight, my justfile ended up being just a list of pnpm run ... commands, but the idea is that just would be for all and any commands under one roof.
At the end of the day, it becomes a nifty little file of "recipes" of useful commands, and you can easily string them together. For example, just lint is the combination of typing pnpm run prettier:check, pnpm run tsc, and pnpm run lint.
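For illustration, a justfile along those lines might look like this (the recipe names are my assumptions; only the pnpm commands come from the text):

```
# Hypothetical justfile; `just lint` chains the three checks
dev:
    pnpm run dev

prettier-check:
    pnpm run prettier:check

tsc:
    pnpm run tsc

lint-js:
    pnpm run lint

lint: prettier-check tsc lint-js
```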
Pico.css
A gorgeously simple looking pure-CSS framework. Yes, it's very limited in components and I don't know how well it "tree shakes" but it's so small and so neat that it had everything I needed.
My favorite React component library is Mantine, but I definitely love the peace of mind that Pico.css is just CSS, so you're A) not stuck with React forever, and B) not shipping unnecessary JS code that slows things down.
Firebase
Good old Firebase. The bestest and easiest way to get a reliable and realtime database that is dirt cheap, simple, and has great documentation. I do regret not trying Supabase but I knew that getting the OAuth stuff to work with Google on a custom domain would be tricky so I stayed with Firebase.
react-lite-youtube-embed
A port of Paul Irish's Lite YouTube Embed, which makes it easy to display YouTube thumbnails in a web-performant way. All you have to do is:
import LiteYouTubeEmbed from "react-lite-youtube-embed";
<LiteYouTubeEmbed
id={youtubeVideo.id}
title={youtubeVideo.title} />
In conclusion
It's amazing how much time these tools saved compared to just years ago. I could build a fully working side-project with automation and high quality entirely thanks to great open source or well-tuned proprietary components, in just about one day if you sum up the hours.
How to change the current query string URL in NextJS v13 with next/navigation
December 9, 2022
React, JavaScript
At the time of writing, I don't know if this is the optimal way, but after some trial and error, I got it working.
This example demonstrates a hook that gives you the current value of ?view=... (or a default) and a function you can call to change it, so that ?view=before becomes ?view=after.
In NextJS v13 with the pages directory:
import { useRouter } from "next/router";
type Options = "buttons" | "table";
export function useNamesView() {
const KEY = "view";
const DEFAULT_NAMES_VIEW = "buttons";
const router = useRouter();
let namesView: Options = DEFAULT_NAMES_VIEW;
const raw = router.query[KEY];
const value = Array.isArray(raw) ? raw[0] : raw;
if (value === "buttons" || value === "table") {
namesView = value;
}
function setNamesView(value: Options) {
const [asPathRoot, asPathQuery = ""] = router.asPath.split("?");
const params = new URLSearchParams(asPathQuery);
params.set(KEY, value);
const asPath = `${asPathRoot}?${params.toString()}`;
router.replace(asPath, asPath, { shallow: true });
}
return { namesView, setNamesView };
}
In NextJS v13 with the app directory:
import { useRouter, useSearchParams, usePathname } from "next/navigation";
type Options = "buttons" | "table";
export function useNamesView() {
const KEY = "view";
const DEFAULT_NAMES_VIEW = "buttons";
const router = useRouter();
const searchParams = useSearchParams();
const pathname = usePathname();
let namesView: Options = DEFAULT_NAMES_VIEW;
const value = searchParams.get(KEY);
if (value === "buttons" || value === "table") {
namesView = value;
}
function setNamesView(value: Options) {
const params = new URLSearchParams(searchParams);
params.set(KEY, value);
router.replace(`${pathname}?${params}`);
}
return { namesView, setNamesView };
}
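Both versions share the same validation step. Isolated into a hypothetical helper (not part of the original hooks), the narrowing looks like this:

```typescript
type Options = "buttons" | "table";
const DEFAULT_NAMES_VIEW: Options = "buttons";

// Narrow an untrusted query-string value down to the Options union,
// falling back to the default for anything unexpected (including null).
export function parseView(raw: string | null): Options {
  return raw === "buttons" || raw === "table" ? raw : DEFAULT_NAMES_VIEW;
}
```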
The trick is that you only want to change one query string value and respect whatever was there before. So if the existing URL was /page?foo=bar and you want that to become /page?foo=bar&and=also, you have to consume the existing query string, and you do that with:
const searchParams = useSearchParams();
...
const params = new URLSearchParams(searchParams);
params.set('and', 'also')
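Outside of React, that preserve-then-set behavior is easy to check with plain URLSearchParams (available in browsers and Node 18+):

```typescript
// Copy the existing query string, then add one key without losing the rest.
const existing = new URLSearchParams("foo=bar");
const params = new URLSearchParams(existing);
params.set("and", "also");

console.log(params.toString()); // "foo=bar&and=also"
```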
Make your NextJS site 10-100x faster with Express caching
February 18, 2022
React, Node, Nginx, JavaScript
UPDATE: Feb 21, 2022: The original blog post didn't mention the caching of custom headers, so warm cache hits would lose Cache-Control from the cold cache misses. Code updated below.
I know, I know. The title sounds ridiculous. But it's not untrue. I managed to make my NextJS site 20x faster by allowing the Express server, which handles NextJS, to cache the output in memory. And cache invalidation is not a problem.
Layers
My personal blog is a stack of layers:
KeyCDN -> Nginx (on my server) -> Express (same server) -> NextJS (inside Express)
And inside the NextJS code, to get the actual data, it uses HTTP to talk to a local Django server to get JSON based on data stored in a PostgreSQL database.
The problems I have are as follows:
- The CDN sometimes asks for the same URL more than once when in theory you'd think it should be cached by them for a week. And if the traffic is high, my backend might get a stampeding herd of requests until the CDN has warmed up.
- It's technically possible to bypass the CDN by going straight to the origin server.
- NextJS is "slow" and the culprit is actually critters, which computes the critical CSS, inlines it, and lazy-loads the rest.
- Using Nginx to do in-memory caching (which is powerfully fast, by the way) does not allow cache purging at all (unless you buy Nginx Plus).
I really like NextJS and it's a great developer experience. There are definitely many things I don't like about it, but that's more because my site isn't SPA'y enough to benefit from much of what NextJS has to offer. By the way, I blogged about rewriting my site in NextJS last year.
Quick detour about critters
If you're reading my blog right now in a desktop browser, right-click and view source and you'll find this:
<head>
<style>
*,:after,:before{box-sizing:inherit}html{box-sizing:border-box}inpu...
... about 19k of inline CSS...
</style>
<link rel="stylesheet" href="/_next/static/css/fdcd47c7ff7e10df.css" data-n-g="" media="print" onload="this.media='all'">
<noscript><link rel="stylesheet" href="/_next/static/css/fdcd47c7ff7e10df.css"></noscript>
...
</head>
It's great for web performance because a <link rel="stylesheet" href="css.css"> is a render-blocking thing, and it makes the site feel slow on first load. I wish I didn't need this, but it comes from my lack of CSS styling skills to custom hand-code every bit of CSS; instead, I rely on a bloated CSS framework which comes as a massive kitchen sink.
To add critical CSS optimization in NextJS, you add:
experimental: { optimizeCss: true },
inside your next.config.js. Easy enough, but it slows down my site's rendering from ~80ms to ~230ms per page on my Intel MacBook.
So you see, if it weren't for this need for critical CSS inlining, NextJS would be about ~80ms per page, and that includes getting all the data via HTTP JSON for each page too.
Express caching middleware
My server.mjs looks like this (simplified):
import express from "express";
import next from "next";
import renderCaching from "./middleware/render-caching.mjs";
const app = next({ dev });
const handle = app.getRequestHandler();
app
.prepare()
.then(() => {
const server = express();
// For Gzip and Brotli compression
server.use(shrinkRay());
server.use(renderCaching);
server.use(handle);
// Use the rollbar error handler to send exceptions to your rollbar account
if (rollbar) server.use(rollbar.errorHandler());
server.listen(port, (err) => {
if (err) throw err;
console.log(`> Ready on http://localhost:${port}`);
});
})
And the middleware/render-caching.mjs
looks like this:
import express from "express";
import QuickLRU from "quick-lru";
const router = express.Router();
const cache = new QuickLRU({ maxSize: 1000 });
router.get("/*", async function renderCaching(req, res, next) {
if (
req.path.startsWith("/_next/image") ||
req.path.startsWith("/_next/static") ||
req.path.startsWith("/search")
) {
return next();
}
const key = req.url;
if (cache.has(key)) {
res.setHeader("x-middleware-cache", "hit");
const [body, headers] = cache.get(key);
Object.entries(headers).forEach(([key, value]) => {
if (key !== "x-middleware-cache") res.setHeader(key, value);
});
return res.status(200).send(body);
} else {
res.setHeader("x-middleware-cache", "miss");
}
const originalEndFunc = res.end.bind(res);
res.end = function (body) {
if (body && res.statusCode === 200) {
cache.set(key, [body, res.getHeaders()]);
// console.log(
// `HEAP AFTER CACHING ${(
// process.memoryUsage().heapUsed /
// 1024 /
// 1024
// ).toFixed(1)}MB`
// );
}
return originalEndFunc(body);
};
next();
});
export default router;
It's far from perfect and I only just coded this yesterday afternoon. My server runs a single Node process, so the max heap memory would theoretically be 1,000 × the average size of those response bodies. If you're worried about bloating your memory, just adjust the QuickLRU maxSize to something smaller.
Let's talk about your keys
In my basic version, I chose this cache key:
const key = req.url;
but that means that http://localhost:3000/foo?a=1
is different from http://localhost:3000/foo?b=2
which might be a mistake if you're certain that no rendering ever depends on a query string.
But this is totally up to you! For example, suppose that you know your site depends on the darkmode
cookie, you can do something like this:
const key = `${req.path} ${req.cookies['darkmode']==='dark'} ${req.headers['accept-language']}`
Or,
const key = req.path.startsWith('/search') ? req.url : req.path
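That last variant can be wrapped up in a tiny helper. A minimal sketch (the function name cacheKey is my own, not from the middleware above) where only /search pages vary by query string:

```javascript
// Hypothetical helper: derive the cache key from the request.
// Only /search keeps its query string; everything else keys on the path.
function cacheKey(req) {
  return req.path.startsWith("/search") ? req.url : req.path;
}

// Minimal stand-in request objects, just to illustrate:
console.log(cacheKey({ path: "/foo", url: "/foo?a=1" })); // "/foo"
console.log(cacheKey({ path: "/search", url: "/search?q=vite" })); // "/search?q=vite"
```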
Purging
As soon as I launched this code, I watched the log files, and voila!:
::ffff:127.0.0.1 [18/Feb/2022:12:59:36 +0000] GET /about HTTP/1.1 200 - - 422.356 ms
::ffff:127.0.0.1 [18/Feb/2022:12:59:43 +0000] GET /about HTTP/1.1 200 - - 1.133 ms
Cool. It works. But the problem with a simple LRU cache is that it's sticky. And it's stored inside a running process's memory. How is the Express server middleware supposed to know that the content has changed and needs a cache purge? It doesn't. It can't know. The only one that knows is my Django server which accepts the various write operations that I know are reasons to purge the cache. For example, if I approve a blog post comment or an edit to the page, it triggers the following (simplified) Python code:
import keycdn
import requests

from django.conf import settings


def cache_purge(url):
    if settings.PURGE_URL:
        print(requests.post(settings.PURGE_URL, json={
            "pathnames": [url]
        }, headers={
            "Authorization": f"Bearer {settings.PURGE_SECRET}"
        }))
    if settings.KEYCDN_API_KEY:
        api = keycdn.Api(settings.KEYCDN_API_KEY)
        print(api.delete(
            f"zones/purgeurl/{settings.KEYCDN_ZONE_ID}.json",
            {"urls": [url]}
        ))
Now, let's go back to the simplified middleware/render-caching.mjs
and look at how we can purge from the LRU over HTTP POST:
const cache = new QuickLRU({ maxSize: 1000 })
router.get("/*", async function renderCaching(req, res, next) {
// ... Same as above
});
router.post("/__purge__", async function purgeCache(req, res, next) {
const { body } = req;
const { pathnames } = body;
try {
validatePathnames(pathnames)
} catch (err) {
return res.status(400).send(err.toString());
}
const bearer = req.headers.authorization || "";
const token = bearer.replace("Bearer", "").trim();
if (token !== PURGE_SECRET) {
return res.status(403).send("Forbidden");
}
const purged = [];
for (const pathname of pathnames) {
for (const key of cache.keys()) {
if (
key === pathname ||
(key.startsWith("/_next/data/") && key.includes(`${pathname}.json`))
) {
cache.delete(key);
purged.push(key);
}
}
}
res.json({ purged });
});
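The validatePathnames() function used above was left out of the snippet. Here's a minimal sketch of what it might look like (this exact implementation is my assumption, not the real one):

```javascript
// Hypothetical sketch of validatePathnames(): reject anything that isn't
// a non-empty array of absolute pathnames.
function validatePathnames(pathnames) {
  if (!Array.isArray(pathnames) || pathnames.length === 0) {
    throw new Error("pathnames must be a non-empty array");
  }
  for (const pathname of pathnames) {
    if (typeof pathname !== "string" || !pathname.startsWith("/")) {
      throw new Error(`invalid pathname: ${pathname}`);
    }
  }
}
```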
What's cool about that is that it can purge both the regular HTML URL and the corresponding _next/data/
URLs. When NextJS hijacks an <a>
click, it requests the page's data in JSON form and uses the existing React components to re-render the page with that data. So GET /_next/data/RzG7kh1I6ZEmOAPWpdA7g/en/plog/nextjs-faster-with-express-caching.json?oid=nextjs-faster-with-express-caching
serves the same content as GET /plog/nextjs-faster-with-express-caching
. It's worth pointing out that the same piece of content can be represented by different URLs.
Another thing to point out is that this caching is specifically about individual pages. In my blog, for example, the homepage is a mix of the 10 latest entries. But I know this within my Django server so when a particular blog post has been updated, for some reason, I actually send out a bunch of different URLs to the purge where I know its content will be included. It's not perfect but it works pretty well.
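In other words, the Django side expands one changed blog post into all the pathnames whose cached HTML might include it. A rough sketch of that idea (the function and its rules are hypothetical illustrations; my real logic lives in Django):

```javascript
// Hypothetical sketch: map one updated blog post to every cached page
// whose rendered HTML might include it.
function relatedPathnames(postOid, { onHomepage = false } = {}) {
  const pathnames = [`/plog/${postOid}`];
  if (onHomepage) {
    // The homepage lists the 10 latest entries, so purge it too.
    pathnames.push("/");
  }
  return pathnames;
}

console.log(relatedPathnames("nextjs-faster-with-express-caching", { onHomepage: true }));
// [ '/plog/nextjs-faster-with-express-caching', '/' ]
```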
Conclusion
The hardest part about caching is cache invalidation. It's usually the crux of the whole problem. Sometimes, you're so desperate to survive a stampeding-herd problem that you don't care about cache invalidation, and as a compromise you just set a short caching time-to-live.
But I think the most important tenet of good caching is: have full control over it. I.e., don't take it lightly. Build something you can fully understand and change to work exactly the way your specific business needs demand.
This idea of letting Express cache responses in memory isn't new, but I didn't find any decent third-party solution on npm that I liked or felt fully comfortable with. And I needed something tailored exactly to my specific setup.
Go forth and try it out on your own site! Not all sites or apps need this at all, but if you do, I hope I have inspired a foundation of a solution.