How to handle success and failure in @tanstack/react-query useQuery hook

September 16, 2024
0 comments React, JavaScript

In essence, @tanstack/react-query is a fancy way of fetching data, on the client, in a React app.

Simplified primer by example; instead of...


function MyComponent() {
  const [userInfo, setUserInfo] = useState(null)
  useEffect(() => {
    fetch('/api/user/info')
    .then(response => response.json())
    .then(data => {
      setUserInfo(data)
    })
  }, [])

  return <div>Username: {userInfo ? userInfo.user_name : <em>not yet known</em>}</div>
}

you now do this:


import { useQuery } from '@tanstack/react-query'

function MyComponent() {
  const {data} = useQuery({
    queryKey: ['userinfo'],
    queryFn: async () => {
      const response = await fetch('/api/user/info')
      return response.json()
    }
  })

  return <div>Username: {data ? data.user_name : <em>not yet known</em>}</div>
}

That's a decent start, but...

Error handling is a thing. Several things can go wrong:

  1. Complete network failure during the fetch(...)
  2. Server being (temporarily) down
  3. Not authorized
  4. Backend URL not found
  5. Backend URL found but wrong parameters

Neither of the code snippets above deals with all of these things.

By default, useQuery will retry if any error is thrown inside that queryFn call.

Queries that fail are silently retried 3 times, with exponential backoff delay before capturing and displaying an error to the UI.

From the documentation about important defaults

For example, if the server responds with a 403, the response body might not be of content-type JSON. So response.json() might fail and throw, and then useQuery will retry. You might be tempted to do this:


    queryFn: async () => {
      const response = await fetch("/api/user/info")
+     if (!response.ok) {
+        throw new Error(`Fetching data failed with a ${response.status} from the server`)
+     }
      return response.json()
    }

The problem with this is that useQuery still treats it as an error that should be retried. Sometimes that's the right thing to do, sometimes it's pointless.
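When retrying is pointless, you can take control yourself: useQuery accepts a retry option, which can be a function of the failure count and the thrown error. Here's a hedged sketch; carrying the HTTP status on a status property of the error is my own convention, not something the library does for you:

```javascript
// Decide whether a failed query should be retried.
// Assumes (my convention) that errors thrown from the queryFn carry
// the HTTP status code on a `status` property.
function shouldRetryQuery(failureCount, error) {
  if (failureCount >= 3) {
    return false // give up after 3 attempts, like the default
  }
  const status = error.status
  if (status >= 400 && status < 500) {
    return false // 4xx: the request itself is wrong; retrying won't help
  }
  return true // 5xx or network errors (no status): possibly transient
}

// Wiring it into the hook:
// useQuery({
//   queryKey: ['userinfo'],
//   queryFn,
//   retry: shouldRetryQuery,
// })
```

With something like this, a 503 gets retried a couple of times, but a 400 or 403 fails immediately.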

About retries

The default implementation in @tanstack/react-query can be seen here: packages/query-core/src/retryer.ts

In a gross simplification, it works like this:


function run() {
  const promise = config.fn()
  Promise.resolve(promise)
    .then(resolve)
    .catch(async (error) => {
      if (shouldRetry(config)) {
        await sleep(config.sleepTime())
        run()
      } else {
        reject(error)
      }
    })
}
I'm not being accurate here, but the point is that it's quite simple. The config has stuff like a count of how many times it has retried previously, whether it should retry (decided dynamically), and how long it should sleep.
The point is that it doesn't care what the nature of the error was. It doesn't test if the error was of type Response or if error.message === "ECONNRESET" or something like that.

So in a sense, it's a "dumping ground" for any error thrown. If you look into the response within your query function, don't like it, and throw a new error, it will retry. And that might not be smart.

In simple terms; you should retry if retrying is likely to yield a different result. For example, if the server responded with a 503 Service Unavailable it's quite possible that if you just try again, a little later, it'll work.

What's wrong is when you get something like a 400 Bad Request response. Then, trying again won't help.
Another case where retrying is wrong is when your own code throws an error within the query function. For example, ...


    queryFn: async () => {
      const response = await fetch('/api/user/info')
      const userInfo = await response.json()
      await doSomethingComplexThatMightFail(userInfo)
      return userInfo
    }

So, what's the harm?

Suppose that you have something basic like this:


    queryFn: async () => {
      const response = await fetch("/api/user/info")
      if (!response.ok) {
        throw new Error(`Fetching data failed with a ${response.status} from the server`)
      }
      return response.json()
    }

and you use it like this:


function MyComponent() {
  const {data, error} = useQuery(...)

  if (error) {
    return <div>An error happened. Reload the page mayhaps?</div>
  }
  if (!data) {
    return <div>Loading...</div>
  }
  return <AboutUser info={data.userInfo}/>
}

then I guess it's fine to not be particularly "refined" about the error itself. It failed; refreshing the page might just work.

If not an error, then what?

The pattern I prefer is: if there is a problem with the response, return it keyed as an error. Let's use TypeScript this time:


// THIS IS THE NAIVE APPROACH

type ServerResponse = {
  user: {
    first_name: string
    last_name: string
  }
}

...

function MyComponent() {
  const {data, error, isPending} = useQuery({
    queryKey: ['userinfo'],
    queryFn: async () => {
      const response = await fetch('/api/user/info')
      if (!response.ok) {
         throw new Error(`Bad response ${response.status}`)
      }
      const user = await response.json()
      return user
    }
  })

  return <div>Username: {data ? data.user.first_name : <em>not yet known</em>}</div>
}

A better approach is to allow queryFn to return what it would 99% of the time, but also return an error, like this:


// THIS IS THE MORE REFINED APPROACH

type ServerResponse = {
  user?: {
    first_name: string
    last_name: string
  }
  errorCode?: number
}

...

function MyComponent() {
  const {data, error, isPending} = useQuery({
    queryKey: ['userinfo'],
    queryFn: async () => {
      const response = await fetch('/api/user/info')
      if (response.status >= 500) {
         // This will trigger useQuery to retry
         throw new Error(`Bad response ${response.status}`)
      }
      if (response.status >= 400) {
         return {errorCode: response.status}
      }
      const user = await response.json()
      return {user}
    }
  })


  if (data?.errorCode) {
     if (data.errorCode === 403) {
        return <p>You're not authorized. <a href="/login">Log in here</a></p>
     }
     throw new Error(`Unexpected response from the API (${data.errorCode})`)
  }
  return <div>
     Username: {data?.user ? data.user.first_name : <em>not yet known</em>}
  </div>
}

It's just an example, but the point is that you treat "problems" as valid results. That way you avoid throwing errors inside the query function, which would trigger retries.
And in this example, it can potentially throw an error in the rendering phase, outside the hook, which means it needs your attention (and does not deserve a retry).

What's counter-intuitive about this is that your backend probably doesn't return the error optionally with the data. Your backend probably looks like this:


# Example, Python, backend JSON endpoint 

def user_info_view(request):
    return JsonResponse({
        "first_name": request.user.first, 
        "last_name": request.user.last
    })

So, if that's how the backend responds, it'd be tempting to model the fetched data to that exact shape, but as per my example, you re-wrap it under a new key.

Conclusion

The shape of the data ultimately coming out of a useQuery function doesn't have to map one-to-one to how the server sends it. The advantage is that, back in the rendering phase of your component, you get a chance to handle other types of errors that aren't retriable.

swr compared to @tanstack/react-query

August 30, 2024
0 comments JavaScript

I have a single-page-app built with React and Vite. It fetches data entirely on the client-side after it has started up. So there's no server at play other than the server that hosts the static assets.
Until yesterday, the app was using swr to fetch data; now it's using @tanstack/react-query instead. Why? Because I'm curious. This blog post attempts to jot down some of the differences and contrasts.

If you want to jump straight to the port diff, look at this commit: https://github.com/peterbe/analytics-peterbecom/pull/47/commits/eac4f873303bfb493320b0b4aa0f5f6ba133001a

Bundle phobia

When @tanstack/react-query first came out, back in the day when it was called React Query, I looked into it and immediately got scared by how large it was. I think they've done some great work to remedy that, because it's now not much larger than swr. Perhaps it's also because swr, since way back when, has grown too.

When I run npm run build it spits this out:

Before - with swr


vite v5.4.2 building for production...
✓ 1590 modules transformed.
dist/index.html                     0.76 kB │ gzip:   0.43 kB
dist/assets/index-CP2W9Ga1.css      0.41 kB │ gzip:   0.24 kB
dist/assets/index-B8iHmcGS.css    196.05 kB │ gzip:  28.94 kB
dist/assets/query-CvwMzO21.js      51.16 kB │ gzip:  18.61 kB
dist/assets/index-ByNQKZOZ.js      79.45 kB │ gzip:  22.69 kB
dist/assets/index-DnpwskLg.js     225.19 kB │ gzip:  72.76 kB
dist/assets/BarChart-CwU8AXdH.js  397.99 kB │ gzip: 112.41 kB

❯ du -sh dist/assets
940K    dist/assets

After - with @tanstack/react-query


vite v5.4.2 building for production...
✓ 1628 modules transformed.
dist/index.html                     0.76 kB │ gzip:   0.43 kB
dist/assets/index-CP2W9Ga1.css      0.41 kB │ gzip:   0.24 kB
dist/assets/index-B8iHmcGS.css    196.05 kB │ gzip:  28.94 kB
dist/assets/query-CqpLJXAS.js      51.44 kB │ gzip:  18.71 kB
dist/assets/index-BPszumoe.js      77.52 kB │ gzip:  22.14 kB
dist/assets/index-DjC9VFZg.js     250.65 kB │ gzip:  78.88 kB
dist/assets/BarChart-B-D1cgEG.js  400.24 kB │ gzip: 112.94 kB


❯ du -sh dist/assets
964K    dist/assets

In this case, it grew the total JS bundle by 26KB. As gzipped, it's 262.28 - 256.08 = 6.2 KB larger

Provider necessary

They work very similarly, with small semantic differences (and of course features!), but one important difference is that when you use the useQuery hook (from import { useQuery } from "@tanstack/react-query") you first have to wrap the component in a provider. Like this:


import { QueryClient, QueryClientProvider } from "@tanstack/react-query"

import { Nav } from "./components/simple-nav"
import { Routes } from "./routes"

const queryClient = new QueryClient()

export default function App() {
  return (
    <ThemeProvider>
      <QueryClientProvider client={queryClient}>
        <Nav />
        <Routes />
      </QueryClientProvider>
    </ThemeProvider>
  )
}

You don't have to do that when you use useSWR (from import useSWR from "swr"). I think I know the why, but from a developer-experience point of view, it's quite nice with useSWR that you don't need that provider stuff.

Basic use

Here's the diff for my app: https://github.com/peterbe/analytics-peterbecom/pull/47/commits/eac4f873303bfb493320b0b4aa0f5f6ba133001a that had the commit message "Port from swr to @tanstack/react-query"

But to avoid having to read that big diff, here's how you use useSWR:


import useSWR from "swr"

function MyComponent() {

  const {data, error, isLoading} = useSWR<QueryResult>(
    API_URL, 
    async (url: string) => {
      const response = await fetch(url)
      if (!response.ok) {
        throw new Error(`${response.status} on ${response.url}`)
      }
      return response.json()
    }
  )

  return <div>
    {error && <p>Error happened <code>{error.message}</code></p>}

    {isLoading && <p>Loading...</p>}

    {data && <p>Meaning of life is: <b>{data.meaning_of_life}</b></p>}
  </div>
}

The equivalent using useQuery looks like this:


import { useQuery } from "@tanstack/react-query"

function MyComponent() {

  const { isPending, error, data } = useQuery<QueryResult>({
    queryKey: [API_URL],
    queryFn: async () => {
      const response = await fetch(API_URL)
      if (!response.ok) {
        throw new Error(`${response.status} on ${response.url}`)
      }
      return response.json()
    }
  })

  return <div>
    {error && <p>Error happened <code>{error.message}</code></p>}

    {isPending && <p>Loading...</p>}

    {data && <p>Meaning of life is: <b>{data.meaning_of_life}</b></p>}
  </div>
}

Feature comparisons


The TanStack Query website has a more thorough comparison: https://tanstack.com/query/latest/docs/framework/react/comparison
What's clear is: TanStack Query has more features

What you need to consider is: do you need all these features at the expense of a larger JS bundle size? And if size isn't a concern, probably go for TanStack Query, based on the simple fact that your needs might evolve and you might want more powerful functionality.

To not use the hook

One lovely and simple feature about useSWR is that it gets "disabled" if you pass it a falsy URL. Consider this:


import useSWR from "swr"

function MyComponent() {

  const [apiUrl, setApiUrl] = useState<string | null>(null)

  const {data, error, isLoading} = useSWR<QueryResult>(
    apiUrl, 
    async (url: string) => {
      const response = await fetch(url)
      if (!response.ok) {
        throw new Error(`${response.status} on ${response.url}`)
      }
      return response.json()
    }
  )

  if (!apiUrl) {
    return <div>
      <p>Please select your API:</p>
      <SelectAPIComponent onChange={(url: string) => {
        setApiUrl(url)
      }}/>
    </div>
  }

  return <div>
    {error && <p>Error happened <code>{error.message}</code></p>}

    {isLoading && <p>Loading...</p>}

    {data && <p>Meaning of life is: <b>{data.meaning_of_life}</b></p>}
  </div>
}

It's practical and neat. It's not that different with useQuery, except that the queryFn will still be called. You just need to remember to return null.


import { useQuery } from "@tanstack/react-query"

function MyComponent() {

  const [apiUrl, setApiUrl] = useState<string | null>(null)

  const { isPending, error, data } = useQuery<QueryResult>({
    queryKey: [apiUrl],
    queryFn: async () => {

      // NOTE these 3 lines
      if (!apiUrl) {
         return null
      }

      const response = await fetch(apiUrl)
      if (!response.ok) {
        throw new Error(`${response.status} on ${response.url}`)
      }
      return response.json()
    }
  })

  if (!apiUrl) {
    return <div>
      <p>Please select your API:</p>
      <SelectAPIComponent onChange={(url: string) => {
        setApiUrl(url)
      }}/>
    </div>
  }

  return <div>
    {error && <p>Error happened <code>{error.message}</code></p>}

    {isPending && <p>Loading...</p>}

    {data && <p>Meaning of life is: <b>{data.meaning_of_life}</b></p>}
  </div>
}

In both of these cases, the type (if you hover over it) of that data variable becomes QueryResult | undefined.
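Worth mentioning: useQuery also has a built-in enabled option that prevents the queryFn from being called at all while it's false, so you can skip the return-null dance. A sketch of the same hook, under that assumption:

```javascript
const { isPending, error, data } = useQuery({
  queryKey: [apiUrl],
  enabled: Boolean(apiUrl), // the queryFn won't run until apiUrl is truthy
  queryFn: async () => {
    const response = await fetch(apiUrl)
    if (!response.ok) {
      throw new Error(`${response.status} on ${response.url}`)
    }
    return response.json()
  },
})
```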

Pending vs Loading vs Fetching

In simple terms, with useSWR it's called isLoading and with useQuery it's called isPending.

Since both hooks automatically re-fetch data when the window gets focus back (thanks to the Page Visibility API), while that re-fetch is happening the flag is called isValidating with useSWR and isFetching with useQuery.
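To make the distinction concrete, here's a hedged sketch of a useQuery component that shows stale data while a background re-fetch is in flight (AboutUser and fetchUserInfo are assumed, not from the library):

```javascript
function MyComponent() {
  const { data, isPending, isFetching } = useQuery({
    queryKey: ['userinfo'],
    queryFn: fetchUserInfo, // assumed query function
  })

  if (isPending) {
    // No data at all yet (the very first fetch)
    return <p>Loading...</p>
  }
  return (
    <div>
      {/* A background re-fetch (e.g. on window re-focus) is in flight,
          but we still have (possibly stale) data to show */}
      {isFetching && <small>Refreshing...</small>}
      <AboutUser info={data} />
    </div>
  )
}
```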

Persistent storage

In both versions of my app, I was using localStorage to keep a default copy of the fetched data. This makes it so that when you load the page initially, it 1) populates from localStorage while waiting for 2) the first fetch response.

With useSWR it feels a bit like an afterthought to add it, and you don't get a ton of control. How I solved it with useSWR was to not touch anything in the useSWR hook, but to wrap the parent component (my <App/> component) in a provider that looked like this:


// main.tsx

import React from "react"
import ReactDOM from "react-dom/client"
import { SWRConfig } from "swr"

import App from "./App.tsx"
import { localStorageProvider } from "./swr-localstorage-cache-provider.ts"

ReactDOM.createRoot(document.getElementById("root")!).render(
  <React.StrictMode>
    <SWRConfig value={{ provider: localStorageProvider }}>
      <App />
    </SWRConfig>
  </React.StrictMode>,
)

// swr-localstorage-cache-provider.ts

import type { Cache } from "swr"

const KEY = "analytics-swr-cache-provider"

export function localStorageProvider() {
  let map = new Map<string, object>()
  try {
    map = new Map(JSON.parse(localStorage.getItem(KEY) || "[]"))
  } catch (e) {
    console.warn("Failed to load cache from localStorage", e)
  }
  window.addEventListener("beforeunload", () => {
    const appCache = JSON.stringify(Array.from(map.entries()))
    localStorage.setItem(KEY, appCache)
  })

  return map as Cache
}

With @tanstack/react-query it feels like it was built from the ground up with this stuff in mind. A neat thing is that the persistence stuff is a separate plugin, so you don't need to make the bundle larger if you don't need persistent storage. Here's what the equivalent solution looks like with @tanstack/react-query:

First,


npm install @tanstack/query-sync-storage-persister @tanstack/react-query-persist-client

-import { QueryClient, QueryClientProvider } from "@tanstack/react-query"
+import { createSyncStoragePersister } from "@tanstack/query-sync-storage-persister"
+import { QueryClient } from "@tanstack/react-query"
+import { PersistQueryClientProvider } from "@tanstack/react-query-persist-client"

import { Nav } from "./components/simple-nav"
import { Routes } from "./routes"

const queryClient = new QueryClient()

+const persister = createSyncStoragePersister({
+  storage: window.localStorage,
+})

export default function App() {
  return (
    <MantineProvider defaultColorScheme={"light"}>
-      <QueryClientProvider client={queryClient}>
+      <PersistQueryClientProvider
+        client={queryClient}
+        persistOptions={{ persister }}
+      >
        <Nav />
        <Routes />
-      </QueryClientProvider>
+      </PersistQueryClientProvider>
    </MantineProvider>
  )
}

An important detail that I'm glossing over here is that, in my application, I actually wanted only some of the useQuery hooks to be backed by a persistent client. And I was able to do that. My App.tsx used the regular <QueryClientProvider ...> provider, but deeper in the tree of components and routes and stuff, I went in with the <PersistQueryClientProvider ...> and it just worked.

The net effect is that when you start up your app, it almost immediately has some data in there, but it starts fetching fresh new data from the backend and that triggers the isFetching property to be true.

Other differences

Given that this post is just meant to be an introductory skim of the differences, note that I haven't talked about "mutations".
Both frameworks support them. A mutation is basically like a query, but you instead use it with a fetch(url, {method: 'POST', body: ...}) to POST data from the client back to the server.
I haven't explored mutations much yet. At least not enough to make a blog post comparison.

One killer feature that @tanstack/react-query has that swr does not is "garbage collection" and "stale time".
If you have dynamic API endpoints that you fetch a lot from, naively, useSWR will cache them all in the browser memory, just in case the same URL gets re-used. But for certain apps, that might be a lot of different fetches and a lot of different caching keys. The URLs themselves are tiny, but the responses might be large, so if you have too many lying around over a period of time, it could cause too much memory usage by that browser tab. @tanstack/react-query has "garbage collection" enabled by default, set to 5 minutes. That's neat!
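Both knobs are configurable per query. In @tanstack/react-query v5 the garbage-collection window is called gcTime (it was cacheTime before v5). A sketch, where fetchSearchResults is a hypothetical fetcher:

```javascript
const { data } = useQuery({
  queryKey: ['search', searchTerm],
  queryFn: () => fetchSearchResults(searchTerm), // hypothetical fetcher
  staleTime: 1000 * 30, // treat cached data as fresh for 30 seconds
  gcTime: 1000 * 60, // drop unused cache entries after 1 minute (default: 5 minutes)
})
```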

In summary

Use swr if your use case is minimal, bundle size is critical, and you don't have grand plans for fancy features that @tanstack/react-query offers.

Use @tanstack/react-query if you have more complex needs around offline/online, persistent caching, a large number of dynamic queries, and perhaps more demanding needs around offline mutations.

Add a lazy getter, that is a function call, on an object in JavaScript

August 28, 2024
0 comments JavaScript

Did you know you can attach a key to a JavaScript object that is actually a callable?

For example:


const data = await response.json()

Object.defineProperty(data, 'magic', {
  get: () => {
    return Math.random()
  },
})

console.log({magic: data.magic})

will print:

{ magic: 0.6778944803790492 }

And suppose you want it memoized, you can use this:


const data = await response.json()

let magic
Object.defineProperty(data, 'magic', {
  get: () => {
    return magic ??= Math.random()
  },
})

console.log({magic: data.magic})
console.log({magic: data.magic})
console.log({magic: data.magic})

will print:

{ magic: 0.21367035961590308 }
{ magic: 0.21367035961590308 }
{ magic: 0.21367035961590308 }

Note that it doesn't allow setting. If you do this:


Object.defineProperty(data, 'magic', {
  get: () => {
    return Math.random()
  },
  set: () => {
    throw new Error('Nope!')
  },
})

data.magic = 42

it will print:

Error: Nope!

One thing that bit me today, and much the reason why I'm writing this, is that I had this:


async function getData() {
  const response = await get()
  const data = await response.json()

  Object.defineProperty(data, 'magic', {
    get: () => {
      return Math.random()
    },
  })

  return {...data}
}


// Example use:

const {userName, magic} = await getData()
console.log({userName, magic})

// Will print
// { userName: 'peter', magic: undefined }

This does not work because the magic property is not enumerable. To fix that, make this edit:


  Object.defineProperty(data, 'magic', {
    get: () => {
      return Math.random()
    },
+   enumerable: true,
  })

Now, the same code as above, when you run console.log({userName, magic}) it will print:

{ userName: 'peter', magic: 0.23560450431932733 }
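As an aside, the same lazy, memoized getter can be declared inline with the native get syntax in an object literal. Accessor properties created that way are enumerable by default, so they survive spreading without any extra flags:

```javascript
// A lazy, memoized getter declared with the native `get` syntax.
// Accessor properties created in object literals are enumerable by
// default, so {...data} picks `magic` up (as a plain, computed value).
let cached
const data = {
  userName: 'peter',
  get magic() {
    return (cached ??= Math.random())
  },
}

const copy = { ...data }
console.log(typeof copy.magic) // 'number' — the getter ran during the spread
console.log(data.magic === data.magic) // true — memoized
```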

Wouter + Vite is the new create-react-app, and I love it

August 16, 2024
0 comments React, Node, Bun

If you've done React for a while, you most likely remember Create React App. It was/is a prepared config that combines React with webpack and eslint. Essentially, you get immediate access to making apps with React in a local dev server, and it produces a complete build artefact that you can upload to a web server to host your SPA (Single Page App). I loved it and blogged much about it in the distant past.

The create-react-app project died, and what came onto the scene were tools that solved React rendering configs with SSR (Server Side Rendering). In particular, we now have great frameworks like Gatsby, Next.js, Remix, and Astro. They're great, especially if you want to use server-side rendering with code-splitting by route and that sweet TypeScript integration between your server (fs, databases, secrets) and your rendering components.

However, I still think there is a place for a super light and simple SPA tool that only adds routing, hot module reloading, and build artefacts. For that, I love Vite + Wouter. At least for now :)
What's so great about it? Speed.

Quickstart


❯ npm create vite@latest my-vite-react-ts-app -- --template react-ts

...

Done. Now run:

  cd my-vite-react-ts-app
  npm install
  npm run dev

A single-page app needs routing, so let's add wouter to it and add it to the app entry point:

❯ cd my-vite-react-ts-app
❯ npm install && npm install wouter

And edit the default created src/App.tsx to something like this:


import "./App.css";
import { Routes } from "./routes";

function App() {
  // You might need other wrapping components such as theming providers, etc.
  return <Routes />;
}

export default App;

And the src/routes.tsx:


import { Route, Router, Switch } from "wouter";

export function Routes() {
  return (
    <Router>
      <Switch>
        <Route path="/" component={Home} />
        <Route>
          <Custom404 />
        </Route>
      </Switch>
    </Router>
  );
}

function Custom404() {
  return <div>Page not found</div>;
}

function Home() {
  return <div>Hello World</div>;
}

That's it! Let's test it with npm run dev and open http://localhost:5173


❯ npm run dev


  VITE v5.4.1  ready in 97 ms

  ➜  Local:   http://localhost:5173/
  ➜  Network: use --host to expose
  ➜  press h + enter to show help


Hot module reloading works as expected.
Let's build it now:


❯ npm run build

vite v5.4.1 building for production...
✓ 42 modules transformed.
dist/index.html                   0.46 kB │ gzip:  0.30 kB
dist/assets/index-DiwrgTda.css    1.39 kB │ gzip:  0.72 kB
dist/assets/index-Ba6YWXXy.js   148.08 kB │ gzip: 48.29 kB
✓ built in 340ms

It says "built in 340ms" but that doesn't include the total time of the whole npm run build execution. To get the total time, use the time command:


❯ /usr/bin/time npm run build

...

✓ built in 340ms
        1.14 real         2.28 user         0.13 sys

So about 1.1 seconds as the wall clock goes.

Add basic routing

As you can imagine, adding more routes that point to client-side components is as simple as:


import { Route, Router, Switch } from "wouter";

+import Charts from "./components/charts"

export function Routes() {
  return (
    <Router>
      <Switch>
        <Route path="/" component={Home} />
+       <Route path="/charts" component={Charts} />
        <Route>
          <Custom404 />
        </Route>
      </Switch>
    </Router>
  );
}

A stack like Vite and Wouter isn't as feature-packed as Remix or Next.js, but it doesn't have to be super optimized to be pretty near production deployment-worthy. Just one thing is missing in my view: lazy route-based code-splitting. Let's build that. Here's the new routes.tsx:


import { lazy, Suspense } from "react";
import { Route, Router, Switch } from "wouter";

type LazyComponentT = React.LazyExoticComponent<() => JSX.Element>;

function LC(Component: LazyComponentT, loadingText = "Loading") {
  return () => {
    return (
      <Suspense fallback={<p>{loadingText}</p>}>
        <Component />
      </Suspense>
    );
  };
}

const Charts = LC(lazy(() => import("./components/charts")));

export function Routes() {
  return (
    <Router>
      <Switch>
        <Route path="/" component={Home} />
        <Route path="/charts" component={Charts} />
        <Route>
          <Custom404 />
        </Route>
      </Switch>
    </Router>
  );
}

function Custom404() {
  return <div>Page not found</div>;
}

function Home() {
  return <div>Hello World</div>;
}

If you run npm run build now, you'll see something like this:


❯ npm run build

...

dist/index.html                   0.46 kB │ gzip:  0.30 kB
dist/assets/index-DiwrgTda.css    1.39 kB │ gzip:  0.72 kB
dist/assets/charts-qo6bqIo2.js    0.12 kB │ gzip:  0.13 kB
dist/assets/index-JUI4kknP.js   149.25 kB │ gzip: 48.81 kB

In particular, there's a new .js file that is prefixed with the word charts-. How easy was that?!

Compared to Next.js

Next.js is wonderful. But it's a bit heavy. Let's build something with npx create-next-app@latest which is very similar. I.e. SPA with React, TypeScript, and route-based code-splitting.

You just have to make this change to next.config.mjs to make a SPA:


/** @type {import('next').NextConfig} */
const nextConfig = {
  output: "export",
};

export default nextConfig;

Now, npm run build will generate a directory called out which you can upload to a CDN.

But let's add another component to make the comparison fair. Something like this:


// This is src/app/charts/page.tsx

export default function Charts() {
  return <div>Charts</div>;
}

So easy! Thank you Next.js. Run npm run build again and look at the files in out/_next/static:


 out/_next/static
 ├── X6P949yoE-Ou9K46Apwab
 │   ├── _buildManifest.js
 │   └── _ssgManifest.js
 ├── chunks
 │   ├── 23-bc0704c1190bca24.js
 │   ├── app
 │   │   ├── _not-found
 │   │   │   └── page-05886c10710171db.js
+│   │   ├── charts
+│   │   │   └── page-3bd6b64ccd38c64e.js
 │   │   ├── layout-3e7df178500d1502.js
 │   │   └── page-121d7018024c0545.js
 │   ├── fd9d1056-2821b0f0cabcd8bd.js
 │   ├── framework-f66176bb897dc684.js
 │   ├── main-app-00df50afc5f6514d.js
 │   ├── main-c3a7d74832265c9f.js
 │   ├── pages
 │   │   ├── _app-6a626577ffa902a4.js
 │   │   └── _error-1be831200e60c5c0.js
 │   ├── polyfills-78c92fac7aa8fdd8.js
 │   └── webpack-879f858537244e02.js
 └── css
     └── 876d048b5dab7c28.css

(technically I cheated here, because adding another route changes the hashes of some of the other chunks, but you get the point)

Building it with npm run build takes...


❯ /usr/bin/time npm run build


> my-nextjs-ts-app@0.1.0 build
> next build

  ▲ Next.js 14.2.5

   Creating an optimized production build ...
 ✓ Compiled successfully
 ✓ Linting and checking validity of types
 ✓ Collecting page data
 ✓ Generating static pages (6/6)
 ✓ Collecting build traces
 ✓ Finalizing page optimization

Route (app)                              Size     First Load JS
┌ ○ /                                    140 B          87.3 kB
├ ○ /_not-found                          871 B            88 kB
└ ○ /charts                              140 B          87.3 kB
+ First Load JS shared by all            87.1 kB
  ├ chunks/23-bc0704c1190bca24.js        31.6 kB
  ├ chunks/fd9d1056-2821b0f0cabcd8bd.js  53.6 kB
  └ other shared chunks (total)          1.86 kB


○  (Static)  prerendered as static content

        6.26 real         9.89 user         1.42 sys

So about 6+ seconds.

Comparing the time npm run build takes, between Next.js and Vite+Wouter looks like this:


❯ hyperfine "cd my-vite-react-ts-app && npm run build" "cd my-nextjs-ts-app && npm run build"

...


Summary
  cd my-vite-react-ts-app && npm run build ran
    5.90 ± 0.63 times faster than cd my-nextjs-ts-app && npm run build

In other words, the Vite+Wouter SPA is 6x faster at building than the equivalent Next.js SPA.

Summary

The npm run build time isn't massively important. It's not the kind of operation you do super often and oftentimes it's something you can kick off and walk away from in a sense. Kinda.

Where it matters, to me, is that "instantness" feeling you get when you type npm run dev and you can (almost) immediately start to work. It makes for happiness.
To properly compare that experience between Vite+Wouter vs. Next.js, I wrote a hacky script which spawns npm run dev in the background and then, every 10ms, checks if it can successfully HTTP GET http://localhost:5173 (or http://localhost:3000).
When I run that a couple of times, the numbers I get are:

  • Vite + Wouter: Getting 200 OK: 266.6ms
  • Next.js: Getting 200 OK: 2.414s

And that matters; to me! Tease me all you like for short attention span, but I often have a thought that I want to code and that flow gets disrupted if there's a sudden pause before I can test the dev server.

Bonus: Bun

If you know me, you know I'm a big fan of Bun and have always thought one of its coolest features is its ability to start up quickly.

Out of curiosity, I used bun create vite and created a replica of the Vite+Wouter app, but using bun (v1.1.22) instead of node (v20.16). Comparing their build times...

❯ hyperfine "cd my-vite-react-ts-app && npm run build" "cd my-vite-bun-react-ts-app && bun run build"

...

Summary
  cd my-vite-bun-react-ts-app && bun run build ran
    1.53 ± 1.34 times faster than cd my-vite-react-ts-app && npm run build

I.e. Using bun to build the Vite+Wouter app is 1.53 times faster than using node.

Default environment variables in Bash

August 12, 2024
0 comments Bash

For many of you, this is so basic that it's embarrassing. And maybe to me too. But the truth is, I often forget the syntax. By mentioning it here, hopefully I'll memorize it better.

To set a default environment variable, consider this example Bash program:


#!/bin/bash

: "${PORT:=8000}"

echo "Port number: $PORT"

When you run it, it defaults to the value 8000 (a string)


❯ bash dummy.sh
Port number: 8000

And if you override the default:


❯ PORT=1234 bash dummy.sh
Port number: 1234

Note, you don't have to "define" the default on its own line. You can simplify it by defining the default where you use the environment variable. E.g.:


#!/bin/bash

echo "Port number: ${PORT:=8000}"

This works the same.
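A closely related spelling worth knowing: ${PORT:-8000} substitutes the default without assigning it, whereas ${PORT:=8000} also assigns it. A small demo:

```shell
#!/bin/bash

# ${VAR:-default} substitutes a default but leaves VAR unset
echo "Port number: ${PORT:-8000}"
echo "PORT is still: '${PORT}'"

# ${VAR:=default} substitutes AND assigns the default
echo "Port number: ${PORT:=8000}"
echo "PORT is now: '${PORT}'"
```

With PORT unset, the first two lines print the default but show PORT as empty; after the := expansion, PORT is actually set to 8000.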

Comparing Deno vs Node vs Bun

August 5, 2024
0 comments Bun, JavaScript

This is an unscientific comparison update from previous blog posts that compared Node and Bun, but didn't compare with Deno.

Temperature conversion

In Converting Celsius to Fahrenheit round-up, I compared a super simple script that just prints a couple of lines of text after some basic computation. If you include Deno in that run you get:


❯ hyperfine --shell=none --warmup 3 "bun run conversion.js" "node conversion.js" "deno run conversion.js"
Benchmark 1: bun run conversion.js
  Time (mean ± σ):      22.2 ms ±   2.1 ms    [User: 12.4 ms, System: 8.6 ms]
  Range (min … max):    20.6 ms …  36.0 ms    136 runs

  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.

...

Summary
  bun run conversion.js ran
    1.97 ± 0.35 times faster than deno run conversion.js
    2.41 ± 0.39 times faster than node conversion.js

Note that bun run and deno run both support .ts files whereas Node needs it to be .js (unless you use something like --require @swc-node/register). So in this benchmark, I let bun run and deno run use the .js version.


❯ deno --version
deno 1.45.2 (release, x86_64-apple-darwin)
v8 12.7.224.12
typescript 5.5.2

❯ node --version
v20.14.0

❯ bun --version
1.1.21

Leibniz formula

In Leibniz formula for π in Python, JavaScript, and Ruby I wrote a simple program that computes the value of π using the Leibniz formula. It became a comparison of that code implementation in Python vs. Ruby vs. Node.

But let's redo the test with Bun and Deno too. The code was:


let sum = 0;
let estimate = 0;
let i = 0;
const epsilon = 0.0001;

while (Math.abs(estimate - Math.PI) > epsilon) {
  sum += (-1) ** i / (2 * i + 1);
  estimate = sum * 4;
  i += 1;
}
console.log(
  `After ${i} iterations, the estimate is ${estimate} and the real pi is ${Math.PI} ` +
    `(difference of ${Math.abs(estimate - Math.PI)})`
);

Running it once, it prints:


❯ deno run pi.js
After 10000 iterations, the estimate is 3.1414926535900345 and the real pi is 3.141592653589793 (difference of 0.0000999999997586265)

Running them becomes more of a measurement of how fast the programs start rather than how fast they run, but that's nevertheless interesting to know too:


❯ hyperfine --warmup 3 "node pi.js" "bun run pi.js" "deno run pi.js"
Benchmark 1: node pi.js
  Time (mean ± σ):      54.9 ms ±   6.5 ms    [User: 42.6 ms, System: 11.3 ms]
  Range (min … max):    50.2 ms …  83.9 ms    48 runs

  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.

...

Summary
  bun run pi.js ran
    1.92 ± 1.01 times faster than deno run pi.js
    2.37 ± 0.31 times faster than node pi.js

Conclusion

Both of the programs I'm comparing are super trivial and take virtually no time to run once they've started. So it becomes more a test of start-up performance. Alas, it's cool to see that both Deno and Bun do a better job of it here. Bun is almost 2x faster than Deno and about 2.5x faster than Node.
