How do you thousands-comma AND whitespace format an f-string in Python

March 17, 2024
0 comments Python

For some reason, I always forget how to do this. Tired of that. Let's blog about it so it sticks.

To format a number with commas as thousands separators, you do:


>>> n = 1234567
>>> f"{n:,}"
'1,234,567'

To pad a string with trailing whitespace, you do:


>>> name="peter"
>>> f"{name:<20}"
'peter               '

To combine these in one expression, you do:


>>> n = 1234567
>>> f"{n:<15,}"
'1,234,567      '
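
As a bonus, if you want the number right-aligned instead, flip the < to a >:


>>> n = 1234567
>>> f"{n:>15,}"
'      1,234,567'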

Leibniz formula for π in Python, JavaScript, and Ruby

March 14, 2024
0 comments Python, JavaScript

Officially, I'm one day behind, but here's how you can calculate the value of π using the Leibniz formula.

Leibniz formula
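
As a reminder, the series being summed (and then multiplied by 4, as in the code below) is:

\pi = 4 \sum_{k=0}^{\infty} \frac{(-1)^k}{2k + 1} = 4 \left( 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots \right)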

Python


import math

sum = 0
estimate = 0
i = 0
epsilon = 0.0001
while abs(estimate - math.pi) > epsilon:
    sum += (-1) ** i / (2 * i + 1)
    estimate = sum * 4
    i += 1
print(
    f"After {i} iterations, the estimate is {estimate} and the real pi is {math.pi} "
    f"(difference of {abs(estimate - math.pi)})"
)

Outputs:

After 10000 iterations, the estimate is 3.1414926535900345 and the real pi is 3.141592653589793 (difference of 9.99999997586265e-05)

JavaScript


let sum = 0;
let estimate = 0;
let i = 0;
const epsilon = 0.0001;

while (Math.abs(estimate - Math.PI) > epsilon) {
  sum += (-1) ** i / (2 * i + 1);
  estimate = sum * 4;
  i += 1;
}
console.log(
  `After ${i} iterations, the estimate is ${estimate} and the real pi is ${Math.PI} ` +
    `(difference of ${Math.abs(estimate - Math.PI)})`
);

Outputs:

After 10000 iterations, the estimate is 3.1414926535900345 and the real pi is 3.141592653589793 (difference of 0.0000999999997586265)

Ruby


sum = 0
estimate = 0
i = 0
epsilon = 0.0001
while (estimate - Math::PI).abs > epsilon
    sum += ((-1) ** i / (2.0 * i + 1))
    estimate = sum * 4
    i += 1
end
print(
    "After #{i} iterations, the estimate is #{estimate} and the real pi is #{Math::PI} "+
    "(difference of #{(estimate - Math::PI).abs})"
)

Outputs:

After 10000 iterations, the estimate is 3.1414926535900345 and the real pi is 3.141592653589793 (difference of 9.99999997586265e-05)

Backwards

Technically, these little snippets only verify that the formula works, since each language already has access to a value of π as a standard library constant (it's what the stopping condition compares against).

If you don't have that, you can just decide on a fixed number of iterations, for example 1,000 or 10,000, and use that.

Python


sum = 0
for i in range(1000):
    sum += (-1) ** i / (2 * i + 1)
print(sum * 4)

JavaScript


let sum = 0;
for (const i of [...Array(10000).keys()]) {
  sum += (-1) ** i / (2 * i + 1);
}
console.log(sum * 4);

Ruby


sum = 0
for i in 0..10000
    sum += ((-1) ** i / (2.0 * i + 1))
end
puts sum * 4

Performance test

Perhaps a bit silly but also a fun thing to play with. Pull out hyperfine and compare Python 3.12, Node 20.11, Ruby 3.2, and Bun 1.0.30:


❯ hyperfine --warmup 10 "python3.12 ~/pi.py" "node ~/pi.js" "ruby ~/pi.rb" "bun run ~/pi.js"
Benchmark 1: python3.12 ~/pi.py
  Time (mean ± σ):      53.4 ms ±   7.5 ms    [User: 31.9 ms, System: 12.3 ms]
  Range (min … max):    41.5 ms …  64.8 ms    44 runs

Benchmark 2: node ~/pi.js
  Time (mean ± σ):      57.5 ms ±  10.6 ms    [User: 43.3 ms, System: 11.0 ms]
  Range (min … max):    46.2 ms …  82.6 ms    35 runs

Benchmark 3: ruby ~/pi.rb
  Time (mean ± σ):     242.1 ms ±  11.6 ms    [User: 68.4 ms, System: 37.2 ms]
  Range (min … max):   227.3 ms … 265.3 ms    11 runs

Benchmark 4: bun run ~/pi.js
  Time (mean ± σ):      32.9 ms ±   6.3 ms    [User: 14.1 ms, System: 10.0 ms]
  Range (min … max):    17.1 ms …  41.9 ms    60 runs

Summary
  bun run ~/pi.js ran
    1.62 ± 0.39 times faster than python3.12 ~/pi.py
    1.75 ± 0.46 times faster than node ~/pi.js
    7.35 ± 1.45 times faster than ruby ~/pi.rb

Comparing Pythons

Just because I have a couple of these installed:


❯ hyperfine --warmup 10 "python3.8 ~/pi.py" "python3.9 ~/pi.py" "python3.10 ~/pi.py" "python3.11 ~/pi.py" "python3.12 ~/pi.py"
Benchmark 1: python3.8 ~/pi.py
  Time (mean ± σ):      54.6 ms ±   8.1 ms    [User: 33.0 ms, System: 11.4 ms]
  Range (min … max):    40.0 ms …  69.7 ms    56 runs

Benchmark 2: python3.9 ~/pi.py
  Time (mean ± σ):      54.9 ms ±   8.0 ms    [User: 32.2 ms, System: 12.3 ms]
  Range (min … max):    42.3 ms …  70.1 ms    38 runs

Benchmark 3: python3.10 ~/pi.py
  Time (mean ± σ):      54.7 ms ±   7.5 ms    [User: 33.0 ms, System: 11.8 ms]
  Range (min … max):    42.3 ms …  78.1 ms    44 runs

Benchmark 4: python3.11 ~/pi.py
  Time (mean ± σ):      53.8 ms ±   6.0 ms    [User: 32.7 ms, System: 13.0 ms]
  Range (min … max):    44.8 ms …  70.3 ms    42 runs

Benchmark 5: python3.12 ~/pi.py
  Time (mean ± σ):      53.0 ms ±   6.4 ms    [User: 31.8 ms, System: 12.3 ms]
  Range (min … max):    43.8 ms …  63.5 ms    42 runs

Summary
  python3.12 ~/pi.py ran
    1.02 ± 0.17 times faster than python3.11 ~/pi.py
    1.03 ± 0.20 times faster than python3.8 ~/pi.py
    1.03 ± 0.19 times faster than python3.10 ~/pi.py
    1.04 ± 0.20 times faster than python3.9 ~/pi.py

Notes on porting a Next.js v14 app from Pages to App Router

March 2, 2024
0 comments React, JavaScript

Unfortunately, the app I ported from the Pages Router to the App Router is in a private repo. It's a Next.js static site SPA (Single Page App).

It's built with npm run build and then exported so that the out/ directory is the only thing I need to ship to the CDN and it just works. There's a home page and a few dynamic routes whose slugs depend on an SQL query. So the SQL (PostgreSQL) connection, using knex, has to be present when running npm run build.
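
The real code is private, but to give a flavor of the ported dynamic routes: with App Router, the slugs-from-SQL part lives in generateStaticParams. Here's a rough sketch, where getSlugsFromDatabase() is a made-up stand-in for the knex/PostgreSQL query that runs during npm run build:


// app/[slug]/page.js (sketch; getSlugsFromDatabase is hypothetical)
import { getSlugsFromDatabase } from "../../lib/db";

// Runs at build time; each returned { slug } becomes a statically exported page
export async function generateStaticParams() {
  const slugs = await getSlugsFromDatabase();
  return slugs.map((slug) => ({ slug }));
}

export default function Page({ params }) {
  return <h1>{params.slug}</h1>;
}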

In no particular order, let's look at some differences.

Build times

With caching

After running next build a bunch of times, the rough averages are:

  • Pages Router: 20.5 seconds
  • App Router: 19.5 seconds

Without caching

After running rm -fr .next && next build a bunch of times, the rough averages are:

  • Pages Router: 28.5 seconds
  • App Router: 31 seconds

Note

I have another SPA that is built with vite and wouter and uses the rather heavy mantine UI library. That app does a LOT more in terms of components, pages, etc. It takes 9 seconds to build on average.

Static output

If you compare the generated out/_next/static/chunks there's a strange difference.

Pages Router

360.0 KiB [##########################] /pages
268.0 KiB [###################       ]  726-4194baf1eea221e4.js
160.0 KiB [###########               ]  ee8b1517-76391449d3636b6f.js
140.0 KiB [##########                ]  framework-5429a50ba5373c56.js
112.0 KiB [########                  ]  cdfd8999-a1782664caeaab31.js
108.0 KiB [########                  ]  main-930135e47dff83e9.js
 92.0 KiB [######                    ]  polyfills-c67a75d1b6f99dc8.js
 16.0 KiB [#                         ]  502-394e1f5415200700.js
  8.0 KiB [                          ]  0e226fb0-147f1e5268512885.js
  4.0 KiB [                          ]  webpack-1b159842bd89504c.js

In total 1.2 MiB across 15 files.

App Router

428.0 KiB [##########################]  142-94b03af3aa9e6d6b.js
196.0 KiB [############              ]  975-62bfdeceb3fe8dd8.js
184.0 KiB [###########               ]  25-aa44907f6a6c25aa.js
172.0 KiB [##########                ]  fd9d1056-e15083df91b81b75.js
164.0 KiB [##########                ]  ca377847-82e8fe2d92176afa.js
140.0 KiB [########                  ]  framework-aec844d2ccbe7592.js
116.0 KiB [#######                   ]  a6eb9415-a86923c16860379a.js
112.0 KiB [#######                   ]  69-f28d58313be296c0.js
108.0 KiB [######                    ]  main-67e49f9e34a5900f.js
 92.0 KiB [#####                     ]  polyfills-c67a75d1b6f99dc8.js
 44.0 KiB [##                        ] /app
 24.0 KiB [#                         ]  1cc5f7f4-2f067a078d041167.js
 24.0 KiB [#                         ]  250-47a2e67f72854c46.js
  8.0 KiB [                          ] /pages
  4.0 KiB [                          ]  webpack-baa830a732d3dbbf.js
  4.0 KiB [                          ]  main-app-f6b391c808310b44.js

In total 1.7 MiB across 27 files.

Notes

What makes the JS bundle large is almost certainly the use of @primer/react, @fullcalendar, and react-chartjs-2.
But why is the difference between the two routers so large?

Dev start time

The way Next.js works, with npm run dev, is that it starts a server at localhost:3000 and only when you request a URL does it compile something. It's essentially lazy and that's a good thing because in a bigger app, you might have too many different entries so it'd be silly to wait for all of them to compile if you might not use them all.

Pages Router

❯ npm run dev

...

 ✓ Ready in 1125ms
 ○ Compiling / ...
 ✓ Compiled / in 2.9s (495 modules)

App Router

❯ npm run dev

...

 ✓ Ready in 1201ms
 ○ Compiling / ...
 ✓ Compiled / in 3.7s (1023 modules)

Mind you, it almost always says "Ready in 1201ms" or thereabouts, but the other number, like "3.7s" in this example, seems to fluctuate quite wildly. I don't know why.

Conclusion

Was it worth it? Yes and no.

I've never liked next/router. With App Router you instead use next/navigation, which feels much more refined and simple. There's still a useRouter hook, which is still what you use for doing push and replace.
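
For example, programmatic navigation with the next/navigation version of useRouter looks something like this minimal sketch:


"use client";
import { useRouter } from "next/navigation";

export default function BackHomeButton() {
  const router = useRouter();
  // push() and replace() work much like before
  return <button onClick={() => router.push("/")}>Home</button>;
}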

The getStaticPaths and the getStaticProps were not really that terrible in Pages Router.

I think the whole point of App Router is that you're no longer limited to fetching external data in getStaticProps (or getServerSideProps); you can more freely fetch it in places like layout.tsx, which means less prop-drilling.
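
A rough sketch of that, where fetchSomething() is a made-up placeholder for whatever external data you need:


// app/layout.js -- layouts are server components, so they can be async
import { fetchSomething } from "../lib/data";

export default async function RootLayout({ children }) {
  const data = await fetchSomething();
  return (
    <html lang="en">
      <body>
        <header>{data.title}</header>
        {children}
      </body>
    </html>
  );
}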

There are some nicer APIs with App Router. And it's the future of Next.js and how Vercel is pushing it forward.

How to avoid a count query in Django if you can

February 14, 2024
1 comment Django, Python

Suppose you have a complex Django QuerySet query that is somewhat costly (in other words slow). And suppose you want to return:

  1. The first N results
  2. A count of the total possible results

So your implementation might be something like this:


def get_results(queryset, fields, size):
    count = queryset.count()
    results = []
    for record in queryset.values(*fields)[:size]:
        results.append(record)
    return {"count": count, "results": results}

That'll work. If there are 1,234 rows in your database table that match those specific filters, what you might get back from this is:


>>> results = get_results(my_queryset, ("name", "age"), 5)
>>> results["count"]
1234
>>> len(results["results"])
5

Or, if the filters would only match 3 rows in your database table:


>>> results = get_results(my_queryset, ("name", "age"), 5)
>>> results["count"]
3
>>> len(results["results"])
3

Between your Python application and your database you'll see:

query 1: SELECT COUNT(*) FROM my_database WHERE ...
query 2: SELECT name, age FROM my_database WHERE ... LIMIT 5

The problem with this is that, in the latter case, you had to send two database queries when all you needed was one.
If you knew it would only match a tiny number of records, you could do this:


def get_results(queryset, fields, size):
-   count = queryset.count()
    results = []
    for record in queryset.values(*fields)[:size]:
        results.append(record)
+   count = len(results)
    return {"count": count, "results": results}

But that is wrong. The count would max out at whatever the size is.

The solution is to try to avoid the potentially unnecessary .count() query.


def get_results(queryset, fields, size):
    count = 0
    results = []
    for i, record in enumerate(queryset.values(*fields)[: size + 1]):
        if i == size:
            # Alas, there are more records than the pagination
            count = queryset.count()
            break
        count = i + 1
        results.append(record)
    return {"count": count, "results": results}

This way, you only incur one database query when there wasn't that much to find, but if there was more than what the pagination called for, you have to incur that extra database query.
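
If you want to convince yourself of the query counts, something like this sketch (with a hypothetical MyModel) does the trick:


from django.db import connection
from django.test.utils import CaptureQueriesContext

from myapp.models import MyModel  # hypothetical model

with CaptureQueriesContext(connection) as ctx:
    get_results(MyModel.objects.filter(age__gte=21), ("name", "age"), 5)

# 1 query if at most 5 rows matched, 2 queries if there were more
print(len(ctx.captured_queries))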

How to restore all unstaged files with git

February 8, 2024
0 comments GitHub, MacOSX, Linux

tl;dr: git restore -- .

I can't believe I didn't know this! Maybe, at one point, I did, but have since forgotten.

You're in a Git repo and you have edited 4 files and run git status and see this:


❯ git status
On branch main
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
    modified:   four.txt
    modified:   one.txt
    modified:   three.txt
    modified:   two.txt

no changes added to commit (use "git add" and/or "git commit -a")

Suppose you realize: "Oh no! I didn't mean to make those changes in three.txt." You can restore that file by mentioning it by name:


❯ git restore three.txt

❯ git status
On branch main
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
    modified:   four.txt
    modified:   one.txt
    modified:   two.txt

no changes added to commit (use "git add" and/or "git commit -a")

Now, suppose you realize you want to restore all of those modified files. How do you restore them all without mentioning each and every one by name? Simple:


❯ git status
On branch main
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
    modified:   four.txt
    modified:   one.txt
    modified:   two.txt

no changes added to commit (use "git add" and/or "git commit -a")

❯ git restore -- .

❯ git status
On branch main
nothing to commit, working tree clean

The "trick" is: git restore -- .

As far as I understand, restore is the newer command for what checkout used to do here. You can equally run git checkout -- . too.
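
Worth knowing: git restore also has a --staged flag, so if you had already git add-ed the files, you can unstage them all in one go:


❯ git restore --staged .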

How slow is Node to Brotli decompress a file compared to not having to decompress?

January 19, 2024
3 comments Node, MacOSX, Linux

tl;dr: Not very slow.

At work, we have some very large .json files that get included in a Docker image. The Node server then opens these files at runtime and displays certain data from them. To keep the Docker image from getting too large, we compress these .json files at build-time with Brotli, producing .json.br files. Then, in the Node server code, we read them in and decompress them at runtime. It looks something like this:


import fs from "fs";
import { brotliDecompressSync } from "zlib";

export function readCompressedJsonFile(xpath) {
  return JSON.parse(brotliDecompressSync(fs.readFileSync(xpath)));
}
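
The compression step at build time isn't shown here, but with Node's built-in zlib it could be done like this (a sketch, using one of the file names from the benchmark below):


import fs from "fs";
import { brotliCompressSync } from "zlib";

// Hypothetical build-time step: turn pageinfo-en.json into pageinfo-en.json.br
const src = "pageinfo-en.json";
fs.writeFileSync(`${src}.br`, brotliCompressSync(fs.readFileSync(src)));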

The advantage of compressing them first, at build time (in GitHub Actions), is that the Docker image becomes smaller, which is advantageous when shipping that image to a registry and asking Azure App Service to deploy it. But I was wondering, is this a smart trade-off? In a sense, why compromise on runtime (which faces users) to save time and resources at build-time, which mostly happens away from the eyes of users? The question was: how much overhead is it to have to decompress the files after their data has been read from disk into memory?

The benchmark

The files I test with are as follows:

ls -lh pageinfo*
-rw-r--r--  1 peterbe  staff   2.5M Jan 19 08:48 pageinfo-en-ja-es.json
-rw-r--r--  1 peterbe  staff   293K Jan 19 08:48 pageinfo-en-ja-es.json.br
-rw-r--r--  1 peterbe  staff   805K Jan 19 08:48 pageinfo-en.json
-rw-r--r--  1 peterbe  staff   100K Jan 19 08:48 pageinfo-en.json.br

There are 2 groups:

  1. Only English (en)
  2. English, Japanese, and Spanish (3 times larger)

And for each file, you can see the effect of having compressed them with Brotli.

  1. The smaller JSON file compresses 8x
  2. The larger JSON file compresses 9x

Here's the benchmark code:


import fs from "fs";
import { brotliDecompressSync } from "zlib";
import { Bench } from "tinybench";

const JSON_FILE = "pageinfo-en.json";
const BROTLI_JSON_FILE = "pageinfo-en.json.br";
const LARGE_JSON_FILE = "pageinfo-en-ja-es.json";
const BROTLI_LARGE_JSON_FILE = "pageinfo-en-ja-es.json.br";

function f1() {
  const data = fs.readFileSync(JSON_FILE, "utf8");
  return Object.keys(JSON.parse(data)).length;
}

function f2() {
  const data = brotliDecompressSync(fs.readFileSync(BROTLI_JSON_FILE));
  return Object.keys(JSON.parse(data)).length;
}

function f3() {
  const data = fs.readFileSync(LARGE_JSON_FILE, "utf8");
  return Object.keys(JSON.parse(data)).length;
}

function f4() {
  const data = brotliDecompressSync(fs.readFileSync(BROTLI_LARGE_JSON_FILE));
  return Object.keys(JSON.parse(data)).length;
}

console.assert(f1() === 2633);
console.assert(f2() === 2633);
console.assert(f3() === 7767);
console.assert(f4() === 7767);

const bench = new Bench({ time: 100 });
bench.add("f1", f1).add("f2", f2).add("f3", f3).add("f4", f4);
await bench.warmup(); // make results more reliable, ref: https://github.com/tinylibs/tinybench/pull/50
await bench.run();

console.table(bench.table());

Here's the output from tinybench:

┌─────────┬───────────┬─────────┬────────────────────┬──────────┬─────────┐
│ (index) │ Task Name │ ops/sec │ Average Time (ns)  │  Margin  │ Samples │
├─────────┼───────────┼─────────┼────────────────────┼──────────┼─────────┤
│    0    │   'f1'    │  '179'  │  5563384.55941942  │ '±6.23%' │   18    │
│    1    │   'f2'    │  '150'  │ 6627033.621072769  │ '±7.56%' │   16    │
│    2    │   'f3'    │  '50'   │ 19906517.219543457 │ '±3.61%' │   10    │
│    3    │   'f4'    │  '44'   │ 22339166.87965393  │ '±3.43%' │   10    │
└─────────┴───────────┴─────────┴────────────────────┴──────────┴─────────┘

Note, this benchmark was done on my 2019 Intel MacBook Pro. That disk is not what we get with the Alpine Docker image (running inside Azure App Service). Testing that would be a different story. But, at least we can test it in Docker locally.

I created a Dockerfile that contains...

ARG NODE_VERSION=20.10.0

FROM node:${NODE_VERSION}-alpine

and ran the same benchmark in there by running docker compose up --build. The results are:

┌─────────┬───────────┬─────────┬────────────────────┬──────────┬─────────┐
│ (index) │ Task Name │ ops/sec │ Average Time (ns)  │  Margin  │ Samples │
├─────────┼───────────┼─────────┼────────────────────┼──────────┼─────────┤
│    0    │   'f1'    │  '151'  │ 6602581.124978315  │ '±1.98%' │   16    │
│    1    │   'f2'    │  '112'  │  8890548.4166656   │ '±7.42%' │   12    │
│    2    │   'f3'    │  '44'   │ 22561206.40002191  │ '±1.95%' │   10    │
│    3    │   'f4'    │  '37'   │ 26979896.599974018 │ '±1.07%' │   10    │
└─────────┴───────────┴─────────┴────────────────────┴──────────┴─────────┘

Analysis/Conclusion

First, focusing on the smaller file: processing the .json is 25% faster than the .json.br file.

Then, the larger file: processing the .json is 16% faster than the .json.br file.

So that's what we're paying for a smaller Docker image. Depending on the size of the .json file, your app runs ~20% slower at this operation. But remember, as a file on disk (in the Docker image), it's ~8x smaller.

In conclusion, I think it's a small price to pay. It's worth doing. But your context may vary.
Keep in mind the numbers: to process that ~300KB pageinfo-en-ja-es.json.br file, it managed 37 operations in one second. That means it took about 27 milliseconds to process that file!

The caveats

To repeat what was mentioned above: this was run on my Intel MacBook Pro. It's likely to behave differently in a real Docker image running inside Azure.

The thing that I wonder the most about is arguably something that actually doesn't matter. 🙃
When you ask it to read in a .json.br file, there's less data to read from disk into memory. That's a win. You lose on CPU work but gain on disk I/O. But only the net end result matters, so in a sense that's just an "implementation detail".

Admittedly, I don't know if the macOS or the Linux kernel does any caching in the layer between the physical disk and RAM for these files. The benchmark effectively asks "Hey, hard disk, please send me a file called ..." and this could be cached in some layer beyond my knowledge/comprehension. In a real production server, this only happens once, because once the whole file is read, decompressed, and parsed, it won't be asked for again. Like, ever. But in a benchmark, perhaps the very first read of the file is slower and all the other runs are unrealistically fast.

Feel free to clone https://github.com/peterbe/reading-json-files and mess around to run your own tests. Perhaps see what effect async can have. Or perhaps try it with Bun and its file system API.
