A blog and website by Peter Bengtsson

Find static files defined in django-pipeline but not found

25 July 2017 0 comments   Django, Python

If you're reading this you're probably familiar with how, in django-pipeline, you define bundles of static files to be combined and served. If you're not familiar with django-pipeline, it's unlikely this'll be of much help.

The Challenge (aka. the pitfall)

So you specify bundles with something like this in your settings:

PIPELINE = {
    'STYLESHEETS': {
        'colors': {
            'source_filenames': (
                'css/colors/*.css',
            ),
            'output_filename': 'css/colors.css',
            'extra_context': {
                'media': 'screen,projection',
            },
        },
    },
    'JAVASCRIPT': {
        'stats': {
            'source_filenames': (
                'js/aplication.js',
            ),
            'output_filename': 'js/stats.js',
        },
    },
}

You do a bit more configuration and now, when you run ./manage.py collectstatic --noinput, Django and django-pipeline will gather up all static files from all installed Django apps, then start post-processing them: concatenating them into one file, minifying, etc.

The problem is, if you look at the example snippet above, there's a typo. Instead of js/application.js it's accidentally js/aplication.js. Oh noes!!

What's sad is that nobody will notice (running ./manage.py collectstatic will exit with exit code 0). At least not unless you do some careful manual reviewing. Perhaps you will notice later, when you've pushed the site to prod, that the output file js/stats.js actually doesn't contain the code from js/application.js.

Or, you can automate it!

A Solution (aka. the hack)

I started this work this morning because the error actually happened to us. Thankfully not in production but our staging server produced a rendered HTML page with <link href="/static/css/report.min.cd784b4a5e2d.css" rel="stylesheet" type="text/css" /> which was an actual file but it was 0 bytes.

It wasn't that hard to figure out what the problem was because of the context of recent changes but it would have been nice to catch this during continuous integration.

So what we did was add an extra class to settings.STATICFILES_FINDERS called myproject.base.finders.LeftoverPipelineFinder. So now it looks like this:

# in settings.py

STATICFILES_FINDERS = (
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
    'pipeline.finders.PipelineFinder',
    'myproject.base.finders.LeftoverPipelineFinder',  # the new hotness!
)

And here's the class implementation:

from pipeline.finders import PipelineFinder

from django.conf import settings
from django.core.exceptions import ImproperlyConfigured


class LeftoverPipelineFinder(PipelineFinder):
    """This finder is expected to come AFTER
    django.contrib.staticfiles.finders.FileSystemFinder and
    django.contrib.staticfiles.finders.AppDirectoriesFinder in
    settings.STATICFILES_FINDERS.
    If a path is looked for here it means it's trying to find a file
    that none of the regular staticfiles finders could find.
    """

    def find(self, path, all=False):
        # Before we raise an error, try to find out where,
        # in the bundles, this was defined. This will make it easier to correct
        # the mistake.
        for config_name in 'STYLESHEETS', 'JAVASCRIPT':
            config = settings.PIPELINE[config_name]
            for key in config:
                if path in config[key]['source_filenames']:
                    raise ImproperlyConfigured(
                        'Static file {!r} can not be found anywhere. Defined in '
                        "PIPELINE[{!r}][{!r}]['source_filenames']".format(
                            path,
                            config_name,
                            key,
                        )
                    )
        # If the file can't be found AND it's not in bundles, there's
        # got to be something else really wrong.
        raise NotImplementedError(path)

Now, if you have a typo or something in your bundles, you'll get a nice error about it as soon as you try to run collectstatic. For example:

▶ ./manage.py collectstatic --noinput
Post-processed 'css/search.min.css' as 'css/search.min.css'
Post-processed 'css/base.min.css' as 'css/base.min.css'
Post-processed 'css/base-dynamic.min.css' as 'css/base-dynamic.min.css'
Post-processed 'js/google-analytics.min.js' as 'js/google-analytics.min.js'
Traceback (most recent call last):
django.core.exceptions.ImproperlyConfigured: Static file 'js/aplication.js' can not be found anywhere. Defined in PIPELINE['JAVASCRIPT']['stats']['source_filenames']

Final Thoughts

This was a morning hack. I'm still not entirely sure if this is the best approach, but I couldn't think of a better one and the result is pretty good.

We run ./manage.py collectstatic --noinput in our continuous integration just before it runs ./manage.py test. So if you make a Pull Request that has a typo like this in it, it will get caught.

Unfortunately, it won't find missing files if you use patterns like foo*.js or something like that. django-pipeline uses glob.glob to convert expressions like that into a list of actual files, and that depends on the filesystem, and all of that happens before the django.contrib.staticfiles.finders.find function is called.
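To illustrate (a minimal sketch, with a hypothetical path): a glob pattern with a typo in it simply expands to an empty list, with no error, so nothing ever reaches the finders.

import glob

# A typo in a glob pattern silently yields nothing and raises no error.
print(glob.glob('js/aplication*.js'))  # -> []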

If you have any better suggestions to solve this, please let me know.

Why I'm ditching AdBuff on SongSearch

20 July 2017 0 comments   Web development

I'm a performance nerd and if something isn't as fast as it can be it hurts my soul.
I have this side project called SongSearch. It's a lyrics search engine with over 2 million songs.

To try to make a buck to pay for the hosting cost, I put in ads. The only one I could find that does NOT use document.write is AdBuff. But their technical implementation, pardon my French, sucks! It's redirects upon redirects in an iframe over HTTP. Granted, it loads async but it's still dragging down performance for people on low-end devices and on mobile networks.

Today I decided to take the AdBuff ads off. Let's see if it made a difference performance-wise:


Load waterfall WITH ads


Load waterfall WITHOUT ads

Basically, 2.7 seconds (on LTE) instead of 14.3 seconds. And 211KB of data instead of 1MB.

So how much money do I stand to lose for ditching these ads? Well, I've earned a grand total of $10.82 in total for 1,214,072 impressions. That's what I spend on hosting this project every 6 days. Clearly this isn't working out.

UPDATE: AdBuff emailed me that they rolled out some new features including "Blazing fast ad code". Sure. Let's try it then.
Elements of it might be fast but nothing is fast when it needs some ~80 extra requests.

Adversal downloads a LOT of stuff

Best YouTube Chefs of 2017

14 July 2017 3 comments   Misc. links

tl;dr Highly subjective rundown of YouTube channels cooking inspiring stuff. According to me.

I am, by no means, an expert at cooking or media. I'm a web developer. But I do watch lots of cooking videos on YouTube. So it makes me an expert based on what I see. ...subjectively.

This is a list based on the gut feeling of imagining opening YouTube (usually on my Roku) and noticing that all the chefs I follow have released a new video. The order in which I would watch them is the order in which I like their material most.

My General "Criteria"

This is what I base "good" on:

  1. Fun and enjoyable to watch
  2. Accessible to cook (doesn't have to be simple but has to be something I can see myself cooking and eating)
  3. Ability to actually do ...later (ideally a link in the video description to the transcript plus ingredient list)
  4. Mouthwatering to look at and imagine to smell
  5. If I make this, will my family love me more?

Number 1: J. Kenji López-Alt and SeriousEats

J. Kenji López-Alt

Before I start my praise; a piece of constructive criticism: It's totally confusing what's Kenji's personal stuff and what's SeriousEats. I still haven't grokked the interconnection but usually it doesn't matter because I mentally lump them together as one.

My wife calls him "my boyfriend" and I don't deny that I adore him. He's everything I want to be as a chef; down-to-earth, professional, varied, scientific, and loves the simple recipes just as much as any busy parent with a family to feed.

The videos are short and focussed and the on-screen extra facts and figures make it easier to follow along. He also does short little videos about techniques, such as how to slice onions.

He's also the author of the book The Food Lab: Better Home Cooking Through Science which I sometimes read before going to bed. There's comfort in knowing that he's done empirical research, like a chemist, and not just basing his statements on years of experience.

Number 2: Food Wishes aka. Chef John

Food Wishes

Just like Kenji, Chef John is an American chef who records relatively simple recipes but spruces them up to be quite fancy and impressive. His menu is varied and feels like a mix of American and French cuisine in an accessible format. All his videos (I think) are self-recorded and you never see more than his hands. It's one video per recipe and he sticks to the point, which makes it easy to follow along. Also, he talks over the video with little tips like "If you don't have self-raising flour at home you can just use ...yada yada".

All of his recipes are mirrored on his blog which I often have opened on my iPhone right on the kitchen counter, trying to replicate what I've watched.

An extra bonus is that he's got a quirky style and sense of humor that my wife also likes so she usually joins in watching his videos.

Number 3: America's Test Kitchen

America's Test Kitchen

I have to admit that I still haven't really understood what this corporation is. There's Cook's Country, which spans "America's Test Kitchen", "Cook's Illustrated" and "Cook's Science". I used to watch their show on PBS, led by Christopher Kimball, and although it was good, it was a bit too time consuming for the impatient me.

Either way, the main America's Test Kitchen YouTube channel is quite focussed on kitchen equipment and cooking techniques. They're wonderful. So many times I've concluded that my (some kitchen gadget) isn't great, then I check their videos and then I go straight to Amazon and buy it. The last item was the di Oro Living - Large Silicone Spatula.

Their kitchen equipment testing is rigorous. When they talk about all the hard core testing they've done to pots, pans, cutting boards, knives, bowls etc. you get that comforting sense that if you do take their advice and buy the thing, it's really going to be the best there is. It's the same with some of their more "dry" videos about empirical testing of cooking techniques. For example, "The Secrets of Cooking Rice". You watch it once, try to remember what you learned, and now you can walk around knowing this is the way to do something. QED.

Number 4: Gordon Ramsay

Gordon Ramsay

Beyond swear words and putdowns, if there's one thing you can learn from Gordon it is his passion for food. Like a naturally genius salesman, he can think up the most wonderfully positive words for the ingredients, the smells, the tastes and the food's aesthetics. The passion is contagious!

I never actually watched any of the shows that got him famous. It's "too TV" for my taste. Too much aggression and better-than-you. But beyond that, his videos are impressive. They're also nice in that he makes you think you can cook all that fancy stuff from fancy restaurants you can't afford to go to.

Some of the things he cooks might be a bit more advanced for my skillset (and time!) but there are lots of nuggets of "panache" that you can tuck away in the back of your brain.

By the way Gordon, I have not signed up for your online cooking courses because, basically, this blog post. There's just too much good content from other chefs' channels. Sorry.

Number 5: Jamie Oliver's Food Tube

Jamie Oliver

I've always been a big Jamie Oliver fan. His Jamie's Food Revolution (called "Ministry of Food" in the UK) is probably the book I've used the most in the last couple of years. He's got that impressive and contagious passion for food and ingredients. He's funny and accessible, and his cooking is quite advanced and grand. Some recipes have that extra bit of extravagance that a home cook sometimes needs to spruce things up.

Food Tube is his YouTube channel and it contains lots of videos by other chefs. They almost all have one thing in common; they're as nuts as Jamie. I get the feeling that Jamie recruits people into his channel on "Can you be more hyper than me?".

In all fairness some videos are almost too hyper. You get a bit flustered by the high pace and flamboyance and you struggle to take notes. But when it gets a bit too intense I just watch it with the practical part of my brain switched off, just to get inspired.

Number 6: The Dumpling Sisters

Dumpling Sisters

I don't really get whether they are part of Jamie Oliver's Food Tube or not. Some videos have the name in them. Some don't. Doesn't matter much. These ladies are wonderful. It's probably one of my more inaccessible channels because most of their cooking is Asian (Chinese style) and although it's mouthwatering and something I could easily devour with a pair of chopsticks, many things become hard to cook due to ingredients. I live in a suburban community in South Carolina and we don't have one of those good Asian supermarkets around here. I guess I could order some of the things online but it's not so easy.

Having said that, their "Perfect Special Fried Rice" is a favorite of mine. I've probably cooked it, in variations, almost 10 times now. Always a hit!

Even though I, admittedly, am failing at my Chinese cooking, I do like this channel to keep me inspired and dreaming about going back to China to learn more about its vast cuisine. Gotta have something that reminds me of all the delicious, weird and wonderful things I've had there.

Number 7: Cupcake Jemma

Cupcake Jemma

I'll be honest; I've never attempted anything of the stuff she bakes. First of all, I almost never bake. And if I do it's either Swedish style cakes or loafs of bread. But there's just something about Jemma. She's so charming and "cute". I know cupcakes are serious business to a lot of people but I actually don't really like cupcakes. However, watching her videos does make you think "some day, I'll attempt that".

If it's one thing I've learned from watching her videos, it's that a lot of baking requires vanilla extract. She uses almost as much vanilla extract as Jamie Oliver uses olive oil.

Also, as mentioned before, someone having passion for what they're cooking is inspiring and contagious. Her videos are so cheerful and colorful it's almost like a free injection of happiness.

Number 8: Kitchen Conundrums with Thomas Joseph

Thomas Joseph

This isn't a channel. The link above is a playlist. I think his videos are part of Everyday Food which is something I haven't explored yet.

His videos are nice because he breaks down things into small, easy-to-understand parts. He's also good at showing how to do certain actions. Like, which pot to use when and how to pour it, etc. It's quite advanced stuff (but not all!) but I did manage to make French macarons from his video. They didn't have that perfect shape you get in a nice bakery but they actually turned out pretty well.

My only criticism is that his kitchen is too neat. Makes mine look like a battlefield in comparison.

Rounding Up

Come to think of it, these are the only YouTube cooking channels I follow. Occasionally YouTube recommends other awesome stuff (based on their recommendation engine) that I watch but don't necessarily subscribe to. The ordering above is kinda silly (apart from Kenji being my number 1 at the moment). They're all good!

I'm not a great chef. I don't have that "magic" of being able to smell my way into which seasonings to add mid-way. Nor do I immediately just know what to do with all sorts of veggies I see in the supermarket. But I try. I have more cookbooks now than I have years to live and, clearly, I watch a lot of inspiring cooking videos. Very little of what I learn seems to stick in my memory, but YouTube's search engine and my watch history has proven to be my way of memorizing recipes and techniques.

Please please please, share your thoughts and tips about other channels I should check out. Not that these channels aren't keeping me busy but I'm always curious to add more.

How to do performance micro benchmarks in Python

24 June 2017 4 comments   Python

Suppose that you have a function and you wonder, "Can I make this faster?" Well, you might already have thought that and you might already have a theory. Or two. Or three. Your theory might be sound and likely to be right, but before you go anywhere with it you need to benchmark it first. Here are some tips and scaffolding for doing Python function benchmark comparisons.


  1. Internally, Python will warm up and it's likely that your function depends on other things such as databases or I/O. So it's important that you don't test function1 first and then function2 immediately after, because function2 might benefit from a warm up painfully paid for by function1. So mix up the order of them or cycle through them enough times that they all pay for, or gain from, warm ups.

  2. Look at the median first. The mean (aka. average) is often tainted by spikes, and these spikes of slow-down can be caused by your local Spotify client deciding to reindex itself or some such. Sometimes those spikes matter. For example, garbage collection is inevitable and will have an effect that matters.

  3. Run your functions many times. So many times that the whole benchmark takes a while. Like tens of seconds or more. Also, if you run it significantly long it's likely that all candidates get punished by the same environmental effects, such as garbage collection or the CPU being reassigned to something else intensive on your computer.

  4. Try to take your benchmark into different, and possibly more realistic environments. For example, don't rely on reading a file like /Users/peterbe/only/on/my/macbook when, likely, the end destination for your code is an Ubuntu server in AWS. Write your code so that it's easy to copy and paste around, like into a vi/jed editor in an ssh session somewhere.

  5. Sanity check each function before benchmarking them. No need for pytest or anything fancy but just make sure that you test them in some basic way. But the assertion testing is likely to add to the total execution time so don't do it when running the functions.

  6. Avoid "prints" inside the time-measured code. A print() is I/O and an "external resource", which can make comparisons of CPU bound performance very unfair.

  7. Don't fear testing many different functions. If you have multiple ideas of doing a function differently, it's cheap to pile them on. But be careful how you "report" because if there are many different ways of doing something you might accidentally compare different fruit without noticing.

  8. Make sure your functions take at least one parameter. I'm no Python core developer or C hacker but I know there are "murks" within a compiler and interpreter that might do what a regular memoizer would do. Also, the performance difference can be reversed on tiny inputs compared to really large ones.

  9. Be humble with the fact that 0.01 milliseconds difference when doing 10,000 iterations is probably not worth writing a more complex and harder-to-debug function.

The Boilerplate

Let's demonstrate with an example:

# The functions to compare
import math

def f1(degrees):
    return math.cos(degrees)

def f2(degrees):
    e = 2.718281828459045
    return (
        (e**(degrees * 1j) + e**-(degrees * 1j)) / 2
    ).real

# Assertions
assert f1(100) == f2(100) == 0.862318872287684
assert f1(1) == f2(1) == 0.5403023058681398

# Reporting
import time
import random
import statistics

functions = f1, f2
times = {f.__name__: [] for f in functions}

for i in range(100000):  # adjust accordingly so whole thing takes a few sec
    func = random.choice(functions)
    t0 = time.time()
    func(10)  # the actual call being timed; 10 is an arbitrary input
    t1 = time.time()
    times[func.__name__].append((t1 - t0) * 1000)

for name, numbers in times.items():
    print('FUNCTION:', name, 'Used', len(numbers), 'times')
    print('\tMEDIAN', statistics.median(numbers))
    print('\tMEAN  ', statistics.mean(numbers))
    print('\tSTDEV ', statistics.stdev(numbers))

Let's break that down a bit. The script has three parts: the functions to compare, a couple of assertions that sanity check them, and a reporting loop that calls them in random order and collects the times per function.

You run that and get something like this:

FUNCTION: f1 Used 49990 times
    MEDIAN 0.0
    MEAN   0.00045161219591330375
    STDEV  0.0011268475946446341
FUNCTION: f2 Used 50010 times
    MEDIAN 0.00095367431640625
    MEAN   0.0009188626294516487
    STDEV  0.000642871632138125

More Examples

The example above already broke one of the tenets in that these functions were simply too fast. Doing rather basic mathematics is just too fast to compare with such a trivial benchmark. Here are some other examples:

Remove duplicates from list without losing order

# The functions to compare

def f1(seq):
    checked = []
    for e in seq:
        if e not in checked:
            checked.append(e)
    return checked


def f2(seq):
    checked = []
    seen = set()
    for e in seq:
        if e not in seen:
            checked.append(e)
            seen.add(e)
    return checked


def f3(seq):
    checked = []
    [checked.append(i) for i in seq if not checked.count(i)]
    return checked


def f4(seq):
    seen = set()
    return [x for x in seq if x not in seen and not seen.add(x)]


def f5(seq):
    def generator():
        seen = set()
        for x in seq:
            if x not in seen:
                seen.add(x)
                yield x

    return list(generator())

# Assertion
import random

def _random_seq(length):
    seq = []
    for _ in range(length):
        # the alphabet here is an assumption; anything with frequent duplicates works
        seq.append(random.choice('abcdefghijklmnopqrstuvwxyz'))
    return seq

L = list('abca')
assert f1(L) == f2(L) == f3(L) == f4(L) == f5(L) == list('abc')
L = _random_seq(10)
assert f1(L) == f2(L) == f3(L) == f4(L) == f5(L)

# Reporting
import time
import statistics

functions = f1, f2, f3, f4, f5
times = {f.__name__: [] for f in functions}

for i in range(3000):
    seq = _random_seq(i)
    for _ in range(len(functions)):
        func = random.choice(functions)
        t0 = time.time()
        func(seq)  # the actual call being timed
        t1 = time.time()
        times[func.__name__].append((t1 - t0) * 1000)

for name, numbers in times.items():
    print('FUNCTION:', name, 'Used', len(numbers), 'times')
    print('\tMEDIAN', statistics.median(numbers))
    print('\tMEAN  ', statistics.mean(numbers))
    print('\tSTDEV ', statistics.stdev(numbers))


FUNCTION: f1 Used 3029 times
    MEDIAN 0.6871223449707031
    MEAN   0.6917867380307822
    STDEV  0.42611748137761174
FUNCTION: f2 Used 2912 times
    MEDIAN 0.054955482482910156
    MEAN   0.05610262627130026
    STDEV  0.03000829926668248
FUNCTION: f3 Used 2985 times
    MEDIAN 1.4472007751464844
    MEAN   1.4371055654145566
    STDEV  0.888658217522005
FUNCTION: f4 Used 2965 times
    MEDIAN 0.051975250244140625
    MEAN   0.05343245816673035
    STDEV  0.02957275548477728
FUNCTION: f5 Used 3109 times
    MEDIAN 0.05507469177246094
    MEAN   0.05678296204202234
    STDEV  0.031521596461048934


The winner:

def f4(seq):
    seen = set()
    return [x for x in seq if x not in seen and not seen.add(x)]

Fastest way to count the number of lines in a file

# The functions to compare
import codecs
import subprocess

def f1(filename):
    count = 0
    with codecs.open(filename, encoding='utf-8', errors='ignore') as f:
        for line in f:
            count += 1
    return count


def f2(filename):
    with codecs.open(filename, encoding='utf-8', errors='ignore') as f:
        # reads the whole file into memory
        return len(f.read().splitlines())


def f3(filename):
    return int(subprocess.check_output(['wc', '-l', filename]).split()[0])

# Assertion
filename = 'big.csv'
assert f1(filename) == f2(filename) == f3(filename) == 9999

# Reporting
import time
import statistics
import random

functions = f1, f2, f3
times = {f.__name__: [] for f in functions}

filenames = '', 'hacker_news_data.txt', 'yarn.lock', 'big.csv'
for _ in range(200):
    for fn in filenames:
        for func in functions:
            t0 = time.time()
            func(fn)  # the actual call being timed
            t1 = time.time()
            times[func.__name__].append((t1 - t0) * 1000)

for name, numbers in times.items():
    print('FUNCTION:', name, 'Used', len(numbers), 'times')
    print('\tMEDIAN', statistics.median(numbers))
    print('\tMEAN  ', statistics.mean(numbers))
    print('\tSTDEV ', statistics.stdev(numbers))


FUNCTION: f1 Used 800 times
    MEDIAN 5.852460861206055
    MEAN   25.403797328472137
    STDEV  37.09347378640582
FUNCTION: f2 Used 800 times
    MEDIAN 0.45299530029296875
    MEAN   2.4077045917510986
    STDEV  3.717931526478758
FUNCTION: f3 Used 800 times
    MEDIAN 2.8804540634155273
    MEAN   3.4988239407539368
    STDEV  1.3336427480808102


The winner:

def f2(filename):
    with codecs.open(filename, encoding='utf-8', errors='ignore') as f:
        return len(f.read().splitlines())


No conclusion really. Just wanted to point out that this is just a hint of a decent start when doing performance benchmarking of functions.

There is also the timeit built-in, which "provides a simple way to time small bits of Python code", but it has the disadvantage that your functions are not allowed to be as complex. Also, it's harder to generate multiple different fixtures to feed your functions without that fixture generation affecting the times.
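For reference, a minimal sketch of what that looks like with timeit, using the f1 and f2 from the boilerplate above:

import timeit

# Each call runs the statement 100,000 times in a fresh namespace;
# the setup string has to import whatever the statement needs.
print(timeit.timeit('f1(10)', setup='from __main__ import f1', number=100000))
print(timeit.timeit('f2(10)', setup='from __main__ import f2', number=100000))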

There are a lot of things that this boilerplate can improve, such as sorting by winner, showing percentage comparisons against the fastest, ASCII graphs, memory allocation differences, etc. That's up to you.

Fastest way to find out if a file exists in S3 (with boto3)

16 June 2017 4 comments   Web development, Python

tl;dr; It's faster to list objects with the prefix being the full key path than to use HEAD to find out if an object is in an S3 bucket.


I have a piece of code that opens up a user uploaded .zip file and extracts its content. Then it uploads each file into an AWS S3 bucket if the file size is different or if the file didn't exist at all before.

It looks like this:

for filename, filesize, fileobj in extract(zip_file):
    size = _size_in_s3(bucket, filename)
    if size is None or size != filesize:
        upload_to_s3(bucket, filename, fileobj)
        print('Updated!' if size else 'New!')

I'm using the boto3 S3 client so there are two ways to ask if the object exists and get its metadata.

Option 1: client.head_object

Option 2: client.list_objects_v2 with Prefix=${keyname}.

But why the two different approaches?

The problem with client.head_object is that it's odd in how it works. Sane but odd. If the object does not exist, boto3 raises a botocore.exceptions.ClientError which contains a response and in it you can look for exception.response['Error']['Code'] == '404'.

What I noticed was that if you use a try:except ClientError: approach to figure out if an object exists, you reset the client's connection pool in urllib3. So after an exception has happened, any other operations on the client causes it to have to, internally, create a new HTTPS connection. That can cost time.

I wrote and filed this issue on the boto3 GitHub project.

So I wrote two different functions to return an object's size if it exists:

from botocore.exceptions import ClientError


def _key_existing_size__head(client, bucket, key):
    """return the key's size if it exists, else None"""
    try:
        obj = client.head_object(Bucket=bucket, Key=key)
        return obj['ContentLength']
    except ClientError as exc:
        if exc.response['Error']['Code'] != '404':
            raise
And the contender...:

def _key_existing_size__list(client, bucket, key):
    """return the key's size if it exists, else None"""
    response = client.list_objects_v2(
        Bucket=bucket,
        Prefix=key,
    )
    for obj in response.get('Contents', []):
        if obj['Key'] == key:
            return obj['Size']

They both work. That was easy to test. But which is fastest?

Before we begin, which do you think is fastest? The head_object feels like it'll be able to send an operation to S3 internally to do a key lookup directly. But S3 isn't a normal database.

Here's the script partially cleaned up but should be easy to run.

The results

So I wrote a loop that ran 1,000 times and I made sure the bucket was empty so that 1,000 times the result of the iteration is that it sees that the file doesn't exist and it has to do a client.put_object.
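For context, a minimal sketch of what that loop looks like. The client, bucket_name and key pattern here are assumptions, not the exact script (which is linked above):

import random
import time

functions = _key_existing_size__list, _key_existing_size__head

times = {f.__name__: [] for f in functions}
for i in range(1000):
    key = 'benchmark/{}.txt'.format(i)  # hypothetical, uniquely named keys
    func = random.choice(functions)
    t0 = time.time()
    size = func(client, bucket_name, key)
    if size is None:
        # the object wasn't there, so upload it
        client.put_object(Bucket=bucket_name, Key=key, Body=b'some bytes')
    t1 = time.time()
    times[func.__name__].append(t1 - t0)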

Here are the results:

FUNCTION: _key_existing_size__list Used 511 times
    SUM    148.2740752696991
    MEAN   0.2901645308604679
    MEDIAN 0.2569708824157715
    STDEV  0.17742598775696436

FUNCTION: _key_existing_size__head Used 489 times
    SUM    249.79622673988342
    MEAN   0.510830729529414
    MEDIAN 0.4780092239379883
    STDEV  0.14352671121877011

Because it's network bound, it's really important to avoid the 'MEAN' and instead look at the 'MEDIAN'. My home broadband can cause temporary spikes.

Clearly, using client.list_objects_v2 is faster. It's 90% faster than client.head_object.

But note! This was 1,000 times of A) "does the file already exist?" and B) "No? OK, upload it". So the times there include all the client.put_object calls.

So why did I measure both? I.e. _key_existing_size__list + client.put_object versus _key_existing_size__head + client.put_object? The reason is that the approach of using try:except ClientError: followed by a client.put_object causes boto3 to create a new HTTPS connection in its pool. Again, see the issue which demonstrates this in different words.

What if the object always exists?

So, I simply run the benchmark again. The first time, it uploaded all 1,000 uniquely named objects. So running it a second time, every time the answer is that the object exists, and its size hasn't changed, so it never triggers the client.put_object.

Here are the results this time:

FUNCTION: _key_existing_size__list Used 495 times
    SUM    54.60546112060547
    MEAN   0.11031406286991004
    MEDIAN 0.08583354949951172
    STDEV  0.06339202669609442

FUNCTION: _key_existing_size__head Used 505 times
    SUM    44.59347581863403
    MEAN   0.0883039125121466
    MEDIAN 0.07310152053833008
    STDEV  0.054452842190700346

In this case, using client.head_object is faster. By 20% but the median time is 0.08 seconds! Even on a home broadband connection. In other words, I don't think that difference is significant.

One more time, excluding the client.put_object

The point of using client.list_objects_v2 instead of client.head_object was to avoid breaking the connection pool in urllib3 that boto3 manages somehow. Having to create a new HTTPS connection (and adding it to the pool) costs time, but what if we disregard that and compare the two functions "purely" on how long they take when the file does NOT exist? Remember, the second measurement above was when every object exists.

So we know it took 0.09 seconds and 0.07 seconds respectively for the two functions to figure out that the object does exist. How long does it take to figure out that the object does not exist, independent of any other operation? I.e. just try each one without doing a client.put_object afterwards. That means we avoid the bug, so the comparison is fair.

The results:

FUNCTION: _key_existing_size__list Used 499 times
    SUM    123.57429671287537
    MEAN   0.247643881188127
    MEDIAN 0.2196049690246582
    STDEV  0.18622877427652743

FUNCTION: _key_existing_size__head Used 501 times
    SUM    112.99495434761047
    MEAN   0.22553883103315464
    MEDIAN 0.2828958034515381
    STDEV  0.15342842113446084

The client.list_objects_v2 beats client.head_object by 30%. And it matters. Above I said that a 20% difference didn't matter, but now it does. That's because the time difference when it always finds the object was 0.013 seconds. When it comes to figuring out that the object did not exist, the time difference is 0.063 seconds. That's still a pretty small number but, hey, you've got to draw the line somewhere.

In conclusion

Using client.list_objects_v2 is a better alternative to using client.head_object.

If you think you'll often find that the object doesn't exist and needs a client.put_object then using client.list_objects_v2 is 90% faster. If you think you'll rarely need client.put_object (i.e. that most objects don't change) then client.list_objects_v2 is almost the same performance.

Why didn't I know about machma?!

07 June 2017 0 comments   Go, MacOSX, Linux

"machma - Easy parallel execution of commands with live feedback"

This is so cool!

It's a command line program that makes it really easy to run any command line program in parallel. I.e. in separate processes with separate CPUs.

Something network bound

Suppose I have a file like this:

▶ wc -l urls.txt
      30 urls.txt

▶ cat urls.txt | head -n 3

If I wanted to download all of these files with wget the traditional way would be:

▶ time cat urls.txt | xargs wget -q -P ./downloaded/
cat urls.txt  0.00s user 0.00s system 53% cpu 0.005 total
xargs wget -q -P ./downloaded/  0.07s user 0.24s system 2% cpu 14.913 total

▶ ls downloaded | wc -l

▶ du -sh downloaded
 21M    downloaded

So it took 15 seconds to download 30 files that totals 21MB.

Now, let's do it with machma instead:

▶ time cat urls.txt | machma -- wget -q -P ./downloaded/ {}
cat urls.txt  0.00s user 0.00s system 55% cpu 0.004 total
machma -- wget -q -P ./downloaded/ {}  0.53s user 0.45s system 12% cpu 7.955 total

That uses 8 separate processors (because my laptop has 8 CPUs).
Because 30 / 8 ~= 4, it roughly does 4 iterations.

But note, it took 15 seconds to download the 30 files synchronously. That's an average of 0.5s per file. The reason the parallel run took ~8 seconds instead of the theoretical 4 x 0.5 = 2 seconds is that it's at the mercy of bad luck, with some of those 30 downloads spiking a bit.

Something CPU bound

Now let's do something really CPU intensive; Guetzli compression.

▶ ls images | wc -l

▶ time find images -iname '*.jpg' | xargs -I {} guetzli --quality 85 {} compressed/{}
find images -iname '*.jpg'  0.00s user 0.00s system 40% cpu 0.009 total
xargs -I {} guetzli --quality 85 {} compressed/{}  35.74s user 0.68s system 99% cpu 36.560 total

And now the same but with machma:

▶ time find images -iname '*.jpg' | machma -- guetzli --quality 85 {} compressed/{}

processed 7 items (0 failures) in 0:10
find images -iname '*.jpg'  0.00s user 0.00s system 51% cpu 0.005 total
machma -- guetzli --quality 85 {} compressed/{}  58.47s user 0.91s system 546% cpu 10.857 total

Basically, it took only 11 seconds. This time there were fewer images (7) than there were CPUs (8), so basically the poor computer is doing super intensive CPU (and memory) work across all CPUs at the same time. The average time for each of these files is ~5 seconds, so it's really interesting that even in parallel, instead of taking a total of ~5 seconds it took almost double that.

In conclusion

Such a handy tool to have around for command line stuff. I haven't looked at its code much but it's almost a shame that the project only has 300+ GitHub stars. Perhaps because it's kinda complete and doesn't need much more work.

Also, if you attempt all the examples above you'll notice that when you use the ... | xargs ... approach the stdout and stderr is a mess. For wget, that's why I used -q to silence it a bit. With machma you get a really pleasant color coded live output that tells you the state of the queue, possible failures and an ETA.

Experimenting with Guetzli

24 May 2017 0 comments   MacOSX, Web development, Linux

tl;dr; Guetzli, the new JPEG compression program from Google, can save quite a few bytes with little loss of quality.

Inspired by this blog post about Guetzli I thought I'd try it out with something that's relevant to my project, 300x300 JPGs that can be heavily compressed.

So I installed it (with Homebrew) on my MacBook Pro (late 2013) and picked 7 JPGs I had, and use in SongSearch. Which is interesting because these JPEGs have already been compressed once. They are taken from converting from much larger PNGs with PIL (Pillow) at quality rating 80%. In other words, this is Guetzli on top of PIL.

I ran one iteration for every image for the following qualities: 85%, 90%, 95%, 99%, 100%.
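Roughly like this (a sketch; the filenames here are hypothetical and the real run used the 7 SongSearch JPGs mentioned above):

import os
import subprocess

images = ['one.jpg', 'two.jpg']  # hypothetical; really the 7 JPGs mentioned above
for quality in (85, 90, 95, 99, 100):
    sizes = []
    for image in images:
        out = 'compressed-{}-{}'.format(quality, image)
        subprocess.check_call(['guetzli', '--quality', str(quality), image, out])
        sizes.append(os.path.getsize(out))
    print(quality, sum(sizes) / len(sizes))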

The results on the size are as follows:

Image      Average Size (bytes)   % Smaller
original   23497.0                0
85%        16025.4                32%
90%        18829.4                20%
95%        21338.1                9.2%
99%        22705.3                3.4%
100%       22919.7                2.5%

So, for example, if you choose the 90% quality you save, on average, 4,667B (4.6KB).

As you might already know, Guetzli is incredibly memory hungry and very, very slow. Each image compression took on average 4-6 seconds (higher quality, shorter times). Meaning, if you like Guetzli you probably need to build around it so that the compression happens in a build step or async somewhere, and ideally you don't want to run too many compressions in parallel as it might cause CPU and memory overloading.

Now, how does it look?

Go to and stare at the screen to see if you can A) see which one is more compressed and B) if the one that is more compressed is too low quality.

What do you think?

Is it worth it?

Is the quality drop too much to save 10% on image sizes?

Please share your thoughts. Perhaps we can re-do this experiment with some slightly larger JPGs.

Fastest Redis configuration for Django

11 May 2017 0 comments   Django, Web development, Linux, Python

I have an app that does a lot of Redis queries. It all runs in AWS with ElastiCache Redis. Due to the nature of the app, it stores really large hash tables in Redis. The application then depends on querying Redis for these. The question is; What is the best configuration possible for the fastest service possible?

Note! Last month I wrote Fastest cache backend possible for Django which looked at comparing Redis against Memcache. Might be an interesting read too if you're not sold on Redis.


All options are variations on the compressor, serializer and parser, which are things you can override in django-redis. All have an effect on the performance. Even compression: if the number of bytes sent between Redis and the application is smaller, it should yield better network throughput.

Without further ado, here are the variations:

CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": config('REDIS_LOCATION', 'redis://') + '/0',
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        }
    },
    "json": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": config('REDIS_LOCATION', 'redis://') + '/1',
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
            "SERIALIZER": "django_redis.serializers.json.JSONSerializer",
        }
    },
    "ujson": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": config('REDIS_LOCATION', 'redis://') + '/2',
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
            "SERIALIZER": "fastestcache.ujson_serializer.UJSONSerializer",
        }
    },
    "msgpack": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": config('REDIS_LOCATION', 'redis://') + '/3',
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
            "SERIALIZER": "django_redis.serializers.msgpack.MSGPackSerializer",
        }
    },
    "hires": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": config('REDIS_LOCATION', 'redis://') + '/4',
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
            "PARSER_CLASS": "redis.connection.HiredisParser",
        }
    },
    "zlib": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": config('REDIS_LOCATION', 'redis://') + '/5',
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
            "COMPRESSOR": "django_redis.compressors.zlib.ZlibCompressor",
        }
    },
    "lzma": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": config('REDIS_LOCATION', 'redis://') + '/6',
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
            "COMPRESSOR": "django_redis.compressors.lzma.LzmaCompressor"
        }
    },
}
As you can see, they each have a variation on the OPTIONS.PARSER_CLASS, OPTIONS.SERIALIZER or OPTIONS.COMPRESSOR.

The default configuration is to use redis-py and to pickle the Python objects to a bytestring. Pickling in Python is pretty fast but it has the disadvantage that it's Python specific so you can't have a Ruby application reading the same Redis database.

The Experiment

Note how I have one LOCATION per configuration. That's crucial for the sake of testing. That way one database is all JSON and another is all gzip etc.

What the benchmark does is that it measures how long it takes to READ a specific key (called benchmarking). Then, once it's done that it appends that time to the previous value (or [] if it was the first time). And lastly it writes that list back into the database. That way, towards the end you have 1 key whose value looks something like this: [0.013103008270263672, 0.003879070281982422, 0.009411096572875977, 0.0009970664978027344, 0.0002830028533935547, ..... MANY MORE ....].
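In Django terms, the per-request measurement is roughly this (a sketch; the view plumbing is omitted and the key name is the one from the description above):

import time

from django.core.cache import caches


def measure_one(cache_name):
    cache = caches[cache_name]
    t0 = time.time()
    value = cache.get('benchmarking')  # the timed read
    t1 = time.time()
    times = value or []
    times.append(t1 - t0)
    cache.set('benchmarking', times)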

Towards the end, each of these lists are pretty big. About 500 to 1,000 depending on the benchmark run.

In the experiment I used wrk to basically bombard the Django server on the URL /random (which makes a measurement with a random configuration). On the EC2 experiment node it manages around 1,300 requests per second, which is a decent number for an application that does a fair amount of writes.

The way I run the Django server is with uwsgi like this:

uwsgi --http :8000 --wsgi-file fastestcache/ --master --processes 4 --threads 2

And the wrk command like this:

wrk -d30s  ""

(that, by default, runs 2 threads on 10 connections)

Once the benchmarking is done, I open http://localhost:8000/summary which spits out a table and some simple charts.

An Important Quirk

Time measurements over time
One thing I noticed when I started was that the final numbers' average was very different from the medians. That would indicate that there are spikes. The graph on the right shows the times put into that huge Python list for the default configuration for the first 200 measurements. Note that there are little spikes but generally quite flat over time once it gets past the beginning.

Sure enough, it turns out that in almost all configurations, the time it takes to make the query in the beginning is almost an order of magnitude slower than the times once the benchmark has been running for a while.

So in the test code you'll see that it chops off the first 10 times. Perhaps it should be more than 10. After all, if you don't like the spikes you can simply look at the median as the best source of conclusive truth.
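Concretely, something like this before computing the summary numbers (a sketch, assuming times is the list of measurements for one configuration):

import statistics

trimmed = times[10:]  # drop the warm-up measurements
print('MEDIAN', statistics.median(trimmed))
print('MEAN  ', statistics.mean(trimmed))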

The Code

The benchmarking code is here. Please be aware that this is quite rough. I'm sure there are many things that can be improved, but I'm not sure I'm going to keep this around.

The Equipment

The ElastiCache Redis I used was a cache.m3.xlarge (13 GiB, High network performance) with 0 shards and 1 node and no multi-zone enabled.

The EC2 node was a m4.xlarge Ubuntu 16.04 64-bit (4 vCPUs and 16 GiB RAM with High network performance).

Both the Redis and the EC2 were run in us-west-1c (North Virginia).

The Results

Here are the results! Sorry if it looks terrible on mobile devices.

root@ip-172-31-2-61:~# wrk -d30s  "" && curl ""
Running 30s test @
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     9.19ms    6.32ms  60.14ms   80.12%
    Req/Sec   583.94    205.60     1.34k    76.50%
  34902 requests in 30.03s, 2.59MB read
Requests/sec:   1162.12
Transfer/sec:     88.23KB
                         TIMES        AVERAGE         MEDIAN         STDDEV
json                      2629        2.596ms        2.159ms        1.969ms
msgpack                   3889        1.531ms        0.830ms        1.855ms
lzma                      1799        2.001ms        1.261ms        2.067ms
default                   3849        1.529ms        0.894ms        1.716ms
zlib                      3211        1.622ms        0.898ms        1.881ms
ujson                     3715        1.668ms        0.979ms        1.894ms
hires                     3791        1.531ms        0.879ms        1.800ms

Best Averages (shorter better)
██████████████████████████████████████████████████████████████   2.596  json
█████████████████████████████████████                            1.531  msgpack
████████████████████████████████████████████████                 2.001  lzma
█████████████████████████████████████                            1.529  default
███████████████████████████████████████                          1.622  zlib
████████████████████████████████████████                         1.668  ujson
█████████████████████████████████████                            1.531  hires
Best Medians (shorter better)
███████████████████████████████████████████████████████████████  2.159  json
████████████████████████                                         0.830  msgpack
████████████████████████████████████                             1.261  lzma
██████████████████████████                                       0.894  default
██████████████████████████                                       0.898  zlib
████████████████████████████                                     0.979  ujson
█████████████████████████                                        0.879  hires

Size of Data Saved (shorter better)
█████████████████████████████████████████████████████████████████  60K  json
██████████████████████████████████████                             35K  msgpack
████                                                                4K  lzma
█████████████████████████████████████                              35K  default
█████████                                                           9K  zlib
████████████████████████████████████████████████████               48K  ujson
█████████████████████████████████████                              34K  hires

Discussion Points


This experiment has led me to the conclusion that the best serializer is msgpack and the best compression is zlib. That is the best configuration for django-redis.
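Put together, that would look something like this in settings (a sketch of a single cache entry combining those two winners; the LOCATION value is a placeholder):

CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/0",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
            "SERIALIZER": "django_redis.serializers.msgpack.MSGPackSerializer",
            "COMPRESSOR": "django_redis.compressors.zlib.ZlibCompressor",
        }
    }
}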

msgpack has implementation libraries for many other programming languages. Right now that doesn't matter for my application, but since msgpack is both faster and more versatile (because it supports multiple languages), I conclude it's a better choice than the default pickle serializer.

Web Console trick to get all URLs into your clipboard

27 April 2017 0 comments   Javascript, Web development

This isn't rocket science in the world of Web Development but I think it's so darn useful that if you've been unlucky to miss this, it's worth mentioning one more time.

Suppose you're on a site with lots of links and you want to get a list of all URLs that contain the word "react" in the URL pathname. And you want to get this into your clipboard.

Here's what you do:

  1. Open your browser's Web Console. In Firefox it's Alt+Cmd+K on OSX (or F12). In Chrome it's Alt+Cmd+J.

  2. Type in the magic: copy([...document.querySelectorAll('a')].map(a => a.href).filter(u => u.match(/react/i)))

  3. Hit Enter, go to a text editor and paste like regular.

It should look something like this:


Web Console in Firefox
The example is just that. An example. The cool thing about this is:

The limit is your imagination. You can also do things like copy([...document.querySelectorAll('a')].filter(a => a.textContent.match(/react/i)).map(a => a.href)) and you filter by the links' text.

Best practice with retries with requests

19 April 2017 3 comments   Python

tl;dr; I have a lot of code that does response = requests.get(...) in various Python projects. This is nice and simple but the problem is that networks are unreliable. So it's a good idea to wrap these network calls with retries. Here's one such implementation.

The First Hack

import time
import requests


def get(url):
    try:
        return requests.get(url)
    except Exception:
        # sleep for a bit in case that helps
        time.sleep(1)
        # try again
        return get(url)

This, above, is a terrible solution. It might fail for sooo many reasons. For example SSL errors due to missing Python libraries. Or the URL might have a typo in it, like get('http:/').

Also, perhaps it did work but the response is a 500 error from the server and you know that if you just tried again, the problem would go away.


while True:
    response = get('')
    if response.status_code != 500:
        break
    # Hope it won't 500 a little later
    time.sleep(1)

What we need is a solution that does this right. Both for 500 errors and for various network errors.

The Solution

Here's what I propose:

import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry

def requests_retry_session(
    retries=3,
    backoff_factor=0.3,
    status_forcelist=(500, 502, 504),
    session=None,
):
    session = session or requests.Session()
    retry = Retry(
        total=retries, read=retries, connect=retries,
        backoff_factor=backoff_factor,
        status_forcelist=status_forcelist,
    )
    adapter = HTTPAdapter(max_retries=retry)
    session.mount('http://', adapter)
    session.mount('https://', adapter)
    return session

Usage example...

response = requests_retry_session().get('')

s = requests.Session()
s.auth = ('user', 'pass')
s.headers.update({'x-test': 'true'})

response = requests_retry_session(session=s).get(
    'https://www.example.com'  # placeholder URL
)

It's an opinionated solution but by its existence it demonstrates how it works so you can copy and modify it.

Testing The Solution

Suppose you try to connect to a URL that will definitely never work, like this:

t0 = time.time()
try:
    response = requests_retry_session().get(
        'http://localhost:9999',
    )
except Exception as x:
    print('It failed :(', x.__class__.__name__)
else:
    print('It eventually worked', response.status_code)
finally:
    t1 = time.time()
    print('Took', t1 - t0, 'seconds')

There is no server running in :9999 here on localhost. So the outcome of this is...

It failed :( ConnectionError
Took 1.8215010166168213 seconds


1.8 = 0 + 0.6 + 1.2

The algorithm for that backoff is documented here and it says:

A backoff factor to apply between attempts after the second try (most errors are resolved immediately by a second try without a delay). urllib3 will sleep for: {backoff factor} * (2 ^ ({number of total retries} - 1)) seconds. If the backoff_factor is 0.1, then sleep() will sleep for [0.0s, 0.2s, 0.4s, ...] between retries. It will never be longer than Retry.BACKOFF_MAX. By default, backoff is disabled (set to 0).

It does 3 retry attempts, after the first failure, with a backoff sleep escalation of: 0.6s, 1.2s.
So if the server never responds at all, after a total of ~1.8 seconds it will raise an error.

In this example, the simulation is matching the expectations (1.82 seconds) because my laptop's DNS lookup is near instant for localhost. If it had to do a DNS lookup, it'd potentially be slightly more on the first failure.

Works In Conjunction With timeout

Timeout configuration is not something you set up in the session. It's done on a per-request basis. httpbin makes this easy to test. With a sleep delay of 10 seconds it will never work (with a timeout of 5 seconds) but it does use the timeout this time. Same code as above but with a 5 second timeout:

t0 = time.time()
try:
    response = requests_retry_session().get(
        'http://httpbin.org/delay/10',  # assumption: httpbin's delay endpoint
        timeout=5,
    )
except Exception as x:
    print('It failed :(', x.__class__.__name__)
else:
    print('It eventually worked', response.status_code)
finally:
    t1 = time.time()
    print('Took', t1 - t0, 'seconds')

And the output of this is:

It failed :( ConnectionError
Took 21.829053163528442 seconds

That makes sense. Same backoff algorithm as before but now with 5 seconds for each attempt:

21.8 = 5 + 0 + 5 + 0.6 + 5 + 1.2 + 5

Works For 500ish Errors Too

This time, let's run into a 500 error:

t0 = time.time()
try:
    response = requests_retry_session().get(
        'http://httpbin.org/status/500',  # assumption: an endpoint that always returns 500
    )
except Exception as x:
    print('It failed :(', x.__class__.__name__)
else:
    print('It eventually worked', response.status_code)
finally:
    t1 = time.time()
    print('Took', t1 - t0, 'seconds')

The output becomes:

It failed :( RetryError
Took 2.353440046310425 seconds

Here, the reason the total time is 2.35 seconds and not the expected 1.8 is because there's a network delay between my laptop and the server responding with the 500s. I tested with a local Flask server that does the same thing and then it took a total of 1.8 seconds.


Yes, this suggested implementation is very opinionated. But when you've understood how it works, understood your choices and have the documentation at hand you can easily implement your own solution.

Personally, I'm trying to replace all my requests.get(...) with requests_retry_session().get(...) and when I'm making this change I make sure I set a timeout on the .get() too.

The choice to consider 500, 502 and 504 errors "retry'able" is actually very arbitrary. It totally depends on what kind of service you're reaching for. Some services only return 500'ish errors if something really is broken and is likely to stay like that for a long time. But this day and age, with load balancers protecting a cluster of web heads, a lot of 500 errors are just temporary. Obviously, if you're trying to do something very specific like requests_retry_session().post(...) with very specific parameters, you probably don't want to retry on 5xx errors.
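For instance, a sketch of how you might tune the knobs for such a case (the URL is a placeholder):

# Only retry on connection-level problems, never on 5xx responses,
# and give up sooner.
session = requests_retry_session(retries=2, status_forcelist=())
response = session.post('https://www.example.com/api', data={'key': 'value'}, timeout=3)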