
tl;dr; Fetching from IndexedDB is about 5-15 times faster than fetching with AJAX.

localForage is a wrapper that makes it easy to work with whatever local storage mechanism the browser offers. Different browsers have different implementations. By default, when you use localForage in Firefox it uses IndexedDB, which is asynchronous, meaning your script doesn't get blocked whilst waiting for data to be retrieved.

A good pattern for a "fat client" (lots of JavaScript, server primarily speaks JSON) is to download some data by AJAX as JSON and then store it in the browser. Next time you load the page, you first read from the local storage in the browser whilst you wait for fresh JSON from the server. That way you can present data on the screen sooner. (This is how Buggy works, blogged about it here)
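
In code, that pattern looks roughly like this (renderItems() and the /api/items.json URL are just made-up placeholders):

// Minimal sketch of the "local first, then fresh from the server" pattern.
// renderItems() and '/api/items.json' are made-up placeholders.
localforage.getItem('items', function(err, cached) {
  if (cached) {
    renderItems(cached);  // paint the stale copy right away
  }
  $.getJSON('/api/items.json', function(fresh) {
    renderItems(fresh);                   // re-render with up-to-date data
    localforage.setItem('items', fresh);  // refresh the local copy for next time
  });
});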

Another similar pattern is that you load everything by AJAX from the server, present it and store it in the local storage. Then periodically (or just on load) you send the most recent timestamp from the data you've received and the server gives you back everything that is new or has changed since that timestamp. The advantage of this is that the payloads stay small, but the server has to craft a custom response for each client, whereas one big fat blob of JSON can be cached better. However, oftentimes the data depends on your credentials/cookie anyway, so most likely you can't do much caching.
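
Roughly sketched, that delta approach could look like this (the /api/changes endpoint, its since parameter and applyDelta() are all invented for illustration):

// Rough sketch of the timestamp/delta pattern.
// '/api/changes', its 'since' parameter and applyDelta() are invented for illustration.
localforage.getItem('last_sync', function(err, lastSync) {
  $.getJSON('/api/changes', {since: lastSync || 0}, function(delta) {
    applyDelta(delta);  // merge the new and changed records into what's stored locally
    localforage.setItem('last_sync', delta.timestamp);
  });
});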

Anyway, whichever pattern you attempt, I thought it'd be interesting to get a feel for how much faster it is to retrieve from the browser's local storage compared to doing a plain old AJAX GET request. After all, browsers have seriously optimized AJAX requests these days, so basically the only thing standing in your way is network latency.

So I wrote a little comparison script that tests this. It's here: http://www.peterbe.com/localforage-vs-xhr/index.html

It retrieves a 225KB JSON blob from the server and measures how long it takes to become an object. Equally, it does the same with localforage.getItem, then runs this 10 times and compares the times. It's obviously no surprise that the local storage retrieval is faster; what's interesting is how big the difference is.
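
The essence of the measurement is just timing the two retrievals. A stripped-down sketch (the real page does a bit more; 'large.json' and the 'blob' storage key are placeholders here):

// Stripped-down sketch of the comparison; 'large.json' and the 'blob' key are placeholders.
function timeAjax(callback) {
  var t0 = performance.now();
  $.getJSON('large.json', function(data) {
    callback(performance.now() - t0);  // milliseconds until we have a usable object
  });
}

function timeLocalForage(callback) {
  var t0 = performance.now();
  localforage.getItem('blob', function(err, data) {
    callback(performance.now() - t0);
  });
}

timeAjax(function(ajaxMs) {
  timeLocalForage(function(lfMs) {
    console.log('AJAX:', ajaxMs.toFixed(1), 'ms  localForage:', lfMs.toFixed(1), 'ms');
  });
});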

What do you think? I'm sure both sides can be optimized, but at this level it feels like quite a realistic scenario.

A couple of weeks ago we had accidentally broken our production server (for a particular report) because of broken HTML. It was an unclosed tag which rendered everything after that tag as just plain white. Our comprehensive test suite failed to notice it because it didn't look at details like that. And when it was tested manually we simply missed the conditional situation in which it was triggered. Neither is a good excuse. So it got me thinking: how can we incorporate HTML (HTML5 in particular) validation into our test suite?

So I wrote a little gist and used it a bit on a couple of projects and was quite pleased with the results. But I thought this might be something worthwhile to keep around for future projects or for other people who can't just copy-n-paste a gist.

With that in mind I put together a little package with a README and a setup.py and now you can use it too.

There are however some caveats. Especially if you intend to run it as part of your test suite.

Caveat number 1

You can't flood htmlvalidator.nu. Well, you can, I guess. It would be really evil of you and kittens would die. If you have a test suite that does things like response = self.client.get(reverse('myapp:myview')) and there are many tests, you might be causing an obscene amount of HTTP traffic to them. Which brings us on to...

Caveat number 2

The htmlvalidator.nu site is written in Java and it's open source. You can download their validator and point django-html-validator to it locally. The way it works is basically java -jar vnu.jar myfile.html. However, it's slow. Like, really slow. It takes about 2 seconds to validate just one modest HTML file. So you need to be patient.

Premailer is probably my most successful open source project in recent years. I base that on the fact that 25 different people have committed to it.

Today I merged a monster PR by Michael Jason Smith of OnlineGroups.net.

What it does, basically, is make premailer work in Python 3, PyPy and Python 2.6. Check out the tox.ini file. Test coverage is still 100%.

If you look at the patch, the core of the change is actually surprisingly small. The majority of the "secret sauce" is a bunch of import statements that are split by if sys.version_info >= (3, ): plus various minor changes around UTF-8 encoding. The rest of the changes are basically test sit-ups.

A really interesting thing that hit us was that the code had assumptions about the order of things. Basically, the tests assumed that the order of certain things in the resulting output was predictable, even though it was produced from a dict. dicts are famously unreliable in terms of the order you get things out, and that's deliberate; it's a design choice. That it worked until now is not just luck, it's quite amazing.

Anyway, check it out. Now that we have a tox.ini file it should become much easier to run tests which I hope means patches will be better checked as they come in.

Multiple times per day I find myself wanting to find out what headers I get back from a URL, but I don't care about the response payload. The command to use then is:

curl -v http://www.peterbe.com/ > /dev/null

That'll print out all the headers sent and received. Nice and crisp.

So, because I type this every day, I made it into a shortcut script:

cd ~/bin
echo '#!/bin/bash
> set -x
> curl -v "$@" > /dev/null
> ' > c
chmod +x c

If it's not clear what the code looks like, it's this:

#!/bin/bash
set -x
curl -v "$@" > /dev/null

Now I can just type:

c http://www.peterbe.com

Or if I want to add some extra request headers for example:

c -H 'User-Agent: foobar' http://www.peterbe.com

Because this took me quite a while to figure out, I thought I'd share in case somebody else is falling into the same pit of confusion.

When you write an attribute directive in angularjs you might want to have it fed by an attribute value.
For example, something like this:

<div my-attribute="somevalue"></div>

How then do you create a new scope that takes that in? It's not obvious. Anyway, here's how you do it:

app.directive('myAttribute', function() {
    return {
        restrict: 'A',
        scope: {
            myAttribute: '='
        },
        template: '<div style="font-weight:bold">{{ myAttribute | number:2 }}</div>'
    };
});

The trick to notice is that the "self attribute" becomes the name of the attribute, but in camel case (my-attribute → myAttribute).
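
To make that concrete, here's a small usage sketch (the module name, MyCtrl and the rating value are invented for illustration):

// Usage sketch; the module name, MyCtrl and 'rating' are invented for illustration.
// In the markup you would then write:
//   <div ng-controller="MyCtrl"><div my-attribute="rating"></div></div>
var app = angular.module('myApp', []);

app.controller('MyCtrl', function($scope) {
  $scope.rating = 3.14159;  // the '=' binding feeds this value into the directive's scope
});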

Thanks @mythmon for helping me figure this out.

I just learned a really good bash trick. It's something I've wanted for a while but didn't really appreciate was possible, so I never even searched for it.

set -ex

Ok, one thing at a time.

set -e

What this does, at the top of your bash script, is make the script exit as soon as any line in it fails.
Suppose you have a script like this:

git pull origin master
find . | grep '\.pyc$' | xargs rm
./restart_server.sh

If the first line fails you don't want the second line to execute and you don't want the third line to execute either. The naive solution is to "and" them:

git pull origin master && find . | grep '\.pyc$' | xargs rm && ./restart_server.sh

but now it's just getting silly. (and is it even working?)

What set -e does is that it exits if any of the lines fail.

set -x

What this does is print each command that is about to be executed, prefixed with a little plus.
The output can look something like this:

+ rm -f pg_all.sql pg_all.sql.gz
+ pg_dumpall
+ apack pg_all.sql.gz pg_all.sql
++ date +%A
+ s3cmd put --reduced-redundancy pg_all.sql.gz s3://db-backups-peterbe/Sunday/
pg_all.sql.gz -> s3://db-backups-peterbe/Sunday/pg_all.sql.gz  [part 1 of 2, 15MB]
 15728640 of 15728640   100% in    0s    21.22 MB/s  done
pg_all.sql.gz -> s3://db-backups-peterbe/Sunday/pg_all.sql.gz  [part 2 of 2, 14MB]
 14729510 of 14729510   100% in    0s    21.50 MB/s  done
+ rm pg_all.sql pg_all.sql.gz

...when the script looks like this:

#!/bin/bash
set -ex
rm -f pg_all.sql pg_all.sql.gz
pg_dumpall > pg_all.sql
apack pg_all.sql.gz pg_all.sql
s3cmd put --reduced-redundancy pg_all.sql.gz s3://db-backups-peterbe/`date +%A`/
rm pg_all.sql pg_all.sql.gz

And to combine these two gems you simply put set -ex at the top of your bash script.

Thanks @bramwelt for showing me this one.

Do you want to display some code in a Keynote presentation?

It's easy. All you need is Homebrew installed.

First you need to install the program highlight.

$ brew install highlight

So you have a piece of code. For example, some Python code. Then take that snippet of code and save it to a file like code.py. Now all you need to do is run this:

$ highlight -O rtf code.py | pbcopy

Then, switch back into Keynote and simply paste.

But if you don't want to create a file of the snippet, simply copy the snippet from within your editor and run this:

$ pbpaste | highlight -S py -O rtf | pbcopy

The -S py means "syntax is py (for python)".

You can use highlight for a bunch of other things like creating HTML. See man highlight for more tips.

First of all, Dropbox is awesome. When I plug my iPhone into my laptop it automatically backs up all my photos to Dropbox, quickly and conveniently. If I want to share a photo or a movie I can easily get a "Share link" to show friends and family.

The only disadvantage with Dropbox is that you only get 5 GB for free. Upgrading to the 100 GB Pro account is $10 per month. Not a massive amount, but my inner geek uses that as an excuse to hack around it.

So, what I do is set up an S3 bucket. Let's call it camera-uploads-peterbe and then I use s3cmd to upload all the pictures by month. Like this:

$ cd Dropbox/Camera\ Uploads/
$ s3cmd put --reduced-redundancy 2014-01-* s3://camera-uploads-peterbe/2014/
$ rm 2014-01-*

I'm sure that by writing this here, people will write comments suggesting much better ways to do proper offsite backups, and I'm eager to hear them, but for me S3 feels great. It's a tool I'm very familiar with. It's a very established and mature business so it's unlikely to go away any time soon. It's secure and it's incredibly cheap because there's virtually no transfer of these files.

Also, I like how it's clearly explicit and simple. I don't have to worry about some obscure background app not working properly and me not noticing.

Let's see what the future holds. At the moment I'm just worrying about storing the files but it's a bit clunky to retrieve individual files.

One of my most popular GitHub Open Source projects is premailer. It's a Python library for combining HTML and CSS into HTML with all the CSS inlined into the tags. This is a useful and necessary technique when sending HTML emails because you can't send those with an external CSS file (or even a CSS style tag in many cases).

The project has had 23 contributors so far and, as always, people come in, get some itch they have scratched, and then leave. I really try to keep good test coverage, and when people come with code I almost always require that it comes with tests too.

But sometimes you miss things. Also, this project was born as a weekend hack that slowly morphed into an actual package and its own repository and I bet there was code from that day that was never fully test covered.

So today I combed through the code and plugged all the holes where there wasn't test coverage.
Also, I set up Coveralls (project page), which is an awesome service that hooks itself up with Travis CI so that on every build and every pull request, the tests are run with nosetests --with-cover and the coverage output is reported to Coveralls.

The relevant changes you need to make are:

1) You need to go to coveralls.io (sign in with your GitHub account) and add the repo.
2) Edit your .travis.yml file to contain the following:

before_install:
    - pip install coverage
...
after_success:
    - pip install coveralls
    - coveralls

And you need to execute your tests so that coverage is calculated (the coverage module stores everything in a .coverage file, which coveralls analyzes and sends). So in my case I changed it to this:

script:
    - nosetests premailer --with-cover --cover-erase --cover-package=premailer

3) You must also give coverage (and thus Coveralls) some clues so that it reports only on the relevant files. Here's what mine looked like:

[run]
source = premailer

[report]
omit = premailer/test*

Now, I get to have a cute "coverage: 100%" badge in the README and when people post pull requests Coveralls will post a comment to reflect how the pull request changes the test coverage.

I am so grateful for all these wonderful tools. And it's all free too!

I just rolled out a change here on my personal blog which I hope will make my few visitors happy.

Basically, when you hover over a link (a local link) long enough, it prefetches it (with AJAX) so that if you do click, it's hopefully already cached in your browser.

If you hover over a link and almost instantly hover out it cancels the prefetching. The assumption here is that if you deliberately put your mouse cursor over a link and proceed to click on it you want to go there. Because your hand is relatively slow I'm using the opportunity to prefetch it even before you have clicked. Some hands are quicker than others so it's not going to help for the really quick clickers.

What I also had to do was set a Cache-Control header of 1 hour on every page so that the browser can learn to cache it.

The effect is that when you do finally click the link, by the time your browser loads it and changes the rendered output it'll hopefully be able to render it from its cache and thus it becomes visually ready faster.

Let's try to demonstrate this with this horrible animated gif:
(or download the screencast.mov file)

Screencast
1. Hover over a link (in this case the "Now I have a Gmail account" from 2004)
2. Notice how the Network panel preloads it
3. Click it after a slight human delay
4. Notice that when the clicked page is loaded, it's served from the browser cache
5. Profit!

So the code that does this is quite simply:

$(function() {
  var prefetched = [];
  var prefetch_timer = null;
  $('div.navbar, div.content').on('mouseover', 'a', function(e) {
    var value = e.target.attributes.href.value;
    if (value.indexOf('/') === 0) {
      if (prefetched.indexOf(value) === -1) {
        if (prefetch_timer) {
          clearTimeout(prefetch_timer);
        }
        prefetch_timer = setTimeout(function() {
          $.get(value, function() {
            // necessary for $.ajax to start the request :(
          });
          prefetched.push(value);
        }, 200);
      }
    }
  }).on('mouseout', 'a', function(e) {
    if (prefetch_timer) {
      clearTimeout(prefetch_timer);
    }
  });
});

Also, available on GitHub.

I'm excited about this change for a few reasons:

  1. On mobile, where you might be on a non-wifi data connection you don't want this. There you don't have the mouse event onmouseover triggering. So people on such devices don't "suffer" from this optimization.
  2. It only downloads the HTML which is quite light compared to static assets such as pictures but it warms up the server-side cache if needs be.
  3. It's much more targeted than a general prefetch meta header.
  4. Most likely content will appear rendered to your eyes faster.