Peterbe.com

A blog and website by Peter Bengtsson


Remember mincss from a couple of days ago? Now it supports downloading the HTML to analyze using PhantomJS. That's pretty exciting because PhantomJS actually executes Javascript. It's a headless WebKit engine (a web browser without a graphical user interface). What mincss does is invoke a simple script like this:

var page = require('webpage').create();
page.open(phantom.args[0], function () {
  console.log(page.content);
  phantom.exit();
});

which allows any window.onload events to fire, which might create more DOM nodes. So, like in this example, it'll spit out HTML that contains a <p class="bar"> tag that you otherwise wouldn't get with Python's urllib.urlopen().
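
In case you're wondering how that output gets back into Python land, here's a minimal sketch (not necessarily how mincss does it internally; the script path and error handling are just my assumptions) of shelling out to the phantomjs binary and capturing stdout:

import subprocess

def download_with_phantomjs(url, script_path='dump_content.js'):
    # assumes `phantomjs` is on $PATH and script_path contains the
    # little page.content-dumping script shown above
    process = subprocess.Popen(
        ['phantomjs', script_path, url],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    stdout, stderr = process.communicate()
    if process.returncode != 0:
        raise RuntimeError('PhantomJS failed: %r' % stderr)
    return stdout  # the post-onload HTML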

The feature was just added (version 0.6.0) and I wouldn't be surprised if there are dragons there because I haven't tried it on a lot of sites. And at the time of writing, I was not able to compile it on my Ubuntu 64bit server so I haven't put it into production yet.

Anyway, with this you can hopefully sprinkle fewer of those /* no mincss */ comments into your CSS.

First of all, to find out what mincss is, read this blog post which explains what the heck this new Python tool is.

My personal website is an ideal candidate for using mincss. It uses an un-customized Bootstrap CSS which weighs over 80Kb (minified), and on every page hit the rendered HTML is served directly from memcache, so dynamic slowness is not a problem. What I can do is run mincss just before the rendered (from Django) HTML is stored in memcache. I can also take ALL inline style blocks and all link tags and combine them into one big inline style block. That means I can reduce the additional HTTP connections needed down to zero! Remember, "Minimize HTTP Requests" is the number one web performance optimization rule.
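
To make the idea concrete, here's a simplified sketch of that step (not the exact code used on this site; it assumes the inline results expose the same .after attribute as the link results do):

from mincss.processor import Processor

def combined_style_block(url):
    # run mincss on the page and collect only the CSS that's actually used
    p = Processor()
    p.process_url(url)
    p.process()
    used_css = [each.after for each in p.inlines + p.links]
    # one big <style> block instead of any <link rel="stylesheet"> tags
    return '<style type="text/css">%s</style>' % '\n'.join(used_css)

The resulting block can then be swapped in for the original link tags before the HTML goes into memcache.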

To get a preview of that, compare http://www.peterbe.com/about with http://www.peterbe.com/about3. Visually no difference. But view the source :)

Before:
Document size: Before

After:
Document size: After

Voila! One HTTP request less and 74Kb less!

Now, as if that wasn't good enough, let's take into account that the browser won't start rendering the page until the HTML and ALL CSS is "downloaded" and parsed. Without further ado, let's look at how much faster this is now:

Before:
Waterfall view: Before
report

After:
Waterfall view: After
report

How cool is that! The "Start Render" event is fired after 0.4 seconds instead of 2 seconds!

Note how the "Content Download" isn't really changing. That's because no matter what the CSS is, there's still a tonne of images yet to download.

That example page is interesting too because it contains a piece of Javascript, fired on window.onload, that creates little permalink links in the document. The CSS it needs is protected thanks to the /* no mincss */ trick, as you can see here.

The code that actually implements mincss here is still very rough and is going to need some more polishing before I publish it further.

Anyway, I'm really pleased with the results. I'm going to tune the implementation a bit further and eventually apply this to all pages here on my blog. Yes, I understand that the CSS, if implemented as a link, can be reused thanks to the browser's cache but visitors of my site rarely check out more than one page. In fact, the number of "pages per visit" on my blog is 1.17 according to Google Analytics. Even if this number was bigger I still think it would be a significant web performance boost.

UPDATE

Steve Souders points out a flaw in the test. See his full comment below. Basically, what appears to happen in the first report is that IE8 downloads the file c98c3dfc8525.css twice even though it comes back as a 200 the first time. No wonder that delays the "Start Render" time.

So, I re-ran the test with Firefox instead (still from the US East coast):

Before:
WebpageTest before (Firefox)
report

After:
WebpageTest after (Firefox)
report

That still shows a performance boost from 1.4 seconds down to 0.6 seconds when run using Firefox.

Perhaps it's a bug in Webpagetest or perhaps it's simply how IE8 works. In a sense it "simulates" the advantages of reducing the dependency on extra HTTP requests.

A project I started before Christmas (i.e. about a month ago) is now production ready.

mincss (code on github) is a tool that, when given a URL (or multiple URLs), downloads that page and all its CSS, checks each and every selector in the CSS, and finds out which ones aren't used. The outcome is a copy of the original CSS but with the selectors not found in the document(s) removed. It goes something like this:

>>> from mincss.processor import Processor
>>> p = Processor()
>>> p.process_url('http://www.peterbe.com')
>>> p.process()
>>> p.inlines
[]
>>> p.links
[<mincss.processor.LinkResult object at 0x10a3bbe50>, <mincss.processor.LinkResult object at 0x10a4d4e90>]
>>> one = p.links[0]
>>> one.href
'//d1ac1bzf3lrf3c.cloudfront.net/static/CACHE/css/c98c3dfc8525.css'
>>> len(one.before)
83108
>>> len(one.after)
10062
>>> one.after[:70]
u'header {display:block}html{font-size:100%;-webkit-text-size-adjust:100'

To whet your appetite: running it on any one of my pages here on my blog, the CSS goes from 82Kb down to 7Kb. Before you say anything; yes, I know it's because I'm using a massive (uncustomized) Twitter Bootstrap file that contains all sorts of useful CSS of which I'm not using more than 10%. And yes, the 10% on one page might be different from the 10% on another page, and between them it's something like 15%. Add a third page and it's 20%, etc. But because I'm just doing one page at a time, I can be certain it will be enough.

One way of using mincss is to run it on the command line, look at the output, then audit it and give yourself an idea of which selectors aren't used. A safer way is to just do one page at a time.

The way it works is that it parses the CSS payload (from inline blocks or link tags) with a relatively advanced regular expression and then loops over each selector one at a time and runs it with cssselect (which uses lxml) to see if the selector is used anywhere. If the selector isn't used anywhere, it is removed.
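
The core idea is easy to demonstrate. This isn't mincss's actual code, just a small sketch of checking whether a selector matches anything in a parsed document using lxml and cssselect:

from lxml import html
from lxml.cssselect import CSSSelector

def selector_is_used(document_html, selector):
    """Return True if the CSS selector matches at least one element."""
    tree = html.fromstring(document_html)
    try:
        matcher = CSSSelector(selector)
    except Exception:
        # pseudo-classes like :hover can't be evaluated against a static
        # DOM, so a real tool has to special-case them; keep them to be safe
        return True
    return len(matcher(tree)) > 0

page = "<html><body><p class='bar'>Hi</p></body></html>"
assert selector_is_used(page, 'p.bar')
assert not selector_is_used(page, 'div#sidebar')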

I know I'm not explaining it well so I put together a little example implementation which you can download and run locally just to see how it works.

Now, regarding Javascript and DOM manipulations and stuff; there's not a lot you can do about that. If you know exactly what your Javascript does, for example that it creates a div with class loggedin-footer, you can prepare your CSS to tell mincss to leave that block alone by adding /* no mincss */ somewhere in it. Again, look at the example implementation for how this can work.
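
For illustration, the trick looks something like this in a stylesheet (the .loggedin-footer rule is just a made-up example):

.loggedin-footer {
    /* no mincss */
    font-size: 11px;
    color: #666;
}

The comment anywhere inside the block tells mincss to keep the whole block even though nothing in the downloaded HTML matches the selector.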

An alternative is that, instead of using urllib.urlopen(), you could use a headless browser like PhantomJS which will download the page with some Javascript rendering, but you'll never cover all bases. For example, your page might have something like this:

$(function() {
  $.getJSON('/is-logged-in', function(res) {
    if (res.logged_in) {
      $('<div class="loggedin-footer">').appendTo($('#footer'));
    }
  });
});

But let's not focus on what it cannot do.

I think this can be a great tool for all of us who either just download a bloated CSS framework or have a legacy CSS file that hasn't been updated as new HTML is added and removed.

The code is Open Source (of course) and patiently awaiting your pull requests. There's almost full test coverage and there's still work to be done to improve the code such as finding more bugs and optimizing.

Using the proxy with '?MINCSS_STATS=1'
Also, there's a rough proxy server you can start that attempts to run it on any URL. You start it like this:

pip install Flask
cd mincss/proxy
python app.py

and then you just visit something like http://localhost:5000/www.peterbe.com/about and you can see it in action. That script needs some love since it's using lxml to render the processed output which does weird things to some DOM elements.

I hope it's of use to you.

UPDATE

Published a blog post about using mincss in action

UPDATE 2

mincss now supports downloading using PhantomJS, which means that Javascript rendering will work. See this announcement

UPDATE 3

Version 0.8 is 500% faster now for large documents. Make sure you upgrade!

So here's my latest little fun side-project: HUGEpic.io http://hugepic.io

Zoomed in on Mona Lisa
It's a web app for uploading massive pictures and looking at them like maps.

The advantages with showing pictures like this are:

  • you only download what you need
  • you can send a permanent link to a picture at a particular location with a particular zoom level
  • you can draw annotations on a layer on top of the image

All the code is here on Github and as you can see it's a Tornado app that uses two databases: MongoDB and Redis. When it connects to MongoDB it uses the new Tornado-specific driver called Motor, which is great.

Before I get to the juicy client-side stuff, let me talk about something awesome in between Tornado and Javascript, namely: RQ.
It's an awesomely simple Python message queue that only works with Python, Redis and UNIXy systems. All checks. My only real experience with message queues has honestly been with Celery, which is also great but a right pain compared to RQ. With RQ, all I do is reduce the heavy tasks down to pure Python functions, for example make_thumbnail(), and then I simply do this:

from utils import make_thumbnail
from rq import Queue

# self.redis is the Redis connection shared by the Tornado application
queue = Queue(connection=self.redis)
job = queue.enqueue(
    make_thumbnail,
    image,
    width,
    extension,
    self.application.settings['static_path']
)

and that's it. Starting rqworker on the command line (from somewhere where __import__('utils.make_thumbnail') makes sense) and we're off!

You might think that using a message queue is all fancy pants and just something I need to bother myself with because of Tornado's eventloop nature. But no, it's so much more than that. When a massive 5 Mb JPG is uploaded, a little algorithm figures out roughly how many zoom levels can be used and what ALL the 256x256 tiles are going to be for each resized version of the original. Then it needs to generate a thumbnail to represent that JPG, and all the tiles and the thumbnail need to go through an optimizer (I'm using jpegoptim and optipng).
Lastly, to be able to serve all tiles from a fast CDN I have to upload every single tile to a Reduced Redundancy Amazon S3 storage that the Amazon CloudFront CDN is hooked up to. This might sometimes fail due to network hiccups and must be resilient so it can continue where it left off.
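
To give a feel for the kind of arithmetic involved, here's a rough sketch (my own back-of-the-envelope version, not the actual HUGEpic code) of working out zoom levels and tile counts for an upload:

import math

TILE_SIZE = 256

def zoom_levels(width, height):
    # enough levels that the most zoomed-out view fits in one 256x256 tile
    return int(math.ceil(math.log(max(width, height) / float(TILE_SIZE), 2))) + 1

def count_tiles(width, height):
    # total number of 256x256 tiles across all zoom levels
    total = 0
    levels = zoom_levels(width, height)
    for zoom in range(levels):
        # at each step down in zoom, the image is scaled by a factor of 2
        scale = 2 ** (levels - 1 - zoom)
        w = int(math.ceil(width / float(scale * TILE_SIZE)))
        h = int(math.ceil(height / float(scale * TILE_SIZE)))
        total += w * h
    return total

print zoom_levels(5000, 3500), count_tiles(5000, 3500)

For a 5000x3500 pixel image that works out to 6 zoom levels and a few hundred tiles, each of which has to be generated, optimized and uploaded, hence the queue.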

All of that stuff takes a very long time, but it's all made much easier and much more comfortable thanks to RQ.

Now, on the front end. The original inspiration for this was a library called Polymaps, which isn't bad, but when I later switched to Leaflet I was blown away. It was lighter and smoother running, and it has an absolutely stunning API that even I could understand.

And that led me to find another amazingly neat library, for Leaflet, called Leaflet.draw which makes it really easy to add tools for drawing on the pictures. You can draw lines, rectangles, circles, polygons and drop markers. And for all of them it was relatively easy to bind cute popup bubbles so you can type in comments like this or this.

And lastly, there's Filepicker. It's a brilliant web service that simply takes care of your uploads. Uploading an 8 Mb JPG through a little file upload form not only takes an incredibly long time, it's fragile and has no good default UX. Filepicker takes care of all of that and makes it possible to upload files the way you want. For example, if you use Google Drive to back up your massive pictures, Filepicker can handle that. And Dropbox. And Box. And of course, regular drag-and-drop uploads, but with a lovely progress bar indicator and thumbnail preview.

Uploading by URL
There's also uploading by simply entering a URL. So, try finding a picture on Google Images, click on one, then in the right-hand bar right-click the URL, "Copy Link Location" and paste that into HUGEpic to test.

So for a weekend project that has taken only a couple of weeks, I'm quite proud. My hopes for big success are nil, but it has been a great learning experience mixing interesting client-side programming, web programming, and interesting CPU-bound and networking challenges.

On iPhone Safari
Oh, and did I mention it works great on mobile too? Even the file uploading part. Thanks to Filepicker.

Here are two perfectly good ways to turn 123456789 into "123,456,789":

import locale

def f1(n):
    locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
    return locale.format('%d', n, True)

def f2(n):
    r = []
    for i, c in enumerate(reversed(str(n))):
        if i and (not (i % 3)):
            r.insert(0, ',')
        r.insert(0, c)
    return ''.join(r)

assert f1(123456789) == '123,456,789'
assert f2(123456789) == '123,456,789'    

Which one do you think is the fastest?

Easy, write a benchmark:

from time import time

for f in (f1, f2):
    t0 = time()
    for i in range(1000000):
        f(i)
    t1 = time()
    print f.func_name, t1 - t0, 'seconds'

And, drumroll, the results are:

peterbe@mpb:~$ python benchmark.py
f1 19.4571149349 seconds
f2 6.30253100395 seconds

The f2 one looks very plain and a good candidate for PyPy:

peterbe@mpb:~$ pypy dummy.py
f1 14.367814064 seconds
f2 0.77246594429 seconds

...which is an 800% speed boost, which is cute. It's also kinda ridiculous that each iteration of f2 takes 0.0000008 seconds. What's that!?

An obvious albeit somewhat risky optimization on f1 is this:

import locale
locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
def f1(n):
    return locale.format('%d', n, True)

...and now we get:

peterbe@mpb:~$ python dummy.py
f1 16.3811080456 seconds
f2 6.14097189903 seconds

Before you say it, yes I'm aware the locale can do much more but I was just curious and I scratched it.

UPDATE

Dave points out the built-in function format (which was added in Python 2.6). So let's add it and kick ass!

def f3(i):
    return format(i, ',')

And we run the tests again:

peterbe@mpb:~$ python dummy.py
f1 16.4227910042
f2 6.13625884056
f3 0.892002105713
peterbe@mpb:~$ pypy dummy.py
f1 4.61941003799
f2 0.720993041992
f3 0.26224398613

There's your winner!

I'm quite fond of hastebin.com. It's fast. It's reliable. And it's got nice keyboard shortcuts that work for my taste.

So, I created a little program to quickly throw things into hastebin. You can have one too:

First create ~/bin/hastebinit and paste in:

#!/usr/bin/python

import json
import os
import sys
import urllib2

URL = 'http://hastebin.com/documents'

def run(*args):
    if args:
        content = [open(x).read() for x in args]
        extensions = [os.path.splitext(x)[1] for x in args]
    else:
        # no file arguments; read whatever is piped in on stdin
        content = [sys.stdin.read()]
        extensions = [None]

    for i, each in enumerate(content):
        req = urllib2.Request(URL, each)
        response = urllib2.urlopen(req)
        the_page = response.read()
        key = json.loads(the_page)['key']
        url = "http://hastebin.com/%s" % key
        if extensions[i]:
            url += extensions[i]
        print url

if __name__ == '__main__':
    sys.exit(run(*sys.argv[1:]))

Then run: chmod +x ~/bin/hastebinit

Now you can do things like:

$ cat ~/myfile | hastebinit
$ hastebinit < ~/myfile
$ hastebinit ~/myfile myotherfile

Hopefully it'll one day help at least one more soul out there!

I've finally had time to sort out the django-mongokit code so it's now fully compatible with Django 1.4.

Because I don't personally use the project, I've sort of got myself lost in the various patches from some awesome contributors who keep it in check.

Also, thanks to Marc Abramowitz who added a tox.ini, which I've now updated to also test Python 2.7 and Django 1.4.

Go forth and build awesome Django apps with MongoKit, which I still think is the best wrapper available on top of PyMongo out there.

(This post is a response to Richard Patchet's request for tips on how to actually use premailer)

First of all, premailer is a Python library that takes an HTML document and transforms its <style> tags into inline style attributes on the HTML itself. This comes in very handy when you need to take a nicely formatted HTML newsletter template and prepare it before sending, because when you send HTML emails you can't reference an external .css file.

So, here's how to turn it into a command line script.

First, install, then write the script:

$ pip install premailer
$ touch ~/bin/run-premailer.py
$ chmod +x ~/bin/run-premailer.py

Now, you might want to do this differently but this should get you places:

#!/usr/bin/env python

from premailer import transform

def run(files):
    try:
        base_url = [x for x in files if x.count('://')][0]
        files.remove(base_url)
    except IndexError:
        base_url = None

    for file_ in files:
        html = open(file_).read()
        print transform(html, base_url=base_url)

if __name__ == '__main__':
    import sys
    run(sys.argv[1:])

To test it, I've made a sample HTML page that looks like this:

<html>
    <head>
        <title>Test</title>
        <style>
        h1, h2 { color:red; }
        strong {
          text-decoration:none
          }
        p { font-size:2px }
        p.footer { font-size: 1px}
        p a:link { color: blue; }
        </style>
    </head>
    <body>
        <h1>Hi!</h1>
        <p><strong>Yes!</strong></p>
        <p class="footer" style="color:red">Feetnuts</p>
        <p><a href="page2/">Go to page 2</a></p>
    </body>
</html>

Cool. So let's run it: $ run-premailer.py test.html

<html>
  <head>
    <title>Test</title>
  </head>
  <body>
        <h1 style="color:red">Hi!</h1>
        <p style="font-size:2px"><strong style="text-decoration:none">Yes!</strong></p>
        <p style="{color:red; font-size:1px} :link{color:red}">Feetnuts</p>
    <p style="font-size:2px"><a href="page2/" style=":link{color:blue}">Go to page 2</a></p>
    </body>
</html>

Note that premailer supports converting relative URLs, so let's actually use that:
$ run-premailer.py test.html http://www.peterbe.com

<html>
  <head>
    <title>Test</title>
  </head>
  <body>
        <h1 style="color:red">Hi!</h1>
        <p style="font-size:2px"><strong style="text-decoration:none">Yes!</strong></p>
        <p style="{color:red; font-size:1px} :link{color:red}">Feetnuts</p>
    <p style="font-size:2px"><a href="http://www.peterbe.com/page2/" 
     style=":link{color:blue}">Go to page 2</a></p>
    </body>
</html>

I'm sure you can think of many, many ways to improve that. Mayhaps use argparse or something fancy to allow for more options. Mayhaps make it so that you can supply named .css files on the command line that get automagically inserted on the fly.
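
For instance, a rough sketch of that last idea (my own hypothetical extension, not part of premailer) could inject any .css files given on the command line into the <head> before calling transform():

#!/usr/bin/env python

import sys
from premailer import transform

def run(args):
    css_files = [x for x in args if x.endswith('.css')]
    html_files = [x for x in args if not x.endswith('.css')]
    # build one <style> block out of all the named .css files
    extra_css = '\n'.join(open(x).read() for x in css_files)
    style_block = '<style>%s</style></head>' % extra_css
    for file_ in html_files:
        html = open(file_).read()
        if extra_css:
            # naive: assumes the document has a </head> tag
            html = html.replace('</head>', style_block, 1)
        print transform(html)

if __name__ == '__main__':
    run(sys.argv[1:])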

The @staticmethod decorator is nothing new. In fact, it was added in Python 2.2. However, it's not until now, in 2012, that I have genuinely fallen in love with it.

First a quick recap to remind you how @staticmethod works.

class Printer(object):

    def __init__(self, text):
        self.text = text

    @staticmethod
    def newlines(s):
        return s.replace('\n','\r')

    def printer(self):
        return self.newlines(self.text)

p = Printer('\n\r')
assert p.printer() == '\r\r'

So, it's a function that has nothing to do with the instance but still belongs to the class. It belongs to the class from the observer's structural point of view. Like, clearly the newlines function is related to the Printer class. The alternative is:

def newlines(s):
    return s.replace('\n','\r')

class Printer(object):

    def __init__(self, text):
        self.text = text

    def printer(self):
        return newlines(self.text)

p = Printer('\n\r')
assert p.printer() == '\r\r'

It's the exact same thing and one could argue that the function has nothing to do with the Printer class. But ask yourself (by looking at your code); how many times do you have classes with methods on them that take self as a parameter but never actually use it?

So, now for the trump card that makes it worth the effort of making it a staticmethod: object orientation. How would you do this neatly without OO?

class UNIXPrinter(Printer):

    @staticmethod
    def newlines(s):
        return s.replace('\n\r', '\n')

p = UNIXPrinter('\n\r')
assert p.printer() == '\n'  

Can you see it? It's ideal for little functions that should be domesticated by the class but have nothing to do with the instance (i.e. self). I used to think it made a pure-looking thing look more complex than it needs to be. But now, I think it looks great!

I've blogged before about how this site can easily push out over 2,000 requests/second using only 6 WSGI workers, excluding latency. The reason that's possible is that whole pages can be cached server-side. What actually happens is that the whole rendered HTML blob is stored in the cache server (Redis in my case) so that no database queries are needed at all.

I wanted my site to still "feel" dynamic in the sense that once you post a comment (and it's published), the page automatically invalidates the cache and thus the user doesn't have to refresh his browser when he knows it should have changed. To accomplish this I used a hacked cache_page decorator that makes the cache key depend on the content of the page. Here's the code I actually use today for the home page:

def _home_key_prefixer(request):
    if request.method != 'GET':
        return None
    prefix = urllib.urlencode(request.GET)
    cache_key = 'latest_comment_add_date'
    latest_date = cache.get(cache_key)
    if latest_date is None:
        # when a blog comment is posted, the blog modify_date is incremented
        latest, = (BlogItem.objects
                   .order_by('-modify_date')
                   .values('modify_date')[:1])
        latest_date = latest['modify_date'].strftime('%f')
        cache.set(cache_key, latest_date, 60 * 60)
    prefix += str(latest_date)

    try:
        redis_increment('homepage:hits', request)
    except Exception:
        logging.error('Unable to redis.zincrby', exc_info=True)

    return prefix

@cache_page_with_prefix(60 * 60, _home_key_prefixer)
def home(request, oc=None):
    ...
    try:
        redis_increment('homepage:misses', request)
    except Exception:
        logging.error('Unable to redis.zincrby', exc_info=True)
    ...
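
The cache_page_with_prefix decorator itself isn't shown here; a minimal sketch of what such a decorator can look like (an approximation, not the exact one used on this site) is:

from functools import wraps
from django.core.cache import cache

def cache_page_with_prefix(timeout, key_prefixer):
    """Like cache_page, but the cache key also includes whatever the
    key_prefixer function returns for this request. If the prefixer
    returns None, caching is skipped for that request."""
    def decorator(view_func):
        @wraps(view_func)
        def inner(request, *args, **kwargs):
            prefix = key_prefixer(request)
            if prefix is None:
                return view_func(request, *args, **kwargs)
            cache_key = 'view:%s:%s' % (request.path, prefix)
            response = cache.get(cache_key)
            if response is None:
                # cache miss: render the page and store the whole response
                response = view_func(request, *args, **kwargs)
                cache.set(cache_key, response, timeout)
            return response
        return inner
    return decorator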

And in the models I then have this:

@receiver(post_save, sender=BlogComment)
@receiver(post_save, sender=BlogItem)
def invalidate_latest_comment_add_dates(sender, instance, **kwargs):
    cache_key = 'latest_comment_add_date'
    cache.delete(cache_key)

So this means:

  • whole pages are cached for long time for fast access
  • updates immediately invalidate the cache for best user experience
  • no need to mess with ANY SQL caching

So, the next question is: if posting a comment means that the cache is invalidated and needs to be re-populated, what's the ratio of plain cache hits versus hits where the cache had been cleared? Glad you asked. That's why I made this page:

www.peterbe.com/stats/

It allows me to monitor how often a new blog comment or a general time-out means poor Django needs to re-create the HTML using SQL.
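
The redis_increment() helper used in the view above isn't shown either; going by the 'Unable to redis.zincrby' error message it bumps a counter in a Redis sorted set, so a guess at what it might look like is:

import datetime
from redis import Redis

redis_connection = Redis()  # however the app's shared connection is set up

def redis_increment(prefix, request):
    # bump a per-day counter so the /stats/ page can compare
    # cache hits and misses over time
    if request.method != 'GET':
        return
    day = datetime.date.today().strftime('%Y-%m-%d')
    # assumes the older redis-py signature zincrby(name, value, amount)
    redis_connection.zincrby(prefix, day, 1)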

At the time of writing, one in every 25 hits to the homepage requires the server to re-generate the page. And still the content is always fresh and relevant.

The next level of optimization would be to figure out whether a particular page update (e.g. a blog comment posted on a page that isn't featured on the home page) should or should not invalidate the home page.