
My friend and colleague Jannis (aka jezdez) Leidel saved my bacon today when I had gotten completely stuck.

So, I have this python2.6 virtualenv and whenever I ran python setup.py sdist upload it would upload a really nasty tarball to PyPI. What would happen is that when people did pip install premailer it would fail horribly and look something like this:

...
IOError: [Errno 2] No such file or directory: '/path/to/virtual-env/build/premailer/setup.py'

What?!?! If you download the tarball and unpack it you'll see that there definitely is a setup.py file in there.

Anyway. What happened, which I didn't realize at first, was that within the .tar.gz file there were these strange copies of files. For example, for every file.py there was a ._file.py, etc.

Here's what the contents looked like after the tarball had been created:

(premailer26)peterbe@mpb:~/dev/PYTHON/premailer (master)$ tar -zvtf dist/premailer-2.0.2.tar.gz
-rwxr-xr-x  0 peterbe staff     311 Apr 11 15:51 ./._premailer-2.0.2
drwxr-xr-x  0 peterbe staff       0 Apr 11 15:51 premailer-2.0.2/
-rw-r--r--  0 peterbe staff     280 Mar 28 10:13 premailer-2.0.2/._LICENSE
-rw-r--r--  0 peterbe staff    1517 Mar 28 10:13 premailer-2.0.2/LICENSE
-rw-r--r--  0 peterbe staff     280 Apr  9 21:10 premailer-2.0.2/._MANIFEST.in
-rw-r--r--  0 peterbe staff      34 Apr  9 21:10 premailer-2.0.2/MANIFEST.in
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/._PKG-INFO
-rw-r--r--  0 peterbe staff    7226 Apr 11 15:51 premailer-2.0.2/PKG-INFO
-rwxr-xr-x  0 peterbe staff     311 Apr 11 15:51 premailer-2.0.2/._premailer
drwxr-xr-x  0 peterbe staff       0 Apr 11 15:51 premailer-2.0.2/premailer/
-rwxr-xr-x  0 peterbe staff     311 Apr 11 15:51 premailer-2.0.2/._premailer.egg-info
drwxr-xr-x  0 peterbe staff       0 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/
-rw-r--r--  0 peterbe staff     280 Mar 28 10:13 premailer-2.0.2/._README.md
-rw-r--r--  0 peterbe staff    5185 Mar 28 10:13 premailer-2.0.2/README.md
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/._setup.cfg
-rw-r--r--  0 peterbe staff      59 Apr 11 15:51 premailer-2.0.2/setup.cfg
-rw-r--r--  0 peterbe staff     280 Apr  9 21:09 premailer-2.0.2/._setup.py
-rw-r--r--  0 peterbe staff    2079 Apr  9 21:09 premailer-2.0.2/setup.py
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/._dependency_links.txt
-rw-r--r--  0 peterbe staff       1 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/dependency_links.txt
-rw-r--r--  0 peterbe staff     280 Apr  9 21:04 premailer-2.0.2/premailer.egg-info/._not-zip-safe
-rw-r--r--  0 peterbe staff       1 Apr  9 21:04 premailer-2.0.2/premailer.egg-info/not-zip-safe
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/._PKG-INFO
-rw-r--r--  0 peterbe staff    7226 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/PKG-INFO
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/._requires.txt
-rw-r--r--  0 peterbe staff      23 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/requires.txt
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/._SOURCES.txt
-rw-r--r--  0 peterbe staff     329 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/SOURCES.txt
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/._top_level.txt
-rw-r--r--  0 peterbe staff      10 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/top_level.txt
-rw-r--r--  0 peterbe staff     280 Apr  9 21:21 premailer-2.0.2/premailer/.___init__.py
-rw-r--r--  0 peterbe staff      66 Apr  9 21:21 premailer-2.0.2/premailer/__init__.py
-rw-r--r--  0 peterbe staff     280 Apr  9 09:23 premailer-2.0.2/premailer/.___main__.py
-rw-r--r--  0 peterbe staff    3315 Apr  9 09:23 premailer-2.0.2/premailer/__main__.py
-rw-r--r--  0 peterbe staff     280 Apr  8 16:22 premailer-2.0.2/premailer/._premailer.py
-rw-r--r--  0 peterbe staff   15368 Apr  8 16:22 premailer-2.0.2/premailer/premailer.py
-rw-r--r--  0 peterbe staff     280 Apr  8 16:22 premailer-2.0.2/premailer/._test_premailer.py
-rw-r--r--  0 peterbe staff   37184 Apr  8 16:22 premailer-2.0.2/premailer/test_premailer.py

Strangely, this only happened in a Python 2.6 environment. The problem went away when I created a brand new Python 2.7 environment with the latest setuptools.

So basically, the fault lies with OSX and a strange interaction between OSX and tar.
This superuser.com answer does a much better job explaining this "flaw".

So, the solution to the problem is to create the distribution like this instead:

$ COPYFILE_DISABLE=true python setup.py sdist

If you do that, you get a healthy-looking tarball that actually works with pip install. Thanks jezdez for pointing that out!
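
If you want to double-check that the fix worked, here's a quick sanity check you could do (my own little sketch, not part of the original fix):

import tarfile

# List any AppleDouble ("._*") entries hiding inside the sdist tarball
with tarfile.open('dist/premailer-2.0.2.tar.gz') as archive:
    bad = [name for name in archive.getnames()
           if name.rsplit('/', 1)[-1].startswith('._')]
    print(bad or 'No resource fork files - the tarball looks healthy')

If that prints the friendly message instead of a list of ._ names, the COPYFILE_DISABLE trick did its job.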

Because this bit me harder than I was ready for, I thought I'd make a note of it for the next victim.

In Python 2, suppose you have this:

Python 2.7.5
>>> items = [(1, 'A number'), ('a', 'A letter'), (2, 'Another number')]

Sorting them, without specifying how, just works because Python automatically compares the tuples element by element:

Python 2.7.5
>>> sorted(items)
[(1, 'A number'), (2, 'Another number'), ('a', 'A letter')]

This doesn't work in Python 3 because comparing integers and strings is not allowed. E.g.:

Python 3.3.3
>>> 1 < '1'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unorderable types: int() < str()

You have to convert them to strings first.

Python 3.3.3
>>> sorted(items, key=lambda x: str(x[0]))
[(1, 'A number'), (2, 'Another number'), ('a', 'A letter')]

If you really need 1 and '1' to be treated as different values, converting everything to strings won't work. Then you need a more complex key function. E.g., with a list that mixes the number 1 and the string '1':

Python 3.3.3
>>> items = [(1, 'A number'), (2, 'Another number'), ('1', 'Actually a string')]
>>> def keyfunction(x):
...   v = x[0]
...   if isinstance(v, int): v = '0%d' % v
...   return v
...
>>> sorted(items, key=keyfunction)
[(1, 'A number'), (2, 'Another number'), ('1', 'Actually a string')]

That's really messy, but it's the best I can come up with at past 4 PM on a Friday.
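
A slightly less messy alternative (my own afterthought, not from the original post) is to make the key a tuple whose first element says whether the value is a string:

Python 3.3.3
>>> sorted(items, key=lambda x: (isinstance(x[0], str), str(x[0])))
[(1, 'A number'), (2, 'Another number'), ('1', 'Actually a string')]

That way all the numbers sort before all the strings, and 1 and '1' remain distinct.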

No, this is not about the new JSON Type added in Postgres 9.2. This is about how you can get a record set from a Postgres database into a JSON string the best way possible using Python.

Here's the traditional way:

>>> import json
>>> import psycopg2
>>>
>>> conn = psycopg2.connect('dbname=peterbecom')
>>> cur = conn.cursor()
>>> cur.execute("""
...   SELECT
...     id, oid, root, approved, name
...   FROM blogcomments
...   LIMIT 10
... """)
>>> columns = (
...     'id', 'oid', 'root', 'approved', 'name'
... )
>>> results = []
>>> for row in cur.fetchall():
...     results.append(dict(zip(columns, row)))
...
>>> print json.dumps(results, indent=2)
[
  {
    "oid": "comment-20030707-161847",
    "root": true,
    "id": 5662,
    "name": "Peter",
    "approved": true
  },
  {
    "oid": "comment-20040219-r4cf",
    "root": true,
    "id": 5663,
    "name": "silconscave",
    "approved": true
  },
  {
    "oid": "c091011r86x",
    "root": true,
    "id": 5664,
    "name": "Rachel Jay",
    "approved": true
  },
...

This is plain and nice but it's kinda annoying that you have to write down the columns you're selecting twice.
Also, it's annoying that you have to convert the results of fetchall() into a list of dicts in an extra loop.

So, there's a trick to the rescue! You can use the cursor_factory parameter. See below:

>>> import json
>>> import psycopg2
>>> from psycopg2.extras import RealDictCursor
>>>
>>> conn = psycopg2.connect('dbname=peterbecom')
>>> cur = conn.cursor(cursor_factory=RealDictCursor)
>>> cur.execute("""
...   SELECT
...     id, oid, root, approved, name
...   FROM blogcomments
...   LIMIT 10
... """)
>>>
>>> print json.dumps(cur.fetchall(), indent=2)
[
  {
    "oid": "comment-20030707-161847",
    "root": true,
    "id": 5662,
    "name": "Peter",
    "approved": true
  },
  {
    "oid": "comment-20040219-r4cf",
    "root": true,
    "id": 5663,
    "name": "silconscave",
    "approved": true
  },
  {
    "oid": "c091011r86x",
    "root": true,
    "id": 5664,
    "name": "Rachel Jay",
    "approved": true
  },
...

Isn't that much nicer? It's shorter and only lists the columns once.

But is it much faster? Sadly, no. Not much. I ran various benchmarks comparing different ways of doing this and basically concluded that there's no significant difference. The latter one, using RealDictCursor, is around 5% faster. But I suspect most of the time in the benchmark is spent on things (the I/O) that are the same across the various versions.

Anyway. It's a keeper. I think it just looks nicer.
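
For the record, the benchmark was conceptually something like the following. This is a simplified sketch rather than the exact script I ran, and the LIMIT and iteration count are arbitrary:

import json
import time

import psycopg2
from psycopg2.extras import RealDictCursor

conn = psycopg2.connect('dbname=peterbecom')
COLUMNS = ('id', 'oid', 'root', 'approved', 'name')
SQL = "SELECT id, oid, root, approved, name FROM blogcomments LIMIT 1000"

def fetch_manually():
    # plain cursor: zip the rows together with the column names ourselves
    cur = conn.cursor()
    cur.execute(SQL)
    return json.dumps([dict(zip(COLUMNS, row)) for row in cur.fetchall()])

def fetch_with_realdictcursor():
    # RealDictCursor: every row already comes back as a dict
    cur = conn.cursor(cursor_factory=RealDictCursor)
    cur.execute(SQL)
    return json.dumps(cur.fetchall())

for function in (fetch_manually, fetch_with_realdictcursor):
    t0 = time.time()
    for _ in range(100):
        function()
    print function.__name__, round(time.time() - t0, 2)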

When you use a web framework like Tornado, which is single-threaded with an event loop (like Node.js, if you're familiar with that), and you need persistence (i.e. a database), there is one important question you need to ask yourself:

Is the query fast enough that I don't need to do it asynchronously?

If it's going to be a really fast query (for example, selecting a small record set by an indexed key) it'll be quicker to just do it in a blocking fashion. It means less CPU work spent jumping between events.

However, if the query is going to be potentially slow (like a complex and data-intensive report) it's better to execute the query asynchronously, do something else, and continue once the database gets back with a result. If you don't, all other requests to your web server might time out.
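
To make that concrete, here's a rough sketch (mine, not from any real benchmark code) of both approaches in a yield-style Tornado app of that era; the two database functions are just stand-ins for real blocking queries:

import time
from concurrent.futures import ThreadPoolExecutor

import tornado.gen
import tornado.ioloop
import tornado.web

executor = ThreadPoolExecutor(max_workers=4)

def get_user_by_id(user_id):
    # stand-in for a fast, indexed blocking query
    return {'id': user_id, 'name': 'someone'}

def build_expensive_report():
    # stand-in for a slow, data-intensive blocking query
    time.sleep(2)
    return {'rows': 12345}

class FastLookupHandler(tornado.web.RequestHandler):
    def get(self):
        # fast lookup: just block; the event loop stalls only very briefly
        self.write(get_user_by_id(int(self.get_argument('id'))))

class SlowReportHandler(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def get(self):
        # slow report: hand the blocking call to a thread pool so the IOLoop
        # can keep serving other requests while we wait for the result
        report = yield executor.submit(build_expensive_report)
        self.write(report)

if __name__ == '__main__':
    application = tornado.web.Application([
        (r'/fast', FastLookupHandler),
        (r'/slow', SlowReportHandler),
    ])
    application.listen(8000)
    tornado.ioloop.IOLoop.current().start()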

Another important question whenever you work with a database is:

Would it be a disaster if something you intended to store never actually made it to disk?

This question is related to the D in ACID and doesn't have anything specific to do with Tornado. However, the reason you're using Tornado is probably because it's much more performant than more convenient alternatives like Django. So, if performance is so important, are durable writes important too?

Let's cut to the chase... I wanted to see how different databases perform when integrating them in Tornado. But let's not just look at different databases, let's also evaluate different ways of using them; either blocking or non-blocking.

What the benchmark does is:

  • On one single Python process...
  • For each database engine...
  • Create X records of something containing a string, a datetime, a list and a floating point number...
  • Edit each of these records which will require a fetch and an update...
  • Delete each of these records...

I can vary the number of records ("X") and sum the total wall clock time it takes for each database engine to complete all of these tasks. That way you get an insert, a select, an update and a delete. Realistically, it's likely you'll get a lot more selects than any of the other operations.
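
To give an idea of the shape of the work, here's a stripped-down sketch of one such run using blocking pymongo with the old 2.x API of the time (this is not the actual benchmark code from the repo, and the collection and field names are just made up to match the description):

import datetime
import time

import pymongo

collection = pymongo.MongoClient().fastestdb_sketch.talks

def benchmark(how_many):
    t0 = time.time()
    ids = []
    # create
    for i in range(how_many):
        ids.append(collection.insert({
            'topic': 'Talk number %d' % i,
            'when': datetime.datetime.utcnow(),
            'tags': ['python', 'tornado'],
            'duration': 3.5,
        }))
    # edit (a fetch followed by an update)
    for _id in ids:
        document = collection.find_one({'_id': _id})
        collection.update({'_id': _id},
                          {'$set': {'duration': document['duration'] + 1.0}})
    # delete
    for _id in ids:
        collection.remove({'_id': _id})
    return time.time() - t0

print benchmark(1000)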

And the winner is:

pymongo!! Using the blocking version without doing safe writes.

Fastest database for Tornado

Let me explain some of those engines

  • pymongo is the blocking pure python engine
  • with redis, toredis and memcache, a document ID is generated with uuid4 and the document is converted to JSON and stored under that key
  • toredis is a redis wrapper for Tornado
  • when it says (safe) on the engine, it means MongoDB is told not to respond until it has, with some confidence, written the data
  • motor is an asynchronous MongoDB driver specifically for Tornado
  • MySQL doesn't support arrays (unlike PostgreSQL) so instead the tags field is stored as text and transformed back and forth as JSON
  • None of these databases have been tuned for performance. They're all fresh out-of-the-box installs on OSX with homebrew
  • None of these databases have indexes, apart from ElasticSearch where everything is indexed
  • momoko is an awesome wrapper for psycopg2 which works asynchronously specifically with Tornado
  • memcache is not persistent but I wanted to include it as a reference
  • All JSON encoding and decoding is done using ultrajson which should work to memcache, redis, toredis and mysql's advantage.
  • mongokit is a thin wrapper on pymongo that makes it feel more like an ORM
  • A lot of these can be optimized by doing bulk operations but I don't think that's fair
  • I don't yet have a way of measuring memory usage for each driver+engine but that's not really what this blog post is about
  • I'd love to do more work on running these benchmarks with concurrent hits to the server. However, with blocking drivers, each request (other than the first one) would have to sit there and wait, so the user experience would be poor, but the total time wouldn't be any different.
  • I use the official elasticsearch driver but am curious to also add Tornado-es some day which will do asynchronous HTTP calls over to ES.

You can run the benchmark yourself

The code is here on github. The following steps should work:

$ virtualenv fastestdb
$ source fastestdb/bin/activate
$ git clone https://github.com/peterbe/fastestdb.git
$ cd fastestdb
$ pip install -r requirements.txt
$ python tornado_app.py

Then fire up http://localhost:8000/benchmark?how_many=10 and see if you can get it running.

Note: You might need to mess around with some of the hardcoded connection details in the file tornado_app.py.

Discussion

Before the lynch mob of HackerNews kill me for saying something positive about MongoDB; I'm perfectly aware of the discussions about large datasets and the complexities of managing them. Any flametroll comments about "web scale" will be deleted.

I think MongoDB does a really good job here. It's faster than Redis and Memcache, but unlike those key-value stores, with MongoDB you can, if you need to, do actual queries (e.g. select all talks where the duration is greater than 0.5). MongoDB does its serialization between Python and the database using a binary format called BSON but, mind you, the Redis and Memcache drivers also get to use a fast JSON encoder/decoder (ultrajson).

The conclusion is: be aware of what you want to do with your data, and of where performance versus durability matters.

What's next

Some of those drivers will work on PyPy, which I'm looking forward to testing. It should work with cffi-based drivers, like psycopg2cffi for PostgreSQL, for example.

Also, an asynchronous version of elasticsearch should be interesting.

My colleague Axel Hecht showed me something I didn't know about sorting in Python.

In Python you can sort with a tuple. It's best illustrated with a simple example:

>>> items = [(1, 'B'), (1, 'A'), (2, 'A'), (0, 'B'), (0, 'a')]
>>> sorted(items)
[(0, 'B'), (0, 'a'), (1, 'A'), (1, 'B'), (2, 'A')]

By default, the list.sort method and the sorted built-in function notice that the items are tuples, so they sort on the first element first and on the second element second.

However, notice how you get (0, 'B') appearing before (0, 'a'). That's because upper case comes before lower case characters. However, suppose you wanted to apply some "humanization" on that and sort case insensitively. You might try:

>>> sorted(items, key=str.lower)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: descriptor 'lower' requires a 'str' object but received a 'tuple'

which is an error we deserve because this won't work for the first part of each tuple.

We could try to write a lambda function (e.g. sorted(items, key=lambda x: x.lower() if isinstance(x, str) else x)) but that's not going to work because the key function receives the whole tuple, not the strings inside it.

Without further ado, here's how you do it. A lambda function that returns a tuple:

>>> sorted(items, key=lambda x: (x[0], x[1].lower()))
[(0, 'a'), (0, 'B'), (1, 'A'), (1, 'B'), (2, 'A')]

And there you have it! Thanks for sharing Axel!

As a bonus item for people still reading...
I'm sure you know that you can reverse a sort order simply by passing in sorted(items, reverse=True, ...), but what if you want different directions depending on the key that you're sorting on?

Using the technique of a lambda function that returns a tuple, here's how we sort a slightly more advanced structure:

>>> peeps = [{'name': 'Bill', 'salary': 1000}, {'name': 'Bill', 'salary': 500}, {'name': 'Ted', 'salary': 500}]

And now, sort with a lambda function returning a tuple:

>>> sorted(peeps, key=lambda x: (x['name'], x['salary']))
[{'salary': 500, 'name': 'Bill'}, {'salary': 1000, 'name': 'Bill'}, {'salary': 500, 'name': 'Ted'}]

Makes sense, right? Bill comes before Ted and 500 comes before 1000. But how do you sort it like that on the name but reverse on the salary? Simple, negate it:

>>> sorted(peeps, key=lambda x: (x['name'], -x['salary']))
[{'salary': 1000, 'name': 'Bill'}, {'salary': 500, 'name': 'Bill'}, {'salary': 500, 'name': 'Ted'}]

Thanks to Igor, who emailed me and made me aware: you can't put pseudo classes in style attributes in HTML. I.e. this does not work:

<a href="#" style="color:pink :hover{color:red}">Sample Link</a>

See for yourself: Sample Link

Note how it does not become red when you hover over the link above.
This is what premailer used to do. Until yesterday.

BEFORE:

>>> from premailer import transform
>>> print transform('''
... <html>
... <style>
... a { color: pink }
... a:hover { color: red }
... </style>
... <a href="#">Sample Link</a>
... </html>
... ''')
<html><head><a href="#" style="{color:pink} :hover{color:red}">Sample Link</a></head></html>


AFTER:

>>> from premailer import transform
>>> print transform('''
... <html>
... <style>
... a { color: pink }
... a:hover { color: red }
... </style>
... <a href="#">Sample Link</a>
... </html>
... ''')
<html><head>
<style>a:hover {color:red}</style>
<a href="#" style="color:pink">Sample Link</a>
</head></html>


That's because the new default is to exclude pseudo classes.
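
If you preferred the old behavior, I believe you can get it back by using the Premailer class directly and turning the relevant flag off. The parameter name here (exclude_pseudoclasses) is from memory, so double-check it against the premailer source:

>>> from premailer import Premailer
>>> html = '<html><style>a { color: pink } a:hover { color: red }</style><a href="#">Sample Link</a></html>'
>>> print Premailer(html, exclude_pseudoclasses=False).transform()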

Thanks Igor for making me aware!

This took me by surprise today!

If you run this unit test, it actually passes with flying colors:

import unittest

class BadAssError(TypeError):
    pass

def foo():
    raise BadAssError("d'oh")

class Test(unittest.TestCase):

    def test(self):
        self.assertRaises(BadAssError, foo)
        self.assertRaises(TypeError, foo)
        self.assertRaises(Exception, foo)

if __name__ == '__main__':
    unittest.main()

Basically, assertRaises doesn't just accept the exact exception class that is being raised; it also accepts any of that exception's parent classes.

I've only tested it with Python 2.6 and 2.7. And the same works equally with unittest2.

I don't really know how I feel about this. It did surprise me when I was changing one of the exceptions and expected the old tests to break but they didn't. I mean, if I want to write a test that really makes sure the exception really is BadAssError it means I can't use assertRaises().
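
If you do need a test that insists on the exact exception class, one workaround (my own sketch, nothing official) is to use the context-manager form of assertRaises (available in Python 2.7's unittest and in unittest2) and then check the type of the caught exception explicitly:

import unittest

class BadAssError(TypeError):
    pass

def foo():
    raise BadAssError("d'oh")

class StrictTest(unittest.TestCase):

    def test_exact_exception(self):
        # assertRaises is satisfied by any parent class,
        # so check the concrete class ourselves
        with self.assertRaises(BadAssError) as context:
            foo()
        self.assertIs(type(context.exception), BadAssError)

if __name__ == '__main__':
    unittest.main()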

Thanks to Theo Spears' awesome effort, premailer now supports CSS specificity. What that means is that when linked and inline CSS blocks are transformed into tag style attributes, the order is preserved as you'd expect.

When the browser applies CSS to elements it does it in a specific order. For example if you have this CSS:

p.tag { color: blue; }
p { color: red; }

and this HTML:

<p>Regular text</p>
<p class="tag">Special text</p>

the browser knows to draw the first paragraph in red and the second paragraph in blue. It does that because p.tag is more specific than p.

Before, premailer would just blindly take each matching rule and append it to the style attribute, like this:

<p style="color:red">Regular text</p>
<p style="color:blue; color:red" class="tag">Special text</p>


which is not what you want.
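
Using the same kind of snippet as in my earlier premailer examples, you can try the fix yourself. I'm not pasting the exact output here since the formatting details can vary between versions, but the paragraph with class="tag" should now end up blue and the plain one red, just like in a browser:

>>> from premailer import transform
>>> print transform('''
... <html>
... <style>
... p.tag { color: blue; }
... p { color: red; }
... </style>
... <p>Regular text</p>
... <p class="tag">Special text</p>
... </html>
... ''')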

The code in action is here.

Thanks Theo!

If you use django-fancy-cache you can either run with stats or without. With stats, you can get a number of how many times a cache key "hits" and how many times it "misses". Keeping stats incurs a small performance slowdown. But how much?

I created a simple page that either keeps stats or ignores it. I ran the benchmark over Nginx and Gunicorn with 4 workers. The cache server is a memcached running on the same host (my OSX 10.7 laptop).

With stats:

Average: 768.6 requests/second
Median: 773.5 requests/second
Standard deviation: 14.0

Without stats:

Average: 808.4 requests/second
Median: 816.4 requests/second
Standard deviation: 30.0

That means, roughly, that running with stats incurs about a 5% performance penalty.

The stats are completely useless to your users. The stats tool is purely for your own curiosity and something you can switch on and off easily.

Note: This benchmark assumes that the memcached server is running on the same host as the Nginx and Gunicorn servers. If there were more network in between, obviously all the .incr() commands would cause a bigger slowdown.

This personal blog site of mine uses django-fancy-cache and mincss.

What that means is that I can cache the whole output of every blog post for weeks and when I do that I can first preprocess the HTML and convert every external CSS into one inline STYLE block which will only reference selectors that are actually used.
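
In code terms, the mincss side of it looks roughly like this. This is a from-memory sketch, so check the mincss README for the exact API and attribute names; the URL is just an example:

from mincss.processor import Processor

processor = Processor()
processor.process('http://www.peterbe.com/plog/some-blog-post')

for link in processor.links:
    # link.href is the external stylesheet that was referenced;
    # link.after is the reduced CSS with only the selectors actually used
    print link.href, len(link.before), '->', len(link.after)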

To see it in action, right-click and select "View Page Source". You'll see something like this:

/*
Stats about using github.com/peterbe/mincss
-------------------------------------------
Requests:         1 (now: 0)
Before:           81Kb
After:            11Kb
After (minified): 11Kb
Saving:           70Kb
*/
section{display:block}html{font-size:100%;-webkit-text-size-adjust:100%;-ms-tex...

The reason the saving is so huge, in my case, is because I'm using the Twitter Bootstrap CSS framework, which is awesome, but like any framework it will inevitably contain a bunch of stuff that I don't use. Some stuff I don't use on any page at all. Some stuff is used only on some pages and other stuff only on other pages.

What I gain by this is faster page loads. What the browser does is: it gets a URL, downloads the HTML, parses the HTML looking for referenced CSS (via the link tag) and downloads that too. Once all of that is downloaded, it starts to render the page. Roughly after that, it starts to download all referenced JavaScript and starts evaluating and executing it.

By not having to download the CSS, the browser has one less thing to do. Only one request, you say? Well, that request might go to a CDN (not a great idea, actually), so even though it's just one request it can involve yet another DNS look-up.

Here's what the loading of the homepage looks like in Firefox from a US east coast IP.

Granted, a downloaded CSS file can be cached by the browser and reused for other pages under the same domain. But on my blog the bounce rate is about 90%. That doesn't necessarily mean that visitors leave as soon as they arrive, but it does mean that they generally just read one page and then leave. The 10% of visitors who visit more than one page will have to download the same chunk of CSS more than once. But mind you, it's not always the same chunk of CSS, because it's different for different pages. And the amount of CSS that is now inline adds only about 2-3Kb to the HTML payload when sent gzipped.

Getting to this point wasn't easy, because I first had to develop mincss and django-fancy-cache and integrate it all. However, what this means is that you can have it done on your site too! All the code is open source and it's all Python and Django, which are very popular tools.