Persistent caching with fire-and-forget updates

14 December 2011   4 comments   Python, Tornado


I recently landed some patches on toocool that implement an interesting pattern you see more and more these days. I call it: persistent caching with fire-and-forget updates.

Basically, the implementation is this: a request comes in that requires information about a Twitter user. The app looks in its MongoDB for information about the tweeter and, if it can't find this user, it goes to the Twitter REST API, looks it up and saves the result in MongoDB. The next time the same information is requested and the data is available in MongoDB, it instead checks whether the modify_date is more than an hour old and, if so, sends a job to the message queue (Celery with Redis in my case) to perform an update on this tweeter.

You can basically see the code here but just to reiterate and abbreviate, it looks like this:

tweeter = self.db.Tweeter.find_one({'username': username})
if not tweeter:
    result = yield tornado.gen.Task(...)
    if result:
        tweeter = self.save_tweeter_user(result)
    else:
        pass  # deal with the error!
elif age(tweeter['modify_date']) > 3600:
    tasks.refresh_user_info.delay(username, ...)
# render the template!
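Stripped of the Tornado and MongoDB specifics, the pattern can be sketched with an in-memory store and a stub task queue. All the names below are illustrative, not from toocool:

```python
import time

CACHE = {}          # stands in for the MongoDB collection
QUEUE = []          # stands in for Celery/Redis; we just record the jobs
MAX_AGE = 3600      # one hour, the same arbitrary limit as above


def fetch_from_twitter(username):
    # stand-in for the synchronous Twitter REST API call
    return {'username': username, 'modify_date': time.time()}


def refresh_user_info(username):
    # fire-and-forget: enqueue the job and return immediately
    QUEUE.append(('refresh_user_info', username))


def get_tweeter(username, now=None):
    now = now or time.time()
    tweeter = CACHE.get(username)
    if tweeter is None:
        # cache miss: fetch synchronously and persist
        tweeter = fetch_from_twitter(username)
        CACHE[username] = tweeter
    elif now - tweeter['modify_date'] > MAX_AGE:
        # stale: serve the cached copy anyway, refresh in the background
        refresh_user_info(username)
    return tweeter
```

The first call for a username blocks and populates the cache; every later call within the hour is served straight from the cache; a call more than an hour later still returns instantly with the stale copy, but a refresh job lands on the queue.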

What the client, i.e. the user of the site, gets is that apart from the very first time that URL is requested, results are instant, yet the data is still being maintained and refreshed behind the scenes.

This pattern works great for data that doesn't have to be up-to-date to the second but still needs some way to be invalidated and re-fetched. My limit of 1 hour is quite arbitrary. An alternative implementation would be something like this:

tweeter = self.db.Tweeter.find_one({'username': username})
if not tweeter or age(tweeter['modify_date']) > 3600 * 24 * 7:
    # re-fetch from the Twitter REST API
elif age(tweeter['modify_date']) > 3600:
    # fire-and-forget update

That way you don't suffer from persistently cached data that is too old.
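A self-contained sketch of that two-threshold variant (again with illustrative names): a record older than a week is re-fetched synchronously, while one that is merely older than an hour only triggers a background refresh.

```python
import time

CACHE = {}                        # stands in for the MongoDB collection
QUEUE = []                        # stands in for Celery/Redis
STALE_AFTER = 3600                # fire-and-forget refresh after an hour
EXPIRED_AFTER = 3600 * 24 * 7     # hard, blocking re-fetch after a week


def fetch_from_twitter(username):
    # stand-in for the synchronous Twitter REST API call
    return {'username': username, 'modify_date': time.time()}


def get_tweeter(username, now=None):
    now = now or time.time()
    tweeter = CACHE.get(username)
    if tweeter is None or now - tweeter['modify_date'] > EXPIRED_AFTER:
        # missing or too old to serve at all: block and re-fetch
        tweeter = fetch_from_twitter(username)
        CACHE[username] = tweeter
    elif now - tweeter['modify_date'] > STALE_AFTER:
        # stale but still serveable: refresh in the background
        QUEUE.append(('refresh_user_info', username))
    return tweeter
```

The only design difference from the single-threshold version is the extra age check in the first branch, which caps how stale a served record can ever be.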


Shawn Wheatley
What you describe is really a specific implementation of Memoization - You're right, it's a very powerful design.
Paul Winkler
Can you explain the use of "yield" in that code, or link to something that explains it? This doesn't look like a generator. I don't think I've seen the value of a yield expression assigned to a local before.
Peter Bengtsson
It's a Tornado thing. It's awesome because it's basically an alternative to using callbacks. Unlike a callback, which needs a new function/method with new parameters and scope, this method just carries on to the next line like any procedural program. I was turned on by Tornado before this was added and now it just makes it even sexier.
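The mechanism underneath is plain Python generators: the handler pauses at the yield, and a scheduler later resumes it with .send(result), which becomes the value of the yield expression. A minimal toy version of that trampoline, with invented names and no Tornado required:

```python
def handler():
    # reads procedurally, but is really a generator that pauses at each yield
    result = yield ('fetch_user', 'peterbe')   # stands in for tornado.gen.Task(...)
    yield ('done', result)


def run(gen):
    # toy version of what Tornado's gen module does: drive the generator,
    # perform each yielded "task" and send its result back in
    task = next(gen)
    while task[0] != 'done':
        kind, arg = task
        result = {'username': arg}   # pretend the async call completed
        task = gen.send(result)      # resumes handler; yield evaluates to result
    return task[1]
```

Calling run(handler()) drives the generator to completion and returns the fetched record, which is why the handler body can assign `result = yield ...` as if it were an ordinary blocking call.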
