Persistent caching with fire-and-forget updates

14 December 2011   4 comments   Python, Tornado


I just recently landed some patches on toocool that implement an interesting pattern that is seen more and more these days. I call it: persistent caching with fire-and-forget updates.

Basically, the implementation is this: you issue a request that requires information about a Twitter user. The app looks into its MongoDB for information about the tweeter and, if it can't find this user, it goes to the Twitter REST API, looks it up, and saves the result in MongoDB. The next time the same information is requested and the data is available in MongoDB, it instead checks whether the modify_date is more than an hour old and, if so, sends a job to the message queue (Celery with Redis in my case) to perform an update on this tweeter.

You can basically see the code here but just to reiterate and abbreviate, it looks like this:

tweeter = self.db.Tweeter.find_one({'username': username})
if not tweeter:
    result = yield tornado.gen.Task(...)
    if result:
        tweeter = self.save_tweeter_user(result)
    else:
        pass  # deal with the error!
elif age(tweeter['modify_date']) > 3600:
    tasks.refresh_user_info.delay(username, ...)
# render the template!
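The `age` helper isn't shown in the post; a minimal sketch of what it might look like (the name and signature are assumptions taken from the snippet above, which stores `modify_date` as a naive UTC datetime):

```python
import datetime


def age(dt):
    """Seconds elapsed since `dt` (a naive UTC datetime)."""
    return (datetime.datetime.utcnow() - dt).total_seconds()
```

With that, `age(tweeter['modify_date']) > 3600` reads as "last refreshed more than an hour ago".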

What the client, i.e. the user using the site, gets is instant results (apart from the very first time that URL is requested) while the data is being maintained and refreshed in the background.

This pattern works great for data that doesn't have to be up-to-date to the second but that still needs a way to be invalidated and re-fetched. This works because my limit of 1 hour is quite arbitrary. An alternative implementation would be something like this:

tweeter = self.db.Tweeter.find_one({'username': username})
if not tweeter or age(tweeter['modify_date']) > 3600 * 24 * 7:
    # re-fetch from Twitter REST API
elif age(tweeter['modify_date']) > 3600:
    # fire-and-forget update
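The two thresholds can be captured as a small, testable decision function. This is just a sketch of the logic above; the function name, return values, and constants are mine, not from toocool:

```python
import datetime

STALE_AFTER = 3600            # after 1 hour: fire-and-forget refresh
EXPIRE_AFTER = 3600 * 24 * 7  # after 1 week: blocking re-fetch

def decide(tweeter, now):
    """Return which action to take for a cached record:
    'fetch'   - missing or expired: fetch from the API before rendering
    'refresh' - stale: render cached data, queue a background update
    'serve'   - fresh enough: render cached data as-is
    """
    if tweeter is None:
        return 'fetch'
    seconds = (now - tweeter['modify_date']).total_seconds()
    if seconds > EXPIRE_AFTER:
        return 'fetch'
    if seconds > STALE_AFTER:
        return 'refresh'
    return 'serve'
```

Only the 'fetch' branch ever makes the user wait; 'refresh' hands the work off to Celery and renders immediately.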

That way you don't suffer from persistently cached data that is too old.


Shawn Wheatley
What you describe is really a specific implementation of Memoization - You're right, it's a very powerful design.
I'm just testing something here.
Paul Winkler
Can you explain the use of "yield" in that code, or link to something that explains it? This doesn't look like a generator. I don't think I've seen the value of a yield expression assigned to a local before.
Peter Bengtsson
It's a Tornado thing. It's awesome because it's basically an alternative to using callbacks. Unlike callbacks, which need a new function/method with new parameters and scope, this way the code just carries on to the next line like any procedural program. I was turned on by Tornado before this was added and now it just makes it even sexier.
