Comment
This looks like it won’t work in multi-threaded or multi-process servers, as the signal is only sent/received in one thread.
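For context, the pattern being discussed is presumably something like the following reconstruction (not the post's actual code; the `Category` model and the function names are assumptions): an in-process `functools.lru_cache` invalidated by a `post_save` signal. The receiver only runs in the worker process that handled the save, so caches in other threads/processes are never cleared.

```
from functools import lru_cache

from django.db.models.signals import post_save
from django.dispatch import receiver

from myapp.models import Category  # hypothetical model


@lru_cache(maxsize=None)
def all_categories():
    # Cached per process, for the lifetime of that process.
    return list(Category.objects.all())


@receiver(post_save, sender=Category)
def clear_category_cache(sender, **kwargs):
    # Only clears the cache in the process that received the signal;
    # other gunicorn workers keep serving their stale copy.
    all_categories.cache_clear()
```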
Replies
I would have to agree with Stefan. LRU is great for single-threaded apps that run for a long time, but such a cache would only exist for the short duration of request processing in a Django environment. You need to externalise the cache using Redis or similar to reap the benefit here. That brings its own pitfalls, but correctly implemented this works really well with Django.
Django comes with a great caching framework. I have mine set up to use Redis via `django_redis.cache.RedisCache`.
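For reference, a minimal `CACHES` configuration for that backend looks roughly like this (the Redis URL and database number are assumptions and depend on the deployment):

```
# settings.py -- minimal sketch of the django-redis backend configuration
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",  # assumed local Redis instance
    }
}
```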
I extended the benchmark to include:
```
from django.core.cache import cache

def f3():
    # Try the shared cache first; fall back to the ORM query (f1)
    # and store the result for 60 seconds.
    value = cache.get('all-categories')
    if value is None:
        value = f1()
        cache.set('all-categories', value, timeout=60)
    return value
```
Re-running the benchmark yields a median that is 80% faster than the PostgreSQL ORM.
However, this benchmark was run where both Redis AND Postgres are available on localhost, which might not be a realistic setup in a production system (which is where optimizations matter).
Yeah, it's fraught. You need to be careful when you depend on something like `gunicorn wsgi -w 2`, which I actually do for my Django server.
Another solution is using a TTL cache from `cachetools` and setting the TTL to something like 60 seconds, just to feel a little safer.
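A rough sketch of that approach (the function name is made up, and `f1` stands for the original ORM query from the benchmark):

```
from cachetools import TTLCache, cached

# Per-process cache holding one entry; it expires after 60 seconds,
# so each gunicorn worker serves data that is at most a minute stale.
@cached(cache=TTLCache(maxsize=1, ttl=60))
def get_all_categories():
    return f1()  # f1: the PostgreSQL ORM query from the benchmark
```

Unlike the Redis approach, every worker process keeps its own copy, so the TTL bounds the staleness rather than eliminating it.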