Comment
Any thoughts on using `LocMemCache` with the Gunicorn preload (http://docs.gunicorn.org/en/stable/settings.html#preload-app) option to share a section of memory across workers? When would this be appropriate to use?
Replies
Or you could just use fewer workers and a bunch of threads. Then you could code with global mutables without worrying about thread-safety. All in all, it sounds scary, but there might be cases where there are performance benefits.
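To make the "fewer workers, more threads, global mutables" idea concrete, here's a minimal sketch of a process-global in-memory cache shared by all threads within one gunicorn worker (e.g. started with `gunicorn --workers 2 --threads 8 app:app`). The names (`get_or_compute`, `_cache`) are illustrative, not from any particular library:

```python
import threading

# One dict per worker process, shared by all of that worker's threads.
_cache = {}
_lock = threading.Lock()

def get_or_compute(key, compute):
    """Return the cached value for key, computing and storing it on a miss."""
    try:
        return _cache[key]
    except KeyError:
        value = compute()
        with _lock:
            # setdefault so a racing thread's value wins only once
            _cache.setdefault(key, value)
        return _cache[key]
```

Individual dict reads are atomic under CPython's GIL, so the lock is only really needed to keep the miss path tidy; the scary part is remembering that each worker process still has its own separate `_cache`.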
The obvious drawbacks are that you won't be able to reach that cache data from anywhere else (e.g. from a `./manage.py post-process-cached-things` management command), and as soon as you destroy the gunicorn worker all that memory is lost and needs to be rebuilt. If you instead stuff it in a db like Redis, then you can destroy and create new web workers without any risk of empty caches or thundering herd problems.
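The trade-off above maps directly onto Django's cache settings. A sketch contrasting the two approaches — the backend paths are Django's built-in `LocMemCache` and (Django 4.0+) `RedisCache` backends, while the alias names and Redis URL are just placeholders:

```python
# settings.py (sketch)
CACHES = {
    # Per-process: every gunicorn worker holds its own copy,
    # invisible to management commands, gone when the worker dies.
    "default": {
        "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
    },
    # Shared: survives worker restarts and is reachable from
    # `./manage.py ...` commands and other processes too.
    "shared": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
    },
}
```

With the shared backend, cycling web workers never starts from a cold cache, which is exactly what avoids the thundering herd on restart.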