I have a virtual server, hosted on DigitalOcean, that runs my various side projects including this blog that you're reading right now. Of all the things running, Elasticsearch uses up over 60% of the total RAM.
Here is the raw report:
```
==================================== MEMORY ====================================
 1  54.9 (61.8%)  /usr/share/elasticsearch/jdk/bin/java  (697 records)
 2   3.7  (4.2%)  /var/lib/django/django-peterbecom/.venv/bin/python /var/lib/django/django-peterbecom/.venv/bin/gunicorn wsgi -w 3 -b 0.0.0.0:9898 --access-logfile=-  (885 records)
 3   3.2  (3.6%)  python ./manage.py run_huey --flush-locks --huey-verbose  (353 records)
 4   3.7  (4.2%)  /var/lib/django/django-peterbecom/.venv/bin/python /var/lib/django/django-peterbecom/.venv/bin/gunicorn wsgi -w 4 -b 0.0.0.0:9898 --access-logfile=-  (358 records)
 5   1.2  (1.4%)  postgres: 16/main: django peterbecom [local] idle  (942 records)
 6   1.7  (1.9%)  bun run start-server.ts ./build/server/index.js  (371 records)
 7   1.7  (1.9%)  postgres: 16/main: checkpointer  (324 records)
 8   1.7  (1.9%)  postgres: 16/main: background writer  (324 records)
 9   1.6  (1.8%)  /usr/bin/redis-server 127.0.0.1:6379  (324 records)
10   6.5  (7.3%)  /home/django/.vscode-server/cli/servers/Stable-bf9252a2fb45be6893dd8870c0bf37e2e1766d61/server/node  (56 records)
11   0.8  (0.9%)  /home/django/.cache/puppeteer/chrome/linux-131.0.6778.204/chrome-linux64/chrome  (325 records)
12   0.8  (0.9%)  /var/lib/django/workon/venv/bin/python3 /var/lib/django/workon/venv/bin/gunicorn app -w 1 -b 0.0.0.0:8686 --access-logfile=-  (122 records)
13   0.7  (0.8%)  /var/lib/django/whatsdeployed/venv/bin/python3 /var/lib/django/whatsdeployed/venv/bin/gunicorn app -w 1 -b 0.0.0.0:8787 --access-logfile=-  (102 records)
14   1.2  (1.4%)  /var/lib/django/premailer.io/venv/bin/python3 /var/lib/django/premailer.io/venv/bin/gunicorn app -w 1 -b 0.0.0.0:8888 --access-logfile=-  (32 records)
15   0.6  (0.7%)  python ./app.py --port=8989 --allowed-origins=https://sockshootout.app  (24 records)
16   0.6  (0.7%)  node /var/lib/django/react-router-peterbecom/node_modules/.bin/cross-env NODE_ENV=production bun run start-server.ts ./build/server/index.js  (19 records)
17   1.3  (1.5%)  python dummy.py  (8 records)
18   1.2  (1.4%)  redis-rdb-bgsave 127.0.0.1:6379  (2 records)
19   0.6  (0.7%)  /var/lib/django/battleshits/venv/bin/python3 /var/lib/django/battleshits/venv/bin/gunicorn battleshits.wsgi -w 1 -b 0.0.0.0:9999 --access-logfile=-  (3 records)
20   0.3  (0.3%)  nginx: worker process  (3 records)
21   0.3  (0.3%)  postgres: 16/main: autovacuum launcher  (2 records)
22   0.3  (0.3%)  /sbin/multipathd -d -s  (2 records)
23   0.2  (0.2%)  /usr/libexec/packagekitd  (2 records)
```
The columns are:

- ranking
- average memory usage (the `pmem` value from `ps`)
- its share of the total, compared to all other processes
- the command
- (in parentheses) the number of samples that command appeared in
I ran this command every 2 minutes in a cron job:

```sh
ps -eo pmem,pcpu,vsize,pid,cmd | sort -k 1 -nr | head -15
```
Then I tallied that up into the above report.
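The tallying itself is simple. Here's a minimal sketch of how it can be done (this is a reconstruction for illustration, not the exact script I used): group the sampled `ps` lines by command, average the `pmem` values, and compute each command's share of the total.

```python
from collections import defaultdict


def tally(lines):
    """Aggregate sampled `ps -eo pmem,pcpu,vsize,pid,cmd` lines by command.

    Returns rows of (rank, avg_mem, percent_of_total, command, sample_count).
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for line in lines:
        parts = line.split(None, 4)  # cmd may contain spaces; split max 4 times
        if len(parts) < 5:
            continue
        pmem, _pcpu, _vsize, _pid, cmd = parts
        try:
            mem = float(pmem)
        except ValueError:
            continue  # skip header lines like "%MEM %CPU ..."
        sums[cmd] += mem
        counts[cmd] += 1

    averages = {cmd: sums[cmd] / counts[cmd] for cmd in sums}
    total = sum(averages.values())
    ranked = sorted(averages.items(), key=lambda kv: kv[1], reverse=True)
    return [
        (rank, round(avg, 1), round(100 * avg / total, 1), cmd, counts[cmd])
        for rank, (cmd, avg) in enumerate(ranked, 1)
    ]


# Tiny made-up sample of what two cron runs might have appended to the log:
sample = [
    "54.9  1.2 9999 123 /usr/share/elasticsearch/jdk/bin/java",
    " 3.7  0.1 8888 456 gunicorn wsgi",
    "55.1  1.3 9999 123 /usr/share/elasticsearch/jdk/bin/java",
]
for row in tally(sample):
    print(row)
```

The real report above was produced from nearly a thousand such samples, which is why even short-lived processes (like `redis-rdb-bgsave`) show up with only a handful of records.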
What's eating Elasticsearch
Elasticsearch is a memory hog. That makes sense; it can be so fast because so much of it lives in RAM. What I don't understand is why it's so much. Or rather, why does it have to be so much?
The JVM heap size isn't explicitly set, so I guess it's falling back to the default, which here appears to be about 4GB.
Here's what's in the indices:
```
health status index                        uuid                   pri rep docs.count docs.deleted store.size pri.store.size dataset.size
green  open   blog_comments_20251211000334 LZcsiZspRieORGcDLCtrVQ   1   0       7311            0      4.6mb          4.6mb        4.6mb
green  open   blog_items_20251211000234    UcnEWuh8Q2KJVHpO_bmnTQ   1   0       1415            0      4.5mb          4.5mb        4.5mb
green  open   search_terms_20251211000035  JtFz3fo3R6Ovj9lUMwPYcQ   1   0      13218            0        1mb            1mb          1mb
```
A couple of megabytes' worth of data. Not much.
According to `localhost:9200/_cat/nodes?h=heap*`, it's using:
- current: 145.7mb
- percent: 3
- max: 3.8gb
The documentation recommends that you don't set the JVM heap size yourself:

> By default, Elasticsearch automatically sets the JVM heap size based on a node's roles and total memory. We recommend the default sizing for most production environments.
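That said, if you did want to cap it despite that recommendation, the documented way is to drop a file into the `jvm.options.d` directory rather than editing `jvm.options` directly. Something like this (the `512m` figure is just an example, not a recommendation; the path assumes a Debian/RPM-style install):

```
# /etc/elasticsearch/jvm.options.d/heap.options
-Xms512m
-Xmx512m
```

Elasticsearch recommends setting `-Xms` and `-Xmx` to the same value so the heap doesn't resize at runtime.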
Conclusion
Elasticsearch is using up a lot of RAM on the virtual server. It does amazing stuff that is hard to beat with PostgreSQL or Redis.
It's also surprisingly much, given how little data the indices actually hold.