How I added brotli_static to nginx 1.17 in Ubuntu (Eoan Ermine) 19.10
April 9, 2020
0 comments Nginx, Linux
I knew I didn't want to download the sources to nginx to install it on my new Ubuntu 19.10 server, because I'll never have the discipline to remember to keep it upgraded. No, I'd rather just run apt update && apt upgrade every now and then.
Why is this so hard?! All I need is the ability to set brotli_static on; in my Nginx config so it'll automatically pick the .br file if it exists on disk.
These instructions totally helped, but here they are specifically for my version (all run as root):
git clone --recursive https://github.com/google/ngx_brotli.git
apt install brotli
apt-get build-dep nginx

# Note which version of nginx you have installed...
nginx -v
# ...which informs which URL to wget
wget https://nginx.org/download/nginx-1.17.9.tar.gz
aunpack nginx-1.17.9.tar.gz

nginx -V 2>&1 >/dev/null | grep -o " --.*" | grep -oP '.+?(?=--add-dynamic-module)' | head -1 > nginx-1.17.9/build_args.txt

cd nginx-1.17.9/
./configure --with-compat $(cat build_args.txt) --add-dynamic-module=../ngx_brotli
make install

cp objs/ngx_http_brotli_filter_module.so /usr/lib/nginx/modules/
chmod 644 /usr/lib/nginx/modules/ngx_http_brotli_filter_module.so
cp objs/ngx_http_brotli_static_module.so /usr/lib/nginx/modules/
chmod 644 /usr/lib/nginx/modules/ngx_http_brotli_static_module.so

ls -l /etc/nginx/modules
Now I can edit my /etc/nginx/nginx.conf and add this (somewhere near the top):
load_module /usr/lib/nginx/modules/ngx_http_brotli_filter_module.so;
load_module /usr/lib/nginx/modules/ngx_http_brotli_static_module.so;
And test that it works:
nginx -t
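For context, this is roughly what it enables. A sketch with a made-up /var/www/html root, not my actual site config; the brotli package installed earlier gives you a CLI to create the .br files:

# Pre-compress a file next to the original (path is hypothetical)
brotli -k /var/www/html/index.html   # writes /var/www/html/index.html.br

And in a hypothetical server block:

server {
    listen 80;
    root /var/www/html;

    brotli_static on;   # serve index.html.br when the client sends Accept-Encoding: br
}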
How to install Node 12 on Ubuntu (Eoan Ermine) 19.10
April 8, 2020
0 comments Node, Linux
I'm setting up a new Ubuntu (Eoan Ermine) 19.10 server and I noticed that apt install nodejs gives you Node v10, which is an LTS (Long Term Support) version that'll last till April 2021. However, I want Node v12, which is the most recent LTS release as of April 2020.
To install it I used these instructions:
curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
sudo apt-get install -y nodejs
That worked great.
When it finished, it spat out this nice little blurb about how to install yarn:
...
Fetched 7454 B in 1s (12.3 kB/s)
Reading package lists... Done

## Run `sudo apt-get install -y nodejs` to install Node.js 12.x and npm
## You may also need development tools to build native addons:
     sudo apt-get install gcc g++ make
## To install the Yarn package manager, run:
     curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
     echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
     sudo apt-get update && sudo apt-get install yarn
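To double-check what you actually ended up with (the exact patch version will vary):

node --version           # should print something in the v12.x range
npm --version
apt-cache policy nodejs  # shows which repo the installed package came from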
By the way, I have no idea what nodejs-mozilla is, but running apt show nodejs-mozilla yields:
Package: nodejs-mozilla
Version: 12.16.1-0ubuntu0.19.10.1
Priority: optional
Section: universe/javascript
Origin: Ubuntu
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Installed-Size: 42.0 MB
Depends: libc6 (>= 2.29), libgcc1 (>= 1:3.4), libstdc++6 (>= 9)
Homepage: http://nodejs.org/
Download-Size: 10.4 MB
APT-Sources: http://mirrors.digitalocean.com/ubuntu eoan-updates/universe amd64 Packages
Description: evented I/O for V8 javascript
 Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.
 .
 Node.js is bundled with several useful libraries to handle server tasks:
 .
 System, Events, Standard I/O, Modules, Timers, Child Processes, POSIX, HTTP, Multipart Parsing, TCP, DNS, Assert, Path, URL, Query Strings.
Installing it doesn't add a node executable and I can't find a home page for it. apt can be weird sometimes.
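If you're curious what it actually ships, listing the package's files is one way to poke at it (I haven't dug further than this):

dpkg -L nodejs-mozilla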
uwsgi weirdness with --http
September 19, 2019
2 comments Python, Linux
Instead of upgrading everything on my server, I'm just starting from scratch: from Ubuntu 16.04 to Ubuntu 19.04, and I also upgraded everything else in sight. One of them was uwsgi. I copied various user config files, but for uwsgi things didn't go very well. On the old server I had uwsgi version 2.0.12-debian and on the new one 2.0.18-debian. The uWSGI changelog is pretty hard to read, but I sure don't see any mention of this.
You see, on SongSearch I have it so that Nginx talks to Django via a uWSGI socket. But the NodeJS server talks to Django via 127.0.0.1:PORT. So I need my uWSGI config to start both. Here was the old config:
[uwsgi]
plugins = python35
virtualenv = /var/lib/django/songsearch/venv
pythonpath = /var/lib/django/songsearch
user = django
uid = django
master = true
processes = 3
enable-threads = true
touch-reload = /var/lib/django/songsearch/uwsgi-reload.touch
http = 127.0.0.1:9090
module = songsearch.wsgi:application
env = LANG=en_US.utf8
env = LC_ALL=en_US.UTF-8
env = LC_LANG=en_US.UTF-8
(The only difference on the new server was the python37 plugin instead.)
I start it and everything looks fine. No errors in the log files. And netstat looks like this:
# netstat -ntpl | grep 9090
tcp        0      0 127.0.0.1:9090          0.0.0.0:*               LISTEN      1855/uwsgi
But every time I tried to curl localhost:9090 I kept getting curl: (52) Empty reply from server. Nothing in the log files! It seemed that no matter what I tried, I just couldn't talk to it over HTTP. No, I'm not a sysadmin; I'm just a hobbyist trying to stand up my little server with the tools and limited techniques I know, but I was stumped.
The solution
After endless Googling for a resolution and trying all sorts of uwsgi commands directly, I somehow stumbled on the solution:
[uwsgi]
plugins = python35
virtualenv = /var/lib/django/songsearch/venv
pythonpath = /var/lib/django/songsearch
user = django
uid = django
master = true
processes = 3
enable-threads = true
touch-reload = /var/lib/django/songsearch/uwsgi-reload.touch
-http = 127.0.0.1:9090
+http-socket = 127.0.0.1:9090
module = songsearch.wsgi:application
env = LANG=en_US.utf8
env = LC_ALL=en_US.UTF-8
env = LC_LANG=en_US.UTF-8
With this one subtle change, I can now curl localhost:9090 and I still have the /var/run/uwsgi/app/songsearch/socket socket. So, yay!
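For the record, these are the kinds of sanity checks I mean (port and socket path as in the config above):

curl -i http://127.0.0.1:9090/               # the HTTP endpoint the NodeJS server uses
ls -l /var/run/uwsgi/app/songsearch/socket   # the Unix socket Nginx talks to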
I'm blogging about this in case someone else ever gets stuck in the same nasty surprise as me.
Also, I have to admit, I was fuming with rage from this frustration. It has really inspired me to revive the quest for an alternative to uwsgi, because I'm not sure it's that great anymore. There are newer alternatives such as gunicorn, gunicorn with Meinheld, bjoern, etc.
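For comparison, the HTTP half of that setup would look roughly like this with gunicorn (an untested sketch, reusing the module name from the config above):

gunicorn --workers 3 --bind 127.0.0.1:9090 songsearch.wsgi:application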
Experimenting with Nginx worker_processes
February 14, 2019
0 comments Web development, Nginx, MacOSX, Linux
I have Nginx 1.15.8 installed with Homebrew on my macOS. By default, the /usr/local/etc/nginx/nginx.conf is set to...:
worker_processes 1;
But, from the documentation, it says:
"The optimal value depends on many factors including (but not limited to) the number of CPU cores, the number of hard disk drives that store data, and load pattern. When one is in doubt, setting it to the number of available CPU cores would be a good start (the value “auto” will try to autodetect it)." (bold emphasis mine)
What is the ideal number for me? The performance of Nginx on my laptop doesn't really matter. But for my side-projects it's important to have a fast Nginx since it serves static HTML and lots of static assets. However, on my personal servers I have a bunch of other resource-hungry stuff going on that I know is more likely to need the resources, like Elasticsearch and uwsgi.
To figure this out, I wrote a benchmark program that requested a small index.html about 10,000 times across 10 concurrent clients with hey.
hey -n 10000 -c 10 http://peterbecom.local/plog/variable_cache_control/awspa
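To repeat that without retyping it, a little loop like this works (a sketch, not the exact script I used; hey's summary output includes a Requests/sec line):

for i in $(seq 1 10); do
    hey -n 10000 -c 10 http://peterbecom.local/plog/variable_cache_control/awspa | grep Requests/sec
done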
I ran this 10 times for each worker_processes value in the nginx.conf file. Here's the output:
1 WORKER PROCESSES  BEST: 13,607.24 reqs/s
2 WORKER PROCESSES  BEST: 17,422.76 reqs/s
3 WORKER PROCESSES  BEST: 18,886.60 reqs/s
4 WORKER PROCESSES  BEST: 19,417.35 reqs/s
5 WORKER PROCESSES  BEST: 19,094.18 reqs/s
6 WORKER PROCESSES  BEST: 19,855.32 reqs/s
7 WORKER PROCESSES  BEST: 19,824.86 reqs/s
8 WORKER PROCESSES  BEST: 20,118.25 reqs/s
Now note, this is done here on my MacBook Pro. Not on my Ubuntu DigitalOcean servers. For now, I just want to get a feeling for how these numbers correlate.
Conclusion
The benchmark isn't good enough. The numbers are pretty stable, but I'm doing this on my laptop with multiple browsers idling, Slack, and Spotify running. Clearly, the throughput goes up a bit when you allocate more workers, but if anything can be learned from this, start by going beyond 1 as a quick fix and from there do more exhaustive benchmarks. And don't forget, if you have time to go deeper on this, to look at the combination of worker_connections and worker_processes.
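For reference, this is where those two directives live in nginx.conf (the numbers here are just illustrative, not a recommendation):

worker_processes auto;

events {
    worker_connections 1024;  # per worker; total concurrent connections is roughly workers * connections
}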
How to encrypt a file with Emacs on macOS (ccrypt)
January 29, 2019
0 comments MacOSX, Linux
Suppose you have a cleartext file that you want to encrypt with a password. Here's how you do that with ccrypt on macOS. First:
▶ brew install ccrypt
Now, you have the ccrypt
program. Let's test it:
▶ cat secrets.txt
Garage pin: 123456
Favorite kid: bart
Wedding ring order no: 98c4de910X
▶ ccrypt secrets.txt
Enter encryption key: ▉▉▉▉▉▉▉▉▉▉▉
Enter encryption key: (repeat) ▉▉▉▉▉▉▉▉▉▉▉
# Note that the original 'secrets.txt' is replaced
# with the '.cpt' version.
▶ ls | grep secrets
secrets.txt.cpt
▶ less secrets.txt.cpt
"secrets.txt.cpt" may be a binary file. See it anyway?
There. Now you can back up that file on Dropbox or whatever and not have to worry about anybody being able to open it without your password. To read it again:
▶ ccrypt --decrypt --cat secrets.txt.cpt
Enter decryption key: ▉▉▉▉▉▉▉▉▉▉▉
Garage pin: 123456
Favorite kid: bart
Wedding ring order no: 98c4de910X
▶ ls | grep secrets
secrets.txt.cpt
Or, to edit it you can do these steps:
▶ ccrypt --decrypt secrets.txt.cpt
Enter decryption key: ▉▉▉▉▉▉▉▉▉▉▉
▶ vi secrets.txt
▶ ccrypt secrets.txt
Enter encryption key:
Enter encryption key: (repeat)
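You could chain those steps into one line (my own shortcut, not something from the ccrypt docs):

▶ ccrypt --decrypt secrets.txt.cpt && vi secrets.txt && ccrypt secrets.txt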
Either way, it's clunky that you have to extract the file and remember to encrypt it back again. That's where you can use emacs. Assuming you have emacs installed and you have a ~/.emacs file, add these lines to it:
(setq auto-mode-alist
(append '(("\\.cpt$" . sensitive-mode))
auto-mode-alist))
(add-hook 'sensitive-mode (lambda () (auto-save-mode nil)))
(setq load-path (cons "/usr/local/share/emacs/site-lisp/ccrypt" load-path))
(require 'ps-ccrypt "ps-ccrypt.el")
By the way, how did I know that the load path should be /usr/local/share/emacs/site-lisp/ccrypt? I looked at the output from brew:
▶ brew info ccrypt
ccrypt: stable 1.11 (bottled)
Encrypt and decrypt files and streams
...
==> Caveats
Emacs Lisp files have been installed to:
/usr/local/share/emacs/site-lisp/ccrypt
...
Anyway, now I can use emacs to open the secrets.txt.cpt file and it will automatically handle the password stuff.
This is really convenient. Now you can open an encrypted file, type in your password, and it will take care of encrypting it for you when you're done (saving the file).
Be warned! I'm not an expert at either emacs or encryption, so be careful, and if you get nervous, take precautions and set aside more time to study this deeper.
elapsed function in bash to print how long things take
December 12, 2018
0 comments MacOSX, Linux
I needed this for a project and it has served me pretty well. Let's jump right into it:
# This is elapsed.sh
# SECONDS is a bash builtin that counts seconds since the shell started;
# resetting it here starts the clock when this file is sourced.
SECONDS=0
function elapsed()
{
    local T=$SECONDS
    local D=$((T/60/60/24))  # days
    local H=$((T/60/60%24))  # hours
    local M=$((T/60%60))     # minutes
    local S=$((T%60))        # seconds
    (( $D > 0 )) && printf '%d days ' $D
    (( $H > 0 )) && printf '%d hours ' $H
    (( $M > 0 )) && printf '%d minutes ' $M
    (( $D > 0 || $H > 0 || $M > 0 )) && printf 'and '
    printf '%d seconds\n' $S
}
And here's how you use it:
# Assume elapsed.sh to be in the current working directory
source elapsed.sh
echo "Doing some stuff..."
# Imagine it does something slow that
# takes about 3 seconds to complete.
sleep 3
elapsed
echo "Some quick stuff..."
sleep 1
elapsed
echo "Doing some slow stuff..."
sleep 61
elapsed
The output of running that is:
Doing some stuff...
3 seconds
Some quick stuff...
4 seconds
Doing some slow stuff...
1 minutes and 5 seconds
Basically, if you have a bash script that does a bunch of slow things, having a line with elapsed after some blocks of code will print out how long the script has been running.
It's not beautiful but it works.
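One thing worth knowing: because SECONDS keeps counting from the moment the file is sourced, the printed times are cumulative. If you want to time a single block instead, you can reset the counter (a variation on the above, not part of the original snippet):

echo "Doing some independent stuff..."
SECONDS=0  # restart the clock for just this block
sleep 2
elapsed    # prints "2 seconds" no matter how long the script has already been running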