How to NOT start two servers on the same port
June 11, 2018
2 comments Linux, Web development
First of all, you can't start two servers on the same port; the second one will fail to bind. However, the failure might not be obvious when it happens. For example, if you do this:
# In one terminal
$ cd elasticsearch-6.1.0
$ ./bin/elasticsearch
...
$ curl localhost:9200
...
"version" : {
"number" : "6.1.0",
...
# In *another* terminal
$ cd elasticsearch-6.2.4
$ ./bin/elasticsearch
...
$ curl localhost:9200
...
"version" : {
"number" : "6.1.0",
...
In other words, what happened to the elasticsearch-6.2.4/bin/elasticsearch? It actually started on port :9201. That's rather scary, because as you jump between projects in different tabs you might not notice that you already have Elasticsearch running with docker-compose somewhere.
To remedy this I use this curl one-liner:
$ curl -s localhost:9200 > /dev/null && echo "Already running!" && exit || ./bin/elasticsearch
Now if you try to start a server on a used port it will exit early.
To wrap this up in a script, take this:
#!/bin/bash
set -eo pipefail
hostandport=$1
shift
curl -s "$hostandport" >/dev/null && \
echo "Already running on $hostandport" && \
exit 1 || exec "$@"
...and make it an executable called unlessalready.sh, and now you can do this:
$ unlessalready.sh localhost:9200 ./bin/elasticsearch
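The curl check only works when whatever is on the port speaks HTTP. A variant sketch that checks any TCP port, using bash's /dev/tcp pseudo-device (the function name port_in_use is my own, not part of the script above):

```shell
#!/usr/bin/env bash
# Sketch: check any TCP port, not just HTTP servers.
# Succeeds if something is already accepting connections on host:port.
port_in_use() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

if port_in_use localhost 9200; then
  echo "Already running!"
else
  echo "Port is free"  # here you would exec ./bin/elasticsearch
fi
```

Note that /dev/tcp is a bash feature, so this won't work under plain sh.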
How I found out where a bash alias was set up
May 9, 2018
0 comments Linux
I wanted to install a command line tool called gg. But for some reason, gg was already tied to an alias. No problem, I'll just delete that alias. I looked in ~/.bash_profile and I looked in ~/.zshrc and it wasn't there!
But here's how I managed to figure out where it came from:
▶ which gg
gg: aliased to git gui citool
Then I copied the git gui citool part of that output and ran:
▶ rg --hidden 'git gui citool'
.oh-my-zsh/plugins/git/git.plugin.zsh
104:alias gg='git gui citool'
105:alias gga='git gui citool --amend'
A ha! So it was .oh-my-zsh/plugins/git/git.plugin.zsh that was the culprit. I had totally forgotten about the plugin. It's full of other useful aliases, so I just commented out the one(s) I knew I didn't need any more.
By the way, rg, a.k.a. ripgrep, is probably one of the best tools I have. I use it so often that it's attached to my belt rather than in my toolbox.
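If ripgrep isn't installed, plain grep can do the same hunt. A self-contained sketch (the fake-home directory here just stands in for your real home directory):

```shell
#!/usr/bin/env bash
# Recreate a miniature .oh-my-zsh layout purely for illustration
mkdir -p fake-home/.oh-my-zsh/plugins/git
cat > fake-home/.oh-my-zsh/plugins/git/git.plugin.zsh <<'EOF'
alias gg='git gui citool'
alias gga='git gui citool --amend'
EOF

# -r recurses into (hidden) directories, -n prints line numbers,
# similar to the output of rg --hidden
grep -rn "git gui citool" fake-home
```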
gtop is best
May 2, 2018
0 comments Linux, MacOSX, JavaScript
To me, using top inside a Linux server via SSH is all muscle-memory and it's definitely good enough. On my MacBook, when working on long-running code that is resource intensive, the best tool I know of is gtop.
I like it because it has the graphs I want and need. It splits up the work of each CPU, which is awesome. That's useful for understanding how well a program is able to leverage more than one CPU core.
And it's really nice to have the list of Processes there to be able to quickly compare which programs are running and how that might affect the use of the CPUs.
Instead of listing the alternatives I've tried before, hopefully this Reddit discussion has good links to other alternatives.
Run something forever in bash until you want to stop it
February 13, 2018
6 comments Linux
I often use this in various projects. I find it very useful. Thought I'd share to see if others find it useful.
Running something forever
Suppose you have some command that you want to run a lot. One way is to do this:
$ ./manage.py run-some-command && \
./manage.py run-some-command && \
./manage.py run-some-command && \
./manage.py run-some-command && \
./manage.py run-some-command && \
./manage.py run-some-command && \
./manage.py run-some-command && \
./manage.py run-some-command && \
./manage.py run-some-command && \
./manage.py run-some-command
That runs the command 10 times. Clunky but effective.
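If a fixed number of runs is all you need, a plain loop is less clunky. A sketch, where the echo stands in for ./manage.py run-some-command:

```shell
#!/usr/bin/env bash
count=0
for i in $(seq 10); do
  # your command here; stop at the first failure, like the && chain
  echo "run $i" || break
  count=$((count + 1))
done
echo "ran $count times"
```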
Another alternative is to hijack the watch command. By default it waits 2 seconds between each run, but if the command takes longer than 2 seconds, it'll just wait. Running...
$ watch ./manage.py run-some-command
Is almost the same as running...:
$ clear && sleep 2 && ./manage.py run-some-command && \
clear && sleep 2 && ./manage.py run-some-command && \
clear && sleep 2 && ./manage.py run-some-command && \
clear && sleep 2 && ./manage.py run-some-command && \
clear && sleep 2 && ./manage.py run-some-command && \
clear && sleep 2 && ./manage.py run-some-command && \
...
...forever until you Ctrl-C it...
But that's clunky too, because you might not want it to clear the screen between runs, and you get an unnecessary delay between each run.
The biggest problem with using watch, or copy-n-pasting the command many times with && in between, is that if you need to stop it you have to Ctrl-C, and that might kill the command at a precious time.
A better solution
The important thing is that when you want to stop the command repeater, it gets to finish what it's working on at the moment.
Here's a great and simple solution:
#!/usr/bin/env bash
set -eo pipefail
_stopnow() {
test -f stopnow && echo "Stopping!" && rm stopnow && exit 0 || return 0
}
while true
do
_stopnow
# Below here, you put in your command you want to run:
./manage.py run-some-command
done
Save that file as run-forever.sh and now you can do this:
$ bash run-forever.sh
It'll sit there and do its thing over and over. If you want to stop it (from another terminal):
$ touch stopnow
(the file stopnow will be deleted after it's spotted once)
Getting fancy
Instead of taking this bash script and editing it every time you need it to run a different command you can make it a globally available command. Here's how I do it:
#!/usr/bin/env bash
set -eo pipefail
count=0
_stopnow() {
count="$(($count+1))"
test -f stopnow && \
echo "Stopping after $count iterations!" && \
rm stopnow && exit 0 || return 0
}
control_c()
# run if user hits control-c
{
echo "Managed to do $count iterations"
exit $?
}
# trap keyboard interrupt (control-c)
trap control_c SIGINT
echo "To stop this forever loop, create a file called stopnow."
echo "E.g: touch stopnow"
echo ""
echo "Now going to run '$@' forever"
echo ""
while true
do
_stopnow
eval "$@"
# Do this in case you accidentally pass an argument
# that finishes too quickly.
sleep 1
done
Put this file in ~/bin/run-forever.sh and chmod +x ~/bin/run-forever.sh.
Now you can do this:
$ run-forever.sh ./manage.py run-some-command
If the command you want to run forever contains a shell operator (like && or ;), you have to wrap everything in single quotation marks. For example:
$ run-forever.sh './manage.py run-some-command && echo "Cooling CPUs..." && sleep 10'
Be very careful with your add_header in Nginx! You might make your site insecure
February 11, 2018
17 comments Linux, Web development, Nginx
tl;dr; When you use add_header in a location block in Nginx, it undoes all "parent" add_header directives. Dangerous!
Gist of the problem is this:

There could be several add_header directives. These directives are inherited from the previous level if and only if there are no add_header directives defined on the current level.

From the documentation on add_header.
The grand but subtle mistake
Basically, I had this:
server {
    server_name example.com;
    ...gzip...
    ...ssl...
    ...root...

    # Great security headers...
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-XSS-Protection "1; mode=block";
    ...more security headers...

    location / {
        try_files $uri /index.html;
    }
}
And when you curl it, you can see that it works:
$ curl -I https://example.com
[snip]
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Strict-Transport-Security: max-age=63072000; includeSubdomains; preload
The mistake I had was that I added a new add_header inside a relevant location block. If you do that, all the other "global" add_header directives are dropped.
E.g.
server {
    server_name example.com;
    ...gzip...
    ...ssl...
    ...root...

    # Great security headers...
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-XSS-Protection "1; mode=block";
    ...more security headers...

    location / {
        try_files $uri /index.html;
        # NOTE! Adding some more headers here
        add_header X-debug-whats-going-on on;
    }
}
Now, the same curl command:
$ curl -I https://example.com
[snip]
X-debug-whats-going-on: on
Yikes! Now those other useful security headers are gone!
Here are your options:
- Don't add headers like that inside location blocks. Yeah, that's not always a choice.
- Copy-n-paste all the general security add_header blocks into the location blocks where you have to have "custom" add_header entries.
- Use an include file, see below.
How to include files
First create a new file, like /etc/nginx/snippets/general-security-headers.conf, then put this into it:
# Great security headers...
add_header X-Frame-Options SAMEORIGIN;
add_header X-XSS-Protection "1; mode=block";
...more security headers...
# More realistically, see https://gist.github.com/plentz/6737338
Now, instead of writing these add_header lines in your /etc/nginx/sites-enabled/example.conf, change that to:
server {
    server_name example.com;
    ...gzip...
    ...ssl...
    ...root...

    include /etc/nginx/snippets/general-security-headers.conf;

    location / {
        try_files $uri /index.html;
        # Note! This gets included *again* because
        # this location block needs its own custom add_header
        # directives.
        include /etc/nginx/snippets/general-security-headers.conf;
        # NOTE! Adding some more headers here
        add_header X-debug-whats-going-on on;
    }
}
(Use your imagination: a real Nginx site config probably has many more, and more complex, location directives.)
It's arguably a bit clunky, but it works and it's the best of both worlds: the right security headers for all locations, and the ability to set custom add_header directives for specific locations.
Discussion
I'm most disappointed in myself for not noticing. Not for not noticing this in the Nginx documentation, but that I didn't check my security headers on more than one path. But I'm also quite disappointed in Nginx for this rather odd behaviour. To quote my security engineer at Mozilla, April King:
"add" doesn't usually mean "subtract everything else"
She agreed with me that the way it works is counter-intuitive and showed me this snippet which uses include files the same way.
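The lesson about checking more than one path can be scripted. A sketch of a hypothetical helper (not from the post) that, given a blob of response headers such as curl -sI output, reports which expected security headers are missing:

```shell
#!/usr/bin/env bash
# Hypothetical helper: verify that a blob of response headers
# contains the security headers you expect, case-insensitively.
required="X-Frame-Options X-Content-Type-Options X-XSS-Protection"

check_headers() {
  local headers="$1" missing=""
  for h in $required; do
    echo "$headers" | grep -qi "^$h:" || missing="$missing $h"
  done
  if [ -n "$missing" ]; then
    echo "MISSING:$missing"
  else
    echo "OK"
  fi
}

# Headers from a location block that clobbered the globals:
check_headers "X-debug-whats-going-on: on"

# Headers from a healthy response:
check_headers "X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block"
```

In real use you'd feed it the output of curl -sI for each path you care about, not just /.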
Make .local domains NOT slow in macOS
January 29, 2018
19 comments Linux, MacOSX
Problem
I used to have a bunch of domains in /etc/hosts, like peterbecom.dev, for testing Nginx configurations locally. But then it became impossible to test local sites in Chrome, because a .dev domain is force-redirected to HTTPS. No problem, so I use .local instead. However, DNS resolution was horribly slow. For example:
▶ time curl -I http://peterbecom.local/about/minimal.css > /dev/null
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 1763 0 0 0 0 0 0 --:--:-- 0:00:05 --:--:-- 0
curl -I http://peterbecom.local/about/minimal.css > /dev/null 0.01s user 0.01s system 0% cpu 5.585 total
5.6 seconds to open a local file in Nginx.
Solution
Here's that one weird trick to solve it: add an entry for IPv4 AND IPv6 in /etc/hosts.
So now I have:
▶ cat /etc/hosts | grep peterbecom
127.0.0.1 peterbecom.local
::1 peterbecom.local
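The fix can be scripted as a sketch: ensure a name has BOTH address-family entries in a hosts file. To be safe to try, HOSTS_FILE defaults to a local sample file here; on a real Mac you would point it at /etc/hosts (with sudo):

```shell
#!/usr/bin/env bash
# Sketch only: writes to a local sample file by default;
# set HOSTS_FILE=/etc/hosts for the real thing (needs sudo).
HOSTS_FILE="${HOSTS_FILE:-hosts.sample}"
host="peterbecom.local"

add_if_missing() {
  # $1 is the address (IPv4 or IPv6); only append if not already there
  grep -qsE "^$1[[:space:]]+$host" "$HOSTS_FILE" || echo "$1 $host" >> "$HOSTS_FILE"
}

add_if_missing "127.0.0.1"
add_if_missing "::1"
cat "$HOSTS_FILE"
```

Running it twice is safe; the grep guard keeps it from duplicating entries.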
Verification
Ah! Much better. Things are fast again:
▶ time curl -I http://peterbecom.local/about/minimal.css > /dev/null
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 1763 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl -I http://peterbecom.local/about/minimal.css > /dev/null 0.01s user 0.01s system 37% cpu 0.041 total
0.04 seconds instead of 5.6.