Peterbe.com

A blog and website by Peter Bengtsson


A darn good search filter function in JavaScript

12 September 2018 0 comments   Web development, Javascript

https://codesandbox.io/s/62x4mmxr0n


Demo here. The demo uses React and a list of blog post titles that get filtered immediately as you type in a search. I.e. you have the whole list but show fewer items when a search term is entered.

That the demo uses React isn't important. What's important is the search function. It looks like this:

function filterList(q, list) {
  function escapeRegExp(s) {
    return s.replace(/[-/\\^$*+?.()|[\]{}]/g, "\\$&");
  }
  const words = q
    .split(/\s+/g)
    .map(s => s.trim())
    .filter(s => !!s);
  const hasTrailingSpace = q.endsWith(" ");
  const searchRegex = new RegExp(
    words
      .map((word, i) => {
        if (i + 1 === words.length && !hasTrailingSpace) {
          // The last word - ok with the word being "startswith"-like
          return `(?=.*\\b${escapeRegExp(word)})`;
        } else {
          // Not the last word - expect the whole word exactly
          return `(?=.*\\b${escapeRegExp(word)}\\b)`;
        }
      })
      .join("") + ".+",
    "gi"
  );
  return list.filter(item => {
    return searchRegex.test(item.title);
  });
}

In action
I use this in a single-page content management app. There's a list of records and a search input. Every character you put into the search bar updates the list of records shown.

What it does is let you search texts based on multiple whole words. The key feature is that the last word doesn't have to be whole. For example, it will positively match "This is a blog post about JavaScript" if the search is "post javascript" or "post javasc". But it won't match on "pos blog".

The idea is that if a user has typed in a full word followed by a space, all previous words need to be matched fully. For example, if the input is "java " it won't match "This is a blog post about JavaScript" because the word java, on its own, isn't in the text.

Sure, there are different ways to write this but I think this functionality is good for this kind of filtering search. A different implementation would have a function that returns the regex and then it can be used both for filtering and for highlighting.

Hope it helps.

django-pipeline and Zopfli

15 August 2018 0 comments   Python, Web development, Django


tl;dr; I wrote my own extension to django-pipeline that uses Zopfli to create .gz files from static assets collected in Django. Here's the code.

Nginx and Gzip

What I wanted was to continue to use django-pipeline which does a great job of reading a settings.BUNDLES setting and generating things like /static/js/myapp.min.a206ec6bd8c7.js. It has configurable options to not just make those files but also generate /static/js/myapp.min.a206ec6bd8c7.js.gz, which means that with gzip_static in Nginx, Nginx doesn't have to Gzip compress static files on-the-fly but can basically just read them from disk. Nginx doesn't care how the file got there, but an immediate advantage of preparing the file on disk is that the compression can be higher (smaller .gz files). That means smaller responses sent to the client and less CPU work needed from Nginx. Your job is to set gzip_static on; in your Nginx config (per location) and make sure every compressible file exists on disk with the same name but with the .gz suffix.

In other words, when the client does GET https://example.com/static/foo.js, Nginx quickly does a read on the file system to see if there exists a ROOT/static/foo.js.gz and, if so, returns that. If the file doesn't exist, and you have gzip on; in your config, Nginx will read ROOT/static/foo.js into memory, compress it (usually with a lower compression level) and return that. Nginx takes care of figuring out whether to do this at all, dynamically, by reading the Accept-Encoding header from the request.

Zopfli

The best solution today to generate these .gz files is Zopfli. Zopfli is slower than good old regular gzip but the files get smaller. To manually compress a file you can install the zopfli executable (e.g. brew install zopfli or apt install zopfli) and then run zopfli $ROOT/static/foo.js which creates a $ROOT/static/foo.js.gz file.

So your task is to build some pipelining code that generates a .gz version of every static file your Django server creates.
At first I tried django-static-compress, which is an extension to the regular Django staticfiles storage. The default staticfiles storage is django.contrib.staticfiles.storage.StaticFilesStorage and that's what django-static-compress extends.

But I wanted more. I wanted all the good bits from django-pipeline (minification, hashes in filenames, concatenation, etc.). Also, in django-static-compress you can't control the parameters to zopfli, such as the number of iterations. And with django-static-compress you have to install Brotli, which I can't use because I don't want to compile my own Nginx.

Solution

So I wrote my own little mashup. I took some ideas from how django-pipeline does regular gzip compression as a post-process step. And in my case, I never want to bother with any of the other files that are put into the settings.STATIC_ROOT directory from the collectstatic command.

Here's my implementation: peterbecom.storage.ZopfliPipelineCachedStorage. Check it out. It's very tailored to my personal preferences and use case but it works great. To use it, I have this in my settings.py: STATICFILES_STORAGE = "peterbecom.storage.ZopfliPipelineCachedStorage"
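If you're curious what such a storage class can look like, here's a minimal sketch of the idea (not the actual implementation linked above; the class names, the extension list and the number of Zopfli iterations are assumptions, and it assumes django-pipeline's PipelineCachedStorage):

import subprocess

from pipeline.storage import PipelineCachedStorage

# Extensions worth compressing; an assumption, tune to your assets.
COMPRESS_EXTENSIONS = (".js", ".css", ".svg", ".txt")


class ZopfliMixin:
    def post_process(self, *args, **kwargs):
        # Let django-pipeline do its thing first (concatenation,
        # minification, hashed filenames, ...), then compress the results.
        for name, hashed_name, processed in super().post_process(*args, **kwargs):
            if processed and str(hashed_name).endswith(COMPRESS_EXTENSIONS):
                # Writes a `.gz` file next to the original on disk.
                subprocess.run(
                    ["zopfli", "--i100", self.path(hashed_name)], check=True
                )
            yield name, hashed_name, processed


class ZopfliStorage(ZopfliMixin, PipelineCachedStorage):
    pass

The point is simply to hook in after django-pipeline's own post-processing so the .gz files are generated from the final, hashed filenames.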

I know what you're thinking

Why not try to get this into django-pipeline or into django-static-compress? The answer is frankly laziness. Hopefully someone else can pick up this task. I have fewer and fewer projects where I use Django to handle static files. These days most of my projects are single-page-apps that are 100% static and use Django only for XHR requests to get the data.

Django lock decorator with django-redis

14 August 2018 1 comment   Python, Web development, Django, Redis


Here's the code. It's quick-n-dirty but it works wonderfully:

import functools
import hashlib

from django.core.cache import cache
from django.utils.encoding import force_bytes


def lock_decorator(key_maker=None):
    """
    When you want to lock a function from more than 1 call at a time.
    """

    def decorator(func):
        @functools.wraps(func)
        def inner(*args, **kwargs):
            if key_maker:
                key = key_maker(*args, **kwargs)
            else:
                key = str(args) + str(kwargs)
            lock_key = hashlib.md5(force_bytes(key)).hexdigest()
            with cache.lock(lock_key):
                return func(*args, **kwargs)

        return inner

    return decorator

How To Use It

This has saved my bacon more than once. I use it on functions that really need to be made synchronous. For example, suppose you have a function like this:

def fetch_remote_thing(name):
    try:
        return Thing.objects.get(name=name).result
    except Thing.DoesNotExist:
        # Need to go out and fetch this
        result = some_internet_fetching(name)  # Assume this is sloooow
        Thing.objects.create(name=name, result=result)
        return result

That function is quite dangerous because, if it's executed by two concurrent web requests for example, they will trigger two "identical" calls to some_internet_fetching. If the database didn't already have the name, that will most likely trigger two calls to Thing.objects.create(name=name, ...), which could lead to integrity errors, or, if it doesn't, the function breaks down anyway because it assumes there is only 1 or 0 of these Thing records.

Easy to solve, just add the lock_decorator:

@lock_decorator()
def fetch_remote_thing(name):
    try:
        return Thing.objects.get(name=name).result
    except Thing.DoesNotExist:
        # Need to go out and fetch this
        result = some_internet_fetching(name)  # Assume this is sloooow
        Thing.objects.create(name=name, result=result)
        return result

Now, thanks to Redis distributed locks, the function is always allowed to finish before it starts another one. All the hairy locking (in particular, the waiting) is implemented deep down in Redis which is rock solid.

Bonus Usage

Another use that has saved my bacon is with functions that aren't necessarily called with the same input arguments but where each call is so resource intensive that you want to make sure only one of them runs at a time. Suppose you have a Django view function that does some resource-intensive work and you want to stagger the calls so it only runs one at a time. Like this, for example:

def api_stats_calculations(request, part):
    if part == 'users-per-month':
        data = _calculate_users_per_month()  # expensive
    elif part == 'pageviews-per-week':
        data = _calculate_pageviews_per_week()  # intensive
    elif part == 'downloads-per-day':
        data = _calculate_download_per_day()  # slow
    elif you == 'get' and the == 'idea':
        ...

    return http.JsonResponse({'data': data})

If you just put @lock_decorator() on this Django view function, and you have some (almost) concurrent calls to this function, for example from a uWSGI server running with threads and multiple processes, then it will not synchronize the calls, because the default lock key is built from the arguments, so calls for different parts get different keys and can still run simultaneously.

The solution to this is to write your own function for generating the lock key, like this for example:

@lock_decorator(
    key_maker=lambda request, part: 'api_stats_calculations'
)
def api_stats_calculations(request, part):
    if part == 'users-per-month':
        data = _calculate_users_per_month()  # expensive
    elif part == 'pageviews-per-week':
        data = _calculate_pageviews_per_week()  # intensive
    elif part == 'downloads-per-day':
        data = _calculate_download_per_day()  # slow
    elif you == 'get' and the == 'idea':
        ...

    return http.JsonResponse({'data': data})

Now it works.

How Time-Expensive Is It?

Perhaps you worry that 99% of your calls to the function don't have the problem of concurrent calls at all. How much is the overhead of this lock costing you? I wondered that too and set up a simple stress test where I wrote a really simple Django view function. It looked something like this:

@lock_decorator(key_maker=lambda request: 'samekey')
def sample_view_function(request):
    return http.HttpResponse('Ok\n')

I started a Django server with uWSGI with multiple processes and threads enabled. Then I bombarded this function with a simple concurrent stress test and observed the requests per minute. The cost was extremely tiny and almost negligible (compared to not using the lock decorator). Granted, in this test I used Redis on redis://localhost:6379/0 but generally the conclusion was that the call is extremely fast and not something to worry too much about. But your mileage may vary so do your own experiments for your context.
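For reference, the stress test itself doesn't need to be fancy. A rough sketch (the URL, concurrency and request count are made up, and it assumes the requests library):

import concurrent.futures
import time

import requests

# Hypothetical values; point it at wherever the Django server is running.
URL = "http://localhost:8000/sample-view/"
CONCURRENCY = 10
NUMBER_OF_REQUESTS = 1000


def hit(_):
    response = requests.get(URL)
    response.raise_for_status()


t0 = time.time()
with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as executor:
    list(executor.map(hit, range(NUMBER_OF_REQUESTS)))
t1 = time.time()

print("{:.1f} requests/second".format(NUMBER_OF_REQUESTS / (t1 - t0)))

Run it once against the view with the lock decorator and once without, and compare the throughput.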

What's Needed

You need to use django-redis as your Django cache backend. I've blogged before about using django-redis, for example Fastest cache backend possible for Django and Fastest Redis configuration for Django.
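In other words, something like this in your settings.py (the Redis URL is whatever your environment uses; this is the standard django-redis configuration):

CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/0",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    }
}

With that in place, the cache.lock() used in the decorator above is backed by a real Redis distributed lock.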

django-html-validator now supports Django 2.x

13 August 2018 0 comments   Python, Web development, Django

https://pypi.org/project/django-html-validator/


django-html-validator is a Django project that can validate your generated HTML. It does so by sending the HTML to https://html5.validator.nu/ or you can start your own Java server locally with vnu.jar from here. You can have validation errors printed to stdout, or have them written as .txt files in a temporary directory. You can also include it in your test suite and make it so that tests fail if invalid HTML is generated during rendering in Django unit tests.

The project seems to have become a lot more popular than I thought it would. It started as a one-evening hack and, because there was interest, I wrapped it up in a proper project with "docs" and set up CI for future contributions.

I kind of forgot about the project since almost all my current projects generate JSON on the server and generate the DOM on-the-fly with client-side JavaScript, but apparently a lot of issues and PRs were filed related to making it work in Django 2.x. So I took the time last night to tidy up the tox.ini etc. and make the necessary compatibility fixes so it works with everything from Django 1.8 up to Django 2.1. Pull request here.

Thank you all who contributed! I'll try to do a better job of noticing filed issues in the future.

HTMLMinifier in use on this blog now

07 August 2018 3 comments   Web development, Javascript, Web Performance


Last week I enabled HTMLMinifier as a post-build step for server-rendered content here on this blog. Basically, after a page is rendered in Django, it's sent to a Celery queue that does things to the index.html file. The first thing it does is that it extracts the stylesheets and replaces them with a block of inline CSS. More details in this blog post. Secondly, what the background job does is that it sends the index.html file to node_modules/.bin/html-minifier. See the code here.
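In essence, that second background step boils down to a subprocess call. A simplified sketch (the exact flags I use differ a bit; see the linked code):

import subprocess


def minify_html(filepath):
    # Run the locally installed html-minifier (from npm) and overwrite
    # the file with the minified output.
    minified = subprocess.run(
        [
            "node_modules/.bin/html-minifier",
            "--collapse-whitespace",
            "--remove-comments",
            "--remove-attribute-quotes",
            filepath,
        ],
        check=True,
        capture_output=True,
    ).stdout
    with open(filepath, "wb") as f:
        f.write(minified)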

What that does is remove quotation marks where they're not needed (e.g. <div id=foo> instead of <div id="foo">), remove HTML comments, and remove whitespace that isn't needed. The result is that the HTML now looks like this:

View source

I also added a line of logging that spits out a measurement of the HTML size before, before with gzip, after, and after with gzip. Why? Because the optimization of HTML minification is usually insignificant after you gzip. See this blog post about how insignificant space optimization is in comparison to gzip. Look at the sample log lines:

...
Minified before: 38,249 bytes (11,150 gzipped), After: 36,098 bytes (10,875 gzipped), Shaving 2,151 bytes (275 gzipped)
Minified before: 37,698 bytes (10,534 gzipped), After: 35,622 bytes (10,243 gzipped), Shaving 2,076 bytes (291 gzipped)
Minified before: 58,846 bytes (14,623 gzipped), After: 55,540 bytes (14,313 gzipped), Shaving 3,306 bytes (310 gzipped)
...

So this last one saved 3.2KB of HTML document, which is nothing to sneeze at, but since 99% of clients support gzip, it actually only saved 310 bytes. As a matter of fact, I parsed the log lines and calculated the average and it was saving 338 bytes per page.
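For the curious, that average came out of a small throwaway script along these lines (it assumes the log lines look exactly like the samples above and are collected in a file; the file name is made up):

import re

# Capture the gzipped saving from lines like the samples above.
pattern = re.compile(r"Shaving [\d,]+ bytes \((?P<gzipped>[\d,]+) gzipped\)")

savings = []
with open("minify.log") as f:  # hypothetical log file name
    for line in f:
        match = pattern.search(line)
        if match:
            savings.append(int(match.group("gzipped").replace(",", "")))

print("Average gzipped saving: {:.0f} bytes".format(sum(savings) / len(savings)))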

Worth it? I doubt it. It's not without risks and now it's slightly harder and weirder to view the source. However, 338 bytes multiplied by the total number of visitors per month adds up: I estimate a total of about 161 MB less data to be sent.

To defer or to async JavaScript tags. That's the question.

29 June 2018 0 comments   Web development, Javascript, Web Performance


tl;dr; async scores slightly better than defer (on script tags) in this experiment using Webpagetest.

Much has been written about the difference between <script defer src="..."> and <script async src="..."> but nothing beats seeing it visually in Webpagetest.


So I took a page off my own blog. Butchered it and cleaned up the 6 <script> tags. It uses HTTP/2 and some jQuery and some other vanilla JavaScript stuff. See the page here: neither.html
Then I copied that HTML file and replaced all <script src="..."> with <script defer src="...">: defer.html. And lastly, the same with: async.html.

First let's compare all three against each other:

Neither vs defer vs async on Webpagetest.

Clearly, making the JavaScript non-blocking is critical for web performance. That's 1.7 seconds instead of 2.8 seconds.

Second, let's compare just defer vs. async on a 4G connection:

defer vs. async on 4G. Also, if you like, here's defer vs. async on a desktop browser instead.

Conclusions

  1. Don't allow your JavaScript to block rendering unless it's OK to have your users staring at a white screen till everything has landed.

  2. There's not much difference between defer and async. async has a slight advantage as per these experiments. I'm only capable of guessing, but I suspect it's because it can "spread out" the work better and get some work done in parallel whilst defer has things that tell it to wait. In particular, with defer the order of the <script> tags is respected. Suppose that the file some.jquery.plugin.js downloads before jquery.min.js, then that file has to be blocked and its execution delayed whilst waiting for jquery.min.js to download, parse and execute. With async it's more of a wild west of executing whenever you can.

  3. The async.html is busted because of the unpredictable order of execution and these .js files depend on the order. Another reason to use defer if your scripts have that order-dependency problem.

  4. Consider using a mix of async and defer. async has the advantage that some parsing/execution can be done by the main thread whilst waiting for other blocking resources like images.

A good Django view function cache decorator for http.JsonResponse

20 June 2018 0 comments   Python, Web development, Django


I use this a lot. It has served me very well. The code:

import hashlib
import functools

import markus  # optional
from django.core.cache import cache
from django import http
from django.utils.encoding import force_bytes, iri_to_uri

metrics = markus.get_metrics(__name__)  # optional


def json_response_cache_page_decorator(seconds):
    """Cache only when there's a healthy http.JsonResponse response."""

    def decorator(func):

        @functools.wraps(func)
        def inner(request, *args, **kwargs):
            cache_key = 'json_response_cache:{}:{}'.format(
                func.__name__,
                hashlib.md5(force_bytes(iri_to_uri(
                    request.build_absolute_uri()
                ))).hexdigest()
            )
            content = cache.get(cache_key)
            if content is not None:

                # metrics is optional
                metrics.incr(
                    'json_response_cache_hit',
                    tags=['view:{}'.format(func.__name__)]
                )

                return http.HttpResponse(
                    content,
                    content_type='application/json'
                )
            response = func(request, *args, **kwargs)
            if (
                isinstance(response, http.JsonResponse) and
                response.status_code in (200, 304)
            ):
                cache.set(cache_key, response.content, seconds)
            return response

        return inner

    return decorator

To use it, simply add it to Django view functions that might return an http.JsonResponse. For example, something like this:

@json_response_cache_page_decorator(60)
def search(request):
    q = request.GET.get('q')
    if not q:
        return http.HttpResponseBadRequest('no q')
    results = search_database(q)
    return http.JsonResponse({
        'results': results,
    })

I use this instead of django.views.decorators.cache.cache_page() for a couple of reasons.

Disclaimer: This snippet of code comes from a side-project that has a very specific set of requirements. They're rather unique to that project and I have a full picture of the needs. E.g. I know what specific headers matter and don't matter. Your project might be different. For example, perhaps you don't have markus to handle your metrics. Or perhaps you need to re-write the query string for something to normalize the cache key differently. Point being, take the snippet of code as inspiration when you too find that django.views.decorators.cache.cache_page() isn't good enough for your Django view functions.
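For example, if you needed the cache key to ignore certain query string parameters, you could normalize the URL before it gets hashed, something along these lines (a hypothetical helper; the parameter names are made up):

from urllib.parse import parse_qsl, urlencode, urlparse


def normalized_url(request, ignore=("utm_source", "utm_medium")):
    # Strip query string parameters that shouldn't affect the cached
    # response, then rebuild the URL for hashing.
    parsed = urlparse(request.build_absolute_uri())
    query = [(k, v) for k, v in parse_qsl(parsed.query) if k not in ignore]
    return parsed._replace(query=urlencode(query)).geturl()

Then use normalized_url(request) instead of request.build_absolute_uri() when building cache_key.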

The best grep tool in the world; ripgrep

19 June 2018 0 comments   Linux, Web development, MacOSX

https://github.com/BurntSushi/ripgrep


tl;dr; ripgrep (aka. rg) is the best tool to grep today.

ripgrep is a tool for searching files. Its killer feature is that it's fast. Like, really really fast. Faster than sift, git grep, ack, regular grep etc.

If you don't believe me, either read this detailed blog post from its author or just jump straight to the conclusion:

Benchmark

I used to use git grep whenever I was inside a git repo and sift for everything else. That alone was a huge step up from regular grep. Granted, almost all my git repos are small enough that git grep is often faster than I can perceive anyway. But with ripgrep I can just add --no-ignore-vcs and it searches the files mentioned in .gitignore too. That's useful when you want to search in your own source as well as the files in node_modules.

The installation instructions are easy. I installed it with brew install ripgrep and the best way to learn how to use it is rg --help. Remember that it has a lot of cool features that are well worth learning. It's written in Rust and so far I haven't had a single crash, ever. The ability to search by file type takes some getting used to (tip! use: rg --type-list) and remember that you can pipe rg output to another rg. For example, to search for all lines that contain query and string you can use rg query | rg string.

How to NOT start two servers on the same port

11 June 2018 2 comments   Linux, Web development


First of all, you can't start two servers on the same port. Ultimately it will fail. However, you might not notice this until it's too late. For example, if you do this:

# In one terminal
$ cd elasticsearch-6.1.0
$ ./bin/elasticsearch
...
$ curl localhost:9200
...
"version" : {
    "number" : "6.1.0",
...
# In *another* terminal
$ cd elasticsearch-6.2.4
$ ./bin/elasticsearch
...
$ curl localhost:9200
...
"version" : {
    "number" : "6.1.0",
...

In other words, what happened to elasticsearch-6.2.4/bin/elasticsearch?? It actually started on port :9201. That's rather scary because, as you jump between projects in different tabs, you might not notice that you already have Elasticsearch running with docker-compose somewhere.

To remedy this I use this curl one-liner:

$ curl -s localhost:9200 > /dev/null && echo "Already running!" && exit || ./bin/elasticsearch

Now if you try to start a server on a used port it will exit early.

To wrap this up in a script, take this:

#!/bin/bash

set -eo pipefail

hostandport=$1
shift
curl -s "$hostandport" >/dev/null && \
  echo "Already running on $hostandport" && \
  exit 1 || exec "$@"

...and make it an executable called unlessalready.sh and now you can do this:

$ unlessalready.sh localhost:9200 ./bin/elasticsearch

To CDN assets or just HTTP/2

17 May 2018 1 comment   Web development, Javascript, Web Performance


tl;dr; I see little benefit in using a CDN at this point.

I took two random pages here on my blog. One and Another. Doesn't matter what they say but it's important to notice that they're extremely similar. No big pictures. Both have 1 banner ad each. Both served with HTTP/2. Neither has any blocking linked assets. I.e. there is no blocking <link rel="stylesheet" href="styles.css"> and the script tags are either async or defer. Both pages reference one little .png that is not deliberately lazy loaded. That's the baseline.

The HTML document, in both URLs, is served with HTTP/2 but it references the lazy-loaded .css and (a bunch of) .js files via a CDN. In other words, it looks like this:

▶ curl -v https://www.peterbe.com/plog/hashin-0.7.0
...
> GET /plog/hashin-0.7.0 HTTP/2
...
< HTTP/2 200
...
<
...
<link rel="preload" href="https://cdn-2916.kxcdn.com/static/css/base.min.e8df96d84663.css" 
 as="style" onload="this.onload=null;this.rel='stylesheet'">
...
<script defer src="https://cdn-2916.kxcdn.com/static/js/blogitem-post.min.f6c0be691e73.js"></script>
...

So, cdn-2916.kxcdn.com is an awesome CDN, but to a first-time visitor it is going to require a DNS lookup and the creation of a new TCP connection that can be kept alive. The alternative to this is to not put any of the .png, .css or .js assets on a CDN. Basically, instead of <script src="https://mycdn.example.com/foo.js">, just do <script src="/foo.js">.

CDNs are really important since latency is a killer to web performance, and remember that "Use a CDN" is rule number 2 in the, now dated, YSlow ruleset. However, we're entering an era where HTTP/2 is becoming more and more available in mainstream browsers (hint: nearly 100% of visitors to my site have HTTP/2 support). Buuuuuut, the latency (DNS, connection and SSL negotiation) doesn't matter that much if you have already paid those costs to get to the origin web server (https://www.peterbe.com in this example).

The Experiment

What I'm interested in is seeing if there is a way to gauge/measure when it's best to use a CDN and when it's best to use the origin web server to serve all assets. My friend @stereobooster suggested: "Webpagetest.org is all you need"

Ok. Let's measure that then with Webpagetest.org and see what we can learn.

Here's a visual comparison of the two URLs when they both use CDN for the static assets.

Here's a visual comparison of one that uses a CDN for static assets and one that does not.

You can see their webpagetests individually here and here.

Assets over CDN
Two connection prices paid. Downloads individual assets faster but ultimately takes a longer time.

One HTTP/2 connection only
Only 1 connection price paid. ALL assets downloaded sooner, albeit individually slower.

Analysis

My site is served from a highly optimized Nginx server in New York, USA. The two Webpagetest visual comparisons above are both done from Virginia, USA. But the killer feature of a CDN is that latency can be so much better thanks to the edge locations of the CDN. In particular, KeyCDN has an edge location in Stockholm, Sweden. So what happens when you run the URLs from a Webpagetest machine in Stockholm, Sweden?

They both start to render at the same time (expected, since the HTML document is still in New York, USA) but the (roughly) total time to download all the .css and .js is (about) 2.6 seconds with a CDN and 1.9 seconds without a CDN. In other words, despite the CDN being geographically so much closer, the static assets are still available sooner without a CDN.

It's pretty clear at this point that it's not a good idea to use a CDN for static assets. Even if they're not critical. The "First Meaningful Paint" and "Time To Interactive" are about the same, but when HTTP/2 can download all the .js files faster, their JavaScript can start being useful sooner.

What Else

So for my site, it's easiest to host the whole thing on an Nginx server on a Digital Ocean machine. It's easy to invalidate its cache (just delete the file from disk and wait for Django to regenerate it). Another advantage of using plain Nginx is that I serve the HTML with Cache-Control headers and then do some post-processing of the .html file, and since Nginx is disk-based, I don't have to update a CDN.

An alternative would be to put the whole site behind a CDN. That way, the initial HTML document can be served from a CDN edge location, using HTTP/2 and send the rest of the static assets on the same HTTP/2 connection. But this means that every single dynamic URL (e.g. HTTP POSTs or some per-user XHR requests) has to go via a CDN rather than going straight to the Nginx that is connected to the Django web server.

Last but not least, even though my Nginx server is on a decent machine and pretty well tuned, I very much doubt it's as fast and powerful as a KeyCDN or CloudFront or Akamai or Google Cloud CDN. Those servers are beasts! Mind you, the DNS + connection + SSL negotiation, when requesting from Stockholm, Sweden was about 0.75s to my Nginx in New York, USA. For the KeyCDN edge location the DNS + connection + SSL negotiation was about 0.52s. So not a huge difference actually.

Another important aspect is Service Workers. Perhaps I don't know how to hack it, but it doesn't work when you use different domains for the service worker .js file and the URIs it references.

In conclusion; I see little benefit in using a CDN at this point. Perhaps for larger assets like videos, GIFs or high-res images. HTTP/2 changes one of the major web performance rules. End of an era(?)