
How MDN's site-search works

February 26, 2021
3 comments Web development, Django, Python, MDN, Elasticsearch

tl;dr; Periodically, the whole of MDN is built, by our Node code, in a GitHub Action. A Python script bulk-publishes this to Elasticsearch. Our Django server queries the same Elasticsearch via /api/v1/search. The site-search page is a static single-page app that sends XHR requests to the /api/v1/search endpoint. Search results' sort-order is determined by match and "popularity".

Jamstack'ing

The challenge with "Jamstack" websites is data that is so vast and dynamic that it doesn't make sense to build it statically. Search is one of those. For the record, as of February 2021, MDN consists of 11,619 documents (a.k.a. articles) in English, plus roughly another 40,000 translated documents. In English alone, there are 5.3 million words. So to build a good search experience we need to, as a static-site build side-effect, index all of this in a full-text search database. Elasticsearch is one such database, and it's good. In particular, Elasticsearch is something MDN is already quite familiar with, because it's what was used from within the Django app back when MDN was a wiki.

Note: MDN gets about 20k site-searches per day from within the site.

Build

[Diagram: the build pipeline]

When we build the whole site, it's a script that basically loops over all the raw content, applies macros and fixes, dumps one index.html (via React server-side rendering) and one index.json. The index.json contains all the fully rendered text (as HTML!) in blocks of "prose". It looks something like this:


{
  "doc": {
    "title": "DOCUMENT TITLE",
    "summary": "DOCUMENT SUMMARY",
    "body": [
      {
        "type": "prose",
        "value": {
          "id": "introduction",
          "title": "INTRODUCTION",
          "content": "<p>FIRST BLOCK OF TEXTS</p>"
        }
      },
      ...
    ],
    "popularity": 0.12345,
    ...
  }
}

You can see one here: /en-US/docs/Web/index.json

Indexing

Next, after all the index.json files have been produced, a Python script takes over. It traverses all the index.json files and, based on that structure, figures out the title, the summary, and the whole body (as HTML).

Next up, before sending this to the bulk-publisher for Elasticsearch, it strips the HTML. It's a bit more than just turning <p>Some <em>cool</em> text.</p> into Some cool text., because it also cleans up things like <div class="hidden"> and certain <div class="notecard warning"> blocks.
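
A rough sketch of both steps, assuming the index.json layout shown above (the helper names here are mine, not the actual script's):

import json

from selectolax.parser import HTMLParser


def to_search_doc(index_json_path):
    # Pull out the title, summary, and the concatenated prose blocks.
    with open(index_json_path) as f:
        doc = json.load(f)["doc"]
    body_html = "\n".join(
        section["value"]["content"]
        for section in doc["body"]
        if section["type"] == "prose" and section["value"].get("content")
    )
    return {
        "title": doc["title"],
        "summary": doc.get("summary", ""),
        "body": strip_html(body_html),
        "popularity": doc.get("popularity", 0.0),
    }


def strip_html(html):
    # Remove blocks that shouldn't be searchable, then take the text.
    tree = HTMLParser(html)
    for node in tree.css("div.hidden, div.notecard.warning"):
        node.decompose()
    return tree.body.text(separator=" ")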

One thing worth noting is that this whole thing runs roughly every 24 hours, and each run rebuilds everything. But what if, between two runs, a certain page has been removed (or moved)? How do you remove what was previously added to Elasticsearch? The solution is simple: the index is deleted and re-created from scratch every day. The whole bulk-publish takes a while, so right after the index has been deleted, the searches won't be that great. Someone could be unlucky enough to search MDN a couple of seconds after the index was deleted, while it's still building up again.
It's an unfortunate reality, but it's a risk worth taking for the sake of simplicity. Also, most people are searching for things in English, and specifically within the Web/ tree, so the bulk-publishing is done in such a way that the most popular content is published first and the rest afterwards. Here's what the build output logs:

Found 50,461 (potential) documents to index
Deleting any possible existing index and creating a new one called mdn_docs
Took 3m 35s to index 50,362 documents. Approximately 234.1 docs/second
Counts per priority prefixes:
    en-us/docs/web                 9,056
    *rest*                         41,306

So, yes, for 3m 35s there's stuff missing from the index and some unlucky few will get fewer search results than they should. But we can optimize this in the future.
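
For illustration, a delete-and-recreate bulk-publish along these lines might look like this (a hedged sketch, not the real publisher; the index name comes from the log above, and I'm assuming each document dict carries a slug):

from elasticsearch import Elasticsearch
from elasticsearch.helpers import streaming_bulk


def bulk_publish(url, docs):
    # Recreate the index from scratch, then stream documents in,
    # the priority prefix (en-us/docs/web) first.
    es = Elasticsearch(url)
    index = "mdn_docs"
    es.indices.delete(index=index, ignore=[404])
    es.indices.create(index=index)
    docs.sort(key=lambda d: not d["slug"].startswith("en-us/docs/web"))
    actions = ({"_index": index, "_id": d["slug"], **d} for d in docs)
    for ok, info in streaming_bulk(es, actions):
        if not ok:
            print("Failed to index:", info)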

Searching

The way you connect to Elasticsearch is simply by a URL. It looks something like this:

https://USER:PASSWD@HASH.us-west-2.aws.found.io:9243

It's an Elasticsearch cluster managed by Elastic, running inside AWS. Our job is to make sure that we put the exact same URL in our GitHub Action ("the writer") as we put into our Django server ("the reader").
In fact, we have 3 Elastic clusters: Prod, Stage, Dev.
And we have 2 Django servers: Prod, Stage.
So we just need to carefully make sure the secrets are set correctly to match the right environment.

Now, in the Django server, we just need to convert a request like GET /api/v1/search?q=foo&locale=fr (for example) to a query to send to Elasticsearch. We have a simple Django view function that validates the query string parameters, does some rate-limiting, creates a query (using elasticsearch-dsl) and packages the Elasticsearch results back to JSON.
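
Stripped down, the view plumbing is essentially this (a sketch, assuming an elasticsearch-dsl default connection has been set up elsewhere; the real query is more interesting, as described next):

from django.http import HttpResponseBadRequest, JsonResponse
from elasticsearch_dsl import Q, Search


def api_search(request):
    # Fuller validation and rate-limiting omitted for brevity.
    q = request.GET.get("q", "").strip()
    if not q:
        return HttpResponseBadRequest("missing 'q'")
    search = Search(index="mdn_docs").query(
        Q("match", title=q) | Q("match", body=q)
    )[:10]
    response = search.execute()
    documents = [
        {"title": hit.title, "score": hit.meta.score} for hit in response
    ]
    return JsonResponse({"documents": documents})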

How we make that query is important. In here lies the most important feature of the search; how it sorts results.

In one simple explanation, the sort order is a combination of popularity and "matchness". The assumption is that most people want the popular content. I.e. they search for foreach and mean to go to /en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach, not /en-US/docs/Web/API/NodeList/forEach, both of which contain forEach in the title. The "popularity" is based on Google Analytics pageviews, which we download periodically and normalize into a floating-point number between 0 and 1. At the time of writing, the scoring function does something like this:

rank = doc.popularity * 10 + search.score

This seems to produce pretty reasonable results.

But there's more to the "matchness" too. Elasticsearch has its own API for defining boosting, and the way we apply it is:

  • match phrase in the title: Boost = 10.0
  • match phrase in the body: Boost = 5.0
  • match in title: Boost = 2.0
  • match in body: Boost = 1.0
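
In elasticsearch-dsl terms, that boosting scheme looks roughly like this (the field names are assumptions):

from elasticsearch_dsl import Q


def build_query(q):
    # Phrase matches score higher than plain matches, and title
    # higher than body, per the list above.
    return (
        Q("match_phrase", title={"query": q, "boost": 10.0})
        | Q("match_phrase", body={"query": q, "boost": 5.0})
        | Q("match", title={"query": q, "boost": 2.0})
        | Q("match", body={"query": q, "boost": 1.0})
    )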

This is then applied on top of whatever else Elasticsearch does, such as "Term Frequency" and "Inverse Document Frequency" (tf and idf). This article is a helpful introduction.

We're most likely not done with this. There's probably a lot more we can do to tune this myriad of knobs and sliders to get the best possible ranking of documents that match.

Web UI

The last piece of the puzzle is how we display all of this to the user. The way it works is that developer.mozilla.org/$locale/search returns a static page that is blank. As soon as the page has loaded, it lazy-loads JavaScript that can actually issue the XHR request to get and display search results. The code looks something like this:


function SearchResults() {
  const [searchParams] = useSearchParams();
  const sp = createSearchParams(searchParams);
  // add defaults and stuff here
  const fetchURL = `/api/v1/search?${sp.toString()}`;

  const { data, error } = useSWR(fetchURL, async (url) => {
    const response = await fetch(url);
    // various checks on the response.statusCode here
    return await response.json();
  });

  // render 'data' or 'error' accordingly here
}

A lot of interesting details are omitted from this code snippet. You have to check it out for yourself to get a more up-to-date insight into how it actually works. But basically, the window.location (and pushState) query string drives the fetch() call and then all the component has to do is display the search results with some highlighting.

The /api/v1/search endpoint also runs a suggestion query as part of the main search query. This extracts interesting alternative search queries. These are filtered and scored, and we issue "sub-queries" just to get a count for each. Now we can do one of those "Did you mean...". For example: search for intersections.
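
For a flavor of how a suggester can ride along on the main query in elasticsearch-dsl (the suggester type and field here are assumptions, not MDN's actual configuration):

from elasticsearch_dsl import Search

query = "intersections"
search = Search(index="mdn_docs").query("match", body=query)
# Attach a term suggester under the name "alternatives".
search = search.suggest("alternatives", query, term={"field": "title"})
response = search.execute()
for option in response.suggest.alternatives[0].options:
    print(option.text, option.score)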

In conclusion

There are a lot of interesting, important, and careful details that are glossed over in this blog post. It's a constantly evolving system, and we're always trying to improve and perfect it so that it fits what users expect.

A lot of people reach MDN via a Google search (e.g. mdn array foreach), but despite that, nearly 5% of all traffic on MDN is the site-search functionality. The /$locale/search?... endpoint is the most frequently viewed page of all of MDN. So having a good, reliable search engine is important. Owning and controlling the whole pipeline allows us to do specific things that are unique to MDN that other websites don't need. For example, we index a lot of raw HTML (e.g. <video>) and we have code snippets that need to be searchable.

Hopefully, the MDN site-search will elevate from being known as very limited to something that can genuinely help people get to the exact page better than Google can. Yes, it's worth aiming high!

Fastest way to turn HTML into text in Python

January 8, 2021
4 comments Python

tl;dr; selectolax is best for stripping HTML down to plain text.

The problem is that I have 10,000+ HTML snippets that I need to index into Elasticsearch as plain text. (Before you ask: yes, I know Elasticsearch has an html_strip character filter, but it's not what I want/need to use in this context.)
Turns out, stripping the HTML down to plain text is actually quite expensive at that scale. So what's the most performant way?

PyQuery


from pyquery import PyQuery as pq

text = pq(html).text()

selectolax


from selectolax.parser import HTMLParser

text = HTMLParser(html).text()

regular expression


import re

regex = re.compile(r'<.*?>')
text = regex.sub('', html)

Results

I wrote a script that iterated through 10,000 files that contain HTML snippets. Note! The snippets aren't complete <html> documents (with a <head> and <body> etc.), just blobs of HTML. The average size is 10,314 bytes (5,138 bytes median).
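
The harness itself isn't shown here, but it was roughly along these lines (a reconstruction, not the original script):

import statistics
import time
from pathlib import Path


def benchmark(func, folder):
    # Time one call per file and report like the results below.
    times = []
    for path in Path(folder).glob("*.html"):
        html = path.read_text()
        t0 = time.perf_counter()
        func(html)
        times.append((time.perf_counter() - t0) * 1000)  # milliseconds
    print(f"  SUM:    {sum(times) / 1000:.2f} seconds")
    print(f"  MEAN:   {statistics.mean(times):.4f} ms")
    print(f"  MEDIAN: {statistics.median(times):.4f} ms")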

pyquery
  SUM:    18.61 seconds
  MEAN:   1.8633 ms
  MEDIAN: 1.0554 ms
selectolax
  SUM:    3.08 seconds
  MEAN:   0.3149 ms
  MEDIAN: 0.1621 ms
regex
  SUM:    1.64 seconds
  MEAN:   0.1613 ms
  MEDIAN: 0.0881 ms

I've run it a bunch of times. The results are pretty stable.

Point is: selectolax is ~6 times faster than PyQuery

Regex? Really?

No, I don't think I want to use that. It makes me nervous, even without digging up examples of where it goes wrong. It might work just fine for the most basic blobs of HTML. But there's a correctness issue too: if the HTML is <p>Foo &amp; Bar</p>, I expect the plain-text transformation to be Foo & Bar, not Foo &amp; Bar.

More pressing, both PyQuery and selectolax support something very specific but important to my use case. I need to remove certain tags (and their content) before I proceed. For example:


<h4 class="warning">This should get stripped.</h4>
<p>Please keep.</p>
<div style="display: none">This should also get stripped.</div>

That can never be done with a regex.

Version 2.0

So my requirements will probably change, but basically, I want to delete certain tags. E.g. <div class="warning">, <div class="hidden">, and <div style="display: none">. So let's implement that:

PyQuery


import re

from pyquery import PyQuery as pq

_display_none_regex = re.compile(r'display:\s*none')

doc = pq(html)
doc.remove('div.warning, div.hidden')
for div in doc('div[style]').items():
    style_value = div.attr('style')
    if _display_none_regex.search(style_value):
        div.remove()
text = doc.text()

selectolax


import re

from selectolax.parser import HTMLParser

_display_none_regex = re.compile(r'display:\s*none')

tree = HTMLParser(html)
for tag in tree.css('div.warning, div.hidden'):
    tag.decompose()
for tag in tree.css('div[style]'):
    style_value = tag.attributes['style']
    if style_value and _display_none_regex.search(style_value):
        tag.decompose()
text = tree.body.text()

This actually works. When I now run the same benchmark on 10,000 of these, these are the new results:

pyquery
  SUM:    21.70 seconds
  MEAN:   2.1701 ms
  MEDIAN: 1.3989 ms
selectolax
  SUM:    3.59 seconds
  MEAN:   0.3589 ms
  MEDIAN: 0.2184 ms
regex
  Skip

Again, selectolax beats PyQuery by a factor of ~6.

Conclusion

Regular expressions are fast but weak in power. Makes sense.

selectolax is very impressive.
I got the inspiration from this blog post, which sets out to do something very similar to what I'm doing.

I hope this helps someone. Thank you to Artem Golubin for selectolax, and to @lexborisov for Modest, which selectolax is built upon.

Gcm - git checkout master or main

December 21, 2020
1 comment Python

I love git on the command line and I actually never use a GUI to navigate git branches. But sometimes, I need scripting to make abstractions that make life more convenient. What often happens is that I need to go back to the "main" branch. I write main in quotation marks because it's not always called main. Sometimes it's called master. And it's tedious to have to remember which one is the default.

So I wrote a script called Gcm:


#!/usr/bin/env python3
import subprocess


def run(*args):
    default_branch = get_default_branch()
    current_branch = get_current_branch()
    if default_branch != current_branch:
        checkout_branch(default_branch)
    else:
        print(f"Already on {default_branch}")
        return 1


def checkout_branch(branch_name):
    subprocess.run(f"git checkout {branch_name}".split())


def get_default_branch():
    res = subprocess.run(
        "git config --worktree --get git-checkout-default.default-branch".split(),
        check=False,
        capture_output=True,
    )
    if res.returncode == 0:
        return res.stdout.decode("utf-8").strip()

    origin_name = "origin"
    res = subprocess.run(
        f"git remote show {origin_name}".split(),
        check=True,
        capture_output=True,
    )
    for line in res.stdout.decode("utf-8").splitlines():
        if line.strip().startswith("HEAD branch:"):
            default_branch = line.replace("HEAD branch:", "").strip()
            subprocess.run(
                f"git config --worktree git-checkout-default.default-branch "
                f"{default_branch}".split(),
                check=True,
            )
            return default_branch

    raise NotImplementedError(f"No remote called {origin_name!r}")


def get_current_branch():
    res = subprocess.run("git branch --show-current".split(), capture_output=True)
    for line in res.stdout.decode("utf-8").splitlines():
        return line.strip()

    res = subprocess.run("git show -s --pretty=%d HEAD".split(), capture_output=True)
    for line in res.stdout.decode("utf-8").splitlines():
        return line.strip()

    raise NotImplementedError("Don't know what to do!")


if __name__ == "__main__":
    import sys

    sys.exit(run(*sys.argv[1:]))

It ain't pretty or a spiffy one-liner, but it works. It assumes that the repo has a remote called origin, and it doesn't matter whether that's the upstream or your fork. Put this script into a file called ~/bin/Gcm and run chmod +x ~/bin/Gcm.

Now, whenever I want to go back to the main branch I type Gcm and it takes me there.

[Screenshot: Gcm in action]

It might seem silly, and it might not be for you, but I love it and use it many times per day. Perhaps by sharing this tip, it'll inspire someone else to set up something similar for themselves.

Why it's spelled with an uppercase G

I have a pattern (or rule?) that all scripts that I write myself are always capitalized like that. It avoids clashes with stuff I install with brew or other bash/zsh aliases.

For example:

ls -l ~/bin/RemoteVSCodePeterbecom.sh
ls -l ~/bin/Cleanupfiles
ls -l ~/bin/RandomString.py

UPDATE: July 2021

Jason @silverjam Mobarak on Twitter suggested an optimization. I've incorporated his suggestion into the original blog post above. See this Twitter thread.
Essentially, it now uses git config --worktree to set and get the default branch name, so the next time it's needed it doesn't depend on the network.
But watch out: if you, like me, change the default branch of some existing repos from master to main, then you'll need to run: git config --worktree --unset git-checkout-default.default-branch.

Generating random avatar images in Django/Python

October 28, 2020
1 comment Web development, Django, Python

tl;dr; <img src="/avatar.random.png" alt="Random avataaar"> generates this image:

[Random avataaar image]
(try reloading to get a random new one. funny aren't they?)

When you use Gravatar you can convert people's email addresses to their mugshot.
It works like this:


<img src="https://www.gravatar.com/avatar/$(md5(user.email))">

Unfortunately, most people don't have their mugshot on Gravatar.com. But you still want to display an avatar that is distinct per user. Your best option is to generate one, using the user's name or email as a seed (so it's random-looking but always deterministic for the same user). You can also supply a fallback image to Gravatar that they use if the email doesn't match any email they have. That's where this blog post comes in.
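
For reference, wiring up that fallback might look like this (the d query string parameter is Gravatar's documented fallback mechanism; the helper itself is mine):

import hashlib
from urllib.parse import urlencode


def gravatar_url(email, fallback_url):
    # Gravatar hashes the trimmed, lower-cased email; on a miss it
    # redirects to the URL given in "d".
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return f"https://www.gravatar.com/avatar/{digest}?{urlencode({'d': fallback_url})}"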

I needed that so I shopped around and found avataaars generator which is available as a React component. But I need it to be server-side and in Python. And thankfully there's a great port called: py-avataaars.

It depends on CairoSVG to convert an SVG to a PNG but it's easy to install. Anyway, here's my hack to generate random "avataaars" from Django:


import io
import random

import py_avataaars
from django import http
from django.utils.cache import add_never_cache_headers, patch_cache_control


def avatar_image(request, seed=None):
    if not seed:
        seed = request.GET.get("seed") or "random"

    if seed != "random":
        random.seed(seed)

    png_data = io.BytesIO()  # avoid shadowing the built-in "bytes"

    def r(enum_):
        return random.choice(list(enum_))

    avatar = py_avataaars.PyAvataaar(
        style=py_avataaars.AvatarStyle.CIRCLE,
        # style=py_avataaars.AvatarStyle.TRANSPARENT,
        skin_color=r(py_avataaars.SkinColor),
        hair_color=r(py_avataaars.HairColor),
        facial_hair_type=r(py_avataaars.FacialHairType),
        facial_hair_color=r(py_avataaars.FacialHairColor),
        top_type=r(py_avataaars.TopType),
        hat_color=r(py_avataaars.ClotheColor),
        mouth_type=r(py_avataaars.MouthType),
        eye_type=r(py_avataaars.EyesType),
        eyebrow_type=r(py_avataaars.EyebrowType),
        nose_type=r(py_avataaars.NoseType),
        accessories_type=r(py_avataaars.AccessoriesType),
        clothe_type=r(py_avataaars.ClotheType),
        clothe_color=r(py_avataaars.ClotheColor),
        clothe_graphic_type=r(py_avataaars.ClotheGraphicType),
    )
    avatar.render_png_file(png_data)

    response = http.HttpResponse(png_data.getvalue())
    response["content-type"] = "image/png"
    if seed == "random":
        add_never_cache_headers(response)
    else:
        patch_cache_control(response, max_age=60, public=True)

    return response

It's not perfect but it works. The URL to this endpoint is /avatar.<seed>.png and if you make the seed parameter random the response is always different.
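
The URL wiring for that might look like this (hypothetical; the real pattern and module layout may differ):

from django.urls import re_path

from .views import avatar_image  # hypothetical import path

urlpatterns = [
    re_path(r"^avatar\.(?P<seed>[\w-]+)\.png$", avatar_image),
]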

To make the image not random, you replace the <seed> with any string. For example (use your imagination):


{% for comment in comments %}
  <img src="/avatar.{{ comment.user.id }}.png" alt="{{ comment.user.name }}">
  <blockquote>{{ comment.text }}</blockquote>
  <i>{{ comment.date }}</i>
{% endfor %}

hashin 0.15.0 now copes nicely with under_scores

June 15, 2020
0 comments Python

tl;dr hashin 0.15.0 makes package comparison agnostic to underscores and hyphens

See issue #116 for a fuller story. Basically, now it doesn't matter if you write...

hashin python_memcached

...or...

hashin python-memcached

And the same can be said about the contents of your requirements.txt file. Suppose it already had something like this:

python_memcached==1.59 \
    --hash=sha256:4dac64916871bd35502 \
    --hash=sha256:a2e28637be13ee0bf1a8

and you type hashin python-memcached, it will do the version comparison on these independently of the underscore or hyphen.
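
hashin's exact comparison code isn't shown here, but the effect matches the standard PEP 503 name normalization:

import re


def normalize(name):
    # PEP 503: runs of "-", "_", and "." compare equal, case-insensitively.
    return re.sub(r"[-_.]+", "-", name).lower()

assert normalize("python_memcached") == normalize("python-memcached")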

Thanks to @caphrim007, who implemented this for the benefit of Renovate.

./bin/huey-isnt-running.sh - A bash script to prevent lurking ghosts

June 10, 2020
0 comments Python, Linux, Bash

tl;dr; Here's a useful bash script to avoid starting something when it's already running as a ghost process.

Huey is a great little Python library for doing background tasks. It's like Celery but much lighter, faster, and easier to understand.

What cost me almost an hour of hair-tearing debugging today was that I didn't realize that a huey daemon process had gotten stuck in the background with code that wasn't updating as I made changes to the tasks.py file in my project. I just couldn't understand what was going on.

The way I start my project is with honcho which is a Python Foreman clone. The Procfile looks something like this:


elasticsearch: cd /Users/peterbe/dev/PETERBECOM/elasticsearch-7.7.0 && ./bin/elasticsearch -q
web: ./bin/run.sh web
minimalcss: cd minimalcss && PORT=5000 yarn run start
huey: ./manage.py run_huey --flush-locks --huey-verbose
adminui: cd adminui && yarn start
pulse: cd pulse && yarn run dev

And you start that with simply typing:


honcho start

When you Ctrl-C, it kills all those processes but somehow somewhere it doesn't always kill everything. Restarting the computer isn't a fun alternative.

So, to prevent my sanity from draining I wrote this script:


#!/usr/bin/env bash
set -eo pipefail

# This is used to make sure that before you start huey,
# there isn't already one running in the background.
# It has happened that huey gets stuck lingering as a
# ghost and it's hard to notice it sitting there
# lurking and being weird.

bad() {
    echo "Huey is already running!"
    exit 1
}

good() {
    echo "Huey is NOT already running"
    exit 0
}

ps aux | rg huey | rg -v 'rg huey' | rg -v 'huey-isnt-running.sh' && bad || good

(If you're wondering what rg is; it's short for ripgrep)

And I change my Procfile accordingly:


-huey: ./manage.py run_huey --flush-locks --huey-verbose
+huey: ./bin/huey-isnt-running.sh && ./manage.py run_huey --flush-locks --huey-verbose

There really isn't much rocket science or brain surgery about this blog post, but I hope it shows someone who's been in similar trenches that a simple bash script can make all the difference.

Check your email addresses in Python, as a whole

May 22, 2020
0 comments Python, MDN

So recently, in MDN, we changed the setting WELCOME_EMAIL_FROM. Seems harmless, right? Wrong: it failed horribly at runtime and we didn't notice until it was in production. Here's the traceback:

SMTPSenderRefused: (552, b"5.1.7 The sender's address was syntactically invalid.\n5.1.7 see : http://support.socketlabs.com/kb/84 for more information.", '=?utf-8?q?Janet?=')
(8 additional frame(s) were not displayed)
...
  File "newrelic/api/function_trace.py", line 151, in literal_wrapper
    return wrapped(*args, **kwargs)
  File "django/core/mail/message.py", line 291, in send
    return self.get_connection(fail_silently).send_messages([self])
  File "django/core/mail/backends/smtp.py", line 110, in send_messages
    sent = self._send(message)
  File "django/core/mail/backends/smtp.py", line 126, in _send
    self.connection.sendmail(from_email, recipients, message.as_bytes(linesep='\r\n'))
  File "python3.8/smtplib.py", line 871, in sendmail
    raise SMTPSenderRefused(code, resp, from_addr)

SMTPSenderRefused: (552, b"5.1.7 The sender's address was syntactically invalid.\n5.1.7 see : http://support.socketlabs.com/kb/84 for more information.", '=?utf-8?q?Janet?=')

Yikes!

So, to prevent this from ever happening again, we're putting this check in:


from email.utils import parseaddr

WELCOME_EMAIL_FROM = config("WELCOME_EMAIL_FROM", ...)

# If this fails, SMTP will probably also fail.
assert parseaddr(WELCOME_EMAIL_FROM)[1].count('@') == 1, parseaddr(WELCOME_EMAIL_FROM)

You could go to town even more on this. Perhaps use the email validator within Django, but for now I'd call that overkill. This is just a decent check before anything gets a chance to go wrong.
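
To see why the assert catches the bad value, compare what parseaddr returns for a bare display name versus a proper address:

from email.utils import parseaddr

parseaddr("Janet")                      # ('', 'Janet') - no "@" at all
parseaddr("Janet <janet@example.com>")  # ('Janet', 'janet@example.com')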

Build pyenv Python versions on macOS Catalina 10.15

February 19, 2020
9 comments Python, macOS

UPDATE Mar 7, 2022: For OSX 12.2 Monterey

Here's what I needed to do in 2022 to get this to work:

SDKROOT=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX12.1.sdk \
  MACOSX_DEPLOYMENT_TARGET=12.2 \
  PYTHON_CONFIGURE_OPTS="--enable-framework" \
  pyenv install 3.10.2

BELOW IS ORIGINAL BLOG POST

I'm still working on getting pyenv into my bloodstream. It seems like totally the right tool for having different versions of Python available on macOS that don't suddenly break when you run brew upgrade periodically. But everything I tried failed with an error similar to this:

python-build: use openssl from homebrew
python-build: use readline from homebrew
Installing Python-3.7.0...
python-build: use readline from homebrew

BUILD FAILED (OS X 10.15.x using python-build 20XXXXXX)

Inspect or clean up the working tree at /var/folders/mw/0ddksqyn4x18lbwftnc5dg0w0000gn/T/python-build.20190528163135.60751
Results logged to /var/folders/mw/0ddksqyn4x18lbwftnc5dg0w0000gn/T/python-build.20190528163135.60751.log

Last 10 log lines:
./Modules/posixmodule.c:5924:9: warning: this function declaration is not a prototype [-Wstrict-prototypes]
    if (openpty(&master_fd, &slave_fd, NULL, NULL, NULL) != 0)
        ^
./Modules/posixmodule.c:6018:11: error: implicit declaration of function 'forkpty' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
    pid = forkpty(&master_fd, NULL, NULL, NULL);
          ^
./Modules/posixmodule.c:6018:11: warning: this function declaration is not a prototype [-Wstrict-prototypes]
2 warnings and 2 errors generated.
make: *** [Modules/posixmodule.o] Error 1
make: *** Waiting for unfinished jobs....

I read through the Troubleshooting FAQ and the "Common build problems" documentation. Xcode was up to date and I had all the related brew packages upgraded. Nothing seemed to work.

Until I saw this comment on an open pyenv issue: "Unable to install any Python version on MacOS"

All I had to do was replace the 10.14 with 10.15 and now it finally worked here on Catalina 10.15. So, the magical line was this:

SDKROOT=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk \
MACOSX_DEPLOYMENT_TARGET=10.15 \
PYTHON_CONFIGURE_OPTS="--enable-framework" \
pyenv install -v 3.7.6

Hopefully, by blogging about it you'll find this from Googling and I'll remember the next time I need it because it did eat 2 hours of precious evening coding time.

redirect-chain - Getting a comfortable insight into URL redirect history

February 14, 2020
0 comments Python

You can accomplish the same with curl -L but I've had this as a little personal hack script in my ~/bin folder on my computer. Thought I'd make it a public tool. Also, from here, a lot more can be done to this script if you wanna help out with ideas.

▶ redirect-chain http://developer.mozilla.org/en-US/docs/xpcshell
0  http://developer.mozilla.org/en-US/docs/xpcshell 301
1 > https://developer.mozilla.org/en-US/docs/xpcshell 301
2 >> https://developer.mozilla.org/docs/en/XPConnect/xpcshell 302
3 >>> https://developer.mozilla.org/en-US/docs/en/XPConnect/xpcshell 301
4 >>>> https://developer.mozilla.org/en-US/docs/XPConnect/xpcshell 301
5 >>>>> https://developer.mozilla.org/en-US/docs/Mozilla/XPConnect/xpcshell 301
6 >>>>>> https://developer.mozilla.org/en-US/docs/Mozilla/Tech/XPCOM/Language_bindings/XPConnect/xpcshell 200

It basically gives you a pretty summary of redirects from a starting URL.
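
Under the hood it's little more than following redirects one by one. A rough equivalent using requests (a sketch, not the actual implementation):

import requests


def redirect_chain(url):
    # requests follows redirects by default and keeps every hop
    # in response.history.
    response = requests.get(url)
    for i, hop in enumerate(response.history):
        print(i, ">" * i, hop.url, hop.status_code)
    n = len(response.history)
    print(n, ">" * n, response.url, response.status_code)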

To install it on your system run:

pipx install redirect-chain

Happy Friday!

How to resolve a git conflict in poetry.lock

February 7, 2020
8 comments Python

We use poetry in MDN Kuma. That means there's a pyproject.toml and a poetry.lock file. To add or remove dependencies, you don't touch either file in an editor. For example, to add a package:

poetry add --dev black

It changes pyproject.toml and poetry.lock for you. (Same with yarn add somelib which edits package.json and yarn.lock).

Suppose that you make a pull request to add a new dependency, but someone sneaks a new pull request in before you and gets theirs landed in master first. Well, that's how you end up in this place:

[Screenshot: Conflicting files]

So how do you resolve that?

You go back to your branch and run something like:

git checkout master 
git pull origin master
git checkout my-branch
git merge master

Now you get this in git status:

Unmerged paths:
  (use "git add <file>..." to mark resolution)
    both modified:   poetry.lock

And the contents of poetry.lock looks something like this:

[Screenshot: the conflict in poetry.lock]

I wish there was a way poetry itself could just fix this.

What you need to do is to run:

# Get poetry.lock to look like it does in master
git checkout --theirs poetry.lock
# Rewrite the lock file
poetry lock --no-update

Now, your poetry.lock file should correctly reflect the pyproject.toml that has been merged from master.

To finish up, resolve the conflict:

git add poetry.lock
git commit -a -m "conflict resolved"

# and most likely needed
poetry install

content-hash

Inside the poetry.lock file there's the lock file's hash. It looks like this:

[metadata]
content-hash = "875b6a3628489658b323851ce6fe8dafacd5f69e5150d8bb92b8c53da954c1be"

So, as can be seen in my screenshot, when git conflicted on this it looks like this:


 [metadata]
+<<<<<<< HEAD
+content-hash = "6658b1379d6153dd603bbc27d04668e5e93068212c50e76bd068e9f10c0bec59"
+=======
 content-hash = "5c00dce18ddffd5d6f797dfa14e4d56bf32bbc3769d7b761a2b1b3ff14bce287"
+>>>>>>> master

Basically, the content-hash = "5c00dce1... is what you'd find in master and content-hash = "6658b137... is what you would see in your branch before the conflict.

When you run that poetry lock, you can validate that the new locking worked because it should produce a new hash: one that is neither 5c00dce1... nor 6658b137....

Notes

I'm still new to poetry and I'm learning. This was just some loud note-to-self so I can remember for next time.

I don't yet know what else can be automated if there's a conflict in pyproject.toml too. And what do you do if there are serious underlying conflicts in the Python packages, like if they added a package that requires somelib<=0.99 and you added something that requires somelib>=1.11?

Also, perhaps there are ongoing efforts within the poetry project to help out with this.

UPDATE Feb 12, 2020

My colleague informed me that this change was actually NOT what I wanted. poetry lock actually updates some dependencies as it makes a completely new lock file. I didn't immediately notice that in my case because the lock file is large. See this open issue which is about the ability to update the lock file without upgrading any other dependencies.

UPDATE June 24, 2021

To re-lock the file, use poetry lock --no-update after you've run git checkout --theirs poetry.lock.