When you look up a related Django ORM object from another one, Django automatically takes care of that second lookup for you.
To illustrate, imagine a model mapping that looks like this:
class Artist(models.Model):
    name = models.CharField(max_length=200)
    ...

class Song(models.Model):
    artist = models.ForeignKey(Artist, on_delete=models.CASCADE)
    ...
And with that in mind, suppose you do this:
>>> Song.objects.get(id=1234567).artist.name
'Frank Zappa'
Internally, Django looks up the Song object first, then automatically does a second lookup for the Artist. In PostgreSQL it looks something like this:
SELECT "main_song"."id", "main_song"."artist_id", ...
FROM "main_song"
WHERE "main_song"."id" = 1234567

SELECT "main_artist"."id", "main_artist"."name", ...
FROM "main_artist"
WHERE "main_artist"."id" = 111
Pretty clear, right?
Now if you know you're going to need to look up that related field you can ask Django to make a join before the lookup even happens. It looks like this:
>>> Song.objects.select_related('artist').get(id=1234567).artist.name
'Frank Zappa'
And the SQL needed looks like this:
SELECT "main_song"."id", ..., "main_artist"."name", ...
FROM "main_song"
INNER JOIN "main_artist" ON ("main_song"."artist_id" = "main_artist"."id")
WHERE "main_song"."id" = 1234567
But is the join actually faster? Well, there's only one way to find out, and that is to measure with some realistic data.
Here's the benchmarking code:
import random
import time
from collections import defaultdict


def f1(id):
    try:
        return Song.objects.get(id=id).artist.name
    except Song.DoesNotExist:
        pass

def f2(id):
    try:
        return Song.objects.select_related('artist').get(id=id).artist.name
    except Song.DoesNotExist:
        pass

def _stats(r):
    # returns the median, average and standard deviation of a sequence
    tot = sum(r)
    avg = tot / len(r)
    sdsq = sum([(i - avg) ** 2 for i in r])
    s = list(r)
    s.sort()
    return s[len(s) // 2], avg, (sdsq / (len(r) - 1 or 1)) ** .5

times = defaultdict(list)
functions = [f1, f2]
for id in range(100000, 103000):
    for f in functions:
        t0 = time.time()
        r = f(id)
        t1 = time.time()
        if r:
            times[f.__name__].append(t1 - t0)
    # Shuffle the order so that one doesn't benefit more
    # from deep internal optimizations/caching in Postgres.
    random.shuffle(functions)

for k, values in times.items():
    print(k, [round(x * 1000, 2) for x in _stats(values)])
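As a side note, the _stats helper can be sanity-checked against the standard library's statistics module (for odd-length input; for even lengths it takes the upper median). The check also shows why reporting both numbers is worthwhile: the median shrugs off an outlier such as a single slow cold-cache query, while the average gets dragged up by it. The timing values here are made up for illustration:

```python
import math
import statistics


def _stats(r):
    # median, average and sample standard deviation of a sequence,
    # same as in the benchmark above
    tot = sum(r)
    avg = tot / len(r)
    sdsq = sum((i - avg) ** 2 for i in r)
    s = sorted(r)
    return s[len(s) // 2], avg, (sdsq / (len(r) - 1 or 1)) ** .5


# One cold-cache outlier among otherwise similar timings (milliseconds)
timings = [1.1, 1.2, 1.3, 1.4, 90.0]
median, average, stdev = _stats(timings)

assert median == statistics.median(timings)             # 1.3, unaffected by the outlier
assert math.isclose(average, statistics.mean(timings))  # 19.0, dragged up by it
assert math.isclose(stdev, statistics.stdev(timings))
print(median, average, stdev)
```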
For the record, the benchmark runs each function over 3,000 consecutive song IDs (100000 to 103000), shuffling the order of the two functions each round.
Going by the median, using select_related is 30% faster; going by the average, it's 46% faster.
So, if you know you're going to need that lookup, put a .select_related(relation) before every .get(id=...) in your Django code.
Deep down in PostgreSQL, the inner join ultimately amounts to two ID-by-index lookups, which is exactly what the first method does too. The inner join approach is likely faster simply because it saves a round trip, so there's less connection overhead.
Lastly, YOUR MILEAGE WILL VARY. Every benchmark is flawed, but this one is quite realistic because it doesn't try to optimize for either approach.