This has been a problem of mine for a long time. Whenever someone enters a title in my CMS, the id of the document is derived from the title: spaces are replaced with '-', '&' is replaced with 'and', and so on. The final thing I wanted to do was to make sure the id is ASCII encoded when it's saved. My original attempt looked like this:

>>> title = u"Klüft skräms inför på fédéral électoral große"
>>> print title.encode('ascii','ignore')
Klft skrms infr p fdral lectoral groe

But as you can see, a lot of the characters are gone. I'd much rather have a word like "Klüft" converted to "Kluft", which is more human readable and still correct. My second attempt was to write a big table of unicode-to-ascii replacements.

It looked something like this:

u'\xe4': u'a',
u'\xc4': u'A',
etc...
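
To apply a table like that you'd run the string through unicode.translate, keyed by ordinal. A minimal sketch, assuming a table along those lines (the real one would need hundreds of entries):

>>> table = {ord(u'\xe4'): u'a', ord(u'\xc4'): u'A', ord(u'\xfc'): u'u'}  # ...and so on
>>> u"Klüft".translate(table)
u'Kluft'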

Long, awful and not pythonic. Too easy to miss something, but the result was good. Now for the final solution, which I'm very happy with. It uses a module called unicodedata, which is new to me. Here's how it works:

>>> import unicodedata
>>> unicodedata.normalize('NFKD', title).encode('ascii','ignore')
'Kluft skrams infor pa federal electoral groe'

It's not perfect (große should have become grosse) but it's only two lines of code.
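
For the record, here's a rough sketch of how the whole title-to-id step could be glued together (make_id is just an illustrative name, not the actual CMS code):

import unicodedata

def make_id(title):
    # '&' becomes 'and', spaces become '-', then accents are stripped
    title = title.replace(u'&', u'and').replace(u' ', u'-')
    return unicodedata.normalize('NFKD', title).encode('ascii', 'ignore')

>>> make_id(u"Klüft & Co")
'Kluft-and-Co'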

infidel - 08 August 2006
It's been years since I took any German, but wouldn't 'Klüft' more accurately be saved as 'Klueft'? I recall that 'Küchen' and 'Kuchen' are two different words entirely (Kitchen and Cake, respectively).
Daverz - 09 August 2006
How about running replace on the string before normalizing:

title.replace(u'\xdf', 'ss')

and so on for any other special cases.
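
A minimal sketch of that idea combined with the normalize step:

>>> import unicodedata
>>> title = u"Klüft skräms inför på fédéral électoral große"
>>> title = title.replace(u'\xdf', u'ss')
>>> unicodedata.normalize('NFKD', title).encode('ascii', 'ignore')
'Kluft skrams infor pa federal electoral grosse'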
Andreas - 09 August 2006
infidel is right. It could create some form of ambiguity - at least with German words.
Michael Kallas - 10 August 2006
1) Klüft is not a German word, so don't worry too much.
2) Why do you want to generate ids from the title? This is potentially insecure as I might find a clever way for entering cross-site-scripting that way.
3) If the id should match the title, why does it have to be ascii?
Anonymous - 10 August 2006
> 2) Why do you want to generate ids from the title? This is potentially insecure as I might find a clever way for entering cross-site-scripting that way.
> 3) If the id should match the title, why does it have to be ascii?

http://www.peterbe.com/plog/unicode-to-ascii
Jonathan Holst - 10 August 2006
I can also say, from testing, that it doesn't work with Scandinavian letters (æ, ø and å) -- they get ignored completely.
Peter Bengtsson - 10 August 2006
"på" became "pa"
Jonathan Holst - 10 August 2006
Well okay, but "Rødgrød med fløde" became "Rdgrd med flde".
Ian Bicking - 10 August 2006
This might assist, or maybe what you do is sufficiently equivalent:
http://www.crummy.com/cgi-bin/msm/map.cgi/ASCII%2C+Dammit
Victor Stinner - 14 August 2006
Hi, I wrote a script based on your idea. It transforms numbers, str and unicode to ASCII: http://www.haypocalc.com/perso/prog/python/any2ascii.py

It takes care of some characters like "ßøł" (just fill the smart_unicode dictionary ;-)).

Haypo
Fredrik - 25 August 2006
Yet another approach:

http://effbot.python-hosting.com/file/stuff/sandbox/text/unaccent.py
Peter Bengtsson - 31 August 2006
Brilliant! Thank you.
Bryan Eastin - 19 January 2008
Hey, I just wanted to thank you for this page. It was really helpful. I wanted to retain all 8-bit characters, so my solution was more complicated (see http://beastin.livejournal.com/6819.html), but I made use of your example.
ben - 16 January 2010
This is fantastic stuff - I was having trouble parsing film results where, for example, Rashômon was represented as Rashomon. Testing for both the unicode and ascii normalized strings before iterating to the next result really sealed it. Thanks.
Robson - 27 June 2010
Excellent! It saved my day... really, thanks!
Anonymous - 18 January 2011
When writing about character encodings you want your page encoded properly.

The page claims to be encoded in UTF-8 but is actually encoded in ISO Latin-1.
Peter Bengtsson - 18 January 2011
I know. It's terrible. It's because it's changed over time.
Gilles Lenfant - 31 May 2012
There's now the "unidecode" package that does the whole job: http://pypi.python.org/pypi/Unidecode/

>>> from unidecode import unidecode
>>> utext = u"œuf dür"
>>> unidecode(utext)
u'oeuf dur'
>>> from unicodedata import normalize
>>> normalize('NFKD', utext).encode('ascii','ignore')
'uf dur'

It has better support for Latin Extended characters (French, German) that should transliterate to multiple ASCII characters.

