Normalizing unicode text to filenames, etc. in Python

What you want to do is also known as "slugify" a string. Here's a possible solution:

import re
from unicodedata import normalize

_punct_re = re.compile(r'[\t !"#$%&\'()*\-/<=>?@\[\\\]^_`{|},.:]+')

def slugify(text, delim=u'-'):
    """Generates a slightly worse ASCII-only slug."""
    result = []
    for word in _punct_re.split(text.lower()):
        word = normalize('NFKD', word).encode('ascii', 'ignore')
        if word:
            result.append(word)
    return unicode(delim.join(result))

Usage:

>>> slugify(u'My International Text: åäö')
u'my-international-text-aao'

You can also change the delimiter:

>>> slugify(u'My International Text: åäö', delim='_')
u'my_international_text_aao'

Source: Generating Slugs

For Python 3: pastebin.com/ft7Yb3KS (thanks @MrPoxipol).
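The pastebin contents aren't reproduced here, but a minimal Python 3 adaptation of the same approach might look like this (the only substantive change is that encode() returns bytes, which must be decoded back to str):

import re
from unicodedata import normalize

_punct_re = re.compile(r'[\t !"#$%&\'()*\-/<=>?@\[\\\]^_`{|},.:]+')

def slugify(text, delim='-'):
    """ASCII-only slug (Python 3 sketch)."""
    result = []
    for word in _punct_re.split(text.lower()):
        # NFKD splits accented characters into base + combining mark;
        # encoding to ASCII with 'ignore' then drops the marks
        word = normalize('NFKD', word).encode('ascii', 'ignore').decode('ascii')
        if word:
            result.append(word)
    return delim.join(result)

>>> slugify('My International Text: åäö')
'my-international-text-aao'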


The way to solve this problem is to decide which characters are allowed (different systems have different rules for valid identifiers).

Once you decide on which characters are allowed, write an allowed() predicate and a dict subclass for use with str.translate:

def makesafe(text, allowed, substitute=None):
    ''' Remove unallowed characters from text.
        If *substitute* is defined, then replace
        the character with the given substitute.
    '''
    class D(dict):
        def __getitem__(self, key):
            return key if allowed(chr(key)) else substitute
    return text.translate(D())

This function is very flexible. It lets you easily specify rules for deciding which text is kept and which text is either replaced or removed.

Here's a simple example using the rule, "only allow characters that are in the unicode category L":

import unicodedata

def allowed(character):
    return unicodedata.category(character).startswith('L')

print(makesafe('the*ides&of*march', allowed, '_'))
print(makesafe('the*ides&of*march', allowed))

That code produces safe output as follows:

the_ides_of_march
theidesofmarch
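The predicate is easy to extend. For instance, a common filename rule is to also keep digits and dots; as a sketch (the dot-and-digit rule here is just an illustrative assumption, not from the original answer), only the allowed() function changes:

import unicodedata

def makesafe(text, allowed, substitute=None):
    """Remove (or substitute) characters that fail the allowed() predicate."""
    class D(dict):
        def __getitem__(self, key):
            # str.translate passes code points; map bad ones to substitute
            return key if allowed(chr(key)) else substitute
    return text.translate(D())

def allowed(character):
    # keep letters (category L*), digits (category N*), and the dot
    return character == '.' or unicodedata.category(character)[0] in 'LN'

print(makesafe('report 2024*final.txt', allowed, '_'))
# → report_2024_final.txt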


The following will remove accents from whatever characters Unicode can decompose into combining pairs, discard any weird characters it can't, and nuke whitespace:

# encoding: utf-8
from unicodedata import normalize
import re

original = u'ľ š č ť ž ý á í é'
decomposed = normalize("NFKD", original)
no_accent = ''.join(c for c in decomposed if ord(c)<0x7f)
no_spaces = re.sub(r'\s', '_', no_accent)
print no_spaces
# output: l_s_c_t_z_y_a_i_e

It doesn't try to get rid of characters disallowed on filesystems, but you can steal DANGEROUS_CHARS_REGEX from the file you linked for that.
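The actual DANGEROUS_CHARS_REGEX from the linked file isn't shown here, but a hypothetical stand-in covering the characters most filesystems reject (the Windows-forbidden set plus control characters) might look like:

import re

# Hypothetical equivalent of the linked DANGEROUS_CHARS_REGEX:
# characters rejected by Windows filenames, plus C0 control characters
DANGEROUS_CHARS_REGEX = re.compile(r'[<>:"/\\|?*\x00-\x1f]')

def strip_dangerous(name, replacement='_'):
    return DANGEROUS_CHARS_REGEX.sub(replacement, name)

print(strip_dangerous('a/b:c*d.txt'))
# → a_b_c_d.txt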