
Fast way to filter illegal xml unicode chars in python?


Recently we (the Trac XmlRpcPlugin maintainers) were notified that the regular expression above strips surrogate pairs on Python narrow builds (see th:comment:13:ticket:11050). An alternative approach is to use the following regex (see th:changeset:13729).

import re
import sys

_illegal_unichrs = [(0x00, 0x08), (0x0B, 0x0C), (0x0E, 0x1F),
                    (0x7F, 0x84), (0x86, 0x9F),
                    (0xFDD0, 0xFDDF), (0xFFFE, 0xFFFF)]
if sys.maxunicode >= 0x10000:  # not narrow build
    _illegal_unichrs.extend([(0x1FFFE, 0x1FFFF), (0x2FFFE, 0x2FFFF),
                             (0x3FFFE, 0x3FFFF), (0x4FFFE, 0x4FFFF),
                             (0x5FFFE, 0x5FFFF), (0x6FFFE, 0x6FFFF),
                             (0x7FFFE, 0x7FFFF), (0x8FFFE, 0x8FFFF),
                             (0x9FFFE, 0x9FFFF), (0xAFFFE, 0xAFFFF),
                             (0xBFFFE, 0xBFFFF), (0xCFFFE, 0xCFFFF),
                             (0xDFFFE, 0xDFFFF), (0xEFFFE, 0xEFFFF),
                             (0xFFFFE, 0xFFFFF), (0x10FFFE, 0x10FFFF)])

_illegal_ranges = ["%s-%s" % (unichr(low), unichr(high))
                   for (low, high) in _illegal_unichrs]
_illegal_xml_chars_RE = re.compile(u'[%s]' % u''.join(_illegal_ranges))
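For reference, a minimal usage sketch (the variable name dirty is a made-up placeholder; it assumes the _illegal_xml_chars_RE object built above is in scope and that, on Python 2, dirty is a unicode object rather than a byte string):

clean = _illegal_xml_chars_RE.sub(u'', dirty)  # drop every matched character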

P.S. See this post on surrogates for an explanation of what they are for.
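For a quick illustration of the mechanism (a hedged sketch in Python 3; the emoji code point is an arbitrary choice, not taken from the linked post):

# U+1F600 lies outside the Basic Multilingual Plane, so UTF-16 has to encode it
# as a surrogate pair: high surrogate D83D followed by low surrogate DE00.
s = '\U0001F600'
print(s.encode('utf-16-be').hex())  # d83dde00
# On old narrow Python builds the string itself was stored as that pair,
# which is why a regex matching lone surrogates could tear such characters apart.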

Update: adjusted so as not to match (replace) 0x0D, which is a valid XML character.


Here's an updated version of Olemis Lang's answer for Python 3:

import re
import sys

illegal_unichrs = [(0x00, 0x08), (0x0B, 0x0C), (0x0E, 0x1F),
                   (0x7F, 0x84), (0x86, 0x9F),
                   (0xFDD0, 0xFDDF), (0xFFFE, 0xFFFF)]
if sys.maxunicode >= 0x10000:  # not narrow build
    illegal_unichrs.extend([(0x1FFFE, 0x1FFFF), (0x2FFFE, 0x2FFFF),
                            (0x3FFFE, 0x3FFFF), (0x4FFFE, 0x4FFFF),
                            (0x5FFFE, 0x5FFFF), (0x6FFFE, 0x6FFFF),
                            (0x7FFFE, 0x7FFFF), (0x8FFFE, 0x8FFFF),
                            (0x9FFFE, 0x9FFFF), (0xAFFFE, 0xAFFFF),
                            (0xBFFFE, 0xBFFFF), (0xCFFFE, 0xCFFFF),
                            (0xDFFFE, 0xDFFFF), (0xEFFFE, 0xEFFFF),
                            (0xFFFFE, 0xFFFFF), (0x10FFFE, 0x10FFFF)])

illegal_ranges = [fr'{chr(low)}-{chr(high)}' for (low, high) in illegal_unichrs]
xml_illegal_character_regex = '[' + ''.join(illegal_ranges) + ']'
illegal_xml_chars_re = re.compile(xml_illegal_character_regex)

# filtered_string = illegal_xml_chars_re.sub('', original_string)
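A quick check of the behaviour (a sketch with a made-up sample string, assuming illegal_xml_chars_re from the snippet above): control characters such as 0x00 and 0x0B are removed, while tab, newline and carriage return (0x09, 0x0A, 0x0D) are legal XML 1.0 characters and pass through untouched.

dirty = 'ok\x00\x0b\ufffe\ttext\r\n'
print(repr(illegal_xml_chars_re.sub('', dirty)))  # 'ok\ttext\r\n'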


You could also use unicode's translate method to delete selected codepoints. However, the mapping you have is pretty big (2128 codepoints) and that might make it much slower than just using a regex:

ranges = [(0, 8), (0xb, 0x1f), (0x7f, 0x84), (0x86, 0x9f),
          (0xd800, 0xdfff), (0xfdd0, 0xfddf), (0xfffe, 0xffff)]

# fromkeys creates the wanted (codepoint -> None) mapping
nukemap = dict.fromkeys(r for start, end in ranges
                        for r in range(start, end + 1))

clean = dirty.translate(nukemap)
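To check the speed claim on your own data, a rough benchmark sketch along these lines could be used (Python 3; the sample text and repetition count are made up, and the regex is simply built from the same ranges as the translate table):

import re
import timeit

ranges = [(0, 8), (0xb, 0x1f), (0x7f, 0x84), (0x86, 0x9f),
          (0xd800, 0xdfff), (0xfdd0, 0xfddf), (0xfffe, 0xffff)]

# Mapping of code point -> None deletes those code points in str.translate().
nukemap = dict.fromkeys(r for start, end in ranges for r in range(start, end + 1))

# Equivalent character-class regex built from the same ranges.
pattern = re.compile('[%s]' % ''.join('%s-%s' % (chr(lo), chr(hi))
                                      for lo, hi in ranges))

sample = 'some mostly clean text \x01\x0b\ufffe with a little junk ' * 1000

print(timeit.timeit(lambda: sample.translate(nukemap), number=100))
print(timeit.timeit(lambda: pattern.sub('', sample), number=100))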