How should I deal with an XMLSyntaxError in Python's lxml while parsing a large XML file?


The right thing to do here is to make sure that the creator of the XML file ensures that:

A.) the encoding of the file is declared
B.) the XML file is well formed (no invalid control characters, no characters that fall outside the declared encoding, all elements properly closed, etc.)
C.) a DTD or an XML schema is used if you want to ensure that certain attributes/elements exist, have certain values or correspond to a certain format (note: this takes a performance hit)
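For point C, here is a minimal sketch of validating against an XML Schema with lxml; the file names are just placeholders for your own schema and document:

from lxml import etree

# Hypothetical file names -- substitute your own schema and document.
schema = etree.XMLSchema(etree.parse('books.xsd'))
tree = etree.parse('books.xml')

if not schema.validate(tree):
    # error_log lists each validation problem with line numbers.
    print(schema.error_log)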

So, now to your question. lxml supports a whole bunch of arguments when you use it to parse XML. Check out the documentation. You will want to look at these two arguments:

--> recover --> try hard to parse through broken XML
--> huge_tree --> disable security restrictions and support very deep trees and very long text content (only affects libxml2 2.7+)
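Putting those two together, a minimal sketch (the file path is just a placeholder):

from lxml import etree

# recover=True makes libxml2 skip over recoverable problems instead of raising,
# huge_tree=True lifts the depth/size limits for very large documents.
parser = etree.XMLParser(recover=True, huge_tree=True)
tree = etree.parse('/path/to/big.xml', parser)

# Anything the parser had to skip or repair ends up in its error log.
for error in parser.error_log:
    print('%s: %s' % (error.line, error.message))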

They will help you to some degree, but certain invalid characters simply cannot be recovered from, so again, ensuring that the file is written correctly is your best bet for clean, working code.

Ah yes, and one more thing: 2 GB is huge. I assume you have a list of similar elements in this file (for example a list of books). Try to split the file up (for example with a regular expression or a simple scan at the OS level), then start multiple processes to parse the pieces. That way you will be able to use more of the cores on your box and the processing time will go down. Of course, you then have to deal with the complexity of merging the results back together. I cannot make this trade-off for you, but wanted to give it to you as "food for thought".
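A rough sketch of that idea, assuming the big file has already been split into smaller, individually well-formed chunk files (the file names and the 'book'/'title' tags are made up for illustration):

from multiprocessing import Pool
from lxml import etree

# Hypothetical chunk files produced by splitting the 2 GB input beforehand.
CHUNKS = ['chunk_000.xml', 'chunk_001.xml', 'chunk_002.xml']

def parse_chunk(path):
    parser = etree.XMLParser(recover=True, huge_tree=True)
    tree = etree.parse(path, parser)
    # Pull out whatever you need per record; 'book'/'title' are just examples.
    return [elem.findtext('title') for elem in tree.iter('book')]

if __name__ == '__main__':
    pool = Pool()
    results = pool.map(parse_chunk, CHUNKS)
    pool.close()
    pool.join()
    # Merge the per-chunk results back into one flat list.
    titles = [title for chunk in results for title in chunk]
    print(len(titles))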

Addition to post: If you have no control over the input file and it contains bad characters, I would try to replace/remove these bad characters by iterating over the string before parsing it as a file. Here is a code sample that removes the Unicode control characters you won't need:

import fileinput

# All Unicode characters from 0x0000 to 0x001F (the control characters) are bad
# and will be replaced by "" (the empty string); print adds the newline back.
for line in fileinput.input(xmlInputFileLocation, inplace=1):
    print u''.join([c for c in line if ord(c) >= 32])


I ran into this too, getting \x16 in data (the Unicode 'synchronous idle' or 'SYN' character, displayed in the XML as ^V), which leads to an error when parsing the XML: XMLSyntaxError: PCDATA invalid Char value 22. The 22 is there because ord('\x16') is 22.
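If you want to track down where such a character is hiding, a quick sketch (the file path is a placeholder; the error reports the offending character as a decimal code point, so chr() turns it back into the character to search for):

bad = chr(22)  # 22 was the value reported in the XMLSyntaxError
with open('/path/to/file.xml') as f:
    for lineno, line in enumerate(f, 1):
        if bad in line:
            print('%d: %s' % (lineno, line.strip()))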

The answer from @michael put me on the right track. But some control characters below 32 are fine, like the return or the tab, and a few higher characters are still bad. So:

# Get list of bad characters that would lead to XMLSyntaxError.
# Calculated manually like this:
from lxml import etree
from StringIO import StringIO

BAD = []
for i in range(0, 10000):
    try:
        x = etree.parse(StringIO('<p>%s</p>' % unichr(i)))
    except etree.XMLSyntaxError:
        BAD.append(i)

This leads to a list of 31 characters that can be hardcoded instead of doing the above calculation in code:

BAD = [
    0, 1, 2, 3, 4, 5, 6, 7, 8,
    11, 12,
    14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
    # Two are perfectly valid characters but go wrong for different reasons.
    # 38 is '&' which gives: xmlParseEntityRef: no name.
    # 60 is '<' which gives: StartTag: invalid element name (a different error).
]
BAD_BASESTRING_CHARS = [chr(b) for b in BAD]
BAD_UNICODE_CHARS = [unichr(b) for b in BAD]

Then use it like this:

def remove_bad_chars(value):
    # Remove bad control characters.
    if isinstance(value, unicode):
        for char in BAD_UNICODE_CHARS:
            value = value.replace(char, u'')
    elif isinstance(value, basestring):
        for char in BAD_BASESTRING_CHARS:
            value = value.replace(char, '')
    return value
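For example (just a sketch; dirty_value is made up):

from lxml import etree

# '\x16' is the SYN character from the error message above.
dirty_value = u'some text with a \x16 control character'
clean_value = remove_bad_chars(dirty_value)

etree.fromstring(u'<p>%s</p>' % clean_value)  # parses without XMLSyntaxError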

If value is 2 gigabytes you might need to do this in a more efficient way, but I am ignoring that here, although the question mentions it. In my case, I am the one creating the XML file, but I need to deal with these characters in the original data, so I will use this function before putting data into the XML.
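If you do need a faster variant, one hedged sketch (assuming Python 2, like the rest of this answer, and reusing the BAD list from above; remove_bad_chars_fast is just a name I made up) is to let translate() delete everything in one pass instead of calling replace() per character:

# Map each bad code point to None so translate() deletes it in a single pass.
BAD_UNICODE_MAP = dict.fromkeys(BAD)              # for unicode values
BAD_BYTES = ''.join(chr(b) for b in BAD)          # for byte strings

def remove_bad_chars_fast(value):
    if isinstance(value, unicode):
        return value.translate(BAD_UNICODE_MAP)
    return value.translate(None, BAD_BYTES)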


Found this thread from Google, and while @Michael's answer ultimately led me to a solution (to my problem at least), I wanted to provide a bit more of a copy/paste answer here for issues that can be solved this simply:

from lxml import etree

# Create a parser
parser = etree.XMLParser(recover=True)
parsed_file = etree.parse('/path/to/your/janky/xml/file.xml', parser=parser)

I was facing an issue where I had no control over the XML pre-processing and was being given a file with invalid characters. @Michael's answer goes on to elaborate on a way to approach invalid characters that recover=True can't address. Fortunately for me, this was enough to keep things moving along.