UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte



Python is trying to convert a byte array (a bytes object, which it assumes contains a UTF-8-encoded string) to a Unicode string (str). That conversion is, of course, a decoding step following the UTF-8 rules. While doing so, it encounters a byte sequence that is not allowed in UTF-8-encoded text (namely the 0xff at position 0).
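For illustration, here is the same failure in isolation. The byte string is just an example; a leading 0xff often comes from a UTF-16 byte-order mark or from plain binary data:

data = b'\xff\xfehello'
data.decode('utf-8')   # UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte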

Since you did not provide any code we could look at, we can only guess at the rest.

From the stack trace we can assume that the triggering action was reading from a file (contents = open(path).read()). I propose recoding it like this:

with open(path, 'rb') as f:
    contents = f.read()

The b in the mode specifier of the open() call states that the file shall be treated as binary, so contents will remain a bytes object. No decoding attempt will happen this way.
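If you do eventually need text, you can decode the bytes yourself once you know the real encoding. A minimal sketch, assuming the 0xff is the start of a UTF-16 little-endian byte-order mark (0xFF 0xFE), which is a common cause of this error:

with open(path, 'rb') as f:
    contents = f.read()          # stays bytes, nothing is decoded here

# Decode explicitly with the encoding you believe the file uses.
# UTF-16 is only an assumption for this example; adjust it to your data.
text = contents.decode('utf-16')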


Use this solution if you want to strip out (ignore) the offending characters and get the string back without them. Only use it if your goal is to drop those characters, not to convert them.

with open(path, encoding="utf8", errors='ignore') as f:
    contents = f.read()

With errors='ignore' you'll simply lose some characters. But if you don't care about them, for example because they are stray bytes caused by badly behaved clients connecting to your socket server, then this is an easy, direct solution. See the Python codecs documentation on error handlers for details.
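A quick illustration of the effect; the stray 0xff byte here is just an example:

# The invalid byte is silently dropped instead of raising an error.
b'\xffHello'.decode('utf-8', errors='ignore')   # -> 'Hello'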


Use the ISO-8859-1 (Latin-1) encoding to work around the issue. Every possible byte value maps to a character in ISO-8859-1, so decoding can never fail; be aware, though, that the resulting text will only be correct if the file really is Latin-1-encoded.
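A minimal sketch, where path is assumed to point at the problematic file:

with open(path, encoding='ISO-8859-1') as f:
    contents = f.read()   # never raises UnicodeDecodeError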