
Convert UTF-16 to UTF-8 and remove BOM?


This is the difference between UTF-16LE and UTF-16:

  • UTF-16LE is little endian without a BOM
  • UTF-16 is big or little endian with a BOM

So when you use UTF-16LE, the BOM is just part of the text. Use UTF-16 instead, so the BOM is automatically removed. The reason UTF-16LE and UTF-16BE exist is so people can carry around "properly-encoded" text without BOMs, which does not apply to you.

Note what happens when you encode with one of these and decode with the other. (Without a BOM, the UTF-16 codec falls back to the machine's native byte order, which is why the BOM-less UTF-16LE bytes happen to round-trip correctly here; don't rely on that.)

>>> u'Hello, world'.encode('UTF-16LE')
'H\x00e\x00l\x00l\x00o\x00,\x00 \x00w\x00o\x00r\x00l\x00d\x00'
>>> u'Hello, world'.encode('UTF-16')
'\xff\xfeH\x00e\x00l\x00l\x00o\x00,\x00 \x00w\x00o\x00r\x00l\x00d\x00'
 ^^^^^^^^ (BOM)
>>> u'Hello, world'.encode('UTF-16LE').decode('UTF-16')
u'Hello, world'
>>> u'Hello, world'.encode('UTF-16').decode('UTF-16LE')
u'\ufeffHello, world'
  ^^^^^^ (BOM)

Or you can do this at the shell:

for x in * ; do iconv -f UTF-16 -t UTF-8 <"$x" | dos2unix >"$x.tmp" && mv "$x.tmp" "$x"; done
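If you would rather stay in Python, roughly the same loop (UTF-16 in, BOM stripped, Unix line endings, UTF-8 out) looks like this. It's only a sketch, and it assumes every file in the current directory really is UTF-16:

import glob

for name in glob.glob('*'):
    with open(name, 'rb') as f:
        data = f.read()
    text = data.decode('utf-16')         # the BOM is consumed here
    text = text.replace('\r\n', '\n')    # roughly what dos2unix does
    with open(name, 'wb') as f:
        f.write(text.encode('utf-8'))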


Just use str.decode and str.encode:

with open(ff_name, 'rb') as source_file:
    with open(target_file_name, 'w+b') as dest_file:
        contents = source_file.read()
        dest_file.write(contents.decode('utf-16').encode('utf-8'))

str.decode will get rid of the BOM for you (and deduce the endianness).
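On Python 3 (or with io.open) you can also let open() do both conversions for you. A minimal sketch with made-up file names:

with open('source.txt', 'r', encoding='utf-16') as src, \
     open('target.txt', 'w', encoding='utf-8') as dst:
    # the utf-16 codec strips the BOM on read; utf-8 writes no BOM
    dst.write(src.read())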