How do I convert a file to UTF-8 in Python?

83

I need to convert a bunch of files to UTF-8 in Python, and I'm having trouble with the "converting the file" part.

I'd like to do the equivalent of:

iconv -t utf-8 $file > converted/$file # this is shell code

Thanks!

10 Answers

64
You can use the codecs module, like this:
import codecs
BLOCKSIZE = 1048576 # or some other, desired size in bytes
with codecs.open(sourceFileName, "r", "your-source-encoding") as sourceFile:
    with codecs.open(targetFileName, "w", "utf-8") as targetFile:
        while True:
            contents = sourceFile.read(BLOCKSIZE)
            if not contents:
                break
            targetFile.write(contents)

EDIT: added a BLOCKSIZE parameter to control the file chunk size.


6
read() will read the whole file at once; you probably want .read(BLOCKSIZE), where BLOCKSIZE is an amount suitable to read and write at once. - Brian
3
In Python 3: consider using open instead of codecs.open (see here). - Rafael-WO
I ran the code on my test folder, but I get the following error: Traceback (most recent call last): File "D:\2022_12_02\TEST\convert txt to UTF-8 - versiune 2.py", line 3, in <module> with codecs.open(sourceFileName, "r", "d:\2022_12_02\TEST") as sourceFile: NameError: name 'sourceFileName' is not defined - Just Me
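
Following the Python 3 suggestion in the comment above, here is a minimal sketch of the same chunked copy using the built-in open (the file names and the iso-8859-1 source encoding are placeholders, not part of the original answer):

BLOCKSIZE = 1048576  # characters per chunk in text mode

with open("source.txt", "r", encoding="iso-8859-1") as sourceFile, \
        open("target.txt", "w", encoding="utf-8") as targetFile:
    while True:
        contents = sourceFile.read(BLOCKSIZE)
        if not contents:
            break
        targetFile.write(contents)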

35

This worked for me in a small test:

# Python 2 example (the unicode built-in does not exist in Python 3)
sourceEncoding = "iso-8859-1"
targetEncoding = "utf-8"
source = open("source")
target = open("target", "w")

target.write(unicode(source.read(), sourceEncoding).encode(targetEncoding))

Better still, specify binary mode. - Arafangion
@Arafangion Why is binary mode better? Thanks! - Honghe.Wu
@Honghe.Wu: On Windows, text mode is the default, which means the OS will mangle your line endings; you don't want that if you're not sure about the encoding on disk. - Arafangion
@Arafangion What would the example look like if I wanted to specify binary mode? target = open("target", "wb"), and are any other changes needed? - The Bndr
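
A hedged sketch of what the binary-mode variant asked about above might look like (still Python 2, with iso-8859-1 assumed as the source encoding):

# Python 2: read and write raw bytes so Windows does not translate line endings.
source = open("source", "rb")
target = open("target", "wb")

target.write(unicode(source.read(), "iso-8859-1").encode("utf-8"))

source.close()
target.close()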

17

Thanks for the replies. It works!

Since the source files are in mixed formats, I added a list of source formats to be tried in sequence (sourceFormats); on UnicodeDecodeError I try the next format:

from __future__ import with_statement

import os
import sys
import codecs
from chardet.universaldetector import UniversalDetector

targetFormat = 'utf-8'
outputDir = 'converted'
detector = UniversalDetector()

def get_encoding_type(current_file):
    detector.reset()
    for line in file(current_file):  # Python 2 built-in; use open(current_file) on Python 3
        detector.feed(line)
        if detector.done: break
    detector.close()
    return detector.result['encoding']

def convertFileBestGuess(fileName):
    sourceFormats = ['ascii', 'iso-8859-1']
    for format in sourceFormats:
        try:
            with codecs.open(fileName, 'rU', format) as sourceFile:
                writeConversion(sourceFile, fileName)
                print('Done.')
                return
        except UnicodeDecodeError:
            pass

def convertFileWithDetection(fileName):
    print("Converting '" + fileName + "'...")
    format=get_encoding_type(fileName)
    try:
        with codecs.open(fileName, 'rU', format) as sourceFile:
            writeConversion(sourceFile, fileName)
            print('Done.')
            return
    except UnicodeDecodeError:
        pass

    print("Error: failed to convert '" + fileName + "'.")


def writeConversion(sourceFile, fileName):
    with codecs.open(outputDir + '/' + fileName, 'w', targetFormat) as targetFile:
        for line in sourceFile:
            targetFile.write(line)

# Off topic: get the file list and call convertFile on each file
# ...

(Edit by Rudro Badhon: this incorporates the original "try multiple formats until you get no exception" approach as well as an alternate approach that uses chardet.universaldetector)


For complex cases you can try the chardet module from feedparser.org to detect the encoding, though in your case it's overkill. - itsadok
1
My Python 3.5 doesn't recognize the file function. Where does it come from? - physicalattraction
Yes, this answer was posted 8 years ago, so it's old Python 2 code. - Sébastien RoccaSerra
I tried this code and ran it, but it can't convert ANSI text files to UTF-8... - Just Me
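
Regarding the "ANSI" files mentioned in the comment above: on Western-European Windows, "ANSI" usually means cp1252, so one hedged tweak is to add it to the fallback list, for example:

# Hypothetical tweak: try the Windows "ANSI" code page before latin-1
# (iso-8859-1 decodes any byte sequence, so it must stay last).
sourceFormats = ['ascii', 'cp1252', 'iso-8859-1']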

16

Answer for an unknown source encoding type

Based on @Sébastien RoccaSerra

python3.6

import os
from chardet import detect

# Placeholder paths: the file to convert in place and a temporary target file.
srcfile = 'source.txt'
trgfile = 'target.txt'

# get file encoding type
def get_encoding_type(file):
    with open(file, 'rb') as f:
        rawdata = f.read()
    return detect(rawdata)['encoding']

from_codec = get_encoding_type(srcfile)

# add try: except block for reliability
try: 
    with open(srcfile, 'r', encoding=from_codec) as f, open(trgfile, 'w', encoding='utf-8') as e:
        text = f.read() # for small files, for big use chunks
        e.write(text)

    os.remove(srcfile) # remove old encoding file
    os.rename(trgfile, srcfile) # rename new encoding
except UnicodeDecodeError:
    print('Decode Error')
except UnicodeEncodeError:
    print('Encode Error')

8
You can use this one-liner (assuming you want to convert from utf16 to utf8):
    python -c "from pathlib import Path; path = Path('yourfile.txt') ; path.write_text(path.read_text(encoding='utf16'), encoding='utf8')"

where yourfile.txt is the path to your $file.

For this to work you need Python 3.4 or newer (which you probably have by now).

Below is a more readable version of the code above:

from pathlib import Path
path = Path("yourfile.txt")
path.write_text(path.read_text(encoding="utf16"), encoding="utf8")

Depending on your OS, this may change the newline control characters. Still a nice answer, thank you; it deserves more upvotes. It's as simple as that: per the Path.write_text docs there are no resources to manage, it just opens the file in text mode, writes the content and closes it. - david
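
If keeping the original line endings matters, a sketch that round-trips the raw bytes instead of text (again assuming a utf16 source):

from pathlib import Path

path = Path("yourfile.txt")
# Decode and re-encode the raw bytes; no text-mode newline translation takes place.
path.write_bytes(path.read_bytes().decode("utf16").encode("utf8"))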

5
Here is a Python 3 function for converting any text file into UTF-8 encoding (without using unnecessary packages):
def correctSubtitleEncoding(filename, newFilename, encoding_from, encoding_to='UTF-8'):
    with open(filename, 'r', encoding=encoding_from) as fr:
        with open(newFilename, 'w', encoding=encoding_to) as fw:
            for line in fr:
                fw.write(line[:-1]+'\r\n')

You can easily use it in a loop to convert a list of files.
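
For instance, a hypothetical batch over a few subtitle files (the names and the iso-8859-1 source encoding are placeholders):

# Hypothetical usage: convert several subtitle files from ISO-8859-1 to UTF-8.
for name in ['ep1.srt', 'ep2.srt', 'ep3.srt']:
    correctSubtitleEncoding(name, name + '.utf8.srt', 'iso-8859-1')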

This works great for converting from ISO-8859-1 to UTF-8! - beep_check
1
Instead of "line[:-1]" it's better to use "line.rstrip('\r\n')". That way you get the correct result no matter which line endings you encounter. - fskoras

2

To guess the source encoding you can use the *nix file command.

Example:

$ file --mime jumper.xml

jumper.xml: application/xml; charset=utf-8
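
If you want to call it from Python, a rough sketch with subprocess (assumes the file utility is on PATH and Python 3.7+ for capture_output):

import subprocess

def guess_charset(path):
    # Parse the charset reported by `file --mime <path>`.
    out = subprocess.run(['file', '--mime', path],
                         capture_output=True, text=True, check=True).stdout
    return out.rsplit('charset=', 1)[-1].strip()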

It doesn't answer the question. - Arthur Julião

1

Convert all the files in a directory to UTF-8 encoding. It is recursive and can filter files by suffix. Thanks @Sole Sensei

# pip install -i https://pypi.tuna.tsinghua.edu.cn/simple chardet
import os
import re
from chardet import detect


def get_file_list(d):
    result = []
    for root, dirs, files in os.walk(d):
        dirs[:] = [d for d in dirs if d not in ['venv', 'cmake-build-debug']]
        for filename in files:
            # your filter
            if re.search(r'(\.c|\.cpp|\.h|\.txt)$', filename):
                result.append(os.path.join(root, filename))
    return result


# get file encoding type
def get_encoding_type(file):
    with open(file, 'rb') as f:
        raw_data = f.read()
    return detect(raw_data)['encoding']


if __name__ == "__main__":
    file_list = get_file_list('.')
    for src_file in file_list:
        print(src_file)
        trg_file = src_file + '.swp'
        from_codec = get_encoding_type(src_file)
        try:
            with open(src_file, 'r', encoding=from_codec) as f, open(trg_file, 'w', encoding='utf-8') as e:
                text = f.read()
                e.write(text)
            os.remove(src_file)
            os.rename(trg_file, src_file)
        except UnicodeDecodeError:
            print('Decode Error')
        except UnicodeEncodeError:
            print('Encode Error')

Very nice code, thank you. - Just Me

0

This is my brute force method. It also takes care of mingled \n and \r\n in the input.

# Fragment from a larger class: cfg, self, filelocation, outputfilelocation,
# delimitervalue and quotevalue are defined elsewhere, and csv is imported there.
try:
    # open the CSV file
    inputfile = open(filelocation, 'rb')
    outputfile = open(outputfilelocation, 'w', encoding='utf-8')
    for line in inputfile:
        if line[-2:] == b'\r\n' or line[-2:] == b'\n\r':
            output = line[:-2].decode('utf-8', 'replace') + '\n'
        elif line[-1:] == b'\r' or line[-1:] == b'\n':
            output = line[:-1].decode('utf-8', 'replace') + '\n'
        else:
            output = line.decode('utf-8', 'replace') + '\n'
        outputfile.write(output)
    outputfile.close()
except BaseException as error:
    cfg.log(self.outf, "Error(18): opening CSV-file " + filelocation + " failed: " + str(error))
    self.loadedwitherrors = 1
    return ([])
try:
    # open the CSV-file of this source table
    csvreader = csv.reader(open(outputfilelocation, "rU"), delimiter=delimitervalue, quoting=quotevalue, dialect=csv.excel_tab)
except BaseException as error:
    cfg.log(self.outf, "Error(19): reading CSV-file " + filelocation + " failed: " + str(error))

0
import codecs
import glob

import chardet

ALL_FILES = glob.glob('*.txt')

def kira_encoding_function():
    """Check encoding and convert to UTF-8, if encoding no UTF-8."""
    for filename in ALL_FILES:

        # Not 100% accuracy:
        # https://dev59.com/WnRC5IYBdhLWcg3wAcbU#436299
        # Check:
        # https://chardet.readthedocs.io/en/latest/usage.html#example-using-the-detect-function
        # https://dev59.com/2pffa4cB1Zd3GeqP9Ije#37531241
        with open(filename, 'rb') as opened_file:
            bytes_file = opened_file.read()
            chardet_data = chardet.detect(bytes_file)
            fileencoding = (chardet_data['encoding'])
            print('fileencoding', fileencoding)

            if fileencoding in ['utf-8', 'ascii']:
                print(filename + ' in UTF-8 encoding')
            else:
                # Convert file to UTF-8:
                # https://dev59.com/9XnZa4cB1Zd3GeqPlAvi
                cyrillic_file = bytes_file.decode('cp1251')
                with codecs.open(filename, 'w', 'utf-8') as converted_file:
                    converted_file.write(cyrillic_file)
                print(filename +
                      ' in ' +
                      fileencoding +
                      ' encoding automatically converted to UTF-8')


kira_encoding_function()

Source code placed here:

