Lemmatizing a list of words

4

I have a text file containing some words and I want to lemmatize them, so that words which mean the same thing but are in different tenses, such as try and tried, are reduced to one form. But whenever I try this I keep getting the error TypeError: unhashable type: 'list'.

    from nltk.stem import WordNetLemmatizer

    results = []
    with open('/Users/xyz/Documents/something5.txt', 'r') as f:
        for line in f:
            results.append(line.strip().split())   # each element of results is a list of words

    lemma = WordNetLemmatizer()

    lem = []

    for r in results:
        lem.append(lemma.lemmatize(r))   # r is a list here, not a single word

    with open("lem.txt", "w") as t:
        for item in lem:
            print >> t, item

How do I lemmatize a list of words that have already been tokenized?
2 Answers

5
The method WordNetLemmatizer.lemmatize expects a single string, but you are passing it a list of strings, which is what causes the TypeError. line.strip().split() returns a list of strings, and appending that list to results gives you a list of lists.
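A minimal sketch of both points, assuming the same WordNetLemmatizer import as in the question (this snippet is illustrative, not from the original answer):

from nltk.stem import WordNetLemmatizer

lemma = WordNetLemmatizer()

lemma.lemmatize('tried')       # fine: the argument is a single string
# lemma.lemmatize(['tried'])   # raises TypeError: unhashable type: 'list'

words = []
words.append('try tried'.split())   # words == [['try', 'tried']] -- a list of lists
words = []
words.extend('try tried'.split())   # words == ['try', 'tried']   -- a flat list of strings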
Use results.extend(line.strip().split()) instead:
from nltk.stem import WordNetLemmatizer

results = []
with open('/Users/xyz/Documents/something5.txt', 'r') as f:
    for line in f:
        results.extend(line.strip().split())   # extend keeps results a flat list of words

lemma = WordNetLemmatizer()

lem = map(lemma.lemmatize, results)

with open("lem.txt", "w") as t:
    for item in lem:
        print >> t, item

Or refactor so that no intermediate results list is needed:
def words(fname):
    with open(fname, 'r') as document:
        for line in document:
            for word in line.strip().split():
                yield word

lemma = WordNetLemmatizer()
lem = map(lemma.lemmatize, words('/Users/xyz/Documents/something5.txt'))
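Note that the print >> t syntax above is Python 2 only. A minimal Python 3 sketch of the same read, lemmatize and write pipeline, kept deliberately close to the code above, would be:

from nltk.stem import WordNetLemmatizer

lemma = WordNetLemmatizer()

results = []
with open('/Users/xyz/Documents/something5.txt', 'r') as f:
    for line in f:
        results.extend(line.strip().split())

# In Python 3, print() is a function, so write each lemma with file= instead of print >> t
with open("lem.txt", "w") as t:
    for item in map(lemma.lemmatize, results):
        print(item, file=t)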

1
Open the text file and read its lines into results1 as shown below:
fo = open(filename)
results1 = fo.readlines()

results1
['I have a list of words in a text file', ' \n I want to perform lemmatization on them to remove words which have the same meaning but are in different tenses', '']

# Tokenize lists

results2 = [line.split() for line in results1]

# Remove empty lists

results2 = [ x for x in results2 if x != []]
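Blank lines in the file split to empty lists, which is why the filtering step above is needed. A small illustrative example (the sample lines are made up):

lines = ['first line\n', '\n', 'last line']
tokens = [line.split() for line in lines]   # [['first', 'line'], [], ['last', 'line']]
tokens = [x for x in tokens if x != []]     # [['first', 'line'], ['last', 'line']]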

# Lemmatize each word from a list using WordNetLemmatizer

from nltk.stem.wordnet import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
lemma_list_of_words = []
for l1 in results2:
    l2 = ' '.join([lemmatizer.lemmatize(word) for word in l1])
    lemma_list_of_words.append(l2)
lemma_list_of_words
['I have a list of word in a text file', 'I want to perform lemmatization on them to remove word which have the same meaning but are in different tense']

Compare lemma_list_of_words with results1 to see the effect of lemmatization (for example, words becomes word and tenses becomes tense).
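One caveat worth adding (not part of the original answers): WordNetLemmatizer.lemmatize treats every word as a noun by default, so past-tense verb forms such as tried from the question may come back unchanged unless a part-of-speech tag is passed. A minimal sketch:

from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

lemmatizer.lemmatize('running')           # 'running' -- default pos='n' treats it as a noun
lemmatizer.lemmatize('running', pos='v')  # 'run'     -- the verb tag collapses the inflection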
