How to lemmatize a list of sentences

6
How can I lemmatize a list of sentences in Python?
from nltk.stem.wordnet import WordNetLemmatizer
a = ['i like cars', 'cats are the best']
lmtzr = WordNetLemmatizer()
lemmatized = [lmtzr.lemmatize(word) for word in a]
print(lemmatized)

This is what I tried, but it gives back the sentences unchanged. Do I need to tokenize the words before processing?
2 Answers

7

TL;DR:

pip3 install -U pywsd

Then:

>>> from pywsd.utils import lemmatize_sentence

>>> text = 'i like cars'
>>> lemmatize_sentence(text)
['i', 'like', 'car']
>>> lemmatize_sentence(text, keepWordPOS=True)
(['i', 'like', 'cars'], ['i', 'like', 'car'], ['n', 'v', 'n'])

>>> text = 'The cat likes cars'
>>> lemmatize_sentence(text, keepWordPOS=True)
(['The', 'cat', 'likes', 'cars'], ['the', 'cat', 'like', 'car'], [None, 'n', 'v', 'n'])

>>> text = 'The lazy brown fox jumps, and the cat likes cars.'
>>> lemmatize_sentence(text)
['the', 'lazy', 'brown', 'fox', 'jump', ',', 'and', 'the', 'cat', 'like', 'car', '.']

Otherwise, take a look at how the function in pywsd:
  • tokenizes the string
  • runs a POS tagger and maps the tags to the WordNet POS tagset
  • attempts to stem the tokens
  • finally calls the lemmatizer with the POS and/or the stems
See https://github.com/alvations/pywsd/blob/master/pywsd/utils.py#L129

5

You need to lemmatize each word separately, not the sentence as a whole. A corrected snippet:

from nltk.stem.wordnet import WordNetLemmatizer
from nltk import word_tokenize
sents = ['i like cars', 'cats are the best']
lmtzr = WordNetLemmatizer()
lemmatized = [[lmtzr.lemmatize(word) for word in word_tokenize(s)]
              for s in sents]
print(lemmatized)
#[['i', 'like', 'car'], ['cat', 'are', 'the', 'best']]

You can also get better results if you POS-tag first and feed the POS information to the lemmatizer.
