Most frequent words in a text file, excluding stop words


I have a French text file and want to find the words that occur most frequently in it, ignoring stop words. Here is my code:

with open('./text_file.txt', 'r', encoding='utf8') as f:
    s = f.read()

num_chars = len(s)
num_lines = s.count('\n')

#call split with no arguments
words = s.split()
d = {}
for w in words:
    if w in d:
        d[w] += 1
    else:
        d[w] = 1

num_words = sum(d[w] for w in d)

lst = [(d[w],w) for w in d]
lst.sort()
lst.reverse()

# nltk treatment
from nltk.corpus import stopwords # Import the stop word list
from nltk.tokenize import wordpunct_tokenize

stop_words = set(stopwords.words('french')) # creating a set makes the searching faster
print (stop_words)
print([word for word in lst if word not in stop_words])  # note: lst holds (count, word) tuples, so nothing is actually filtered here


print('\n The 50 most frequent words are \n')

i = 1
for count, word in lst[:50]:
    print('%2s. %4s %s' %(i,count,word))
    i+= 1

This returns the most frequent words including the stop words. Do you have a better idea?

You could load stop_words up front and check against them in the if w in d step. That way you don't have to count them first and delete them afterwards. - Finn
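
A minimal sketch of that idea (assuming words is the list from split() and the NLTK French stop list loaded as a set):

from nltk.corpus import stopwords

stop_words = set(stopwords.words('french'))

d = {}
for w in words:
    if w not in stop_words:       # skip stop words before counting
        d[w] = d.get(w, 0) + 1    # count in a single pass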
4 Answers

Here's a simplified version:

from nltk.corpus import stopwords # Import the stop word list
from nltk.tokenize import wordpunct_tokenize

with open('./text_file.txt', 'r', encoding='utf8') as f:
    words = f.read().split()

d = {}
stop_words = set(stopwords.words('french')) # creating a set makes the searching faster
for w in words:
    if w not in stop_words:
        if w in d:
            d[w] += 1
        else:
            d[w] = 1

lst = sorted([(d[w],w) for w in d],reverse=True)
print(stop_words)
print([word for count, word in lst])  # stop words were already filtered out during counting
print('\n The 50 most frequent words are \n')

i = 1
for count, word in lst[:50]:
    print('%2s. %4s %s' %(i,count,word))
    i += 1

Hi, thanks for your time. I still have the same problem: "de" (the translation of "the" in French) shows up as the most frequent word. I want to remove these "generic" words, which is why I'm using nltk. - user93804
@user93804 I've added it. - Ann Zen
Hi, it still doesn't work. It gives me the following error: TypeError: argument of type 'WordListCorpusReader' is not iterable. - user93804
My bad, I forgot the underscore. - Ann Zen
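
One further pitfall worth noting here: NLTK's French stop list is all lowercase, so with a plain split() a capitalized "De" or a token like "de," with attached punctuation slips through the filter. A sketch that normalizes tokens first, using the wordpunct_tokenize import that both snippets include but never call (assuming the same text_file.txt):

from nltk.corpus import stopwords
from nltk.tokenize import wordpunct_tokenize

with open('./text_file.txt', 'r', encoding='utf8') as f:
    text = f.read()

stop_words = set(stopwords.words('french'))

# lowercase and tokenize so that 'De' and 'de,' both reduce to 'de'
tokens = [t.lower() for t in wordpunct_tokenize(text)]
words = [t for t in tokens if t.isalpha() and t not in stop_words]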


Here's a more concise (and possibly faster) solution using collections.Counter:

from collections import Counter
from nltk.corpus import stopwords # Import the stop word list
from nltk.tokenize import wordpunct_tokenize
NUM_WORDS = 50

with open('./text_file.txt', 'r', encoding='utf8') as f:
    words = f.read().split()

stop_words = set(stopwords.words('french'))  # build the set once, not once per word
word_counts = Counter(word for word in words if word not in stop_words)
print(f'\nThe {NUM_WORDS} most frequent words are:\n')
for i, (word, count) in enumerate(word_counts.most_common(NUM_WORDS), start=1):
    print('%2s. %4s %s' % (i, count, word))
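
As a design note: when given an n, Counter.most_common(n) uses a heap (heapq.nlargest) under the hood, so it avoids sorting the entire vocabulary just to extract the top 50.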

Thanks for your time. It's been 45 minutes and the code is still running... is that normal? - user93804

NLTK has a class called FreqDist for computing frequencies, which provides many convenient methods. You can use it as follows:

from nltk.tokenize import wordpunct_tokenize
from nltk.probability import FreqDist
from nltk.corpus import stopwords


with open('text_file.txt', 'r', encoding='utf8') as f:
    text = f.read()

stop_words = set(stopwords.words('french'))  # build the set once, not once per token

fd = FreqDist(
    word
    for word in wordpunct_tokenize(text)
    if word not in stop_words
)
fd.pprint()
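
Since FreqDist subclasses collections.Counter, getting the top 50 is a one-liner, e.g.:

print(fd.most_common(50))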


with open("/yourFile.txt", "r") as file:
    words = file.read().split()

    cptwords = {}

    for word in words:
        # strings are immutable, so the stripped result must be reassigned;
        # rstrip with an argument removes trailing punctuation, not whitespace
        word = word.rstrip(",.\n:!?;")

        cptwords.setdefault(word, 0)
        cptwords[word] += 1

    cptwords = sorted(cptwords.items(), key = lambda x: x[1], reverse = True)

    print(f"The first 50 most used words are {[truc[0] for truc in cptwords[:50]]}")

That's a simple way of doing it.

Hi, thanks for your time. I still face the same problem: "de" (the translation of "the" in French) is the most frequent word. I want to remove these "generic" words, which is why I'm using nltk. - user93804
