Python TfidfVectorizer error: empty vocabulary; perhaps the documents only contain stop words

21

I want to use Python's TfidfVectorizer to transform a collection of texts. However, when I try fit_transform I get a ValueError: ValueError: empty vocabulary; perhaps the documents only contain stop words.

In [69]: TfidfVectorizer().fit_transform(smallcorp)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-69-ac16344f3129> in <module>()
----> 1 TfidfVectorizer().fit_transform(smallcorp)

/Users/maxsong/anaconda/lib/python2.7/site-packages/sklearn/feature_extraction/text.pyc in fit_transform(self, raw_documents, y)
   1217         vectors : array, [n_samples, n_features]
   1218         """
-> 1219         X = super(TfidfVectorizer, self).fit_transform(raw_documents)
   1220         self._tfidf.fit(X)
   1221         # X is already a transformed view of raw_documents so

/Users/maxsong/anaconda/lib/python2.7/site-packages/sklearn/feature_extraction/text.pyc in fit_transform(self, raw_documents, y)
    778         max_features = self.max_features
    779 
--> 780         vocabulary, X = self._count_vocab(raw_documents, self.fixed_vocabulary)
    781         X = X.tocsc()
    782 

/Users/maxsong/anaconda/lib/python2.7/site-packages/sklearn/feature_extraction/text.pyc in _count_vocab(self, raw_documents, fixed_vocab)
    725             vocabulary = dict(vocabulary)
    726             if not vocabulary:
--> 727                 raise ValueError("empty vocabulary; perhaps the documents only"
    728                                  " contain stop words")
    729 

ValueError: empty vocabulary; perhaps the documents only contain stop words

I read the SO question here: Problems using a custom vocabulary for TfidfVectorizer scikit-learn, and tried ogrisel's suggestion of using TfidfVectorizer(**params).build_analyzer()(dataset2) to check the results of the text-analysis step, and that seems to work as expected. Here is a short snippet:

In [68]: TfidfVectorizer().build_analyzer()(smallcorp)
Out[68]: 
[u'due',
 u'to',
 u'lack',
 u'of',
 u'personal',
 u'biggest',
 u'education',
 u'and',
 u'husband',
 u'to',

Is there something else I'm doing wrong? The corpus I'm feeding in is just one big string separated by newlines.

Thanks!


I ran into the same problem, so I downgraded the version from v0.19 to 0.18. - Nacho
5 Answers

22

My guess is it's because you only have one string. Try splitting it into a list of strings, e.g.:

In [51]: smallcorp
Out[51]: 'Ah! Now I have done Philosophy,\nI have finished Law and Medicine,\nAnd sadly even Theology:\nTaken fierce pains, from end to end.\nNow here I am, a fool for sure!\nNo wiser than I was before:'

In [52]: tf = TfidfVectorizer()

In [53]: tf.fit_transform(smallcorp.split('\n'))
Out[53]: 
<6x28 sparse matrix of type '<type 'numpy.float64'>'
    with 31 stored elements in Compressed Sparse Row format>
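The likely reason, hedged: fit_transform treats its argument as an iterable of documents, so a bare string is iterated character by character, and the default token pattern (two or more word characters) matches nothing in a one-character "document". A minimal sketch with a stand-in corpus string (not the asker's data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Stand-in corpus: one string holding newline-separated "documents".
corpus = "line one\nline two\nline three"

# Passing the raw string makes sklearn iterate over its characters;
# the default token pattern r"(?u)\b\w\w+\b" needs 2+ word characters,
# so every single-character "document" tokenizes to nothing.
try:
    TfidfVectorizer().fit_transform(corpus)
except ValueError as err:
    print(err)  # empty vocabulary; perhaps the documents only contain stop words

# Splitting on newlines yields one document per line, as intended.
X = TfidfVectorizer().fit_transform(corpus.split("\n"))
print(X.shape)  # (3, 4): 3 documents, vocabulary {line, one, three, two}
```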

2
This should be the accepted answer. Where is the documentation link on this? I can't find it. - Goodword
There is an example in the docs: http://scikit-learn.org/stable/modules/feature_extraction.html#text-feature-extraction (in section 4.2.3.3, the corpus variable) - satojkovic

4
In version 0.12 we set the minimum document frequency to 2, which means that only words occurring at least twice are considered. For your example to work, you need to set min_df=1. Since version 0.13 that is the default, so I guess you are using 0.12, right?
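To illustrate the difference the two defaults make, a small sketch with made-up documents (not the asker's corpus):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["apple banana", "banana cherry", "cherry date"]

# min_df=1 keeps any term appearing in at least one document
# (the default from scikit-learn 0.13 onward).
tf1 = TfidfVectorizer(min_df=1)
tf1.fit_transform(docs)
print(sorted(tf1.vocabulary_))  # ['apple', 'banana', 'cherry', 'date']

# min_df=2 (the old 0.12 default) drops terms seen in only one
# document; with an unlucky corpus nothing survives, and you get
# the "empty vocabulary" ValueError.
tf2 = TfidfVectorizer(min_df=2)
tf2.fit_transform(docs)
print(sorted(tf2.vocabulary_))  # ['banana', 'cherry']
```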

0

If you insist on having only one string, you can wrap the single string in a tuple. Instead of:

smallcorp = "your text"

put it in a tuple:

In [22]: smallcorp = ("your text",)
In [23]: tf.fit_transform(smallcorp)
Out[23]: 
<1x2 sparse matrix of type '<type 'numpy.float64'>'
    with 2 stored elements in Compressed Sparse Row format>

0

I ran into the same problem. Converting my list of integers (nums) to a list of strings didn't help. But this conversion:

['d' + str(num) for num in nums]  # d is an arbitrary letter prefix, so we clearly work with strings

did help.
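A guess at why the letter prefix matters: the default token pattern requires at least two word characters, so a single-digit string like "5" is dropped, while "d5" survives. A sketch under that assumption, with a made-up list of small integers:

```python
from sklearn.feature_extraction.text import CountVectorizer

nums = [1, 2, 3]  # made-up data: single-digit numbers as "documents"

# Bare one-character strings are discarded by the default token
# pattern r"(?u)\b\w\w+\b", leaving an empty vocabulary.
try:
    CountVectorizer().fit([str(n) for n in nums])
except ValueError as err:
    print(err)

# A letter prefix makes each token two characters long, so it is kept.
cv = CountVectorizer()
cv.fit(["d" + str(n) for n in nums])
print(sorted(cv.vocabulary_))  # ['d1', 'd2', 'd3']
```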


0

I hit a similar error while running a TF-IDF Python 3 script over a large corpus. Some of the small files were (apparently) missing keywords, which produced this error message.

I tried several solutions (such as adding dummy strings to my filtered list when len(filtered) == 0, ...), but none of them helped. The simplest solution was to wrap the call in a try: ... except ... continue expression.

pattern = "(?u)\\b[\\w-]+\\b"
cv = CountVectorizer(token_pattern=pattern)

# filtered is a list
filtered = [w for w in filtered if w not in my_stopwords and not w.isdigit()]

# ValueError:
# cv.fit(text)
# File "tfidf-sklearn.py", line 1675, in tfidf
#   cv.fit(filtered)
#   File "/home/victoria/venv/py37/lib/python3.7/site-packages/sklearn/feature_extraction/text.py", line 1024, in fit
#   self.fit_transform(raw_documents)
#   ...
#   ValueError: empty vocabulary; perhaps the documents only contain stop words

# Did not help:
# https://dev59.com/5mEi5IYBdhLWcg3w0O-Q#20933883
#
# if len(filtered) == 0:
#     filtered = ['xxx', 'yyy', 'zzz']

# Solution: skip any file whose vocabulary comes out empty.
# (This runs inside the loop over files, so `continue` is valid here.)
try:
    doc_freq_term_matrix = cv.fit_transform(filtered)
except ValueError:
    continue

Hi @Victoria, in my view you are simply avoiding the operation you actually want to perform (vectorization). What is your take on that? - Aleksander Molak
