I have seen similar posts, but none with a complete answer, so I am posting here.
I am using TF-IDF in Spark to get the words with the highest tf-idf values in each document. I use the following code:
from pyspark.ml import Pipeline
from pyspark.ml.feature import HashingTF, IDF, Tokenizer, CountVectorizer, StopWordsRemover

tokenizer = Tokenizer(inputCol="doc_cln", outputCol="tokens")
# first pass: remove the default English stop words
remover1 = StopWordsRemover(inputCol="tokens",
                            outputCol="stopWordsRemovedTokens")
# second pass: remove a custom stop-word list
stopwordList = ["word1", "word2", "word3"]
remover2 = StopWordsRemover(inputCol="stopWordsRemovedTokens",
                            outputCol="filtered", stopWords=stopwordList)
hashingTF = HashingTF(inputCol="filtered", outputCol="rawFeatures", numFeatures=2000)
idf = IDF(inputCol="rawFeatures", outputCol="features", minDocFreq=5)
pipeline = Pipeline(stages=[tokenizer, remover1, remover2, hashingTF, idf])
model = pipeline.fit(df)
results = model.transform(df)
results.cache()
The results I get look like:
|[a8g4i9g5y, hwcdn] |(2000,[905,1104],[7.34977707433047,7.076179741760428])
where the schema is:
filtered: array (nullable = true)
features: vector (nullable = true)
How can I extract the array from "features"? Ideally, I would like to get the word corresponding to the highest tf-idf value, like this:
|a8g4i9g5y|7.34977707433047
Thanks in advance!
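To pull out the entry with the largest tf-idf, you can inspect the `indices` and `values` arrays of the `SparseVector` in the `features` column. Below is a minimal sketch of that logic; `FakeSparseVector` and `max_entry` are hypothetical names I introduce here (the stand-in class only mimics the `indices`/`values` attributes of `pyspark.ml.linalg.SparseVector` so the logic can be shown without a running Spark session):

```python
# Minimal stand-in for pyspark.ml.linalg.SparseVector; in Spark, the real
# SparseVector from the "features" column has the same attributes.
class FakeSparseVector:
    def __init__(self, size, indices, values):
        self.size, self.indices, self.values = size, indices, values

def max_entry(v):
    """Return (feature_index, tfidf) for the largest value, or None if empty."""
    if v is None or len(v.values) == 0:
        return None
    i = max(range(len(v.values)), key=lambda j: v.values[j])
    return int(v.indices[i]), float(v.values[i])

row = FakeSparseVector(2000, [905, 1104], [7.34977707433047, 7.076179741760428])
print(max_entry(row))  # (905, 7.34977707433047)
```

In Spark you would wrap `max_entry` in a `pyspark.sql.functions.udf` and apply it to `results["features"]`. Note this only gives you the hashed feature index, not the word itself (see below).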
a8g4i9g5y is associated with feature 905 and therefore has a tf-idf value of 7.34977707433047. The hashing process does not necessarily preserve the order of the words in that particular sentence. You can only be sure that one of a8g4i9g5y and hwcdn is represented by column 905 and the other by 1104. - ldavid