sklearn MultinomialNB: how to find the most discriminative words for each class

I am using sklearn's Multinomial Naive Bayes classifier to classify the 20NewsGroups data. The code is as follows:
import numpy as np
import operator
from sklearn import datasets, naive_bayes, metrics, feature_extraction

data_train = datasets.fetch_20newsgroups(subset = 'train', shuffle = True,  random_state = 2016, remove = ('headers', 'footers', 'quotes'))
data_test = datasets.fetch_20newsgroups(subset = 'test', shuffle = True, random_state = 2016, remove = ('headers', 'footers', 'quotes'))
categories = data_train.target_names

target_map = {}

for i in range(len(categories)):
    if 'comp.' in categories[i]:
        target_map[i] = 0
    elif 'rec.' in categories[i]:
        target_map[i] = 1
    elif 'sci.' in categories[i]:
        target_map[i] = 2
    elif 'misc.forsale' in categories[i]:
        target_map[i] = 3
    elif 'talk.politics' in categories[i]:
        target_map[i] = 4
    else:
        target_map[i] = 5

y_temp = data_train.target
y_train = []

for y in y_temp:
    y_train.append(target_map[y])

y_temp = data_test.target
y_test = []

for y in y_temp:
    y_test.append(target_map[y])

count_vectorizer = feature_extraction.text.CountVectorizer(min_df = 0.01, max_df = 0.5, stop_words = 'english')
x_train = count_vectorizer.fit_transform(data_train.data)
x_test = count_vectorizer.transform(data_test.data)

feature_names = count_vectorizer.get_feature_names()

mnb_alpha_001  = naive_bayes.MultinomialNB(alpha = 0.01)

mnb_alpha_001.fit(x_train, y_train)

y_pred_001  = mnb_alpha_001.predict(x_test)

print('Accuracy Of MNB With Alpha = 0.01  : ', metrics.accuracy_score(y_test,y_pred_001))

The code above performs the classification fine. In addition, I would like to list, for each class (class 0 through class 5), the 10 words that best distinguish that class from the other classes.
If I had only two classes (class 0 and class 1), I could compare the log probabilities with feature_log_prob_ like this:
diff = mnb_alpha_001.feature_log_prob_[1,:] - mnb_alpha_001.feature_log_prob_[0,:]
name_diff = {}
for i in range(len(feature_names)):
    name_diff[feature_names[i]] = diff[i]
names_diff_sorted = sorted(name_diff.items(), key = operator.itemgetter(1), reverse = True)
for i in range(10):
    print(names_diff_sorted[i])
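
As a side note, the quantity being sorted here is just the log of a probability ratio, so it can be read directly as "how many times more likely a word is under class 1 than under class 0". A minimal check of that interpretation, assuming the diff array and feature_names list from the snippet above:

import numpy as np

# diff[i] = log P(word_i | class 1) - log P(word_i | class 0)
#         = log( P(word_i | class 1) / P(word_i | class 0) )
# so exp(diff[i]) is the ratio of the two smoothed word probabilities.
best = int(np.argmax(diff))
print(feature_names[best], 'is about', np.exp(diff[best]), 'times more likely under class 1')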

That gives the 10 words that best distinguish class 1 from class 0. The problem is that with more than two classes I can no longer simply subtract the log probabilities.
I would appreciate your expert advice on how to perform this task, so that I get the 10 most discriminative words for each class.
Many thanks.
1 Answer

acc = []
rr = [0.001, 0.01, 0.1, 1, 10]

# Sweep the candidate alpha values (the classifier must be given the value
# rr[alp], not the loop index itself).
for alp in range(len(rr)):
    mnb = naive_bayes.MultinomialNB(alpha = rr[alp])
    mnb.fit(x_train, y_train)
    y_pred = mnb.predict(x_test)
    print('accuracy of Multinomial Naive Bayes for alpha ', rr[alp], '=', metrics.accuracy_score(y_test, y_pred))
    acc.append(metrics.accuracy_score(y_test, y_pred))


import operator
# Pick the alpha with the highest test accuracy.
pos, m = max(enumerate(acc), key=operator.itemgetter(1))
print("Max accuracy=", m, " for alpha=", rr[pos])

# Refit once with the best alpha, then rank the words of every class.
mnb = naive_bayes.MultinomialNB(alpha = rr[pos])
mnb.fit(x_train, y_train)
feature_names = count_vectorizer.get_feature_names()

for ss in [0, 1, 2, 3, 4, 5]:
    # Compare class ss with the element-wise maximum of the other classes'
    # log probabilities (the multi-class analogue of the two-class subtraction).
    others = np.delete(mnb.feature_log_prob_, ss, axis=0)
    diff = mnb.feature_log_prob_[ss, :] - np.max(others, axis=0)

    name_diff = {}
    for i in range(len(feature_names)):
        name_diff[feature_names[i]] = diff[i]

    names_diff_sorted = sorted(name_diff.items(), key = operator.itemgetter(1), reverse = True)
    for i in range(10):
        print(ss, names_diff_sorted[i])
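
For reference, the same one-vs-rest idea can be written in a vectorized way, without the intermediate dictionary. This is only a sketch: it reuses x_train, y_train and count_vectorizer from the question, fixes alpha at 0.01 (the value used there), and on newer scikit-learn versions get_feature_names() has to be replaced by get_feature_names_out():

import numpy as np
from sklearn import naive_bayes

# Vectorized one-vs-rest ranking: score every word of each class by its log
# probability minus the element-wise maximum over the other classes, then
# take the 10 highest-scoring words with argsort.
mnb = naive_bayes.MultinomialNB(alpha = 0.01)
mnb.fit(x_train, y_train)

log_prob = mnb.feature_log_prob_                          # shape (n_classes, n_features)
names = np.asarray(count_vectorizer.get_feature_names())  # get_feature_names_out() on scikit-learn >= 1.0

for ss in range(log_prob.shape[0]):
    others = np.delete(log_prob, ss, axis=0)    # log probabilities of all other classes
    diff = log_prob[ss] - others.max(axis=0)    # positive only if the word favours class ss
    top10 = np.argsort(diff)[::-1][:10]
    print(ss, list(zip(names[top10], np.round(diff[top10], 3))))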

Can you elaborate in the answer? – Miguel
