How to count the 500 most frequent words in a pandas dataframe


I have a dataframe with a column called Text (one text per row), and I want to count the most common words across all of the texts.

So far I have tried the following two approaches (both from Stack Overflow):

pd.Series(' '.join(df['Text']).lower().split()).value_counts()[:100]

and

Counter(" ".join(df["Text"]).split()).most_common(100)

Both of them give me the following error:

TypeError: sequence item 0: expected str instance, list found
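Presumably this happens because each row of df['Text'] already holds a list of tokens rather than a plain string, so ' '.join receives lists instead of strings. A minimal sketch reproducing the error (with made-up tokens):

' '.join([['business', 'date'], ['group', 'english']])
# TypeError: sequence item 0: expected str instance, list found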

I have also tried the Counter approach, simply using:

df.Text.apply(Counter)

This gives me the word counts for each individual text, and I modified the Counter approach so that it returns the most common words per text.

But what I want are the most common words overall.
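For reference, one way the per-text Counters could be merged into one overall count (a minimal sketch with illustrative variable names, assuming each row of df['Text'] is a list of tokens):

from collections import Counter

per_text_counts = df.Text.apply(Counter)          # one Counter per text
overall_counts = sum(per_text_counts, Counter())  # merge into a single Counter
top_500 = overall_counts.most_common(500)         # overall most common words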

Here is a sample of the dataframe (the texts have already been lowercased, stripped of punctuation, tokenized, and had stopwords removed):

    Datum   File    File_type                                         Text                         length    len_cleaned_text
Datum                                                   
2000-01-27  2000-01-27  _04.txt     _04     [business, date, jan, heineken, starts, integr...       396         220

Edit: code to "reproduce" it:
# file_list, input_path, count_the, length and result_list are defined elsewhere
for file in file_list:
    name = file[len(input_path):]
    date = name[11:17]
    type_1 = name[17:20]

    with open(file, "r", encoding="utf-8", errors="surrogateescape") as rfile:
        text = rfile.read()
        text = text.encode('utf-8', 'ignore')
        text = text.decode('utf-8', 'ignore')

    a = {"File": name, "Text": text, 'the': count_the, 'Datum': date,
         'File_type': type_1, 'length': length}
    result_list.append(a)

New cell:

import re
from nltk.tokenize import word_tokenize

df['Text'] = df['Text'].str.lower()
p = re.compile(r'[^\w\s]+')   # punctuation
d = re.compile(r'\d+')        # digits
for index, row in df.iterrows():
    df['Text'] = df['Text'].str.replace('\n', ' ')
    df['Text'] = df['Text'].str.replace('################################ end of story 1 ##############################', '')
    df['Text'] = [p.sub('', x) for x in df['Text'].tolist()]
    df['Text'] = [d.sub('', x) for x in df['Text'].tolist()]
df['Text'] = df['Text'].apply(word_tokenize)


    Datum   File    File_type   Text    length  the
Datum                       
2000-01-27  2000-01-27  0864820040_000127_04.txt    _04     [business, date, jan, heineken, starts, integr...   396     0
2000-02-01  2000-02-01  0910068040_000201_04.txt    _04     [group, english, cns, date, feb, bat, acquisit...   305     0
2000-05-03  2000-05-03  1070448040_000503_04.txt    _04     [date, may, cobham, plc, cob, acquisitionsdisp...   701     0
2000-05-11  2000-05-11  0865985020_000511_04.txt    _04     [business, date, may, swedish, match, complete...   439     0
2000-11-28  2000-11-28  1067252020_001128_04.txt    _04     [date, nov, intec, telecom, sys, itl, doc, pla...   158     0
2000-12-18  2000-12-18  1963867040_001218_04.txt    _04     [associated, press, apw, date, dec, volvo, div...   367     0
2000-12-19  2000-12-19  1065767020_001219_04.txt    _04     [date, dec, spirent, plc, spt, acquisition, co...   414     0
2000-12-21  2000-12-21  1076829040_001221_04.txt    _04     [bloomberg, news, bn, date, dec, eni, ceo, cfo...   271     0
2001-02-06  2001-02-06  1084749020_010206_04.txt    _04     [date, feb, chemring, group, plc, chg, acquisi...   130     0
2001-02-15  2001-02-15  1063497040_010215_04.txt    _04     [date, feb, electrolux, ab, elxb, acquisition,...   420     0

And a description of the dataframe:

<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 557 entries, 2000-01-27 to 2017-10-06
Data columns (total 13 columns):
Datum               557 non-null datetime64[ns]
File                557 non-null object
File_type           557 non-null object
Text                557 non-null object
customers           557 non-null int64
grwoth              557 non-null int64
human               557 non-null int64
intagibles          557 non-null int64
length              557 non-null int64
synergies           557 non-null int64
technology          557 non-null int64
the                 557 non-null int64
len_cleaned_text    557 non-null int64
dtypes: datetime64[ns](1), int64(9), object(3)
memory usage: 60.9+ KB

Thanks in advance


Can you provide a dataframe and an MCVE example? - JE_Muc
If you want to give it a try: https://medium.com/@cristhianboujon/how-to-list-the-most-common-words-from-text-corpus-using-scikit-learn-dad4d0cab41d - iamklaus
3 Answers


OK, I got it. Your df['Text'] consists of lists of texts. So you can do it like this:

full_list = []  # list containing all words of all texts
for elmnt in df['Text']:  # loop over lists in df
    full_list += elmnt  # append elements of lists to full list

val_counts = pd.Series(full_list).value_counts()  # make temporary Series to count

This solution avoids excessive list comprehensions, which keeps the code easy to read and understand. Besides that, no additional modules such as collections are needed.
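To keep only the 500 most frequent words (the number asked for in the title), the resulting Series can simply be sliced, since value_counts already sorts in descending order:

top_500 = val_counts[:500]  # Series mapping word -> count for the 500 most frequent words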

I tried the value counts... it counts each whole text as one. - user10395806
It works with the sample dataframe you provided. Could you provide a sample dataframe containing the first 10 rows of your data? Please include the code snippet needed to build that dataframe. - JE_Muc

Here is my version, where I convert the column values to a list, then build one list of all the words, clean it, and get the counter:
import ast
import collections

your_text_list = df['Text'].tolist()
your_text_list_nan_rm = [x for x in your_text_list if str(x) != 'nan']  # drop NaN rows
flat_list = [inner for item in your_text_list_nan_rm for inner in ast.literal_eval(item)]

counter = collections.Counter(flat_list)
top_words = counter.most_common(100)
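Note that ast.literal_eval assumes each cell stores the string representation of a list; if df['Text'] already holds actual Python lists (as the apply(word_tokenize) step suggests), the flattening can skip it:

flat_list = [token for tokens in your_text_list_nan_rm for token in tokens]  # cells are real lists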

You can do it via apply and the Counter.update method:
import pandas as pd
from collections import Counter

counter = Counter()
df = pd.DataFrame({'Text': values})  # `values`: the token lists used to build this example (not shown here)
_ = df['Text'].apply(lambda x: counter.update(x))

counter.most_common(10)
Out:

[('Amy', 3), ('was', 3), ('hated', 2),
 ('Kamal', 2), ('her', 2), ('and', 2), 
 ('she', 2), ('She', 2), ('sent', 2), ('text', 2)]

其中df['Text']是:

0    [Amy, normally, hated, Monday, mornings, but, ...
1    [Kamal, was, in, her, art, class, and, she, li...
2    [She, was, waiting, outside, the, classroom, w...
3              [Hi, Amy, Your, mum, sent, me, a, text]
4                         [You, forgot, your, inhaler]
5    [Why, don’t, you, turn, your, phone, on, Amy, ...
6    [She, never, sent, text, messages, and, she, h...
Name: Text, dtype: object
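Since the question asks for the 500 most frequent words, the same counter just needs a larger argument:

top_500 = counter.most_common(500)  # list of (word, count) pairs, most frequent first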
