Counting bigram frequencies

3

I've written some code that essentially counts word frequencies and inserts them into an ARFF file for use with Weka. I'd like to modify it so that it counts bigram frequencies, i.e. pairs of words instead of single words, though my attempts so far have come to nothing at best.

I realise there's a lot to look through, but any help with this would be much appreciated. Here's my code:

    import re
    import nltk

    # Quran subset
    filename = raw_input('Enter name of file to convert to ARFF with extension, eg. name.txt: ')

    # create list of lower case words
    word_list = re.split('\s+', file(filename).read().lower())
    print 'Words in text:', len(word_list)
    # punctuation and numbers to be removed
    punctuation = re.compile(r'[-.?!,":;()|0-9]')
    word_list = [punctuation.sub("", word) for word in word_list]

    word_list2 = [w.strip() for w in word_list if w.strip() not in nltk.corpus.stopwords.words('english')]



    # create dictionary of word:frequency pairs
    freq_dic = {}


    for word in word_list2:

        # form dictionary
        try: 
            freq_dic[word] += 1
        except: 
            freq_dic[word] = 1


    print '-'*30

    print "sorted by highest frequency first:"
    # create list of (val, key) tuple pairs
    freq_list2 = [(val, key) for key, val in freq_dic.items()]
    # sort by val or frequency
    freq_list2.sort(reverse=True)
    freq_list3 = list(freq_list2)
    # display result as top 10 most frequent words
    freq_list4 =[]
    freq_list4=freq_list3[:10]

    words = []

    for item in freq_list4:
        a = str(item[1])
        a = a.lower()
        words.append(a)



    f = open(filename)

    newlist = []

    for line in f:
        line = punctuation.sub("", line)
        line = line.lower()
        newlist.append(line)

    f2 = open('Lines.txt','w')

    newlist2= []
    for line in newlist:
        line = line.split()
        newlist2.append(line)
        f2.write(str(line))
        f2.write("\n")


    print newlist2

    # ARFF Creation

    arff = open('output.arff','w')
    arff.write('@RELATION wordfrequency\n\n')
    for word in words:
        arff.write('@ATTRIBUTE ')
        arff.write(str(word))
        arff.write(' numeric\n')

    arff.write('@ATTRIBUTE class {endofworld, notendofworld}\n\n')
    arff.write('@DATA\n')
    # Counting word frequencies for each verse
    for line in newlist2:
        word_occurrences = str("")
        for word in words:
            matches = int(0)
            for item in line:
                if str(item) == str(word):
                    matches = matches + 1
                else:
                    continue
            word_occurrences = word_occurrences + str(matches) + ","
        word_occurrences = word_occurrences + "endofworld"
        arff.write(word_occurrences)
        arff.write("\n")

    print words
4 Answers

5
This should get you started:
def bigrams(words):
    wprev = None
    for w in words:
        yield (wprev, w)
        wprev = w

Note that the first bigram is (None, w1) where w1 is the first word, so you have a special bigram that marks the start of the text. If you also want an end-of-text bigram, add yield (wprev, None) after the loop.
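As a quick sanity check, here is a minimal run of the generator above (reproduced so the snippet is self-contained):

```python
def bigrams(words):
    # yields (previous_word, word) pairs; the first pair is (None, w1)
    wprev = None
    for w in words:
        yield (wprev, w)
        wprev = w

print(list(bigrams(['the', 'cat', 'sat'])))
# [(None, 'the'), ('the', 'cat'), ('cat', 'sat')]
```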

It would be better if the first item were (first_word, second_word) rather than (None, first_word), so the caller wouldn't need to special-case the first item. - Steven Rumbalski
@Steven: having a special bigram that marks the start of the text is quite common in text processing. In fact, in a real application I would also add yield (wprev, None) at the end. - Fred Foo
1
The idea in this answer is the same as the pairwise iterator recipe in the itertools module documentation (see http://docs.python.org/library/itertools.html#recipes). - Steven Rumbalski

3

Generalised to n-grams with optional padding; also uses defaultdict(int) for counting frequencies. Works in 2.6:

from collections import defaultdict

def ngrams(words, n=2, padding=False):
    "Compute n-grams with optional padding"
    pad = [] if not padding else [None]*(n-1)
    grams = pad + words + pad
    return (tuple(grams[i:i+n]) for i in range(0, len(grams) - (n - 1)))

# grab n-grams
words = ['the','cat','sat','on','the','dog','on','the','cat']
for size, padding in ((3, 0), (4, 0), (2, 1)):
    print '\n%d-grams padding=%d' % (size, padding)
    print list(ngrams(words, size, padding))

# show frequency
counts = defaultdict(int)
for ng in ngrams(words, 2, False):
    counts[ng] += 1

print '\nfrequencies of bigrams:'
for c, ng in sorted(((c, ng) for ng, c in counts.iteritems()), reverse=True):
    print c, ng

Output:

3-grams padding=0
[('the', 'cat', 'sat'), ('cat', 'sat', 'on'), ('sat', 'on', 'the'), 
 ('on', 'the', 'dog'), ('the', 'dog', 'on'), ('dog', 'on', 'the'), 
 ('on', 'the', 'cat')]

4-grams padding=0
[('the', 'cat', 'sat', 'on'), ('cat', 'sat', 'on', 'the'), 
 ('sat', 'on', 'the', 'dog'), ('on', 'the', 'dog', 'on'), 
 ('the', 'dog', 'on', 'the'), ('dog', 'on', 'the', 'cat')]

2-grams padding=1
[(None, 'the'), ('the', 'cat'), ('cat', 'sat'), ('sat', 'on'), 
 ('on', 'the'), ('the', 'dog'), ('dog', 'on'), ('on', 'the'), 
 ('the', 'cat'), ('cat', None)]

frequencies of bigrams:
2 ('the', 'cat')
2 ('on', 'the')
1 ('the', 'dog')
1 ('sat', 'on')
1 ('dog', 'on')
1 ('cat', 'sat')
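On Python 2.7+, the defaultdict counting loop above can be replaced by feeding the generator straight into collections.Counter (the ngrams function is repeated here so the snippet stands alone); the counts come out the same:

```python
from collections import Counter

def ngrams(words, n=2, padding=False):
    # same idea as above: sliding n-word windows, optionally padded with None
    pad = [] if not padding else [None] * (n - 1)
    grams = pad + words + pad
    return (tuple(grams[i:i + n]) for i in range(0, len(grams) - (n - 1)))

words = ['the', 'cat', 'sat', 'on', 'the', 'dog', 'on', 'the', 'cat']
counts = Counter(ngrams(words, 2))
print(counts[('the', 'cat')])  # 2
print(counts[('on', 'the')])   # 2
```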

1

0

I've rewritten the first part for you, because it was clunky. Things to note:

  1. Use list comprehensions where possible.
  2. collections.Counter is great!

OK, here's the code:

import re
import nltk
import collections

# Quran subset
filename = raw_input('Enter name of file to convert to ARFF with extension, eg. name.txt: ')

# punctuation and numbers to be removed
punctuation = re.compile(r'[-.?!,":;()|0-9]')

# create list of lower case words
word_list = re.split('\s+', open(filename).read().lower())
print 'Words in text:', len(word_list)

words = (punctuation.sub("", word).strip() for word in word_list)
words = (word for word in words if word not in nltk.corpus.stopwords.words('english'))

# create dictionary of word:frequency pairs
frequencies = collections.Counter(words)

print '-'*30

print "sorted by highest frequency first:"
# create list of (val, key) tuple pairs
print frequencies

# display result as top 10 most frequent words
print frequencies.most_common(10)

[word for word, frequency in frequencies.most_common(10)]

Nice rewrite, but you don't answer the question, which asks for the code to be modified to count bigram frequencies. - Steven Rumbalski
I can't seem to work out why it keeps throwing an error. frequencies = collections.Counter(words) AttributeError: 'module' object has no attribute 'Counter' - Alex
2
@Alex: you're on Python 2.6 or lower; Counter was introduced in 2.7. Either upgrade, or write your own Counter... - Katriel
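For reference, a minimal stand-in for Counter on 2.6 (a sketch of the "write your own" option above; it only does the counting, not the full Counter API) is just a defaultdict loop:

```python
from collections import defaultdict

def count(iterable):
    # bare-bones Counter substitute: counts only, no most_common() etc.
    freq = defaultdict(int)
    for item in iterable:
        freq[item] += 1
    return dict(freq)

print(count(['a', 'b', 'a']))  # {'a': 2, 'b': 1}
```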

Content provided by Stack Overflow.