Fast character n-grams for word processing

23
I wrote the following code to compute character bigrams, and the output is shown below. My question is: how do I get output that excludes the final single character (the trailing 't')? And is there a faster, more efficient way to compute character n-grams?
>>> b='student'
>>> y=[]
>>> for x in range(len(b)):
    n=b[x:x+2]
    y.append(n)
>>> y
['st', 'tu', 'ud', 'de', 'en', 'nt', 't']

Here is the result I want to get: ['st', 'tu', 'ud', 'de', 'en', 'nt']

Thanks in advance for any suggestions.


4 Answers

47

To generate bigrams:

In [8]: b='student'

In [9]: [b[i:i+2] for i in range(len(b)-1)]
Out[9]: ['st', 'tu', 'ud', 'de', 'en', 'nt']

To generalize to other values of n:

In [10]: n=4

In [11]: [b[i:i+n] for i in range(len(b)-n+1)]
Out[11]: ['stud', 'tude', 'uden', 'dent']
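The comprehension above can be wrapped in a small helper; here is a minimal sketch (the name `char_ngrams` is my own, not from the answer), which also makes the edge case explicit:

```python
def char_ngrams(text, n):
    """Return all character n-grams of text as a list of strings.

    Returns [] when len(text) < n, since no full window of size n fits.
    """
    return [text[i:i + n] for i in range(len(text) - n + 1)]

print(char_ngrams('student', 2))  # ['st', 'tu', 'ud', 'de', 'en', 'nt']
print(char_ngrams('student', 4))  # ['stud', 'tude', 'uden', 'dent']
```

Because the range stops at `len(text) - n + 1`, the dangling single-character tail from the question never appears.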

13

Try using zip:

>>> def word2ngrams(text, n=3, exact=True):
...   """ Convert text into character ngrams. """
...   return ["".join(j) for j in zip(*[text[i:] for i in range(n)])]
... 
>>> word2ngrams('foobarbarblacksheep')
['foo', 'oob', 'oba', 'bar', 'arb', 'rba', 'bar', 'arb', 'rbl', 'bla', 'lac', 'ack', 'cks', 'ksh', 'she', 'hee', 'eep']

But note that it is slower:

import string, random, time

def zip_ngrams(text, n=3, exact=True):
  return ["".join(j) for j in zip(*[text[i:] for i in range(n)])]

def nozip_ngrams(text, n=3):
    return [text[i:i+n] for i in range(len(text)-n+1)]

# Generate 10000 random strings of length 100.
words = [''.join(random.choice(string.ascii_uppercase) for j in range(100)) for i in range(10000)]

start = time.time()
x = [zip_ngrams(w) for w in words]
print(time.time() - start)

start = time.time()
y = [nozip_ngrams(w) for w in words]
print(time.time() - start)

print(x == y)
[Output]:
0.314492940903
0.197558879852
True
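For less noisy measurements than a single `time.time()` delta, the same comparison can be run with the standard library's `timeit` module. A sketch (function names are my own; the bodies match the two approaches above):

```python
import timeit

def zip_ngrams(text, n=3):
    # Transpose n staggered copies of the string, then join each column.
    return ["".join(j) for j in zip(*[text[i:] for i in range(n)])]

def slice_ngrams(text, n=3):
    # Take one slice per window position.
    return [text[i:i + n] for i in range(len(text) - n + 1)]

word = "A" * 100

# timeit runs each callable many times and reports total seconds,
# averaging away per-call timer jitter.
t_zip = timeit.timeit(lambda: zip_ngrams(word), number=10000)
t_slice = timeit.timeit(lambda: slice_ngrams(word), number=10000)

print("zip:  ", t_zip)
print("slice:", t_slice)
print(zip_ngrams(word) == slice_ngrams(word))  # both produce the same n-grams
```

The slicing version avoids building n staggered copies of the string and the per-window join, which is why it tends to come out ahead.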

2
Though this is a late answer, NLTK has a built-in function for ngrams.
# python 3
from nltk import ngrams
["".join(k1) for k1 in list(ngrams("hello world",n=3))]

['hel', 'ell', 'llo', 'lo ', 'o w', ' wo', 'wor', 'orl', 'rld']

0

This function gives you all ngrams from n = 1 up to n:

def getNgrams(sentences, n):
    ngrams = []
    for sentence in sentences:
        _ngrams = []
        for _n in range(1, n + 1):
            # Start at position 0 and include the final window; the
            # original range(1, len(sentence)-_n) dropped the first and
            # last n-gram of every sentence.
            for pos in range(len(sentence) - _n + 1):
                _ngrams.append(sentence[pos:pos + _n])
        ngrams.append(_ngrams)
    return ngrams
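Since character n-grams are usually collected to be counted, a `collections.Counter` pairs naturally with the slicing approach. A sketch (the helper name `ngram_counts` is hypothetical, not from any answer above):

```python
from collections import Counter

def ngram_counts(words, n=2):
    """Count character n-grams across a list of words (hypothetical helper)."""
    counts = Counter()
    for w in words:
        counts.update(w[i:i + n] for i in range(len(w) - n + 1))
    return counts

counts = ngram_counts(["student", "study"])
print(counts["st"])  # 2 -- appears once in each word
print(counts["dy"])  # 1 -- only in "study"
```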
