Generic Human's answer is great. But the best implementation of this I've ever seen was written by Peter Norvig himself in his book Beautiful Data.
Before I paste his code, let me explain why Norvig's method is more accurate (although a little slower and longer in terms of code):
- The data is better, both in terms of size and precision (he uses word counts rather than a simple ranking)
- More importantly, it's the logic behind n-grams that really makes the approach so accurate.
The example he gives in the book is the problem of splitting the string 'sitdown'. A non-bigram method of splitting would consider p('sit') * p('down'), and if this is less than p('sitdown') - which will be the case quite often - it will NOT split it, but we would want it to (most of the time).
However, when you have the bigram model, you can value p('sit down') as a bigram vs. p('sitdown'), and the former wins. Basically, if you don't use bigrams, you treat the probabilities of the words you are splitting as independent, which is not the case: some words are more likely to appear one after another. Unfortunately, those are also the words that often get stuck together and end up confusing the splitter.
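To make that concrete, here is a tiny numeric sketch of the comparison. All the probabilities below are invented for illustration; the real values come from the count files linked below:
# All probabilities here are made up for illustration only.
p_sit     = 1e-4    # hypothetical unigram P('sit')
p_down    = 5e-4    # hypothetical unigram P('down')
p_sitdown = 1e-7    # hypothetical P('sitdown') seen as one token

# Unigram model multiplies the parts as if independent:
unigram_score = p_sit * p_down      # 5e-08 < 1e-07, so 'sitdown' is NOT split

# Bigram model scores the pair jointly, and P('sit down') is much
# larger than the independence assumption predicts:
p_sit_down = 2e-5                   # hypothetical bigram P('sit down')
print(unigram_score < p_sitdown)    # True -> unigram model keeps 'sitdown' whole
print(p_sit_down > p_sitdown)       # True -> bigram model splits it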
Here is the link to the data (it is data for 3 separate problems, and segmentation is only one of them; please read the chapter for details): http://norvig.com/ngrams/
And here is the link to the code: http://norvig.com/ngrams/ngrams.py
These links have been up a while, but I'll copy-paste the segmentation part of the code here anyway. Note that ngrams.py is written in Python 2; the version below is lightly ported to Python 3 (print(), functools.reduce, open() instead of file(), values() instead of itervalues(), no tuple parameters), with the logic left untouched.
import re, string, random, glob, operator, heapq
from collections import defaultdict
from functools import reduce  # in Python 3, reduce lives in functools
from math import log10
def memo(f):
"Memoize function f."
table = {}
def fmemo(*args):
if args not in table:
table[args] = f(*args)
return table[args]
fmemo.memo = table
return fmemo
def test(verbose=None):
"""Run some tests, taken from the chapter.
Since the hillclimbing algorithm is randomized, some tests may fail."""
import doctest
    print('Running tests...')
doctest.testfile('ngrams-test.txt', verbose=verbose)
@memo
def segment(text):
"Return a list of words that is the best segmentation of text."
if not text: return []
candidates = ([first]+segment(rem) for first,rem in splits(text))
return max(candidates, key=Pwords)
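# @memo caches segment's result for each suffix of the input, turning the
# exponential recursion into dynamic programming: at most len(text) distinct
# subproblems, each trying at most L candidate splits.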
def splits(text, L=20):
"Return a list of all possible (first, rem) pairs, len(first)<=L."
return [(text[:i+1], text[i+1:])
for i in range(min(len(text), L))]
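# For example, splits('both') -> [('b', 'oth'), ('bo', 'th'), ('bot', 'h'), ('both', '')].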
def Pwords(words):
"The Naive Bayes probability of a sequence of words."
return product(Pw(w) for w in words)
def product(nums):
"Return the product of a sequence of numbers."
return reduce(operator.mul, nums, 1)
class Pdist(dict):
"A probability distribution estimated from counts in datafile."
def __init__(self, data=[], N=None, missingfn=None):
for key,count in data:
self[key] = self.get(key, 0) + int(count)
        self.N = float(N or sum(self.values()))  # itervalues() is gone in Python 3
self.missingfn = missingfn or (lambda k, N: 1./N)
def __call__(self, key):
if key in self: return self[key]/self.N
else: return self.missingfn(key, self.N)
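# Pdist holds raw counts but is *called* like a function: Pw('word') returns
# count/N for known words and falls back to missingfn for unseen ones.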
def datafile(name, sep='\t'):
"Read key,value pairs from file."
    for line in open(name):  # file() was removed in Python 3
yield line.split(sep)
def avoid_long_words(key, N):
"Estimate the probability of an unknown word."
return 10./(N * 10**len(key))
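# The unknown-word estimate decays by a factor of 10 per character, so long
# unseen strings are heavily penalized and the segmenter prefers breaking
# them into known words.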
N = 1024908267229
Pw = Pdist(datafile('count_1w.txt'), N, avoid_long_words)
def cPw(word, prev):
"Conditional probability of word, given previous word."
try:
return P2w[prev + ' ' + word]/float(Pw[prev])
except KeyError:
return Pw(word)
P2w = Pdist(datafile('count_2w.txt'), N)
@memo
def segment2(text, prev='<S>'):
"Return (log P(words), words), where words is the best segmentation."
if not text: return 0.0, []
candidates = [combine(log10(cPw(first, prev)), first, segment2(rem, first))
for first,rem in splits(text)]
return max(candidates)
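# segment2 adds log10 probabilities instead of multiplying raw ones, which
# avoids floating-point underflow on long inputs; prev threads the bigram
# context through the recursion.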
def combine(Pfirst, first, rest):
    "Combine first and rem results into one (probability, words) pair."
    Prem, rem = rest  # Python 3 removed tuple parameters, so unpack manually
    return Pfirst + Prem, [first] + rem
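Usage then looks like this, assuming count_1w.txt and count_2w.txt from the data link above are in the working directory (exact outputs depend on those files):
print(segment('sitdown'))     # unigram model
print(segment2('sitdown'))    # bigram model; expected to prefer ['sit', 'down']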
- reclosedev