Python - Increase efficiency of big file search by readlines(size)

7

I am new to Python and currently using Python 2. I have some source files, each of which contains a huge amount of data (approximately 19 million lines). It looks like the following:

apple   \t N   \t apple
n&apos
garden  \t N   \t garden
b\ta\md 
great   \t Adj \t great
nice    \t Adj \t (unknown)
etc

My task is to search the 3rd column of each file for a target word, and every time a target word is found in the corpus, the 10 words before and after it have to be added to a multidimensional dictionary.
EDIT: lines containing a '&', a '\', or the string '(unknown)' should be excluded.
I tried to solve this with readlines() and enumerate(), as shown in the code below. The code does what it should, but it is obviously not efficient enough for the amount of data in the source files.
I know that readlines() or read() should not be used for huge data sets, as they load the whole file into memory. Nevertheless, reading the file line by line, I did not manage to use the enumerate method to get the 10 words before and after a target word. I also cannot use mmap, since I do not have permission to use it on those files.
So, I figure the readlines method with some size limitation would be the most efficient solution. However, that way, the 10 words after a target word would not be captured whenever the size limit is reached, since the code just stops there.
def get_target_to_dict(file):
    targets_dict = {}
    with open(file) as f:
        for line in f:
            targets_dict[line.strip()] = {}
    return targets_dict

targets_dict = get_target_to_dict('targets_uniq.txt')
import os
import re
import csv
import gzip

# browse directory and process each file 
# find the target words to include the 10 words before and after to the dictionary
# exclude lines starting with <,-,; to just have raw text

def get_co_occurence(path_file_dir, targets, results):
    lines = []
    for file in os.listdir(path_file_dir):
        if file.startswith('corpus'):
            path_file = os.path.join(path_file_dir, file)
            with gzip.open(path_file) as corpusfile:
                # PROBLEMATIC CODE HERE
                # lines = corpusfile.readlines()
                for line in corpusfile:
                    if re.match('[A-Z]|[a-z]', line):
                        if '(unknown)' in line:
                            continue
                        elif '\\' in line:
                            continue
                        elif '&' in line:
                            continue
                        lines.append(line)
                for i, line in enumerate(lines):
                    line = line.strip()
                    if re.match('[A-Z]|[a-z]', line):
                        parts = line.split('\t')
                        lemma = parts[2]
                        if lemma in targets:
                            pos = parts[1]
                            if pos not in targets[lemma]:
                                targets[lemma][pos] = {}
                            counts = targets[lemma][pos]
                            context = []
                            # look at 10 previous lines
                            for j in range(max(0, i-10), i):
                                context.append(lines[j])
                            # look at the next 10 lines
                            for j in range(i+1, min(i+11, len(lines))):
                                context.append(lines[j])
                            # END OF PROBLEMATIC CODE
                            for context_line in context:
                                context_line = context_line.strip()
                                parts_context = context_line.split('\t')
                                context_lemma = parts_context[2]
                                if context_lemma not in counts:
                                    counts[context_lemma] = {}
                                context_pos = parts_context[1]
                                if context_pos not in counts[context_lemma]:
                                    counts[context_lemma][context_pos] = 0
                                counts[context_lemma][context_pos] += 1
                csvwriter = csv.writer(results, delimiter='\t')
                for k,v in targets.iteritems():
                    for k2,v2 in v.iteritems():
                        for k3,v3 in v2.iteritems():
                            for k4,v4 in v3.iteritems():
                                csvwriter.writerow([str(k), str(k2), str(k3), str(k4), str(v4)])
                                #print(str(k) + "\t" + str(k2) + "\t" + str(k3) + "\t" + str(k4) + "\t" + str(v4))

results = open('results_corpus.csv', 'wb')
word_occurrence = get_co_occurence(path_file_dir, targets_dict, results)

For reasons of completeness, I copied this whole part of the code, since it is all part of one function that creates the multidimensional dictionary out of all the extracted information and then writes it to a csv file.

I would really appreciate any hints or suggestions to make this code more efficient.

EDIT I corrected the code so that it takes into account the exact 10 words before and after the target word.


You can do this efficiently using map, filter, groupby and islice. - Eli Korvigo
Thank you, I have read up on it and it looks really efficient. Would you mind explaining the code above in a bit more detail? With map, I would surely have to convert corpusfile into a list, right? - dani_anyman
Are you searching for the previous 10 words within the column, or simply the previous 10 words? - Eli Korvigo
I am looking for exactly the previous 10 words in the third column. - dani_anyman
This might be a better fit for Code Review Stack Exchange. - Xiong Chiamiov
2 Answers

3
My idea is to create one buffer to store the 10 lines before and another buffer to store the 10 lines after; as the file is being read, each line is pushed into the before-buffer, and the buffer is popped off if its size exceeds 10.
For the after-buffer, I clone another iterator from the file iterator. Then both iterators are run in parallel within the loop, with the clone iterator running 10 iterations ahead to get the 10 lines after.
This avoids using readlines() and loading the whole file into memory. Hope it works for you in the actual case.
Edited: only fill the before/after buffers if column 3 does not contain any of '&', '\', '(unknown)'. Also changed split('\t') to just split() so it handles any whitespace or tabs.
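As a quick illustration of how itertools.tee from the standard library behaves: it gives independent iterators over the same underlying source and only buffers the items between their current positions, so keeping the clone about 10 lines ahead stays cheap on memory:

import itertools

source = iter(range(5))
a, b = itertools.tee(source)   # two independent iterators over one source
next(b)                        # advance the clone one step ahead (returns 0)
print(next(a))                 # 0 - the original iterator is unaffected
print(next(b))                 # 1 - the clone stays one step ahead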
import os
import re
import itertools
def get_co_occurence(path_file_dir, targets, results):
    excluded_words = ['&', '\\', '(unknown)'] # modify excluded words here 
    for file in os.listdir(path_file_dir): 
        if file.startswith('testset'): 
            path_file = os.path.join(path_file_dir, file) 
            with open(path_file) as corpusfile: 
                # CHANGED CODE HERE
                before_buf = [] # buffer for the 10 lines before 
                after_buf = []  # buffer for the 10 lines after 
                corpusfile, corpusfile_clone = itertools.tee(corpusfile) # clone file iterator to access next 10 lines 
                for line in corpusfile: 
                    line = line.strip() 
                    if re.match('[A-Z]|[a-z]', line): 
                        parts = line.split() 
                        lemma = parts[2]

                        # before-buffer handling: only add the line if it contains none of the excluded words 
                        if not any(w in line for w in excluded_words): 
                            before_buf.append(line) # append to before buffer 
                        if len(before_buf)>11: 
                            before_buf.pop(0) # keep at most 11 lines (10 before + the current line) 
                        # next buffer handling
                        while len(after_buf)<=10: 
                            try: 
                                after = next(corpusfile_clone) # advance 1 iterator 
                                after_lemma = '' 
                                after_tmp = after.split()
                                if re.match('[A-Z]|[a-z]', after) and len(after_tmp)>2: 
                                    after_lemma = after_tmp[2]
                            except StopIteration: 
                                break # the cloned iterator runs ~10 lines ahead, so it is exhausted first 
                            if after_lemma and not any(w in after for w in excluded_words): 
                                after_buf.append(after) # append to buffer
                                # print 'after',z,after, ' - ',after_lemma
                        if (after_buf and line in after_buf[0]):
                            after_buf.pop(0) # pop off one ready for next

                        if lemma in targets: 
                            pos = parts[1] 
                            if pos not in targets[lemma]: 
                                targets[lemma][pos] = {} 
                            counts = targets[lemma][pos] 
                            # context = [] 
                            # look at 10 previous lines 
                            context= before_buf[:-1] # minus out current line 
                            # look at the next 10 lines 
                            context.extend(after_buf) 

                            # END OF CHANGED CODE
                            # CONTINUE YOUR STUFF HERE WITH CONTEXT
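From there, the counting part of the question's original code can be dropped in where the comment says to continue, roughly along these lines (a sketch reusing the counts and context variables from above, not tested against the real corpus):

for context_line in context:
    parts_context = context_line.split()
    if len(parts_context) > 2:
        context_lemma, context_pos = parts_context[2], parts_context[1]
        counts.setdefault(context_lemma, {}).setdefault(context_pos, 0)
        counts[context_lemma][context_pos] += 1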

Wow, great idea! Thank you very much for your help and the code. I will try it later today and give you feedback right away. - dani_anyman
It looks like your original code doesn't do what you described; it just takes the 10 lines before and after regardless of content and only checks the context while processing, so if 2 of the 10 preceding lines contain invalid words such as unknown, you are left with only 8 lines. So what you want is to filter and make sure that all 10 lines in the before and after buffers are valid, free of any of the filtered words, am I right? I will try to edit my code later to do that. - Skycc
Modified the answer to do what you require, hope it works for you :) - Skycc
1
Let us continue this discussion in chat: http://chat.stackoverflow.com/rooms/128102/discussion-between-skycc-and-dani-anyman - Skycc
I see, what you added is logically correct: if it doesn't match, it should be excluded, which is exactly what the advance-by-1 iteration does; it only advances the corpusfile_clone iterator. I did it a bit differently, but the end result should be the same. - Skycc

1
A functional alternative in Python 3.5. I've simplified your example and only take 5 words on both sides. There are other simplifications with respect to junk-value filtering, but they only require minor modifications. I will use the fn package from PyPI to make this functional code more natural to read.
from typing import List, Tuple
from itertools import groupby, filterfalse
from fn import F

First of all, we need to extract the column:
def getcol3(line: str) -> str:
    return line.split("\t")[2]
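For example, on stripped lines like the ones in the question:

getcol3("apple\tN\tapple")       # -> 'apple'
getcol3("nice\tAdj\t(unknown)")  # -> '(unknown)'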

Then we need to break the lines into groups by a predicate:

TARGET_WORDS = {"target1", "target2"}

# this is our predicate
def istarget(word: str) -> bool:
    return word in TARGET_WORDS        

Let's filter out the junk and write a function to take the last and the first 5 words:
def isjunk(word: str) -> bool:
    return word == "(unknown)"

def first_and_last(words: List[str]) -> Tuple[List[str], List[str]]:
    first = words[:5]
    last = words[-5:]
    return first, last
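For example, on a group shorter than 10 words the two slices simply overlap:

first_and_last(["w1", "w2", "w3", "w4", "w5", "w6", "w7"])
# -> (['w1', 'w2', 'w3', 'w4', 'w5'], ['w3', 'w4', 'w5', 'w6', 'w7'])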

Now, let's get the groups:
words = (F() >> (map, str.strip) >> (filter, bool) >> (map, getcol3) >> (filterfalse, isjunk))(lines)
groups = groupby(words, istarget)
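For readers who don't use fn, the F() pipeline above just chains the built-ins, and groupby then yields (key, group) pairs for consecutive runs of words that share the same istarget result; each group is a lazy iterator, so consume it before advancing. A rough equivalent without fn, assuming the helpers defined above:

from itertools import filterfalse, groupby

stripped = map(str.strip, lines)      # remove trailing newlines
nonempty = filter(bool, stripped)     # drop empty lines
lemmas = map(getcol3, nonempty)       # keep only the third column
words = filterfalse(isjunk, lemmas)   # drop '(unknown)' entries
groups = groupby(words, istarget)     # consecutive runs keyed by istarget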

Now, process the groups:
def is_target_group(group: Tuple[str, List[str]]) -> bool:
    return istarget(group[0])

def unpack_word_group(group: Tuple[str, List[str]]) -> List[str]:
    return [*group[1]]

def unpack_target_group(group: Tuple[str, List[str]]) -> List[str]:
    return [group[0]]

def process_group(group: Tuple[str, List[str]]):
    return (unpack_target_group(group) if is_target_group(group) 
            else first_and_last(unpack_word_group(group)))

And the final steps are:

words = list(map(process_group, groups))

P.S.

Here is my test case:

from io import StringIO

buffer = """
_\t_\tword
_\t_\tword
_\t_\tword
_\t_\t(unknown)
_\t_\tword
_\t_\tword
_\t_\ttarget1
_\t_\tword
_\t_\t(unknown)
_\t_\tword
_\t_\tword
_\t_\tword
_\t_\ttarget2
_\t_\tword
_\t_\t(unknown)
_\t_\tword
_\t_\tword
_\t_\tword
_\t_\t(unknown)
_\t_\tword
_\t_\tword
_\t_\ttarget1
_\t_\tword
_\t_\t(unknown)
_\t_\tword
_\t_\tword
_\t_\tword
"""

# this simulates an opened file
lines = StringIO(buffer)

Given this file, you will get this output:
[(['word', 'word', 'word', 'word', 'word'],
  ['word', 'word', 'word', 'word', 'word']),
 (['target1'], ['target1']),
 (['word', 'word', 'word', 'word'], ['word', 'word', 'word', 'word']),
 (['target2'], ['target2']),
 (['word', 'word', 'word', 'word', 'word'],
  ['word', 'word', 'word', 'word', 'word']),
 (['target1'], ['target1']),
 (['word', 'word', 'word', 'word'], ['word', 'word', 'word', 'word'])]

Starting from here you can remove the first 5 words and the last 5 words.
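For instance, one way to pair each target with its 5 words of context on each side from that structure could be (a rough sketch with simple edge handling):

for i, (first, last) in enumerate(words):
    if first and first[0] in TARGET_WORDS:  # this element came from a target group
        before = words[i - 1][1] if i > 0 else []              # last 5 words of the previous group
        after = words[i + 1][0] if i + 1 < len(words) else []  # first 5 words of the next group
        print(first[0], before, after)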
