How do I train a tokenizer on a large dataset?


Following the example, I am trying to train a tokenizer and a T5 model for Persian. I am using Google Colab Pro, and when I try to run the following code:

import datasets

from t5_tokenizer_model import SentencePieceUnigramTokenizer


vocab_size = 32_000
input_sentence_size = None # setting this to 100_000 works

# Initialize a dataset
dataset = datasets.load_dataset("oscar", name="unshuffled_deduplicated_fa", split="train")

tokenizer = SentencePieceUnigramTokenizer(unk_token="<unk>", eos_token="</s>", pad_token="<pad>")

print("len dataset:", len(dataset))

# Build an iterator over this dataset
def batch_iterator(input_sentence_size=None):
    if input_sentence_size is None:
        input_sentence_size = len(dataset)
    batch_length = 100
    for i in range(0, input_sentence_size, batch_length):
        yield dataset[i: i + batch_length]["text"]


# Train tokenizer
tokenizer.train_from_iterator(
    iterator=batch_iterator(input_sentence_size=input_sentence_size),
    vocab_size=vocab_size,
    show_progress=True,
)

# Save files to disk
tokenizer.save("/content/drive/MyDrive/Pouramini/tokenizer.json")

Because the dataset is large (input_sentence_size is about 8M sentences), training gets stuck in train_from_iterator. How can I run the code on chunks of the dataset and then merge them into a single tokenizer output?


Partitioning might be applicable. - Nazmul81
1 Answer


Have you tried using an iterable dataset?

dataset = datasets.load_dataset("oscar", name="unshuffled_deduplicated_fa", split="train", streaming=True)

tokenizer = SentencePieceUnigramTokenizer(unk_token="<unk>", eos_token="</s>", pad_token="<pad>")

def batch_iterator(dataset):
    for i in dataset:
        yield i["text"]
