I am trying to train an LSTM with pre-trained 100-dimensional word2vec embeddings.
@staticmethod
def load_embeddings(pre_trained_embeddings_path, word_embed_size):
    import codecs
    import time

    embd = []
    cnt = 0  # progress counter (was accidentally starting at 4)
    start_time = time.time()
    with codecs.open(pre_trained_embeddings_path, mode="r", encoding='utf-8') as f:
        for line in f:  # iterate lazily instead of f.readlines() to avoid holding the raw text in memory
            values = line.strip().split(' ')
            embd.append(values[1:])  # drop the word itself, keep the vector components
            cnt += 1
            if cnt % 100000 == 0:
                print("word-vectors loaded: %d" % cnt)
    embedding, vocab_size, embed_dim = embd, len(embd), len(embd[0])
    load_end_time = time.time()
    print("word vectors loaded, starting initialisation, cnt: %d, time taken: %d secs" % (vocab_size, load_end_time - start_time))
    embedding_init = tf.constant_initializer(embedding, dtype=tf.float16)
    src_word_embedding = tf.get_variable(shape=[vocab_size, embed_dim], initializer=embedding_init, trainable=False, name='word_embedding', dtype=tf.float16)
    print("word-vectors loaded and initialised, cnt: %d, time taken: %d secs" % (vocab_size, time.time() - load_end_time))
    return src_word_embedding
The output of this method is as follows:
word vectors loaded, starting initialisation, cnt: 2419080, time taken: 74 secs
word-vectors loaded and initialised, cnt: 2419080, time taken: 1647 secs
System info: tensorflow 1.1.0, tcmalloc, python 3.6, ubuntu 14.04
Half an hour for the initialisation step seems extremely slow. Is this normal, or is something wrong? Any ideas?
Update: loading the embeddings with the approach suggested by @sirfz made it very fast; initialisation now completes in only 85 secs.
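The exact code from @sirfz is not reproduced here, but the standard remedy for a slow `constant_initializer` on a large matrix (and presumably what that approach amounts to) is to keep the embedding values out of the graph definition entirely, and instead feed them through a placeholder into a variable `assign` op at session time. A minimal sketch, with all names hypothetical (written against `tf.compat.v1` so it also runs on modern TensorFlow; on 1.x use `import tensorflow as tf` directly):

```python
import numpy as np
import tensorflow.compat.v1 as tf  # on TF 1.x: `import tensorflow as tf`

tf.disable_eager_execution()  # graph mode, as in TF 1.x

def build_embedding_layer(vocab_size, embed_dim):
    # The placeholder keeps the (vocab_size x embed_dim) matrix out of the
    # serialised graph; constant_initializer would bake all ~2.4M rows into
    # the graph definition, which is what makes initialisation crawl.
    embedding_ph = tf.placeholder(tf.float32, shape=[vocab_size, embed_dim])
    src_word_embedding = tf.get_variable(
        name='word_embedding', shape=[vocab_size, embed_dim], trainable=False)
    init_op = src_word_embedding.assign(embedding_ph)
    return embedding_ph, init_op, src_word_embedding

# Usage: parse the word2vec file into a numpy array `embedding` first, then:
#   embedding_ph, init_op, emb_var = build_embedding_layer(vocab_size, embed_dim)
#   sess.run(init_op, feed_dict={embedding_ph: embedding})
```

Since the assign op copies the numpy array directly into the variable's buffer, the cost is a single memcpy-sized transfer rather than graph serialisation, which matches the drop from ~1647 secs to 85 secs reported above.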