Scheduled Sampling in TensorFlow

The latest TensorFlow seq2seq API already includes the scheduled sampling technique:

https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/ScheduledEmbeddingTrainingHelper
https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/ScheduledOutputTrainingHelper

The original scheduled sampling paper can be found here: https://arxiv.org/abs/1506.03099
I read the paper, but I cannot understand the difference between ScheduledEmbeddingTrainingHelper and ScheduledOutputTrainingHelper. The documentation only says that ScheduledEmbeddingTrainingHelper is a training helper that adds scheduled sampling, while ScheduledOutputTrainingHelper is a training helper that adds scheduled sampling directly to the outputs.
What is the difference between these two helpers?
3 Answers


I contacted the engineer behind this, and he responded:

The output sampler either emits the raw RNN output or the raw ground truth at that time step. The embedding sampler treats the RNN output as the logits of a distribution, and emits either the embedding lookup of a sample id drawn from that categorical distribution or the raw ground truth at that time step.
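
To make that concrete, here is a small, hedged sketch (my own illustration, not the helpers' actual source code) of what each helper feeds to the cell at the next time step when it decides to sample instead of reading the ground truth; rnn_output and embedding_matrix are stand-in tensors:

import tensorflow as tf

# Stand-in tensors: rnn_output plays the role of the decoder output at one
# time step (treated as logits over the vocabulary), embedding_matrix is the
# token embedding table. Illustration only, not the helpers' source.
batch_size, vocab_size, embedding_size = 2, 7, 5
rnn_output = tf.random_normal([batch_size, vocab_size])
embedding_matrix = tf.random_normal([vocab_size, embedding_size])

# ScheduledOutputTrainingHelper: when it samples, the raw RNN output itself
# becomes the next input.
next_input_output_helper = rnn_output  # shape (batch_size, vocab_size)

# ScheduledEmbeddingTrainingHelper: when it samples, the RNN output is treated
# as logits of a categorical distribution; a token id is sampled and its
# embedding becomes the next input.
sample_ids = tf.squeeze(tf.multinomial(rnn_output, num_samples=1), axis=1)
next_input_embedding_helper = tf.nn.embedding_lookup(embedding_matrix, sample_ids)
# shape (batch_size, embedding_size)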


Thanks! I'm wondering where I can find some example usages of scheduled sampling and the seq2seq API? - Kevin Zeng
If I may rephrase it slightly - the difference between ScheduledOutputTrainingHelper and ScheduledEmbeddingTrainingHelper is that the former directly feeds the RNN's output as the input at the next time step (when the target at the current time step is not used as the next input), whereas the latter (again, when the target at the current time step is not used as the next input) treats the RNN's output as logits, applies softmax to it, samples a token from the resulting distribution, and uses that token to index into the embedding matrix for the input at the next time step. - user1953384

Here is a basic example of using ScheduledEmbeddingTrainingHelper with TensorFlow 1.3 and some of the higher-level tf.contrib APIs. It is a sequence-to-sequence model in which the decoder's initial hidden state is the encoder's final hidden state. It only shows how to train on a single batch (the toy task is obviously "reverse this sequence"). For real training tasks, I suggest looking at the tf.contrib.learn APIs such as learn_runner, Experiment and tf.estimator.Estimator.
import tensorflow as tf
import numpy as np
from tensorflow.python.layers.core import Dense

vocab_size = 7
embedding_size = 5
lstm_units = 10

src_batch = np.array([[1, 2, 3], [4, 5, 6]])
trg_batch = np.array([[3, 2, 1], [6, 5, 4]])

# *_seq will have shape (2, 3), *_seq_len will have shape (2)
source_seq = tf.placeholder(shape=(None, None), dtype=tf.int32)
target_seq = tf.placeholder(shape=(None, None), dtype=tf.int32)
source_seq_len = tf.placeholder(shape=(None,), dtype=tf.int32)
target_seq_len = tf.placeholder(shape=(None,), dtype=tf.int32)

# add Start of Sequence (SOS) tokens to each sequence
batch_size, sequence_size = tf.unstack(tf.shape(target_seq))
sos_slice = tf.zeros([batch_size, 1], dtype=tf.int32) # 0 = start of sentence token
decoder_input = tf.concat([sos_slice, target_seq], axis=1)

embedding_matrix = tf.get_variable(
    name="embedding_matrix",
    shape=[vocab_size, embedding_size],
    dtype=tf.float32)
source_seq_embedded = tf.nn.embedding_lookup(embedding_matrix, source_seq) # shape=(2, 3, 5)
decoder_input_embedded = tf.nn.embedding_lookup(embedding_matrix, decoder_input) # shape=(2, 4, 5)

unused_encoder_outputs, encoder_state = tf.nn.dynamic_rnn(
    tf.contrib.rnn.LSTMCell(lstm_units),
    source_seq_embedded,
    sequence_length=source_seq_len,
    dtype=tf.float32)

# Decoder:
# At each time step t and for each sequence in the batch, we get x_t by either
#   (1) sampling from the distribution output_layer(t-1), or
#   (2) reading from decoder_input_embedded.
# We do (1) with probability sampling_probability and (2) with 1 - sampling_probability.
# Using sampling_probability=0.0 is equivalent to using TrainingHelper (no sampling).
# Using sampling_probability=1.0 is equivalent to doing inference,
# where we don't supervise the decoder at all: output at t-1 is the input at t.
sampling_prob = tf.Variable(0.0, dtype=tf.float32)
helper = tf.contrib.seq2seq.ScheduledEmbeddingTrainingHelper(
    decoder_input_embedded,
    target_seq_len,
    embedding_matrix,
    sampling_probability=sampling_prob)

output_layer = Dense(vocab_size)
decoder = tf.contrib.seq2seq.BasicDecoder(
    tf.contrib.rnn.LSTMCell(lstm_units),
    helper,
    encoder_state,
    output_layer=output_layer)

outputs, state, seq_len = tf.contrib.seq2seq.dynamic_decode(decoder)
loss = tf.contrib.seq2seq.sequence_loss(
    logits=outputs.rnn_output,
    targets=target_seq,
    weights=tf.ones(trg_batch.shape))

train_op = tf.contrib.layers.optimize_loss(
    loss=loss,
    global_step=tf.contrib.framework.get_global_step(),
    optimizer=tf.train.AdamOptimizer,
    learning_rate=0.001)

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    _, _loss = session.run([train_op, loss], {
        source_seq: src_batch,
        target_seq: trg_batch,
        source_seq_len: [3, 3],
        target_seq_len: [3, 3],
        sampling_prob: 0.5
    })
    print("Loss: " + str(_loss))

For ScheduledOutputTrainingHelper, I expected it to be enough to simply swap out the helper and use:
helper = tf.contrib.seq2seq.ScheduledOutputTrainingHelper(
    target_seq,
    target_seq_len,
    sampling_probability=sampling_prob)

However, this results in an error, since the LSTM cell expects a multidimensional input per time step (of shape (batch_size, input_dims)). I will raise an issue on GitHub to find out whether this is a bug, or whether there is some other way to use ScheduledOutputTrainingHelper.


Could you please provide a link to your GitHub issue? - JYun
I got a bit busy and never ended up raising it. - Mattias Arro
@MattiasArro, there is a workaround for the issue you pointed out with ScheduledOutputTrainingHelper. If you convert target_seq (a sequence of integer tokens) into a sequence of one-hot vectors, you won't run into this error, like so: tf.contrib.seq2seq.ScheduledOutputTrainingHelper(tf.one_hot(target_seq, vocab_size), target_seq_len, sampling_probability=sampling_prob) (see the sketch after these comments). - user1953384
How would you use ScheduledOutputTrainingHelper if there is no encoder-decoder in the architecture? Say it's just a simple stacked LSTM, something like this example. - betelgeuse
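
Here is a hedged sketch of the one-hot workaround described in the comment above, reusing the variables from the example (vocab_size, target_seq, target_seq_len, sampling_prob); the tf.one_hot depth argument and the dtype are my additions, not part of the original comment:

# Sketch of the one-hot workaround (assumed, not from the original answer):
# feed one-hot vectors so each time step's input has shape (batch_size, vocab_size),
# which matches the shape of the output-layer-projected RNN output that the
# helper feeds back when it samples.
target_one_hot = tf.one_hot(target_seq, vocab_size, dtype=tf.float32)
helper = tf.contrib.seq2seq.ScheduledOutputTrainingHelper(
    inputs=target_one_hot,
    sequence_length=target_seq_len,
    sampling_probability=sampling_prob)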


This might also help you. It covers the case where you want to do scheduled sampling at each decoding step separately.

import tensorflow as tf
import numpy as np
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import gen_array_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops.distributions import categorical
from tensorflow.python.ops.distributions import bernoulli

batch_size = 64
vocab_size = 50000
emb_dim = 128

# Decoder logits at the current step, the default (ground-truth) next inputs,
# and the embedding matrix (random placeholders for illustration).
output = tf.get_variable('output',
    initializer=tf.constant(np.random.rand(batch_size, vocab_size)))
base_next_inputs = tf.get_variable('input',
    initializer=tf.constant(np.random.rand(batch_size, emb_dim)))
embedding = tf.get_variable('embedding',
    initializer=tf.constant(np.random.rand(vocab_size, emb_dim)))

# Decide per batch element whether to sample (here with probability 0.99).
select_sampler = bernoulli.Bernoulli(probs=0.99, dtype=tf.bool)
select_sample = select_sampler.sample(sample_shape=batch_size, seed=123)

# Sample token ids from the output distribution; mark non-sampled positions with -1.
sample_id_sampler = categorical.Categorical(logits=output)
sample_ids = array_ops.where(
    select_sample,
    sample_id_sampler.sample(seed=123),
    gen_array_ops.fill([batch_size], -1))

where_sampling = math_ops.cast(
    array_ops.where(sample_ids > -1), tf.int32)
where_not_sampling = math_ops.cast(
    array_ops.where(sample_ids <= -1), tf.int32)

# For sampled positions, look up the embeddings of the sampled ids;
# for the rest, keep the original (ground-truth) inputs.
sample_ids_sampling = array_ops.gather_nd(sample_ids, where_sampling)
inputs_not_sampling = array_ops.gather_nd(base_next_inputs, where_not_sampling)
sampled_next_inputs = tf.nn.embedding_lookup(embedding, sample_ids_sampling)

# Scatter both parts back into a single (batch_size, emb_dim) tensor.
base_shape = array_ops.shape(base_next_inputs)
result1 = array_ops.scatter_nd(indices=where_sampling,
    updates=sampled_next_inputs, shape=base_shape)
result2 = array_ops.scatter_nd(indices=where_not_sampling,
    updates=inputs_not_sampling, shape=base_shape)
result = result1 + result2
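
As a hypothetical usage check (not part of the original answer), you can evaluate result once to see the mixed next-step inputs:

# Hypothetical usage sketch: evaluate the mixed next-step inputs once.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    next_inputs = sess.run(result)
    print(next_inputs.shape)  # (64, 128) = (batch_size, emb_dim)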

I used code from the TensorFlow documentation to make this example: https://github.com/tensorflow/tensorflow/blob/r1.5/tensorflow/contrib/seq2seq/python/ops/helper.py


How would you use ScheduledOutputTrainingHelper if there is no encoder-decoder in the architecture? Say it's a simple stacked LSTM, similar to what is shown in this link. - betelgeuse
