Keras LSTM states


I want to run an LSTM in Keras and get both the output and the states, the way it is done in TensorFlow:

with tf.variable_scope("RNN"):
    for time_step in range(num_steps):
        if time_step > 0: tf.get_variable_scope().reuse_variables()
        (cell_output, state) = cell(inputs[:, time_step, :], state)
        outputs.append(cell_output)

Is there a way in Keras to fetch the last state and feed it back in with new input when the sequence length is large? I know I can use stateful=True, but I also want access to the states during training. I know Keras uses scan rather than a for loop, but essentially I want to save the states and then, on the next run, use them as the initial states of the LSTM. In short: get both the outputs and the states.
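For reference, the TensorFlow loop above can be sketched in plain NumPy to make the "outputs and states" distinction concrete. This is a minimal illustration, not Keras or TensorFlow code; `lstm_step` and the weight names are made up for the sketch:

```python
import numpy as np

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step: returns the new hidden state and cell state."""
    hidden = h_prev.shape[-1]
    z = x_t @ W + h_prev @ U + b                      # all four gates at once
    i = 1 / (1 + np.exp(-z[:, :hidden]))              # input gate
    f = 1 / (1 + np.exp(-z[:, hidden:2 * hidden]))    # forget gate
    o = 1 / (1 + np.exp(-z[:, 2 * hidden:3 * hidden]))  # output gate
    g = np.tanh(z[:, 3 * hidden:])                    # candidate cell values
    c_t = f * c_prev + i * g
    h_t = o * np.tanh(c_t)
    return h_t, c_t

batch, time_steps, input_dim, hidden = 2, 5, 3, 4
rng = np.random.default_rng(0)
inputs = rng.standard_normal((batch, time_steps, input_dim))
W = rng.standard_normal((input_dim, 4 * hidden)) * 0.1
U = rng.standard_normal((hidden, 4 * hidden)) * 0.1
b = np.zeros(4 * hidden)

h = np.zeros((batch, hidden))
c = np.zeros((batch, hidden))
outputs = []
for t in range(time_steps):
    h, c = lstm_step(inputs[:, t, :], h, c, W, U, b)
    outputs.append(h)

# `outputs` holds every hidden state; (h, c) is the final state that could
# seed the next run -- exactly what the TensorFlow loop above exposes.
```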

1 Answer


Since an LSTM is a layer, and in Keras every layer can have only one output (correct me if I'm wrong), you cannot get both the outputs and the states without modifying the source code.

Recently I have been hacking Keras to implement some advanced structures, and some ideas you may not like did work. What I do is override the Keras layer so that we can access the tensors representing the hidden states.

First, you can look at the call() function in keras/layers/recurrent.py to see how Keras works:

def call(self, x, mask=None):
    # input shape: (nb_samples, time (padded with zeros), input_dim)
    # note that the .build() method of subclasses MUST define
    # self.input_spec with a complete input shape.
    input_shape = self.input_spec[0].shape
    if K._BACKEND == 'tensorflow':
        if not input_shape[1]:
            raise Exception('When using TensorFlow, you should define '
                            'explicitly the number of timesteps of '
                            'your sequences.\n'
                            'If your first layer is an Embedding, '
                            'make sure to pass it an "input_length" '
                            'argument. Otherwise, make sure '
                            'the first layer has '
                            'an "input_shape" or "batch_input_shape" '
                            'argument, including the time axis. '
                            'Found input shape at layer ' + self.name +
                            ': ' + str(input_shape))
    if self.stateful:
        initial_states = self.states
    else:
        initial_states = self.get_initial_states(x)
    constants = self.get_constants(x)
    preprocessed_input = self.preprocess_input(x)

    last_output, outputs, states = K.rnn(self.step, preprocessed_input,
                                         initial_states,
                                         go_backwards=self.go_backwards,
                                         mask=mask,
                                         constants=constants,
                                         unroll=self.unroll,
                                         input_length=input_shape[1])
    if self.stateful:
        self.updates = []
        for i in range(len(states)):
            self.updates.append((self.states[i], states[i]))

    if self.return_sequences:
        return outputs
    else:
        return last_output
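The K.rnn helper near the end of call() is what produces last_output, outputs, and states in a single pass, which is why the states are already available inside the layer. Its contract can be mimicked with a toy NumPy scan; `rnn` and `step` here are illustrative stand-ins, not the Keras backend implementation:

```python
import numpy as np

def rnn(step_function, inputs, initial_states):
    """Toy stand-in for K.rnn: scan `step_function` over the time axis.

    inputs has shape (batch, time, input_dim). Returns (last_output,
    stacked outputs, final states), mirroring the backend helper.
    """
    states = initial_states
    outputs = []
    for t in range(inputs.shape[1]):
        output, states = step_function(inputs[:, t, :], states)
        outputs.append(output)
    return outputs[-1], np.stack(outputs, axis=1), states

# A trivial step: the single "state" is a running sum, the output echoes it.
def step(x_t, states):
    (s,) = states
    s = s + x_t
    return s, [s]

x = np.ones((2, 3, 4))  # batch=2, time=3, input_dim=4
last, seq, final = rnn(step, x, [np.zeros((2, 4))])
# seq stacks all per-step outputs; `final` is the state list you would
# want to expose alongside the outputs.
```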

Second, we need to override our layer (Layer). Here is a simple script:
import keras.backend as K
from keras.layers import Input, LSTM

class MyLSTM(LSTM):
    def call(self, x, mask=None):
        # .... blablabla, right before the return

        # we add this line to get access to the states
        self.extra_output = states

        if self.return_sequences:
            # .... blablabla, to the end

        # you should copy **exactly the same code** from keras.layers.recurrent

I = Input(shape=(...))
lstm = MyLSTM(20)
output = lstm(I)  # calling the layer runs `call()` and creates `lstm.extra_output`
extra_output = lstm.extra_output  # refer to the target tensors

# use a backend function to compute them **simultaneously**
calculate_function = K.function(inputs=[I], outputs=extra_output + [output])

By the way, custom layers cause errors when saving and loading, because Keras cannot find MyLSTM at load time. Just pass custom_objects={'MyLSTM': MyLSTM} when calling model_from_json(). It should be straightforward. - Van
