TensorFlow TypeError: Fetch argument None has invalid type <type 'NoneType'>?


I am building an RNN model based on the TensorFlow tutorial.

The main parts of my model are the following:

input_sequence = tf.placeholder(tf.float32, [BATCH_SIZE, TIME_STEPS, PIXEL_COUNT + AUX_INPUTS])
output_actual = tf.placeholder(tf.float32, [BATCH_SIZE, OUTPUT_SIZE])

lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(CELL_SIZE, state_is_tuple=False)
stacked_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * CELL_LAYERS, state_is_tuple=False)

initial_state = state = stacked_lstm.zero_state(BATCH_SIZE, tf.float32)
outputs = []

with tf.variable_scope("LSTM"):
    for step in xrange(TIME_STEPS):
        if step > 0:
            tf.get_variable_scope().reuse_variables()
        cell_output, state = stacked_lstm(input_sequence[:, step, :], state)
        outputs.append(cell_output)

final_state = state

And this is how I feed and train it:

cross_entropy = tf.reduce_mean(-tf.reduce_sum(output_actual * tf.log(prediction), reduction_indices=[1]))
train_step = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(output_actual, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    numpy_state = initial_state.eval()

    for i in xrange(1, ITERATIONS):
        batch = DI.next_batch()

        print i, type(batch[0]), np.array(batch[1]).shape, numpy_state.shape

        if i % LOG_STEP == 0:
            train_accuracy = accuracy.eval(feed_dict={
                initial_state: numpy_state,
                input_sequence: batch[0],
                output_actual: batch[1]
            })

            print "Iteration " + str(i) + " Training Accuracy " + str(train_accuracy)

        numpy_state, train_step = sess.run([final_state, train_step], feed_dict={
            initial_state: numpy_state,
            input_sequence: batch[0],
            output_actual: batch[1]
            })

When I run this, I get the following error:

Traceback (most recent call last):
  File "/home/agupta/Documents/Projects/Image-Recognition-with-LSTM/RNN/feature_tracking/model.py", line 109, in <module>
    output_actual: batch[1]
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 698, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 838, in _run
    fetch_handler = _FetchHandler(self._graph, fetches)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 355, in __init__
    self._fetch_mapper = _FetchMapper.for_fetch(fetches)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 181, in for_fetch
    return _ListFetchMapper(fetch)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 288, in __init__
    self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 178, in for_fetch
    (fetch, type(fetch)))
TypeError: Fetch argument None has invalid type <type 'NoneType'>

Perhaps the weirdest part is that this error is thrown on the second iteration, while the first iteration works completely fine. I'm trying to crack this, so any help would be greatly appreciated.

2 Answers

You are re-assigning the train_step variable to the second element of the result of sess.run() (which happens to be None). Therefore, on the second iteration, train_step is None, and that is what causes the error.
The fix is simple:
for i in xrange(1, ITERATIONS):

    # ...

    # Discard the second element of the result.
    numpy_state, _ = sess.run([final_state, train_step], feed_dict={
        initial_state: numpy_state,
        input_sequence: batch[0],
        output_actual: batch[1]
        })
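For context on why that second element is None: train_step comes from Optimizer.minimize(), so it is a tf.Operation rather than a tf.Tensor, and Session.run() returns None for operations because they produce no output value. Below is a minimal, self-contained sketch of that behaviour (the names x, loss and train_op are illustrative, not taken from the question's model):

import tensorflow as tf

x = tf.Variable(3.0)
loss = tf.square(x)
# minimize() returns a tf.Operation, not a tf.Tensor.
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    loss_value, op_result = sess.run([loss, train_op])
    print loss_value, op_result  # op_result is always None
    # Rebinding train_op to op_result (as the question does with train_step)
    # would make the next sess.run([..., train_op]) raise the TypeError above.

Discarding the operation's slot with _, exactly as in the fix above, is the usual idiom.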

You, sir, are the greatest human being of all time. Thank you so much! - agupta231
Could you explain in plain terms when this error occurs? It would help when one runs into the same error in a different context. - MartianMartian


Another common way to hit this error is to include the merged summary op in your fetches when you have not actually registered any summaries.

For example:

# tf.summary.scalar("loss", loss) # <- uncomment this line and it will work fine
summary_op = tf.summary.merge_all()
sess = tf.Session()
# ...
summary = sess.run([summary_op, ...], feed_dict={...}) # TypeError, summary_op is "None"!

What makes this more confusing is that nothing in the code obviously looks like None: summary_op silently ends up as None because tf.summary.merge_all() returns None when no summaries have been registered, and the error only surfaces from inside the session's run method.
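One defensive pattern, sketched below with the same TF 1.x summary API (the names loss, fetches and summary_op here are illustrative), is to check whether tf.summary.merge_all() actually returned an op before putting it in the fetch list:

import tensorflow as tf

loss = tf.constant(1.0)
# tf.summary.scalar("loss", loss)  # with no summaries registered, merge_all() returns None
summary_op = tf.summary.merge_all()

with tf.Session() as sess:
    fetches = [loss]
    # Only fetch the summary op if at least one summary was registered;
    # otherwise summary_op is None and sess.run(fetches) raises the TypeError.
    if summary_op is not None:
        fetches.append(summary_op)
    results = sess.run(fetches)
    print results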

