After recently upgrading my TensorFlow version, I get the following error and can't resolve it:
Traceback (most recent call last):
  File "cross_train.py", line 177, in <module>
    train_network(use_gpu=True)
  File "cross_train.py", line 46, in train_network
    with tf.control_dependencies([s_opt.apply_gradients(s_grads), s_increment_step]):
  ...
ValueError: Variable image-conv1-layer/weights/Adam/ already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:
  File "cross_train.py", line 34, in train_network
    with tf.control_dependencies([e_opt.apply_gradients(e_grads), e_increment_step]):
  File "cross_train.py", line 177, in <module>
    train_network(use_gpu=True)
My model architecture consists of three separate convolutional neural network branches: M, E, and S. During training I alternate steps: I propagate samples through M and E (taking the dot-product distance of their embeddings) and update with Adam; then I propagate samples through M and S and update with Adam; and repeat. So essentially M is shared (it gets updated at every step), while the E and S branches are updated in alternation.
To do this I created two AdamOptimizer instances (e_opt and s_opt), but when I try to update the S branch, the error occurs because the weight variable M-conv1/weights/Adam/ already exists.
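The collision can be illustrated with a small pure-Python toy (hypothetical `ToyVariableStore`, not TensorFlow's actual implementation): Adam creates per-weight "slot" variables whose names are derived from the optimizer's name, which defaults to "Adam" for every instance, so two unnamed optimizers applied to the same weights ask for the same variable name.

```python
class ToyVariableStore:
    """Toy stand-in for TensorFlow's global variable store (illustrative only)."""

    def __init__(self):
        self._vars = {}

    def get_variable(self, name, reuse=False):
        # Creating an existing variable without reuse raises, mimicking
        # the ValueError in the traceback above.
        if name in self._vars and not reuse:
            raise ValueError(f"Variable {name} already exists, disallowed.")
        return self._vars.setdefault(name, name)


store = ToyVariableStore()
store.get_variable("image-conv1-layer/weights/Adam")      # first optimizer: ok
try:
    store.get_variable("image-conv1-layer/weights/Adam")  # second optimizer: collides
except ValueError as err:
    print(err)
store.get_variable("image-conv1-layer/weights/Adam_1")    # a distinct name avoids the clash
```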
This did not happen before I updated my TensorFlow version. I know how variable reuse is normally set up in TensorFlow, for example:
with tf.variable_scope(name, values=[input_to_layer]) as scope:
    try:
        weights = tf.get_variable("weights", [height, width, input_to_layer.get_shape()[3], channels], initializer=tf.truncated_normal_initializer(stddev=0.1, dtype=tf.float32))
        bias = tf.get_variable("bias", [channels], initializer=tf.constant_initializer(0.0, dtype=tf.float32))
    except ValueError:
        scope.reuse_variables()
        weights = tf.get_variable("weights", [height, width, input_to_layer.get_shape()[3], channels], initializer=tf.truncated_normal_initializer(stddev=0.1, dtype=tf.float32))
        bias = tf.get_variable("bias", [channels], initializer=tf.constant_initializer(0.0, dtype=tf.float32))
But I'm not sure whether I can do the same thing for Adam. Any ideas? Thanks a lot for your help.
optimizer = tf.train.AdamOptimizer(learning_rate=0.0001, name='first_optimizer').minimize(loss)
- David Parks
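Building on that hint: since each AdamOptimizer's slot variables are named after the optimizer (default name "Adam"), giving each instance a distinct `name` should keep their slot variables from colliding. A minimal sketch assuming the TF 1.x API, with `e_grads` and `s_grads` being the gradient lists from the question's code:

```python
import tensorflow as tf  # TensorFlow 1.x API assumed

# One optimizer per branch, each with a distinct name, so their Adam slot
# variables (e.g. .../weights/Adam_e vs .../weights/Adam_s) get different
# names and no longer trigger the "already exists" ValueError.
e_opt = tf.train.AdamOptimizer(learning_rate=1e-4, name='Adam_e')
s_opt = tf.train.AdamOptimizer(learning_rate=1e-4, name='Adam_s')

# e_grads / s_grads as computed in the question's training loop:
e_train_op = e_opt.apply_gradients(e_grads)
s_train_op = s_opt.apply_gradients(s_grads)
```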