Cannot create non-trainable variables in TensorFlow v2


I implemented a batch normalization layer by hand, but the code in the `__init__` method that creates the non-trainable variables does not seem to work. Code:

import tensorflow as tf
class batchNormalization(tf.keras.layers.Layer):
    def __init__(self, shape, Trainable, **kwargs):
        super(batchNormalization, self).__init__(**kwargs)
        self.shape = shape
        self.Trainable = Trainable
        self.beta = tf.Variable(initial_value=tf.zeros(shape), trainable=Trainable)
        self.gamma = tf.Variable(initial_value=tf.ones(shape), trainable=Trainable)
        self.moving_mean = tf.Variable(initial_value=tf.zeros(self.shape), trainable=False)
        self.moving_var = tf.Variable(initial_value=tf.ones(self.shape), trainable=False)

    def update_var(self,inputs):
        wu, sigma = tf.nn.moments(inputs, axes=[0, 1, 2], shift=None, keepdims=False, name=None)
        var = tf.math.sqrt(sigma)
        self.moving_mean = self.moving_mean * 0.09 + wu * 0.01
        self.moving_var = self.moving_var * 0.09 + var * 0.01
        return wu,var

    def call(self, inputs):
        wu, var = self.update_var(inputs)
        return tf.nn.batch_normalization(inputs, wu, var, self.beta,
                                         self.gamma, variance_epsilon=0.001)


@tf.function
def train_step(model, inputs, label,optimizer):
    with tf.GradientTape(persistent=False) as tape:
        predictions = model(inputs, training=1)
        loss = tf.keras.losses.mean_squared_error(predictions,label)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))


if __name__=='__main__':
    f=tf.ones([2,256,256,8])
    label=tf.ones([2,256,256,8])
    inputs = tf.keras.Input(shape=(256,256,8))
    outputs=batchNormalization([8],True)(inputs)
    Model = tf.keras.Model(inputs=inputs, outputs=outputs)
    Layer = batchNormalization([8],True)
    print(len(Model.variables))
    print(len(Model.trainable_variables))
    print(len(Layer.variables))
    print(len(Layer.trainable_variables))
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001)
    for i in range(0,100):
        train_step(Layer, f, label,optimizer)
        # train_step(Model,f,label,optimizer)

During training, another error appears:

TypeError: An op outside of the function building code is being passed a "Graph" tensor. It is possible to have Graph tensors leak out of the function building context by including a tf.init_scope in your function building code.
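A minimal check, assuming the batchNormalization class above, shows what the plain assignment does: after the first traced call the attribute is no longer a tf.Variable but a symbolic tensor left over from the trace (this is only an illustrative sketch; the exact tensor class varies across TensorFlow versions):

import tensorflow as tf

layer = batchNormalization([8], True)

@tf.function
def one_step(x):
    return layer(x)

print(type(layer.moving_mean))        # ResourceVariable, as created in __init__
one_step(tf.ones([2, 256, 256, 8]))
print(type(layer.moving_mean))        # now a symbolic graph tensor from the finished trace

Any later use of that stale graph tensor outside its own trace is exactly the kind of leak the error message describes.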

Why do you want to keep moving_mean as a non-trainable tf.Variable rather than a tf.constant? - Susmit Agrawal
For that matter, why not just use a plain Python variable? - Susmit Agrawal
Because I want to track self.moving_mean and self.moving_var and change their values outside the layer. - wangwei
1 Answer


Replace

self.moving_mean = self.moving_mean * 0.09 + wu * 0.01
self.moving_var = self.moving_var * 0.09 + var * 0.01

with

self.moving_mean.assign(self.moving_mean * 0.09 + wu * 0.01)
self.moving_var.assign(self.moving_var * 0.09 + var * 0.01)

and the problem is solved. A plain = rebinds the attribute to a new graph tensor during tracing, whereas assign writes the new value into the existing tf.Variable.
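For reference, this is how update_var looks with that change (same coefficients as in the question; the running statistics are now updated in place):

    def update_var(self, inputs):
        wu, sigma = tf.nn.moments(inputs, axes=[0, 1, 2], shift=None, keepdims=False, name=None)
        var = tf.math.sqrt(sigma)
        # assign() mutates the existing variables, so the attributes stay tf.Variables
        self.moving_mean.assign(self.moving_mean * 0.09 + wu * 0.01)
        self.moving_var.assign(self.moving_var * 0.09 + var * 0.01)
        return wu, var

Because the variables are never rebound, they can still be read and changed from outside the layer, e.g.:

Layer = batchNormalization([8], True)
train_step(Layer, f, label, optimizer)
print(Layer.moving_mean.numpy())        # read the running mean outside the layer
Layer.moving_var.assign(tf.ones([8]))   # overwrite it outside the layer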

