How do I correctly create a batch normalization layer for a convolutional layer in TensorFlow?

I was looking at the official batch normalization (BN) layer in TensorFlow, but it doesn't really explain how to use it for a convolutional layer. Does anybody know how to do this? In particular, it should apply and learn the same parameters per feature map (rather than per activation). In other words, it should apply and learn BN per filter.
As a concrete toy example, say I want to use conv2d with BN on MNIST (essentially 2D data). One could do:
W_conv1 = weight_variable([5, 5, 1, 32]) # 32 filters of size 5x5
x_image = tf.reshape(x, [-1, 28, 28, 1]) # MNIST image
conv = tf.nn.conv2d(x_image, W_conv1, strides=[1, 1, 1, 1], padding='VALID') # [?, 24, 24, 32]
z = conv # [?, 24, 24, 32]
z = BN(z) # [?, 24, 24, 32]; essentially only 32 different scale and shift parameters to learn, one per filter application
a = tf.nn.relu(z) # [?, 24, 24, 32]

where z = BN(z) batch-normalizes the features produced by every individual kernel. In pseudocode:

x_patch = x[h:h+5, w:w+5, :] # input patch the convolution is applied to
z[h,w,f] = sum(x_patch * W[:,:,:,f]) # multiply-accumulate performed by the convolution (pseudocode)

to which we apply the proper batch-norm layer (in pseudocode, omitting important details):
z[h,w,f] = BN(z[h,w,f]) = scale[f] * (z[h,w,f] - mu[f]) / sigma[f] + shift[f]

That is, we apply BN with one set of parameters per filter f.
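To make the intent concrete, here is a minimal sketch of the per-feature-map behaviour I am after, built from tf.nn.moments and tf.nn.batch_normalization (bn_per_filter is just an illustrative name; this covers only training-time statistics, not the moving averages a real layer would track for inference):

import tensorflow as tf

def bn_per_filter(z, num_filters=32, epsilon=1e-5):
    # One learned scale (gamma) and shift (beta) per filter: 32 of each.
    scale = tf.Variable(tf.ones([num_filters]))
    shift = tf.Variable(tf.zeros([num_filters]))
    # Moments over batch, height and width -> one mean/variance per filter.
    mu, var = tf.nn.moments(z, axes=[0, 1, 2])
    return tf.nn.batch_normalization(z, mu, var, shift, scale, epsilon)

z = bn_per_filter(conv)  # [?, 24, 24, 32], normalized per feature map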
2 Answers

Important note: the links given here refer to the tf.contrib.layers.batch_norm module, not the usual tf.nn one (see the comments and post below).
I haven't tested it, but the way TensorFlow expects you to use it seems to be documented in the convolution2d docstring:
def convolution2d(inputs,
                  num_outputs,
                  kernel_size,
                  stride=1,
                  padding='SAME',
                  activation_fn=nn.relu,
                  normalizer_fn=None,
                  normalizer_params=None,
                  weights_initializer=initializers.xavier_initializer(),
                  weights_regularizer=None,
                  biases_initializer=init_ops.zeros_initializer,
                  biases_regularizer=None,
                  reuse=None,
                  variables_collections=None,
                  outputs_collections=None,
                  trainable=True,
                  scope=None):
  """Adds a 2D convolution followed by an optional batch_norm layer.
  `convolution2d` creates a variable called `weights`, representing the
  convolutional kernel, that is convolved with the `inputs` to produce a
  `Tensor` of activations. If a `normalizer_fn` is provided (such as
  `batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
  None and a `biases_initializer` is provided then a `biases` variable would be
  created and added to the activations.

Following this advice, you should add normalizer_fn=tf.contrib.layers.batch_norm (the function itself, not a string) as an argument to the convolution2d call.
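For instance, a minimal sketch of such a call, reusing the names from your question (untested, as said above):

conv = tf.contrib.layers.convolution2d(
    inputs=x_image,                              # [?, 28, 28, 1] MNIST batch
    num_outputs=32,
    kernel_size=[5, 5],
    padding='VALID',
    normalizer_fn=tf.contrib.layers.batch_norm)  # BN is applied instead of biases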
Regarding the feature-map vs. activation question, my guess is that, when building the graph, TF adds the normalization layer as a new "node" on top of the convolutional layer, and that both modify the same weights variable (in your case, the W_conv1 object). I wouldn't describe the normalization layer's task as "learning", though, but I'm not quite sure I understood your point (I can try to help further if you elaborate).
Edit: A closer look at the function body confirms my guess and also explains how the normalizer_params parameter is used. Reading from line 354:
outputs = nn.conv2d(inputs, weights, [1, stride_h, stride_w, 1],
                    padding=padding)
if normalizer_fn:
  normalizer_params = normalizer_params or {}
  outputs = normalizer_fn(outputs, **normalizer_params)
else:
  ...etc...

We see that the outputs variable, which holds each layer's output, gets sequentially overwritten. So if a normalizer_fn is given when the graph is built, the output of nn.conv2d is overwritten by the extra layer normalizer_fn. This is where **normalizer_params comes into play: it is passed to the given normalizer_fn as a keyword-argument dict. You can find the default arguments of batch_norm here, so passing a dictionary with the parameters you want to change as normalizer_params should do it, something like:
normalizer_params = {"epsilon" : 0.314592, "center" : False}
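which would then be forwarded through the same call (again an untested sketch):

conv = tf.contrib.layers.convolution2d(inputs=x_image, num_outputs=32, kernel_size=[5, 5],
                                       normalizer_fn=tf.contrib.layers.batch_norm,
                                       # expands to batch_norm(outputs, epsilon=0.314592, center=False)
                                       normalizer_params=normalizer_params)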

Hope it helps!

Do you know what normalizer_params does? - Charlie Parker
Yes, that was helpful, thanks! By the way, did you manage to run a working example? I've been running into some issues. - Charlie Parker

The following example seems to work for me:

import numpy as np
import tensorflow as tf

normalizer_fn = None                          # set to None to disable BN
normalizer_fn = tf.contrib.layers.batch_norm  # BN applied per feature map

D = 5
kernel_height = 1
kernel_width = 3
F = 4
x = tf.placeholder(tf.float32, shape=[None,1,D,1], name='x-input') #[M, 1, D, 1]
conv = tf.contrib.layers.convolution2d(inputs=x,
    num_outputs=F, # 4
    kernel_size=[kernel_height, kernel_width], # [1,3]
    stride=[1,1],
    padding='VALID',
    rate=1,
    activation_fn=tf.nn.relu,
    normalizer_fn=normalizer_fn,
    normalizer_params=None,
    weights_initializer=tf.contrib.layers.xavier_initializer(dtype=tf.float32),
    biases_initializer=tf.zeros_initializer,
    trainable=True,
    scope='cnn'
)

# synthetic data
M = 2
X_data = np.array( [np.arange(0,5),np.arange(5,10)] )
print(X_data)
X_data = X_data.reshape(M,1,D,1)
with tf.Session() as sess:
    sess.run( tf.initialize_all_variables() )
    print( sess.run(fetches=conv, feed_dict={x:X_data}) )

Console output:

$ python single_convolution.py
[[0 1 2 3 4]
 [5 6 7 8 9]]
[[[[ 1.33058071  1.33073258  1.30027914  0.        ]
   [ 0.95041472  0.95052338  0.92877126  0.        ]
   [ 0.57024884  0.57031405  0.55726254  0.        ]]]


 [[[ 0.          0.          0.          0.56916821]
   [ 0.          0.          0.          0.94861376]
   [ 0.          0.          0.          1.32805932]]]]
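To sanity-check the per-filter behaviour, one can also list the variables batch_norm created inside the 'cnn' scope (the names below assume tf.contrib's naming scheme, which I have not double-checked); each should have shape [F] = [4], i.e. one parameter per filter:

for v in tf.all_variables():
    if 'BatchNorm' in v.name:
        print(v.name, v.get_shape())
# Expected, roughly:
#   cnn/BatchNorm/beta:0 (4,)
#   cnn/BatchNorm/moving_mean:0 (4,)
#   cnn/BatchNorm/moving_variance:0 (4,)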
