Dropout layer in TensorFlow: how to train?

After creating a model with Keras, I want to get the gradients and apply them directly in TensorFlow using the tf.train.AdamOptimizer class. However, since I use a Dropout layer, I don't know how to tell the model whether it is in training mode; the training keyword is not accepted. Here is the code:

    import tensorflow as tf
    from tensorflow.keras.layers import Input, Dense, ReLU, Dropout
    from tensorflow.keras import Model

    net_input = Input(shape=(1,))
    net_1 = Dense(50)
    net_2 = ReLU()
    net_3 = Dropout(0.5)
    net = Model(net_input, net_3(net_2(net_1(net_input))))

    #mycost = ...

    optimizer = tf.train.AdamOptimizer()
    gradients = optimizer.compute_gradients(mycost, var_list=[net.trainable_weights])
    # perform some operations on the gradients
    # gradients = ...
    trainstep = optimizer.apply_gradients(gradients)

I get the same result with or without the Dropout layer, even with dropout rate=1. How can I fix this?

2 Answers

As @Sharky said, you can use the training argument when calling the call() method of the Dropout class. However, if you want to train in TensorFlow graph mode, you need to pass a placeholder and feed it a boolean value during training. Here is an example of fitting Gaussian blobs that applies to your case:
import tensorflow as tf
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import ReLU
from tensorflow.keras.layers import Input
from tensorflow.keras import Model

x_train, y_train = make_blobs(n_samples=10,
                              n_features=2,
                              centers=[[1, 1], [-1, -1]],
                              cluster_std=1)

x_train, x_test, y_train, y_test = train_test_split(
    x_train, y_train, test_size=0.2)

# `istrain` indicates whether it is inference or training
istrain = tf.placeholder(tf.bool, shape=()) 
y = tf.placeholder(tf.int32, shape=(None,))  # integer class labels
net_input = Input(shape=(2,))
net_1 = Dense(2)
net_2 = Dense(2)
net_3 = Dropout(0.5)
net = Model(net_input, net_3(net_2(net_1(net_input)), training=istrain))

xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=y, logits=net.output)
loss_fn = tf.reduce_mean(xentropy)

optimizer = tf.train.AdamOptimizer(0.01)
grads_and_vars = optimizer.compute_gradients(loss_fn,
                                             var_list=net.trainable_variables)
trainstep = optimizer.apply_gradients(grads_and_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    l1 = loss_fn.eval({net_input:x_train,
                       y:y_train,
                       istrain:True}) # apply dropout
    print(l1) # 1.6264652
    l2 = loss_fn.eval({net_input:x_train,
                       y:y_train,
                       istrain:False}) # no dropout
    print(l2) # 1.5676715
    sess.run(trainstep, feed_dict={net_input:x_train,
                                   y:y_train, 
                                   istrain:True}) # train with dropout
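
For reference, in TensorFlow 2.x eager execution no placeholder is needed: you pass a plain Python boolean at call time and take gradients with tf.GradientTape. A minimal sketch of the same idea, assuming TF 2.x (the data here is just dummy tensors):

import tensorflow as tf

inputs = tf.keras.Input(shape=(2,))
hidden = tf.keras.layers.Dense(2)(inputs)
outputs = tf.keras.layers.Dropout(0.5)(hidden)
model = tf.keras.Model(inputs, outputs)

optimizer = tf.keras.optimizers.Adam(0.01)
x_batch = tf.random.normal((4, 2))
y_batch = tf.constant([0, 1, 0, 1])

with tf.GradientTape() as tape:
    logits = model(x_batch, training=True)  # training=True activates dropout
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=y_batch, logits=logits))
# inspect/modify the gradients here if needed, then apply them
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))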


Keras layers inherit from the tf.keras.layers.Layer class, and the Keras API handles this internally in model.fit. If the Keras Dropout layer is used in a pure TensorFlow training loop, its call function supports a training argument, so you can control the behavior through it:
dropout = tf.keras.layers.Dropout(rate, noise_shape, seed)(prev_layer, training=is_training)
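
A quick sanity check that the flag actually changes the layer's behavior (a small sketch assuming TF 2.x eager mode):

import tensorflow as tf

layer = tf.keras.layers.Dropout(rate=0.5)
x = tf.ones((1, 10))
# training=True: roughly half the units are zeroed and the survivors are
# scaled by 1/(1 - rate) = 2.0, so the expected sum stays the same
print(layer(x, training=True))
# training=False: the input passes through unchanged
print(layer(x, training=False))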

From the official TF documentation:

Note: The following optional keyword arguments are reserved for specific uses: * training: Boolean scalar tensor of Python boolean indicating whether the call is meant for training or inference. * mask: Boolean input mask. If the layer's call method takes a mask argument (as some Keras layers do), its default value will be set to the mask generated for inputs by the previous layer (if input did come from a layer that generated a corresponding mask, i.e. if it came from a Keras layer with masking support). https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dropout#call
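
As an aside, in TF 1.x graph mode Keras also keeps a global learning-phase tensor that Dropout falls back to when no explicit training argument is passed, so feeding tf.keras.backend.learning_phase() is an alternative to a hand-made placeholder. A sketch under that assumption (TF 1.x only):

import tensorflow as tf
from tensorflow.keras import backend as K

inp = tf.keras.Input(shape=(4,))
out = tf.keras.layers.Dropout(0.5)(inp)  # no training kwarg: falls back to the learning phase
model = tf.keras.Model(inp, out)

with tf.Session() as sess:
    x = [[1., 1., 1., 1.]]
    # feed 1 for training (dropout active), 0 for inference (identity)
    print(sess.run(model.output, {inp: x, K.learning_phase(): 1}))
    print(sess.run(model.output, {inp: x, K.learning_phase(): 0}))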

