Loss of CNN in Keras becomes nan at some point during training

I am training the last layer of VGG16 in Keras. My model is:
import numpy as np
import keras
from keras.applications.vgg16 import VGG16
from keras.layers import Flatten, Dense
from keras.models import Model
from sklearn.utils import class_weight

map_characters1 = {0: 'No Pneumonia', 1: 'Yes Pneumonia'}
class_weight1 = class_weight.compute_class_weight('balanced', np.unique(y_train), y_train)
weight_path1 = './imagenet_models/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5'
pretrained_model_1 = VGG16(weights='imagenet', include_top=False, input_shape=(200, 200, 3))

optimizer1 = keras.optimizers.Adam(lr=0.0001)

def pretrainedNetwork(xtrain, ytrain, xtest, ytest, pretrainedmodel, pretrainedweights, classweight, numclasses, numepochs, optimizer, labels):
    base_model = pretrained_model_1  # Topless
    # Add top layer
    x = base_model.output
    x = Flatten()(x)
    predictions = Dense(numclasses, activation='relu')(x)
    model = Model(inputs=base_model.input, outputs=predictions)
    # Freeze the convolutional base and train only the top layer
    for layer in base_model.layers:
        layer.trainable = False
    model.compile(loss='categorical_crossentropy',
                  optimizer=optimizer,
                  metrics=['accuracy'])
    callbacks_list = [keras.callbacks.EarlyStopping(monitor='val_acc', patience=3, verbose=1)]  # defined but not passed to fit below
    model.summary()
    # Fit model (MetricsCheckpoint is a custom callback defined elsewhere)
    history = model.fit(xtrain, ytrain, epochs=numepochs, class_weight=classweight,
                        validation_data=(xtest, ytest), verbose=1,
                        callbacks=[MetricsCheckpoint('logs')])
    # Evaluate model
    score = model.evaluate(xtest, ytest, verbose=0)
    print('\nKeras CNN - accuracy:', score[1], '\n')
    return model

Training looks fine at first: the loss decreases and the accuracy improves. But then the loss becomes nan and the accuracy drops to 0.5, no better than random guessing.

The model:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 200, 200, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 200, 200, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 200, 200, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 100, 100, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 100, 100, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 100, 100, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 50, 50, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 50, 50, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 50, 50, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 50, 50, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 25, 25, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 25, 25, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 25, 25, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 25, 25, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 12, 12, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 12, 12, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 12, 12, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 12, 12, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 6, 6, 512)         0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 18432)             0         
_________________________________________________________________
dense_2 (Dense)              (None, 2)                 36866     
=================================================================
Total params: 14,751,554
Trainable params: 36,866
Non-trainable params: 14,714,688

Training output:

Train on 2682 samples, validate on 468 samples

Epoch 1/6
2682/2682 [==============================] - 621s 232ms/step - loss: 1.5150 - acc: 0.7662 - val_loss: 0.4117 - val_acc: 0.8526

Epoch 2/6
2682/2682 [==============================] - 615s 229ms/step - loss: 0.2535 - acc: 0.9459 - val_loss: 1.7812 - val_acc: 0.7009

Epoch 3/6
2682/2682 [==============================] - 621s 232ms/step - loss: nan - acc: 0.7468 - val_loss: nan - val_acc: 0.5000

Epoch 4/6
2682/2682 [==============================] - 644s 240ms/step - loss: nan - acc: 0.5000 - val_loss: nan - val_acc: 0.5000

Epoch 5/6
2682/2682 [==============================] - 616s 230ms/step - loss: nan - acc: 0.5000 - val_loss: nan - val_acc: 0.5000

Where should I look for the problem? What is happening with the loss?


Is your input in [0,1] or [0,255]? I have found that [0,255] does not work well. Also, your learning rate might be too high; have you tried lowering it? - Tacratis
Yes, my input is in the [0,1] range. - Ekaterina Tcareva
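For reference, a minimal sketch of the scaling Tacratis is describing; the random array is a stand-in for the actual training data, which is not shown in the question:

import numpy as np

# Stand-in for raw uint8 images with pixel values in [0, 255]
x_train_raw = np.random.randint(0, 256, size=(8, 200, 200, 3), dtype=np.uint8)

# Scale to [0, 1] before feeding the network, as discussed above
x_train = x_train_raw.astype('float32') / 255.0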
2 Answers

Your gradient is exploding. To put it simply, consider convex optimization via gradient descent. The goal of a neural network is to optimize the weights so that the derivative of the loss function reaches zero, at the bottom (green part) of the image below:

[image: Gradient Descent]

[image: Gradient 2]

An exploding gradient is one that becomes almost parallel to the sum-of-squared-errors axis, which generates nans.
There are several remedies, such as batch normalization, careful weight initialization, ReLU activations, and a smaller learning rate. For vanishing gradients in LSTMs, even the choice of optimizer matters.
If the learning rate is not small enough, training can zig-zag across the gradient and miss the local minimum, as in the image below (see the sketch after it):

[image: Big learning rate]
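As a minimal sketch of two of these remedies, assuming the model built in the question: a smaller learning rate combined with gradient-norm clipping (clipnorm is a standard argument of Keras optimizers, but the specific values here are illustrative, not from the answer):

import keras

# Smaller learning rate plus gradient clipping: cap the global gradient
# norm at 1.0 so a single bad batch cannot push the weights to nan.
optimizer1 = keras.optimizers.Adam(lr=1e-5, clipnorm=1.0)

model.compile(loss='categorical_crossentropy',
              optimizer=optimizer1,
              metrics=['accuracy'])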


The problem was that I used activation='relu' in the prediction layer. I changed it to 'softmax', and now it works!
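In terms of the code in the question, this fix is a one-line change to the prediction layer (everything else stays the same):

# Before: relu outputs are unbounded and are not a probability
# distribution, so categorical_crossentropy can end up taking
# log(0) and produce nan.
# predictions = Dense(numclasses, activation='relu')(x)

# After: softmax turns the outputs into a valid probability
# distribution over the two classes.
predictions = Dense(numclasses, activation='softmax')(x)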

Awesome! Congratulations, Ekaterina! - razimbres
