Strange NaN loss with a custom Keras loss function

5

I'm trying to implement a custom loss function in Keras, but I can't get it to work properly.

I have implemented it with both NumPy and keras.backend:

import numpy as np
from keras import backend as K

def log_rmse_np(y_true, y_pred):
    d_i = np.log(y_pred) -  np.log(y_true)
    loss1 = (np.sum(np.square(d_i))/np.size(d_i))
    loss2 = ((np.square(np.sum(d_i)))/(2 * np.square(np.size(d_i))))
    loss = loss1 - loss2
    print('np_loss =  %s - %s = %s'%(loss1, loss2, loss))
    return loss

def log_rmse(y_true, y_pred):
    d_i = (K.log(y_pred) -  K.log(y_true))
    loss1 = K.mean(K.square(d_i))
    loss2 = K.square(K.sum(K.flatten(d_i),axis=-1))/(K.cast_to_floatx(2) * K.square(K.cast_to_floatx(K.int_shape(K.flatten(d_i))[0])))
    loss = loss1 - loss2
    return loss

Everything seems fine when I test and compare the two losses with the following function:

def check_loss(_shape):
    if _shape == '2d':
        shape = (6, 7)
    elif _shape == '3d':
        shape = (5, 6, 7)
    elif _shape == '4d':
        shape = (8, 5, 6, 7)
    elif _shape == '5d':
        shape = (9, 8, 5, 6, 7)

    y_a = np.random.random(shape)
    y_b = np.random.random(shape)

    out1 = K.eval(log_rmse(K.variable(y_a), K.variable(y_b)))
    out2 = log_rmse_np(y_a, y_b)

    print('shapes:', str(out1.shape), str(out2.shape))
    print('types: ', type(out1), type(out2))
    print('log_rmse:    ', np.linalg.norm(out1))
    print('log_rmse_np: ', np.linalg.norm(out2))
    print('difference:  ', np.linalg.norm(out1-out2))
    assert out1.shape == out2.shape
    #assert out1.shape == shape[-1]

def test_loss():
    shape_list = ['2d', '3d', '4d', '5d']
    for _shape in shape_list:
        check_loss(_shape)
        print ('======================')

test_loss()

The code above prints:
np_loss =  1.34490449177 - 0.000229461787517 = 1.34467502998
shapes: () ()
types:  <class 'numpy.float32'> <class 'numpy.float64'>
log_rmse:     1.34468
log_rmse_np:  1.34467502998
difference:   3.41081509703e-08
======================
np_loss =  1.68258448859 - 7.67580654591e-05 = 1.68250773052
shapes: () ()
types:  <class 'numpy.float32'> <class 'numpy.float64'>
log_rmse:     1.68251
log_rmse_np:  1.68250773052
difference:   1.42057615005e-07
======================
np_loss =  1.99736933814 - 0.00386228512295 = 1.99350705302
shapes: () ()
types:  <class 'numpy.float32'> <class 'numpy.float64'>
log_rmse:     1.99351
log_rmse_np:  1.99350705302
difference:   2.53924863358e-08
======================
np_loss =  1.95178217182 - 1.60006871892e-05 = 1.95176617114
shapes: () ()
types:  <class 'numpy.float32'> <class 'numpy.float64'>
log_rmse:     1.95177
log_rmse_np:  1.95176617114
difference:   3.78277884572e-08
======================

Compiling and fitting my model with this loss function never raises an exception. When I run the model with the 'adam' optimizer and a standard loss, everything is fine. But as soon as I use this custom loss, Keras keeps reporting a NaN loss:

Epoch 1/10000
 17/256 [>.............................] - ETA: 124s - loss: nan

I'm a bit stuck here... Am I doing something wrong?

Using TensorFlow 1.4 on Ubuntu 16.04.

Update:

Following Marcin Możejko's suggestion, I updated the code, but the training loss is still NaN:

def get_log_rmse(normalization_constant):
    def log_rmse(y_true, y_pred):
        d_i = (K.log(y_pred) -  K.log(y_true))
        loss1 = K.mean(K.square(d_i))
        loss2 = K.square(K.sum(K.flatten(d_i),axis=-1))/K.cast_to_floatx(2 * normalization_constant ** 2)
        loss = loss1 - loss2
        return loss
    return log_rmse

The model is then compiled with:

model.compile(optimizer='adam', loss=get_log_rmse(batch_size))

Update 2:

The model summary is as follows:

Layer (type)                 Output Shape              Param #   
=================================================================
input_2 (InputLayer)         (None, 160, 256, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 160, 256, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 160, 256, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 80, 128, 64)       0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 80, 128, 128)      73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 80, 128, 128)      147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 40, 64, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 40, 64, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 40, 64, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 40, 64, 256)       590080    
_________________________________________________________________
block3_conv4 (Conv2D)        (None, 40, 64, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 20, 32, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 20, 32, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 20, 32, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 20, 32, 512)       2359808   
_________________________________________________________________
block4_conv4 (Conv2D)        (None, 20, 32, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 10, 16, 512)       0         
_________________________________________________________________
conv2d_transpose_5 (Conv2DTr (None, 10, 16, 128)       1048704   
_________________________________________________________________
up_sampling2d_5 (UpSampling2 (None, 20, 32, 128)       0         
_________________________________________________________________
conv2d_transpose_6 (Conv2DTr (None, 20, 32, 64)        131136    
_________________________________________________________________
up_sampling2d_6 (UpSampling2 (None, 40, 64, 64)        0         
_________________________________________________________________
conv2d_transpose_7 (Conv2DTr (None, 40, 64, 32)        32800     
_________________________________________________________________
up_sampling2d_7 (UpSampling2 (None, 80, 128, 32)       0         
_________________________________________________________________
conv2d_transpose_8 (Conv2DTr (None, 80, 128, 16)       8208      
_________________________________________________________________
up_sampling2d_8 (UpSampling2 (None, 160, 256, 16)      0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 160, 256, 1)       401       
=================================================================
Total params: 11,806,401
Trainable params: 11,806,401
Non-trainable params: 0

Update 3:

Example y_true: [image]

Sample input image: [image]


2
This might be caused by the log: if y_pred or y_true is 0 you end up computing log(0), which is -inf. If you try np.log(0) - np.log(0) you get nan. - WellDone2094
Good point, but I don't think that's the cause here, since the data lies between 0 and 1, and the NaN loss persists even when I add 1 to both y_true and y_pred. - Raspel
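(For reference, a two-line NumPy check of the point raised in the first comment:)

import numpy as np
np.log(0.0)                  # -inf (with a divide-by-zero RuntimeWarning)
np.log(0.0) - np.log(0.0)    # -inf - (-inf) -> nan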
4 Answers

2
The problem lies in this part:

K.cast_to_floatx(K.int_shape(K.flatten(d_i))[0])

Since the loss function is compiled before any shape is fed to it, this expression evaluates to None, and that's where your error comes from. I tried setting batch_input_shape instead of input_shape, but that didn't work either (probably because of how Keras compiles the model). I suggest making this number a constant, in the following way:

def get_log_rmse(normalization_constant):
    def log_rmse(y_true, y_pred):
        d_i = (K.log(y_pred) -  K.log(y_true))
        loss1 = K.mean(K.square(d_i))
        loss2 = K.square(
            K.sum(
                K.flatten(d_i), axis=-1)) / K.cast_to_floatx(
                    2 * normalization_constant ** 2)
        loss = loss1 - loss2
        return loss
    return log_rmse

And then compile with:

model.compile(..., loss=get_log_rmse(normalization_constant))

I'm guessing that normalization_constant is equal to batch_size, but I'm not sure, so I left it generic.
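For reference, a minimal sketch (not part of the original answer, assuming the TF 1.x / Keras 2.x setup from the question) of why that index is None when the loss graph is built:

from keras import backend as K

y = K.placeholder(shape=(None, 160, 256, 1))  # batch dimension is unknown at graph-construction time
print(K.int_shape(K.flatten(y)))              # -> (None,), so the [0] element is None
# K.cast_to_floatx(None) goes through np.asarray(None, dtype='float32'), which on typical
# NumPy versions yields nan instead of raising, so the whole loss silently becomes nan.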
Update from the OP: following Marcin Możejko's suggestion I updated the loss exactly as shown in the question above and compiled with model.compile(optimizer='adam', loss=get_log_rmse(batch_size)), but unfortunately the training loss is still NaN. For reference, the model (whose summary is shown in the question) is defined as follows:

# (Imports as presumably used by the OP; `Deconv` appears to be an alias for
#  keras.layers.Conv2DTranspose, matching the conv2d_transpose layers in the summary.)
import keras
from keras.models import Model
from keras.layers import Conv2D, Conv2DTranspose as Deconv, UpSampling2D

input_shape = (160, 256, 3)
print('Input_shape: %s'%str(input_shape))
base_model = keras.applications.vgg19.VGG19(include_top=False, weights='imagenet', 
                               input_tensor=None, input_shape=input_shape, 
                               pooling=None, # None, 'avg', 'max'
                               classes=1000)
for i in range(5):
    base_model.layers.pop()
base_model = Model(inputs=base_model.input, outputs=base_model.get_layer('block4_pool').output)
print('VGG19 output_shape: ' + str(base_model.output_shape))

x = Deconv(128, kernel_size=(4, 4), strides=1, padding='same', activation='relu')(base_model.output)
x = UpSampling2D((2, 2))(x)
x = Deconv(64, kernel_size=(4, 4), strides=1, padding='same', activation='relu')(x)
x = UpSampling2D((2, 2))(x)
x = Deconv(32, kernel_size=(4, 4), strides=1, padding='same', activation='relu')(x)
x = UpSampling2D((2, 2))(x)
x = Deconv(16, kernel_size=(4, 4), strides=1, padding='same', activation='relu')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(1, kernel_size=(5, 5), strides=1, padding='same')(x)
model = Model(inputs=base_model.input, outputs=x)

Still getting NaN loss :( Any other ideas? - Raspel
Could you print an example of y_true? - Marcin Możejko
As an image, you mean? Sure! - Raspel
Are there any 0 values in it? - Marcin Możejko
Even after adding a sigmoid? - Marcin Możejko

1

Try fitting your model with a built-in loss for a few epochs. Then compile the model again with your own loss and continue training. This might help.
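A minimal sketch of that suggestion (my illustration, not from the answer; model, batch_size and get_log_rmse are the names from the question, x_train/y_train stand in for the training data):

model.compile(optimizer='adam', loss='mean_squared_error')     # warm up on a built-in loss
model.fit(x_train, y_train, batch_size=batch_size, epochs=3)

# Recompiling keeps the learned weights, so training continues from where it left off.
model.compile(optimizer='adam', loss=get_log_rmse(batch_size))
model.fit(x_train, y_train, batch_size=batch_size, epochs=10000)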


0

I had a similar loss function and was also getting nan as the loss.

import tensorflow as tf

def log_rsme(y_true, y_pred):
    loga = tf.math.log(y_true)
    logb = tf.math.log(y_pred)
    error = tf.math.sqrt(tf.math.square(loga - logb))
    return error

I fixed it like this:
def log_rsme(y_true, y_pred):
    loga = tf.math.log(y_true + 0.000001)
    logb = tf.math.log(y_pred + 0.000001)
    error = tf.math.sqrt(tf.math.square(loga - logb))
    return error

Since log is undefined for non-positive values, it's best to make sure the domain of log is never violated. In my case the last layer of the model had a relu activation, which guarantees non-negative values but still allows zero. To rule that case out, I added a small constant, 0.000001, and that solved the nan problem.
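A related variant (just a sketch of the same idea, not part of this answer), applied to the question's log loss and using Keras's built-in fuzz factor instead of a hand-picked constant:

from keras import backend as K

def log_rmse_safe(y_true, y_pred):
    # Clamp both tensors to at least K.epsilon() (1e-7 by default),
    # so log never sees zero or negative values.
    y_true = K.maximum(y_true, K.epsilon())
    y_pred = K.maximum(y_pred, K.epsilon())
    d_i = K.log(y_pred) - K.log(y_true)
    return K.mean(K.square(d_i))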


0

I ran into the same error while using the root mean square percentage error (RMSPE), with the formula: K.sqrt(K.mean(K.square((y_true - y_pred) / y_true)))

Solution:
I removed the denominator and trained for a few epochs. Then I stopped, switched back to the original formula, and re-ran the training. It started producing finite loss values.
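An alternative sketch (my own, not what this answer did) that keeps the denominator but adds Keras's fuzz factor so the division is always defined:

from keras import backend as K

def rmspe(y_true, y_pred):
    # K.epsilon() (1e-7 by default) keeps the division finite when y_true contains zeros.
    return K.sqrt(K.mean(K.square((y_true - y_pred) / (y_true + K.epsilon()))))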

