I'm confused about the difference between `add_loss` and the conventional `loss` argument in `model.compile()`.
My code is as follows:
import numpy as np
from keras.models import Model
import keras.backend as K
from keras.layers import Dense, Input
input_place = Input(shape=(128,))
e_layer1 = Dense(64, activation='relu')(input_place)
e_layer2 = Dense(32, activation='relu')(e_layer1)
hidden = Dense(16, activation='relu')(e_layer2)
d_layer1 = Dense(32, activation='relu')(hidden)
d_layer2 = Dense(64, activation='relu')(d_layer1)
output_place = Dense(128, activation='sigmoid')(d_layer2)
model = Model(inputs=input_place, outputs=output_place)

# second loss: MSE between the intermediate encoder/decoder activations
loss = K.mean(K.square(d_layer1 - e_layer2), axis=-1)
model.add_loss(loss)

# first loss: MSE between input and reconstruction, passed via compile()
model.compile(optimizer='adam',
              loss='mse',
              metrics=['accuracy'])

input_data = np.random.randn(1, 128)
model.fit(input_data,
          input_data,
          epochs=5)
As shown above, I've made two loss functions: one is the conventional MSE loss, given to `model.compile()`, which computes the MSE between the input and the output; the other is also MSE-like, but it computes the MSE between the intermediate layers. The code runs, but I'm confused: with losses added in these two different ways, does my model clearly know what each of them is?
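To make my understanding of what happens concrete, here is a small NumPy-only sketch (no Keras) of how I assume the two terms would be combined into one scalar objective during training. The array names mirror the layers above but the values are random and purely illustrative:

```python
import numpy as np

def mse(a, b):
    # standard mean squared error, as with loss='mse' in compile()
    return np.mean(np.square(a - b))

rng = np.random.default_rng(0)
y_true = rng.standard_normal((1, 128))   # target (== input for an autoencoder)
y_pred = rng.standard_normal((1, 128))   # reconstruction from the output layer
e_layer2 = rng.standard_normal((1, 32))  # encoder activation (illustrative)
d_layer1 = rng.standard_normal((1, 32))  # decoder activation (illustrative)

reconstruction_loss = mse(y_pred, y_true)  # from compile(loss='mse')
symmetry_loss = mse(d_layer1, e_layer2)    # from model.add_loss(...)

# my assumption: the optimizer minimizes the sum of all loss terms
total_loss = reconstruction_loss + symmetry_loss
print(total_loss)
```

Is this summed-objective picture correct, or does Keras treat the two kinds of loss differently?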