Fluctuating validation loss and validation accuracy curves with a pretrained model

I am currently learning about neural networks, and I ran into a problem while trying to learn convolutional neural networks (CNNs). I am trying to train on spectrograms of music genres. My data consists of 27,000 spectrograms divided into 3 classes (genres), split 9:1 into training and validation.
Could anyone tell me why my validation loss/accuracy fluctuates? I am using MobileNetV2 from Keras with 3 dense layers attached on top. Here is a snippet of my code:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model

train_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input,
    validation_split=0.1)

train_generator = train_datagen.flow_from_dataframe(
    dataframe=traindf,
    directory="...",
    color_mode='rgb',
    x_col="ID",
    y_col="Class",
    subset="training",
    batch_size=32,
    seed=42,
    shuffle=True,
    class_mode="categorical",
    target_size=(64, 64))

valid_generator = train_datagen.flow_from_dataframe(
    dataframe=traindf,
    directory="...",
    color_mode='rgb',
    x_col="ID",
    y_col="Class",
    subset="validation",
    batch_size=32,
    seed=42,
    shuffle=True,
    class_mode="categorical",
    target_size=(64, 64))

base_model = MobileNetV2(weights='imagenet', include_top=False)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1025, activation='relu')(x)
x = Dense(1025, activation='relu')(x)
x = Dense(512, activation='relu')(x)
preds = Dense(3, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=preds)

model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

step_size_train = train_generator.n//train_generator.batch_size
step_size_valid = valid_generator.n//valid_generator.batch_size
history = model.fit_generator(
    generator=train_generator,
    steps_per_epoch=step_size_train,
    validation_data=valid_generator,
    validation_steps=step_size_valid,
    epochs=75)

Here are my plots of the validation loss and validation accuracy; they fluctuate far too much.
Is there any way to reduce the fluctuation or otherwise improve the curves? Is this an overfitting or an underfitting problem? I tried adding Dropout(), but it only made things worse. What do I need to do to fix this?
Thanks, Aquilla Setiawan Kanadi.

I think the picture of the loss is missing. Also, it would be a good idea to share the training logs with and without Dropout. If the data is not confidential, you could share it so that we can try to reproduce the problem on our end. Thanks! - user11530462
1 Answer


To begin with, the pictures of the validation loss and validation accuracy are missing.

To answer your question, the following are the probable reasons for the fluctuation in your validation loss and validation accuracy -

  1. You have added roughly 1.3 times the base model's trainable weights on top of it. (model trainable parameters 5115398 - base model trainable parameters 2223872 = 2891526 additional trainable parameters)

Program statistics:

import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense
from keras.utils.layer_utils import count_params

class color:
   PURPLE = '\033[95m'
   CYAN = '\033[96m'
   DARKCYAN = '\033[36m'
   BLUE = '\033[94m'
   GREEN = '\033[92m'
   YELLOW = '\033[93m'
   RED = '\033[91m'
   BOLD = '\033[1m'
   UNDERLINE = '\033[4m'
   END = '\033[0m'

base_model = tf.keras.applications.MobileNetV2(weights='imagenet', include_top=False)

#base_model.summary()
trainable_count = count_params(base_model.trainable_weights)
non_trainable_count = count_params(base_model.non_trainable_weights)
print("\n",color.BOLD + '  base_model Statistics !' + color.END)
print("Trainable Parameters :", color.BOLD + str(trainable_count) + color.END)
print("Non Trainable Parameters :", non_trainable_count,"\n")

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1025, activation='relu')(x)
x = Dense(1025, activation='relu')(x)
x = Dense(512, activation='relu')(x)
preds = Dense(3, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=preds)

#model.summary()
trainable_count = count_params(model.trainable_weights)
non_trainable_count = count_params(model.non_trainable_weights)
print(color.BOLD + '    model Statistics !' + color.END)
print("Trainable Parameters :", color.BOLD + str(trainable_count) + color.END)
print("Non Trainable Parameters :", non_trainable_count,"\n")

new_weights_added = count_params(model.trainable_weights) - count_params(base_model.trainable_weights)
print("Additional trainable weights added to the model excluding basel model trainable weights :", color.BOLD + str(new_weights_added) + color.END)

Output -

WARNING:tensorflow:`input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.

   base_model Statistics !
Trainable Parameters : 2223872
Non Trainable Parameters : 34112 

    model Statistics !
Trainable Parameters : 5115398
Non Trainable Parameters : 34112 

Additional trainable weights added to the model excluding base model trainable weights : 2891526
  2. You are training the full model weights (the MobileNetV2 weights plus the additional layer weights).

The solutions would be -

  1. Customize the additional layers so that the number of new trainable parameters is minimal compared to the base model's trainable parameters. You could add max pooling layers and fewer, smaller dense layers (see the sketch after this list).

  2. Freeze the base model with base_model.trainable = False and train only the new layers you have added on top of the MobileNetV2 layers.
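Putting these two points together, here is a minimal sketch of a frozen base model with a slimmer head. It assumes the same generators and 3 classes as in the question; the 256-unit layer size is illustrative, not a tuned value -

from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model

base_model = MobileNetV2(weights='imagenet', include_top=False)
base_model.trainable = False  # freeze all MobileNetV2 weights (point 2)

# Slimmer head (point 1): one small dense layer instead of three large ones
x = GlobalAveragePooling2D()(base_model.output)
x = Dense(256, activation='relu')(x)
preds = Dense(3, activation='softmax')(x)

model = Model(inputs=base_model.input, outputs=preds)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

This keeps the number of weights actually being fit (roughly 330 thousand, all in the head) far below the ~2.9 million extra parameters of the original three-dense-layer head.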

Alternatively, unfreeze the top layers of the base model (the MobileNetV2 layers) and set the bottom layers to be non-trainable. You can do that as shown below, where we freeze the model up to the 100th layer and leave the remaining layers trainable -

# Let's take a look to see how many layers are in the base model
print("Number of layers in the base model: ", len(base_model.layers))

# Fine-tune from this layer onwards
fine_tune_at = 100

# Freeze all the layers before the `fine_tune_at` layer
for layer in base_model.layers[:fine_tune_at]:
    layer.trainable = False

Output -

Number of layers in the base model:  155
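Note that changes to layer.trainable only take effect once the model is compiled again, and fine-tuning pretrained layers is usually done with a much lower learning rate so the ImageNet weights are not destroyed in the first few updates. A minimal sketch; the 1e-5 learning rate is a common starting point, not a tuned value -

from tensorflow.keras.optimizers import Adam

# Re-compile so the new trainable/frozen split takes effect,
# using a low learning rate for fine-tuning
model.compile(optimizer=Adam(learning_rate=1e-5),
              loss='categorical_crossentropy',
              metrics=['accuracy'])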
  • Train the model with hyperparameter tuning, for example over the learning rate, the batch size, and the sizes of the added dense layers.
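As one concrete example, here is a minimal sketch of a manual learning-rate sweep. It reuses train_generator and valid_generator from the question; make_model is a helper introduced here purely for illustration, and the candidate learning rates and epoch count are arbitrary -

from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

def make_model():
    # Same frozen-base architecture as sketched above
    base = MobileNetV2(weights='imagenet', include_top=False)
    base.trainable = False
    x = GlobalAveragePooling2D()(base.output)
    x = Dense(256, activation='relu')(x)
    preds = Dense(3, activation='softmax')(x)
    return Model(inputs=base.input, outputs=preds)

best_lr, best_acc = None, 0.0
for lr in [1e-3, 1e-4, 1e-5]:  # candidate learning rates
    model = make_model()
    model.compile(optimizer=Adam(learning_rate=lr),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    hist = model.fit(train_generator, validation_data=valid_generator,
                     epochs=5, verbose=0)
    acc = max(hist.history['val_accuracy'])
    if acc > best_acc:
        best_lr, best_acc = lr, acc

print("Best learning rate:", best_lr, "with val_accuracy:", best_acc)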
