Keras: saving the model with early stopping


I am currently using early stopping in Keras as follows:

from sklearn.model_selection import train_test_split
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import EarlyStopping

X, y = load_data('train_data')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=12)

datagen = ImageDataGenerator(
    horizontal_flip=True,
    vertical_flip=True)

early_stopping_callback = EarlyStopping(monitor='val_loss', patience=epochs_to_wait_for_improve)
history = model.fit_generator(datagen.flow(X_train, y_train, batch_size=batch_size),
            steps_per_epoch=len(X_train) // batch_size,  # must be an integer
            validation_data=(X_test, y_test),
            epochs=n_epochs, callbacks=[early_stopping_callback])

However, when model.fit_generator finishes, it keeps the model as it is after the epochs_to_wait_for_improve epochs without improvement, whereas I want to save the model with the minimum val_loss. Does that make sense, and is it possible?


That is definitely possible. Just create your own checkpoint. See the answer here: https://dev59.com/01oU5IYBdhLWcg3wnXxO - Wilmar van Ommeren
Have you seen this? https://dev59.com/01oU5IYBdhLWcg3wnXxO - orabis
1 Answer


Yes, you can do this by adding one more callback, a ModelCheckpoint with save_best_only=True. Here is the code:

from keras.callbacks import EarlyStopping, ModelCheckpoint

early_stopping_callback = EarlyStopping(monitor='val_loss', patience=epochs_to_wait_for_improve)
# Writes the model to disk only when val_loss improves, so the saved file
# always holds the weights from the epoch with the lowest val_loss.
checkpoint_callback = ModelCheckpoint(model_name + '.h5', monitor='val_loss', verbose=1,
                                      save_best_only=True, mode='min')
history = model.fit_generator(datagen.flow(X_train, y_train, batch_size=batch_size),
            steps_per_epoch=len(X_train) // batch_size,
            validation_data=(X_test, y_test),
            epochs=n_epochs, callbacks=[early_stopping_callback, checkpoint_callback])
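
To make it clear why this answers the question, here is a minimal pure-Python sketch of the bookkeeping that ModelCheckpoint(save_best_only=True, mode='min') performs each epoch: it only "saves" when the monitored value improves on the best value seen so far. The save_fn callable is a hypothetical stand-in for writing the model file to disk; the val_loss numbers are made up for illustration.

```python
def track_best(val_losses, save_fn):
    """Invoke save_fn only on epochs where val_loss improves (mode='min')."""
    best = float('inf')
    for epoch, loss in enumerate(val_losses):
        if loss < best:           # improvement in the monitored quantity
            best = loss
            save_fn(epoch, loss)  # the checkpoint overwrites the previous best
    return best

saved = []
best = track_best([0.9, 0.7, 0.8, 0.5, 0.6],
                  lambda e, l: saved.append((e, l)))
# Only epochs 0, 1 and 3 trigger a save; the file on disk ends up holding
# the weights from epoch 3 (val_loss 0.5), even though training ran longer.
```

So even when EarlyStopping only halts training patience epochs after the best epoch, the checkpoint file still contains the best-val_loss weights. Note also that newer Keras versions offer EarlyStopping(..., restore_best_weights=True), which restores the best weights in memory at the end of training without a separate file.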
