Keras: How can I save a model and continue training?

56

I have a model that I trained for 40 epochs. I saved a checkpoint at each epoch, and I also saved the model with model.save(). The training code is:

from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense
from keras.callbacks import ModelCheckpoint

n_units = 1000
model = Sequential()
model.add(LSTM(n_units, input_shape=(None, vec_size), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units))
model.add(Dropout(0.2))
model.add(Dense(vec_size, activation='linear'))
model.compile(loss='mean_squared_error', optimizer='adam')
# define the checkpoint
filepath="word2vec-{epoch:02d}-{loss:.4f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
# fit the model
model.fit(x, y, epochs=40, batch_size=50, callbacks=callbacks_list)

However, when I load the model and try to train it again, it starts all over as if it had not been trained before. The loss does not start from where the last training left off.

What confuses me is that when I load the model, redefine the model structure, and use load_weights(), model.predict() works fine. Thus, I believe the model weights have been loaded:

model = Sequential()
model.add(LSTM(n_units, input_shape=(None, vec_size), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units))
model.add(Dropout(0.2))
model.add(Dense(vec_size, activation='linear'))
filename = "word2vec-39-0.0027.hdf5"
model.load_weights(filename)
model.compile(loss='mean_squared_error', optimizer='adam')

However, when I continue training with this model, the loss is as high as in the initial stage:

filepath="word2vec-{epoch:02d}-{loss:.4f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
# fit the model
model.fit(x, y, epochs=40, batch_size=50, callbacks=callbacks_list)

I searched for some examples of saving and loading models, here and here. However, none of them worked.


Update 1

I looked at this question, tried it, and it works:

model.save('partly_trained.h5')
del model
model = load_model('partly_trained.h5')

But when I close Python, reopen it, and run load_model again, it fails. The loss is as high as the initial state.


Update 2

I tried Yu-Yang's example code and it works. However, when I use my own code again, it still fails.

This is the result of the original training. The second epoch should start with a loss of 3.1***:

13700/13846 [============================>.] - ETA: 0s - loss: 3.0519
13750/13846 [============================>.] - ETA: 0s - loss: 3.0511
13800/13846 [============================>.] - ETA: 0s - loss: 3.0512Epoch 00000: loss improved from inf to 3.05101, saving model to LPT-00-3.0510.h5

13846/13846 [==============================] - 81s - loss: 3.0510    
Epoch 2/60

   50/13846 [..............................] - ETA: 80s - loss: 3.1754
  100/13846 [..............................] - ETA: 78s - loss: 3.1174
  150/13846 [..............................] - ETA: 78s - loss: 3.0745

I closed Python, reopened it, loaded the model with model = load_model("LPT-00-3.0510.h5"), and then continued training:

filepath="LPT-{epoch:02d}-{loss:.4f}.h5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
# fit the model
model.fit(x, y, epochs=60, batch_size=50, callbacks=callbacks_list)

The loss starts from 4.54:

Epoch 1/60
   50/13846 [..............................] - ETA: 162s - loss: 4.5451
   100/13846 [..............................] - ETA: 113s - loss: 4.3835

3
Did you call model.compile(optimizer='adam') after load_model()? If so, don't. Compiling the model again with optimizer='adam' resets the optimizer's internal state (in fact it creates a brand-new Adam optimizer instance); see the sketch after these comments. - Yu-Yang
2
Thanks for your answer. But no, I did not call model.compile again. After reopening Python, all I did was model = load_model('partly_trained.h5') followed by model.fit(x, y, epochs=20, batch_size=100). - David
1
I also tried redefining the model structure and using model.load_weights('checkpoint.hdf5') and model.compile(loss='categorical_crossentropy'). However, it raises an error saying that an optimizer must be provided. - David
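In code, the pitfall Yu-Yang describes looks like this (a minimal sketch, assuming the 'partly_trained.h5' file from Update 1):

from keras.models import load_model

model = load_model('partly_trained.h5')  # restores weights AND optimizer state

# Don't do this -- compiling again creates a fresh Adam instance and
# discards the optimizer state that load_model() just restored:
# model.compile(loss='mean_squared_error', optimizer='adam')

model.fit(x, y, epochs=20, batch_size=100)  # x, y as in the question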
8 Answers

63

Since it's hard to tell what is going wrong from the description alone, I created a toy example from your code, and it seems to work fine.

import numpy as np
from numpy.testing import assert_allclose
from keras.models import Sequential, load_model
from keras.layers import LSTM, Dropout, Dense
from keras.callbacks import ModelCheckpoint

vec_size = 100
n_units = 10

x_train = np.random.rand(500, 10, vec_size)
y_train = np.random.rand(500, vec_size)

model = Sequential()
model.add(LSTM(n_units, input_shape=(None, vec_size), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units))
model.add(Dropout(0.2))
model.add(Dense(vec_size, activation='linear'))
model.compile(loss='mean_squared_error', optimizer='adam')

# define the checkpoint
filepath = "model.h5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]

# fit the model
model.fit(x_train, y_train, epochs=5, batch_size=50, callbacks=callbacks_list)

# load the model
new_model = load_model(filepath)
assert_allclose(model.predict(x_train),
                new_model.predict(x_train),
                1e-5)

# fit the model
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
new_model.fit(x_train, y_train, epochs=5, batch_size=50, callbacks=callbacks_list)

The loss keeps decreasing after the model is loaded. (Restarting Python also causes no problem.)

Using TensorFlow backend.
Epoch 1/5
500/500 [==============================] - 2s - loss: 0.3216     Epoch 00000: loss improved from inf to 0.32163, saving model to model.h5
Epoch 2/5
500/500 [==============================] - 0s - loss: 0.2923     Epoch 00001: loss improved from 0.32163 to 0.29234, saving model to model.h5
Epoch 3/5
500/500 [==============================] - 0s - loss: 0.2542     Epoch 00002: loss improved from 0.29234 to 0.25415, saving model to model.h5
Epoch 4/5
500/500 [==============================] - 0s - loss: 0.2086     Epoch 00003: loss improved from 0.25415 to 0.20860, saving model to model.h5
Epoch 5/5
500/500 [==============================] - 0s - loss: 0.1725     Epoch 00004: loss improved from 0.20860 to 0.17249, saving model to model.h5

Epoch 1/5
500/500 [==============================] - 0s - loss: 0.1454     Epoch 00000: loss improved from inf to 0.14543, saving model to model.h5
Epoch 2/5
500/500 [==============================] - 0s - loss: 0.1289     Epoch 00001: loss improved from 0.14543 to 0.12892, saving model to model.h5
Epoch 3/5
500/500 [==============================] - 0s - loss: 0.1169     Epoch 00002: loss improved from 0.12892 to 0.11694, saving model to model.h5
Epoch 4/5
500/500 [==============================] - 0s - loss: 0.1097     Epoch 00003: loss improved from 0.11694 to 0.10971, saving model to model.h5
Epoch 5/5
500/500 [==============================] - 0s - loss: 0.1057     Epoch 00004: loss improved from 0.10971 to 0.10570, saving model to model.h5

By the way, redefining the model followed by load_weights() will definitely not work, because save_weights() and load_weights() do not save/load the optimizer.
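For example (a minimal sketch; the file names are arbitrary):

from keras.models import load_model

model.save('full_model.h5')             # architecture + weights + optimizer state
model.save_weights('weights_only.h5')   # weights only; the optimizer state is lost

resumed = load_model('full_model.h5')   # can resume training where it left off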


I tried your toy code and it works. But when I went back to my own code, it still failed... I think I did exactly the same as your example. I don't understand why. Please see my update for details. - David
Just a wild guess: are you using the same (x, y) before and after loading the model? - Yu-Yang
Yes. I literally closed Python, reopened it, and reloaded the data. - David
9
So what was the problem? - Leonid Dashko
8
@David, please tell us what the problem was. - shivam13juna

7
I compared my code with this example, carefully checking line by line and running it again. After a whole day, I finally found what was wrong.

When building the char-to-int mapping, I used:

# title_str_reduced is a string
chars = list(set(title_str_reduced))
# make char to int index mapping
char2int = {}
for i in range(len(chars)):
    char2int[chars[i]] = i    

A set is an unordered data structure. In Python, when a set is converted to a list, the order is arbitrary; because of Python's hash randomization, it can even differ between interpreter sessions. So every time I reopened Python, my char2int dictionary was randomized. I fixed my code by adding sorted():

chars = sorted(list(set(title_str_reduced)))

This forces the conversion into a fixed order.
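A quick way to see the problem (string hashes are randomized per interpreter session, so set order can vary between runs):

# Run this in two separate Python sessions and compare the output:
chars = list(set("hello world"))    # order can differ from session to session
print(chars)

chars = sorted(set("hello world"))  # deterministic order in every session
char2int = {c: i for i, c in enumerate(chars)}
print(char2int)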

Thank you. I had exactly the same problem: having to start from scratch after every restart, and incredibly even after .save and .load within the same session. I didn't figure it out myself, but after days of being lost, finding your answer saved me! :thanks: - devplayer

5
The accepted answer is not correct; the real problem is more subtle. When you create a ModelCheckpoint(), check its best value:
cp1 = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
print(cp1.best)

You will find that it is set to np.inf, which unfortunately is not your last best value from when you stopped training. So when you retrain and recreate the ModelCheckpoint(), calling fit will appear to work if the loss happens to be lower than the previously known value, but on more complex problems you can end up saving a bad model and losing the best one.

You can fix this by overwriting the cp.best parameter as shown below:

import numpy as np
from numpy.testing import assert_allclose
from keras.models import Sequential, load_model
from keras.layers import LSTM, Dropout, Dense
from keras.callbacks import ModelCheckpoint

vec_size = 100
n_units = 10

x_train = np.random.rand(500, 10, vec_size)
y_train = np.random.rand(500, vec_size)

model = Sequential()
model.add(LSTM(n_units, input_shape=(None, vec_size), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units))
model.add(Dropout(0.2))
model.add(Dense(vec_size, activation='linear'))
model.compile(loss='mean_squared_error', optimizer='adam')

# define the checkpoint
filepath = "model.h5"
cp1 = ModelCheckpoint(filepath=filepath, monitor='loss', save_best_only=True, verbose=1, mode='min')
callbacks_list = [cp1]

# fit the model
model.fit(x_train, y_train, epochs=5, batch_size=50, shuffle=True, validation_split=0.1, callbacks=callbacks_list)

# load the model
new_model = load_model(filepath)
#assert_allclose(model.predict(x_train),new_model.predict(x_train), 1e-5)
score = model.evaluate(x_train, y_train, batch_size=50)
cp1 = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
cp1.best = score # <== ****THIS IS THE KEY **** See source for  ModelCheckpoint

# fit the model
callbacks_list = [cp1]
new_model.fit(x_train, y_train, epochs=5, batch_size=50, callbacks=callbacks_list)

5

The answer above uses Tensorflow 1.x. Here is an updated version using Tensorflow 2.x.

import numpy as np
from numpy.testing import assert_allclose
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import LSTM, Dropout, Dense
from tensorflow.keras.callbacks import ModelCheckpoint

vec_size = 100
n_units = 10

x_train = np.random.rand(500, 10, vec_size)
y_train = np.random.rand(500, vec_size)

model = Sequential()
model.add(LSTM(n_units, input_shape=(None, vec_size), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units))
model.add(Dropout(0.2))
model.add(Dense(vec_size, activation='linear'))
model.compile(loss='mean_squared_error', optimizer='adam')

# define the checkpoint
filepath = "model.h5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]

# fit the model
model.fit(x_train, y_train, epochs=5, batch_size=50, callbacks=callbacks_list)

# load the model
new_model = load_model("model.h5")
assert_allclose(model.predict(x_train),
                new_model.predict(x_train),
                1e-5)

# fit the model
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
new_model.fit(x_train, y_train, epochs=5, batch_size=50, callbacks=callbacks_list)

This code raises an error on my model because model.predict(x_train) and new_model.predict(x_train) are not equal. The same problem occurs in another case as well. I actually used a different model setup, with simple Conv2D, Flatten, Dense and MaxPooling2D layers. If that is the problem, what do I need to do? - Rovetown

3

I think you can write

model.save('partly_trained.h5' )

and

model = load_model('partly_trained.h5')

instead of

model = Sequential()
model.add(LSTM(n_units, input_shape=(None, vec_size), return_sequences=True))    
model.add(Dropout(0.2)) 
model.add(LSTM(n_units, return_sequences=True))  
model.add(Dropout(0.2)) 
model.add(LSTM(n_units))
model.add(Dropout(0.2))
model.add(Dense(vec_size, activation='linear')) 
model.compile(loss='mean_squared_error', optimizer='adam')

and then continue training, because, as you can read in the documentation, model.save stores both the model architecture and the weights.


This worked for me. It is a bit confusing that it restarts at epoch 1, but its initial accuracy and loss are consistent with where the last training run (from the last checkpoint) ended. So if that matters, you may want to reduce the number of epochs to reflect this - I haven't found a way to specify "start at epoch X" - but I think it's largely cosmetic. - Brad
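For what it's worth, fit() does accept an initial_epoch argument that handles this (a minimal sketch, assuming a model saved after 40 epochs as in the question):

model = load_model('partly_trained.h5')
# Report progress as epochs 41..60, training for 20 more epochs:
model.fit(x, y, epochs=60, initial_epoch=40, batch_size=50)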

0

Since Keras and Tensorflow are now bundled together, you can use the newer TensorFlow SavedModel format, which stores all model information, including the optimizer and its state (from the docs, emphasis mine):

You can save an entire model to a single artifact. It will include:

  • The model's architecture/config
  • The model's weight values (which were learned during training)
  • The model's compilation information (if compile() was called)
  • The optimizer and its state, if any (this enables you to restart training where you left off)

APIs

So once you have saved your model this way, you can load it and resume training: it will pick up where it left off.
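For instance (a minimal sketch, assuming TF 2.x; the directory name is arbitrary):

import tensorflow as tf

model.save('my_model')  # no .h5 suffix -> TensorFlow SavedModel directory

# ...later, possibly in a fresh Python session:
restored = tf.keras.models.load_model('my_model')
restored.fit(x_train, y_train, epochs=5)  # resumes with the saved optimizer state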


0

Links to a solution are welcome, but please ensure your answer is useful without it: add context around the link so other users will have some idea what it is and why it's there, then quote the most relevant part of the page you're linking to in case the target page is unavailable. Answers that are little more than a link may be deleted. - mrun
Thanks for the suggestion, I will keep it in mind. - a11apurva

0
Say you have code like this:
model = some_model_you_made(input_img) # you compiled your model in this 
model.summary()

model_checkpoint = ModelCheckpoint('yours.h5', monitor='val_loss', verbose=1, save_best_only=True)

model_json = model.to_json()
with open("yours.json", "w") as json_file:
    json_file.write(model_json)

model.fit_generator(#stuff...) # or model.fit(#stuff...)

Now change your code to this:

from keras.models import model_from_json  # needed for reloading below

model = some_model_you_made(input_img)  # same model here
model.summary()

model_checkpoint = ModelCheckpoint('yours.h5', monitor='val_loss', verbose=1, save_best_only=True)  # same checkpoint

model_json = model.to_json()
with open("yours.json", "w") as json_file:
    json_file.write(model_json)

with open('yours.json', 'r') as f:
    old_model = model_from_json(f.read()) # open the model you just saved (same as your last train) with a different name

old_model.load_weights('yours.h5') # the model checkpoint you trained before
old_model.compile(#stuff...) # need to compile again (exactly like the last compile)

# now start training with the checkpoint...
old_model.fit_generator(#same stuff like the last train) # or model.fit(#stuff...)
