TypeError: object of type 'NoneType' has no len() when using restore_best_weights=True in EarlyStopping


I am new to deep learning with Keras and am trying to use a pretrained model for binary classification. I am running the code on Google Colab, where the TensorFlow version is 2.2.0-rc2. Here is the model I am using.

vgg19_basemodel = tf.keras.applications.VGG19(include_top = False, weights='imagenet', input_shape=(IMSIZE,IMSIZE,3))
#vgg19_basemodel.summary()

x = vgg19_basemodel.output

x = tf.keras.layers.Conv2D(16, (3,3), activation='relu')(x)
x = tf.keras.layers.MaxPooling2D(2,2)(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(32, activation="relu")(x)
x = tf.keras.layers.Dropout(0.2)(x)
x = tf.keras.layers.Dense(1, activation="sigmoid")(x)

for layer in vgg19_basemodel.layers:
  layer.trainable = False

vgg19_model = tf.keras.Model(vgg19_basemodel.input, x)
vgg19_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=LR), loss='binary_crossentropy', metrics=['accuracy'])

#vgg19_model.summary()

Here is the custom callback I am using.
class myCallBack(tf.keras.callbacks.Callback):
  def on_epoch_end(self, epoch, logs=None):
    logs = logs or {}
    if (logs.get('loss') <= EXLOSS and logs.get('accuracy') >= EXACC and logs.get('val_accuracy') >= VALACC):
      print("\nCALLBACK: TRAINING LOSS {} reached.".format(EXLOSS))
      self.model.stop_training = True

ccall = myCallBack()

es = tf.keras.callbacks.EarlyStopping(monitor='loss', mode='min', min_delta=0.01, baseline = 0.01, patience=10, restore_best_weights=True)

I am training the model with the following:

d3_vgg19_history = vgg19_model.fit(d3_train_generator, 
                          epochs=EPOCHS,
                          validation_data=d3_test_generator, 
                          steps_per_epoch=d3_stepsize_train, 
                          validation_steps=d3_stepsize_test,
                          callbacks=[ccall, es]
                          )

When early stopping is not used, the custom callback causes no problems and stops training perfectly.

However, if I set restore_best_weights=True in early stopping, it produces the following error once epoch_number == patience.

If I set restore_best_weights=False instead, no problem occurs and training finishes successfully.

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-38-f6a9ab9579ae> in <module>()
      6                           steps_per_epoch=d3_stepsize_train,
      7                           validation_steps=d3_stepsize_test,
----> 8                           callbacks=[ccall, esd3]
      9                           )

4 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
     64   def _method_wrapper(self, *args, **kwargs):
     65     if not self._in_multi_worker_mode():  # pylint: disable=protected-access
---> 66       return method(self, *args, **kwargs)
     67 
     68     # Running inside `run_distribute_coordinator` already.

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    811           epoch_logs.update(val_logs)
    812 
--> 813         callbacks.on_epoch_end(epoch, epoch_logs)
    814         if self.stop_training:
    815           break

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/callbacks.py in on_epoch_end(self, epoch, logs)
    363     logs = self._process_logs(logs)
    364     for callback in self.callbacks:
--> 365       callback.on_epoch_end(epoch, logs)
    366 
    367   def on_train_batch_begin(self, batch, logs=None):

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/callbacks.py in on_epoch_end(self, epoch, logs)
   1483           if self.verbose > 0:
   1484             print('Restoring model weights from the end of the best epoch.')
-> 1485           self.model.set_weights(self.best_weights)
   1486 
   1487   def on_train_end(self, logs=None):

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in set_weights(self, weights)
   1517         expected_num_weights += 1
   1518 
-> 1519     if expected_num_weights != len(weights):
   1520       raise ValueError(
   1521           'You called `set_weights(weights)` on layer "%s" '

TypeError: object of type 'NoneType' has no len()

I have tested early stopping with other pretrained models as well, including VGG16, DenseNet201, ResNet, Xception, Inception, etc. The early-stopping problem persists: the same error appears whenever restore_best_weights is set to True. Thanks a lot for your help with this. Let me know if any other information is needed.

1 Answer


Found the "problem". In my case it makes sense that it was None, since no model better than the baseline was ever found. I removed "baseline = 1.0" and it works for me now.
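The failure mode can be sketched without TensorFlow: in TF 2.2, EarlyStopping only records `best_weights` when the monitored value improves on the best seen so far, and `baseline` seeds that best. If no epoch ever beats the baseline, `best_weights` stays None and the final restore crashes. A simplified, illustrative model of that bookkeeping (not the actual Keras internals):

```python
class EarlyStoppingSketch:
    """Simplified sketch of EarlyStopping's baseline/restore bookkeeping
    (illustrative only, not the real tf.keras implementation)."""

    def __init__(self, patience, baseline=None, restore_best_weights=False):
        self.patience = patience
        self.restore_best_weights = restore_best_weights
        # `baseline` seeds the best-so-far value; with monitor='loss',
        # an epoch must come in *below* it to count as an improvement.
        self.best = baseline if baseline is not None else float('inf')
        self.best_weights = None   # only assigned on an improvement
        self.wait = 0
        self.stopped = False

    def on_epoch_end(self, loss, weights):
        if loss < self.best:
            self.best, self.best_weights, self.wait = loss, weights, 0
            return
        self.wait += 1
        if self.wait >= self.patience:
            self.stopped = True
            if self.restore_best_weights and self.best_weights is None:
                # No epoch ever beat the baseline, so there is nothing
                # to restore -- the crash the question describes.
                raise TypeError("object of type 'NoneType' has no len()")


# An unbeatable baseline reproduces the failure:
es = EarlyStoppingSketch(patience=2, baseline=0.01, restore_best_weights=True)
try:
    for loss in [0.9, 0.8, 0.7]:        # never dips below 0.01
        es.on_epoch_end(loss, weights=[1, 2, 3])
except TypeError as e:
    print(e)                            # same message as in the traceback

# Without a baseline, the first epoch sets best_weights, so restore works:
es2 = EarlyStoppingSketch(patience=2, restore_best_weights=True)
for loss in [0.9, 0.95, 0.96]:
    es2.on_epoch_end(loss, weights=[1, 2, 3])
print(es2.best_weights)                 # [1, 2, 3]
```

So a `baseline` that the model can never reach leaves the callback with nothing to restore; dropping the argument (or choosing a value the model actually attains) avoids the error.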


Saved me a lot of time! I was using a realistic baseline. This cryptic message apparently relates to the baseline (even though the model got below it and reached epoch 100). After removing the baseline, the message disappeared (fingers crossed). - Dr_Zaszuś
