Keras NASNet training

I intend to do the following:
1. Train NASNet from scratch on a dataset
2. Retrain only the last layer of NASNet (transfer learning)
and compare their relative performance. From the documentation:
keras.applications.nasnet.NASNetLarge(input_shape=None, include_top=True, weights='imagenet', input_tensor=None, pooling=None, classes=1000)
But the documentation is a bit confusing.
Questions:
1. For transfer learning, do I set include_top = True and classes = (num_classes), freeze all the layers except the last one, and then train that?
2. Is it a requirement that input images have the specified shape? NASNet requires (331, 331, 3), but that is quite large, and I see ImageNet being trained with different sizes. Can I use smaller images such as (120, 120, 3) and replace the top layer? This would still be considered transfer learning, right? However, NASNet's last layer seems to be a special type of cell; how would I implement that?
3. If I want to train from scratch, can I confirm that I set include_top = False and add fully connected layers at the end?
It would be great if there were a tutorial showing how to train NASNet both from scratch and via transfer learning on a new dataset. I found a tutorial for ImageNet, but the author built the model layers himself instead of using keras.applications.
1 Answer


Here are the answers to your questions:

For transfer learning, do I set include_top = True and classes = 
(num_classes), freeze all the layers except the last one, then train 
that?

When loading any model you want to use for transfer learning, setting the include_top argument to False drops the model's fully connected output layer (the one used to make predictions), allowing you to add a new output layer of your own to train.
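A minimal sketch of that setup, using the smaller NASNetMobile variant for speed (the question uses NASNetLarge, which works the same way). The number of classes here is a hypothetical placeholder, and weights=None is used only to avoid the pretrained-weight download; in a real transfer-learning run you would pass weights="imagenet".

```python
# Sketch: load a NASNet base without its top, freeze it, and add a new head.
# num_classes is a hypothetical placeholder; use weights="imagenet" in practice.
from tensorflow.keras import layers, models
from tensorflow.keras.applications.nasnet import NASNetMobile

num_classes = 10  # hypothetical number of target classes

base = NASNetMobile(
    input_shape=(224, 224, 3),
    include_top=False,   # drop the 1000-class ImageNet output layer
    weights=None,        # weights="imagenet" for actual transfer learning
    pooling="avg",       # global average pooling after the last conv block
)
base.trainable = False   # freeze the pretrained feature extractor

model = models.Sequential([
    base,
    layers.Dense(num_classes, activation="softmax"),  # new trainable head
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

With the base frozen, only the new Dense layer's weights are updated during model.fit.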

Is it a requirement to have input images in the same shape as 
specified? NASNet requires (331,331,3) but that is quite large and i 
see imagenet being trained with diff sizes. Can I use smaller images 
such as (120,120,3) and replace the top layer? This would still be 
considered transfer learning right? However, the NASNet last layer 
seems to be a special type of cell, how would I implement that?

It is important to resize your images to the same input size the pretrained model (e.g. NASNet) was trained with; otherwise it will skew the results.
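As a sketch, resizing a batch of (120, 120, 3) images up to the (331, 331, 3) NASNetLarge expects can be done with tf.image.resize, followed by the nasnet module's own preprocess_input (which scales pixel values from [0, 255] to [-1, 1]):

```python
# Sketch: bring a hypothetical batch of small images to NASNetLarge's input size.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.nasnet import preprocess_input

batch = np.random.randint(0, 256, size=(4, 120, 120, 3)).astype("float32")

resized = tf.image.resize(batch, (331, 331))  # bilinear interpolation by default
ready = preprocess_input(resized.numpy())     # scales pixels to [-1, 1]

print(ready.shape)  # (4, 331, 331, 3)
```

Upscaling from 120 to 331 pixels will blur fine detail, so where possible it is better to start from source images at or above the target resolution.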

If I want to train from scratch, can I confirm that i set include_top = 
False, and add fully connected layers to the end?

The include_top argument has nothing to do with whether you train the entire model or freeze parts of it. Here is an example of how to train the whole model versus only part of it:

# Freeze every layer in the model:
for layer in model.layers:
    layer.trainable = False

# Or, if we want only the first 20 layers of the network to be non-trainable:
for layer in model.layers[:20]:
    layer.trainable = False
for layer in model.layers[20:]:
    layer.trainable = True
