Convolutional neural network seems to be randomly guessing

I'm currently trying to build a race-recognition program using a convolutional neural network. I'm feeding in a 200px x 200px version of the UTKFaceRegonition dataset (I can put my dataset on Google Drive if you want to take a look). I'm using keras and tensorflow with 8 different classes (4 races * 2 genders), each with roughly 700 images, though I have also run it with 1000 per class. The problem is that when I run the network, the best accuracy I get is about 13.5%, with validation accuracy around 11-12.5% and a loss of about 2.079-2.081, with no improvement even after 50 epochs. My current hypothesis is that it is guessing at random / failing to learn, since 1/8 = 12.5% is roughly what it gets, and on another 3-class model I made it sat at about 33%.
I've noticed the validation accuracy differs on the first and sometimes the second epoch, but after that it stays constant. I've increased the pixel resolution, changed the number of layers, the types of layers and the number of neurons per layer, tried optimizers (sgd at the normal lr as well as at very large and small lr (0.1 and 10^-6)), and tried different loss functions such as KLDivergence, but nothing seems to have any effect except KLDivergence, which did well on one run (about 16%) but then flopped again. Some ideas I've had are that there may be too much noise in the dataset, or that it has something to do with the number of dense layers, but honestly I don't know why it isn't learning.
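For context, the optimizer/loss experiments mentioned above would look roughly like this in Keras. This is a hedged sketch, not the asker's actual code: only the learning rates (0.1 and 1e-6) and the KLDivergence loss come from the text, and `model` refers to the network built further down.

import tensorflow as tf

# Hypothetical sketch of the optimizer experiments described above; everything
# except the learning rates mirrors the compile call used further down.
sgd_large_lr = tf.keras.optimizers.SGD(learning_rate=0.1)
sgd_small_lr = tf.keras.optimizers.SGD(learning_rate=1e-6)

model.compile(optimizer=sgd_large_lr,
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['accuracy'])

# Trying tf.keras.losses.KLDivergence() instead would require one-hot targets
# (it compares probability distributions), e.g. y_onehot = tf.one_hot(y, 8).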
Here is the code for making the tensors.
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import os
import cv2
import random
import pickle

WIDTH_SIZE = 200
HEIGHT_SIZE = 200

CATEGORIES = []
for CATEGORY in os.listdir('./TRAINING'):
    CATEGORIES.append(CATEGORY)
DATADIR = "./TRAINING"
training_data = []
def create_training_data():
    for category in CATEGORIES:
        path = os.path.join(DATADIR, category)
        class_num = CATEGORIES.index(category)
        for img in os.listdir(path)[:700]:
            try:
                img_array = cv2.imread(os.path.join(path,img), cv2.IMREAD_COLOR)
                new_array = cv2.resize(img_array,(WIDTH_SIZE,HEIGHT_SIZE))
                training_data.append([new_array,class_num])
            except Exception as error:
                print(error)

create_training_data()

random.shuffle(training_data)
X = []
y = []

for features, label in training_data:
    X.append(features)
    y.append(label)

X = np.array(X).reshape(-1, WIDTH_SIZE, HEIGHT_SIZE, 3)
y = np.array(y)

pickle_out = open("X.pickle", "wb")
pickle.dump(X, pickle_out)
pickle_out = open("y.pickle", "wb")
pickle.dump(y, pickle_out)

Here is my code for building the model.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D
import pickle

pickle_in = open("X.pickle","rb")
X = pickle.load(pickle_in)
pickle_in = open("y.pickle","rb")
y = pickle.load(pickle_in)
X = X/255.0

model = Sequential()
model.add(Conv2D(256, (2,2), activation = 'relu', input_shape = X.shape[1:]))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(256, (2,2), activation = 'relu'))
model.add(Conv2D(256, (2,2), activation = 'relu'))
model.add(Conv2D(256, (2,2), activation = 'relu'))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(256, (2,2), activation = 'relu'))
model.add(Conv2D(256, (2,2), activation = 'relu'))
model.add(Dropout(0.4))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(256, (2,2), activation = 'relu'))
model.add(Conv2D(256, (2,2), activation = 'relu'))
model.add(Dropout(0.4))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(256, (2,2), activation = 'relu'))
model.add(Conv2D(256, (2,2), activation = 'relu'))
model.add(Dropout(0.4))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Flatten())
model.add(Dense(8, activation="softmax"))

model.compile(optimizer='adam',loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),metrics=['accuracy'])

model.fit(X, y, batch_size=16,epochs=100,validation_split=.1)

Here is a log of 10 epochs from one of my runs.
Epoch 1/100
5040/5040 [==============================] - 55s 11ms/sample - loss: 2.0803 - accuracy: 0.1226 - val_loss: 2.0796 - val_accuracy: 0.1250
Epoch 2/100
5040/5040 [==============================] - 53s 10ms/sample - loss: 2.0797 - accuracy: 0.1147 - val_loss: 2.0798 - val_accuracy: 0.1161
Epoch 3/100
5040/5040 [==============================] - 53s 10ms/sample - loss: 2.0797 - accuracy: 0.1190 - val_loss: 2.0800 - val_accuracy: 0.1161
Epoch 4/100
5040/5040 [==============================] - 53s 11ms/sample - loss: 2.0797 - accuracy: 0.1173 - val_loss: 2.0799 - val_accuracy: 0.1107
Epoch 5/100
5040/5040 [==============================] - 52s 10ms/sample - loss: 2.0797 - accuracy: 0.1183 - val_loss: 2.0802 - val_accuracy: 0.1107
Epoch 6/100
5040/5040 [==============================] - 52s 10ms/sample - loss: 2.0797 - accuracy: 0.1226 - val_loss: 2.0801 - val_accuracy: 0.1107
Epoch 7/100
5040/5040 [==============================] - 52s 10ms/sample - loss: 2.0797 - accuracy: 0.1238 - val_loss: 2.0803 - val_accuracy: 0.1107
Epoch 8/100
5040/5040 [==============================] - 54s 11ms/sample - loss: 2.0797 - accuracy: 0.1169 - val_loss: 2.0802 - val_accuracy: 0.1107
Epoch 9/100
5040/5040 [==============================] - 52s 10ms/sample - loss: 2.0797 - accuracy: 0.1212 - val_loss: 2.0803 - val_accuracy: 0.1107
Epoch 10/100
5040/5040 [==============================] - 53s 11ms/sample - loss: 2.0797 - accuracy: 0.1177 - val_loss: 2.0802 - val_accuracy: 0.1107

So yeah, any help as to why my network seems to just be guessing? Thanks!


Can you explain why you use a 2x2 kernel, or what TensorFlow does in that case? - SajanGohil
Did my answer solve your problem? If so, please consider accepting it. Cheers! - Lukasz Tracewski
1 Answer

The problem lies in the design of your network.
Normally in the initial layers you want to learn high-level features, using larger kernels with odd sizes. With 2x2 kernels you are currently just interpolating neighbouring pixels. Why odd sizes? Read e.g. here.
The number of filters typically grows from small (e.g. 16, 32) to larger values as you go deeper into the network; in your network every layer learns the same number of filters. The reasoning is that the deeper you go, the more fine-grained the features you want to learn - hence the increasing number of filters.
In your ANN each layer also cuts valuable information out of the image (by default you are using valid padding).
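To make the padding point concrete, here is a small illustrative sketch (not part of the original answer) that compares the output shape of a single Conv2D layer under 'valid' and 'same' padding on a 200x200 input:

import tensorflow as tf

# Illustrative only: 'valid' padding trims the borders on every layer,
# while 'same' padding preserves the spatial size.
inp = tf.keras.Input(shape=(200, 200, 3))
valid_out = tf.keras.layers.Conv2D(16, (5, 5), padding='valid')(inp)
same_out = tf.keras.layers.Conv2D(16, (5, 5), padding='same')(inp)

print(valid_out.shape)  # (None, 196, 196, 16)
print(same_out.shape)   # (None, 200, 200, 16)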
Here's a very basic network that gets me 95%+ training accuracy after 40 seconds and 10 epochs:
import pickle
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D

pickle_in = open("X.pickle","rb")
X = pickle.load(pickle_in)
pickle_in = open("y.pickle","rb")
y = pickle.load(pickle_in)
X = X/255.0

model = Sequential()
model.add(Conv2D(16, (5,5), activation = 'relu', input_shape = X.shape[1:], padding='same'))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(32, (3,3), activation = 'relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(64, (3,3), activation = 'relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Flatten())
model.add(Dense(512))
model.add(Dense(8, activation='softmax'))

model.compile(optimizer='adam',loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),metrics=['accuracy'])
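The answer does not show the training call; presumably it is the same fit() as in the question. The line below is an assumption that matches the 5040/560 train/validation split in the log further down:

# Assumed training call (not shown in the original answer); validation_split=.1
# reproduces the 5040/560 split in the log below.
model.fit(X, y, batch_size=16, epochs=10, validation_split=.1)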

The architecture:

Model: "sequential_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_19 (Conv2D)           (None, 200, 200, 16)      1216      
_________________________________________________________________
max_pooling2d_14 (MaxPooling (None, 100, 100, 16)      0         
_________________________________________________________________
conv2d_20 (Conv2D)           (None, 100, 100, 32)      4640      
_________________________________________________________________
max_pooling2d_15 (MaxPooling (None, 50, 50, 32)        0         
_________________________________________________________________
conv2d_21 (Conv2D)           (None, 50, 50, 64)        18496     
_________________________________________________________________
max_pooling2d_16 (MaxPooling (None, 25, 25, 64)        0         
_________________________________________________________________
flatten_4 (Flatten)          (None, 40000)             0         
_________________________________________________________________
dense_7 (Dense)              (None, 512)               20480512  
_________________________________________________________________
dense_8 (Dense)              (None, 8)                 4104      
=================================================================
Total params: 20,508,968
Trainable params: 20,508,968
Non-trainable params: 0

Training:

Train on 5040 samples, validate on 560 samples
Epoch 1/10
5040/5040 [==============================] - 7s 1ms/sample - loss: 2.2725 - accuracy: 0.1897 - val_loss: 1.8939 - val_accuracy: 0.2946
Epoch 2/10
5040/5040 [==============================] - 6s 1ms/sample - loss: 1.7831 - accuracy: 0.3375 - val_loss: 1.8658 - val_accuracy: 0.3179
Epoch 3/10
5040/5040 [==============================] - 6s 1ms/sample - loss: 1.4857 - accuracy: 0.4623 - val_loss: 1.9507 - val_accuracy: 0.3357
Epoch 4/10
5040/5040 [==============================] - 6s 1ms/sample - loss: 1.1294 - accuracy: 0.6028 - val_loss: 2.1745 - val_accuracy: 0.3250
Epoch 5/10
5040/5040 [==============================] - 6s 1ms/sample - loss: 0.8060 - accuracy: 0.7179 - val_loss: 3.1622 - val_accuracy: 0.3000
Epoch 6/10
5040/5040 [==============================] - 6s 1ms/sample - loss: 0.5574 - accuracy: 0.8169 - val_loss: 3.7494 - val_accuracy: 0.2839
Epoch 7/10
5040/5040 [==============================] - 6s 1ms/sample - loss: 0.3756 - accuracy: 0.8813 - val_loss: 4.9125 - val_accuracy: 0.2643
Epoch 8/10
5040/5040 [==============================] - 6s 1ms/sample - loss: 0.3001 - accuracy: 0.9036 - val_loss: 5.6300 - val_accuracy: 0.2821
Epoch 9/10
5040/5040 [==============================] - 6s 1ms/sample - loss: 0.2345 - accuracy: 0.9337 - val_loss: 5.7263 - val_accuracy: 0.2679
Epoch 10/10
5040/5040 [==============================] - 6s 1ms/sample - loss: 0.1549 - accuracy: 0.9581 - val_loss: 7.3682 - val_accuracy: 0.2732

As you can see, the validation score is terrible, but the point was to demonstrate that a poor architecture can prevent training altogether.
