I'm trying to replicate some of the examples from Neural Networks and Deep Learning in Keras, but I'm having trouble training a network based on the architecture from chapter 1. The goal is to classify handwritten digits from the MNIST dataset.
Architecture:
- 784 inputs (representing the 28 * 28 pixels in an MNIST image)
- a hidden layer of 30 neurons
- an output layer of 10 neurons
- weights and biases initialized from a Gaussian distribution with mean 0 and standard deviation 1
- mean squared error as the loss/cost function
- stochastic gradient descent as the optimizer
- learning rate = 3.0
- batch size = 10
- epochs = 30
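(As a sanity check on those sizes: the hidden layer has 784 * 30 weights + 30 biases = 23,550 parameters, and the output layer has 30 * 10 weights + 10 biases = 310, which matches the model summary in the output below.)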
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
from keras.initializers import RandomNormal
# import data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# input image dimensions
img_rows, img_cols = 28, 28
x_train = x_train.reshape(x_train.shape[0], img_rows * img_cols)
x_test = x_test.reshape(x_test.shape[0], img_rows * img_cols)
input_shape = (img_rows * img_cols,)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
num_classes = 10
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
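# e.g. to_categorical turns the label 5 into
# [0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
# a one-hot vector matching the 10-neuron output layer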
print('y_train shape:', y_train.shape)
# Construct model
# 784 * 30 * 10
# Normal distribution for weights/biases
# Stochastic Gradient Descent optimizer
# Mean squared error loss (cost function)
model = Sequential()
layer1 = Dense(30,
               input_shape=input_shape,
               kernel_initializer=RandomNormal(stddev=1),
               bias_initializer=RandomNormal(stddev=1))
model.add(layer1)
layer2 = Dense(10,
               kernel_initializer=RandomNormal(stddev=1),
               bias_initializer=RandomNormal(stddev=1))
model.add(layer2)
print('Layer 1 input shape: ', layer1.input_shape)
print('Layer 1 output shape: ', layer1.output_shape)
print('Layer 2 input shape: ', layer2.input_shape)
print('Layer 2 output shape: ', layer2.output_shape)
model.summary()
model.compile(optimizer=SGD(lr=3.0),
              loss='mean_squared_error',
              metrics=['accuracy'])
# Train
model.fit(x_train,
          y_train,
          batch_size=10,
          epochs=30,
          verbose=2)
# Run on test data and output results
result = model.evaluate(x_test,
                        y_test,
                        verbose=1)
print('Test loss: ', result[0])
print('Test accuracy: ', result[1])
Output (using Python 3.6 and the TensorFlow backend):
Using TensorFlow backend.
x_train shape: (60000, 784)
60000 train samples
10000 test samples
y_train shape: (60000, 10)
Layer 1 input shape: (None, 784)
Layer 1 output shape: (None, 30)
Layer 2 input shape: (None, 30)
Layer 2 output shape: (None, 10)
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 30)                23550
_________________________________________________________________
dense_2 (Dense)              (None, 10)                310
=================================================================
Total params: 23,860
Trainable params: 23,860
Non-trainable params: 0
_________________________________________________________________
Epoch 1/30
- 7s - loss: nan - acc: 0.0987
Epoch 2/30
- 7s - loss: nan - acc: 0.0987
(repeats like this for all 30 epochs)
Epoch 30/30
- 6s - loss: nan - acc: 0.0987
10000/10000 [==============================] - 0s 22us/step
Test loss: nan
Test accuracy: 0.098
As you can see, this network is not learning at all, and I'm not sure why. As far as I can tell, the shapes look fine. What is stopping the network from learning?
(By the way, I know that a cross-entropy loss and a softmax output layer would be better; however, based on the linked book, they don't appear to be necessary. The network implemented by hand in chapter 1 learns successfully, and I'm trying to replicate that before moving on.)
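For reference, the cross-entropy/softmax variant I'm setting aside for now would only change the output layer and the compile call. A minimal sketch, using standard Keras options and leaving everything else the same:
# softmax output layer instead of the default (linear) activation
layer2 = Dense(10,
               activation='softmax',
               kernel_initializer=RandomNormal(stddev=1),
               bias_initializer=RandomNormal(stddev=1))
model.add(layer2)
# cross-entropy loss instead of mean squared error
model.compile(optimizer=SGD(lr=3.0),
              loss='categorical_crossentropy',
              metrics=['accuracy'])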