How do I format input and output shapes for a convolutional (1D) Keras neural network? (Python)

I am new to deep learning, the Keras API, and convolutional neural networks, so apologies in advance for any naive mistakes. I am trying to build a simple convolutional neural network for classification. The input data X has 286 samples, each with 500 time points of 4 dimensions. The dimensions are one-hot encodings of categorical variables. I wasn't sure what to do for Y, so I just ran some clustering on the samples and one-hot encoded the clusters so I would have something to experiment with while modeling. The target data Y has 286 samples with one-hot encodings of 6 categories. My end goal is just to get it running so I can figure out how to adapt it to an actually useful learning problem and use the hidden layers for feature extraction.
My problem is that I can't get the dimensions of the last layer to match.
The model I made performs the following steps:
(1) Input data
(2) Convolutional layer
(3) Max pooling layer
(4) Dropout regularization
(5) Large fully connected layer
(6) Output layer
import tensorflow as tf
import numpy as np
# Data Description
print(X[0,:])
# [[0 0 1 0]
#  [0 0 1 0]
#  [0 1 0 0]
#  ..., 
#  [0 0 1 0]
#  [0 0 1 0]
#  [0 0 1 0]]
print(Y[0,:])
# [0 0 0 0 0 1]
X.shape, Y.shape
# ((286, 500, 4), (286, 6))

# Tensorboard callback
tensorboard= tf.keras.callbacks.TensorBoard()

# Build the model
# Input Layer taking in 500 time points with 4 dimensions
input_layer = tf.keras.layers.Input(shape=(500,4), name="sequence")
# 1 Dimensional Convolutional layer with 320 filters and a kernel size of 26 
conv_layer = tf.keras.layers.Conv1D(320, 26, strides=1, activation="relu")(input_layer)
# Maxpooling layer 
maxpool_layer = tf.keras.layers.MaxPooling1D(pool_size=13, strides=13)(conv_layer)
# Dropout regularization
drop_layer = tf.keras.layers.Dropout(0.3)(maxpool_layer)
# Fully connected layer
dense_layer = tf.keras.layers.Dense(512, activation='relu')(drop_layer)
# Softmax activation to get probabilities for output layer
activation_layer = tf.keras.layers.Activation("softmax")(dense_layer)
# Output layer with probabilities
output = tf.keras.layers.Dense(num_classes)(activation_layer)
# Build model
model = tf.keras.models.Model(inputs=input_layer, outputs=output, name="conv_model")
model.compile(loss="categorical_crossentropy", optimizer="adam", callbacks=[tensorboard])
model.summary()
# _________________________________________________________________
# Layer (type)                 Output Shape              Param #   
# =================================================================
# sequence (InputLayer)        (None, 500, 4)            0         
# _________________________________________________________________
# conv1d_9 (Conv1D)            (None, 475, 320)          33600     
# _________________________________________________________________
# max_pooling1d_9 (MaxPooling1 (None, 36, 320)           0         
# _________________________________________________________________
# dropout_9 (Dropout)          (None, 36, 320)           0         
# _________________________________________________________________
# dense_16 (Dense)             (None, 36, 512)           164352    
# _________________________________________________________________
# activation_7 (Activation)    (None, 36, 512)           0         
# _________________________________________________________________
# dense_17 (Dense)             (None, 36, 6)             3078      
# =================================================================
# Total params: 201,030
# Trainable params: 201,030
# Non-trainable params: 0
model.fit(X,Y, batch_size=128, epochs=100)
# ValueError: Error when checking target: expected dense_17 to have shape (None, 36, 6) but got array with shape (286, 6, 1)
1 Answer


The output of Conv1D is a 3-rank tensor (batch, observations, kernels):

> from tensorflow.keras.layers import Input, Conv1D, MaxPooling1D
> from tensorflow.keras import backend as K
> x = Input(shape=(500, 4))
> y = Conv1D(320, 26, strides=1, activation="relu")(x)
> y = MaxPooling1D(pool_size=13, strides=13)(y)
> print(K.int_shape(y))
(None, 36, 320)
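
These sizes follow from the standard "valid" convolution arithmetic (a quick sanity check; "valid" is the Keras default padding):

    # Conv1D, kernel_size=26, strides=1:      floor((500 - 26) / 1) + 1 = 475
    # MaxPooling1D, pool_size=13, strides=13: floor((475 - 13) / 13) + 1 = 36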

However, Dense layers expect a 2-rank tensor (batch, features). Inserting a Flatten, GlobalAveragePooling1D or GlobalMaxPooling1D layer between the convolutions and the dense layers is sufficient to fix this (a complete sketch follows the list):

  1. Flatten will reshape a (batch, observations, kernels) tensor into a (batch, observations * kernels) one:

    ....
    y = Conv1D(320, 26, strides=1, activation="relu")(x)
    y = MaxPooling1D(pool_size=13, strides=13)(y)
    y = Flatten()(y)
    y = Dropout(0.3)(y)
    y = Dense(512, activation='relu')(y)
    ....
    
  2. GlobalAveragePooling1D will average all observations in a (batch, observations, kernels) tensor, resulting in a (batch, kernels) one, so no Flatten is needed afterwards:

    ....
    y = Conv1D(320, 26, strides=1, activation="relu")(x)
    y = GlobalAveragePooling1D()(y)
    y = Dropout(0.3)(y)
    y = Dense(512, activation='relu')(y)
    ....
    

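For reference, here is a minimal end-to-end sketch of the Flatten fix applied to the model from the question (assuming num_classes = 6 to match Y; note I also moved the softmax onto the output Dense, since the original applied it before the final layer, which would leave the actual output unnormalized):

    import tensorflow as tf

    num_classes = 6  # assumption: matches Y.shape[1]

    input_layer = tf.keras.layers.Input(shape=(500, 4), name="sequence")
    x = tf.keras.layers.Conv1D(320, 26, strides=1, activation="relu")(input_layer)
    x = tf.keras.layers.MaxPooling1D(pool_size=13, strides=13)(x)
    x = tf.keras.layers.Flatten()(x)  # (None, 36, 320) -> (None, 11520)
    x = tf.keras.layers.Dropout(0.3)(x)
    x = tf.keras.layers.Dense(512, activation="relu")(x)
    # softmax on the output layer itself, so the model emits class probabilities
    output = tf.keras.layers.Dense(num_classes, activation="softmax")(x)

    model = tf.keras.models.Model(inputs=input_layer, outputs=output, name="conv_model")
    model.compile(loss="categorical_crossentropy", optimizer="adam")
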
There also seems to be a problem with how your TensorBoard callback is wired up: callbacks is an argument of model.fit(), not model.compile(). This one is easy to fix:
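
    # same compile arguments as in the question, with the callback moved to fit()
    model.compile(loss="categorical_crossentropy", optimizer="adam")
    model.fit(X, Y, batch_size=128, epochs=100, callbacks=[tensorboard])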


For temporal data processing, take a look at the TimeDistributed wrapper.
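
For instance, a minimal illustration (the layer width of 8 here is arbitrary): TimeDistributed applies the same inner layer to every time step independently.

    from tensorflow.keras.layers import Input, Dense, TimeDistributed

    x = Input(shape=(500, 4))
    # apply the same Dense(8) to each of the 500 time steps:
    # (None, 500, 4) -> (None, 500, 8)
    y = TimeDistributed(Dense(8))(x)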


Sorry I didn't accept this sooner. I thought I had already upvoted and selected the correct answer. - O.rka
