I have been experimenting with different types of convolution layers to compare their computation speed. My code originally looked like this:
import tensorflow as tf

def conv_block_A(layer):
    block = tf.keras.layers.Conv2D(filters=128, kernel_size=3, strides=1, padding='same')(layer)
    block = tf.keras.layers.Conv2D(filters=196, kernel_size=3, strides=1, padding='same')(block)
    block = tf.keras.layers.Conv2D(filters=128, kernel_size=3, strides=1, padding='same')(block)
    block = tf.keras.layers.BatchNormalization(momentum=0.8)(block)
    block = tf.keras.layers.LeakyReLU(alpha=0.2)(block)
    return block
After reading a few blog posts, I changed my code to the following:
def conv_block_A(layer):
    block = tf.keras.layers.SeparableConv2D(filters=128, kernel_size=3, strides=1, padding='same')(layer)
    block = tf.keras.layers.SeparableConv2D(filters=196, kernel_size=3, strides=1, padding='same')(block)
    block = tf.keras.layers.SeparableConv2D(filters=128, kernel_size=3, strides=1, padding='same')(block)
    block = tf.keras.layers.BatchNormalization(momentum=0.8)(block)
    block = tf.keras.layers.LeakyReLU(alpha=0.2)(block)
    return block
On a CPU, training became roughly twice as fast, but on a Tesla T4 it has become very slow. What could be the cause?
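For context on the CPU-side speedup: a depthwise-separable convolution performs far fewer multiply-accumulates than a standard convolution, since it factors the k×k×Cin×Cout kernel into a depthwise k×k pass plus a 1×1 pointwise pass. A rough parameter count for the three conv layers above (biases omitted; the block's input channel count of 128 is an assumption, since it is not shown in the question):

```python
def conv2d_params(c_in, c_out, k=3):
    # standard convolution: one k x k x c_in kernel per output channel
    return k * k * c_in * c_out

def separable_params(c_in, c_out, k=3):
    # depthwise: one k x k kernel per input channel,
    # followed by a 1x1 pointwise convolution mixing channels
    return k * k * c_in + c_in * c_out

# (c_in, c_out) per layer in conv_block_A; the first c_in=128 is an assumption
chans = [(128, 128), (128, 196), (196, 128)]
std = sum(conv2d_params(i, o) for i, o in chans)
sep = sum(separable_params(i, o) for i, o in chans)
print(std, sep, round(std / sep, 1))  # standard vs. separable weight count
```

This arithmetic only explains why the separable version can be cheaper on a CPU; the GPU slowdown is a separate question about how the layers map to hardware kernels.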