I am using Bayesian optimization to tune the hyperparameters of a convolutional neural network (CNN) in TensorFlow, but I run into the following error:

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape [4136,1,180,432] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc

These are the hyperparameters I am optimizing:
from skopt.space import Integer, Real, Categorical

dim_batch_size = Integer(low=1, high=50, name='batch_size')
dim_kernel_size1 = Integer(low=1, high=75, name='kernel_size1')
dim_kernel_size2 = Integer(low=1, high=50, name='kernel_size2')
dim_depth = Integer(low=1, high=100, name='depth')
dim_num_hidden = Integer(low=5, high=1500, name='num_hidden')
dim_num_dense_layers = Integer(low=1, high=5, name='num_dense_layers')
dim_learning_rate = Real(low=1e-6, high=1e-2, prior='log-uniform',
                         name='learning_rate')
dim_activation = Categorical(categories=['relu', 'sigmoid'],
                             name='activation')
dim_max_pool = Integer(low=1, high=100, name='max_pool')

dimensions = [dim_batch_size,
              dim_kernel_size1,
              dim_kernel_size2,
              dim_depth,
              dim_num_hidden,
              dim_num_dense_layers,
              dim_learning_rate,
              dim_activation,
              dim_max_pool]
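In case it is relevant, this is the kind of guard I am considering putting around the objective function so that a single out-of-memory trial does not kill the whole optimization run. It is only a minimal sketch: `make_safe_objective` and `fake_train` are hypothetical names of mine, and `MemoryError` stands in for TensorFlow's `tf.errors.ResourceExhaustedError`:

```python
def make_safe_objective(train_fn, penalty=0.0):
    """Wrap a training function so that a failed (e.g. OOM) trial
    returns a worst-case score instead of crashing the optimizer."""
    def objective(params):
        try:
            accuracy = train_fn(params)
        except MemoryError:  # stand-in for tf.errors.ResourceExhaustedError
            return penalty   # worst possible value of -accuracy
        return -accuracy     # skopt minimizes, so negate the accuracy
    return objective

# Hypothetical training function that "runs out of memory" on large batches.
def fake_train(params):
    batch_size = params[0]
    if batch_size > 40:
        raise MemoryError("simulated OOM")
    return 0.9

safe = make_safe_objective(fake_train)
print(safe([16]))  # -0.9
print(safe([50]))  # 0.0 (penalized instead of crashing)
```

The wrapped objective would then be passed to the optimizer (e.g. `gp_minimize(objective, dimensions, ...)`) in place of the raw training function.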
It says the resource (GPU memory) has been exhausted. Why does this happen?
Is it because I am optimizing too many hyperparameters at once? Is there a dimension mismatch somewhere? Or are the ranges I assigned to some hyperparameters simply too large for correct operation?
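A quick back-of-the-envelope check of the tensor named in the error message suggests the allocation itself is huge (float32 is 4 bytes per element):

```python
import math

# Shape reported in the ResourceExhaustedError; float32 = 4 bytes/element.
shape = (4136, 1, 180, 432)
num_elements = math.prod(shape)
bytes_needed = num_elements * 4

print(num_elements)                    # 321615360 elements
print(bytes_needed)                    # 1286461440 bytes
print(round(bytes_needed / 2**30, 2))  # ~1.2 GiB for this single tensor
```

So a single intermediate tensor at the upper end of these ranges already needs over a gigabyte of GPU memory, before counting weights, gradients, and the other activations.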