Fine-tuning BERT with custom data

I want to train a 21-class text classification model using BERT. However, I have very little training data, so I downloaded a similar dataset with 5 classes and 2 million samples, and fine-tuned BERT's uncased pre-trained model on it, reaching about 98% validation accuracy. Now I want to use that model as the pre-trained model for my own small custom dataset. But since the checkpoint model has 5 classes and my custom data has 21 classes, I get a "Shape of tensor output_bias from checkpoint reader doesn't match" error.

INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Running train on CPU
INFO:tensorflow:*** Features ***
INFO:tensorflow:  name = input_ids, shape = (32, 128)
INFO:tensorflow:  name = input_mask, shape = (32, 128)
INFO:tensorflow:  name = is_real_example, shape = (32,)
INFO:tensorflow:  name = label_ids, shape = (32, 21)
INFO:tensorflow:  name = segment_ids, shape = (32, 128)
Tensor("IteratorGetNext:3", shape=(32, 21), dtype=int32)
WARNING:tensorflow:From /home/user/Spine_NLP/bert/modeling.py:358: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
WARNING:tensorflow:From /home/user/Spine_NLP/bert/modeling.py:671: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dense instead.
INFO:tensorflow:num_labels:21;logits:Tensor("loss/BiasAdd:0", shape=(32, 21), dtype=float32);labels:Tensor("loss/Cast:0", shape=(32, 21), dtype=float32)
INFO:tensorflow:Error recorded from training_loop: Shape of variable output_bias:0 ((21,)) doesn't match with shape of tensor output_bias ([5]) from checkpoint reader.

1 Answer

If you want to fine-tune your own model on top of the pre-trained 5-class model, you probably need to add a layer that projects the 5 classes onto your 21 classes.
The error you are seeing arises because you likely did not define new "output_weights" and "output_bias" variables, but instead reused the checkpoint's for your new 21-class labels. Below, I prefix the intermediate tensors for the new labels with "final_".
The code should look something like this:
# These are the logits for the 5 classes. Keep them as is.
logits = tf.matmul(output_layer, output_weights, transpose_b=True)
logits = tf.nn.bias_add(logits, output_bias)

# You want to create one more layer
final_output_weights = tf.get_variable(
  "final_output_weights", [21, 5],
  initializer=tf.truncated_normal_initializer(stddev=0.02))
final_output_bias = tf.get_variable(
  "final_output_bias", [21], initializer=tf.zeros_initializer())

final_logits = tf.matmul(logits, final_output_weights, transpose_b=True)
final_logits = tf.nn.bias_add(final_logits, final_output_bias)

# Below is for evaluating the classification.
final_probabilities = tf.nn.softmax(final_logits, axis=-1)
final_log_probs = tf.nn.log_softmax(final_logits, axis=-1)

# Note labels below should be the 21 class ids.
final_one_hot_labels = tf.one_hot(labels, depth=21, dtype=tf.float32)
final_per_example_loss = -tf.reduce_sum(final_one_hot_labels * final_log_probs, axis=-1)
final_loss = tf.reduce_mean(final_per_example_loss)
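As a shape sanity check, the projection above can be sketched in plain NumPy (variable names mirror the snippet but are illustrative; the real model trains these weights):

```python
import numpy as np

rng = np.random.default_rng(0)
batch_size, num_pretrained, num_target = 32, 5, 21

# 5-class logits produced by the pretrained head, shape (32, 5)
logits = rng.normal(size=(batch_size, num_pretrained))

# New projection layer: weights shaped [21, 5], bias shaped [21]
final_output_weights = rng.normal(scale=0.02, size=(num_target, num_pretrained))
final_output_bias = np.zeros(num_target)

# Equivalent of tf.matmul(logits, W, transpose_b=True) + bias
final_logits = logits @ final_output_weights.T + final_output_bias
print(final_logits.shape)  # (32, 21)

# Softmax over the 21 target classes
exp = np.exp(final_logits - final_logits.max(axis=-1, keepdims=True))
final_probabilities = exp / exp.sum(axis=-1, keepdims=True)
print(final_probabilities.shape)  # (32, 21)
```

This makes the dimension flow explicit: the checkpoint's 5-class variables keep their original shapes, and only the freshly created `final_*` variables carry the 21-class shapes, so the checkpoint restore no longer conflicts.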

Shouldn't I remove the last layer of the pre-trained model rather than add one on top of it? That's how transfer learning works, right? - danishansari
