I am very new to TensorFlow and recently started a simple chatbot project from this link.
There were many warnings saying that features would be deprecated in TensorFlow 2.0 and recommending an upgrade. So I upgraded, and used the automatic TensorFlow code upgrade tool to convert all the necessary files to 2.0, but some errors remain.
When processing the model.py file, it returned the following warnings:
133:20: WARNING: tf.nn.sampled_softmax_loss requires manual check. `partition_strategy` has been removed from tf.nn.sampled_softmax_loss. The 'div' strategy will be used by default.
148:31: WARNING: Using member tf.contrib.rnn.DropoutWrapper in deprecated module tf.contrib.rnn. (Manual edit required) tf.contrib.rnn.* has been deprecated, and widely used cells/functions will be moved to tensorflow/addons repository. Please check it there and file Github issues if necessary.
148:31: ERROR: Using member tf.contrib.rnn.DropoutWrapper in deprecated module tf.contrib. tf.contrib.rnn.DropoutWrapper cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
171:33: ERROR: Using member tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq in deprecated module tf.contrib. tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
197:27: ERROR: Using member tf.contrib.legacy_seq2seq.sequence_loss in deprecated module tf.contrib. tf.contrib.legacy_seq2seq.sequence_loss cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
The main problem I am running into is code that uses the contrib module, which no longer exists. How can I adjust the following three code blocks so that they work in TensorFlow 2.0?
# Define the network
# Here we use an embedding model: it takes integers as input and converts them
# into word vectors for a better word representation
decoderOutputs, states = tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq(
    self.encoderInputs,  # List<[batch=?, inputDim=1]>, list of size args.maxLength
    self.decoderInputs,  # For training, we force the correct output (feed_previous=False)
    encoDecoCell,
    self.textData.getVocabularySize(),
    self.textData.getVocabularySize(),  # Both encoder and decoder have the same number of classes
    embedding_size=self.args.embeddingSize,  # Dimension of each word
    output_projection=outputProjection.getWeights() if outputProjection else None,
    feed_previous=bool(self.args.test)  # When testing (self.args.test), use the previous output as the next input (feed_previous)
)
# Finally, we define the loss function
self.lossFct = tf.contrib.legacy_seq2seq.sequence_loss(
    decoderOutputs,
    self.decoderTargets,
    self.decoderWeights,
    self.textData.getVocabularySize(),
    softmax_loss_function=sampledSoftmax if outputProjection else None  # If None, use the default softmax
)
encoDecoCell = tf.contrib.rnn.DropoutWrapper(
    encoDecoCell,
    input_keep_prob=1.0,
    output_keep_prob=self.args.dropout
)
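The `DropoutWrapper` block is the easiest of the three: TF 2.x ships a non-contrib equivalent as `tf.nn.RNNCellDropoutWrapper`, with the same keep-prob arguments. A small sketch, where the `LSTMCell` and the `0.8` stand in for `encoDecoCell` and `self.args.dropout` from the original code:

```python
import tensorflow as tf

# Wrap a Keras RNN cell with dropout; the keep-prob arguments mirror
# the old tf.contrib.rnn.DropoutWrapper signature
cell = tf.keras.layers.LSTMCell(128)
encoDecoCell = tf.nn.RNNCellDropoutWrapper(
    cell,
    input_keep_prob=1.0,
    output_keep_prob=0.8,  # stands in for self.args.dropout
)

# The wrapped cell is used like any other RNN cell
out, state = encoDecoCell(
    tf.zeros((2, 16)),
    cell.get_initial_state(batch_size=2, dtype=tf.float32),
)
print(out.shape)  # (2, 128)
```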
Do you mean `decoder(decoder_embeddings, initial_state=encoder_state, sequence_length=sequence_lengths)`? I don't see `sequence_length` as a parameter of the function at line 159 of the source, https://github.com/tensorflow/addons/blob/5f618fdb92d9737da059de2a33fa606e97505398/tensorflow_addons/seq2seq/decoder.py#L159 - thinkdeep