Why am I getting an AlreadyExistsError?

When training my binary classifier with Keras, I got the following error:
AlreadyExistsError: Resource __per_step_16/training_4/Adam/gradients/lstm_10/while/ReadVariableOp_8/Enter_grad/ArithmeticOptimizer/AddOpsRewrite_Add/tmp_var/struct tensorflow::TemporaryVariableOp::TmpVar
     [[{{node training_4/Adam/gradients/lstm_10/while/ReadVariableOp_8/Enter_grad/ArithmeticOptimizer/AddOpsRewrite_Add/tmp_var}} = TemporaryVariable[dtype=DT_FLOAT, shape=[64,256], var_name="training_4...dd/tmp_var", _device="/job:localhost/replica:0/task:0/device:CPU:0"](^training_4/Adam/gradients/lstm_10/while/strided_slice_11_grad/StridedSliceGrad)]]

Here is my code:
import pandas as pd
from sklearn.model_selection import train_test_split
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Embedding, Conv1D, MaxPooling1D, BatchNormalization, LSTM, Dense

file = pd.read_csv('train_stemmed.csv')
Y = list(map(int,file['target'].values))
X = list(map(str,file['question_text'].values))

MAXLEN = 100
tokenizer = Tokenizer()
tokenizer.fit_on_texts(X)

X_seq = tokenizer.texts_to_sequences(X)
X_seq_pad = pad_sequences(X_seq, maxlen=MAXLEN)
X_train, X_test, Y_train, Y_test = train_test_split(X_seq_pad, Y, test_size=0.2)
vocab_len = len(tokenizer.word_index) + 1

model = Sequential()
model.add(Embedding(vocab_len, 100, input_length=MAXLEN))
model.add(Conv1D(64, 5, 5, activation='relu'))
model.add(MaxPooling1D(pool_size=5))
model.add(BatchNormalization())
model.add(LSTM(64)) 
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

model.fit(X_train, Y_train,
          epochs=2,
          batch_size=128,
          validation_data=(X_test, Y_test),
          verbose=1)

What is the problem?
2 Answers

Adding the following code before the `model = Sequential()` line avoids this error.
import tensorflow as tf
from tensorflow.core.protobuf import rewriter_config_pb2
from tensorflow.keras.backend import set_session

tf.keras.backend.clear_session()  # For easy reset of notebook state.

config_proto = tf.ConfigProto()
off = rewriter_config_pb2.RewriterConfig.OFF
config_proto.graph_options.rewrite_options.arithmetic_optimization = off
session = tf.Session(config=config_proto)
set_session(session)
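The snippet above uses the TF 1.x `ConfigProto`/`Session` API, which is gone in TF 2.x. As far as I know, the same Grappler pass can be disabled in TF 2.x through `tf.config.optimizer.set_experimental_options`; this is a sketch under that assumption, not something tested against the exact versions in the question:

```python
import tensorflow as tf

# Disable Grappler's arithmetic-optimization pass globally (TF 2.x API).
# 'arithmetic_optimization' is one of the documented option keys; the
# setting applies to all graphs built afterwards in this process.
tf.config.optimizer.set_experimental_options({'arithmetic_optimization': False})
```

Call it once before building the model, just like the `set_session` workaround above.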

It solved my problem, but could you add some explanation to your answer of how it solves it and what causes the problem in the first place? - Benyamin Jafari

This is an open issue on the TensorFlow GitHub (https://github.com/tensorflow/tensorflow/issues/23780) related to Grappler optimization. There are two workarounds:
  1. Turn off arithmetic optimization, as in Nandeesh's accepted answer.

  2. Reduce memory usage (e.g., shrink the layers / layer sizes).
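To get a rough feel for option 2, here is a hypothetical helper (not from the thread) that counts the trainable parameters of a standard LSTM layer, usually the first thing worth shrinking in the model above:

```python
def lstm_param_count(units, input_dim):
    """Trainable parameters of a standard LSTM layer: 4 gates, each with
    an input kernel (input_dim x units), a recurrent kernel
    (units x units), and a bias vector (units)."""
    return 4 * (units * (input_dim + units) + units)

# LSTM(64) fed by the 64-filter Conv1D in the question:
print(lstm_param_count(64, 64))   # 33024
# Halving the units cuts the parameter count by well over half,
# because the recurrent kernel shrinks quadratically:
print(lstm_param_count(32, 64))   # 12416
```

Smaller layers mean smaller gradient tensors for Grappler to rewrite, which is why this can sidestep the temporary-variable collision.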

