I have a linear regression model that appears to work. I first load the data into X and the target column into Y, after which I do the following...
X_train, X_test, Y_train, Y_test = train_test_split(
    X_data,
    Y_data,
    test_size=0.2
)
rng = np.random
n_rows = X_train.shape[0]
X = tf.placeholder("float")
Y = tf.placeholder("float")
W = tf.Variable(rng.randn(), name="weight")
b = tf.Variable(rng.randn(), name="bias")
pred = tf.add(tf.multiply(X, W), b)
cost = tf.reduce_sum(tf.pow(pred-Y, 2)/(2*n_rows))
optimizer = tf.train.GradientDescentOptimizer(FLAGS.learning_rate).minimize(cost)
init = tf.global_variables_initializer()
init_local = tf.local_variables_initializer()
with tf.Session() as sess:
    sess.run([init, init_local])
    for epoch in range(FLAGS.training_epochs):
        avg_cost = 0
        for (x, y) in zip(X_train, Y_train):
            sess.run(optimizer, feed_dict={X: x, Y: y})
        # display logs per epoch step
        if (epoch + 1) % FLAGS.display_step == 0:
            c = sess.run(
                cost,
                feed_dict={X: X_train, Y: Y_train}
            )
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(c))
    print("Optimization Finished!")
I don't know how to print out the model's accuracy score. In sklearn, for example, if you have a model you can simply print model.score(X_test, Y_test). But I don't know how to do that in TensorFlow, or even whether it's possible. I think I could calculate the mean squared error instead. Would that be of any help?

EDIT
I tried implementing tf.metrics.accuracy as suggested in the comments, but I'm having trouble with the implementation. The documentation says it takes two arguments, labels and predictions, so I tried the following...

accuracy, accuracy_op = tf.metrics.accuracy(labels=tf.argmax(Y_test, 0), predictions=tf.argmax(pred, 0))
print(sess.run(accuracy))
But this gives me an error...

FailedPreconditionError (see above for traceback): Attempting to use uninitialized value accuracy/count
     [[Node: accuracy/count/read = Identity[T=DT_FLOAT, _class=["loc:@accuracy/count"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](accuracy/count)]]
So how do I implement this?
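For reference, tf.metrics.accuracy creates its local state variables (accuracy/total, accuracy/count) at the moment it is called, so a tf.local_variables_initializer() op built before the metric is defined cannot initialize them. A minimal self-contained sketch of the ordering fix, using toy labels/predictions (not the real Y_test/pred) and the TF1 compatibility API:

```python
import tensorflow.compat.v1 as tf  # TF1-style API available inside TF2

tf.disable_eager_execution()

# Toy tensors standing in for the real Y_test / pred.
labels = tf.constant([1, 0, 1, 1])
predictions = tf.constant([1, 0, 0, 1])

# The metric's local variables (accuracy/total, accuracy/count)
# are created here, at call time.
accuracy, accuracy_op = tf.metrics.accuracy(labels=labels,
                                            predictions=predictions)

with tf.Session() as sess:
    # Initialize local variables *after* the metric has been defined.
    sess.run(tf.local_variables_initializer())
    sess.run(accuracy_op)            # updates total and count
    acc_value = sess.run(accuracy)   # total / count
print(acc_value)  # 3 of 4 predictions match -> 0.75
```

Running accuracy_op before reading accuracy matters as well: the accuracy tensor only reports the current total/count state and does not update it.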
tf.metrics.accuracy is a function that returns a tensor representing the accuracy, whose value is total divided by count. MSE will not give you an accuracy score. - Flika205
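As the comment notes, accuracy is a classification metric; for a regression model the analogue of sklearn's model.score is the R² coefficient. A minimal NumPy sketch of both MSE and R², using made-up y_true/y_pred arrays (in the code above, predictions could be fetched with sess.run(pred, feed_dict={X: X_test})):

```python
import numpy as np

def regression_score(y_true, y_pred):
    """R^2, the quantity sklearn's LinearRegression.score reports."""
    ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Made-up targets and predictions for illustration.
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])

mse = np.mean((y_true - y_pred) ** 2)  # mean squared error
r2 = regression_score(y_true, y_pred)  # 1.0 means a perfect fit
```

Unlike accuracy, R² is unitless and bounded above by 1.0, which makes it easier to compare across datasets than raw MSE.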