Suppose the following simplified code:
x = tf.Variable(...)
y = tf.Variable(...) # y can also be some tensor computed from other variables
x_new = tf.assign(x, y)
loss = x_new * x_new
If I optimize the loss, will its gradient be backpropagated to x, to y, or to neither?
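One way to answer this empirically is to ask TensorFlow itself via tf.gradients. The sketch below assumes TF2 with v1-compat graph mode (so that tf.assign-style semantics apply); the variable names mirror the snippet above, and the try/except is there because an op without a registered gradient on the path (such as the assign op) may either yield None gradients or raise a LookupError, depending on how the op is registered:

```python
import tensorflow as tf

# Run in graph mode so the v1-style assign/gradients semantics apply.
tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.Variable(1.0, name="x")
y = tf.compat.v1.Variable(2.0, name="y")
x_new = tf.compat.v1.assign(x, y)  # same shape as the question's snippet
loss = x_new * x_new

grads = None
try:
    # Ask directly which inputs receive a gradient from the loss.
    grads = tf.gradients(loss, [x, y])
    print("grad wrt x:", grads[0])
    print("grad wrt y:", grads[1])
except LookupError as err:
    # Raised when an op on the backprop path has no registered gradient.
    print("no gradient defined on the path:", err)
```

A None entry in the result (or a LookupError) means no gradient reaches that variable through the assign op.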