Unable to run TensorFlow on the GPU

I want to run TensorFlow code on my GPU, but it is not working. I have installed CUDA and cuDNN, and I have a compatible GPU.
I took this example from the official GPU guide, the Tensorflow tutorial for GPU.
# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))

Here is my output:
Device mapping: no known devices.
2017-10-31 16:15:40.298845: I tensorflow/core/common_runtime/direct_session.cc:300] Device mapping:

MatMul: (MatMul): /job:localhost/replica:0/task:0/cpu:0
2017-10-31 16:15:56.895802: I tensorflow/core/common_runtime/simple_placer.cc:872] MatMul: (MatMul)/job:localhost/replica:0/task:0/cpu:0
b: (Const): /job:localhost/replica:0/task:0/cpu:0
2017-10-31 16:15:56.895910: I tensorflow/core/common_runtime/simple_placer.cc:872] b: (Const)/job:localhost/replica:0/task:0/cpu:0
a_1: (Const): /job:localhost/replica:0/task:0/cpu:0
2017-10-31 16:15:56.895961: I tensorflow/core/common_runtime/simple_placer.cc:872] a_1: (Const)/job:localhost/replica:0/task:0/cpu:0
a: (Const): /job:localhost/replica:0/task:0/cpu:0
2017-10-31 16:15:56.896006: I tensorflow/core/common_runtime/simple_placer.cc:872] a: (Const)/job:localhost/replica:0/task:0/cpu:0
[[ 22.  28.]
 [ 49.  64.]]

There is no sign of my GPU in any of this. I tried to force it to run on the GPU manually with:

with tf.device('/gpu:0'):
...

It threw a bunch of errors:

Traceback (most recent call last):
  File "/home/abhor/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1327, in _do_call
    return fn(*args)
  File "/home/abhor/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1297, in _run_fn
    self._extend_graph()
  File "/home/abhor/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1358, in _extend_graph
    self._session, graph_def.SerializeToString(), status)
  File "/home/abhor/anaconda3/lib/python3.6/contextlib.py", line 88, in __exit__
    next(self.gen)
  File "/home/abhor/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation 'MatMul_1': Operation was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/cpu:0 ]. Make sure the device specification refers to a valid device.
     [[Node: MatMul_1 = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/device:GPU:0"](a_2, b_1)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "/home/abhor/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/home/abhor/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1124, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/abhor/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1321, in _do_run
    options, run_metadata)
  File "/home/abhor/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1340, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation 'MatMul_1': Operation was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/cpu:0 ]. Make sure the device specification refers to a valid device.
     [[Node: MatMul_1 = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/device:GPU:0"](a_2, b_1)]]

Caused by op 'MatMul_1', defined at:
  File "<stdin>", line 4, in <module>
  File "/home/abhor/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 1844, in matmul
    a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
  File "/home/abhor/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py", line 1289, in _mat_mul
    transpose_b=transpose_b, name=name)
  File "/home/abhor/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "/home/abhor/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2630, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/abhor/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1204, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): Cannot assign a device for operation 'MatMul_1': Operation was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/cpu:0 ]. Make sure the device specification refers to a valid device.
     [[Node: MatMul_1 = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/device:GPU:0"](a_2, b_1)]]

I can see that some of these lines mention only the CPU as an available device.

Here are my graphics card details and CUDA version.

Output of nvidia-smi:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.81                 Driver Version: 384.81                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce 940MX       Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   43C    P0    N/A /  N/A |    274MiB /  2002MiB |     10%      Default |
+-------------------------------+----------------------+----------------------+

Output of nvcc -V:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:03_CDT_2017
Cuda compilation tools, release 9.0, V9.0.176

I don't know how to check cuDNN, but I installed it as described in the official documentation, so I assume it is set up correctly.
Edit: output of pip3 list | grep tensorflow:
tensorflow-gpu (1.3.0)
tensorflow-tensorboard (0.1.8)

Can you add the output of the command pip list | grep tensorflow? - mrry
I have added it to the question. - Abhor
Any update on your error? - konstantin
Most likely it was a compatibility issue with old packages. I tried installing on a fresh Ubuntu system and it worked. - Abhor
3 Answers


Try this code:

sess = tf.Session(config=tf.ConfigProto(
      allow_soft_placement=True, log_device_placement=True))

Could you please explain why this works? - Cryckx
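For context (this describes standard TF 1.x Session/ConfigProto behaviour, not anything specific to this question): allow_soft_placement=True lets TensorFlow fall back to another available device when an op is pinned to a device that does not exist, so the InvalidArgumentError above goes away, but it does not by itself make anything run on the GPU. A minimal sketch:

import tensorflow as tf

with tf.device('/gpu:0'):  # requested device; may not actually exist
    a = tf.constant([1.0, 2.0], shape=[1, 2], name='a')
    b = tf.constant([3.0, 4.0], shape=[2, 1], name='b')
    c = tf.matmul(a, b)

# With soft placement, c silently runs on the CPU if no GPU is visible;
# log_device_placement prints where it actually ended up.
sess = tf.Session(config=tf.ConfigProto(
    allow_soft_placement=True, log_device_placement=True))
print(sess.run(c))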

In your case, TensorFlow cannot find a CUDA GPU. Look at the list of available devices in the error message:

Operation was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/cpu:0 ]

This means no GPU was found. You can use the code from "How to get current available GPUs in tensorflow?" to list the GPUs that TensorFlow can actually see:
from tensorflow.python.client import device_lib

def get_available_gpus():
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos if x.device_type == 'GPU']
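
A quick way to use it (the exact device-name string depends on the TensorFlow version, e.g. '/gpu:0' or '/device:GPU:0'):

print(get_available_gpus())  # an empty list means TensorFlow cannot see any GPU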

Make sure that the GPU you expect actually appears in that list; only then can TensorFlow place ops on a GPU device.

There are many possible reasons why no GPU is found, including but not limited to the CUDA installation/setup, the TensorFlow version, and the GPU model, in particular its compute capability. Check that CUDA is installed and set up correctly and that your TensorFlow version supports your specific GPU model and, for NVIDIA GPUs, its compute capability.
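
As a further sanity check (assuming the tf.test helpers below are available in your TensorFlow version), you can ask the installed build directly whether it was compiled with CUDA and whether it can initialize a GPU:

import tensorflow as tf

print(tf.test.is_built_with_cuda())  # False means a CPU-only build (e.g. the plain 'tensorflow' package)
print(tf.test.gpu_device_name())     # empty string means no GPU device could be initialized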


My usual recommendation is to use conda environments. That way you can create a new, isolated environment and try installing TensorFlow (or any other tool) from scratch in it, without reinstalling the whole operating system. As an added benefit, you can keep several environments side by side on the same machine.
