I have a workstation with 2 GPUs, and I'd like to run multiple TensorFlow jobs at the same time, so I can train several models at once, etc.
For example, I tried to pin the sessions to different resources via the Python API. In script1.py:
with tf.device("/gpu:0"):
# do stuff
And in script2.py:
with tf.device("/gpu:1"):
# do stuff
And in script3.py:
with tf.device("/cpu:0"):
# do stuff
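For reference, each script is just a device-pinned graph plus a session. A minimal sketch of what script1.py boils down to (the matmul is a stand-in for my actual model; log_device_placement just confirms where the ops land):

import tensorflow as tf

# script1.py, reduced to a toy graph pinned to the first GPU.
with tf.device("/gpu:0"):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
    c = tf.matmul(a, b)  # stand-in for the real training ops

# log_device_placement prints the device each op was assigned to.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))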
If I run each script on its own, I can see that it uses the specified device. (Also, each model fits comfortably on a single GPU and doesn't touch the other one even when both are available.)
However, if one script is already running and I try to start another, I always get this error:
I tensorflow/core/common_runtime/local_device.cc:40] Local device intra op parallelism threads: 8
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:909] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:103] Found device 0 with properties:
name: GeForce GTX 980
major: 5 minor: 2 memoryClockRate (GHz) 1.2155
pciBusID 0000:01:00.0
Total memory: 4.00GiB
Free memory: 187.65MiB
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:909] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:103] Found device 1 with properties:
name: GeForce GTX 980
major: 5 minor: 2 memoryClockRate (GHz) 1.2155
pciBusID 0000:04:00.0
Total memory: 4.00GiB
Free memory: 221.64MiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:127] DMA: 0 1
I tensorflow/core/common_runtime/gpu/gpu_init.cc:137] 0: Y Y
I tensorflow/core/common_runtime/gpu/gpu_init.cc:137] 1: Y Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:702] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 980, pci bus id: 0000:01:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:702] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX 980, pci bus id: 0000:04:00.0)
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Allocating 187.40MiB bytes.
E tensorflow/stream_executor/cuda/cuda_driver.cc:932] failed to allocate 187.40M (196505600 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
F tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:47] Check failed: gpu_mem != nullptr Could not allocate GPU device memory for device 0. Tried to allocate 187.40MiB
Aborted (core dumped)
It seems that, as it loads, each TensorFlow process tries to grab every GPU on the machine, even when not all of them will be used to run the model.
I see there is an option to limit the fraction of GPU memory each process uses:
tf.GPUOptions(per_process_gpu_memory_fraction=0.5)
I haven't tried it, but it looks like it would make the two processes share 50% of each GPU rather than give each process its own GPU...
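For what it's worth, here's how I understand that option would be wired in, going by the tf.ConfigProto API (untested on my end):

import tensorflow as tf

# Cap this process at 50% of each GPU's memory (as I understand it,
# the fraction applies per visible GPU, not to the machine as a whole).
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.5)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))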
Does anyone know how to configure TensorFlow to use only one GPU and leave the other free for a second TensorFlow process?
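In case it clarifies what I'm after: ideally each process would only ever see its own GPU, something like hiding devices at the CUDA level before TensorFlow initializes. I know CUDA_VISIBLE_DEVICES does this for CUDA programs in general, but I'm not sure whether it's the intended mechanism here:

import os

# Hide GPU 1 from this process before TensorFlow/CUDA initializes;
# script2.py would set "1" instead. The one visible GPU then shows
# up inside the process as /gpu:0.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import tensorflow as tf

with tf.device("/gpu:0"):
    # do stuff
    pass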