I created 3 virtual GPUs (having only 1 physical GPU) and tried to speed up image vectorization. However, with the manual placement code below, I got a strange result: running on all GPUs is twice as slow as running on a single GPU. I also checked this code on a machine with 3 physical GPUs (removing the virtual device initialization), and it behaves the same way.
Environment: Python 3.6, Ubuntu 18.04.3, tensorflow-gpu 1.14.0.
Code (this example creates 3 virtual devices, so you can test it on a PC with a single GPU):
import os
import time
import numpy as np
import tensorflow as tf
from PIL import Image

# Virtual devices must be configured before the GPUs are initialized,
# i.e. before the session below is created
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # Create 3 virtual GPUs with 1GB memory each
    try:
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024),
             tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024),
             tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPU,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        print(e)

start = time.time()

def load_graph(frozen_graph_filename):
    # We load the protobuf file from the disk and parse it to retrieve the
    # unserialized graph_def
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    # Then, we import the graph_def into a new Graph and return it
    with tf.Graph().as_default() as graph:
        # The name argument would prefix every op/node in the graph;
        # since we load everything into a new graph, it is not needed
        tf.import_graph_def(graph_def, name="")
    return graph

path_to_graph = '/imagenet/'  # Path to the imagenet folder where the graph file is placed
GRAPH = load_graph(os.path.join(path_to_graph, 'classify_image_graph_def.pb'))

# Create session
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.9
config.gpu_options.allow_growth = True
session = tf.Session(graph=GRAPH, config=config)

output_dir = '/vectors/'  # Where to save the vectors extracted from the images
image_list = ['1.jpg', '2.jpg', '3.jpg']  # List of images to vectorize (tested on 100 and 1000 examples)
# Split the list into a chunk per GPU
# chunk = len(image_list) // 3
# image_list1, image_list2, image_list3 = image_list[:chunk], \
#                                         image_list[chunk:2 * chunk], \
#                                         image_list[2 * chunk:]
selected_list = image_list  # Comment this line out to assign a chunk of the list to each GPU manually

# Single-GPU vectorization
for image_index, image in enumerate(selected_list):
    with Image.open(image) as f:
        image_data = f.convert('RGB')
    feature_tensor = session.graph.get_tensor_by_name('pool_3:0')
    feature_vector = session.run(feature_tensor, {'DecodeJpeg:0': image_data})
    feature_vector = np.squeeze(feature_vector)
    outfile_name = os.path.basename(image) + ".vc"
    out_path = os.path.join(output_dir, outfile_name)
    # Save vector
    np.savetxt(out_path, feature_vector, delimiter=',')
print(f"Single GPU: {time.time() - start}")
start = time.time()

print("Start calculation on multiple GPU")
print("Create prepared ops")
start1 = time.time()
gpus = logical_gpus  # Comment this line out to use the physical GPU devices for the calculations
if gpus:
    # Replicate the computation on multiple GPUs
    feature_vectors = []
    for gpu in gpus:  # Iterate over the virtual GPU devices, not the physical ones
        with tf.device(gpu.name):
            print(f"Assign list of images to {gpu.name.split(':', 4)[-1]}")
            # Assigning a chunk of the list to each GPU takes about the same time as a single GPU
            # if gpu.name.split(':', 4)[-1] == "GPU:0":
            #     selected_list = image_list1
            # if gpu.name.split(':', 4)[-1] == "GPU:1":
            #     selected_list = image_list2
            # if gpu.name.split(':', 4)[-1] == "GPU:2":
            #     selected_list = image_list3
            for image_index, image in enumerate(selected_list):
                with Image.open(image) as f:
                    image_data = f.convert('RGB')
                feature_tensor = session.graph.get_tensor_by_name('pool_3:0')
                feature_vector = session.run(feature_tensor, {'DecodeJpeg:0': image_data})
                feature_vectors.append(feature_vector)
print("All images has been assigned to GPU's")
print(f"Time spend on prep ops: {time.time() - start1}")

print("Start calculation on multiple GPU")
start1 = time.time()
for image_index, image in enumerate(image_list):
    feature_vector = np.squeeze(feature_vectors[image_index])
    outfile_name = os.path.basename(image) + ".vc"
    out_path = os.path.join(output_dir, outfile_name)
    # Save vector
    np.savetxt(out_path, feature_vector, delimiter=',')
# Close session
session.close()
print(f"Calc on GPU's spend: {time.time() - start1}")
print(f"All time, spend on multiple GPU: {time.time() - start}")
Output (from a list of 100 images):
1 Physical GPU, 3 Logical GPUs
Single GPU: 18.76301646232605
Start calculation on multiple GPU
Create prepared ops
Assign list of images to GPU:0
Assign list of images to GPU:1
Assign list of images to GPU:2
All images has been assigned to GPU's
Time spend on prep ops: 18.263537883758545
Start calculation on multiple GPU
Calc on GPU's spend: 11.697082042694092
All time, spend on multiple GPU: 29.960679531097412
What I tried: splitting the image list into 3 chunks and assigning one chunk to each GPU (see the commented-out lines in the code; a sketch of the split follows below). This cut the multi-GPU time to 17 seconds, only slightly (about 5%) faster than the 18-second single-GPU run.
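For clarity, the three-way split itself looked roughly like this (a minimal sketch; chunk_for is a helper name used purely for illustration here and does not appear in the script above):

def chunk_for(lst, index, parts=3):
    # Return the index-th of `parts` roughly equal slices of lst;
    # the last chunk absorbs the remainder
    size = len(lst) // parts
    end = (index + 1) * size if index < parts - 1 else len(lst)
    return lst[index * size:end]

# One chunk per logical GPU
image_list1, image_list2, image_list3 = (chunk_for(image_list, i) for i in range(3))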
Expected result: the multi-GPU version should be faster than the single-GPU one (at least 1.5x faster).
Possible reason: I have written the computation incorrectly.
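In case it clarifies what I mean, below is a minimal, untested sketch of the direction I suspect is needed: import the frozen graph once per logical GPU under a tf.device scope (placement happens at graph construction, not at session.run time), then drive each replica from its own Python thread. The replica_* name prefixes and the run_chunk helper are my own illustrative names, not part of the script above:

import threading

# Re-parse the frozen graph (graph_def above is local to load_graph)
graph_def = tf.GraphDef()
with tf.gfile.GFile(os.path.join(path_to_graph, 'classify_image_graph_def.pb'), "rb") as f:
    graph_def.ParseFromString(f.read())

# Build one replica of the graph per logical GPU, each pinned to its device at import time
multi_graph = tf.Graph()
with multi_graph.as_default():
    for i, gpu in enumerate(logical_gpus):
        with tf.device(gpu.name):
            tf.import_graph_def(graph_def, name=f"replica_{i}")

# Soft placement lets CPU-only ops (e.g. DecodeJpeg) fall back to the CPU
multi_config = tf.ConfigProto(allow_soft_placement=True)
multi_session = tf.Session(graph=multi_graph, config=multi_config)

def run_chunk(replica_index, chunk, results):
    # Each thread feeds its own replica; session.run releases the GIL,
    # so the replicas can execute concurrently
    feature_tensor = multi_graph.get_tensor_by_name(f"replica_{replica_index}/pool_3:0")
    input_name = f"replica_{replica_index}/DecodeJpeg:0"
    for image in chunk:
        with Image.open(image) as f:
            image_data = f.convert('RGB')
        results[image] = np.squeeze(multi_session.run(feature_tensor, {input_name: image_data}))

results = {}
threads = [threading.Thread(target=run_chunk, args=(i, chunk, results))
           for i, chunk in enumerate((image_list1, image_list2, image_list3))]
for t in threads:
    t.start()
for t in threads:
    t.join()
multi_session.close()

I have not benchmarked this variant; is something along these lines the correct way to parallelize the vectorization?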