export CUDA_VISIBLE_DEVICES=""
The empty string above hides all GPUs from PyTorch. The following instead tells it to use only one GPU (the one with id 0), and so on:
export CUDA_VISIBLE_DEVICES="0"
# at beginning of the script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
...
# then whenever you get a new Tensor or Module
# this won't copy if they are already on the desired device
input = data.to(device)
model = MyModule(...).to(device)
I think this example explains it pretty well, but feel free to ask if anything is unclear!
A big advantage of the syntax in the example above is that you can write code that runs on the CPU, and also on a GPU when one is available, without changing a single line.
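As a self-contained sketch of that pattern (SmallNet here is a hypothetical stand-in for MyModule):

```python
import torch
import torch.nn as nn

# pick the GPU if one is available, otherwise fall back to the CPU;
# no other line of the script needs to change
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

class SmallNet(nn.Module):
    """Hypothetical stand-in for MyModule."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = SmallNet().to(device)   # no-op if the module is already on `device`
data = torch.rand(3, 4)
input = data.to(device)         # likewise a no-op copy when already there
output = model(input)
```

The same script runs unmodified on a CUDA machine and on a CPU-only one.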
Instead of an if statement with torch.cuda.is_available(), you can also set the device to the CPU directly, like this:
device = torch.device("cpu")
Furthermore, you can create tensors on the desired device using the device argument:
mytensor = torch.rand(5, 5, device=device)
The simplest way to do it from Python is:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""
There are multiple ways to force PyTorch to use the CPU:
Set default tensor type:
torch.set_default_tensor_type(torch.FloatTensor)
Set the device and reference it consistently when creating tensors:
(with this you can easily switch between GPU and CPU)
device = 'cpu'
# ...
x = torch.rand(2, 10, device=device)
Hide GPU from view:
import os
os.environ["CUDA_VISIBLE_DEVICES"]=""
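One caveat: CUDA_VISIBLE_DEVICES is read when CUDA is initialized, so the safest place to set it is before the first import torch. A minimal sketch:

```python
import os

# Must be set before CUDA is initialized -- in practice, before the
# first `import torch` is the safest place; changing it after CUDA has
# already been initialized has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import torch

# With no devices visible, PyTorch behaves like a CPU-only install.
print(torch.cuda.is_available())
```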
As the previous answers showed, you can make PyTorch run on the CPU using:
device = torch.device("cpu")
I would like to add how to load a previously trained model on the CPU (example taken from the PyTorch documentation).
Note: make sure all the data fed into the model is on the CPU as well.
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH, map_location=torch.device("cpu")))

# or, if the entire model was saved rather than just the state_dict:
model = torch.load(PATH, map_location=torch.device("cpu"))
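The full round trip can be sketched like this; the tiny nn.Linear and the temporary path are placeholders for TheModelClass and PATH:

```python
import os
import tempfile

import torch
import torch.nn as nn

# hypothetical tiny model standing in for TheModelClass
model = nn.Linear(3, 1)

# save the state_dict (this part may have happened on a GPU machine)
path = os.path.join(tempfile.mkdtemp(), "model.pth")
torch.save(model.state_dict(), path)

# load it on a CPU-only machine: map_location remaps any tensors that
# were saved on a CUDA device onto the CPU
cpu_model = nn.Linear(3, 1)
cpu_model.load_state_dict(torch.load(path, map_location=torch.device("cpu")))

# the inputs must live on the CPU as well
x = torch.rand(2, 3)
y = cpu_model(x)
```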
# new line of code
device = torch.device("cpu")

# net.cuda()                            <- original GPU call, replaced by:
net.to(device)

# net.load_state_dict(torch.load(cp))   <- original, replaced by:
net.load_state_dict(torch.load(cp, map_location=torch.device('cpu')))

# img = img.cuda()                      <- original, replaced by:
img = img.to(device)
# new function, running on the CPU
def evaluate(image_path='./imgs/116.jpg', cp='cp/79999_iter.pth'):
    device = torch.device("cpu")
    n_classes = 19
    net = BiSeNet(n_classes=n_classes)
    # net.cuda()
    net.to(device)
    # net.load_state_dict(torch.load(cp))
    net.load_state_dict(torch.load(cp, map_location=torch.device('cpu')))
    net.eval()
    to_tensor = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
    ])
    with torch.no_grad():
        img = Image.open(image_path)
        image = img.resize((512, 512), Image.BILINEAR)
        img = to_tensor(image)
        img = torch.unsqueeze(img, 0)
        # img = img.cuda()
        img = img.to(device)
        out = net(img)[0]
        parsing = out.squeeze(0).cpu().numpy().argmax(0)
        return parsing
# original function, running on the GPU
def evaluate(image_path='./imgs/116.jpg', cp='cp/79999_iter.pth'):
    n_classes = 19
    net = BiSeNet(n_classes=n_classes)
    net.cuda()
    net.load_state_dict(torch.load(cp))
    net.eval()
    to_tensor = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
    ])
    with torch.no_grad():
        img = Image.open(image_path)
        image = img.resize((512, 512), Image.BILINEAR)
        img = to_tensor(image)
        img = torch.unsqueeze(img, 0)
        img = img.cuda()
        out = net(img)[0]
        parsing = out.squeeze(0).cpu().numpy().argmax(0)
        return parsing
I get UserWarning: CUDA initialization: Found no NVIDIA driver on your system. - Chris Stryczynski

The code uses .cuda(). Does this mean it is configured to use the GPU by default? If so, what changes should I make for it to run on the GPU? - Shreyesh Desai

You can replace .cuda() with .cpu() to make sure your code runs on the CPU. It is cumbersome, but I think it is the simplest way. Going forward, declare a device at the top and use .to to move models/tensors onto it. For how to do that, refer to MBT's answer or this link https://dev59.com/plQJ5IYBdhLWcg3wr363#53332659. - Umang Gupta