How to make sure PyTorch code fully utilizes the GPU on Google Colab

I'm a beginner working through some PyTorch tutorials on CIFAR10. Since I don't have a GPU available at the moment, I'm using Google Colab for my experiments.
I've trained the neural network successfully, but I'm not sure whether my code is actually using Colab's GPU, because training is not noticeably faster than on my 2014 MacBook Pro (which has no GPU).
I checked the notebook and it is indeed running on a Tesla K80, yet training is still quite slow. So I suspect some part of my code isn't using GPU syntax, but I can't figure out which part it is.
# install PyTorch 0.4.0 (the wheel below is version-pinned; the cu80 build
# is selected when a GPU is visible via nvidia-smi)
from os import path
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
accelerator = 'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu'
!pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.0-{platform}-linux_x86_64.whl torchvision

import torch
import torch.nn as nn
from torch.optim import Adam
from torchvision import transforms
from torch.autograd import Variable
import torchvision.datasets as datasets
from torch.utils.data import DataLoader, TensorDataset
import matplotlib.pyplot as plt

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(device)
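
# optional check (not in the original post): confirm which GPU PyTorch
# sees; torch.cuda.get_device_name has been available since PyTorch 0.4
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))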

# hyperparameters
n_epochs = 50
n_batch_size = 200
n_display_step = 200
n_learning_rate = 1e-3
n_download_cifar = True

# import cifar
# more about cifar https://www.cs.toronto.edu/~kriz/cifar.html

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

train_dataset = datasets.CIFAR10(
                    root="../datasets/cifar", 
                    train=True, 
                    transform=transform,
                    download=n_download_cifar)
test_dataset = datasets.CIFAR10(
                    root="../datasets/cifar", 
                    train=False, 
                    transform=transform)

# create data loader
train_loader = DataLoader(train_dataset, batch_size=n_batch_size, shuffle=True, num_workers=2)
test_loader = DataLoader(test_dataset, batch_size=n_batch_size, shuffle=False)
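
# suggestion (not in the original code): when training on a GPU,
# pin_memory=True can speed up host-to-device copies, e.g.
# train_loader = DataLoader(train_dataset, batch_size=n_batch_size,
#                           shuffle=True, num_workers=2, pin_memory=True)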

# build CNN
class CNN(nn.Module):

    def __init__(self):
        super(CNN, self).__init__()

        # (3, 32, 32)
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 32, 5, 1, 2),
            nn.ReLU(),
            nn.MaxPool2d(2, 2))

        # (32, 16, 16)
        self.conv2 = nn.Sequential(
            nn.Conv2d(32, 16, 5, 1, 2),
            nn.ReLU(),
            nn.MaxPool2d(2, 2))

        # (16, 8, 8)
        self.out = nn.Linear(16 * 8 * 8, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)
        out = self.out(x)
        return out

net = CNN()
net.to(device)
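# optional check (not in the original code): parameters should report
# device 'cuda:0' if the model really lives on the GPU
print(next(net.parameters()).device)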
criterion = nn.CrossEntropyLoss()
optimizer = Adam(net.parameters(), lr=n_learning_rate)

def get_accuracy(model, loader):
    model.eval()
    n_samples = 0
    n_correct = 0

    with torch.no_grad():
        for step, (x, y) in enumerate(loader):
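            # Variable is a no-op wrapper from PyTorch 0.4 onward; it is the
            # .to(device) call that actually moves the batch onto the GPU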
            x, y = Variable(x).to(device), Variable(y).to(device)
            out = model(x)
            _, pred = torch.max(out, 1)
            n_samples += y.size(0)
            n_correct += (pred == y).sum().item()

    return n_correct / n_samples


def train(model, criterion, optimizer, epochs, train_loader, test_loader):
    for epoch in range(epochs):
        for step, (x, y) in enumerate(train_loader):
            model.train()
            x, y = Variable(x).to(device), Variable(y).to(device)
            out = model(x)
            loss = criterion(out, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            if step % n_display_step == 0:
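                # note: this evaluates accuracy over the full train and test
                # sets each time it fires, which is itself a slow operation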
                print("Epoch {:2d} Loss {:.4f} Accuracy (Train | Test) {:.4f} {:.4f}".format(epoch, loss.item(), get_accuracy(model, train_loader), get_accuracy(model, test_loader)))

train(net, criterion, optimizer, n_epochs, train_loader, test_loader)
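
# optional sketch (not in the original post): timing one epoch on a CPU
# runtime vs. a GPU runtime is a direct way to compare the two setups
import time
start = time.time()
train(net, criterion, optimizer, 1, train_loader, test_loader)
print("one epoch took {:.1f}s".format(time.time() - start))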

I think this is related to https://dev59.com/X1YM5IYBdhLWcg3wgwXH#51178965. It seems Google gives some users only 5% GPU utilization. - Manuel Lagunas
1 Answer

Your code looks fine. I ran it on my MacBook, on a GPU-enabled machine, and on Google Colab, and compared the training times; my experiments show that your code is properly set up for the GPU.
You could try running the code from this thread to see how much GPU RAM Google has allocated to you. My guess is that you only got 5% GPU utilization.
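For reference, here is a minimal sketch of such a check, assuming the third-party gputil package is installed (pip install gputil); the memoryFree/memoryUsed/memoryTotal attributes come from GPUtil's GPU objects:

import GPUtil

# report how much GPU RAM this Colab session was actually granted (in MB)
for gpu in GPUtil.getGPUs():
    print("GPU RAM free: {:.0f}MB | used: {:.0f}MB | total: {:.0f}MB".format(
        gpu.memoryFree, gpu.memoryUsed, gpu.memoryTotal))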
Regards,
Rex.

I checked the RAM allocation, and apparently I'm one of the unlucky ones :(. Thanks for your answer anyway. - Moore Tech
