PyTorch embedding layer raises "expected ... cuda ... but got ... cpu" error


I am moving a PyTorch model from CPU (where it works) to GPU (where it does not, so far). The error message (trimmed to the important parts) is as follows:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-12-a7bb230c924c> in <module>
      1 model = FeedforwardTabularModel()
      2 model.cuda()
----> 3 model.fit(X_train_sample.values, y_train_sample.values)

<ipython-input-11-40b1edae7417> in fit(self, X, y)
    100         for epoch in range(self.n_epochs):
    101             for i, (X_batch, y_batch) in enumerate(batches):
--> 102                 y_pred = model(X_batch).squeeze()
    103                 # scheduler.batch_step()  # Disabled due to a bug, see above.
    104                 loss = self.loss_fn(y_pred, y_batch)

[...]

/opt/conda/lib/python3.6/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
   1482         # remove once script supports set_grad_enabled
   1483         _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1484     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
   1485 
   1486 

RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select

Here is the full model definition:

import torch
from torch import nn
import torch.utils.data
# ^ https://discuss.pytorch.org/t/attributeerror-module-torch-utils-has-no-attribute-data/1666


class FeedforwardTabularModel(nn.Module):
    def __init__(self):
        super().__init__()

        self.batch_size = 512
        self.base_lr, self.max_lr = 0.001, 0.003
        self.n_epochs = 5
        self.cat_vars_embedding_vector_lengths = [
            (1115, 80), (7, 4), (3, 3), (12, 6), (31, 10), (2, 2), (25, 10), (26, 10), (4, 3),
            (3, 3), (4, 3), (23, 9), (8, 4), (12, 6), (52, 15), (22, 9), (6, 4), (6, 4), (3, 3),
            (3, 3), (8, 4), (8, 4)
        ]
        self.loss_fn = torch.nn.MSELoss()
        self.score_fn = torch.nn.MSELoss()

        # Layer 1: embeddings.
        self.embeddings = []
        for (in_size, out_size) in self.cat_vars_embedding_vector_lengths:
            emb = nn.Embedding(in_size, out_size)
            self.embeddings.append(emb)

        # Layer 1: dropout.
        self.embedding_dropout = nn.Dropout(0.04)

        # Layer 1: batch normalization (of the continuous variables).
        self.cont_batch_norm = nn.BatchNorm1d(16, eps=1e-05, momentum=0.1)

        # Layers 2 through 9: sequential feedforward model.
        self.seq_model = nn.Sequential(*[
            nn.Linear(in_features=215, out_features=1000, bias=True),
            nn.ReLU(),
            nn.BatchNorm1d(1000, eps=1e-05, momentum=0.1),
            nn.Dropout(p=0.001),
            nn.Linear(in_features=1000, out_features=500, bias=True),
            nn.ReLU(),
            nn.BatchNorm1d(500, eps=1e-05, momentum=0.1),
            nn.Dropout(p=0.01),
            nn.Linear(in_features=500, out_features=1, bias=True)
        ])


    def forward(self, x):
        # Layer 1: embeddings.
        inp_offset = 0
        embedding_subvectors = []
        for emb in self.embeddings:
            index = torch.tensor(inp_offset, dtype=torch.int64).cuda()
            inp = torch.index_select(x, dim=1, index=index).long().cuda()
            out = emb(inp)
            out = out.view(out.shape[2], out.shape[0], 1).squeeze()
            embedding_subvectors.append(out)
            inp_offset += 1
        out_cat = torch.cat(embedding_subvectors)
        out_cat = out_cat.view(out_cat.shape[::-1])

        # Layer 1: dropout.
        out_cat = self.embedding_dropout(out_cat)

        # Layer 1: batch normalization (of the continuous variables).
        out_cont = self.cont_batch_norm(x[:, inp_offset:])

        out = torch.cat((out_cat, out_cont), dim=1)

        # Layers 2 through 9: sequential feedforward model.
        out = self.seq_model(out)

        return out


    def fit(self, X, y):
        self.train()

        # TODO: set a random seed to invoke determinism.
        # cf. https://github.com/pytorch/pytorch/issues/11278

        X = torch.tensor(X, dtype=torch.float32).cuda()
        y = torch.tensor(y, dtype=torch.float32).cuda()

        # The build of PyTorch on Kaggle has a bug that prevents us from using
        # CyclicLR with ADAM. Cf. GH#19003.
        # optimizer = torch.optim.Adam(model.parameters(), lr=max_lr)
        # scheduler = torch.optim.lr_scheduler.CyclicLR(
        #     optimizer, base_lr=base_lr, max_lr=max_lr,
        #     step_size_up=300, step_size_down=300,
        #     mode='exp_range', gamma=0.99994
        # )
        optimizer = torch.optim.Adam(model.parameters(), lr=(self.base_lr + self.max_lr) / 2)
        batches = torch.utils.data.DataLoader(
            torch.utils.data.TensorDataset(X, y),
            batch_size=self.batch_size, shuffle=True
        )

        for epoch in range(self.n_epochs):
            for i, (X_batch, y_batch) in enumerate(batches):
                y_pred = model(X_batch).squeeze()
                # scheduler.batch_step()  # Disabled due to a bug, see above.
                loss = self.loss_fn(y_pred, y_batch)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
            print(
                f"Epoch {epoch + 1}/{self.n_epochs}, Loss {loss.detach().numpy()}"
            )


    def predict(self, X):
        self.eval()
        with torch.no_grad():
            y_pred = model(torch.tensor(X, dtype=torch.float32).cuda())
        return y_pred.squeeze()


    def score(self, X, y):
        y_pred = self.predict(X)
        y = torch.tensor(y, dtype=torch.float32).cuda()
        return self.score_fn(y, y_pred)


model = FeedforwardTabularModel()
model.cuda()
model.fit(X_train_sample.values, y_train_sample.values)

This kind of error usually occurs when the model contains a tensor that should be on the GPU but is actually on the CPU. As far as I can tell, though, I have already added .cuda() calls everywhere they are needed: every time a torch.tensor is declared, and model.cuda() is run before model.fit.

What is causing this error?

4 Answers


Someone on a different forum provided the solution:

PyTorch requires you to do self.module_name = module for things to be registered correctly. Keeping them in a list is fine as well; you just need to do setattr(self, 'emb_{}'.format(i), emb) at each step of the loop.

Because I kept the embedding layers in a plain Python list, and PyTorch requires all layers to be registered as attributes on the model object, they were not moved to GPU memory when model.cuda() was called. Tricky!
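A minimal sketch of that fix applied to the embedding loop in the question's __init__ (only this loop changes; the setattr call is what registers each layer on the module):

        # Layer 1: embeddings, registered as attributes so model.cuda() moves them too.
        self.embeddings = []
        for i, (in_size, out_size) in enumerate(self.cat_vars_embedding_vector_lengths):
            emb = nn.Embedding(in_size, out_size)
            setattr(self, 'emb_{}'.format(i), emb)  # register each embedding on the module
            self.embeddings.append(emb)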


Thank you very much for sharing your answer. It solved my problem. - Fardin Abdi
@Fardin Another, better way is: nn.ModuleList(self.linear) - zhao yufei

A better way that works:
from torch import nn


class DeepLinear(nn.Module):
    def __init__(self, dim: int, depth: int):
        super(DeepLinear, self).__init__()
        self.depth = depth
        self.linear = []
        for _ in range(depth):
            self.linear.append(nn.Linear(dim, dim))

        # Wrapping the plain list in nn.ModuleList registers every layer on the module.
        self.linear = nn.ModuleList(self.linear)

    def forward(self, inputs):
        for i in range(self.depth):
            inputs = self.linear[i](inputs)

        return inputs

In other words, as in the code above, wrap the list variable that holds the layers in nn.ModuleList.
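A quick hypothetical usage check (assuming a GPU is available) showing that the wrapped layers are registered and therefore moved by .cuda():

model = DeepLinear(dim=8, depth=3)
model.cuda()
# Every layer inside the ModuleList was registered, so its parameters are on the GPU.
print(next(model.linear[0].parameters()).device)  # cuda:0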


Could you try moving X_batch and y_batch to CUDA, e.g. X_batch.cuda() and y_batch.cuda()? The data may be processed (shuffled) on the CPU, which could be causing the problem. Hope this helps.
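A minimal sketch of that suggestion applied to the training loop from the question (assuming the model itself is already on the GPU):

for epoch in range(self.n_epochs):
    for i, (X_batch, y_batch) in enumerate(batches):
        # Move each batch to the GPU right before the forward pass.
        X_batch, y_batch = X_batch.cuda(), y_batch.cuda()
        y_pred = model(X_batch).squeeze()
        loss = self.loss_fn(y_pred, y_batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()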


A good idea, but unfortunately it did not fix the error. - Aleksey Bilogur
Hi @AlekseyBilogur, are you still getting the same error? - Bill Chen
Someone on a different forum answered my question, and I have posted it as an answer here. By the way, your suggestion above of moving the data batch by batch instead of all at once is a good one too. - Aleksey Bilogur

This simple snippet shows how I fixed the error:
import torch
from torch import nn

device = 'cuda'
x = torch.LongTensor([0, 1, 2])
x = x.to(device)
emb = nn.Embedding(3, 5)
emb.weight = nn.Parameter(emb.weight.to(device))  # Moving the weights of the embedding layer to the GPU
x = emb(x)

Not sure whether this is a valid approach, but it seems to work for now.
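For comparison, a more common idiom (a sketch, not the answerer's code) is to move the whole embedding module instead of reassigning its weight Parameter:

import torch
from torch import nn

device = 'cuda'
x = torch.LongTensor([0, 1, 2]).to(device)
emb = nn.Embedding(3, 5).to(device)  # .to() moves all of the module's parameters at once
out = emb(x)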

