RuntimeError: expected scalar type Long but found Float for argument #2 'target'

I am having trouble computing the loss for my neural network. I am not sure why the program expects a long object, because all of my tensors are floats. I looked at threads with a similar error, where the solution was to cast the tensor to float instead of long, but that won't work in my case, because all of the data is already float by the time it is passed to the network.
Here is my code:
# Dataloader
import torch
from torch.utils.data import Dataset, DataLoader

class LoadInfo(Dataset):    
    def __init__(self, prediction, indicator):  
        self.prediction = prediction
        self.indicator = indicator
    def __len__(self):
        return len(self.prediction)
    def __getitem__(self, idx):
        data = torch.tensor(self.indicator.iloc[idx, :],dtype=torch.float)
        data = torch.unsqueeze(data, 0)
        label = torch.tensor(self.prediction.iloc[idx, :],dtype=torch.float)
        sample = {'data': data, 'label': label} 
        return sample

# Trainloader
test_train = LoadInfo(train_label, train_indicators)
trainloader = DataLoader(test_train, batch_size=64,shuffle=True, num_workers=1,pin_memory=True) 
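
# Hypothetical sanity check: inspect one batch from the loader above
batch = next(iter(trainloader))
print(batch['data'].dtype)   # torch.float32
print(batch['label'].dtype)  # torch.float32 -- everything is float, as described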

# The Network
import torch.nn as nn
import torch.nn.functional as F

class NetDense2(nn.Module):

    def __init__(self):
        super(NetDense2, self).__init__()
        self.rnn1 = nn.RNN(11, 100, 3)  
        self.rnn2 = nn.RNN(100, 500, 3)  
        self.fc1 = nn.Linear(500, 100)  
        self.fc2 = nn.Linear(100, 20)
        self.fc3 = nn.Linear(20, 3)

    def forward(self, x):
        x1, h1 = self.rnn1(x)
        x2, h2 = self.rnn2(x1)
        x = F.relu(self.fc1(x2))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

# Allocate / Transfer to GPU      
dense2 = NetDense2()
dense2.cuda()

# Optimizer
import torch.optim as optim
criterion = nn.CrossEntropyLoss()                                 # specify the loss function
optimizer = optim.SGD(dense2.parameters(), lr=0.001, momentum=0.9,weight_decay=0.001)

# Training
dense2.train()
loss_memory = []
for epoch in range(50):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, samp in enumerate(trainloader):
        # get the inputs
        ins = samp['data']
        targets = samp['label']
        tmp = []
        tmp = torch.squeeze(targets.float())
        ins, targets = ins.cuda(),  tmp.cuda()
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = dense2(ins)
        loss = criterion(outputs, targets)     # The loss
        loss.backward()
        optimizer.step()
        # keep track of loss
        running_loss += loss.data.item()

I get the error above at the line "loss = criterion(outputs, targets)".


Are you sure those tags are right? This looks like torch. - Nicolas Gervais
It is torch. You're right. Updating the tags. - davetherock

3 Answers

According to the documentation and the official example on the PyTorch website, the target passed to nn.CrossEntropyLoss() should be of dtype torch.long.
# official example
import torch
import torch.nn as nn
loss = nn.CrossEntropyLoss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.empty(3, dtype=torch.long).random_(5) 

# if you replace this with dtype=torch.float, you get the error above

output = loss(input, target)
output.backward()

Update this line in your code to

label = torch.tensor(self.prediction.iloc[idx, :],dtype=torch.long) #updated torch.float to torch.long
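
Note that with torch.long targets, nn.CrossEntropyLoss additionally expects class indices in the range [0, C-1], where C is the number of classes (3 here, from fc3). A quick check, assuming targets is one batch of labels from the training loop above:

# Hypothetical check: class indices must lie in [0, num_classes - 1]
assert targets.dtype == torch.long
assert targets.min().item() >= 0 and targets.max().item() < 3  # fc3 outputs 3 classes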

Thanks! If the data has to be float, what loss function would you suggest? - davetherock
The data can be in any format; it is only the targets that need to be long. Or are your target values floats as well? - Mughees
The targets are floats. I just want to know whether there is any loss function that can handle floats. Trying MSE now. - davetherock
I am not sure, but have a look at `torch.nn.KLDivLoss()` at https://pytorch.org/docs/master/generated/torch.nn.KLDivLoss.html - Mughees
@Mughees, could you take a look at this question: https://stackoverflow.com/questions/75261037/custom-loss-for-huggingface-trainer-for-sequences - H.H
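
For the float-target case discussed in these comments, a minimal sketch with nn.MSELoss, which does accept float targets (the shapes here are assumptions for illustration):

import torch
import torch.nn as nn

# Minimal sketch: MSELoss compares float predictions against float targets
criterion = nn.MSELoss()
outputs = torch.randn(64, 3, requires_grad=True)  # assumed prediction shape
targets = torch.randn(64, 3)                      # float targets, same shape
loss = criterion(outputs, targets)
loss.backward()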

A simple fix that worked for me is to replace

loss = criterion(outputs, targets)

with

loss = criterion(outputs, targets.long())
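
To see this cast in isolation, a minimal sketch that reproduces the error and the fix (tensor names here are illustrative):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(4, 3, requires_grad=True)      # 4 samples, 3 classes
float_targets = torch.tensor([0.0, 2.0, 1.0, 2.0])  # float class labels

# criterion(logits, float_targets)  # RuntimeError: expected scalar type Long ...
loss = criterion(logits, float_targets.long())      # the cast fixes the dtype
loss.backward()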


A small patch to your code could look like this:

for epoch in range(50):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, samp in enumerate(trainloader):
        # get the inputs
        ins = samp['data']
        targets = samp['label'].long() # HERE IS THE CHANGE <<---------------
        targets = torch.squeeze(targets)  # note: do not call .float() here, or the cast is undone
        ins, targets = ins.cuda(), targets.cuda()
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = dense2(ins)
        loss = criterion(outputs, targets)     # The loss
        loss.backward()
        optimizer.step()
        # keep track of loss
        running_loss += loss.item()
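
Equivalently, the cast can be moved into the Dataset so the training loop stays unchanged; this is the same fix as the accepted answer:

label = torch.tensor(self.prediction.iloc[idx, :], dtype=torch.long)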
