How to train a Faster R-CNN model in PyTorch on a dataset that includes negative samples


I am trying to train a torchvision Faster R-CNN model for object detection on my own data, using the code from the torchvision object detection fine-tuning tutorial, but I get the following error:

Expected target boxes to be a tensor of shape [N, 4], got torch.Size([0])

This is caused by the negative data in my custom dataset (empty training images with no bounding boxes). How can we change the dataset class below to enable Faster R-CNN training on a dataset that includes such negative data?

import os

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class MyCustomDataset(Dataset):

    def __init__(self, root, transforms):
        self.root = root
        self.transforms = transforms
        # load all image files, sorting them to
        # ensure that they are aligned
        self.imgs = list(sorted(os.listdir(os.path.join(root, "PNGImages"))))
        self.masks = list(sorted(os.listdir(os.path.join(root, "PedMasks"))))

    def __len__(self):
        return len(self.imgs)

    def __getitem__(self, idx):
        # load images and masks
        img_path = os.path.join(self.root, "PNGImages", self.imgs[idx])
        mask_path = os.path.join(self.root, "PedMasks", self.masks[idx])
        img = Image.open(img_path).convert("RGB")
        # note that we haven't converted the mask to RGB,
        # because each color corresponds to a different instance
        # with 0 being background
        mask = Image.open(mask_path)
        # convert the PIL Image into a numpy array
        mask = np.array(mask)
        # instances are encoded as different colors
        obj_ids = np.unique(mask)
        # first id is the background, so remove it
        obj_ids = obj_ids[1:]

        # split the color-encoded mask into a set of binary masks
        masks = mask == obj_ids[:, None, None]

        # get bounding box coordinates for each mask
        num_objs = len(obj_ids)
        
        boxes = []
        for i in range(num_objs):
            pos = np.where(masks[i])
            xmin = np.min(pos[1])
            xmax = np.max(pos[1])
            ymin = np.min(pos[0])
            ymax = np.max(pos[0])
            boxes.append([xmin, ymin, xmax, ymax])

        # convert everything into a torch.Tensor
        boxes = torch.as_tensor(boxes, dtype=torch.float32)      
        # there is only one class  
        labels = torch.ones((num_objs,), dtype=torch.int64)
        image_id = torch.tensor([idx])
        area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])
        # suppose all instances are not crowd
        iscrowd = torch.zeros((num_objs,), dtype=torch.int64)

        target = {}
        target["boxes"] = boxes
        target["labels"] = labels
        target["image_id"] = image_id
        target["area"] = area
        target["iscrowd"] = iscrowd

        if self.transforms is not None:
            img, target = self.transforms(img, target)

        return img, target
1 Answer


We need to make two changes to the dataset class:

1- Empty boxes are fed in as:

if num_objs == 0:
    boxes = torch.zeros((0, 4), dtype=torch.float32)
else:
    boxes = torch.as_tensor(boxes, dtype=torch.float32)
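
To see why this guard is needed: when num_objs is 0, the boxes list stays empty, and torch.as_tensor on an empty list yields a 1-D tensor of shape (0,), which is exactly the torch.Size([0]) that the error message complains about. The model expects a 2-D tensor with four coordinates per row, even when there are zero rows:

import torch

boxes = torch.as_tensor([], dtype=torch.float32)
print(boxes.shape)   # torch.Size([0])    -> rejected by the model

boxes = torch.zeros((0, 4), dtype=torch.float32)
print(boxes.shape)   # torch.Size([0, 4]) -> a valid "no objects" target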

2- When there are no bounding boxes, assign area = 0: change the code that computes the area and convert the result into a torch tensor:

area = 0
for i in range(num_objs):
    pos = np.where(masks[i])
    xmin = np.min(pos[1])
    xmax = np.max(pos[1])
    ymin = np.min(pos[0])
    ymax = np.max(pos[0])
    area += (xmax-xmin)*(ymax-ymin)
area = torch.as_tensor(area, dtype=torch.float32)
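
As a side note (an alternative, not part of the original answer): once change 1 guarantees that boxes is a (0, 4) tensor, the tutorial's vectorized per-box computation also handles the empty case, because slicing an empty 2-D tensor simply returns an empty 1-D tensor:

# Yields per-box areas as an (N,) tensor, and a valid (0,) tensor when N == 0.
area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])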

We will incorporate the second step into the existing for loop.
The modified dataset class will therefore look like this:
import os

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class MyCustomDataset(Dataset):

    def __init__(self, root, transforms):
        self.root = root
        self.transforms = transforms
        # load all image files, sorting them to
        # ensure that they are aligned
        self.imgs = list(sorted(os.listdir(os.path.join(root, "PNGImages"))))
        self.masks = list(sorted(os.listdir(os.path.join(root, "PedMasks"))))

    def __len__(self):
        return len(self.imgs)

    def __getitem__(self, idx):
        # load images and masks
        img_path = os.path.join(self.root, "PNGImages", self.imgs[idx])
        mask_path = os.path.join(self.root, "PedMasks", self.masks[idx])
        img = Image.open(img_path).convert("RGB")
        # note that we haven't converted the mask to RGB,
        # because each color corresponds to a different instance
        # with 0 being background
        mask = Image.open(mask_path)
        # convert the PIL Image into a numpy array
        mask = np.array(mask)
        # instances are encoded as different colors
        obj_ids = np.unique(mask)
        # first id is the background, so remove it
        obj_ids = obj_ids[1:]

        # split the color-encoded mask into a set of binary masks
        masks = mask == obj_ids[:, None, None]

        # get bounding box coordinates for each mask
        num_objs = len(obj_ids)
        
        boxes = []
        area = 0 
        for i in range(num_objs):
            pos = np.where(masks[i])
            xmin = np.min(pos[1])
            xmax = np.max(pos[1])
            ymin = np.min(pos[0])
            ymax = np.max(pos[0])
            boxes.append([xmin, ymin, xmax, ymax])
            area += (xmax-xmin)*(ymax-ymin)
        area = torch.as_tensor(area, dtype=torch.float32)

        # Handle empty bounding boxes
        if num_objs == 0:
            boxes = torch.zeros((0, 4), dtype=torch.float32)
        else:
            boxes = torch.as_tensor(boxes, dtype=torch.float32)   

        # there is only one class  
        labels = torch.ones((num_objs,), dtype=torch.int64)
        image_id = torch.tensor([idx])

        # area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])  # replaced by the accumulation in the loop above

        # suppose all instances are not crowd
        iscrowd = torch.zeros((num_objs,), dtype=torch.int64)

        target = {}
        target["boxes"] = boxes
        target["labels"] = labels
        target["image_id"] = image_id
        target["area"] = area
        target["iscrowd"] = iscrowd

        if self.transforms is not None:
            img, target = self.transforms(img, target)

        return img, target
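
For completeness, here is a minimal training sketch showing the fixed dataset in use. The to_tensor transform, collate_fn, dataset root, and hyperparameters are illustrative assumptions, not part of the original question; note also that the model itself must accept empty targets, which torchvision supports for Faster R-CNN from version 0.6 onward.

import torch
import torchvision
from torch.utils.data import DataLoader
from torchvision.transforms import functional as TF

# Illustrative transforms callable matching this dataset's (img, target)
# signature; it only converts the PIL image to a float tensor.
def to_tensor_transform(img, target):
    return TF.to_tensor(img), target

# "PennFudanPed" is an illustrative root containing PNGImages/ and PedMasks/.
dataset = MyCustomDataset(root="PennFudanPed", transforms=to_tensor_transform)

# Detection models take lists of images and targets, so the batch is
# collated into tuples instead of stacked tensors.
loader = DataLoader(dataset, batch_size=2, shuffle=True,
                    collate_fn=lambda batch: tuple(zip(*batch)))

# num_classes=2: background plus the single foreground class.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
model.train()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

for images, targets in loader:
    loss_dict = model(list(images), list(targets))  # returns a dict of losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()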

Why do you add up the areas of multiple objects in 'area += (xmax-xmin)*(ymax-ymin)'? - samra
Also, if we add negative samples, do we need to increase the number of classes to three: foreground, background, and a negative class? - samra
The areas are added because the output "target" needs the total area occupied by all the objects, and it is sent to the target as: target["area"] = area - Abhi25t
No, don't create a separate class for negative data. Here it is the same as the background. - Abhi25t
So you mean that if an image has two objects with areas of, say, 120 and 220, they get added together into 340? I don't understand the rationale for assigning 340. Shouldn't it be 220 and 120, to help the detection network? - samra
The total area may be needed in different use cases, for example counting people in a crowded stadium, or the area occupied by crops on a farm. If the bounding boxes of two different objects overlap, the simple-addition logic breaks down; in that case we need to take the union of the two. - Abhi25t
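
To illustrate the union idea from the last comment with a small sketch (the box coordinates are made up): painting each box onto a boolean canvas counts overlapping pixels once, whereas simple addition counts them twice.

import numpy as np

boxes = [(10, 10, 60, 60), (40, 40, 100, 100)]  # (xmin, ymin, xmax, ymax)
canvas = np.zeros((200, 200), dtype=bool)
for xmin, ymin, xmax, ymax in boxes:
    canvas[ymin:ymax, xmin:xmax] = True
union_area = int(canvas.sum())                  # 5700: overlap counted once
summed_area = sum((x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in boxes)  # 6100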
