How to compute bootstrapped cross-entropy loss in PyTorch?


I have read some papers that use a "bootstrapped cross-entropy loss" to train segmentation networks. The idea is to focus only on the hardest k% (say, 15%) of the pixels, which improves learning performance, especially when easy pixels dominate.

Currently, I am using the standard cross-entropy:

loss = F.binary_cross_entropy(mask, gt)

How can I efficiently convert this into a bootstrapped version in PyTorch?
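In other words, I have something like this naive sketch in mind, but I am unsure it is the right or efficient way to do it (bootstrapped_bce is just an illustrative name; mask holds predicted probabilities, gt the float binary targets):

import torch
import torch.nn.functional as F

def bootstrapped_bce(mask, gt, k=0.15):
    # Per-pixel loss, then keep only the hardest k% and average them.
    raw = F.binary_cross_entropy(mask, gt, reduction='none').view(-1)
    num_hard = max(1, int(raw.numel() * k))
    hard, _ = torch.topk(raw, num_hard, sorted=False)
    return hard.mean()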

2 Answers


Usually we also add a "warm-up" period to the loss, so that the network can first learn the easy regions and then transition to the harder ones.

This implementation starts with k=100 and keeps it for 20000 iterations, then linearly decays it to k=15 over another 50000 iterations.

import torch
import torch.nn as nn
import torch.nn.functional as F


class BootstrappedCE(nn.Module):
    def __init__(self, start_warm=20000, end_warm=70000, top_p=0.15):
        super().__init__()

        self.start_warm = start_warm
        self.end_warm = end_warm
        self.top_p = top_p

    def forward(self, input, target, it):
        # During warm-up, use the plain mean over all pixels (p = 1.0).
        if it < self.start_warm:
            return F.cross_entropy(input, target), 1.0

        raw_loss = F.cross_entropy(input, target, reduction='none').view(-1)
        num_pixels = raw_loss.numel()

        # Linearly decay p from 1.0 to top_p between start_warm and end_warm.
        if it > self.end_warm:
            this_p = self.top_p
        else:
            this_p = self.top_p + (1 - self.top_p) * ((self.end_warm - it) / (self.end_warm - self.start_warm))
        # Keep only the hardest this_p fraction of pixels.
        loss, _ = torch.topk(raw_loss, int(num_pixels * this_p), sorted=False)
        return loss.mean(), this_p
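A minimal usage sketch (the shapes and values are illustrative assumptions; the current iteration counter is passed in explicitly):

criterion = BootstrappedCE(start_warm=20000, end_warm=70000, top_p=0.15)

logits = torch.randn(2, 21, 64, 64, requires_grad=True)  # N x C x H x W
labels = torch.randint(0, 21, (2, 64, 64))               # N x H x W

loss, p_used = criterion(logits, labels, it=30000)  # mid-decay: p_used = 0.83
loss.backward()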


In addition to the self-answer provided by @hkchengrex (for the future self and for API similarity with PyTorch), one can first implement a functional version (with some extra arguments on top of the original torch.nn.functional.cross_entropy), like this (I prefer reduction as a callable instead of a predefined string):

import typing

import torch


def bootstrapped_cross_entropy(
    inputs,
    targets,
    iteration,
    p: float,
    warmup: typing.Union[typing.Callable[[float, int], float], int] = -1,
    weight=None,
    ignore_index=-100,
    reduction: typing.Callable[[torch.Tensor], torch.Tensor] = torch.mean,
):
    if not 0 < p < 1:
        raise ValueError("p should be in (0, 1) range, got: {}".format(p))

    # An int warmup means: use all pixels (p = 1.0) for the first `warmup`
    # iterations; a callable computes the current fraction itself.
    if isinstance(warmup, int):
        this_p = 1.0 if iteration < warmup else p
    elif callable(warmup):
        this_p = warmup(p, iteration)
    else:
        raise ValueError(
            "warmup should be int or callable, got {}".format(type(warmup))
        )

    # Shortcut: all pixels are used, so no top-k selection is needed.
    # (reduction is a callable here, so it cannot be passed to cross_entropy
    # directly; apply it to the unreduced loss instead.)
    if this_p == 1.0:
        return reduction(
            torch.nn.functional.cross_entropy(
                inputs, targets, weight=weight, ignore_index=ignore_index, reduction="none"
            ).view(-1)
        )

    raw_loss = torch.nn.functional.cross_entropy(
        inputs, targets, weight=weight, ignore_index=ignore_index, reduction="none"
    ).view(-1)
    num_pixels = raw_loss.numel()

    # Keep only the hardest this_p fraction of pixels.
    loss, _ = torch.topk(raw_loss, int(num_pixels * this_p), sorted=False)
    return reduction(loss)

Also, warmup can be specified either as a callable (taking p and the current iteration) or as an int, which allows flexible or simple scheduling; see the sketch below.
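For example, the linear decay from the first answer could be expressed as such a callable (a sketch; linear_warmup and its 20000/70000 boundaries are illustrative, and inputs, targets and iteration are assumed to be in scope):

def linear_warmup(p: float, iteration: int) -> float:
    # All pixels before iteration 20000, then linear decay to p until 70000.
    start, end = 20000, 70000
    if iteration < start:
        return 1.0
    if iteration >= end:
        return p
    return p + (1 - p) * (end - iteration) / (end - start)

loss = bootstrapped_cross_entropy(
    inputs, targets, iteration, p=0.15, warmup=linear_warmup
)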

And one can create a class based on _WeightedLoss with an automatically incremented iteration (so only inputs and targets have to be passed):

class BootstrappedCrossEntropy(torch.nn.modules.loss._WeightedLoss):
    def __init__(
        self,
        p: float,
        warmup: typing.Union[typing.Callable[[float, int], float], int] = -1,
        weight=None,
        ignore_index=-100,
        reduction: typing.Callable[[torch.Tensor], torch.Tensor] = torch.mean,
    ):
        super().__init__(weight, size_average=None, reduce=None, reduction=reduction)

        self.p = p
        self.warmup = warmup
        self.ignore_index = ignore_index
        # Incremented on every forward call, so the schedule advances
        # automatically.
        self._current_iteration = -1

    def forward(self, inputs, targets):
        self._current_iteration += 1
        return bootstrapped_cross_entropy(
            inputs,
            targets,
            self._current_iteration,
            self.p,
            self.warmup,
            self.weight,
            self.ignore_index,
            self.reduction,
        )
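A usage sketch (shapes are illustrative assumptions; the iteration counter advances automatically with every call):

criterion = BootstrappedCrossEntropy(p=0.15, warmup=20000)

inputs = torch.randn(2, 21, 64, 64, requires_grad=True)
targets = torch.randint(0, 21, (2, 64, 64))

loss = criterion(inputs, targets)  # call 0: still in warm-up, all pixels used
loss.backward()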
