How can I log metrics (e.g. validation loss) to TensorBoard when using PyTorch Lightning?

I train my model with PyTorch Lightning (on GPUs, using DDP), and the default logger is TensorBoard.
My code logs the training and validation loss in each training and validation step, respectively.
import pytorch_lightning as pl
import torch.nn.functional as F


class MyLightningModel(pl.LightningModule):

    def training_step(self, batch, batch_idx):
        x, labels = batch
        out = self(x)
        loss = F.mse_loss(out, labels)
        self.log("train_loss", loss)
        return loss

    def validation_step(self, batch, batch_idx):
        x, labels = batch
        out = self(x)
        loss = F.mse_loss(out, labels)
        self.log("val_loss", loss)
        return loss

TensorBoard correctly plots both the train_loss and val_loss charts in the SCALARS tab. However, in the HPARAMS tab in the left sidebar, only hp_metric appears under Metrics.




How can we add train_loss and val_loss to the Metrics section, so that the Parallel Coordinates View can use val_loss instead of hp_metric?

Versions: PyTorch 1.8.1, PyTorch Lightning 1.2.6, TensorBoard 2.4.1.
1 Answer


Create the TensorBoardLogger with default_hp_metric=False, then register every metric you want under HPARAMS by passing it with an initial value to log_hyperparams. Example code (abridged from the full code):

import torch.nn as nn
from pytorch_lightning import LightningModule, Trainer, loggers
from torchmetrics import Accuracy, MetricCollection
from torchvision import models


class BasicModule(LightningModule):
    def __init__(self, lr=0.01):
        super().__init__()
        self.model = models.resnet18(pretrained=False)
        self.criterion = nn.CrossEntropyLoss()
        self.lr = lr
        self.save_hyperparameters()
        
        metric = MetricCollection({'top@1': Accuracy(top_k=1), 'top@5': Accuracy(top_k=5)})
        self.train_metric = metric.clone(prefix='train/')
        self.valid_metric = metric.clone(prefix='valid/')
    
    def on_train_start(self) -> None:
        # log hyperparams
        self.logger.log_hyperparams(self.hparams, {'train/top@1': 0, 'train/top@5': 0, 'valid/top@1': 0, 'valid/top@5': 0})
        return super().on_train_start()
    
    def training_step(self, batch, batch_idx, optimizer_idx=None):
        return self.shared_step(*batch, self.train_metric)

    def validation_step(self, batch, batch_idx):
        return self.shared_step(*batch, self.valid_metric)

    def shared_step(self, x, y, metric):
        y_hat = self.model(x)
        loss = self.criterion(y_hat, y)
        self.log_dict(metric(y_hat, y), prog_bar=True)
        return loss

if __name__ == '__main__':
    # Disable the default hp_metric so that only our own metrics appear under HPARAMS
    logger = loggers.TensorBoardLogger('', 'lightning_logs', default_hp_metric=False)
    trainer = Trainer(max_epochs=2, gpus='0,', logger=logger, precision=16)
    # trainer.fit(BasicModule(), ...) with your dataloaders

Content provided by Stack Overflow (translated from the original English post).