Optuna PyTorch: the returned value from the objective function cannot be cast to float.

def autotune(trial):
    cfg = {'device': "cuda" if torch.cuda.is_available() else "cpu",
           # 'train_batch_size': 64,
           # 'test_batch_size': 1000,
           # 'n_epochs': 1,
           # 'seed': 0,
           # 'log_interval': 100,
           # 'save_model': False,
           # 'dropout_rate': trial.suggest_uniform('dropout_rate', 0, 1.0),
           'lr': trial.suggest_loguniform('lr', 1e-3, 1e-2),
           'momentum': trial.suggest_uniform('momentum', 0.4, 0.99),
           'optimizer': trial.suggest_categorical('optimizer', [torch.optim.Adam, torch.optim.SGD, torch.optim.RMSprop, torch.optim.$
           'activation': F.tanh}
    optimizer = cfg['optimizer'](model.parameters(), lr=cfg['lr'])
    # optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

As shown above, I am trying to run an Optuna study to search for the optimal hyperparameters of a CNN model.

    # Train the model.
    # Use a small number of epochs for a large dataset;
    # an epoch is one pass through all the training data.
    # losses = []  # use this array for plotting losses
    for _ in range(epochs):
        # iterate over the data_loader
        for i, (data, labels) in enumerate(trainloader):
            # forward pass to get a prediction
            # (data is the training batch drawn from X_train)
            if name.lower() == "rnn":
                model.hidden = (torch.zeros(1, 1, model.hidden_sz),
                                torch.zeros(1, 1, model.hidden_sz))

            y_pred = model.forward(data)

            # compute loss/error by comparing predicted vs actual labels
            loss = criterion(y_pred, labels)
            # losses.append(loss)

            if i % 10 == 0:  # print the loss every 10 batches
                print(f'epoch {i} and loss is: {loss}')

            # backpropagation
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

study = optuna.create_study(sampler=optuna.samplers.TPESampler(), direction='minimize', pruner=optuna.pruners.SuccessiveHalvingPruner())
study.optimize(autotune, n_trials=1)

However, when I run the code above to tune and find the optimal parameters, I get the following error. The trial appears to have failed, even though the epoch losses and values are still printed. Any advice is appreciated, thanks!

[W 2020-11-11 13:59:48,000] Trial 0 failed, because the returned value from the objective function cannot be cast to float. Returned value is: None
Traceback (most recent call last):
  File "autotune2", line 481, in <module>
    n_instances, n_features, scores = run_analysis()
  File "autotune2", line 350, in run_analysis
    print(study.best_params)
  File "/home/shar/anaconda3/lib/python3.7/site-packages/optuna/study.py", line 67, in best_params
    return self.best_trial.params
  File "/home/shar/anaconda3/lib/python3.7/site-packages/optuna/study.py", line 92, in best_trial
    return copy.deepcopy(self._storage.get_best_trial(self._study_id))
  File "/home/shar/anaconda3/lib/python3.7/site-packages/optuna/storages/_in_memory.py", line 287, in get_best_trial
    raise ValueError("No trials are completed yet.")
ValueError: No trials are completed yet.
1 Answer


This exception is raised because the objective function of your study must return a float.

In your case, the problem is triggered by this line:

study.optimize(autotune, n_trials=1)

The autotune function you defined earlier has no return value, so it cannot be used for optimization.
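A minimal, library-free illustration of what goes wrong (the function name below is made up for this sketch): a Python function that never reaches a return statement implicitly returns None, and Optuna's attempt to cast that to float fails:

```python
# A function with no `return` statement implicitly returns None.
def objective_without_return():
    loss = 0.123  # computed but never returned

result = objective_without_return()
print(result)  # None

# Optuna casts the objective's return value to float, which fails for None:
try:
    float(result)
except TypeError as e:
    print(type(e).__name__)  # TypeError
```

This is exactly the "returned value from the objective function cannot be cast to float. Returned value is: None" warning in the log above.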

How to fix it?

For hyperparameter search, the autotune function must return some metric that is available after training, such as the loss or the cross-entropy.

A quick fix for your code looks like this:

def autotune(trial):
    cfg = {'device': "cuda" if torch.cuda.is_available() else "cpu",
           ...etc...
          }

    best_loss = float("inf")

    # Train the model
    for _ in range(epochs):
        for i, (data, labels) in enumerate(trainloader):
            ... (train the model) ...
            # compute loss/error by comparing predicted vs actual labels
            loss = criterion(y_pred, labels)
            best_loss = min(loss.item(), best_loss)  # .item() yields a plain float

    return best_loss

There is a good PyTorch example in the Optuna repository that uses a PyTorch callback to report accuracy (easy to change to RMSE if needed). It also runs multiple trials and takes the median over the hyperparameters.
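The pattern behind the quick fix can be checked without PyTorch or Optuna at all. The sketch below (all names and numbers are invented for illustration) shows an objective that tracks the best loss across a fake training loop and returns it as a plain float, which is all Optuna's contract requires:

```python
import random

def autotune_sketch(lr):
    """Stand-in objective: 'trains' by drawing noisy losses and
    returns the lowest one as a plain Python float."""
    rng = random.Random(0)           # fixed seed so the sketch is reproducible
    best_loss = float("inf")
    for _ in range(5):               # pretend epochs
        loss = lr + rng.random()     # pretend batch loss
        best_loss = min(best_loss, loss)
    return best_loss                 # a float -> Optuna can cast it

value = autotune_sketch(0.01)
print(isinstance(value, float))  # True
```

Because the function always returns a float, `study.optimize` can record the trial as completed and `study.best_params` no longer raises "No trials are completed yet."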

