How do I fix "UserWarning: Using a target size (torch.Size([])) that is different to the input size (torch.Size([1]))"?


I am trying to run the code from a book about reinforcement learning with PyTorch. According to the book the code is supposed to work, but for me the model does not converge and the reward stays negative. I also get the following user warning:

/home/user/.local/lib/python3.6/site-packages/ipykernel_launcher.py:30: UserWarning: Using a target size (torch.Size([])) that is different to the input size (torch.Size([1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.

I am a complete beginner with PyTorch, but I would have thought size([]) is not a valid tensor size? I suspect something is wrong with the code, but after trying to fix it for a while I have not found anything. I also contacted the book's publisher some time ago, but unfortunately I never got a reply.

So I would like to ask here whether anyone has seen this error before and might know how to fix it?

The code implements A2C reinforcement learning in the "mountain car" gym environment. It can also be found here: https://github.com/PacktPublishing/PyTorch-1.x-Reinforcement-Learning-Cookbook/blob/master/Chapter08/chapter8/actor_critic_mountaincar.py

'''
Source codes for PyTorch 1.0 Reinforcement Learning (Packt Publishing)
Chapter 8: Implementing Policy Gradients and Policy Optimization
Author: Yuxi (Hayden) Liu
'''

import torch
import gym
import torch.nn as nn
import torch.nn.functional as F


env = gym.make('MountainCarContinuous-v0')


class ActorCriticModel(nn.Module):
    def __init__(self, n_input, n_output, n_hidden):
        super(ActorCriticModel, self).__init__()
        self.fc = nn.Linear(n_input, n_hidden)
        self.mu = nn.Linear(n_hidden, n_output)
        self.sigma = nn.Linear(n_hidden, n_output)
        self.value = nn.Linear(n_hidden, 1)
        self.distribution = torch.distributions.Normal

    def forward(self, x):
        x = F.relu(self.fc(x))
        mu = 2 * torch.tanh(self.mu(x))
        sigma = F.softplus(self.sigma(x)) + 1e-5
        dist = self.distribution(mu.view(1, ).data, sigma.view(1, ).data)
        value = self.value(x)
        return dist, value


class PolicyNetwork():
    def __init__(self, n_state, n_action, n_hidden, lr=0.001):
        self.model = ActorCriticModel(n_state, n_action, n_hidden)
        self.optimizer = torch.optim.Adam(self.model.parameters(), lr)


    def update(self, returns, log_probs, state_values):
        """
        Update the weights of the Actor Critic network given the training samples
        @param returns: return (cumulative rewards) for each step in an episode
        @param log_probs: log probability for each step
        @param state_values: state-value for each step
        """
        loss = 0
        for log_prob, value, Gt in zip(log_probs, state_values, returns):
            advantage = Gt - value.item()
            policy_loss = - log_prob * advantage

            value_loss = F.smooth_l1_loss(value, Gt)

            loss += policy_loss + value_loss

        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()


    def predict(self, s):
        """
        Compute the output using the continuous Actor Critic model
        @param s: input state
        @return: Gaussian distribution, state_value
        """
        self.model.training = False
        return self.model(torch.Tensor(s))

    def get_action(self, s):
        """
        Estimate the policy and sample an action, compute its log probability
        @param s: input state
        @return: the selected action, log probability, predicted state-value
        """
        dist, state_value = self.predict(s)
        action = dist.sample().numpy()
        log_prob = dist.log_prob(action[0])
        return action, log_prob, state_value




def actor_critic(env, estimator, n_episode, gamma=1.0):
    """
    continuous Actor Critic algorithm
    @param env: Gym environment
    @param estimator: policy network
    @param n_episode: number of episodes
    @param gamma: the discount factor
    """
    for episode in range(n_episode):
        log_probs = []
        rewards = []
        state_values = []
        state = env.reset()

        while True:
            state = scale_state(state)
            action, log_prob, state_value = estimator.get_action(state)
            action = action.clip(env.action_space.low[0],
                                 env.action_space.high[0])
            next_state, reward, is_done, _ = env.step(action)

            total_reward_episode[episode] += reward
            log_probs.append(log_prob)
            state_values.append(state_value)
            rewards.append(reward)

            if is_done:
                returns = []

                Gt = 0
                pw = 0

                for reward in rewards[::-1]:

                    Gt += gamma ** pw * reward
                    pw += 1
                    returns.append(Gt)

                returns = returns[::-1]
                returns = torch.tensor(returns)
                returns = (returns - returns.mean()) / (returns.std() + 1e-9)


                estimator.update(returns, log_probs, state_values)
                print('Episode: {}, total reward: {}'.format(episode, total_reward_episode[episode]))

                break

            state = next_state


import sklearn.preprocessing
import numpy as np

state_space_samples = np.array(
    [env.observation_space.sample() for x in range(10000)])
scaler = sklearn.preprocessing.StandardScaler()
scaler.fit(state_space_samples)


def scale_state(state):
    scaled = scaler.transform([state])
    return scaled[0]


n_state = env.observation_space.shape[0]
n_action = 1
n_hidden = 128
lr = 0.0003
policy_net = PolicyNetwork(n_state, n_action, n_hidden, lr)


n_episode = 200
gamma = 0.9
total_reward_episode = [0] * n_episode

actor_critic(env, policy_net, n_episode, gamma)

Size([]) is a valid size: 0-dimensional, i.e. a scalar. Which line throws the warning? - nnnmmm
It does not point at a line of my code, but at the following: /home/user/.local/lib/python3.6/site-packages/ipykernel_launcher.py:30, and it is the same warning for every episode. - N.W.
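One way to find the exact line of the training code that triggers the warning is to promote UserWarning to an error, so the traceback goes past ipykernel_launcher.py and stops at the offending call. A minimal sketch, assuming the script above has been run up to (but not including) the final call to actor_critic:

import warnings

# Turn UserWarning into an exception so the traceback shows the exact line
# in the training code that produces the size-mismatch message.
warnings.simplefilter("error", UserWarning)

actor_critic(env, policy_net, n_episode, gamma)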
1 Answer


size([]) is valid, but it represents a single value rather than an array, whereas size([1]) is a one-dimensional array containing just one item. It is like comparing 5 with [5]. One way to solve this is

            returns = returns[::-1]
            returns_amount = len(returns)
            returns = torch.tensor(returns)
            returns = (returns - returns.mean()) / (returns.std() + 1e-9)
            returns.resize_(returns_amount, 1)

This makes returns a two-dimensional array, so that each Gt taken from it is a one-dimensional array instead of a float.
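The shapes in the warning come from value_loss = F.smooth_l1_loss(value, Gt) in PolicyNetwork.update(): value has shape [1] (the output of nn.Linear(n_hidden, 1) applied to a 1-D state), while each Gt taken from the 1-D returns tensor is 0-dimensional. If you prefer not to reshape returns, an alternative is to match the shapes at the loss call itself; a minimal sketch of such an update() variant:

import torch.nn.functional as F

def update(self, returns, log_probs, state_values):
    """Variant of PolicyNetwork.update that matches the target shape to the
    input shape at the loss call instead of reshaping `returns` beforehand."""
    loss = 0
    for log_prob, value, Gt in zip(log_probs, state_values, returns):
        advantage = Gt - value.item()
        policy_loss = -log_prob * advantage
        # value has shape [1]; Gt is 0-dimensional, so give it shape [1] too
        value_loss = F.smooth_l1_loss(value, Gt.unsqueeze(0))
        loss += policy_loss + value_loss
    self.optimizer.zero_grad()
    loss.backward()
    self.optimizer.step()

Either way, the input and target passed to smooth_l1_loss end up with the same shape, which is what the warning is about.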
