I am doing the following operation,
energy.masked_fill(mask == 0, float("-1e20"))
and here is my Python traceback:
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "seq_sum.py", line 418, in forward
enc_src = self.encoder(src, src_mask)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "seq_sum.py", line 71, in forward
src = layer(src, src_mask)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "seq_sum.py", line 110, in forward
_src, _ = self.self_attention(src, src, src, src_mask)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "seq_sum.py", line 191, in forward
energy = energy.masked_fill(mask == 0, float("-1e20"))
RuntimeError: The size of tensor a (1024) must match the size of tensor b (512) at non-singleton dimension 3
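From what I understand, masked_fill broadcasts the mask against the tensor it fills, so every non-singleton dimension of the two must match. A minimal standalone snippet (the shapes are picked just for illustration, not taken from my model) reproduces the same error:

import torch

energy = torch.randn(2, 8, 1024, 1024)  # [batch, heads, query len, key len]
mask = torch.ones(2, 1, 1, 512)         # [batch, 1, 1, key len], built for 512 tokens

# the size-1 dims broadcast, but dim 3 (1024 vs 512) cannot:
# RuntimeError: The size of tensor a (1024) must match the size of
# tensor b (512) at non-singleton dimension 3
energy.masked_fill(mask == 0, float("-1e20"))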
Here is my attention layer code:
Q = self.fc_q(query)
K = self.fc_k(key)
V = self.fc_v(value)
#Q = [batch size, query len, hid dim]
#K = [batch size, key len, hid dim]
#V = [batch size, value len, hid dim]
# Q = Q.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
# K = K.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
# V = V.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
Q = Q.view(batch_size, -1, self.n_heads, self.head_dim).view(-1, 1024)
K = K.view(batch_size, -1, self.n_heads, self.head_dim).view(-1, 1024)
V = V.view(batch_size, -1, self.n_heads, self.head_dim).view(-1, 1024)
#with hid dim = n heads * head dim = 1024, the second view flattens
#Q, K, V to [batch size * seq len, 1024]
energy = torch.matmul(Q, K.transpose(1, 0)) / self.scale
#energy = [batch size * query len, batch size * key len]
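For comparison, here is the commented-out 4-D version as a self-contained snippet (hid dim, n heads, and the lengths are placeholder values); with these shapes, energy stays [batch, heads, query len, key len] and a [batch, 1, 1, key len] mask broadcasts cleanly:

import torch

batch_size, seq_len, hid_dim, n_heads = 2, 512, 1024, 8
head_dim = hid_dim // n_heads

Q = torch.randn(batch_size, seq_len, hid_dim)
K = torch.randn(batch_size, seq_len, hid_dim)

#[batch, seq len, hid dim] -> [batch, heads, seq len, head dim]
Q = Q.view(batch_size, -1, n_heads, head_dim).permute(0, 2, 1, 3)
K = K.view(batch_size, -1, n_heads, head_dim).permute(0, 2, 1, 3)

scale = head_dim ** 0.5
#energy = [batch, heads, query len, key len]
energy = torch.matmul(Q, K.permute(0, 1, 3, 2)) / scale

#mask = [batch, 1, 1, key len] broadcasts over heads and query len
mask = torch.ones(batch_size, 1, 1, seq_len, dtype=torch.bool)
energy = energy.masked_fill(mask == 0, float("-1e20"))
print(energy.shape)  # torch.Size([2, 8, 512, 512])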
I am following this GitHub code for my seq-to-seq task: seq2seq pytorch. The actual test code is here: code to test seq 1024 to 1024 output. A second attempted example is here; in it I have commented out pos_embedding because it hit a CUDA error on large indices (RuntimeError: cuda runtime error (59)).
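Regarding that pos_embedding error: from what I can tell, cuda runtime error (59) is a device-side assert, and with nn.Embedding it usually means an index past the end of the table, e.g. position indices up to 1023 against a table built for a shorter max length. A small snippet showing that failure mode (max_length here is just an assumed value, not the one in my model):

import torch
import torch.nn as nn

max_length, hid_dim = 100, 1024   # table sized for 100 positions (assumed)
pos_embedding = nn.Embedding(max_length, hid_dim)

batch_size, src_len = 2, 1024     # but the inputs are 1024 tokens long
pos = torch.arange(0, src_len).unsqueeze(0).repeat(batch_size, 1)

# indices 100..1023 exceed the table; on CPU this raises
# "index out of range", on CUDA it surfaces as a device-side
# assert (cuda runtime error (59))
pos_embedding(pos)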