This is a common PyTorch error, but I hit it under unusual circumstances: when reloading a model from a checkpoint, I get a `CUDA: Out of Memory` error even though I have not yet moved the model to the GPU.
model.load_state_dict(torch.load(model_file_path))
optimizer.load_state_dict(torch.load(optimizer_file_path))
# Error happens here ^, before I send the model to the device.
model = model.to(device_id)
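The likely cause: `torch.load` restores each tensor onto the device it was saved from, so a checkpoint written on a GPU allocates CUDA memory at load time, before any `.to(device)` call. Passing `map_location="cpu"` forces the tensors onto the CPU first. A minimal self-contained sketch (the `nn.Linear` model and file names are placeholders, not the original code):

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the real model and optimizer.
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Save a checkpoint (in the failing scenario this was saved on a GPU machine,
# so its tensors remember a CUDA device).
torch.save(model.state_dict(), "model.pt")
torch.save(optimizer.state_dict(), "optimizer.pt")

# map_location="cpu" makes torch.load deserialize onto the CPU,
# so no CUDA memory is allocated during loading.
state_dict = torch.load("model.pt", map_location="cpu")
model.load_state_dict(state_dict)
optimizer.load_state_dict(torch.load("optimizer.pt", map_location="cpu"))

# Only now move the model to the target device.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
```

Note that `load_state_dict` mutates the module in place and returns a key-matching report, not the model, so its return value should not be assigned back to `model`.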