PyTorch DataLoader: bad file descriptor and EOF errors with num_workers > 0


Problem description

I am seeing strange behavior while training a neural network with a PyTorch dataloader built on a custom dataset. The dataloader is set up with num_workers=4 and pin_memory=False.

Most of the time, training runs without problems. Sometimes, at a random moment, it stops with one of the following errors:

  1. OSError: [Errno 9] Bad file descriptor
  2. EOFError

The errors seem to occur during the creation of a socket while accessing dataloader elements. When I set the number of workers to 0, the errors disappear, but I need multiprocessing to speed up training. What could be the source of the errors? Thanks!

Python and library versions

Python 3.9.12, PyTorch 1.11.0+cu102
Edit: the error only occurs on the cluster

Output of the error file

Traceback (most recent call last):
  File "/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/resource_sharer.py", line 145, in _serve
Epoch 17:  52%|█████▏    | 253/486 [01:00<00:55,  4.18it/s, loss=1.73]

Traceback (most recent call last):
  File "/my_directory/bench/run_experiments.py", line 251, in <module>
    send(conn, destination_pid)
  File "/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/resource_sharer.py", line 50, in send
    reduction.send_handle(conn, new_fd, pid)
  File "/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/reduction.py", line 183, in send_handle
    with socket.fromfd(conn.fileno(), socket.AF_UNIX, socket.SOCK_STREAM) as s:
  File "/my_directory/.conda/envs/geoseg/lib/python3.9/socket.py", line 545, in fromfd
    return socket(family, type, proto, nfd)
  File "/my_directory/.conda/envs/geoseg/lib/python3.9/socket.py", line 232, in __init__
    _socket.socket.__init__(self, family, type, proto, fileno)
OSError: [Errno 9] Bad file descriptor

    main(args)
  File "/my_directory/bench/run_experiments.py", line 183, in main
    run_experiments(args, save_path)
  File "/my_directory/bench/run_experiments.py", line 70, in run_experiments
    ) = run_algorithm(algorithm_params[j], mp[j], ss, dataset)
  File "/my_directorybench/algorithms.py", line 38, in run_algorithm
    data = es(mp,search_space,  dataset, **ps)
  File "/my_directorybench/algorithms.py", line 151, in es
   data = ss.generate_random_dataset(mp,
  File "/my_directorybench/architectures.py", line 241, in generate_random_dataset
    arch_dict = self.query_arch(
  File "/my_directory/bench/architectures.py", line 71, in query_arch
    train_losses, val_losses, model = meta_net.get_val_loss(
  File "/my_directory/bench/meta_neural_net.py", line 50, in get_val_loss
    return self.training(
  File "/my_directorybench/meta_neural_net.py", line 155, in training
    train_loss = self.train_step(model, device, train_loader, epoch)
  File "/my_directory/bench/meta_neural_net.py", line 179, in train_step
    for batch_idx, mini_batch in enumerate(pbar):
  File "/my_directory/.conda/envs/geoseg/lib/python3.9/site-packages/tqdm/std.py", line 1195, in __iter__
    for obj in iterable:
  File "/my_directory/.local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 530, in __next__
    data = self._next_data()
  File "/my_directory/.local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1207, in _next_data
    idx, data = self._get_data()
  File "/my_directory/.local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1173, in _get_data
    success, data = self._try_get_data()
  File "/my_directory/.local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1011, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
  File "/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/queues.py", line 122, in get
    return _ForkingPickler.loads(res)
  File "/my_directory/.local/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 295, in rebuild_storage_fd
    fd = df.detach()
  File "/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/resource_sharer.py", line 58, in detach
    return reduction.recv_handle(conn)
  File "/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/reduction.py", line 189, in recv_handle
    return recvfds(s, 1)[0]
  File "/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/reduction.py", line 159, in recvfds
    raise EOFError
EOFError

Edit: how the data is accessed

    from PIL import Image
    from torch.utils.data import DataLoader

    # extract of the dataset code

    class Dataset:
        def __init__(self, image_files, mask_files):
            self.image_files = image_files
            self.mask_files = mask_files

        def __len__(self):
            # needed by the random sampler when shuffle=True
            return len(self.image_files)

        def __getitem__(self, idx):
            img = Image.open(self.image_files[idx]).convert('RGB')
            mask = Image.open(self.mask_files[idx]).convert('L')
            return img, mask

    # extract of the train loader code

    train_loader = DataLoader(
                    dataset=train_dataset,
                    batch_size=4,
                    num_workers=4,
                    pin_memory=False,
                    shuffle=True,
                    drop_last=True,
                    persistent_workers=False,
                )
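A debugging aid that is not part of the original post: a worker_init_fn can log which OS process backs each DataLoader worker, which helps correlate a crash with a specific worker. A minimal sketch, reusing the train_dataset from above:

    import os
    from torch.utils.data import DataLoader, get_worker_info

    # Hypothetical debugging helper: print the process id behind each worker
    def log_worker(worker_id):
        info = get_worker_info()
        print(f"worker {worker_id}/{info.num_workers} started in pid {os.getpid()}")

    train_loader = DataLoader(
                    dataset=train_dataset,
                    batch_size=4,
                    num_workers=4,
                    worker_init_fn=log_worker,  # called once in each freshly started worker
                    shuffle=True,
                    drop_last=True,
                )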

Which OS are you on? I've run into a similar problem on a cluster running the Lustre file system, but I can't reproduce the error locally on my Mac. - Alex Meredith
Like you, I can't reproduce the error on my Ubuntu laptop. It occurs on two different clusters, running CentOS Linux and Red Hat Linux respectively. I don't know whether they use the Lustre file system. - rabbit-of-caerbannog
Interesting. My custom dataset class looks very similar to yours. The cluster I'm using also runs CentOS Linux, and I think the distributed file system (Lustre) may be the problem. I've given up on multiprocessing the training, but I think locking the files in __getitem__ (e.g. https://github.com/dmfrey/FileLock) might work, since I've found some documentation saying Lustre can't access a file from several processes at once. Let me know if you make any progress :) - Alex Meredith
I just tried the file-lock strategy in the __getitem__ function, but the error hasn't gone away. - rabbit-of-caerbannog
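For reference, the locking attempt described in the comments might look like the following sketch (using the filelock package as a stand-in for the linked recipe; as noted above, it did not resolve the error):

    from PIL import Image
    from filelock import FileLock  # pip install filelock

    class LockedDataset:
        def __init__(self, image_files, mask_files):
            self.image_files = image_files
            self.mask_files = mask_files

        def __len__(self):
            return len(self.image_files)

        def __getitem__(self, idx):
            # Serialize access so only one worker touches a given file at a time
            with FileLock(str(self.image_files[idx]) + ".lock"):
                img = Image.open(self.image_files[idx]).convert('RGB')
            with FileLock(str(self.mask_files[idx]) + ".lock"):
                mask = Image.open(self.mask_files[idx]).convert('L')
            return img, mask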
2 Answers


I finally found a solution. Adding this configuration to the dataset script did the trick:

    import torch.multiprocessing
    torch.multiprocessing.set_sharing_strategy('file_system')

By default, the sharing strategy is set to 'file_descriptor'.
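You can check which strategies the platform offers and which one is active with torch.multiprocessing's query functions, for example:

    import torch.multiprocessing

    print(torch.multiprocessing.get_all_sharing_strategies())  # e.g. {'file_descriptor', 'file_system'}
    print(torch.multiprocessing.get_sharing_strategy())        # 'file_descriptor' by default on Linux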

I had already tried some of the solutions explained in these questions:

  • this question (increasing shared memory, raising the maximum number of open file descriptors, calling torch.cuda.empty_cache() at the end of each epoch, ...); a sketch of the file-descriptor-limit attempt follows this list
  • this other question, which turned out to solve the problem
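For completeness, raising the open-file-descriptor limit from within Python (one of the attempts above, which did not fix the error here) might look like this sketch; note that on a cluster the hard limit is usually set by the administrators:

    import resource

    # Raise the soft limit on open file descriptors up to the current hard limit
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))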

As @AlexMeredith suggested, the error may be related to the distributed file system (Lustre) that some clusters use. The error may also come from distributed shared memory.


This example shows only the dataset implementation; there is no snippet showing what happens to the batches.
In my case I was storing batches in an index-like array object, a situation that fortunately is already described here. The DataLoader could not close its worker processes for that reason. Implementing something like the following helped me solve the problem.
    import copy

    index = []
    for batch in data_loader:
        # Deep-copy the batch so the copy no longer references the worker's
        # shared-memory storage, then drop the original reference
        batch_cp = copy.deepcopy(batch)
        del batch
        index.append(batch_cp["index"])
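For contrast, the problematic pattern the snippet above replaces (a sketch): appending the tensor straight from the batch keeps a reference to the worker's shared-memory storage alive, so the DataLoader cannot release the associated file descriptors.

    index = []
    for batch in data_loader:
        index.append(batch["index"])  # still backed by the worker's shared memory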

I also ran into other errors connected to this one, such as:

  • received 0 items of ancdata
  • bad message length
