How does the "number of workers" parameter in PyTorch dataloader actually work?

  1. When num_workers > 0, only the worker processes retrieve data; the main process does not. So with num_workers=2 you have at most 2 workers simultaneously putting data into RAM, not 3.
  2. A CPU can usually run on the order of 100 processes without trouble, and these worker processes aren't special in any way, so having more workers than CPU cores is fine. But is it efficient? That depends on how busy your CPU cores are with other tasks, the speed of your CPU, the speed of your disk, and so on. In short, it's complicated, so setting num_workers to the number of cores is a good rule of thumb, nothing more.
  3. Nope. Remember that the DataLoader doesn't just return whatever happens to be available in RAM; it uses a batch_sampler to decide which batch to return next. Each batch is assigned to a worker, and the main process waits until the desired batch has been retrieved by its assigned worker (see the sketch after this list).

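A minimal sketch of point 3, using a toy dataset (RangeDataset is invented here purely for illustration): even with two workers fetching in parallel, the batches come back in the order decided by the sampler, because the main process blocks until the worker assigned to the next batch has delivered it.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class RangeDataset(Dataset):  # toy dataset invented for this illustration
    def __len__(self):
        return 16

    def __getitem__(self, idx):
        return torch.tensor(idx)

if __name__ == "__main__":
    # Two worker processes fetch batches in the background, but the order of
    # the returned batches is still fixed by the (sequential) sampler: the
    # main process waits for the worker assigned to the *next* batch.
    loader = DataLoader(RangeDataset(), batch_size=4, num_workers=2, shuffle=False)
    for batch in loader:
        print(batch)  # tensor([0, 1, 2, 3]), tensor([4, 5, 6, 7]), ...
```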
Lastly, to clarify: it isn't the DataLoader's job to send anything directly to the GPU; you explicitly call cuda() for that.

EDIT: Don't call cuda() inside the Dataset's __getitem__() method; see @psarka's comment for the reasoning.
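To illustrate the recommended pattern, here is a hedged sketch (the dataset name and shapes are placeholders): __getitem__ returns plain CPU tensors so the workers can prefetch them, and each batch is moved to the GPU explicitly in the training loop.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class CpuDataset(Dataset):  # placeholder dataset; __getitem__ returns CPU tensors only
    def __init__(self, data, targets):
        self.data, self.targets = data, targets

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        # No .cuda() here: this code runs in a worker process when num_workers > 0.
        return self.data[idx], self.targets[idx]

if __name__ == "__main__":
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    dataset = CpuDataset(torch.randn(64, 3), torch.randint(0, 2, (64,)))
    # pin_memory=True is optional; it can speed up the host-to-device copy below.
    loader = DataLoader(dataset, batch_size=8, num_workers=2, pin_memory=True)

    for inputs, targets in loader:
        # Move each batch to the GPU explicitly, in the main process.
        inputs, targets = inputs.to(device), targets.to(device)
        # ... forward pass, loss, backward, optimizer step ...
```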