Is your feature request related to a problem? Please describe.
As described and implemented in #3949, PyTorch Docker images after v21.10 seem to have a much larger memory footprint when launching the dataloader with `num_workers>1` in the integration test:
```python
def run_training_test(root_dir, train_x, train_y, val_x, val_y, device="cuda:0", num_workers=10):
```
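For context, a minimal sketch of the pattern involved (this is not the integration test itself; the tiny dataset, shapes, and the helper name `iterate_loader` are invented for illustration) — each worker spawned by `num_workers>0` is a separate process with its own memory overhead, which is where the footprint difference between base images shows up:

```python
import resource

import torch
from torch.utils.data import DataLoader, TensorDataset

def iterate_loader(num_workers: int) -> int:
    """Iterate a tiny synthetic dataset and return the number of batches seen."""
    ds = TensorDataset(torch.zeros(64, 3, 8, 8), torch.zeros(64, dtype=torch.long))
    loader = DataLoader(ds, batch_size=8, num_workers=num_workers)
    # With num_workers > 0, each batch is produced by a forked/spawned worker
    # process; the per-worker resident memory is what grew across image versions.
    return sum(1 for _ in loader)

batches = iterate_loader(num_workers=0)  # num_workers=0 keeps loading in-process
peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss  # peak RSS so far (KiB on Linux)
print(batches)
```

Comparing `ru_maxrss` between runs with `num_workers=0` and `num_workers>1` (or between container base images) gives a rough view of the extra per-worker memory.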
The base image was downgraded as a temporary workaround to avoid this issue on the 16GB V100 CI.