Hi,
I am observing a memory leak while transferring tensors from GPU to CPU in PyTorch. The following code summarizes the issue; here `data_loader` is feeding images. The leak occurs with opt_level `'O1'`; with opt_level `'O0'` there is no leak. I started seeing this issue after updating apex to the current version.
```python
model, optimizer = amp.initialize(model, optimizer, opt_level=opt_level)
model.eval()
for epoch in range(10):
    for i, input in enumerate(data_loader):
        # compute output
        output = model(input)
        output = output.cpu().numpy()
```
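To confirm the growth is real (and measure its rate), the loop above can be instrumented to track per-iteration host memory. This is a minimal, torch/apex-free sketch using only the Python standard library; `leaky_step` and `clean_step` are hypothetical stand-ins for the `output = model(input); output.cpu().numpy()` step, not code from this report:

```python
import gc
import resource

def peak_rss() -> int:
    # Peak resident set size of this process (KB on Linux, bytes on macOS).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

def measure_growth(step, iterations: int = 5) -> int:
    """Run `step` repeatedly and return the peak-RSS growth."""
    gc.collect()
    before = peak_rss()
    for _ in range(iterations):
        step()
    gc.collect()
    return peak_rss() - before

retained = []  # simulates buffers that are never released

def leaky_step():
    # Holds on to 10 MB per call, so peak RSS must keep climbing.
    retained.append(bytearray(10 * 1024 * 1024))

def clean_step():
    # Allocates 10 MB but drops the reference immediately.
    _ = bytearray(10 * 1024 * 1024)

leaky_growth = measure_growth(leaky_step)
clean_growth = measure_growth(clean_step)
print("leaky growth:", leaky_growth)
print("clean growth:", clean_growth)
```

Wrapping each inference iteration the same way (measure before, run N iterations, measure after) shows whether peak RSS grows linearly with iteration count under `'O1'` but stays flat under `'O0'`.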
I am using:
- apex 0.1, built from https://github.com/NVIDIA/apex.git, master branch as of 2019-11-25
- PyTorch 1.3.0
- Ubuntu 18.04
- CUDA 10.1
I tried casting `output` to `float()` on the GPU before transferring it to the CPU, and also converting the numpy array to float16. Nothing works.