Status: Closed
Labels: type: bug (Something isn't working)
Description
We recently added an autograd-based function for computing the adjoint of the Tomography physics, but it can potentially leak memory:
```python
from deepinv.physics import Tomography
import torch

device = torch.device("cuda")
physics = Tomography(angles=15, img_width=256, circle=True, adjoint_via_backprop=True, device=device)
x = torch.randn(4, 1, 256, 256, device=device)
torch.cuda.empty_cache()
torch.cuda.init()
torch.cuda.reset_peak_memory_stats(device.index)
for i in range(100):
    y = physics.A(x)
    for _ in range(100):
        aty = physics.A_adjoint(y)
    if (i + 1) % 10 == 0:
        torch.cuda.synchronize()
        peak_memory_mb = torch.cuda.max_memory_allocated(device) / 1024 ** 2
        print(f"Peak memory usage at iteration {i + 1}: {peak_memory_mb:.2f} MB")
```

gives
```
Peak memory usage at iteration 10: 920.64 MB
Peak memory usage at iteration 20: 1017.57 MB
Peak memory usage at iteration 30: 1082.19 MB
Peak memory usage at iteration 40: 1340.65 MB
Peak memory usage at iteration 50: 1566.81 MB
Peak memory usage at iteration 60: 1792.97 MB
Peak memory usage at iteration 70: 2051.44 MB
Peak memory usage at iteration 80: 2277.60 MB
Peak memory usage at iteration 90: 2536.07 MB
Peak memory usage at iteration 100: 2762.23 MB
```
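For context, a minimal sketch of the adjoint-via-backprop technique the issue refers to (not deepinv's actual implementation, whose code may differ): for a linear operator A, the adjoint can be obtained by differentiating the scalar <A(x), y> with respect to x, since its gradient is A^T y. The function name, shape assumption, and the zero input point are illustrative assumptions. If results of such calls are stored without detaching, or graphs are created via `create_graph=True`, the autograd graphs referencing `y` can accumulate across iterations, which is the kind of growth the measurements above suggest.

```python
import torch

def adjoint_via_backprop(A, y):
    # Sketch only: assumes A maps tensors of y's shape to y's shape.
    # For linear A, d/dx <A(x), y> = A^T y, so differentiating the inner
    # product at any point (here x = 0) recovers the adjoint applied to y.
    x = torch.zeros_like(y, requires_grad=True)
    inner = (A(x) * y).sum()
    (grad,) = torch.autograd.grad(inner, x)
    # grad does not require grad itself (create_graph defaults to False),
    # so returning it does not keep the backward graph alive.
    return grad
```

As a sanity check, the adjoint of scaling by 2 is again scaling by 2, so `adjoint_via_backprop(lambda t: 2 * t, torch.ones(3))` should equal `2 * torch.ones(3)`.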