When I run the following code, memory usage keeps increasing with every iteration:
```python
import torch
from torch.autograd import Variable

while True:
    a = Variable(torch.FloatTensor(32, 16).cuda())
    r = Variable(torch.FloatTensor(1).cuda())
    c = r.expand(a.size())  # memory grows on each pass through the loop
```
If I use `r.expand([32, 16])` instead, memory usage does not increase.
Am I using `expand()` incorrectly? Has anyone else run into this issue?
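For reference, a minimal CPU-only sketch (my own addition, not from the original report) showing that the two calls are meant to be equivalent; it uses modern PyTorch, where `Variable` is deprecated and plain tensors carry autograd state:

```python
import torch

a = torch.empty(32, 16)
r = torch.empty(1)

# Both forms of expand() should return views of r with the same shape
# and identical contents.
c1 = r.expand(a.size())    # expand with a torch.Size
c2 = r.expand([32, 16])    # expand with an explicit size list

print(c1.shape == c2.shape)  # True
print(torch.equal(c1, c2))   # True
```

Since the results are identical, any difference in memory behavior between the two calls on CUDA would be unexpected.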