Casting to host memory does not work: calling .float() on a CUDA tensor returns another torch.cuda.FloatTensor instead of a host torch.FloatTensor.
>>> myTensor = torch.cuda.FloatTensor([1, 2, 3, 4])
>>> print(myTensor)
1
2
3
4
[torch.cuda.FloatTensor of size 4 (GPU 0)]
>>> print(myTensor.float())
1
2
3
4
[torch.cuda.FloatTensor of size 4 (GPU 0)]
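For comparison, .cpu() does seem to be the call that moves the data to host memory. The snippet below is a minimal sketch of what I would expect .float() to do; the exact printed output is reproduced from memory and may differ slightly on other builds.
>>> print(myTensor.cpu())  # returns a host torch.FloatTensor
1
2
3
4
[torch.FloatTensor of size 4]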
Casting to device memory works fine: .cuda() on a host tensor returns a torch.cuda.FloatTensor as expected.
>>> myTensor = torch.FloatTensor([1, 2, 3, 4])
>>> print(myTensor)
1
2
3
4
[torch.FloatTensor of size 4]
>>> print(myTensor.cuda())
1
2
3
4
[torch.cuda.FloatTensor of size 4 (GPU 0)]
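A quick round-trip check (again a sketch, assuming .cpu() is the intended host-memory cast; not verified on this exact build):
>>> type(myTensor.cuda())        # device tensor
<class 'torch.cuda.FloatTensor'>
>>> type(myTensor.cuda().cpu())  # back on the host
<class 'torch.FloatTensor'>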