Currently, empty Tensors may not have a Storage:

``` python
>>> x = torch.Tensor()
>>> x.storage() is None
True
```

I propose that we make every Tensor, even empty ones, have a Storage. I don't think this will require any changes to TH/THC (just pytorch). This has some advantages:

- We often have code that assumes a valid storage. These bugs are hard to catch because they only occur with empty tensors.
- Every CUDA tensor will have an associated device. Currently, empty CUDA tensors may not be on any device.
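To sketch the first point: today, code that touches a tensor's storage has to guard against `None`, and the guard is only exercised by empty tensors (the helper below is just an illustration, not an existing API; under this proposal the `None` branch would become dead code):

``` python
import torch

def storage_size(t):
    """Number of elements in a tensor's storage.

    Illustrative helper: the None check is only needed because empty
    tensors may currently lack a storage, which is exactly the kind of
    rarely-exercised branch where bugs hide.
    """
    s = t.storage()
    if s is None:  # only possible for empty tensors today
        return 0
    return s.size()

print(storage_size(torch.Tensor()))    # empty tensor
print(storage_size(torch.ones(2, 3)))  # storage holds 6 elements
```

If every tensor carried a (possibly zero-length) storage, callers could unconditionally use `t.storage().size()` and the guard would disappear.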