Printing large tensors is slow in master.

```python
>>> x = torch.randn(1000, 1000, 1000)
>>> %timeit repr(x)
1 loop, best of 3: 21.8 s per loop
```

This is important because we often deal with very large tensors. Note that printing large tensors is much faster in NumPy (10,000x in this case):

```python
>>> %timeit repr(x.numpy())
100 loops, best of 3: 2.18 ms per loop
```
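
For anyone reproducing this outside IPython, a minimal standalone sketch using the standard `timeit` module is below; the tensor sizes and timings are the ones from the report above, and absolute numbers will of course vary by machine and build:

```python
# Standalone reproduction sketch (no IPython %timeit needed).
# Note: the tensor holds 10^9 float32 values (~4 GB of RAM).
import timeit

import torch

x = torch.randn(1000, 1000, 1000)

# Time repr() of the PyTorch tensor once (it is slow), and of the
# NumPy view several times for a more stable average.
torch_time = timeit.timeit(lambda: repr(x), number=1)
numpy_time = timeit.timeit(lambda: repr(x.numpy()), number=10) / 10

print(f"repr(torch.Tensor): {torch_time:.3f} s")
print(f"repr(np.ndarray):   {numpy_time * 1e3:.3f} ms")
```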