Labels
- module: internals (Related to internal abstractions in c10 and ATen)
- module: memory usage (PyTorch is using more memory than it should, or it is leaking memory)
- triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Description
🐛 Bug
A memory-usage comparison of the same data structure implemented with two different backends (PyTorch tensors vs. NumPy arrays) shows over 4x higher usage with PyTorch. The data structure is a list containing 5 million small tensors/arrays.
To Reproduce
Use this Gist: https://gist.github.com/iamhatesz/3ef34254febe482aa48e3e489f89b07b
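The Gist above is the canonical repro. As a rough illustration of why per-object overhead matters here, the sketch below compares the raw data payload of one small element against what 5 million of them should cost if only the payload counted (assumptions: float32 elements of shape `(4,)`; this mirrors the spirit of the comparison, not the Gist's exact code):

```python
# Hedged sketch, NOT the Gist: assumes each list element is a tiny
# float32 tensor/array of shape (4,).
import numpy as np
import torch

t = torch.zeros(4, dtype=torch.float32)
a = np.zeros(4, dtype=np.float32)

# Raw payload carried by one element: 4 floats * 4 bytes = 16 bytes.
payload = t.element_size() * t.nelement()
print(payload)            # 16
print(a.nbytes)           # 16, identical payload on the NumPy side

# With 5,000,000 elements the payload alone is only ~76 MiB in total;
# anything observed beyond that is per-object overhead (the Python
# wrapper, the backend's tensor/array structs, allocator rounding,
# and the list's pointer slots).
n = 5_000_000
print(n * payload / 2**20)  # ~76.3 MiB of actual data
```

Since the payload is identical for both backends, the 4x gap must come entirely from per-object overhead, which is why a list of millions of tiny tensors is the worst case for it.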
Expected behavior
The memory usage for both data structures should be similar.
Environment
- PyTorch Version (e.g., 1.0): 1.0.1
- OS (e.g., Linux): Windows 10
- How you installed PyTorch (conda, pip, source): pip
- Build command you used (if compiling from source): -
- Python version: 3.7.1
- CUDA/cuDNN version: -
- GPU models and configuration: -
- Any other relevant information: -
Additional context
None
