
Massive memory overhead over NumPy #19408

@iamhatesz

Description

🐛 Bug

Comparing the memory usage of the same data structure implemented with two different backends (PyTorch tensors vs. NumPy arrays) shows over 4x higher usage with PyTorch. The data structure is a list containing 5 million small tensors/arrays.

To Reproduce

Use this Gist: https://gist.github.com/iamhatesz/3ef34254febe482aa48e3e489f89b07b

Expected behavior

The memory usage for both data structures should be similar.

Environment

  • PyTorch version: 1.0.1
  • OS: Windows 10
  • How you installed PyTorch: pip
  • Build command (if compiling from source): -
  • Python version: 3.7.1
  • CUDA/cuDNN version: -
  • GPU models and configuration: -
  • Any other relevant information: -

Additional context

None.

Metadata

Assignees

No one assigned

Labels

  • module: internals (related to internal abstractions in c10 and ATen)
  • module: memory usage (PyTorch is using more memory than it should, or it is leaking memory)
  • triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
