MPS-Ready, ARM64 Docker Image #81224
Closed
Labels
feature: A request for a proper, new feature.
module: arm: Related to ARM-architecture builds of PyTorch. Includes Apple M1.
module: docker
module: macos: macOS-related issues.
module: mps: Related to the Apple Metal Performance Shaders framework.
triaged: This issue has been looked at by a team member, and triaged and prioritized into an appropriate module.
🚀 The feature, motivation and pitch
It would be very helpful to release an ARM64 PyTorch Docker image so that PyTorch models can run natively in Docker on M1 chips using the MPS backend.
I test and debug PyTorch-based prototypes locally during development. This was entirely manageable on previous Intel-based Macs, but now that my PyTorch Docker images run in AMD64 emulation mode, they are so slow that I can no longer debug and test my prototypes locally.
I have tried the MPS backend in a virtual environment outside of Docker, and it is impressively fast: fast enough to plausibly run prototypes locally for demo purposes and to massively speed up testing without throttling model inference, which is a real game changer. However, no official PyTorch image supports this yet. It would be really great if one did.
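For reference, a minimal sketch of the MPS selection I use in the virtual-environment setup; `torch.backends.mps.is_available()` and the `"mps"` device are the documented PyTorch API (available since 1.12), while the model and input here are placeholders for illustration only:

```python
import torch

# Prefer the MPS (Metal Performance Shaders) backend on Apple Silicon,
# falling back to CPU where it is unavailable, e.g. in an emulated
# AMD64 container today.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Placeholder model and input, just to exercise the device.
model = torch.nn.Linear(8, 2).to(device)
x = torch.randn(4, 8, device=device)

with torch.no_grad():
    out = model(x)
print(f"Ran inference on: {out.device}")
```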
Alternatives
No response
Additional context
No response
cc @malfet @albanD @kulinseth