
Conversation

malfet (Contributor) commented Nov 5, 2024

Stack from ghstack (oldest at bottom):

  • Compute attention mask before the switch
  • Introduce `query_device_type` variable
  • Refactor some of the MPS-math checks into easily readable boolean names (see the sketch after this list)
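To make the last bullet concrete, here is a minimal Python sketch of the "readable boolean names" idea. The real change is in the C++ SDPA code (see the review snippets further down); this Python version and the helper name `can_use_math_fast_path` are only illustrative.

```python
import torch

def can_use_math_fast_path(query: torch.Tensor, key: torch.Tensor, value: torch.Tensor) -> bool:
    # Name each sub-condition instead of writing one long inline expression.
    all_contiguous = query.is_contiguous() and key.is_contiguous() and value.is_contiguous()
    any_nested = query.is_nested or key.is_nested or value.is_nested
    return all_contiguous and not any_nested

q = k = v = torch.randn(2, 4, 8, 16)
print(can_use_math_fast_path(q, k, v))  # True for plain contiguous tensors
```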

pytorch-bot (bot) commented Nov 5, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/139788

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 4d4181f with merge base 4d5cc1b:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

Review comments on the diff:

```c++
bool is_causal,
std::optional<double> scale,
bool enable_gqa) {
using namespace sdp;
```
Contributor:

Personally I don't love this; maybe just bring in the two objects from the namespace instead.

```c++
&& query_.is_contiguous() && key.is_contiguous() && value.is_contiguous()
&& !query_.is_nested() && !key.is_nested() && !value.is_nested()) {
case SDPBackend::math: {
const auto any_nested = query_.is_nested() || key.is_nested() || value.is_nested();
```
Contributor:

So these checks should ideally be in the dispatching logic; I am not sure if MPS currently has a stub for doing the input validation. I think at this length of check it's okay, but if it gets any larger we should do it properly.

malfet (Contributor Author):

I agree, this code doesn't really belong here; I will send a follow-up PR to move it someplace else.

malfet added the ciflow/trunk (Trigger trunk jobs on your pull request) label Nov 5, 2024
malfet (Contributor Author) commented Nov 5, 2024

@pytorchbot merge -f "This supposed to be a no-op"

pytorchmergebot (Collaborator):
Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f as last resort and instead consider -i/--ignore-current to continue the merge ignoring current failures. This will allow currently pending tests to finish and report signal before the merge.

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

pytorchmergebot pushed a commit that referenced this pull request Nov 6, 2024
The only difference between `convert_boolean_attn_mask_cudnn` and `convert_boolean_attn_mask` is the value we initialize the boolean tensor to.
Reduce duplication by introducing `convert_boolean_attn_mask_`, which takes the `neg_inf` value, and make the abovementioned implementations trivial one-line calls.
Also, as suggested by @Skylion007, replace `at::where(foo->logical_not, -inf, 0)` with `at::where(*foo, 0, -inf)`.
Pull Request resolved: #139784
Approved by: https://github.com/Skylion007, https://github.com/drisspg
ghstack dependencies: #139788
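For intuition, here is a small Python sketch (not the ATen implementation) showing that the two `where` forms mentioned above build the same additive mask; the tensor values are made up for illustration.

```python
import torch

# Boolean attention mask: True means "attend", False means "mask out".
bool_mask = torch.tensor([[True, False], [True, True]])
zeros = torch.zeros(bool_mask.shape, dtype=torch.float32)
neg_inf = torch.full(bool_mask.shape, float("-inf"))

# Original form: negate the mask, then put -inf where the negation is True.
via_logical_not = torch.where(bool_mask.logical_not(), neg_inf, zeros)
# Suggested form: use the mask directly and swap the two branches.
direct = torch.where(bool_mask, zeros, neg_inf)

assert torch.equal(via_logical_not, direct)
```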
pytorchmergebot pushed a commit that referenced this pull request Nov 6, 2024
Maybe I'm missing some vital piece of information, but it feels like
```c++
  const auto neg_inf = at::scalar_tensor(-std::numeric_limits<float>::infinity(), at::TensorOptions().dtype(out.dtype()).device(out.device()));
  const auto masked = self.eq(neg_inf);
```
should be equivalent to a [`torch.isneginf`](https://pytorch.org/docs/stable/generated/torch.isneginf.html) call.
Pull Request resolved: #139763
Approved by: https://github.com/Skylion007
ghstack dependencies: #139788, #139784
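As a quick check of that claim using the Python API (the example values are mine; the C++ snippet above operates on `out`'s dtype and device, which this sketch ignores):

```python
import torch

x = torch.tensor([0.0, float("-inf"), float("inf"), float("nan"), 1.5])
neg_inf = torch.scalar_tensor(float("-inf"), dtype=x.dtype)

masked_via_eq = x.eq(neg_inf)            # compare against a -inf scalar tensor
masked_via_isneginf = torch.isneginf(x)  # dedicated op

assert torch.equal(masked_via_eq, masked_via_isneginf)
print(masked_via_eq)  # tensor([False,  True, False, False, False])
```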
pytorchmergebot pushed a commit that referenced this pull request Nov 6, 2024
As MacOS-15 or newer supports those out of the box. This significantly reduces memory requirements and improves performance for some stable diffusion networks.

Test plan: Run
```python
from diffusers import StableDiffusionXLPipeline, AutoencoderKL, EulerAncestralDiscreteScheduler
import torch
import time

vae = AutoencoderKL.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0",
                                    subfolder='vae',
                                    torch_dtype=torch.bfloat16,
                                    force_upcast=False).to('mps')

pipe = StableDiffusionXLPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", vae=vae,
                                                 torch_dtype=torch.bfloat16, variant="fp16").to('mps')
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

start_time = time.time()
start_mps_mem = torch.mps.driver_allocated_memory()
image = pipe(prompt="Spherical cow in vacuum",
             num_inference_steps=10,
             guidance_scale=8,
             generator=torch.Generator("mps").manual_seed(42),
             ).images[0]
end_mps_mem = torch.mps.driver_allocated_memory()
run_time = time.time() - start_time
print(f"run time in {run_time:.2f} sec, end_mps_mem {end_mps_mem/1024.0**2:.2f} Mb mem increase {(end_mps_mem-start_time)/1024.0**2:.2f} Mb")
image.save(f'bfloat16.png')
```

Before the change, total memory use was 16Gb and the run needed 65 sec to complete; after it, memory drops down to 14Gb and the run takes 50 sec to finish on an M2 Pro, though the generated image remains the same:
![image](https://github.com/user-attachments/assets/1a35efef-9f80-4cd0-ac9c-30203eab6bb1)

Fixes #139389
Pull Request resolved: #139791
Approved by: https://github.com/drisspg, https://github.com/Skylion007
ghstack dependencies: #139788, #139784, #139763
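Beyond the full diffusers run above, a lighter smoke test (my own sketch, not part of the PR) is to call `scaled_dot_product_attention` directly with bfloat16 inputs on the `mps` device:

```python
import torch
import torch.nn.functional as F

if torch.backends.mps.is_available():
    # (batch, heads, seq_len, head_dim) bfloat16 tensors on the MPS device.
    q, k, v = (torch.randn(1, 8, 128, 64, dtype=torch.bfloat16, device="mps")
               for _ in range(3))
    out = F.scaled_dot_product_attention(q, k, v)
    print(out.dtype, out.shape)  # torch.bfloat16 torch.Size([1, 8, 128, 64])
```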
pobin6 pushed commits to pobin6/pytorch that referenced this pull request Dec 5, 2024 (mirrors of pytorch#139788, pytorch#139784, pytorch#139763 and pytorch#139791 above).
github-actions bot deleted the gh/malfet/49/head branch December 6, 2024 02:14

Labels

ciflow/trunk (Trigger trunk jobs on your pull request), Merged, topic: not user facing
