Conversation

asl3 commented Jul 7, 2022

Stack from ghstack (oldest at bottom):

### Summary:

This PR implements PTQ for APoT FakeQuant. It runs a pre-trained ResNet-18 model on the ImageNet dataset to compare accuracy metrics across qconfig settings that pair uniform or APoT quantization for activations and weights.

According to the collected accuracy stats, model #2 (uniform activation, APoT weight) appears to show a slight accuracy improvement over model #1 (uniform activation, uniform weight) at 8-bit and a significant improvement at 4-bit (see the "Accuracy Stats" section below).
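(Top-1/Top-5 below are the standard ImageNet metrics: an image counts as correct if its label is the single highest-scoring class, or among the five highest. The snippet is a minimal sketch of how such metrics can be computed with `torch.topk`; the helper name and loop are illustrative, not the exact evaluation code in the test script.)

```python
import torch

def topk_accuracy(model, data_loader, ks=(1, 5), device="cpu"):
    """Return Top-k accuracies (in percent) over a labeled data loader."""
    model.eval()
    correct = {k: 0 for k in ks}
    total = 0
    with torch.no_grad():
        for images, targets in data_loader:
            images, targets = images.to(device), targets.to(device)
            logits = model(images)
            # indices of the max(ks) highest-scoring classes per sample
            _, pred = logits.topk(max(ks), dim=1)
            hits = pred.eq(targets.unsqueeze(1))  # (batch, max_k) bool
            for k in ks:
                correct[k] += hits[:, :k].any(dim=1).sum().item()
            total += targets.size(0)
    return {k: 100.0 * correct[k] / total for k in ks}
```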

### Test Plan:

Run models with: `python test/quantization/core/experimental/fx_graph_mode_apot.py`
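For context, the rough shape of the FX Graph Mode PTQ flow the script exercises is sketched below for model #2: a `QConfig` that pairs a uniform int8 activation fake quant with an APoT weight fake quant, run through `prepare_fx`/`convert_fx`. The `APoTFakeQuantize` import path and its `with_args` arguments, as well as the exact `prepare_fx` call shape (which has changed across PyTorch releases), are assumptions rather than verbatim code from the test script.

```python
import torch
from torchvision.models import resnet18
from torch.ao.quantization import QConfig, FakeQuantize, MovingAverageMinMaxObserver
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx
# Assumed import path, based on the experimental files this PR touches:
from torch.ao.quantization.experimental.fake_quantize import APoTFakeQuantize

# Model #2: uniform (affine int8) activation fake quant, APoT weight fake quant.
uniform_act = FakeQuantize.with_args(
    observer=MovingAverageMinMaxObserver, dtype=torch.quint8
)
apot_weight = APoTFakeQuantize.with_args(b=8, k=2, dtype=torch.qint8)  # argument names are assumptions
qconfig = QConfig(activation=uniform_act, weight=apot_weight)

model = resnet18(pretrained=True).eval()
example_inputs = (torch.randn(1, 3, 224, 224),)

prepared = prepare_fx(model, {"": qconfig}, example_inputs)  # insert observers / fake-quant modules
# ... feed a few calibration batches through `prepared` here (PTQ calibration) ...
quantized = convert_fx(prepared)                             # produce the quantized model
```

Models #1 and #3 swap the uniform and APoT entries of the `QConfig` accordingly.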

### Accuracy Stats:

8-bit (Uniform int8, APoT b = 8 k = 2)

**Model #1:** Uniform activation, uniform weight (FX Graph Mode quantized)
Evaluation accuracy on test dataset: 64.43% (Top-1), 85.62% (Top-5)

**Model #2:** Uniform activation, APoT weight (FX Graph Mode quantized)
Evaluation accuracy on test dataset: 64.51% (Top-1), 85.78% (Top-5)

**Model #3:** APoT activation, APoT weight (FX Graph Mode quantized)
Evaluation accuracy on test dataset: 64.32% (Top-1), 85.78% (Top-5)

4-bit (Uniform int4, APoT b = 4 k = 2)

**Model #1:** Uniform activation, uniform weight (FX Graph Mode quantized)
Evaluation accuracy on test dataset: 45.63% (Top-1), 71.96% (Top-5)

**Model #2:** Uniform activation, APoT weight (FX Graph Mode quantized)
Evaluation accuracy on test dataset: 64.24% (Top-1), 85.56% (Top-5)

**Model #3:** APoT activation, APoT weight (FX Graph Mode quantized)
Evaluation accuracy on test dataset: 45.40% (Top-1), 76.21% (Top-5)

**Full Precision model** (unquantized baseline)
Evaluation accuracy on test dataset: 69.76% (Top-1), 89.08% (Top-5)

**Eager mode quantized model**
Evaluation accuracy on test dataset: 69.49% (Top-1), 88.90% (Top-5)
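A note on the APoT parameters used above: `b` is the total bit width and `k` is the bit width of each additive term, so every quantization level is a sum of `b/k` power-of-two terms (the Additive Powers-of-Two scheme of Li et al., ICLR 2020). The snippet below is a rough, self-contained illustration of how such a level set can be enumerated; the exact construction in the experimental APoT observer may differ.

```python
import itertools
import torch

def apot_levels(b: int = 4, k: int = 2, alpha: float = 1.0) -> torch.Tensor:
    """Enumerate non-negative APoT quantization levels for total bit width b,
    built from n = b // k additive terms of k bits each (illustrative only)."""
    n = b // k
    # Term i contributes either 0 or a power of two 2^-(i + j*n), j = 0 .. 2^k - 2.
    term_choices = [
        [0.0] + [2.0 ** -(i + j * n) for j in range(2 ** k - 1)]
        for i in range(n)
    ]
    sums = sorted({sum(combo) for combo in itertools.product(*term_choices)})
    levels = torch.tensor(sums)
    return alpha * levels / levels.max()  # scale so the largest level equals alpha

print(apot_levels(b=4, k=2))  # 2**4 = 16 distinct non-negative levels
```

Unlike the evenly spaced levels of uniform int4, these levels cluster near zero, which generally matches bell-shaped weight distributions better and is consistent with the 4-bit results above.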


facebook-github-bot commented Jul 7, 2022

✅ No Failures (0 Pending)

As of commit 0530d26 (more details on the Dr. CI page):

💚 💚 Looks good so far! There are no failures yet. 💚 💚



asl3 added a commit that referenced this pull request Jul 7, 2022
ghstack-source-id: 6be1565
Pull Request resolved: #81040
### Summary:
This PR implements FX Graph Mode QAT for APoT FakeQuant.

### Test Plan:
Run models with: `python test/quantization/core/experimental/fx_graph_mode_apot.py`

Accuracy Stats:
Model #1: uniform activation, uniform weight (FX Graph Mode quantized)
Size of model (MB): 46.801265
Evaluation accuracy on test dataset: 69.76%, 89.08%

Model #2: uniform activation, APoT weight (FX Graph Mode quantized)
Size of model (MB): 46.820369
Evaluation accuracy on test dataset: 69.00%, 88.66%

Model #3: APoT activation and weight (FX Graph Mode quantized)
Size of model (MB): 46.801431
Evaluation accuracy on test dataset: 69.76%, 89.08%

Eager mode quantized model Resnet18
Size of model (MB): 11.839989
Evaluation accuracy on test dataset: 69.49%, 88.90%

[ghstack-poisoned]
asl3 added a commit that referenced this pull request Jul 12, 2022
ghstack-source-id: 6c5148e
Pull Request resolved: #81040
asl3 requested review from HDCharles, dzdang and jerryzh168 July 12, 2022 13:59
### Summary:
This PR implements FX Graph Mode QAT for APoT FakeQuant.

### Test Plan:
Run models with: `python test/quantization/core/experimental/fx_graph_mode_apot.py`

### Accuracy Stats: 
Uniform: int8
APoT: 8-bit (b = 8, k = 2)

**Model #1:** Uniform activation, uniform weight (FX Graph Mode quantized)
Evaluation accuracy on test dataset: 69.54%, 88.99%

**Model #2:** Uniform activation, APoT weight (FX Graph Mode quantized)
Evaluation accuracy on test dataset: 69.76%, 89.08%

**Model #3:** APoT activation and weight (FX Graph Mode quantized)
Evaluation accuracy on test dataset: 69.54%, 89.04%

**Model #4:** Eager mode quantized model Resnet18
Evaluation accuracy on test dataset: 69.49%, 88.90%

[ghstack-poisoned]
asl3 added a commit that referenced this pull request Jul 12, 2022
ghstack-source-id: 41e0c9f
Pull Request resolved: #81040
asl3 added a commit that referenced this pull request Jul 13, 2022
ghstack-source-id: 33e6a19
Pull Request resolved: #81040
asl3 added a commit that referenced this pull request Jul 13, 2022
ghstack-source-id: 5ef9162
Pull Request resolved: #81040

dzdang commented Jul 14, 2022

this PR is for PTQ and not QAT?

asl3 changed the title from "[quant] Implement QAT for APoT FakeQuant" to "[quant] Implement PTQ for APoT FakeQuant" on Jul 26, 2022
asl3 added a commit that referenced this pull request Jul 26, 2022
ghstack-source-id: 25a8def
Pull Request resolved: #81040
### Summary:
This PR implements PTQ for APoT FakeQuant.

### Test Plan:
Run models with: `python test/quantization/core/experimental/fx_graph_mode_apot.py`

### Accuracy Stats: 
8-bit (Uniform int8, APoT b = 8 k = 2)

**Model #1:** Uniform activation, uniform weight (FX Graph Mode quantized)
Evaluation accuracy on test dataset: 64.43%, 85.62%

**Model #2:** Uniform activation, APoT weight (FX Graph Mode quantized)
Evaluation accuracy on test dataset: 64.51%, 85.78%

**Model #3:** APoT activation, APoT weight (FX Graph Mode quantized)
Evaluation accuracy on test dataset: 64.32%, 85.78%

4-bit (Uniform int4, APoT b = 4 k = 2)

**Model #1:** Uniform activation, uniform weight (FX Graph Mode quantized)
Evaluation accuracy on test dataset: 45.63%, 71.96%

**Model #2:** Uniform activation, APoT weight (FX Graph Mode quantized)
Evaluation accuracy on test dataset: 64.24%, 85.56%

**Model #3:** APoT activation, APoT weight (FX Graph Mode quantized)
Evaluation accuracy on test dataset: 45.40%, 76.21%

**Full Precision model (FX Graph Mode quantized)**
Evaluation accuracy on test dataset: 69.76%, 89.08%

**Eager mode quantized model**
Evaluation accuracy on test dataset: 69.49%, 88.90%

[ghstack-poisoned]
asl3 added the topic: not user facing and release notes: quantization labels on Jul 27, 2022

asl3 commented Jul 27, 2022

@pytorchbot merge -g

@pytorchmergebot

@pytorchbot successfully started a merge job. Check the current status here

@pytorchmergebot

Merge failed due to Refusing to merge as mandatory check(s) pull failed for rule superuser
Raised by https://github.com/pytorch/pytorch/actions/runs/2749891464

asl3 commented Jul 27, 2022

@pytorchbot merge -g

@pytorchmergebot

@pytorchbot successfully started a merge job. Check the current status here

@pytorchmergebot

Merge failed due to Command `git -C /home/runner/actions-runner/_work/pytorch/pytorch cherry-pick -x 1dc7eed3f86d655b2f808b9d6a19f914bdb6c1b4` returned non-zero exit code 1

```
Auto-merging mypy.ini
CONFLICT (content): Merge conflict in mypy.ini
Auto-merging test/quantization/core/experimental/test_fake_quantize.py
Auto-merging torch/ao/quantization/experimental/fake_quantize.py
CONFLICT (content): Merge conflict in torch/ao/quantization/experimental/fake_quantize.py
Auto-merging torch/ao/quantization/experimental/quantizer.py
CONFLICT (content): Merge conflict in torch/ao/quantization/experimental/quantizer.py
error: could not apply 1dc7eed3f8... [quant] Implement PTQ for APoT FakeQuant
hint: After resolving the conflicts, mark them with
hint: "git add/rm <pathspec>", then run
hint: "git cherry-pick --continue".
hint: You can instead skip this commit with "git cherry-pick --skip".
hint: To abort and get back to the state before "git cherry-pick",
hint: run "git cherry-pick --abort".
```

Raised by https://github.com/pytorch/pytorch/actions/runs/2750364309

asl3 commented Jul 28, 2022

@pytorchbot merge -g

@pytorchmergebot

@pytorchbot successfully started a merge job. Check the current status here

asl3 added a commit that referenced this pull request Jul 28, 2022
ghstack-source-id: 81c9fd9
Pull Request resolved: #81040

asl3 commented Jul 28, 2022

@pytorchbot merge -g

@pytorchmergebot

@pytorchbot successfully started a merge job. Check the current status here

facebook-github-bot pushed a commit that referenced this pull request Jul 29, 2022
Pull Request resolved: #81040
Approved by: https://github.com/jerryzh168

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/13ad4739a6e9402e2039a1ce521b9aed595760b3

Reviewed By: osalpekar

Differential Revision: D38252390

Pulled By: asl3

fbshipit-source-id: 86ff2f3928fb1fc2b57867d6abcac998d17306e4
facebook-github-bot deleted the gh/asl3/40/head branch July 31, 2022 14:18