
Strided masked reduction: mean. #66784

Closed
pearu wants to merge 7 commits into gh/pearu/7/base from gh/pearu/7/head

Conversation

@pearu
Collaborator

@pearu pearu commented Oct 18, 2021
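(The PR description itself is not captured in this scrape.) For context on the feature named in the title, a masked mean reduces only over elements selected by a boolean mask: masked-out elements contribute neither to the sum nor to the element count. A minimal NumPy sketch of these semantics — illustrative only, not the PR's strided implementation; the function name and signature are assumptions:

```python
import numpy as np

def masked_mean(x, mask, axis=None):
    # Illustrative only: mean over elements where mask is True;
    # masked-out elements contribute neither to the sum nor to the count.
    total = np.where(mask, x, 0.0).sum(axis=axis)
    count = mask.sum(axis=axis)
    return total / count

a = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
m = np.array([[True, False, True],
              [True, True, False]])
masked_mean(a, m, axis=1)  # → array([2. , 4.5])
```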

@pytorch-probot

pytorch-probot bot commented Oct 18, 2021

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/pytorch/pytorch/blob/24c43c4befc9315176932518e7c7b2357c2f7c24/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

Workflows Labels (bold enabled) Status
Triggered Workflows
linux-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/xla ✅ triggered
linux-vulkan-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/vulkan ✅ triggered
linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3-clang5-mobile-build ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-dynamic ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-static ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3.6-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers ✅ triggered
linux-xenial-py3.6-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx ✅ triggered
linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/win ✅ triggered
Skipped Workflows
libtorch-linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
linux-xenial-py3-clang5-mobile-code-analysis ciflow/all, ciflow/linux, ciflow/mobile 🚫 skipped
parallelnative-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped
puretorch-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and trigger the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

@facebook-github-bot
Contributor

facebook-github-bot commented Oct 18, 2021

🔗 Helpful links

💊 CI failures summary and remediations

As of commit 24c43c4 (more details on the Dr. CI page):


  • 6/6 failures introduced in this PR

🕵️ 6 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_xenial_py3_6_gcc5_4_test (1/6)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

Oct 21 20:15:06 FAIL [0.007s]: test_get_torch_f...v_cpu_float32 (__main__.TestOperatorSignaturesCPU)
Oct 21 20:14:50   test_torchvision_models_vgg16_bn (__main__.TestVisionTracing) ... ok (2.491s)
Oct 21 20:14:52   test_torchvision_models_vgg19 (__main__.TestVisionTracing) ... ok (2.619s)
Oct 21 20:14:55   test_torchvision_models_vgg19_bn (__main__.TestVisionTracing) ... ok (2.712s)
Oct 21 20:14:56   test_torchvision_models_video_mc3_18 (__main__.TestVisionTracing) ... ok (1.396s)
Oct 21 20:14:58   test_torchvision_models_video_r2plus1d_18 (__main__.TestVisionTracing) ... ok (1.613s)
Oct 21 20:15:00   test_torchvision_models_video_r3d_18 (__main__.TestVisionTracing) ... ok (1.636s)
Oct 21 20:15:03   test_torchvision_models_wide_resnet101_2 (__main__.TestVisionTracing) ... ok (3.772s)
Oct 21 20:15:06   test_torchvision_models_wide_resnet50_2 (__main__.TestVisionTracing) ... ok (2.016s)
Oct 21 20:15:06 
Oct 21 20:15:06 ======================================================================
Oct 21 20:15:06 FAIL [0.007s]: test_get_torch_func_signature_exhaustive_cov_cpu_float32 (__main__.TestOperatorSignaturesCPU)
Oct 21 20:15:06 ----------------------------------------------------------------------
Oct 21 20:15:06 Traceback (most recent call last):
Oct 21 20:15:06   File "test_fx.py", line 3254, in test_get_torch_func_signature_exhaustive
Oct 21 20:15:06     op(*bound_args.args, **bound_args.kwargs)
Oct 21 20:15:06   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_methods_invocations.py", line 666, in __call__
Oct 21 20:15:06     return self.op(*args, **kwargs)
Oct 21 20:15:06 RuntimeError: cov(): weights sum to zero, can't be normalized
Oct 21 20:15:06 
Oct 21 20:15:06 During handling of the above exception, another exception occurred:
Oct 21 20:15:06 
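The failing test exercises torch.cov with a weights argument, and the error indicates a generated sample whose weights sum to zero. A stripped-down, pure-Python illustration of the kind of guard that produces this message (a hypothetical helper, not the actual ATen code):

```python
def weighted_mean(values, weights):
    # Mirrors the guard behind the error above: a zero weight sum
    # makes normalization undefined, so the operation must refuse it.
    total = sum(weights)
    if total == 0:
        raise RuntimeError("cov(): weights sum to zero, can't be normalized")
    return sum(v * w for v, w in zip(values, weights)) / total

weighted_mean([1.0, 2.0], [2, 2])    # → 1.5
# weighted_mean([1.0, 2.0], [0, 0])  # raises RuntimeError
```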

See GitHub Actions build linux-xenial-py3.6-clang7-asan / test (default, 2, 2, linux.2xlarge) (2/6)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

Same failure as (1/6): test_get_torch_func_signature_exhaustive_cov_cpu_float32 (__main__.TestOperatorSignaturesCPU) — RuntimeError: cov(): weights sum to zero, can't be normalized

See GitHub Actions build linux-xenial-py3.6-gcc5.4 / test (default, 2, 2, linux.2xlarge) (3/6)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

Same failure as (1/6).

See GitHub Actions build linux-bionic-py3.6-clang9 / test (default, 1, 2, linux.2xlarge) (4/6)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

Same failure as (1/6).

See GitHub Actions build linux-bionic-py3.6-clang9 / test (noarch, 1, 1, linux.2xlarge) (5/6)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

Same failure as (1/6).

See GitHub Actions build linux-xenial-py3.6-gcc5.4 / test (backwards_compat, 1, 1, linux.2xlarge) (6/6)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-10-23T09:14:04.4229164Z The PR is introduc...m to confirm whether this change is wanted or not.
2021-10-23T09:14:04.4216491Z processing existing schema:  alltoall_base(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor _1, Tensor _2, int[] _3, int[] _4) -> (__torch__.torch.classes.dist_c10d.Work _0)
2021-10-23T09:14:04.4217790Z processing existing schema:  alltoall(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, Tensor[] _2) -> (__torch__.torch.classes.dist_c10d.Work _0)
2021-10-23T09:14:04.4219154Z processing existing schema:  send(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2, int _3) -> (__torch__.torch.classes.dist_c10d.Work _0)
2021-10-23T09:14:04.4220398Z processing existing schema:  recv(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2, int _3) -> (__torch__.torch.classes.dist_c10d.Work _0)
2021-10-23T09:14:04.4221661Z processing existing schema:  recv_anysource(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2) -> (__torch__.torch.classes.dist_c10d.Work _0)
2021-10-23T09:14:04.4222857Z processing existing schema:  barrier(__torch__.torch.classes.dist_c10d.ProcessGroup _0) -> (__torch__.torch.classes.dist_c10d.Work _0)
2021-10-23T09:14:04.4223884Z processing existing schema:  __init__(__torch__.torch.classes.dist_c10d.frontend _0) -> (NoneType _0)
2021-10-23T09:14:04.4225230Z processing existing schema:  new_process_group_helper(__torch__.torch.classes.dist_c10d.frontend _0, int _1, int _2, int[] _3, str _4, __torch__.torch.classes.dist_c10d.Store _5, str? _6, int _7) -> (__torch__.torch.classes.dist_c10d.ProcessGroup _0)
2021-10-23T09:14:04.4226689Z processing existing schema:  get_process_group_by_name(__torch__.torch.classes.dist_c10d.frontend _0, str _1) -> (__torch__.torch.classes.dist_c10d.ProcessGroup _0)
2021-10-23T09:14:04.4228061Z processing existing schema:  get_name_of_process_group(__torch__.torch.classes.dist_c10d.frontend _0, __torch__.torch.classes.dist_c10d.ProcessGroup _1) -> (str _0)
2021-10-23T09:14:04.4229164Z The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not. 
2021-10-23T09:14:04.4229729Z 
2021-10-23T09:14:04.4229972Z Broken ops: [
2021-10-23T09:14:04.4230569Z 	aten::_torch_cuda_cu_linker_symbol_op(Tensor self) -> (Tensor)
2021-10-23T09:14:04.4231411Z 	aten::_histogramdd_from_bin_tensors(Tensor self, Tensor[] bins, *, Tensor? weight=None, bool density=False) -> (Tensor)
2021-10-23T09:14:04.4232427Z 	aten::_histogramdd_bin_edges(Tensor self, int[] bins, *, float[]? range=None, Tensor? weight=None, bool density=False) -> (Tensor[])
2021-10-23T09:14:04.4233462Z 	aten::_histogramdd_from_bin_cts(Tensor self, int[] bins, *, float[]? range=None, Tensor? weight=None, bool density=False) -> (Tensor)
2021-10-23T09:14:04.4233972Z ]
2021-10-23T09:14:04.4234310Z + cleanup
2021-10-23T09:14:04.4234571Z + retcode=1
2021-10-23T09:14:04.4234836Z + set +x

This comment was automatically generated by Dr. CI.

@pearu pearu added the module: sparse label (Related to torch.sparse) Oct 18, 2021
cc nikitaved pearu cpuhrsch @IvanYashchuk

[ghstack-poisoned]


cc nikitaved pearu cpuhrsch 

[ghstack-poisoned]
pearu added a commit that referenced this pull request Oct 18, 2021
ghstack-source-id: f4e2c14
Pull Request resolved: #66784
@IvanYashchuk IvanYashchuk self-requested a review October 18, 2021 13:35


cc nikitaved pearu cpuhrsch @IvanYashchuk 

[ghstack-poisoned]
pearu added a commit that referenced this pull request Oct 18, 2021
ghstack-source-id: de69a72
Pull Request resolved: #66784


cc nikitaved pearu cpuhrsch 

[ghstack-poisoned]
pearu added a commit that referenced this pull request Oct 21, 2021
ghstack-source-id: c199a69
Pull Request resolved: #66784
@cpuhrsch
Contributor

With this stack almost resolved, I think we can reasonably get started on the masked normalizations, in particular the long-awaited masked_softmax.
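A masked softmax, as mentioned here, assigns exactly zero probability to masked-out positions and renormalizes over the rest. A hedged NumPy sketch of those semantics — illustrative only, not the eventual PyTorch masked API:

```python
import numpy as np

def masked_softmax(x, mask, axis=-1):
    # Masked-out logits are replaced by a very negative value so that,
    # after the usual max-shift for numerical stability, their
    # exponentials underflow to zero; multiplying by the mask then
    # forces exact zeros at masked positions.
    x = np.where(mask, x, np.finfo(x.dtype).min)
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x) * mask
    return e / e.sum(axis=axis, keepdims=True)

logits = np.array([1.0, 2.0, 3.0])
mask = np.array([True, True, False])
masked_softmax(logits, mask)  # ≈ [0.269, 0.731, 0.0]
```

The masked position gets probability exactly 0, and the unmasked probabilities sum to 1, matching an ordinary softmax computed over only the unmasked entries.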

@pearu
Collaborator Author

pearu commented Oct 21, 2021

OK, I'll refactor this PR into two.



cc nikitaved pearu cpuhrsch 

[ghstack-poisoned]
@cpuhrsch
Contributor

@cpuhrsch has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

facebook-github-bot pushed a commit that referenced this pull request Oct 21, 2021
Summary:
Pull Request resolved: #66784

cc nikitaved pearu cpuhrsch

Test Plan: Imported from OSS

Reviewed By: saketh-are

Differential Revision: D31838513

Pulled By: cpuhrsch

fbshipit-source-id: 54b99ccf9821832c31976406379939b3c95f41de
@ngimel
Collaborator

ngimel commented Oct 22, 2021

@facebook-github-bot
Contributor

This pull request has been reverted by b8c6cdb04bdc64c28509132df3abf3d23699e5e1. To re-land this change, follow these steps.

@facebook-github-bot
Contributor

This pull request has been reverted by 20f08d2. To re-land this change, follow these steps.

@IvanYashchuk IvanYashchuk reopened this Oct 23, 2021
@IvanYashchuk
Collaborator

Superseded by #67088

@facebook-github-bot facebook-github-bot deleted the gh/pearu/7/head branch November 22, 2021 15:17
@facebook-github-bot
Contributor

This pull request has been reverted by 20f08d2. To re-land this change, follow these steps.

@cpuhrsch
Contributor

cpuhrsch commented Jan 6, 2022

I don't know why the bot decided to comment just now, but the code has landed so we can ignore it.

@pearu
Collaborator Author

pearu commented Jan 6, 2022

Yes, I was also wondering about this, but then noticed that the revert hash is the same as on Oct 23 and the code is still present in master.

