
Conversation

@ilia-cher (Contributor) commented May 2, 2019

Stack from ghstack:

Summary:
Remove explicit usage of OpenMP from ATen/native

Test Plan:
BLAS=MKL USE_MKLDNN=1 USE_OPENCV=1 USE_FFMPEG=1 python setup.py develop --cmake
pytest -s -v test/test_torch.py::TestTorch

Differential Revision: D15248505

gh-metadata: pytorch pytorch 20043 gh/ilia-cher/12/head
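For context, a minimal before/after sketch of the kind of change this stack makes (the kernel and pointer names below are hypothetical, not taken from the diff): an explicit OpenMP pragma is replaced by at::parallel_for from ATen/Parallel, which splits the [0, numel) range into chunks and runs the lambda on each chunk using whichever threading backend the build selects.

#include <ATen/Parallel.h>
#include <cmath>
#include <cstdint>

// Before (hypothetical kernel using OpenMP directly):
//   #pragma omp parallel for
//   for (int64_t i = 0; i < numel; i++) {
//     out_ptr[i] = std::exp(in_ptr[i]);
//   }

// After: the same loop expressed through ATen/Parallel.
void exp_kernel(float* out_ptr, const float* in_ptr, int64_t numel) {
  at::parallel_for(0, numel, at::internal::GRAIN_SIZE,
      [&](int64_t begin, int64_t end) {
        for (int64_t i = begin; i < end; i++) {
          out_ptr[i] = std::exp(in_ptr[i]);
        }
      });
}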
@dzhulgakov (Collaborator) left a comment: heroic work!

Port ATen/native to ATen/Parallel (commit updated; same summary and test plan as above)
@ilia-cher (Contributor, Author): bugfix + double checked all other call sites


#ifdef _OPENMP
  auto parallel_section = [&](int64_t start, int64_t end) {
    for (int64_t i = 0; i < numel; i++) {
@ilia-cher (Contributor, Author) commented on the lines above: btw, note that this is correct here; we parallelize inside the loop
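To make the point concrete, a minimal hedged sketch of one way "parallelizing inside the loop" can look; the function and variable names are hypothetical and this is not the actual call site from the diff. The outer loop deliberately walks the full numel range, and the thread-level split is applied to the inner work done for each element.

#include <ATen/Parallel.h>
#include <cstdint>

// Hypothetical example: scale each of numel rows of length inner_size.
void scale_rows(float* data, int64_t numel, int64_t inner_size, float alpha) {
  for (int64_t i = 0; i < numel; i++) {          // runs over every element on purpose
    at::parallel_for(0, inner_size, at::internal::GRAIN_SIZE,
        [&](int64_t begin, int64_t end) {        // the parallel split lives here
          for (int64_t j = begin; j < end; j++) {
            data[i * inner_size + j] *= alpha;
          }
        });
  }
}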

@dzhulgakov (Collaborator) left a comment: ok, I'll trust your audit :)

ilia-cher added 3 commits May 6, 2019 13:18
Port ATen/native to ATen/Parallel (all three commits carry the summary and test plan given above)
ilia-cher added 8 commits May 6, 2019 16:53
Port ATen/native to ATen/Parallel (all eight commits carry the summary and test plan given above)
@zou3519 deleted the gh/ilia-cher/12/head branch May 8, 2019 17:35
@facebook-github-bot (Contributor): @ilia-cher merged this pull request in 2a104f7.

zdevito pushed a commit to zdevito/ATen that referenced this pull request May 8, 2019
Summary:
Pull Request resolved: pytorch/pytorch#20043
ghimport-source-id: 8003ef8ca335d4c4717a886a3a75fd78ec53ade5

Differential Revision: D15248505

Pulled By: ilia-cher

fbshipit-source-id: 7be500ed8bfb23cc36f1dd7108e344319e3e5332

Labels: Merged; module: cpu (CPU specific problem, e.g. perf, algorithm); module: mkl (related to our MKL support); module: sparse (related to torch.sparse)
