Conversation

@ilia-cher (Contributor) commented Apr 10, 2019

Stack from ghstack:

Summary:
Removing explicit usage of OpenMP from TH.
More details in #19002

Test Plan:
BLAS=MKL USE_MKLDNN=1 USE_OPENCV=1 USE_FFMPEG=1 python setup.py develop --cmake
pytest -s -v test/test_torch.py::TestTorch

Differential Revision: D14947557
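For a sense of what the port looks like in code, here is a minimal sketch, not taken from this PR's diff: `add_kernel`, its arguments, and the grain size are made up for illustration. The `at::parallel_for` call is the ATen/Parallel entry point that takes over from explicit `#pragma omp parallel for` loops in TH.

```cpp
#include <cstdint>
#include <ATen/Parallel.h>

// Hypothetical TH-style kernel. Before the port it would carry an explicit
// OpenMP pragma, e.g.
//   #pragma omp parallel for if (n > TH_OMP_OVERHEAD_THRESHOLD)
//   for (int64_t i = 0; i < n; i++) out[i] = a[i] + b[i];
void add_kernel(const float* a, const float* b, float* out, int64_t n) {
  // After the port the same loop goes through ATen/Parallel, which owns the
  // threading backend and the chunking, so TH no longer touches OpenMP directly.
  at::parallel_for(0, n, /*grain_size=*/2048, [&](int64_t begin, int64_t end) {
    for (int64_t i = begin; i < end; i++) {
      out[i] = a[i] + b[i];
    }
  });
}
```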

ilia-cher added 10 commits April 8, 2019 01:16
Move OMP/MKL thread initialization into ATen/Parallel

Summary:
This PR implements changes described in
#19001

gh-metadata: pytorch pytorch 19011 gh/ilia-cher/1/head
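As a rough illustration of the consolidation described in #19001 (a sketch, not the actual ATen source), the idea is that OpenMP and MKL thread counts get configured by a single ATen/Parallel routine instead of at individual call sites; `configure_num_threads` and the exact preprocessor guards below are stand-ins.

```cpp
#ifdef _OPENMP
#include <omp.h>
#endif
#ifdef TH_BLAS_MKL
#include <mkl.h>
#endif

// Hypothetical stand-in for the consolidated setup: every threading backend
// is configured from one place.
void configure_num_threads(int nthreads) {
#ifdef _OPENMP
  omp_set_num_threads(nthreads);  // intra-op OpenMP pool
#endif
#ifdef TH_BLAS_MKL
  mkl_set_num_threads(nthreads);  // MKL's internal threading
#endif
}
```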
Initialize intra-op threads in JIT thread pool

Summary:
Similar to initialization in torch initModule and autograd's
Engine::thread_init, we need to call at::init_num_threads when
initializing JIT thread pool threads

gh-metadata: pytorch pytorch 19058 gh/ilia-cher/2/head
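A minimal sketch of the pattern this commit describes, assuming a generic worker pool (`WorkerPool` below is a placeholder, not the actual JIT thread pool code): each pool thread calls `at::init_num_threads()` before doing any work, mirroring torch's initModule and autograd's Engine::thread_init.

```cpp
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>
#include <ATen/Parallel.h>

// Placeholder pool; the relevant detail is where init_num_threads() is called.
struct WorkerPool {
  std::vector<std::thread> workers;

  WorkerPool(std::size_t n, std::function<void()> work) {
    for (std::size_t i = 0; i < n; ++i) {
      workers.emplace_back([work]() {
        // Set up per-thread OMP/MKL state before running any tasks,
        // just like the main thread and the autograd engine threads do.
        at::init_num_threads();
        work();
      });
    }
  }

  ~WorkerPool() {
    for (auto& t : workers) {
      t.join();
    }
  }
};
```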
Port TH library to ATen/Parallel

Summary:
Removing explicit usage of OpenMP from TH.
More details in #19002

Test Plan:
BLAS=MKL USE_MKLDNN=1 USE_OPENCV=1 USE_FFMPEG=1 python setup.py develop --cmake
pytest -s -v test/test_torch.py::TestTorch

gh-metadata: pytorch pytorch 19105 gh/ilia-cher/3/head

ilia-cher added 13 commits April 15, 2019 15:34
Move OMP/MKL thread initialization into ATen/Parallel
Initialize intra-op threads in JIT thread pool
Port TH library to ATen/Parallel
@ilia-cher (Contributor, Author) commented:

rebased on top of #19997

ilia-cher added 2 commits May 1, 2019 01:39
Port TH library to ATen/Parallel
@ilia-cher (Contributor, Author) commented:

Addressed @cpuhrsch's comments as an extra step in #20002. I checked the speed with #19997 and it seems we should be good.

ilia-cher added 7 commits May 6, 2019 13:18
Port TH library to ATen/Parallel
@VitalyFedyunin (Contributor) left a comment:

--- nvm ----

ilia-cher added 2 commits May 7, 2019 14:39
Port TH library to ATen/Parallel
@zou3519 deleted the gh/ilia-cher/3/head branch May 8, 2019 08:09
zdevito pushed a commit to zdevito/ATen that referenced this pull request May 8, 2019
Summary:
Pull Request resolved: pytorch/pytorch#19105
ghimport-source-id: db3e26f89d098e86215c48e464ace615193f5772

Differential Revision: D14947557

Pulled By: ilia-cher

fbshipit-source-id: 7e987e74c034646ba818f02e7bd711aba2ee3364
@facebook-github-bot (Contributor) commented:

@ilia-cher merged this pull request in 0ebe252.
