@ezyang (Contributor) commented on May 4, 2018

Stacked on top of #7197

Previously, ATen could be built with either CPU-only support, or
CPU/CUDA support, but only via a compile-time flag, requiring
two separate builds. This means that if you have a program which
indirectly uses a CPU-only build of ATen, and a CPU/CUDA-build of
ATen, you're gonna have a bad time. And you might want a CPU-only
build of ATen, because it is 15M (versus the 300M of a CUDA build).

This commit splits libATen.so into two libraries, CPU/CUDA, so
that it's not necessary to do a full rebuild to get CPU-only
support; instead, if you link against libATen_cpu.so only, you
are CPU-only; if you additionally link/dlopen libATen_cuda.so,
this enables CUDA support. This makes ATen's dynamic library
structure more similar to Caffe2's.
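
For illustration, a client that wants to stay CPU-only by default could pull CUDA support in at runtime along these lines (a hedged sketch; only the library name comes from this PR):

```cpp
#include <dlfcn.h>
#include <stdexcept>
#include <string>

// Loading the CUDA library at runtime runs its static initializers,
// which register the CUDA hooks with the CPU library (see below).
void loadATenCuda() {
  // RTLD_GLOBAL makes the library's symbols visible process-wide.
  void* handle = dlopen("libATen_cuda.so", RTLD_NOW | RTLD_GLOBAL);
  if (handle == nullptr) {
    const char* err = dlerror();
    throw std::runtime_error(std::string("CUDA support unavailable: ")
                             + (err ? err : "unknown error"));
  }
}
```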

The general principle for how this works is that we introduce
a hooks interface, which introduces a dynamic dispatch indirection
between a call site and implementation site of CUDA functionality,
mediated by a static initialization registry. This means that we can continue
to, for example, lazily initialize CUDA from Context (a core, CPU class) without
having a direct dependency on the CUDA bits. Instead, we check
the registry to see whether, e.g., the CUDA hooks have been loaded
(this loading happens at static initialization time), and if they
have been, we dynamically dispatch to that class. We similarly use
the hooks interface to handle Variable registration.
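
Concretely, the shape of the mechanism is roughly this (a simplified sketch; the real code uses CUDAHooksInterface and the Caffe2-style Registry mentioned below):

```cpp
#include <memory>
#include <stdexcept>

// Interface for CUDA functionality that core (CPU) code must be able
// to reach without a link-time dependency on the CUDA library.
struct CUDAHooksInterface {
  virtual ~CUDAHooksInterface() = default;
  virtual void initCUDA() const {
    throw std::runtime_error("cannot initialize CUDA without libATen_cuda.so");
  }
  virtual int getNumGPUs() const { return 0; }
};

// Filled in by a static initializer inside libATen_cuda.so if and when
// that library is loaded; stays null in a CPU-only process.
static std::unique_ptr<CUDAHooksInterface> registered_cuda_hooks;

// Call sites (e.g. Context's lazy CUDA init) always go through this
// accessor: they get the real hooks if the CUDA library registered
// them, and a safe CPU-only default otherwise.
const CUDAHooksInterface& getCUDAHooks() {
  static CUDAHooksInterface cpu_only_default;
  return registered_cuda_hooks ? *registered_cuda_hooks : cpu_only_default;
}
```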

We introduce a new invariant: if the backend of a type has not
been initialized (e.g., its library has not been dlopened; for
CUDA, this also includes CUDA initialization), then the Type
pointers in the context registry are NULL. If you access the
registry directly you must maintain this invariant.
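
For example, a direct registry read has to look roughly like this (a sketch with stand-in types, not the exact Context code):

```cpp
#include <memory>
#include <stdexcept>

// Stand-in declarations for illustration.
struct Type {};
enum class Backend { CPU, CUDA, NumOptions };
enum class ScalarType { Float, Double, NumOptions };

// Invariant: an entry stays nullptr until its backend's library has
// been loaded (and, for CUDA, CUDA itself has been initialized).
static std::unique_ptr<Type>
    type_registry[static_cast<int>(Backend::NumOptions)]
                 [static_cast<int>(ScalarType::NumOptions)];

Type* getTypeOpt(Backend b, ScalarType s) {
  // A real implementation would lazily initialize CUDA here when
  // b == Backend::CUDA, via the CUDA hooks.
  return type_registry[static_cast<int>(b)][static_cast<int>(s)].get();
}

Type& getType(Backend b, ScalarType s) {
  Type* t = getTypeOpt(b, s);
  if (!t) throw std::runtime_error("backend not initialized for requested type");
  return *t;
}
```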

We also preserve libATen.so as an empty stub library that depends on
libATen_cpu.so (and, when USE_CUDA is set, libATen_cuda.so).

There are a few potholes along the way. I document them here:

  • Previously, PyTorch maintained a separate registry for variable
    types, because no provision for them was made in the Context's
    type_registry. Now that we have the hooks mechanism, we can easily
    have PyTorch register variables in the main registry. The code
    has been refactored accordingly.

  • There is a subtle ordering issue between Variable and CUDA.
    We permit libATen_cuda.so and PyTorch to be loaded in either
    order (in practice, CUDA is always loaded "after" PyTorch, because
    it is lazily initialized.) This means that, when CUDA types are
    loaded, we must subsequently also initialize their Variable equivalents.
    Appropriate hooks were added to VariableHooks to make this possible;
    similarly, getVariableHooks() is not referentially transparent, and
    will change behavior after Variables are loaded. (This is different
    from CUDAHooks, which is "burned in" after you try to initialize CUDA.)
    See the registration sketch after this list.

  • The cmake is adjusted to separate dependencies into either CPU
    or CUDA dependencies. The generator scripts are adjusted to generate
    each file as either a CUDA file (cuda_file_manager) or a CPU file
    (file_manager).

  • I changed all native functions which were CUDA-only (the cudnn functions)
    to have dispatches for CUDA only (making it permissible to not specify
    all dispatch options.) This uncovered a bug in how we were handling
    native functions which dispatch on a Type argument; I introduced a new
    self_ty keyword to handle this case. I'm not 100% happy about it
    but it fixed my problem.

    This also exposed the fact that set_history incompletely handles
    heterogeneous return tuples combining Tensor and TensorList. I
    swapped this codegen to use flatten() (at the possible cost of
    a slight perf regression, since we're allocating another vector now
    in this code path); see the flatten sketch after this list.

  • thc_state is no longer a public member of Context; use getTHCState() instead.

  • This PR comes with Registry from Caffe2, for handling static initialization.
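
To make the load-ordering concrete, here is a minimal sketch of the registration handshake (simplified; everything beyond the VariableHooks/getVariableHooks names is illustrative):

```cpp
// Whichever side loads second completes the pairing: registering a base
// Type notifies the Variable hooks, and installing the real Variable
// hooks (when PyTorch loads) sweeps base types already present.
struct VariableHooksInterface {
  virtual ~VariableHooksInterface() = default;
  // No-op by default; PyTorch's implementation creates the Variable
  // type wrapping the given base type.
  virtual void registerVariableTypeFor(int backend, int scalar_type) const {}
};

// Set by PyTorch at static initialization time; null until then.
static const VariableHooksInterface* variable_hooks = nullptr;

// NB: not referentially transparent -- returns different hooks before
// and after PyTorch loads (unlike CUDAHooks, which is burned in).
const VariableHooksInterface& getVariableHooks() {
  static VariableHooksInterface no_op_hooks;
  return variable_hooks ? *variable_hooks : no_op_hooks;
}

// Called whenever a base Type is registered, e.g. when libATen_cuda.so
// lazily initializes the CUDA types.
void registerBaseType(int backend, int scalar_type) {
  // ...store the Type in the context registry, then pair it up:
  getVariableHooks().registerVariableTypeFor(backend, scalar_type);
}
```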
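
And the flatten() change, as a hedged sketch (Tensor is a stand-in type here):

```cpp
#include <utility>
#include <vector>

struct Tensor {};  // stand-in for illustration

// Flatten a heterogeneous (Tensor, TensorList) return into one
// std::vector<Tensor> so set_history can iterate outputs uniformly.
// This is the extra vector allocation mentioned above.
std::vector<Tensor> flatten(Tensor t, std::vector<Tensor> tl) {
  std::vector<Tensor> out;
  out.reserve(1 + tl.size());
  out.push_back(std::move(t));
  for (auto& x : tl) {
    out.push_back(std::move(x));
  }
  return out;
}
```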


@ezyang mentioned this pull request on May 4, 2018.
@ezyang force-pushed the pr/cpu-gpu-split branch 3 times, most recently from 130c633 to 62cf57d, on May 4, 2018 at 20:50.
@yf225 force-pushed the pr/cpu-gpu-split branch from c2c98f4 to 8567b34 on May 7, 2018 at 01:16.
@ezyang (Contributor, Author) commented on May 7, 2018

@ebetica The functions I added should be semantics-preserving relative to what the Torch++ API was previously. I agree that Context's mishmash of functions is a little inconsistent. Can we just "do what PyTorch does" to solve the API discrepancy?

@ezyang force-pushed the pr/cpu-gpu-split branch from d088110 to b3c2323 on May 7, 2018 at 05:20.
@ebetica (Contributor) commented on May 7, 2018

@ezyang (Contributor, Author) commented on May 7, 2018

Here are English updates on all of the bugfixes in this PR:

  • We have a general problem: on recent Ubuntu distributions, --as-needed is enabled by default for shared libraries (cc @apaszke, who was worrying about this in #7160; see also #7160 (comment)). For now, I've hacked this up in the PR to pass -Wl,--no-as-needed in all of the spots necessary to make CI work, but a more sustainable solution is to attempt to dlopen libATen_cuda.so when CUDA functionality is requested.
    • (UNDER TESTING) The JIT tests somehow manage to touch CUDA without loading libATen_cuda.so. I'm testing a fix which passes -Wl,--no-as-needed when linking libATen_cuda.so into _C.so.
  • I needed to make a bunch of fixes to Registry to make it more portable:
    • No more ##__VA_ARGS__ token pasting; instead, it is mandatory to pass at least one argument to the var-args. CUDAHooks and VariableHooks pass a nullary struct CUDAHooksArgs/VariableHooksArgs to solve the problem (see the Registry sketch after this list). We must get rid of token pasting because it does not work with MSVC.
    • It seems MSVC is not willing to generate code for constructors of template classes at use sites which cross DLL boundaries, so we explicitly instantiate the class to get around the problem. This involved tweaks to the boilerplate-generating macros, and also required us to shuffle namespaces around a bit, because you can't specialize a template unless you are in the same namespace as the template.
    • Inserted AT_API in the appropriate places where the registry must be exported.
  • There is a very subtle linking issue with LAPACK, which is solved by making sure libATen_cuda.so links against LAPACK. There's a comment in aten/src/ATen/CMakeLists.txt about this, as well as a follow-up bug at #7353 (Windows MAGMA binary requires explicit linking against MKL LAPACK, or it will silently give incorrect results).
  • There is no more libATen.so compatibility library. You must use libATen_cpu.so or libATen_cuda.so. A lot of adjustments to CPP extensions, libtorch, etc. were made to make this go through correctly. This will need fbcode fixes as well.
  • autogradpp used AT_CUDA_ENABLED directly. We've expunged these uses and added a few more things to CUDAHooks (getNumGPUs)
  • Added manualSeedAll to Generator so that we can invoke it polymorphically (it only does something different for CUDAGenerator); see the Generator sketch after this list.
  • There's a new cuda/CUDAConfig.h header for CUDA-only ifdef macros (AT_CUDNN_ENABLED, most prominently)
  • CUDAHooks/VariableHooks structs live in at namespace because Registry's namespace support is not good enough to handle it otherwise (see Registry changes above)
  • There's some modest moving around of native functions in ReduceOps and UnaryOps to get the CUDA-only function implementations into separate files, so they are only compiled into libATen_cuda.so. sspaddmm needed a separate CUDA function due to object linkage boundaries.
  • Some direct uses of native functions in CUDA code had to go away, since these functions are not exported, so you have to go through the dispatcher (at::native::empty_like becomes at::empty_like).
  • Code in THC/THCS/THCUNN now properly uses the THC_API macro instead of TH_API (which matters now that TH and THC are not in the same library).
    • This adds some code debt in torch/_thnn/utils.py and other THNN parsing code, which must handle both TH_API and THC_API.
  • TensorUtils.h is now properly exported with AT_API
  • Dead uses of TH_EXPORTS and co expunged; we now use ATen_cpu_exports and ATen_cuda_exports (new, in ATenCUDAGeneral.h) consistently
  • Fix some incorrect type annotations on _cudnn_rnn_backward, where we didn't declare a type as possibly undefined when we should have. We didn't catch this previously because optional annotations are not tested on "pass-through" native ATen ops (which don't have dispatch). Upstream issue at #7316 (Optional modifiers (e.g., Tensor?) are not checked for non-dispatched native functions).
  • There's a new cmake macro aten_compile_options for applying all of our per-target compile time options. We use this on the cpu and cuda libraries.
  • test/test_cpp_extensions.py can now be run directly with Python, assuming you've set up your PYTHONPATH correctly.
  • type_from_string does some new funny business to only query for all valid CUDA types (which causes CUDA initialization) when we see "torch.cuda." in the requested string
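
For the Registry change specifically, here is a simplified sketch of the MSVC-safe pattern (the arity rule and the nullary Args struct come from this PR; the rest of the scaffolding is illustrative):

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

// Creators must take at least one argument, so registration macros
// never need ##__VA_ARGS__ token pasting (a GNU extension MSVC rejects).
struct CUDAHooksArgs {};  // nullary: exists only to satisfy the arity rule

template <class ObjectType, class... Args>
class Registry {
 public:
  using Creator = std::function<std::unique_ptr<ObjectType>(Args...)>;
  void Register(const std::string& key, Creator creator) {
    registry_[key] = std::move(creator);
  }
  std::unique_ptr<ObjectType> Create(const std::string& key, Args... args) {
    auto it = registry_.find(key);
    return it == registry_.end() ? nullptr : it->second(args...);
  }
 private:
  std::map<std::string, Creator> registry_;
};

// Usage shape: Registry<CUDAHooksInterface, CUDAHooksArgs>; every
// creator is invoked as creator(CUDAHooksArgs{}), so macros never
// have to splice an empty __VA_ARGS__.
```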
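
And the Generator change, as a minimal sketch under the same caveats:

```cpp
#include <cstdint>

// A virtual manualSeedAll with a default that just forwards to
// manualSeed; only CUDAGenerator overrides it to seed every device.
struct Generator {
  virtual ~Generator() = default;
  virtual void manualSeed(uint64_t seed) = 0;
  virtual void manualSeedAll(uint64_t seed) { manualSeed(seed); }
};
```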

@ezyang (Contributor, Author) commented on May 7, 2018

@pytorchbot retest this please

@ezyang (Contributor, Author) commented on May 8, 2018

@pytorchbot retest this please


@ezyang (Contributor, Author) commented on May 8, 2018

@pytorchbot retest this please

@ezyang (Contributor, Author) commented on May 9, 2018

Yay, everything passed!!

@apaszke (Contributor) left a comment

Haven't reviewed the CMake and the file manager parts. The rest looks ok. It's not the prettiest patch, but I hope it will go away soon, and things will be much cleaner in c10.


@ezyang (Contributor, Author) commented on May 10, 2018

I agree with the other CI comments, but I am going to merge to avoid a CI turnaround, and I'll post a follow-up PR.

@ezyang merged commit 64834f6 into pytorch:master on May 10, 2018.
weiyangfb pushed a commit to weiyangfb/pytorch that referenced this pull request on Jun 11, 2018, with the commits:

* Split libATen.so into libATen_cpu.so and libATen_cuda.so
* Last mile libtorch fixes
* pedantic fix
facebook-github-bot pushed a commit that referenced this pull request on Jul 12, 2018:
Summary:
This PR changes the ATen `CMakeLists.txt` slightly, to enable a standalone build of ATen inside PyTorch. Currently, the tests in ATen get linked to `libcaffe.so libcaffe2.so`; as a result, ATen can't be built standalone without building from the root pytorch directory. I know that there is a big merge happening between caffe2 and pytorch, and hence the purpose of this PR is really to start a conversation on what the proper way of migrating the CMakeLists would be to enable clean builds. We should also follow up on this PR: #7275. For your reference, that PR has the explanation for why `-Wl,--no-as-needed` is needed. Moreover, without `set(ATen_CUDA_SRCS ${all_cuda_cpp})`, the standalone build will throw unresolved references.
Pull Request resolved: #9377

Reviewed By: smessmer

Differential Revision: D8825921

Pulled By: orionr

fbshipit-source-id: c521159b4885639fc7990a9819202051455d07db
zdevito pushed a commit to zdevito/ATen that referenced this pull request on Jul 12, 2018 and again on Jul 13, 2018, and goodlux pushed a commit to goodlux/pytorch that referenced this pull request on Aug 15, 2018, each with the same summary as above.