
Compiling PyTorch on macOS 10.13.5 with support for CUDA GeForce GT 750M #8974

@joseandrespena

Description


Issue description

Compiling PyTorch from source on macOS 10.13.5 with CUDA support for a GeForce GT 750M (2048 MB). The build produces the warning shown in the error log below.

Code example

Error log
/Users/josepen/Development/pytorch/aten/src/ATen/native/cuda/SummaryOps.cu:222:134: warning: self-comparison always evaluates to true [-Wtautological-compare]
switch (memType) { case CUDAHistogramMemoryType::SHARED: (__cudaPushCallConfiguration(grid, block, (CUDAHistogramMemoryType::SHARED == CUDAHistogramMemoryType::SHARED) ? sharedMem : (0), at::globalContext().getCurrentCUDAStream())) ? (void)0 : kernelHistogram1D< output_t, input_t, int64_t, 1, 2, 1, CUDAHistogramMemoryType::SHARED> (aInfo, pInfo, bInfo, binsize, totalElements, getWeightsOp); if (!((cudaGetLastError()) == (cudaSuccess))) { throw Error({func, "/Users/josepen/Development/pytorch/aten/src/ATen/native/cuda/SummaryOps.cu", 222}, at::str(at::str("cudaGetLastError() == cudaSuccess", " ASSERT FAILED at ", "/Users/josepen/Development/pytorch/aten/src/ATen/native/cuda/SummaryOps.cu", ":", 222, ", please report a bug to PyTorch. ", "kernelHistogram1D failed"))); } ; ; break; case CUDAHistogramMemoryType::MULTI_BLOCK: (__cudaPushCallConfiguration(grid, block, (CUDAHistogramMemoryType::MULTI_BLOCK == CUDAHistogramMemoryType::SHARED) ? sharedMem : (0), at::globalContext().getCurrentCUDAStream())) ? (void)0 : kernelHistogram1D< output_t, input_t, int64_t, 1, 2, 1, CUDAHistogramMemoryType::MULTI_BLOCK> (aInfo, pInfo, bInfo, binsize, totalElements, getWeightsOp); if (!((cudaGetLastError()) == (cudaSuccess))) { throw Error({func, "/Users/josepen/Development/pytorch/aten/src/ATen/native/cuda/SummaryOps.cu", 222}, at::str(at::str("cudaGetLastError() == cudaSuccess", " ASSERT FAILED at ", "/Users/josepen/Development/pytorch/aten/src/ATen/native/cuda/SummaryOps.cu", ":", 222, ", please report a bug to PyTorch. ", "kernelHistogram1D failed"))); } ; ; break; default: (__cudaPushCallConfiguration(grid, block, (CUDAHistogramMemoryType::GLOBAL == CUDAHistogramMemoryType::SHARED) ? sharedMem : (0), at::globalContext().getCurrentCUDAStream())) ? (void)0 : kernelHistogram1D< output_t, input_t, int64_t, 1, 2, 1, CUDAHistogramMemoryType::GLOBAL> (aInfo, pInfo, bInfo, binsize, totalElements, getWeightsOp); if (!((cudaGetLastError()) == (cudaSuccess))) { throw Error({func, "/Users/josepen/Development/pytorch/aten/src/ATen/native/cuda/SummaryOps.cu", 222}, at::str(at::str("cudaGetLastError() == cudaSuccess", " ASSERT FAILED at ", "/Users/josepen/Development/pytorch/aten/src/ATen/native/cuda/SummaryOps.cu", ":", 222, ", please report a bug to PyTorch. ", "kernelHistogram1D failed"))); } ; ; }
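This appears to be a warning rather than a hard failure: the macro-expanded switch picks the kernel's shared-memory size by comparing each case's memory type against CUDAHistogramMemoryType::SHARED, so in the SHARED case clang sees a constant compared with itself and raises -Wtautological-compare. A minimal sketch of the same pattern, using hypothetical names rather than the actual PyTorch macro:

```cpp
#include <cstdio>

enum class MemType { SHARED, MULTI_BLOCK, GLOBAL };

// Stand-in for the real kernel launch; only the shared-memory size matters here.
void launch(int sharedMem) { std::printf("sharedMem = %d\n", sharedMem); }

// Mimics the macro-expanded case bodies in SummaryOps.cu: each case picks the
// shared-memory size by comparing its own memory type against SHARED. In the
// SHARED case both sides of the comparison are the same constant, so clang
// reports "self-comparison always evaluates to true" [-Wtautological-compare].
#define HANDLE_CASE(MEM_TYPE, sharedMem)                              \
  case MemType::MEM_TYPE:                                             \
    launch((MemType::MEM_TYPE == MemType::SHARED) ? (sharedMem) : 0); \
    break;

void dispatch(MemType memType, int sharedMem) {
  switch (memType) {
    HANDLE_CASE(SHARED, sharedMem)       // <- the warning is emitted here
    HANDLE_CASE(MULTI_BLOCK, sharedMem)
    HANDLE_CASE(GLOBAL, sharedMem)
  }
}

int main() { dispatch(MemType::SHARED, 1024); }
```

Compiling something like this with clang should reproduce the same "self-comparison always evaluates to true" diagnostic; the expanded code is still correct, so the warning itself should be harmless.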

System Info

josepens-mbp:pytorch josepen$ python torch/utils/collect_env.py
Collecting environment information...
PyTorch version: 0.5.0a0+290d20b
Is debug build: No
CUDA used to build PyTorch: 9.2

OS: Mac OSX 10.13.5
GCC version: Could not collect
CMake version: version 3.11.1

Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.2.64
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/local/cuda/lib/libcudnn.7.dylib
/usr/local/cuda/lib/libcudnn.dylib
/usr/local/cuda/lib/libcudnn_static.a

Versions of relevant libraries:
[pip3] numpy (1.14.5)
[pip3] torch (0.3.1)
[pip3] torchtext (0.2.3)
[pip3] torchvision (0.2.1)
[conda] torch 0.5.0a0+290d20b
[conda] torchtext 0.2.3
[conda] torchvision 0.2.1
josepens-mbp:pytorch josepen$

  • PyTorch or Caffe2: pytorch
  • How you installed PyTorch (conda, pip, source): I'm compiling from source
  • Build command you used (if compiling from source): MACOSX_DEPLOYMENT_TARGET=10.13 CC=clang CXX=clang++ python setup.py install
  • OS: MacOSX 10.13.5
  • PyTorch version: I cloned the repo; this is the commit hash:
    josepens-mbp:pytorch josepen$ git rev-parse HEAD
    290d20b
  • Python version: 3.6
  • CUDA/cuDNN version: 9.2.64
  • GPU models and configuration: GeForce GT 750M (a standalone device-query sketch that confirms the card is visible to CUDA follows this list)
  • GCC version (if compiling from source): gcc --version
    Configured with: --prefix=/Library/Developer/CommandLineTools/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
    Apple LLVM version 9.1.0 (clang-902.0.39.2)
    Target: x86_64-apple-darwin17.6.0
    Thread model: posix
    InstalledDir: /Library/Developer/CommandLineTools/usr/bin
  • CMake version: cmake --version
    cmake version 3.11.1
    CMake suite maintained and supported by Kitware (kitware.com/cmake)
  • Versions of any other relevant libraries:
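
Since collect_env reports "GPU models and configuration: Could not collect", a small standalone CUDA program can confirm that the toolkit actually sees the GT 750M. This is only a diagnostic sketch (check_device.cu is a hypothetical file name, not part of the PyTorch tree); the GT 750M is expected to report compute capability 3.0:

```cpp
// check_device.cu -- hypothetical standalone diagnostic, not part of PyTorch.
// Build with:  nvcc -o check_device check_device.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
  int count = 0;
  cudaError_t err = cudaGetDeviceCount(&count);
  if (err != cudaSuccess) {
    std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
    return 1;
  }
  for (int i = 0; i < count; ++i) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, i);
    // A GeForce GT 750M should show up as compute capability 3.0 (sm_30).
    std::printf("device %d: %s, compute capability %d.%d, %zu MB\n",
                i, prop.name, prop.major, prop.minor,
                static_cast<size_t>(prop.totalGlobalMem / (1024 * 1024)));
  }
  return 0;
}
```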

Labels

todo: Not as important as medium or high priority tasks, but we will work on these.
