Showing posts with label Nvidia. Show all posts

Friday, April 26, 2013

Copperhead: Data Parallel Python from NVIDIA Research


By Vasudev Ram

News seen via PythonWeekly.

Copperhead, from NVIDIA Research, is a project to bring data parallelism to Python.



From the site (emphasis mine):

[ We define a small functional, data parallel subset of Python, which we then dynamically compile and execute on parallel platforms. Currently, we target NVIDIA GPUs, as well as multicore CPUs through OpenMP and Threading Building Blocks (TBB). ]
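To get a feel for the style, here is a minimal sketch of what a Copperhead-like data-parallel function looks like from the Python side. Note: the `cu` decorator below is a pure-Python stand-in written for this example so that it runs without the library installed; the real Copperhead decorator compiles the function for GPU or multicore execution.

```python
# Sketch of Copperhead's programming style: data-parallel code written
# as ordinary, functional Python. The real library's @cu decorator
# JIT-compiles the function for CUDA, OpenMP, or TBB backends; the
# stand-in below just returns the function unchanged.

def cu(fn):
    # Stand-in decorator so this snippet is self-contained.
    return fn

@cu
def axpy(a, x, y):
    # Elementwise a*x + y, expressed in the map-style subset that a
    # data-parallel compiler can parallelize across elements.
    return [a * xi + yi for xi, yi in zip(x, y)]

print(axpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # [12.0, 24.0, 36.0]
```

The point of the restricted, functional subset is that each output element depends only on the corresponding inputs, so the compiler is free to evaluate them in parallel.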

Sounds interesting.

Also see my recent related post:

Python-to-GPU compiler coming from Continuum Analytics and Nvidia

About GPUs and GPGPUs.

- Vasudev Ram - Dancing Bison Enterprises

Saturday, March 23, 2013

Python-to-GPU compiler coming from Continuum Analytics and Nvidia

I first saw this via Python Weekly, on The Register UK site, and then confirmed it via a Google search that turned up some other related links; see below.

Interesting news. If it works out, it means that Python can be used more for performance-intensive work that can leverage GPUs, which is already becoming a trend, from what I've been reading lately.

Nvidia, Continuum team up to sling Python at GPU coprocessors • The Register


NVIDIA and Continuum Analytics Announce NumbaPro, A Python CUDA Compiler

Update: The AnandTech post linked above also has a comment by Travis Oliphant, founder of Continuum Analytics, about the pros and cons of this initiative vs. other existing options.

GPU-Accelerated Computing Reaches Next Generation Of Programmers With Python Support Of NVIDIA CUDA.

Excerpt from above Nvidia link:

[ Continuum Analytics' Python development environment uses LLVM and the NVIDIA CUDA compiler software development kit to deliver GPU-accelerated application capabilities to Python programmers.

The modularity of LLVM makes it easy for language and library designers to add support for GPU acceleration to a wide range of general-purpose languages like Python, as well as to domain-specific programming languages. LLVM's efficient just-in-time compilation capability lets developers compile dynamic languages like Python on the fly for a variety of architectures.

"Our research group typically prototypes and iterates new ideas and algorithms in Python and then rewrites the algorithm in C or C++ once the algorithm is proven effective," said Vijay Pande, professor of Chemistry and of Structural Biology and Computer Science at Stanford University. "CUDA support in Python enables us to write performance code while maintaining the productivity offered by Python." ]
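The excerpt describes a decorator-driven, on-the-fly compilation model. As a rough sketch of how such an API typically looks to the programmer, here is an example with a hypothetical pure-Python `jit` stand-in; the real NumbaPro compiles through LLVM to native or CUDA code, and none of that machinery is reproduced here.

```python
import math

def jit(fn):
    # Hypothetical stand-in: a real JIT compiler would translate the
    # function through LLVM to machine or GPU code on first call.
    # Here we just return the function unchanged.
    return fn

@jit
def vector_norm(xs):
    # A numeric kernel of the kind a JIT compiler could specialize:
    # a tight loop over floats with no dynamic Python features.
    total = 0.0
    for v in xs:
        total += v * v
    return math.sqrt(total)

print(vector_norm([3.0, 4.0]))  # 5.0
```

The appeal for workflows like the one Prof. Pande describes is that the decorated function stays ordinary Python, so the prototype and the accelerated version can be the same code.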

I had blogged about Continuum Analytics' cloud computing scientific Python product, Wakari, earlier.

- Vasudev Ram - Dancing Bison Enterprises