# CUDA examples


## Revision as of 19:05, 7 February 2011

# Gram-Schmidt method

This example implements the Gram-Schmidt orthogonalization method for fully occupied matrices. While the Householder method is typically more efficient for sparse matrices, the Gram-Schmidt method is still widely used for fully occupied matrices due to its robustness. Due to its structure, the method can be easily implemented on GPUs using CUDA.
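To illustrate the structure that makes the method GPU-friendly, here is a minimal Python/numpy sketch of classical Gram-Schmidt (the function name and code are illustrative, not taken from the downloadable example):

```python
import numpy as np

def gram_schmidt(A):
    """Orthonormalize the columns of A with classical Gram-Schmidt."""
    A = np.asarray(A, dtype=float)
    Q = np.zeros_like(A)
    for j in range(A.shape[1]):
        v = A[:, j].copy()
        # Subtract the projections onto all previously computed
        # orthonormal columns. Each projection is a scalar product,
        # which is the part that parallelizes well on a GPU.
        for i in range(j):
            v -= (Q[:, i] @ A[:, j]) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Q = gram_schmidt(A)
# The columns of Q are orthonormal: Q.T @ Q is the identity matrix
```

The inner loop over previous columns is where a CUDA implementation would launch parallel scalar-product and vector-update kernels.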

Code example Gram-Schmidt method (23 KB)

# Scalar product

Due to its very small computational cost, the scalar product is certainly *not* suited for porting to a GPU on its own. However, if a complex algorithm that requires a scalar product is ported to a GPU, the scalar product must be ported as well; otherwise, the data would have to be transferred back to the CPU, which is even more time-consuming.

This example demonstrates several possible strategies. It also serves as a good example of the use of atomic operations, and of how they can be avoided, e.g. on older hardware. Some of the examples deliberately don't work, to show the pitfalls of massively parallel computation and to demonstrate that atomic operations are really necessary. Note that atomic operations are rather slow, so in this example no speed-up is gained by using them. However, they make the code considerably shorter and easier to read, which is important in a scientific environment, where code is continuously developed further.
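One common atomic-free strategy is a tree reduction: the partial products are combined pairwise in log2(n) steps, with a barrier between steps instead of atomic additions. A Python sketch of the idea (illustrative only; the downloadable example implements this per thread block in CUDA):

```python
import numpy as np

def scalar_product_reduction(a, b):
    """Scalar product via tree reduction, assuming len(a) is a power of two."""
    partial = a * b                 # elementwise products, one per "thread"
    n = len(partial)
    stride = 1
    while stride < n:
        # Each active "thread" at index i adds the value stride away.
        # On a GPU, this step needs a barrier (__syncthreads), not atomics.
        for i in range(0, n - stride, 2 * stride):
            partial[i] += partial[i + stride]
        stride *= 2
    return partial[0]

a = np.arange(8, dtype=float)
b = np.ones(8)
result = scalar_product_reduction(a, b)   # 0+1+...+7 = 28.0
```

The barrier between reduction steps is exactly what the broken examples omit: without it, threads read partial sums that other threads have not yet written.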

Code example scalar product (17 KB)

PyCUDA is an extremely powerful Python extension that not only lets you call CUDA code from Python, but also performs just-in-time kernel compilation and allows you to write numpy-like code that is executed on a GPU - and is therefore much faster. This example calculates the scalar product again, this time in Python. Thanks to PyCUDA it is as fast as the plain CUDA code (or even faster when using GPUArray).

Code example scalar product - in Python (15 KB)

# Ideal gas with direct OpenGL visualization

This example uses the CUDA-OpenGL binding to render an ideal gas in a rotating box. An OpenGL vertex buffer is written directly from CUDA, which runs the ideal gas simulation as a very simple kernel. Of course, the same binding could also be used to visualize more complex data.
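The "very simple kernel" amounts to advancing non-interacting particles and reflecting them at the box walls. A Python/numpy sketch of one such time step (a hypothetical illustration, not the shipped code; the real kernel writes the positions straight into the OpenGL vertex buffer):

```python
import numpy as np

def step_ideal_gas(pos, vel, dt, box=1.0):
    """Advance non-interacting particles one step, reflecting at the walls."""
    pos = pos + vel * dt
    # Reflect particles that left the box: mirror the position back
    # inside and flip the corresponding velocity component.
    over = pos > box
    under = pos < 0.0
    pos[over] = 2.0 * box - pos[over]
    pos[under] = -pos[under]
    vel = np.where(over | under, -vel, vel)
    return pos, vel

pos = np.array([[0.95, 0.5]])   # one particle near the right wall
vel = np.array([[1.0, 0.0]])
pos, vel = step_ideal_gas(pos, vel, dt=0.1)
# The particle crossed x = 1 and was reflected back into the box
```

Since each particle is updated independently, the CUDA version maps one thread to one particle.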

Code example CUDA-OpenGL bindings (15 KB)

This example also exists as a Python implementation, which requires both PyOpenGL and PyCUDA. It is written for the currently downloadable version of PyCUDA (0.94.2) and doesn't work with the current development version, but the necessary changes should be simple to figure out.

Code example CUDA-OpenGL bindings - in Python