The GPU Computing SDK includes 100+ code samples, utilities, whitepapers, and additional documentation to help you get started developing, porting, and optimizing your applications for the CUDA architecture. This page gives quick access to many of the SDK resources and to the SDK documentation; you can also download the complete SDK.
Please note that you may need to install the latest NVIDIA drivers and CUDA Toolkit to compile and run the code samples.
NewDelete: This sample demonstrates dynamic global memory allocation through device C++ new and delete operators and virtual function declarations available with CUDA 4.0.
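As a rough illustration of the feature the NewDelete sample exercises (not the sample's own code), the sketch below has each thread allocate, use, and free a small object on the device heap. It assumes a device of compute capability 2.0 or higher and compilation with nvcc -arch=sm_20.

    #include <cstdio>
    #include <cuda_runtime.h>

    struct Counter {
        int value;
        __device__ Counter() : value(0) {}
        __device__ void bump() { ++value; }
    };

    __global__ void newDeleteKernel(int *out)
    {
        Counter *c = new Counter();              // device-side operator new (global-memory heap)
        for (int i = 0; i <= threadIdx.x; ++i)
            c->bump();
        out[threadIdx.x] = c->value;
        delete c;                                // device-side operator delete
    }

    int main()
    {
        int h_out[32], *d_out;
        cudaMalloc((void**)&d_out, sizeof(h_out));
        newDeleteKernel<<<1, 32>>>(d_out);
        cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
        printf("thread 31 counted to %d\n", h_out[31]);   // expect 32
        cudaFree(d_out);
        return 0;
    }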
Simple Peer-to-Peer Transfers with Multi-GPU: This application demonstrates the new CUDA 4.0 APIs that support Peer-to-Peer (P2P) copies, P2P addressing, and Unified Virtual Addressing (UVA) between multiple Tesla GPUs.
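A minimal sketch of the kind of calls this sample is built around, assuming two peer-capable GPUs (devices 0 and 1) on the same PCIe root complex; it is not the sample itself.

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        int can01 = 0, can10 = 0;
        cudaDeviceCanAccessPeer(&can01, 0, 1);        // can device 0 map device 1's memory?
        cudaDeviceCanAccessPeer(&can10, 1, 0);
        if (!can01 || !can10) { printf("P2P is not supported between devices 0 and 1\n"); return 0; }

        cudaSetDevice(0); cudaDeviceEnablePeerAccess(1, 0);   // flags argument must be 0
        cudaSetDevice(1); cudaDeviceEnablePeerAccess(0, 0);

        const size_t bytes = 1 << 20;
        float *d0, *d1;
        cudaSetDevice(0); cudaMalloc((void**)&d0, bytes);
        cudaSetDevice(1); cudaMalloc((void**)&d1, bytes);

        // Direct GPU-to-GPU copy; with UVA enabled a plain cudaMemcpy would also work.
        cudaMemcpyPeer(d1, 1, d0, 0, bytes);
        cudaDeviceSynchronize();
        printf("copied %u bytes from device 0 to device 1\n", (unsigned int)bytes);

        cudaSetDevice(0); cudaFree(d0);
        cudaSetDevice(1); cudaFree(d1);
        return 0;
    }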
Using Inline PTX: A simple test application that demonstrates a new CUDA 4.0 ability to embed PTX in a CUDA kernel.
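A short sketch of the mechanism (not the SDK sample itself): inline PTX is embedded with the asm() statement, here reading the %laneid special register for each thread.

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void laneIdKernel(unsigned int *out)
    {
        unsigned int lane;
        // Inline PTX: copy the special register %laneid into a 32-bit register.
        asm("mov.u32 %0, %%laneid;" : "=r"(lane));
        out[threadIdx.x] = lane;
    }

    int main()
    {
        unsigned int h[64], *d;
        cudaMalloc((void**)&d, sizeof(h));
        laneIdKernel<<<1, 64>>>(d);
        cudaMemcpy(h, d, sizeof(h), cudaMemcpyDeviceToHost);
        printf("thread 35 is lane %u of its warp\n", h[35]);   // expect 3
        cudaFree(d);
        return 0;
    }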
Simple Layered Texture: Simple example that demonstrates how to use the new CUDA 4.0 feature of layered textures in CUDA C.
CUDA C Monte Carlo: Single Asian Option: This sample uses Monte Carlo simulation to price single Asian options using the NVIDIA CURAND library.
CUDA C Monte Carlo Estimation of Pi (batch QRNG): This sample uses Monte Carlo simulation to estimate Pi, using a batch QRNG from the NVIDIA CURAND library.
CUDA C Monte Carlo Estimation of Pi (batch PRNG): This sample uses Monte Carlo simulation to estimate Pi, using a batch PRNG from the NVIDIA CURAND library.
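As a hedged illustration of the batch (host-API) CURAND approach these Pi samples use, the sketch below fills two device buffers with uniform pseudorandom numbers in bulk and counts the points that fall inside the unit quarter circle. Sizes and the seed are arbitrary; link with -lcurand.

    #include <cstdio>
    #include <cuda_runtime.h>
    #include <curand.h>

    __global__ void countInCircle(const float *x, const float *y, int n, unsigned int *hits)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n && x[i] * x[i] + y[i] * y[i] <= 1.0f)
            atomicAdd(hits, 1u);                               // point landed inside the quarter circle
    }

    int main()
    {
        const int n = 1 << 20;
        float *d_x, *d_y;
        unsigned int *d_hits, h_hits = 0;
        cudaMalloc((void**)&d_x, n * sizeof(float));
        cudaMalloc((void**)&d_y, n * sizeof(float));
        cudaMalloc((void**)&d_hits, sizeof(unsigned int));
        cudaMemset(d_hits, 0, sizeof(unsigned int));

        curandGenerator_t gen;
        curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_DEFAULT);   // host-API (batch) PRNG
        curandSetPseudoRandomGeneratorSeed(gen, 1234ULL);
        curandGenerateUniform(gen, d_x, n);                       // fill device buffers in bulk
        curandGenerateUniform(gen, d_y, n);

        countInCircle<<<(n + 255) / 256, 256>>>(d_x, d_y, n, d_hits);
        cudaMemcpy(&h_hits, d_hits, sizeof(unsigned int), cudaMemcpyDeviceToHost);
        printf("pi ~= %f\n", 4.0 * h_hits / n);

        curandDestroyGenerator(gen);
        cudaFree(d_x); cudaFree(d_y); cudaFree(d_hits);
        return 0;
    }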
CUDA C Monte Carlo Estimation of Pi (batch inline QRNG): This sample uses Monte Carlo simulation to estimate Pi, using a batch inline QRNG from the NVIDIA CURAND library.
CUDA C Monte Carlo Estimation of Pi (inline PRNG): This sample uses Monte Carlo simulation to estimate Pi, using an inline PRNG from the NVIDIA CURAND library.
CUDA C Random Fog: This sample illustrates pseudo- and quasi-random numbers produced by CURAND.
simplePrintf: This CUDA Runtime API sample is a very basic sample that shows how to use the printf function in device code. Specifically, for devices with compute capability less than 2.0, the cuPrintf function is called; otherwise, printf can be used directly.
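A minimal device-side printf sketch, assuming compute capability 2.0+ and compilation with nvcc -arch=sm_20; on older devices the SDK's cuPrintf utility would be used instead.

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void helloKernel()
    {
        // Device-side printf is supported on compute capability 2.0 and higher.
        printf("Hello from block %d, thread %d\n", blockIdx.x, threadIdx.x);
    }

    int main()
    {
        helloKernel<<<2, 4>>>();
        cudaDeviceSynchronize();   // flush the device printf buffer before the program exits
        return 0;
    }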
Bilateral Filter: The bilateral filter is an edge-preserving, non-linear smoothing filter, implemented here with CUDA and OpenGL rendering. It can be used in image recovery and denoising. Each pixel is weighted by considering both the spatial distance and the color distance to its neighbors. Reference: C. Tomasi and R. Manduchi, "Bilateral Filtering for Gray and Color Images," Proceedings of the ICCV, 1998, http://users.soe.ucsc.edu/~manduchi/Papers/ICCV98.pdf
Simple Surface Write: Simple example that demonstrates the use of 2D surface references (write-to-texture).
Function Pointers: This sample illustrates how to use function pointers and implements the Sobel Edge Detection filter for 8-bit monochrome images.
Interval: Interval arithmetic operators and an example application. Uses various C++ features (templates and recursion). The recursive mode requires compute capability 2.0.
Simple D3D11 Texture: Simple program which demonstrates Direct3D11 texture interoperability with CUDA. The program creates a number of D3D11 textures (2D, 3D, and CubeMap) which are written to from CUDA kernels. Direct3D then renders the results on the screen. A Direct3D-capable device is required.
Simple Multi Copy and Compute: On GPUs with compute capability 1.1 and higher, it is possible to overlap compute with one memcopy to or from the host. On Quadro and Tesla GPUs with compute capability 2.0, a second overlapped copy operation in either direction is possible at full speed (PCIe is symmetric). This sample illustrates the usage of CUDA streams to overlap kernel execution with data copies to and from the device.
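A simplified sketch of the overlap pattern described above (not the sample's code): the work is split into chunks, and each chunk's host-to-device copy, kernel launch, and device-to-host copy are issued into its own stream so that copies and compute can overlap on capable hardware. Page-locked host memory is required for the asynchronous copies; sizes and stream count are arbitrary.

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float *data, int n, float factor)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main()
    {
        const int n = 1 << 22, nStreams = 4, chunk = n / nStreams;
        float *h_data, *d_data;
        cudaMallocHost((void**)&h_data, n * sizeof(float));   // page-locked host memory
        cudaMalloc((void**)&d_data, n * sizeof(float));
        for (int i = 0; i < n; ++i) h_data[i] = 1.0f;

        cudaStream_t streams[nStreams];
        for (int s = 0; s < nStreams; ++s) cudaStreamCreate(&streams[s]);

        // Each stream handles one chunk; work in different streams may overlap.
        for (int s = 0; s < nStreams; ++s) {
            int offset = s * chunk;
            cudaMemcpyAsync(d_data + offset, h_data + offset, chunk * sizeof(float),
                            cudaMemcpyHostToDevice, streams[s]);
            scale<<<(chunk + 255) / 256, 256, 0, streams[s]>>>(d_data + offset, chunk, 2.0f);
            cudaMemcpyAsync(h_data + offset, d_data + offset, chunk * sizeof(float),
                            cudaMemcpyDeviceToHost, streams[s]);
        }
        cudaDeviceSynchronize();
        printf("h_data[0] = %f\n", h_data[0]);                // expect 2.0

        for (int s = 0; s < nStreams; ++s) cudaStreamDestroy(streams[s]);
        cudaFreeHost(h_data); cudaFree(d_data);
        return 0;
    }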
Vector Addition: This CUDA Runtime API sample is a very basic sample that implements element-by-element vector addition. It is the same as the sample illustrating Chapter 3 of the programming guide, with some additions like error checking.
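For reference, a self-contained vector-addition program in the same spirit (element-by-element kernel plus a basic launch-error check); the sizes and test values are arbitrary.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    __global__ void vecAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *h_a = (float*)malloc(bytes), *h_b = (float*)malloc(bytes), *h_c = (float*)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

        float *d_a, *d_b, *d_c;
        cudaMalloc((void**)&d_a, bytes); cudaMalloc((void**)&d_b, bytes); cudaMalloc((void**)&d_c, bytes);
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

        int threads = 256, blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);
        cudaError_t err = cudaGetLastError();                 // the kind of error check the sample adds
        if (err != cudaSuccess) printf("launch failed: %s\n", cudaGetErrorString(err));

        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
        printf("c[100] = %f (expected 300.0)\n", h_c[100]);
        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }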
Vector Addition Driver API: This basic sample implements element-by-element vector addition. It is the same as the sample illustrating Chapter 3 of the programming guide, with some additions like error checking. This sample also uses the new CUDA 4.0 kernel launch Driver API.
Device Query: This sample enumerates the properties of the CUDA devices present in the system.
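A minimal device-enumeration sketch with the runtime API, printing a few of the properties this kind of sample reports.

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int dev = 0; dev < count; ++dev) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, dev);
            printf("Device %d: %s, compute capability %d.%d, %zu MB global memory, %d multiprocessors\n",
                   dev, prop.name, prop.major, prop.minor,
                   prop.totalGlobalMem >> 20, prop.multiProcessorCount);
        }
        return 0;
    }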
Device Query Driver API: This sample enumerates the properties of the CUDA devices present in the system using CUDA Driver API calls.
Template: A trivial template project that can be used as a starting point to create new CUDA projects.
C++ Integration: This example demonstrates how to integrate CUDA into an existing C++ application, i.e., the CUDA entry point on the host side is just a function called from C++ code, and only the file containing this function is compiled with nvcc. It also demonstrates that vector types can be used from C++ code.
Bandwidth Test: This is a simple test program to measure the memcopy bandwidth of the GPU and memcpy bandwidth across PCIe. This test application is capable of measuring device-to-device copy bandwidth, host-to-device copy bandwidth for pageable and page-locked memory, and device-to-host copy bandwidth for pageable and page-locked memory.
asyncAPI: This sample uses CUDA streams and events to overlap execution on CPU and GPU.
Clock: This example shows how to use the clock function to accurately measure the performance of a kernel.
Simple Atomic Intrinsics: A simple demonstration of global memory atomic instructions. Requires Compute Capability 1.1 or higher.
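A small sketch of a global-memory atomic in use (assumes compute capability 1.1 or higher, as noted above): many threads increment a single counter with atomicAdd, so no updates are lost. The data and counter are illustrative.

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void countEvens(const int *data, int n, int *evenCount)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n && (data[i] & 1) == 0)
            atomicAdd(evenCount, 1);   // global-memory atomic add
    }

    int main()
    {
        const int n = 1000;
        int h_data[n], *d_data, *d_count, h_count = 0;
        for (int i = 0; i < n; ++i) h_data[i] = i;
        cudaMalloc((void**)&d_data, n * sizeof(int));
        cudaMalloc((void**)&d_count, sizeof(int));
        cudaMemcpy(d_data, h_data, n * sizeof(int), cudaMemcpyHostToDevice);
        cudaMemset(d_count, 0, sizeof(int));
        countEvens<<<(n + 255) / 256, 256>>>(d_data, n, d_count);
        cudaMemcpy(&h_count, d_count, sizeof(int), cudaMemcpyDeviceToHost);
        printf("%d even values\n", h_count);   // expect 500
        cudaFree(d_data); cudaFree(d_count);
        return 0;
    }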
Pitch Linear Texture: Use of pitch linear textures.
simpleStreams: This sample uses CUDA streams to overlap kernel executions with memory copies between the host and a GPU device. This sample uses a new CUDA 4.0 feature that supports pinning of generic host memory. Requires Compute Capability 1.1 or higher.
Simple Templates: This sample is a templatized version of the template project. It also shows how to correctly templatize dynamically allocated shared memory arrays.
Simple Texture: Simple example that demonstrates use of textures in CUDA.
Simple Texture (Driver Version): Simple example that demonstrates use of textures in CUDA. This sample uses the new CUDA 4.0 kernel launch Driver API.
Simple Vote Intrinsics: Simple program which demonstrates how to use the Vote (any, all) intrinsic instructions in a CUDA kernel. Requires Compute Capability 1.2 or higher.
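A minimal sketch of the __all() and __any() warp-vote intrinsics (compute capability 1.2+). It launches a single warp so that lane 0 can report the warp-wide result; the input values are arbitrary.

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void voteKernel(const int *data, int *results)
    {
        int v = data[threadIdx.x];
        int allPositive = __all(v > 0);    // 1 if the predicate holds for every thread in the warp
        int anyNegative = __any(v < 0);    // 1 if it holds for at least one thread in the warp
        if (threadIdx.x == 0) {            // one warp launched, so lane 0 reports for everyone
            results[0] = allPositive;
            results[1] = anyNegative;
        }
    }

    int main()
    {
        int h_data[32], h_results[2], *d_data, *d_results;
        for (int i = 0; i < 32; ++i) h_data[i] = i + 1;   // all positive
        cudaMalloc((void**)&d_data, sizeof(h_data));
        cudaMalloc((void**)&d_results, sizeof(h_results));
        cudaMemcpy(d_data, h_data, sizeof(h_data), cudaMemcpyHostToDevice);
        voteKernel<<<1, 32>>>(d_data, d_results);
        cudaMemcpy(h_results, d_results, sizeof(h_results), cudaMemcpyDeviceToHost);
        printf("__all(v > 0) = %d, __any(v < 0) = %d\n", h_results[0], h_results[1]);  // expect 1, 0
        cudaFree(d_data); cudaFree(d_results);
        return 0;
    }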
simpleZeroCopy: This sample illustrates how to use zero-copy memory: kernels can read and write directly to pinned system memory. This sample requires a GPU that supports this feature (MCP79 or GT200). (whitepaper included)
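A hedged sketch of the zero-copy pattern on hardware that supports mapped pinned memory: the host buffer is allocated as mapped page-locked memory and the kernel reads and writes it directly through a device-side alias, so no cudaMemcpy is needed. Sizes and values are illustrative.

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void doubleInPlace(float *data, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;    // reads and writes go straight to pinned host memory
    }

    int main()
    {
        cudaSetDeviceFlags(cudaDeviceMapHost);                 // must be set before the context is created

        const int n = 1024;
        float *h_data, *d_alias;
        cudaHostAlloc((void**)&h_data, n * sizeof(float), cudaHostAllocMapped);  // mapped, page-locked buffer
        for (int i = 0; i < n; ++i) h_data[i] = (float)i;

        cudaHostGetDevicePointer((void**)&d_alias, h_data, 0); // device-side alias of the same memory
        doubleInPlace<<<(n + 255) / 256, 256>>>(d_alias, n);
        cudaDeviceSynchronize();                               // no explicit copy back required

        printf("h_data[10] = %f\n", h_data[10]);               // expect 20.0
        cudaFreeHost(h_data);
        return 0;
    }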
CUDA Context Thread Management: Simple program illustrating how to use the CUDA context management API, together with the new CUDA 4.0 parameter passing and CUDA launch API. CUDA contexts can be created separately and attached independently to different threads.
Simple Multi-GPU: This application demonstrates how to use the new CUDA 4.0 API for CUDA context management and multi-threaded access to run CUDA kernels on multiple GPUs.
Simple OpenGL: Simple program which demonstrates interoperability between CUDA and OpenGL. The program modifies vertex positions with CUDA and uses OpenGL to render the geometry.
Simple Texture 3D: Simple example that demonstrates use of 3D textures in CUDA.
Matrix Multiplication (CUDA Runtime API Version): This sample implements matrix multiplication and is exactly the same as Chapter 6 of the programming guide. It has been written for clarity of exposition to illustrate various CUDA programming principles, not with the goal of providing the most performant generic kernel for matrix multiplication. To illustrate GPU performance for matrix multiply, this sample also shows how to use the new CUDA 4.0 interface for CUBLAS to demonstrate high-performance matrix multiplication.
Matrix Multiplication (CUDA Driver API Version with Dynamic Linking): This sample revisits matrix multiplication using the CUDA Driver API. It demonstrates how to link to the CUDA driver at runtime and how to use JIT (just-in-time) compilation from PTX code. It has been written for clarity of exposition to illustrate various CUDA programming principles, not with the goal of providing the most performant generic kernel for matrix multiplication. CUBLAS provides high-performance matrix multiplication.
Scalar Product: This sample calculates scalar products of a given set of input vector pairs.
Concurrent Kernels: This sample demonstrates the use of CUDA streams for concurrent execution of several kernels on devices of compute capability 2.0 or higher; devices of compute capability 1.x will run the kernels sequentially. It also illustrates how to introduce dependencies between CUDA streams with the cudaStreamWaitEvent function introduced in CUDA 3.2.
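A small sketch of the cudaStreamWaitEvent dependency mechanism mentioned above (illustrative names, not the sample's code): stream B is made to wait on an event recorded in stream A without blocking the host thread.

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void stage1(float *data) { data[threadIdx.x] += 1.0f; }
    __global__ void stage2(float *data) { data[threadIdx.x] *= 2.0f; }

    int main()
    {
        float *d_data;
        cudaMalloc((void**)&d_data, 64 * sizeof(float));
        cudaMemset(d_data, 0, 64 * sizeof(float));

        cudaStream_t sA, sB;
        cudaEvent_t stage1Done;
        cudaStreamCreate(&sA);
        cudaStreamCreate(&sB);
        cudaEventCreate(&stage1Done);

        stage1<<<1, 64, 0, sA>>>(d_data);
        cudaEventRecord(stage1Done, sA);          // mark the point stage2 must wait for
        cudaStreamWaitEvent(sB, stage1Done, 0);   // stream B waits on the event, host is not blocked
        stage2<<<1, 64, 0, sB>>>(d_data);

        cudaDeviceSynchronize();
        float h;
        cudaMemcpy(&h, d_data, sizeof(float), cudaMemcpyDeviceToHost);
        printf("data[0] = %f\n", h);              // expect (0 + 1) * 2 = 2
        cudaStreamDestroy(sA); cudaStreamDestroy(sB);
        cudaEventDestroy(stage1Done); cudaFree(d_data);
        return 0;
    }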
Aligned Types: A simple test showing the huge access speed gap between aligned and misaligned structures.
PTX Just-in-Time Compilation: This sample demonstrates how to use JIT compilation for PTX code.
DCT8x8: This sample demonstrates how the Discrete Cosine Transform (DCT) for blocks of 8 by 8 pixels can be performed using CUDA: a naive implementation by definition and a more traditional approach used in many libraries. As opposed to implementing DCT in a fragment shader, CUDA allows for an easier and more efficient implementation. (whitepaper included)
1D Discrete Haar Wavelet Decomposition: Discrete Haar wavelet decomposition for 1D signals whose length is a power of 2.
Eigenvalues: The computation of all or a subset of all eigenvalues is an important problem in linear algebra, statistics, physics, and many other fields. This sample demonstrates a parallel implementation of a bisection algorithm for the computation of all eigenvalues of a tridiagonal symmetric matrix of arbitrary size with CUDA. (whitepaper included)
Fast Walsh Transform: Naturally (Hadamard)-ordered fast Walsh transform for batched vectors of arbitrary eligible (power-of-two) lengths.
CUDA Histogram: This sample demonstrates efficient implementations of 64-bin and 256-bin histograms. (whitepaper included)
Line of Sight: This sample is an implementation of a simple line-of-sight algorithm: given a height map and a ray originating at some observation point, it computes all the points along the ray that are visible from the observation point. The implementation is based on the Thrust library (http://code.google.com/p/thrust/).
Matrix Transpose: This sample demonstrates matrix transpose. Several kernel variants are shown that illustrate different optimizations for achieving high performance. (whitepaper included)
Box Filter: Fast image box filter using CUDA with OpenGL rendering.
Post-Process in OpenGL: This sample shows how to post-process an image rendered in OpenGL using CUDA.
CUDA Parallel Reduction: A parallel sum reduction that computes the sum of a large array of values. This sample demonstrates several important optimization strategies for data-parallel algorithms like reduction. (whitepaper included)
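One classic shared-memory reduction kernel in the spirit of this sample (a sketch, not the sample's most optimized variant): each block reduces its chunk with sequential addressing and writes one partial sum, and the host finishes the final sum.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    __global__ void reduceSum(const float *in, float *blockSums, int n)
    {
        extern __shared__ float sdata[];
        int tid = threadIdx.x;
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        sdata[tid] = (i < n) ? in[i] : 0.0f;
        __syncthreads();

        for (int s = blockDim.x / 2; s > 0; s >>= 1) {   // tree reduction in shared memory
            if (tid < s) sdata[tid] += sdata[tid + s];
            __syncthreads();
        }
        if (tid == 0) blockSums[blockIdx.x] = sdata[0];  // one partial sum per block
    }

    int main()
    {
        const int n = 1 << 20, threads = 256, blocks = (n + threads - 1) / threads;
        float *h_in = (float*)malloc(n * sizeof(float));
        float *h_partial = (float*)malloc(blocks * sizeof(float));
        for (int i = 0; i < n; ++i) h_in[i] = 1.0f;

        float *d_in, *d_partial;
        cudaMalloc((void**)&d_in, n * sizeof(float));
        cudaMalloc((void**)&d_partial, blocks * sizeof(float));
        cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

        reduceSum<<<blocks, threads, threads * sizeof(float)>>>(d_in, d_partial, n);
        cudaMemcpy(h_partial, d_partial, blocks * sizeof(float), cudaMemcpyDeviceToHost);

        double total = 0.0;
        for (int b = 0; b < blocks; ++b) total += h_partial[b];   // finish the last step on the CPU
        printf("sum = %.0f (expected %d)\n", total, n);

        cudaFree(d_in); cudaFree(d_partial); free(h_in); free(h_partial);
        return 0;
    }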
CUDA Parallel Prefix Sum (Scan): This example demonstrates an efficient CUDA implementation of parallel prefix sum, also known as "scan". Given an array of numbers, scan computes a new array in which each element is the sum of all the elements before it in the input array.
DirectX Texture Compressor (DXTC): High-quality DXT compression using CUDA. This example shows how to implement an existing computationally intensive CPU compression algorithm in parallel on the GPU, and obtain an order of magnitude performance improvement. (whitepaper included)
Image denoising: This sample demonstrates two adaptive image denoising techniques, KNN and NLM, based on computation of both geometric and color distance between texels. While both techniques are implemented in the DirectX SDK using shaders, a massively sped-up variation of the latter technique, taking advantage of shared memory, is implemented here in addition to the DirectX counterparts. (whitepaper included)
Sobel Filter: This sample implements the Sobel edge detection filter for 8-bit monochrome images.
Recursive Gaussian Filter: This sample implements a Gaussian blur using Deriche's recursive method. The advantage of this method is that the execution time is independent of the filter width.
Bicubic Texture Filtering: This sample demonstrates how to efficiently implement bicubic texture filtering in CUDA.
Fluids (OpenGL Version): An example of fluid simulation using CUDA and CUFFT, with OpenGL rendering.
CUDA FFT Ocean Simulation: This sample simulates an ocean height field using CUFFT and renders the result using OpenGL.
FFT-Based 2D Convolution: This sample demonstrates how 2D convolutions with very large kernel sizes can be efficiently implemented using FFT transformations.
CUDA Separable Convolution: This sample implements a separable convolution filter of a 2D signal with a Gaussian kernel. (whitepaper included)
Texture-based Separable Convolution: Texture-based implementation of a separable 2D convolution with a Gaussian kernel. Used for performance comparison against the convolutionSeparable sample.
threadFenceReduction: This sample shows how to perform a reduction operation on an array of values using the thread-fence intrinsic to produce a single value in a single kernel (as opposed to two or more kernel calls, as shown in the "reduction" SDK sample). Single-pass reduction requires global atomic instructions (Compute Capability 1.1 or later) and the __threadfence() intrinsic (CUDA 2.2 or later).
CUDA Radix Sort using the Thrust Library: This sample demonstrates a very fast and efficient parallel radix sort that uses the Thrust library (http://code.google.com/p/thrust/). The included RadixSort class can sort either key-value pairs (with float or unsigned integer keys) or keys only. (whitepaper included)
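A minimal key-value sort through Thrust, the same library this sample builds on; for primitive key types Thrust dispatches a GPU radix sort. The keys and values below are arbitrary.

    #include <cstdio>
    #include <thrust/device_vector.h>
    #include <thrust/sort.h>

    int main()
    {
        const int n = 8;
        unsigned int h_keys[n] = { 42, 7, 19, 7, 3, 88, 1, 19 };
        int h_vals[n]          = {  0, 1,  2, 3, 4,  5, 6,  7 };

        thrust::device_vector<unsigned int> d_keys(h_keys, h_keys + n);  // copy host data to the device
        thrust::device_vector<int> d_vals(h_vals, h_vals + n);

        // Sort the keys on the GPU and permute the values along with them.
        thrust::sort_by_key(d_keys.begin(), d_keys.end(), d_vals.begin());

        for (int i = 0; i < n; ++i)
            printf("%u -> %d\n", (unsigned int)d_keys[i], (int)d_vals[i]);
        return 0;
    }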
CUDA Sorting Networks: This sample implements bitonic sort and odd-even merge sort (also known as Batcher's sort), algorithms belonging to the class of sorting networks. While generally less efficient on large sequences than algorithms with better asymptotic complexity (e.g., merge sort or radix sort), they may be the algorithms of choice for sorting batches of short- to mid-sized (key, value) array pairs. Refer to the excellent tutorial by H. W. Lang: http://www.iti.fh-flensburg.de/lang/algorithmen/sortieren/networks/indexen.htm
Binomial Option Pricing: This sample evaluates the fair call price for a given set of European options under the binomial model. It will also take advantage of double precision if a GTX 200-class GPU is present.
Black-Scholes Option Pricing: This sample evaluates fair call and put prices for a given set of European options using the Black-Scholes formula. (whitepaper included)
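A compact sketch of the Black-Scholes call-price formula evaluated per option in a kernel. The parameters are illustrative, and the cumulative normal distribution is computed here with erfcf for brevity rather than a polynomial approximation; this is not the sample's code.

    #include <cstdio>
    #include <math.h>
    #include <cuda_runtime.h>

    // Cumulative normal distribution via the complementary error function.
    __device__ float cnd(float d) { return 0.5f * erfcf(-d * 0.70710678f); }

    __global__ void blackScholesCall(float *call, const float *S, const float *X,
                                     float T, float r, float sigma, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float sqrtT = sqrtf(T);
        float d1 = (logf(S[i] / X[i]) + (r + 0.5f * sigma * sigma) * T) / (sigma * sqrtT);
        float d2 = d1 - sigma * sqrtT;
        call[i] = S[i] * cnd(d1) - X[i] * expf(-r * T) * cnd(d2);   // fair call price
    }

    int main()
    {
        const int n = 4;
        float h_S[n] = { 90.0f, 100.0f, 110.0f, 120.0f };   // spot prices
        float h_X[n] = { 100.0f, 100.0f, 100.0f, 100.0f };  // strikes
        float h_call[n], *d_S, *d_X, *d_call;
        cudaMalloc((void**)&d_S, n * sizeof(float));
        cudaMalloc((void**)&d_X, n * sizeof(float));
        cudaMalloc((void**)&d_call, n * sizeof(float));
        cudaMemcpy(d_S, h_S, n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(d_X, h_X, n * sizeof(float), cudaMemcpyHostToDevice);

        blackScholesCall<<<1, n>>>(d_call, d_S, d_X, 1.0f, 0.05f, 0.2f, n);
        cudaMemcpy(h_call, d_call, n * sizeof(float), cudaMemcpyDeviceToHost);
        for (int i = 0; i < n; ++i)
            printf("S = %.0f: call = %.2f\n", h_S[i], h_call[i]);
        cudaFree(d_S); cudaFree(d_X); cudaFree(d_call);
        return 0;
    }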
Niederreiter Quasirandom Sequence Generator: This sample implements a Niederreiter quasirandom sequence generator and an inverse cumulative normal distribution function for generating standard normally distributed numbers.
Monte Carlo Option Pricing: This sample evaluates the fair call price for a given set of European options using the Monte Carlo approach. It uses double-precision hardware if a GTX 200-class GPU is present. (whitepaper included)
Monte Carlo Option Pricing with Multi-GPU support: This sample evaluates the fair call price for a given set of European options using the Monte Carlo approach, taking advantage of all CUDA-capable GPUs installed in the system. It uses double-precision hardware if a GTX 200-class GPU is present, and it also takes advantage of the CUDA 4.0 capability of controlling multiple GPUs from a single CPU thread. (whitepaper included)
MersenneTwister: This sample implements the Mersenne Twister random number generator and a Cartesian Box-Muller transformation on the GPU. (whitepaper included)
Mandelbrot: This sample uses CUDA to compute and display the Mandelbrot or Julia sets interactively. It also illustrates the use of "double single" arithmetic to improve precision when zooming a long way into the pattern. It uses double-precision hardware if a GT200-class GPU is present. Thanks to Mark Granger of NewTek, who submitted this sample to the SDK!
Particles: This sample uses CUDA to simulate and visualize a large set of particles and their physical interaction. It implements a uniform grid data structure using either atomic operations or a fast radix sort from the Thrust library. (whitepaper included)
Marching Cubes Isosurfaces: This sample extracts a geometric isosurface from a volume dataset using the marching cubes algorithm. It uses the scan (prefix sum) function from the Thrust library to perform stream compaction.
Volume Rendering with 3D Textures: This sample demonstrates basic volume rendering using 3D textures.
CUDA N-Body Simulation: This sample demonstrates efficient all-pairs simulation of a gravitational n-body system in CUDA. It accompanies the GPU Gems 3 chapter "Fast N-Body Simulation with CUDA". With CUDA 4.0, the nBody sample has been updated to take advantage of new features to easily scale the n-body simulation across multiple GPUs in a single PC. Adding “-numdevices=N” to the command line will cause the sample to use N devices (if available) for simulation. In this mode, the position and velocity data for all bodies are read from system memory using “zero copy” rather than from device memory. For a small number of devices (4 or fewer) and a large enough number of bodies, bandwidth is not a bottleneck, so we can achieve strong scaling across these devices. (whitepaper included)
Smoke Particles: Smoke simulation with volumetric shadows using the half-angle slicing technique. Uses CUDA for the procedural simulation, the Thrust library for sorting, and OpenGL for rendering. (whitepaper included)
Sobol Quasirandom Number Generator: This sample implements a Sobol quasirandom sequence generator.
Matrix Multiplication (CUDA Driver API Version): This sample implements matrix multiplication and uses the new CUDA 4.0 kernel launch Driver API. It has been written for clarity of exposition to illustrate various CUDA programming principles, not with the goal of providing the most performant generic kernel for matrix multiplication. CUBLAS provides high-performance matrix multiplication.
simpleMPI: Simple example demonstrating how to use MPI in combination with CUDA. This executable is not pre-built with the SDK installer.