2. TALK STRUCTURE
• What is CUDA?
• History of GPUs
• Hardware Presentation
• How does it work?
• Code Example
• Examples & Videos
• Results & Conclusion
3. WHAT IS CUDA
• Compute Unified Device Architecture
• A parallel computing platform and
programming model created by NVIDIA and
implemented on the graphics processing
units (GPUs) that they produce
• CUDA gives developers access to the
virtual instruction set and memory of the
parallel computational elements in CUDA GPUs
8. VOODOO GRAPHICS SYSTEM ARCHITECTURE
[Block diagram: the CPU (with Core Logic and System Memory) handles Geometry Gather and Geometry Processing; on the GPU side, the Voodoo's FBI unit (with FB Memory) and TMU (with TEX Memory) handle Triangle Processing, Pixel Processing, and Z/Blend]
9. GEFORCE GTX280 SYSTEM ARCHITECTURE
[Block diagram: the CPU (with Core Logic and System Memory) handles Scene Management, Physics and AI, and Geometry Gather; the GTX 280 GPU (with its own GPU Memory) handles Geometry Processing, Triangle Processing, Pixel Processing, and Z/Blend]
11. SOUL OF NVIDIA’S GPU ROADMAP
• Increase Performance / Watt
• Make Parallel Programming Easier
• Run more of the Application on the GPU
12. MYTHS ABOUT CUDA
• You have to port your entire application to the
GPU
• It is really hard to accelerate your application
• There is a PCI-e Bottleneck
16. DEVICE MODEL
[Device model diagram: the Host feeds an Input Assembler and a Thread Execution Manager; an array of multiprocessors, each with its own Parallel Data Cache and Texture unit; Load/Store paths connect everything to Global Memory]
24. EXECUTION MODEL
Vocabulary:
• Host: the CPU.
• Device: the GPU.
• Kernel: a function (a piece of code) executed on the GPU.
• SIMT: Single Instruction, Multiple Threads.
• Warp: a set of 32 threads; the smallest unit of work scheduled in the SIMT model.
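To make the vocabulary concrete, here is a minimal sketch (the kernel name hello_kernel and its launch configuration are illustrative, not from the talk): the host launches a kernel, a __global__ function that every thread on the device executes.

// Minimal host/device sketch: one block of 32 threads (one warp).
// Device-side printf requires a reasonably modern GPU.
#include <stdio.h>

__global__ void hello_kernel(void) {
    // Runs on the device; each thread executes this body.
    printf("hello from thread %d\n", threadIdx.x);
}

int main(void) {
    hello_kernel<<<1, 32>>>();   // the host launches the kernel on the device
    cudaDeviceSynchronize();     // wait for the device to finish
    return 0;
}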
25. EXECUTION MODEL
A CUDA kernel is executed by
an array of threads (SIMT).
All threads execute the same code.
Each thread has a
unique identifier (threadIdx, with x, y, z components).
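As a sketch of how the multi-component identifier is used (the kernel scale2d and its parameters are hypothetical), each thread combines its block and thread IDs to find its own element of a 2D array:

// Each thread derives a unique (row, col) from its built-in identifiers.
__global__ void scale2d(float *m, int width, int height) {
    int col = blockIdx.x * blockDim.x + threadIdx.x;  // x component
    int row = blockIdx.y * blockDim.y + threadIdx.y;  // y component
    if (row < height && col < width)
        m[row * width + col] *= 2.0f;                 // same code, different data
}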
26. EXECUTION MODEL - SOFTWARE
Thread: the smallest logical unit
Block: a set of threads
(max 512 on this hardware generation)
• Private shared memory
• Barrier (synchronization among the block's threads); see the sketch below
Grid: a set of blocks
• No synchronization between blocks; the only grid-wide barrier is the end of the kernel
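A minimal sketch of these block-level features (the kernel reverse_block and the fixed block size of 256 are assumptions for illustration): threads of one block cooperate through shared memory and a barrier, while blocks remain independent.

// Each block reverses its own 256-element tile in shared memory.
// Assumes n is a multiple of the block size (256).
__global__ void reverse_block(float *d, int n) {
    __shared__ float tile[256];    // private shared memory of this block
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = d[i];
    __syncthreads();               // barrier: all of the block's threads wait here
    d[i] = tile[blockDim.x - 1 - threadIdx.x];
}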
27. EXECUTION MODEL
Specified by the programmer at runtime:
- number of blocks (gridDim)
- block size (blockDim)
CUDA kernel invocation
f <<<G, B>>>(a, b, c)
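For example (a hypothetical sketch; f, a, b and c stand for any kernel and its arguments), the configuration is typically computed from the problem size:

// Choose the execution configuration at runtime.
int N = 1000000;
int B = 256;               // block size (blockDim)
int G = (N + B - 1) / B;   // number of blocks (gridDim), rounded up
f<<<G, B>>>(a, b, c);      // launches G * B threads running kernel f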
29. EXECUTION MODEL
Each thread runs on a
scalar processor.
Thread blocks run
on the multiprocessors.
A grid runs a single CUDA kernel.
30. SCHEDULE
[Scheduling diagram: over time, the multiprocessor issues instructions from different warps, e.g. warp 8 instruction 11, warp 1 instruction 42, warp 3 instruction 95, warp 8 instruction 12, ..., warp 3 instruction 96; Block 1, Block 2, ..., Block n are each split into warps 1, 2, ..., m]
• Threads are grouped into blocks
• IDs are assigned to blocks and
threads
• The blocks' threads are distributed
among the multiprocessors
• The threads of a block are grouped into
warps
• A warp is the smallest unit of
scheduling and consists of 32 threads
• Several warps reside on each
multiprocessor, but only one is
running at a time (see the device-query sketch below)
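The warp size and the number of multiprocessors can be inspected at runtime with the standard CUDA device-query API; a small sketch:

// Print the warp size and multiprocessor count of device 0.
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("%s: warp size %d, %d multiprocessors\n",
           prop.name, prop.warpSize, prop.multiProcessorCount);
    return 0;
}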
31. CODE EXAMPLE
The following program calculates and prints the squares of the first 100 integers.
// 1) Include header files
#include <stdio.h>
#include <cuda.h>
// 2) Kernel that executes on the CUDA device
__global__ void square_array(float *a, int N) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < N)
        a[idx] = a[idx] * a[idx];
}
// 3) main() routine, executed on the host CPU
int main(void) {
32. CODE EXAMPLE
// 3.1: Define pointers to the host and device arrays
float *a_h, *a_d;
// 3.2: Define the other variables used in the program
const int N = 100;
size_t size = N * sizeof(float);
// 3.3: Allocate the array on the host
a_h = (float *)malloc(size);
// 3.4: Allocate the array on the device (DRAM of the GPU)
cudaMalloc((void **)&a_d, size);
// Initialize the host array
for (int i = 0; i < N; i++)
    a_h[i] = (float)i;
33. CODE EXAMPLE
// 3.5: Copy the data from the host array to the device array
cudaMemcpy(a_d, a_h, size, cudaMemcpyHostToDevice);
// 3.6: Kernel call with its execution configuration
int block_size = 4;
int n_blocks = N / block_size + (N % block_size != 0);  // round up
square_array<<<n_blocks, block_size>>>(a_d, N);
// 3.7: Retrieve the result from the device into host memory
cudaMemcpy(a_h, a_d, sizeof(float) * N, cudaMemcpyDeviceToHost);
34. CODE EXAMPLE
// 3.8: Print the result
for (int i = 0; i < N; i++)
    printf("%d\t%f\n", i, a_h[i]);
// 3.9: Free the allocated memory on the host and the device
free(a_h);
cudaFree(a_d);
return 0;
}
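Assuming the source is saved as square.cu (the file name is an assumption) and the CUDA toolkit is installed, the example builds and runs with:

nvcc square.cu -o square
./square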
41. CONCLUSION
• Easy to use and powerful, so it is worth it!
• GPU computing is the future. The results
confirm our expectations, and the industry is giving
it more and more importance.
• In the coming years we will see more applications
that use parallel computing.