J.A.R.
J.C.G.
T.R.G.B.
GPU: UNDERSTANDING CUDA
TALK STRUCTURE
• What is CUDA?
• History of GPU
• Hardware Presentation
• How does it work?
• Code Example
• Examples & Videos
• Results & Conclusion
WHAT IS CUDA
• Compute Unified Device Architecture
• A parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) that they produce
• CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs
HISTORY
• 1981 – Monochrome Display Adapter
• 1988 – VGA Standard (VGA Controller) – VESA Founded
• 1989 – SVGA
• 1993 – PCI – NVidia Founded
• 1996 – AGP – Voodoo Graphics – Pentium
• 1999 – NVidia GeForce 256 – P3
• 2004 – PCI Express – GeForce6600 – P4
• 2006 – GeForce 8800
• 2008 – GeForce GTX280 / Core2
HISTORICAL PC
[Block diagram: the CPU sits on the system bus with the North Bridge (memory) and South Bridge (LAN, UART); the VGA controller, with its own memory buffer, hangs off the PCI bus and drives the screen.]
INTEL PC STRUCTURE
NEW INTEL PC STRUCTURE
VOODOO GRAPHICS SYSTEM ARCHITECTURE
[Block diagram: the graphics pipeline stages (geometry gather, geometry processing, triangle processing, pixel processing, Z/blend) are split between the CPU, with its core logic and system memory, and the Voodoo card's FBI and TMU units with their frame-buffer (FB) and texture (TEX) memories.]
GEFORCE GTX280 SYSTEM ARCHITECTURE
[Block diagram: the same pipeline stages (geometry gather, geometry processing, triangle processing, pixel processing, Z/blend) now run on the GPU with its own GPU memory; the CPU, with core logic and system memory, keeps scene management and shares the physics and AI work with the GPU.]
CUDA ARCHITECTURE ROADMAP
SOUL OF NVIDIA’S GPU ROADMAP
• Increase Performance / Watt
• Make Parallel Programming Easier
• Run more of the Application on the GPU
MYTHS ABOUT CUDA
• You have to port your entire application to the
GPU
• It is really hard to accelerate your application
• There is a PCI-e Bottleneck
CUDA MODELS
• Device Model
• Execution Model
DEVICE MODEL
• Scalar processor
• Multiprocessor: many scalar processors + register file + shared memory
DEVICE MODEL
Multiprocessor Device
DEVICE MODEL
[Block diagram: the host feeds an input assembler and a thread execution manager; an array of multiprocessors, each with its parallel data cache and texture unit, performs load/store accesses to global memory.]
HARDWARE PRESENTATION
GeForce GTS 450
HARDWARE PRESENTATION
GeForce GTS 450 specifications
HARDWARE PRESENTATION
GeForce GTX 470
HARDWARE PRESENTATION
GeForce GTX 470 specifications
HARDWARE PRESENTATION
HARDWARE PRESENTATION
GeForce 8600 GT/GTS specifications
EXECUTION MODEL
Vocabulary:
• Host: the CPU.
• Device: the GPU.
• Kernel: a piece of code executed on the GPU (a function or program).
• SIMT: Single Instruction, Multiple Threads.
• Warp: a set of 32 threads; the minimum unit of work scheduled in SIMT.
EXECUTION MODEL
• A CUDA kernel is executed by an array of threads (SIMT).
• All threads execute the same code.
• Each thread has a unique identifier (threadID (x,y,z)) — see the thread-ID sketch below.
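As a minimal sketch (not from the slides; the kernel name is invented, and a GPU with device-side printf, i.e. compute capability 2.0 or newer, is assumed), the following kernel shows how each thread combines blockIdx, blockDim and threadIdx into a unique global index:

#include <stdio.h>

// Hypothetical kernel: each thread prints the global index it derives
// from its block and thread identifiers (1-D launch assumed).
__global__ void who_am_i(void) {
    int global_id = blockIdx.x * blockDim.x + threadIdx.x;
    printf("block %d, thread %d -> global id %d\n",
           blockIdx.x, threadIdx.x, global_id);
}

int main(void) {
    who_am_i<<<2, 4>>>();       // 2 blocks of 4 threads each
    cudaDeviceSynchronize();    // wait for the kernel (and its printf output)
    return 0;
}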
EXECUTION MODEL - SOFTWARE
• Thread: the smallest logical unit.
• Block: a set of threads (max 512 per block).
  • Private shared memory
  • Barrier for thread synchronization (see the barrier sketch after this list)
• Grid: a set of blocks.
  • No grid-level barrier: no synchronization between blocks
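A minimal sketch of the block-level barrier (not from the slides; the kernel name and block size are invented): the threads of one block stage data in their private shared memory, synchronize with __syncthreads(), and only then read what their neighbours wrote. No comparable barrier exists across the blocks of a grid.

#include <stdio.h>

#define BLOCK 8   // hypothetical block size used only for this sketch

// Hypothetical kernel: reverse BLOCK elements using shared memory and a barrier.
__global__ void reverse_in_block(int *data) {
    __shared__ int tile[BLOCK];      // private shared memory of the block
    int t = threadIdx.x;
    tile[t] = data[t];               // each thread writes one element
    __syncthreads();                 // barrier: wait until every write is done
    data[t] = tile[BLOCK - 1 - t];   // now it is safe to read other threads' data
}

int main(void) {
    int h[BLOCK], *d;
    for (int i = 0; i < BLOCK; i++) h[i] = i;
    cudaMalloc((void **)&d, sizeof(h));
    cudaMemcpy(d, h, sizeof(h), cudaMemcpyHostToDevice);
    reverse_in_block<<<1, BLOCK>>>(d);            // one block of BLOCK threads
    cudaMemcpy(h, d, sizeof(h), cudaMemcpyDeviceToHost);
    for (int i = 0; i < BLOCK; i++) printf("%d ", h[i]);   // prints 7 6 5 4 3 2 1 0
    printf("\n");
    cudaFree(d);
    return 0;
}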
EXECUTION MODEL
Specified by the programmer at runtime:
- Number of blocks (gridDim)
- Block size (blockDim)
CUDA kernel invocation (see the full sketch below):
f<<<G, B>>>(a, b, c)
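A complete, hedged sketch of this launch form (the kernel body and the sizes are invented here, and cudaMallocManaged assumes a CUDA version with unified memory, newer than the toolkits shown in these slides):

#include <stdio.h>

// Hypothetical kernel matching the f<<<G, B>>>(a, b, c) form on the slide.
__global__ void f(const float *a, const float *b, float *c) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    c[i] = a[i] + b[i];
}

int main(void) {
    const int N = 4096;
    float *a, *b, *c;
    cudaMallocManaged(&a, N * sizeof(float));   // unified memory keeps the sketch short
    cudaMallocManaged(&b, N * sizeof(float));
    cudaMallocManaged(&c, N * sizeof(float));
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0f * i; }

    dim3 B(256);             // block size (blockDim)
    dim3 G(N / B.x);         // grid size (gridDim) = 16 blocks
    f<<<G, B>>>(a, b, c);    // kernel invocation: f<<<G, B>>>(a, b, c)
    cudaDeviceSynchronize();

    printf("c[100] = %f\n", c[100]);   // expect 300.000000
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}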
EXECUTION MODEL - MEMORY ARCHITECTURE
EXECUTION MODEL
• Each thread runs on a scalar processor.
• Thread blocks run on a multiprocessor (see the device-query sketch below).
• A grid runs a single CUDA kernel.
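To see these resources on a concrete card, a small host-side query (not part of the slides) can report the number of multiprocessors and the per-block limits:

#include <stdio.h>

// Hypothetical device query: report the resources the execution model maps onto.
int main(void) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // properties of device 0
    printf("Device:                %s\n", prop.name);
    printf("Multiprocessors:       %d\n", prop.multiProcessorCount);
    printf("Max threads per block: %d\n", prop.maxThreadsPerBlock);
    printf("Warp size:             %d\n", prop.warpSize);
    return 0;
}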
SCHEDULE
[Scheduling diagram: over time the multiprocessor interleaves instructions from different warps, e.g. warp 8 instruction 11, warp 1 instruction 42, warp 3 instruction 95, warp 8 instruction 12, ..., warp 3 instruction 96. Blocks 1..n are each split into warps 1..m.]
• Threads are grouped into blocks
• IDs are assigned to blocks and threads
• The threads of the blocks are distributed among the multiprocessors
• The threads of a block are grouped into warps
• A warp is the smallest scheduling unit and consists of 32 threads
• Several warps reside on each multiprocessor, but only one is executing at a time
CODE EXAMPLE
The following program calculates and prints the squares of the first 100 integers.
// 1) Include header files
#include <stdio.h>
#include <conio.h>
#include <cuda.h>
// 2) Kernel that executes on the CUDA device
__global__ void square_array(float *a, int N) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < N)
        a[idx] = a[idx] * a[idx];
}
// 3) main() routine, the entry point that runs on the host CPU
int main(void) {
CODE EXAMPLE
// 3.1:- Define pointers to the host and device arrays
float *a_h, *a_d;
// 3.2:- Define other variables used in the program, e.g. the array size
const int N = 100;
size_t size = N * sizeof(float);
// 3.3:- Allocate the array on the host
a_h = (float *)malloc(size);
// 3.4:- Allocate the array on the device (DRAM of the GPU)
cudaMalloc((void **)&a_d, size);
// Initialize the host array
for (int i = 0; i < N; i++)
    a_h[i] = (float)i;
CODE EXAMPLE
// 3.5:- Copy the data from the host array to the device array
cudaMemcpy(a_d, a_h, size, cudaMemcpyHostToDevice);
// 3.6:- Kernel call, execution configuration
int block_size = 4;
int n_blocks = N / block_size + (N % block_size != 0);   // round up so every element is covered
square_array<<<n_blocks, block_size>>>(a_d, N);
// 3.7:- Retrieve the result from the device into host memory
cudaMemcpy(a_h, a_d, sizeof(float) * N, cudaMemcpyDeviceToHost);
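A common equivalent for the n_blocks line above, not taken from the slides, is integer ceiling division:

// Hypothetical drop-in replacement for the n_blocks computation in step 3.6:
// ceiling division rounds up so that n_blocks * block_size >= N.
int n_blocks = (N + block_size - 1) / block_size;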
CODE EXAMPLE
// 3.8:- Print the result
for (int i = 0; i < N; i++)
    printf("%d\t%f\n", i, a_h[i]);
// 3.9:- Free the allocated memory on the host and the device
free(a_h);
cudaFree(a_d);
getch();
}
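With the printf format restored to "%d\t%f\n", the program prints each index followed by its square (0 0.000000, 1 1.000000, 2 4.000000, ..., 99 9801.000000). The conio.h header and getch() call are Windows-specific; on Linux they would typically be dropped and the file compiled with nvcc, e.g. nvcc square.cu -o square.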
CUDA LIBRARIES
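The slide itself only pointed at the library ecosystem. As one hedged illustration (array sizes and values are invented here), a SAXPY call through cuBLAS, which ships with the CUDA toolkit, could look like this (link with -lcublas):

#include <stdio.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

// Hypothetical cuBLAS usage sketch: y = alpha * x + y (SAXPY) on the device.
int main(void) {
    const int n = 1024;
    const float alpha = 2.0f;
    float hx[n], hy[n];
    for (int i = 0; i < n; i++) { hx[i] = 1.0f; hy[i] = 3.0f; }

    float *dx, *dy;
    cudaMalloc((void **)&dx, n * sizeof(float));
    cudaMalloc((void **)&dy, n * sizeof(float));
    cudaMemcpy(dx, hx, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, n * sizeof(float), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    cublasSaxpy(handle, n, &alpha, dx, 1, dy, 1);   // y = 2*x + y, entirely on the GPU
    cublasDestroy(handle);

    cudaMemcpy(hy, dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("hy[0] = %f\n", hy[0]);   // expect 5.000000
    cudaFree(dx); cudaFree(dy);
    return 0;
}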
TESTING
TESTING
TESTING
EXAMPLES
• Video example with an NVIDIA Tesla
• Development environment
RADIX SORT RESULTS
[Chart: radix sort execution time (vertical axis from 0 to 1.6) for input sizes of 1,000,000, 10,000,000, 51,000,000 and 100,000,000 elements, comparing the GTS 450, GTX 470, GeForce 8600 and GTX 560M.]
CONCLUSION
• Easy to use and powerful, so it is worth it!
• GPU computing is the future. The results confirm our theory, and the industry is giving it more and more importance.
• In the coming years we will see more applications that use parallel computing.
DOCUMENTATION & LINKS
• http://www.nvidia.es/object/cuda_home_new_es.html
• http://www.nvidia.com/docs/IO/113297/ISC-Briefing-Sumit-June11-Final.pdf
• http://cs.nyu.edu/courses/spring12/CSCI-GA.3033-012/lecture5.pdf
• http://www.hpca.ual.es/~jmartine/CUDA/SESION3_CUDA_GPU_EMG_JAM.pdf
• http://www.geforce.com/hardware/technology/cuda/supported-gpus
• http://en.wikipedia.org/wiki/GeForce_256
• http://en.wikipedia.org/wiki/CUDA
• https://developer.nvidia.com/technologies/Libraries
• https://www.udacity.com/wiki/cs344/troubleshoot_gcc47
• http://stackoverflow.com/questions/12986701/installing-cuda-5-samples-in-ubuntu-12-10
QUESTIONS?