Hi! I am Srihari Unnikrishnan (@pythongiant) and this is a small primer on getting started with CUDA for machine learning! Send me an email at srihari[dot]unnikrishnan[at]gmail[dot]com if you have any questions or any suggestions/recommendations! I would be more than glad to help you out.
Ever wanted to harness the power of your GPU for lightning-fast computations? This guide takes you from having no GPU knowledge to writing your own CUDA kernels that can speed up your code by 10-100x.
This isn't just another technical manual. We'll build your intuition step by step:
No prior GPU knowledge required! We'll explain everything from the ground up.
Each chapter builds on the previous one. Start with Chapter 1 and work through in order.
The big picture: CPUs vs GPUs
Your first CUDA program
Why memory is everything in GPU programming
Ready-to-use techniques for parallel computing
Connect GPU code to real applications
nvidia-smi # Should show your GPU
nvcc --version # Should show CUDA toolkit
Most GPU guides throw technical terms at you. This guide builds the concepts up from scratch, using analogies before jargon.
By the end, you'll understand GPU programming deeply enough to write efficient code for your own problems.
Head to Chapter 1 to learn why GPUs can be so much faster than CPUs for the right problems.
Welcome! By the end of this chapter, you'll understand why your gaming GPU can also do serious computing work. No technical background needed - we'll use simple analogies.
For years, computers got faster by making CPUs run at higher speeds. But around 2005, this stopped working. CPUs couldn't get any faster without melting or using insane amounts of power.
At the same time, people needed to solve bigger and bigger problems - training neural networks, processing images, simulating physical systems.
These problems all have something in common: doing the same simple operation on huge amounts of data.
Imagine you're running a restaurant: a CPU is like a couple of master chefs who can cook any dish, but only a few at a time. A GPU is like hundreds of line cooks who each repeat one simple step, turning out thousands of identical plates at once.
GPUs don't make one task fast. They make many identical tasks fast simultaneously.
CUDA is NVIDIA's system for writing programs that run on GPUs. It lets you write functions in a C++-like language, launch them across thousands of threads, and manage GPU memory yourself.
CUDA gives you direct control over the GPU. You decide exactly how work gets divided up and executed.
CUDA programs have two parts: host code that runs on the CPU, and device code (kernels) that runs on the GPU.
You must explicitly copy data between CPU and GPU memory.
A kernel is a function that runs on the GPU. When you launch a kernel, you tell the GPU:
"Run this function 10,000 times, each time with different data"
Each run of the function is a thread. Threads are grouped into blocks, and blocks are arranged in a grid.
Don't worry about the details yet - we'll see this in action in the next chapter.
CUDA doesn't automatically make your code parallel. You have to decide how the work is split into threads, move data between CPU and GPU memory, and launch the kernels yourself.
This makes CUDA more work than automatic systems, but you get predictable, high performance.
Use CUDA when you have: large amounts of data, the same operation applied to every element, and work that can run independently in parallel.
Don't use CUDA for: small problems, inherently sequential logic, or cases where moving data to the GPU costs more than the computation saves.
This chapter doesn't have code yet - we're building concepts first. But here's what we'll do in the next chapter:
// A simple kernel that adds 1 to each element
__global__ void addOne(int* data, int N) {
int i = blockIdx.x * blockDim.x + threadIdx.x; // which element this thread handles
if (i < N) {
data[i] = data[i] + 1;
}
}
In Chapter 2, we'll run this code and see how it works!
Ready for your first CUDA program? Let's go to Chapter 2!
Now we get hands-on! You'll write and run your first CUDA program. We'll see how the abstract concepts from Chapter 1 become real code.
A program that demonstrates how threads, blocks, and warps actually execute on the GPU - including warp divergence and synchronization.
Every CUDA program has two parts: host code that runs on the CPU (allocates memory, launches kernels) and device code that runs on the GPU (the kernels themselves).
Here's the GPU function we'll write:
__global__ void execution_model_demo(int *out, int N)
{
// Each thread figures out which data element it should handle
int global_idx = blockIdx.x * blockDim.x + threadIdx.x;
int local_idx = threadIdx.x;
// Skip if this thread is beyond our data size
if (global_idx >= N) return;
// Calculate which "warp" this thread is in (group of 32 threads)
int warp_id = local_idx / 32;
int lane_id = local_idx % 32;
// Intentional branching to show warp divergence
int value;
if (lane_id < 16) {
value = global_idx * 2; // First half of warp does this
} else {
value = global_idx * 3; // Second half does this
}
// All threads in block must reach here before any can continue
__syncthreads();
// Write result to memory
out[global_idx] = value;
}
int global_idx = blockIdx.x * blockDim.x + threadIdx.x;
threadIdx.x - This thread's position within its block (0-63)
blockIdx.x - Which block this thread is in (0-3 for our example)
blockDim.x - How many threads per block (64 in our case)
So thread 5 in block 2 has global_idx = 2 * 64 + 5 = 133.
int warp_id = local_idx / 32; // Which group of 32 threads
int lane_id = local_idx % 32; // Position within the group
GPU hardware runs threads in groups of 32 called warps. All 32 threads in a warp execute the same instruction at the same time.
if (lane_id < 16) {
value = global_idx * 2;
} else {
value = global_idx * 3;
}
Within one warp, threads with lane_id 0-15 take the "if" branch while threads with lane_id 16-31 take the "else" branch.
The hardware runs these sequentially, not in parallel! The whole warp waits while first the "if" executes, then the "else".
__syncthreads();
All threads in the same block must reach this point before any can continue. This is crucial when threads need to share data.
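To see why the barrier matters, here is a tiny standalone kernel (hypothetical, not part of the chapter's demo file) where each thread reads a value written by its neighbor - without __syncthreads() the read could race ahead of the write:
__global__ void neighbor_read_demo(int *out) {
    __shared__ int buf[64];                 // assumes a 64-thread block, like our demo
    int tid = threadIdx.x;
    buf[tid] = tid * 10;                    // each thread writes its own slot
    __syncthreads();                        // wait until every write is visible to the block
    out[tid] = buf[(tid + 1) % blockDim.x]; // safe only because of the barrier above
}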
nvcc execute_model_demo.cu -o demo
./demo
Index 0 -> 0 # lane_id=0 (<16) so 0*2 = 0
Index 1 -> 2 # lane_id=1 (<16) so 1*2 = 2
...
Index 15 -> 30 # lane_id=15 (<16) so 15*2 = 30
Index 16 -> 48 # lane_id=16 (>=16) so 16*3 = 48
Index 17 -> 51 # lane_id=17 (>=16) so 17*3 = 51
...
Notice the jump at indices 16, 48, 80, etc. - these are the second halves of each 32-thread warp, so the jumps reveal the warp boundaries at 0, 32, 64, and so on.
When you launch a kernel:
execution_model_demo<<<blocks, threads_per_block>>>(d_out, N);
blocks = 4 - Launch 4 blocks
threads_per_block = 64 - 64 threads each
The GPU scheduler assigns blocks to available processors and manages execution.
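For completeness, here is a minimal sketch of the host side that could drive this kernel (assuming it lives in the same .cu file as the kernel; the names d_out and h_out are illustrative):
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const int blocks = 4;
    const int threads_per_block = 64;
    const int N = blocks * threads_per_block;     // 256 elements total

    int *d_out;
    cudaMalloc(&d_out, N * sizeof(int));          // GPU memory for the results

    execution_model_demo<<<blocks, threads_per_block>>>(d_out, N);
    cudaDeviceSynchronize();                      // wait for the kernel to finish

    int h_out[N];
    cudaMemcpy(h_out, d_out, N * sizeof(int), cudaMemcpyDeviceToHost);

    for (int i = 0; i < N; i++)
        printf("Index %d -> %d\n", i, h_out[i]);

    cudaFree(d_out);
    return 0;
}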
__syncthreads() coordinates threads within a block.
Try changing the branch so only a quarter of each warp takes the first path:
if (lane_id < 8) { // Only first quarter
value = global_idx * 2;
} else {
value = global_idx * 3;
}
See how this affects performance.
Try adding extra synchronization points:
__syncthreads();
// Do some work
__syncthreads();
// Do more work
Try different block sizes:
const int threads_per_block = 128; // Try 32, 64, 128, 256
Rerun the program after each change and compare the output and timing.
You now understand how CUDA code actually executes! In Chapter 3, we'll learn about GPU memory - why it's so important for performance.
In this chapter, you'll learn why memory is the #1 factor in GPU performance. Bad memory usage can make your code 10x slower; good memory usage can make it 100x faster than the CPU.
GPUs have multiple types of memory, each with different speed/cost tradeoffs: registers (fastest, private to each thread), shared memory (fast, on-chip, shared within a block), and global memory (huge, but hundreds of cycles away).
The way threads access memory determines if your code is fast or slow.
// Thread 0 reads data[0], Thread 1 reads data[1], etc.
// Hardware combines 32 reads into 1 big memory transaction
int value = global_data[threadIdx.x];
Result: Full memory bandwidth utilization.
// Thread 0 reads data[0], Thread 1 reads data[2], Thread 2 reads data[4]
// Hardware can't combine - needs multiple transactions
int value = global_data[threadIdx.x * 2];
Result: 2x-10x slower!
// Each thread reads a random location
int value = global_data[random_indices[threadIdx.x]];
Result: Potentially 32x slower!
Shared memory is fast on-chip memory that threads in the same block can share.
Instead of each thread loading the same data from slow global memory:
__shared__ float shared_data[256];
// Load data
shared_data[threadIdx.x] = global_data[blockIdx.x * 256 + threadIdx.x];
__syncthreads(); // Wait for all loads to complete
// Now all threads can read this data quickly
float sum = 0;
for (int i = 0; i < 256; i++) {
sum += shared_data[i]; // Fast shared memory reads
}
When you need to transpose or reorganize data:
__shared__ float tile[32][32];
// Global coordinates of the element this thread loads (32x32 thread block assumed)
int col = blockIdx.x * 32 + threadIdx.x;
int row = blockIdx.y * 32 + threadIdx.y;
// Load data in coalesced way (adjacent threads read adjacent columns)
tile[threadIdx.y][threadIdx.x] = input[row * N + col];
__syncthreads();
// Write with the block coordinates swapped so the store is also coalesced
col = blockIdx.y * 32 + threadIdx.x;
row = blockIdx.x * 32 + threadIdx.y;
output[row * N + col] = tile[threadIdx.x][threadIdx.y];
Shared memory is divided into 32 banks. If multiple threads access the same bank simultaneously, accesses happen one at a time (serialized).
__shared__ float data[128];
float good = data[threadIdx.x];     // No conflict: each thread hits a different bank
float bad  = data[threadIdx.x * 2]; // Stride-2: pairs of threads hit the same bank (2-way conflict)
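A common fix (not shown in the chapter's snippet) is to pad the shared array so strided or column-wise accesses spread across different banks; a small hypothetical sketch:
__global__ void padded_shared_demo(float *out) {
    // 33 columns instead of 32: the one-float padding shifts each row by one bank,
    // so a column access from a warp hits 32 different banks instead of just one.
    __shared__ float tile[32][33];

    int x = threadIdx.x;                  // 0..31 within one warp
    tile[x][0] = (float)x;                // column write: conflict-free thanks to the padding
    __syncthreads();

    out[x] = tile[x][0];                  // column read: also conflict-free
}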
The code in this chapter demonstrates these concepts:
nvcc memory_management.cu -o memory_demo
./memory_demo
Modify the coalesced access to be uncoalesced:
// Change this line in the kernel:
int value = global_in[global_idx]; // Coalesced
// To this:
int value = global_in[global_idx * 2]; // Uncoalesced
Measure the performance difference!
Add more data reuse in the reduction example:
// Instead of summing all elements once, sum them multiple times
for (int repeat = 0; repeat < 10; repeat++) {
float sum = 0;
for (int i = 0; i < blockDim.x; i++) {
sum += shared_data[i];
}
// Use the sum somehow
}
Change the bank conflict pattern:
// Try different strides
int conflict_idx = (local_idx * 4) % blockDim.x; // Stride-4
More active threads = more warps = better at hiding memory latency.
Modern GPUs have 500-2000 GB/s memory bandwidth. Your goal: achieve 80%+ of that.
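One way to check how close you are is to time a kernel with CUDA events and divide the bytes moved by the elapsed time; a sketch (the kernel name my_kernel and its arguments are placeholders):
// Hypothetical bandwidth check: time a memory-bound kernel with CUDA events.
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord(start);
my_kernel<<<blocks, threads>>>(d_in, d_out, N);   // placeholder launch
cudaEventRecord(stop);
cudaEventSynchronize(stop);

float ms = 0.0f;
cudaEventElapsedTime(&ms, start, stop);

// Example: a kernel that reads N floats and writes N floats moves 8*N bytes
double gigabytes = 2.0 * N * sizeof(float) / 1e9;
printf("Achieved bandwidth: %.1f GB/s\n", gigabytes / (ms / 1000.0));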
In the chapter's matrix multiplication example, better memory usage alone - coalesced loads plus shared-memory tiling - gives roughly a 4x speedup!
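For reference, here is a sketch of the shared-memory tiling idea behind that speedup (an illustration, not necessarily the chapter's exact code; assumes square N x N matrices and 16 x 16 thread blocks):
#define TILE 16

// Sketch: C = A * B with shared-memory tiling. Each tile element loaded from
// global memory is reused TILE times from fast shared memory.
__global__ void matmul_tiled(const float *A, const float *B, float *C, int N) {
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;

    for (int t = 0; t < (N + TILE - 1) / TILE; t++) {
        // Coalesced loads of one tile of A and one tile of B
        int aCol = t * TILE + threadIdx.x;
        int bRow = t * TILE + threadIdx.y;
        As[threadIdx.y][threadIdx.x] = (row < N && aCol < N) ? A[row * N + aCol] : 0.0f;
        Bs[threadIdx.y][threadIdx.x] = (bRow < N && col < N) ? B[bRow * N + col] : 0.0f;
        __syncthreads();

        // Multiply the two tiles out of shared memory
        for (int k = 0; k < TILE; k++)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }

    if (row < N && col < N)
        C[row * N + col] = acc;
}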
You now understand GPU memory. In Chapter 4, we'll apply these patterns to real algorithms like reductions and image processing.
Now you know the basics! This chapter shows you ready-to-use patterns for speeding up real computations. These patterns appear in most GPU-accelerated code.
Four fundamental patterns - map, reduce, scan, and stencil - solve 90% of parallel problems:
Use when: Each output depends only on the corresponding input.
Examples: Adding vectors, scaling arrays, applying math functions.
__global__ void map_pattern(float *input, float *output, float scale, int N) {
int idx = blockIdx.x * blockDim.x + threadIdx.x;
if (idx >= N) return;
output[idx] = input[idx] * scale; // Each thread: one operation
}
Why it works: No dependencies between elements. Thread 5's work doesn't affect thread 10's work.
Performance: Memory-bound (limited by how fast you can read/write data).
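Launching it is a one-liner; a minimal sketch (assuming d_in and d_out are already allocated on the GPU):
const int threads = 256;
const int blocks = (N + threads - 1) / threads;          // round up so every element is covered
map_pattern<<<blocks, threads>>>(d_in, d_out, 2.0f, N);  // scale every element by 2
cudaDeviceSynchronize();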
Use when: You need to combine all elements into one value (sum, max, min, etc.).
Challenge: Sequential sum is slow. Parallel tree reduction is fast!
__global__ void reduce_sum(float *input, float *block_sums, int N) {
int idx = blockIdx.x * blockDim.x + threadIdx.x;
int tid = threadIdx.x;
__shared__ float shared_data[256];
// Load data
shared_data[tid] = (idx < N) ? input[idx] : 0.0f;
__syncthreads();
// Tree reduction: halve active threads each step
for (int stride = blockDim.x/2; stride > 0; stride >>= 1) {
if (tid < stride) {
shared_data[tid] += shared_data[tid + stride];
}
__syncthreads();
}
// Thread 0 writes this block's sum
if (tid == 0) block_sums[blockIdx.x] = shared_data[0];
}
Why it works: Instead of N sequential steps, do log₂N parallel steps.
Speedup: From O(N) to O(log N) time!
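Note that reduce_sum leaves one partial sum per block in block_sums; a minimal sketch of finishing the job on the host (names like d_block_sums and num_blocks are assumptions):
// Copy the per-block partial sums back and add them up on the CPU.
// For very large inputs you could instead run reduce_sum again on block_sums.
float *h_block_sums = new float[num_blocks];
cudaMemcpy(h_block_sums, d_block_sums, num_blocks * sizeof(float),
           cudaMemcpyDeviceToHost);

float total = 0.0f;
for (int b = 0; b < num_blocks; b++)
    total += h_block_sums[b];
delete[] h_block_sums;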
Use when: Each output needs the sum/total of all previous elements.
Examples: Cumulative sums, finding positions in sorted data.
__global__ void scan_hillis_steele(float *input, float *output, int N) {
int idx = blockIdx.x * blockDim.x + threadIdx.x;
int tid = threadIdx.x;
__shared__ float temp[256];
temp[tid] = (idx < N) ? input[idx] : 0.0f;
__syncthreads();
// Each step: add from increasing distances
for (int offset = 1; offset < blockDim.x; offset *= 2) {
float val = 0.0f;
if (tid >= offset) {
val = temp[tid - offset];
}
__syncthreads();
if (tid >= offset) {
temp[tid] += val;
}
__syncthreads();
}
if (idx < N) output[idx] = temp[tid];
}
Input: [1, 2, 3, 4]
Output: [1, 3, 6, 10] (cumulative sums)
Use when: Each output depends on nearby inputs.
Examples: Image blurring, physics simulations, edge detection.
__global__ void stencil_1d(float *input, float *output, int N) {
int idx = blockIdx.x * blockDim.x + threadIdx.x;
int tid = threadIdx.x;
__shared__ float shared[258]; // Block data + halo
// Load main data
if (idx < N) {
shared[tid + 1] = input[idx];
} else {
shared[tid + 1] = 0.0f;
}
// Load halo (boundary elements)
if (tid == 0) {
shared[0] = (idx > 0) ? input[idx - 1] : 0.0f;
}
if (tid == blockDim.x - 1) {
shared[tid + 2] = (idx < N - 1) ? input[idx + 1] : 0.0f;
}
__syncthreads();
// Compute average of 3 neighbors
if (idx < N) {
float left = shared[tid];
float center = shared[tid + 1];
float right = shared[tid + 2];
output[idx] = (left + center + right) / 3.0f;
}
}
Why shared memory? Instead of 768 global memory loads, do 258 loads + fast shared reads.
The code demonstrates all four patterns with working examples:
nvcc classical_algorithms.cu -o patterns_demo
./patterns_demo
You'll see all four patterns run on sample data and print their results.
// Try different operations
output[idx] = sin(input[idx]) + cos(input[idx]);
output[idx] = (input[idx] > 0.5f) ? 1.0f : 0.0f; // Threshold
// Instead of sum, find maximum
if (tid < stride) {
shared_data[tid] = max(shared_data[tid], shared_data[tid + stride]);
}
// 5-point stencil instead of 3-point
// Need bigger halo region!
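If you get stuck, here is one possible sketch of the 5-point version (radius 2, so two halo elements per side; same structure as the 3-point kernel above):
__global__ void stencil_1d_5pt(float *input, float *output, int N) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int tid = threadIdx.x;

    __shared__ float shared[260];            // blockDim.x (256) + 2 halo elements per side

    // Main data, shifted over by the halo width of 2
    shared[tid + 2] = (idx < N) ? input[idx] : 0.0f;

    // Two threads on each side of the block load the halo
    if (tid < 2)
        shared[tid] = (idx >= 2) ? input[idx - 2] : 0.0f;
    if (tid >= blockDim.x - 2)
        shared[tid + 4] = (idx + 2 < N) ? input[idx + 2] : 0.0f;
    __syncthreads();

    // Average of 5 neighbors
    if (idx < N) {
        output[idx] = (shared[tid] + shared[tid + 1] + shared[tid + 2] +
                       shared[tid + 3] + shared[tid + 4]) / 5.0f;
    }
}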
You now have tools for most parallel computations! In Chapter 5, we'll connect this to real Python/PyTorch code.
You've learned CUDA fundamentals. Now let's connect this to real Python code! You'll build custom operations that work seamlessly with PyTorch and machine learning.
A complete PyTorch extension with: a custom fused CUDA kernel, C++ bindings with autograd support, and tests that benchmark it against stock PyTorch ops.
Your CUDA knowledge → PyTorch extension → Faster ML models
First, check your setup:
# Check CUDA and PyTorch
python -c "import torch; print('CUDA available:', torch.cuda.is_available())"
nvcc --version
cd Chapter\ 5/
pip install .
This compiles everything and installs the custom_ops module.
Create load_extension.py:
from torch.utils.cpp_extension import load
# Compile and load the extension
custom_ops = load(
name='custom_ops',
sources=['custom_ops.cpp', 'custom_kernels.cu'],
extra_cuda_cflags=['-O3', '--use_fast_math'],
verbose=True
)
print("Extension loaded successfully!")
Run it:
python load_extension.py
python test_custom_ops.py
You should see all tests pass, including performance benchmarks.
relu(input * scale + bias)
Combines 3 operations into 1 kernel - saves memory bandwidth and kernel launches.
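The real kernel lives in custom_kernels.cu; as a rough illustration (not necessarily the file's exact code), a fused forward kernel can look like this, assuming bias has length C and is broadcast along the last dimension:
// Hypothetical sketch of a fused relu(input * scale + bias) forward kernel.
// One thread per element; scale and bias are applied in registers, so no
// intermediate tensors ever touch global memory.
__global__ void fused_op_kernel(const float *input, const float *bias,
                                float *output, float scale, int N, int C) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= N) return;

    float v = input[idx] * scale + bias[idx % C];
    output[idx] = v > 0.0f ? v : 0.0f;   // ReLU
}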
import torch
import custom_ops
# Create data on GPU
input_tensor = torch.randn(32, 64, device='cuda', requires_grad=True)
bias = torch.randn(64, device='cuda', requires_grad=True)
# Your custom operation
output = custom_ops.fused_op_forward(input_tensor, bias, scale=2.0)
# Automatic gradients work!
loss = output.sum()
loss.backward()
print("Gradients computed:", input_tensor.grad is not None)
The tests show your custom kernel vs PyTorch's separate operations:
Custom fused kernel: 0.45 ms
PyTorch separate ops: 0.67 ms
Speedup: 1.5x
Why faster? One kernel launch instead of three, no intermediate memory writes.
Let's add a simple element-wise square operation:
Add the kernel (custom_kernels.cu):
__global__ void square_kernel(const float* input, float* output, int N) {
int idx = blockIdx.x * blockDim.x + threadIdx.x;
if (idx >= N) return;
output[idx] = input[idx] * input[idx];
}
torch::Tensor square_cuda(torch::Tensor input) {
const int N = input.numel();
auto output = torch::empty_like(input);
const int threads = 256;
const int blocks = (N + threads - 1) / threads;
square_kernel<<<blocks, threads>>>(input.data_ptr<float>(),
                                   output.data_ptr<float>(), N);
return output;
}
Add the binding and autograd support (custom_ops.cpp):
torch::Tensor square_cuda(torch::Tensor input); // Declaration
class SquareFunction : public torch::autograd::Function<SquareFunction> {
public:
static torch::Tensor forward(torch::autograd::AutogradContext *ctx,
torch::Tensor input) {
TORCH_CHECK(input.is_cuda(), "Input must be CUDA tensor");
ctx->save_for_backward({input});
return square_cuda(input);
}
static torch::autograd::tensor_list backward(
torch::autograd::AutogradContext *ctx,
torch::autograd::tensor_list grad_outputs)
{
auto input = ctx->get_saved_variables()[0];
auto grad_output = grad_outputs[0];
// d/dx(x²) = 2x
auto grad_input = 2.0 * input * grad_output;
return {grad_input};
}
};
torch::Tensor square_forward(torch::Tensor input) {
return SquareFunction::apply(input);
}
// In PYBIND11_MODULE:
m.def("square_forward", &square_forward, "Element-wise square");
import torch
import custom_ops
x = torch.randn(10, device='cuda', requires_grad=True)
y = custom_ops.square_forward(x) # y = x²
loss = y.sum()
loss.backward()
print("x:", x)
print("y:", y)
print("x.grad:", x.grad) # Should be 2*x
Wrap your operation in a PyTorch module:
import torch.nn as nn
import custom_ops
class CustomLayer(nn.Module):
def __init__(self, features):
super().__init__()
self.weight = nn.Parameter(torch.randn(features, features))
self.bias = nn.Parameter(torch.zeros(features))
def forward(self, x):
# Use your custom fused operation
return custom_ops.fused_op_forward(
torch.matmul(x, self.weight),
self.bias,
scale=1.0
)
# Build model
model = nn.Sequential(
nn.Linear(784, 256),
CustomLayer(256), # Your custom CUDA layer
nn.Linear(256, 10)
).cuda()
# Train normally
optimizer = torch.optim.Adam(model.parameters())
# ... training loop ...
Use custom CUDA when: profiling shows a real bottleneck, you can fuse several operations into one kernel, or PyTorch has no efficient built-in for what you need.
Don't use custom CUDA for: operations PyTorch already provides as optimized kernels, or code that isn't actually a performance bottleneck.
# CUDA version mismatch
pip install torch --index-url https://download.pytorch.org/whl/cu118
# Missing CUDA
export CUDA_HOME=/usr/local/cuda
# Tensors not on GPU
x = x.cuda()
# Wrong tensor types
x = x.float()
# Profile your code
with torch.profiler.profile() as prof:
    ...  # your code here
print(prof.key_averages().table())
You now have the complete pipeline: CUDA kernels → C++ bindings → Python/PyTorch integration.
Go build something fast!