CUDA Programming: From Zero to GPU Kernels

Hi! I am Srihari Unnikrishnan (@pythongiant) and this is a small primer on getting started with CUDA for machine learning! Send me an email at srihari[dot]unnikrishnan[at]gmail[dot]com if you have any questions, suggestions, or recommendations! I would be more than glad to help you out.

Introduction

Ever wanted to harness the power of your GPU for lightning-fast computations? This guide takes you from having no GPU knowledge to writing your own CUDA kernels that can speed up your code by 10-100x.

What You'll Learn

This isn't just another technical manual. We'll build your intuition step by step:

Who This Is For

No prior GPU knowledge required! We'll explain everything from the ground up.

How to Use This Guide

Each chapter builds on the previous one. Start with Chapter 1 and work through in order.

  1. Read the explanations - we use simple analogies (like comparing CPUs to chefs and GPUs to assembly lines)
  2. Run the code examples - see the concepts in action
  3. Experiment - modify the code and see what happens
  4. Apply to your problems - adapt the patterns to your own code

Chapters Overview

Chapter 1: Why GPUs Exist

The big picture: CPUs vs GPUs

Chapter 2: How CUDA Code Runs

Your first CUDA program

Chapter 3: GPU Memory Magic

Why memory is everything in GPU programming

Chapter 4: Common Speed-Up Patterns

Ready-to-use techniques for parallel computing

Chapter 5: Using CUDA in Python/PyTorch

Connect GPU code to real applications

Getting Started

  1. Check your setup:
nvidia-smi  # Should show your GPU
nvcc --version  # Should show CUDA toolkit
  2. Install requirements:
  3. Start coding! Each chapter has working code you can compile and run.

What Makes This Different

Most GPU guides throw technical terms at you. This guide:

By the end, you'll understand GPU programming deeply enough to write efficient code for your own problems.

Ready to Start?

Head to Chapter 1 to learn why GPUs can be so much faster than CPUs for the right problems.

Chapter 1: Why GPUs Exist and What CUDA Is

Welcome! By the end of this chapter, you'll understand why your gaming GPU can also do serious computing work. No technical background needed - we'll use simple analogies.

The Big Problem Modern Computing Hit

For years, computers got faster by making CPUs run at higher speeds. But around 2005, this stopped working. CPUs couldn't get any faster without melting or using insane amounts of power.

At the same time, people needed to solve bigger and bigger problems:

These problems all have something in common: doing the same simple operation on huge amounts of data.

CPUs vs GPUs: A Simple Analogy

Imagine you're running a restaurant:

CPU = Skilled Chef

GPU = Assembly Line Workers

GPUs don't make one task fast. They make many identical tasks fast simultaneously.

What CUDA Actually Is

CUDA is NVIDIA's system for writing programs that run on GPUs. It lets you:

  1. Write normal C/C++ code
  2. Mark certain functions to run on the GPU
  3. Tell the GPU to run that function thousands of times in parallel

CUDA gives you direct control over the GPU. You decide exactly how work gets divided up and executed.

The Two-World Model

CUDA programs have two parts:

CPU Side (Host)

GPU Side (Device)

You must explicitly copy data between CPU and GPU memory.
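As a concrete (if minimal) sketch of that host-side workflow - names like h_data and d_data are just illustrative conventions, and error checking is omitted:

#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    const int N = 1024;
    const size_t bytes = N * sizeof(int);

    int *h_data = (int*)malloc(bytes);   // host (CPU) memory
    for (int i = 0; i < N; i++) h_data[i] = i;

    int *d_data;
    cudaMalloc(&d_data, bytes);          // device (GPU) memory

    // Copy input to the GPU; kernel launches would go here; copy results back
    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);

    cudaFree(d_data);
    free(h_data);
    return 0;
}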

Your First CUDA Concept: The Kernel

A kernel is a function that runs on the GPU. When you launch a kernel, you tell the GPU:

"Run this function 10,000 times, each time with different data"

Each run of the function is a thread. Threads are grouped into blocks, and blocks are arranged in a grid.

Don't worry about the details yet - we'll see this in action in the next chapter.

Why CUDA Feels Different

CUDA doesn't automatically make your code parallel. You have to:

This makes CUDA more work than automatic systems, but you get predictable, high performance.

When CUDA Makes Sense

Use CUDA when you have:

Don't use CUDA for:

Try It Yourself

This chapter doesn't have code yet - we're building concepts first. But here's what we'll do in the next chapter:

// A simple kernel that adds 1 to each element
__global__ void addOne(int* data, int N) {
    // Each thread computes which element it should handle
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N) {
        data[i] = data[i] + 1;
    }
}

In Chapter 2, we'll run this code and see how it works!

Key Takeaways

Ready for your first CUDA program? Let's go to Chapter 2!

Chapter 2: Your First CUDA Program - Understanding How Code Runs on GPU

Now we get hands-on! You'll write and run your first CUDA program. We'll see how the abstract concepts from Chapter 1 become real code.

What You'll Build

A program that demonstrates:

The Basic Structure

Every CUDA program has two parts:

  1. CPU code - main program, memory management
  2. GPU code - the parallel kernel function

Your First Kernel

Here's the GPU function we'll write:

__global__ void execution_model_demo(int *out, int N)
{
    // Each thread figures out which data element it should handle
    int global_idx = blockIdx.x * blockDim.x + threadIdx.x;
    int local_idx  = threadIdx.x;

    // Calculate which "warp" this thread is in (group of 32 threads)
    int warp_id   = local_idx / 32;
    int lane_id   = local_idx % 32;

    // Intentional branching to show warp divergence
    int value;
    if (lane_id < 16) {
        value = global_idx * 2;  // First half of warp does this
    } else {
        value = global_idx * 3;  // Second half does this
    }

    // All threads in the block must reach this barrier before any can continue.
    // (This is why we don't return early above: every thread must hit it.)
    __syncthreads();

    // Write result to memory - only threads that map to a real element
    if (global_idx < N) {
        out[global_idx] = value;
    }
}

Understanding the Key Parts

Thread Identity

int global_idx = blockIdx.x * blockDim.x + threadIdx.x;

So with 64 threads per block (blockDim.x = 64), thread 5 in block 2 has global_idx = 2 * 64 + 5 = 133.

Warps - The Real Execution Unit

int warp_id = local_idx / 32;    // Which group of 32 threads
int lane_id = local_idx % 32;    // Position within the group

GPU hardware runs threads in groups of 32 called warps. All 32 threads in a warp execute the same instruction at the same time.

Warp Divergence - Why Branching Hurts

if (lane_id < 16) {
    value = global_idx * 2;
} else {
    value = global_idx * 3;
}

Within one warp:

The hardware runs these sequentially, not in parallel! The whole warp waits while first the "if" executes, then the "else".
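When the two branch bodies are this simple, you can often sidestep divergence entirely by turning the condition into arithmetic. A minimal sketch of that idea (same result as the branchy version above):

// Branch-free variant: every thread in the warp runs the same instructions,
// so nothing serializes. A ternary on simple values typically compiles to
// a select/predicated instruction rather than a divergent branch.
int multiplier = (lane_id < 16) ? 2 : 3;
int value = global_idx * multiplier;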

Synchronization

__syncthreads();

All threads in the same block must reach this point before any can continue. This is crucial when threads need to share data.
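As a small sketch of why that matters (my_value is a stand-in for whatever the thread computed): reading another thread's slot is only safe after the barrier.

__shared__ float buf[256];          // assumes blockDim.x == 256

buf[threadIdx.x] = my_value;        // each thread writes its own slot
__syncthreads();                    // wait until every write is visible

// Now it is safe to read a slot written by a *different* thread
float neighbor = buf[(threadIdx.x + 1) % blockDim.x];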

Running the Program

1. Compile

nvcc execute_model_demo.cu -o demo

2. Run

./demo

3. What You'll See

Index   0 -> 0    # lane_id=0 (<16) so 0*2 = 0
Index   1 -> 2    # lane_id=1 (<16) so 1*2 = 2
...
Index  15 -> 30   # lane_id=15 (<16) so 15*2 = 30
Index  16 -> 48   # lane_id=16 (>=16) so 16*3 = 48
Index  17 -> 51   # lane_id=17 (>=16) so 17*3 = 51
...

Notice how the multiplier switches every 16 indices (at 16, 32, 48, ...): each 32-thread warp splits into two half-warps that take different branches - that's the divergence built into the kernel.

Launch Configuration

When you launch a kernel:

execution_model_demo<<<blocks, threads_per_block>>>(d_out, N);

The GPU scheduler assigns blocks to available processors and manages execution.
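Putting the whole program together, a minimal host-side main for this chapter's demo could look like the sketch below (the repository's actual file may differ in details):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const int N = 256;
    const int threads_per_block = 64;
    const int blocks = (N + threads_per_block - 1) / threads_per_block;  // = 4

    int *d_out;
    cudaMalloc(&d_out, N * sizeof(int));

    execution_model_demo<<<blocks, threads_per_block>>>(d_out, N);
    cudaDeviceSynchronize();   // wait for the kernel to finish

    int h_out[256];
    cudaMemcpy(h_out, d_out, N * sizeof(int), cudaMemcpyDeviceToHost);
    for (int i = 0; i < 8; i++) printf("Index %3d -> %d\n", i, h_out[i]);

    cudaFree(d_out);
    return 0;
}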

Key Concepts You Just Saw

Try Modifying the Code

  1. Change the branching condition:
if (lane_id < 8) {  // Only first quarter
    value = global_idx * 2;
} else {
    value = global_idx * 3;
}

See how this affects performance.

  2. Add more synchronization:
__syncthreads();
// Do some work
__syncthreads();
// Do more work
  3. Experiment with different block sizes:
const int threads_per_block = 128;  // Try 32, 64, 128, 256

What Happens Inside the GPU

When you launch the kernel:

  1. GPU creates 4 blocks of 64 threads each = 256 threads
  2. Each block gets assigned to a processor
  3. Processors break threads into warps (groups of 32)
  4. Warps execute instructions, switching rapidly to hide memory delays
  5. When threads diverge, execution serializes within the warp
  6. Results get written to GPU memory, then copied back to CPU

Next Steps

You now understand how CUDA code actually executes! In Chapter 3, we'll learn about GPU memory - why it's so important for performance.

Key Takeaways

Chapter 3: GPU Memory - Why It's Everything for Performance

In this chapter, you'll learn why memory is the #1 factor in GPU performance. Bad memory usage can make your code 10x slower. Good memory usage can make it 100x faster than CPU.

The Memory Hierarchy You Must Understand

GPUs have multiple types of memory, each with different speed/cost tradeoffs:

Global Memory (Slow but Big)

Shared Memory (Fast but Small)

Registers (Fastest but Limited)

The #1 Rule: Memory Access Patterns Matter

The way threads access memory determines if your code is fast or slow.

Good Pattern: Coalesced Access

// Thread 0 reads data[0], Thread 1 reads data[1], etc.
// Hardware combines 32 reads into 1 big memory transaction
int value = global_data[threadIdx.x];

Result: Full memory bandwidth utilization.

Bad Pattern: Strided Access

// Thread 0 reads data[0], Thread 1 reads data[2], Thread 2 reads data[4]
// Hardware can't combine - needs multiple transactions
int value = global_data[threadIdx.x * 2];

Result: 2x-10x slower!

Worst Pattern: Random Access

// Each thread reads a random location
int value = global_data[random_indices[threadIdx.x]];

Result: Potentially 32x slower!

Shared Memory: The Secret Weapon

Shared memory is fast on-chip memory that threads in the same block can share.

Use Case 1: Data Reuse

Instead of each thread loading the same data from slow global memory:

__shared__ float shared_data[256];

// Load data
shared_data[threadIdx.x] = global_data[blockIdx.x * 256 + threadIdx.x];
__syncthreads();  // Wait for all loads to complete

// Now all threads can read this data quickly
float sum = 0;
for (int i = 0; i < 256; i++) {
    sum += shared_data[i];  // Fast shared memory reads
}

Use Case 2: Fixing Bad Access Patterns

When you need to transpose or reorganize data:

__shared__ float tile[32][32];

// Load a 32x32 tile in a coalesced way (consecutive threads read consecutive addresses)
tile[threadIdx.y][threadIdx.x] = input[row * N + col];
__syncthreads();

// Write in the transposed order; in a full transpose kernel, row/col are
// recomputed per block so this write is also coalesced
output[col * N + row] = tile[threadIdx.x][threadIdx.y];

Bank Conflicts in Shared Memory

Shared memory is divided into 32 banks. If multiple threads access the same bank simultaneously, accesses happen one at a time (serialized).

No Conflict (Good)

__shared__ float data[128];
float val = data[threadIdx.x];  // Each thread hits different bank

Bank Conflict (Bad)

float val = data[threadIdx.x * 2];  // Stride-2: pairs of threads hit the same bank (2-way conflict)
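A common fix - and the classic trick for the transpose tile shown earlier - is to pad the shared array so that strided or column-wise accesses spread across different banks. A sketch (input, output, row, col and N as in the transpose fragment above):

// 32x32 data plus one padding column per row: column-wise reads like
// tile[threadIdx.x][threadIdx.y] now land in different banks
__shared__ float tile[32][33];

tile[threadIdx.y][threadIdx.x] = input[row * N + col];
__syncthreads();
output[col * N + row] = tile[threadIdx.x][threadIdx.y];  // conflict-free reads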

Hands-On: Run the Memory Demos

The code in this chapter demonstrates these concepts:

  1. Memory Hierarchy Demo - Shows different memory types and their performance.
  2. Coalesced vs Uncoalesced Access - Compare the performance difference.
  3. Bank Conflicts - See how shared memory access patterns affect speed.

Compile and Run

nvcc memory_management.cu -o memory_demo
./memory_demo

Key Experiments

1. Change Access Patterns

Modify the coalesced access to be uncoalesced:

// Change this line in the kernel:
int value = global_in[global_idx];  // Coalesced
// To this:
int value = global_in[global_idx * 2];  // Uncoalesced

Measure the performance difference!

2. Modify Shared Memory Usage

Add more data reuse in the reduction example:

// Instead of summing all elements once, sum them multiple times
for (int repeat = 0; repeat < 10; repeat++) {
    float sum = 0;
    for (int i = 0; i < blockDim.x; i++) {
        sum += shared_data[i];
    }
    // Use the sum (e.g., write it to global memory), or the compiler may optimize the loop away
}

3. Experiment with Bank Conflicts

Change the bank conflict pattern:

// Try different strides
int conflict_idx = (local_idx * 4) % blockDim.x;  // Stride-4

Understanding Performance

Memory-Bound vs Compute-Bound

Occupancy Matters

More active threads = more warps = better at hiding memory latency.
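If you want to check this yourself, the CUDA runtime can report how many blocks of a given kernel fit on one multiprocessor via cudaOccupancyMaxActiveBlocksPerMultiprocessor. A small sketch (the kernel name is illustrative):

#include <cstdio>
#include <cuda_runtime.h>

__global__ void my_kernel(float *data) { /* ... */ }

int main() {
    int blocks_per_sm = 0;
    const int block_size = 256;
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocks_per_sm, my_kernel,
                                                  block_size, 0 /* dynamic shared mem */);
    printf("Active blocks per SM at block size %d: %d\n", block_size, blocks_per_sm);
    return 0;
}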

The Bandwidth Goal

Modern GPUs have 500-2000 GB/s memory bandwidth. Your goal: achieve 80%+ of that.
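You can estimate how close you are by timing a kernel with CUDA events and dividing the bytes moved by the elapsed time. A sketch (copy_kernel, d_in, d_out, blocks, threads and N are placeholders for your own code):

// Effective bandwidth = (bytes read + bytes written) / kernel time
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord(start);
copy_kernel<<<blocks, threads>>>(d_in, d_out, N);   // reads N floats, writes N floats
cudaEventRecord(stop);
cudaEventSynchronize(stop);

float ms = 0.0f;
cudaEventElapsedTime(&ms, start, stop);

double bytes = 2.0 * N * sizeof(float);             // N reads + N writes
printf("Effective bandwidth: %.1f GB/s\n", (bytes / 1e9) / (ms / 1e3));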

Real-World Impact

In the chapter's matrix multiplication example, you get a 4x speedup just from better memory usage!

Next Steps

You now understand GPU memory. In Chapter 4, we'll apply these patterns to real algorithms like reductions and image processing.

Key Takeaways

Chapter 4: Common Parallel Patterns - Speed Up Real Algorithms

Now you know the basics! This chapter shows you ready-to-use patterns for speeding up real computations. These patterns appear in most GPU-accelerated code.

What You'll Learn

Four fundamental patterns that cover the vast majority of parallel problems:

  1. Map - Apply the same operation to each element
  2. Reduce - Combine elements into a single value
  3. Scan - Compute running totals
  4. Stencil - Process neighborhoods (like image filters)

Pattern 1: Map - Element-wise Operations

Use when: Each output depends only on the corresponding input.

Examples: Adding vectors, scaling arrays, applying math functions.

Simple Map Example

__global__ void map_pattern(float *input, float *output, float scale, int N) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= N) return;
    output[idx] = input[idx] * scale;  // Each thread: one operation
}

Why it works: No dependencies between elements. Thread 5's work doesn't affect thread 10's work.

Performance: Memory-bound (limited by how fast you can read/write data).
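For reference, launching a map kernel is just the usual rounded-up grid-size calculation (a sketch; d_input and d_output are assumed to be device pointers you already allocated):

const int N = 1 << 20;                           // one million elements
const int threads = 256;
const int blocks = (N + threads - 1) / threads;  // round up so every element is covered

map_pattern<<<blocks, threads>>>(d_input, d_output, 2.0f, N);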

Pattern 2: Reduce - Summing/Combining Elements

Use when: You need to combine all elements into one value (sum, max, min, etc.).

Challenge: Sequential sum is slow. Parallel tree reduction is fast!

Tree Reduction Example

__global__ void reduce_sum(float *input, float *block_sums, int N) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int tid = threadIdx.x;

    __shared__ float shared_data[256];

    // Load data
    shared_data[tid] = (idx < N) ? input[idx] : 0.0f;
    __syncthreads();

    // Tree reduction: halve active threads each step
    for (int stride = blockDim.x/2; stride > 0; stride >>= 1) {
        if (tid < stride) {
            shared_data[tid] += shared_data[tid + stride];
        }
        __syncthreads();
    }

    // Thread 0 writes this block's sum
    if (tid == 0) block_sums[blockIdx.x] = shared_data[0];
}

Why it works: Instead of N sequential steps, do log₂N parallel steps.

Speedup: From O(N) to O(log N) time!
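Note that the kernel above produces one partial sum per block. A hedged sketch of finishing the job on the host (assuming block_sums was copied back into h_block_sums; for very large inputs you would instead run reduce_sum again on the partial sums):

// Add the per-block partial sums on the CPU
float total = 0.0f;
for (int b = 0; b < num_blocks; b++) {
    total += h_block_sums[b];
}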

Pattern 3: Scan - Running Totals

Use when: Each output needs the sum/total of all previous elements.

Examples: Cumulative sums, finding positions in sorted data.

Parallel Scan Example (Hillis-Steele)

__global__ void scan_hillis_steele(float *input, float *output, int N) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int tid = threadIdx.x;

    __shared__ float temp[256];

    temp[tid] = (idx < N) ? input[idx] : 0.0f;
    __syncthreads();

    // Each step: add from increasing distances
    for (int offset = 1; offset < blockDim.x; offset *= 2) {
        float val = 0.0f;
        if (tid >= offset) {
            val = temp[tid - offset];
        }
        __syncthreads();
        if (tid >= offset) {
            temp[tid] += val;
        }
        __syncthreads();
    }

    if (idx < N) output[idx] = temp[tid];
}

Input: [1, 2, 3, 4]

Output: [1, 3, 6, 10] (cumulative sums). Note that this kernel scans each block independently; combining results across blocks takes a second pass.

Pattern 4: Stencil - Neighborhood Operations

Use when: Each output depends on nearby inputs.

Examples: Image blurring, physics simulations, edge detection.

1D Stencil Example (3-point average)

__global__ void stencil_1d(float *input, float *output, int N) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int tid = threadIdx.x;

    __shared__ float shared[258];  // blockDim.x (256) data elements + 2 halo

    // Load main data
    if (idx < N) {
        shared[tid + 1] = input[idx];
    } else {
        shared[tid + 1] = 0.0f;
    }

    // Load halo (boundary elements)
    if (tid == 0) {
        shared[0] = (idx > 0) ? input[idx - 1] : 0.0f;
    }
    if (tid == blockDim.x - 1) {
        shared[tid + 2] = (idx < N - 1) ? input[idx + 1] : 0.0f;
    }
    __syncthreads();

    // Compute average of 3 neighbors
    if (idx < N) {
        float left = shared[tid];
        float center = shared[tid + 1];
        float right = shared[tid + 2];
        output[idx] = (left + center + right) / 3.0f;
    }
}

Why shared memory? Per 256-thread block, the naive version does 768 global memory loads (3 per thread); with the shared tile it is 258 global loads plus fast shared-memory reads.

Hands-On: Run All Patterns

The code demonstrates all four patterns with working examples:

nvcc classical_algorithms.cu -o patterns_demo
./patterns_demo

You'll see:

Key Experiments

1. Modify the Map Operation

// Try different operations
output[idx] = sin(input[idx]) + cos(input[idx]);
output[idx] = (input[idx] > 0.5f) ? 1.0f : 0.0f;  // Threshold

2. Change Reduction Operation

// Instead of sum, find maximum
if (tid < stride) {
    shared_data[tid] = max(shared_data[tid], shared_data[tid + stride]);
}

3. Modify Stencil Pattern

// 5-point stencil instead of 3-point

// Need bigger halo region!

When NOT to Use These Patterns

Real-World Applications

Next Steps

You now have tools for most parallel computations! In Chapter 5, we'll connect this to real Python/PyTorch code.

Key Takeaways

Chapter 5: Connect CUDA to Python/PyTorch - Build Real Applications

You've learned CUDA fundamentals. Now let's connect this to real Python code! You'll build custom operations that work seamlessly with PyTorch and machine learning.

What You'll Build

A complete PyTorch extension with:

The Big Picture

Your CUDA knowledge → PyTorch extension → Faster ML models

Quick Start: Run the Examples

First, check your setup:

# Check CUDA and PyTorch
python -c "import torch; print('CUDA available:', torch.cuda.is_available())"
nvcc --version

Method 1: Install the Extension (Easiest)

cd Chapter\ 5/
pip install .

This compiles everything and installs the custom_ops module.

Method 2: JIT Compilation (For Development)

Create load_extension.py:

from torch.utils.cpp_extension import load

# Compile and load the extension
custom_ops = load(
    name='custom_ops',
    sources=['custom_ops.cpp', 'custom_kernels.cu'],
    extra_cuda_cflags=['-O3', '--use_fast_math'],
    verbose=True
)

print("Extension loaded successfully!")

Run it:

python load_extension.py

Test It Works

python test_custom_ops.py

You should see all tests pass, including performance benchmarks.

What the Code Does

Fused Operation: relu(input * scale + bias)

Combines 3 operations into 1 kernel - saves memory bandwidth and kernel launches.
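A hedged sketch of what such a fused kernel might look like (the extension's actual kernel may differ; the name and layout here are illustrative, with a per-feature bias as in the Python example below):

// One pass over memory: scale, add bias, and apply ReLU in a single kernel,
// instead of three separate ops that each read and write global memory
__global__ void fused_scale_bias_relu(const float *input, const float *bias,
                                      float *output, float scale,
                                      int rows, int cols) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= rows * cols) return;

    int col = idx % cols;                       // bias is per feature (per column)
    float v = input[idx] * scale + bias[col];
    output[idx] = v > 0.0f ? v : 0.0f;          // ReLU
}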

In Python:

import torch
import custom_ops

# Create data on GPU
input_tensor = torch.randn(32, 64, device='cuda', requires_grad=True)
bias = torch.randn(64, device='cuda', requires_grad=True)

# Your custom operation
output = custom_ops.fused_op_forward(input_tensor, bias, scale=2.0)

# Automatic gradients work!
loss = output.sum()
loss.backward()
print("Gradients computed:", input_tensor.grad is not None)

Performance Comparison

The tests show your custom kernel vs PyTorch's separate operations:

Custom fused kernel: 0.45 ms
PyTorch separate ops: 0.67 ms
Speedup: 1.5x

Why faster? One kernel launch instead of three, no intermediate memory writes.

Build Your Own Operation

Let's add a simple element-wise square operation:

1. Add CUDA Kernel (custom_kernels.cu)

__global__ void square_kernel(const float* input, float* output, int N) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= N) return;
    output[idx] = input[idx] * input[idx];
}

torch::Tensor square_cuda(torch::Tensor input) {
    const int N = input.numel();
    auto output = torch::empty_like(input);
    
    const int threads = 256;
    const int blocks = (N + threads - 1) / threads;
    
    square_kernel<<<blocks, threads>>>(input.data_ptr<float>(),
                                       output.data_ptr<float>(), N);
    return output;
}

2. Add C++ Binding (custom_ops.cpp)

torch::Tensor square_cuda(torch::Tensor input);  // Declaration

class SquareFunction : public torch::autograd::Function<SquareFunction> {
public:
    static torch::Tensor forward(torch::autograd::AutogradContext *ctx,
                                torch::Tensor input) {
        TORCH_CHECK(input.is_cuda(), "Input must be CUDA tensor");
        ctx->save_for_backward({input});
        return square_cuda(input);
    }
    
    static torch::autograd::tensor_list backward(
        torch::autograd::AutogradContext *ctx,
        torch::autograd::tensor_list grad_outputs)
    {
        auto input = ctx->get_saved_variables()[0];
        auto grad_output = grad_outputs[0];
        // d/dx(x²) = 2x
        auto grad_input = 2.0 * input * grad_output;
        return {grad_input};
    }
};

torch::Tensor square_forward(torch::Tensor input) {
    return SquareFunction::apply(input);
}

// In PYBIND11_MODULE:
m.def("square_forward", &square_forward, "Element-wise square");

3. Test It

import torch
import custom_ops

x = torch.randn(10, device='cuda', requires_grad=True)
y = custom_ops.square_forward(x)  # y = x²
loss = y.sum()
loss.backward()

print("x:", x)
print("y:", y) 
print("x.grad:", x.grad)  # Should be 2*x

Use in Neural Networks

Wrap your operation in a PyTorch module:

import torch.nn as nn
import custom_ops

class CustomLayer(nn.Module):
    def __init__(self, features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(features, features))
        self.bias = nn.Parameter(torch.zeros(features))
    
    def forward(self, x):
        # Use your custom fused operation
        return custom_ops.fused_op_forward(
            torch.matmul(x, self.weight), 
            self.bias, 
            scale=1.0
        )

# Build model
model = nn.Sequential(
    nn.Linear(784, 256),
    CustomLayer(256),  # Your custom CUDA layer
    nn.Linear(256, 10)
).cuda()

# Train normally
optimizer = torch.optim.Adam(model.parameters())
# ... training loop ...

When to Use Custom CUDA

Use custom CUDA when:

Don't use custom CUDA for:

Debug Common Issues

Build Errors

# CUDA version mismatch
pip install torch --index-url https://download.pytorch.org/whl/cu118

# Missing CUDA
export CUDA_HOME=/usr/local/cuda

Runtime Errors

# Tensors not on GPU
x = x.cuda()

# Wrong tensor types
x = x.float()

Performance Issues

# Profile your code
with torch.profiler.profile() as prof:
    ...  # your code here
print(prof.key_averages().table(sort_by="cuda_time_total"))

Key Takeaways

You're Done!

You now have the complete pipeline:

  1. Understand GPU architecture (Chapters 1-3)
  2. Write parallel algorithms (Chapter 4)
  3. Connect to real applications (Chapter 5)

Go build something fast!