AIMLverse Lab

PagedAttention: The Key to Efficient LLM Batching


Large Language Models (LLMs) like GPT and LLaMA are powerful but computationally intensive, especially when deployed for real-time applications like chatbots, virtual assistants, or search engines. To make their deployment cost-effective and responsive, engineers batch multiple user requests together during inference.

Batching helps maximize GPU utilization, as the model weights can be shared across all requests in the batch. But there's a significant challenge: efficient batching is difficult because of the way LLMs manage memory, specifically how they handle the Key-Value (KV) cache.

Let’s understand this with a simple analogy.

Imagine a hotel where each guest (user request) is given a full private room, even if they only need a chair and table to work. Some guests stay longer, some shorter, but every room is reserved entirely, leading to lots of unused space. Over time, you run out of rooms not because the hotel is full, but because space is inefficiently used — this is like how traditional LLM serving wastes GPU memory with large, continuous KV cache blocks per request, limiting how many users you can serve at once.

PagedAttention fixes this by turning the hotel into a co-working space. Now, guests get only the desks or lockers (memory pages) they need, which can grow or shrink dynamically. A tracking system (page table) keeps a record of who's using what, allowing space to be reused smartly. This approach avoids memory waste, allows more guests to be served efficiently, and keeps everything running faster, just like how PagedAttention enables scalable, memory-efficient batching in large language model serving.

To serve LLMs efficiently (like GPT or LLaMA), you need to batch multiple requests together. This improves GPU usage because model weights are shared across the batch.

But there's still a problem: batching is hard because of how the KV cache behaves.

Each user request maintains its own cache, which grows dynamically as tokens are generated. For example, a request generating 10 tokens has a smaller cache than one generating 100 tokens. This variance in size and timing causes fragmentation in memory allocation, making it difficult to batch multiple users together efficiently.


Understanding the KV Cache Problem

The KV cache is central to transformer-based models. During inference (the generation phase), each token needs to look back at previously generated tokens to determine what comes next. Rather than recalculating all token relationships repeatedly, the transformer model caches previously computed attention vectors (K and V) for each token at each layer. This makes generation faster—but it also creates a memory-intensive scenario, especially when scaling to thousands of users.
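To make the mechanism concrete, here is a deliberately tiny, illustrative sketch (not real transformer code; the projection matrices and dimensions are made up) of a single-layer KV cache being reused during decoding:

```python
import numpy as np

# Toy sketch, not a real transformer: fixed random matrices stand in for the W_K / W_V projections.
head_dim = 8
rng = np.random.default_rng(0)
W_K = rng.standard_normal((head_dim, head_dim))
W_V = rng.standard_normal((head_dim, head_dim))

K_cache, V_cache = [], []          # the per-request KV cache for one layer

def decode_step(token_hidden):
    """Compute K and V only for the new token; reuse everything already cached."""
    K_cache.append(token_hidden @ W_K)
    V_cache.append(token_hidden @ W_V)
    K, V = np.stack(K_cache), np.stack(V_cache)
    scores = K @ token_hidden / np.sqrt(head_dim)   # attend over every cached key
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V                              # weighted sum of cached values

hidden = rng.standard_normal(head_dim)
for _ in range(5):
    hidden = decode_step(hidden)

print(len(K_cache))  # 5: the cache grows by one entry per generated token
```

The point of the sketch is simply that the cache is appended to at every step, so its memory footprint grows token by token for each request.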

Let’s break down the problem further:

  1. Per-Request Cache: Each user’s request maintains its own history. The longer the conversation or output generation, the larger the cache becomes.
  2. Dynamic Growth: A request might start small but grow large. You can’t predict in advance how much memory will be needed for each request.
  3. Inconsistent Lengths: In a batch, one request might be just starting, while another is generating its 100th token.
  4. Fragmentation: Because cache sizes vary, memory alignment becomes difficult. GPUs prefer data to be packed efficiently in aligned blocks, but these unpredictable growth patterns create gaps—wasted memory space.



All this adds up to a fundamental inefficiency in batching:

  1. Poor GPU Utilization: Only a small number of users can be processed at once.
  2. Increased Latency: With fewer users per batch, overall throughput drops and each request waits longer to be served.
  3. Memory Waste: The GPU memory runs out faster even though much of it isn’t actively used.

So, while batching is key to performance, the KV cache’s behavior makes it incredibly difficult to scale reliably—until PagedAttention enters the scene.

Let's do some calculations (a short code sketch follows the formula below).

Total KV cache size ≈ batch_size × seq_length × 2 × num_layers × n_heads × head_dim × sizeof(precision)

Where:

  1. “2” accounts for storing both Key and Value vectors
  2. num_layers = number of transformer layers
  3. n_heads × head_dim often equals hidden_size
  4. precision is typically 2 bytes for FP16/BF16
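As a rough sketch, the formula translates directly into a few lines of Python (the function and parameter names are my own, not taken from any particular library):

```python
def kv_cache_bytes(batch_size, seq_len, num_layers, n_heads, head_dim, bytes_per_elem=2):
    """Estimate total KV cache size in bytes.

    The factor of 2 accounts for storing both a Key and a Value vector
    for every token, in every attention head, at every layer.
    """
    return batch_size * seq_len * 2 * num_layers * n_heads * head_dim * bytes_per_elem
```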

Estimate Raw KV Cache Memory Usage

Assuming:

  1. batch size (b) = 16
  2. seq length (prompt + generation tokens) = 512
  3. transformer layers = 32
  4. heads × head_dim = 4096 (i.e., hidden size)
  5. using FP16 (2 bytes)

KV size per token = 2 (K+V) × 32 layers × 4096 (hidden size) = 262,144 elements

In bytes per token = 262,144 × 2 bytes = 524,288 bytes ≈ 0.5 MB

For 512 tokens: 0.5 MB × 512 ≈ 256 MB per request

For batch=16: 256 MB × 16 = 4 GB total KV cache usage
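Plugging the assumptions above into that sketch reproduces the same numbers (splitting hidden size 4096 into 32 heads × 128 dims is an assumption consistent with the example):

```python
# Assumed split of hidden size 4096 into 32 heads x 128 dims; FP16 = 2 bytes per element.
per_request = kv_cache_bytes(batch_size=1, seq_len=512, num_layers=32,
                             n_heads=32, head_dim=128, bytes_per_elem=2)
total = kv_cache_bytes(batch_size=16, seq_len=512, num_layers=32,
                       n_heads=32, head_dim=128, bytes_per_elem=2)
print(per_request / 2**20)  # ~256 MB per request
print(total / 2**30)        # ~4 GB for the whole batch
```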


Analyze Wasted Memory under Traditional Allocation

Suppose we allocate for a maximum sequence length of 1024 per request (since many systems pre-allocate up to maximum context). If most requests only generate 512 tokens, then:

  1. Each request leaves 512 of its 1,024 reserved token slots unused.
  2. Wasted memory per request ≈ 0.5 MB × 512 = 256 MB.
  3. For 16 requests: 256 MB × 16 = 4 GB wasted purely from internal fragmentation.

PagedAttention avoids this by allocating only what is needed, using fixed-size pages.
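As an illustrative comparison under the numbers above (the 16-token page size is an assumed value, not a prescribed one), here is how pre-allocating to the maximum context compares with paging:

```python
import math

MB_PER_TOKEN = 0.5     # per-token KV cache size from the estimate above
MAX_CONTEXT = 1024     # tokens pre-allocated per request under static allocation
PAGE_SIZE = 16         # tokens per page (assumed page size)

def static_waste_mb(tokens_used):
    """Memory reserved but never written when pre-allocating to the maximum context."""
    return (MAX_CONTEXT - tokens_used) * MB_PER_TOKEN

def paged_waste_mb(tokens_used):
    """Only the unused tail of the last page is wasted under paging."""
    pages = math.ceil(tokens_used / PAGE_SIZE)
    return (pages * PAGE_SIZE - tokens_used) * MB_PER_TOKEN

batch = [512] * 16  # 16 requests, each generating 512 tokens
print(sum(static_waste_mb(t) for t in batch) / 1024)  # 4.0 GB wasted
print(sum(paged_waste_mb(t) for t in batch) / 1024)   # 0.0 GB wasted (512 tokens fill whole pages)
```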


How PagedAttention Solves It

PagedAttention is a smarter memory management strategy developed to tackle this precise problem. It treats GPU memory not like rigid hotel rooms but like flexible co-working spaces, where guests (user requests) can take exactly what they need—no more, no less.

Here’s how it works (a small illustrative sketch follows the list):

  1. Memory Paging: Instead of allocating large, fixed KV cache blocks for each request, PagedAttention divides the cache into small, fixed-size pages. These are like individual desks in our co-working space.
  2. Page Table Tracking: A lightweight mapping system tracks which memory page belongs to which request. As a request generates more tokens, it gets more pages. When it finishes or shrinks, its pages are released and reused.
  3. Dynamic Growth Without Fragmentation: This enables dynamic memory expansion without creating holes or wasting space. The system can support many more users in the same memory footprint.
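The following is a deliberately simplified sketch of the idea, not vLLM's actual implementation: a shared pool of fixed-size pages, a free list, and a per-request page table that grows one page at a time.

```python
class PagedKVAllocator:
    """Toy page-table allocator for KV cache memory (illustrative sketch only)."""

    def __init__(self, total_pages, page_size_tokens=16):
        self.page_size = page_size_tokens
        self.free_pages = list(range(total_pages))   # shared pool of physical pages
        self.page_tables = {}                        # request_id -> list of page ids
        self.token_counts = {}                       # request_id -> tokens written so far

    def add_request(self, request_id):
        self.page_tables[request_id] = []
        self.token_counts[request_id] = 0

    def append_token(self, request_id):
        """Reserve space for one more token, grabbing a new page only when the last one is full."""
        used = self.token_counts[request_id]
        if used % self.page_size == 0:               # current page is full (or no page yet)
            if not self.free_pages:
                raise MemoryError("out of KV cache pages")
            self.page_tables[request_id].append(self.free_pages.pop())
        self.token_counts[request_id] = used + 1

    def finish_request(self, request_id):
        """Return all of this request's pages to the pool so other requests can reuse them."""
        self.free_pages.extend(self.page_tables.pop(request_id))
        del self.token_counts[request_id]
```

Because every page has the same size, a freed page can be handed to any request, which is what removes the fragmentation problem.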

Benefits of PagedAttention

  1. Scalable Batching: More requests can now be grouped together without running into memory fragmentation. This increases batch size and GPU utilization.
  2. Faster Inference: By serving more users simultaneously, latency is reduced.
  3. Memory Efficiency: PagedAttention reduces overhead and avoids over-reserving memory for smaller or shorter requests.
  4. Flexibility: The system can handle requests of varying lengths and complexity without performance degradation.



PagedAttention isn’t just a memory hack—it’s a fundamental rethinking of how we serve LLMs at scale. By breaking the dependency on large, monolithic KV cache blocks, it enables true dynamic batching with high throughput and efficient GPU usage.

As AI applications grow in demand—from customer support to education, legal tech, and creative writing—the ability to serve many users efficiently will define the next generation of AI infrastructure. PagedAttention is a critical step forward, helping LLM providers deliver faster, cheaper, and more scalable solutions for real-world use.

A quick recap of the problem:

  1. KV Cache is Per-Request: Each user has their own conversation history, and every generated token adds more data to store. That means each request grows differently over time.
  2. It Grows Dynamically: When a request starts, its cache is small, but as generation proceeds (say, from 1 to 100 tokens), it expands. You can't predict in advance how big it will get, which leads to variable memory usage.
  3. Hard to Batch Together: In a batch, some requests are long and some are short. You can't efficiently align them in memory when each one's cache is growing unpredictably. This creates fragmentation and wasted memory, limiting the batch size.


Consequences of the Problem:

  1. Poor GPU utilization: GPU cores are underused because the batch size is small.
  2. Increased latency: With fewer users per batch, each request waits longer to be served.
  3. Memory waste: GPU memory is exhausted faster due to inefficiency.


While batching improves efficiency, LLMs make it tricky because:

KV Cache is Huge and Dynamic: Every request has its own growing KV cache, and its size varies with the length of the input prompt and the number of tokens generated. As a result, requests in a batch can become unsynchronized (some long, some short).

Batching Requires Memory Alignment: GPU memory prefers aligned data blocks for efficiency, but when each request's cache has a different length, it's hard to pack them tightly. The result is fragmented memory and lower GPU utilization.

What PagedAttention Improves:

PagedAttention enables efficient dynamic batching by:

  1. Breaking the KV cache into small memory pages.
  2. Tracking which page belongs to which request.
  3. Reusing memory more flexibly.

This makes dynamic memory growth possible without hurting batching, as the short usage sketch below illustrates.
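Continuing the toy PagedKVAllocator sketched earlier (still illustrative only), a quick usage example shows how requests of very different lengths can share one pool without fragmenting it:

```python
# Reuses the PagedKVAllocator sketch from the section above.
alloc = PagedKVAllocator(total_pages=64, page_size_tokens=16)

for request_id, length in [("short", 10), ("medium", 40), ("long", 200)]:
    alloc.add_request(request_id)
    for _ in range(length):
        alloc.append_token(request_id)

# Pages scale with the actual output length, not with a pre-reserved maximum.
print({rid: len(pages) for rid, pages in alloc.page_tables.items()})
# {'short': 1, 'medium': 3, 'long': 13}

alloc.finish_request("short")
print(len(alloc.free_pages))  # the freed page is immediately available to new requests
```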


