
Voice AI Deep Dive: Chunked Attention, Emformer-Style Memory, and KV Caching for Streaming

Introduction

Transformers are the default architecture for speech and language. But vanilla self-attention is expensive and (usually) non-causal:

  • Full-context attention: O(T^2) cost over the time dimension.
  • Offline training recipes that assume the full utterance is available.

Streaming ASR forces an uncomfortable constraint: you want Transformer quality with strict latency budgets (100–500 ms) and without recomputing the entire past at every new frame.

This deep dive covers three ideas that make streaming Transformers practical:

  1. Chunked processing (blockwise encoders)
  2. Memory mechanisms (Emformer-style “summaries”)
  3. Key/Value caching (efficient incremental attention)

1. Why Naive Transformers Are Bad at Streaming

A vanilla encoder with full self-attention needs the entire past at each layer. In streaming, you get:

  • Recompute overhead: every time step reprocesses all previous frames.
  • Latency blow-up: the model can’t keep up with real time.
  • Memory blow-up: KV states grow with sequence length.

You need a design that is:

  • Incremental: update with new audio only.
  • Bounded: fixed compute per second of audio.
  • Stable: minimal boundary artifacts in transcriptions.
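To make the recompute overhead concrete, here is a back-of-the-envelope comparison; the frame rate, chunk size, and the simple quadratic cost model are all illustrative assumptions, not measurements:

```python
# Rough attention-cost comparison: naive streaming (recompute full
# self-attention at every new frame) vs. a bounded chunked encoder.

FRAMES_PER_SEC = 100  # 10 ms frames (assumed)
CHUNK = 40            # 400 ms chunks (assumed)

def naive_streaming_ops(total_frames: int) -> int:
    """At frame t, full self-attention over the past costs ~t^2; a naive
    streamer pays that at every frame, so sum over all frames."""
    return sum(t * t for t in range(1, total_frames + 1))

def chunked_ops(total_frames: int, chunk: int = CHUNK) -> int:
    """Each chunk attends only within its own fixed-size window."""
    return (total_frames // chunk) * chunk * chunk

T = 10 * FRAMES_PER_SEC  # 10 seconds of audio
ratio = naive_streaming_ops(T) / chunked_ops(T)
print(f"naive/chunked cost ratio for 10 s of audio: {ratio:.0f}x")
```

The gap grows cubically with utterance length for the naive scheme but only linearly for the chunked one, which is the whole point of bounded per-second compute.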

2. Chunked Encoders: The Simplest Streaming Strategy

Chunked encoders process audio in blocks of C frames, optionally with:

  • Left context: L previous frames (cached)
  • Right context / lookahead: a small future window of R frames

The encoder receives windows of the form [t - L, t + C + R], but emits outputs only for the central chunk [t, t + C].
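The windowing logic can be sketched in a few lines; the sizes C, L, R below are illustrative, and the encoder forward pass is stubbed out with an identity:

```python
import numpy as np

def chunk_windows(T: int, C: int = 40, L: int = 20, R: int = 10):
    """Yield (window_slice, out_slice) pairs for a chunked encoder.

    The window covers [t - L, t + C + R), clipped to the utterance;
    outputs are emitted only for the central chunk [t, t + C).
    """
    for t in range(0, T, C):
        lo, hi = max(0, t - L), min(T, t + C + R)
        # position of the central chunk inside this window
        out = slice(t - lo, t - lo + min(C, T - t))
        yield slice(lo, hi), out

frames = np.random.randn(205, 80)  # (time, features), e.g. 80-dim fbank
emitted = []
for win, out in chunk_windows(len(frames)):
    window = frames[win]   # what the encoder actually sees
    encoded = window       # stand-in for the encoder forward pass
    emitted.append(encoded[out])
emitted = np.concatenate(emitted)
assert len(emitted) == len(frames)  # every frame emitted exactly once
```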

The core tradeoff

  • Larger C, larger R: better accuracy, higher latency.
  • Smaller C, smaller R: lower latency, but worse boundary handling.

Boundary artifacts you will see

  • Split words at chunk edges
  • Revisions as the model sees additional right context
  • Deletions of short function words (“a”, “the”) near boundaries

3. Emformer-Style Memory: Summarize the Past, Don’t Re-attend to It

A recurring pattern in streaming Transformers is: keep a compressed representation of older context rather than retaining all frames.

Conceptually:

  1. Process the current chunk with local attention.
  2. Update a memory bank representing longer history.
  3. Attend to memory + local context for predictions.
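The three steps above can be sketched conceptually as follows; mean pooling stands in for Emformer's learned summary projection, and the attention is a bare single-head version:

```python
import numpy as np

def attend(q, kv):
    """Plain scaled dot-product attention (no heads, for clarity)."""
    scores = q @ kv.T / np.sqrt(kv.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ kv

def stream_with_memory(chunks):
    """Per chunk: attend over memory + local frames, then append one
    summary vector. Memory grows by one vector per chunk, not per frame."""
    d = chunks[0].shape[-1]
    memory = np.zeros((0, d))
    outputs = []
    for chunk in chunks:
        kv = np.concatenate([memory, chunk])       # memory + local context
        out = attend(chunk, kv)
        # update the memory bank at the chunk boundary
        memory = np.concatenate([memory, out.mean(axis=0, keepdims=True)])
        outputs.append(out)
    return np.concatenate(outputs), memory

chunks = [np.random.randn(40, 64) for _ in range(5)]  # 5 chunks, d = 64
outputs, memory = stream_with_memory(chunks)
assert memory.shape == (5, 64)  # one summary per chunk, not 200 frames
```

Note the bound: after 5 chunks (200 frames) the model re-attends to only 5 memory vectors, which is what keeps compute fixed per second of audio.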

This gives you:

  • Bounded memory size: you store summaries, not all frames.
  • Longer effective context: memory represents more than a fixed window.

Why memory helps

Speech has dependencies across seconds:

  • Proper nouns and topic continuity
  • Coarticulation and prosody patterns
  • Speaker style and accent adaptation (implicitly)

If your model only sees 1–2 seconds, it will be brittle. Memory provides longer-range context without quadratic compute.

A researcher’s mental model

Think of memory as a learnable “state” updated at chunk boundaries. This resembles:

  • An RNN hidden state (but learned via attention)
  • A compressed segment embedding for previous audio

The compression quality determines whether memory helps or hurts.

4. KV Caching: Make Attention Incremental

Even with chunking, you often need attention over a limited past. KV caching is the standard optimization:

  • Cache attention keys/values from previous steps.
  • When new frames arrive, compute queries for the new chunk and attend over cached KV + new KV.
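A minimal single-layer sketch of this pattern, assuming fixed random projections, no attention heads, and a `max_ctx` parameter (an illustrative name) that bounds the cache to a fixed left context:

```python
import numpy as np

class KVCache:
    """Append-only key/value cache for one attention layer."""

    def __init__(self, d: int, max_ctx: int = 256, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.Wq, self.Wk, self.Wv = (rng.standard_normal((d, d)) / np.sqrt(d)
                                     for _ in range(3))
        self.K = np.zeros((0, d))
        self.V = np.zeros((0, d))
        self.max_ctx = max_ctx

    def step(self, x_new: np.ndarray) -> np.ndarray:
        """Compute queries for the new frames only; attend over cached
        K/V plus the new K/V, trimming to a bounded left context."""
        q = x_new @ self.Wq
        self.K = np.concatenate([self.K, x_new @ self.Wk])[-self.max_ctx:]
        self.V = np.concatenate([self.V, x_new @ self.Wv])[-self.max_ctx:]
        scores = q @ self.K.T / np.sqrt(q.shape[-1])
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        return (w / w.sum(axis=-1, keepdims=True)) @ self.V

cache = KVCache(d=64)
for _ in range(10):                # ten 40-frame chunks arrive
    out = cache.step(np.random.randn(40, 64))
assert cache.K.shape == (256, 64)  # bounded cache, not 400 frames
```

The key property: old frames are never re-projected, so per-chunk cost depends on the chunk size and cache length, not on total utterance length.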

This reduces repeated computation across chunks and makes real-time decoding feasible on GPUs and even on-device in some setups.

What caching changes in practice

  • Lower time-to-first-token (TTFT) variance
  • Higher throughput for streaming multi-user workloads
  • More predictable compute per second of audio

What caching does not solve

Caching does not remove boundary artifacts. If your chunking scheme is naive, you still get:

  • Chunk-edge errors
  • Delayed disambiguation
  • Partial revision churn

Caching is an efficiency tool, not a modeling tool.

5. Lookahead: The Most Powerful “Cheat”

Most production streaming Transformers allow a small right context R (lookahead). Even 200 ms of lookahead can dramatically reduce errors like:

  • Confusing plosives (“b/p”, “d/t”) early in a word
  • Missing word endings and suffixes
  • Boundary deletions

But the cost is user-perceived delay:

  • A 300 ms lookahead is already noticeable in tight turn-taking.

The art is to allocate your latency budget:

  • Some to lookahead
  • Some to decoding and stability constraints
  • Some to endpointing logic
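As a worked example of that allocation (every number below is an assumption for illustration, not a recommendation):

```python
# Splitting a 400 ms end-to-end latency budget across the three sinks.
BUDGET_MS = 400
budget = {
    "lookahead": 160,   # right context R seen by the encoder
    "decode": 140,      # beam search + stability constraints
    "endpointing": 60,  # end-of-utterance detection
}
slack = BUDGET_MS - sum(budget.values())
print(f"headroom: {slack} ms")  # left over for network/scheduling jitter
```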

6. Stability Engineering: How to Make Streaming Transformers Feel Good

Streaming UX depends on how much partial text changes. For chunked attention systems, common strategies include:

  • Commit window: only finalize text after N frames without revision.
  • Prefix locking: once a prefix is stable across K decodes, freeze it.
  • Confidence gating: only display tokens above a probability threshold.
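Prefix locking is simple to sketch; here `k = 3` is an arbitrary stability threshold and the token lists are toy partial hypotheses:

```python
def lock_prefix(history, k=3):
    """Longest prefix identical across the last k partial hypotheses.

    `history` is a list of token lists, newest last. Until k partials
    exist, nothing is locked.
    """
    if len(history) < k:
        return []
    locked = []
    for column in zip(*history[-k:]):  # zip stops at the shortest partial
        if all(tok == column[0] for tok in column):
            locked.append(column[0])
        else:
            break
    return locked

partials = [
    ["the", "cat", "sad"],
    ["the", "cat", "sat", "on"],
    ["the", "cat", "sat", "on", "a"],
]
print(lock_prefix(partials))  # ['the', 'cat'] ("sad" vs "sat" blocks locking)
```

Frozen tokens are what the user sees as final; everything past the locked prefix may still be revised.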

These are not “UI hacks.” They become part of the effective model and should be evaluated.

7. Evaluation Checklist for Researchers

When comparing chunking/memory/caching variants, track:

  • WER / CER on standard sets
  • Streaming WER under fixed chunk size + lookahead
  • Finalization delay per word (median and tail)
  • Revision rate: edits per emitted token
  • Boundary error rate: errors concentrated near chunk edges
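The revision-rate metric, for instance, can be computed directly from the stream of partial hypotheses. This is a simplified position-wise definition; real pipelines often use an alignment-based edit distance instead:

```python
def revision_rate(partials):
    """Edits per token of the final hypothesis.

    Counts positions whose token changed between consecutive partials,
    plus tokens removed when a partial shrinks. Appending new tokens at
    the end is growth, not revision.
    """
    edits = 0
    for prev, cur in zip(partials, partials[1:]):
        edits += sum(p != c for p, c in zip(prev, cur))  # changed tokens
        edits += max(0, len(prev) - len(cur))            # deleted tokens
    return edits / max(1, len(partials[-1]))

partials = [
    ["the", "cat"],
    ["the", "cat", "sad"],
    ["the", "cat", "sat", "on"],  # "sad" -> "sat" is one revision
]
print(revision_rate(partials))  # 1 edit / 4 final tokens = 0.25
```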

If you only report offline WER, you will miss the entire streaming story.

8. Recommended Baseline Recipes

If you want a strong streaming baseline:

  • Start with a chunked encoder with modest lookahead (100–250 ms).
  • Add KV caching to stabilize throughput.
  • Add a small memory mechanism if your tasks include long-form speech.
  • Implement prefix locking and evaluate revision metrics.

Conclusion

Streaming Transformers work because they stop trying to be “full-context” on every step. Chunking controls compute, memory extends context, and KV caching makes attention incremental. The remaining challenge is boundary artifacts and partial stability, which must be treated as measurable properties, not afterthoughts.
