Voice Model Deep Dives · 9 min read

Quantization Explained: Running Whisper on iOS via CoreML

The Mobile Challenge

OpenAI's Whisper (Small or Medium) is a powerful model, but it is heavy.

  • Whisper Small: ~244 million parameters (~500 MB in FP16).
  • Whisper Medium: ~769 million parameters (~1.5 GB in FP16).
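These sizes follow directly from parameter count times bytes per parameter; a quick back-of-envelope check (the `size_mb` helper is just for illustration):

```python
# Model size = parameter count x bytes per parameter (2 bytes for FP16)
def size_mb(params, bytes_per_param):
    return params * bytes_per_param / 1e6

print(size_mb(244e6, 2))  # Whisper Small in FP16 -> 488.0 MB (~500 MB)
print(size_mb(769e6, 2))  # Whisper Medium in FP16 -> 1538.0 MB (~1.5 GB)
```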

Running these raw on a mobile CPU drains battery and spikes latency. To run efficiently on iOS, we must target the Apple Neural Engine (ANE), which offers massive throughput at low power. However, the ANE has strict requirements.

What is Quantization?

Quantization reduces the precision of the model's weights and activations.

  1. FP32 (32-bit Float): Standard training precision. High accuracy, huge size.
  2. FP16 (16-bit Float): The standard for inference. Halves the size with virtually no accuracy loss.
  3. Int8 (8-bit Integer): Maps values to a range of -127 to 127. Reduces size by 4x vs FP32.
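The Int8 mapping above can be sketched in a few lines. This is a minimal symmetric (per-tensor) scheme using NumPy, not the exact algorithm any particular toolchain uses; the function names are our own:

```python
import numpy as np

# Symmetric linear int8 quantization: map floats in [-max|w|, +max|w|]
# onto integers in [-127, 127] with a single scale factor.
def quantize_int8(w):
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Rounding error per weight is bounded by scale / 2
print("max abs error:", np.abs(w - w_hat).max())
```

Each weight now costs 1 byte instead of 4 (FP32), which is where the 4x size reduction comes from.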

The Trade-off: Accuracy vs. Size

We benchmarked quantized versions of Whisper on the LibriSpeech test-clean dataset.

| Precision | Model Size (Small) | WER (Word Error Rate) | Notes |
| :--- | :--- | :--- | :--- |
| FP32 | 960 MB | 3.12% | Baseline |
| FP16 | 480 MB | 3.14% | No perceptible loss |
| Int8 (Linear) | 240 MB | 3.45% | Slight degradation |
| Int4 (Mixed) | 120 MB | 5.80% | Noticeable errors |

Conclusion: Int8 is the "sweet spot" for mobile deployment. A ~0.3-point increase in WER is a fair price for halving memory versus FP16 (a 4x reduction versus FP32) and a roughly 2x speedup.

Implementing on CoreML

To run Whisper on iOS, we use Apple's CoreML framework.

1. Converting PyTorch to CoreML

We use coremltools to convert the .pt weights.

```python
import coremltools as ct
import torch

# Put the model in inference mode before tracing
model.eval()

# Trace the model with a representative dummy input
traced_model = torch.jit.trace(model, dummy_input)

# Convert to an ML Program; compute_precision requires the
# mlprogram backend
mlmodel = ct.convert(
    traced_model,
    convert_to="mlprogram",
    inputs=[ct.TensorType(shape=dummy_input.shape)],
    compute_precision=ct.precision.FLOAT16,
)
```

2. The ANE Bottleneck (Transformers)

The Apple Neural Engine is optimized for convolution-style workloads on fixed-shape, channels-first tensors, a legacy of CNNs. The large matrix multiplications in the Transformer's attention mechanism map onto it poorly without restructuring.

  • Optimization: We replace standard MatMul operations with einsum or split the attention heads into smaller chunks that fit into the ANE's L2 cache.
  • Split Computation: Often, the Encoder (which runs once) is placed on the ANE, while the Decoder (which loops) is placed on the GPU.
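The head-chunking trick is purely a scheduling change: attention heads are independent, so computing them in small groups produces bit-identical results to computing them all at once. A toy NumPy demonstration (shapes and chunk size are illustrative, not Whisper's actual dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_heads, seq, d_head = 4, 8, 16
q = rng.standard_normal((n_heads, seq, d_head))
k = rng.standard_normal((n_heads, seq, d_head))
v = rng.standard_normal((n_heads, seq, d_head))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention per head
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    return softmax(scores) @ v

# All heads in one large operation
full = attention(q, k, v)

# Two heads at a time (smaller working set), then concatenate
chunked = np.concatenate(
    [attention(q[i:i+2], k[i:i+2], v[i:i+2]) for i in range(0, n_heads, 2)],
    axis=0,
)

print(np.allclose(full, chunked))  # -> True
```

Smaller chunks trade a little dispatch overhead for intermediate tensors that fit in fast on-chip memory.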

Palettization vs. Linear Quantization

Apple supports Palettization (look-up-table quantization). Instead of mapping weights onto a uniform grid, we cluster them (typically with k-means) into a "palette" of up to 256 centroids and store each weight as an index into that table. This preserves accuracy better than linear Int8 quantization for the non-uniform weight distributions common in Transformers.
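To see why a learned palette beats a uniform grid on skewed weights, here is a toy comparison on a bimodal distribution. The 1-D k-means and the `palettize`/`linear_quantize` helpers are our own illustrative sketch, not Apple's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Bimodal "weights": density concentrated in two narrow modes
weights = np.concatenate([
    rng.normal(-0.5, 0.05, 5000),
    rng.normal(0.4, 0.1, 5000),
])

def palettize(w, n_colors=16, iters=20):
    """Toy 1-D k-means: centroids migrate toward dense regions."""
    centroids = np.quantile(w, np.linspace(0, 1, n_colors))
    for _ in range(iters):
        idx = np.abs(w[:, None] - centroids[None, :]).argmin(axis=1)
        for c in range(n_colors):
            if (idx == c).any():
                centroids[c] = w[idx == c].mean()
    return centroids[idx]

def linear_quantize(w, bits=4):
    """Uniform grid over [min, max] with the same bit budget."""
    levels = 2 ** bits - 1
    scale = (w.max() - w.min()) / levels
    return np.round((w - w.min()) / scale) * scale + w.min()

mse_pal = np.mean((weights - palettize(weights, 16)) ** 2)
mse_lin = np.mean((weights - linear_quantize(weights, 4)) ** 2)
print("palettized MSE:", mse_pal)
print("linear MSE:    ", mse_lin)
```

With the same 4-bit budget (16 values), the palette places its entries where the weights actually are, so its reconstruction error is lower than the uniform grid's.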

Conclusion for iOS Developers

For a production iOS app:

  1. Use Whisper Tiny or Base for real-time needs.
  2. Use Quantized (Int8) Small for high-accuracy offline transcription.
  3. Always leverage CoreML to offload the heavy lifting to the NPU/GPU, saving the user's battery life.

Download the CoreML Models