
Mistral Small 4 119B 2603

Mistral AI
Code · Multilingual · Thinking · Tool Calls · Vision

Mistral Small 4 119B 2603 is a 119.4-billion-parameter sparse Mixture-of-Experts (MoE) model from Mistral AI. Each token activates about 6.5 billion parameters: a router selects 4 of 128 routed experts, and 1 shared expert is always active. The model supports vision input, tool calling, code generation, and toggleable reasoning effort across 24 languages. With a 256K context window, 40% lower latency, and 3x the throughput of Mistral Small 3, it is well suited to high-volume agentic and multimodal workflows. Its sparse MoE architecture also quantizes efficiently to GGUF for self-hosted deployment on multi-GPU setups.
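The routing pattern is easiest to see in code. Below is a minimal sketch of a top-4-of-128 MoE layer with one shared expert, matching the description above in spirit only: the hidden sizes, the softmax gating over the selected logits, and the per-token dispatch loop are illustrative assumptions, not Mistral's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Top-k routed experts plus one always-on shared expert (sketch)."""

    def __init__(self, d_model=256, d_ff=512, n_experts=128, top_k=4):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        # The shared expert runs on every token, in addition to the routed ones.
        self.shared = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_tokens, d_model)
        gate_logits = self.router(x)                           # (n_tokens, n_experts)
        weights, expert_idx = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)                   # normalize over the 4 picks
        routed = torch.zeros_like(x)
        # Per-token dispatch loop: written for clarity, not efficiency.
        for t in range(x.size(0)):
            for k in range(self.top_k):
                expert = self.experts[int(expert_idx[t, k])]
                routed[t] += weights[t, k] * expert(x[t])
        return self.shared(x) + routed

layer = SparseMoELayer()
tokens = torch.randn(8, 256)
print(layer(tokens).shape)  # torch.Size([8, 256])
```

Only the 4 selected experts (plus the shared one) run per token, which is why the active parameter count stays near 6.5B while total capacity is 119.4B.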

Hardware Configuration

Quantization   Quality   Size
Q8_0           High      117.78 GB
Q6_K           High       90.96 GB
Q4_K_M         Medium     67.21 GB
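
As a rough aid for sizing a deployment, the snippet below checks whether a given quantization fits in a multi-GPU setup's combined VRAM. The weight sizes come from the table above; the 10% headroom for KV cache and activations is an assumption, not a measured figure.

```python
# Rough VRAM fit check for the GGUF quantizations listed above.
QUANT_SIZES_GB = {
    "Q8_0": 117.78,
    "Q6_K": 90.96,
    "Q4_K_M": 67.21,
}

def fits(quant: str, gpu_vram_gb: list[float], headroom: float = 0.10) -> bool:
    """True if the quantized weights plus headroom fit in total VRAM."""
    needed_gb = QUANT_SIZES_GB[quant] * (1.0 + headroom)
    return needed_gb <= sum(gpu_vram_gb)

if __name__ == "__main__":
    rig = [48.0] * 2  # e.g. two 48 GB cards, 96 GB total
    for quant, size in QUANT_SIZES_GB.items():
        print(f"{quant} ({size} GB): {'fits' if fits(quant, rig) else 'does not fit'}")
```

On that hypothetical two-card rig, only Q4_K_M clears the check; Q6_K and Q8_0 would need more aggregate VRAM or weight offloading.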
Last updated: March 17, 2026