DeepSeek Coder V2 Lite Instruct

DeepSeek
Code Multilingual

DeepSeek Coder V2 Lite Instruct is a 15.71-billion-parameter Mixture-of-Experts model from DeepSeek, optimized for code generation and mathematical reasoning. Each token activates 6 of its 64 routed experts plus 2 shared experts, so only a small fraction of the total parameters is active per forward pass, delivering strong coding performance at low compute cost. It supports 338 programming languages and nine natural languages, making it one of the broadest polyglot code models available. With a 128K-token context window and flash attention, it handles large codebases efficiently and quantizes well to GGUF for self-hosted deployment.
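The expert-activation pattern described above can be sketched as a toy top-k router. This is a hypothetical illustration of the general MoE routing idea (select the highest-scoring routed experts, always include the shared ones), not DeepSeek's actual gating code; the function name and expert indices are invented for the example.

```python
import math
import random

def route_token(logits, num_routed=6, num_shared=2):
    """Toy MoE router: pick top-k routed experts for one token;
    shared experts always fire. Illustrative only."""
    topk = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:num_routed]
    # Softmax over the selected experts' logits gives the mixing weights.
    exps = [math.exp(logits[i]) for i in topk]
    total = sum(exps)
    weights = {i: e / total for i, e in zip(topk, exps)}
    shared = list(range(num_shared))  # shared-expert indices are illustrative
    return weights, shared

random.seed(0)
logits = [random.gauss(0, 1) for _ in range(64)]  # one token's scores over 64 experts
weights, shared = route_token(logits)
print(len(weights), len(shared))  # prints "6 2": 6 routed + 2 shared experts active
```

Only the 8 selected experts' feed-forward weights are touched for that token, which is why the model's per-token compute is far below its 15.71B total parameter count.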

Hardware Configuration

Quantization  Quality  Size
Q8_0          High     15.56 GB
Q6_K          High     13.10 GB
Q5_K_M        Medium   11.04 GB
Q4_K_M        Medium    9.65 GB
Q3_K_L        Low       7.88 GB
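The file sizes above track the bits-per-weight of each GGUF quantization type. A rough sanity check is params × bits-per-weight ÷ 8; the bpw figures below are approximate averages for llama.cpp K-quants (an assumption), so the estimates will not exactly match published file sizes, especially for MoE models where some tensors are kept at higher precision.

```python
# Rough GGUF file-size estimate from parameter count and bits per weight.
# BPW values are approximate llama.cpp averages (assumption, not exact).
PARAMS = 15.71e9

BPW = {
    "Q8_0":   8.50,
    "Q6_K":   6.56,
    "Q5_K_M": 5.67,
    "Q4_K_M": 4.83,
    "Q3_K_L": 4.03,
}

def estimated_gib(quant: str) -> float:
    """Estimated file size in GiB for a given quantization type."""
    return PARAMS * BPW[quant] / 8 / 1024**3

for quant in BPW:
    print(f"{quant}: ~{estimated_gib(quant):.1f} GiB")
```

When picking a quantization, leave headroom beyond the file size itself: the KV cache for long contexts and runtime buffers add several GB on top of the weights.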
Last updated: March 5, 2026