NVIDIA Nemotron 3 Super 120B A12B (Q2_K_XL) on CPU Only
Overview
NVIDIA Nemotron 3 Super 120B A12B is a 123.61B-parameter Mixture-of-Experts (MoE) language model from NVIDIA with code, multilingual, thinking, and tool-calling capabilities. It supports a context window of up to 262,144 tokens.
Nemotron 3 Super 120B A12B is a 123.61-billion-parameter hybrid Mamba-2/Transformer LatentMoE model from NVIDIA that activates 12 billion parameters per token across 22 of 512 routed experts plus 1 shared expert. Trained on over 25 trillion tokens, it targets agentic reasoning, code generation, tool calling, and multilingual conversation in seven languages. A 256K context window, a toggleable thinking mode, and multi-token prediction enable high-throughput inference for complex multi-agent workflows, and its MoE sparsity quantizes well to GGUF for self-hosted deployment on multi-GPU setups.
At Q2_K_XL quantization (a low-quality tier), the model weighs 50.9 GB. The CPU Only configuration provides no VRAM, so none of the weights can be placed on a GPU. Inference is still possible by holding the model in system RAM or memory-mapping it from disk, but expect significantly reduced performance.
A CPU-only configuration with no GPU acceleration. Inference runs entirely on the CPU, which is significantly slower than GPU-accelerated setups but requires no special hardware. Performance and maximum model size depend on available system RAM. Suitable for testing, development, or deployments where no GPU is available.
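Outside Kubernetes, the same constraints can be reproduced with a plain llama.cpp run. A minimal sketch, assuming a local llama-cli build; the model filename, context size, and prompt are illustrative, not part of this deployment:

```bash
# CPU-only inference with llama.cpp (sketch). -ngl 0 keeps all layers on
# the CPU; the GGUF file is memory-mapped by default, so weights that do
# not fit in RAM are paged in from disk on demand.
./llama-cli \
  -m nemotron-3-super-120b-a12b-Q2_K_XL.gguf \
  -ngl 0 \
  -c 8192 \
  -t "$(nproc)" \
  -p "Explain Mixture-of-Experts in one paragraph."
```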
Hardware Requirements
| Requirement | Value |
| --- | --- |
| Model size | 50.9 GB |
| VRAM available | 0 GB |
| VRAM used | 0 GB |
| System RAM | |
| Min RAM required | 50.9 GB |
| GPU layers | 0 / 88 |
| Context size | 262,144 tokens |
| Backend | cpu |
| Flash attention | No |
| Reading from disk | Yes |
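Before deploying, it is worth confirming the host actually has headroom for the 50.9 GB of weights plus the KV cache. A Linux-only preflight sketch; the 51 GB threshold is an assumption derived from the table above:

```bash
# Preflight sketch: check whether the full model fits in available RAM.
REQUIRED_GB=51
AVAIL_GB=$(awk '/MemAvailable/ {printf "%d", $2/1024/1024}' /proc/meminfo)
if [ "$AVAIL_GB" -lt "$REQUIRED_GB" ]; then
  echo "Only ${AVAIL_GB} GB available: weights will stream from disk (much slower)."
else
  echo "${AVAIL_GB} GB available: the full model fits in RAM."
fi
```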
Deploy
Command
helmfile --state-values-file <(curl -s https://www.prositronic.eu/values/nemotron-3-super-120b-a12b/q2_k_xl/cpu.yaml) apply
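The one-liner above pipes the generated values straight into helmfile. If you prefer to inspect them first, here is a sketch of the same flow in two steps (`helmfile diff` requires the helm-diff plugin):

```bash
# Fetch the generated values, review them, then preview before applying.
curl -s https://www.prositronic.eu/values/nemotron-3-super-120b-a12b/q2_k_xl/cpu.yaml -o cpu-values.yaml
less cpu-values.yaml
helmfile --state-values-file cpu-values.yaml diff
helmfile --state-values-file cpu-values.yaml apply
```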
Generated values.yaml
/values/nemotron-3-super-120b-a12b/q2_k_xl/cpu.yaml
Frequently Asked Questions
How much VRAM does NVIDIA Nemotron 3 Super 120B A12B (Q2_K_XL) need?
The Q2_K_XL quantization of NVIDIA Nemotron 3 Super 120B A12B requires 50.9 GB of memory. The CPU Only configuration has no VRAM, so no layers can be offloaded to a GPU and inference runs entirely on the CPU from system RAM.
Can I run NVIDIA Nemotron 3 Super 120B A12B on CPU Only?
It is possible but not recommended for production use. The CPU Only configuration has no VRAM, so NVIDIA Nemotron 3 Super 120B A12B (Q2_K_XL) cannot be GPU-accelerated and inference relies entirely on the CPU and system RAM.
What is quantization?
Quantization reduces a model's numerical precision from its original floating-point format to a more compact representation. This shrinks the file size and VRAM footprint, making it possible to run large models on consumer hardware. The trade-off is a small reduction in output quality. Q2_K_XL compresses NVIDIA Nemotron 3 Super 120B A12B from its original size down to 50.9 GB.
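The effective compression can be sanity-checked from the numbers on this page. A back-of-envelope awk sketch; the result is an average across all tensors, not an official figure:

```bash
awk 'BEGIN {
  params  = 123.61e9   # total parameters
  size_gb = 50.9       # Q2_K_XL GGUF size from this page
  printf "Q2_K_XL averages ~%.2f bits per weight\n", size_gb * 1e9 * 8 / params
}'
```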
What quantization should I choose for NVIDIA Nemotron 3 Super 120B A12B?
Q2_K_XL is a low-quality quantization. Higher-quality quants (Q8, Q6) preserve more model accuracy but need more VRAM. Lower quants (Q4, Q3, Q2) reduce VRAM usage at the cost of some quality. Choose based on your available hardware and quality requirements.
Why are some layers offloaded to CPU?
CPU Only has 0 GB of VRAM, while NVIDIA Nemotron 3 Super 120B A12B (Q2_K_XL) requires approximately 50.9 GB. None of the 88 layers can be placed in VRAM, so all of them run on the CPU, which is slower but still functional.
What is MoE and how does it affect deployment?
NVIDIA Nemotron 3 Super 120B A12B uses a Mixture-of-Experts (MoE) architecture with 512 routed experts plus 1 shared expert, of which 22 routed experts are active per token. Only a fraction of the model's weights are used for each inference step, which lets MoE models grow large in total parameter count while remaining efficient at inference time.
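Sparsity also bounds CPU throughput: each generated token only has to read the active experts' weights from memory. A rough awk sketch; the 3.3 bits/weight follows from the sizes on this page, and the memory bandwidth is an assumed placeholder for a dual-channel DDR5 desktop:

```bash
awk 'BEGIN {
  active = 12e9     # active parameters per token (from the overview)
  bpw    = 3.3      # approx effective bits/weight at Q2_K_XL
  bw     = 60e9     # ASSUMED memory bandwidth, bytes/s
  bytes_per_token = active * bpw / 8
  printf "~%.1f GB read per token -> ceiling of ~%.1f tokens/s\n",
         bytes_per_token / 1e9, bw / bytes_per_token
}'
```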