Qwen3.5 35B A3B (Q2_K_XL) on CPU Only
Overview
Qwen3.5 35B A3B is a 35.95B-parameter MoE language model by Qwen with code, multilingual, thinking, tool-calling, and vision capabilities. It supports a context window of up to 262,144 tokens.
Qwen3.5 35B A3B is a Mixture-of-Experts model from Alibaba's Qwen team with 35.95 billion total parameters but only about 3 billion active per token, routed across 256 experts, so per-token compute is a small fraction of the full model's. It is natively multimodal, processing text, images, and video, with built-in thinking capabilities for chain-of-thought reasoning. The model supports a 262K context window and covers over 200 languages. Released under the Apache 2.0 license, it delivers flagship-level performance at a fraction of the compute cost and quantizes well for self-hosted deployment on consumer hardware.
At Q2_K_XL quantization (a low-quality tier), the model file is 12.04 GB. A CPU-only host has no VRAM to hold any of it. Inference is still possible with the weights held in system RAM or memory-mapped from disk, but expect significantly reduced performance.
A CPU-only configuration with no GPU acceleration. Inference runs entirely on the CPU, which is significantly slower than GPU-accelerated setups but requires no special hardware. Performance and maximum model size depend on available system RAM. Suitable for testing, development, or deployments where no GPU is available.
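Before deploying, it is worth confirming the host actually meets the 12 GB minimum RAM figure from the table below. A quick check with standard Linux tooling (nothing here is specific to this chart):

```sh
# Total and available system memory; "available" should comfortably
# exceed the 12.04 GB of model weights plus context (KV cache) overhead.
free -h

# Number of CPU cores, which bounds CPU-only generation throughput.
nproc
```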
Hardware Requirements
| Parameter | Value |
| --- | --- |
| Model size | 12.04 GB |
| VRAM available | 0 GB |
| VRAM used | 0 GB |
| Min system RAM required | 12 GB |
| GPU layers | 0 / 40 |
| Context size | 262,144 tokens |
| Backend | cpu |
| Flash attention | No |
| Reading from disk | Yes |
Performance Notes
With 0 of 40 layers on a GPU, inference is entirely CPU-bound, and the 12.04 GB of weights are read from disk via memory mapping. Generation speed is governed mainly by memory bandwidth and CPU core count, and will be far below GPU-accelerated setups.
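The backend is not named beyond "cpu", but the Q2_K_XL label follows GGUF/llama.cpp naming conventions. Assuming a llama.cpp-style runtime, a minimal CPU-only invocation might look like the following sketch; the GGUF filename is hypothetical, while the flags shown (`--model`, `--ctx-size`, `--threads`, `--mlock`) are real llama.cpp options:

```sh
# Hypothetical sketch, assuming a llama.cpp-based backend and a local GGUF file.
# --mlock pins the memory-mapped weights in RAM so they are not evicted to disk;
# a reduced --ctx-size keeps the KV cache small on a RAM-constrained host
# (the model supports up to 262,144 tokens, but few CPU hosts have RAM for that).
llama-server \
  --model qwen3.5-35b-a3b-Q2_K_XL.gguf \
  --ctx-size 8192 \
  --threads "$(nproc)" \
  --mlock
```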
Deploy
Command
helmfile --state-values-file <(curl -s https://www.prositronic.eu/values/qwen3-5-35b-a3b/q2_k_xl/cpu.yaml) apply
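The `<(curl …)` construct is bash process substitution: helmfile reads the downloaded values file as if it were a local file, so no temporary file is needed. The generated file linked below is the authoritative configuration; purely to illustrate the shape such a values file might take, here is a hypothetical sketch (every key name below is an assumption, not the actual generated content):

```yaml
# Hypothetical sketch only - the real file is generated at the URL below.
model:
  name: qwen3-5-35b-a3b      # assumed key names, for illustration
  quantization: q2_k_xl
resources:
  gpuLayers: 0               # CPU-only: nothing offloaded to a GPU
  contextSize: 262144
backend: cpu
```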
Generated values.yaml
/values/qwen3-5-35b-a3b/q2_k_xl/cpu.yaml
Frequently Asked Questions
How much VRAM does Qwen3.5 35B A3B (Q2_K_XL) need?
The Q2_K_XL quantization of Qwen3.5 35B A3B requires 12.04 GB of memory. A CPU-only host has no VRAM for GPU layers, so inference runs entirely on the CPU out of system RAM.
Can I run Qwen3.5 35B A3B on CPU Only?
It is possible but not recommended for latency-sensitive use. With no VRAM to accelerate Qwen3.5 35B A3B (Q2_K_XL), inference relies entirely on the CPU and system RAM; expect far lower throughput than on a GPU-equipped host.
What is quantization?
Quantization reduces a model's numerical precision from its original floating-point format to a more compact representation. This shrinks the file size and memory footprint, making it possible to run large models on consumer hardware. The trade-off is some loss of output quality, which grows as the bit width shrinks. Q2_K_XL compresses Qwen3.5 35B A3B down to 12.04 GB.
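For a concrete sense of the compression: 12.04 GB spread over 35.95 billion parameters works out to 12.04 × 8 / 35.95 ≈ 2.7 bits per weight, versus 16 bits per weight (roughly 72 GB) assuming the original checkpoint ships in the usual BF16 format.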
What quantization should I choose for Qwen3.5 35B A3B?
Q2_K_XL is a low-quality quantization. Higher-quality quants (Q8, Q6) preserve more model accuracy but need more VRAM. Lower quants (Q4, Q3, Q2) reduce VRAM usage at the cost of some quality. Choose based on your available hardware and quality requirements.
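As a back-of-the-envelope estimate, file size ≈ parameters × bits-per-weight ÷ 8: a ~4.5-bit Q4 variant of this model would land near 20 GB and a ~8.5-bit Q8 near 38 GB, though actual GGUF sizes vary by a few GB. On a host with 0 GB of VRAM, whichever quantization you choose must fit in system RAM instead.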
Why are some layers offloaded to CPU?
CPU Only has 0 GB of VRAM, but Qwen3.5 35B A3B (Q2_K_XL) needs approximately 12.04 GB, so none of the 40 layers fit in VRAM and all of them run on the CPU. This is slower but still functional.
What is MoE and how does it affect deployment?
Qwen3.5 35B A3B uses a Mixture-of-Experts (MoE) architecture with 256 experts, of which 8 are active per token. This means only a fraction of the model weights are used for each inference step, allowing MoE models to be larger in total parameter count while remaining efficient at inference time.
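The numbers are consistent: 8 active experts out of 256 is about 3% of the expert weights per token, which is how a 35.95B-parameter model runs with only ~3B active parameters. Note that all 12.04 GB must still be resident in memory, since different tokens route to different experts; MoE reduces compute per token, not the memory footprint.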
How do I run Qwen3.5 35B A3B (Q2_K_XL) with Ollama?
Run `ollama run qwen3.5:35b-a3b-q2_k_xl` to start Qwen3.5 35B A3B (Q2_K_XL). Ollama handles downloading the model weights automatically on first run.