Gemma 4 26B A4B
Gemma 4 26B A4B IT is an instruction-tuned Mixture-of-Experts (MoE) model developed by Google DeepMind. It features 25.2 billion total parameters, with only 3.8 billion activating per token during inference, enabling high-quality outputs at reduced computational cost. The model supports multimodal inputs, including text, images, and video, and offers a 256K token context window, native function calling, configurable reasoning modes, and structured output capabilities. It is released under the Apache 2.0 license.
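As a sketch of how the model's native function calling might be exercised through an OpenAI-compatible endpoint such as OpenRouter's chat completions API: the model slug `google/gemma-4-26b-a4b-it` and the `get_weather` tool below are hypothetical, shown only to illustrate the shape of a tool-calling request.

```python
import json

# Hypothetical model slug; check the provider's model list for the real ID.
MODEL = "google/gemma-4-26b-a4b-it"

# A tool definition in the OpenAI-compatible "tools" format.
# `get_weather` is a made-up example function.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# Request body for POST https://openrouter.ai/api/v1/chat/completions.
# "tool_choice": "auto" lets the model decide whether to call the tool.
request_body = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": tools,
    "tool_choice": "auto",
}

print(json.dumps(request_body, indent=2))
```

When the model elects to use the tool, the response would carry a `tool_calls` entry naming `get_weather` with its JSON arguments, which the caller executes and feeds back as a `tool`-role message.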
Data enriched Apr 24, 2026. Pricing from OpenRouter API.