Qwen3.5-27B
Qwen3.5-27B is a dense vision-language model that uses a linear attention mechanism to speed up inference while maintaining strong performance. It supports reasoning-enabled tasks and structured outputs, making it suitable for complex applications. Its capabilities are comparable to those of larger models such as Qwen3.5-122B-A10B.
What is Qwen3.5-27B?
Qwen3.5-27B is an AI model from Alibaba that Agent Mag tracks for pricing, context window, modalities, benchmarks, and API compatibility. Builders can use this page to compare Qwen3.5-27B against other models for agent workflows and production deployments.
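Since API compatibility is one of the attributes tracked here, it may help to see what calling such a model typically looks like. The sketch below builds an OpenAI-compatible chat completions request using only the Python standard library; the endpoint URL, API key placeholder, and model identifier string are assumptions for illustration, not confirmed values from this page:

```python
import json
import urllib.request

# Hypothetical OpenAI-compatible endpoint; substitute your provider's URL.
BASE_URL = "https://example-provider.com/v1/chat/completions"
API_KEY = "YOUR_API_KEY"  # assumption: bearer-token authentication

def build_request(prompt: str, model: str = "qwen3.5-27b"):
    """Build (but do not send) an OpenAI-style chat completions request."""
    payload = {
        "model": model,  # assumption: the provider's model id string
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    req = urllib.request.Request(
        BASE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    return req, payload

req, payload = build_request("Summarize linear attention in one sentence.")
```

Sending the request with `urllib.request.urlopen(req)` would return a standard chat-completions JSON body, which makes swapping this model in for another OpenAI-compatible one a one-line change.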
Strengths
- Supports reasoning-enabled tasks
- Optimized for fast inference
- Structured output capabilities
- Comparable to larger models in performance
- Vision-language integration
Limitations
- Limited information on training data
- Hallucination rate of 20.3%
- Low accuracy in research-level physics reasoning (0.9%)
- Moderate coding capability (32.6% on Terminal-Bench Hard)
- Limited omniscience accuracy (21.0%)
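The structured-output capability listed above usually means constraining the model to emit valid JSON that matches a schema. A minimal sketch of that pattern, using only the Python standard library — the `response_format` shape follows the common OpenAI-style JSON-schema convention and is an assumption, as is the hypothetical model reply being validated:

```python
import json

# Assumption: the serving endpoint accepts an OpenAI-style json_schema response format.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "model_summary",
        "schema": {
            "type": "object",
            "properties": {
                "model": {"type": "string"},
                "params_billion": {"type": "number"},
                "modalities": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["model", "params_billion", "modalities"],
        },
    },
}

def validate_reply(raw: str) -> dict:
    """Parse a model reply and check the fields the schema marks as required."""
    data = json.loads(raw)  # raises ValueError if the reply is not valid JSON
    for field in response_format["json_schema"]["schema"]["required"]:
        if field not in data:
            raise ValueError(f"missing required field: {field}")
    return data

# Hypothetical reply a schema-constrained call might return.
sample = '{"model": "Qwen3.5-27B", "params_billion": 27, "modalities": ["text", "image"]}'
parsed = validate_reply(sample)
```

In production, validation like this serves as a guardrail: even with schema-constrained decoding, checking the reply before handing it to downstream code keeps agent pipelines robust.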
More from Alibaba
Qwen 3.6 Plus builds on a hybrid architecture that combines efficient linear attention with sparse mixture-of-experts routing, enabling strong scalability and high-performance inference. Compared to the 3.5 series, it delivers...
Qwen3.5-9B is a multimodal foundation model from the Qwen3.5 family, designed to deliver strong reasoning, coding, and visual understanding in an efficient 9B-parameter architecture. It uses a unified vision-language design...
The Qwen3.5 Series 35B-A3B is a native vision-language model designed with a hybrid architecture that integrates linear attention mechanisms and a sparse mixture-of-experts model, achieving higher inference efficiency. Its overall...
The Qwen3.5 122B-A10B native vision-language model is built on a hybrid architecture that integrates a linear attention mechanism with a sparse mixture-of-experts model, achieving higher inference efficiency. In terms of...
Related content
Compare pricing, local installs, context windows, and modality filters across the full model catalog.
Find frameworks, SDKs, and infrastructure tools that pair with this model in production workflows.
See Agent Mag coverage of model benchmarks, agent frameworks, and deployment patterns.