Trinity Large Thinking
Trinity Large Thinking is an open-source reasoning model developed by Arcee AI. It excels in reasoning tasks, agentic workloads, and benchmarks such as PinchBench. The model supports reasoning-enabled workflows, allowing users to access its step-by-step thinking process for enhanced decision-making capabilities.
What is Trinity Large Thinking?
Trinity Large Thinking is an AI model from Arcee AI that Agent Mag tracks for pricing, context window, modalities, benchmarks, and API compatibility. Builders can use this page to compare Trinity Large Thinking against other models for agent workflows and production deployments.
Strengths
- Strong performance in reasoning tasks
- Excels in agentic workloads
- Supports reasoning-enabled workflows, exposing a step-by-step thinking trace
- High benchmark scores on PinchBench and τ²-Bench Telecom
- Open-source availability
Weaknesses
- Low performance in research-level physics reasoning (CritPt: 0.9%)
- Moderate hallucination rate (13.4%)
- Limited coding capabilities (Terminal-Bench Hard: 22.7%)
- Relatively low accuracy on omniscience tasks (22.8%)
- Performance variability across benchmarks
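The reasoning-enabled workflow mentioned above is typically accessed through an OpenAI-compatible chat endpoint. The sketch below builds such a request; the model ID, base URL, and the exact shape of the reasoning flag are assumptions for illustration, not confirmed details of any provider's API, so check the provider's reference before use.

```python
# Hypothetical sketch: requesting Trinity Large Thinking's step-by-step
# thinking trace via an OpenAI-compatible chat-completions endpoint.
# The model ID, base URL, and "reasoning" flag are assumed placeholders.
import json
import urllib.request


def build_request(prompt: str,
                  model: str = "arcee-ai/trinity-large-thinking",
                  base_url: str = "https://api.example.com/v1") -> urllib.request.Request:
    """Build a POST request asking for the model's thinking trace
    alongside its final answer."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Assumed flag; providers differ in how reasoning traces are enabled.
        "reasoning": {"enabled": True},
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer YOUR_API_KEY",
        },
        method="POST",
    )


# Build (but do not send) a request for a multi-step reasoning task.
req = build_request("List the steps to diagnose an intermittent network fault.")
body = json.loads(req.data)
```

In a reasoning-enabled response, the thinking trace usually arrives in a separate field from the final answer, so agent code can log or display it without mixing it into downstream prompts.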
More from Arcee AI
Trinity-Large-Preview is a frontier-scale open-weight language model from Arcee, built as a 400B-parameter sparse Mixture-of-Experts with 13B active parameters per token using 4-of-256 expert routing. It excels in creative writing,...
Trinity Mini is a 26B-parameter (3B active) sparse mixture-of-experts language model featuring 128 experts with 8 active per token. Engineered for efficient reasoning over long contexts (131k) with robust function...
Virtuoso‑Large is Arcee's top‑tier general‑purpose LLM at 72 B parameters, tuned to tackle cross‑domain reasoning, creative writing and enterprise QA. Unlike many 70 B peers, it retains the 128 k...
Spotlight is a 7‑billion‑parameter vision‑language model derived from Qwen 2.5‑VL and fine‑tuned by Arcee AI for tight image‑text grounding tasks. It offers a 32 k‑token context window, enabling rich multimodal...
Related content
Compare pricing, local installs, context windows, and modality filters across the full model catalog.
Find frameworks, SDKs, and infrastructure tools that pair with this model in production workflows.
See Agent Mag coverage of model benchmarks, agent frameworks, and deployment patterns.