LFM2
LFM2 is a family of hybrid models designed for on-device deployment. LFM2-24B-A2B is the largest model in the family, scaling the architecture to 24 billion parameters while keeping inference efficient.
LFM2 is available through Ollama for local agent workflows, with support for text input.
LFM2 is a local model entry from Ollama that Agent Mag tracks for install commands, available tags, modalities, and agent-workflow fit. Builders can install it with the Agent Mag CLI and run it through Ollama on their own machines.
| Tag | Size | Context | Input |
|---|---|---|---|
| lfm2:latest | 14GB | 32K | text |
| lfm2:24b | 14GB | 32K | text |
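The tags in the table can be pulled and run locally with the Ollama CLI. A minimal sketch, assuming Ollama is installed and the `lfm2` tags are published in the Ollama model registry (the prompt text is illustrative):

```shell
# Pull the default tag (resolves to lfm2:latest)
ollama pull lfm2

# Or pull the 24B tag explicitly
ollama pull lfm2:24b

# Start an interactive chat session with the model
ollama run lfm2:24b

# One-off prompt; text input only, per the modalities in the table above
ollama run lfm2:24b "Summarize the benefits of on-device inference."
```

Both tags report the same 14GB download and 32K context window, so the choice between `lfm2:latest` and `lfm2:24b` mainly affects whether future `latest` updates change the model underneath you.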