Mistral Medium vs Sonar Large
A complete head-to-head: pricing, context window, benchmarks, modality coverage, and openness, with a programmatic verdict synthesized from the underlying data.
- Price · Sonar Large: roughly 2.0× cheaper on output tokens ($1.00 vs $2.00 per 1M).
- Context window · Mistral Medium: 128K tokens vs 127K, a marginal edge for long documents and codebases.
- Benchmarks · Tie: no directly comparable public benchmarks are available for both models; check the spec sheets for individual scores.
- Modalities · Tie: both handle text.
- Openness · Tie: both are closed-weight, API-only.
It's a genuine coin-flip between Mistral Medium and Sonar Large: one category win each, with the rest tied. Sonar Large is roughly 2.0× cheaper on output tokens ($1.00 vs $2.00 per 1M), while Mistral Medium accepts 128K tokens vs 127K, a marginal context-window edge for long documents and codebases.
No directly comparable public benchmarks are available for both models; check the spec sheets for individual scores. Both target the same modality (text), so the deciding factors are price, context, and raw quality. Both are closed-weight, API-only.
Both shipped within roughly a month of each other in 2024, so they share the same generation of training data and tooling. Mistral Medium is usually picked for customer-support and chatbot workloads, while Sonar Large sees more deployments in research agents and RAG. If pricing matters more than every last benchmark point, run the numbers in the calculator below before committing.
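The per-category tally behind that verdict can be sketched in a few lines of Python. This is a hypothetical reconstruction of the scoring logic described above, not this site's actual code; the category winners are the ones listed in the summary.

```python
from collections import Counter

# Per-category winners as stated above; "tie" scores for neither model.
categories = {
    "price": "Sonar Large",              # ~2.0x cheaper on output tokens
    "context window": "Mistral Medium",  # 128K vs 127K
    "benchmarks": "tie",
    "modalities": "tie",
    "openness": "tie",
}

# Count outright wins, ignoring ties.
wins = Counter(w for w in categories.values() if w != "tie")

if wins["Mistral Medium"] == wins["Sonar Large"]:
    verdict = "coin-flip"
else:
    verdict = wins.most_common(1)[0][0]

print(verdict)  # coin-flip: one category win each
```

With one win apiece and three ties, no weighting scheme short of prioritizing one category (e.g. price) breaks the deadlock, which is why the page defers to the pricing calculator.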
Side-by-side specs
| Spec | Mistral Medium | Sonar Large |
|---|---|---|
| Provider | Mistral | Perplexity |
| Released | Dec 2024 | Nov 2024 |
| Modalities | text | text |
| Context window | 128K tokens | 127K tokens |
| Max output | Not published | Not published |
| Input price | $0.40 / 1M tokens | $1.00 / 1M tokens |
| Output price | $2.00 / 1M tokens | $1.00 / 1M tokens |
| Knowledge cutoff | Not published | Not published |
| Open weights | No | No |
| API available | Yes | Yes |
Pricing at scale
What you'd actually pay at typical workloads. Numbers come from each model's published per-million-token rates.
- Light usage (100k in / 50k out per day): $4.20 vs $4.50 / month
- Heavy usage (1M in / 500k out per day): $42.00 vs $45.00 / month
- RAG workload (5M in / 200k out per day): $72.00 vs $156.00 / month
Light usage (100k in / 50k out per day): $4.20 vs $4.50 per month, so Mistral Medium comes out ahead. Heavy usage (1M in / 500k out per day): $42.00 vs $45.00 per month, Mistral Medium again. RAG workload (5M in / 200k out per day): $72.00 vs $156.00 per month, where Mistral Medium's cheaper input rate dominates.
Estimated daily spend at the light-usage example above, derived from each model's published per-million-token rates.
- Mistral Medium: $0.14 / day
- Sonar Large: $0.15 / day
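The arithmetic behind these estimates is simple enough to check yourself. A minimal sketch, assuming 30 billing days per month; `monthly_cost` is a hypothetical helper, and the rates and workloads are the ones quoted on this page:

```python
# Monthly cost from published per-1M-token rates (hypothetical helper).
def monthly_cost(rate_in, rate_out, tokens_in_per_day, tokens_out_per_day, days=30):
    """Rates are $ per 1M tokens; token volumes are per day."""
    daily = (tokens_in_per_day * rate_in + tokens_out_per_day * rate_out) / 1_000_000
    return daily * days

mistral = (0.40, 2.00)  # input $/1M, output $/1M
sonar = (1.00, 1.00)

# RAG workload: 5M input / 200k output tokens per day
print(round(monthly_cost(*mistral, 5_000_000, 200_000), 2))  # 72.0
print(round(monthly_cost(*sonar, 5_000_000, 200_000), 2))    # 156.0
```

The RAG case shows why input-heavy workloads favor Mistral Medium: at 5M input tokens a day, its $0.40 input rate outweighs its pricier output tokens.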
Benchmarks compared
Only sourced numbers. Where a benchmark is missing for one model, we show the available value rather than fabricating the other.
Mistral Medium fits when…
- Mid-tier balance
- EU-friendly
Sonar Large fits when…
- Web-search grounded
- Citation-first output
- Cost-sensitive workloads: roughly 2.0× cheaper than Mistral Medium on output tokens
Consider Mistral Large 2
Mistral's flagship API model, strong on code and reasoning, EU-friendly hosting.
Frequently asked
Is Mistral Medium or Sonar Large cheaper?
Sonar Large is cheaper at $1.00 per million output tokens, vs $2.00 for Mistral Medium.
Which has the larger context window?
Mistral Medium accepts 128K tokens vs 127K for Sonar Large.
Is Mistral Medium or Sonar Large better for coding?
Both models are competitive on coding benchmarks. See each model's individual spec page for HumanEval and SWE-bench scores where published. For an opinionated pick, consult our Best LLM for Coding ranking.
Are either of these models open source?
Neither model ships open weights; both are accessible only via their respective providers' APIs.
When were Mistral Medium and Sonar Large released?
Mistral Medium was released by Mistral on 2024-12-18. Sonar Large was released by Perplexity on 2024-11-19.