LLM·Dex

Grok 4 vs Mistral Large 2

A complete head-to-head: pricing, context window, benchmarks, modality coverage, and openness, with a programmatic verdict synthesized from the underlying data.

Grok 4 specs · Mistral Large 2 specs
Verdict by category
  • Price: Mistral Large 2

    Mistral Large 2 is roughly 2.5× cheaper on output tokens ($6.00 vs $15.00 per 1M).

  • Context window: Grok 4

    Grok 4 accepts 256K tokens vs 128K, 2.0× the room for long documents and codebases.

  • Benchmarks: Tie

    No directly comparable public benchmarks are available for both models; check the spec sheets for individual scores.

  • Modalities: Grok 4

    Grok 4 supports 2 modalities (text, vision) vs 1 for Mistral Large 2.

  • Openness: Mistral Large 2

    Mistral Large 2 ships open weights (Mistral Research License); Grok 4 is API-only.

It's a genuine coin-flip between Grok 4 and Mistral Large 2: 2 category wins each, with the rest tied. Mistral Large 2 is roughly 2.5× cheaper on output tokens ($6.00 vs $15.00 per 1M). Grok 4 accepts 256K tokens vs 128K, 2.0× the room for long documents and codebases.

No directly comparable public benchmarks are available for both models; check the spec sheets for individual scores. They differ in modality coverage: Grok 4 handles text and vision, while Mistral Large 2 is text-only, which can be the deciding factor before you even look at benchmarks. Mistral Large 2 ships open weights (Mistral Research License); Grok 4 is API-only.

Grok 4 is the newer of the two, released 12 months after Mistral Large 2, which usually means a more recent knowledge cutoff and updated safety post-training. Grok 4 is usually picked for reasoning and research-agent workloads, while Mistral Large 2 sees more deployments in coding and multilingual work. If pricing matters more than every last benchmark point, run the numbers in the calculator below before committing.

Side-by-side specs

Spec                     Grok 4           Mistral Large 2
Provider                 xAI              Mistral
Released                 Jul 2025         Jul 2024
Modalities               text, vision     text
Context window           256K tokens      128K tokens
Max output               n/a              n/a
Input / 1M tokens        $3.00            $2.00
Output / 1M tokens       $15.00           $6.00
Knowledge cutoff         2024-11          2024-04
Open weights             No               Yes (Mistral Research License)
API available            Yes              Yes

Pricing at scale

What you'd actually pay at typical workloads. Numbers come from each model's published per-million-token rates.

  • Light usage, 100k in / 50k out per day: $31.50 vs $15.00
  • Heavy usage, 1M in / 500k out per day: $315 vs $150
  • RAG workload, 5M in / 200k out per day: $540 vs $336

All three figures are monthly totals (Grok 4 vs Mistral Large 2), and Mistral Large 2 comes out ahead in every scenario.
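The arithmetic behind these figures is straightforward per-million-token math. A minimal sketch, assuming a 30-day month and using the published rates from the spec table above (the function name is illustrative, not part of any site API):

```python
def monthly_cost(tokens_in_per_day, tokens_out_per_day,
                 rate_in, rate_out, days=30):
    """Monthly spend from daily token volumes and $/1M-token rates."""
    daily = (tokens_in_per_day / 1e6) * rate_in \
          + (tokens_out_per_day / 1e6) * rate_out
    return daily * days

GROK_4 = (3.00, 15.00)           # $ per 1M input / output tokens
MISTRAL_LARGE_2 = (2.00, 6.00)

# Light usage: 100k in / 50k out per day
print(round(monthly_cost(100_000, 50_000, *GROK_4), 2))           # 31.5
print(round(monthly_cost(100_000, 50_000, *MISTRAL_LARGE_2), 2))  # 15.0
```

The same formula with days=1 reproduces the per-day figures in the calculator section: $1.05 for Grok 4 and $0.50 for Mistral Large 2 at the light-usage default.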

Price calculator

Estimated spend for the listed models at your usage. Numbers are derived from each model's published per-million-token rates.

  • Grok 4: $1.05
  • Mistral Large 2: $0.50

Benchmarks compared

Only sourced numbers. Where a benchmark is missing for one model we show the available value rather than fabricating the other.

Benchmark      Grok 4    Mistral Large 2
MMLU           n/a       84.0
HumanEval      n/a       92.0
GPQA           87.5      n/a
Pick Grok 4 if

Grok 4 fits when…

  • Real-time X data access
  • Strong on benchmarks at launch
  • Less restrictive content policy than peers
  • Long-context tasks: handles 256K tokens vs 128K for Mistral Large 2.
  • Multimodal needs covering vision.

Pick Mistral Large 2 if

Mistral Large 2 fits when…

  • Strong code and math
  • EU data residency
  • Multilingual
  • Cost-sensitive workloads: 2.5× cheaper than Grok 4 on output tokens.
  • Self-hosting and on-prem requirements: open weights (Mistral Research License).

Don't want either?

Consider Grok 3

xAI's first frontier-tier release, established the company's Colossus-trained model line.

Frequently asked

  • Is Grok 4 or Mistral Large 2 cheaper?
    Mistral Large 2 is cheaper: $6.00 per 1M output tokens vs $15.00 for Grok 4.
  • Which has the larger context window?
    Grok 4 accepts 256K tokens vs 128K for Mistral Large 2.
  • Is Grok 4 or Mistral Large 2 better for coding?
    Both Grok 4 and Mistral Large 2 are competitive on coding benchmarks. See each model's individual spec page for HumanEval and SWE-bench scores where published. For an opinionated pick, consult our Best LLM for Coding ranking.
  • Are either of these models open source?
    Mistral Large 2 ships open weights (Mistral Research License). Grok 4 is API-only.
  • When were Grok 4 and Mistral Large 2 released?
    Grok 4 was released by xAI on 2025-07-09. Mistral Large 2 was released by Mistral on 2024-07-24.