Mixtral 8×22B vs Pixtral Large

A complete head-to-head: pricing, context window, benchmarks, modality coverage, and openness, with a programmatic verdict synthesized from the underlying data.
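The "programmatic verdict" is a simple tally: each category names a winner or a tie, and wins are counted. A minimal sketch of that logic, mirroring the category calls below (illustrative only, not LLM·Dex's actual pipeline):

```python
from collections import Counter

# Category winners as reported below; None marks a tie.
verdicts = {
    "price": None,
    "context_window": "Pixtral Large",
    "benchmarks": None,
    "modalities": "Pixtral Large",
    "openness": None,
}

# Count wins per model, ignoring ties.
wins = Counter(w for w in verdicts.values() if w is not None)

if wins:
    leader, n = wins.most_common(1)[0]
    print(f"{leader} wins {n} of {len(verdicts)} categories")
else:
    print("All categories tie")
# -> Pixtral Large wins 2 of 5 categories
```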

Verdict by category
  • Price: Tie

    Neither model publishes per-token API pricing.

  • Context window: Pixtral Large

    Pixtral Large accepts 128K tokens vs 64K, 2.0× the room for long documents and codebases.

  • Benchmarks: Tie

    No directly comparable public benchmarks are available for both models; check the spec sheets for individual scores.

  • Modalities: Pixtral Large

    Pixtral Large supports 2 modalities (text, vision) vs 1 for Mixtral 8×22B.

  • Openness: Tie

    Both ship open weights, so you can self-host either one.
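Since both publish weights, self-hosting is a realistic option. A minimal loading sketch via Hugging Face transformers, using Mixtral 8×22B as the example; the repo ID is an assumption, so verify it on huggingface.co, and note that a 141B-parameter MoE needs multi-GPU hardware:

```python
# Minimal self-hosting sketch via Hugging Face transformers.
# The repo ID below is an assumption -- verify on huggingface.co.
# Mixtral 8x22B is a 141B-parameter sparse MoE (39B active), so
# expect to need several large GPUs even at bf16 precision.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO = "mistralai/Mixtral-8x22B-Instruct-v0.1"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(REPO)
model = AutoModelForCausalLM.from_pretrained(
    REPO,
    torch_dtype=torch.bfloat16,  # halve memory vs fp32
    device_map="auto",           # shard layers across available GPUs
)

prompt = "Explain mixture-of-experts routing in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```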

On balance Pixtral Large edges ahead, winning 2 of 5 categories against Mixtral 8×22B's 0 (the remaining three are ties). Neither model publishes per-token API pricing. Pixtral Large accepts 128K tokens vs 64K, 2.0× the room for long documents and codebases.
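The context gap is easiest to judge against your own data. A rough fit-check, assuming the common ~4-characters-per-token heuristic for English text (actual counts depend on each model's tokenizer; the file path is hypothetical):

```python
# Rough check: does a document fit each model's context window?
# Uses the crude ~4 chars/token heuristic; real counts depend on
# the model's tokenizer.
from pathlib import Path

# Nominal window sizes from the spec table below.
WINDOWS = {"Mixtral 8x22B": 64_000, "Pixtral Large": 128_000}

def estimate_tokens(path: str) -> int:
    text = Path(path).read_text(encoding="utf-8", errors="ignore")
    return len(text) // 4  # heuristic, not a tokenizer

tokens = estimate_tokens("big_spec.md")  # hypothetical file
for model, window in WINDOWS.items():
    verdict = "fits" if tokens <= window else "does not fit"
    print(f"{model}: ~{tokens:,} tokens {verdict} in a {window:,}-token window")
```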

No directly comparable public benchmarks are available for both models; check the spec sheets for individual scores. Both ship open weights, so either can be self-hosted. Where they clearly diverge is modality coverage: Mixtral 8×22B handles text only, while Pixtral Large handles text and vision, which can be the deciding factor before you even look at benchmarks. The sketch below shows what that difference looks like in a request payload.
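A minimal illustration, assuming an OpenAI-style chat-completions schema (which Mistral's API broadly follows); the model IDs and field names here are illustrative, not verbatim API documentation:

```python
import json

# Mixtral 8x22B: text is the only available content type.
text_only = {
    "model": "open-mixtral-8x22b",  # illustrative model ID
    "messages": [
        {"role": "user", "content": "Summarize this contract."},
    ],
}

# Pixtral Large: content can interleave text and image parts.
multimodal = {
    "model": "pixtral-large-latest",  # illustrative model ID
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Extract the line items from this invoice."},
            {"type": "image_url", "image_url": "https://example.com/invoice.png"},
        ],
    }],
}

print(json.dumps(multimodal, indent=2))
```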

Pixtral Large is the newer of the two, released 7 months after Mixtral 8×22B, which usually means a more recent knowledge cutoff and updated safety post-training. Mixtral 8×22B is usually picked for open-source and commercial-use LLM workloads, while Pixtral Large sees more deployments in vision and OCR. If pricing matters more than every last benchmark point, run the numbers in the calculator below before committing.

Side-by-side specs

Spec             | Mixtral 8×22B         | Pixtral Large
Provider         | Mistral               | Mistral
Released         | Apr 2024              | Nov 2024
Modalities       | text                  | text, vision
Context window   | 64K tokens            | 128K tokens
Max output       | —                     | —
Input · 1M       | Pricing not published | Pricing not published
Output · 1M      | Pricing not published | Pricing not published
Knowledge cutoff | —                     | —
Open weights     | Yes (Apache-2.0)      | Yes (Mistral Research License)
API available    | Yes                   | Yes

Pricing at scale

What you'd actually pay at typical workloads, computed from each model's published per-million-token rates, where available.

  • Light usage, 100k in / 50k out per day: not comparable (rates unpublished)
  • Heavy usage, 1M in / 500k out per day: not comparable (rates unpublished)
  • RAG workload, 5M in / 200k out per day: not comparable (rates unpublished)

All three workloads hit the same wall: one or both models are missing public per-token rates, so no dollar figures can be computed. The underlying arithmetic is simple, though, as the sketch below shows.
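A minimal sketch of that arithmetic, with placeholder rates (the dollar figures are hypothetical, since neither model publishes pricing):

```python
# Per-token cost arithmetic used throughout this page:
#   cost = in_tokens/1e6 * in_rate + out_tokens/1e6 * out_rate
# The rates below are PLACEHOLDERS -- neither model publishes API pricing.

RATES = {"hypothetical-model": (2.00, 6.00)}  # ($ per 1M in, $ per 1M out)

WORKLOADS = {
    "Light usage": (100_000, 50_000),       # input/output tokens per day
    "Heavy usage": (1_000_000, 500_000),
    "RAG workload": (5_000_000, 200_000),
}

def daily_cost(in_tok: int, out_tok: int, in_rate: float, out_rate: float) -> float:
    return in_tok / 1e6 * in_rate + out_tok / 1e6 * out_rate

in_rate, out_rate = RATES["hypothetical-model"]
for name, (i, o) in WORKLOADS.items():
    print(f"{name}: ${daily_cost(i, o, in_rate, out_rate):.2f}/day")
```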

Price calculator

Estimated spend for the listed models at your usage. Numbers are derived from each model's published per-million-token rates, where available.

  • Mixtral 8×22B: Pricing unavailable
  • Pixtral Large: Pricing unavailable

Benchmarks compared

Only sourced numbers. Where a benchmark is missing for one model we show the available value rather than fabricating the other.

Benchmark | Mixtral 8×22B | Pixtral Large
MMLU      | 77.8          | —

Pick Mixtral 8×22B if

Mixtral 8×22B fits when…

  • Apache-2.0 license, permissive for commercial use.
  • MoE economics: sparse expert routing keeps serving cost below a comparably sized dense model.
  • Mature, deployed widely since its April 2024 release.

Pick Pixtral Large if

Pixtral Large fits when…

  • Strong document AI, vision input suits OCR and document extraction.
  • Open weights (Mistral Research License; check the terms for commercial use).
  • Long-context tasks, handles 128K tokens vs 64K for Mixtral 8×22B.
  • Multimodal needs covering vision.

Don't want either?

Consider Mistral Large 2

Mistral's flagship API model, strong on code and reasoning, EU-friendly hosting.

Frequently asked

  • Is Mixtral 8×22B or Pixtral Large cheaper?
    Per-token pricing isn't published for at least one of these models; check each model's spec page for current rates.
  • Which has the larger context window?
    Pixtral Large accepts 128K tokens vs 64K for Mixtral 8×22B.
  • Is Mixtral 8×22B or Pixtral Large better for coding?
    Both Mixtral 8×22B and Pixtral Large are competitive on coding benchmarks. See each model's individual spec page for HumanEval and SWE-bench scores where published. For an opinionated pick, consult our Best LLM for Coding ranking.
  • Are either of these models open source?
    Both ship with open weights. Mixtral 8×22B is licensed under Apache-2.0; Pixtral Large under Mistral Research License.
  • When were Mixtral 8×22B and Pixtral Large released?
    Mixtral 8×22B was released by Mistral on 2024-04-10. Pixtral Large was released by Mistral on 2024-11-18.