
Molmo 72B vs SmolLM2 1.7B

A complete head-to-head: pricing, context window, benchmarks, modality coverage, and openness, with a programmatic verdict synthesized from the underlying data.


Verdict by category
  • Price: Tie

    Neither model publishes per-token API pricing.

  • Context window: SmolLM2 1.7B

    SmolLM2 1.7B accepts 8.2K tokens vs 4K, 2.0× the room for long documents and codebases (see the sketch after this list).

  • Benchmarks: Tie

    No directly comparable public benchmarks are available for both models; check the spec sheets for individual scores.

  • Modalities: Molmo 72B

    Molmo 72B supports 2 modalities (text, vision) vs 1 for SmolLM2 1.7B.

  • Openness: Tie

    Both ship open weights; you can self-host either one.
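
A minimal sketch of the context-window arithmetic above, assuming 4K means 4,096 tokens and 8.2K means 8,192 (a rounding assumption), with a crude 4-characters-per-token heuristic; use each model's own tokenizer for exact counts:

    CONTEXT_WINDOWS = {"Molmo 72B": 4_096, "SmolLM2 1.7B": 8_192}

    def rough_token_count(text: str) -> int:
        # ~4 characters per token is a rough English-text heuristic.
        return max(1, len(text) // 4)

    def fits(text: str) -> dict:
        n = rough_token_count(text)
        return {model: n <= window for model, window in CONTEXT_WINDOWS.items()}

    doc = "lorem ipsum " * 2_500   # ~30k chars, ~7.5k tokens
    print(fits(doc))               # {'Molmo 72B': False, 'SmolLM2 1.7B': True}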

It's a genuine coin-flip between Molmo 72B and SmolLM2 1.7B: each model wins one category, with the rest tied. Neither publishes per-token API pricing. SmolLM2 1.7B accepts 8.2K tokens vs 4K, twice the room for long documents and codebases.

No directly comparable public benchmarks are available for both models; check the spec sheets for individual scores. They differ in modality coverage: Molmo 72B handles text and vision while SmolLM2 1.7B handles text only, which can be the deciding factor before you even look at benchmarks. Both ship open weights, so you can self-host either one.
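
If your traffic mixes image and text requests, that modality gap translates directly into routing logic. A minimal sketch, with a hypothetical Request type standing in for whatever your serving stack actually uses:

    from dataclasses import dataclass, field

    @dataclass
    class Request:                 # hypothetical request shape
        prompt: str
        images: list = field(default_factory=list)

    def pick_model(req: Request) -> str:
        # Molmo 72B handles text + vision; SmolLM2 1.7B is text-only.
        return "Molmo 72B" if req.images else "SmolLM2 1.7B"

    print(pick_model(Request("Describe this chart.", images=[b"<png>"])))  # Molmo 72B
    print(pick_model(Request("Summarize this memo.")))                     # SmolLM2 1.7B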

SmolLM2 1.7B is the newer of the two, released one month after Molmo 72B, which usually means a more recent knowledge cutoff and updated safety post-training. Molmo 72B is usually picked for vision and open-source LLM workloads, while SmolLM2 1.7B sees more deployments on-device and at the edge. If pricing matters more than every last benchmark point, run the numbers in the calculator below before committing.
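
Since both ship open weights, self-hosting the smaller model is a short script. A sketch using Hugging Face transformers; the repo id HuggingFaceTB/SmolLM2-1.7B-Instruct is an assumption here, so verify it against the official model card. Molmo 72B loads the same way in principle but needs far more memory at 72B parameters.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "HuggingFaceTB/SmolLM2-1.7B-Instruct"   # assumed repo id
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"  # device_map needs `accelerate`
    )

    inputs = tok("Summarize: open weights mean you can self-host.",
                 return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    print(tok.decode(out[0], skip_special_tokens=True))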

Side-by-side specs

Spec              Molmo 72B              SmolLM2 1.7B
Provider          Other                  Other
Released          Sep 2024               Nov 2024
Modalities        text, vision           text
Context window    4K tokens              8.2K tokens
Max output        Not listed             Not listed
Input · 1M        Pricing not published  Pricing not published
Output · 1M       Pricing not published  Pricing not published
Knowledge cutoff  Not listed             Not listed
Open weights      Yes (Apache-2.0)       Yes (Apache-2.0)
API available     No                     No

Pricing at scale

What you'd actually pay at typical workloads, derived from each model's published per-million-token rates where available.

  • Light usage (100k in / 50k out per day): rates not published for either model
  • Heavy usage (1M in / 500k out per day): rates not published for either model
  • RAG workload (5M in / 200k out per day): rates not published for either model

At all three workloads the comparison is moot: neither model publishes public per-token rates.
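
If rates ever do get published, the arithmetic behind these workload numbers is simple: daily cost = (input tokens / 1M) × input rate + (output tokens / 1M) × output rate. A sketch with placeholder rates, which are purely illustrative:

    # Placeholder $/1M-token rates for illustration only; neither
    # model publishes actual per-token pricing.
    IN_RATE, OUT_RATE = 0.50, 1.50

    WORKLOADS = {                  # (input tokens, output tokens) per day
        "Light usage":  (100_000, 50_000),
        "Heavy usage":  (1_000_000, 500_000),
        "RAG workload": (5_000_000, 200_000),
    }

    def daily_cost(tokens_in: int, tokens_out: int) -> float:
        return tokens_in / 1e6 * IN_RATE + tokens_out / 1e6 * OUT_RATE

    for name, (tin, tout) in WORKLOADS.items():
        print(f"{name}: ${daily_cost(tin, tout):.2f}/day")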

Price calculator

Estimated spend for the listed models at your usage. Numbers are derived from each model's published per-million-token rates where available.

  • Molmo 72B: pricing unavailable
  • SmolLM2 1.7B: pricing unavailable

Benchmarks compared

Only sourced numbers are shown. Where a benchmark is missing for one model we show the available value rather than fabricating the other.

Benchmark scores not yet available. We only publish numbers we can source from official model cards or independent leaderboards; see methodology.

Pick Molmo 72B if

  • Fully open (data + code + weights)
  • Apache-2.0
  • Multimodal needs covering vision

Pick SmolLM2 1.7B if

  • Truly tiny
  • Apache-2.0
  • Runs on phones
  • Long-context tasks: handles 8.2K tokens vs 4K for Molmo 72B

Don't want either?

Consider DBRX

Databricks' 132B MoE, a notable 2024 open-weight release tuned for enterprise.

Frequently asked

  • Is Molmo 72B or SmolLM2 1.7B cheaper?
    Per-token pricing isn't published for at least one of these models; check each model's spec page for current rates.
  • Which has the larger context window?
    SmolLM2 1.7B accepts 8.2K tokens vs 4K for Molmo 72B.
  • Is Molmo 72B or SmolLM2 1.7B better for coding?
    No directly comparable public coding benchmarks are available for this pair. See each model's individual spec page for HumanEval and SWE-bench scores where published. For an opinionated pick, consult our Best LLM for Coding ranking.
  • Are either of these models open source?
    Both ship with open weights, and both are licensed under Apache-2.0.
  • When were Molmo 72B and SmolLM2 1.7B released?
    Molmo 72B was released on 2024-09-25; SmolLM2 1.7B followed on 2024-11-01.