
Best On-Device LLMs in 2026

Small models that run on phones, laptops, or edge hardware.


How we ranked

  • Performance under 8B parameters
  • Quantization tolerance (int4, int8)
  • Memory footprint after quantization
  • Inference speed on Apple Silicon / mobile NPUs
  • License permissiveness for commercial use

Read the full methodology for our sourcing and ranking standards.

On-device LLMs went from research curiosity to product-ready between 2024 and 2026. A 4-bit quantized 8B model fits in roughly 6GB of RAM and runs on a recent MacBook at twenty-plus tokens per second, fast enough to feel like a chat app; the back-of-envelope arithmetic is sketched below.
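To sanity-check those numbers, here is a minimal Python sketch of the usual back-of-envelope: weights cost bits/8 bytes per parameter, and batch-1 decoding is memory-bandwidth-bound, so tokens per second is roughly bandwidth divided by weight size. The 100 GB/s bandwidth figure and the clean 4.0 bits per weight are illustrative assumptions, not measurements.

```python
# Back-of-envelope sizing for an on-device model. All constants are
# illustrative assumptions, not measurements.

def weight_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight size in GB for a quantized model."""
    # params * (bits / 8) bytes each. Real int4 formats carry group-scale
    # overhead, so effective bits per weight lands closer to 4.5-5.
    return params_billions * bits_per_weight / 8

def decode_tokens_per_s(weights_gb: float, bandwidth_gb_s: float) -> float:
    """Rough batch-1 decode speed: each generated token streams the
    full weights through memory once."""
    return bandwidth_gb_s / weights_gb

w4 = weight_gb(8, 4.0)  # 8B model at int4 -> ~4.0 GB of weights
w8 = weight_gb(8, 8.0)  # 8B model at int8 -> ~8.0 GB of weights
print(f"int4: {w4:.1f} GB, int8: {w8:.1f} GB (plus KV cache and runtime)")
print(f"~{decode_tokens_per_s(w4, 100):.0f} tok/s at an assumed 100 GB/s")
```

At 4GB of weights and an assumed 100 GB/s, that works out to roughly 25 tokens per second, which is where the "twenty-plus" figure comes from; the rest of the 6GB budget goes to the KV cache, quantization scales, and the runtime itself.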

Microsoft's Phi line is the highest-quality-per-byte family in this size class. Meta's Llama and Google's Gemma are close behind, with the upside of broader community tooling. Qwen's small models are particularly strong at code and math.

For shipping consumer apps, the practical choice is whatever model fits in your target device's VRAM with room for the system. Pin to a 7-8B model and you'll cover most modern phones and laptops.
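As a concrete starting point, here is a minimal loading-and-inference sketch using llama-cpp-python, one of the common runtimes for quantized GGUF models on laptops. The model filename and path are hypothetical placeholders; substitute whichever model you pin to.

```python
# Minimal local-inference sketch with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="models/your-8b-model-q4_k_m.gguf",  # hypothetical path
    n_ctx=8192,        # context window; longer contexts grow the KV cache
    n_gpu_layers=-1,   # offload all layers to Metal/GPU where available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one line, what is quantization?"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```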

The ranking

  1. Phi-4 · Microsoft · Open weights

    Microsoft's 14B model, exceptional quality-per-parameter via curated synthetic training data.

    Context: 16K tokens · Output: 1M · Pricing: not published · Modalities: text

    Why it ranks here: exceptional quality at 14B parameters; MIT license, clean commercial use. Tracked weakness: short 16K context.

  2. Phi-3.5 Medium · Microsoft · Open weights

    14B Phi-3.5, predecessor to Phi-4 with strong benchmark efficiency for its size.

    Context: 128K tokens · Output: 1M · Pricing: not published · Modalities: text

    Why it ranks here: MIT license; 128K context. Tracked weakness: superseded by Phi-4.

  3. Gemma 2 9B · Google · Open weights

    Google's mid-2024 open-weight 9B: strong quality for its size, friendly license.

    Context: 8.2K tokens · Output: 1M · Pricing: not published · Modalities: text

    Why it ranks here: strong 9B-class quality; wide tooling. Tracked weakness: short 8K context.

  4. Qwen2.5-7B · Alibaba · Open weights

    Small Qwen, a practical default for laptop and edge inference.

    Context: 128K tokens · Output: 1M · Pricing: not published · Modalities: text

    Why it ranks here: Apache-2.0 license; runs on laptops. Tracked weakness: quality limited by size.

  5. Llama 4 8B · Meta · Open weights

    Meta's small Llama 4, built for on-device and edge inference.

    Context: 128K tokens · Output: 1M · Pricing: not published · Modalities: text

    Why it ranks here: runs on consumer laptops; broad tooling support. Tracked weakness: quality limited by size.

  6. Ministral 8B · Mistral · Open weights

    Mistral's 8B edge model, designed specifically for on-device and on-prem deployment.

    Context: 128K tokens · Output: 1M · Pricing: not published · Modalities: text

    Why it ranks here: edge-optimized; strong 8B-class quality. Tracked weakness: research license restricts unmodified commercial deployment.

  7. SmolLM2 1.7B · Hugging Face · Open weights

    Hugging Face's tiny model line; punches above its weight on a strict on-device budget.

    Context: 8.2K tokens · Output: 1M · Pricing: not published · Modalities: text

    Why it ranks here: truly tiny; Apache-2.0 license. Tracked weakness: quality very limited.

How to choose

Don't pick on the headline ranking alone. Run your top two picks on a representative sample of your own workload and compare; a minimal harness for this is sketched after the list below. The numbers in this list are sound, but task-specific quality varies in ways no benchmark fully captures. The criteria above are the right axes to evaluate on, but the weighting depends on your stack.

  • Cost-sensitive workloads: start with the cheapest of the top three; escalate only if quality is the bottleneck.
  • Privacy-sensitive workloads: filter to the open-weight picks above (on this list, every entry ships open weights).
  • Latency-sensitive workloads: see the Fastest LLMs list, which can override task-specific picks.
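For the workload comparison recommended above, here is a sketch of a minimal A/B harness: the same prompts run through your two shortlisted models, with outputs printed side by side for manual grading or a scoring function of your own. Model paths and prompts are hypothetical placeholders.

```python
# Hypothetical A/B harness: run identical prompts through two local
# models and compare the outputs. Paths and prompts are placeholders.
from llama_cpp import Llama

CANDIDATES = {
    "phi-4": "models/phi-4-q4_k_m.gguf",
    "qwen2.5-7b": "models/qwen2.5-7b-instruct-q4_k_m.gguf",
}
PROMPTS = [
    "Representative task 1 from your real workload...",
    "Representative task 2 from your real workload...",
]

for name, path in CANDIDATES.items():
    llm = Llama(model_path=path, n_ctx=8192, verbose=False)
    for prompt in PROMPTS:
        out = llm.create_chat_completion(
            messages=[{"role": "user", "content": prompt}],
            max_tokens=256,
            temperature=0.0,  # near-deterministic sampling for a fair comparison
        )
        print(f"--- {name} | {prompt[:40]}")
        print(out["choices"][0]["message"]["content"], "\n")
```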

Frequently asked

  • What is the best on-device LLM?
    Our #1 pick is Phi-4 from Microsoft: a 14B model with exceptional quality-per-parameter via curated synthetic training data.
  • How are these rankings determined?
    We rank by the criteria listed at the top of this page: performance under 8B parameters; quantization tolerance (int4, int8); memory footprint after quantization; inference speed on Apple Silicon and mobile NPUs; and license permissiveness for commercial use. Where two models are close, we prefer the one with stronger production deployment evidence at the time of writing. Read the full methodology for our standards.
  • Phi-4 or Phi-3.5 Medium?
    Both are top-tier picks. Phi-4 edges ahead on the criteria most relevant to this task. Phi-3.5 Medium is the strongest alternative; see the head-to-head comparison page for full deltas.
  • Are open-source models on this list?
    Yes, where they're competitive. Each entry above shows whether the model ships open weights and under what license.
  • How often is this list updated?
    Weekly. New launches that affect the ranking get reflected within seven days. The "last updated" stamp at the top of the page reflects the most recent dataset commit.
