LLM·Dex

DeepSeek-V3

DeepSeek's flagship 671B-parameter MoE: frontier-level quality at a fraction of frontier prices.


Quick facts

Released
Dec 2024
Context
128K tokens
Output price
$1.10 / 1M tokens
License
MIT

About DeepSeek-V3

DeepSeek-V3 was the late-2024 release that broke the cost ceiling on frontier-quality LLMs. The 671B-parameter mixture-of-experts model activates only 37B per token, which makes it dramatically cheaper to serve than dense models of comparable quality.
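The efficiency comes from top-k expert routing: a router scores all experts per token, and only the highest-scoring few actually run, so roughly 37B of the 671B parameters are active at any step (about 5.5%). A toy sketch of that idea, in TypeScript — illustrative only, not DeepSeek's actual implementation:

```typescript
// Toy MoE routing sketch (hypothetical names, not DeepSeek's code):
// a router scores experts per token; only the top-k experts execute,
// so active parameters stay a small fraction of total parameters.
type Expert = (x: number[]) => number[];

function topK(scores: number[], k: number): number[] {
  return scores
    .map((s, i) => [s, i] as [number, number])
    .sort((a, b) => b[0] - a[0])
    .slice(0, k)
    .map(([, i]) => i);
}

function moeLayer(
  x: number[],
  experts: Expert[],
  routerScores: number[],
  k: number
): number[] {
  const chosen = topK(routerScores, k);
  // Average the chosen experts' outputs; real MoE layers use
  // softmax-weighted sums of the router scores instead.
  const outs = chosen.map((i) => experts[i](x));
  return x.map((_, d) => outs.reduce((acc, o) => acc + o[d], 0) / outs.length);
}
```

The cost win is that `experts.length` can be huge while only `k` experts run per token; serving cost tracks active parameters, not total.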

It's the most-deployed open-weight frontier model in 2026, with major hosting on Together, Fireworks, OpenRouter, and DeepSeek's own API. The MIT license makes commercial use straightforward.

Benchmarks

Published scores from DeepSeek's model card or independent leaderboards. We do not publish numbers we cannot source; see methodology.

HumanEval
90.0
Python coding pass@1
MMLU
88.5
Broad academic knowledge
GPQA
N/A (no sourced score)
Graduate-level reasoning
SWE-bench
N/A (no sourced score)
Real software-engineering tasks

Capabilities

Strengths

  • Frontier-level quality at open-weight prices
  • MIT license, clean commercial use
  • Cheap to serve via MoE architecture
  • Strong code and math

Tracked weaknesses

  • No native vision support
  • Geopolitical concerns for some enterprise customers

Pricing

Per-million-token rates as published by DeepSeek.

Tier | Price | Notes
Input | $0.27 / 1M tokens | Tokens you send to the model
Output | $1.10 / 1M tokens | Tokens the model generates
Context | 128K tokens | Max combined input + output
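From these rates, a per-request cost estimate is simple arithmetic. A minimal sketch (the helper name and constants are ours; rates are the published per-million-token prices above):

```typescript
// Published DeepSeek API rates, USD per 1M tokens.
const INPUT_PER_M = 0.27;
const OUTPUT_PER_M = 1.1;

// Hypothetical helper: estimate the dollar cost of one request.
function estimateCostUSD(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_PER_M +
    (outputTokens / 1_000_000) * OUTPUT_PER_M
  );
}

// e.g. a 10K-token prompt with a 1K-token answer:
// 0.01 * $0.27 + 0.001 * $1.10 = $0.0027 + $0.0011 = $0.0038
```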

Call DeepSeek-V3 from your code

Drop-in snippet using the OpenAI SDK against DeepSeek's OpenAI-compatible API. Set your API key in the environment and run.

```typescript
import OpenAI from "openai";

// DeepSeek's API is OpenAI-compatible; point the client at its base URL.
const client = new OpenAI({
  baseURL: "https://api.deepseek.com",
  apiKey: process.env.DEEPSEEK_API_KEY,
});

const completion = await client.chat.completions.create({
  model: "deepseek-chat", // DeepSeek-V3 on the official API
  messages: [
    { role: "user", content: "What's the time complexity of quicksort?" },
  ],
});

console.log(completion.choices[0].message.content);
```


Frequently asked

  • How much does DeepSeek-V3 cost per million tokens?
    DeepSeek-V3 is priced at $0.27 / 1M tokens for input tokens and $1.10 / 1M tokens for output tokens via the official DeepSeek API at the time of writing.
  • What is DeepSeek-V3's context window?
    DeepSeek-V3 supports a context window of 128K tokens.
  • Is DeepSeek-V3 open source?
    DeepSeek-V3 ships with open weights under the MIT license. You can self-host it, fine-tune it, and (subject to the license terms) deploy it commercially.
  • When was DeepSeek-V3 released?
    DeepSeek-V3 was released on Dec 26, 2024 by DeepSeek.
  • What is DeepSeek-V3's knowledge cutoff?
    DeepSeek-V3's training data has a knowledge cutoff of Jul 2024. For information after that date you'll need a tool-use or web-search wrapper.
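Since the 128K window covers input and output combined, it can help to sanity-check a prompt's size before sending. A rough sketch — the 4-characters-per-token heuristic is an approximation for English text, not DeepSeek's tokenizer, and the helper names are ours:

```typescript
const CONTEXT_WINDOW = 128_000; // max combined input + output tokens

// Rough heuristic: ~4 characters per token for English text.
// For exact counts you'd need the model's actual tokenizer.
function approxTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Will the prompt plus the requested output budget fit in the window?
function fitsInContext(prompt: string, maxOutputTokens: number): boolean {
  return approxTokens(prompt) + maxOutputTokens <= CONTEXT_WINDOW;
}
```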