Claude Opus 4.7
Anthropic's mid-2026 flagship, ahead on SWE-bench, agent reliability, and writing quality.
Quick facts
- Released: Feb 2026
- Context: 500K tokens
- Output / 1M: Pricing not published
- License: Proprietary
About Claude Opus 4.7
Claude Opus 4.7 is Anthropic's mid-cycle Opus refresh, aimed at extending the lead on coding agents and long-running tool use established by Opus 4 and 4.5. Real-world deployments at Cursor, Cline, and Anthropic's own Claude Code suggest the gains are most pronounced on multi-hour agent tasks where memory and recovery matter.
For writing, Opus 4.7 keeps the line's defining strength: a willingness to commit to a voice and stay there. It's the consensus pick for fiction, copy, and any work where prose quality matters more than retrieval bandwidth.
Pricing follows the Opus pattern: premium rates relative to the mid-tier line. Most teams therefore reserve Opus for the workloads that visibly benefit and run a mid-tier model (Sonnet) everywhere else.
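That split is easy to encode as a tiny router. The sketch below is illustrative only: the task shape and the mid-tier model ID (`claude-sonnet-4-5`) are assumptions, not part of any published API.

```typescript
// Hypothetical task router: send only the work that visibly benefits to Opus,
// everything else to an assumed cheaper mid-tier model.
type Task = { kind: "agent" | "writing" | "chat"; estimatedToolCalls?: number };

function pickModel(task: Task): string {
  // Prose-quality work and long agent loops are where Opus stands out.
  if (task.kind === "writing") return "claude-opus-4-7";
  if (task.kind === "agent" && (task.estimatedToolCalls ?? 0) > 10) {
    return "claude-opus-4-7";
  }
  // Short interactive tasks run on the mid-tier line (model ID assumed).
  return "claude-sonnet-4-5";
}
```

The threshold of 10 tool calls is arbitrary; the point is that routing on a cheap, observable proxy for task length keeps Opus spend concentrated where it pays off.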
Benchmarks
Published scores from Anthropic's model card or independent leaderboards. We do not publish numbers we cannot source; see our methodology.
Capabilities
Strengths
- Strongest published SWE-bench Verified scores in agent settings
- Best-in-class writing quality and voice control
- Excellent long-context recall and citation discipline
- Robust tool-use across long agent loops
Tracked weaknesses
- Premium pricing relative to GPT-5 line
- More conservative refusal patterns on edge content than peers
Pricing
Per-million-token rates as published by Anthropic.
Call Claude Opus 4.7 from your code
Drop-in snippet for the Anthropic SDK. Set your API key in the environment and run.
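The basic call is shown below. For long agent loops, transient failures (rate limits, timeouts) are worth retrying; here is a minimal exponential-backoff sketch you can wrap around any async SDK call. The `withRetries` helper is illustrative, not part of the Anthropic SDK:

```typescript
// Minimal exponential-backoff retry for transient errors.
// Hypothetical helper, not an SDK export.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        // Double the wait each attempt: 500ms, 1000ms, 2000ms, ...
        await new Promise((resolve) =>
          setTimeout(resolve, baseDelayMs * 2 ** attempt),
        );
      }
    }
  }
  throw lastError;
}
```

Usage would look like `await withRetries(() => client.messages.create({ ... }))`. Note the official SDK may already retry some errors on its own; check its client options before layering your own logic on top.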
```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

const message = await client.messages.create({
  model: "claude-opus-4-7",
  max_tokens: 1024,
  messages: [
    { role: "user", content: "What's the time complexity of quicksort?" },
  ],
});

console.log(message.content);
```

Best for
Tasks where Claude Opus 4.7 ranks among LLMDex's top picks.
- Best LLM for Coding: developers searching for the most capable model to write, edit, or refactor code in real codebases.
- Best LLM for Code Review: engineers automating PR review, catching bugs, security issues, or style regressions before merge.
- Best LLM for Python: Python-specific coding tasks, from scripts and data work to ML pipelines and scientific computing.
Compare Claude Opus 4.7 with…
Frequently asked
How much does Claude Opus 4.7 cost?
Anthropic has not published per-token API pricing for Claude Opus 4.7 at the time of writing. Check the official site for current pricing tiers, or compare against alternative models on LLMDex.
What is Claude Opus 4.7's context window?
Claude Opus 4.7 supports a context window of 500K tokens.
Is Claude Opus 4.7 open source?
No. Claude Opus 4.7 is a closed-weight model; you can use it via Anthropic's API, but the model weights are not publicly downloadable.
When was Claude Opus 4.7 released?
Claude Opus 4.7 was released on Feb 15, 2026 by Anthropic.