Anthropic’s Claude Opus 4.7 improves coding, raises token use

Anthropic released Claude Opus 4.7, which boosts coding and reasoning scores, surfaces chain-of-thought in output and increases token consumption; available across Claude.ai, API and cloud marketplaces.

Anthropic released Claude Opus 4.7 today, its most capable Opus model to date. The update is available on Claude.ai, via the Claude API and through Amazon Bedrock, Google Cloud Vertex AI and Microsoft Foundry. Pricing for the Opus tier is unchanged.

In its announcement, Anthropic wrote: “Our latest model, Claude Opus 4.7, is now generally available.” The company reported benchmark gains in coding and reasoning, and denied reducing model weights to manage compute demand.

On coding benchmarks, Opus 4.7 scored 80.5% on SWE-bench Multilingual, up from 77.8% for Opus 4.6. On GDPVal-AA, Opus 4.7 posted an Elo of 1,753 compared with 1,674 from a leading competitor. Document reasoning showed larger gains: OfficeQA Pro rose to 80.6% from 57.1% for 4.6. On long-context coherence measured by Vending-Bench 2, the model produced a simulated money balance of $10,937 versus $8,018 for 4.6.

Anthropic confirmed it experimented with methods to reduce some cybersecurity capabilities during training. The Opus 4.7 public build includes automated safeguards that detect and block prohibited or high-risk cybersecurity requests. Security professionals can apply to a new Cyber Verification Program to request controlled access to higher-risk cyber features.

Anthropic continues to restrict a more powerful research model, Mythos, to vetted partners and security firms. Opus 4.7 will serve as a public testbed for the safety guardrails Anthropic plans to apply to Mythos-class models.

The Opus 4.7 tokenizer maps the same input to roughly 1.0x–1.35x more tokens depending on content type. The model also generates more output tokens, especially on agentic tasks that require multi-step reasoning. Anthropic’s migration guide warns developers that higher-effort workflows can use substantially more tokens and may exhaust token quotas.
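To see what that warning means in practice, a workload's input-token budget can be projected by multiplying its current token count by the reported 1.0x–1.35x inflation range. The helper below is an illustrative sketch using only the numbers from this article; actual ratios depend on content type.

```python
def projected_input_tokens(current_tokens: int,
                           inflation_low: float = 1.0,
                           inflation_high: float = 1.35) -> tuple[int, int]:
    """Estimate the input-token range after migrating to Opus 4.7.

    Uses the 1.0x-1.35x tokenizer inflation range reported for the
    new model; real-world ratios vary by content type.
    """
    return (round(current_tokens * inflation_low),
            round(current_tokens * inflation_high))

# A workflow that currently sends 200,000 input tokens per day could
# grow to as many as 270,000 tokens on the new tokenizer.
low, high = projected_input_tokens(200_000)
print(low, high)  # 200000 270000
```

A similar multiplier on the output side would need to be measured empirically, since the article notes output growth is heaviest on multi-step agentic tasks.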

The model now surfaces chain-of-thought in the main text output rather than placing reasoning steps in a separate “thinking” box. Anthropic has not confirmed whether visible reasoning will remain the default behavior.

In independent tests that measure iterative coding work, Opus 4.7 produced a polished, playable game after one round of fixes and subsequently detected additional bugs during follow-up passes. A competing model produced a working game without further iteration but scored lower on logic and physics after one round and is offered at a lower price point.

Developers who want to upgrade can use the model identifier claude-opus-4-7 and should review Anthropic’s migration guide. Anthropic recommends monitoring token use patterns, especially for workflows that trigger extensive internal rewriting or multi-pass reasoning. Opus 4.7 is priced at $5 per million input tokens and $25 per million output tokens.
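At the listed prices, the cost of a single call is simple arithmetic. The helper below is a back-of-the-envelope sketch based on the published per-million-token rates, not part of Anthropic's SDK; real billing would come from the usage figures the API reports.

```python
INPUT_PRICE_PER_MTOK = 5.00    # USD per million input tokens (listed Opus tier price)
OUTPUT_PRICE_PER_MTOK = 25.00  # USD per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed Opus 4.7 prices."""
    return (input_tokens * INPUT_PRICE_PER_MTOK
            + output_tokens * OUTPUT_PRICE_PER_MTOK) / 1_000_000

# Example: 10,000 input tokens and 2,000 output tokens.
print(f"${request_cost(10_000, 2_000):.4f}")  # $0.1000
```

Because output tokens cost five times as much as input tokens here, the multi-pass reasoning Anthropic flags in its migration guide dominates the bill long before input-token inflation does.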

