
Claude Opus 4.7 Is Here: What Premium Teams Need to Know About the Tokenizer, xhigh, and Spend Controls
TL;DR: "Opus 4.7 costs the same per token as Opus 4.6 ($5/$25 per million input/output), but the new tokenizer + new xhigh effort level mean 20–30% higher costs in practice on complex coding tasks. Premium seats now get Opus 4.7 as the new default in Cowork, Claude Code, and chat. If you have 'extra usage' enabled, tighten your spend controls now."
— Till Freitag

The most important things in 30 seconds
Anthropic released Claude Opus 4.7 today – the most capable Opus model yet. At first glance it sounds like a normal point release: same price, better coding, stronger vision.
Look closer and there are three changes that will become relevant for every team with an active Anthropic account in the next few days:
- New tokenizer – the same input produces more tokens for some context types.
- New xhigh effort level in Claude Code – set as the new default for premium seats.
- Premium seats are automatically switched to Opus 4.7 – across Cowork, Claude Code, and chat.
All of this fits perfectly with Anthropic's strategy of the past two years: improve organically, dig deeper in the existing niche, no spectacular acquisitions.
What Opus 4.7 does better
Anthropic positions the release as "meaningful gains" in three areas:
- Coding – stronger performance on complex refactorings and multi-file tasks
- Agentic work – more reliable tool use and longer autonomous sessions
- Professional tasks – higher-quality output for slides, documents, and interface drafts
Concretely: better vision, stronger multimodal understanding, and higher-quality output for UI mockups and presentations. Opus 4.6 was already strong here – but 4.7 closes some of the gaps that were still visible in direct comparison with GPT-5.4 and Gemini 3.1 Pro.
For our daily workflows with Claude Code as a GTM layer, this is the most important change: tasks that needed two attempts with 4.6 land on the first try more often with 4.7.
The pricing structure (and why it still gets more expensive)
On paper, nothing happened:
| Model | Input ($/M) | Output ($/M) |
|---|---|---|
| Claude Opus 4.6 | $5 | $25 |
| Claude Opus 4.7 | $5 | $25 |
In practice your bill still goes up – for two reasons.
Reason 1: The new tokenizer
Anthropic updated the tokenizer to improve text processing. The trade-off: for certain context types, the same input now produces more tokens than before.
What this means in dollars depends heavily on your workload. If you mostly process Markdown and code, you'll likely see little difference. If you work with structured data, logs, or certain language mixes, watch the first few days closely.
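If you'd rather measure than guess, you can diff token counts on a representative prompt from your own logs. Here's a minimal sketch using the Anthropic Python SDK's token-counting endpoint – the model IDs below are placeholders, so check your console for the exact Opus 4.6/4.7 identifiers:

```python
# Sketch: measure the tokenizer delta on your own workload.
# Assumes the Anthropic Python SDK; the model IDs are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def input_tokens(model: str, text: str) -> int:
    """Count the input tokens `text` produces for `model`."""
    result = client.messages.count_tokens(
        model=model,
        messages=[{"role": "user", "content": text}],
    )
    return result.input_tokens

# Use a typical prompt from your logs, not a synthetic example.
sample = open("representative_prompt.txt").read()
old = input_tokens("claude-opus-4-6", sample)  # placeholder ID
new = input_tokens("claude-opus-4-7", sample)  # placeholder ID
print(f"{old} -> {new} tokens ({(new - old) / old:+.1%})")
```

Run this over a handful of prompts per workload type (Markdown, code, logs, structured data) and you have your multiplier before the invoice does.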
Reason 2: The new xhigh effort level
For Claude Code, there's a new effort level between high and max: xhigh.
And – this is the key point – xhigh is the new default for premium seats.
Anthropic itself gives the magnitude: at xhigh effort, Opus 4.7 can cost roughly 20–30% more than Opus 4.6 at max effort.
That's not a small number. If you run intensive coding sessions with Claude Code, you should know the following levers:
- Lower the effort: high or medium for standard tasks, xhigh only for complex refactorings
- Use server-managed settings: control defaults centrally for the entire team
- Adjust spend controls: especially if "extra usage" is enabled – the projection sketch below shows the magnitude
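To put that magnitude in dollars, here's a back-of-the-envelope projection. The 20–30% range is Anthropic's own figure from above; the baseline token volumes are invented examples – substitute the numbers from your spend report:

```python
# Project a monthly Claude Code bill under the new xhigh default.
# Prices per million tokens are the published Opus rates ($5 in / $25 out);
# the 1.20-1.30 multipliers are Anthropic's stated 20-30% range.

def monthly_cost(input_mtok: float, output_mtok: float,
                 in_price: float = 5.0, out_price: float = 25.0) -> float:
    """USD cost for a month of input_mtok/output_mtok million tokens."""
    return input_mtok * in_price + output_mtok * out_price

baseline = monthly_cost(input_mtok=120, output_mtok=30)  # example baseline
print(f"Old bill (Opus 4.6 at max): ${baseline:,.0f}")
for multiplier in (1.20, 1.30):
    print(f"Projected at xhigh (+{multiplier - 1:.0%}): ${baseline * multiplier:,.0f}")
```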
What changes for seats
The default models are being switched:
| Seat type | New default | Where? |
|---|---|---|
| Standard | Claude Sonnet 4.6 (unchanged) | Cowork, Claude Code, chat |
| Premium | Claude Opus 4.7 | Cowork, Claude Code, chat |
If you're a premium user and take no action, you'll automatically work with the more expensive, more capable model starting today. For most teams that's intended – but it's good to know it's happening.
Practical recommendations for teams
What we adjusted in our own setups today:
1. Export the spend report now
If you want to know how the tokenizer change affects your workload, you need a baseline. Export the spend report for the last 30 days – before 4.7 distorts the numbers.
2. Set effort defaults deliberately
xhigh is good for complex tasks but expensive as a default for every prompt. Use server-managed settings to define sensible defaults per team or project.
3. Tighten spend controls
Especially for organizations with active "extra usage" – limits should match the new reality. Otherwise, there's a surprise at the end of the month.
4. Don't use Opus 4.7 for everything
Sonnet 4.6 is still the right choice for 80% of tasks. Opus 4.7 is the premium model – worth it where it's actually needed. More on this in our model routing guide; a minimal routing sketch follows below.
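What that 80/20 split can look like in code, as a minimal sketch against the Anthropic Python SDK – the complexity heuristic and the model IDs are illustrative assumptions, not the routing logic from our guide:

```python
# Minimal 80/20 router: Sonnet for routine work, Opus for the hard 20%.
# The heuristic and model IDs are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()

def pick_model(task: str, files_touched: int) -> str:
    """Route multi-file or refactoring work to Opus, everything else to Sonnet."""
    is_complex = files_touched > 3 or "refactor" in task.lower()
    return "claude-opus-4-7" if is_complex else "claude-sonnet-4-6"  # placeholder IDs

def run(task: str, files_touched: int = 1) -> str:
    response = client.messages.create(
        model=pick_model(task, files_touched),
        max_tokens=1024,
        messages=[{"role": "user", "content": task}],
    )
    return response.content[0].text

print(run("Rename this variable in utils.py"))           # routes to Sonnet
print(run("Refactor the auth flow across services", 6))  # routes to Opus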
Lovable supports Opus 4.7 from day one
Lovable rolled out Opus 4.7 right at launch – with two notable data points:
- 40% fewer turns: Lovable's own benchmarks show Opus 4.7 completes the same tasks in 40% fewer turns than the previous model. That's exactly the effect that offsets the higher per-turn cost – the arithmetic is sketched after this list.
- Discounted rates through April 30: Lovable builders get discounted rates during the rollout window – so credits go noticeably further, especially in the first two weeks.
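The arithmetic behind "offsets the higher per-turn cost" is worth spelling out once. Assuming a per-turn cost that's 25% higher (the midpoint of the 20–30% range above, applied here as an assumption) against Lovable's 40% fewer turns:

```python
# Net cost per completed task: fewer turns vs. pricier turns.
turns_factor = 0.60      # 40% fewer turns (Lovable's benchmark)
per_turn_factor = 1.25   # assumed 25% higher cost per turn (midpoint of 20-30%)
relative_cost = turns_factor * per_turn_factor
print(f"Relative task cost: {relative_cost:.2f} (~{1 - relative_cost:.0%} cheaper)")
# -> Relative task cost: 0.75 (~25% cheaper)
```

Even at the top of the cost range (1.30 × 0.60 = 0.78), the net effect stays clearly positive – before Lovable's discounted rates even kick in.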
The community is already sharing concrete impressions. Alex Cinovoj puts it well:
"Fewer turns with Opus 4.7 lets me chain more steps in a single flow without hitting blockers. That's cut down on the manual tweaks I used to need."
A fair caveat: "fewer turns" is an efficiency win – not automatically a quality win. The question of whether user satisfaction and output quality scale up proportionally is a valid one. Our observation from the first days: for clearly scoped vibe-coding tasks, yes; for very open-ended tasks, the result still depends heavily on the initial prompt – Opus 4.7 thinks more thoroughly, but it doesn't read minds.
Practical takeaway: bank the efficiency gains, but don't let go of prompt discipline. A well-specified task in 3 turns will always beat a fuzzy one in 5.
Strategic context: what does this mean?
Opus 4.7 is not a dramatic release. No new benchmark record, no spectacular new modality. It's a classic Anthropic release: organic improvement in the existing niche, with focus on what enterprise customers actually need – coding, agentic work, professional output.
That fits the line we've been observing in our OpenAI vs. Anthropic timeline for two years: Anthropic builds, OpenAI buys. While OpenAI grabs headlines with splashy acquisitions, Anthropic ships a model update every 3–4 months that gains a few more points on coding benchmarks.
For teams already betting on Claude, this is the best news: reliability, predictable improvements, no nasty surprises. The only surprises – the new xhigh default and the tokenizer change – are communicated transparently. That's exactly how it should be.