Claude Opus 4.7 Is Here: What Premium Teams Need to Know About the Tokenizer, xhigh, and Spend Controls


Till Freitag · April 17, 2026 · 5 min read

TL;DR: "Opus 4.7 costs the same per token as Opus 4.6 ($5/$25 per million input/output tokens), but the new tokenizer and the new xhigh effort level mean 20–30% higher costs in practice on complex coding tasks. Premium seats now get Opus 4.7 as the new default in Cowork, Claude Code, and chat. If you have 'extra usage' enabled, tighten your spend controls now."

    — Till Freitag

    The most important things in 30 seconds

    Anthropic released Claude Opus 4.7 today – the most capable Opus model yet. At first glance it sounds like a normal point release: same price, better coding, stronger vision.

Look closer, though, and three changes stand out that will matter for every team with an active Anthropic account over the next few days:

    1. New tokenizer – the same input produces more tokens for some context types.
    2. New xhigh effort level in Claude Code – set as the new default for premium seats.
    3. Premium seats are automatically switched to Opus 4.7 – across Cowork, Claude Code, and chat.

    All of this fits perfectly with Anthropic's strategy of the past two years: improve organically, dig deeper in the existing niche, no spectacular acquisitions.

    What Opus 4.7 does better

    Anthropic positions the release as "meaningful gains" in three areas:

    • Coding – stronger performance on complex refactorings and multi-file tasks
    • Agentic work – more reliable tool use and longer autonomous sessions
    • Professional tasks – higher-quality output for slides, documents, and interface drafts

    Concretely: better vision, stronger multimodal understanding, and higher-quality output for UI mockups and presentations. Opus 4.6 was already strong here – but 4.7 closes some of the gaps that were still visible in direct comparison with GPT-5.4 and Gemini 3.1 Pro.

    For our daily workflows with Claude Code as a GTM layer, this is the most important change: tasks that needed two attempts with 4.6 land on the first try more often with 4.7.

    The pricing structure (and why it still gets more expensive)

    On paper, nothing happened:

Model             Input ($/M)   Output ($/M)
Claude Opus 4.6   $5            $25
Claude Opus 4.7   $5            $25

    In practice your bill still goes up – for two reasons.

    Reason 1: The new tokenizer

    Anthropic updated the tokenizer to improve text processing. The trade-off: for certain context types, the same input now produces more tokens than before.

    What this means in dollars depends heavily on your workload. If you mostly process Markdown and code, you'll likely see little difference. If you work with structured data, logs, or certain language mixes, watch the first few days closely.
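To put a number on this, you can compare billed cost under an assumed token-inflation factor. The prices below come from the table above; the workload volumes and the 10% inflation factor are purely illustrative placeholders – replace them with your own measured ratio from re-tokenizing a sample of real requests:

```python
# Sketch: estimate the monthly cost impact of a tokenizer change.
# Prices are from the article ($5/$25 per million input/output tokens);
# the workload and inflation factor are hypothetical placeholders.

INPUT_PRICE_PER_M = 5.00    # $ per million input tokens
OUTPUT_PRICE_PER_M = 25.00  # $ per million output tokens

def monthly_cost(input_tokens_m: float, output_tokens_m: float) -> float:
    """Billed cost for a month, with token volumes given in millions."""
    return input_tokens_m * INPUT_PRICE_PER_M + output_tokens_m * OUTPUT_PRICE_PER_M

# Example workload: 400M input / 60M output tokens per month on the old tokenizer.
baseline = monthly_cost(400, 60)

# Assume the new tokenizer inflates input tokens by 10% for your context mix
# (illustrative -- Markdown/code may see ~0%, logs and mixed data more).
inflation = 1.10
after = monthly_cost(400 * inflation, 60)

print(f"baseline: ${baseline:,.2f}")             # $3,500.00
print(f"after:    ${after:,.2f}")                # $3,700.00
print(f"delta:    {after / baseline - 1:+.1%}")  # +5.7%
```

Note that output tokens are unaffected in this sketch – the tokenizer change applies to how your input is counted, which is why output-heavy workloads feel it less.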

    Reason 2: The new xhigh effort level

    For Claude Code, there's a new effort level between high and max: xhigh.

    And – this is the key point – xhigh is the new default for premium seats.

    Anthropic itself gives the magnitude: at xhigh effort, Opus 4.7 can cost roughly 20–30% more than Opus 4.6 at max effort.

    That's not a small number. If you run intensive coding sessions with Claude Code, you should know the following levers:

    • Lower the effort: high or medium for standard tasks, xhigh only for complex refactorings
    • Use server-managed settings: control defaults centrally for the entire team
    • Adjust spend controls: especially if "extra usage" is enabled
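One way to keep xhigh from becoming the default for every prompt is to pick effort per task class. The effort names (medium, high, xhigh) come from this article; the task taxonomy and the mapping below are illustrative assumptions, not an Anthropic API:

```python
# Sketch: choose a Claude Code effort level per task class instead of
# defaulting every session to xhigh. Effort names are from the article;
# the task taxonomy and mapping are illustrative assumptions.

EFFORT_BY_TASK = {
    "boilerplate":      "medium",  # scaffolding, renames, docstrings
    "standard_feature": "high",    # single-file changes, small fixes
    "complex_refactor": "xhigh",   # multi-file refactorings, long agentic runs
}

DEFAULT_EFFORT = "high"  # sensible team default; reserve xhigh for opt-in

def effort_for(task_class: str) -> str:
    """Return the configured effort level, falling back to the team default."""
    return EFFORT_BY_TASK.get(task_class, DEFAULT_EFFORT)

print(effort_for("boilerplate"))       # medium
print(effort_for("complex_refactor"))  # xhigh
print(effort_for("something_else"))    # high
```

The same mapping is what you would encode once in server-managed settings, so individual seats don't have to remember to downshift.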

    What changes for seats

    The default models are being switched:

Seat type   New default                     Where?
Standard    Claude Sonnet 4.6 (unchanged)   Cowork, Claude Code, chat
Premium     Claude Opus 4.7                 Cowork, Claude Code, chat

    If you're a premium user and take no action, you'll automatically work with the more expensive, more capable model starting today. For most teams that's intended – but it's good to know it's happening.

    Practical recommendations for teams

    What we adjusted in our own setups today:

1. Export the spend report now. If you want to know how the tokenizer change affects your workload, you need a baseline. Export the spend report for the last 30 days – before 4.7 distorts the numbers.

    2. Set effort defaults deliberately. xhigh is good for complex tasks but expensive as a default for every prompt. Use server-managed settings to define sensible defaults per team or project.

    3. Tighten spend controls. Especially for organizations with active "extra usage" – limits should match the new reality. Otherwise, there's a surprise at the end of the month.

    4. Don't use Opus 4.7 for everything. Sonnet 4.6 is still the right choice for 80% of tasks. Opus 4.7 is the premium model – worth it where it's actually needed. More on this in our model routing guide.
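The 80/20 split in point 4 can be wired into a simple router: Sonnet by default, Opus only when a task looks genuinely hard. The model IDs and the complexity heuristic below are illustrative assumptions, not official identifiers:

```python
# Sketch: default to Sonnet and escalate to Opus only for hard tasks.
# Model names echo the article; the IDs and the heuristic (thresholds,
# signals) are illustrative assumptions, not Anthropic's routing logic.

SONNET = "claude-sonnet-4.6"  # default workhorse (~80% of tasks)
OPUS = "claude-opus-4.7"      # premium model for the hard 20%

def pick_model(files_touched: int, needs_vision: bool, agentic: bool) -> str:
    """Escalate to Opus for multi-file, multimodal, or agentic work."""
    if files_touched > 3 or needs_vision or agentic:
        return OPUS
    return SONNET

print(pick_model(files_touched=1, needs_vision=False, agentic=False))  # claude-sonnet-4.6
print(pick_model(files_touched=8, needs_vision=False, agentic=True))   # claude-opus-4.7
```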

    Lovable supports Opus 4.7 from day one

    Lovable rolled out Opus 4.7 right at launch – with two notable data points:

    • 40% fewer turns: Lovable's own benchmarks show Opus 4.7 completes the same tasks in 40% fewer turns than the previous model. That's exactly the effect that offsets the higher per-turn cost.
    • Discounted rates through April 30: Lovable builders get discounted rates during the rollout window – so credits go noticeably further, especially in the first two weeks.
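The two numbers combine neatly: if a task takes 40% fewer turns at 20–30% higher cost per turn, the per-task cost still drops. A quick sanity check on that arithmetic (taking both claims at face value):

```python
# Sanity check: do 40% fewer turns offset a 20-30% higher per-turn cost?
# Both factors are the article's claimed figures, taken at face value.
turns_factor = 0.60  # 40% fewer turns (Lovable's benchmark claim)

for cost_factor in (1.20, 1.30):  # 20-30% higher per-turn cost (xhigh estimate)
    per_task = turns_factor * cost_factor
    print(f"+{cost_factor - 1:.0%} per turn -> {per_task - 1:+.0%} per task")
# +20% per turn -> -28% per task
# +30% per turn -> -22% per task
```

So even at the pessimistic end of the cost estimate, a task that really completes in 40% fewer turns comes out roughly a fifth cheaper – provided the fewer-turns claim holds for your workload.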

    The community is already sharing concrete impressions. Alex Cinovoj puts it well:

    "Fewer turns with Opus 4.7 lets me chain more steps in a single flow without hitting blockers. That's cut down on the manual tweaks I used to need."

    A fair caveat: "fewer turns" is an efficiency win – not automatically a quality win. The question of whether user satisfaction and output quality scale up proportionally is a valid one. Our observation from the first days: for clearly scoped vibe-coding tasks, yes; for very open-ended prompts, the result still depends heavily on the initial prompt – Opus 4.7 thinks more thoroughly, but it doesn't read minds.

    Practical takeaway: bank the efficiency gains, but don't let go of prompt discipline. A well-specified task in 3 turns will always beat a fuzzy one in 5.

    Strategic context: what does this mean?

    Opus 4.7 is not a dramatic release. No new benchmark record, no spectacular new modality. It's a classic Anthropic release: organic improvement in the existing niche, with focus on what enterprise customers actually need – coding, agentic work, professional output.

    That fits the line we've been observing in our OpenAI vs. Anthropic timeline for two years: Anthropic builds, OpenAI buys. While OpenAI grabs headlines with talk-show acquisitions, Anthropic ships a model update every 3–4 months that gains a few more points on coding benchmarks.

    For teams already betting on Claude, this is the best news: reliability, predictable improvements, no nasty surprises. The only surprise – the new xhigh default and the tokenizer change – is communicated transparently. That's exactly how it should be.
