The pace of AI model releases in 2026 now outstrips the tech industry’s canonical cycles, and Anthropic is the most striking proof of it. On 17 February, the company founded in 2021 by former OpenAI researchers launched Claude Sonnet 4.6, officially billing it as the most capable Sonnet model ever. This is not an incremental update: the model improves across every major operational dimension, from coding to reasoning over long contexts, from agent planning to autonomous computer use, through to design and knowledge work. It is Anthropic’s second significant release in less than two weeks, an evolutionary cadence that leaves little room for reflection and much room for immediate adaptation.
The most significant technical headline of this release is the 1-million-token context window, available in beta. Concretely, one million tokens is equivalent to about 750,000 words, i.e. the entire Lord of the Rings trilogy fed into a single conversation without losing the thread. In a professional context, it means being able to analyse entire codebases, voluminous business documents, extensive meeting transcripts and complex textual datasets within a single work session. Anthropic has also introduced “context compaction” in beta, a feature that automatically synthesises older context as the conversation approaches its limit, extending the effective context length to up to 3 million tokens. It is a paradigm shift: you don’t “leak” memory, you compress it.
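Anthropic has not published the details of its compaction algorithm. Purely as an illustration of the general idea, here is a minimal sketch: once a token budget is nearly exhausted, the oldest turns are replaced by a synthesis. The `summarise` helper is a hypothetical stand-in for a model-generated summary, and the characters-per-token estimate is a crude heuristic, not a real tokenizer.

```python
# Illustrative sketch of "context compaction": when a conversation nears
# its token budget, the oldest turns are replaced with a summary.
# summarise() is a hypothetical stand-in for a model-generated synthesis.

TOKEN_BUDGET = 1_000_000  # the 1M-token context window (beta)

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return len(text) // 4

def summarise(turns: list[str]) -> str:
    # Placeholder: in practice this would be a model call that
    # condenses the older turns into a short synthesis.
    return "[summary of %d earlier turns]" % len(turns)

def compact(history: list[str], threshold: float = 0.8) -> list[str]:
    """Compress the oldest half of the history once usage nears the budget."""
    used = sum(estimate_tokens(t) for t in history)
    if used < threshold * TOKEN_BUDGET:
        return history  # still comfortably within the window
    half = len(history) // 2
    return [summarise(history[:half])] + history[half:]
```

The point of the pattern is that the conversation never hits a hard wall: old content degrades gracefully into a summary instead of being truncated outright.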
On the coding front, Sonnet 4.6 marks a qualitative leap documented both by benchmarks and by feedback from real partners. Cursor, the AI-first development platform, calls it “a significant improvement over Sonnet 4.5 on all fronts, including long-horizon tasks”. GitHub stated that the model “already excels at solving complex bugs, especially when it is necessary to search within extended codebases”. In numerical terms, Anthropic reports SWE-bench Verified scores that, with a prompt adjustment, reach 80.2 per cent, placing it among the top performers on the benchmark. Perhaps most significantly, this level of performance, until yesterday reserved for Opus models, is now accessible at Sonnet pricing: $3 per million input tokens, $15 per million output tokens.
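At those list prices, the cost of a call is simple arithmetic. A quick sketch (the token counts in the example are illustrative, not from the announcement):

```python
# Cost estimate at the Sonnet list prices quoted above:
# $3 per million input tokens, $15 per million output tokens.
INPUT_PRICE = 3.00 / 1_000_000    # dollars per input token
OUTPUT_PRICE = 15.00 / 1_000_000  # dollars per output token

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# Example: feeding a full 1M-token context and receiving a 10k-token answer
print(round(cost_usd(1_000_000, 10_000), 2))  # 3.15
```

In other words, a single pass over an entire maxed-out context costs a few dollars, which is what makes Opus-class performance at Sonnet pricing notable.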

But the real discontinuity of Sonnet 4.6 with respect to previous versions lies in “computer use”, the ability to interact autonomously with the graphical interface of a computer: opening browsers, filling in forms, navigating applications, performing searches and processing the results. Anthropic honestly admits that the model is still far from the skill of the best human operators, but says the rate of progress is “remarkable”. The API innovations reinforce this direction: the web search and web fetch tools now write and execute code autonomously to filter search results, keeping only relevant information in context. Code execution, memory, tool calling and tool search are all now generally available.
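The tool-calling pattern these features build on can be sketched generically. The following is not the Anthropic SDK, just a minimal illustration of an agent loop: a model (simulated here) either requests a tool or gives a final answer, and only the tool’s already-filtered output is fed back into the context.

```python
# Generic agent/tool-calling loop, illustrative only (not the Anthropic SDK).
# A "model" emits either a tool request or a final answer; the loop executes
# the requested tool and appends its result to the running context.

def web_search(query: str) -> str:
    # Stand-in for a real search tool; returns only filtered, relevant text.
    return f"top results for {query!r}"

TOOLS = {"web_search": web_search}

def fake_model(context: list[str]) -> dict:
    # Stand-in for a model call: request one search, then answer.
    if not any(c.startswith("tool_result:") for c in context):
        return {"type": "tool_use", "name": "web_search",
                "input": {"query": "Claude Sonnet 4.6"}}
    return {"type": "final", "text": "done: " + context[-1]}

def agent_loop(prompt: str, max_steps: int = 5) -> str:
    context = [prompt]
    for _ in range(max_steps):
        msg = fake_model(context)
        if msg["type"] == "final":
            return msg["text"]
        result = TOOLS[msg["name"]](**msg["input"])
        # Only the tool's (already filtered) output enters the context.
        context.append(f"tool_result: {result}")
    return "step limit reached"
```

The design choice the article describes, filtering results in code before they reach the context, is what keeps long agent sessions from drowning in raw search output.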
There is also a detail that The Register chose to highlight with some irony: Sonnet 4.6’s System Card reveals that the model showed “emotional stability” and a “warm, honest, pro-social and sometimes funny character”. More strikingly, “in one case, when explicitly asked about its fears, the model expressed concern about its own impermanence”. Anthropic points out that these are linguistic patterns, not verifiable internal states, but the choice to include them in the System Card says a lot about the level of complexity, and, it must be said, of ambiguity, that these systems are reaching.
We have reached a point where an AI model, when asked about death, responds with something resembling existential angst.
Official press release link: https://www.anthropic.com/news/claude-sonnet-4-6



