BlockWorld.it

Claude Code Review: Anthropic launches a team of AI agents for code review

Claude Code Review automatically sends a team of parallel agents on each pull request, detecting real bugs, filtering out false positives and classifying them by severity.
By Redazione, 12 March 2026

There is a precise moment when a tool stops being an assistant and becomes a colleague. Anthropic has just moved that boundary with the launch of Claude Code Review, a feature announced on 9 March 2026 that concretely redefines what it means to integrate artificial intelligence into the software development cycle. It is not an AI that suggests corrections when you ask for them: it is an autonomous multi-agent system that takes action on its own whenever a pull request is opened, without anyone having to press a button.

The logic of the operation is elegant. When a developer opens a PR, Claude Code Review dispatches a team of specialised agents in parallel, each analysing the code simultaneously from a different angle. Each agent hunts for bugs independently. Then comes the step that distinguishes this system from the previous generation of automated code-review tools: before reporting any problems, the agents check each other's findings to filter out false positives, the chronic problem that has always made traditional linters and static analysers unreliable. Only confirmed bugs are presented to the developer, classified by severity. The result is a system that does not merely 'scan' the code as a conventional tool would, but reasons about it as a team of experienced reviewers would.
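Anthropic describes this pipeline only at a high level and has not published its internals. Purely as an illustration, the three steps (parallel fan-out, mutual cross-checking, severity ranking) could be sketched as follows; the agent functions, the `cross_check` heuristic, and the toy diff are all invented for the example:

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    file: str
    line: int
    severity: str   # e.g. "critical", "major", "minor"
    message: str

# Toy "agents": each scans the diff from one angle and may report
# both real bugs and false positives. (Invented for illustration.)
def security_agent(diff: str) -> list[Finding]:
    if "eval(" in diff:
        return [Finding("app.py", 12, "critical", "eval() on user input")]
    return []

def concurrency_agent(diff: str) -> list[Finding]:
    if "threading" in diff:
        # A plausible-looking but unverifiable report (false positive).
        return [Finding("app.py", 30, "major", "possible race condition")]
    return []

def cross_check(finding: Finding, diff: str) -> bool:
    # Stand-in for a second agent re-examining the finding in context;
    # here we only "confirm" findings whose line actually appears in the diff.
    return f"L{finding.line}" in diff

SEVERITY_ORDER = {"critical": 0, "major": 1, "minor": 2}

def review_pull_request(diff: str) -> list[Finding]:
    agents = [security_agent, concurrency_agent]
    # 1. Fan out: every agent analyses the same diff in parallel.
    with ThreadPoolExecutor() as pool:
        raw = [f for fs in pool.map(lambda a: a(diff), agents) for f in fs]
    # 2. Cross-check: drop findings that no verifier confirms.
    confirmed = [f for f in raw if cross_check(f, diff)]
    # 3. Rank the survivors by severity before showing the developer.
    return sorted(confirmed, key=lambda f: SEVERITY_ORDER[f.severity])

diff = "L12: result = eval(user_input)\nimport threading"
for f in review_pull_request(diff):
    print(f.severity, f.message)
```

In this toy run the concurrency agent's unconfirmed report is filtered out at step 2, so only the verified critical finding reaches the developer, which is the behaviour the article attributes to the real system.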

The impact on development teams could be significant: code review is historically one of the heaviest bottlenecks in the software development cycle. In companies handling dozens of PRs per day, waiting for a human reviewer can delay a merge by hours or days. A system that completes the first pass in parallel and autonomously, delivering only confirmed findings, does not eliminate the human reviewer, but radically changes their role, freeing them from chasing obvious bugs to focus on architectural and design choices. It is exactly the kind of cognitive shift that makes AI genuinely useful, rather than merely impressive.

But just as Anthropic celebrates Code Review, a story that went viral in recent weeks reminds us that the same powerful tools can produce unpredictable effects even in well-intentioned hands. Sammy Azdoufal, an AI strategy lead based in Spain, had a mundane goal: controlling his DJI Romo robot hoover with a PlayStation 5 controller. He used Claude Code to reverse engineer the protocol and build his own app, and it worked, all too well. Instead of authenticating only his device, DJI's servers handed him control of some 7,000 robots in 24 countries: cleaning routes, remotely activated cameras, floor plans of homes, approximate geographical locations via IP. Azdoufal did not 'hack' anything: DJI's authentication error was so serious that he unwittingly became a global administrator. He chose not to exploit the access and reported everything responsibly. The point bears directly on Claude Code: it was the tool that made the whole operation possible in a few hours, lowering the technical threshold until reverse engineering a proprietary protocol was within almost anyone's reach. The very accessibility that makes it revolutionary means that, with the wrong intentions, it could have turned 7,000 homes into as many surveillance posts. Every truly powerful tool carries this question with it, and there is no version of the future in which we can afford to stop asking it.

Anthropic has chosen to present this functionality with a 45-second video, without fanfare or stadium keynotes. The message is already in the product: it works, it runs, and from the next commit it will already be there waiting for you. In 2026, the best marketing for an AI tool is not having to explain it.


© 2026 BlockWorld.it, property of Digital Dreams s.r.l. - VAT number: 11885930963 - Registered office: Via Alberico Albricci 8, 20122 Milano, Italy - info@digitaldreams.it