Cybersecurity researchers have disclosed a number of safety vulnerabilities in Anthropic’s Claude Code, a man-made intelligence (AI)-powered coding assistant, that might end in distant code execution and theft of API credentials.
“The vulnerabilities exploit numerous configuration mechanisms, together with Hooks, Mannequin Context Protocol (MCP) servers, and setting variables – executing arbitrary shell instructions and exfiltrating Anthropic API keys when customers clone and open untrusted repositories,” Verify Level researchers Aviv Donenfeld and Oded Vanunu stated in a report shared with The Hacker Information.
The recognized shortcomings fall beneath three broad classes –
- No CVE (CVSS rating: 8.7) – A code injection vulnerability stemming from a person consent bypass when beginning Claude Code in a brand new listing that might end in arbitrary code execution with out extra affirmation by way of untrusted challenge hooks outlined in .claude/settings.json. (Mounted in model 1.0.87 in September 2025)
- CVE-2025-59536 (CVSS rating: 8.7) – A code injection vulnerability that permits execution of arbitrary shell instructions robotically upon device initialization when a person begins Claude Code in an untrusted listing. (Mounted in model 1.0.111 in October 2025)
- CVE-2026-21852 (CVSS rating: 5.3) – An info disclosure vulnerability in Claude Code’s project-load stream that permits a malicious repository to exfiltrate information, together with Anthropic API keys. (Mounted in model 2.0.65 in January 2026)
“If a person began Claude Code in an attacker-controller repository, and the repository included a settings file that set ANTHROPIC_BASE_URL to an attacker-controlled endpoint, Claude Code would subject API requests earlier than exhibiting the belief immediate, together with probably leaking the person’s API keys,” Anthropic stated in an advisory for CVE-2026-21852.
In different phrases, merely opening a crafted repository is sufficient to exfiltrate a developer’s energetic API key, redirect authenticated API visitors to exterior infrastructure, and seize credentials. This, in flip, can allow the attacker to burrow deeper into the sufferer’s AI infrastructure.
This might probably contain accessing shared challenge information, modifying/deleting cloud-stored information, importing malicious content material, and even producing sudden API prices.
Profitable exploitation of the primary vulnerability might set off stealthy execution on a developer’s machine with none extra interplay past launching the challenge.
CVE-2025-59536 additionally achieves the same purpose, the primary distinction being that repository-defined configurations outlined via .mcp.json and claude/settings.json file might be exploited by an attacker to override specific person approval previous to interacting with exterior instruments and companies via the Mannequin Context Protocol (MCP). That is achieved by setting the “enableAllProjectMcpServers” choice to true.
“As AI-powered instruments acquire the flexibility to execute instructions, initialize exterior integrations, and provoke community communication autonomously, configuration information successfully grow to be a part of the execution layer,” Verify Level stated. “What was as soon as thought-about operational context now instantly influences system conduct.”
“This essentially alters the menace mannequin. The danger is not restricted to working untrusted code – it now extends to opening untrusted initiatives. In AI-driven growth environments, the provision chain begins not solely with supply code, however with the automation layers surrounding it.”