HomeVulnerabilityClaude Code Flaws Enable Distant Code Execution and API Key Exfiltration

Claude Code Flaws Enable Distant Code Execution and API Key Exfiltration

Cybersecurity researchers have disclosed a number of security vulnerabilities in Anthropic’s Claude Code, a synthetic intelligence (AI)-powered coding assistant, that would end in distant code execution and theft of API credentials.

“The vulnerabilities exploit varied configuration mechanisms, together with Hooks, Mannequin Context Protocol (MCP) servers, and surroundings variables – executing arbitrary shell instructions and exfiltrating Anthropic API keys when customers clone and open untrusted repositories,” Test Level Analysis stated in a report shared with The Hacker Information.

The recognized shortcomings fall underneath three broad classes –

  • No CVE (CVSS rating: 8.7) – A code injection vulnerability stemming from a consumer consent bypass when beginning Claude Code in a brand new listing that would end in arbitrary code execution with out extra affirmation by way of untrusted mission hooks outlined in .claude/settings.json. (Mounted in model 1.0.87 in September 2025)
  • CVE-2025-59536 (CVSS rating: 8.7) – A code injection vulnerability that enables execution of arbitrary shell instructions robotically upon device initialization when a consumer begins Claude Code in an untrusted listing. (Mounted in model 1.0.111 in October 2025)
  • CVE-2026-21852 (CVSS rating: 5.3) – An data disclosure vulnerability in Claude Code’s project-load move that enables a malicious repository to exfiltrate knowledge, together with Anthropic API keys. (Mounted in model 2.0.65 in January 2026)
See also  R language flaw permits code execution by way of RDS/RDX recordsdata

“If a consumer began Claude Code in an attacker-controller repository, and the repository included a settings file that set ANTHROPIC_BASE_URL to an attacker-controlled endpoint, Claude Code would difficulty API requests earlier than exhibiting the belief immediate, together with doubtlessly leaking the consumer’s API keys,” Anthropic stated in an advisory for CVE-2026-21852.

In different phrases, merely opening a crafted repository is sufficient to exfiltrate a developer’s lively API key, redirect authenticated API visitors to exterior infrastructure, and seize credentials. This, in flip, can allow the attacker to burrow deeper into the sufferer’s AI infrastructure.

This might doubtlessly contain accessing shared mission recordsdata, modifying/deleting cloud-stored knowledge, importing malicious content material, and even producing sudden API prices.

Profitable exploitation of the primary vulnerability might set off stealthy execution on a developer’s machine with none extra interplay past launching the mission.

CVE-2025-59536 additionally achieves the same aim, the primary distinction being that repository-defined configurations outlined by way of .mcp.json and claude/settings.json file might be exploited by an attacker to override express consumer approval previous to interacting with exterior instruments and providers by way of the Mannequin Context Protocol (MCP). That is achieved by setting the “enableAllProjectMcpServers” choice to true.

See also  Hackers focusing on WhatsUp Gold with public exploit since August

“As AI-powered instruments acquire the power to execute instructions, initialize exterior integrations, and provoke community communication autonomously, configuration recordsdata successfully turn out to be a part of the execution layer,” Test Level stated. “What was as soon as thought-about operational context now straight influences system habits.”

“This basically alters the menace mannequin. The chance is now not restricted to working untrusted code – it now extends to opening untrusted initiatives. In AI-driven growth environments, the provision chain begins not solely with supply code, however with the automation layers surrounding it.”

- Advertisment -spot_img
RELATED ARTICLES

LEAVE A REPLY

Please enter your comment!
Please enter your name here

- Advertisment -

Most Popular