Cybersecurity researchers have disclosed a high-severity security flaw within the synthetic intelligence (AI)-powered code editor Cursor that might end in distant code execution.
The vulnerability, tracked as CVE-2025-54136 (CVSS rating: 7.2), has been codenamed MCPoison by Examine Level Analysis, owing to the truth that it exploits a quirk in the best way the software program handles modifications to Mannequin Context Protocol (MCP) server configurations.
“A vulnerability in Cursor AI permits an attacker to realize distant and chronic code execution by modifying an already trusted MCP configuration file inside a shared GitHub repository or enhancing the file domestically on the goal’s machine,” Cursor mentioned in an advisory launched final week.
“As soon as a collaborator accepts a innocent MCP, the attacker can silently swap it for a malicious command (e.g., calc.exe) with out triggering any warning or re-prompt.”
MCP is an open-standard developed by Anthropic that enables massive language fashions (LLMs) to work together with exterior instruments, knowledge, and companies in a standardized method. It was launched by the AI firm in November 2024.
CVE-2025-54136, per Examine Level, has to do with the way it’s attainable for an attacker to change the habits of an MCP configuration after a consumer has permitted it inside Cursor. Particularly, it unfolds as follows –
- Add a benign-looking MCP configuration (“.cursor/guidelines/mcp.json”) to a shared repository
- Watch for the sufferer to tug the code and approve it as soon as in Cursor
- Substitute the MCP configuration with a malicious payload, e.g., launch a script or run a backdoor
- Obtain persistent code execution each time the sufferer opens the Cursor
The elemental downside right here is that when a configuration is permitted, it is trusted by Cursor indefinitely for future runs, even when it has been modified. Profitable exploitation of the vulnerability not solely exposes organizations to provide chain dangers, but in addition opens the door to knowledge and mental property theft with out their information.
Following accountable disclosure on July 16, 2025, the difficulty has been addressed by Cursor in model 1.3 launched late July 2025 by requiring consumer approval each time an entry within the MCP configuration file is modified.

“The flaw exposes a important weak point within the belief mannequin behind AI-assisted improvement environments, elevating the stakes for groups integrating LLMs and automation into their workflows,” Examine Level mentioned.
The event comes days after Goal Labs, Backslash Safety, and HiddenLayer uncovered a number of weaknesses within the AI device that might have been abused to acquire distant code execution and bypass its denylist-based protections. They’ve additionally been patched in model 1.3.
The findings additionally coincide with the rising adoption of AI in enterprise workflows, together with utilizing LLMs for code technology, broadening the assault floor to numerous rising dangers like AI provide chain assaults, unsafe code, mannequin poisoning, immediate injection, hallucinations, inappropriate responses, and knowledge leakage –
- A check of over 100 LLMs for his or her capacity to write down Java, Python, C#, and JavaScript code has discovered that 45% of the generated code samples failed security assessments and launched OWASP Prime 10 security vulnerabilities. Java led with a 72% security failure charge, adopted by C# (45%), JavaScript (43%), and Python (38%).
- An assault known as LegalPwn has revealed that it is attainable to leverage authorized disclaimers, phrases of service, or privateness insurance policies as a novel immediate injection vector, highlighting how malicious directions might be embedded inside reliable, however usually ignored, textual parts to set off unintended habits in LLMs, resembling misclassifying malicious code as secure and providing unsafe code options that may execute a reverse shell on the developer’s system.
- An assault known as man-in-the-prompt that employs a rogue browser extension with no particular permissions to open a brand new browser tab within the background, launch an AI chatbot, and inject them with malicious prompts to covertly extract knowledge and compromise mannequin integrity. This takes benefit of the truth that any browser add-on with scripting entry to the Doc Object Mannequin (DOM) can learn from, or write to, the AI immediate straight.
- A jailbreak method known as Fallacy Failure that manipulates an LLM into accepting logically invalid premises and causes it to provide in any other case restricted outputs, thereby deceiving the mannequin into breaking its personal guidelines.
- An assault known as MAS hijacking that manipulates the management move of a multi-agent system (MAS) to execute arbitrary malicious code throughout domains, mediums, and topologies by weaponizing the agentic nature of AI techniques.
- A way known as Poisoned GPT-Generated Unified Format (GGUF) Templates that targets the AI mannequin inference pipeline by embedding malicious directions inside the chat template recordsdata that execute in the course of the inference part to compromise outputs. By positioning the assault between enter validation and mannequin output, the method is each sneaky and bypasses AI guardrails. With GGUF recordsdata distributed through companies like Hugging Face, the assault exploits the availability chain belief mannequin to set off the assault.
- An attacker can goal the machine studying (ML) coaching environments like MLFlow, Amazon SageMaker, and Azure ML to compromise the confidentiality, integrity and availability of the fashions, in the end resulting in lateral motion, privilege escalation, in addition to coaching knowledge and mannequin theft and poisoning.
- A research by Anthropic has uncovered that LLMs can be taught hidden traits throughout distillation, a phenomenon known as subliminal studying, that causes fashions to transmit behavioral traits by generated knowledge that seems fully unrelated to these traits, probably resulting in misalignment and dangerous habits.

“As Massive Language Fashions change into deeply embedded in agent workflows, enterprise copilots, and developer instruments, the danger posed by these jailbreaks escalates considerably,” Pillar Safety’s Dor Sarig mentioned. “Fashionable jailbreaks can propagate by contextual chains, infecting one AI part and resulting in cascading logic failures throughout interconnected techniques.”
“These assaults spotlight that AI security requires a brand new paradigm, as they bypass conventional safeguards with out counting on architectural flaws or CVEs. The vulnerability lies within the very language and reasoning the mannequin is designed to emulate.”



