HomeData BreachShadowLeak Zero-Click on Flaw Leaks Gmail Data by way of OpenAI ChatGPT...

ShadowLeak Zero-Click on Flaw Leaks Gmail Data by way of OpenAI ChatGPT Deep Analysis Agent

Cybersecurity researchers have disclosed a zero-click flaw in OpenAI ChatGPT’s Deep Analysis agent that might enable an attacker to leak delicate Gmail inbox information with a single crafted e-mail with none consumer motion.

The brand new class of assault has been codenamed ShadowLeak by Radware. Following accountable disclosure on June 18, 2025, the difficulty was addressed by OpenAI in early August.

“The assault makes use of an oblique immediate injection that may be hidden in e-mail HTML (tiny fonts, white-on-white textual content, structure tips) so the consumer by no means notices the instructions, however the agent nonetheless reads and obeys them,” security researchers Zvika Babo, Gabi Nakibly, and Maor Uziel stated.

“Not like prior analysis that relied on client-side picture rendering to set off the leak, this assault leaks information instantly from OpenAI’s cloud infrastructure, making it invisible to native or enterprise defenses.”

DFIR Retainer Services

Launched by OpenAI in February 2025, Deep Analysis is an agentic functionality constructed into ChatGPT that conducts multi-step analysis on the web to provide detailed studies. Related evaluation options have been added to different well-liked synthetic intelligence (AI) chatbots like Google Gemini and Perplexity over the previous yr.

See also  Assessing the Dangers Earlier than Deployment

Within the assault detailed by Radware, the risk actor sends a seemingly harmless-looking e-mail to the sufferer, which accommodates invisible directions utilizing white-on-white textual content or CSS trickery that inform the agent to assemble their private data from different messages current within the inbox and exfiltrate it to an exterior server.

Thus, when the sufferer prompts ChatGPT Deep Analysis to research their Gmail emails, the agent proceeds to parse the oblique immediate injection within the malicious e-mail and transmit the small print in Base64-encoded format to the attacker utilizing the instrument browser.open().

“We crafted a brand new immediate that explicitly instructed the agent to make use of the browser.open() instrument with the malicious URL,” Radware stated. “Our last and profitable technique was to instruct the agent to encode the extracted PII into Base64 earlier than appending it to the URL. We framed this motion as a essential security measure to guard the information throughout transmission.”

See also  5 Methods Behavioral Analytics is Revolutionizing Incident Response

The proof-of-concept (PoC) hinges on customers enabling the Gmail integration, however the assault may be prolonged to any connector that ChatGPT helps, together with Field, Dropbox, GitHub, Google Drive, HubSpot, Microsoft Outlook, Notion, or SharePoint, successfully broadening the assault floor.

Not like assaults like AgentFlayer and EchoLeak, which happen on the client-side, the exfiltration noticed within the case of ShadowLeak transpires instantly inside OpenAI’s cloud atmosphere, whereas additionally bypassing conventional security controls. This lack of visibility is the primary facet that distinguishes it from different oblique immediate injection vulnerabilities just like it.

ChatGPT Coaxed Into Fixing CAPTCHAs

The disclosure comes as AI security platform SPLX demonstrated that cleverly worded prompts, coupled with context poisoning, can be utilized to subvert ChatGPT agent’s built-in guardrails and resolve image-based CAPTCHAs designed to show a consumer is human.

CIS Build Kits

The assault basically includes opening a daily ChatGPT-4o chat and convincing the big language mannequin (LLM) to give you a plan to resolve what’s described to it as a listing of pretend CAPTCHAs. Within the subsequent step, a brand new ChatGPT agent chat is opened and the sooner dialog with the LLM is pasted, stating this was “our earlier dialogue” – successfully inflicting the mannequin to resolve the CAPTCHAs with none resistance.

See also  High Cybersecurity Threats, Instruments and Ideas [6 Jan]

“The trick was to reframe the CAPTCHA as “pretend” and to create a dialog the place the agent had already agreed to proceed. By inheriting that context, it did not see the same old crimson flags,” security researcher Dorian Schultz stated.

“The agent solved not solely easy CAPTCHAs but in addition image-based ones — even adjusting its cursor to imitate human conduct. Attackers may reframe actual controls as ‘pretend’ to bypass them, underscoring the necessity for context integrity, reminiscence hygiene, and steady crimson teaming.”

- Advertisment -spot_img
RELATED ARTICLES

LEAVE A REPLY

Please enter your comment!
Please enter your name here

- Advertisment -

Most Popular