The malware fetches command payloads embedded in the description fields of OpenAI "Assistants" (which may be set to values such as "SLEEP", "Payload", or "Result"), then decrypts, decompresses, and executes them locally. After execution, the results are uploaded back through the same API, echoing the classic "living off the land" attack model, but in an AI cloud context.
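To make the mechanism concrete, the sketch below shows how a command could be packed into and recovered from a plain-text description field using compression plus an encoding layer. This is an illustrative reconstruction only, not the malware's actual code: the real payloads are also encrypted, which is omitted here, and the field contents are assumptions.

```python
import base64
import gzip

def decode_payload(description: str) -> bytes:
    """Reverse a hypothetical base64 -> gzip packing of a command
    hidden in an Assistant description (encryption layer omitted)."""
    compressed = base64.b64decode(description)
    return gzip.decompress(compressed)

# Round-trip demo with a harmless stand-in "command"
packed = base64.b64encode(gzip.compress(b"SLEEP")).decode()
print(decode_payload(packed))  # b'SLEEP'
```

The point of the example is that the covert channel needs nothing beyond an ordinary text field: everything an inspection tool sees is a short base64 string inside legitimate API responses.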
Because the attacker uses a legitimate cloud service for command and control, detection becomes harder, the researchers noted: there is no attacker-controlled C2 domain, only benign-looking traffic to api.openai.com.
Lessons for defenders and platform providers
Microsoft clarified that OpenAI's platform itself was not breached or exploited; rather, its legitimate API capabilities were misused as a relay channel. The incident highlights a growing risk as generative AI becomes part of enterprise and development workflows: attackers can now co-opt public AI endpoints to mask malicious intent, making detection significantly harder.
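Since the traffic itself looks benign, one practical defensive angle is baselining *which* processes on a host are expected to reach api.openai.com and flagging outliers. The sketch below shows this idea against a hypothetical, simplified egress log; the log format, process names, and allowlist are all assumptions for illustration.

```python
# Hypothetical egress-log triage: flag processes contacting
# api.openai.com that are not on the expected allowlist.
ALLOWED = {"chrome.exe", "code.exe"}  # processes expected to call the API

def flag_suspicious(log_lines):
    """Return processes that reached api.openai.com but are not allowlisted.
    Assumes a simplified 'process -> host' log line format."""
    hits = []
    for line in log_lines:
        process, _, host = line.partition(" -> ")
        if host.strip() == "api.openai.com" and process not in ALLOWED:
            hits.append(process)
    return hits

sample = [
    "chrome.exe -> api.openai.com",
    "svchost_helper.exe -> api.openai.com",  # unexpected caller
]
print(flag_suspicious(sample))  # ['svchost_helper.exe']
```

The design choice here is deliberate: when the destination is indistinguishable from legitimate use, detection has to shift from "where is the traffic going" to "who is generating it", which is exactly the monitoring gap this campaign exploited.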



