Cybersecurity researchers have disclosed that artificial intelligence (AI) assistants that support web browsing or URL fetching capabilities can be turned into stealthy command-and-control (C2) relays, a technique that could allow attackers to blend into legitimate enterprise communications and evade detection.
The attack method, which has been demonstrated against Microsoft Copilot and xAI Grok, has been codenamed AI as a C2 proxy by Check Point.
It leverages "anonymous web access combined with browsing and summarization prompts," the cybersecurity company said. "The same mechanism could enable AI-assisted malware operations, including generating reconnaissance workflows, scripting attacker actions, and dynamically deciding 'what to do next' during an intrusion."
The development signals yet another consequential evolution in how threat actors could abuse AI systems, not just to scale or accelerate different phases of the cyber attack cycle, but also to leverage APIs to dynamically generate code at runtime that can adapt its behavior based on information gathered from the compromised host and evade detection.
AI tools already act as a force multiplier for adversaries, allowing them to delegate key steps of their campaigns, whether it be conducting reconnaissance, scanning for vulnerabilities, crafting convincing phishing emails, creating synthetic identities, debugging code, or developing malware. But AI as a C2 proxy goes a step further.

It essentially leverages Grok's and Microsoft Copilot's web-browsing and URL-fetch capabilities to retrieve attacker-controlled URLs and return the responses through their web interfaces, in effect transforming them into a bidirectional communication channel that accepts operator-issued commands and tunnels victim data out.
Notably, all of this works without requiring an API key or a registered account, rendering traditional countermeasures like key revocation or account suspension ineffective.
Viewed differently, this approach is no different from attack campaigns that have weaponized trusted services for malware distribution and C2, a tactic commonly referred to as living-off-trusted-sites (LOTS).

However, for all this to happen, there is a key prerequisite: the threat actor must have already compromised a machine by some other means and installed malware, which then uses Copilot or Grok as a C2 channel via specially crafted prompts that cause the AI agent to contact the attacker-controlled infrastructure and pass the response containing the command to be executed on the host back to the malware.
Check Point also noted that an attacker could go beyond command generation and use the AI agent to devise an evasion strategy and determine the next course of action, by passing it details about the system and validating whether it is even worth exploiting.
"Once AI services can be used as a stealthy transport layer, the same interface could carry prompts and model outputs that act as an external decision engine, a stepping stone toward AI-driven implants and AIOps-style C2 that automate triage, targeting, and operational decisions in real time," Check Point said.
The disclosure comes weeks after Palo Alto Networks Unit 42 demonstrated a novel attack technique in which a seemingly innocuous web page can be turned into a phishing site by using client-side API calls to trusted large language model (LLM) services to generate malicious JavaScript dynamically in real time.
The approach is similar to Last Mile Reassembly (LMR) attacks, which involve smuggling malware through the network via unmonitored channels like WebRTC and WebSockets and assembling the pieces directly in the victim's browser, effectively bypassing security controls in the process.
"Attackers could use carefully engineered prompts to bypass AI safety guardrails, tricking the LLM into returning malicious code snippets," Unit 42 researchers Shehroze Farooqi, Alex Starov, Diva-Oriane Marty, and Billy Melicher said. "These snippets are returned via the LLM service API, then assembled and executed in the victim's browser at runtime, resulting in a fully functional phishing page."
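In a real LMR attack the reassembly happens in browser JavaScript fed over WebRTC or WebSocket channels; the minimal Python sketch below only models the split-and-reassemble step, to show why inspection of any single fragment reveals nothing signature-worthy. The function names and chunk size are illustrative, not taken from the research.

```python
import hashlib


def split_payload(payload: bytes, chunk_size: int = 64) -> list[bytes]:
    """Fragment a payload so that no individual network message carries
    enough contiguous content to match a scanner's signature."""
    return [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]


def reassemble(chunks: list[bytes], expected_sha256: str) -> bytes:
    """Client-side reassembly: concatenate the fragments and verify
    integrity before use, mirroring what LMR-style code does in the browser."""
    blob = b"".join(chunks)
    if hashlib.sha256(blob).hexdigest() != expected_sha256:
        raise ValueError("reassembled payload failed integrity check")
    return blob
```

Because each fragment transits an unmonitored channel and is only meaningful once recombined client-side, network-perimeter controls that scan complete files never see the assembled artifact.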



