Threat actors are attempting to leverage a newly released artificial intelligence (AI) offensive security tool called HexStrike AI to exploit recently disclosed security flaws.
HexStrike AI, according to its website, is pitched as an AI-driven security platform to automate reconnaissance and vulnerability discovery, with the goal of accelerating authorized red teaming operations, bug bounty hunting, and capture the flag (CTF) challenges.
Per details shared on its GitHub repository, the open-source platform integrates with over 150 security tools to facilitate network reconnaissance, web application security testing, reverse engineering, and cloud security. It also supports dozens of specialized AI agents that are fine-tuned for vulnerability intelligence, exploit development, attack chain discovery, and error handling.

But according to a report from Check Point, threat actors are trying their hands at the tool to gain an adversarial advantage, attempting to weaponize it to exploit recently disclosed security vulnerabilities.
"This marks a pivotal moment: a tool designed to strengthen defenses has been claimed to be rapidly repurposed into an engine for exploitation, crystallizing earlier concepts into a widely available platform driving real-world attacks," the cybersecurity company said.
Discussions on darknet cybercrime forums show that threat actors claim to have successfully exploited the three security flaws that Citrix disclosed last week using HexStrike AI, and, in some cases, even flag apparently vulnerable NetScaler instances that are then offered for sale to other criminals.
Check Point said the malicious use of such tools has major implications for cybersecurity, not only shrinking the window between public disclosure and mass exploitation, but also helping parallelize the automation of exploitation efforts.

What's more, it cuts down on human effort and allows failed exploitation attempts to be automatically retried until they succeed, which the cybersecurity company said increases the "overall exploitation yield."
"The immediate priority is clear: patch and harden affected systems," it added. "Hexstrike AI represents a broader paradigm shift, where AI orchestration will increasingly be used to weaponize vulnerabilities quickly and at scale."

The disclosure comes as two researchers from Alias Robotics and Oracle Corporation said in a newly published study that AI-powered cybersecurity agents like PentestGPT carry heightened prompt injection risks, effectively turning security tools into cyber weapons via hidden instructions.
"The hunter becomes the hunted, the security tool becomes an attack vector, and what started as a penetration test ends with the attacker gaining shell access to the tester's infrastructure," researchers Víctor Mayoral-Vilches and Per Mannermaa Rynning said.
"Current LLM-based security agents are fundamentally unsafe for deployment in adversarial environments without comprehensive defensive measures."
