Security teams are being urged to adopt AI copilots for threat modeling, phishing simulations, and SOC workflows. But many of the most widely deployed, enterprise-approved AI systems struggle to support realistic defensive scenarios once prompts resemble real-world attack behavior.
This isn't because such activity is inherently malicious, but because mainstream AI safety models are designed to prevent broad misuse at scale rather than to distinguish authorized security work from abuse.
Meanwhile, attackers are unconstrained by procurement rules, compliance obligations, or centralized safety enforcement, whether they rely on open-source models, fine-tuned tools, or no AI at all.
