The idea is not at all new. Long before AI, attackers used encryption, packing, junk code insertion, instruction reordering, and mutation engines to generate hundreds of thousands of variants from a single malware family. Modern endpoint platforms rely more on behavioral analysis than on static signatures.
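To see why this kind of variant generation defeats static signatures but not behavioral detection, consider a minimal, purely illustrative sketch: the same benign "payload" re-encoded with a different random XOR key each time produces a different file hash on every run, even though the decoded content (the behavior) never changes. This is a conceptual toy, not any real packer or malware tooling; the payload here is just a harmless string chosen for the example.

```python
import hashlib
import os

def xor_encode(data: bytes, key: bytes) -> bytes:
    """Trivially 'pack' a payload by XOR-ing it with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# A benign stand-in for a payload; the underlying content never changes.
payload = b"print('hello, world')"

for _ in range(3):
    key = os.urandom(8)                       # fresh random key per "variant"
    variant = key + xor_encode(payload, key)  # prepend the key so decoding is possible
    # Each variant has a different SHA-256, so a hash- or byte-signature match fails,
    # while the decoded payload (and thus runtime behavior) is identical every time.
    print(hashlib.sha256(variant).hexdigest())
```

Every hash printed is unique, yet the decoded bytes are always the same, which is exactly why defenders shifted toward watching what code does rather than what it looks like.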
In practice, most so-called AI-driven polymorphism amounts to swapping a deterministic mutation engine for a probabilistic one powered by a large language model. In theory, this could introduce more variability. Realistically, though, it offers no clear advantage over existing techniques.
Marcus Hutchins, malware analyst and threat intelligence researcher, calls AI polymorphic malware "a really fun novelty research project," but not something that gives attackers a decisive advantage. He notes that non-AI techniques are predictable, cheap, and reliable, whereas AI-based approaches require local models or third-party API access and can introduce operational risk. Hutchins also pointed to examples like Google's "Thinking Robot" malware snippet, which queried the Gemini AI engine to generate code to evade antivirus. In reality, the snippet merely prompted the AI to produce a small code fragment with no defined function and no guarantee of working in an actual malware chain.



