First, the agents were able to discover new vulnerabilities in a test environment, but that doesn’t mean they can find all kinds of vulnerabilities in all kinds of environments. In the simulations the researchers ran, the AI agents were essentially shooting fish in a barrel. These may have been new species of fish, but they knew, in general, what fish looked like. “We haven’t found any evidence that these agents can find new types of vulnerabilities,” says Kang.
LLMs can find new uses for common vulnerabilities
Instead, the agents found new instances of very common vulnerability classes, such as SQL injection. “Large language models, though advanced, are not yet capable of fully understanding or navigating complex environments autonomously without significant human oversight,” says Ben Gross, security researcher at cybersecurity firm JFrog.
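For readers unfamiliar with the class of flaw involved, the sketch below is a minimal, hypothetical illustration of a SQL injection: the table, data, and queries are invented for the example and are not from the research being described. The vulnerable function splices user input directly into a SQL string; the safe version uses a parameterized query.

```python
import sqlite3

# Hypothetical in-memory database with one user row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # VULNERABLE: user input is spliced directly into the SQL text,
    # so input containing quotes can rewrite the query's logic.
    query = f"SELECT secret FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # SAFE: a parameterized query treats the input as data, not SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic injection payload: the OR clause matches every row.
payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # leaks the secret: [('s3cret',)]
print(lookup_safe(payload))    # matches nothing: []
```

Finding a fresh instance of this pattern in an unfamiliar codebase is exactly the kind of "new example of a known class" the agents managed, as opposed to inventing a previously unknown vulnerability type.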
And there wasn’t much diversity in the vulnerabilities tested, Gross says: they were primarily web-based, and could be easily exploited because of their simplicity.