David Benas, associate principal security consultant at software security vendor Black Duck, said these security issues are a natural consequence of training AI models on human-generated code.
“The sooner everyone is comfortable treating their code-generating LLMs as they would interns or junior engineers pushing code, the better,” Benas said. “The underlying models behind LLMs are inherently going to be just as flawed as the sum of the human corpus of code, with an extra serving of flaw sprinkled on top because of their tendency to hallucinate, tell lies, misunderstand queries, process flawed queries, and so forth.”
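Benas’s point — that models inherit the flaws of the human code they were trained on — is easy to see with a classic example. The sketch below is purely illustrative (the function names and schema are assumptions, not from any cited incident): it contrasts a SQL-injection-prone pattern that appears widely in human-written code, and so in training corpora, with the parameterized version a reviewer would insist on.

```python
# Hypothetical illustration: a vulnerable pattern an LLM trained on
# human-written code might reproduce, next to the reviewed fix.
# All names here (find_user_unsafe, find_user_safe, the users table)
# are assumptions for this sketch.
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is concatenated directly into the SQL
    # string, so a crafted username can rewrite the query (SQL injection).
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Fixed: a parameterized query keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

A payload such as `x' OR '1'='1` makes the unsafe version return every row, while the parameterized version matches nothing — exactly the kind of flaw code review is meant to catch, whether the author is an intern or a model.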
While AI coding assistants such as GitHub Copilot improve developer velocity, they also introduce new security risks, John Smith, EMEA chief technology officer at Veracode, told CSO.