GitLab’s coding assistant Duo can parse malicious AI prompts hidden in feedback, supply code, merge request descriptions and commit messages from public repositories, researchers discovered. This system allowed them to trick the chatbot into making malicious code recommendations to customers, share malicious hyperlinks and inject rogue HTML code in responses that stealthily leaked code from personal initiatives.
“GitLab patched the HTML injection, which is nice, however the greater lesson is evident: AI instruments are a part of your appʼs assault floor now,” researchers from utility security agency Legit Safety mentioned in a report. “In the event that they learn from the web page, that enter must be handled like another user-supplied knowledge — untrusted, messy, and probably harmful.”
Immediate injection is an assault approach in opposition to giant language fashions (LLMs) to control their output to customers. And whereas it’s not a brand new assault, will probably be more and more related as enterprises develop AI brokers that parse user-generated knowledge and independently take actions based mostly on that content material.