HomeNewsAI browsers might be tricked with malicious prompts hidden in URL fragments

AI browsers might be tricked with malicious prompts hidden in URL fragments

Different assaults might contain the immediate inflicting the AI assistant to show pretend data that might mislead the person: pretend funding recommendation selling a sure inventory, fabricated information, harmful medical recommendation like unsuitable doses for drugs, malicious directions that would open a backdoor on the pc, directions to re-authenticate that embody a hyperlink to a phishing web site, a hyperlink to obtain malware, and so forth.

URL fragments can’t modify web page content material. They’re solely used for in-page navigation utilizing the code that’s already there, so they’re usually innocent. Nevertheless, it now seems that they can be utilized to change the output of in-browser AI assistants or agentic browsers, which provides them a completely new threat profile.

“This discovery is particularly harmful as a result of it weaponizes respectable web sites via their URLs,” the researchers mentioned. “Customers see a trusted web site, belief their AI browser, and in flip belief the AI assistant’s output-making the chance of success far greater than with conventional phishing.”

See also  Guarding towards DDoS assaults throughout high-traffic intervals
- Advertisment -spot_img
RELATED ARTICLES

LEAVE A REPLY

Please enter your comment!
Please enter your name here

- Advertisment -

Most Popular