
AI slop and fake reports are exhausting some security bug bounties

So-called AI slop, meaning LLM-generated low-quality images, videos, and text, has taken over the internet in the last couple of years, polluting websites, social media platforms, at least one newspaper, and even real-world events.

The world of cybersecurity isn't immune to this problem, either. In the last year, people across the cybersecurity industry have raised concerns about AI slop bug bounty reports, meaning reports that claim to have found vulnerabilities that don't actually exist, because they were created with a large language model that simply made up the vulnerability and then packaged it into a professional-looking writeup.

"People are receiving reports that sound reasonable, they look technically correct. And then you end up digging into them, trying to figure out, 'oh no, where is this vulnerability?'," Vlad Ionescu, the co-founder and CTO of RunSybil, a startup that develops AI-powered bug hunters, told news.killnetswitch.

"It turns out it was just a hallucination all along. The technical details were just made up by the LLM," said Ionescu.

Ionescu, who used to work on Meta's red team tasked with hacking the company from the inside, explained that one of the issues is that LLMs are designed to be helpful and give positive responses. "If you ask it for a report, it's going to give you a report. And then people will copy and paste these into the bug bounty platforms and overwhelm the platforms themselves, overwhelm the customers, and you get into this frustrating situation," said Ionescu.


"That's the problem people are running into, is we're getting a lot of stuff that looks like gold, but it's actually just crap," said Ionescu.

Just in the last year, there have been real-world examples of this. Harry Sintonen, a security researcher, revealed that the open source security project Curl received a fake report. "The attacker miscalculated badly," Sintonen wrote in a post on Mastodon. "Curl can smell AI slop from miles away."

In response to Sintonen's post, Benjamin Piouffle of Open Collective, a tech platform for nonprofits, said that they have the same problem: that their inbox is "flooded with AI garbage."

One open-source developer, who maintains the CycloneDX project on GitHub, pulled their bug bounty down entirely earlier this year after receiving "almost entirely AI slop reports."

The leading bug bounty platforms, which essentially work as intermediaries between bug bounty hackers and companies that are willing to pay and reward them for finding flaws in their products and software, are also seeing a spike in AI-generated reports, news.killnetswitch has learned.

Contact Us

Do you have more information about how AI is impacting the cybersecurity industry? We'd love to hear from you. From a non-work device and network, you can contact Lorenzo Franceschi-Bicchierai securely on Signal at +1 917 257 1382, or via Telegram and Keybase @lorenzofb, or by email.


Michiel Prins, the co-founder and senior director of product management at HackerOne, told news.killnetswitch that the company has encountered some AI slop.

"We've also seen a rise in false positives: vulnerabilities that appear real but are generated by LLMs and lack real-world impact," said Prins. "These low-signal submissions can create noise that undermines the efficiency of security programs."

Prins added that reports that contain "hallucinated vulnerabilities, vague technical content, or other forms of low-effort noise are treated as spam."

Casey Ellis, the founder of Bugcrowd, said that there are definitely researchers who use AI to find bugs and write the reports that they then submit to the company. Ellis said the platform is seeing an overall increase of 500 submissions per week.

"AI is widely used in most submissions, but it hasn't yet caused a significant spike in low-quality 'slop' reports," Ellis told news.killnetswitch. "This'll probably escalate in the future, but it's not here yet."

Ellis said that the Bugcrowd team that analyzes submissions reviews the reports manually using established playbooks and workflows, as well as with machine learning and AI "assistance."

To see if other companies, including those that run their own bug bounty programs, are also receiving an increase in invalid reports or reports containing non-existent vulnerabilities hallucinated by LLMs, news.killnetswitch contacted Google, Meta, Microsoft, and Mozilla.

Damiano DeMonte, a spokesperson for Mozilla, which develops the Firefox browser, said that the company has "not seen a substantial increase in invalid or low-quality bug reports that would appear to be AI-generated," and the rejection rate of reports (meaning how many reports get flagged as invalid) has remained steady at 5 or 6 reports per month, or less than 10% of all monthly reports.


"Mozilla's employees who review bug reports for Firefox don't use AI to filter reports, as it would likely be difficult to do so without the risk of rejecting a legitimate bug report," DeMonte said in an email.

Microsoft and Meta, companies that have both bet heavily on AI, declined to comment. Google did not respond to a request for comment.

Ionescu predicts that one of the solutions to the problem of rising AI slop will be to keep investing in AI-powered systems that can at least perform a preliminary review and filter submissions for accuracy.

In fact, on Tuesday, HackerOne launched Hai Triage, a new triaging system that combines humans and AI. According to HackerOne spokesperson Randy Walker, the new system leverages "AI security agents to cut through noise, flag duplicates, and prioritize real threats." Human analysts then step in to validate the bug reports and escalate as needed.

As hackers increasingly use LLMs and companies rely on AI to triage these reports, it remains to be seen which of the two AIs will prevail.
