HomeVulnerabilityGoogle Expands Its Bug Bounty Program to Deal with Synthetic Intelligence Threats

Google Expands Its Bug Bounty Program to Deal with Synthetic Intelligence Threats

Google has introduced that it is increasing its Vulnerability Rewards Program (VRP) to reward researchers for locating assault eventualities tailor-made to generative synthetic intelligence (AI) techniques in an effort to bolster AI security and security.

“Generative AI raises new and totally different considerations than conventional digital security, such because the potential for unfair bias, mannequin manipulation or misinterpretations of knowledge (hallucinations),” Google’s Laurie Richardson and Royal Hansen stated.

Among the classes which might be in scope embrace immediate injections, leakage of delicate information from coaching datasets, mannequin manipulation, adversarial perturbation assaults that set off misclassification, and mannequin theft.

It is price noting that Google earlier this July instituted an AI Purple Staff to assist tackle threats to AI techniques as a part of its Safe AI Framework (SAIF).

Additionally introduced as a part of its dedication to safe AI are efforts to strengthen the AI provide chain through present open-source security initiatives reminiscent of Provide Chain Ranges for Software program Artifacts (SLSA) and Sigstore.

Artificial Intelligence Threats

“Digital signatures, reminiscent of these from Sigstore, which permit customers to confirm that the software program wasn’t tampered with or changed,” Google stated.

See also  The rising menace of identity-related cyberattacks: Insights into the menace panorama

“Metadata reminiscent of SLSA provenance that inform us what’s in software program and the way it was constructed, permitting customers to make sure license compatibility, determine identified vulnerabilities, and detect extra superior threats.”

The event comes as OpenAI unveiled a brand new inner Preparedness staff to “monitor, consider, forecast, and shield” towards catastrophic dangers to generative AI spanning cybersecurity, chemical, organic, radiological, and nuclear (CBRN) threats.

The 2 corporations, alongside Anthropic and Microsoft, have additionally introduced the creation of a $10 million AI Security Fund, targeted on selling analysis within the subject of AI security.

- Advertisment -spot_img
RELATED ARTICLES

LEAVE A REPLY

Please enter your comment!
Please enter your name here

- Advertisment -

Most Popular