Google adds generative AI threats to its bug bounty program

Key Takeaways:

– Google has extended its Vulnerability Rewards Program to cover bugs related to generative AI.
– The bug bounty program aims to keep users safe and has paid out millions in rewards.
– The extension of the program is part of Google’s commitment to advancing the discovery of vulnerabilities in AI systems.
– Generative AI raises concerns about unfair bias, model manipulation, and misinterpretation of data.
– Google has published guidelines for the AI-focused portion of its VRP.
– Rewards range from $500 up to $31,337 for the highest-severity vulnerabilities.
– The goal is to incentivize more security research and collaboration with the open-source security community.
– The extension of the VRP aims to make AI safer for everyone.

TechRadar:

Google has extended its Vulnerability Rewards Program to cover bugs relating to generative AI in a move that will benefit both developers and consumers.

The company’s bug bounty program is already a well-known initiative designed to keep users safe, and has paid out millions in rewards over the years, including more than $12 million in 2022 alone.
AI Eclipse TLDR:

Google has expanded its Vulnerability Rewards Program (VRP) to include bugs related to generative AI. The company’s bug bounty program, which aims to keep users safe, has already paid out millions in rewards, including more than $12 million in 2022 alone. By extending the VRP to cover under-the-radar faults in generative AI, Google aims to promote responsible AI, and it has committed, alongside other leading AI companies, to advancing the discovery of vulnerabilities in AI systems. Google’s AI-focused VRP addresses concerns such as unfair bias, model manipulation, and misinterpretation of data. Rewards range from $500 up to $31,337, with the top amount reserved for the highest-severity vulnerabilities, such as those that could result in the takeover of a Google account; even the lowest eligible security findings earn at least $100. By incentivizing security research and applying supply chain security practices to AI, Google hopes to encourage collaboration with the open-source security community and other industry players, ultimately making AI safer for everyone.