Google has recently revealed its plans to expand the scope of its Vulnerability Rewards Program (VRP), extending compensation to researchers who identify vulnerabilities specific to generative artificial intelligence (AI) systems. This expansion aims to enhance AI safety and security, acknowledging the distinctive challenges posed by generative AI technology, such as the potential for unfair bias, model manipulation, and misinterpretations of data.

Laurie Richardson and Royal Hansen from Google emphasized that generative AI introduces novel concerns that differ from traditional digital security threats. These concerns encompass issues like prompt injections, data leakage from training datasets, model manipulation, adversarial perturbation attacks leading to misclassification, and model theft. These categories now fall within the VRP’s purview as part of Google’s commitment to fortify AI security.

Earlier this year, in July, Google initiated an AI Red Team, an integral component of its Secure AI Framework (SAIF), dedicated to addressing emerging threats to AI systems. Additionally, Google’s commitment to ensuring secure AI extends to efforts to strengthen the AI supply chain. It leverages existing open-source security initiatives, including Supply Chain Levels for Software Artifacts (SLSA) and Sigstore. SLSA provides essential metadata about software, enabling users to verify authenticity, ensuring compliance with licenses, identifying known vulnerabilities, and detecting more advanced threats.

Google stated, “Digital signatures, such as those from Sigstore, which allow users to verify that the software wasn’t tampered with or replaced.” This proactive approach to AI security underscores Google’s commitment to safeguarding AI systems from various potential risks and vulnerabilities.

Meanwhile, OpenAI has also taken significant steps in bolstering AI safety. They introduced an internal Preparedness team tasked with monitoring, evaluating, forecasting, and protecting against catastrophic risks associated with generative AI. These risks encompass cybersecurity threats as well as those in the chemical, biological, radiological, and nuclear (CBRN) domains. This move underscores the increasing recognition of the multifaceted challenges posed by AI advancements and the need to prepare for potential adversities.

In a collaborative effort, Google, OpenAI, Anthropic, and Microsoft have jointly established a $10 million AI Safety Fund. The fund is dedicated to promoting research in the field of AI safety, underlining the collective commitment of these industry leaders to ensure the responsible and secure development of AI technologies. This initiative reflects the growing awareness of the need for comprehensive AI safety measures and the importance of collaboration in addressing these challenges.

Leave a comment

This site uses Akismet to reduce spam. Learn how your comment data is processed.

Trending

Design a site like this with WordPress.com
Get started