Android Central
Nickolas Diaz

Google and others agree to advance AI safety practices with the White House

(Image: AI at Google I/O 2023)

What you need to know

  • Google, along with six other companies, has voluntarily committed to advancing AI safety practices.
  • The commitments span earning the public's trust, stronger security, and public reporting about the companies' systems.
  • This echoes a similar collaboration Google has with the EU called the "AI Pact."

Google has announced that it is banding together with six other leading AI companies to advance "responsible practices in the development of artificial intelligence." Google, Amazon, Anthropic, Inflection, Meta, Microsoft, and OpenAI have all voluntarily committed to the new practices and are meeting with the Biden-Harris Administration at the White House on July 21.

One of the biggest commitments, arguably, is building trust in AI, or, as the White House put it in its fact sheet, "earning the public's trust." Google points to the AI Principles it created back in 2018 to help people understand and feel comfortable with its artificial intelligence software.

To that end, the Biden-Harris Administration says companies must commit to developing ways of letting users know when content is AI-generated. These include watermarking, metadata, and other provenance tools that show where something, such as an image, originated.

The companies are also tasked with researching the societal risks AI systems pose, such as "harmful bias, discrimination, and protecting privacy."


Next, companies must publicly report on their AI systems on an ongoing basis so everyone, including the government and others in the industry, can understand where those systems stand in terms of security and societal risk. Developing AI to help tackle healthcare issues and environmental challenges is also on the commitment list.

Security is another hot topic, and as the White House's fact sheet states, all seven companies are to invest in cybersecurity measures and "insider threat protocols" to protect proprietary and unreleased model weights. Those weights are deemed the most important thing to protect when developing security protocols for AI systems.

Companies are also required to facilitate third-party discovery and reporting of vulnerabilities in their systems.


All of this must happen before companies roll out new AI systems to the public, the White House states. The seven companies need to conduct internal and external security testing of their AI systems before release. They also need to share information about safety best practices and emerging threats to their systems across the industry and with the government, civil society, and academia.

That emphasis on safety comes as companies such as Google have warned their own employees to exercise caution when using AI chatbots over security concerns. It isn't the first instance of such wariness, either: Samsung had quite the scare when an engineer accidentally submitted confidential company code to an AI chatbot.

Lastly, Google's voluntary commitment to advancing safe AI practices alongside the others comes two months after it joined the EU in a similar effort. There, the company helped create the "AI Pact," a new set of guidelines that companies in the region were urged to voluntarily adopt to get a handle on AI software before it goes too far.
