Government scientists and artificial intelligence experts from nine countries and the European Union will convene in San Francisco after the U.S. elections to collaborate on the safe development of AI technology and the mitigation of its potential risks.
The two-day international AI safety gathering, scheduled for November 20 and 21, was announced by President Joe Biden's administration. This meeting follows a commitment made at an AI Safety Summit in the UK last year to collectively address the dangers posed by advancements in AI.
The urgent topics likely to be discussed include the rise of AI-generated fake content and the challenge of determining when AI systems require regulatory measures due to their capabilities or potential dangers.
The meeting aims to establish shared standards among participating countries for addressing risks from synthetic content and the malicious use of AI, with the goal of containing those risks while realizing the technology's full potential.
San Francisco, a hub for generative AI technology, will host the technical collaboration on safety measures ahead of a broader AI summit planned for February in Paris. The event will be co-hosted by the U.S. Commerce Secretary and the Secretary of State, and will bring together the national AI safety institutes of the participating countries.
Notably absent from the participant list is China, although efforts are being made to involve additional scientists in the discussions. The focus remains on preventing the misuse of AI in critical areas such as nuclear weapons and bioterrorism.
While governments worldwide have pledged to safeguard AI technology, approaches vary. The EU has enacted comprehensive AI legislation with stringent restrictions on high-risk applications, setting a precedent for regulation.
The Biden administration's executive order on AI mandates safety testing and information sharing for developers of powerful AI systems. Companies like OpenAI have collaborated with national AI safety institutes to ensure responsible deployment of advanced AI models.
Efforts to move beyond voluntary safety measures and establish regulatory frameworks are gaining momentum, with tech companies acknowledging the need for AI regulation to balance innovation with risk mitigation.
Recent legislative actions, such as California's bills targeting political deepfakes and proposed regulations for powerful AI models, reflect the growing consensus on the importance of AI governance in safeguarding against potential threats.