Talk of regulation has become inseparable from the conversation on artificial intelligence.
OpenAI CEO Sam Altman, looking ahead to an unpredictable surge of artificial superintelligence, has been a notable leader in the conversation, testifying in May that government regulation of the booming industry is vital in the long run.
Both Google and Microsoft have additionally touted the importance of safety and responsibility when working with AI.
Facing coming regulation in the European Union, Google (GOOGL) CEO Sundar Pichai hopped more firmly on board the regulation bandwagon, agreeing Wednesday to a voluntary pact to comply with EU rules ahead of the legal deadline.
"Agreed with Google CEO Sundar Pichai to work together with all major European and non-European AI actors to develop an 'AI Pact' on a voluntary basis ahead of the legal deadline of the AI regulation," said Thierry Breton, the European Commissioner for Internal Market. "We expect technology in Europe to respect all of our rules, on data protection, online safety and artificial intelligence."
Margrethe Vestager -- an executive vice president of the European Commission -- noted the importance of the EU's AI Act, but acknowledged the exponential rate at which the technology has been moving.
"We need the AI Act as soon as possible," she wrote. "But AI technology evolves at extreme speed. So we need voluntary agreement on universal rules for AI now."
The proposed AI Act, which marks the first attempt at governmental regulation of AI, would sort AI systems into three categories: unacceptable risk, high risk and everything else.
Under the proposed law, any AI system that poses an unacceptable risk would be banned; any systems that present high risk would be subject to legal requirements. Any other systems would remain unregulated.