Sam Altman, the CEO of the artificial intelligence company OpenAI, testified before Congress about the “urgent” need for the government to create regulations surrounding AI.
“I think if this technology goes wrong, it can go quite wrong,” Mr Altman told the Senate Judiciary Subcommittee on Privacy, Technology & the Law on 16 May.
Mr Altman, who helped create OpenAI’s ChatGPT and DALL-E 2, testified about the dangers AI could pose in the future without a regulatory committee or agency to set rules and hold companies accountable.
Some of those dangers include spreading election misinformation, displacing jobs and manipulating people’s views.
“We want to be vocal about that,” Mr Altman said. “We want to work with the government to prevent that from happening.”
The OpenAI CEO was joined by IBM’s chief privacy and trust officer, Christina Montgomery, as well as Dr Gary Marcus, a professor at New York University and an expert on AI.
All three witnesses agreed that new legislation is needed to regulate AI.
Mr Altman and Dr Marcus suggested creating a new agency, at either the national or global level, that would issue licenses for AI systems and revoke them if companies failed to comply with safety standards.
Unlike previous congressional hearings on technology and safety standards, Tuesday’s hearing was a clear bipartisan effort to understand the technology and find solutions.
Lawmakers asked thoughtful questions and Mr Altman, Dr Marcus and Ms Montgomery gave in-depth answers as the group sought ethical ways to regulate the powerful new technology.
When senators asked about ChatGPT’s potential to spread election misinformation, Mr Altman said he is “quite concerned” about the impact AI could have on the democratic process.
Mr Altman said that companies abiding by ethical codes and keeping the public well informed were two ways to combat election misinformation.
But despite the frightening and real risks of AI, Mr Altman remained positive about the future of the technology.
“We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work,” Mr Altman said.
AI is often perceived as a malign force that could take over the world and harm humans, a hypothetical scenario that Senator John Kennedy (R-LA) raised during questioning.
The OpenAI CEO encouraged people to view ChatGPT as a “tool”, not a “creature”, when thinking about AI regulations.
“It’s a tool that people have great control over,” Mr Altman said.
Still, all three witnesses seemed confident that a regulatory agency or set of rules could reduce the potential harms of AI, and each expressed a willingness to be part of that effort.
“My worst fears are that we cause significant harms to the world,” Mr Altman said.