President Joe Biden on Monday signed a sweeping new executive order on artificial intelligence, the first significant action the U.S. government has taken to address the technology. The order is broken into eight categories, which include the creation of new safety and security standards for AI, broad and targeted protections for workers and consumers, and the promotion of innovation.
The lengthy fact sheet that the White House released detailing the order focuses on enhancing research and laying out guidelines to keep consumers safe. Among its provisions is a requirement that the developers of the most powerful AI systems share "critical information," including the results of internal safety tests, with the U.S. government.
Related: The ethics of artificial intelligence: A path toward responsible AI
OpenAI, the maker of ChatGPT, did not respond to TheStreet's request for comment regarding the order.
IBM CEO Arvind Krishna, however, said in a statement that he "welcomes" the order.
"We’re pleased that this Executive Order creates new opportunities for innovation and research, regarding the safety and risk-management of AI foundation models," he said.
Despite its focus on mandating that companies share the results of their internal safety tests, the order does not require them to share the details of their training sets with the government or with outside researchers, a major point of concern in the AI safety community. Only by understanding the details of a training set can researchers gauge the true capabilities (and threats) of an AI model; the major AI developers, OpenAI among them, have kept their datasets secret.
Professor Gary Marcus, a prominent AI researcher, said Monday that there is a "lot to like" in the order. Importantly, he noted, it addresses many of the risks that AI presents, including bias, civil rights, algorithmic discrimination, criminal justice, workers' rights, cybersecurity, national security and bioweapons.
However, Marcus added that the order's effectiveness will depend on its specific wording and on how stringently each requirement is enforced.
Marcus argued that, though the order is a good start, it does not measure up to the FDA's process for approving a new drug, an approach he would like to see adopted for AI.
"When the FDA approves a drug, they (look) for evidence that the benefits outweigh the risks; paperwork alone is not enough," Marcus wrote. "What we really need is a peer-review process by independent scientists; it still seems like the companies are free to do largely as we please. That’s not enough."
Still, Marcus said, the order represents an "important first step." What is needed now is some "teeth."
Nearly 70% of U.S. voters across a range of political affiliations support the executive order, according to a new poll from the AI Policy Institute (AIPI). Just 15% oppose it.
Read the full White House Fact Sheet here.
Related: Marc Andreessen defends Silicon Valley in bold, tech-loving manifesto
The order comes on the heels of OpenAI's new 'Preparedness' team
This first governmental step toward taming AI comes just four days after OpenAI, the company behind AI phenom ChatGPT, announced its attempt to mitigate the risks created by its own products and technology.
Citing, as usual, both enormous benefits and "catastrophic risks," OpenAI said in a blog post that it is starting a new "Preparedness" team to minimize those risks going forward. The team will track, evaluate, forecast and protect against such risks across categories including individualized persuasion, cybersecurity, autonomous replication and biological and nuclear threats.
Despite its regular recognition of such catastrophic risks, many of which have been dismissed by prominent AI researchers, OpenAI remains committed to its mission to build super-intelligent AI.
"I expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes," Sam Altman, OpenAI's Chief Executive, wrote Oct. 24.
While the initiative sounds like a good start, Dr. Sasha Luccioni, a prominent AI researcher, urged caution around internal audits that occur behind closed doors.
"I'm not getting my hopes up about this initiative," she told TheStreet, adding that "the framing around Artificial General Intelligence (which is a myth) and the connotations behind 'preparedness' (which is related to survivalism/doomsday prepping, for me) are frustrating."
Related: Artificial Intelligence is a sustainability nightmare - but it doesn't have to be
Luccioni said that preparing for future existential risks is not what is needed right now; what is needed instead is "concrete actions taken now to make OpenAI's systems more transparent and robust: information about training data and bias, external red teaming and audits."
The key to AI safety, she argued, is transparency, something the White House's new order addresses only in part.
Dr. Srinivas Mukkamala, Ivanti’s CPO and AI authority leader, likewise said that it remains important to address the risks that today's AI models pose, rather than focusing only on the more dramatic risks of future models.
"The risks of AI to society and AI bias naturally occurring in data or algorithmic models is in its infancy but will grow rapidly," Mukkamala told TheStreet. "This is why we need to address the prejudice and ethical issues facing us today."
Without implementing data and model attribution, he said, "we won't have the ability to hold humans and organizations building the models accountable."
"It is still unclear how OpenAI will take accountability for what risks the new Preparedness Team finds," he added. "That will determine if this is a positive step forward or just a lot of noise.
Contact Ian on Signal, 732-804-1223, or at ian.krietzberg@thearenagroup.net
Related: How one tech company is tackling the recent proliferation of deepfake fraud