Fast Facts
- Gladstone AI last week published a nearly 300-page government-commissioned report detailing the "catastrophic" risks posed by AI.
- TheStreet sat down with David Bader, the Director of the Institute for Data Science at New Jersey Institute of Technology, to break down the report.
- "Certainly there's a lot of hype," Bader said, "but there's also a lot of real-world threats."
Last week, Gladstone AI published a report — commissioned for $250,000 by the U.S. State Department — that detailed the apparent "catastrophic" risks posed by untethered artificial intelligence technology. It was first reported on by Time.
The report, based on more than 200 interviews with AI researchers, leading AI labs and prominent AI executives, was compiled by Gladstone's Edouard Harris, Jeremie Harris and Mark Beall over the past year. It warns that, if it remains unregulated, AI could “pose an extinction-level threat to the human species.”
The report, which TheStreet reviewed in full, focuses on two key risks: the weaponization of AI and a potential loss of human control. In terms of weaponization, the report warns that models can be used to power everything from mass disinformation campaigns to large-scale cyberattacks, going on to suggest that advanced, future models might be able to assist in the creation of biological weaponry.
The risk of a loss of control, according to the report, stems from future, highly advanced (and theoretical) models that, in achieving "superhuman" capabilities, "may engage in power-seeking behaviors," becoming "effectively uncontrollable."
Though the report says there is evidence to support this claim, it does not explore that evidence.
Related: No, Elon Musk, AI self-awareness is not 'inevitable'
The lines of effort
To address these two risks, the report lays out five lines of effort.
The government, according to the report, should establish an interim set of safeguards, including the creation of an AI Observatory to monitor developments in the space, a set of responsible safeguards for developers and an AI Safety Task Force to enforce those safeguards. It calls here for more control over the advanced AI supply chain.
The report also calls for funding open research into AI safety, as well as establishing deployment safety standards.
The report's fourth line of effort calls for the establishment of an AI regulatory agency, which would have licensing powers over the companies developing the tech.
And the fifth line of effort calls for the creation of international safeguards and an international rule-making and licensing agency that would oversee and monitor AI projects around the globe.
Related: The ethics of artificial intelligence: A path toward responsible AI
The AGI of it all
The glaring hole in the report, as pointed out by a number of experts on X, is that it is predicated on catastrophic risks stemming from the possible future creation of artificial general intelligence (AGI), a theoretical AI system that would match or approach human-level intelligence.
"There's no science in X risk." — Dr. Suresh Venkatasubramanian
AGI, however, does not exist, and many experts do not believe it will ever be possible, especially considering that researchers have yet to understand the whys and hows behind human intelligence, cognition or consciousness; the replication of that inexplicable human reality is, therefore, a significant challenge.
"It's a ploy by some. It's an actual belief by others. And it's a cynical tactic by even more," Dr. Suresh Venkatasubramanian, an AI researcher and professor who in 2021 served as a White House tech advisor, said of the so-called AI extinction risk last year. "It's a great degree of religious fervor sort of masked as rational thinking."
"I believe that we should address the harms that we are seeing in the world right now that are very concrete," he added. "And I do not believe that these arguments about future risks are either credible or should be prioritized over what we're seeing right now. There's no science in X risk."
And though current language models might seem intelligent, researchers have dismissed that appearance as a mirage; the models are, in actuality, predictive generators trained on enormous amounts of content.
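As a rough illustration of what "predictive generator" means (a simplified sketch, not drawn from the report): a toy model can learn, from a small sample of text, which word tends to follow which, and then produce text by repeatedly predicting a likely next word. Modern language models do this with neural networks over vast corpora of tokens rather than whole words, but the underlying task is the same.

```python
# Toy "next-word predictor" (illustrative only; real language models are
# neural networks trained on vastly more data).
import random
from collections import Counter, defaultdict

training_text = (
    "the model predicts the next word "
    "the model learns patterns from text "
    "the next word is chosen by probability"
).split()

# Count how often each word follows each other word in the training text.
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the model predicts the next word is chosen by"
```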
"What? Govs hire consultants daily to write whatever. Often, they produce trash. Here, a v small consultancy wrote a report for USG amplifying discredited X-risk ideologies + prescribing bad policy. AFAICT it's based on ~5 ppl's hunches? Why're we taking this seriously?"
— Meredith Whittaker (@mer__edith), March 12, 2024
The report acknowledges that it views AGI as the driver behind all this AI risk, while making no mention of the many current harms already caused by the tech. It even states that while companies including OpenAI, Google DeepMind, Anthropic and Nvidia have suggested AGI is only a few years away, "they may also face incentives to exaggerate their near-term capabilities."
The report says that, in an effort to address this problem, it asked a handful of technical sources what the odds are of AI leading to "global and irreversible effects." The lowest estimate was 4%; the highest was 20%.
But Harris later added in a post on X that he and his team surveyed only "5-10" people, saying that "20% was slightly more anomalous, but the folks working on the most cutting edge systems gave higher numbers in every case."
The report goes on to acknowledge that "there is no direct empirical evidence that a future AGI system will behave dangerously" with regard to a potential loss of control, adding that, on the flip side, there is no evidence to suggest AGI will behave safely, either.
More deep dives on AI:
- AI tax fraud: Why it's so dangerous and how to protect yourself from it
- Cybersecurity expert says the next generation of identity theft is here: 'Identity hijacking'
- Deepfake program shows scary and destructive side of AI technology
Without that evidence, the report instead relies on "theoretical arguments guided by experiments on today’s less capable AI systems," a model that it says has "significant limitations."
“The same people cycle between selling AGI utopia and doom,” Timnit Gebru, a former Google computer scientist, told the New Yorker. “They are all endowed and funded by the tech billionaires who build all the systems we’re supposed to be worried about making us extinct.”
On the same day the report was published, Beall left Gladstone to launch what he called "the first AI safety Super PAC," according to VentureBeat. His plan is to "run a national voter education campaign on AI policy."
He told VentureBeat that the PAC has already secured initial investments and plans to raise millions of dollars in the coming weeks and months.
Related: Human creativity persists in the era of generative AI
Sifting through the hype
While many dismissed the report for making arguments unsupported by evidence or science, others lauded it, not for its fear of nonexistent AGI, but for its action-oriented suggestions that could allow the government to better rein in the industry.
The nonprofit Control AI said that it is "heartening to see this taken seriously at the highest levels," adding that "we should be well-prepared, and regulations and guard rails should be in place to ensure that it only benefits humanity."
Ed Newton-Rex, the CEO of the nonprofit Fairly Trained, called the report "extremely important" for recommending that the government "act fast" on AI.
TheStreet sat down with David Bader, the Director of the Institute for Data Science at New Jersey Institute of Technology, to discuss the report. Bader acknowledged that "certainly there's a lot of hype, but there's also a lot of real-world threats."
"Now is the time that the world has to have these conversations." — Dr. David Bader
With the technology moving as quickly as it is, he said that people are hearing everything from promises of an AI-fueled utopia to an AI-fueled apocalypse.
"My thought is that there are some real concerns to think about, but we do have some time to think about it," he said. "This report raises some interesting directions for trying to understand what to do."
According to Bader, the report's recommendation to safeguard the supply chains behind AI technology, an effort to shore up the lacking security it found at the labs building the tech, is a good one.
He did say, however, that there are a number of other risks and harms posed by AI that are important not to ignore, including algorithmic bias, harms stemming from deepfake creation and mounting problems concerning disinformation, cyberattacks and self-driving vehicles.
He went on to acknowledge the national security risks laid out in the report, as well as the value of its five lines of effort, saying they represent good ways forward for policies and laws aimed at mitigating AI risk.
Still, he said that he is pessimistic that the government will be able to "put the genie back in the bottle and control AI," even through laws and regulations. "I think this regulation is probably going to be one of the hardest regulations to create," he added, citing the difficulty inherent in the report's lofty goals of international regulatory coordination.
But when it comes to the AGI of it all, Bader isn't sold.
"I'm still a little bit leery that that's something that we'll achieve. I think the hype over AI at the moment ... we see it isn't a panacea of excellent and fantastic information," Bader said. "It hallucinates a lot. There's a lot of bias. We're getting there but I still think it's a long way off before we see AGI."
"There is still a lot of hype with AI but it's getting better and better every day," he added. "Now is the time that the world has to have these conversations."
Related: Deepfake porn: It's not just about Taylor Swift
Why not shut it down?
In this conversation about the risk of world-destroying AI, prompted by the report, I asked Bader: "Why not shut it down?"
"Every technology, every basic and foundational technology we create, can be used for good purposes and it can be used for nefarious purposes," Bader said. "So whether it's a ballpoint pen, or whether it is a weather satellite or whether it's a new medicine, everything that we create can do good or do harm."
AI, he said, may be able to help humanity mitigate climate change (an effort some companies are already exploring), help solve geopolitical crises or help feed populations around the world.
"When we look at these technologies, I don't think the right thing to do is shut it down. If we did that we would be left in this country without automobiles, without electricity, without lightbulbs, without all the technology that got us to where we are today," Bader said.
"I think technology has a way of making lives better. So I'm more of an optimist that we should continue developing these technologies, but then we have to understand how to mitigate the risks and reduce the potential bad uses of that technology."
Contact Ian with tips and AI stories via email, ian.krietzberg@thearenagroup.net, or Signal 732-804-1223.
Related: Senate Judiciary Committee seeks to build new framework to rein in Big Tech