The Street
Ian Krietzberg

Meta and IBM team up against dominant Big Tech players

In a prominent push toward open-sourcing artificial intelligence, IBM and Meta on Tuesday launched a group called the AI Alliance, an international coalition of corporations, universities and organizations that are collectively committed to "open science" and "open technologies." 

The Alliance, according to a statement, will be "action-oriented" and is meant to help shape a more equitable evolution of the technology.

Some prominent members of the organization include AMD, Cornell University, Harvard University, Yale University, NASA, Hugging Face and Intel. 

Related: IBM exec explains the difference between it and prominent AI competitors

The goal of the group, according to a statement, is to enhance responsible innovation by ensuring trust, safety and scientific rigor. To achieve that goal, it will push for the development of benchmarks and evaluation standards, support AI skill-building around the world and highlight members' use of responsible AI. 

The Alliance, which plans to partner with government and nonprofit initiatives, said it will establish a governing board and a technical oversight committee to help achieve those goals, though it did not say when either body would be formed.

IBM SVP Darío Gil wrote Tuesday that in light of the recent drama at OpenAI, it is even more important that AI not be relegated to only a "few personalities and institutions." 

"The future of AI is approaching a fork in the road. One path is dangerously close to creating consolidated control of AI, driven by a small number of companies that have a closed, proprietary vision for the AI industry," Gil said. 

"Down the other path lies a broad, open road: a highway that belongs to the many, not the few, and is protected by the guardrails that we create together."

Related: The ethics of artificial intelligence: A path toward responsible AI

Critical transparency in AI

The companies' statement does not elaborate on how the Alliance will achieve or ensure the safety and responsibility of these shared AI models.

Still, the premise of the Alliance goes to the heart of a key element of the AI debate: closed versus open-source technology.

Closed-source models — including OpenAI's ChatGPT and the models produced by Microsoft and Google — are proprietary, meaning that, while users can interact with the technology through an internet interface, no one outside the companies themselves has access to the software (or the training data). 

Open-source models, meanwhile — like Meta's Llama and IBM's geospatial model, which was open-sourced through Hugging Face — are designed for greater accessibility. 
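
As a rough illustration of that accessibility (a sketch for context, not something described in the Alliance's statement): with an open model, anyone can download the actual weights from Hugging Face in a few lines of Python using the `transformers` library. The model ID below is just an example, and Meta gates Llama downloads behind a license agreement.

```python
# Illustrative sketch: pulling an open-source model's weights from Hugging Face.
# The model ID is an example; Llama models require accepting Meta's license
# on huggingface.co before the files can be downloaded.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # example ID, gated behind a license

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# With the weights on disk, researchers can inspect parameters, measure the
# model's size, and run it locally, a level of scrutiny closed APIs don't allow.
print(f"Parameters: {sum(p.numel() for p in model.parameters()):,}")
```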

Proponents of open-source AI argue that the approach democratizes the technology, a point the Alliance makes in its statement, and enables the kind of transparency that is so vital (and so often lacking) in the industry. 

With closed-source models, independent research into individual models is nearly impossible, which makes it difficult for researchers, and in turn regulators, to understand the actual capabilities of a given model, as well as the environmental cost of training and running it. 

"With large language models, the majority are closed source and so you don't really get access to their nuts and bolts," AI expert Dr. Sasha Luccioni told TheStreet in June. "And so it's hard to do any kind of meaningful studies on them because you don't know where they're running, how big they are. You don't know much about them."

AI researcher Dr. John Licato told TheStreet in May that the crux of achieving ethical, safe AI is transparent research into current models. 

When that research is done only by for-profit companies, he said, "that's when all the things that we're afraid could happen with AI are much more likely to happen."

Critics of open-sourcing, however, argue that an open model is far more ripe for misuse. 

AI expert Gary Marcus said in a November post that "nobody has strong positive guarantees that there are no serious possible consequences of open source AI," citing the potential for misinformation generation and the creation of bioweapons. 

"That said, we don’t have any strong positive guarantees whatsoever," he added. 

Clément Delangue, the co-founder and CEO of Hugging Face, replied to Marcus's post, saying those points apply to non-open-source AI as well, and at a potentially larger scale, since proprietary AI is mass-deployed. 

"Open-source is the only way to keep non-open-source in check," he said. "Without it, you'll have extreme concentration of power/knowledge/secrecy with 1,000x the risk. Open-source is more the solution than the problem for AI risks."

Indeed, the lack of democratic decision-making around these technologies, and the way that may impact regulation, has left a litany of experts more concerned about AI than about anything else. 

Related: Think tank director warns of the danger around 'non-democratic tech leaders deciding the future'

The Alliance isn't a 'silver bullet'

Those issues of power concentration in AI apply even to IBM and Meta's AI Alliance, AI expert and Ivanti CPO Srinivas Mukkamala told TheStreet. 

The Alliance, he said, seems to be the private sector's attempt to grapple with the ways AI could change the world, and with the complexities of how the technology will be regulated. 

The Alliance alone, while a noble step, isn't nearly enough to address all the important issues, he said. 

"While the AI Alliance is attempting to solve many of the foreseeable problems created by AI, we haven’t yet started to grapple with creating truly equitable access to data," Mukkamala said. "The AI Alliance isn’t the silver bullet that will be able to address all of the risks and inequity of AI."

"We need to have more alliances than just this one tackling AI governance and use, and ensure we are not concentrating power into the hands of the lucky few," he added. 

His view is one shared by much of the American public. 

Polling from the Artificial Intelligence Policy Institute has found that an overwhelming portion of the populace does not trust tech companies to self-regulate when it comes to AI. 

Mukkamala's greatest concern is a world in which the uneven adoption of AI dramatically accelerates global inequality and poverty. 

"We must take steps now to avoid a future of the digital haves and have-nots, and while the AI Alliance is a start, to truly anticipate and resolve the dangers of AI we need more oversight and global cooperation," Mukkamala said.

Whatever impact the Alliance ultimately has, the view that everyone should be part of the regulatory conversation is one that IBM executives have publicly shared. 

"You can't just have the rules being written by a handful of companies that are the most powerful in the world right now," Christina Montgomery, IBM's chief privacy officer, told TheStreet in a September interview. "We've been very concerned that that's going to influence the regulatory environment in some way that isn't going to be helpful in terms of innovation." 

Contact Ian with tips via email, ian.krietzberg@thearenagroup.net, or Signal 732-804-1223.

Related: Artificial Intelligence is a sustainability nightmare - but it doesn't have to be
