Fortune
Darío Gil

The future of AI is too important to be decided behind closed doors. There is a better way

Somewhere during the 96 hours between the surprise firing of OpenAI CEO Sam Altman and his rehiring, observers and social media users started speculating about an eventual movie and which famous actor would play which much less famous AI researcher.

Of course, the saga did seem made for the screen with its palace intrigue and whiplash swings of power. But for many of us working to advance the frontiers of computing, this was a drama foretold–a fitting encapsulation of the contradictions present in the grand utopian and dystopian visions that have dominated the debate in the year since artificial intelligence entered the public consciousness.

When attention to any nascent technology is focused entirely on a few personalities and institutions, things are almost guaranteed to go sideways. We’re not in a movie. AI is too important a technology to be shaped in relative secrecy by a small cast of characters. It’s a technology that’s going to transform everybody’s lives–so we all have a stake in its development.

There is a better way forward.

It begins by acknowledging the diverse scientific and technical community that has contributed for decades to the fundamental advances that have made today’s AI moment possible. This community includes the science agencies that support curiosity-driven exploratory work, the universities that educate generation after generation of computer scientists and AI experts, the industrial research laboratories that create breakthrough demonstrations of AI systems, and the wide array of enterprises, from non-profits to start-ups to established multinationals, that commercialize and scale AI products and services.

For the past six months, we have been engaging with the AI community to launch a more open, transparent model for innovation. We’ve pulled together more than 40 organizations across industry, startups, academia, research, and government: universities from Berkeley to ETH and the University of Tokyo, startups such as Hugging Face and Anyscale, established companies such as Meta, AMD, Intel, Dell, Oracle, and Sony, and scientific collaborators such as CERN, NASA, NSF, and the Cleveland Clinic. These organizations, individually and together, are innovating across all aspects of AI education, research, technology, applications, and governance. It is a truly diverse, international coalition of institutions designed to better reflect the needs and complexity of our societies.

Today, we’re giving this global network of innovators a name–the AI Alliance.

The fact is, every single one of us has a vested interest in the future of artificial intelligence. Whether we seek it out or not, AI is destined to play an increasingly prominent role in each of our lives in the years ahead, redefining the ways we work, play, learn, communicate, and more. And since we all have a stake in how this powerful technology grows, it is essential that AI’s evolution is guided by shared principles, not personalities.

Open science and open innovation are the core principles that have brought the AI Alliance into existence. Unlike the closed-door chaos we’re seeing today, the AI Alliance will harness the energy and creativity of a much broader set of institutions, drawing on the diverse skills and perspectives of all those who wish to take part in building the future. The alliance will bring together a critical mass of computing, data, tools, and talent to accelerate open innovation in AI. It will build and support open technologies across software, models, and tools; enable students, developers, and scientists to understand, experiment with, and adopt those technologies; and advocate for the value of open innovation with organizational and societal leaders, policy and regulatory bodies, and the public.

This open ecosystem is a catalyst for driving an AI agenda underpinned by some of society’s most fundamentally important principles: scientific rigor, trust, ethics, resiliency, and responsibility. As AI advances, so must our ability to improve governance and safety–and this can only be done through the collective power of an open, healthy AI community that promotes the exchange of ideas and collaboration on decisions and outputs.

And we are not alone in doing this. Other initiatives, such as the World Economic Forum’s AI Governance Alliance and the EU’s European AI Alliance, recognize the importance of an open ecosystem for responsible innovation.

The future of AI is approaching a fork in the road. One path is dangerously close to creating consolidated control of AI, driven by a small number of companies that have a closed, proprietary vision for the AI industry. We’re already seeing glimpses of the chaos that lies down that path, and it’s not hard to imagine the stifled innovation, hoarded benefits, and questionable oversight waiting just around the bend.

Down the other path lies a broad, open road: a highway that belongs to the many, not the few, and is protected by the guardrails that we create together. Through the AI Alliance, we’re setting out to shape the future of AI–one in which a diverse and growing set of institutions with shared values and principles will advance safe and responsible AI rooted in open innovation.

Darío Gil is IBM’s SVP and director of research.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
