Should your company implement A.I. now, or wait and avoid its potential negative impacts?
Andrew Yang, the tech-savvy 2020 presidential candidate, wants you to wait. That’s why he signed an open letter calling for a six-month pause in A.I. development, alongside Tesla and Twitter CEO Elon Musk, former Google design ethicist Tristan Harris, author Yuval Noah Harari, and hundreds of others. He wants to use this period for “meaningful conversations” on A.I.’s impact.
But what would such a pause mean for companies besides OpenAI, Microsoft, and Google? We put the question to Yang in an interview this week. In this Q&A, Yang shared his views on the potential impact of A.I. on companies and society and how businesses and governments can and should react. (Questions and answers are lightly edited for brevity and clarity.)
Fortune: What is the most impactful A.I. technology, and what risks and promises does it carry?
Yang: It’s GPT-4, through ChatGPT. Because of its widespread availability and accessibility, a myriad of organizations are tinkering with it and using it for all kinds of projects. It is likely to cause job displacement and other unforeseen effects. It’s advancing so quickly that by the time its next iteration, GPT-5, comes around, it could render a lot of education and a non-trivial amount of employment obsolete.
What are some ways non-tech companies can use GPT-4, and what effects will that have?
They can use it to make their product interfaces smoother for their users. If customers can type a prompt that ChatGPT translates into products or services, the 2 million Americans working in call centers could be replaced. Companies can also make all their consultants A.I.-enabled, which would give them a competitive advantage [versus the ones that do not], for sure.
How can companies use A.I. responsibly?
That’s one of the major issues. Even if you are a well-resourced company deploying A.I., it’s still hard to know what’s inside the wiring. You cannot trust A.I.; you can only use it and hope for the best. You could put resources into monitoring to track what is happening, but sometimes even the developers of A.I. cannot explain how it came to a particular point. The alternative is not to use it at all.
Is there a template that companies could follow, as they deploy A.I.?
In many ways, as a company, you want to come clean and say, "We’re using A.I. for this," to let your customers and users know. That way you can ask them, "If you see something amiss, let us know, because we’re in uncharted territory." We shouldn’t assume that all of this is resolved or buttoned up.
Given the potential risks of A.I., what should our response as a society be?
Millions of people in America could lose their jobs. We should reckon with what the human effects are. If we don’t have meaningful countermeasures or ways to help people transition, then we should be diligent about safeguarding people during what could be a very rocky period.
Between business and government, who should take the lead in ensuring these safeguards?
In an ideal world, it is the responsibility of governments. Companies are not in the business of employing lots of people; they are in the business of accomplishing their goals and creating lots of shareholder value. If a company can accomplish its goals and create more value by deploying A.I., we should face facts about what these companies are almost bound to do.
Given that Yang has led a tech-enabled business for six years, disrupting the test preparation market, his views on how A.I.-enabled companies can outcompete those that aren’t are probably not far off the mark. If anything, though, his views make me more eager to try it out—cautiously.
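What might trying it out look like? Yang’s example of a prompt-driven product interface boils down to surprisingly little code. Below is a minimal sketch, assuming OpenAI’s official Python library (version 1 or later) and an OPENAI_API_KEY environment variable; the company name, prompts, and helper function are illustrative, not a production design.

```python
# Minimal sketch of the prompt-driven customer-service interface Yang
# describes: a customer's free-text request goes to GPT-4, which drafts
# a reply. Assumes the official `openai` Python library (v1+) and an
# OPENAI_API_KEY environment variable; "ACME Corp" and the prompts are
# hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer_customer(question: str) -> str:
    """Route a customer's question through GPT-4 and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            # The system message constrains the model to the support role.
            {
                "role": "system",
                "content": "You are a support agent for ACME Corp. "
                "Answer briefly, and escalate to a human if you are unsure.",
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(answer_customer("How do I reset my account password?"))
```

In keeping with Yang’s transparency advice, a real deployment would also tell customers that an A.I. drafted the reply and invite them to flag anything amiss.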
And the government intervention Yang recommends has already started happening. There’s the imminent A.I. Act in Europe, for example, which could become a global standard for regulation. Explainers from the World Economic Forum and PwC are helpful in understanding this legislation.
Earlier this year, the U.S. National Institute of Standards and Technology (NIST) launched a voluntary A.I. Risk Management Framework. NIST also offers workshops and an A.I. Resource Center. Don’t expect an A.I.-enabled chatbot when you ask NIST for help, though; a simple email address (ai-inquiries@nist.gov) must do.
If any of our readers have positive experiences in getting help from this or other organizations, please let us know. We’d love to feature some examples of impactful, responsible uses of A.I.
Peter Vanham
Executive Editor, Fortune Impact and Connect
@petervanham
peter.vanham@fortune.com