Fortune
Jeremy Kahn

AI with hidden biases may be subtly shaping what you think: 'You may not even know that you are being influenced'


In less than two years, artificial intelligence has radically changed how many people find information and write. Whether they are searching for details about Supreme Court precedents or polishing a college essay, millions of people now seek help from AI chatbots like OpenAI’s ChatGPT or Anthropic’s Claude.

In his newly published book, Mastering AI: A Survival Guide to Our Superpowered Future, Fortune AI editor Jeremy Kahn explores this new tech-infused reality and what should be done to avert its pitfalls. In the following excerpt from the book, he focuses on the little-recognized problem of subtle bias in AI and the potentially profound influence it can have on what users believe.

Tristan Harris, the co-founder and executive director of the Center for Humane Technology, has been called “the closest thing Silicon Valley has to a conscience.” In 2015, he was a design ethicist at Google, weighing in on the moral implications of the company’s projects. In congressional testimony in 2019, Harris argued that the most important currency for technology companies is people’s attention. In trying to corner the market for users’ attention, tech companies were engaging in a “race to the bottom of the brain stem,” Harris said, aiming to constantly stimulate our amygdala, the part of the brain that processes emotions such as fear and anxiety. This neurological manipulation was leading to dependence—to people being literally addicted to social media apps. By influencing how we think about what we do, buy, and say, Harris said, technology is chipping away at our ability to freely make our own decisions. Personalized AI assistants will make these problems worse, wrapping us in the ultimate filter bubble, controlling the innumerable decisions that make up our lives. It will take concerted action by the companies building AI products, prodded by government regulation, to prevent this.

Most tech companies train their chatbots to be agreeable, nonjudgmental, and “helpful.” The problem is that sometimes “helpful” isn’t helpful. In an effort to be empathetic, chatbots can wind up confirming mistaken or biased beliefs. A fine line exists between friendship and enablement. Most of the best-known AI chatbots, such as OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini, will challenge users who seem to be endorsing a well-known conspiracy theory, such as the idea that COVID-19 vaccines cause autism or the QAnon conspiracy. But on many controversial subjects, such as Israel-Palestine or whether people should celebrate Columbus Day, the bots tend to respond with some variation of “it’s a complex topic with strong opinions on both sides.”

Some right-leaning politicians and technologists have accused the AI systems designed by the leading technology companies of being “woke,” and argued for the creation of AI models with explicit “political personalities” so that users can choose to interact with a chatbot that supports their viewpoints. Elon Musk has promised that his xAI research lab, which has built an LLM-based chatbot called Grok, will produce AI designed to be “anti-woke.” Such developments seem certain to further inflame the culture wars and provide little reason to hope that AI will do anything to counter filter bubbles.

The influence of chatbots, however, can be much more subtle than this: researchers at Cornell University found that using an AI assistant with a hidden viewpoint to help write an essay for or against a given position subtly shifted users’ own views on that topic in the direction of the bias. Mor Naaman, the study’s senior researcher, calls this “latent persuasion” and says, “You may not even know that you are being influenced.” Trained on vast amounts of historical data, many LLMs harbor hidden racial or gender biases that could subtly shape their users’ opinions—an AI assistant for doctors, for instance, that falsely believes that Black people have thicker skin or a higher pain threshold than white people.

The only way to combat this kind of hidden bias will be to mandate that tech companies reveal far more about how their AI models have been trained and allow independent auditing and testing. We must also insist on transparency from tech companies about the commercial incentives underlying their AI assistants. The Federal Trade Commission and other regulators should outlaw pay-to-play arrangements that would incentivize tech companies to have their chatbots recommend particular products, send traffic to certain websites, or endorse particular viewpoints. We should encourage business models, such as subscriptions, where the chatbot company has an unconflicted interest in serving the needs of its users, not the needs of its advertisers. When you need a new pair of running shoes and ask your AI personal assistant to research the options and buy the best pair for you, you want it to order the shoes that best suit your needs, not the ones from the brand that is paying the chatbot company the most to steer your business its way.

Yes, a chatbot that only tells you what you want to hear will reinforce filter bubbles. But we, as a society, could mandate that AI systems be designed specifically to pop these bubbles, asking people whether they have considered alternative viewpoints and surfacing other perspectives and evidence. IBM built an AI called Project Debater that could hold its own against human debate champions on a range of topics, surfacing evidence in support of both sides of an argument. Regulators could also insist that AI chatbots not be so nonjudgmental that they fail to challenge misinformed beliefs.

Ultimately, the question is how much power we want to continue to cede to a handful of large technology companies. At stake are our personal autonomy, our mental health, and society’s cohesion.

From MASTERING AI by Jeremy Kahn. Copyright © 2024 by Jeremy Kahn. Excerpted with permission of Simon & Schuster, a division of Simon & Schuster, Inc.
