Tribune News Service
David DeGrazia

Commentary: At what point does AI become conscious? And what do we owe it once it gets there?

A.I. has become proficient at recognizing faces, understanding speech, reading and writing, and diagnosing diseases, and it has the potential to discover new medicines. Hanson Robotics’ Sophia, a lifelike bot, could converse naturally with a person and sprinkle the conversation with irony. ChatGPT, launched by OpenAI in November, writes papers of higher quality than most humans can write. And in February, Bing’s A.I. Chat program stunned a New York Times columnist by claiming to be in love with him and saying “I want to be alive.”

A.I. is not only becoming scarily intelligent in some respects; it is also learning voraciously and surprising us with novelty.

Now that A.I.’s novelty is starting to make people wonder whether a program might have desires and interests of its own, we face tough questions. First: At what point can we realistically judge that an artificial system has its own consciousness, or subjective experience, rather than simply sounding as if it does? Second: If and when we reach that point, should we acknowledge that the system has moral status or rights? And if so, what follows practically for our relationship to A.I.?

The question about how to judge whether an artificial system is conscious raises the classic mind/body problem: how to understand the relationship between minds, such as your consciousness, and matter, such as your brain. Consensus on this issue is lacking. But many scientists and philosophers who study consciousness today hold that minds are not mysterious supernatural phenomena beyond the reach of science — immaterial souls — but instead are causally produced by, or realized in, brains. They see consciousness as part of the natural world.

Many find it hard to imagine that artificial materials such as silicon might generate something as wondrous as consciousness. Yet, upon reflection, it is no less amazing that our fleshy brains achieve consciousness, and they do.

What evidence should convince us that an artificial system not only acts as if it is conscious — as Bing’s A.I. Chat program sometimes does — but really is conscious? Some A.I. experts maintain that if a highly advanced robot asks us whether we humans are conscious, or wonders aloud about a nonbodily afterlife, or shows a preference for future pleasures over past ones (suggesting that it actually feels pleasure), such behaviors would strongly suggest a familiarity with consciousness that only a conscious being could have.

If an A.I. achieved consciousness, should we treat it as having moral status or rights? My suggestion as a philosopher-ethicist is that, if its consciousness included any felt desires; any sensory feelings, such as pain or bodily comfort; or any emotional states, such as frustration or joy, then it would have interests of its own. In that case I would apply the same standard I apply to nonhuman animals: If they have feelings and interests of their own, then they have moral status; they matter morally for their own sake and should not be regarded as mere resources for our use. Sentient beings — conscious beings with feelings — have moral status irrespective of their species or whether they are alive. To hold otherwise is to cling to speciesism, an irrational prejudice against members of other species, or to what I call “biologism,” an irrational prejudice against nonliving entities.

Suppose some future artificial systems convince us of their sentience. Should we infer that they have not only (some) moral status but also the especially strong protections we call moral rights? Assuming these robots or other systems persuade us that they are sentient on the basis of highly intelligent, self-aware behavior, I would argue that they are so person-like as to qualify as persons, despite being artificial and nonliving. On this basis we ought to accord them basic rights that all persons should enjoy: a right to “life” (or non-destruction), a right not to be caused to suffer or otherwise be harmed needlessly, and liberty rights including, crucially, a right not to be enslaved.

Advances in A.I. are driven partly by scientific and philosophical curiosity. But to a great extent they are driven by our interests in the work A.I. can do for us: clear minefields, perform surgeries, diagnose diseases, write papers and provide companionship, among other tasks. Ironically, advances in A.I. might lead to the existence of entities that should be recognized as having a moral right to refuse to keep working, involuntarily, for our benefit.

____

ABOUT THE WRITER

David DeGrazia is the Elton Professor of Philosophy at George Washington University.
