The Conversation
Sylvia Lu, Faculty Fellow and Visiting Assistant Professor of Law, University of Michigan

AI harm is often behind the scenes and builds over time – a legal scholar explains how the law can adapt to respond

One AI harm is pervasive facial recognition, which erodes privacy. DSCimage/iStock via Getty Images

As you scroll through your social media feed or let your favorite music app curate the perfect playlist, it may feel like artificial intelligence is improving your life – learning your preferences and serving your needs. But lurking behind this convenient facade is a growing concern: algorithmic harms.

These harms aren’t obvious or immediate. They’re insidious, building over time as AI systems quietly make decisions about your life without you even knowing it. The hidden power of these systems is becoming a significant threat to privacy, equality, autonomy and safety.

AI systems are embedded in nearly every facet of modern life. They suggest what shows and movies you should watch, help employers decide whom they want to hire, and even influence judges’ sentencing decisions. But what happens when these systems, often seen as neutral, begin making decisions that put certain groups at a disadvantage or, worse, cause real-world harm?

The often-overlooked consequences of AI applications call for regulatory frameworks that can keep pace with this rapidly evolving technology. I study the intersection of law and technology, and I’ve outlined a legal framework to do just that.

Slow burns

One of the most striking aspects of algorithmic harms is that their cumulative impact often flies under the radar. These systems typically don’t directly assault your privacy or autonomy in ways you can easily perceive. They gather vast amounts of data about people — often without their knowledge — and use this data to shape decisions affecting people’s lives.

Sometimes, this results in minor inconveniences, like an advertisement that follows you across websites. But when these repeated harms go unaddressed as AI systems operate at scale, they can accumulate into significant damage across diverse groups of people.

Consider the example of social media algorithms. They are ostensibly designed to promote beneficial social interactions. Yet behind that seemingly benign facade, they silently track users’ clicks and compile profiles of their political beliefs, professional affiliations and personal lives. The data collected is used in systems that make consequential decisions — whether you are identified as a jaywalking pedestrian, considered for a job or flagged as at risk of suicide.

Worse, their addictive design traps teenagers in cycles of overuse, leading to escalating mental health crises, including anxiety, depression and self-harm. By the time you grasp the full scope, it’s too late — your privacy has been breached, your opportunities shaped by biased algorithms, and the safety of the most vulnerable undermined, all without your knowledge.

This is what I call “intangible, cumulative harm”: AI systems operate in the background, but their impacts can be devastating and invisible.

Researcher Kumba Sennaar describes how AI systems perpetuate and exacerbate biases.

Why regulation lags behind

Despite these mounting dangers, legal frameworks worldwide have struggled to keep up. In the United States, a regulatory approach emphasizing innovation has made it difficult to impose strict standards on how these systems are used across multiple contexts.

Courts and regulatory bodies are accustomed to dealing with concrete harms, like physical injury or economic loss, but algorithmic harms are often more subtle, cumulative and hard to detect. Existing regulations often fail to address the broader effects that AI systems can have over time.

Social media algorithms, for example, can gradually erode users’ mental health, but because these harms build slowly, they are difficult to address within the confines of current legal standards.

Four types of algorithmic harm

Drawing on existing AI and data governance scholarship, I have categorized algorithmic harms into four legal areas: privacy, autonomy, equality and safety. Each of these domains is vulnerable to the subtle yet often unchecked power of AI systems.

The first type of harm is eroding privacy. AI systems collect, process and transfer vast amounts of data, eroding people’s privacy in ways that may not be immediately obvious but have long-term implications. For example, facial recognition systems can track people in public and private spaces, effectively turning mass surveillance into the norm.

The second type of harm is undermining autonomy. AI systems often subtly undermine your ability to make autonomous decisions by manipulating the information you see. Social media platforms use algorithms to show users content that maximizes a third party’s interests, subtly shaping opinions, decisions and behaviors across millions of users.

The third type of harm is diminishing equality. AI systems, while designed to be neutral, often inherit the biases present in their data and algorithms. This reinforces societal inequalities over time. In one infamous case, a facial recognition system used by retail stores to detect shoplifters disproportionately misidentified women and people of color.

The fourth type of harm is impairing safety. AI systems make decisions that affect people’s safety and well-being. When these systems fail, the consequences can be catastrophic. But even when they function as designed, they can still cause harm, such as social media algorithms’ cumulative effects on teenagers’ mental health.

Because these cumulative harms often arise from AI applications protected by trade secret laws, victims have no way to detect or trace the harm. This creates a gap in accountability. When a biased hiring decision or a wrongful arrest is made due to an algorithm, how does the victim know? Without transparency, it’s nearly impossible to hold companies accountable.

This UNESCO video features researchers from around the world explaining the issues around the ethics and regulation of AI.

Closing the accountability gap

Categorizing the types of algorithmic harms delineates the legal boundaries of AI regulation and presents possible legal reforms to bridge this accountability gap. Changes I believe would help include mandatory algorithmic impact assessments that require companies to document and address the immediate and cumulative harms of an AI application to privacy, autonomy, equality and safety – before and after it’s deployed. For instance, firms using facial recognition systems would need to evaluate these systems’ impacts throughout their life cycle.

Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt in. For example, firms using facial recognition systems could be required to obtain opt-in consent before processing people’s data and to let users opt out at any time.

Lastly, I suggest requiring companies to disclose the use of AI technology and its anticipated harms. For example, a company deploying facial recognition systems would notify customers of that use and of the anticipated harms across the domains outlined in the typology.

As AI systems become more widely used in critical societal functions – from health care to education and employment – the need to regulate harms they can cause becomes more pressing. Without intervention, these invisible harms are likely to continue to accumulate, affecting nearly everyone and disproportionately hitting the most vulnerable.

With generative AI multiplying and exacerbating AI harms, I believe it’s important for policymakers, courts, technology developers and civil society to recognize the legal harms of AI. This requires not just better laws, but a more thoughtful approach to cutting-edge AI technology – one that prioritizes civil rights and justice in the face of rapid technological advancement.

The future of AI holds incredible promise, but without the right legal frameworks, it could also entrench inequality and erode the very civil rights it is, in many cases, designed to enhance.


Sylvia Lu does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

This article was originally published on The Conversation. Read the original article.
