Fortune
Amber Burton, Paolo Confino

How employers can avoid A.I. lawsuits

[Image: Robotic hand pressing a keyboard on a laptop, 3D rendering (Credit: Getty Images)]

Good morning!

As cities roll out new laws governing the use of A.I. in the workplace, more employers are taking measures to protect themselves from potential lawsuits.

The risks are high, with so much still unknown about the rapidly evolving technology and its potential ramifications. A New York City law, which goes into effect next month, aims to protect job candidates from potential bias in recruiting processes that use A.I. And the Equal Employment Opportunity Commission recently reminded employers that they're responsible for any discrimination in hiring, firing, or promotions that results from A.I., even if it's the fault of a third-party vendor.

Michael Schmidt, a labor and employment attorney with Cozen O'Connor, says employers would be wise to establish robust policies and internal checks to ensure protection from legal risks, likening it to the early days of moderating social media use in the workplace. 

“You are applying a new platform and technology to traditional employment law issues,” says Schmidt. “We need to figure out how to apply the same risks and rewards when it comes to third-party intellectual property, harassment, discrimination, and accurate and good content.” 

He suggests that HR leaders begin by auditing the technology vendors they already employ in hiring.

“Very often, organizations don't even know that third parties that they have assisting them with the recruitment process are using A.I. as part of that recruitment process,” says Schmidt. “The burden is on the employer to make sure and not just assume that the third party is compliant, but to really look into what the third party is doing and how they're doing it.”

HR leaders should ask vendors which A.I. tools are used in the recruitment process and how data is gathered and used, he tells Fortune. Then, perhaps most importantly, evaluate how your operations comply with state and federal regulations.

Schmidt recommends breaking the process down into two steps. First, assess the nature of your workforce and the workplace, and ask what your company is trying to achieve with its A.I. policy. Is it security, antidiscrimination, or protection from plagiarism? Is it to evaluate whether the use of A.I. is warranted at all?

Once an organization decides whether to embrace or limit the use of A.I., and in what ways, Schmidt says leaders should map out the particulars of who will be using it and to what extent. For example, he says employers might allow generative A.I. to assist with employees' rote decision-making but not to replace their judgment altogether.

“The takeaway of all of this is that A.I. really has many advantages for employers when used correctly and appropriately,” says Schmidt. “It's just that employers need to stay on top of the regulatory and technology landscapes to make sure that they are using it appropriately.”

Amber Burton
amber.burton@fortune.com
@amberbburton
