The A.I. revolution stands to disrupt nearly every aspect of the people function and how it manages the full employee life cycle. Much is at stake.
“HR is going to be a massive use case for A.I.,” says Josh Bersin, founder and CEO of the Josh Bersin Company, a human capital advisory firm. “Think of how many people are hired, promoted, paid, or let go every day around the world—a lot. If [A.I.] makes us 10% smarter on these decisions, it will make a big difference.”
Though we sit at the dawn of a new technological age, anxieties are brewing over A.I.’s shortcomings. Improperly built and managed tools can introduce bias into hiring and performance evaluations or become easy targets for data leaks. These concerns weigh on HR executives, who must move quickly yet cautiously as companies join the A.I. gold rush.
Potential for bias
Companies are increasingly using A.I. to screen résumés and guide applicants through the hiring process. Railway company Union Pacific offers a chatbot to assist applicants through the interview stage and, if hired, help them complete health and background screenings. When done right, A.I. can enable employees to prioritize more rewarding work.
“I see A.I. as a place for us to be able to really up the level of work that people are able to do, so they can develop and go into more interesting jobs over time,” says Beth Whited, chief human resource officer at Union Pacific.
At edtech company Texthelp, the HR team uses ChatGPT to rewrite job specs and draft new performance review questions, though its chief people officer, Cathy Donnelly, acknowledges: “You’re never going to lift something ChatGPT gives you and implement it.” Still, she says, it often provides nuggets of information that she finds useful.
A.I. can also create discriminatory outcomes. Amazon was forced to scrap its experimental A.I. recruiting tool after discovering it favored male candidates’ résumés. Google has also made A.I. missteps, including labeling Black people as “gorillas” in an online photo service. In other words, hiring discrimination could become A.I.-enabled in short order.
Federal and local regulations to address potential bias in recruitment are cropping up. The Equal Employment Opportunity Commission shared guidance in May warning employers that they could be held responsible for any discrimination created by A.I. technology in hiring, promoting, or firing. And a New York City law mandating transparency when employers use A.I. tools in hiring and promotion decisions will go into effect in early July.
“HR leaders, being the responsible people in the organization that they are, need to consider bias in A.I. decisions above and beyond what the law presents,” says Tracy Avin, founder of Troop HR, an educational platform and professional community for HR leaders. “If we start putting together the guidelines by which we want to use A.I. for talent and recruiting now, it will start to set some guidelines for what is acceptable and what is not.”
Potential for data leaks
A.I. tools pose privacy risks for employers. Several companies, including Apple and Goldman Sachs, have banned employees from using ChatGPT in the workplace, primarily citing security concerns. For HR professionals, this means greater vigilance on data output to third-party tools and closer partnerships with their cybersecurity functions.
Texthelp is still in the conversation stage of A.I. implementation, but Donnelly says she’s started recruiting for compliance and security roles in different regions, “so [that] we’ve got the expertise in-house to advise on how to use our data, but use it respectfully and legally.”
Third-party risk management should be a top priority, experts say. Questions to ask vendors include: How do you account for local and federal laws and regulations? Who else can access internal data?
Though external vendors carry risks, developing an A.I. tool in-house is expensive. Nvidia’s A100 graphics processing units (GPUs), critical to developing and powering A.I. tools, cost as much as $10,000 each. Meta used 2,048 Nvidia A100 chips to train LLaMA, its open-source large language model released in February. At about 1 million GPU hours of training, CNBC estimated the cost at an eye-popping $2.4 million, which works out to roughly $2.40 per GPU hour at cloud computing rates. And that’s still significantly less than the cost of A.I. platforms that require more computing power to run. (One estimate places ChatGPT’s daily operations at $700,000.)
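For readers curious how such estimates are derived, here is a minimal back-of-envelope sketch in Python. It assumes the roughly $2.40-per-GPU-hour cloud rate implied by CNBC’s numbers; the chip price and GPU-hour figures are the approximations cited above, not vendor quotes.

```python
# Back-of-envelope A.I. training-cost math, using the article's figures:
# 2,048 Nvidia A100 GPUs, ~1 million GPU hours, ~$10,000 per chip, and the
# ~$2.40/GPU-hour cloud rate implied by CNBC's $2.4M estimate (assumption).

GPU_COUNT = 2_048                # A100 chips Meta reportedly used for LLaMA
GPU_HOURS_TOTAL = 1_000_000      # approximate total GPU hours of training
CLOUD_RATE_PER_GPU_HOUR = 2.40   # assumed hourly rental rate per GPU
CHIP_PURCHASE_PRICE = 10_000     # approximate retail price of one A100

# Renting compute by the hour vs. buying the hardware outright:
training_cost_rented = GPU_HOURS_TOTAL * CLOUD_RATE_PER_GPU_HOUR
hardware_cost_purchased = GPU_COUNT * CHIP_PURCHASE_PRICE

# Wall-clock duration if all GPUs run in parallel:
days_of_training = GPU_HOURS_TOTAL / GPU_COUNT / 24

print(f"Rented-compute training cost: ${training_cost_rented:,.0f}")    # ~$2,400,000
print(f"Outright hardware purchase:   ${hardware_cost_purchased:,.0f}") # ~$20,480,000
print(f"Approx. training duration:    {days_of_training:.0f} days")     # ~20 days
```

The gap between the two totals is the point: renting compute for a single training run can cost an order of magnitude less than buying the chips, which is one reason most HR organizations license A.I. tools rather than build them.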
“Right now, OpenAI is paying for all that, and it’s free,” says Bersin. “But if your company is licensing it, it could be expensive.”
Union Pacific opted to use offerings from SuccessFactors, SAP’s human capital management software, and to build lower-cost tools of its own, like its digital assistant. Whited says she’s confident SAP will ensure its products meet privacy and data protection standards before introducing an A.I. tool to clients. And her team is willing to pay more for a license if it offers better A.I. tools than what it could build in-house. “Maybe the tools that you use in your normal HR practice get a little bit more expensive, but [the tools from outside vendors] have more capability in terms of artificial intelligence,” she says.
Potential for relationship breakdowns
A.I. tools are designed with efficiency in mind and can spare HR professionals hours of busywork, like sifting through hundreds of employment applications or internal documents and data.
“Companies are filled with things like documents and huge databases of information. Right now, the HR department has to take all that stuff and manually turn it into either courses or Q&A databases, or they have to sit in a call center and answer people’s questions on the phone,” says Bersin. Chatbots can aid in drafting corporate communications, for instance, although Bersin cautions against using them to announce layoffs.
A.I. can also help strengthen learning and development programs, which are laborious to create and sustain. In practice, an A.I. tool could assist in building a role-specific program for new hires with daily tasks to complete, or serve as a personalized career coach. These are services that HR professionals want to offer employees but are often constrained by resources, says Julia Dhar, a managing director and partner at Boston Consulting Group. “Artificial intelligence lifts some of those constraints when it’s used well.”
But that efficiency could come at the cost of interpersonal connections. Imagine a scenario where A.I. tools fully administer the hiring and onboarding process: “If I’m a new employee and A.I. is getting my materials and my laptop, onboarding, and online tutorials, I don’t feel connected to the organization,” says Dustin York, a communications professor at Maryville University. That could spell trouble for retention. “I can easily leave and go somewhere else.”
Conversely, delegating rote tasks to A.I. presumably leaves HR professionals and managers with more time to focus on culture and employee satisfaction. “HR has become so much more people-centric than it was 20 years ago or even 10 years ago,” says Avin. “If we’re able to take care of these other features like onboarding, then it frees up the time to focus on the things that companies want to focus on.”
A path to adoption
Dhar says HR leaders should keep three key considerations in mind when adopting A.I. in the workplace:
- Getting clear about A.I.’s value and business case.
- Becoming familiar with the many A.I. tools and applications available and vetting them.
- Identifying how A.I. can complement human decision-making and safely speed up processes.
“We are at an inflection point, and it is a real opportunity to take stock and shape the strategic direction of the organization; be clear about what it looks like to use A.I. in the organization and people processes responsibly; and connect that back to the organization’s values,” says Dhar.
Companies will also have to address one of the biggest A.I. adoption hurdles: the skills and training gap. Payroll and HR services company ADP provides employees with several training opportunities, including storyboards to help visualize data and pop-ups for on-the-job learning. One successful example was ADP’s pay equity storyboard, which helped users identify potential pay gaps through data visualization.
“We try to do the education in a way that allows people to understand how machine learning or A.I. technologies are actually in their everyday life,” says the firm’s chief data officer, Jack Berkowitz. Drawing parallels to everyday applications that use A.I. can help employees become more comfortable with the technology in the workplace—for example, explaining to employees that the algorithm recommending trainings to them works similarly to Instagram’s recommendation algorithm.
Lastly, don’t ignore employee aversion to A.I. tools. Clarifying how the company uses A.I. can dispel the mystery surrounding its seemingly sudden presence at work and allay fears over data gathering.
“Thinking that the executives up there on the hill are using these A.I. tools to run things is scary [for employees],” York says. “Just pull back the curtain to explain, ‘This is why we do it’…A lot of employees would actually get behind it if you just communicated what it’s being used for.”