Newsroom.co.nz
Nikki Mandow

Employers' new challenge: AI's use through the employment life cycle

Rosemary Wooders, a partner at Bell Gully specialising in employment law.

Bell Gully partner Rosemary Wooders takes Nikki Mandow through the ways artificial intelligence is emerging throughout the employment life cycle, in a series on the issues businesses adopting AI tools will need to think about | Content partnership

Artificial intelligence came screaming into the public consciousness last year with the launch of ChatGPT, but AI tools have existed for years. In fact, AI may have already had an impact on your job prospects.

“I am aware that a number of employers have been utilising AI tools for a few years now in the recruitment process,” says Rosemary Wooders, a partner at Bell Gully in Auckland specialising in employment law.


AI can recognise and sort through basic patterns, so if it were asked to sift through a pile of CVs and screen out candidates without a particular qualification, it could have been your CV that was counted out. Sorry!

But the newer, more sophisticated forms of AI, dubbed ‘generative’ AI for their ability to create new writing, images or programmes from prompts, could take the recruitment process into a whole other dimension.

“You could give the generative AI tool a job description, all of the CVs, and ask the generative AI tool to make the recruitment decision.”

With that power comes the risk of what Wooders calls “baked-in bias”. If the AI’s developers or users had a prejudice against candidates of a certain gender or ethnicity, that could end up reflected – and perpetuated – in the AI’s algorithms.

Wooders says with the growing capability of generative AI tools for management tasks like assigning and assessing employees’ work, the case could be made that the AI tool is, itself, an employer.

Employers should keep a workplace policy on the use of generative AI front of mind.

Any policy, Wooders recommends, should cover when AI tools are permitted, whether their use needs to be signed off by anyone else, whether the work created should carry an attribution to the use of AI, and how the company’s intellectual property or personal information from within the company will be protected.

She cautions that employees should still be using their own judgement to assess the accuracy of outputs created by AI, because generative AI can produce false information.

At the other end of the cycle, what happens if an employer is considering an AI product that, if adopted, could affect the continued employment of any of its workforce? That employer will need to pay close attention to its obligations to consult all potentially affected employees, with timing a critical factor.

“An interesting question that arises is when, specifically, an employer should consult,” says Wooders. Should it be when training takes place? Or when the employer signs on the dotted line? “I think that is going to be a fact-specific type question that will need to be considered further,” she says. 

Bell Gully is a foundation partner of newsroom.co.nz
