How HR can mitigate the risks and reap the rewards of AI at work

Artificial intelligence technology has its uses at every stage of the employment relationship and is increasingly relied on by time-poor human resources teams. However, if it is not used carefully and transparently, employers risk finding themselves at the sharp end of employment claims they may struggle to defend. Sanika Karandikar and Raoul Parekh offer guidance on how to avoid these pitfalls.

AI is the science of making machines smart. To avoid any confusion over the terms used when discussing this technology, it’s useful to remember that “machine learning” is not quite the same thing: it is a branch of AI in which a programme identifies patterns, learns from data and makes decisions (or reaches outputs). The computer then continues to gain knowledge to improve processes and run tasks more efficiently (think of your Amazon Alexa at home getting to know your preferences more and more accurately).
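
To make the distinction concrete, here is a minimal sketch in Python of the pattern-learning idea, using the scikit-learn library purely as an illustration (the data and figures are invented for the example and do not reflect any real tool):

    # A toy "machine learning" example: the model identifies a pattern in
    # historical data and then reaches an output for a new case.
    from sklearn.linear_model import LogisticRegression

    # Invented historical data: [years_of_experience, assessment_score]
    past_candidates = [[1, 55], [2, 60], [5, 80], [7, 85], [3, 65], [8, 90]]
    hired = [0, 0, 1, 1, 0, 1]  # 1 = hired, 0 = not hired

    model = LogisticRegression()
    model.fit(past_candidates, hired)   # the programme learns from the data

    # The trained model now makes a decision (an output) for a new candidate
    print(model.predict([[6, 82]]))     # likely [1], i.e. predicted "hire"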

Using AI in recruitment

AI tools can help HR teams during the recruitment process by shortlisting candidates and even conducting video interviews. However, these tools are only as good as their human inputs, and if the data fed into them is skewed or carries a particular bias, the tool may know no better than to penalise characteristics in candidates’ applications that the employer had never anticipated or intended.

For instance, if an employer gathers copies of previously successful CVs and uses this data to target candidates who are likely to succeed, and past hiring practice was to recruit mostly men, it is likely the AI tool will flag the CV of anyone who is not a man as undesirable. Indeed, Amazon had this exact problem and had to scrap the AI tool in question in 2018.

To mitigate the risk of discriminatory recruitment practices, employers should:

• Ask lots of questions – before employers buy any AI software, they should ask the developers what is in the data set and how the tool works, whether any measures have been taken to prevent discrimination (for example, consultation with a diversity consultant/having a diverse team of developers) and whether the developer can give any references from similar employers who have bought their tool.
• Test – test the product alongside implementing it; for example, by running a sample of the live data through a manual process to check whether the results match. This can help show, when defending allegations of unfairness or discrimination, that the results would have been the same or similar even if the AI tool had not been used (see the sketch after this list).
• Carry out a pre-emptive equality impact assessment – no legislation requires this of private sector employers, but doing so would involve identifying any negative impact the proposed AI may have on certain groups and taking action to redress it, or alternatively explaining why the employer has chosen to go ahead.
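
A minimal sketch of the kind of testing described above might look like the Python below. Everything here is hypothetical: the figures are invented, and the “four-fifths” threshold is a rule of thumb borrowed from US selection practice rather than a UK legal test, used purely to illustrate how skewed outputs can be surfaced:

    def selection_rate(decisions):
        """Share of candidates in a group the tool shortlisted (1 = shortlisted)."""
        return sum(decisions) / len(decisions)

    # Hypothetical results from running a sample of live data through the tool
    shortlisted = {
        "men":   [1, 1, 0, 1, 1, 0, 1, 1],
        "women": [0, 1, 0, 0, 1, 0, 0, 1],
    }

    rates = {group: selection_rate(d) for group, d in shortlisted.items()}
    impact_ratio = min(rates.values()) / max(rates.values())

    print(rates)         # {'men': 0.75, 'women': 0.375}
    print(impact_ratio)  # 0.5 -- well below 0.8, so the tool merits scrutiny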

Monitoring employees

With remote working now the norm, employers are increasingly using AI tools to monitor employees during the working day: to measure productivity and performance, to track use of social media or to prevent the loss of confidential information. However, employees have a reasonable expectation of privacy in the workplace, so monitoring them in this way can create both legal and employee relations risks if an employer’s behaviour is not reasonable and proportionate. To mitigate these risks, employers could:

• Tell employees what you’re doing and set expectations – make sure any data privacy and/or contractual documentation explains to employees how they will be monitored, what exactly will be monitored (for example, internet use or telephone calls) and what information the employer will hold about them, and for what purpose.
• Don’t monitor for the sake of it – employers shouldn’t collect more information than is reasonably necessary. For example, a tool that monitors an employee’s every key stroke and mouse movements or switches on webcams is very likely to be excessive.
• Carry out an impact assessment – if employers are going to monitor on a large scale, they will need to carry out and document a data protection impact assessment.
• Question the data – employers should consider individual employees’ circumstances and not take the AI tool’s recommendations at face value. For example, there may be a reason – childcare responsibilities, say – why someone has been less productive on a certain day or at certain times of day, and employers should take care not to indirectly discriminate against employees.
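
As an illustration of the data-minimisation point, the hypothetical Python sketch below keeps only a coarse daily summary per employee and deliberately discards the keystroke-level detail that is likely to be excessive (all names and figures are invented):

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class DailySummary:
        employee_id: str
        day: date
        active_hours: float  # coarse total only; no keystrokes, mouse movements or webcam data

    def summarise(employee_id, day, session_hours):
        """Reduce raw session data to the minimum the employer reasonably needs."""
        return DailySummary(employee_id, day, round(sum(session_hours), 1))

    print(summarise("emp-042", date(2024, 3, 1), [2.5, 3.0, 1.75]))
    # DailySummary(employee_id='emp-042', day=datetime.date(2024, 3, 1), active_hours=7.2)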

Redundancy

If an employer is carrying out a large-scale redundancy process in which large numbers of employees need to be interviewed, there are AI tools that can conduct those interviews to assess employees as part of the redundancy selection process.

As when using AI for recruitment, a tool can be programmed to pick up on, and rate more highly, certain words. Issues arise if employers let the AI tool make decisions without a human element or an understanding of how the tool has come to its decisions, and then have to defend unfair dismissal claims, where the burden is on the employer to show that it dismissed for a fair reason, followed a fair process and that dismissal was a reasonable response in the circumstances.

Indeed, Estée Lauder had this problem recently when two make-up artists brought a claim and the employer was not able to adequately explain how the AI tool had come to the decision to make them redundant.

To best protect themselves, employers could:

• Keep a human element – managers need to be able to explain why a particular employee from a pool is being made redundant and should be involved at all stages of the process. Think of the AI tool’s outputs as a suggestion rather than a definitive answer.
• Keep a paper trail – this is key! Document the results of any testing you carry out and keep a record of any information you can glean from the AI developer about how the tool comes to decisions and the data set that was fed into it (a minimal sketch follows this list).
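
One way such a paper trail might look, with entirely invented field names, is a record of the tool’s output alongside the human decision and the manager’s rationale:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class SelectionRecord:
        employee_id: str
        ai_score: float         # the tool's output, treated as a suggestion only
        human_decision: str     # e.g. "retain" or "provisionally selected"
        manager_rationale: str  # the explanation a tribunal will expect to see
        logged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    audit_log = []
    audit_log.append(SelectionRecord(
        employee_id="emp-017",
        ai_score=0.42,
        human_decision="retain",
        manager_rationale="Tool undervalued recent client work; score overridden.",
    ))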
