
Is your AI truly unbiased?

2023.02.19.


Mihaly Nagy, Partner, Head of Content, The HR Congress

WHY SHOULD YOU CARE?

Artificial Intelligence tools have the potential to help people professionals boost productivity, efficiency and reliability. However, AI in HR is not as simple as plug and play: there are serious risks and drawbacks that companies need to consider if they’re going to incorporate AI into their talent management practices. In particular, they need to address bias in AI.

Artificial intelligence (AI) is becoming increasingly common in every aspect of business, and HR is no different. AI in HR has the potential to streamline processes, reduce bias, and improve the overall efficiency and effectiveness of HR practices. As Jessica Kim-Schmid and Roshni Raveendhran observe in a recent HBR article: “An emerging wave of AI tools for talent management have the potential to help organizations find better job candidates faster, provide more impactful employee development, and promote retention through more effective employee engagement. But while AI might enable leaders to address talent management pain points by making processes faster and more efficient, AI implementation comes with a unique set of challenges that warrant significant attention.”

Here is a non-exhaustive list of ways talent and HR professionals can use AI:

  1. Applicant screening: AI can help to screen resumes and applications to identify top candidates based on specific criteria, such as skills and experience, which can save time and improve the quality of hires.
  2. Video interviewing: AI-powered video interviewing software can analyze facial expressions, body language, and voice inflections to provide insights into candidates’ personalities and communication styles, allowing HR to make more informed hiring decisions.
  3. Performance management: AI can be used to provide managers with insights into employee performance and identify areas where employees may need additional support. This can help managers provide more targeted coaching and support to their employees, which can increase productivity.
  4. Predictive analytics: AI can help to identify patterns and trends in employee behavior, such as turnover and absenteeism, allowing HR to take proactive measures to address these issues before they become problematic.
  5. Chatbots: AI-powered chatbots can assist with answering employee questions and resolving HR-related issues, which can reduce the workload of HR staff and improve the overall employee experience.
  6. Automating routine tasks: AI can be used to automate routine HR tasks, such as scheduling interviews or updating employee records. This can save HR staff time and allow them to focus on more strategic tasks.
  7. Personalized coaching: AI can be used to provide employees with personalized coaching and feedback based on their performance. This can help employees improve their skills and increase their productivity.
  8. Intelligent scheduling: AI can be used to optimize employee schedules based on their availability and workload. This can help ensure that employees are not overworked or overwhelmed, which can lead to decreased productivity.
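To make the first item above concrete, here is a minimal, purely illustrative sketch of criteria-based applicant screening. The skills, cutoff, and candidates are hypothetical, not taken from any real HR system, and production tools are far more sophisticated:

```python
# Hypothetical sketch of criteria-based applicant screening.
# All criteria and candidate data below are invented for illustration.

def score_candidate(candidate, required_skills, min_years=2):
    """Score a candidate by skill overlap; zero out below the experience cutoff."""
    skills = set(candidate["skills"])
    skill_match = len(skills & required_skills) / len(required_skills)
    meets_experience = candidate["years_experience"] >= min_years
    return skill_match if meets_experience else 0.0

required = {"python", "sql", "reporting"}
candidates = [
    {"name": "A", "skills": ["python", "sql"], "years_experience": 3},
    {"name": "B", "skills": ["reporting"], "years_experience": 1},
]

# Rank candidates by score, highest first.
ranked = sorted(candidates, key=lambda c: score_candidate(c, required), reverse=True)
for c in ranked:
    print(c["name"], round(score_candidate(c, required), 2))
```

Note that the hard experience cutoff in this toy example is exactly the kind of rigid criterion the article warns about below: it silently excludes otherwise qualified candidates.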

It’s important to note that while AI has the potential to improve HR practices, it is not without its limitations and potential drawbacks. For example, while AI can reduce bias, it may also perpetuate bias if it is not designed and trained appropriately.

Bias in AI in HR – or the lack of it – determines the quality of the AI, so it is worth exploring how AI bias can manifest in HR applications:

Data bias:  

Data bias is a significant concern in the application of artificial intelligence in HR. If the data used to train an AI algorithm is biased, then the algorithm itself will be biased, which can result in unfair or inaccurate decisions. Here are some ways data bias can occur in AI in HR: 

  • Historical bias: Data bias can arise from historical discrimination, such as past hiring decisions that have excluded certain groups. If an AI system is trained on such data, it may inadvertently perpetuate those biases by identifying certain groups as less qualified or more prone to certain behaviors. 
  • Incomplete or irrelevant data: If an AI system is trained on incomplete or irrelevant data, it may make biased decisions. For example, if an AI system is trained on resumes that have certain keywords or phrases, it may exclude qualified candidates who do not fit those specific criteria. Another example comes from performance reviews: if the data used to train the system is based on past performance reviews that were biased against certain groups, the AI system may replicate those biases too.
  • Sampling bias: Sampling bias can occur when the data used to train the AI system is not representative of the population it is intended to serve. For example, if an AI system is trained on data from a specific region or demographic, it may not accurately reflect the experiences and perspectives of a broader population. 
  • Lack of diversity in training data: If the AI system is trained on data that is not diverse, it may not be able to accurately assess candidates from diverse backgrounds. 

To address data bias in AI in HR, it’s important to start with identifying potential sources of bias in the data. This involves conducting a thorough audit of the data used to train the AI system to identify any gaps, biases, or inconsistencies.  
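As a toy illustration of such an audit, one simple check is to compare how each group is represented in the training data against a reference population. The group labels and shares below are invented for illustration; a real audit would cover many attributes and use properly sourced benchmarks:

```python
# Hypothetical sketch of one training-data audit step: representation gaps.
# Group labels and reference shares are invented for illustration.
from collections import Counter

def representation_gaps(records, group_key, reference_shares):
    """Return each group's share in the data minus its reference share."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: counts.get(g, 0) / total - ref
            for g, ref in reference_shares.items()}

# 20 records from group F, 80 from group M, against a 50/50 reference.
training_data = [{"gender": "F"}] * 20 + [{"gender": "M"}] * 80
gaps = representation_gaps(training_data, "gender", {"F": 0.5, "M": 0.5})
print(gaps)  # F is under-represented by roughly 30 percentage points
```

A large negative gap for a group is a signal that models trained on this data may perform worse for, or systematically disadvantage, that group.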

Let’s not forget, however: garbage in, garbage out. It’s also important to ensure that the data used is diverse and representative of the population it is intended to serve.

Ultimately, it’s crucial for organizations to be vigilant in addressing data bias in AI in HR to ensure that their systems are fair and equitable for all employees.

Algorithmic bias:  

Even if the data used to train an AI algorithm is unbiased, the algorithm itself can introduce bias if it is not designed or implemented correctly.  

Here are some ways algorithmic bias can occur in AI in HR: 

  • Over-reliance on specific criteria: If an AI algorithm is designed to prioritize specific criteria, such as education or work experience, it may inadvertently exclude qualified candidates who do not meet those specific criteria. This can perpetuate systemic biases and limit diversity in the workforce.  
  • Lack of transparency: If an AI algorithm is not transparent, it can be difficult to identify potential biases or explain the decisions it makes. This can make it difficult to understand how the algorithm arrived at its decisions, and may lead to mistrust or skepticism among employees. 
  • Feedback loops: If an AI algorithm relies on feedback loops, it can lead to self-reinforcing biases. For example, if the algorithm is designed to identify “cultural fit,” it may prioritize candidates who share certain characteristics with the current workforce. This can result in a homogenous workforce and limit diversity and innovation. 

To address algorithmic bias in AI in HR, it’s important to start with a thorough review of the algorithm’s design and implementation. This includes examining the specific criteria used to evaluate candidates, as well as the weighting and scoring system used to make decisions. It’s also important to ensure that the algorithm is transparent and explainable so that employees can understand how decisions are being made. “One way to reduce algorithm aversion is to help users learn how to interact with AI tools. Talent management leaders who use AI tools for making decisions should receive statistical training, for instance, that can enable them to feel confident about interpreting algorithmic recommendations,” suggest Jessica Kim-Schmid and Roshni Raveendhran.
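One widely used sanity check on an algorithm’s outcomes is the “four-fifths rule” from US employment practice: if one group’s selection rate falls below 80% of another group’s, the result warrants review for adverse impact. A hedged sketch, with hypothetical selection counts:

```python
# Sketch of the "four-fifths rule" adverse-impact check.
# Selection counts below are hypothetical, for illustration only.

def adverse_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of the lower group selection rate to the higher one."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Suppose the algorithm shortlists 30 of 100 group-A applicants
# and 18 of 100 group-B applicants.
ratio = adverse_impact_ratio(30, 100, 18, 100)
print(round(ratio, 2))                   # 0.6
print("review" if ratio < 0.8 else "ok") # below 0.8 warrants review
```

This is only a screening heuristic, not proof of bias or of fairness, but it illustrates the kind of quantitative review of scoring outcomes the paragraph above calls for.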

Human bias:  

Even if the AI algorithm is unbiased, human bias can be introduced at various stages of the process. As one study found, while HR algorithms may remove human bias from decision-making, people often mistrust AI because they don’t understand how it works.

However, too much trust in the algorithm may also backfire. For example, a human recruiter may over-rely on the AI algorithm’s recommendation, or may unconsciously favor candidates who share their own characteristics.

Let’s see a few examples of how human bias can occur in AI in HR: 

  • Bias in data collection: Garbage in, garbage out: the data collected by the AI system may be biased due to human factors, such as managers who are biased in their evaluation of employee performance.
  • Bias in data selection: Human bias can be introduced in the selection of data used to train the AI algorithm. For example, if a human selects data that is biased against certain groups – even with the best intention, unconsciously – the algorithm may perpetuate those biases. 
  • Bias in algorithm design: Human bias can also be introduced in the design of the AI algorithm. For example, if a human designs an algorithm that prioritizes certain criteria, such as education or work experience, it may exclude qualified candidates who do not meet those specific criteria. However, as we identified in a former article by The HR Congress, employers in the tech industry are shifting their focus from academic degrees to the right skills and rethinking the way they recruit and attract talent with the required skills. 
  • Bias in the interpretation of data: Humans are responsible for interpreting the data generated by the AI system, which can introduce bias if they are not trained to recognize and correct their own biases – or if, as a recent study identified, they don’t trust AI: “personnel decisions driven by algorithms are perceived to be less fair than identical decisions featuring more human involvement.”
  • Bias in decision-making: Even if the AI algorithm is unbiased, human bias can be introduced in the decision-making process. For example, if a human relies too heavily on the algorithm’s recommendation, they may unconsciously favor candidates who share their own characteristics. 

These are just a few examples of how bias may infiltrate and jeopardize the reliability of AI in talent management. To address human bias in AI in HR, it’s important to start with the education and training of people-professionals as well as those executives who are involved in hiring and leading people.  

The fact is that AI has massive potential to transform many aspects of work – take this article, which was written in a few hours by relying on ChatGPT alongside traditional research methods. AI also has massive potential to help HR professionals transform people management, from talent acquisition to employee engagement to performance management, while drastically improving HR productivity and efficiency.

However, it is important to use AI in a responsible and ethical manner and to address any potential biases that may be introduced by AI-powered systems. Ultimately, organizations need to be vigilant and proactive in addressing AI bias to ensure that their HR practices are fair and equitable for all employees. As such, it’s crucial for HR professionals to approach AI implementation with care and consideration.  

