Artificial intelligence (AI) has entered our daily lives, from self-driving cars to email assistants and fraud detection. Across many organizations, AI applications are being piloted to help automate and streamline business processes, such as recruitment, onboarding, training and performance appraisals. However, AI’s role in workplace diversity and inclusion has garnered much controversy.

On one hand, AI has the potential to mitigate biases and provide more equitable access to the job market. For example, AI can be used in the recruitment process to screen resumes and identify the right candidates from a large pool of applicants. Proponents promise that AI speeds up the hiring process and helps reduce unconscious bias by concealing candidates’ names and bypassing gender and racial references in resumes. There are even applications that can analyze a candidate’s facial expressions during interviews to gain insights into their personality and potential fit for the company.

On the other hand, AI is at great risk of perpetuating inequalities and amplifying stereotypes. Recent headlines showcase how AI technologies have learned to give preference to male applicants and, in one case, penalized resumes with the word “women’s” in them.

To ensure AI supports rather than suppresses equity in the workplace, we need to know where these biases stem from and what we can do to prevent them. Let’s examine three common types of bias that can impact AI applications:

Bias in Data

For AI to work effectively, we need to feed the system a lot of data. For example, for a face-scanning mechanism to determine which applicants are the “best fit,” the system must be based on what has been successful in the past. In the case of executive-level professionals, this past dataset likely consists of predominantly white, middle-aged males. If we are unaware of this bias while introducing that dataset to the system, the system’s preference will skew toward this demographic group. Organizations can mitigate this issue by proactively seeking greater representation and more varied sample datasets.
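
To make this concrete, here is a minimal sketch in Python of the kind of representation check an organization could run on a historical dataset before using it to train a screening model. The file name, column names and 15% threshold are hypothetical placeholders, not a reference to any particular product:

```python
# Minimal sketch: check demographic representation in a historical dataset
# before it is used to train a screening model. The file name, column names
# and threshold are hypothetical placeholders.
import pandas as pd

REPRESENTATION_THRESHOLD = 0.15  # flag groups below 15% of the sample


def representation_report(df: pd.DataFrame, column: str) -> pd.Series:
    """Return each group's share of the dataset for a given attribute."""
    return df[column].value_counts(normalize=True)


def underrepresented_groups(df: pd.DataFrame, column: str) -> list:
    """List groups whose share falls below the chosen threshold."""
    shares = representation_report(df, column)
    return shares[shares < REPRESENTATION_THRESHOLD].index.tolist()


if __name__ == "__main__":
    candidates = pd.read_csv("historical_hires.csv")  # hypothetical dataset
    for attribute in ["gender", "age_group", "ethnicity"]:
        flagged = underrepresented_groups(candidates, attribute)
        if flagged:
            print(f"{attribute}: underrepresented groups {flagged} - "
                  "consider sourcing additional, more varied samples.")
```

A report like this will not remove bias on its own, but it surfaces the skew early enough to go looking for more representative samples before the system learns from them.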

Algorithmic Bias

Even when input data is unbiased, algorithms can produce biased outcomes. For example, in learning experience platforms (LXPs), learning content and activities are recommended to users based on attributes such as prior knowledge, test scores, demographic information, locations and learning preferences. As learners act on recommendations by selecting content, those selections feed back into the system, creating a positive feedback loop. Over time, the recommendations reinforce the preferences of a certain group of users – perhaps younger workers who use the system more frequently. As a result, more content is recommended to all learners based on one group’s preference rather than the workforce’s diverse learning needs. One potential solution to this problem is to audit system activities regularly and increase diversity in recommendations.
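
As a rough illustration of this dynamic, the toy simulation below uses made-up numbers (it is not any vendor’s actual algorithm) to show how a popularity-based recommender drifts toward the preferences of the group that clicks most often:

```python
# Toy simulation of a recommendation feedback loop, with made-up numbers.
# Two groups prefer different content; one group clicks far more often, so a
# popularity-based recommender gradually drifts toward that group's preference.
import random

random.seed(42)
weights = {"leadership_course": 1.0, "technical_course": 1.0}
clicks_per_round = {"frequent_users": 30, "occasional_users": 10}
preference = {"frequent_users": "technical_course",
              "occasional_users": "leadership_course"}


def recommend() -> str:
    """Pick content in proportion to its accumulated popularity."""
    items, popularity = zip(*weights.items())
    return random.choices(items, weights=popularity, k=1)[0]


for _ in range(50):  # 50 rounds of usage
    for group, n_clicks in clicks_per_round.items():
        for _ in range(n_clicks):
            item = recommend()
            # Users mostly accept recommendations matching their preference,
            # and every accepted click feeds back into the item's weight.
            if item == preference[group] or random.random() < 0.2:
                weights[item] += 1.0

total = sum(weights.values())
for item, weight in weights.items():
    print(f"{item}: {weight / total:.0%} of recommendation weight")
```

Printing a share report like this on a regular schedule is one simple form of audit; remedies include capping any single item’s share or deliberately blending in content favored by less active groups.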

Human Bias

Believe it or not, a lot of manual work is required to create an AI system. Human input is needed to determine where to source data, what data to select, what is deemed important to measure and how to interpret and present outcomes. These decisions are made by people – and not always by a heterogeneous group of people with diverse perspectives. To mitigate human bias, we need to think carefully about our intentions when configuring systems and be transparent about our processes. Start with clear documentation and communication to reveal any assumptions and explain the reasoning behind decision-making.
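
One lightweight way to start is a structured decision log, sketched below in Python; the fields are illustrative rather than an established standard, but they capture what was decided, why, by whom and under which assumptions:

```python
# Illustrative sketch of a decision log for recording the human choices behind
# an AI system. The fields and example entry are hypothetical, not a standard.
from dataclasses import dataclass, field, asdict
from datetime import date
import json


@dataclass
class DesignDecision:
    decision: str          # what was decided
    rationale: str         # why it was decided
    assumptions: list      # assumptions the decision rests on
    decided_by: str        # role or team, to show who was in the room
    decided_on: date = field(default_factory=date.today)


log = [
    DesignDecision(
        decision="Use the last five years of hiring records as training data",
        rationale="Recent data best reflects current role requirements",
        assumptions=["Past hiring outcomes are a fair proxy for candidate quality"],
        decided_by="Talent analytics team",
    )
]

# Publishing the log, even as plain JSON, makes assumptions visible and
# reviewable by people outside the original team.
print(json.dumps([asdict(d) for d in log], default=str, indent=2))
```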

AI applications can harm our employees and our organizations if they are not designed, engineered and audited properly. As L&D professionals, we should be deliberate and responsible in shaping how AI is used in the workplace.