by: Cathy Ricketts
Atlassian’s brutally honest 2018 report, “State of Diversity and Inclusion in U.S. Tech”, shows a need to re-engage around the topic of inclusion if we’re to make real change.
Is it time we humans stepped aside and embraced the power of Artificial Intelligence (AI) to drive inclusive recruiting?
The theory is that AI can drive inclusion faster than humans have managed to date, because the algorithms behind it can chew through information on prospective hires or employees far more quickly, and with far less bias, than we can.
AI-based technology is being developed to help hire faster and better (Mya, HireVue, MontageTalent, Applied) and to improve employee engagement (GrowBOT, ServiceNow). There are also a number of AI-based solutions that help keep non-inclusive behaviors in check, such as Textio, which spots implicit bias in job ads, and JoonKo, which calls out non-inclusive language in Slack chats.
Are we putting too much trust into this AI?
Multiple instances of product design with inbuilt bias have been reported (recall the furor over the racist soap dispenser), as well as bias that has crept into AI itself, such as when Google’s first generation of visual AI identified images of people of African descent as gorillas.
Is AI in danger of replicating the bias and non-inclusive behaviors that we see in humans?
No AI exists without human input. If the teams developing the code that sits behind the AI exclude under-represented groups (which we know is the case in most tech firms), then there is a very real risk that the code will demonstrate the same bias as the humans developing it.
This article by Tristan Green showcases the work of scientists whose research identifies 20 different cognitive biases that could potentially alter the development of machine-learning rules, along with several de-biasing techniques designers can implement to avoid building that bias in. It’s important stuff!
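To make concrete what a de-biasing technique can look like, here is a minimal sketch (not the specific methods from that research) of one widely used approach, “reweighing”: weighting historical training records so that a protected attribute becomes statistically independent of the outcome a model learns from. The column names (group, hired) and the data are purely hypothetical.

```python
import pandas as pd

def reweighing_weights(df, group_col="group", label_col="hired"):
    """Compute per-row weights that make group membership statistically
    independent of the outcome label in the training data.

    Each row's weight is P(group) * P(label) / P(group, label), so
    over-represented (group, label) combinations are down-weighted and
    under-represented ones are up-weighted.
    """
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / n

    def weight(row):
        g, y = row[group_col], row[label_col]
        return (p_group[g] * p_label[y]) / p_joint[(g, y)]

    return df.apply(weight, axis=1)

# Hypothetical historical hiring data, skewed in favour of group A:
history = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})
history["weight"] = reweighing_weights(history)
print(history)
# The weights would then be passed to the model's training routine
# (e.g. a sample_weight argument) instead of training on the raw history.
```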
Given the clear danger of building bias into AI, we urgently need to attack the diversity problem in tech firms.
This means ensuring tech firms have processes in place to remove any inbuilt bias in their code. One example of a startup doing just that is Pymetrics, a tool that predicts talent success, bias-free. The team is very open about the measures it takes to remove bias from its models, which is something we should be demanding of all the teams behind AI solutions.
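As an illustration of the kind of check we could be demanding, here is a minimal sketch of an adverse-impact audit based on the well-known “four-fifths” rule of thumb: compare the rate at which a model recommends candidates from each group, and flag any group whose rate falls below 80% of the highest. The table, column names and figures are hypothetical, not drawn from Pymetrics or any real tool.

```python
import pandas as pd

def adverse_impact_report(df, group_col="group", outcome_col="recommended",
                          threshold=0.8):
    """Compare selection rates across groups and flag any group whose rate
    falls below `threshold` (80%) of the most-selected group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    report = pd.DataFrame({
        "selection_rate": rates,
        "impact_ratio": rates / rates.max(),
    })
    report["flagged"] = report["impact_ratio"] < threshold
    return report

# Hypothetical output of a screening model:
recommendations = pd.DataFrame({
    "group":       ["A"] * 10 + ["B"] * 10,
    "recommended": [1, 1, 1, 1, 1, 1, 0, 0, 0, 0,   # 60% of group A
                    1, 1, 1, 0, 0, 0, 0, 0, 0, 0],  # 30% of group B
})
print(adverse_impact_report(recommendations))
# Group B's impact ratio is 0.5, well below 0.8, so it would be flagged
# for review before the model is trusted in a hiring pipeline.
```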
So, is (a)I for inclusion?
Recruitment and retention will always be about people and relationships. But as more AI creeps into HRtech, we need, now more than ever, to develop inclusive leaders: leaders who create an environment where a diverse workforce can deliver to the best of its abilities, and who build AI we can trust.
In the words of futurist Tracy Follows: “Diversity is not just a tick box exercise or a compliance issue, but actually the key to unlocking trust in the tech of tomorrow.”
To find out more about how AI is revolutionising HR, and how PDT Global is helping technology firms create inclusive workplaces globally, please get in touch.