December 8, 2020
World first for ethical AI and workplace equity
Companies, organizations, and governments around the world face an immense challenge: rebuilding their workforces at scale while staying confident that the technology they use to make important decisions is fair, accurate, and equitable.
Reejig, a leading workforce intelligence platform, partnered with Distinguished Professor Fang Chen, Executive Director of Data Science at the University of Technology Sydney, to deliver the 'non-biased talent shortlisting algorithm validation' project, a pioneering independent validation of ethical AI.
Over two years, the research team led by Professor Chen developed, tested and iterated the ground-breaking assessment process before its use by industry partners, confirming that the AI outputs are fit for purpose and deliver actionable results.
"When you are talking about AI and workforce or HR data you are dealing with sensitive information about real people, so building trust into that process is critical. Combined, AI and workforce data have the power to transform the way we think, engage and work. AI for good needs to be the standard. But there has been no way to properly assess that until this project," Professor Chen said.
Reejig CEO and co-founder Siobhan Savage said the benefits that data and AI bring to the professional workforce are phenomenal, but AI is not immune to bias in its data or its algorithms. Until now, that decision making has been hidden in a black box, with no clear, defensible, independent and objective validation demonstrating ethical AI.
"Frameworks provide guidance; however, we believe that's like marking your own homework. Boards, organizations, and decision makers are exposed to real risk that they may unwittingly be causing harm or bias. Given what's at stake, we were astounded that there was no independent assurance that the AI an organization adopts is ethical and unbiased," Ms Savage explained.
Mark Caine, Artificial Intelligence and Machine Learning Lead at the World Economic Forum, said it is imperative to minimize the risk of AI to humanity, otherwise the public will lose trust in AI and its capability to do good. While there are over 200 AI ethics frameworks and guidelines globally, few have been operationalized, and this project is a milestone in bringing independently audited certification to an innovative AI product.
"A key barrier to the adoption of AI, and thus its potential to do good, has been lifted. This is significant for organizations that want to do the right thing and minimize risk to their customers, their stakeholders, and their reputation," Mr Caine said.