Artificial intelligence (AI) has the power to enable a gender-balanced workforce in which gender-neutral recruiting, hiring, and advancement practices are the norm, and where the qualities offered by women and men are valued equally. On the other hand, AI also has the power to perpetuate gender disparity by enabling and reinforcing human biases that have been a roadblock to gender equality for decades. The determining factor is people.

In my previous post, I shared how AI can be both a gender-parity enabler and a gender-parity subverter, depending on how humans program the intelligent machines that show such promise for an enlightened and high-functioning workforce of the future. Identifying and eliminating bias is critical to this effort. In this post, I’m going to provide some guidance for how financial services (FS) firms can guard against the risk of bias when implementing AI. At the heart of the matter is how humans teach the machines they’ll be working alongside.

Teach your machines well

Accenture’s Technology Vision for Insurance 2018 report draws a parallel between developing and applying AI and raising and educating a child. Much like humans, socially responsible and unbiased AI must be able to make choices based on “thinking” rather than simply following a predefined process, so that it can:

  • Differentiate between right and wrong
  • Behave responsibly
  • Be transparent and explain its decisions
  • Work well with others

It’s important that HR professionals, on their own and in partnership with technology providers, take active steps to identify and eliminate bias in recruiting methods and across the HR spectrum. This includes using AI itself to identify the bias in the first place. Because even the best efforts to eliminate bias can fall short, HR professionals and their firms should also add a layer of human judgment when evaluating potential areas of bias. Setting gender quotas within the teams that program and train the AI is one way to add that layer.

Making the right choice

AI “done right” has the potential to accelerate the journey to gender parity, not only by consciously excluding bias from algorithms, but also by creating greater opportunities for women as skill needs change in an AI-enabled workforce. AI done wrong could set the industry back by reinforcing stereotypes and discriminatory practices.

Here are some actions FS firms can take to drive responsible AI and help reduce AI-generated bias:

  • Strive for transparency in AI decision making, so humans understand the basis for any decisions intelligent machines make.
  • Ensure the data sets that “feed” AI are free of inherent bias.
  • Periodically review AI systems and data for signs of bias and make adjustments as necessary (a minimal audit sketch follows this list).
  • Consciously apply AI to ferret out evidence of bias and to help create a gender-neutral environment.
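
As a concrete illustration of the second and third actions, here is a minimal sketch of one way a recruiting data set could be screened for gender skew before it is used to train or retrain an AI model. It is a Python/pandas sketch under illustrative assumptions: the column names, the sample data, and the four-fifths-rule threshold are mine for the example, not any firm’s actual schema or a prescribed method.

    # Minimal sketch: auditing a hiring data set for gender skew before it
    # is used to train an AI screening model. Column names ("gender",
    # "hired") and the 0.8 threshold are illustrative assumptions.
    import pandas as pd

    def selection_rates(df: pd.DataFrame, group_col: str = "gender",
                        outcome_col: str = "hired") -> pd.Series:
        """Share of candidates in each group with a positive outcome."""
        return df.groupby(group_col)[outcome_col].mean()

    def disparate_impact_ratio(rates: pd.Series) -> float:
        """Ratio of the lowest to the highest selection rate.

        A common rule of thumb (the "four-fifths rule") treats ratios
        below 0.8 as a signal that the data may encode bias worth a
        closer human review.
        """
        return rates.min() / rates.max()

    if __name__ == "__main__":
        # Small illustrative sample; a real audit would run on historical records.
        data = pd.DataFrame({
            "gender": ["F", "F", "F", "F", "M", "M", "M", "M"],
            "hired":  [1,   0,   0,   0,   1,   1,   1,   0],
        })
        rates = selection_rates(data)
        ratio = disparate_impact_ratio(rates)
        print(rates)
        print(f"Disparate impact ratio: {ratio:.2f}")
        if ratio < 0.8:
            print("Potential bias detected - review before training or deploying.")

A check like this can be scheduled as part of the periodic review, so that a person looks at any data set or model output that falls below the threshold before it feeds hiring decisions.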

It’s a conscious choice that’s ours to make.

For more information about AI and gender parity, please see:
