Artificial intelligence (AI) is a transformative force in the business environment, driving new workforce models and capabilities as intelligent machines work side-by-side with humans to deliver greater value to enterprises and their customers. Financial services (FS) is an industry that stands to benefit substantially from this transformation.

According to Accenture’s 2018 Workforce of the Future report, FS firms that commit fully to AI and human/machine collaboration could boost their revenue by an average of 32 percent: banks by 34 percent and insurers by 17 percent. The industry is also primed for AI adoption. Seventy-four percent of FS executives surveyed for the report said they expect to implement AI to a significant extent over the next three years, particularly to enhance worker capabilities.

However, making AI a constructive addition to the workplace takes more than simply coordinating efforts between people and machines. AI implementation brings with it a host of considerations, including ethical ones such as the risk of reinforcing human bias.

The danger of biased AI

At its essence, AI is a collection of advanced technologies and capabilities that enable machines to sense, comprehend, act, and learn. AI can drive greater efficiency by taking over certain human tasks, performing them more quickly and accurately, and freeing humans for more strategic, interpersonal, and judgment-based roles.

The question is: what happens when we make intelligent machines think like humans, but fail to make them think “better” than humans? After all, it is humans who provide the programming and the data sets that teach these machines how and what to think. What if that data is flawed or discriminatory in some way? One of the risks of AI is that we inadvertently program our own biases into intelligent machines, and that risk bears directly on gender parity.
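
To see how directly that can happen, consider a deliberately simplified sketch. Everything below (the features, the data, the numbers) is invented for illustration: a model trained on historical hiring decisions that already encode discrimination will faithfully learn to reproduce it.

```python
# Toy sketch: a model trained on biased historical hiring data
# learns to reproduce that bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Feature 0: a job-relevant skill score; feature 1: gender (0 = man, 1 = woman).
skill = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)

# Historical outcomes: equally skilled women were hired less often.
# The labels themselves encode human bias.
logit = 1.5 * skill - 1.0 * gender
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# The model dutifully learns the discriminatory pattern: for the
# same skill score, being a woman lowers the predicted probability.
same_skill = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_skill)[:, 1])
```

Nothing in the training process flags this as a problem: the model is rewarded for fitting the data, and the data is the bias.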

AI and gender parity

When it comes to gender parity, AI can be a double-edged sword. An article in the MIT Sloan Management Review suggests that AI could be a cure for workplace gender inequality. According to the article, qualities traditionally associated with women, such as empathy and persuasiveness, will become more valuable in the AI-enabled workforce of the future as tasks become increasingly automated. FS firms and other organisations will need a workforce with emotional intelligence, which could put women at a significant advantage in the AI economy.

On the other hand, there’s evidence that AI is already behaving in discriminatory ways due to human influence. Science Magazine points to studies showing that algorithms trained on large amounts of human-written text absorb and display bias, much as humans do. One of the ways this bias plays out in the workforce is recruiting, where women can be discouraged from applying for certain jobs or screened out entirely.
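
The mechanism those studies measure can be sketched in a few lines. The four-dimensional “embeddings” below are toy values made up for illustration, not real vectors, but they show the idea: words learned from human-written text end up geometrically closer to one gender than the other.

```python
# Hedged sketch of a word-embedding association test.
# The vectors are invented toy values, not real embeddings.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical 4-dimensional "embeddings" in which one axis happens
# to track gender, as learned from human-written text.
vec = {
    "he":       np.array([ 1.0, 0.2, 0.1, 0.0]),
    "she":      np.array([-1.0, 0.2, 0.1, 0.0]),
    "engineer": np.array([ 0.6, 0.9, 0.3, 0.1]),
    "nurse":    np.array([-0.6, 0.8, 0.4, 0.1]),
}

gender_axis = vec["he"] - vec["she"]

# A positive score means the word leans toward "he", negative toward "she".
for word in ("engineer", "nurse"):
    print(word, round(cosine(vec[word], gender_axis), 3))
# "engineer" leans male and "nurse" leans female: the corpus bias
# is baked into the geometry of the vector space.
```

Published studies run this kind of association test at scale, across many word pairs, on embeddings trained from large real-world text corpora.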

An article from The Conversation points out that, until recently, LinkedIn’s AI-driven career platform displayed highly paid jobs less frequently in searches made by women than in those made by men. The system learned this bias from its early usage, which was predominantly by men, and then perpetuated it over time. Likewise, a Bloomberg piece highlights how some AI tools are developing “blind spots” that disproportionately harm women and minorities.
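
A feedback loop like that is easy to reproduce in miniature. The simulation below is an illustrative assumption about how a naive popularity-driven recommender behaves, not a description of LinkedIn’s actual system: an early, mostly male user base inflates a high-paying job ad’s click history among men, and the skew then persists even after the audience balances out.

```python
# Illustrative simulation of a recommender feedback loop.
# All mechanics and numbers here are assumptions for the sketch.
import random

random.seed(0)

clicks = {"men": 1, "women": 1}  # smoothed click counts for one high-paying job ad

def show_probability(group):
    # Naive popularity-based ranking: show the ad in proportion to
    # how often that group has clicked it before.
    return clicks[group] / (clicks["men"] + clicks["women"])

# Early adopters are mostly men, so their clicks accumulate first...
for _ in range(500):
    group = "men" if random.random() < 0.8 else "women"
    if random.random() < show_probability(group):
        clicks[group] += 1

# ...and even once the audience is balanced, the learned skew persists.
for _ in range(500):
    group = "men" if random.random() < 0.5 else "women"
    if random.random() < show_probability(group):
        clicks[group] += 1

print(show_probability("men"), show_probability("women"))
# The ad is still shown far more often to men, though interest is now equal.
```

Note that the second loop never changes the underlying interest of either group, yet the learned skew barely moves; that persistence is what makes this kind of bias so hard to detect after the fact.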

In this series, I’ll explore the ethical side of AI as it relates to gender parity and provide some guidance on making sure your firm applies AI to create an improved and gender-neutral workforce in the future. In my next post, I’ll take a deeper look at how AI can both enable and detract from gender parity in the workforce.

