Artificial intelligence (AI) is gaining prominence in the workforce, as firms seek to maximise the benefits of blending human capabilities with intelligent machines. It’s an exciting time, but one that requires some caution as well.
No matter how intelligent machines are, ultimately humans control them. That fact creates some ethical considerations, one of which is AI’s potential to negatively impact gender parity by entrenching bias. This could have a considerable impact in financial services (FS), considering that 74 percent of FS executives surveyed for Accenture’s 2018 Future Workforce report indicated they expect to implement AI to a significant extent over the next three years.
In my previous post, I mentioned that AI has both positive and negative implications for gender parity in the FS industry. In this post, I’ll take a deeper look at what those are.
AI, FS, and gender parity
The FS industry has much to gain from AI. Not only are intelligent machines useful for performing some of a firm’s more routine administrative and customer service tasks; they’re also capable of highly sophisticated data analysis. This analysis provides deep insights that drive better decisions and free humans to take on more strategic roles. In many ways, machines are being trained to think like humans, but faster and often better. Yet the potential for bias is a concern.
One of the great advantages for women in the AI transformation is that AI opens the door to greater opportunities for the application of the “soft” skills women excel at—skills that have considerable value in an industry like FS. In the workforce of the future, emotional intelligence is expected to play a more significant role.
AI also plays a critical role in eliminating the bias that undermines gender parity. A Computer Business Review article points out that AI is being applied to identify bias in job descriptions. In fact, IBM’s Watson computer (known for gleaning insights from huge amounts of unstructured data) has shown how specific gender-charged words (such as “ninja” and “dominant”, which are subconsciously coded as male) influence which gender applies for which jobs.
These types of insights can help HR professionals design truly gender-neutral recruiting strategies, allowing them to reach a wider and more diverse pool of qualified candidates. Additionally, firms that are interested in increasing the number of female applicants can use similar AI-delivered analysis to create job descriptions that are more appealing to women—a key step toward achieving their gender parity goals.
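To make the idea concrete, here is a minimal sketch of the kind of word-level screening described above. This is purely illustrative and not how Watson or any commercial tool works; the word lists and the `flag_gendered_terms` helper are my own hypothetical examples.

```python
# Illustrative sketch: scan a job description for gender-coded words.
# The word lists below are short, hypothetical samples, not a validated lexicon.
MASCULINE_CODED = {"ninja", "dominant", "competitive", "rockstar"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing"}

def flag_gendered_terms(text):
    """Return the masculine- and feminine-coded words found in text."""
    words = {w.strip(".,!?;:").lower() for w in text.split()}
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

description = "We need a dominant sales ninja to join our competitive team."
print(flag_gendered_terms(description))
# {'masculine': ['competitive', 'dominant', 'ninja'], 'feminine': []}
```

A production tool would go well beyond keyword matching, but even this toy version shows how a description can be scored and rewritten before it ever reaches a candidate.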
On the other hand, the failure to recognise how bias can infiltrate AI and the data sets it relies on (due to human input) can lead to its perpetuation and the reinforcement of traditional stereotypes. The same article explains that Google’s translation software promotes bias by converting gender-neutral pronouns into “he” or “she”, depending on what those pronouns are referencing. For instance, the software uses “he” when referring to a doctor and “she” when referring to a nurse. Ironically, it was AI that revealed these instances of bias.
The problem is people
As comprehensive and powerful as intelligent machines’ cognitive abilities are, they are only as unbiased as the humans who “raise” them. People must be aware of and take steps to eliminate bias when selecting data sets and programming intelligent machines. AI can highlight the issues, but humans must resolve them.
There is a deeper issue here as well. As machines learn from the results of their actions, there is a risk that they will slowly and steadily drift away from their intended purpose. Biased programming can be prevented up front and bad data can be addressed with good data management processes, but AI drift must be continually monitored and rectified.
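The kind of continual monitoring that drift demands can be sketched very simply. The scenario below is hypothetical: it assumes a lending model whose approvals are logged per group, and uses an arbitrary tolerance; real drift monitoring would involve proper statistical testing.

```python
# Illustrative sketch: watch a model's decisions for a widening
# outcome gap between two applicant groups over time.
def approval_rate(decisions):
    """Fraction of decisions that were approvals (True)."""
    return sum(decisions) / len(decisions)

def parity_gap(group_a, group_b):
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(group_a) - approval_rate(group_b))

def check_drift(baseline_gap, current_gap, tolerance=0.05):
    """Flag when the gap has widened beyond the tolerance."""
    return (current_gap - baseline_gap) > tolerance

# At deployment, both groups were approved at similar rates...
baseline = parity_gap([True, True, False, True], [True, False, True, True])
# ...but months later, the gap between the groups has widened.
current = parity_gap([True, True, True, True], [True, False, False, False])
print(check_drift(baseline, current))  # True: time to investigate
```

The point is not the arithmetic but the discipline: a fairness baseline is captured at launch, re-measured on a schedule, and a widening gap triggers human review rather than being left for the model to "learn" around.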
In my next post, I’ll talk about steps HR professionals and their firms can take to help ensure AI is a gender-parity enabler, not a gender-parity subverter.