As artificial intelligence (AI) grows in capability, and in its impact on people’s lives, businesses must move to “raise” their AIs to act as responsible, productive members of society. A code of ethics and standards for AI will be important, but financial services (FS) organizations will also need to successfully integrate human and machine intelligence, so that the two coexist in a two-way learning relationship.

AI-based decisions and tools are starting to have a profound impact on people’s lives and insurers’ businesses. To learn, AI must consume large amounts of data. But what if the data fed to an AI solution were biased? Can human-machine collaboration mitigate the potential risks?

Citizen AI: Raising AI to Benefit Business and Society is one of five trends highlighted in Accenture’s Technology Vision for Insurance 2018. For insurers it has immediate relevance.

Responsible AI

As AI extends its reach throughout society, any insurer looking to capitalize on its potential must also acknowledge its impact. Much more than a technological tool, AI has grown to the point where it often exerts significant influence on the people who use it, both within and outside the company.

For example, in the UK, an investigation by the BBC’s You and Yours program compared car insurance quotes from the five leading price comparison websites, first using the name of a white British BBC producer, and then the common British name ‘Muhammad Khan’. All five sites returned higher quotes for Muhammad Khan.

There are also other types of bias that need to be addressed. Is it fair for an algorithm to decide that someone with a low credit score will pay a higher premium, not because he is a less safe driver, but because he is more likely to file a claim for a small accident than a wealthier driver who can pay out of pocket?

These questions loom large as a combination of big data and smarter AI enables insurers to better calculate risk on an individualized basis.
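One practical response is to probe pricing models for exactly the kind of disparity the BBC found: hold every rating factor constant, vary only a proxy attribute such as the applicant’s name, and compare the resulting quotes. Below is a minimal Python sketch of such a counterfactual probe. Note that quote_premium is a hypothetical wrapper around an insurer’s pricing model, and the one percent tolerance is purely illustrative.

    # Counterfactual probe for name-based pricing disparity.
    # `quote_premium` is a hypothetical hook into the insurer's
    # pricing model; this is illustrative, not the BBC's method.
    from copy import deepcopy

    def probe_name_disparity(quote_premium, applicant, names, tolerance=0.01):
        """Quote the same applicant profile under different names and
        flag any quote deviating from the baseline by more than
        `tolerance` (as a fraction of the baseline premium)."""
        baseline = None
        findings = []
        for name in names:
            profile = deepcopy(applicant)
            profile["name"] = name          # the only field that changes
            premium = quote_premium(profile)
            if baseline is None:
                baseline = premium
            deviation = (premium - baseline) / baseline
            findings.append((name, premium, deviation))
            if abs(deviation) > tolerance:
                print(f"WARNING: {name!r} quoted {deviation:+.1%} vs. baseline")
        return findings

    # Example usage (with a stub pricing function):
    # probe_name_disparity(quote_premium,
    #                      applicant={"age": 35, "postcode": "M1 1AA",
    #                                 "vehicle": "Ford Focus", "name": ""},
    #                      names=["John Smith", "Muhammad Khan"])

The point of the probe is not the code itself but the discipline: if changing nothing except a name moves the premium, the model is pricing on a proxy attribute rather than on risk.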

Even AI solutions that drive efficiency need careful consideration.

An Accenture client in life and health insurance expects to reduce handling times for disability and illness claims from around 100 days to less than five seconds using machine learning, text analytics and optical character recognition. This powerful technology cannot be regarded as a simple software tool if it is to be trusted to make automated decisions that affect the lives of customers, employees and others in the insurer’s ecosystem.
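The client’s actual system is not public, but the general shape of such a pipeline can be sketched in a few lines of Python: OCR the scanned claim document (here with the open-source pytesseract library), classify the extracted text, and route anything the model is unsure about to a human. All names, labels and thresholds below are illustrative assumptions.

    import pytesseract                      # open-source OCR engine wrapper
    from PIL import Image
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Illustrative training data; a real system would use thousands
    # of labelled historical claims.
    claim_texts = ["minor windscreen chip, no injuries, photos attached",
                   "total loss after collision, hospitalization reported"]
    claim_labels = ["auto_approve", "manual_review"]

    # 1. Train a routing model on historical, human-labelled claims.
    router = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    router.fit(claim_texts, claim_labels)

    # 2. At claim time: OCR the scanned form, then classify the text.
    def triage(scan_path, confidence_floor=0.9):
        text = pytesseract.image_to_string(Image.open(scan_path))
        proba = router.predict_proba([text])[0]
        # Low-confidence cases go to a human reviewer: automated
        # decisions that affect customers need an escape hatch.
        if proba.max() < confidence_floor:
            return "manual_review", text
        return router.classes_[proba.argmax()], text

    # Example: decision, raw_text = triage("claim_form_scan.png")

The confidence floor is the design choice that matters here: it is what keeps an efficiency tool from quietly becoming an unaccountable decision-maker.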

A new imperative is becoming clear for FS organizations: they need to “raise” their AI systems so that they reflect business and societal norms of responsibility, fairness and transparency.

To transition to and confidently apply AI, FS organizations need to ready themselves intellectually, technologically, politically, ethically and socially.

Preparing for an AI future

As the division of tasks between man and machine changes, FS organizations need to reevaluate the type of knowledge and skills imparted to the workforce. Currently, technological education goes in one direction: people learn how to use machines. Increasingly, this will change as machines learn from humans, and humans learn from machines.

While technical skills will be required to design and implement AI systems, interpersonal skills, creativity and emotional intelligence will become increasingly important.


Teaching software to learn

AI is not a system that is programmed; it learns.

Regardless of the role AI ends up playing in society, it represents the company in every action that it takes. Much like a human decision-maker, AI needs to “act” responsibly, explain its decisions, and work well with others. It’s up to insurers to “teach” their AI to do these things.

Raising AI requires addressing many of the same challenges faced in human education and growth: fostering an understanding of right and wrong, and what it means to behave responsibly; imparting knowledge without bias; and building self-reliance while emphasizing the importance of collaborating and communicating with others.

Accenture’s recent global survey of more than 1,000 companies, conducted to understand how they use or plan to use AI, identified three new types of jobs:

  • Explainers will help businesses understand and interpret the output of AI algorithms (one technique for doing so is sketched below)
  • Sustainers will optimize the effectiveness of AI systems
  • Trainers will feed AI systems’ capacity for language, empathy and judgment

Accenture’s Paul Daugherty further describes these roles in a short video.
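To make the explainer role concrete, here is a minimal sketch of one technique such a role might lean on: permutation importance, which ranks a model’s input features by how much its accuracy drops when each one is shuffled. The model and dataset are stand-ins from scikit-learn, not an insurance system.

    # One tool an "explainer" might reach for: permutation importance.
    # Placeholder model and dataset; illustrative only.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature 10 times on held-out data and measure the
    # drop in score; bigger drops mean the model leans on that feature.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)

    # Report the five features the model depends on most.
    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[i]:<25} {result.importances_mean[i]:.3f}")

An explainer’s job starts where this output ends: turning a ranked list of features into an account of a decision that a customer, an underwriter or a regulator can accept.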

Businesses that hesitate to treat their AIs as something that must be “raised” to maturity will be left struggling to catch up with new regulations and public demands, or, worse, may provoke strict regulatory controls on the entire AI industry.

Positioning for the full value of AI

To position the organization for AI, FS leaders must ask some tough questions about organizational readiness.

  • What are the potential legal, social and ethical impacts of AI?
  • Does the organization have a code of ethics that will govern its AI ecosystem?
  • What regulation is called for to responsibly make use of AI?
  • What new jobs will AI create in the organization? Does the business have a clear view of the knowledge, skills and mindsets required to work with intelligent machines in a way that creates real value? 

Join me next week as I explore another trend highlighted in Accenture’s Technology Vision for Insurance 2018, namely Data Veracity: The Importance of Trust, which addresses a core organizational risk for FS organizations.

In the meantime, for more on raising AI, take a look at Accenture’s Technology Vision for Insurance 2018.
