
Viewpoint: Keeping the Human Element in AI Claims Management

Without a doubt, artificial intelligence (AI) is a valuable driver of innovation in today’s insurance industry. Unfortunately, the predominant cultural attitude toward AI is still rooted in the suspicion that people will lose their livelihoods as their jobs are taken over by autonomous machines. Many fear that we’ll lose some piece of our humanity as more and more important decisions are delegated to AI algorithms.

There is good reason to be cautious when it comes to AI, but that doesn’t mean we should shy away from using it. It does mean we need to approach this technology with keen attention to the human element.

Like so many other things, AI is what we make of it. It has the potential to improve our lives in very meaningful ways. If we don’t act with clear and thorough deliberation, though, it also has the potential to do harm. Insurers must take care that their AI initiatives are deployed and governed in ways that support people. That means enhancing the quality of life for the employers and front-line workers whose livelihoods we safeguard. It also means empowering the claims managers and other professionals who bring that critically important human touch to our business.

AI Misconceptions


In the popular imagination, AI is imbued with almost magical powers — the ability to digest vast amounts of data, ponder the implications of that information, and draw meaningful conclusions from it. Pop culture portrays AI as being fully autonomous, assuming decision-making powers and depersonalizing our lives and our relationships in the process.

Yet even tech giant Facebook is learning that a “set and forget” approach to AI generally doesn’t work well. As early as 2014, the company was using AI to categorize images, sift through content, and identify material to be flagged as inappropriate. The company has been under fire from multiple directions for its sometimes overzealous policing of content as well as its apparent inability to flag truly objectionable material.

In the end, Facebook’s problem is a human one. Some critics argue that the company aims to maximize user engagement at the cost of all else and that its content must be regulated more carefully. Others argue that Facebook is too quick to block content that it doesn’t like. Both concerns speak to problems that require human solutions — not technical ones.

That leads to a fundamental question: “How can AI best serve human needs?”

AI Reality

Today, AI can very effectively support human decisions, primarily by shedding light on important matters that require attention. Most current AI applications consist of machine learning algorithms designed to perform clearly identifiable tasks based on a set of predefined business rules.

Those business rules are created and shaped by human beings. They must also be monitored and governed continuously, with a sharp eye toward the ethical implications of AI applications. Attention to the human element is essential.

The good news, though, is that human beings remain in charge. The future is in our hands. AI is a very powerful tool, with the capacity to dramatically improve people’s lives. We have the capacity to continue steering our AI initiatives in a direction that aligns with our moral and ethical priorities.

AI Supports the Human Element

Truly effective AI programs aren’t about replacing people. Like any other tool, AI can enhance the effectiveness and efficiency of the people who make our industry run smoothly.

In claims management, machine learning algorithms are most frequently deployed to aid in fraud detection, but AI is increasingly being applied in more sophisticated ways as well, such as matching injured workers to the medical providers most likely to help them recover quickly and completely. AI is also helping claims managers handle heavy caseloads by monitoring each case for meaningful changes, flagging the noteworthy ones, and bringing them to the attention of a human being who can assess them further and take action.

For heavily burdened claims managers, AI serves as a kind of intelligent assistant, relieving them of many of the tedious elements of monitoring cases while ensuring that nothing slips through the cracks.

Consider the case of an injured worker whose medical case has just taken a wrong turn. The details are buried in the physician’s notes, but the claims manager responsible for the case simply hasn’t had time to read the report yet. AI can spot that problem immediately and bring it to the claims manager’s attention. The vital human element is still there, but now it can be better informed and more effective. The claims manager can act promptly, steering the case toward a better medical outcome.
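To make that workflow concrete, here is a minimal, purely illustrative sketch in Python of what such a flagging step might look like. The watchlist of terms, the `CaseAlert` structure, and the `flag_physician_note` function are all hypothetical; a production system would rely on trained language models rather than simple keyword matching.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical watchlist; a real system would use a trained model,
# not keyword matching, to interpret physician notes.
CONCERNING_TERMS = {"no improvement", "re-injury", "surgery recommended"}

@dataclass
class CaseAlert:
    claim_id: str
    reason: str

def flag_physician_note(claim_id: str, note_text: str) -> Optional[CaseAlert]:
    """Return an alert for the claims manager if the note suggests the
    case has taken a wrong turn; otherwise return None."""
    lowered = note_text.lower()
    for term in CONCERNING_TERMS:
        if term in lowered:
            return CaseAlert(claim_id=claim_id, reason=f"Note mentions '{term}'")
    return None

# Surface the problem immediately instead of waiting for a manual read.
alert = flag_physician_note("WC-1042", "Patient reports no improvement; surgery recommended.")
if alert:
    print(f"Review claim {alert.claim_id}: {alert.reason}")
```

The point of the sketch is the division of labor it embodies: the software reads everything and raises its hand, while the judgment about what to do next stays with the claims manager.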

AI can match injured workers with the providers most likely to deliver positive results. That’s not simply a matter of ranking physicians based on their overall track records, though. AI can digest the details, including the type and severity of the injury, the patient’s medical history, and other factors, to deliver a nuanced recommendation as to which providers are most likely to help the employee recover quickly and completely.
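As a rough illustration of what "digesting the details" might mean, the sketch below scores providers against a specific claim. The provider records, feature names, and weights are invented for the example; an actual matching model would be learned from historical claims outcomes, not hand-weighted.

```python
# Illustrative only: hand-weighted scoring stands in for a model trained
# on historical outcomes. All fields and weights are hypothetical.

providers = [
    {"name": "Clinic A", "overall_outcome": 0.90, "back_injury_outcome": 0.60, "handles_comorbidities": False},
    {"name": "Clinic B", "overall_outcome": 0.75, "back_injury_outcome": 0.88, "handles_comorbidities": True},
]

claim = {"injury_type": "back_injury", "severity": 0.7, "has_comorbidities": True}

def match_score(provider: dict, claim: dict) -> float:
    """Blend overall track record with injury-specific and patient-specific fit."""
    injury_fit = provider[f"{claim['injury_type']}_outcome"]
    # Weight injury-specific experience more heavily for severe injuries.
    score = (1 - claim["severity"]) * provider["overall_outcome"] + claim["severity"] * injury_fit
    if claim["has_comorbidities"] and provider["handles_comorbidities"]:
        score += 0.05  # small bonus for experience with complex medical histories
    return score

ranked = sorted(providers, key=lambda p: match_score(p, claim), reverse=True)
print([p["name"] for p in ranked])  # Clinic B ranks first for this claim
```

Note that Clinic A has the better overall record, yet Clinic B ranks first for this particular claim because of its injury-specific results and experience with complex medical histories. That is exactly the nuance the paragraph above describes.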

The data supports this approach. When AI is applied to the task of matching injured workers with the best providers for each case, the top-ranked recommendations result in an average of under 28 days of missed work, whereas the lowest-ranked quintile shows an average of just over 570 days.

Think about what that means to an injured worker and their loved ones. It’s the difference between short-term injury and chronic pain. For many, it’s the difference between dignity and depression.

This, in the end, is what AI is capable of. It’s true that we should proceed with caution. Like any other technology, AI has the capacity to deliver tremendous benefits, but it also has the potential to be misused. We all have a responsibility to see that AI is done well, that it has a humanizing influence, not a dehumanizing one. In the process, we can improve the lives of insured employees, claims managers and other stakeholders.

About Tom Warden

Tom Warden is chief insurance and science officer for CLARA Analytics, a workers’ compensation claims management provider based in Santa Clara, Calif.

