The Future of AI in Insurance
Artificial intelligence (AI) and machine learning have come a long way, both in the broader technology landscape and in the insurance industry specifically. That said, there is still much more territory to cover in helping integral employees like claims adjusters do their jobs better, faster and more easily.
Data science is already being used to uncover insights that claims representatives wouldn't have found otherwise, and those insights can be extremely valuable. It identifies patterns within volumes of data too large for humans to comprehend on their own, and machines can alert users to relevant, actionable findings that improve claim outcomes and facilitate operational efficiency.
Even at this basic level, organizations have to compile clean, complete datasets, which is easier said than done. They must ask sharp questions, formulated by knowing explicitly what the organization wants to accomplish with AI and what users of AI systems hope to find in existing data. In other words, organizations have to know what problems they're solving; vague questions won't do. Companies must also take a hard look at the types of data they have access to, the quality of that data, and how an AI system might improve it. Expect this process to keep being refined as companies gain a greater understanding of AI and what it can do.
AI is already being applied to help modernize and automate many claims-related tasks, which until now have been handled largely on paper or in scanned PDFs. As we look to the future, data science will push the insurance industry toward better digitization and improved methods of collecting and maintaining data. Insurtech will continue to mature, opening up numerous possibilities for what can be done with data.
Let’s look at some of the ways AI systems will evolve to move the insurance industry forward.
Models Will Undergo Continuous Monitoring To Eliminate Data Bias
AI will continue to advance as people become more attuned to issues of bias and explainability.
Organizations need to develop the means (or hire the right third-party vendor) to conduct continuous monitoring for bias that could creep into an AI system. When data scientists train a model, everything can seem to be going well, yet they might not realize the model is picking up on bad signals that will cause problems later. When the environment inevitably changes, those problems are laid bare. By putting some form of continuous monitoring in place, with a clear idea of what to expect, an organization can catch potential problems before they become an issue for its customers.
Right now, people are mostly doing basic QA, but it won't be long before they harness sophisticated tools that support the end-to-end development cycle. These tools will help data scientists look for bias in models as they are first developing them, making models more accurate and therefore more valuable over time.
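As a rough illustration of what such monitoring might check, the sketch below compares recent model scores against a baseline to detect drift and compares positive-prediction rates across groups to surface potential bias. It is a minimal example in Python; the inputs, group labels and thresholds are hypothetical, not a description of any particular vendor's tooling.

```python
# A minimal sketch of continuous model monitoring: compare recent predictions
# against a baseline to flag score drift, and compare positive-prediction
# rates across groups to surface potential bias. All names and thresholds
# here are illustrative assumptions, not a production system.
from collections import defaultdict
from scipy.stats import ks_2samp


def check_score_drift(baseline_scores, recent_scores, p_threshold=0.01):
    """Kolmogorov-Smirnov test: has the score distribution shifted?"""
    stat, p_value = ks_2samp(baseline_scores, recent_scores)
    return {"statistic": stat, "p_value": p_value, "drift": p_value < p_threshold}


def check_group_rates(predictions, groups, max_gap=0.10):
    """Flag groups whose positive-prediction rate diverges from the overall rate."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    overall = sum(positives.values()) / sum(totals.values())
    flagged = {}
    for group in totals:
        rate = positives[group] / totals[group]
        if abs(rate - overall) > max_gap:
            flagged[group] = rate
    return overall, flagged
```

Run on a schedule against live predictions, checks like these give a team an early warning rather than a post-mortem.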
Domain Expertise Will Matter Even More
As organizations create these monitoring systems, the systems can become sensitive to disproportionate results. Organizations must therefore introduce domain knowledge of what is expected in order to determine whether results are valid based on real experience. A machine is never going to be able to do everything on its own. Organizations will have to say, for example, "We don't expect many claims to head to litigation based on this type of injury in a particular demographic." Yes, AI can drill down to that level of specificity. Data scientists will have to be ready to look for cases where things start to go askew. To do that, systems, and even the best off-the-shelf toolkits, have to be adapted to the domain problem.
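One way to fold that domain knowledge into a monitoring system is to encode the expert's expectations as explicit ranges and flag segments whose predicted litigation rates fall outside them. The sketch below is purely illustrative; the segment keys and expected ranges are invented placeholders that a claims expert would have to supply.

```python
# A hedged sketch of encoding domain expectations as explicit checks:
# compare the model's predicted litigation rate per segment against the
# range a claims expert says is plausible. Segment names and ranges are
# invented for illustration.
EXPECTED_LITIGATION_RATES = {
    # (injury type, demographic segment): (low, high) expected rate
    ("soft_tissue", "segment_a"): (0.02, 0.08),
    ("fracture", "segment_b"): (0.05, 0.15),
}


def flag_unexpected_segments(predicted_rates):
    """predicted_rates: dict mapping segment key -> predicted litigation rate."""
    alerts = []
    for segment, rate in predicted_rates.items():
        low, high = EXPECTED_LITIGATION_RATES.get(segment, (0.0, 1.0))
        if not (low <= rate <= high):
            alerts.append((segment, rate, (low, high)))
    return alerts
```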
Data scientists are generally aware of what technology options are available to them. They may not be aware of the myriad factors that go into a claim, however. So, at most companies, the issue becomes: can the data scientists determine whether the technologies they know and have access to are appropriate for the specific problems they're trying to solve? Generally, the challenge organizations face when implementing data science solutions is the gap between what the technology offers and what the organization actually needs to learn from its data.
Statistical methods, on which all of this is based, have their limitations. That's why domain knowledge must be applied. I watched a conference presentation recently that perfectly illustrated this issue. The speaker said that if you train a deep learning system on a large body of text and then ask it, "What color are sheep?" it will tell you that sheep are black. The reason is that even though we know as humans that most sheep are white, it's not something we talk about; the exceptions are what get mentioned. That knowledge is implicit. So we can't extract that kind of implicit knowledge from text, at least not without a lot of sophistication. There will always have to be a human in the loop to correct these kinds of biases and close the gap between what you can learn from data and what we actually know about the world. That happens by inviting domain expertise into the data science creation process.
We’re getting better and better at democratizing access to AI systems, but there will always be an art to implementing them — where the data scientists have to be close to the subject matter experts in order to understand the underlying data issues, what the outcome is supposed to be, and what the motivations are for those outcomes.
Unstructured Data Will Become More Important
There is so much data at insurance companies' disposal, but we have tapped into only a small percentage of it, and we've yet to cultivate some of the most significant assets. The integration and analysis of unstructured data will change that as it becomes more accessible.
Case in point: natural language processing continues to mature. Instead of relying only on structured fields, such as a yes/no surgery flag, a system can read the claim notes themselves and give adjusters a more holistic view of the claim, surfacing more and more signals that would otherwise have escaped the adjuster's attention.
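To make that concrete, here is a minimal sketch of pulling a surgery signal directly from free-text claim notes. It uses spaCy's phrase matcher with a small, invented term list; a real system would rely on trained clinical entity models rather than keywords.

```python
# An illustrative sketch: scan free-text claim notes for surgery mentions
# instead of relying on a structured yes/no flag. The term list is a
# placeholder; production systems would use trained clinical NER models.
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")  # tokenizer-only pipeline, no model download needed
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
terms = ["surgery", "surgical repair", "arthroscopy", "operated on"]
matcher.add("SURGERY", [nlp(term) for term in terms])


def surgery_mentions(note_text):
    """Return the surgery-related phrases found in a claim note."""
    doc = nlp(note_text)
    return [doc[start:end].text for _, start, end in matcher(doc)]
```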
Images also provide all kinds of exciting and insightful unstructured data. Interpreting scanned documents is a necessary part of claims, and advanced AI systems that can handle unstructured data would be able to read them and incorporate relevant information into outputs for evaluation. Theoretically, even further in the future, systems could analyze photos from car accidents to suggest next steps and cost estimates for adjusters.
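As a small, assumption-laden sketch of the scanned-document piece, the snippet below uses the open-source Tesseract OCR engine (via the pytesseract and Pillow packages, which must be installed) to turn a scanned page into searchable text. This is only a first step; a production pipeline would add layout analysis, confidence scoring and human review.

```python
# A minimal OCR sketch: convert a scanned claim document into raw text.
# Assumes the Tesseract engine plus the pytesseract and Pillow packages
# are installed; the file path is a placeholder.
from PIL import Image
import pytesseract


def extract_text(scan_path: str) -> str:
    """Return the text recognized on a scanned page."""
    return pytesseract.image_to_string(Image.open(scan_path))
```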
Systems that can interpret unstructured data will also be able to extract information about drugs, treatments and comorbidities from medical records. In claim notes, sentiment analysis will look for patterns across many claims to identify the ones producing the most negative interactions with claimants, so that early interventions can influence claim outcomes. We are just scratching the surface on unstructured data, but it won't be long before it makes a profound impact on insurtech.
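For the sentiment piece, a minimal sketch might look like the following: it runs claim notes through an off-the-shelf sentiment model from the Hugging Face transformers library and flags the notes scored as strongly negative. The threshold, and the idea of routing flagged claims for early intervention, are assumptions made for illustration.

```python
# An illustrative sketch of scanning claim notes for negative sentiment with
# an off-the-shelf model from the Hugging Face transformers library; the
# threshold is an assumption chosen for the example.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")


def flag_negative_notes(notes, threshold=0.9):
    """Return notes the model labels NEGATIVE with high confidence."""
    flagged = []
    for note, result in zip(notes, sentiment(notes)):
        if result["label"] == "NEGATIVE" and result["score"] >= threshold:
            flagged.append((note, result["score"]))
    return flagged
```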
Feedback Loops Will Improve
Ideally, good machine learning systems involve feedback loops. Human interaction with the machine should always improve the machine’s performance in some way. New situations will perpetually arise, requiring a smooth and unobtrusive way for humans to interact with machines.
For example, claims adjusters may review the model's outputs and determine that a sentiment wasn't actually negative, or notice that the system missed extracting a drug. By letting the machine know what happens on the "real-world" side of things, machines learn and improve, and so do claims adjusters! This kind of continuous improvement loop is where AI will ultimately shine: it empowers adjusters with rich, accurate knowledge, and with each interaction, the adjuster can inject a bit more "humanness" into the machine for even better results the next time.
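A highly simplified sketch of that loop, assuming a scikit-learn text classifier: adjuster corrections are stored as labeled examples and folded into the next training run. The feature extraction, model choice and data shapes here are placeholders, not a description of any production system.

```python
# A simplified sketch of a human-in-the-loop feedback cycle: adjuster
# corrections are stored as labeled examples and folded into the next
# training run. The feature extraction and model choice are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

corrections = []  # (claim_note, corrected_label) pairs supplied by adjusters


def record_feedback(claim_note: str, corrected_label: int) -> None:
    """Capture an adjuster's correction of the model's output."""
    corrections.append((claim_note, corrected_label))


def retrain(base_notes, base_labels):
    """Retrain on the original data plus everything adjusters have corrected."""
    notes = list(base_notes) + [note for note, _ in corrections]
    labels = list(base_labels) + [label for _, label in corrections]
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(notes, labels)
    return model
```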
Companies are putting systems in place to do that today, but it will still take a while to achieve results in a meaningful way. Not many organizations have reached this level of improvement at scale, except perhaps the Googles of the world, but the insurance industry is making progress every day. AI systems, with increasing human input, are becoming more integral all the time. Within the next five to 10 years, expect AI to fully transform how claims are settled. It's a fascinating time, and I for one look forward to this data-rich future!
About Karin Golde
Karin Golde, vice president of data science, brings 20 years of practical experience to CLARA Analytics, during which time she honed her capabilities in sentiment analysis, information retrieval, ontology development, and document classification. She is responsible for conceiving and driving the development of a robust portfolio of AI and machine learning-based initiatives that accelerate the growth of CLARA's business. Previously, Golde built and led large teams with the technical and collaborative skills needed to drive innovation in areas relating to natural language processing, data science, and computer vision. Karin holds a Ph.D. in linguistics from The Ohio State University. For more information on CLARA Analytics, visit www.claraanalytics.com and follow CLARA Analytics on LinkedIn, Facebook and Twitter.
As first published on datasciencecentral.com