Aflac CIO on Finding Value in AI Model Deployments

An interview with Aflac CIO Shelia Anderson on how the insurer deployed AI models that were feasible and delivered value.

Deborah Yao, Editor

May 10, 2023

14 Min Read

Aflac CIO Shelia Anderson joined the AI Business podcast to talk about the insurer’s AI journey from idea to deployment and how the company found value for its business. She said AI models are not a ‘set it and forget it’ endeavor and explained Aflac’s thinking on generative AI.

Listen to the podcast below or read the edited transcript.

AI Business: Aflac is a different kind of insurance company. Can you tell us briefly what sets you apart from other insurers?

Shelia Anderson: We're actually what I call a well-known company, but not known well. Many people do know our brand, the Aflac duck. Aflac actually specializes in supplemental benefits – for such things as accidents, cancer, critical illness, hospitalization, life insurance – and we provide value to our policyholders by helping them to bridge the gap for unexpected costs that your traditional medical insurance doesn't cover.

Our products basically provide peace of mind or that financial assurance, knowing that Aflac is there in their time of need to focus on recovery and really, minimizing that financial stress. We were founded in 1955 as a cancer insurance company. And today, we're roughly a $26 billion company operating out of both the U.S. and Japan, covering over 50 million customers.

AI Business: The insurance industry is famously data driven, which is a fitting backdrop for artificial intelligence. Can you tell me how long Aflac has been using AI and in what capacity?

Anderson: In the latter part of 2018 and early 2019, Aflac began researching artificial intelligence components, including machine learning models. (Then in 2020), we were all kicked into the pandemic and forced to work from home. … Around mid-2020, we made a decision to actually begin capitalizing on AI technology to augment, first, our claims adjudication process for our U.S. Aflac business.

AI Business: What were the business problems you needed to solve?

Anderson: We rolled out a claims automation platform in late 2022. That was leveraging a combination of artificial intelligence and machine learning to process our customer claims more efficiently and accurately. Our approach to AI came out of necessity to innovate during the pandemic. Aflac has traditionally been a very face-to-face interaction company, but digital enablement to support our customer experiences was vital during that time. So that came to the forefront of our business and was a huge focus for us.

Also, the (economic) contraction in the market was forcing companies to become more efficient. So we started looking to artificial intelligence and machine learning as a way to do both of those things: to improve our overall customer experience and then also to help streamline our operations.

I can give you some specific examples. Processing claims has often been complex and time consuming for any insurer. For us, about 46% of our claims were fully automated at the time using straight-through processing, or STP, primarily focusing on our wellness claims. Those could be automated because we didn't require additional proof of loss for those claims. We accepted basic answers to the claim form questions, with attestation from the policyholder, which would serve as sufficient proof. So that was prime for automation.

Forty-nine percent of our manual claims processing resulted in payouts that were lower in dollar value, so those are less complex claims. That represents only about 5% of our claims payouts, but we still spend quite a bit of time manually managing those claims. For these simple claims that don't require proof of loss, such as the wellness claims, we pay quickly so that our customers see the value, and we also don't want our specialists having to spend a lot of time there. We would much rather they focus more care on insureds that are in more complex situations and often really need that more personal touch.
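The segmentation Anderson describes can be sketched as a simple eligibility gate that routes low-dollar, attested wellness claims to straight-through processing and everything else to a specialist. The field names and payout cap below are illustrative assumptions, not Aflac's actual criteria.

```python
# Hypothetical sketch of a rules-based gate deciding whether a claim can go
# straight-through (STP) or needs a human specialist. Fields and thresholds
# are illustrative, not Aflac's actual criteria.

def is_stp_eligible(claim: dict, payout_cap: float = 500.0) -> bool:
    """Return True if the claim can be auto-processed without proof of loss."""
    return (
        claim.get("type") == "wellness"              # no proof of loss required
        and claim.get("attested", False)             # policyholder attestation on file
        and claim.get("amount", 0.0) <= payout_cap   # low-dollar, less complex
    )

claims = [
    {"id": 1, "type": "wellness", "attested": True, "amount": 60.0},
    {"id": 2, "type": "accident", "attested": True, "amount": 1200.0},
]
auto = [c["id"] for c in claims if is_stp_eligible(c)]
# claim 1 is routed to STP; claim 2 goes to a specialist
```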

And of course, our goal for all of this is to ensure a frictionless customer experience and to pay our claims as quickly as possible to our insureds.

AI Business: How is AI specifically being applied in automation? What kind of data does it use and what kind of insights did it pull out that necessitated AI versus just rules-based automation?

Anderson: We actually have a combination of all of that. I'll talk you through a little bit of the solution. First of all, we identified those claims scenarios, initial and continuing claims that have the lower dollar value, less complex payouts. We're really seeking to do straight through processing for those. With our code-based processing platform, we've transformed the key stages of that claims process. And it's a mix of some workflow, and some machine learning and artificial intelligence.

The key components include, first, an AI-based document digitization pipeline – where we're doing the extraction, classification, annotation, and indexing of, for example, proof of loss documents. Then we chose to implement, as a way to have a tighter connection, knowledge graphs that basically map all of that extracted information from the documents to get a better context of that process data and to define those relationships a little bit tighter. Next, we have an end-to-end AI claims processing workflow for full adjudication across all of the different lines of business. Our full automation is a combination of all three of those things in our business.
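As a rough illustration of the knowledge-graph step (not Aflac's implementation), fields extracted from a claim document can be mapped to nodes and labeled relationships that downstream adjudication can query for context. All entity names here are made up.

```python
# Minimal sketch: extracted document fields become nodes and labeled edges
# in a tiny knowledge graph. Entity names are hypothetical.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        self.edges = defaultdict(list)  # node -> [(relation, node), ...]

    def add(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def neighbors(self, subject, relation):
        return [o for r, o in self.edges[subject] if r == relation]

# Fields an upstream extraction/classification step might have produced.
extracted = {
    "claim_id": "C-1001",
    "policy_id": "P-77",
    "diagnosis": "fracture",
    "document": "proof_of_loss.pdf",
}

kg = KnowledgeGraph()
kg.add(extracted["claim_id"], "filed_under", extracted["policy_id"])
kg.add(extracted["claim_id"], "has_diagnosis", extracted["diagnosis"])
kg.add(extracted["claim_id"], "supported_by", extracted["document"])

# Downstream adjudication can now ask contextual questions, e.g. which
# policy a claim belongs to:
kg.neighbors("C-1001", "filed_under")  # -> ["P-77"]
```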


We have actually two areas specifically where AI is applied in that. We have a built-in optical character recognition (OCR) capability that can read through, for example, handwritten documents, which we still get some of, and then convert those to digital text or data, which goes through various thresholds. And then, of course, as I mentioned previously, we have that claims adjudication engine, which is primarily a sequence of AI/ML models. It leverages a lot of our standard operating procedures. For example, it takes into account prior decisions and learnings on those types of claims to adjudicate policy coverage, to the claim benefit mapping and then ultimately a payment protocol recommendation coming out of that. That's a simplified view of what that process does.
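The two AI touchpoints Anderson mentions – OCR output checked against confidence thresholds, then a sequence of adjudication model stages – can be sketched as below. The stage functions are hypothetical stand-ins for trained models, not Aflac's actual engine.

```python
# Hypothetical sketch: OCR confidence gates entry to a staged adjudication
# pipeline (coverage check -> benefit mapping -> payment recommendation).
# Stage functions are stand-ins for trained models.

def adjudicate(ocr_text: str, ocr_confidence: float, threshold: float = 0.9):
    # Below-threshold OCR output goes to a human rather than auto-processing.
    if ocr_confidence < threshold:
        return {"route": "manual_review", "reason": "low OCR confidence"}

    stages = [check_policy_coverage, map_claim_to_benefit, recommend_payment]
    result = {"text": ocr_text}
    for stage in stages:
        result = stage(result)          # each stage enriches the running result
        if result.get("route") == "manual_review":
            return result               # any stage can bail out to a specialist
    result["route"] = "auto_pay"
    return result

def check_policy_coverage(r):
    r["covered"] = "wellness" in r["text"]   # toy coverage rule
    if not r["covered"]:
        r["route"] = "manual_review"
    return r

def map_claim_to_benefit(r):
    r["benefit"] = "annual_wellness"         # toy benefit mapping
    return r

def recommend_payment(r):
    r["payment"] = 60.0                      # toy payment recommendation
    return r

adjudicate("wellness visit 2023-01-05", ocr_confidence=0.97)
# -> routed to auto_pay with a benefit mapping and payment recommendation
```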

AI Business: Can you share other learnings from your AI journey?

Anderson: Throughout our AI journey, we've of course encountered several valuable lessons that have shaped our approach and informed our subsequent stages and strategies, probably similar to many other organizations that are going through their AI journey.

I'll focus on a few categories. Data quality is paramount, and insurers have a lot of data. What's important is for us to be able to leverage it in the right way to make informed decisions. So here, ensuring that our data is accurate, complete and consistent, was essential for the development of the reliable AI model. There was a lot of focus on data quality in the beginning and understanding what data is needed and what data you have.

The other thing from an organizational perspective that's very important is collaboration between teams across the organization. We have cross-functional collaboration between, for example, data scientists who sit on a different team, and our business domain experts, product owners, technology, and then of course, our engineers. All of that is crucial to understand your business requirements, contextualize your data and integrate those AI solutions into your existing workflow. You have to plug it into your business operations so that it drives the value.

Adopting an iterative approach

I'll talk a little bit more about value and user adoption as well. The other piece tied to the development lifecycle that was key for us is iterative model development. Adopting an iterative approach to model development allows for continuous improvement and refinement based upon feedback, helping to enhance performance and ensure alignment with your business objectives. A one-team approach with all of those very tight stakeholders is key to success, and developing those relationships lets you get that critical feedback.

The other big thing for us is change management and user adoption. Navigating the transition to AI-driven solutions definitely requires thoughtful change management across your organization, effective communication, to ensure user adoption and buy in from employees and stakeholders. This is where it's so important to really look at the usefulness of that model in your critical business operation and make sure that it's fitting and driving the right type of value that's expected. Also, you want it to simplify and help to make better informed decisions.

Last would be the ongoing monitoring and maintenance of the models. We all have those production issues that we have to support; AI models are no different. You have to focus on continuous monitoring, continuous maintenance to address changes, whether it be in your data, or any data quality issues that may arise, changing user needs − you have to constantly assess the market conditions to make sure that you're keeping up with that and then ensuring that you have sustained performance and relevance of the model over time.

I would say the AI models are definitely not the ‘set it and forget it’ types of technology that some people may be accustomed to.

AI Business: Can you address the cost involved in deploying an AI model? What kind of investment is needed? Is that something you can share?

Anderson: I don't have those specific numbers with me. But I will tell you a bit about the approach that we do take. We actually have a qualification approach to determine what we would even want to try to solve for. We think in terms of priority when we look at different projects that may come across the request cycle. We look at things like desirability. Is this actually needed and wanted in the business? What's the viability? Is it technically feasible? And does it solve our business problems? Is it financially cost effective? What is the value? The time to value also is important. So we look at the metrics around investment, and return on investment is a huge piece for us.

AI Business: How long would you say it would take for a proof of concept to be deployed?

Anderson: It depends on what it is that you're doing. For us, we started doing initial research in 2018 and 2019. Then once we decided to actually deploy some production models, we were (up and) working in 2020. So it depends. Definitely in less than a year, you can get fully scaled and start having models in production. And actually even a much shorter timeframe, depending upon what it is. It depends upon the complexity of the models, how much you're touching across the organization, how prepared your data is, if you have readily available data that's already clean − because what is going to take up a lot of time is focusing on the quality of that data and the cleansing of the data.

AI Business: How do you decide whether you're going to build or buy? Or do you do both?

Anderson: We actually take an approach of doing both. Build or buy for us is always a decision (we have to make). Of course, we prefer to buy if it's going to be a time-to-market capability where you need it fast. It depends on what your parameters are.

We actually do a combination. We look at our requirements, just as we do with any other technology solution. We assess the need. We put together a selection criteria for the solution and the target architecture. From there we make those decisions and start piloting.

AI Business: You talked about a successful use case. Can you share maybe a less successful one and how did you pivot?

Anderson: So the less successful example that I would use is actually … the first iteration of our claims model. We proved out that we could use AI for claims adjudication, but it wasn't consistent. We used historical datasets, for example, so that's where we learned very much how important data was. We looked at different benefit types; we actually had too much of what I'll call a black box concept. It was really difficult to explain and justify the beneficial outcomes.

We went back to the drawing board and re-looked at how do we segment the information? How do we clean up that data so that we can actually explain it? Because for people to trust models − especially when it's your first time doing this, even though you may have the faith this model is working, and the outcomes are correct − you have to be able to walk (other people through it) so they have a good understanding, and you can prove it out for them. Then the trust will come.

AI Business: One of the biggest concerns about AI models is that it could be inherently biased. You mentioned the black box problem. How do you put guardrails around the models?

Anderson: That is a huge concern, especially in AI in general. In our claims automation solution, we're leveraging a knowledge graph as part of our solution that enables a more structured and interconnected relationship of information and data points. That in itself is removing some of the bias by representing the relationships and connections between entities. The graphs basically provide the systems a more nuanced understanding of context, mitigating against inherent biases. We also continue to monitor and refine to avoid biases. And then when our AI solutions leverage different techniques, we look to add bias detections and mitigation algorithms built into our capabilities to help ensure that our systems remain fair and unbiased as we're making decisions. So we actually monitor and assess that and then build in some of the auditing and detecting.
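One simple form the monitoring Anderson describes can take, purely as an illustration, is a demographic-parity style check: compare auto-approval rates across groups and flag the model for review when the gap exceeds a tolerance. The groups and tolerance below are made-up assumptions.

```python
# Illustrative bias monitor: compare approval rates across groups and flag
# when the gap exceeds a tolerance. Groups and tolerance are assumptions.

def approval_rates(decisions):
    """decisions: list of (group, approved) tuples -> {group: approval_rate}"""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
gap = parity_gap(decisions)   # 2/3 - 1/3 = 0.333...
flagged = gap > 0.2           # exceeds tolerance -> investigate the model
```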

AI Business: Let's talk about generative AI. Is there a place for ChatGPT and the like in Aflac?

Anderson: This is such an interesting topic today because so many people are using it. In general, I think it is driving a lot of value. It's also driving a lot of concern, as you see. I do think that large language models can be very beneficial to Aflac marketing, our business day-to-day augmentation, communication, research, and even decision-making − and in technology, even code generation. So we do see a potential for large language models at our company.

However, we're in the middle of assessing what that is, and we need to balance that with the inherent risks of exposing our intellectual property, or legal risks that may come from leveraging large language models. People may not exactly understand where that information is going once you enter it into that model.

We're in the middle of building out policies around responsible use within our company to include things like access control, data, privacy, auditability, version updates. Also, I think a big piece of this for any company is going to be that user training element − just like we do security training internally to keep our information and data safe and protect our customer information. We will do the same as it relates to generative AI.

We are actively looking at this now. But of course, our top concern is ensuring that our data is protected, and at the same time, ensure that there's a need to leverage it in our business for value.

AI Business: What's the next big project you're working on at Aflac?

Anderson: I'll talk about two things that I'm super proud of with our organization. One is we're actually in a journey to the cloud for Aflac. This is one of those areas where we have a great opportunity to leverage more pure AWS capabilities in the cloud. So we're on a 2-year journey to move all of our distributed capabilities to the cloud. I'm very excited about that. Of course, that includes upskilling, reskilling, training, looking at your organization differently. We're very excited about the potential that's going to open up for us as a company and in the future.

The other thing I'd love to talk about is very much tied to our brand. Aflac is a company that's focused very much on ‘care on purpose.’ An example is our funding support of the pediatric Cancer and Blood Disorders Center in Atlanta. We've contributed over $166 million since 1995. Then, we (developed) a 'My Special Aflac Duck,' which is a robotic companion and app designed to help young patients communicate their emotions with caregivers. We're super proud of that. We've given away more than 22,000 'My Special Aflac Ducks' to children with cancer and sickle cell disease. …

As a result of this, we're launching a refreshed flagship product: our individual cancer product. This will enable a more flexible offering; it meets our consumers’ needs while also providing enhanced protection against unexpected costs. We are focused on increasing our benefits to the policyholder by about 20% without increasing their premiums. There is a lot of technology that runs behind that, but we're most excited about the value and protection that it can bring to our policyholders.

To keep up-to-date with the AI Business Podcast, subscribe on Apple and Spotify or wherever you get your podcasts.


About the Author(s)

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.
