NHS Resolution
Interviews
Simon Hammond
Director of Claims Management
NHS Resolution
When considering the appropriate level and type of technology investment for our organisation and what Generative AI has to offer, the key for us – as with all advancements in technology – is not just to look at the benefit it will deliver for NHS Resolution, but also for the wider system we operate in. We are in a somewhat unique space because we operate within the health system and are also juxtaposed with the justice system. We are looking for advancements that will give us greater visibility of the ‘concerned’ space, i.e. help us identify where something has gone wrong to a significant degree, which we can then investigate in greater detail and in a wider context, giving us valuable insights to share with other parts of the healthcare system. So when it comes to deciding on the right technology, for us it is not just about considering what gains we can make for ourselves operationally (saving operating costs, improving consistency and fairness in our decision-making, with fewer resources), but more about whether or not it will benefit what we can deliver back to the health service, both in relation to policy development and in respect of safer clinical care. But that is perhaps where we bring a unique perspective, because we are not profit-making.
There is a distinction between Natural Language Processing (NLP) technology (which has the ability to work with unstructured data and produce good, reliable and useful outputs) and true machine learning. NLP is a good starting point and brings many benefits, not least easing the burden on staff and making their jobs easier. We have found NLP extremely useful in supporting our staff in this way, and in garnering insights and certainty in our environment by investigating trends and patterns, which we can then feed back into the wider healthcare community with a view to delivering better clinical services to the public.
Whether integrating NLP or machine learning (I see the two as quite distinct), you need not just the right technology platform to support it, but the right data platform as well. From our discussions with other indemnifiers we know that everyone is facing the same challenges around this, particularly around the issue of ‘legacy data’. Tech suppliers talk emphatically about the ‘Holy Grail’ of a complete and perfect data set, this being the only true way to be sure of reliable and consistent output, but most readily acknowledge the restrictions in data. More on this later…
Where we are right now is in providing a more efficient system with the integration of some aspects of ML, but this is in its infancy. Where we are moving to is the integration of true AI, to see what we can produce from our data that will help the wider system learn from what we see. We are currently updating our IT architecture so we can integrate AI for the benefit of both our internal processes and the external insights we provide.
Where we see the benefits of bringing true Gen AI into our systems is in using it to learn from our past experience, in order to inform our staff more accurately about potential outcomes and to help with risk management, flagging where the risks to the health service may lie so we can work with other health partners to avoid those risks and see less harm in the overall system. There are probably multiple other future uses of Gen AI that will bring a variety of benefits – for example, assisting with our pricing models and our actuarial forecasting in relation to our long-term liabilities.
We have already launched the first iteration of our new case management system in parts of our business, and the area covering claims, the largest part of our business, is due to go live with its new case management system in the next two to three months. So this is very much the here and now for us! This has been a couple of years in development, as you can imagine. What it will give us is the platform for a true AI environment that we can actually start to operate in.
But there are many challenges: like everyone else in this space, and as referenced above, our biggest concerns are around the availability of reliable data – particularly the quality of data from legacy systems. We live in the real world, so that Holy Grail of a complete and perfect data set can only ever be an aspiration. Indeed, this is why we have seen the rise of ‘Data Scientists’. We use Data Scientists and there is a lot they can do. But there are natural limitations because, at the end of the day, they are handling historic data sets, and the data therefore does not hold the level of consistency or granularity that allows correlations to be drawn. This has the potential to become a major issue when you start to apply machine learning.
You have to have the foundation of the right data platform in place, and also the right data sets, for AI to deliver its promised benefits and produce appropriate results that are accurate and can be relied upon. Conversely, if the data is flawed – and this is what I hear quite regularly from the supplier community – then the potential is that people could build AI systems that produce results from pseudo data, or from a small sample that is not necessarily representative, which then becomes a challenge when applied to a wider data set. You may then have gaps, and therefore cannot reproduce the same or similar results to prove the output is repeatable and reliable.
Another challenge is of course the people element: how staff and colleagues respond to this drive to bring in AI. There’s a lot of excitement about it in our organisation, and what it can do, which is positive. But a challenge is to make sure everyone is on the same page in terms of realistic expectations. Some, of course, may be fearful of the machines taking over from the humans. For want of a better phrase, you need to reassure them that you are not building a ‘robot army’, with the end goal that all decisions across the organisation will be made by AI rather than through human interaction. Others, however, will be at the other end of the spectrum, wanting AI immediately and perhaps not appreciating the need for a thoughtfully paced approach and reflection around the guard rails that might be needed, the regulatory issues that sit around it, or the potential for unintended consequences. And there’s a whole range that falls between these two ends of the spectrum.
The AI has to interact successfully with the organisation and its people, because it can drive so many benefits if used in an appropriate way – so you need to make sure the people in the business who will be using it and benefitting from its output understand how it works and how it is to be used. It is about educating people so they understand the benefits to their own roles.
Another key element is having a clear data strategy – not just for operational efficacy but for regulatory compliance. You need to be ethical in how you go about collecting, storing and using data, and in how you intend to utilise the outputs of any models you are building, whether for financial provisioning, decisions in relation to claims management, or delivering insights to the wider health community. Whatever those models are designed to do, you need a strategy in place to ensure the models themselves, and the way you are using the data, are assessed on a regular basis, to ensure that bias isn’t creeping in and that they are producing reliable results consistently.
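To make that kind of recurring assessment concrete, here is a minimal sketch in Python of one simple fairness check, a demographic parity gap across groups. It is an illustration only, not NHS Resolution’s actual process: the column names, sample data and tolerance threshold are all hypothetical, and a real assessment regime would track many more metrics over time.

```python
# A minimal sketch of a recurring bias check, assuming a pandas DataFrame
# of past decisions with hypothetical columns 'group' (a sensitive
# attribute) and 'approved' (the model's binary output).
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Return the gap between the highest and lowest approval rates across groups.

    A gap that grows over successive reviews is one signal that bias may be
    creeping into the model's outputs.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Illustrative data only: two groups with different approval rates.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

gap = demographic_parity_gap(decisions, "group", "approved")

# Flag the model for human review if the gap exceeds an agreed tolerance.
TOLERANCE = 0.2
if gap > TOLERANCE:
    print(f"Review needed: approval-rate gap of {gap:.2f} exceeds {TOLERANCE}")
else:
    print(f"Within tolerance: approval-rate gap of {gap:.2f}")
```

In practice a check like this would run on each scheduled review, with results logged and escalated for human scrutiny whenever a threshold is breached.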
There is a lot of discussion at the moment within the wider AI space about how it is going to be regulated. I think the risk is that the legislative framework will always trail invention and innovation in the tech space; we have seen this historically. I believe the key is for an organisation to understand and set its own risk boundaries, to remain within these regulatory frameworks, and to adapt as the law evolves. There are likely challenges coming down the line, across industries, in relation to how the Data Protection Act interacts with the potential of AI and the ingestion of different data models. I’m talking here about the wider environment, not just at organisational level but maybe even broader.
In addition, I think there will no doubt be frameworks brought in to regulate how organisations can actually use AI in certain decision-making processes. I think it has to be down to the individual organisation to ensure it sets its own risk appetite accordingly, against those regulatory frameworks. This is a recurring conversation internally at NHS Resolution: how our risk appetite fits with the wider technological advancements that might be coming into our environment now and in the future. It’s a double-edged sword: on the one hand we want to stay well within the regulatory confines and ensure that the way we use technology is both ethical and on the right side of regulation. On the other hand, we are balancing our risk appetite statement with really wanting to use the new technologies positively and derive the benefits from them. This is a difficult line for any organisation to set, especially when technology (especially in the Gen AI space) is moving at such a dramatic pace.

And all this needs to be reviewed continually. How regularly should organisations conduct these reviews? It is probably too simplistic to put a timeline against it, but if you did only an annual review you would soon find yourself out of date. You have to set it against your ambitions and what your investment strategy for your tech future looks like – and also against your operational processes, because every time you introduce new aspects of tech your operational processes are going to change, and therefore your risks may change in either direction (less/more). They may improve because you are safer: for example, some advancements may make fraud detection and prevention easier, so your risks actually decrease. But on the opposite side of that same coin, ingesting more tech-based decision making may present greater risks, such as biases being present. For example, if the pre-event detection mechanisms you choose to adopt end up identifying the wrong categories of individuals for fraud investigations, this could lead to reputational damage and added operational cost, and most importantly could delay the claims process for genuine claimants. So the nature of risk is going to change depending on your ingestion of tech within your organisation, possibly improving one risk while at the very same time heightening another.
You can see exactly why decisions around the ingestion of Generative AI cannot be rushed!
The key is to get everyone to buy into the appropriate pace of change, as well as the change itself. To do this you need to ensure people are kept informed about the timeline and the ‘art of the realistically possible’. It’s about allowing people to understand that you can only move at a certain pace – and that moving at that given pace is a critical aspect: bringing people on the journey with you and dispelling myths along the way.
Of course, these challenges are common to any change initiative. They are exacerbated when external pressures are at play – for example, the Government currently wanting all its agencies to invest heavily in AI. Then people see technology transformation as something that has to be done, rather than a nice-to-have investment opportunity for the business.
This risk of algorithmic bias is one we talk about a lot, and it requires time for deep reflection: the issue of machines making assumptions based on statistical evidence, even when data sets are strong, because the AI can’t understand the subtleties that lie behind the statistics. It comes back to how you see the future of AI in the decision-making processes of your organisation. These are conversations we are having continually. Would we ever get to a point where the machine is telling us everything and making the decisions? The risks in this are far too great. At NHS Resolution we are dealing with a very, very sensitive area of claims management – fatalities of people of all ages, some of the most sensitive health issues that occur in the population, and some of the most severe injuries people can have, such as cerebral palsy and birth injuries with life-long impacts. So we accept there will always be a requirement for an element of human decision making in all that we do.
In essence, this is about the risk of unintended consequences. It also applies to our work providing insights for our external partners in the wider healthcare system. In looking to derive benefits for our members by identifying the sort of harm that has occurred and quantifying the risk, we need to be very careful in understanding the unintended consequences that could tarnish the information we are sharing. For us, our ambition for AI is about looking at how it can support our staff, both in making decisions and in helping the health system learn – as opposed to AI doing this in its entirety on its own.
Motor Insurers' Bureau
Interviews
Andrew Wilkinson
Chief Claims Officer
Motor Insurers' Bureau
The founding principle of the Motor Insurers' Bureau (MIB) is that no one injured by an uninsured driver or in a hit-and-run incident should be left without the support they deserve. Our long-term goal is to eradicate uninsured driving completely, and to achieve this we know we must find ways to go further and faster. So of course we are interested in exploring how Generative AI could help us. Our mission is not just about handling individual claims, but about serving the wider community and making our roads safer.
We look at claims in the context of a value chain: before a claim arises, i.e. identifying geographical hotspots for uninsured-driver incidents and hit-and-runs; when an incident occurs and a claim is made; instructing suppliers and partners (including lawyers); how we approach negotiating settlements; how we handle data and management information; how we manage workflows and time; and how we analyse data and draw conclusions. We certainly see a role for AI in the pre-claims process, working with the police and the DVLA, for example using cameras to predict hotspots. Also in the investigation of claims: digging to find insurers or the identity of drivers using ‘connected vehicle’ technology, such as getting information, where appropriate, from satnavs, phones and other internet connectivity to pinpoint who was in a car at a particular time and place – although this is several years down the line. We see the role of AI as assisting decision-making, not making decisions by itself. There will always be the need for humans to take responsibility for decisions. But AI can do a lot to assist claims handlers: collating information, for example about the claimant’s medical records and the events surrounding the damage; presenting summaries to bring handlers up to speed more quickly; putting documents together for experts or partners to prepare them for negotiating settlements; perhaps even using estimating assistance tools. We can see a benefit here in improving the consistency of our decisions.
Humans are unique, and it takes a human to understand that. For example, our handlers are very much alive to the fact that different people react differently to a traumatic experience, impacting how they present to experts and even the ways their symptoms manifest. But if AI can remove a big part of our handlers’ admin load, they will be freed up to spend more time on this human element of their jobs, which could bring significant benefits to their work and to claimants’ experience. Speaking personally by way of example, I love negotiating settlements – but all the painstaking admin involved in the run-up, not so much! We tell our handlers to think of AI as a virtual colleague sat next to them, or an Assistant Best Friend. It can also be a highly effective trainer, helping people pick up the relevant case law and legal complexities on the job. DACB’s excellent AI tool for credit hire is a great example of AI at its best: whereas previously a handler’s learning was in large part by trial and error over time, this tool helps them make an offer and explains the rationale, acting as a virtual colleague and assistant best friend while training them at the same time.
Concerns about bias in AI-driven analysis are less of an issue for us than for policy-writing insurers, because our work is based on factual events rather than drawing conclusions from statistical data and pricing according to the likelihood that people will behave in a certain way. But we are concerned about the handling of personal and sensitive data and about what we are inputting into machine learning, which is why any AI we pilot or use is contained in a closed system and is not web based – and why we are very careful to conduct any pilots in a safe environment. In any event, the regulator will be involved in how AI is used in our industry, in terms of the customer journey and ensuring correct and appropriate outcomes. It will be interesting to see how the regulatory framework develops.
The people aspect of change is an essential part of our technology journey. Key to bringing our handlers and the wider business with us is showing that what we are doing and trialling will make their jobs easier and less burdensome. We have all had the experience of the promised benefits from expensive technology investment coming to nothing, so tangible results are necessary to show the benefits are real. We have also all seen and heard the claims by technology suppliers that the shiny new system will mean headcount can be reduced – but in my experience this is never true! Rather, good technology will change the way people go about their jobs.
There is a degree of excitement in the business around AI – new toys, new tools – and this has to be managed too, as we want the pace of AI adoption to be appropriate for the business. There’s a spectrum of course, with reticence and cynicism at one end, from people who have too often seen new technologies fail to live up to their promise; and with progressives at the other end of the scale, impatient to delegate their drudge work to machines so they can focus on the more interesting investigation and negotiation aspects of their role. As leaders we have to bring everyone along with us not just in the need for change, but the appropriate pace of adoption as well.
We are a small organisation relative to others in the insurance space, which means on the one hand we can be fast adopters, but budgets can be an issue. We are of course funded by levies from all motor insurers, and ultimately from their customers’ premiums, so we have to be mindful of this when considering expensive outlay on bespoke technologies. However, an option for us is to piggyback on systems developed and made available to the market by insurers, taking their AI systems’ capabilities and adapting them for our own purposes. This is something we are exploring.
AI also has a role to play in fraud detection, particularly in identifying exaggerated or false claims by looking at the trends and patterns that trigger the need for a more detailed investigation. Given the nature of our work, with investigation at its core from the get-go, we are well set up for this. We see the potential for AI in our processes as very positive. We don’t see it as placing jobs under threat, but as increasing opportunity for our people, making their jobs more skilled and interesting.
We live in exciting times.
Admiral Group
Interviews
Gabriel Biangolino
Value Creation, Head of Strategy
Admiral Group
A number of common themes emerged from the one-to-one interviews. In particular:
1. Whilst many insurers are already using ‘traditional’ predictive AI and machine learning to help assess loss more accurately and identify patterns that point to potential exaggeration or fraud, Generative AI is seen as another level again. The insurers we spoke to are all now piloting use cases to assess the value this new iteration of AI could potentially bring in the future.
2. All interviewees talked about the importance of making sure humans remain in the loop and of perfecting the interplay between the roles of the humans and the AI. A number underlined the need for humans to retain responsibility for decisions, with AI deployed to assist the decision-making process by providing support to claims handlers in a number of ways, freeing up their time to focus on the more complex and human elements of their jobs. Nobody interviewed for this study anticipated a future in which the claims process would be fully automated. All described human interaction as essential, for the simple reason that an insurance claim is always a time of stress and emotion for customers. This is why insurers’ focus for AI investment, and now Gen AI, is for now firmly on back-office functions, working behind the scenes to enable them to serve customers better, rather than on the customer-facing part of their operations.
3. Training was mentioned by all as another key to the successful integration of AI in the claims process: helping the humans in the process to understand not just what AI can do and how to use it, but crucially its limitations too, particularly regarding its interpretation of data. As a number of interviewees put it, if the data input isn’t perfect (and given the industry is currently relying on ‘legacy data’, this is most of the time), the people working with the outputs need to understand that. A number talked about significant investments their companies have made in establishing ‘Data Academies’ to educate the people in the business about how to interpret the output from Gen AI and what to watch out for in reviewing it: anticipating assumptions the AI might be making when processing data and identifying patterns and trends; watching out for bias that could creep into the process, because machines don’t understand the nuances that may sit behind some numerical trends in the way humans can; and being alert to the possibility of machine hallucinations. Everyone talked about the critical role of the humans in the claims process in reviewing and challenging AI output and making sure the conclusions drawn from it make sense in the real world.
4. Most described the major potential benefit of Gen AI as its ability to ‘structure unstructured data’. As one interviewee said, ‘the moment you can do this, you are able not only to make significant improvements to existing models, but also create new models which were out of reach before’. The three areas of insurance business and claims handling that could benefit most from Gen AI in this way were listed as follows (a minimal code sketch of this kind of extraction appears after the list):
- Supporting claims handlers’ calls with customers: Gen AI has a lot to offer in terms of creating summaries of case information for claims handlers at great speed and producing transcripts of calls with customers, freeing handlers up to focus on the more complex, human and interesting parts of their job.
- Processing the millions of incoming documents that insurers receive each year, extracting the most useful and relevant information much more quickly.
- Verifying images: increasingly, customers’ claims are supported with images of damage, and in these days of ChatGPT image generation it is easier for the unscrupulous to fabricate these. Gen AI is able to assess images submitted in a case to check where they have come from and whether they are genuine, or whether there is fraud at play.
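As a minimal sketch of what ‘structuring unstructured data’ can look like in practice, the snippet below asks a large language model to turn a fragment of claims correspondence into JSON. It assumes the OpenAI Python SDK and its JSON response mode; the model name, prompt, field names and sample letter are all illustrative, not any interviewee’s actual pipeline.

```python
# A minimal sketch of 'structuring unstructured data' with a large language
# model, assuming the OpenAI Python SDK; the model name, prompt and field
# names are illustrative, not any insurer's actual system.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_claim_fields(document_text: str) -> dict:
    """Ask the model to pull a few structured fields out of free text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Extract claim details as JSON with keys: "
                        "claimant_name, incident_date, vehicle_registration, "
                        "injury_described (true/false). Use null if absent."},
            {"role": "user", "content": document_text},
        ],
        response_format={"type": "json_object"},  # ask for strict JSON back
    )
    return json.loads(response.choices[0].message.content)

# A made-up fragment of claims correspondence for illustration.
letter = (
    "Dear Sirs, I write further to the collision on 14 March 2024 involving "
    "my client, Ms J Smith, and an untraced vehicle. My client sustained a "
    "whiplash injury and seeks recovery of her losses."
)
print(extract_claim_fields(letter))
```

Once fields like these are structured, they can feed the kinds of summaries, models and fraud checks the interviewees describe above.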