“All of AI, not just healthcare, has a proof-of-concept-to-production gap.”
Andrew Ng, co-founder of Google Brain & Coursera
At a recent event on Health & Work by the ABI, the panellists and I were asked to argue For or Against the following proposition: A Data Dichotomy: will greater use of data erode or build Customer Trust? I was given the position of YES, greater use of data will erode Customer Trust – and I must say, it was a lot easier to make this argument than it was to stick to the two-minute time limit we were given to make our ‘bid’ for the audience vote!
The topic of Data and Trust is obviously enormous, so I chose to focus on just one aspect: AI. In the 2020s, using Data means AI – either yours, or someone else’s, and usually both. My contention was that there are some fundamental issues with AI that Insurance needs to get fully to grips with before plunging into the greater use of data NOW, or the outcome will inevitably erode Trust. What follows is not reportage of what I said – I had to pick and choose for that two-minute limit – but an expanded version that fills in some of the detail on why I think Life and Health should think (and do) very hard before plunging in, if what really matters to the firm is building Customer Trust.
- The existing evidence from General Insurance’s journey with algo-driven business and digital corporate risk management demonstrates the scope and scale of the gaps in agreed governance, standards and design guidelines, and in underlying data quality. Life and Health could and should learn from this, and not gallop off in their footsteps!
- In the 2020s, the ethical dimensions of Technology choices matter more than they ever have, as unease and campaigning around ‘Surveillance Capitalism’ continue to grow, both within the Technology communities and among Civic Society Organisations (CSOs).
- Complicating things yet further, it turns out that with experience, issues with the underlying Tech have become clearer and clearer: facial recognition is the poster-child for the consequences of building and selling powerful AI-enabled products designed in a Diversity vacuum – products that either do not work (as with facial recognition and people of colour) or do not work in the way they were sold. As we accumulate lived experience with BigTech, we are learning that bias in design and use is rendering important areas of pervasive technology suspect, or essentially unusable. Andrew Ng’s mind-blowing quote at the top of this article should give everyone pause for deep thought: an indisputable, globally renowned Machine Learning guru is essentially saying ‘case unproven’ when it comes to reliably putting into practice what is lauded in theory – in Healthcare and in ALL AI.
- Experience is catching up with the ‘shock and awe’ of the Digital ‘fireburst’, or ‘googleisation’, we’ve been living through since the 1990s. In the corporate landscape, ‘strategy by FOMO’ has dominated for so long now that it’s hard to see it ever being dislodged – except perhaps by ‘ESG FOMO’, which we’re going to see even more of in the year of COP26. It’s really not surprising that Regulatory activity is now starting in earnest, with the recent EU proposed legislation on AI that effectively puts an end to ‘AI in the wild’. It’s also noteworthy that the UK’s ICO has made the use of AI one of its top three strategic priorities.
- In Life and Health Insurance, as ever, we have some of our own special ‘flavours’ of issue to add to the mix: across far too much of the Life and Health product landscape, the underlying data (for cover and pricing) isn’t trustworthy because it’s out of date – the Institute and Faculty of Actuaries-convened research project to update Diabetes data is the latest example. There are also very live debates on the quality of lead generation, especially in Life Insurance.
- It is obvious, isn’t it, that WHAT you do with that data through your firm’s underwriting philosophy really matters: you can essentially punish the customer with mental health issues for being in a programme of help by applying a higher loading – or not. There is no standard approach; it’s not even discussed that openly. Where’s the transparency for the Customer? Where’s the analysis to back up that decision? When was that ‘model’ last interrogated? How long does a loading on the individual last – forever, regardless of whether they succeed in achieving a state of mental health? And where, precisely, is the human oversight in all of this?
- And finally, there’s an even more fundamental question here for everyone in Insurance, whatever the specialty: what’s the end destination for Insurance in a world of personalisation and hyper-personalisation? How do we describe what we do? For some it’s ‘cross-subsidy’, for others ‘risk pooling’, for yet others it’s being compelled to be responsible for ‘Vulnerable’ customers (whatever that means) when all they’re interested in is their chosen customer segment. I subscribe to the view that to be human is to be vulnerable in many ways, and at many times in our lives, personally and professionally. I believe that Insurance is a social necessity and that if it did not exist, it would need to be invented. There is no such thing as one group of people who are destined to be ‘Vulnerable’, forever and unchangingly, and then a larger group who are not. Misfortune, accident and catastrophe are undiscriminating, respecting neither geography, income bracket nor digital-savviness. People’s close acquaintance with Vulnerability as a result of Covid-19 adds a new dimension and urgency to the unresolved debates: What is Insurance for? Why are we here – and what does that mean for our business models, our colleagues and our customers? What does ‘Fair’ and ‘Fair Value in the Digital Age’ mean in the 2020s?
By acquiescing, a customer is not giving you their trust: they’re giving you their conditional cooperation. You can’t trust something you don’t understand, so greater use of data without being able to address and explain the issues listed above to the Customer’s satisfaction will do the opposite of creating Trust. All of which means that individual leaders, Board Executives and Independent NEDs don’t have simple choices here. The ‘halo effect’ of being like Amazon & Google is pretty much over as societal (and regulatory) trust in BigTech erodes fast. To earn the Social Licence to innovate with Data, Insurance as a sector (and each firm individually) must create a Trust ‘Brand’ for Data of our/their own, and that requires being loud and clear on Governance, Standards, Data transparency, model explainability, accountability, liability and redress for the Customer. As a quick check in your own firm, what are the answers to the following four questions:
- Where is the live inventory of internal algo models used for any purpose in the firm?
- When was the last time a model was decommissioned?
- Do you have access to an accessible (i.e. built for non-techies) visual life-cycle of one piece of customer data (any piece) – from provenance to data cleanse?
- How long ago were key data supplier relationships audited for the above?
I’d argue these are vital questions for anyone sitting on a Board to be asking, but in fact this clarity and transparency should be on tap for any professional, in whatever role, across the firm.
GreenKite can help you, your Board and your teams make sense of the Data and Trust landscapes, unpacking the need-to-know issues from AI to Data Strategy and Delivery, from Culture to Standards to managing outsourced services and selecting solutions providers.