The fields of medicine and life insurance underwriting are not all that different. Both aim to keep people happy and healthy for as long as possible, and success in life insurance is predicated on having timely, complete, and accurate clinical data. Under the hood, the two fields face similar technology challenges.
At the recent Metropolitan Underwriting Discussion (M.U.D.) 52nd Annual Conference, I had the privilege of discussing the rubric for tackling those challenges with Dr. Christy Lane from Health Gorilla and Jas Awla from MIB in a panel moderated by Carolyn McAvinn, FLMI, AALU, PMC-IV, also from MIB, titled “Digital Health Data-Alphabet Soup.” Our dialogue delved into the intricate world of healthcare data quality and interoperability, AI and predictive model integration, and the evolving role of underwriters and healthcare providers in this dynamic landscape.
Data Quality and Interoperability
In both the healthcare and life insurance sectors, we are continually challenged with data quality and interoperability issues associated with harmonizing disparate datasets. Electronic Health Record (EHR) systems, originally designed as billing platforms, often fall short in capturing the nuanced details of patient encounters.
Across many disciplines, end users of health-generated data expect that a fully Socratic thought process goes on in a provider’s head and that all of those rich details then get put into the EHR. As a physician, I was trained to keep an internal monologue, assigning probabilities to the questions I ask and the answers I receive from the patient and my clinical exam. If my patient comes in with chest pain, my questions and observations are geared toward determining the cause. Perhaps it’s a 50-year-old male smoker with a family history of heart disease, and a heart attack seems most fitting; maybe it’s an athlete with chest wall trauma, and a rib fracture moves to the top of my list; or maybe it’s a young female smoker, on birth control, who recently traveled from Europe, and my workup shifts to pulmonary embolism. All of these thoughts and probabilities go through my head, but what is actually documented in the EHR? Chest pain. Many of the details get thinned out or excluded entirely.
In addition to the challenge of summarizing complex and nuanced concepts under a single diagnosis and its diagnosis code, clinical data is subject to variability from EHR to EHR, from one implementation of the same EHR to another, between providers, and even between patients. These discrepancies in data capture, presentation, and quality make it hard to weave disparate data into the one cohesive story required not only for underwriting but, even more critically, for care delivery. To overcome these hurdles, interoperability solutions and structured data frameworks are imperative.
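To make the harmonization problem concrete, here is a minimal sketch of what an interoperability layer does: the same diagnosis arrives in different shapes from two hypothetical EHR exports, and a thin normalization step maps both into one common record. All field names and record shapes here are invented for illustration; real integrations work against standards such as FHIR and far messier payloads.

```python
# Illustrative only: two invented EHR export shapes for the same diagnosis.

def normalize_ehr_a(record):
    """Hypothetical EHR A: diagnoses as 'system|code' strings in a 'dx' list."""
    return [
        {"system": s.split("|")[0], "code": s.split("|")[1]}
        for s in record.get("dx", [])
    ]

def normalize_ehr_b(record):
    """Hypothetical EHR B: diagnoses as nested objects with separate fields."""
    return [
        {"system": d["coding_system"], "code": d["value"]}
        for d in record.get("diagnoses", [])
    ]

# The same clinical fact, captured two different ways:
ehr_a = {"dx": ["ICD10|R07.9"]}  # chest pain, unspecified
ehr_b = {"diagnoses": [{"coding_system": "ICD10", "value": "R07.9"}]}

# After normalization, downstream consumers see one cohesive shape.
assert normalize_ehr_a(ehr_a) == normalize_ehr_b(ehr_b)
```

The point is not the code itself but the pattern: every source system needs its own adapter, and the shared target shape is what lets underwriting and care-delivery systems reason over the combined data.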
Workflow: Codes Instead of Prose and Gray Cases
For underwriting, there are two workflows. The first leverages the power of controlled vocabularies: codes instead of prose. These codes are present in every clinical document (ICD-10, LOINC, CPT, etc.) and represent clinical concepts critical to risk selection, namely diagnoses, tests and their results, and procedures, respectively. They serve as a table of contents for the sum of all the medical data in an applicant’s file. Underwriters can use them to triage long patient stories and delve into the most critical pieces of those stories. More importantly, they can serve as the foundation for automation: rules engines and predictive models can use them for broad-brushed accept and decline decisions. The second workflow covers the gray areas, the cases that are neither a clear accept nor a clear decline. EHRs contain not only the aforementioned structured, coded concepts but also narrative (unstructured) text that supplies the adjectives and adverbs for the codes. Using the chest pain example above, a triage solution can flag a case because it contains the code for chest pain, with the understanding that the differential diagnosis, and therefore the risk, behind that code remains gray. An underwriter can then explore the narrative text in the flagged application to determine whether a musculoskeletal, cardiovascular, or pulmonary cause is the working diagnosis and assign risk accordingly. This is a simple way of helping underwriters confronted with the daunting task of deciphering medical documents and summarizing them accurately; more critically, it lets skilled underwriters work at the top of their training on cases that require critical reads, rather than on those at the extremes of risk.
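The two workflows above can be sketched as a tiny rules engine: clear-cut codes drive automated decisions, while ambiguous codes (like unspecified chest pain) are routed to an underwriter who reads the narrative text. The code sets and routing labels below are invented for illustration; a production engine would use curated code lists and far richer logic.

```python
# Hypothetical code sets; real triage rules would be curated by underwriters
# and medical directors, not hard-coded like this.
AUTO_DECLINE = {"C34.90"}   # e.g., an unambiguous high-risk diagnosis code
GRAY_ZONE = {"R07.9"}       # chest pain, unspecified: cause unclear from the code alone

def triage(diagnosis_codes):
    """Route an application based on its coded diagnoses."""
    codes = set(diagnosis_codes)
    if codes & AUTO_DECLINE:
        return "decline"              # broad-brushed automated decision
    if codes & GRAY_ZONE:
        return "underwriter_review"   # a human reads the narrative to assign risk
    return "accept"                   # no flagged codes: automated accept

# An applicant coded with unspecified chest pain lands in the gray zone:
triage(["R07.9"])  # routes to underwriter_review
```

The design choice this illustrates is the division of labor in the article: automation handles the extremes of risk, and the codes act as the trigger that sends critical-read cases to a skilled human.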
Embracing Technology, Data Science, and AI
It’s crucial to understand that the provision of care is facing scalability hurdles amid an increasingly disengaged provider workforce and the escalating complexity of patient care. Yet history teaches us valuable lessons. Just as the stethoscope faced initial resistance before becoming a ubiquitous tool in medical practice, technology, AI, and data science are surmounting their adoption hurdles to become integral components of the modern healthcare toolkit. Using these tools effectively, medical underwriters become the human judgment for the gray, complex cases and are also poised to serve as the subject matter experts who write the blueprints for the explainability of automation algorithms. While there is a palpable fear in the industry and in the clinical practice of medicine that technology could eventually overtake human judgment, I believe that is far removed from reality. A human in the loop will always be necessary, to define extremes and break ties. Technology will never replace underwriters or clinicians, but the professionals who embrace and efficiently use technology will replace those who do not.
Responsible use of technology, premised on optimized data, enhances predictive modeling and fosters responsible AI deployment. This iterative process entails right-sizing inputs and outputs for specific communities and ensuring clinical explainability to mitigate bias. It avoids black-box solutions that obscure human understanding and transparency. It is foundational for equitable, contextual, accurate, and ethical decision making in both the clinical realm and the underwriting risk-selection realm, two realms that are inextricably linked.
In conclusion, navigating the complexities of digital health data in the life insurance industry requires formalized collaboration between underwriters, informaticists, and technologists. It also requires a semantic understanding of how health data is captured during an episode of care, what drives provider documentation, and how variably it is represented; the underwriter needs to be on the same wavelength as the documenting provider. It requires attention to quality, flexibility, and a change-management culture. An interdisciplinary collaboration premised on interoperability, data transparency, and outcome explainability ushers in an evidence-based, personalized, proactive, and innovative approach to the underwriting of today and the future.