Is there any sector more ripe for Artificial Intelligence (AI) transformation than healthcare? Over the past decade, health systems around the world have come under significant strain from growing, ageing populations. There is a clear need to maximise efficiency, lighten the burden on stretched human resources and harness the medical sector’s mountains of data. It comes as no surprise that projections for global spending on healthcare AI by 2025 range anywhere from $644 million to $126 billion.

Of course, the overriding focus should not be what monetary value is brought to the market, but how AI can improve human lives. It is not an exaggeration to say that the potential for this technology to enhance the way we manage and treat illness is almost unimaginable today.

Today’s smart tech monitors everything from blood sugar to blood pressure. It is not unreasonable to think that by 2050, the laboratory of the future will not be a pharmaceutical facility but a patient’s own body, with AI highlighting health risks and enabling humans to make better care decisions.

Whilst the opportunity to drastically improve healthcare is too compelling to ignore, it does raise a host of complex questions around how we should approach AI’s healthcare revolution — and more significantly, how quickly we can come to trust the guidance of a machine when it concerns the wellbeing of a human.

Laying foundations

Although the revolutionary potential is clear to see, AI in healthcare is still in its infancy. Before we progress too far in implementing this technology, we must ask ourselves: how much oversight are we willing to surrender to a machine? Managing AI’s deployment responsibly, in a way that prevents its capabilities rapidly spiralling beyond human oversight, should be paramount.

We should not view this technology as a silver bullet for healthcare’s many and varied challenges, nor as a wonder drug to fix patients or radically overhaul how healthcare providers operate. Let’s instead think about how we utilise technology to augment human behaviour, so that patients, caregivers, healthcare professionals and payers can make better adherence decisions.

But what does a balanced AI deployment look like in practice? In 2017 I was part of a Medtech team examining how allergic reactions in children could be prevented in the absence of a caregiver. We deployed a digital healthcare system that nudged an adolescent with allergies, through a mobile device, to pass on dietary information to others at meal times, while reminding them to scan food to assess the risk of an allergic reaction. We observed two behaviour changes: the adolescent became more receptive to dietary advice, while the adult caregivers became less overprotective once they saw the change in the adolescent’s adherence behaviour.

We must constantly ask whether our goal in incorporating AI into healthcare is to empower people — whether healthcare professional, caregiver or patient — to better manage illness. The temptation to act on aspiration alone, to overhaul entire systems or to embed AI where it is not needed should be resisted.

Measured — and measurable — responsibility

One of AI’s biggest opportunities in medicine is ‘predictive care guidance’, which allows both patients and healthcare providers to leverage technology and make better diagnostic decisions. But when put into practice, how do we balance AI’s capabilities with responsibility? Who has the final say? In essence, what is AI’s role and responsibility?

Again, the power of this technology lies in how much it augments the human. The intelligence comes from monitoring and translating a situation — this patient is at risk of heart failure, for example — while the healthcare professional observes and acts accordingly.
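To make this division of labour concrete, here is a minimal sketch in Python using entirely synthetic data; the vital signs, model and review threshold are illustrative assumptions, not a clinical tool. The machine scores risk, and anything above a threshold is flagged to a clinician rather than acted upon:

```python
# Minimal human-in-the-loop risk flag. Synthetic data only; the feature
# names, model and 0.7 review threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic patient vitals: [age, systolic_bp, resting_heart_rate]
X = rng.normal(loc=[65, 130, 75], scale=[10, 15, 10], size=(500, 3))
# Synthetic stand-in labels for "developed heart failure within a year"
y = (0.03 * X[:, 0] + 0.02 * X[:, 1] + 0.01 * X[:, 2]
     + rng.normal(size=500) > 6.2).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

def triage(patient_vitals, review_threshold=0.7):
    """Return a risk score plus a flag; the machine never acts alone."""
    risk = model.predict_proba([patient_vitals])[0, 1]
    if risk >= review_threshold:
        return risk, "flag for clinician review"
    return risk, "continue routine monitoring"

print(triage([72, 155, 88]))
```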

This ‘translation’ aspect should be a cornerstone of how AI is deployed, as it has real potential to solve one of healthcare’s main challenges — interpretability. Healthcare is awash with data — the US system alone generates one trillion gigabytes of information every year — but approximately 80% remains unstructured and split between various silos, from clinical notes and laboratory results to medical research.

When this information is not regarded as a whole, healthcare professionals simply do not have the full picture, meaning the latest treatment recommendations may be missed. AI can be used to collate this information and provide professionals with clear translations, logging the decisions and actions taken and looping this information back into the system for continuous learning.
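As a rough sketch of that collation-and-feedback pattern, the Python below pulls records from hypothetical silos into a single patient view and logs what the clinician actually did, ready to feed back into the system. Every source name and field is an illustrative assumption:

```python
# Sketch of unifying siloed records and logging decisions for learning.
# The record sources and field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class PatientView:
    patient_id: str
    clinical_notes: list = field(default_factory=list)
    lab_results: list = field(default_factory=list)
    decision_log: list = field(default_factory=list)  # feeds future retraining

    def add_record(self, source: str, record: dict):
        """Route a record from a given silo into the unified view."""
        {"notes": self.clinical_notes, "labs": self.lab_results}[source].append(record)

    def log_decision(self, recommendation: str, clinician_action: str):
        """Capture what was recommended and what the human actually did."""
        self.decision_log.append(
            {"recommended": recommendation, "action": clinician_action}
        )

view = PatientView("patient-001")
view.add_record("labs", {"test": "BNP", "value": 900, "unit": "pg/mL"})
view.add_record("notes", {"text": "Reports breathlessness on exertion."})
view.log_decision("flag possible heart failure", "ordered echocardiogram")
print(view.decision_log)
```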

Putting AI in the hands of all who will use it

With healthcare resources stretched, it is tempting to imagine AI as a tool for busy healthcare professionals. Although this is a clear benefit, we should be more ambitious in developing this technology for all areas of the healthcare ecosystem.

Systems should be accessible to everyone and help all involved work together for the benefit of the patient — doctors and nurses, parents and other caregivers, payers, and patients themselves. Providing a new piece of technology to a patient with a chronic disease will not cure them — instead, it is how this technology is used that will define its impact.

To put it another way, AI could tell a doctor that a patient is at risk of illness, but once the patient has left the hospital, how can the doctor ensure the patient adheres to their treatment? What if this technology could be used to remind the patient, or to pass on the healthcare professional’s guidance to the primary caregiver? Similarly, could AI prompt a patient with advice — such as reminding an asthmatic to avoid pollen-heavy regions — and also flag this to a caregiver?

AI may be useful in reminding a patient when they are due for a check-up or further treatment, but will every patient heed the advice of a machine? It is often more effective for the machine to flag this information to a healthcare provider or caregiver so they can contact the patient directly, delivering the guidance with a more human, conscious and empathetic touch.
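Sketched in code, that routing decision might look like the following; the event names and rules are purely illustrative assumptions:

```python
# Sketch of the escalation pattern described above: rather than nudging the
# patient directly, the system routes reminders through a human where needed.
from enum import Enum

class Recipient(Enum):
    PATIENT = "patient"
    CAREGIVER = "caregiver"
    PROVIDER = "healthcare provider"

def route_reminder(event: str, patient_responds_to_machine: bool) -> Recipient:
    """Decide who should deliver the nudge for a given adherence event."""
    if patient_responds_to_machine:
        return Recipient.PATIENT      # a direct reminder is enough
    if event in {"missed_dose", "check_up_due"}:
        return Recipient.CAREGIVER    # a familiar human follows up
    return Recipient.PROVIDER         # clinical guidance needed

print(route_reminder("check_up_due", patient_responds_to_machine=False))
# Recipient.CAREGIVER
```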

Saving the healthcare provider-patient relationship

For all the noise, the most promising aspect of AI in healthcare is how it empowers the human element and rekindles the crucial relationships between caregivers, healthcare professionals, payers and patients. At a time when resources are stretched, these relationships are difficult to maintain — how much attention can a doctor with hundreds of patients pay to one individual? Perhaps technology’s true value lies in plugging this gap.

If this is the case, it will be particularly pertinent to consider the increasingly influential role of explainable artificial intelligence (XAI). The complexity of modern models and algorithms means it can often be difficult to glean how they reach their conclusions and how this shapes their behaviour. So how do we ensure XAI enables more transparent AI systems, protected from bias?
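One well-established technique in this space is permutation importance, which measures how much each input drives a model’s predictions. The sketch below, on synthetic data with assumed feature names, shows how the dominant driver of a model’s behaviour can be surfaced for inspection:

```python
# Sketch of one XAI technique, permutation importance. Synthetic data only;
# the feature names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["age", "systolic_bp", "cholesterol"]
X = rng.normal(size=(300, 3))
y = (X[:, 1] > 0).astype(int)  # only systolic_bp actually drives the label

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# A transparent readout: the dominant feature should stand out clearly.
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

Readouts like this do not make a model fully transparent, but they give sector professionals a concrete starting point for questioning its behaviour.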

At the Women in AI Dinner in Boston this May 21, Alison O'Connor, Senior Data Scientist at QuantumBlack, will share her work in probabilistic and statistical modelling. At QuantumBlack, we put human, machine and performance at the centre of every project, bringing together data science, engineering and design teams from the outset and combining their work with insight from healthcare sector specialists. "This fusion encourages us to continuously question and experiment with the bigger picture — not just how an algorithm will sit in a human’s life or in a wider business, but where it sits in the broader healthcare industry, how sector professionals can interpret the algorithm’s results and its impact on the ecosystem."

Only time will tell, but this big-picture approach may result in technology being incorporated in a measured yet ambitious way that prioritises health adherence — delivering better outcomes for everyone.