Surgical intelligence: human, artificial, emotional.

The burgeoning field of artificial intelligence (AI) is garnering more and more attention, but will it enhance or undermine health? Current AIs are already powerful, but only in narrowly defined domains. Most involve machine learning and are able to process extremely large volumes of data, draw out patterns, and make predictions. However, the task at hand must be extremely well defined. Only a clear and limited objective allows the machine to determine the optimal way to solve the problem.

“Computers are useless. They can only give you answers.” – Pablo Picasso

Human behaviour is neither clear nor limited. As such, AIs are a long way from understanding or replicating even modest aspects of human behaviour. That said, their capacity to influence behaviour is undeniable. We have seen just how powerful AIs embedded in social media can be (consider in particular Cambridge Analytica’s insinuation into Facebook and its downstream impact on geopolitics).

Perhaps healthcare can be a beneficiary of the power of AI. Superficially at least, the aim of enhancing health seems an unambiguously good pursuit. But imagine tasking an AI with optimising human health. One would need to define carefully what ‘health’ is before asking a computer to optimise it. There are nearly 8 billion people on the planet, all with differing outlooks. Whilst the WHO tells us health is “a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity”, what then constitutes well-being? We are organisms with a finite lifespan – we cannot have health forever.

Some cultures and people value life above all, tolerating severe disability and ill-health. Others value quality over duration. Taking a neurosurgical example, the operation that saves a life but leads to an existence of disability and dependence is intolerable to many. Could an AI ever come to appreciate the nuance here, or would it simply compute to save a life? Given their growing power, however, perhaps it is in just such blurred areas that AI may come into its own: constantly asking questions, observing behaviour on a global scale, and providing a route to better patient-centred care.

These ideas can become esoteric, circuitous and seemingly unanswerable. Whilst it is easy to say that time will tell, as a profession we need to be fully engaged. We need to ensure that appropriate questions are asked of this technology, that it is driven towards positive goals, and that the power here does not rest only with industry.

Mark Hughes

Director, eoSurgical

Skull-base fellow, Leeds General Infirmary


Email: mark.hughes@eosurgical.com

Twitter: @eosurgical