Catholic Medical Quarterly Volume 73(1) February 2023

Artificial intelligence - AI: what is it and what does it mean for Catholics in healthcare?

Dr Caroline Zdziarska

Artificial Intelligence (AI) is very much here, mushrooming at breakneck speed in all walks of life - like it or not! Prof John Wyatt, Professor of Neonatal Paediatrics and senior researcher at the Faraday Institute for Science and Religion, Cambridge, describes it as:

"one of the greatest challenges that will face the human race in the coming decades. As technology advances it raises the age-old question of ‘What does it mean to be human?’, but in new and surprising ways. What will it mean to be human in a world of intelligent machines? And if the machines can take over most of the roles and tasks which we thought were uniquely human, ‘What are human beings for?’ "

The concept of using computers to simulate intelligent behaviour and critical thinking was first described by Alan Turing in 1950. In his paper 'Computing Machinery and Intelligence', Turing described a simple test, later known as the “Turing test,” to determine whether computers were capable of intelligent behaviour. In 1956 John McCarthy coined the term artificial intelligence (AI), describing it as “the science and engineering of making intelligent machines". AI began with simple “if-then” rules, advancing massively over the decades to more complex algorithms that some would argue perform similarly to the human brain.

AI has a number of subfields. In brief:

  • Machine Learning (ML) is pattern identification and analysis; machines can improve with experience from provided data sets.
  • Deep Learning (DL) is composed of multi-layer neural networks which enable a machine to learn and make decisions on its own.
  • Natural Language Processing (NLP) is the process that enables computers to extract data from human language to make decisions based on that information.
  • Computer Vision (CV) is the process by which a computer gains information and understanding from a series of images or videos.
  • Convolutional Neural Network (CNN) is a type of deep-learning network architecture that learns directly from data.
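The contrast between the early “if-then” rule systems and machine learning can be sketched in a few lines of code. This is purely an illustrative toy example, not drawn from any real clinical system; all data, names and thresholds are invented:

```python
# Toy contrast: hand-written "if-then" rule vs. a learned rule.
# All figures here are invented purely for illustration.

# 1950s-style rule-based AI: the cut-off is chosen by a human expert.
def rule_based_flag(lesion_size_mm):
    if lesion_size_mm > 6.0:  # expert-chosen threshold
        return "suspicious"
    return "benign"

# Machine learning: the cut-off is *learned* from labelled examples.
def learn_threshold(examples):
    """Pick the cut-off that best separates the labelled cases."""
    best_t, best_correct = 0.0, -1
    for t in sorted(size for size, _ in examples):
        correct = sum(
            (size > t) == (label == "suspicious")
            for size, label in examples
        )
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

training = [(2.0, "benign"), (3.5, "benign"), (4.0, "benign"),
            (7.0, "suspicious"), (8.0, "suspicious"), (9.5, "suspicious")]
threshold = learn_threshold(training)

def learned_flag(lesion_size_mm):
    return "suspicious" if lesion_size_mm > threshold else "benign"
```

The point of the sketch is that in the second approach no human writes the decision rule; the machine derives it from the data it is given, and with more data the rule can improve - which is the essence of machine learning as summarised above.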

So how is AI being used in healthcare, and how might it be used in future?

A few examples:

  • Google’s DeepMind has been working with Moorfields Eye Hospital since 2016 to help clinicians improve diagnosis and treatment, by identifying sight-threatening eye conditions within seconds, and ranking patients in order of urgency for treatment.
  • In dermatology, skin cancer could be detected more accurately by an AI system (using a deep-learning convolutional neural network) than by dermatologists. An oncology journal reported in 2018 that dermatologists accurately detected 86.6% of skin cancers from the images, compared with 95% for the CNN.
  • In cardiovascular medicine, AI algorithms have shown promising results in accurately diagnosing and risk-stratifying patients with suspected coronary artery disease. In gastroenterology, AI-assisted endoscopy lets clinicians more rapidly identify diseases, determine their severity, and visualise blind spots.
  • Early trials of AI systems for detecting early gastric cancer have shown sensitivity close to that of expert endoscopists. Oncology, radiology and pathology are all hot areas where AI is potentially enabling much earlier diagnosis.
  • In oncology, machine learning algorithms could rank drug therapies, so that medication becomes 'truly personalised'. In psychiatry, AI applications are still at the proof-of-concept stage, but evidence is growing on predictive modelling of diagnosis and treatment outcomes, including the use of chatbots and conversational agents that imitate human behaviour in the study of anxiety and depression.
  • There are even brain-machine interface (BMI) systems that enable the treatment of neurological disorders including cognitive, sensory, and motor dysfunctions.

It all sounds rather positive, doesn't it? Mention of AI in psychiatry and of BMIs may be expected to raise eyebrows, but otherwise, are there concerns? Our Symposium on 22 April in Nottingham provides an opportunity for you to hear about current AI research and practice, as well as a survey of some of the ethical considerations likely to arise in both the short and longer term.

Meanwhile, a few thoughts:

Pope Francis has shown great engagement with the question of the ethics of AI. In January 2019 he wrote to the Pontifical Academy for Life, in the letter Humana Communitas, that such technology has “enormous implications,” as it “touches the very threshold of the biological specificity and spiritual difference of the human being.”

In 2020 the Pontifical Academy for Life sponsored the ‘Rome Call for AI Ethics’, to “promote a sense of shared responsibility among international organisations, governments, institutions and technology companies in an effort to create a future in which digital innovation and technological progress grant man his centrality.”

Our own Government, UNESCO and many others have produced recommendations, and it is very much an area of ongoing work.

The chief concerns already being flagged include, bearing in mind the incredible speed of development, the risk that enthusiasm for clear benefits and utility could outpace attention to security, reliability, data collection and privacy. Even with the best will, it is acknowledged that bias can enter systems, exacerbating existing inequalities and leading to discrimination. Then there’s accountability: who will actually be responsible for errors that may occur? Are clinicians at risk of becoming deskilled? At present clinicians envisage working in parallel with AI, but in future AI could be the decision maker. And as AI advances, there is a real risk of a lack of transparency: after all, can the machine explain its decision, and will we understand?

At a more fundamental level, a machine, even with the most advanced “deep learning” is not and never will be a human being “made in the image and likeness”. All machines remain devoid of the eternal meaning that entails.

Dominican Father Ezra Sullivan, an expert in the philosophy of science, metaphysics and Abrahamic religions at the Pontifical University of St. Thomas Aquinas has warned AI could potentially lead humans to “treat each other like machines and also to lose a sense of their own dignity.” People could then see themselves “no longer as meaningful agents, but rather as passive objects of manipulation by machines or elites with power.”

Prof John Wyatt has written, “As we reflect on the possibilities that technology offers, we must ask how we can build a future in which physically embodied human beings can flourish, the vulnerable can be protected, and face-to-face embodied relationships be celebrated and protected.”

Finally, a quote from Alan Turing himself, in The Times, 11 June 1949: "This is only a foretaste of what is to come, and only the shadow of what is going to be".

Please save the date: Saturday 22 April - our National Symposium in Nottingham, “Ethical Questions on AI in Healthcare”.

Dr Caroline Zdziarska is a retired community paediatrician and former GP.

Reference