ARTIFICIAL intelligence (AI) is a term that has catapulted into the public’s consciousness in recent years, as the technology has become easily accessible for the first time.
Media coverage has focused on the negative impacts AI can have on political discourse, but it is already being used in Scotland’s NHS.
For our Solutions for Scotland series with The National, The Ferret explains how AI works, how it is being used in healthcare and what the future might hold.
What is AI?
Artificial intelligence (AI) is an umbrella term that describes technology that enables computers to complete tasks that would normally need human intelligence and problem-solving abilities.
Generative AI uses artificial intelligence to create something new, such as text, images, audio or video. Machine learning describes the use of data and algorithms to imitate the way humans learn, improving the accuracy of AI over time.
Algorithms are the lists of rules or instructions that are used to perform certain tasks, like problem-solving. These are the basis of AI and tell the program how to operate on its own.
What uses could it have in healthcare?
AI has the potential to assist healthcare professionals in numerous ways.
One key application is in medical imaging. This is when pictures are taken of the human body (like X-rays) to help diagnose or identify issues. There are multiple uses of AI in this field.
AI can detect and categorise types of tumours, identify infections such as pneumonia, and spot bone fractures.
It can help doctors to distinguish benign tumours from malignant ones in tissue samples and can diagnose skin cancers from pictures of abnormalities.
AI can speed up time-consuming administrative tasks, like note-keeping and data input. Tools exist for appointment scheduling, predicting missed appointments and rescheduling. It can be used to improve efficiency in things like bed management and staffing.
Prevention of illnesses is another area where AI is believed to have significant use.
This could include managing long-term conditions and identifying people at risk of becoming ill. It is hoped this could reduce the number of people coming to hospital and ease pressure on services. Automated chatbots could also be used to advise patients on how to manage conditions independently.
How is it being used in healthcare in Scotland?
In 2021, the Scottish Government stated that it would be a “leader in the development and use of trustworthy, ethical and inclusive AI”.
However, a pledge to deliver a new AI life sciences hub has stalled, with the Scottish Government having “re-prioritised resources elsewhere due to fiscal pressures”, according to a report on AI by the Scottish Parliament Information Centre (SPICe).
Powers over the use of AI in healthcare are split between the UK and Scottish Parliaments. While health and economic development are devolved, the regulation of medical devices, including AI tools, is reserved to the UK, as is data protection. This means any development of AI tools in Scottish healthcare must follow UK-wide rules.
There are currently a limited number of AI tools being used within Scotland’s NHS, according to SPICe. These include bone scanning in children and assisting in radiotherapy for cancer patients.
There is also widespread use of AI to improve imaging. CT scanners in the Golden Jubilee Hospital in Clydebank have in-built AI that allows clearer images to be produced in a shorter time.
Numerous pilots are currently ongoing that aim to bring AI into various parts of the NHS.
These include projects using AI in breast cancer screenings and chest X-rays, helping patients self-manage long-term conditions, and attempting to cut waiting times through improved scheduling.
What are the potential issues?
Concerns have been raised over the use of AI in some aspects of healthcare.
There are concerns that over-reliance on AI could put patients at risk through misdiagnosis or incorrect treatment recommendations. This risk is greatest if AI works independently, without oversight from human experts.
Systematic bias in AI has also been identified. This happens when the data used to train an AI program is incomplete or poorly balanced. There is evidence that marginalised groups in society can be less accurately represented in medical data, which could skew machine-learning AIs.
This could deepen inequalities already felt in healthcare.
AI tools require a huge amount of data for training, and public health services often rely on private companies to produce and manage AI products.
Privacy and data protection have long been highlighted as issues within AI, and many experts have raised concerns about the potential misuse of patient data by companies providing AI solutions in healthcare.