The Nuffield Council on Bioethics has released a briefing note setting out what it sees as the big ethical questions raised by the use of artificial intelligence in healthcare. The note acknowledges AI's huge potential to make healthcare more efficient and user-friendly, and to deliver faster and more accurate diagnoses, but its focus is on the serious questions the healthcare industry needs to tackle head-on if AI is to win public trust.
AI is gaining ground in many areas of healthcare, including detecting disease, managing long-term conditions, delivering health services and identifying new medicines. Tech giants such as Microsoft, IBM and Google are plowing investment into healthcare AI research.
Among many questions, the note asks:
- what is the danger of AI making the wrong decisions?
- who is responsible for those decisions?
- what is the potential for the malicious use of AI?
- will AI cause human contact in healthcare to decrease?
- what does AI mean for the skill requirements of health professionals?
The briefing note cites a 2015 trial in which an app was built to predict complications arising from pneumonia. The app mistakenly told doctors to discharge patients with asthma "due to its inability to take contextual information into account." On the other hand, the note suggests AI can also veer toward being overly cautious, increasing the demand for (and therefore the cost of) tests and treatments that aren't needed.
Further, the note raises concerns that issues with AI can be hard to identify and fix, either because the underlying code is proprietary (and therefore secret) or simply too complicated to understand. This can make it difficult to verify the quality of an AI's decision-making, or to identify biases or errors in the data it's using.
As an example, the UK's House of Lords Select Committee on AI has raised concerns that, if the data used to train AI is not representative of the whole population, it could make unfair or even discriminatory decisions. Alternatively, the benefits of using AI may not be felt evenly across society.
The note highlights the potential isolating effects of AI if it's used to replace contact with healthcare professionals or even family, especially when it comes to care in the home. Conversely, it points to the potential of AI in home treatment to support patients' independence, dignity and quality of life.
It also raises concerns about the skills of healthcare staff. If expertise is transferred to machines, and less-skilled workers are hired as a result, what happens when those machines fail?
"Our briefing note outlines some of the key ethical issues that need to be considered if the benefits of AI technology are to be realized, and public trust maintained," said Director of the Nuffield Council Hugh Whittall in a press release. "The challenge will be to ensure that innovation in AI is developed and used in a ways that are transparent, that address societal needs, and that are consistent with public values."
"AI technologies have the potential to help address important health challenges," the note concludes. "But might be limited by the quality of available health data, and by the inability of AI to possess some human characteristics, such as compassion."
You can read the full briefing note online.
Source: Nuffield Council on Bioethics