
Tough questions posed for the use of AI in healthcare

Does AI in healthcare jeopardize the human touch?

The Nuffield Council on Bioethics has released a briefing note setting out what it sees as the big ethical questions around the use of artificial intelligence in healthcare. The note acknowledges AI's huge potential to make healthcare more efficient and user-friendly, and to deliver faster and more accurate diagnoses, but its focus is on the serious questions the healthcare industry needs to tackle head-on if public trust in AI in healthcare is to be gained.

AI is gaining ground in many areas of healthcare, including detecting disease, managing long-term conditions, delivering health services and identifying new medicines. Tech giants like Microsoft, IBM and Google are plowing investment into healthcare AI research.

Among many questions, the note asks:

  • what is the danger of AI making the wrong decisions?
  • who is responsible when those wrong decisions are made?
  • what is the potential for the malicious use of AI?
  • will AI cause human contact in healthcare to decrease?
  • what does AI mean for the skill requirements of health professionals?

The briefing note cites one 2015 trial in which an app was built to predict complications arising from pneumonia. The app made the mistake of telling doctors to discharge patients with asthma "due to its inability to take contextual information into account." On the other hand, the note suggests AI can also veer towards being overly cautious, increasing the demand for (and therefore the cost of) tests and treatments that aren't needed.
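That failure mode, a model blind to the context behind its training data, is easy to reproduce. The sketch below is a hypothetical illustration in Python (entirely synthetic data; not the actual 2015 system): a model learns that asthma lowers pneumonia mortality simply because, in the historical data, asthmatic patients received aggressive treatment that was never recorded as an input.

```python
# Minimal sketch of a context-blindness failure, on synthetic data.
# Assumption: in the historical records, asthma patients with pneumonia
# were routinely given intensive treatment, so they died less often.
# A model that never sees the treatment concludes asthma lowers risk.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
asthma = rng.integers(0, 2, n)        # 1 = patient has asthma
severity = rng.normal(0.0, 1.0, n)    # underlying illness severity

# Hidden confounder the training data never records: aggressive care
# for asthma patients, which sharply reduces their mortality.
treated_aggressively = asthma == 1
risk = severity + 1.0 * asthma - 2.5 * treated_aggressively
died = (rng.normal(0.0, 1.0, n) < risk - 1.0).astype(int)

# The model only sees asthma status and severity, no treatment context.
X = np.column_stack([asthma, severity])
model = LogisticRegression().fit(X, died)
print("learned asthma coefficient:", round(model.coef_[0][0], 2))
```

Under these assumptions the learned asthma coefficient comes out negative, so a naive triage rule built on the model would flag asthmatic patients as lower risk and recommend sending them home, which is exactly the mistake the briefing note describes.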
Further, the note raises concerns that issues with AI can be hard to identify and fix, either because the underlying code is proprietary (and therefore secret) or just too complicated to understand. These issues can make it hard to verify the quality of an AI's decision-making, or to identify biases or errors in the data it's using.

As an example, the UK's House of Lords Select Committee on AI has raised concerns that, if the data used to train an AI is not representative of the whole population, it could make unfair or even discriminatory decisions. Alternatively, it may be that the benefits of using AI aren't felt evenly across society.

The note highlights the potential isolating effects of AI if it's used to replace contact with healthcare professionals or even family, especially when it comes to care in the home. Conversely, it points to the potential benefits of AI in home treatment, bolstering patients' independence, dignity and quality of life.

It also raises concerns about the skills of healthcare staff. If the expertise is transferred to the machine, and less-skilled workers are hired as a result, what happens when those machines fail?

"Our briefing note outlines some of the key ethical issues that need to be considered if the benefits of AI technology are to be realized, and public trust maintained," said Director of the Nuffield Council Hugh Whittall in a press release. "The challenge will be to ensure that innovation in AI is developed and used in a ways that are transparent, that address societal needs, and that are consistent with public values."

"AI technologies have the potential to help address important health challenges," the note concludes. "But might be limited by the quality of available health data, and by the inability of AI to possess some human characteristics, such as compassion."

You can read the full briefing note online.

Source: Nuffield Council on Bioethics

1 comment
Juanjo
Difficult questions. But they are easier to answer than the AI development needed. The criteria used by the main players (critically few, as everybody knows) are commercial. Compassion is not part of the specs (how do you measure it?). Also, when you get code, any type of code, there's some percentage of success in breaching its security, so isolation, and more isolation, and more isolation is needed. Human contact will decrease, of course, and so will skills and human knowledge. They will be sucked up by those main players, as is already happening in many fields. Strong ethical tests are needed to clear any healthcare AI. Human backups are also needed...