
The case for a code of conduct for AI in healthcare

The rise of generative AI has prompted calls for the introduction of a framework to regulate its use in healthcare

The rise of generative AI has prompted an AI ethicist to propose a framework to mitigate the risks of using the ever-developing tech in the healthcare space. The proposal coincides with the chief executive of OpenAI, the company behind ChatGPT, urging US legislators to start regulating AI for the safety of humanity.

Science fiction writer Isaac Asimov introduced his Three Laws of Robotics in the 1942 short story “Runaround”. He died in 1992, long before he could witness the rise of generative AI that has taken place in recent years.

Generative AI includes algorithms such as ChatGPT or DALL-E that can create new content – including text, images, audio, video and computer code – from the data they’re trained on. Large language models (LLMs) are a key component of generative AI: neural networks trained on vast quantities of unlabeled text using self- or semi-supervised learning.
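For the technically curious, the “self-supervised learning” behind LLMs boils down to next-token prediction: the model repeatedly guesses the next token in unlabeled text and learns from its mistakes. The sketch below (in Python, using the PyTorch library) is a toy illustration of that objective only; the single-layer model, vocabulary size and random tokens are stand-ins, not any real system’s architecture or data.

```python
# Toy sketch of the self-supervised next-token-prediction objective.
# Real LLMs use deep transformer networks and web-scale text corpora;
# everything here is a minimal stand-in for illustration.
import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 32          # toy vocabulary and model width
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim), # map token IDs to vectors
    nn.Linear(embed_dim, vocab_size),    # predict a score for every next token
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random token IDs stand in for tokenized unlabeled text.
tokens = torch.randint(0, vocab_size, (1, 16))
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # "labels" are just the text shifted by one

for step in range(100):
    logits = model(inputs)                                   # (batch, seq, vocab)
    loss = loss_fn(logits.reshape(-1, vocab_size),
                   targets.reshape(-1))                      # penalize wrong next-token guesses
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key point is that no human labeling is needed: the training signal comes entirely from the text itself, which is what lets these models scale to enormous datasets.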

The abilities of generative AI have expanded rapidly. In healthcare, it has been used to predict patient outcomes by learning from large patient datasets, has diagnosed rare diseases with remarkable accuracy, and has passed the United States Medical Licensing Exam, achieving a 60% grade without specialized medical training.

The potential for AI to enter the healthcare space and replace doctors, nurses and other health professionals prompted AI ethicist Stefan Harrer to propose a framework for generative AI use in medicine.

Harrer, who is the Chief Innovation Officer at the Digital Health Cooperative Research Center (DHCRC) and a member of the Coalition for Health AI (CHAI), says that the problem with using generative AI is its ability to generate convincing content that’s false, inappropriate, or dangerous.

“The essence of efficient knowledge retrieval is to ask the right questions, and the art of critical thinking rests on one’s ability to probe responses by assessing their validity against models of the world,” Harrer, who is based in Melbourne, Australia, said. “LLMs can perform none of these tasks.”

Harrer believes that generative AI has the potential to transform healthcare, but it’s not there yet. To that end, he proposes the introduction of an ethically based regulatory framework of 10 principles that, he says, would mitigate the risks of generative AI in healthcare:

  1. Design AI as an assistive tool that augments human decision-makers' capabilities but doesn’t replace them.
  2. Design AI to produce metrics around performance, usage and impact that explain when and how it is used to assist decision-making, and to scan for potential biases.
  3. Design AI that’s based on, and will adhere to, the value systems of target user groups.
  4. Declare the purpose and use of AI from the outset of conceptual or development work.
  5. Disclose all data sources used for training the AI.
  6. Design the AI to label AI-generated content clearly and transparently.
  7. Regularly audit the AI against data privacy, safety and performance standards.
  8. Document and share audit results, educate users about the AI’s capabilities, limitations and risks, and improve AI performance by retraining and updating algorithms.
  9. When employing human developers, ensure that fair work and safe work standards are applied.
  10. Establish a legal precedent that clearly defines when data may be used for AI training, and establish copyright, liability and accountability frameworks governing training data, AI-generated content and the impact of human decisions made using that data.

Interestingly, Harrer’s framework coincides with calls by the chief executive of ChatGPT maker OpenAI, Sam Altman, for US legislators to introduce government regulation to mitigate the potential risks AI poses to humanity. Altman, who co-founded OpenAI in 2015 with backing from Elon Musk, has suggested that the government introduce licensing and testing requirements before more powerful AI models are released.

Over in Europe, the AI Act is set to go to a vote next month in the European Parliament. If passed, the legislation could see bans on biometric surveillance, emotion recognition, and some AI systems used in policing.

Harrer’s fairly general framework could be applied to many workplaces where AI risks replacing human workers. And it seems to have arrived at a time when people, even those responsible for creating the technology, are asking the world to hit pause.

Is healthcare more at risk than other employment sectors? Is a framework like this beneficial, and, importantly, would it indeed mitigate the risks given the speed at which AI is improving? Only time will provide us with answers to these questions.

The paper was published in the journal eBioMedicine.

Source: Digital Health Cooperative Research Center

3 comments
anthony88
I recently asked GPT about the ethics and implications of using AI in ICU monitoring to make life-and-death decisions. It said it would do whatever humans wanted. That worries me.
*Joe*
“7. Regularly audit the AI against data privacy, safety and performance standards.
8. Document and share audit results, educate users about the AI’s capabilities, limitations and risks, and improve AI performance by retraining and updating algorithms.”

Can we get these two applied to human doctors? It would be nice to know which doctors don’t meet performance standards, and maybe retrain them. I know I’ve met a few of them.
DOC HOLLYWOOD
The initial coding for AI was done by humans with hardwired biases. Most of those humans were/are white and Asian. It’s not a stretch to say that the “colorblind” AI algorithms are likely to deal with Black people in a negative or hostile manner. Of course, the hostility can now be carried out without a human to point to as the culprit.
Which was probably the plan all along.