The rise of generative AI has prompted an AI ethicist to propose a framework for mitigating the risks of using the fast-developing technology in healthcare. The proposal coincides with the chief executive of OpenAI, the company behind ChatGPT, urging US legislators to start regulating AI for the safety of humanity.
Science fiction writer Isaac Asimov introduced his Three Laws of Robotics in the 1942 short story “Runaround”. He died in 1992, long before he could witness the rise of generative AI that has taken place in recent years.
Generative AI encompasses algorithms such as ChatGPT and DALL-E that can create new content – including text, images, audio, video and computer code – from the data they are trained on. Large language models (LLMs) are a key component of generative AI: neural networks trained on large quantities of unlabeled text using self-supervised or semi-supervised learning.
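For readers curious what “generating new text from a trained model” looks like in practice, here is a minimal sketch using the open-source Hugging Face transformers library and the small GPT-2 model; it is purely illustrative and is not the healthcare system or framework discussed in the article.

```python
# Minimal illustration of generative text AI: a small pretrained language model
# (GPT-2) continues a prompt based on patterns learned from its training text.
# Illustrative sketch only; not the systems Harrer's paper evaluates.
from transformers import pipeline

# Load a small, openly available text-generation model.
generator = pipeline("text-generation", model="gpt2")

prompt = "A large language model can assist clinicians by"
result = generator(prompt, max_new_tokens=40, do_sample=True)

# The model returns the prompt plus a generated continuation.
print(result[0]["generated_text"])
```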
The abilities of generative AI have expanded rapidly. In healthcare, it has been used to predict patient outcomes by learning from large patient datasets, has diagnosed rare diseases with remarkable accuracy, and has passed the United States Medical Licensing Exam with a score of around 60%, without any dedicated medical training.
The potential for AI to enter the healthcare space and replace doctors, nurses and other health professionals prompted AI ethicist Stefan Harrer to propose a framework for generative AI use in medicine.
Harrer, who is the Chief Innovation Officer at the Digital Health Cooperative Research Center (DHCRC) and a member of the Coalition for Health AI (CHAI), says that the problem with using generative AI is its ability to generate convincing content that’s false, inappropriate, or dangerous.
“The essence of efficient knowledge retrieval is to ask the right questions, and the art of critical thinking rests on one’s ability to probe responses by assessing their validity against models of the world,” Harrer, who is based in Melbourne, Australia, said. “LLMs can perform none of these tasks.”
Harrer believes that generative AI has the potential to transform healthcare, but that it isn’t there yet. To that end, he proposes an ethically based regulatory framework of 10 principles that, he says, would mitigate the risks of generative AI in healthcare:
- Design AI as an assistive tool that augments human decision-makers' capabilities but doesn’t replace them.
- Design AI to produce performance, usage and impact metrics that explain when and how it is used to assist decision-making, and to scan for potential bias.
- Design AI that’s based on, and will adhere to, the value systems of target user groups.
- Declare the purpose and use of AI from the outset of conceptual or development work.
- Disclose all data sources used for training the AI.
- Design the AI to label AI-generated content clearly and transparently.
- Regularly audit the AI against data privacy, safety and performance standards.
- Document and share audit results, educate users about the AI’s capabilities, limitations and risks, and improve AI performance by retraining and updating algorithms.
- When employing human developers, ensure that fair work and safe work standards are applied.
- Establish a legal precedent that clearly defines when data may be used for AI training, and establish copyright, liability and accountability frameworks governing training data, AI-generated content and the impact of human decisions made using that data.
Interestingly, Harrer’s framework coincides with calls by Sam Altman, chief executive of OpenAI, the company behind ChatGPT, for US legislators to introduce government regulation to address the potential risks AI poses to humanity. Altman, who co-founded OpenAI in 2015 with backing from Elon Musk, has suggested that the government introduce licensing and testing requirements before more powerful AI models are released.
Over in Europe, the AI Act is set to go to a vote next month at the European Parliament. If passed, the legislation could see bans on biometric surveillance, emotion recognition, and some AI systems used in policing.
Harrer’s fairly general framework could be applied to many workplaces where AI risks replacing humans. And it seems to have arrived at a time when people, even those responsible for creating the technology, are asking the world to hit pause.
Is healthcare more at risk than other employment sectors? Is a framework like this beneficial, and, importantly, would it indeed mitigate the risks given the speed at which AI is improving? Only time will provide us with answers to these questions.
The paper was published in the journal eBioMedicine.