NVIDIA to create AI ‘agents’ that outperform human nurses

NVIDIA has partnered with Hippocratic AI to develop AI-powered healthcare agents

NVIDIA has partnered with Hippocratic AI to develop AI-powered ‘healthcare agents’ that have already been shown to outperform other large language models and human nurses in specific tasks. Will these agents help ease the global healthcare worker shortage, or will they lead to more problems?

Workforce shortages are simply an imbalance between need and supply. In healthcare, this equates to not being able to provide the right number of people with the right skills in the right places to provide the right services to the right people.

The World Health Organization (WHO) projects a shortfall of 10 million health workers by 2030, mostly in low- and middle-income countries. However, the pinch produced by shortages is already being felt in rural and remote settings in high-income countries like the US and Australia. Ostensibly addressing the global healthcare staffing crisis, NVIDIA recently announced its partnership with Hippocratic AI to develop generative AI-powered ‘healthcare agents’.

“With generative AI, we have the opportunity to address some of the most pressing needs of the healthcare industry,” said Munjal Shah, cofounder and CEO of Hippocratic AI. “We can help mitigate widespread staffing shortages and increase access to high-quality care – all while improving outcomes for patients. NVIDIA’s technology stack is critical to achieving the conversational speed and fluidity necessary for patients to naturally build an emotional connection with Hippocratic’s Generative AI Healthcare Agents.”

The agents build on Polaris, Hippocratic’s safety-focused large language model (LLM) and the first designed for real-time patient-AI healthcare conversations. Polaris’ one-trillion-parameter ‘constellation system’ combines a primary LLM agent that drives patient-friendly conversation with several specialist support agents focused on healthcare tasks performed by nurses, social workers, and nutritionists, increasing safety and reducing AI hallucinations. The agents connect to NVIDIA Avatar Cloud Engine (ACE) microservices and use NVIDIA Inference Microservices (NIM) for low-latency inferencing and speech recognition, and were unveiled at the recent NVIDIA GTC.
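For readers curious how a ‘constellation system’ might hang together, here is a minimal sketch in Python of one primary conversational agent whose draft replies must be approved by specialist safety agents before the patient hears them. Every class, function, and rule below is a hypothetical illustration of the general pattern, not Hippocratic AI’s actual architecture or API.

```python
# A minimal sketch of a constellation-style setup: one primary conversational
# agent plus specialist support agents that vet each draft reply before the
# patient hears it. Names and logic are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class SpecialistAgent:
    """A narrow support agent focused on one safety domain (e.g. medication)."""
    domain: str

    def approves(self, draft_reply: str) -> bool:
        # A real system would run a domain-tuned model here; this stand-in
        # simply rejects obviously unsafe advice.
        return "stop taking your medication" not in draft_reply.lower()


class PrimaryAgent:
    """Drives the patient-friendly side of the conversation."""

    def draft(self, patient_utterance: str) -> str:
        # Placeholder for a call to the conversational LLM.
        return f"I hear you saying: {patient_utterance!r}. Let's go through your discharge plan together."


def respond(utterance: str, primary: PrimaryAgent, specialists: list) -> str:
    """Release a reply only once every specialist signs off on it."""
    reply = primary.draft(utterance)
    if all(s.approves(reply) for s in specialists):
        return reply
    return "That's a question I'd like a human nurse to answer. Connecting you now."


specialists = [SpecialistAgent("medication"), SpecialistAgent("labs"),
               SpecialistAgent("nutrition")]
print(respond("I feel dizzy since my surgery.", PrimaryAgent(), specialists))
```

The gating pattern is the point: the conversational model never speaks unchecked, which is one way such a system could reduce hallucinations.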

Always Available, Real-Time Generative AI Healthcare Agents

Super-low latency is key to a platform that lets patients establish a natural, responsive connection with the AI agents they’re interacting with. Hippocratic is working with NVIDIA to continue refining its tech so it can deliver that.
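To see why latency dominates the design, consider a rough budget for a single conversational turn. The per-stage numbers and the 500-ms target below are illustrative assumptions, not figures published by NVIDIA or Hippocratic AI:

```python
# A rough sketch of the latency budget behind a "natural" voice exchange.
# All numbers here are illustrative assumptions, not published figures.
STAGE_BUDGET_MS = {
    "speech_recognition": 150,   # streaming ASR on the patient's audio
    "llm_first_token": 250,      # time for the model to begin its reply
    "text_to_speech": 100,       # synthesize the first audible chunk
}

TARGET_GAP_MS = 500  # roughly the pause humans tolerate in conversation

total = sum(STAGE_BUDGET_MS.values())
print(f"end-to-end first-response latency: {total} ms")
assert total <= TARGET_GAP_MS, "pipeline too slow to feel conversational"
```

If any one stage blows its budget, the pause stops feeling like conversation and starts feeling like a phone menu.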

“Voice-based digital agents powered by generative AI can usher in an age of abundance in healthcare, but only if the technology responds to patients as a human would,” said Kimberly Powell, vice president of Healthcare at NVIDIA. “This type of engagement will require continued innovation and close collaboration with companies, such as Hippocratic AI, developing cutting-edge solutions.”

Hippocratic has already tested Polaris’ capabilities, recruiting US-licensed nurses and physicians to perform end-to-end conversational evaluations of the system by posing as patients. The company says Polaris outperformed OpenAI’s GPT-4, LLaMA-2 70B, and flesh-and-blood nurses on its safety benchmarks. The results are available on the open-access repository arXiv.

The announcement that Hippocratic AI and NVIDIA are partnering to develop AI-powered healthcare agents elicited a wide range of comments on Reddit. Perhaps the most telling was the comment by Then_Passenger_6688: “AI will be doing the ‘human’ stuff of bedside manner, humans will be doing the ‘robot’ stuff of inserting tubes and injecting stuff.”

An example of the generative AI healthcare agents from Hippocratic AI

It’s important to remember that, at least at this stage, the healthcare agents are limited to talking to patients by phone or video, assisting with things like health risk assessments, chronic illness management, pre-op check-ins, and post-discharge checkups. At this stage. As we all know, AI advances at an astonishing rate.

The media has also highlighted the cost of using an AI healthcare agent compared to the cost of actual nurses. Hippocratic’s website advertises all its agents at less than US$9 per hour. By contrast, as of 2022, the US Bureau of Labor Statistics listed the estimated mean hourly wage for registered nurses as $42.80. But what does a patient get for that higher amount?
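Taking those two figures at face value, the nominal gap is easy to work out. Here is a back-of-the-envelope calculation, assuming one full-time equivalent of 40 hours a week, an assumption for illustration rather than a claim about how either would actually be deployed:

```python
# Back-of-the-envelope comparison of the hourly rates quoted above.
AGENT_RATE = 9.00      # Hippocratic AI's advertised ceiling, USD/hour
RN_MEAN_WAGE = 42.80   # BLS estimated mean hourly wage for RNs, 2022

hours_per_year = 40 * 52  # one full-time equivalent, assumed for illustration
saving = (RN_MEAN_WAGE - AGENT_RATE) * hours_per_year
print(f"nominal saving per FTE-year: ${saving:,.2f}")  # ~$70,300
```

That headline number is exactly why the comparison is being made, and exactly why it’s worth asking what it leaves out.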

As a former registered nurse working in a hospital setting, I can say there are a lot of non-nursing duties that don’t appear in any job description: Housekeeping chores, delivering and removing meal trays, putting flowers into vases and discarding them when they’ve died, transporting non-critical patients for surgical and medical imaging procedures, connecting the TV, buying a particular newspaper or magazine for a patient, answering relatives’ questions about their loved one's progress (usually more than once a shift), troubleshooting lights/toilets/pan-flushers that aren’t working … I could go on. Then there’s the ‘dark side’ of nursing: Talking down an agitated or angry patient or relative, and receiving verbal and/or physical abuse from any number of visitors.

Nursing includes performing non-nursing duties

I’m in two minds about the introduction of AI healthcare agents. Obviously, the main purpose of any valuable technology, such as AI, is to solve problems or make improvements. Do healthcare agents do this? Probably, yes. AI can improve medical record keeping, improve the quality of service in terms of efficiency and, potentially, safety, and streamline workloads. But consider this: AI does a patient’s pre-op check-in, which removes a fairly straightforward administrative task and frees up some of the nurse’s time. However, what familiar face does the patient look for when they’re admitted to the hospital? It can’t be the AI’s. That pre-admission interaction is more than collecting information; it’s crucial to establishing rapport and trust, calming anxiety, and obtaining a holistic view of the patient.

In benchmark testing, Polaris performed slightly better than human nurses in identifying a laboratory reference range (96% versus 94%). However, all the lab results I’ve looked at include a reference range, and nurses know how and where to look for one, even if it’s not included. When assessing knowledge of a specific medication’s impact on lab results, Polaris scored 80%, while human nurses scored 63%. That’s all well and good, but it neglects the nurse’s ability to check for themselves whether a deranged lab result has been caused by medication. And this autonomy has a flow-on effect. The out-of-kilter result is communicated to the treating doctor and/or the nurse in charge, which serves two purposes: It ensures the patient’s well-being and informs the treating team of a potential issue.
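For context on what that lab-result benchmark is actually measuring, here is a toy version of the check in Python. The reference range and drug example are illustrative rather than clinical guidance, and the function is my own sketch, not anything from the Polaris paper:

```python
# A toy version of the reference-range check the benchmark measures.
# Range values and the drug interaction are illustrative only.
def flag_lab_result(value: float, low: float, high: float,
                    patient_meds: set, interfering_meds: set) -> str:
    """Classify a result and note whether a medication could explain it."""
    if low <= value <= high:
        return "within reference range"
    suspects = patient_meds & interfering_meds
    if suspects:
        return f"out of range; possibly medication-related ({', '.join(sorted(suspects))})"
    return "out of range; escalate to treating doctor / nurse in charge"


# Example: potassium of 5.9 mmol/L against a 3.5-5.2 range, in a patient
# on spironolactone (a drug known to raise potassium).
print(flag_lab_result(5.9, 3.5, 5.2,
                      {"spironolactone"}, {"spironolactone", "ace inhibitor"}))
```

The code captures the lookup; what it can’t capture is the nurse walking the result down the corridor to the treating team.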

Providing patient healthcare depends on collaboration

My fear is that the introduction of AI-powered health agents will contribute to the compartmentalization of healthcare, particularly nursing. Medicine and surgery have already separated into specialized groups and even smaller subgroups within those groups, isolating these disciplines so that the knowledge and experience gained in one is not readily transferred to another.

It’s arguable that, in partnering with NVIDIA, Hippocratic AI’s heart is in the right place.

“We’re working with NVIDIA to continue refining our technology and amplify the impact of our work of mitigating staffing shortages while enhancing access, equity, and patient outcomes,” said Shah.

Let’s hope they stay true to their motto, "Do no harm." The motto is based on the phrase "First, do no harm," a dictum often attributed to Hippocrates but one that doesn’t appear in the original version of the Hippocratic Oath.

Sources: Hippocratic AI, NVIDIA

6 comments
Ancliff
"However, what familiar face does the patient look for when they’re admitted to the hospital? It can’t be the AI’s. That pre-admission interaction is more than collecting information; it’s crucial to establishing rapport and trust, calming anxiety, and obtaining a holistic view of the patient." Ideally the AI would have the face and even the voice of the staff person also on the ward. Or perhaps better still, face and voice of a matron in charge of the ward, to whom queries would filter up if neither AI or nurses could resolve an issue.
akarp
You need nurses with these AI tools. It's the Doctors that are going to be replaced.
dave be
The question isn't how well it performs. Software-based tech has been able to outperform humans for decades. Note they say outperform; they don't say the machine learning agents are perfect, which of course nothing can be. The question is who assumes liability for mistakes that do happen.

Individuals in our system take (or are assigned) responsibility for their actions. Make a mistake and you can get sued for compensation. When an 'AI agent' makes a mistake, who do you sue? The doctor's office? They aren't going to want to assume that, because they didn't make the mistake. So they'll want to forward that liability on to the contracting software company. That company won't be able to take endless blame and pay out compensation for the system's actions, so it will either try to put that liability back on the doctors, who will stop purchasing the system, or go out of business so the system can no longer be bought.
paul314
This sounds like a recipe for a mess, at least as proposed. Sure, the AI can do some of the conversational stuff, but unless it's "conversing" with a patient who is using a keyboard, it's going to miss all kinds of nuance that a human practitioner picks up automatically -- tone of voice, shrugs, pained grimaces etc. And that means when a human comes in to do the necessary physical stuff (because robots still can't do that, apparently) the human will have much less context to work with and likely do a worse job. It's already bad enough in many practices where a desk attendant takes the initial information, an intake nurse does the vitals and initial exam, a doctor or nurse practitioner does the detailed exam and diagnosis, and it's the patient's job to make sure that no useful bit of information falls into the gaps between them or gets misrecorded.

Where AIs might be able to be useful would be keeping in better touch with patients who have chronic conditions, who often don't have anything serious enough to trigger an office visit but still need monitoring and potential adjustment of care.
Smokey_Bear
Ancliff - "Ideally the AI would have the face and even the voice of the staff person also on the ward."
That's a terrible idea, then patients would see you, and think you know them, when you've never met them in your life. They would think they have already spoken to you about their condition, making future conversations awkward.
Daishi
I see valid use-cases for this if it is handled well, but that is not a guarantee that it will be. On the opposite side of the spectrum from @dave be's point about liability, it could be so overly cautious that it is annoying or almost useless to interact with. "I'm sorry, I can't answer that, Dave." Some LLMs are so tied down with safeguards, disclaimers, and censorship that it is one of the biggest complaints people have about them, and that is without the medical liability component. Some people have heard the disclaimer, are aware of the limitations, and don't need to hear it 5 more times per question asked. Every time a physician uses the thing, it will tell them to consult a physician, resulting in an infinite loop. I think a lot of companies are still trying to find the right balance there. The medical industry is broken and I welcome any attempt to improve it though. I absolutely believe AI is capable of improving outcomes for a lot of people.