While artificial intelligence is a monumental aid to the business world, it has downsides. Two of the scarier ones: A.I. delivering false information drawn from unreliable sources, and outright “hallucinations,” in which A.I. invents both the information and its sources.
For life-science firms, almost nothing could be more alarming. Both patients and healthcare providers can use A.I. applications to learn more about physical ailments, pharmaceutical treatments, and medical devices—and in such cases, incorrect information can be dangerous.
Of course, pharmaceutical and medical-device companies that develop their own A.I. chatbots can control the datasets informing those chatbots. But with ChatGPT, Gemini, and other general A.I. applications easily accessible to healthcare professionals (HCPs), life-science firms should be proactive in educating HCPs on proper usage when researching a specific drug or device.
The Worst-Case Scenario
According to a bluntly titled article in Harvard Business Review (https://hbr.org/2024/07/the-risks-of-botshit), researchers presenting at the 2023 meeting of the American Society of Health-System Pharmacists found that about 75 percent of the responses ChatGPT generated to questions about prescription drugs were inaccurate or incomplete. Even worse, when the researchers asked the tool for references to support its responses, it generated fake citations: a hallucination.
The article also details an actual case: Doctors in the United Kingdom found that an A.I.-powered app called GP at Hand, created by the start-up Babylon Health, often advised users incorrectly on whether their symptoms required medical intervention. BBC’s Newsnight also showed a doctor demonstrating how the app suggested that two specific conditions did not require emergency treatment, when in fact the symptoms the user typed in could be indicators of a heart attack. The app was eventually shuttered, and Babylon Health closed down.
“This speaks to the value of practice-specific chatbots instead of general-purpose ones such as ChatGPT,” the authors wrote.
What to Do
With their own chatbots, life-science firms can implement internal checks, such as having a human expert test outputs for accuracy and identify flaws and limitations, and then inform HCPs of this safeguard.
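To make that concrete, here is a minimal sketch, in Python, of what such a check might look like in practice. Every name in it (DraftAnswer, ReviewQueue, and so on) is an illustrative assumption rather than an existing product or library: chatbot responses are held in a queue until a medical expert verifies the content and its citations, and only approved answers are released to HCPs.

```python
# A minimal sketch of a human-in-the-loop review gate for a firm-run chatbot.
# All names are illustrative: draft answers are held until a designated expert
# confirms the content and its cited sources before anything reaches an HCP.

from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class ReviewStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class DraftAnswer:
    question: str
    answer: str
    cited_sources: list[str]
    status: ReviewStatus = ReviewStatus.PENDING
    reviewer_note: Optional[str] = None


@dataclass
class ReviewQueue:
    drafts: list[DraftAnswer] = field(default_factory=list)

    def submit(self, draft: DraftAnswer) -> None:
        """Chatbot output enters the queue instead of going straight to the HCP."""
        self.drafts.append(draft)

    def review(self, draft: DraftAnswer, approved: bool, note: str) -> None:
        """A human expert verifies the answer and checks that every citation is real."""
        draft.status = ReviewStatus.APPROVED if approved else ReviewStatus.REJECTED
        draft.reviewer_note = note

    def releasable(self) -> list[DraftAnswer]:
        """Only expert-approved answers are ever shown to HCPs."""
        return [d for d in self.drafts if d.status is ReviewStatus.APPROVED]


if __name__ == "__main__":
    queue = ReviewQueue()
    draft = DraftAnswer(
        question="What is the maximum daily dose?",
        answer="Per the approved label, the maximum daily dose is X mg.",
        cited_sources=["Section 2.1 of the current prescribing information"],
    )
    queue.submit(draft)
    # A medical-affairs reviewer checks the answer against the label before release.
    queue.review(draft, approved=True, note="Matches current prescribing information.")
    print(len(queue.releasable()), "answer(s) cleared for HCPs")
```

The detail that matters is the gate itself: nothing the model writes reaches an HCP until a named reviewer has signed off on it.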
As for general A.I. apps, firms can teach HCPs to verify any answers they find about a specific drug or device directly with the company.
In any case, firms should design product and device training for HCPs in a way that minimizes their need, or desire, to turn to general A.I. apps for more information, and should provide physician support through a proprietary chatbot or other interactive tool that delivers timely responses.
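How strictly such a proprietary tool sticks to approved content is a design decision, and one simple pattern is to answer only from a firm-approved document set and route everything else to Medical Information. The short Python sketch below assumes that pattern; the document set, keyword matching, and fallback message are placeholders, not a production retrieval system.

```python
# A minimal sketch of how a proprietary HCP support tool can stay grounded:
# it answers only from a firm-approved content set and otherwise routes the
# question to Medical Information. Topics and text here are placeholders.

APPROVED_CONTENT = {
    "dosing": "Per the approved label, see Section 2 for dosing recommendations.",
    "storage": "Store at 20-25 C; see Section 16 of the prescribing information.",
}

FALLBACK = (
    "This question is outside the approved content set. "
    "Please contact Medical Information for a verified response."
)


def answer_hcp_question(question: str) -> str:
    """Return pre-approved content when the topic is covered; never improvise."""
    q = question.lower()
    for topic, approved_text in APPROVED_CONTENT.items():
        if topic in q:
            return approved_text
    return FALLBACK


if __name__ == "__main__":
    print(answer_hcp_question("What are the dosing recommendations?"))
    print(answer_hcp_question("Can it be used off-label for another condition?"))
```

A real deployment would replace the keyword lookup with proper retrieval over medical-legal-reviewed content, but the grounding principle is the same: the tool surfaces approved language or escalates, and never improvises.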