A lot of thinking is currently being done within the pharmaceutical, medtech, and broader healthcare industries on how we – the companies and providers – can best use Artificial Intelligence (AI). Fair enough.
But there is a challenge created by AI that the industry seems not to have properly on its radar.
This is … healthcare professionals (doctors, nurses, etc.) and patients using AI. Asking AI about our products. Being pre-informed, if not biased, by AI-provided statements before they speak to us.
- Imagine a medical science liaison (MSL) or a sales rep meeting a physician who has already formed an opinion on our product based on what an AI told them. Will the MSL or rep be prepared for this situation?
- Imagine patients reaching out to our medical information services with a product inquiry after having consulted a third-party AI, assuming they already know the answer.
- Imagine doctors and patients visiting our digital touchpoints and finding information inconsistent with what an AI told them before. Who will they trust more?
Some facts & figures …
- Two-thirds of U.S. physicians (66%) use some form of AI in their practice – including administrative and clinical support tasks, but also treatment and medication inquiries – according to the American Medical Association.
- About 39% of primary care physicians use AI daily, according to Medical Economics.
- 25% of U.S. hospitals use AI-driven predictive analytics, according to AI in Healthcare Statistics: Comprehensive List for 2025.
- About 30–50% of patients have already used AI for health-related questions, with trust increasing, according to several studies (e.g., Deloitte, TheIntake).
- And now, in January 2026, both OpenAI (ChatGPT) and Anthropic have released dedicated health-focused AI tools. This will, without a doubt, accelerate the trend, as access to understandable answers to disease-, care-, and treatment-specific questions is now easier than ever before.
Colleagues, we are talking about losing sovereignty over information on our products. AI is competing with us in this respect. And we are talking about potentially losing control of our interactions with professional customers and patients – or at least being pushed into a more defensive position than ever before. Compared to this ‘express train’ (already hitting us), “Dr. Google” was a handcar.
The general challenge may not be unique to the healthcare industry, but the impact and relevance are arguably more drastic here: the right information on disease treatment and management is key to patients receiving the best, safest, and most efficacious therapy. And the information provided by AI certainly influences levels of trust and confidence in products, if not in the companies providing them.
I am afraid that, to a certain extent, the industry has missed the boat on this development. Despite internal and external warnings, it has remained in its comfort zone for too long and has shown little eagerness to change how information is provided (apart from some truly exciting pilots, e.g., on customer-driven content).
Investing in a stronger reputation as a trusted partner certainly cannot hurt. And it is finally time to fundamentally shift the way the industry provides information to patients and HCPs.
Strategies and approaches still need to be carefully elaborated. But the challenge can no longer be ignored. If the pressure for change has not been strong enough so far, it is definitely there now.
You can also read this article at Medium.
(Editor’s note: This article has been updated on this site (only) on January 21st, 2026.)
Additional reading …