As AI use expands rapidly across European health systems, WHO Europe warns that the region lacks the fundamental legal and ethical frameworks needed to protect patient safety, privacy, and equity of care.
Glimpse:
A new WHO Europe report, surveying 50 of its 53 member states, reveals that while many countries are already using AI in diagnostics and patient engagement, only a handful have a dedicated national AI-health strategy. The WHO is calling for legal clarity, redress mechanisms, and AI literacy to catch up with the technology’s rollout.
The World Health Organization’s European branch has issued a major caution: legal and ethical safeguards must be strengthened as artificial intelligence becomes more entrenched in healthcare systems across the region. Its report on AI readiness, based on responses from 50 of 53 member states, highlights serious gaps in regulation, accountability, and strategy. According to the report, almost two-thirds of countries are already deploying AI in diagnostics (especially in imaging), and half have adopted AI chatbots for patient engagement. Despite this, only four countries (8%) have a formal AI-in-healthcare policy, while just seven more are developing one.
One of the biggest hurdles is legal uncertainty: roughly 86% of the surveyed countries identified it as a key barrier to wider AI adoption in healthcare. WHO Europe argues that without clear legal liability, patients may have no recourse if AI tools err, and clinicians may be hesitant to fully trust them.
To address these risks, WHO Europe is urging its member states to establish accountability frameworks, redress mechanisms for harm, and rigorous testing of AI systems to ensure they are safe, fair, and effective in real-world use.
“Either AI will be a force to lighten our health-system burden or, without guardrails, it could compromise patient safety, privacy, and equity.”
By
HB Team
