Dr VK Paul, Member (Health), NITI Aayog, delivered a strong message to healthtech innovators at a recent forum: AI solutions for healthcare must be co-created with frontline doctors, rigorously validated in real-world Indian settings, and never rolled out without clinician trust and evidence of safety. He emphasized that skipping these steps risks patient harm, erodes confidence, and slows meaningful adoption across public and private systems.
Glimpse:
Dr Paul highlighted three non-negotiables for AI in Indian healthcare: deep collaboration with practising clinicians from the design stage, mandatory multi-site clinical validation (not just accuracy metrics), and transparent, explainable models that doctors can understand and override. He warned against “technology-first” approaches that ignore clinical realities, urging startups to prioritize safety, equity, and trust to gain widespread acceptance in hospitals and public health programs.
Dr VK Paul, one of India’s most influential voices on public health policy, issued clear guidance to the healthtech and AI startup community during a high-profile session focused on scaling artificial intelligence in Indian healthcare. Speaking candidly, he stressed that while AI holds immense promise for diagnostics, triage, predictive analytics, and workflow efficiency, rushed or poorly designed deployments can cause harm, particularly in a diverse, resource-variable country like India.
Key takeaways from Dr Paul’s remarks:
- Co-creation with doctors is essential: AI tools must be built in partnership with practising clinicians who understand real workflows, patient diversity, and local epidemiology. Top-down or tech-only designs rarely succeed in clinical settings.
- Validation must be rigorous and India-specific: Accuracy on benchmark datasets is not enough. Tools must undergo multi-centre clinical trials across urban, rural, public, and private facilities to prove safety, efficacy, reduced errors, and positive patient and clinician outcomes.
- Explainability and trust are non-negotiable: Doctors must understand why the AI is making a suggestion (e.g., clear reasoning, confidence scores, guideline references). Black-box models that cannot be questioned will face resistance.
- Human-in-the-loop, always: AI should augment, never replace, clinical judgment. Every high-stakes decision must allow easy override and escalation to a human expert.
- Equity and bias safeguards: Models must be trained and validated on diverse Indian datasets (regional, socioeconomic, linguistic) to avoid disadvantaging underserved populations.
Dr Paul also praised early successes where startups followed this path, such as AI tools co-developed with public-sector doctors for TB screening and sepsis prediction, but cautioned against “pilot-and-forget” approaches that generate hype without sustained evidence.
He urged innovators to align with national frameworks (ABDM, DPDP Act, upcoming AI-in-health guidelines) and engage regulators early, rather than treating compliance as an afterthought. The message was clear: responsible, clinician-led AI adoption will be rewarded with trust and scale, while shortcuts will lead to rejection and setbacks.
The remarks have resonated strongly across the healthtech ecosystem, reinforcing that India’s path to AI-enabled healthcare must be led by clinical safety and trust, not speed alone.
“Co-create with doctors, validate rigorously in Indian settings, and always keep the clinician in control. That is the only way AI will truly transform healthcare in our country.” – Dr VK Paul
By
HB Team
