The question "will AI replace doctors?" gets searched tens of thousands of times per month. The answer, at least in 2026, is straightforward: no. But the longer answer is more interesting than a simple no, because AI is genuinely changing what doctors spend their time doing — and some of those changes are happening faster than most people expected.
The FDA cleared 295 AI-enabled medical devices in 2025 alone. By mid-2025, over 1,200 AI/ML devices had received marketing authorization, with 956 of them in radiology (Innolitics, 2025). That's not a pilot program. That's an industry. But clearing a device and replacing a doctor are very different things, and the gap between the two tells you most of what you need to know about where this technology actually stands.
"Even as regulations advance to enable AI to fully take over certain clinical tasks, the most significant barrier to widespread clinical AI adoption isn't technology — it's payment models." — Bessemer Venture Partners, State of Health AI 2026
What AI does well right now
AI has found its footing in a handful of medical areas where the task is well-defined, the data is structured, and the output is a probability rather than a judgment call.
Medical imaging
This is where AI has made the most progress. Radiology accounts for roughly 77% of all FDA-cleared AI medical devices. Algorithms can flag suspicious nodules on chest CTs, identify diabetic retinopathy from retinal scans, and detect fractures that radiologists miss on first pass. A 2024 systematic review in Nature Digital Medicine found AI achieved 85.4% sensitivity in skin cancer detection, compared to 76.4% for non-specialist dermatologists (p < 0.001 for both sensitivity and specificity).
These numbers are real and reproducible. But they come with a caveat: AI performs best when the question is binary (malignant or benign, fracture or no fracture) and the input is a single image. Clinical medicine rarely works that way.
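Sensitivity and specificity have precise meanings: sensitivity is the share of true positives a model catches, and specificity is the share of true negatives it correctly clears. A quick worked example makes the arithmetic concrete. The counts below are invented for illustration only; they are not from the cited study.

```python
# Sensitivity (true positive rate) and specificity (true negative rate)
# computed from a hypothetical confusion matrix. These counts are made up
# for illustration, not taken from any published dataset.
true_positive = 171   # malignant lesions correctly flagged
false_negative = 29   # malignant lesions missed
true_negative = 620   # benign lesions correctly cleared
false_positive = 180  # benign lesions incorrectly flagged

sensitivity = true_positive / (true_positive + false_negative)
specificity = true_negative / (true_negative + false_positive)

print(f"sensitivity = {sensitivity:.1%}")  # -> 85.5%
print(f"specificity = {specificity:.1%}")  # -> 77.5%
```

Note the trade-off this exposes: a screening tool tuned for high sensitivity (miss nothing) often pays for it in specificity (more false alarms), which is one reason a physician still reviews what the algorithm flags.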
Administrative work and documentation
The less glamorous but arguably more impactful use of AI in 2026 is paperwork. Clinicians spend roughly two hours on documentation for every hour of patient care. AI-powered scribes that listen to patient encounters and generate clinical notes are now deployed across major health systems. This doesn't replace doctors; it gives them back the time they were spending typing.

Triage and preliminary screening
AI chatbots and symptom checkers can sort patients by urgency before they see a clinician. Emergency departments are testing systems that analyze intake information and suggest triage levels. The technology works reasonably well for common presentations but struggles with atypical cases — exactly the cases where triage decisions matter most.
Drug interactions and clinical decision support
Checking a new prescription against a patient's existing medications and allergies is pattern matching at scale. AI handles this well and has been doing so for years, though earlier versions were rule-based rather than machine learning.
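The rule-based logic these older systems encode is straightforward: a lookup table of known interacting pairs, checked against every combination in the patient's medication list. Here is a minimal sketch; the two interactions shown (warfarin with aspirin, sildenafil with nitroglycerin) are well-documented, but the table is a toy illustration, not a clinical reference.

```python
# Minimal rule-based drug interaction checker. The interaction table is a
# toy illustration with two well-known pairs, not a clinical reference.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"sildenafil", "nitroglycerin"}): "severe hypotension",
}

def check_interactions(medications):
    """Return a warning for every interacting pair in the medication list."""
    meds = [m.lower() for m in medications]
    warnings = []
    for i, first in enumerate(meds):
        for second in meds[i + 1:]:
            risk = INTERACTIONS.get(frozenset({first, second}))
            if risk:
                warnings.append(f"{first} + {second}: {risk}")
    return warnings

print(check_interactions(["Warfarin", "Lisinopril", "Aspirin"]))
# -> ['warfarin + aspirin: increased bleeding risk']
```

Modern ML-based systems replace the hand-curated table with models trained on adverse-event data, but the job description is the same: exhaustive pairwise checking that a human pharmacist cannot do from memory at scale.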
What AI cannot do
The list of things AI can't do in medicine is longer and more fundamental than the list of things it can.
Physical examination
You can't palpate an abdomen through a screen. You can't hear a heart murmur through an algorithm. Physical examination requires hands, spatial awareness, and the ability to adjust technique based on what you find as you go. No AI system can perform a physical exam, and none is close to doing so.
Complex clinical reasoning
Medicine involves synthesizing incomplete, contradictory, and ambiguous information from multiple sources — lab results, imaging, patient history, family context, social determinants, medication adherence patterns — and making a decision under uncertainty. AI models can process individual data streams, but integrating them the way an experienced clinician does remains beyond current capabilities.
How far beyond? Stanford and Harvard researchers found that AI medical models produce severely harmful clinical recommendations in up to 22.2% of cases, with even top-performing models making between 12 and 15 severe errors per 100 clinical encounters (SoapNoteAI, 2025). That error rate is not compatible with autonomous practice.
Empathy and the therapeutic relationship
A significant body of evidence shows that the patient-doctor relationship itself is therapeutic. Patients who trust their doctor adhere to treatment plans more consistently, report symptoms more accurately, and have better outcomes. AI can simulate empathetic language. It cannot actually care about a patient, and most patients can tell the difference.
Ethical judgment
Should a 92-year-old with advanced dementia receive aggressive cancer treatment? Should a teenager's parents be told about a confidential conversation? These decisions involve values, cultural context, and moral reasoning that AI systems are not designed to handle.
AI capabilities compared: what works and what doesn't
| Capability | AI performance (2026) | Human physician performance | Who does it better? |
|---|---|---|---|
| Chest X-ray abnormality detection | High sensitivity, FDA-cleared devices available | Variable, depends on experience | AI as screening tool, physician for interpretation |
| Skin lesion classification | 85.4% sensitivity (Nature, 2024) | 76.4% (non-specialists), higher for dermatologists | AI matches or beats non-specialists |
| Clinical documentation | Fast, accurate transcription available | Slow, error-prone when fatigued | AI, clearly |
| Multi-system diagnostic reasoning | 12-15 severe errors per 100 cases | Varies, but adapts to context | Physician |
| Physical examination | Not possible | Standard of care | Physician |
| Patient communication and trust | Simulated, inconsistent | Variable but irreplaceable | Physician |
| Drug interaction checking | Comprehensive, instantaneous | Limited by memory | AI |
| Rare disease identification | Pattern matching across large datasets | Depends on specialist access | Complementary |
| Vital sign monitoring (contactless) | Emerging via rPPG and camera-based systems | Standard equipment required | Evolving — each has trade-offs |
Sources: Innolitics (2025), Nature Digital Medicine (2024), Stanford/Harvard (SoapNoteAI, 2025).
Where contactless vitals and rPPG fit in
One area where AI and clinical measurement are converging is contactless vital sign monitoring. Remote photoplethysmography (rPPG) uses standard cameras to detect subtle skin color changes caused by blood flow, extracting heart rate and respiratory rate from video without any physical sensors.
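The core signal-processing idea is simple enough to sketch: average a color channel over the skin region in each video frame, then find the dominant frequency in the physiologic heart-rate band. The sketch below uses a synthetic signal (a 72 bpm pulse buried in noise) in place of real video, and omits the face tracking, detrending, and band-pass filtering a production rPPG pipeline would add.

```python
import numpy as np

# Toy rPPG pipeline: recover heart rate from a per-frame mean green-channel
# signal. The "video" is synthetic -- a 72 bpm pulse plus sensor noise --
# standing in for the real per-frame skin-region averages.
fps = 30.0                        # camera frame rate
t = np.arange(0, 10, 1 / fps)     # 10 seconds of frames
true_hr_hz = 72 / 60              # 72 beats per minute = 1.2 Hz
signal = 0.02 * np.sin(2 * np.pi * true_hr_hz * t)               # pulse
signal += 0.005 * np.random.default_rng(0).normal(size=t.size)   # noise

# Estimate heart rate: dominant FFT peak inside the physiologic band.
signal = signal - signal.mean()
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
power = np.abs(np.fft.rfft(signal)) ** 2
band = (freqs >= 0.7) & (freqs <= 4.0)   # roughly 42-240 bpm
estimated_bpm = 60 * freqs[band][np.argmax(power[band])]
print(f"estimated heart rate: {estimated_bpm:.0f} bpm")
```

Restricting the search to the 42-240 bpm band is what keeps camera shake, lighting flicker, and other low- or high-frequency artifacts from being mistaken for a pulse.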
This is a practical example of AI doing something useful without replacing anyone. A camera in a telehealth session, a waiting room, or a patient's home can capture vital signs passively. The clinician still interprets the data, still makes the diagnosis, still decides on treatment. The AI handles the measurement step — one that traditionally required physical contact and dedicated equipment.
Circadify is developing contactless vital sign technology using rPPG. The company's camera-based approach is designed to extract physiological signals — heart rate, respiratory rate, and other metrics — without physical contact, supporting clinical workflows in settings where traditional monitoring creates friction. This includes telemedicine consultations where clinicians need vitals but patients aren't in the office, and screening environments where attaching sensors to every person isn't practical.
The technology doesn't diagnose anything. It measures. The distinction matters, because the measurement-to-diagnosis pipeline is where human judgment remains essential.
What the next few years actually look like
AI in healthcare is not following the trajectory that breathless headlines suggest. It's not going to replace your doctor by 2030. What it is going to do, and what it's already doing, is restructure how doctors spend their time.
The pattern across every healthcare AI deployment that has actually worked is the same: AI takes over a specific, well-defined task, and the physician shifts attention to higher-order work. AI reads the scan; the radiologist interprets the AI's output alongside clinical context. AI transcribes the visit; the physician reviews and signs off. AI flags abnormal vitals from a contactless monitoring system; the clinician decides what to do about it.
According to NVIDIA's 2025 healthcare AI survey, 70% of healthcare organizations are now actively using AI, up from 63% the year before. But "using AI" mostly means administrative automation and imaging assistance — not clinical autonomy. The payment models, liability frameworks, and regulatory structures needed for AI to act independently don't exist yet, and building them will take years.
The more honest framing is this: AI is replacing specific tasks, not specific people. Doctors who use AI will likely replace doctors who don't. And patients will probably not notice the difference, except that their appointments might run a little more smoothly.
Frequently Asked Questions
Will AI replace doctors in the next 10 years?
No. AI is increasingly handling specific tasks like image analysis, administrative documentation, and preliminary screening, but it cannot replicate clinical judgment, physical examination, or the patient-doctor relationship. The consensus among researchers and medical associations is that AI will augment physician capabilities rather than replace them.
What can AI do in healthcare right now?
AI currently performs well in medical imaging analysis (radiology, pathology, dermatology), administrative tasks like clinical documentation, drug interaction checking, and preliminary triage. The FDA had cleared over 1,200 AI-enabled medical devices by mid-2025, with the majority used in radiology.
What can't AI do in medicine?
AI struggles with complex multi-system clinical reasoning, physical examination, building patient trust, navigating ethical dilemmas, and adapting to unusual presentations that fall outside its training data. Stanford and Harvard research found that AI models produce harmful clinical recommendations in up to 22% of cases.
How does contactless vital sign monitoring relate to AI in healthcare?
Contactless vital sign monitoring uses camera-based technology (rPPG) combined with AI algorithms to extract heart rate, respiratory rate, and other physiological signals from video. It represents one practical application of AI that supports clinical workflows without replacing the clinician interpreting the data.