Paramedics have roughly 8 to 12 minutes of transport time on an average urban ambulance run. During that window, they need to assess the patient, start interventions, communicate with the receiving hospital, and document everything. Attaching a blood pressure cuff, clipping on a pulse oximeter, and placing ECG leads eats into that time. For a two-person crew managing a critical patient in a moving vehicle, those minutes matter.
The idea of using a camera to passively capture vital signs during ambulance transport has been circulating in EMS research circles for a few years. Remote photoplethysmography, which detects blood volume changes under the skin using standard video, works well enough in controlled lab settings. The question is whether it can function in the back of an ambulance doing 60 mph over potholes.
"Contactless vital sign monitoring using near-infrared time-of-flight cameras with motion compensation demonstrates feasibility for in-vehicle applications, though challenges remain in dynamic, real-world environments." — Guo et al., Applied Sciences (2022)
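Part of what makes the lab version of rPPG attractive is how little machinery the core extraction needs: average a color channel over a skin region, band-pass the trace to plausible heart rates, and take the dominant spectral peak. The sketch below illustrates that pipeline on a synthetic trace; the function name and signal parameters are illustrative, not any published system's implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_hr_from_green_trace(green_trace, fps):
    """Estimate heart rate (bpm) from a mean green-channel trace.

    Minimal rPPG sketch: band-pass to the cardiac band
    (0.7-4 Hz, i.e. 42-240 bpm), then take the dominant FFT peak.
    """
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
    pulse = filtfilt(b, a, green_trace)
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)
    power = np.abs(np.fft.rfft(pulse)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return freqs[band][np.argmax(power[band])] * 60.0

# Synthetic 30 fps trace: a weak 1.2 Hz pulse (72 bpm) buried under
# slow lighting drift and sensor noise
fps, seconds = 30, 20
t = np.arange(fps * seconds) / fps
trace = (0.02 * np.sin(2 * np.pi * 1.2 * t)          # cardiac signal
         + 0.5 * np.sin(2 * np.pi * 0.05 * t)        # lighting drift
         + 0.005 * np.random.default_rng(0).normal(size=t.size))
print(estimate_hr_from_green_trace(trace, fps))      # close to 72 bpm
```

In a still subject under steady light, this works because the drift and noise sit outside the cardiac band. Vehicle vibration does not: it injects power directly into the 0.7-4 Hz range, which is exactly why the motion problem discussed below is the hard part.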
The prehospital monitoring problem
EMS providers face a monitoring challenge that hospital clinicians don't. In an emergency department, the patient is stationary, the lighting is consistent, and a nurse can spend several minutes getting sensors placed correctly. In an ambulance, none of that applies.
The National Association of Emergency Medical Technicians (NAEMT) has documented that prehospital providers spend roughly 30% of patient contact time on equipment setup and vital sign acquisition. That's time not spent on airway management, IV access, or medication administration. For time-critical conditions like stroke and STEMI, where treatment windows are measured in minutes, any reduction in assessment overhead has direct clinical value.
Current ambulance monitoring relies on the same contact-based sensors used in hospitals: pulse oximetry clips, blood pressure cuffs, and 3- or 12-lead ECG electrodes. These work, but they have practical problems in transport. Blood pressure cuffs give intermittent readings, not continuous ones. Pulse oximeter clips fall off during patient movement. ECG leads require skin prep and electrode placement that takes 2 to 4 minutes for a 12-lead.
A camera mounted in the patient compartment could, in theory, begin capturing heart rate and respiratory rate data the moment the patient is loaded. No contact required, no setup time, continuous trending from the first second.
How camera-based systems compare to current prehospital monitoring
| Monitoring method | Setup time | Contact required | Continuous data | Works during movement | Lighting sensitivity | Prehospital evidence |
|---|---|---|---|---|---|---|
| 12-lead ECG | 2-4 minutes | Yes, electrode adhesion | Yes | Moderate, motion artifacts | Low | Gold standard |
| Pulse oximetry (finger clip) | Seconds | Yes | Yes | Poor, falls off easily | Low | Gold standard |
| NIBP cuff (automated) | 30 seconds | Yes | No, intermittent cycling | Moderate | Low | Gold standard |
| rPPG camera (HR, RR) | None after mounting | No | Yes | Poor currently, improving | High | Early research |
| Radar-based monitoring | None after mounting | No | Yes | Moderate | None | Early research |
| Wearable biosensor patch | 30-60 seconds | Yes, adhesive | Yes | Good | Low | Growing (Rovenolt et al., 2023) |
The comparison is honest: contact-based methods are more reliable right now. But they also require hands-on time from paramedics who have too little of it.
Motion compensation is the core technical challenge
The biggest obstacle to ambulance-based rPPG isn't the algorithm. It's the vibration. A moving ambulance produces constant low-frequency vibration plus intermittent jolts from road irregularities. Both create motion artifacts in video that are difficult to separate from the physiological signal.
Guo et al. published work in Applied Sciences (2022) on a contactless vital sign monitoring system using a near-infrared time-of-flight camera with built-in motion compensation for in-vehicle use. Their system used depth data alongside RGB video to distinguish between actual blood volume changes and motion-induced pixel shifts. The approach showed reasonable heart rate tracking in a vehicle, though under more controlled conditions than a real ambulance call.
Nowara et al. at Rice University developed SparsePPG (2018), an algorithm specifically designed for driver monitoring that handles the kind of motion artifacts common in vehicles. Their approach uses robust principal component analysis to separate the rPPG signal from noise caused by vibration and head movement. The work was published at CVPR 2018 and released alongside a dataset of face videos collected in both RGB and near-infrared, giving other researchers a benchmark for vehicle-based rPPG.
The FaCare system, evaluated by researchers in a 2025 study published in Anesthesia & Analgesia, tested camera-based photoplethysmography against a GE HealthCare CARESCAPE B850 monitor during surgical anesthesia. While not a prehospital study, the results are relevant: 88.1% of heart rate correlation coefficients between the camera system and the contact monitor exceeded 0.8. The study demonstrated that camera-based systems can achieve clinical-grade accuracy when the patient is relatively still. The prehospital challenge is achieving that same accuracy when neither the patient nor the vehicle is stationary.
Multi-sensor fusion as a workaround
Some research groups have started combining camera data with other contactless sensors. A survey published in RSC Digital Discovery (2024) documented camera-radar fusion approaches that pair rPPG with millimeter-wave radar for simultaneous heart rate and respiratory rate monitoring. Radar handles lighting changes better and can read through blankets or clothing, which covers some of rPPG's blind spots.
This matters for ambulances specifically because patients are often bundled in blankets, wearing oxygen masks, or turned in ways that hide their face from the camera. A camera alone won't work for every patient. But a camera plus radar plus accelerometer data from the vehicle itself starts to look more practical.
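One common way to combine such sensors, assuming each channel reports its own uncertainty, is inverse-variance weighting: whichever sensor is currently more reliable dominates the fused estimate. A minimal sketch (the readings and variances below are hypothetical):

```python
def fuse_estimates(estimates):
    """Inverse-variance weighted fusion of independent vital-sign estimates.

    Each entry is (value, variance). Lower-variance (more trusted)
    sensors get proportionally more weight in the fused value.
    """
    weights = [1.0 / var for _, var in estimates]
    value = sum(v * w for (v, _), w in zip(estimates, weights)) / sum(weights)
    variance = 1.0 / sum(weights)
    return value, variance

# Hypothetical moment mid-transport: camera HR degraded by motion
# (high variance), radar steadier reading through a blanket
fused_hr, fused_var = fuse_estimates([(78.0, 25.0), (72.0, 4.0)])
print(fused_hr, fused_var)  # fused value sits close to the radar reading
```

The fused variance is always lower than the best single sensor's, which is the formal version of the intuition above: each sensor covers part of the other's blind spots.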
What EMS technology trends are enabling this
Camera-based ambulance monitoring doesn't exist in isolation. Other EMS technology shifts are creating the infrastructure it would plug into.
AI-assisted clinical decision support
ImageTrend and other EMS data platforms have been integrating AI into prehospital workflows. A 2026 report from ImageTrend documented how predictive analytics are being used for dispatch triage, resource allocation, and early identification of high-acuity patients. The American Journal of Health Care Sciences published an analysis examining AI applications in EMS strategy, including speech-pattern stroke detection and AI-driven ECG analysis. These systems create a data pipeline that contactless vital signs could feed into.
Telemedicine in the field
The National Library of Medicine's StatPearls resource on EMS telemedicine documents the expansion of real-time video consultation between ambulance crews and hospital physicians. These telemedicine links already use cameras pointed at patients. Adding rPPG processing to existing telemedicine video feeds would require no additional hardware, just software.
The Veterans Affairs Video Connect platform has already demonstrated that rPPG can extract vital signs from standard video call footage. Haque et al. published a 2024 usability study in JMIR Formative Research showing high patient acceptance of smartphone-based rPPG during telehealth visits. Applying the same principle to EMS telemedicine video is a logical extension.
Connected ambulance platforms
Modern ambulances increasingly have onboard computing, cellular connectivity, and camera systems for security and documentation. Some agencies already record patient compartment video for quality assurance. Using that existing camera feed for vital sign extraction would add clinical capability without adding equipment.
Who benefits most from prehospital contactless monitoring
Not every ambulance call needs passive camera monitoring. A patient with a twisted ankle doesn't need continuous heart rate trending. But a few patient populations and scenarios would get outsized benefit.
Mass casualty incidents
When a bus crash or building collapse produces 20 or 30 patients simultaneously, paramedics can't individually monitor everyone. Camera-based triage stations at the scene could screen multiple patients passively while crews focus on the most critical. The START triage protocol currently relies on visual assessment and basic interventions. Adding objective heart rate and respiratory rate data, even approximate data, would improve sorting accuracy.
Pediatric patients
Children are notoriously difficult to monitor with contact sensors. They pull off pulse oximeter clips, cry when blood pressure cuffs inflate, and won't hold still for electrode placement. A camera that captures vital signs without touching the child would reduce patient distress and give paramedics data they sometimes can't get at all during transport.
Infectious disease transport
The COVID-19 pandemic demonstrated the value of reducing physical contact during patient care. Transporting patients with highly contagious respiratory infections means every sensor attachment is an exposure risk. Contactless monitoring eliminates that risk for vital sign acquisition.
What still needs to happen
The gap between lab-validated rPPG and ambulance-ready rPPG is wide. A few specific problems need solving before camera-based monitoring works in a moving ambulance.
Vibration compensation algorithms need validation on actual ambulance platforms, not just passenger vehicles. The frequency profile of an ambulance on emergency response, with siren-related vibration, hard braking, and rough road surfaces, differs from normal driving.
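The reason the vibration profile matters is spectral overlap: vibration energy that lands inside the cardiac band (roughly 0.7-4 Hz) can masquerade as a pulse, while higher-frequency engine hum is easy to filter out. One simple diagnostic, sketched below under the assumption that a chassis accelerometer feed is available, is the fraction of vibration power falling in that band; the function name and signals are illustrative.

```python
import numpy as np
from scipy.signal import welch

def cardiac_band_vibration_power(accel, fs, band=(0.7, 4.0)):
    """Fraction of vibration power inside the cardiac band, where it
    can masquerade as a pulse. Could gate rPPG confidence scoring."""
    freqs, psd = welch(accel, fs=fs, nperseg=min(len(accel), 512))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[in_band].sum() / psd.sum()

# Synthetic accelerometer traces at 100 Hz over 10 seconds
fs = 100
t = np.arange(10 * fs) / fs
road_jolts = np.sin(2 * np.pi * 2.0 * t)         # 2 Hz suspension bounce: in-band
engine_hum = 0.3 * np.sin(2 * np.pi * 12.0 * t)  # 12 Hz vibration: out of band
risky = cardiac_band_vibration_power(road_jolts + engine_hum, fs)
benign = cardiac_band_vibration_power(engine_hum, fs)
print(risky, benign)  # risky is high, benign near zero
```

An emergency response with siren vibration and hard braking pushes more energy into the in-band case, which is exactly why validation on actual ambulance platforms, not passenger cars, is needed.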
Lighting in patient compartments varies between daytime and nighttime calls, and between agencies with different ambulance configurations. Algorithms need to handle the shift from bright overhead LEDs to near-darkness when crews dim lights for patient comfort.
Skin tone accuracy remains an unresolved issue across rPPG research. Nowara et al. at Rice University documented reduced accuracy on darker skin tones in multiple studies. For a technology intended for emergency use across diverse populations, this limitation is particularly problematic.
Regulatory clearance for prehospital vital signs devices requires FDA 510(k) clearance in the United States. No camera-based rPPG system currently has FDA clearance for clinical vital sign measurement in any setting, let alone the uncontrolled prehospital environment.
Circadify has developed camera-based vital sign measurement technology and is exploring applications in prehospital and transport settings. The technical foundation for contactless heart rate and respiratory rate measurement exists. Translating that capability from controlled environments to the back of a moving ambulance is where the field is heading.
Frequently asked questions
Can rPPG measure vital signs accurately inside a moving ambulance?
Research is still early. Vehicle vibration and patient movement create motion artifacts that degrade rPPG signal quality. Studies like Guo et al. (2022) have demonstrated contactless vital sign monitoring systems with motion compensation for in-vehicle use, but clinical validation in ambulances specifically remains limited.
What vital signs can camera-based systems measure in prehospital settings?
Camera-based rPPG can estimate heart rate, respiratory rate, and in some cases blood oxygen saturation from facial video. Heart rate measurement has the strongest evidence base. Blood pressure estimation from video remains experimental and less reliable in uncontrolled environments.
How could contactless monitoring help paramedics during transport?
Paramedics currently spend significant time attaching and managing contact sensors during transport. A camera-based system could provide continuous trending data without physical contact, freeing paramedics to focus on interventions and reducing the time from patient contact to first vital sign reading.
Is camera-based monitoring ready for use in ambulances?
Not yet. The combination of vehicle vibration, variable lighting, patient movement, and the high-acuity patient population makes ambulances one of the most challenging environments for rPPG. Current research focuses on motion compensation algorithms and multi-sensor fusion to address these challenges.
Related articles
- Camera-Based Vital Signs in Emergency Triage — How rPPG is being tested in emergency department triage to speed patient assessment.
- What is rPPG Technology? — A complete overview of remote photoplethysmography and how it measures vital signs from video.
- Contactless Heart Rate Monitoring — Detailed analysis of camera-based heart rate measurement accuracy and applications.