
Smart Glasses & Wearable Optics

Custom rPPG for Smart Glasses & Wearable Optics

We develop custom rPPG extraction algorithms for smart glasses form factors — periorbital and temple-region cameras with extreme close-range optics. Every model is built from scratch for your specific lens, sensor, and optical path.

Standard rPPG models fail on smart glasses because they were designed for full-face, arm-length video. We build purpose-built pipelines that extract reliable vitals from narrow fields of view, unusual skin regions, and wearable-grade processors.

Talk to Our Engineering Team →

What We Build

1. Periorbital Signal Extraction

Custom algorithms that extract blood volume pulse from the periorbital region — the skin around the eyes captured by downward-facing smart glasses cameras.

2. Temple-Region Preprocessing

Signal extraction from the temporal artery area using side-mounted cameras, with custom filtering for the unique optical characteristics of temple-region skin.

3. Custom Lens Calibration

Models calibrated to your specific lens focal length, aperture, and distortion profile — close-range optics require fundamentally different preprocessing than standard cameras.

4. Ultra-Low-Power Inference

Model architecture optimized for wearable SoCs — Qualcomm AR2 Gen 1, MediaTek Dimensity, or custom ASICs. Sub-5mW inference that preserves battery life.

5. Narrow FOV Optimization

Preprocessing tuned for the 30-60 degree FOV typical of smart glasses cameras, maximizing signal quality from a small facial region.

6. Continuous Passive Monitoring

Always-on vital sign tracking that runs as a background service, with intelligent duty-cycling to balance data quality and power consumption.

Technical Specifications

Built for Wearable Hardware

Camera Types

Periorbital RGB, temple-region NIR, micro CMOS sensors

Processing Targets

Qualcomm AR2 Gen 1, Qualcomm XR2+ Gen 2, MediaTek Dimensity, custom ASICs

Power Budget

Sub-5mW continuous inference target

Measurement Cadence

Continuous passive with 10-second active measurement cycles

Output

BLE health service profile, companion app SDK, custom JSON

Form Factor Support

Monocular, binocular, temple-arm mounted

Smart Glasses rPPG FAQ

Common questions about custom rPPG for smart glasses and wearable optics

How does rPPG work from periorbital cameras on smart glasses?

Smart glasses with downward-facing cameras capture the periorbital region — the skin around the eyes. This area has rich superficial vasculature, making it viable for blood volume pulse (BVP) extraction. However, the signal characteristics are fundamentally different from full-face rPPG. The field of view is narrow, the distance is extremely close, and the skin region is small. We build custom preprocessing and extraction models specifically trained on periorbital data, which is a completely different pipeline than adapting a generic full-face model.

What power consumption does the rPPG inference add?

Our target is sub-5mW for continuous inference, which is negligible relative to the display and wireless subsystems on smart glasses. We achieve this through aggressive model quantization, intelligent duty-cycling (active 10-second measurement windows with passive monitoring between them), and architecture choices optimized for the specific SoC in your glasses. The rPPG inference typically adds less than 2% to total device power consumption.

Can you work with our proprietary camera module?

Absolutely — per-customer camera tuning is our core differentiator. We build custom preprocessing for your specific lens, sensor, and optical path, and we have worked with prototype and pre-production optics whose specifications were still in flux. We calibrate to your exact focal length, aperture, distortion characteristics, and noise profile. Send us your camera module specs and sample captures, and we will scope the build.

How accurate is rPPG from such a small facial region?

The periorbital region is surprisingly rich in vascular signal — the supraorbital and supratrochlear arteries provide strong pulsatile flow close to the skin surface. Custom models trained specifically on this region of interest consistently outperform generic full-face models that have been cropped or adapted to small regions. The key is purpose-built preprocessing and training data captured from the actual camera geometry, not repurposed full-face datasets.

What's the integration path for our existing glasses firmware?

We deliver optimized C/C++ inference libraries compiled for your target SoC, with a minimal API surface. Integration typically involves initializing the inference engine, feeding camera frames, and reading vital sign outputs. We also provide BLE health service profiles for streaming data to companion devices and a companion app SDK (iOS and Android) for visualization and data export. Our engineering team works directly with your firmware team through integration and validation.

Request A Demo

See how contactless vitals can transform your healthcare delivery.