Phone cameras can now measure your pulse and breathing

The technique could expand the capabilities of telemedicine.

Telemedicine has taken off since the start of the pandemic, as healthcare delivered through a camera carries no COVID-19 risk.

There are limits to what doctors and nurses can learn remotely, however: some diagnostic tests and medical measurements require physical contact, and nuances that call for close examination can be lost over the internet.

Researchers at the University of Washington have devised a way to measure patients’ respiration rates and pulse via a computer or smartphone camera, in an attempt to make telemedicine more accurate and useful.

According to UW News, the system, called MetaPhys, works by using a machine learning algorithm to identify and track changes in how light reflects off a patient’s face. Those subtle differences correlate with changes in blood flow, from which pulse and respiration rate can be deduced.
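The article doesn’t publish the team’s code, but the underlying idea, known as remote photoplethysmography, can be sketched in a few lines. The Python snippet below is a minimal, hypothetical illustration, assuming a fixed face region, a 30 fps camera, and synthetic frame data, rather than the UW team’s actual model: it averages the green channel of each frame (the channel most sensitive to blood-volume changes), band-pass filters the result to plausible heart-rate frequencies, and reads the pulse off the dominant spectral peak.

```python
# A minimal sketch of remote photoplethysmography, NOT the UW team's model.
# Assumes frames are already cropped to a face region; all data is synthetic.
import numpy as np
from scipy.signal import butter, filtfilt

FPS = 30  # assumed camera frame rate

def pulse_from_frames(frames: np.ndarray, fps: int = FPS) -> float:
    """Estimate pulse in beats per minute from face-region video frames.

    frames: array of shape (n_frames, height, width, 3), RGB.
    """
    # 1. Collapse each frame to one number: mean green-channel intensity,
    #    since the green channel tracks blood-volume changes most strongly.
    signal = frames[:, :, :, 1].mean(axis=(1, 2))

    # 2. Band-pass to the plausible heart-rate band, 0.7-4 Hz (42-240 bpm),
    #    removing slow lighting drift and high-frequency noise.
    nyquist = fps / 2
    b, a = butter(3, [0.7 / nyquist, 4.0 / nyquist], btype="band")
    filtered = filtfilt(b, a, signal - signal.mean())

    # 3. The dominant frequency in the band is the pulse.
    freqs = np.fft.rfftfreq(len(filtered), d=1 / fps)
    spectrum = np.abs(np.fft.rfft(filtered))
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_hz = freqs[band][spectrum[band].argmax()]
    return peak_hz * 60  # Hz -> beats per minute

# Synthetic demo: 10 seconds of frames whose brightness pulses at 1.2 Hz (72 bpm).
t = np.arange(10 * FPS) / FPS
brightness = 128 + 2 * np.sin(2 * np.pi * 1.2 * t)
frames = np.tile(brightness[:, None, None, None], (1, 8, 8, 3))
print(f"estimated pulse: {pulse_from_frames(frames):.0f} bpm")
```

The real system replaces steps 1 and 3 with a learned model, which is exactly where the region-of-interest problem Liu describes comes in.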

“Machine learning is pretty good at classifying images. If you give it a series of photos of cats and then tell it to find cats in other images, it can do it,” Xin Liu, the study’s lead author and a Paul G. Allen School of Computer Science & Engineering doctoral student, told UW News. 

“But for machine learning to be helpful in remote health sensing, we need a system that can identify the region of interest in a video that holds the strongest source of physiological information — pulse, for example — and then measure that over time.”

The team’s original iteration of the system was presented this past December at the Neural Information Processing Systems conference; this second version, which the researchers presented on April 8 at the Association for Computing Machinery’s Conference on Health, Inference, and Learning, avoids some of the pitfalls that snared its predecessor.

“There are large individual differences in physiological processes, making designing personalized health sensing algorithms challenging,” the researchers wrote in their arXiv-available paper. “Existing machine learning systems struggle to generalize well to unseen subjects or contexts and can often contain problematic biases.”

For instance, because the algorithm relies on how light reflects off the skin, differences in background, lighting, and skin color can throw MetaPhys off.

The first version of the system was trained on a dataset of people’s faces paired with their independently measured pulse and respiration rates. But when presented with data too different from that training set, its performance declined.

MetaPhys addresses these issues by building a personalized machine learning model for each patient, learning where to look under different conditions, including lighting and skin color. Even so, the system still needs work.
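The article doesn’t detail how the per-patient models are built. A common pattern for this kind of personalization (and one the name MetaPhys hints at) is meta-learning, where a shared model is adapted to a new person with a few gradient steps on a short calibration clip. The PyTorch sketch below is an illustrative assumption, not the authors’ code; `personalize`, the toy model, and the random calibration data are all hypothetical.

```python
# A hedged sketch of per-patient personalization via few-shot adaptation,
# in the spirit of what the article describes; the model, data, and step
# counts here are illustrative assumptions, not the paper's actual method.
import copy
import torch
import torch.nn as nn

def personalize(base_model: nn.Module,
                calib_frames: torch.Tensor,   # short calibration clip
                calib_pulse: torch.Tensor,    # matching reference pulse signal
                steps: int = 5,
                lr: float = 1e-3) -> nn.Module:
    """Return a copy of the shared model adapted to one patient.

    A few gradient steps on a small per-patient calibration clip let the
    model adjust to that person's skin tone, lighting, and camera.
    """
    model = copy.deepcopy(base_model)  # keep the shared weights intact
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(calib_frames), calib_pulse)
        loss.backward()
        opt.step()
    return model

# Illustrative use with a toy stand-in model and random "calibration" data.
toy = nn.Sequential(nn.Flatten(), nn.Linear(3 * 36 * 36, 1))
frames = torch.randn(16, 3, 36, 36)   # 16 calibration frames
pulse = torch.randn(16, 1)            # reference signal for those frames
patient_model = personalize(toy, frames, pulse)
```

In a full meta-learned system, the shared weights would themselves be trained so that these few adaptation steps work well across many different people, lighting setups, and cameras.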

“We acknowledge that there is still a trend toward inferior performance when the subject’s skin type is darker,” Liu said. 

“This is in part because light reflects differently off of darker skin, resulting in a weaker signal for the camera to pick up. Our team is actively developing new methods to solve this limitation.”

That issue, which is obviously a cause for concern, falls into the category the authors call “individual differences,” one of four major challenges facing MetaPhys and other camera-based telehealth measurement systems. Facial hair is another individual difference to overcome.

“Environmental differences” include variations in lighting, while the “sensor differences” between cameras can mean differences in sensitivity. Finally, there are the “contextual differences” in a video, such as when a patient performs an action the algorithm never encountered in its training data.

Despite the challenges, systems like MetaPhys will likely be needed if telehealth is going to fully live up to the hype. 
