Phone cameras can now measure your pulse and breathing

The technique could expand the capabilities of telemedicine.

Telemedicine has taken off during the pandemic, as healthcare delivered through a camera carries no COVID-19 risk.

There are limits to what doctors and nurses can learn remotely, however — diagnostic tests and medical measurements that may require physical contact, or close examination of nuances that can be lost over the internet.

Researchers at the University of Washington have devised a way to measure patients’ respiration rates and pulse via a computer or smartphone camera, in an attempt to make telemedicine more accurate and useful.

According to UW News, the system, called MetaPhys, works by using a machine learning algorithm to identify and track changes in how light reflects off a patient’s face. Those subtle differences correlate with changes in blood flow, from which pulse and respiration rate can be deduced.
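The core idea, recovering a pulse from tiny fluctuations in skin brightness, is known as remote photoplethysmography. As a rough illustration only (this is not the MetaPhys algorithm, and the function name and frequency band are assumptions for the sketch), one could average the brightness of a face region in each video frame and look for the dominant frequency in the plausible heart-rate band:

```python
import numpy as np

# Hypothetical sketch of remote photoplethysmography, not the MetaPhys code:
# blood flow subtly changes how much light skin reflects, so the average
# pixel intensity of a face region oscillates at the heart rate.

def estimate_pulse_bpm(face_frames, fps=30.0):
    """face_frames: array of shape (n_frames, h, w) -- grayscale face crops."""
    # 1. Average each frame's intensity into a 1-D signal over time.
    signal = face_frames.reshape(len(face_frames), -1).mean(axis=1)
    # 2. Remove the constant baseline (overall lighting level).
    signal = signal - signal.mean()
    # 3. Find the strongest frequency within a plausible heart-rate band.
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)   # frequencies in Hz
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)              # roughly 42-240 bpm
    peak_hz = freqs[band][np.argmax(power[band])]
    return peak_hz * 60.0                               # beats per minute
```

A real system has to contend with everything this toy version ignores: finding the face, compensating for motion and lighting changes, and deciding which pixels actually carry the physiological signal, which is precisely where the machine learning comes in.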

“Machine learning is pretty good at classifying images. If you give it a series of photos of cats and then tell it to find cats in other images, it can do it,” Xin Liu, the study’s lead author and a Paul G. Allen School of Computer Science & Engineering doctoral student, told UW News. 

“But for machine learning to be helpful in remote health sensing, we need a system that can identify the region of interest in a video that holds the strongest source of physiological information — pulse, for example — and then measure that over time.”

The team’s original iteration of the system was presented this past December at the Neural Information Processing Systems conference; this second version, which the researchers presented on April 8 at the Association for Computing Machinery’s Conference on Health, Inference, and Learning, avoids some of the pitfalls that snared its predecessor.

“There are large individual differences in physiological processes, making designing personalized health sensing algorithms challenging,” the researchers wrote in their arXiv-available paper. “Existing machine learning systems struggle to generalize well to unseen subjects or contexts and can often contain problematic biases.”

For instance, because the algorithm is looking at the reflection of light off the skin, different backgrounds, lighting conditions, and skin colors can throw MetaPhys off.

The first version of the system was trained using a dataset filled with people’s faces, as well as their independently measured pulse and respiration rates. But presented with data too different from this dataset, its performance declined.

MetaPhys improves on these issues by creating a personalized machine learning model for each patient, picking out where to focus under that patient’s particular conditions, including lighting and skin color. Even so, the system needs work.
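The personalization idea can be loosely sketched as fine-tuning: start from a model trained on many subjects, then take a few gradient steps on a small calibration set from the new patient. The sketch below is an assumption-laden illustration using a toy linear model, not the researchers’ meta-learning method; the function name and defaults are invented for the example:

```python
import numpy as np

# Hypothetical illustration of per-patient adaptation, not the MetaPhys code:
# begin with weights shared across all subjects, then adapt them on a few
# labeled samples from one patient (their lighting, skin tone, camera).

def adapt_to_patient(shared_weights, x_patient, y_patient, lr=0.1, steps=100):
    """Fine-tune linear weights on a small calibration set from one patient."""
    w = shared_weights.copy()
    for _ in range(steps):
        pred = x_patient @ w
        # Gradient of mean squared error with respect to the weights.
        grad = x_patient.T @ (pred - y_patient) / len(y_patient)
        w -= lr * grad
    return w
```

The appeal of this approach is that the shared model supplies a strong starting point, so only a handful of patient-specific samples are needed to correct for individual differences.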

“We acknowledge that there is still a trend toward inferior performance when the subject’s skin type is darker,” Liu said. 

“This is in part because light reflects differently off of darker skin, resulting in a weaker signal for the camera to pick up. Our team is actively developing new methods to solve this limitation.”

That issue, which is obviously a cause for concern, slots into the category of what the authors call “individual differences,” one of four major challenges facing MetaPhys and other camera-based telehealth measurement systems. Facial hair is another such difference to overcome.

“Environmental differences” include variations in lighting, while the “sensor differences” between cameras can mean differences in sensitivity. Finally, there are the “contextual differences” in a video, if the patient is performing an action that the algorithm did not encounter in its training data.

Despite the challenges, systems like MetaPhys will likely be needed if telehealth is going to fully live up to the hype. 
