By Louis DiPietro  

Using a miniature camera and a customized deep neural network, Cornell researchers have developed a first-of-its-kind wristband that tracks full-body posture in 3D. 

Called BodyTrak, it is the first wearable to track the full body pose with a single camera. If integrated into future smartwatches, BodyTrak could be a game-changer in monitoring user body mechanics in physical activities where precision is critical, said Cheng Zhang, assistant professor of information science and the paper’s senior author. 

 “Since smartwatches already have a camera, technology like BodyTrak could understand the user’s pose and give real-time feedback,” Zhang said. “That’s handy, affordable and does not limit the user’s moving area.” 

A corresponding paper, “BodyTrak: Inferring Full-body Poses from Body Silhouettes Using a Miniature Camera on a Wristband,” was published in September in the Proceedings of the Association for Computing Machinery (ACM) on Interactive, Mobile, Wearable and Ubiquitous Technologies and presented at the Ubiquitous Computing conference (UbiComp 2022). 

It’s the latest body-sensing system from the SciFiLab – based in the Cornell Ann S. Bowers College of Computing and Information Science – a group that has previously developed similar deep learning models to track hand and finger movements and facial expressions, and even to recognize silent speech.  


The secret to BodyTrak is not only the dime-sized camera on the wrist but the customized deep neural network behind it. This deep neural network – a method of artificial intelligence in which computers learn by correcting their own mistakes – reads the camera’s rudimentary images, or “silhouettes,” of the user’s body in motion and virtually recreates the positions of 14 body joints in 3D and in real time. Put another way, the model accurately fills out and completes the partial images captured by the camera, said Hyunchul Lim, a doctoral student in the field of information science and the paper’s lead author.   
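To make the idea concrete, the sketch below shows one way such a model could be structured: a small convolutional network that takes a low-resolution silhouette frame from a wrist camera and regresses the 3D positions of 14 body joints. This is a minimal illustration of the approach described above, not the authors’ actual model; the architecture, image size, and layer choices are assumptions.

```python
# Illustrative sketch only (not the BodyTrak model): a small CNN that maps a
# single-channel wrist-camera silhouette to 3D positions of 14 body joints.
import torch
import torch.nn as nn

NUM_JOINTS = 14  # assumption: full-body pose represented as 14 joints

class SilhouetteToPose(nn.Module):
    def __init__(self):
        super().__init__()
        # Compact encoder for a 1-channel (silhouette) image
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regression head: predict (x, y, z) for each joint
        self.head = nn.Linear(64, NUM_JOINTS * 3)

    def forward(self, x):
        # x: (batch, 1, H, W) silhouette frames from the wrist camera
        features = self.encoder(x).flatten(1)
        return self.head(features).view(-1, NUM_JOINTS, 3)

if __name__ == "__main__":
    model = SilhouetteToPose()
    frame = torch.rand(1, 1, 64, 64)   # one hypothetical camera frame
    pose = model(frame)                # shape: (1, 14, 3) joint coordinates
    print(pose.shape)
```

In practice, a model like this would be trained on pairs of wrist-camera frames and ground-truth body poses so it learns to infer the parts of the body the camera cannot see.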

“Our research shows that we don’t need our body frames to be fully within camera view for body sensing,” Lim said. “If we are able to capture just a part of our bodies, that is a lot of information to infer to reconstruct the full body.” 

Maintaining privacy for bystanders near someone wearing such a sensing device is a legitimate concern when developing these technologies, Zhang and Lim said. They said BodyTrak mitigates privacy concerns for bystanders since the camera is pointed toward the user’s body and collects only partial images of the user. They also recognize that today’s smartwatches don’t yet have cameras small and powerful enough, or battery life long enough, to support full-body sensing, but conceivably could in the coming years.  

Along with Lim and Zhang, paper co-authors are Yaxuan Li of McGill University; Fang Hu of Shanghai Jiao Tong University; and Matthew Dressa ’22, Jae Hoon Kim ’23, and Ruidong Zhang, a doctoral student in the field of information science, all of Cornell. 

Louis DiPietro is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.