Srinath Sridhar Wins An NSF CAREER Award For Robotic Perception Of Human Physical Skills
- Posted by Jesse Polhemus
- on March 30, 2022

"Everyday human activities," says Brown CS Professor Srinath Sridhar, "are impressive feats of physical intelligence – from careful placement of feet to avoid obstacles when walking to the precise and highly coordinated movement of fingers to type a sentence. Robots with even a fraction of human physical intelligence could revolutionize lives by automating repetitive tasks. Despite advances, however, robots with such physical abilities remain elusive."
But perhaps not for long: Srinath has just received a National Science Foundation (NSF) CAREER Award that will take a step toward these more capable robots. CAREER Awards are given in support of outstanding junior faculty teacher-scholars who excel at research, education, and the integration of the two within the context of an organizational mission.
"This project moves in that direction," he tells us, "by building 3D computer vision and machine learning algorithms for automatically analyzing human skills from large-scale image and video collections readily available on the internet or captured in the wild. It will produce a large repository of high-level physical skills that can then be transferred to robots. Robotics4All, the education and outreach component of the project, will integrate theory and practice by acquiring several cameras and robot arms to teach an advanced course, a semester-long undergraduate research experience program, and a virtual workshop program. Furthermore, the project will lead to advances in computer vision-based understanding of human physical skills, in-the-wild capture of a significant amount of skills data, and help to solve problems outside of CS."
To meet the research goals, Srinath explains, the project will advance the state of the art in computer vision-based modeling and estimation of human physical skills from large-scale visual data. Existing methods are limited to operating in structured environments and cannot capture interactions in unconstrained visual data taken in cluttered environments like homes. To address this limitation, the project will build:
- neural networks to model and estimate human physical properties such as shape and articulation from unconstrained data,
- neural networks that model and estimate human motion and interaction from videos, and
- methods for gathering and analyzing large amounts (10,000 person-hours) of unconstrained videos of human activities to build a repository of physical skills. This repository will inform the transfer of skills from humans to robots.
In brief, the long-term aim of Srinath's research is to demonstrate that learning from images and videos is a viable path for robots to gain human-like physical abilities.
He joins many previous Brown CS winners of the award, including (most recently) Malte Schwarzkopf, Daniel Ritchie, George Konidaris, and Theophilus A. Benson.
For more information, contact Brown CS Communication Outreach Specialist Jesse C. Polhemus.