Eickhoff, Ritchie, Tellex, And Tompkin Win OVPR Seed Awards
- Posted by Jesse Polhemus
- on May 14, 2019
Click the links that follow for more content about OVPR Seed Awards, Daniel Ritchie, Stefanie Tellex, James Tompkin, and other recent accomplishments by our faculty.
Professor of Medical Science and Computer Science Carsten Eickhoff and Professors Daniel Ritchie, Stefanie Tellex, and James Tompkin of Brown CS have just received Seed Awards from Brown’s Office of the Vice President for Research (OVPR) to help them compete more successfully for large-scale, interdisciplinary, multi-investigator grants. They join numerous previous Brown CS recipients of OVPR Seed Awards, including (most recently) Ian Gonsher, Jeff Huang, and Stefanie Tellex.
Carsten Eickhoff
"Delirium is common after acute stroke," Carsten says, "and likely represents an impediment to recovery. However, the concrete manifestations of delirium comprise a spectrum, and it is unclear whether various patterns of symptoms may have differential effects on outcomes."
Many of these symptoms, he says, are intimately connected, including arousal, attention, and activity level, and as a result, delirium phenotypes have been traditionally labeled as hyperactive, hypoactive, and mixed. Unfortunately, patients with hypoactive delirium are known to be underdiagnosed using standard screening tools, and the presence of pre-existing neurological symptoms only magnifies this challenge.
In his research, Carsten proposes an innovative approach aimed at diagnosing and categorizing delirium using wearable sensors capable of measuring activity on a granular scale. Activity data will then be analyzed using machine learning techniques to identify delirium phenotypes corresponding to patient activity patterns. He hypothesizes that such patterns may also be predictive of early motor recovery after stroke, and proposes to apply similar machine learning techniques to identify activity-based phenotypes corresponding to post-stroke functional outcomes.
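As a purely illustrative sketch (not a description of Carsten's actual pipeline), the snippet below clusters hypothetical per-patient activity summaries into candidate phenotypes with scikit-learn; every feature name and value here is invented for illustration.

```python
# Illustrative sketch only: grouping per-patient activity features derived from
# wearable sensors into candidate delirium phenotypes. The features, values,
# and choice of model are assumptions, not the study's actual design.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical per-patient summaries of wearable accelerometer data:
# [mean activity, activity variance, fraction of time at rest, day/night ratio]
activity_features = np.array([
    [0.82, 0.30, 0.15, 1.9],   # restless, high-activity pattern
    [0.12, 0.05, 0.80, 0.7],   # low-activity pattern
    [0.45, 0.40, 0.40, 1.1],   # fluctuating pattern
    [0.10, 0.04, 0.85, 0.6],
    [0.78, 0.25, 0.20, 2.1],
])

# Standardize features, then group patients into k candidate phenotypes
# (loosely analogous to hyperactive / hypoactive / mixed presentations).
scaled = StandardScaler().fit_transform(activity_features)
phenotypes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)
print(phenotypes)
```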
Daniel Ritchie
"People spend a large percentage of their lives indoors," Daniel explains, "in bedrooms, living rooms, offices, kitchens, etc. The demand for virtual versions of these spaces has never been higher, with virtual reality, augmented reality, online furniture retail, computer vision, and robotics applications all requiring high-fidelity virtual environments."
To be truly compelling, he says, a virtual interior space must support the same interactions as its real-world counterpart: VR users expect to interact with the scene around them, and interaction with the surrounding environment is crucial for training autonomous robots (e.g. opening doors and cabinets). Most object interactions are characterized by the way the object's parts move or articulate. Unfortunately, it's difficult to create interactive scenes at the scale demanded by the applications above because not enough articulated 3D object models exist. Large static object databases exist, but the few existing articulated shape databases are several orders of magnitude smaller.
Daniel intends to address this critical need by creating a large dataset of articulated 3D object models: that is, each model in the dataset has a type and a range of motion annotated for each of its movable parts. This dataset will be of the same order of magnitude as the largest existing static shape databases. He plans to accomplish this by aggregating 3D models from existing static shape databases and annotating them with part articulations, conducting the annotation process at scale with crowdsourcing tools (such as Amazon Mechanical Turk) through an easy-to-use, web-based annotation interface.
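As a hypothetical illustration of what a part-articulation annotation might record (the project's actual schema is not described in this post), here is one possible Python representation:

```python
# Illustrative sketch only: a possible record format for part-articulation
# annotations collected through a web-based crowdsourcing interface. Field
# names and joint types are assumptions, not the project's actual schema.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PartArticulation:
    part_id: str                  # identifier of the movable part (e.g. "door_left")
    joint_type: str               # e.g. "revolute" (hinge) or "prismatic" (slider)
    axis: Tuple[float, float, float]    # articulation axis in the object's local frame
    origin: Tuple[float, float, float]  # a point on the axis
    range_min: float              # lower limit of motion (radians or meters)
    range_max: float              # upper limit of motion

@dataclass
class ArticulatedModel:
    model_id: str                                  # ID in the source static shape database
    parts: List[PartArticulation] = field(default_factory=list)

# Example: a cabinet whose left door swings about 90 degrees on a vertical hinge.
cabinet = ArticulatedModel(
    model_id="static_db/cabinet_0421",
    parts=[PartArticulation("door_left", "revolute",
                            axis=(0.0, 0.0, 1.0), origin=(0.35, 0.0, 0.0),
                            range_min=0.0, range_max=1.57)],
)
```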
James Tompkin and Stefanie Tellex
"Teleoperation is an important means of robot control," James and Stefanie say, "with virtual reality (VR) teleoperation being a promising avenue for immersive control or ‘first-person’ view control. VR teleoperation is also especially promising for teaching robots how to learn from demonstration, as it enables a human to control robotic arm end effectors via VR wands to demonstrate how to conduct a task."
However, for the sense of sight, they explain that there's often a large difference between the freedom of view movement afforded to the human operator by the VR headset tracking system and the freedom of view movement afforded by the camera system: the robot typically has one or two cameras mounted on slowly moving articulated heads or limbs, whereas the human's head-mounted VR display provides rotation, binocular stereo, and motion parallax with very fast head pose changes across six degrees of freedom ('6DoF'). In effect, the robot is a camera operator trying to mimic the movements of the human to always match the desired view onto the scene. Even if the robot is mobile and articulated, it can rarely 'keep up' with the human's fast motion and show the correct view to the VR headset. This mismatch makes it very easy to induce VR sickness, which limits comfort and teleoperation duration.
Their research will use deep learning to develop photorealistic image synthesis methods that adapt the fixed camera views to the human's current VR view, reasoning about the scene geometry and plausibly filling in any view discrepancies. These methods also aim to be fast enough to keep up with the human's motion, improving teleoperation quality and reducing VR sickness. The work will extend Brown's ROS Reality open-source package for Unity-based VR robotic teleoperation.
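To illustrate the view-mismatch problem only (this is not the team's method), the sketch below forward-warps a robot camera image with depth to a slightly different head pose; the unfilled pixels left behind are the view discrepancies a learned synthesis model would need to fill in. All intrinsics, poses, and images here are invented.

```python
# Illustrative sketch only: warping a robot camera image (with depth) toward
# the VR operator's current head pose. Holes left by the warp are the view
# discrepancies that an image-synthesis model would plausibly fill in.
import numpy as np

H, W = 120, 160
fx = fy = 100.0; cx, cy = W / 2, H / 2             # assumed pinhole intrinsics
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]])

rgb = np.random.rand(H, W, 3)                      # stand-in robot camera image
depth = np.full((H, W), 2.0)                       # stand-in depth map (meters)

# Hypothetical relative transform from the robot camera to the headset view:
# here, a small sideways translation of the operator's head.
R = np.eye(3)
t = np.array([0.05, 0.0, 0.0])

# Back-project every pixel to 3D, move it into the headset frame, re-project.
u, v = np.meshgrid(np.arange(W), np.arange(H))
rays = np.linalg.inv(K) @ np.stack([u, v, np.ones_like(u)]).reshape(3, -1)
pts = rays * depth.reshape(1, -1)                  # 3D points in camera frame
pts_new = R @ pts + t[:, None]                     # 3D points in headset frame
proj = K @ pts_new
px = np.round(proj[0] / proj[2]).astype(int)
py = np.round(proj[1] / proj[2]).astype(int)

# Scatter source pixels into the new view; unfilled pixels remain "holes".
warped = np.zeros_like(rgb)
valid = (px >= 0) & (px < W) & (py >= 0) & (py < H)
warped[py[valid], px[valid]] = rgb.reshape(-1, 3)[valid]
print("hole fraction:", 1 - warped.any(axis=2).mean())
```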
For more information, please click the link that follows to contact Brown CS Communication Outreach Specialist Jesse C. Polhemus.