Brown CS News

Distinguished Lecture: Josh Tenenbaum On Building Machines That Learn And Think Like People



    Professor Josh Tenenbaum of MIT visited Brown CS last month to deliver the thirty-eighth lecture ("Building Machines That Learn And Think Like People") in the Distinguished Lecture Series.

    After an introduction by Brown CS Professor Daniel Ritchie, who hosted the lecture, Tenenbaum moved quickly to the central theme of his talk, saying that we have artificial intelligence technologies "but no artificial intelligence, no flexible, general-purpose common sense". The challenge for scientists, he said, is to "reverse-engineer how intelligence works" in the human brain, and despite all our advances, we have yet to create artificial intelligence that can match the model-building abilities of an eighteen-month-old child.

    Inspired by human cognitive development, which he called "the only known scaling path to intelligence that actually works," Tenenbaum explained his belief that the goal of artificial intelligence isn't mere pattern recognition but the growth, through small steps, into sophisticated behaviors (understanding what we see, explaining it, imagining the unseen, making plans). A large crowd that more than filled CIT 368 listened as he walked through some of the tools he's employed in the pursuit of machines that learn and think like people: a bi-directional loop of CS and engineering to "reverse-engineer the common-sense core," probabilistic programming to create an intuitive physics engine, and work with psychologists to understand social interactions.

    "The child is the ultimate coder," he said, "and learning is programming the game engine of your life....The goal of learning is to make your code more awesome." 

    Afterward, we caught up with Professor Ellie Pavlick of Brown CS, who attended the talk, and she highlighted Josh's eagerness to tackle difficult problems as well as his unorthodox perspective. "I think the point that Josh hit on at the end, the question 'what do we build in versus what do we learn from scratch,' is possibly the biggest question right now in AI and cognitive science," she said. "Deep learning, taken as a whole, is often seen as representing the 'build nothing, learn everything' stance. Josh's work exemplifies something closer to the other side, where learning happens on top of a foundation of existing structured knowledge about the world. I've never met anyone who doesn't know and admire the work Josh is doing. It's theoretically well motivated and the empirical results are impressive."

    For more information, contact Brown CS Communication Outreach Specialist Jesse C. Polhemus.