Brown CS News

Yu Cheng Joins Brown CS As Assistant Professor

    Click the link that follows for more news about our history-making CS With Impact expansion.

    “Solving problems is thrilling for me,” says Yu Cheng, “and the exciting thing about being a theoretical computer scientist is knowing that algorithms and theory will always have a part to play in the field.” This fall, he joins Brown CS as assistant professor. He’s the latest hire in the multi-year CS With Impact campaign, our largest expansion to date. 

    “I use the analogy of a high school science project, which I heard from Jason Lee of Princeton,” Yu says. “You build a rocket that goes up a hundred feet, everyone’s happy, and you go home. But to get to the moon, you need to understand the engine, the aerodynamics. You need to find out how these things work. If you look back at the history of computer science, there have been exciting times when theoreticians were leading the curve. Cryptography is an excellent example: complexity theory played an important role in the development of modern cryptography.”

    Yu was nine years old when his family bought their first computer, and as the years went by and the entertainment it offered became more sophisticated (first Minesweeper, then Chinese chess, then Doom), his interest grew: not just in playing games but in learning how to create them, which he saw as a mathematical question.

    “My parents were testing me by putting me in all kinds of classes,” he says, “but I never wanted to learn to draw and so on. Even the chess one I went to, I stayed with it because I was hoping to one day code such games. I was already into math, and I did pretty well at the programming class, so my teachers got excited. They had me take something more advanced at a local high school, and got me into programming contests.” 

    Yu did well enough in these competitions that the government waived the required university entrance exam: he left home for Shanghai Jiao Tong University, eventually receiving his B.S. in Computer Science. And he kept competing.

    “My first year in college,” Yu explains, “I started training for the ACM International Collegiate Programming Contest, or ICPC. You form a three-student team, and they put everyone in a huge basketball stadium: clearing the floor, laying the wires, setting up all the computers. I met people my age, and I got to travel abroad, so there were a lot of opportunities that might not have been accessible to the average college student back then. It helped make CS feel like a natural path for me. I had always loved problem solving, and for me it’s really meaningful, both in competition and research, when I feel something just click.”

    Initially interested in computer vision, then data mining, Yu found the course of his research forever changed by a talk given by Shang-Hua Teng, his future advisor at the University of Southern California (USC): “I didn’t even realize how famous he was, but the talk was mind-blowing. It was on solving graph problems faster than ever before, and the solutions were so elegant. I was so impressed that when it came time for questions, I stood up and asked if he was taking on grad students.”

    Initially rejected, Yu persevered and eventually accepted a TA offer from Teng and two other faculty members that brought him to USC for his PhD. “And that’s how I got into theory and algorithms,” he says, explaining that his doctoral work was largely theoretical, including some work in game theory. His adventures with the ICPC continued, both as student and coach. After an internship at Google, he realized that academia was a real option, so Yu began postdoctoral work at Duke University, switching his focus to machine learning theory.

    Asked about his current research, Yu explains that he considers himself many things at once: a theoretical computer scientist, a mathematician, someone with wide-ranging interests. One of his favorite themes, he says, is robustness in machine learning: developing algorithms with provable guarantees. Along with colleagues from the University of Chicago and Northwestern University, he recently organized a special quarter on the subject at the Institute for Data, Econometrics, Algorithms, and Learning (IDEAL), a multi-institution collaboration based in Chicago.

    Much of his work is in non-convex optimization, and for his research in this area, Yu was invited as one of approximately thirty scientists to visit the Institute for Advanced Study for the Special Year on Optimization, Statistics, and Theoretical Machine Learning. “One of the important messages of the field,” he says, “is that some functions are non-convex but have no bad local optima. Even though a function is non-convex, you can still efficiently compute a globally optimal solution. It’s pretty exciting: while non-convex optimization is NP-hard in the worst case, in modern machine learning you can actually explain why practitioners can run gradient descent and be happy about the results.”
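
    To make the landscape Yu is describing concrete, here is a minimal Python sketch of a standard textbook example, not drawn from his own work: the function f(x) = (x^2 - 1)^2 is non-convex, yet every local minimum is a global minimum, so plain gradient descent finds a global optimum from essentially any starting point.

        # Toy example (not from Yu's research): a non-convex function whose
        # local minima are all global, so plain gradient descent still succeeds.
        # f(x) = (x**2 - 1)**2 has global minima at x = +1 and x = -1; its only
        # other critical point, x = 0, is a local maximum, not a bad local minimum.

        def f(x):
            return (x**2 - 1) ** 2

        def grad_f(x):
            # derivative of (x**2 - 1)**2 is 4x(x**2 - 1)
            return 4 * x * (x**2 - 1)

        def gradient_descent(x0, lr=0.05, steps=200):
            x = x0
            for _ in range(steps):
                x -= lr * grad_f(x)
            return x

        for start in (-2.0, -0.3, 0.7, 2.5):
            x = gradient_descent(start)
            print(f"start {start:+.2f} -> x = {x:+.4f}, f(x) = {f(x):.2e}")

    The research question, as Yu frames it, is showing which realistic machine learning objectives have this kind of benign landscape, so that gradient descent provably works despite the worst-case hardness.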

    Yu is also interested in robust statistics: “Can we design algorithms that are robust when a small fraction of the data is adversarially corrupted? For instance, what if one percent of your data is arbitrarily changed? Can you output something reliable?” As just one example, he cites researchers who have shown that attaching small black and white stickers to stop signs can confuse self-driving vehicles. His work in this area focuses on developing faster and simpler algorithms for high-dimensional robust statistics, and was recently mentioned in a news story from the Simons Institute for the Theory of Computing.
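
    A minimal Python sketch of the one-dimensional version of the problem Yu poses (a standard illustration, not his high-dimensional algorithms): when one percent of the samples are replaced by arbitrary values, the sample mean can be dragged arbitrarily far away, while a robust estimator such as the median barely moves.

        # Toy sketch (a standard one-dimensional illustration, not Yu's
        # high-dimensional algorithms): corrupt 1% of the samples arbitrarily
        # and compare a fragile estimator (the mean) with a robust one (the median).
        import random
        import statistics

        random.seed(0)
        n = 10_000
        data = [random.gauss(5.0, 1.0) for _ in range(n)]  # clean samples, true mean = 5.0

        # The adversary replaces one percent of the points with arbitrary values.
        corrupted = list(data)
        for i in random.sample(range(n), k=n // 100):
            corrupted[i] = 1e6

        print(f"mean of clean data:       {statistics.mean(data):.3f}")
        print(f"mean of corrupted data:   {statistics.mean(corrupted):.3f}")    # dragged to ~10,000
        print(f"median of corrupted data: {statistics.median(corrupted):.3f}")  # still close to 5

    In high dimensions, simple fixes like taking a median no longer give strong guarantees, and that harder regime is where the faster and simpler algorithms in Yu’s work come in.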

    “High-dimensional data poses challenges,” he says, “but eventually we want to get to these problems, open these black boxes, and have robust estimators and learning algorithms that are tolerant to errors, contamination, and misspecification in data.” 

    Yu expects that at Brown there won’t be any shortage of students who are excited about opening those black boxes with him. “These might be famous last words,” he says, “but I’m also hoping to have more time to do research. I’m currently in a math department, and I was trained with a computer science background from my undergraduate days, so I’m more familiar with the CS environment.” He’s looking forward to starting a programming contest at Brown, hoping to give students some of the same opportunities and excitement that he’s experienced. 

    When socially responsible computing is mentioned, Yu lights up. “It’s a really important topic,” he says, “and robustness is very much related. Think about the canonical example of facial recognition software that’s been trained on predominantly white faces. Accuracy for other races is sacrificed because the software is trying to maximize accuracy on white faces. To get facial recognition right, you need objectives beyond accuracy as a single score – you need to care about other things. Robustness shares that aspect.”

    “Make no mistake,” he says, “algorithms are deciding whether people can get loans, or how the criminal justice system hands down sentences. We have to know that they’re doing the right thing in circumstances like these. I think robustness and theory have a big role to play.”

    On the topic of theory’s applications, Yu finds it interesting that he’s found so much fertile ground in machine learning, a research area where theory historically hasn’t led the curve: “Look at something like AlphaGo! It just worked, and deep neural networks became the dominant approach for many computer vision and natural language processing tasks. Even though neural nets were proposed decades ago, I’d say that the theory community contributed little to this new era of deep learning. Many theoretical machine learning researchers are now taking things that already work for practical problems and proving why they work. Understanding the foundations of deep learning is critical for moving forward, and sometimes good heuristics can arise from the intuition behind mathematical proofs.”

    “And as a theory person,” Yu says, “I’m not too worried about what I’m proving – as long as I have that aha moment! If I’m proving something that someone else couldn’t prove or that I couldn’t prove three months ago, I’m happy.”

    For more information, click the link that follows to contact Brown CS Communications Manager Jesse C. Polhemus.