Brown CS News

The Nature Of Life, The Nature Of Thinking: Looking Back On Eugene Charniak’s Work And Life


    After more than forty years at Brown, after publishing four books and dozens of papers and earning some of the highest honors in his field, Eugene Charniak has retired. 

    “We’ve been extremely fortunate to have Eugene as a colleague,” says Professor John Savage. “His passion is a rare gift, and when you have that kind of drive, it makes you more inventive. He’s contributed very pointedly to computer science over the years.”

    In the pages below, we ask Eugene to look back with us on both work and life, and we begin in Chicago. It’s 1946, the year of his birth, and something in the classical repertoire is playing in the Charniak abode: Beethoven or Rimsky-Korsakov, maybe Respighi’s “Pines of Rome”. Music recurs in this story, along with philosophy, devouring entire shelves of library books, a love of family, the need to critique ourselves and our own thought. And then the desire to begin again, to think along new lines. Supported by familial love, sustained through hard work: realistic, but tree-climbing and questing for the moon, an optimism.  

    “There was no television in the house,” Eugene says. “My father wouldn’t allow it. I grew up with a lot of classical music and I played both clarinet and piano as a kid. I was a halfway decent clarinetist – first clarinet in my high school band, then third clarinet at University of Chicago. My father painted and had artistic aspirations, but life didn’t work out that way for him.”

    The Charniaks had hopes for their son, too, but dreamy, unexpected ones. “My father wasn’t your typical Jewish father,” Eugene says. “We were lower middle class in terms of income, but my parents had upper middle class ambitions. He wanted me to be a poet or novelist.” 

    Charniak’s first decade passed, he says, without giving his career much thought, but the start of the space age (Sputnik launched a few months after his eleventh birthday) brought a change: “I think a lot of us were aware that science had become really important for the country. That was where my abilities lay, and I always assumed I’d become a scientist.” With any exposure to computers still three years away, Eugene headed to the University of Chicago to study biology and genetics. And backspin.

    “I was once a fairly good ping-pong player,” he says. “Believe it or not, I was a member of a frat, and we had a ping-pong table in the basement. By my senior year, I was the second best undergraduate ping-pong player at the university. Later, at MIT, I was the best non-Asian player. We had a ping-pong table in the CIT for a while, but people complained that it began to smell like a gym. I’m not sure if I was top in the department, but I think I was one of the best.”

    Charniak finds a postscript that’s more teasing than competitive. “Way better than Andy [van Dam],” he adds.

    Just A Lark

    One of the initial factors that led Eugene toward CS was a misplaced but well-intended recommendation from a biologist. “The world had just started learning about DNA and RNA,” he remembers, “and I went to my first-semester biology professor and told her I thought this was incredibly interesting. She made a huge mistake and said that if I wanted to become a biologist, I should also become a physician, so she steered me toward a course on the anatomy and physiology of vertebrates. It was the least pleasant course I’ve ever taken – I hated memorizing the bones in cats! I decided that biology wasn’t for me and dropped the idea.”

    The whole decision reminds Eugene of a story about Francis Crick, the famed biologist and biophysicist: “At the start of his career, he asked himself what the two most important problems were, and he decided that one was the nature of life and the other was the nature of thinking. He told himself that the nature of thinking was too difficult and that he’d work on the nature of life. In retrospect, perhaps I did something like that, but I decided that the nature of life was too much like the wet biology that I hated, so I’d work on the nature of thinking.”

    Charniak and a computer first met in his junior year. One of his dormmates was taking a programming course in FORTRAN, which at the time was taught in the School of Business, and Eugene thought it might be worth a try. “It was just a lark!” he laughs. “I thought we’d be moving wires around, or something like that. And the amusing part is that this guy, who gave me my occupation, ended up marrying the woman I was dating at the time.”  

    It would be unfair to say that Eugene got the better end of the deal, but what a trade! A summer job with the Argonne National Laboratory’s high-energy physics group followed: they’d just gotten a computer and were trying to record streaming data off an experiment, so Charniak’s ability to program proved useful. Another of the puzzle pieces that eventually led Eugene to a career in computer science was in place. 

    One of the last was his decision to join an undergraduate physics journal club. “The word was that if you gave a talk at the club, the guy who organized it would write you a good letter of recommendation for grad school. I said I’d talk about what I did over the summer, and as part of the preparation, I went to the library to read up about computing. It was a shelf about this wide–” Eugene holds his hands about two feet apart. “–and I took them all out.”

    One of the books, focusing on Arthur Samuel’s now-legendary computer program that could play checkers against a human opponent, was a lightning rod for Charniak’s imagination. The idea that you could get a computer program to do something as humanlike as playing a strategy game proved to be the most fascinating thing that he’d ever seen. A few months later, Scientific American devoted an entire issue to computers, and it included an article on artificial intelligence by Marvin Minsky of MIT, one of the field’s pioneers.

    “I changed my mind,” Eugene says. “I wrote to MIT, asking them if I could apply late, and they said yes.”

    I Wasn’t Considered A Star

    As he starts to reflect on the move eastward, Eugene’s thoughts immediately return to his family. “One of my friends once told me how lucky I was with my parents,” he says. “And I think even as she was saying it, I realized it was true! My parents loved me greatly and almost never criticized me. Their kneejerk reaction was to support me. But they were horrified at my decision: MIT as a trade school was all they knew, and computer science instead of physics, what’s that? They didn’t realize what a smart move it was.”

    Though already aware of himself as an experimentalist (“I realized I could never be a theorist”), Eugene says that he still felt at home in his mother and father’s world of ideas and values, where family friends who worked as a math professor and a chemistry professor were held up as exemplars who lived a life of the mind. Offbeat but unpretentious, perhaps they were the ideal antecedents for someone who found his own path. “My parents were considered slightly weird,” Eugene notes, “but I didn’t know any better.” 

    And even if it’s more difficult to judge, science fiction may have played a role: “I was a fan almost from the first moment I could read. I remember going to the library, and I hadn’t quite figured out that fiction was alphabetical by author, but in this one section, there were these incredibly interesting books. I’ve probably read the entire Heinlein corpus up until Stranger in a Strange Land.”

    Arriving at MIT for his doctorate, Charniak immediately began working in Minsky’s AI lab, where his interest in language processing began. “I wasn’t considered a star,” he says. “Put it this way: I was good, and a few years later, I was good enough to get tenure at Brown, but I think in my mind and in the minds of people around me, I wasn’t good enough to have a named professorship. I didn’t hit my stride until my forties, when the field changed and I caught the wave of the change.”  

    Much more on that later. Eugene met his wife, Lynette, in his first year in Cambridge, and they’ve been together ever since. “She’s not an academic in the traditional sense,” he says, “but she does the New York Times Sunday crosswords in ink, and I can’t do them at all. She’s tremendous with languages, and she was a great nurse.” (For years, the two were colleagues at Brown, and Eugene’s students would come up to him with stories: “I went to the infirmary yesterday, and I met your wife!”)

    After graduating from MIT, Charniak took an offer at a research institute in Lugano, Switzerland. He was there for two years, then spent another two in Geneva, but he and Lynette had always planned a return to the United States. Jim Hendler, now Director of the Institute for Data Exploration and Applications and the Tetherless World Professor of Computer, Web and Cognitive Sciences at Rensselaer Polytechnic Institute, met Eugene during Charniak’s one-year visiting position at Yale that followed. A few years later, he became one of Eugene’s grad students.

    “Eugene cares immensely about his students,” he says, “and I give him all the credit for teaching me how to work with mine. Learning how to do just what your advisor says is easy, but asking yourself what you might be doing wrong and what someone else might come up with is hard. When my students and I talk about what natural language can and can’t do, I still use examples Eugene came up with. He taught us how to think.”

    Eugene came to Brown in 1978, co-founding the Department of Computer Science with Tom Doeppner, Steve Reiss, John Savage, Robert Sedgewick, Andy van Dam, and Peter Wegner. Andy notes that Eugene is the only one of his colleagues whose courses he’s attended. “I followed his AI course in the early 80’s for practically the whole semester,” van Dam remembers. “Eugene commanded the material, paced things well, and I liked his informal lecturing style. He excelled at answering questions on his feet.”  

    Were things different back then? Eugene’s offhand response (“to some degree I’ve sleepwalked through life”) reveals a focus on neither the counterculture nor the zeitgeist: “In retrospect, it’s obvious that the 60’s and 70’s were a tumultuous time, but somehow I didn’t notice. I always loved it here – I like the fact that I can live near campus and walk to work every day. I’ve always thought that students at Brown are simply great, and any time I read an article that uniformly portrays Ivy League students as snobs who’ve never had to work for their privilege, I get angry, because I know that our students don’t take things for granted. They work very hard.”

    Life and career intersected in 1981, when Eugene received tenure and the couple’s first child was born. The momentous event brought only a single day of stress, when Eugene made a trip to the cinema to avoid thinking about the tenure committee’s decision. “I’ve gone through life mostly oblivious to impostor syndrome and worries about whether I was a star or not,” he says. “I’ve been very fortunate that I could just go to school every day and enjoy myself, and I wish everybody could be that fortunate. My parents had a huge hand in all of it: they believed in me and assumed things would work out fine, so I always assumed the same thing.”

    And thanks to their efforts, the passion that John Savage mentioned was already in full flower. “I did what I found interesting,” Eugene says. “If I don’t find it interesting, I thought, why should I become a professor?”  

    BS And Everything After 

    Talking about the early portion of Eugene Charniak’s career is difficult for a single reason: he thinks it’s BS.

    “Arguably the only good quip I ever made,” he says, “was that I spent my life in two phases: the statistical paradigm that I’m currently in, which began around 1990, and the heuristic programming paradigm, which was Before Statistical, or BS.”

    Uninteresting, unprofitable, not entirely regretted, but mostly forgettable: Eugene is completely candid about his early research, the Marvin Minsky approach that he’d been trained in. “By 1990, I’d reached the end of my tether,” he says, “and I couldn’t make any progress with my research agenda, which was trying to get computers to understand language. Today, with modern machine learning, we’re so much further along: we’re better off intellectually, and it’s real science. I cringe when I remember how we explicitly rejected the idea that AI should be approached by machine learning. It wasn’t that we hadn’t heard of ML, it’s that we’d heard of it and we rejected it.”

    Charniak explains that the worldview he shared with his early peers began with a question: who’s smarter, us or a computer? We are, they said, and therefore if humans want a computer to figure out how to perform a task, humans are better teachers, and our time is better spent teaching a computer than letting it teach itself. “That’s a very good argument,” Eugene says, “and it’s not obvious where the hole in it is.”

    The most notable problem for his research agenda at the time was with what Eugene calls common-sense reasoning. To understand language, he explains, you need to reason about the things you’re discussing. 

    “If I look out the window,” he says, “and I say it must have rained last night and my wife says no, she turned on the lawn sprinkler, you can understand that conversation perfectly well. But how did you do that? The only way is if you understand that a sprinkler makes things wet, that I looked outside, saw wetness, made an assumption, and then my wife corrected me. And so for many years, I’d been trying to approach that issue by working in the heuristic programming tradition of AI – trying to get the computer to go through those mental operations, as it were.” Eugene couldn’t do it. Nobody else could, either.

    There was no obvious way forward with the heuristic paradigm, but new inspiration was on the wing. In 1988, Eugene went to a DARPA meeting where a colleague shared his belief that probabilities were the right way to think about language. Friends at Brown felt similarly: Stuart Geman, a mathematician, offered ideas about the use of probability in AI, and Mark Johnson, working in cognitive science, shared research where scientists were trying to get a computer to learn grammar by analyzing large amounts of text. 

    It struck a chord, Eugene says, and his two-part response will only seem remarkable to those who don’t know him well. First was his admission of insufficient knowledge (“At that point I didn’t even really understand probability theory – I never had a course in it, although in physics I learned a bit about it in statistical mechanics”) and second was his decision to hit the books and start learning more. Once again, he was reading everything he could get his hands on. 

    At Least I’ll Get Some Bananas

    “I just gave up!” Eugene says. “Starting in about 1990, I decided that I had to approach language from another point of view. I’d been reading a lot of articles, and that switch was one of the proudest moments of my life. There I was, 47 years old and way past my scientific prime, and I just completely switched paradigms. That’s not common, and I’m very proud of that.”

    Skepticism among Charniak’s peers was widespread. Christopher Riesbeck of Northwestern University attended one of his first talks after the paradigm shift and bluntly told Eugene that he resembled a monkey climbing a tree in an attempt to reach the moon. Charniak’s response was unruffled, optimistic: “Well, at least I’ll get some bananas!”

    Jim Hendler explains that a gift for improvement through self-critique has been one of Eugene’s hallmarks: “He’s known in the field as the man with the counterexample to everybody’s theory. Eugene always praised what his students were doing right, but he taught us how to evaluate our work and know when we’re cheating, when our Theory of Everything is really just a Theory of Something. I took that forward and now my students joke that I’m the guy who always has the counterexample. It’s crucial to their success.” 

    The best way to probe the edges of his own ideas was in his teaching, and so probability became a focus of Eugene’s lectures soon after the 1990 shift. Brown CS Professor Michael Littman, once a student in Charniak’s advanced language processing course, ended up as a coauthor on research that involved using probabilities to determine parts of speech. “Taggers for Parsers”, published in Artificial Intelligence in 1996, was Michael’s sixteenth publication.

    Andy van Dam says that he’s always prized Eugene’s ability to change direction: “He was a pioneer! He was agile, not over-specialized, and he always stayed on the cutting edge. And he wasn’t doctrinaire, which served him well when he was department chair, because he was able to be fair-minded and flexible. I’ve seen Eugene in a variety of roles, all executed faithfully and professionally, and I really appreciate his effort.”  

    “In retrospect,” Eugene notes, “I’m pretty proud of making a switch that sizable twenty years into my career. I’m not the best judge of my own talents, but certainly I pride myself on my ability to look at facts straight and not be too biased by what I was previously taught. In my politics I’m that way, and in my research I’m that way.” 

    We press him for more: “Oh, and perhaps I’m a little bit fearless. It took some nerve, that switch.” 

    Most of our readers will already know how well it paid off. Most recently, Eugene has been named a Fellow of the Association for Computational Linguistics (ACL) as well as a Fellow of the American Association for Artificial Intelligence (AAAI) after previously serving as one of the latter’s Councilors. In 2011, he received ACL’s Lifetime Achievement Award.

    “He would have won even more awards than these,” says Jim Hendler, “but you usually have to apply for them, and Eugene usually didn’t. He’s one of the most respected names in AI, but he never tried to be a household name outside of the field, which he definitely could have become. I think it’s less because he’s self-effacing and more because he really cares about work-life balance. I remember one time when he cut a meeting short: ‘I know we planned for a long meeting, but my cat just got run over, and I’m worried about how my son’s dealing with it.’ Eugene is a professional, a model of idiosyncrasy and the brilliant scientist, but family was where it all began for him, because he understood that work is only one part of your life.” 

    From Here To Eternity (Or 2016)

    But another major revolution in artificial intelligence was yet to come: deep learning. Charniak explains that deep learning is also concerned with probability, but in a very different way. A good way to understand this, he says, is to look at the distinction between discrete and continuous representation. 

    How does discrete representation work? Let’s say that you’re looking at the text from here to eternity as part of a search for the most common word in English. For every string of characters between two empty spaces, you create a dictionary entry, and you associate that word with a number, so you can create an array: from is 1, here is 2, and so on. Each word is a unit, distinct from all other words, and now you can start counting how many times you’ve seen each of them.
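    The discrete bookkeeping described above, one integer per distinct word, can be sketched in a few lines of Python. The sample text and the resulting numbers here are purely illustrative:

```python
from collections import Counter

text = "from here to eternity from here"
# Split on whitespace: each distinct string becomes its own dictionary entry.
words = text.split()
# Map each word to a distinct integer ID -- the discrete representation.
vocab = {w: i for i, w in enumerate(dict.fromkeys(words))}
# Count occurrences to find the most common word.
counts = Counter(words)
print(vocab)                  # {'from': 0, 'here': 1, 'to': 2, 'eternity': 3}
print(counts.most_common(1))  # [('from', 2)]
```

Under this scheme every word is its own integer, so "computer" and "machine" are exactly as unrelated as "computer" and "treble", which is precisely the limitation the continuous view addresses.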

    Continuous representation, Eugene maintains, is less obvious: “Let’s say that for every word, you count the number of times it appears after the. Maybe you see the computer at first, and then the large dog. Suppose you make a representation of the quantities of each of the millions of words that come after the. What you’d find is that if you turn those numbers into probabilities, they aren’t distinct: they blend into each other continuously. Believe it or not, if you take the very simple mathematical idea of cosine difference for long vectors of numbers and ask how similar or different they are as a whole collection, the words a and the would look very similar, and the words computer and machine would look very similar, whereas computer and treble wouldn’t look very similar at all, and they’d both look very different from a and the.”

    “And wow,” he enthuses, “this is really important! We started with a representation in which ‘computer’ and ‘treble’ are simply different things, different integers. And now we see them as long vectors that are more or less similar, not integers – we see a similarity between all words! It turns out that this is so much more powerful than considering words as complete entities, and it revolutionized the field of language processing. There was a paper by Mikolov in 2013 that’s been cited more than 25,000 times, which gives you a sense of how important continuous representation is. From my point of view, it’s right up there with Watson and Crick.” 
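    The cosine comparison Eugene describes can be illustrated with a toy version in Python. The context words and co-occurrence counts below are invented for illustration, not real corpus data:

```python
import math

# Hypothetical counts of each word appearing after four context words:
# contexts:        "the"  "a"  "new"  "clef"
computer = [120, 40, 30, 0]
machine  = [110, 45, 25, 0]
treble   = [8, 2, 0, 30]

def cosine(u, v):
    # Cosine similarity: dot product divided by the product of vector lengths.
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(y * y for y in v))
    return dot / (norm_u * norm_v)

print(cosine(computer, machine))  # close to 1: similar distributions
print(cosine(computer, treble))   # much smaller: different distributions
```

Words that appear in similar contexts end up as vectors pointing in similar directions, which is the sense in which they "blend into each other continuously."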

    Telling Us How The Brain Works

    Unfortunately, says Eugene, continuous representation passed him by for the first few years of its ascendance. Around 2015 or 2016, he realized how much it had taken over his field, and so to help understand the new material, he started reading again. And then writing, with the publication of Introduction to Deep Learning in 2018. It wasn’t as big of a change as his switch to the statistical paradigm in 1990, but it marked the third major phase of his career.

    “The nice thing about continuous models,” Charniak says, “is that their continuity makes them differentiable. Let’s say that I’m looking at the quick brown fox jumped over the lazy dogs in French and trying to translate it into English. The first thing I want is to guess the first word, and with a non-continuous model, I’m going to get the as the answer no matter what, because it’s the most common English word. A stopped clock is right twice a day, and it’s right for the first word, but for the second, it’s wrong.”

    “But if you have a continuous model,” he says, “you can ask which changes to the numbers would make quick more likely, since that’s the correct answer, and make the less likely. And this stochastic gradient descent on complicated matrix-based models is deep learning, and it works phenomenally well. I personally believe it must be telling us quite a bit about how the brain works, to a better approximation than any other mathematical thing I can state. It’s way too crude for most scientists to assent to, but it’s the best and closest sort of statement we can make at the current time and it deserves to underlie our future explorations of how the brain works. It’s the best idea we have for a theory of mind.”
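    The adjustment Eugene describes, nudging the numbers so the correct word becomes more probable, is gradient descent in miniature. Here is a toy Python sketch with an invented two-word vocabulary and softmax scores; the numbers are illustrative, not from any real model:

```python
import math

# A toy two-word vocabulary: "the" starts out much more likely than "quick".
# Softmax turns scores into probabilities; each gradient step nudges the
# scores so the correct word ("quick") becomes more likely and "the" less.
scores = {"the": 2.0, "quick": 0.0}
lr = 0.5  # learning rate

def softmax(s):
    z = sum(math.exp(v) for v in s.values())
    return {w: math.exp(v) / z for w, v in s.items()}

before = softmax(scores)["quick"]
for _ in range(20):
    p = softmax(scores)
    for w in scores:
        # Gradient of cross-entropy loss w.r.t. each score: p - 1 for the
        # correct word, p for the others. Step in the opposite direction.
        grad = p[w] - (1.0 if w == "quick" else 0.0)
        scores[w] -= lr * grad
after = softmax(scores)["quick"]
print(before, "->", after)  # probability of "quick" increases
```

Deep learning does the same thing, but over millions of parameters in stacked matrices rather than two scores; continuity is what makes the gradient exist at all.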

    Of course we’re venturing now into the realm of philosophy. Maybe unsurprisingly, a love of the subject has been a constant throughout all phases of Eugene’s career. “I read serious philosophy for fun sometimes,” he says, “and I’ve got a fairly well-developed philosophy of science, of language. Philosophy isn’t about determining the truth of statements in the world, it’s about elucidating what we mean when we think certain things, and that’s why it differs from science. It’s very obvious what I mean when I say my left big toe, but not very clear what we mean when we talk about justice, and thus things like justice need explication.”

    “That’s part of why science progresses in a coherent fashion,” Eugene says, “but philosophy doesn’t: there are no experiments that will tell you what justice is. And that’s why philosophy doesn’t go away and will always be important, and why you don’t see philosophy unifying on a particular proposition – although I think this idea of what philosophy is ought to be a proposition that philosophy should unify around! Once you get that through your head, a lot of conundrums go away.  A lot of what people say about science or language ceases to be so problematic.”

    John Savage says that this is a familiar line of inquiry for Eugene: “I think he’s always been interested in looking at AI through a philosophical lens, and I think for him, understanding language is a mark of intelligence.”

    An Optimist, Too

    Looking ahead into his retirement, Eugene says that he has vague hopes of continuing his AI research. His fifth book, if he pursues one current idea, might be a popular book on AI, something with a wider audience in mind. “I can write when I put my mind to it,” he says, “and when I know what I want to say.”

    Jim Hendler notes that Charniak’s books have been milestones in his own intellectual development: he first read Eugene during his freshman year in college, and this summer, he started teaching from Charniak’s most recent book. “It’s brilliant,” he says. “I owe Eugene so much – my career would have been drastically different without him. His accessibility as a human as well as an advisor will always stay with me. The incident with his cat happened 35 years ago, and I still remember it. To him it was a casual thing, but to me it was an important lesson. A slogan I use a lot with my staff is ‘real life comes first’, and that’s something I got from Eugene.”

    In terms of less serious pursuits, Eugene says that fifteen years under his wife’s tutelage has made him a fan of chamber music, which he initially found austere. He’s thinking of taking up the viola. And even though he began his career at a time when nobody knew what computer science was and ends it at a time when everyone knows what machine learning is, he still advises anyone interested to go into the field.

    “I can’t decide if the percentage of people employed in computer science will grow or stay the same,” he tells us, “but it’s very high already and I don’t see it ever decreasing. The last year has taught us that arguably the only thing more important than CS is epidemiology. Eight or ten years ago, there were people claiming that the next big thing changing society would be a pandemic, and they were right. The entire study of biology and medicine is tremendously important, but computer science is right up there.” 

    And he’s openly scornful of doomsday AI scenarios: “The robot apocalypse, that’s complete and utter nonsense! Elon Musk and Stephen Hawking are idiots when it comes to AI. As if we’re smart enough to create all-powerful robots but not an on-off switch!”

    Nor does he expect artificial intelligence to create large-scale disruption. “My argument,” he says, “is quite simple: what dislocations have you noticed recently? COVID, of course, but we’ve experienced almost no dislocation due to AI. People keep saying that it’s coming, but there’s no evidence. The second reason is practical. Think of most jobs: if you’re a carpenter, the robot who would replace you has to hammer a nail, and I don’t think there’s any current robot that can hammer any given nail into any given block of wood, let alone navigate a construction site. The big changes in AI haven’t been in the capabilities of robots, they’ve been with things like chess-playing programs.” 

    “We’ll see society transformed,” Eugene says, “but no huge unemployment anytime soon, or possibly ever. Maybe we’ll have automatic trolleys in hospitals, delivering meds, or self-driving cars, but I, who consider myself a realist, thought we’d see things like these by now, and I was completely wrong.”  

    There’s the testing of his own ideas again. So we critique the critique: is he truly a realist?

    “Eugene has always been his own man,” says John Savage. “He expresses it in the way he dresses, the way he stands out among scruffy computer scientists with his bow ties, but it’s in his way of thinking as well, how he makes his way in the world. He enjoys the very highest standing in AI, and we’ve been lucky to have him.” 

    “I’m a realist, but I’m an optimist, too,” Eugene says. “I’m an optimist by nature.”

    Or perhaps also by nurture, taught by the parents whose support he credits so highly. Maybe he’ll write another book, or maybe take up the viola. But there’s undeniable optimism in Eugene’s voice as he reflects on his hope to play in a string quartet someday: working hard to make something beautiful, then the joy of sitting back and listening to it. It’s an optimism that he’s taught as well as learned.  

    “Society will be transformed,” Eugene says, “but very gradually, and I think to the better.”