Nate Gillman Howdy!! I'm a PhD student at Brown University, where I'm fortunate to be advised by Chen Sun. I'm supported by Brown's Department of Mathematics and Department of Computer Science. I study machine learning, computer vision, and natural language processing. My current projects focus on generative modeling applied to various domains. If you're at Brown and you're interested in working on a research project with me, that's awesome!! Please email me directly and attach your CV and transcripts.

In the past I also did work in cryptography and pure mathematics, including number theory, algebraic geometry, and geometric measure theory. Fun fact: I actually started grad school as a PhD student in Brown's math department, conducting research in analytic number theory and cryptography with Jeff Hoffstein. I've since switched to AI, but I still like to put my background in pure math to use in my AI research. After getting my master's degree in mathematics in spring 2022, I took a professional leave of absence for a year to gain exposure to ML in industry. I did three internships: at American Express AI Labs, Akkio (a no-code AI startup), and Captions (an AI video editing startup).

I completed my undergraduate degree at Wesleyan University. During my time in college I spent one semester at the Math in Moscow program and another at the Budapest Semesters in Mathematics program. My undergraduate math research advisor was Ken Ono; I spent two summers doing research with him at Emory University's Research Experience for Undergraduates.

I'm particularly inspired by the life of Walter Pitts, who co-proposed the first mathematical model of a neural network.

Resume  /  Google Scholar  /  Github  /  arXiv  /  LinkedIn

nate_gillman@brown.edu


Updates

  • [May-2024] Our paper Self-Correcting Self-Consuming Loops for Generative Model Training has been accepted at ICML 2024. I'd love to connect with other researchers who will be in Vienna this summer!!
  • [Feb-2024] Our arXiv preprint aims to stabilize self-consuming generative model training. We support our proposed method with rigorous proofs, as well as experiments on the challenging human motion synthesis task. You can find human motion visuals and code on our project page.
  • [June-2023] I've returned to grad school from my leave of absence in industry.
  • [June-2022] I'm taking a leave of absence from grad school to gain more exposure to AI in industry.
  • [May-2022] My collaborator William Rudman presented our IsoScore paper at ACL 2022.
  • [Aug-2021] Our arXiv preprint shows that previous metrics have been used incorrectly to analyze word embedding spaces. We provide a mathematically sound method, which we call IsoScore. We give rigorous proofs and we share an efficient Python implementation.