Nate Gillman

Howdy!! I'm a PhD student at Brown University, where I'm fortunate to be advised by Chen Sun. I'm supported by Brown's Department of Mathematics and Department of Computer Science. I study machine learning, computer vision, and natural language processing. My current projects focus on generative modeling, applied to various domains. If you're at Brown and you're interested in working on a research project with me, that's awesome!! Please email me directly, and attach your CV and transcripts.

In the past I also did work in cryptography and pure mathematics, including number theory, algebraic geometry, and geometric measure theory. Fun fact: I actually started grad school as a PhD student in Brown's math department, conducting research in analytic number theory and cryptography with Jeff Hoffstein. I've since switched to AI, but I still like to put my background in pure math to use in my AI research. After earning my master's degree in mathematics in spring 2022, I took a professional leave of absence for a year to gain exposure to ML in industry. I did three internships: at American Express AI Labs, Akkio (a no-code AI startup), and Captions (an AI video editing startup).

I completed my undergraduate degree at Wesleyan University. During my time in college I spent one semester at the Math in Moscow program and another at the Budapest Semesters in Mathematics program. My undergraduate math research advisor was Ken Ono; I spent two summers doing research with him at Emory University's Research Experience for Undergraduates.

I'm particularly inspired by the life of Walter Pitts, who proposed the first mathematical model of a neural network.

Resume  /  Google Scholar  /  Github  /  MathReviews  /  arXiv  /  LinkedIn

Publications (AI/ML)

Self-Correcting Self-Consuming Loops for Generative Model Training
Nate Gillman, Michael Freeman, Daksh Aggarwal, Chia-Hong Hsu, Calvin Luo, Yonglong Tian, and Chen Sun. ICML 2024.

arXiv   /   Code   /   Project Page
IsoScore: Measuring the Uniformity of Embedding Space Utilization
William Rudman, Nate Gillman, Taylor Rayne, and Carsten Eickhoff. ACL 2022.

arXiv  /  Code   /   Journal

Patents (AI/ML)

Methods and Systems for Dynamically Generating a Plurality of Machine Learning Systems During Processing of a User Data Set
Nate Gillman, Nadia Laflaf, Abraham Parangi, Jonathon Reilly, and Nathan Wies. U.S. Patent Application No. 63/411,898. Filed Sep 30, 2022.

Publications (Mathematics)

Large sets with small injective projections
Frank Coen, Nate Gillman, Tamás Keleti, Dylan King, and Jennifer Zhu (2021). Annales Fennici Mathematici, 46(2), 683-702.

arXiv   /   Journal
Patterns of primes in the Sato-Tate conjecture
Nate Gillman, Michael Kural, Alexandru Pascadi, Junyao Peng, and Ashwin Sah (2020). Research in Number Theory, 6(9).

arXiv   /   Journal   /   MathSciNet
Explicit subconvexity savings for sup-norms of cusp forms on PGL(n,R)
Nate Gillman (2020). Journal of Number Theory, 206, 46-61.

arXiv   /  Journal   /   MathSciNet
From partitions to Hodge numbers of Hilbert schemes of surfaces
Nate Gillman, Xavier Gonzalez, Ken Ono, Larry Rolen, and Matthew Schoenbauer (2019). Philosophical Transactions of the Royal Society A, 378: 20180435.

arXiv   /   Journal   /   MathSciNet
Exact formulas for invariants of Hilbert schemes
Nate Gillman, Xavier Gonzalez, and Matthew Schoenbauer (2018). Research in Number Theory 4(39).

arXiv   /   Journal   /   MathSciNet


News

  • [May-2024] Our paper Self-Correcting Self-Consuming Loops for Generative Model Training has been accepted at ICML 2024. I'd love to connect with other researchers who will be in Vienna this summer!!
  • [Feb-2024] Our arXiv preprint aims to stabilize self-consuming generative model training. We support our proposed method with rigorous proofs, as well as experiments on the challenging human motion synthesis task. You can find human motion visuals and code on our project page.
  • [June-2023] I've returned to grad school from my leave of absence in industry.
  • [June-2022] I'm taking a leave of absence from grad school to gain more exposure to AI in industry.
  • [May-2022] My collaborator William Rudman presented our IsoScore paper at ACL 2022.
  • [Aug-2021] Our arXiv preprint shows that previous metrics have been used incorrectly to analyze word embedding spaces. We provide a mathematically sound method, which we call IsoScore. We give rigorous proofs and we share an efficient Python implementation.