Mathematics Department Colloquium 
The Ohio State University

  Year 2025-2026

Time: (Fall 2025) Thursdays 3:00-3:55 pm
Location: Scott Lab E001

YouTube channel

Schedule of talks:

DATE: SPEAKER (AFFILIATION), TITLE

September 25: Lionel Levine (Cornell University), "Measuring AI Values"
September 29 - October 1: Hong Wang (NYU and IHES), Rado Lecture Series
October 23: Vadim Gorin (UC Berkeley), "Dunkl operators and random matrices"
October 30: Tye Lidman (North Carolina State University)
November 6: Greta Panova (USC)
November 20: Xin Zhou (Cornell University)
December 4: Jeremy Avigad (Carnegie Mellon)
January 22: Zassenhaus Lecture Series
February 26: Jon Rosenberg (University of Maryland)
March 12: John Etnyre (Georgia Tech)
March 26: Lin Lin (UC Berkeley)
April 9: Juan Rivera-Letelier (University of Rochester)
April 16: Tamara Kucherenko (City University of New York)
April 23: David Barrett (University of Michigan)



Abstracts

(L. Levine): Aligning AI with human values is a pressing unsolved problem. How can mathematicians contribute to solving it? We can start by clarifying the terms: What are values? What does it mean to "align" an AI to a given set of values? And how would one verify that a given AI is aligned? These hard questions, plus the annoying little issue of whose values to prioritize, led researchers at leading AI labs to aim instead for "intent alignment": AI that (wants to) do what its developers intend. Can intent alignment deliver a good future, where humans thrive alongside AI that's much smarter than us? I'll argue that intent alignment might be extremely hard to achieve, and that it's neither necessary nor sufficient for a good future. Our best shot at a good future is to not build superhuman AI. Not building superhuman AI is a coordination problem: it would require international treaties, monitoring, and regulation. Coordination problems are hard (but not as hard as any form of AI alignment!). Coordination failures happen, so it would be wise to have a backup plan. Returning to value alignment as a possible path forward, I'll describe the math behind EigenBench, a PageRank-inspired approach to the problem of measuring AI values.
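The abstract only says the approach is PageRank-inspired; the EigenBench construction itself is not described here. As a hedged sketch of the underlying mathematics, the following toy code runs power iteration on a damped, column-stochastic "endorsement" matrix, the same eigenvector computation PageRank performs on web links. The function name, parameters, and toy graph are all invented for illustration.

```python
import numpy as np

# Sketch of PageRank-style scoring: power iteration on a damped random walk.
# adj[i, j] = 1 means node j endorses node i (in PageRank, page j links to i).
def pagerank(adj, damping=0.85, tol=1e-12, max_iter=1000):
    """Return the stationary distribution of the damped walk on the graph."""
    n = adj.shape[0]
    out = adj.sum(axis=0)           # out-degree of each endorser (column sums)
    out[out == 0] = 1.0             # dangling nodes: avoid division by zero
    P = adj / out                   # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)         # start from the uniform distribution
    for _ in range(max_iter):
        r_next = damping * (P @ r) + (1.0 - damping) / n
        if np.abs(r_next - r).sum() < tol:
            break
        r = r_next
    return r_next

# Toy graph: nodes 0 and 1 both endorse node 2, which endorses node 0.
A = np.array([[0, 0, 1],
              [0, 0, 0],
              [1, 1, 0]], dtype=float)
scores = pagerank(A)  # node 2, with two endorsements, scores highest
```

The fixed point is the dominant eigenvector of the damped transition matrix; damping guarantees a unique stationary distribution even when the raw graph is not strongly connected.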

(V. Gorin): Dunkl differential-difference operators are one-parameter deformations of the usual derivative. 
First studied for their remarkable commutativity and their role in the Calogero-Moser-Sutherland 
quantum many-body system, they have recently found surprising applications in random matrix theory.
I will show how Dunkl operators can be used in the asymptotic analysis of random matrix eigenvalues, 
where Catalan numbers, the semicircle law, the Airy-beta line ensemble, and Tracy-Widom distributions 
naturally emerge.
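The semicircle-law connection mentioned above can be checked numerically: for a GOE matrix normalized so its spectrum fills [-2, 2], the empirical even moments of the eigenvalues approach the Catalan numbers, the moments of the semicircle law. This is an illustrative sketch, not material from the talk; the matrix size n and seed are arbitrary choices.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
n = 1000

# GOE: symmetrize an iid Gaussian matrix; dividing by sqrt(2n) puts the
# limiting spectrum on [-2, 2].
g = rng.standard_normal((n, n))
H = (g + g.T) / np.sqrt(2.0 * n)
eigs = np.linalg.eigvalsh(H)

# Catalan numbers C_k = (2k)! / (k! (k+1)!) = moments of the semicircle law.
catalan = [comb(2 * k, k) // (k + 1) for k in range(1, 4)]        # [1, 2, 5]
moments = [float(np.mean(eigs ** (2 * k))) for k in range(1, 4)]  # approx catalan
```

The agreement is up to O(1/n) corrections; the fluctuations of the largest eigenvalue around the edge at 2 are of order n^(-2/3) and are governed by the Tracy-Widom distribution mentioned in the abstract.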

(T. Lidman)

(G. Panova):

(X. Zhou):

(J. Avigad):

(J. Rosenberg):

(J. Etnyre):

(L. Lin):

(J. Rivera-Letelier):

(T. Kucherenko):

(D. Barrett):

Past Ohio State University Mathematics Department Colloquia


This page is maintained by Dave Anderson, Jean-Francois Lafont, Hoi H. Nguyen.