DIMACS Workshop on Modeling Randomness in Neural Network Training: Mathematical, Statistical, and Numerical Guarantees

June 05, 2024 - June 07, 2024

Location:

DIMACS Center

Rutgers University

CoRE Building

96 Frelinghuysen Road

Piscataway, NJ 08854


Organizer(s):

Tony Chiang, University of Washington & Pacific Northwest National Lab

Ioana Dumitriu, University of California, San Diego

Anand Sarwate, Rutgers University

For the most up-to-date information about this event, please see the workshop's main webpage.

Neural networks (NNs) are at the heart of modern machine learning and artificial intelligence (ML/AI) systems. The rapid development of these technologies has led to adoption across a variety of domains, particularly in speech processing, computer vision, and natural language processing. At the same time, the theoretical underpinnings of these statistical models are not yet fully understood. The question of how and why neural networks “work” can be approached from a variety of mathematical perspectives. One of the most promising mathematical tools for the analysis of neural networks is random matrix theory, a field whose relevance and applicability to modeling, understanding, and characterizing a vast array of science and technology problems is growing every day. From principal component analysis and random growth processes to particle interactions and community detection in large networks, random matrices are now used to investigate and explain high-dimensional phenomena like concentration (the so-called “blessing of dimensionality,” as opposed to the “curse of dimensionality”). Recent results in universality allow for the use of more complex, non-Gaussian models, sometimes even allowing for limited dependencies. This raises the question: what can random matrix theory tell us about neural networks, modern machine learning, and AI?
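As a concrete illustration of the universality phenomenon mentioned above (a minimal sketch, not drawn from the workshop materials), the short Python example below builds a sample covariance matrix from non-Gaussian i.i.d. entries and compares its eigenvalue histogram with the Marchenko-Pastur law, whose shape depends only on the entry variance and the aspect ratio. The matrix sizes, the Rademacher entry distribution, and the bin count are arbitrary choices made only for illustration.

    # Sketch: universality of the Marchenko-Pastur law for non-Gaussian entries.
    import numpy as np

    rng = np.random.default_rng(0)

    n, p = 4000, 1000                 # samples x features (illustrative sizes)
    gamma = p / n                     # aspect ratio

    # Non-Gaussian i.i.d. entries: Rademacher (+1/-1), mean 0, variance 1.
    X = rng.choice([-1.0, 1.0], size=(n, p))

    # Sample covariance matrix and its eigenvalues.
    S = X.T @ X / n
    eigs = np.linalg.eigvalsh(S)

    # Marchenko-Pastur support and density for unit variance and ratio gamma.
    lam_minus = (1 - np.sqrt(gamma)) ** 2
    lam_plus = (1 + np.sqrt(gamma)) ** 2

    def mp_density(x, g):
        out = np.zeros_like(x)
        inside = (x > lam_minus) & (x < lam_plus)
        out[inside] = np.sqrt((lam_plus - x[inside]) * (x[inside] - lam_minus)) / (2 * np.pi * g * x[inside])
        return out

    # Compare the empirical eigenvalue histogram against the theoretical density.
    hist, edges = np.histogram(eigs, bins=30, range=(lam_minus, lam_plus), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    for c, h, d in zip(centers, hist, mp_density(centers, gamma)):
        print(f"lambda={c:5.2f}  empirical={h:5.3f}  marchenko-pastur={d:5.3f}")

Running the sketch with Gaussian entries in place of the Rademacher ones produces essentially the same spectrum, which is the point of universality: the limiting law does not depend on the fine details of the entry distribution.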

The overarching goal of the workshop is to create bridges between different mathematical and computational communities by bringing together researchers with a diverse set of perspectives on neural networks. Topics of interest include:

  • understanding matrix-valued random processes that arise during NN training,
  • modeling/measuring uncertainty and designing estimators for training processes,
  • applications of these designs within optimization algorithms.

Confirmed participants:


Attend: The workshop is open to all who register (subject to space limitations). There is no fee to register but registration is required. Please register using the button at the bottom of the page.


Present: Presentations at the workshop will be largely by invitation.


Poster session: The workshop will feature a poster session. If you would like to present a poster please apply using the form referenced below.


Request support: We hope to have limited funds available to support travel by those whose attendance is contingent on support. We encourage diverse and inclusive participation and will prioritize applications for support from students and postdocs, especially those from minority or underrepresented groups. Please apply by April 30, 2024 using the form referenced below. Earlier applications will have the best access to support.


To apply for travel support or to apply to submit a poster: Please complete this form. (It is a single form through which you can apply for support, to present a poster, or both.)


Parking: If you do not have a Rutgers parking permit and you plan to drive to the workshop, there will be free parking in Lot 64, which is adjacent to the CoRE Building, but you must register your car to park. A link to register for parking will be provided in the confirmation message you receive when you register for the workshop.