DIMACS Workshop on Modeling Randomness in Neural Network Training: Mathematical, Statistical, and Numerical Guarantees

A Curious Case of the Symmetric Binary Perceptron Model: Algorithms and Algorithmic Barriers

June 06, 2024, 1:30 PM - 2:15 PM

Location:

DIMACS Center

Rutgers University

CoRE Building

96 Frelinghuysen Road

Piscataway, NJ 08854

David Gamarnik, Massachusetts Institute of Technology

The symmetric binary perceptron is a random model of a perceptron in which the classifier is required to keep every margin within a symmetric interval around zero, subject to randomly generated data. This model exhibits an interesting and puzzling property: the existence of a polynomial-time algorithm for finding a solution (classifier) coincides with the presence of an extreme form of clustering, in which most satisfying solutions are isolated singletons separated by large distances. For most other random constraint satisfaction problems of this kind, such clustering typically suggests algorithmic hardness, which is evidently not the case for the symmetric perceptron model.
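To make the model concrete, here is a minimal sketch (not from the talk) of the feasibility check: given an m x n matrix G of i.i.d. standard Gaussian data, a classifier sigma in {-1,+1}^n is a solution when every margin |<g_i, sigma>| stays within kappa*sqrt(n) of zero. The function names and the brute-force search are illustrative only; the talk concerns efficient algorithms, not exhaustive search.

```python
import math
import random
from itertools import product

def satisfies(G, sigma, kappa):
    """Check the symmetric perceptron constraints: sigma in {-1,+1}^n
    is a solution iff |<g_i, sigma>| <= kappa * sqrt(n) for every row g_i."""
    n = len(sigma)
    threshold = kappa * math.sqrt(n)
    return all(abs(sum(g * s for g, s in zip(row, sigma))) <= threshold
               for row in G)

def brute_force_solution(G, n, kappa):
    """Exhaustive search over {-1,+1}^n (exponential time; for tiny n only,
    purely to illustrate what counts as a solution)."""
    for sigma in product((-1, 1), repeat=n):
        if satisfies(G, sigma, kappa):
            return sigma
    return None

# Tiny random instance: 2 Gaussian constraints in dimension 8.
random.seed(0)
n, m, kappa = 8, 2, 0.5
G = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(m)]
sigma = brute_force_solution(G, n, kappa)
```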

To resolve this conundrum, we conduct a different analysis of the solution space geometry. We establish that the model exhibits a phase transition called the multi-overlap gap property (m-OGP), and we show that the onset of this property asymptotically matches the performance of the best known algorithms, namely those of Kim and Roche and of Bansal and Spencer. Next, we establish that the m-OGP is a barrier to large classes of algorithms exhibiting either stability or online features (or both), and we show that the Kim-Roche and Bansal-Spencer algorithms indeed exhibit the stability and online features, respectively. We conjecture that the m-OGP marks the onset of the genuine algorithmic hardness threshold for this model.
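Schematically, an m-OGP statement takes the following form (a generic template; the precise parameters and correlated-instance version used in this line of work differ):

```latex
% Schematic m-OGP: for suitable 0 < \beta_1 < \beta_2 < 1, with high
% probability over the disorder G there is NO m-tuple of solutions whose
% pairwise normalized overlaps all fall in the forbidden band.
\[
  \nexists\; \sigma^{1},\dots,\sigma^{m} \in S(G)
  \quad\text{such that}\quad
  \frac{1}{n}\bigl|\langle \sigma^{i},\sigma^{j}\rangle\bigr|
  \in [\beta_1,\beta_2]
  \quad\text{for all } 1 \le i < j \le m .
\]
```

A sufficiently stable (or online) algorithm run on correlated instances would trace out exactly such an m-tuple, so the absence of these configurations rules the algorithm out.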

Joint work with Eren Kizildag (Columbia University), Will Perkins (Georgia Institute of Technology) and Changji Xu (Harvard University).


[Video]