DIMACS Workshop on Distributed Optimization, Information Processing, and Learning

August 21 - 23, 2017
Rutgers Academic Building, West Wing, Room 1170,
15 Seminary Place, Rutgers University, New Brunswick, NJ

Organizing Committee:
Waheed U. Bajwa (General Chair), Rutgers University, waheed.bajwa at rutgers.edu
Alekh Agarwal (Technical Co-Chair), Microsoft Research, New York, alekha at microsoft.com
Alejandro Ribeiro (Technical Co-Chair), University of Pennsylvania, aribeiro at seas.upenn.edu
Presented under the auspices of the DIMACS Special Focus on Information Sharing and Dynamic Data Analysis.

Workshop Program:

Monday, August 21, 2017 

 8:30 -  9:15  Check-in for pre-registered attendees, late registration, and catered breakfast	
               Location: Outside of Room 1170

 9:15 -  9:30  Welcome messages from General Chair and Technical Co-Chairs
               Waheed U. Bajwa, Rutgers University, Alekh Agarwal, Microsoft, and Alejandro Ribeiro, University of Pennsylvania
               Please note this is an earlier start than previously posted.

 9:30 - 10:50  Oral Session M1 (Chair: A. Ribeiro)	
	       A Proximal Primal-Dual Algorithm for Decomposing Non-convex Nonsmooth Problems
               Mingyi Hong, Iowa State University  (40 min.)
               Slides   Video

	       Asynchronous Algorithms for Conic Programs, including Optimal, Infeasible, and Unbounded Ones
	       Wotao Yin, UCLA (40 min.)
               Slides   Video

10:50 - 11:10  Coffee break	

11:10 - 11:50  Oral Session M2 (Chair: A. Ribeiro)	
	       How to Analyze Nonconvex Optimization Algorithms in High Dimensions?
	       Exact Asymptotics via Exchangeability and Scaling Limits
               Yue M. Lu, Harvard University (40 min.)
               Video

11:50 -  1:00  Catered lunch
               Location: Lobby of the Honors College, 5 Seminary Place, New Brunswick, NJ 

 1:00 -  1:15  DIMACS Welcome
               Fred Roberts, DIMACS, Rutgers University	
               Video

 1:15 -  2:35  Oral Session M3 (Chair: W. Yin)	
               High-order Methods in Empirical Risk Minimization
               Alejandro Ribeiro, University of Pennsylvania (40 min.)
               Slides   Video

               Distributed Approaches to Mirror Descent for Stochastic Learning over
	       Rate-limited Networks
               Matthew Nokleby, Wayne State University (40 min.)
               Slides   Video

 2:35 -  3:20  Solar Eclipse (do not look directly at the sun without eclipse glasses) / Coffee break
	       Making a pinhole: https://www.wired.com/story/view-the-eclipse-with-this-simple-homemade-gadget/

 3:20 -  5:20  Oral Session M4 (Chair: G. Scutari)	
	       Distributed Optimization Algorithms for Networked Systems
               Michael Zavlanos, Duke University (40 min.)
               Slides   Video

	       Distributed Optimization Over Directed Graphs
               Usman Khan, Tufts University (40 min.)
               Slides   Video

	       When Cyclic Coordinate Descent Beats Randomized Coordinate Descent
               Mert Gurbuzbalaban, Rutgers University (40 min.)
               Slides   Video

 5:45 pm       Informal social gathering
               Brother Jimmy's Barbeque
               5 Easton Avenue
               New Brunswick, NJ
	

Tuesday, August 22, 2017 

 8:30 -  9:30  Check-in for pre-registered attendees, late registration, and catered breakfast	
               Location: Outside of Room 1170

 9:30 - 10:50  Oral Session T1 (Chair: A. Agarwal)	
	       Federated Learning: Privacy-Preserving Collaborative Machine Learning without Centralized Training Data
               Keith Bonawitz, Google (40 min.)

               Privacy and Fault-Tolerance for Distributed Optimization
               Nitin Vaidya, University of Illinois at Urbana-Champaign (40 min.)
               Slides   Video

10:50 - 11:20  Coffee break	

11:20 - 12:00  Oral Session T2 (Chair: U. Khan)
               Fast Distributed Algorithms for Optimization in Time-Varying Graphs
               Angelia Nedich, Arizona State University (40 min.)
               Slides   Video
	
12:00 -  1:30  Catered lunch	
               Location: Lobby of the Honors College, 5 Seminary Place, New Brunswick, NJ 

 1:30 -  3:30  Oral Session T3 (Chair: A. Nedich)
	       Convergence Rates in Decentralized Optimization
               Alex Olshevsky, Boston University (40 min.)
               Slides   Video
	
               Distributed Resource Allocation with Limited Communication
               Na Li, Harvard University (40 min.)
               Slides   Video
	
               SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient
               Martin Takac, Lehigh University (40 min.)
               Slides   Video

 3:30 -  4:00  Coffee break

 4:00 -  5:45  Poster Session P1 (Chair: W. Bajwa)	

               Should I Distribute my Machine Learning Training Job?
               Michael Alan Chang, University of California, Berkeley

               Distributed Dictionary Learning over Dynamic Directed Network Topologies
               Amir Daneshmand, Purdue University

               A Decentralized Primal-Dual Quasi-Newton Method with Exact Linear Convergence
               Mark Eisen, University of Pennsylvania

               Private Learning on Networks
               Shripad Gade, University of Illinois at Urbana-Champaign

               Power and Spectrum Optimization for Wireless Autonomous Systems
               Konstantinos Gatsis, University of Pennsylvania

               Distributed Zeroth-Order Nonconvex Optimization
               Davood Hajinezhad, Iowa State University

               REPR: Regression-Style Learning by Column Generation
               Ai Kagawa, Rutgers University

               Using LDPC Codes for Computing Large Linear Transforms Distributedly
               Fatemeh Kazemikordasiabi, Rutgers University

               Decentralized Efficient Nonparametric Stochastic Optimization
               Alec Koppel, University of Pennsylvania

               Superlinearly Convergent Asynchronous Distributed Network Newton Method
               Fatemeh Mansoori, Northwestern University

               IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
               Aryan Mokhtari, University of Pennsylvania

               Accelerated Distributed Nesterov Gradient Descent
               Guannan Qu, Harvard University

               Oja's Rule for Distributed Principal Component Analysis (PCA)
               Haroon Raja, Rutgers University

               Distributed Optimization over Directed Graphs
               Ran Xin, Tufts University

               Byzantine-Resilient Distributed Learning via Coordinate Descent
               Zhixiong Yang, Rutgers University

 6:30 -  8:30  Workshop banquet in New Brunswick
               Panico's
               103 Church St.
               New Brunswick, NJ


Wednesday, August 23, 2017

 8:30 -  9:30  Check-in for pre-registered attendees, late registration, and catered breakfast	
               Location: Outside of Room 1170

 9:30 - 10:50  Oral Session W1 (Chair: W. Bajwa)	
	       Consensus and Distributed Inference Rates Using Network Divergence
               Anand D. Sarwate, Rutgers University (40 min.)
               Slides   Video	

               Distributed Large-scale Optimization via Batch Gradient Tracking
               Gesualdo Scutari, Purdue University (40 min.)

10:50 - 11:10  Coffee break	

11:10 - 12:30  Oral Session W2 (Chair: A. Sarwate)	
	       Balancing Computation and Communication in Distributed Optimization
               Ermin Wei, Northwestern University (40 min.)
               Slides   Video
	
               Distributed Online Learning in the Wild
               Alekh Agarwal, Microsoft (40 min.)

12:30          Concluding remarks and boxed lunch	

