Matrix Sketching for Secure Federated Learning

September 18, 2019, 9:40 AM - 10:20 AM


Center Hall

Rutgers University

Busch Campus Student Center

604 Bartholomew Rd

Piscataway NJ


Shusen Wang, Stevens Institute of Technology

Federated learning (FL), also known as collaborative learning, allows multiple parties to jointly learn a model without sharing data, seemingly protecting users' data privacy. Unfortunately, although a participant's data never leave their machine, the data can be disclosed through their gradients and the global model parameters. Prior work demonstrated that a single participant, not to mention the central server, can easily infer the other participants' data, and that simple defenses such as dropout, differential privacy, and federated averaging do not prevent the attack. Since FL has been deployed in industry, its vulnerability to data leakage attacks may have serious consequences.

We propose Double-Blind Federated Learning (DBFL) to defend against data leakage attacks. FL is unsafe for two reasons: first, the server sees the participants' gradients; second, the participants see the true model parameters. Our key insight is to make FL double-blind: the server does not see the gradients, and the participants do not see the model parameters. DBFL is based on matrix sketching: the gradients are evaluated on sketched inputs, and the server sends only sketched parameters to the participants. DBFL generalizes dropout training (from uniform sampling to general sketching), so it is easy to tune and does not hurt test accuracy. While dropout fails, DBFL succeeds in defending against some of these attacks.
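The core idea above can be illustrated with a minimal sketch in NumPy. This is a hypothetical toy, not the authors' DBFL implementation: it only shows what "sketched parameters" means in the simplest case, using a Gaussian random projection. All sizes and variable names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, m = 1000, 20   # illustrative: parameter dimension and model width
k = 200           # sketch size, k < d

# "True" model parameters held by the server (never sent to participants).
W = rng.standard_normal((d, m))

# Gaussian sketching matrix; scaling by 1/sqrt(k) preserves norms in expectation.
S = rng.standard_normal((k, d)) / np.sqrt(k)

# A participant receives only the sketched parameters, which are
# lower-dimensional and do not reveal W directly.
W_sketched = S @ W
print(W_sketched.shape)   # (200, 20) instead of (1000, 20)

# Johnson-Lindenstrauss-style sketches approximately preserve norms and
# inner products, which is why computation on sketched quantities can
# remain useful for learning.
x = rng.standard_normal(d)
print(np.linalg.norm(S @ x), np.linalg.norm(x))
```

The same idea applies in the other direction: gradients can be evaluated on sketched inputs so the server never sees raw gradients. Dropout corresponds to the special case where `S` uniformly samples (and rescales) coordinates rather than mixing them.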