Abstract: Federated Learning (FL) enables learning a joint model (neural network) over data distributed across multiple silos or devices that cannot share raw data. We propose a novel training policy based on estimating the task performance of each learner (client) on a distributed validation dataset; this policy yields faster convergence in heterogeneous data and computational environments and is robust to corrupted data. We protect the models and the data of the participating learners by performing model aggregation and evaluation using homomorphic encryption and secure computation. We propose to evaluate our FL architecture on high-profile neuroimaging studies.
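The two core ideas — weighting aggregation by each learner's validation performance, and summing model updates without exposing any individual update — can be illustrated with a minimal sketch. All names and the specific weighting rule here are illustrative assumptions, and the toy pairwise-masking `secure_sum` stands in for the homomorphic-encryption-based aggregation the abstract actually proposes:

```python
import random

# Illustrative sketch only: the weighting rule, function names, and the
# masking-based secure sum are assumptions, not the exact protocol
# described in the abstract.

def weighted_aggregate(client_models, val_scores):
    """Average client parameter vectors, weighted by each learner's
    score on a validation set, so poorly performing or corrupted
    clients contribute less to the joint model."""
    total = sum(val_scores)
    if total == 0:
        # Fall back to a plain average when all scores are zero.
        weights = [1.0 / len(client_models)] * len(client_models)
    else:
        weights = [s / total for s in val_scores]
    n_params = len(client_models[0])
    return [sum(w * m[i] for w, m in zip(weights, client_models))
            for i in range(n_params)]

def secure_sum(values, modulus=2**31):
    """Toy additive-masking secure aggregation: each pair of clients
    shares a random mask that one adds and the other subtracts, so the
    masks cancel in the total and the server never sees an individual
    value in the clear."""
    masked = list(values)
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            m = random.randrange(modulus)
            masked[i] = (masked[i] + m) % modulus
            masked[j] = (masked[j] - m) % modulus
    return sum(masked) % modulus

# Three learners; the third (validation score 0.2) is down-weighted.
models = [[1.0, 2.0], [1.2, 1.8], [9.0, -5.0]]
scores = [0.9, 0.8, 0.2]
global_model = weighted_aggregate(models, scores)
```

In a real deployment the validation scores themselves would also be computed under encryption, since revealing per-client accuracy can leak information about a silo's data.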