====== Software projects developed at DCL ======

DCL has a github page where most new software projects are published: [[https://github.com/LPD-EPFL/]]

===== FeGAN =====

Designed for the Middleware '20 paper: "//FeGAN: Scaling Distributed GANs//."

FeGAN is a system for training generative adversarial networks (GANs) in the federated learning setting. FeGAN is designed to scale while remaining robust to non-iid data, i.e., it tolerates skewed data distributions across devices. FeGAN makes three important design choices to achieve these goals: (1) co-locating the discriminator and the generator networks on all devices, (2) balanced sampling, and (3) KL-weighting. The first choice promotes scalability and reduces the risk of vanishing gradients. Balanced sampling keeps FeGAN from falling into mode collapse, while KL-weighting is designed to resist learning divergence.
Unlike existing distributed GAN approaches, FeGAN scales to hundreds of devices. Moreover, FeGAN achieves a 5x throughput gain while using 1.5x less bandwidth than its state-of-the-art competitor, MD-GAN. It also speeds up training by 2.6x compared to the celebrated Federated Averaging (FedAvg) algorithm.
The source code of FeGAN was evaluated by experts and earned ACM artifact badges for being functional and reusable.

[[https://github.com/LPD-EPFL/FeGAN|Code]]

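As a rough illustration of the KL-weighting idea (not FeGAN's actual implementation), the sketch below weights each device's update by how close its label distribution is to the global one; the function names and the exact weighting formula are assumptions made only for this example.

<code python>
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete label distributions (illustrative)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def kl_weighted_average(updates, local_label_counts):
    """Aggregate device updates, down-weighting devices whose label
    distribution diverges from the global one (hypothetical formula)."""
    global_counts = np.sum(local_label_counts, axis=0)
    kls = np.array([kl_divergence(c, global_counts) for c in local_label_counts])
    weights = np.exp(-kls)          # devices closer to the global distribution weigh more
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Toy usage: 3 devices, a 4-parameter "model", 2 classes.
updates = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
label_counts = np.array([[90, 10], [50, 50], [10, 90]])
print(kl_weighted_average(updates, label_counts))
</code>
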
===== AggregaThor =====

Designed for the MLSYS '19 paper: "//AggregaThor: Byzantine Machine Learning via Robust Gradient Aggregation//."

AggregaThor is the first scalable Byzantine-resilient framework for distributed machine learning applications. AggregaThor is built on top of TensorFlow while preserving transparency: applications built with TensorFlow do not need to change their interfaces to become Byzantine-resilient. AggregaThor uses the parameter server architecture and adds two main layers to vanilla TensorFlow: (1) the aggregation layer and (2) the communication layer. The former uses a statistically-robust gradient aggregation rule, Multi-Krum, to aggregate workers' gradients so that training converges even in the presence of malicious workers. The communication layer lets users experiment with an unreliable transport layer (i.e., UDP), which achieves better performance than vanilla TensorFlow in highly-saturated networks. The source code of AggregaThor was evaluated by experts and earned ACM artifact badges for being functional and reusable.

[[https://github.com/LPD-EPFL/AggregaThor|Code]]
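
As a rough sketch of the Multi-Krum rule used by the aggregation layer (not AggregaThor's actual TensorFlow implementation), the NumPy snippet below scores each gradient by its distance to its closest neighbors and averages the most central ones; the function name and the toy setup are illustrative.

<code python>
import numpy as np

def multi_krum(gradients, f, m):
    """Illustrative Multi-Krum: score each gradient by the sum of squared
    distances to its n - f - 2 nearest neighbors, then average the m
    lowest-scoring (most central) gradients."""
    grads = np.stack(gradients)              # shape: (n, d)
    n = len(grads)
    assert n - f - 2 > 0, "Multi-Krum needs n > f + 2"
    # Pairwise squared Euclidean distances between workers' gradients.
    dists = np.sum((grads[:, None, :] - grads[None, :, :]) ** 2, axis=-1)
    scores = []
    for i in range(n):
        d = np.delete(dists[i], i)           # distances to the other workers
        scores.append(np.sort(d)[: n - f - 2].sum())
    selected = np.argsort(scores)[:m]        # the m most central gradients
    return grads[selected].mean(axis=0)

# Toy usage: 5 similar gradients plus 2 Byzantine outliers, tolerating f = 2.
workers = [np.array([1.0, 1.0]) + 0.01 * i for i in range(5)]
workers += [np.array([100.0, -100.0]), np.array([-50.0, 80.0])]
print(multi_krum(workers, f=2, m=3))
</code>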
  
===== MVTIL =====