Distributed Computing LPD

Distributed ML

Overview

Modern machine learning algorithms operate over huge volumes of data, which highlights the demand for distributed solutions from both the systems and the algorithmic perspective.

Asynchronous ML on Android devices

This project concerns training ML models asynchronously on Android devices. The primary challenges are mobile churn, latency, memory, bandwidth, and accuracy. The main goal is to build a framework that addresses these challenges.
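
As a rough illustration of the asynchronous training pattern such a framework could build on (a sketch, not a prescribed design), the Python snippet below simulates devices that compute gradients on their local data and push them to a shared parameter store without waiting for each other; the parameter server, the linear model, and the learning rate are all hypothetical.

```python
# Minimal sketch of asynchronous SGD: simulated "devices" pull the current
# model, compute a gradient on one local sample, and push it back without
# any synchronization barrier. All details here are illustrative only.
import threading
import numpy as np

class ParameterServer:
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr
        self.lock = threading.Lock()

    def pull(self):
        with self.lock:
            return self.w.copy()

    def push(self, grad):
        # Gradients may have been computed on stale weights; they are
        # applied in whatever order they arrive.
        with self.lock:
            self.w -= self.lr * grad

def device_worker(server, X, y, steps=200):
    # One simulated Android device training on its local data.
    rng = np.random.default_rng()
    for _ in range(steps):
        w = server.pull()
        i = rng.integers(len(X))
        grad = (X[i] @ w - y[i]) * X[i]   # squared-loss gradient, linear model
        server.push(grad)

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
server = ParameterServer(dim=5)
threads = []
for _ in range(4):                        # four simulated devices
    X = rng.normal(size=(50, 5))
    y = X @ w_true
    t = threading.Thread(target=device_worker, args=(server, X, y))
    threads.append(t)
    t.start()
for t in threads:
    t.join()
print("distance to w_true:", np.linalg.norm(server.w - w_true))
```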

Related papers:
[1] Distributed Asynchronous Online Learning for Natural Language Processing
[2] Heterogeneity-aware Distributed Parameter Servers

Multi-output multi-class classification

The goal of this project is to design a distributed ML algorithm suitable for multi-output classification (e.g., music tag prediction on mobile devices). Deep learning-based approaches seem promising for this task; however, current methods target only single-output classification.
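
To make the single-output vs. multi-output distinction concrete, here is a minimal sketch of a multi-output (multi-label) classifier: one sigmoid output per tag trained with per-tag binary cross-entropy, so an item (e.g. a song) can receive several tags at once, whereas a single-output multi-class model would use one softmax over mutually exclusive labels. The data, dimensions, and hyper-parameters below are purely illustrative.

```python
# Minimal sketch of a multi-output (multi-label) classifier with one
# sigmoid per tag, trained by gradient descent on the mean binary
# cross-entropy. Data and hyper-parameters are illustrative only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n_samples, n_features, n_tags = 200, 20, 5          # e.g. 5 music tags
X = rng.normal(size=(n_samples, n_features))
W_true = rng.normal(size=(n_features, n_tags))
Y = (sigmoid(X @ W_true) > 0.5).astype(float)       # multi-hot tag labels

W = np.zeros((n_features, n_tags))
lr = 0.5
for _ in range(500):
    P = sigmoid(X @ W)                   # predicted probability for every tag
    grad = X.T @ (P - Y) / n_samples     # gradient of mean binary cross-entropy
    W -= lr * grad

pred = sigmoid(X @ W) > 0.5
print("per-tag training accuracy:", (pred == Y).mean())
```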

Related papers:
[1] Deep Neural Networks for YouTube Recommendations
[2] Deep content-based music recommendation
[3] Codebook-based Scalable Music Tagging with Poisson Matrix Factorization

Personalized/Private ML in a P2P network

This project calls for private ML algorithms where data never leaves the user's device and each user has her own personalized version of the model. More precisely, every mobile device keeps its own personalized learning model, trained locally on local data and updated periodically through cross-device communication, without sending raw data to other devices. Since the communicated gradients must themselves be private, there is a trade-off between accuracy and privacy. The major challenges are the accuracy-privacy trade-off, memory, bandwidth, and latency.
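
As a minimal sketch of what one privacy-aware update round could look like, assuming a DP-SGD-style recipe (clip the gradient, add Gaussian noise before it leaves the device, then mix neighbour contributions into the personalized local model): the clip norm, noise scale, and mixing weight below are hypothetical knobs for the accuracy-privacy trade-off, not a prescribed protocol.

```python
# Minimal sketch of one privacy-aware P2P round: each device privatizes the
# gradient it shares (clip + Gaussian noise) and averages the privatized
# gradients it receives into its own personalized model. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)

def privatize(grad, clip_norm=1.0, noise_std=0.5):
    # Clip to bound any single device's influence, then add Gaussian noise
    # before the gradient leaves the device.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(scale=noise_std * clip_norm, size=grad.shape)

def local_round(w_local, local_grad, neighbour_grads, lr=0.1, mix=0.5):
    # Personalized update: weight the device's own (non-noised) gradient
    # against the privatized gradients received from its neighbours.
    neighbour_avg = np.mean(neighbour_grads, axis=0)
    update = mix * local_grad + (1.0 - mix) * neighbour_avg
    return w_local - lr * update

# Toy usage: three neighbours send privatized gradients of dimension 10.
w = np.zeros(10)
own_grad = rng.normal(size=10)
received = [privatize(rng.normal(size=10)) for _ in range(3)]
w = local_round(w, own_grad, received)
print(w)
```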

Related papers:
[1] Decentralized Collaborative Learning of Personalized Models over Networks
[2] Privacy-Preserving Deep Learning
[3] Deep Learning with Differential Privacy

Federated optimization: distributed SGD with fault tolerance

This project explores the setting where data does not leave each user's device while certain (arbitrary) devices fail and recover. The challenge is to accelerate learning in this scenario by leveraging techniques such as importance sampling.
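
As a hedged illustration of the importance-sampling idea, the sketch below draws examples with probability proportional to a per-example smoothness estimate and reweights the gradient by 1/(n * p_i) so the update remains an unbiased estimate of the full gradient; the scoring rule, toy problem, and constants are assumptions, not the project's prescribed method.

```python
# Minimal sketch of importance-sampled SGD on a least-squares toy problem:
# examples are drawn with probability proportional to ||x_i||^2 (a simple
# per-example smoothness proxy) and the sampled gradient is reweighted by
# 1 / (n * p_i) so the update stays unbiased. Illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n, d = 500, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

scores = np.sum(X ** 2, axis=1)          # per-example importance scores
p = scores / scores.sum()                # sampling distribution

w = np.zeros(d)
lr = 0.01
for _ in range(5000):
    i = rng.choice(n, p=p)
    grad = (X[i] @ w - y[i]) * X[i]      # per-example gradient
    w -= lr * grad / (n * p[i])          # importance weight keeps it unbiased
print("distance to w_true:", np.linalg.norm(w - w_true))
```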

Related papers:
[1] Accelerating Minibatch Stochastic Gradient Descent using Stratified Sampling
[2] Stochastic Optimization with Importance Sampling

Contact: Rhicheek Patra or Georgios Damaskinos