Distributed Computing LPD

Distributed Machine Learning


Modern machine learning algorithms operate on huge volumes of data, which drives the demand for distributed solutions from both the system and the algorithmic perspective.

Asynchronous ML on Android devices

This project is about training ML algorithms asynchronously on Android devices. The challenges here are primarily mobile churn, latency, energy consumption, memory, bandwidth, and accuracy. The project comprises multiple semester projects that tackle subsets of these challenges from the algorithmic (SGD variants) and the system (a framework for Android) perspective.

Related papers:
[1] Heterogeneity-aware Distributed Parameter Servers
[2] ZipML: Training Linear Models with End-to-End Low Precision, and a Little Bit of Deep Learning
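As a toy illustration of the algorithmic side, the sketch below simulates asynchronous SGD on a one-parameter least-squares problem: each update is computed on a possibly stale parameter snapshot, and the step size is damped by the staleness. This is a common heuristic, not a prescribed design; all function names and constants are illustrative.

```python
import random

def grad(w, x, y):
    # Gradient of the squared error (w*x - y)^2 with respect to w
    return 2 * (w * x - y) * x

def async_sgd(data, steps=2000, lr=0.05, max_delay=3, seed=0):
    """Simulated asynchronous SGD: each update is computed on a
    parameter snapshot up to `max_delay` steps old, and the learning
    rate is divided by (1 + staleness) to damp stale gradients."""
    rng = random.Random(seed)
    w = 0.0
    history = [w]                      # past parameter values
    for _ in range(steps):
        staleness = rng.randint(0, max_delay)
        w_stale = history[max(0, len(history) - 1 - staleness)]
        x, y = rng.choice(data)
        g = grad(w_stale, x, y)
        w -= lr / (1 + staleness) * g  # staleness-aware step size
        history.append(w)
    return w

# Noiseless data generated from y = 3*x: the run should recover w close to 3
data = [(x, 3 * x) for x in [0.5, 1.0, 1.5, 2.0]]
w = async_sgd(data)
```

Damping by staleness trades a slower step for stability: a gradient computed on very old parameters moves the model less, which is one of the knobs such a project could study on real devices.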

Personalized/Private ML in a P2P network

This project calls for private ML algorithms where data does not leave the user's device and each user has a personalized version of the model. More precisely, the aim is for every mobile device to maintain its own personalized learning model, trained locally on local data and updated periodically through cross-device communication, without sending its data to others. Since even the communicated gradients can leak information, they must also be made private, which introduces a trade-off between accuracy and privacy. The major challenges are the accuracy-privacy trade-off, memory, bandwidth, and latency.

Related papers:
[1] Decentralized Collaborative Learning of Personalized Models over Networks
[2] Privacy-Preserving Deep Learning
[3] Deep Learning with Differential Privacy
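One concrete building block for the privacy side is the gradient sanitization step of DP-SGD (paper [3] above): clip each gradient to a bounded L2 norm, then add Gaussian noise. Below is a minimal sketch of that step only; the function name and constants are illustrative, and the full algorithm's batching and privacy accounting are omitted.

```python
import math
import random

def privatize_gradient(grad, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Clip a gradient to L2 norm at most `clip_norm`, then add
    Gaussian noise. Clipping bounds any single example's influence;
    the noise provides the differential-privacy guarantee."""
    rng = rng or random.Random(0)
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    sigma = noise_mult * clip_norm     # noise scales with the clip bound
    return [g + rng.gauss(0.0, sigma) for g in clipped]

# A gradient of norm 5 is scaled down to norm 1, then perturbed
g = privatize_gradient([3.0, 4.0], clip_norm=1.0, noise_mult=0.1)
```

The noise multiplier directly controls the accuracy-privacy trade-off mentioned above: more noise means stronger privacy but noisier model updates.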

P2P data market

The goal is to design a P2P infrastructure that enables service providers (peers) to buy and sell data. The main challenge for a candidate scheme is defining and measuring data utility from the perspective of each peer. The revenue model and the privacy guarantees are two further important challenges in this setting.

Related papers:
[1] The Cost of Privacy: Destruction of Data-Mining Utility in Anonymized Data Publishing
[2] Price-Optimal Querying with Data APIs
[3] Query-Based Data Pricing
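One candidate definition of data utility, among many a candidate scheme could consider, is the marginal reduction in the buyer's validation loss when the seller's data is added. The deliberately tiny sketch below uses a mean estimator standing in for the model; all names and numbers are hypothetical.

```python
def mse(model, data):
    # Mean squared error of a constant predictor
    return sum((model - y) ** 2 for y in data) / len(data)

def fit_mean(data):
    # Trivial "model": the mean minimizes squared error
    return sum(data) / len(data)

def marginal_utility(buyer_data, seller_data, val_data):
    """Utility of the seller's data to the buyer, measured as the
    drop in the buyer's validation loss after adding the data
    (one hypothetical utility measure among many)."""
    before = mse(fit_mean(buyer_data), val_data)
    after = mse(fit_mean(buyer_data + seller_data), val_data)
    return before - after

buyer = [0.0, 0.2]        # buyer's biased sample
seller = [1.0, 1.1, 0.9]  # seller's data closer to the truth
val = [1.0, 1.0]          # buyer's validation set
u = marginal_utility(buyer, seller, val)   # positive: the data helps
```

A price could then be tied to this measured utility, though doing so fairly, privately, and robustly is exactly the open problem of the project.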

Federated optimization: distributed SGD with fault tolerance

This project explores the case where data does not leave each user's device while certain (arbitrary) devices fail and recover. The challenge is to accelerate learning in this scenario by leveraging techniques such as importance sampling.

Related papers:
[1] Accelerating Minibatch Stochastic Gradient Descent using Stratified Sampling
[2] Stochastic Optimization with Importance Sampling
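The importance-sampling idea from [2] can be sketched as follows: draw examples with probability proportional to a bound on their gradient norm, and reweight each update by 1/(n·p_i) so that it stays unbiased. Below is a minimal single-parameter sketch; the sampling weights and constants are illustrative.

```python
import random

def importance_sgd(data, steps=3000, lr=0.05, seed=0):
    """SGD with importance sampling: examples are drawn with
    probability proportional to a bound on their gradient norm
    (here |x| for the loss (w*x - y)^2), and each gradient is
    reweighted by 1/(n * p_i) to keep the update unbiased."""
    rng = random.Random(seed)
    n = len(data)
    weights = [abs(x) for x, _ in data]
    total = sum(weights)
    probs = [wt / total for wt in weights]
    w = 0.0
    for _ in range(steps):
        i = rng.choices(range(n), weights=probs)[0]
        x, y = data[i]
        g = 2 * (w * x - y) * x        # gradient of (w*x - y)^2
        w -= lr * g / (n * probs[i])   # unbiased reweighting
    return w

# Noiseless data from y = 3*x: the run should recover w close to 3
data = [(x, 3 * x) for x in [0.2, 0.5, 1.0, 2.0]]
w_hat = importance_sgd(data)
```

Sampling informative examples more often reduces gradient variance; in the fault-tolerant setting, the same machinery could be adapted to compensate for examples held by currently failed devices.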

Byzantine-tolerant machine learning

Each node in a distributed setting can exhibit arbitrary (Byzantine) behaviour during the learning procedure. This project explores algorithms (SGD variants) in both the synchronous and the asynchronous setup. The student will implement these algorithms on top of our TensorFlow-based code base.

Related papers:
[1] Asynchronous Byzantine Machine Learning
[2] Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent
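Paper [2] proposes the Krum aggregation rule; for intuition, the sketch below shows an even simpler Byzantine-resilient alternative, the coordinate-wise median, which (unlike the mean) cannot be dragged arbitrarily far by a single malicious worker. This is a baseline illustration, not the method of the papers above.

```python
def coordwise_median(gradients):
    """Aggregate worker gradients by taking the median of each
    coordinate. Averaging can be skewed without bound by one
    Byzantine worker; the median cannot, as long as a majority
    of workers is honest."""
    def median(vals):
        s = sorted(vals)
        m = len(s) // 2
        return s[m] if len(s) % 2 else (s[m - 1] + s[m]) / 2
    dims = len(gradients[0])
    return [median([g[d] for g in gradients]) for d in range(dims)]

# Three honest gradients near (1, 1) and one Byzantine outlier:
grads = [[1.0, 1.0], [0.9, 1.1], [1.1, 0.9], [1e6, -1e6]]
agg = coordwise_median(grads)   # stays close to the honest gradients
```

With plain averaging the same input would yield a gradient dominated by the outlier; the median aggregate remains near (1, 1).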

Black-box attacks against recommender systems

A recommender system can be viewed as a black box that users query with feedback (e.g., ratings, clicks) and that returns a list of recommendations. The goal is to infer properties of the recommendation algorithm by observing the outputs of different queries.

Related papers:
[1] Stealing Machine Learning Models via Prediction APIs
[2] Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples
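The property-inference idea can be illustrated with a toy probe: query the black box with two very different feedback profiles and check whether the recommendations change at all, distinguishing a purely popularity-based system from a personalized one. All functions, item names, and scores below are hypothetical.

```python
def popularity_recommender(user_ratings, catalog, k=2):
    """Toy black box that ignores the user's feedback and always
    returns the k globally most popular items."""
    ranked = sorted(catalog, key=catalog.get, reverse=True)
    return ranked[:k]

def personalized_recommender(user_ratings, catalog, k=2):
    """Toy black box that puts the user's own highly rated items first."""
    liked = sorted(user_ratings, key=user_ratings.get, reverse=True)
    rest = [i for i in sorted(catalog, key=catalog.get, reverse=True)
            if i not in liked]
    return (liked + rest)[:k]

def seems_personalized(recommender, catalog, k=2):
    """Probe the black box with two very different feedback profiles;
    identical outputs suggest the system does not personalize
    (a crude property-inference test)."""
    out_a = recommender({"item1": 5}, catalog, k)
    out_b = recommender({"item9": 5}, catalog, k)
    return out_a != out_b

catalog = {"item%d" % i: 10 - i for i in range(10)}  # item0 most popular
```

Real attacks would of course need many more queries and statistical tests, but the principle is the same: differences in outputs across crafted queries reveal properties of the hidden algorithm.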

Multi-output multi-class classification

The goal of this project is to design a distributed ML algorithm suitable for multi-output classification (e.g., music tag prediction on mobile devices). Deep learning-based approaches seem promising for this task; nevertheless, most current methods target only single-output classification.

Related papers:
[1] Deep Neural Networks for YouTube Recommendations
[2] Deep content-based music recommendation
[3] Codebook-based scalable music tagging with Poisson matrix factorization
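A minimal way to frame multi-output (multi-label) classification is one sigmoid output per tag, trained with a per-tag logistic loss, so that a single example can carry several tags at once. The plain-Python sketch below is an illustration of the problem setup, not a proposed design; the toy features and tags are invented.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_multilabel(X, Y, n_tags, lr=0.5, epochs=200):
    """Multi-output classifier: one sigmoid unit per tag, trained by
    SGD on the per-tag logistic loss. Unlike single-output (softmax)
    classification, each example may activate several outputs."""
    n_feat = len(X[0])
    W = [[0.0] * n_feat for _ in range(n_tags)]
    for _ in range(epochs):
        for x, y in zip(X, Y):
            for t in range(n_tags):
                p = sigmoid(sum(w * xi for w, xi in zip(W[t], x)))
                err = p - y[t]                 # logistic-loss gradient factor
                for j in range(n_feat):
                    W[t][j] -= lr * err * x[j]
    return W

def predict(W, x):
    # Independent 0/1 decision per tag
    return [int(sigmoid(sum(w * xi for w, xi in zip(wt, x))) > 0.5)
            for wt in W]

# Toy data: feature 0 signals tag 0, feature 1 signals tag 1,
# and both tags can co-occur (e.g., a song tagged 'rock' and 'live')
X = [[1, 0], [0, 1], [1, 1], [0, 0]]
Y = [[1, 0], [0, 1], [1, 1], [0, 0]]
W = train_multilabel(X, Y, n_tags=2)
```

Replacing the per-tag linear units with a shared deep network gives the deep-learning variants the project would explore, with the added challenge of distributing the training.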

Contact: Georgios Damaskinos