The lab is teaching the following courses:
In the past, the lab taught the following courses:
DCL offers master projects in the following areas:
- Probabilistic Byzantine Resilience: Development of high-performance, Byzantine-resilient distributed systems with provable probabilistic guarantees. Two options are currently available, both building on previous work on probabilistic Byzantine broadcast: (i) a theoretical project, focused on the correctness of probabilistic Byzantine-tolerant distributed algorithms; (ii) a practical project, focused on numerically evaluating our theoretical results. Please contact Matteo Monti for more information.
- Distributed computing using RDMA and/or NVRAM. RDMA (Remote Direct Memory Access) allows accessing a remote machine's memory without interrupting its CPU. NVRAM is byte-addressable persistent (non-volatile) memory with access times on the same order of magnitude as those of traditional (volatile) RAM. These two recent technologies pose novel challenges and open up new opportunities in distributed system design and implementation. Contact Igor Zablotchi for more information.
- Robust Distributed Machine Learning: With the proliferation of big datasets and models, Machine Learning is becoming distributed. In the standard parameter server model, the learning phase is carried out by two categories of machines: parameter servers and workers. Any of these machines can behave arbitrarily (i.e., be Byzantine), jeopardizing model convergence during learning. Our goal in this project is to build a system that is robust to Byzantine behavior of both parameter servers and workers. Our first prototype, AggregaThor (https://www.sysml.cc/doc/2019/54.pdf), is the first scalable robust Machine Learning framework. It fixed a severe vulnerability in TensorFlow and showed how to make TensorFlow even faster while remaining robust. Contact Arsany Guirguis for more information.
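To illustrate the kind of robust aggregation such a system relies on, the sketch below compares plain gradient averaging with a coordinate-wise median: a single Byzantine worker can pull the mean arbitrarily far from the honest gradients, but not the median. (This is a generic illustrative rule, not the specific aggregation used in AggregaThor, which is described in the linked paper.)

```python
from statistics import median

def average(gradients):
    # Plain averaging: one Byzantine worker can shift the result arbitrarily.
    n = len(gradients)
    dim = len(gradients[0])
    return [sum(g[i] for g in gradients) / n for i in range(dim)]

def coordinate_wise_median(gradients):
    # Robust aggregation: take the median of each coordinate across workers.
    dim = len(gradients[0])
    return [median(g[i] for g in gradients) for i in range(dim)]

# Three honest workers report similar gradients; one Byzantine worker lies.
honest = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9]]
byzantine = [[1000.0, -1000.0]]
grads = honest + byzantine

print(average(grads))                  # dragged far away by the Byzantine worker
print(coordinate_wise_median(grads))   # stays close to the honest gradients
```

The same idea extends to real tensors: the server applies the robust rule per coordinate (or a more refined rule such as Krum or trimmed mean) before updating the model.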
- Consistency in global-scale storage systems: We offer several projects in the context of storage systems, ranging from the implementation of social applications (similar to Retwis or ShareJS) to recommender systems, static content storage services (à la Facebook's Haystack), or experiments with well-known cloud serving benchmarks (such as YCSB); please contact Adi Seredinschi or Karolos Antoniadis for further information.
If the subject of a Master Project interests you as a Semester Project, please contact the supervisor of the Master Project to see if it can be considered for a Semester Project.
EPFL I&C duration, credits and workload information are available here. Don't hesitate to contact the project supervisor if you want to complete your Semester Project outside the regular semester period.
The lab also collaborates with industry and with other labs at EPFL to offer student projects motivated by real-world problems. With LARA and the Interchain Foundation we have several projects:
- AT2: Integration of an asynchronous (consensus-less) payment system in the Cosmos Hub.
- Interblockchain Communication (IBC): Protocol descriptions (and optional implementations) for enabling independent blockchain applications to interoperate.
- Stainless: Implementation of Tendermint modules (consensus, mempool, fast sync) using Stainless and Scala.
- Prusti: Implementation of Tendermint modules (consensus, mempool, fast sync) using Prusti and the Rust programming language.
- Mempool performance analysis and algorithm improvement.
- Adversarial engineering: Experimental evaluation of Tendermint in adversarial settings (e.g., in the style of Jepsen).
- Testing: Generation of tests out of specifications (TLA+ or Stainless) for the consensus module of Tendermint.
- Facebook Libra comparative research: Comparative analysis of consensus algorithms, specifically between HotStuff (the consensus algorithm underlying Facebook's Libra) and Tendermint consensus.
Contact Adi Seredinschi (INR 327) if interested in learning more about these projects.