Software

Open-source libraries & benchmarks

  • declearn [GitHub] [GitLab] [PyPI]
    declearn is a Python package for federated learning, i.e., for enabling several parties holding local datasets to collaboratively train machine learning models without centralizing their data. It provides modular model and algorithm APIs that ensure compatibility with major ML frameworks such as PyTorch, TensorFlow and scikit-learn, and that facilitate the implementation of state-of-the-art FL algorithms (a minimal federated averaging sketch follows this list). The package is distributed under the Apache License 2.0.
  • FLamby [GitHub] [Companion NeurIPS paper]
    FLamby is a benchmark for cross-silo Federated Learning with natural partitioning, currently focused on healthcare applications. It spans multiple data modalities and is designed for easy interfacing with most Federated Learning frameworks (including Fed-BioMed, FedML, Substra...). It also contains implementations of standard federated learning strategies (a dataset-loading sketch follows this list). The package is distributed under the MIT license.
  • Metric-learn [GitHub] [PyPI] [Documentation] [Companion JMLR paper]
    metric-learn is a Python package implementing various metric learning algorithms, among which Large Margin Nearest Neighbor (LMNN), Neighborhood Components Analysis (NCA), Information-Theoretic Metric Learning (ITML), Relevant Components Analysis (RCA) and Mahalanobis Metric for Clustering (MMC). As part of scikit-learn-contrib, the API of metric-learn is compatible with scikit-learn, a prominent library for machine learning in Python. This makes it possible to use all scikit-learn routines (for pipelining, model selection, etc.) with metric learning algorithms through a unified interface (a pipeline sketch follows this list). The package is distributed under the MIT license.
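
To make the federated workflow concrete, here is a minimal federated averaging (FedAvg) round written in plain NumPy. It is an illustrative sketch of the kind of server/client exchange that declearn orchestrates, not declearn's actual API: all function and variable names below are assumptions made for the example.

    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """One client's local training: a few gradient steps on a least-squares loss."""
        w = weights.copy()
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5 * ||Xw - y||^2 / n
            w -= lr * grad
        return w

    def fedavg_round(weights, clients):
        """Server side: broadcast weights, collect local updates, and average
        them weighted by dataset size. Raw data never leaves the clients."""
        updates = [local_update(weights, X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        return np.average(updates, axis=0, weights=sizes / sizes.sum())

    rng = np.random.default_rng(0)
    w_true = np.array([2.0, -1.0])
    clients = []  # three parties with local datasets of different sizes
    for n in (50, 80, 30):
        X = rng.normal(size=(n, 2))
        clients.append((X, X @ w_true + 0.1 * rng.normal(size=n)))

    w = np.zeros(2)
    for _ in range(20):  # 20 communication rounds
        w = fedavg_round(w, clients)
    print(w)  # close to w_true, without ever pooling the data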
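
The next sketch shows what using a FLamby dataset typically looks like: loading one natural client (a "center", i.e., an actual hospital) as a PyTorch dataset and training the task's baseline model on it locally. The names (FedHeartDisease, Baseline, BaselineLoss, the center/train arguments) follow the pattern of FLamby's documentation but should be treated as assumptions; check the repository for the exact, current API.

    import torch
    from torch.utils.data import DataLoader
    # Assumed import path, following FLamby's documented naming pattern.
    from flamby.datasets.fed_heart_disease import (
        FedHeartDisease,  # data naturally partitioned across hospitals
        Baseline,         # reference model architecture for this task
        BaselineLoss,     # reference loss for this task
    )

    # Each "center" is a real hospital: the split is natural, not simulated.
    train_set = FedHeartDisease(center=0, train=True)
    loader = DataLoader(train_set, batch_size=4, shuffle=True)

    model, loss_fn = Baseline(), BaselineLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for X, y in loader:  # one local epoch on client 0
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()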
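
Finally, because metric-learn follows the scikit-learn API, a learned metric can be dropped into standard scikit-learn tooling. The sketch below chains NCA, used as a supervised transformer, with a k-nearest-neighbors classifier and evaluates the pipeline with cross-validation; the hyperparameter values are arbitrary choices for the example.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import Pipeline
    from metric_learn import NCA

    X, y = load_iris(return_X_y=True)

    pipe = Pipeline([
        ("nca", NCA(max_iter=100)),  # learns a Mahalanobis metric from labels
        ("knn", KNeighborsClassifier(n_neighbors=3)),
    ])
    # Since the API is scikit-learn compatible, model selection utilities
    # such as cross_val_score work out of the box.
    print(cross_val_score(pipe, X, y, cv=5).mean())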

Open-source code from papers

  • One-Shot Federated Conformal Prediction (ICML 2023) [GitHub]
  • GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation (USENIX Security 2023) [GitHub]
  • Collaborative Algorithms for Online Personalized Mean Estimation (TMLR 2022) [GitHub]
  • Distributed Differentially Private Averaging with Improved Utility and Robustness to Malicious Parties (Machine Learning 2022) [GitLab]
  • Muffliato: Peer-to-Peer Privacy Amplification for Decentralized Optimization and Averaging (NeurIPS 2022) [GitHub]
  • Privacy Amplification by Decentralization (AISTATS 2022) [GitHub]
  • Differentially Private Federated Learning on Heterogeneous Data (AISTATS 2022) [GitHub]
  • Differentially Private Coordinate Descent for Composite Empirical Risk Minimization (ICML 2022) [GitLab]
  • Fair NLP Models with Differentially Private Text Encoders (Findings of EMNLP 2022) [GitHub]
  • D-Cliques: Compensating for Data Heterogeneity with Topology in Decentralized Federated Learning (SRDS 2022) [GitLab]
  • Federated Multi-Task Learning under a Mixture of Distributions (NeurIPS 2021) [GitHub]
  • Towards More Transparency in the Automatic Analysis of Open Consultations: Lessons from the Synthesis of the Grand Débat National (in French; Statistique et Société 2021) [GitLab]
  • Learning Fair Scoring Functions: Bipartite Ranking under ROC-based Fairness Constraints (AISTATS 2021) [GitHub]
  • Fully Decentralized Joint Learning of Personalized Models and Collaboration Graphs (AISTATS 2020) [GitHub]
  • Trade-offs in Large-Scale Distributed Tuplewise Estimation and Learning (ECML/PKDD 2019) [GitHub]
  • A Probabilistic Theory of Supervised Similarity Learning for Pointwise ROC Curve Optimization (ICML 2018) [GitHub]
  • A Distributed Frank-Wolfe Framework for Learning Low-Rank Matrices with the Trace Norm (Machine Learning 2018) [GitHub]
  • Extending Gossip Algorithms to Distributed Estimation of U-statistics (NIPS 2015) [GitHub]
  • Similarity Learning for High-Dimensional Sparse Data (AISTATS 2015) and Escaping the Curse of Dimensionality in Similarity Learning: Efficient Frank-Wolfe Algorithm and Generalization Bounds (Neurocomputing 2019) [GitHub]
  • A Distributed Frank-Wolfe Algorithm for Communication-Efficient Sparse Learning (SDM 2015) [mloss]
  • Sparse Compositional Metric Learning (AAAI 2014) [GitHub] [mloss]
  • Learning Good Edit Similarities with Generalization Guarantees (ECML/PKDD 2011) and Good edit similarity learning by loss minimization (Machine Learning 2012) [mloss]