Latest news

  • Feb 24

    New preprint on private optimization

From Noisy Fixed-Point Iterations to Private ADMM for Centralized and Federated Learning (arXiv:2302.12559).

  • Feb 13

    New preprint on federated learning

One-Shot Federated Conformal Prediction (arXiv:2302.06322).

  • Jan 20

    Two papers accepted to AISTATS

Refined Convergence and Topology Learning for Decentralized SGD with Heterogeneous Data and High-Dimensional Private Empirical Risk Minimization by Greedy Coordinate Descent (AISTATS 2023).

  • Dec 14

    Paper accepted to TMLR

Collaborative Algorithms for Online Personalized Mean Estimation (Transactions on Machine Learning Research, to appear).

  • Nov 18

    Paper accepted to USENIX Security

GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation (USENIX Security 2023).

  • Oct 28

    New preprint on privacy & fairness

Fairness Certificates for Differentially Private Classification (arXiv:2210.16242).

Short bio

I am a tenured researcher at Inria, where I am part of the Magnet Team (MAchine learninG in information NETworks) and affiliated with CRIStAL (UMR CNRS 9189), a research center of the University of Lille. I am also an invited associate professor at Télécom Paris. I obtained my French habilitation (HDR) in 2021.

Prior to joining Inria, I was a postdoctoral researcher at the University of Southern California (working with Fei Sha) and then at Télécom Paris (working with Stéphan Clémençon). I obtained my Ph.D. from the University of Saint-Etienne in 2012 under the supervision of Marc Sebban and Amaury Habrard.

You can find more information in my CV or on my LinkedIn profile.

Research interests

My main line of research is the theory and algorithms of machine learning. I am particularly interested in designing large-scale learning algorithms that provably achieve good trade-offs between statistical performance and other key criteria such as computational complexity, communication cost, privacy and fairness.

My current research focuses on:

  • distributed / federated / decentralized learning algorithms
  • privacy-preserving machine learning
  • representation learning and distance metric learning
  • optimization for machine learning
  • graph-based methods
  • statistical learning theory
  • fairness in machine learning
  • applications to NLP and speech recognition