Latest news

  • Feb 15, 2024

    New preprints on privacy in decentralized machine learning

    Differentially Private Decentralized Learning with Random Walks pdf (arXiv:2402.07471) and Privacy Attacks in Decentralized Learning pdf (arXiv:2402.10001).

  • Jan 20, 2024

    Paper accepted to AISTATS

    The Relative Gaussian Mechanism and its Application to Private Gradient Descent pdf (AISTATS 2024).

  • Jan 16, 2024

    Two papers accepted to ICLR

    Confidential-DPproof: Confidential Proof of Differentially Private Training pdf and DP-SGD Without Clipping: The Lipschitz Neural Network Way pdf (ICLR 2024).

  • Dec 21, 2023

    New preprint on Pufferfish privacy

    Rényi Pufferfish Privacy: General Additive Noise Mechanisms and Privacy Amplification by Iteration pdf (arXiv:2312.13985).

  • Oct 7, 2023

    Paper accepted to EMNLP

    Fair Without Leveling Down: A New Intersectional Fairness Definition pdf (EMNLP 2023).

  • Oct 1, 2023

    Promotion to senior researcher

    I have been promoted to Inria senior researcher (directeur de recherche in French)!

Short bio

I am a senior researcher (directeur de recherche) at Inria, France. I am currently part of the PreMeDICaL Team (Precision Medicine by Data Integration and Causal Learning), an Inria/Inserm research group based in sunny Montpellier. I am also an associate member of the Magnet Team (MAchine learninG in information NETworks) based in Lille.

Prior to joining Inria, I was a postdoctoral researcher at the University of Southern California (working with Fei Sha) and then at Télécom Paris (working with Stéphan Clémençon). I obtained my Ph.D. from the University of Saint-Etienne in 2012 under the supervision of Marc Sebban and Amaury Habrard.

You can find more information in my CV or on my LinkedIn profile.

Research interests

My main line of research is the theory and algorithms of machine learning. I am particularly interested in designing large-scale learning algorithms that provably achieve good trade-offs between statistical performance and other key criteria such as computational complexity, communication cost, privacy, and fairness.

My current research topics include:

  • distributed / federated / decentralized learning algorithms
  • privacy-preserving machine learning
  • representation learning and distance metric learning
  • optimization for machine learning
  • graph-based methods
  • statistical learning theory
  • fairness in machine learning
  • applications to NLP, speech recognition and health