Latest news

  • May 2

    4 papers accepted to ICML

    Privacy Attacks in Decentralized Learning, Differentially Private Decentralized Learning with Random Walks, Rényi Pufferfish Privacy: General Additive Noise Mechanisms and Privacy Amplification by Iteration via Shift Reduction Lemmas, and Improved Stability and Generalization Guarantees of the Decentralized SGD Algorithm (ICML 2024).

  • Feb 15

    New preprints on privacy in decentralized machine learning

    Differentially Private Decentralized Learning with Random Walks (arXiv:2402.07471) and Privacy Attacks in Decentralized Learning (arXiv:2402.10001).

  • Jan 20

    1 paper accepted to AISTATS

    The Relative Gaussian Mechanism and its Application to Private Gradient Descent (AISTATS 2024).

  • Jan 16

    2 papers accepted to ICLR

    Confidential-DPproof: Confidential Proof of Differentially Private Training and DP-SGD Without Clipping: The Lipschitz Neural Network Way (ICLR 2024).

  • Dec 21

    New preprint on Pufferfish privacy

    Rényi Pufferfish Privacy: General Additive Noise Mechanisms and Privacy Amplification by Iteration (arXiv:2312.13985).

Short bio

I am a senior researcher (directeur de recherche) at Inria, France. I am currently part of the PreMeDICaL Team (Precision Medicine by Data Integration and Causal Learning), an Inria/Inserm research group based in sunny Montpellier. I am also an associate member of the Magnet Team (MAchine learninG in information NETworks) based in Lille.

Prior to joining Inria, I was a postdoctoral researcher at the University of Southern California (working with Fei Sha) and then at Télécom Paris (working with Stéphan Clémençon). I obtained my Ph.D. from the University of Saint-Etienne in 2012 under the supervision of Marc Sebban and Amaury Habrard.

You can find more information in my CV or on my LinkedIn profile.

Research interests

My main line of research is the theory and algorithms of machine learning. I am particularly interested in designing large-scale learning algorithms that provably achieve good trade-offs between statistical performance and other key criteria such as computational complexity, communication cost, privacy, and fairness.

My current research focus includes:

  • distributed / federated / decentralized learning algorithms
  • privacy-preserving machine learning
  • representation learning and distance metric learning
  • optimization for machine learning
  • graph-based methods
  • statistical learning theory
  • fairness in machine learning
  • applications to NLP, speech recognition and health