TIP: Transparent artificial Intelligence preserving Privacy
Context. While AI techniques are becoming ever more powerful, there is growing concern about potential risks and abuses. As a result, there has been increasing interest in research directions such as privacy-preserving machine learning, explainable machine learning, fairness, and data protection legislation. Privacy-preserving machine learning aims at learning (and publishing or applying) a model from data without revealing the data (to some parties, e.g., to those seeing the final model or to those learning the model). Notions such as (local) differential privacy and its generalizations make it possible to bound the amount of information revealed by certain operations. Explainable machine learning aims at learning models which are not only accurate but can also be explained to humans (according to some criterion of explainability).
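To illustrate the kind of guarantee local differential privacy provides, here is a minimal sketch (not part of the project itself) of randomized response, a classical epsilon-locally-differentially-private mechanism: each agent perturbs its own binary answer before sending it, yet an aggregator can still recover an unbiased population estimate. All function names below are illustrative.

```python
import math
import random

def randomized_response(value: bool, epsilon: float) -> bool:
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise flip it. This satisfies eps-local differential privacy:
    no single report reveals much about the agent's true value."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return value if random.random() < p_truth else (not value)

def estimate_fraction(reports, epsilon: float) -> float:
    """Debias the noisy reports to obtain an unbiased estimate of the
    fraction of agents whose true bit is 1."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

# Hypothetical population: 30% of 100,000 agents hold the sensitive bit.
random.seed(0)
truth = [random.random() < 0.3 for _ in range(100_000)]
eps = 1.0
reports = [randomized_response(b, eps) for b in truth]
print(estimate_fraction(reports, eps))  # close to 0.3
```

Smaller values of epsilon flip more bits, revealing less about each individual agent at the cost of a noisier aggregate estimate; this privacy/utility trade-off is central to privacy-preserving learning.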
Objective. The overall goal of the TIP project is to develop, exploit and explain a sound understanding of privacy-preserving strategies within larger AI-based processes involving massive numbers of agents, some of whom may be malicious.
The TIP project started in November 2020 and will run for approximately four years.
People and related projects
This is a project of the MAGNET team.
Project members include:
- PI: Jan Ramon
- PhD student: Antoine Barczewski
- PhD student: Marc Damie
- PhD student: to be hired
- engineer: Jean-Paul Lam
- engineer: Antonin Duez
- post-docs: to be hired
The TIP project algorithms will be implemented in the TAILED open source library.
We are still looking for collaborators to start in autumn 2022.
When applying, please also read the general guidelines.
There are also open positions on related projects.
The TIP project is funded by: