Michal Valko

Michal Valko is a machine learning scientist at DeepMind and Inria, and a lecturer at MVA/ENS Paris-Saclay.

  machine learning, sequential learning, active learning, graph methods, bandit theory

News

News: I am giving an invited talk on May 28th, 2019 at the Theoretical Computer Science seminar at CU in Bratislava.
News: A GP-UCB sparsification paper accepted to COLT 2019. See you in Phoenix!
News: Two papers accepted to ICML 2019. See you in Long Beach!
News: I am giving an invited talk on June 14-15, 2019 at the ICML workshop on negative dependence.
News: We are organizing the Optimizing Human Learning 2019 workshop.
News: I am giving an invited talk on February 22nd, 2019 at the Theoretical Computer Science seminar at CU in Bratislava.
News: I am giving an invited talk on February 20th, 2019 at the Data Analytics Meetings at UPJŠ in Košice.
News: We are organizing the Reinforcement Learning Summer SCOOL on July 1-12, 2019 in Lille, France.
News: On June 3-4, 2019, we are organizing the workshop The power of graphs with Laura Toni.
News: Pierre Ménard joins as a postdoc!
News: I am serving as an area chair for NeurIPS 2019.
News: I am giving an invited talk during July 3-8, 2019 at the RAAI Summer School 2019, Moscow Institute of Physics and Technology.
News: Three papers accepted to AISTATS 2019. See you in Okinawa!
News: I am giving an invited talk on January 25th, 2019 at Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, UK.
News: Two papers on black-box optimization accepted to ALT 2019!
News: I am giving two invited talks on January 7th and 8th, 2019 at Verimag, CNRS Grenoble, France.
News: Pierre Perrault gives an invited talk on Stochastic multi-arm bandit problem and some extensions on November 23rd, 2018 at the Lambda seminar at Université de Bordeaux.
News: DPPy: Sampling determinantal point processes with Python released!
News: I will be on the program committee for COLT 2019.

older news

Bio

Michal is a machine learning scientist at DeepMind Paris, in the SequeL team at Inria, and the lecturer of the master course Graphs in Machine Learning at ENS Paris-Saclay. Michal is primarily interested in designing algorithms that would require as little human supervision as possible. This means 1) reducing the “intelligence” that humans need to input into the system and 2) minimizing the effort that humans need to spend inspecting, classifying, or “tuning” the algorithms. Another important feature of machine learning algorithms should be the ability to adapt to changing environments. That is why he works in domains that are able to deal with minimal feedback, such as online learning, bandit algorithms, semi-supervised learning, and anomaly detection. Most recently he has worked on sequential algorithms with structured decisions, where exploiting the structure leads to provably faster learning. Structured learning requires more time and space resources, and therefore Michal's most recent work includes efficient approximations such as graph and matrix sketching with learning guarantees. In the past, the common thread of Michal's work has been adaptive graph-based learning and its application to real-world problems such as recommender systems, medical error detection, and face recognition. His industrial collaborators include Adobe, Intel, Technicolor, and Microsoft Research. He received his Ph.D. in 2011 from the University of Pittsburgh under the supervision of Miloš Hauskrecht and was afterwards a postdoc of Rémi Munos before taking a permanent position at Inria in 2012.

Collaborative Projects

  • CompLACS (EU FP7) - COMposing Learning for Artificial Cognitive Systems, 2011 - 2015 (with J. Shawe-Taylor)
  • DELTA (EU CHIST-ERA) - PC - Dynamically Evolving Long-Term Autonomy, 2018 - 2021 (with A. Jonsson)
  • PGMO-IRMO grant of Fondation Mathématique Jacques Hadamard: Theoretically grounded efficient algorithms for high-dimensional and continuous reinforcement learning, 2018 - 2020 (with M. Pirotta)
  • BoB (ANR) - Bayesian statistics for expensive models and tall data, 2016 - 2020 (with R. Bardenet)
  • LeLivreScolaire.fr - Sequential Learning for Educational Systems, 2017 - 2020 (PI)
  • Inria/CWI - Sequential prediction & Understanding Deep RL, postdoc funding (PC, 2016 - 2018)
  • Extra-Learn (ANR) - PI - EXtraction and TRAnsfer of knowledge in reinforcement LEARNing, 2014 - 2018 (with A. Lazaric)
  • EduBand - co-PI - Educational Bandits project with Carnegie Mellon, 2015 - 2018 (with A. Lazaric and E. Brunskill)
  • Allocate - PI - Adaptive allocation of resources for recommender systems with U. Potsdam, 2017 - 2019 (with A. Carpentier)
  • INTEL/Inria - PI - Algorithmic Determination of IoT Edge Analytics Requirements, 2013 - 2014
previous projects

Students and postdocs

  • Édouard Oyallon, 2017 - 2018, ENS Rennes/ENS Ulm, postdoc ↝ École Centrale de Paris
  • Tomáš Kocák, 2013 - 2016, Comenius University, PhD student, with Rémi Munos ↝ ENS Lyon
  • Daniele Calandriello, 2014 - 2017, Polimi, PhD student, AFIA, 1st prize, with Alessandro Lazaric ↝ IIT
  • Axel Elaldi, 2017 - 2018, master student, École Centrale de Lille ↝ ENS Paris-Saclay/MVA
  • Xuedong Shang, 2017, master student, ENS Rennes, with Emilie Kaufmann ↝ Inria
  • Guillaume Gautier, 2016, master student, École Normale Supérieure, Paris-Saclay, with Rémi Bardenet ↝ Inria/CNRS
  • Andrea Locatelli, 2015 - 2016, ENSAM/ENS Paris-Saclay, with Alexandra Carpentier ↝ Universität Potsdam
  • Souhail Toumdi, 2015 - 2016, master student, École Centrale de Lille, with Rémi Bardenet ↝ ENS Paris-Saclay/MVA
  • Akram Erraqabi, 2015, master student, École Polytechnique, Paris ↝ Université de Montréal
  • Mastane Achab, 2015, master student, École Polytechnique, Paris, with G. Neu ↝ l'ENS Paris-Saclay ↝ Télécom ParisTech
  • Jean-Bastien Grill, 2014, master student, École Normale Supérieure, Paris, with Rémi Munos ↝ Inria
  • Alexandre Dubus, 2012 - 2013, master student, Université Lille 1 - Sciences et Technologies ↝ Inria
  • Karim Jedda, 2012 - 2013, master student, École Centrale de Lille ↝ ProSiebenSat.1
  • Alexis Wehrli, 2012 - 2013, master student, École Centrale de Lille ↝ ERDF

Selected Publications

  • Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco: Gaussian process optimization with adaptive sketching: Scalable and no regret, in Conference on Learning Theory (COLT 2019) bibtex abstract arXiv preprint
  • Pierre Perrault, Vianney Perchet, Michal Valko: Exploiting structure of uncertainty for efficient matroid semi-bandits, in International Conference on Machine Learning (ICML 2019) bibtex abstract
  • Peter Bartlett, Victor Gabillon, Jennifer Healey, Michal Valko: Scale-free adaptive planning for deterministic dynamics & discounted rewards, in International Conference on Machine Learning (ICML 2019) bibtex abstract
  • Julien Seznec, Andrea Locatelli, Alexandra Carpentier, Alessandro Lazaric, Michal Valko: Rotting bandits are not harder than stochastic ones, in International Conference on Artificial Intelligence and Statistics (AISTATS 2019) bibtex abstract [full oral presentation - 2.5% acceptance rate]
  • Andrea Locatelli, Alexandra Carpentier, Michal Valko: Active multiple matrix completion with adaptive confidence sets, in International Conference on Artificial Intelligence and Statistics (AISTATS 2019) bibtex abstract
  • Pierre Perrault, Vianney Perchet, Michal Valko: Finding the bandit in a graph: Sequential search-and-stop, in International Conference on Artificial Intelligence and Statistics (AISTATS 2019) bibtex abstract poster
  • Peter L. Bartlett, Victor Gabillon, Michal Valko: A simple parameter-free and adaptive approach to optimization under a minimal local smoothness assumption, in Algorithmic Learning Theory (ALT 2019) bibtex abstract talk 1 talk 2
  • Jean-Bastien Grill, Michal Valko, Rémi Munos: Optimistic optimization of a Brownian, in Neural Information Processing Systems (NeurIPS 2018) bibtex abstract poster
  • Edouard Oyallon, Eugene Belilovsky, Sergey Zagoruyko, Michal Valko: Compressing the input for CNNs with the first-order scattering transform, in European Conference on Computer Vision (ECCV 2018) bibtex abstract poster
  • Daniele Calandriello, Ioannis Koutis, Alessandro Lazaric, Michal Valko: Improved large-scale graph learning through ridge spectral sparsification, in International Conference on Machine Learning (ICML 2018) bibtex abstract talk poster
  • Yasin Abbasi-Yadkori, Peter Bartlett, Victor Gabillon, Alan Malek, Michal Valko: Best of both worlds: Stochastic & adversarial best-arm identification, in Conference on Learning Theory (COLT 2018) bibtex abstract video talk poster
  • Daniele Calandriello, Alessandro Lazaric, Michal Valko: Efficient second-order online kernel learning with adaptive embedding, in Neural Information Processing Systems (NeurIPS 2017) bibtex abstract talk poster
  • Zheng Wen, Branislav Kveton, Michal Valko, Sharan Vaswani: Online influence maximization under independent cascade model with semi-bandit feedback, in Neural Information Processing Systems (NeurIPS 2017) bibtex abstract
  • Guillaume Gautier, Rémi Bardenet, Michal Valko: Zonotope hit-and-run for efficient sampling from projection DPPs, in International Conference on Machine Learning (ICML 2017) bibtex abstract talk poster
  • Daniele Calandriello, Alessandro Lazaric, Michal Valko: Distributed adaptive sampling for kernel matrix approximation, in International Conference on Artificial Intelligence and Statistics (AISTATS 2017) and (ICML 2017 - LL) bibtex abstract talk code poster
  • Akram Erraqabi, Alessandro Lazaric, Michal Valko, Emma Brunskill, Yu-En Liu: Trading off rewards and errors in multi-armed bandits, in International Conference on Artificial Intelligence and Statistics (AISTATS 2017) bibtex abstract poster
  • Akram Erraqabi, Michal Valko, Alexandra Carpentier, Odalric-Ambrym Maillard: Pliable rejection sampling, in International Conference on Machine Learning (ICML 2016) bibtex abstract talk long talk poster
  • Tomáš Kocák, Rémi Munos, Branislav Kveton, Shipra Agrawal, Michal Valko: Spectral Bandits, accepted for publication in the Journal of Machine Learning Research (JMLR 2017)
  • Branislav Kveton, Zheng Wen, Azin Ashkan, Michal Valko: Learning to Act Greedily: Polymatroid Semi-Bandits, accepted for publication in the Journal of Machine Learning Research (JMLR 2017) bibtex abstract arXiv preprint
  • Michal Valko: Bandits on graphs and structures, habilitation thesis, École normale supérieure de Cachan (ENS Cachan 2016) bibtex abstract
  • Jean-Bastien Grill, Michal Valko, Rémi Munos: Blazing the trails before beating the path: Sample-efficient Monte-Carlo planning, in Neural Information Processing Systems (NeurIPS 2016) bibtex abstract talk poster [full oral presentation - 1.8% acceptance rate]
  • Mohammad Ghavamzadeh, Yaakov Engel, Michal Valko: Bayesian policy gradient and actor-critic algorithms, in Journal of Machine Learning Research (JMLR 2016) bibtex abstract code
  • Tomáš Kocák, Gergely Neu, Michal Valko: Online learning with noisy side observations, in International Conference on Artificial Intelligence and Statistics (AISTATS 2016) bibtex abstract talk poster [full oral presentation - 6% acceptance rate]
  • Alexandra Carpentier, Michal Valko: Revealing graph bandits for maximizing local influence, in International Conference on Artificial Intelligence and Statistics (AISTATS 2016) bibtex abstract poster
  • Jean-Bastien Grill, Michal Valko, Rémi Munos: Black-box optimization of noisy functions with unknown smoothness, in Neural Information Processing Systems (NeurIPS 2015) bibtex abstract code, code in R, poster
  • Alexandra Carpentier, Michal Valko: Simple regret for infinitely many armed bandits, in International Conference on Machine Learning (ICML 2015) bibtex abstract talk poster arXiv
  • Tomáš Kocák, Gergely Neu, Michal Valko, Rémi Munos: Efficient Learning by Implicit Exploration in Bandit Problems with Side Observations, in Neural Information Processing Systems (NeurIPS 2014) bibtex abstract talk poster
  • Alexandra Carpentier, Michal Valko: Extreme Bandits, in Neural Information Processing Systems (NeurIPS 2014) bibtex abstract poster
  • Gergely Neu, Michal Valko: Online Combinatorial Optimization with Stochastic Decision Sets and Adversarial Losses, in Neural Information Processing Systems (NeurIPS 2014) bibtex abstract talk poster
  • Michal Valko, Rémi Munos, Branislav Kveton, Tomáš Kocák: Spectral Bandits for Smooth Graph Functions, in International Conference on Machine Learning (ICML 2014) bibtex abstract slides poster
  • Michal Valko, Branislav Kveton, Ling Huang, Daniel Ting: Online Semi-Supervised Learning on Quantized Graphs, in Uncertainty in Artificial Intelligence (UAI 2010) bibtex abstract Video: Adaptation, Video: OfficeSpace, spotlight poster
  • Milos Hauskrecht, Michal Valko, Shyam Visweswaram, Iyad Batal, Gilles Clermont, Gregory Cooper: Conditional Outlier Detection for Clinical Alerting, in Annual American Medical Informatics Association conference (AMIA 2010) bibtex abstract [Homer Warner Best Paper Award]

Contact

  • Inria Lille - Nord Europe, SequeL team (office: A05)
  • Parc Scientifique de la Haute Borne
  • 40 avenue Halley
  • 59650 Villeneuve d'Ascq, France
  • office phone: +33 3 59 57 7801


22-May-2019