NIPS 2017 Workshop

(Almost) 50 Shades of Bayesian Learning: PAC-Bayesian trends and insights


Long Beach Convention Center, California
December 9, 2017

Scope

Industry-wide successes of machine learning at the dawn of the (so-called) big data era have led to an increasing gap between practitioners and theoreticians: the former use off-the-shelf statistical and machine learning methods, while the latter design and study the mathematical properties of such algorithms. This gap is partly bridged by Bayesian researchers, for whom sound mathematical guarantees often meet efficient implementation and provide model selection criteria. In the late 90s, a new paradigm emerged in the statistical learning community, used to derive probably approximately correct (PAC) bounds on Bayesian-flavored estimators. This PAC-Bayesian theory was pioneered by Shawe-Taylor and Williamson (1997) and McAllester (1998, 1999). It has been extensively formalized by Catoni (2004, 2007) and has triggered, slowly but surely, increasing research efforts over the last two decades.
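To give the non-specialist a flavor of these results, a classical form of such a guarantee is McAllester's bound (with the tightened constants of later refinements): for losses in [0, 1], any prior π fixed before seeing the data, and any δ ∈ (0, 1), with probability at least 1 − δ over an i.i.d. sample of size m, simultaneously for all posteriors ρ,

\[
\mathbb{E}_{h \sim \rho}\big[R(h)\big] \;\le\; \mathbb{E}_{h \sim \rho}\big[\widehat{R}(h)\big] + \sqrt{\frac{\operatorname{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{m}}{\delta}}{2m}},
\]

where R and R̂ denote the risk and its empirical counterpart, and KL is the Kullback-Leibler divergence. Since the bound holds uniformly over all posteriors ρ, it can be minimized over ρ, which is one route from PAC-Bayesian theory to concrete learning algorithms.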

We believe it is time to pinpoint current PAC-Bayesian trends relative to other modern approaches in the (statistical) machine learning community. Indeed, we observe that, while the field has grown on its own, it has taken an undesirable distance from related areas. Firstly, the relation to Bayesian methods has been forsaken in numerous works, despite the potential of PAC-Bayesian theory to bring new insights to the Bayesian community and to go beyond the classical Bayesian/frequentist divide. Secondly, PAC-Bayesian methods share similarities with other quasi-Bayesian (or pseudo-Bayesian) approaches that study Bayesian practice from a frequentist standpoint, such as the Minimum Description Length (MDL) principle (Grünwald, 2007). Last but not least, even though practical, theory-grounded learning algorithms have emerged from PAC-Bayesian works, these remain almost unused for real-world problems.

In short, this workshop aims at gathering statisticians and machine learning researchers to discuss current trends and the future of {PAC,quasi}-Bayesian learning. From a broader perspective, we aim to bridge the gap between several communities that can all benefit from sharper statistical guarantees and sound theory-driven learning algorithms.

References

  1. J. Shawe-Taylor and R. Williamson. A PAC analysis of a Bayes estimator. In Proceedings of COLT, 1997.
  2. D. A. McAllester. Some PAC-Bayesian theorems. In Proceedings of COLT, 1998.
  3. D. A. McAllester. PAC-Bayesian model averaging. In Proceedings of COLT, 1999.
  4. O. Catoni. Statistical Learning Theory and Stochastic Optimization. Saint-Flour Summer School on Probability Theory 2001 (Jean Picard ed.), Lecture Notes in Mathematics. Springer, 2004.
  5. O. Catoni. PAC-Bayesian supervised classification: the thermodynamics of statistical learning. Institute of Mathematical Statistics Lecture Notes—Monograph Series, 56. Institute of Mathematical Statistics, 2007.
  6. P. D. Grünwald. The Minimum Description Length Principle. The MIT Press, 2007.

Speakers

The workshop welcomes world-class experts on {quasi,PAC,∅}-Bayesian learning.

Olivier Catoni

Senior researcher, CNRS, France.

Peter Grünwald

Professor, CWI, The Netherlands.

François Laviolette

Professor, Université Laval, Canada.

Neil Lawrence

Professor, University of Sheffield, and
Amazon Research Cambridge, UK.

Jean-Michel Marin

Professor, Université de Montpellier, France.

Dan Roy

Assistant Professor, University of Toronto, Canada.

Yevgeny Seldin

Associate Professor, University of Copenhagen, Denmark.

John Shawe-Taylor

Professor, University College London, UK.

Schedule

To be announced.

Call for Papers

All accepted papers will be presented as posters, and we will select two papers for oral presentation. We welcome submissions related to the following topics:

  • {PAC,quasi,∅}-Bayesian generalization guarantees,
  • Novel theoretical perspectives on Bayesian methods,
  • Application of the PAC-Bayesian theory to different learning frameworks,
  • Learning algorithms inspired by a {PAC,quasi}-Bayesian analysis.

Submission Instructions

Click here to submit a paper.

  • Page limit: 4 pages (excluding references).
  • Please use the NIPS 2017 submission format.
  • We are committed to a double-blind reviewing process. Submissions must be anonymized: please remove authors' names and affiliations.
  • Previously published results may be submitted, although we encourage original contributions. Please indicate clearly whether the submitted work has been presented or published elsewhere.
  • Please note that at least one author of each accepted paper must be available to present the paper at the workshop.

Important Dates

Deadline for submission of papers: October 27, 2017, 23:59 PDT (UTC-7)
Notification of acceptance: November 9, 2017

Organizers

Benjamin Guedj

Researcher, Inria, France.

Francis Bach

Senior Researcher, Inria, France.

Pascal Germain

Researcher, Inria, France.

Sponsors