The problems studied in the department can be subsumed under the heading of empirical inference. This term refers to inference performed on the basis of empirical data. The type of inference can vary, including, for instance, inductive learning (estimation of models such as functional dependencies that generalize to novel data sampled from the same underlying distribution) and the inference of causal structures from statistical data (leading to models that provide insight into the underlying mechanisms and make predictions about the effect of interventions). Likewise, the type of empirical data can vary, ranging from sparse experimental measurements (e.g., microarray data) to visual patterns. Our department is conducting theoretical, algorithmic, and experimental studies to try to understand the problem of empirical inference.
The department was founded around statistical learning theory and certain recent developments in the field of machine learning, in particular support vector machines (SVMs). It has since broadened its set of inference tools to include a stronger component of Bayesian methods, including graphical models, with a recent focus on issues of causality. In terms of the inference tasks being studied, we have moved towards tasks that go beyond the relatively well-studied problem of supervised learning, such as semi-supervised learning and structured estimation. Finally, we have continuously striven to analyze challenging datasets from biology, astronomy, and other domains, leading to the inclusion of several application areas in our portfolio. When performed in collaboration with domain experts, such work can be rewarding for both sides, and it provides us with additional insights into tasks and methods for empirical inference pertaining to our department's core interests. In cases where the application areas are close to our own expertise, we also carry out application-oriented research on our own; examples thereof are robotics and computational imaging. No matter whether the applications are pursued within the department or in collaboration with external partners, considering a whole range of applications helps us study principles and methods of inference, rather than inference applied to one specific problem domain.
Our main mode of dissemination is the publication of results at the leading machine learning conferences. These are NIPS (Neural Information Processing Systems), ICML (International Conference on Machine Learning), UAI (Uncertainty in Artificial Intelligence), and, for theoretical work, COLT (Conference on Learning Theory). Our presence at these highly competitive conferences makes us one of the top few machine learning labs worldwide. In addition, we sometimes submit our work to the leading application-oriented conferences in fields including computer vision (ICCV, ECCV, CVPR), data mining (KDD, ICDM), and computational biology (ISMB, RECOMB).
Our work has earned us a number of awards at major conferences, including best paper prizes at COLT 2003, NIPS 2004, COLT 2005, COLT 2006, ICML 2006, ALT 2007, DAGM 2008, CVPR 2008, ISMB 2008, ECCV 2008, NIPS 2008, NIPS 2009, UAI 2010, ICINCO 2010, DAGM 2011, and IROS 2012, as well as honorable mentions at IROS 2008, NIPS 2009, DAGM 2009, KDD 2010, ECML/PKDD 2011, and IROS 2012.