Supervised and unsupervised function learning is a vast domain with a plethora of standard algorithmic solutions. Most of these methods learn a monolithic predictor function in the sense that each test instance is processed in a single atomic step. In contrast, some recent studies have proposed a different paradigm in which prediction is reformulated as a sequential decision process and learning the predictor function corresponds to solving a dynamic control problem. These new approaches bridge “classical” supervised and unsupervised learning problems with the fields of control theory and reinforcement learning (RL). To cite a few: Póczos et al. (2009) augment a Viola-Jones cascade with a three-way decision at each stage and learn the thresholds using RL; Gaudel and Sebag (2010) formulate feature selection as a sequential game and solve it using multi-armed bandits; Dulac-Arnold et al. (2012) cast feature selection as learning an agent which, for a given instance, sequentially chooses both the features to consider and the class to assign the instance to; Benbouzid, Busa-Fekete, and Kégl (2012) learn a sparse DAG-shaped sequential classifier for fast multi-class classification; inspired by the way the human eye sequentially browses images, Larochelle and Hinton (2010) train a Boltzmann machine with third-order connections that classifies an image by looking at some glimpses while deciding where to collect the glimpses from.
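To make the reformulation concrete, here is a minimal sketch (not any of the cited methods) of classification as an episodic decision process: at each step a policy either pays to acquire one more feature of the instance or commits to a class label. The policy and instance below are hypothetical toy examples.

```python
def sequential_predict(x, policy):
    """Classify x by sequentially acquiring features until the policy predicts.

    `policy` maps the partially observed instance to either
    ("acquire", feature_index) or ("predict", class_label).
    """
    acquired = {}                       # features observed so far: a sparse view of x
    while True:
        action = policy(acquired)       # the controller sees only the partial view
        if action[0] == "acquire":
            j = action[1]
            acquired[j] = x[j]          # pay the cost of evaluating feature j
        else:
            return action[1], acquired  # final class and the features it was based on

# A toy hand-written policy: look at feature 0 only, then predict by its sign.
def toy_policy(acquired):
    if 0 not in acquired:
        return ("acquire", 0)
    return ("predict", 0 if acquired[0] < 0 else 1)

label, view = sequential_predict([-2.0, 3.0, 0.5], toy_policy)
# label == 0, and only feature 0 was ever evaluated
```

In the learning setting, a controller trained by RL would replace `toy_policy`, trading prediction accuracy against the number of feature evaluations.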

Both the motivations and applications of these studies are quite diverse. Unrolling monolithic functions into a sequence of decisions and controlling when the final prediction is made is obviously beneficial for learning “budgeted” predictors, so some of the approaches are motivated either by fast prediction or by keeping the number of costly feature evaluations low (Póczos et al. 2009; Benbouzid, Busa-Fekete, and Kégl 2012; Gao and Koller 2011; Chen et al. 2012). Typical applications of these techniques are real-time object detection or webpage ranking, trigger design in high-energy physics, or resource-bounded information extraction (Kanani and McCallum 2012). Another major motivation is structured prediction. It turns out that building an output object sequentially is a natural solution when the output object has complex inner structure (Maes, Denoyer, and Gallinari 2009; Daumé III, Langford, and Marcu 2009). Sequential prediction of this kind is typically applied to text classification (Dulac-Arnold, Denoyer, and Gallinari 2011) and parser design (Neu and Szepesvári 2009). Furthermore, mapping instances to a set of features selected by a budget-constrained controller generates a nontrivial (usually sparse) representation of the instances that itself can be a subject of interest (Benbouzid, Busa-Fekete, and Kégl 2011). This phenomenon relates sequential prediction to sparse coding and representation learning.

Despite the promising applications and diverse algorithmic solutions, sequential prediction still remains a young and unexplored field with numerous unanswered theoretical and algorithmic questions. In most of the approaches, the controller takes over the output of classical algorithms. It is unclear to what extent the controller can be involved in solving the primary problem, or if we can even completely get rid of the “classical” part of the solution. A major bottleneck is that state-of-the-art performance on some applications would require decision sequences which are much longer than those which can be realistically handled with turn-key RL algorithms. On the other hand, the sequential decision problem has special characteristics, in particular, 1) it is often conceptually simpler than the generic problems for which RL algorithms are designed, and 2) it requires a fast controller at test time. An important question is thus to find tailor-made solutions for these problems, either among RL techniques or using even simpler but computationally more efficient setups (such as multi-armed bandits). On the conceptual side, an interesting direction is to explore the connection between sparse sequential prediction, sparse coding, and learning sparse representations. This workshop aims at gathering the various machine learning sub-communities that have worked on these questions and at discussing the aforementioned issues.
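As an illustration of the “simpler but computationally more efficient setups”, the following sketch implements the standard UCB1 multi-armed bandit strategy on a hypothetical Bernoulli reward setup (the arms and means below are illustrative, not from any cited paper). Both its per-step cost and its controller are trivial compared to a full RL solution:

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """UCB1: repeatedly pull the arm maximizing empirical mean + exploration bonus."""
    counts = [0] * n_arms               # times each arm was pulled
    sums = [0.0] * n_arms               # total reward collected per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:                 # initialization: pull each arm once
            arm = t - 1
        else:
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        sums[arm] += pull(arm)
        counts[arm] += 1
    return counts

# Hypothetical Bernoulli arms; arm 1 has the highest mean reward.
random.seed(0)
means = [0.1, 0.9, 0.3]
counts = ucb1(lambda a: float(random.random() < means[a]), 3, 500)
# UCB1 concentrates its pulls on the best arm (index 1)
```

This is the kind of controller used, e.g., when feature selection is cast as a one-player game; the open question raised above is how far such lightweight setups can be pushed before full RL becomes necessary.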

The topics of interest of this workshop include, but are not limited to:

  • Generic topics:
    • Classification, ranking
    • Budgeted and/or cost-sensitive classification
    • Structured prediction
    • Sparse coding with sequential models
    • Feature selection
  • Reinforcement learning applied to learning sequential functions:
    • RL with many discrete actions
    • RL in high-dimensional spaces
    • Inverse RL
  • Applications:
    • Real-time detection and classification
    • Text/image classification and information extraction
    • Trigger design in high-energy particle physics
    • Web-page ranking
    • Medical diagnosis

Invited Speakers

  • John Langford
  • Hugo Larochelle
  • Csaba Szepesvári

Program Committee

  • Francis Bach – Laboratoire d’Informatique de l’École Normale Supérieure – INRIA-Sierra – France
  • Aaron Courville – University of Montreal – Canada
  • Jason Eisner – Johns Hopkins University – USA
  • Damien Ernst – University of Liège – Belgium
  • Hugo Larochelle – University of Sherbrooke – Canada
  • Francis Maes – Catholic University of Leuven – Belgium
  • Rémi Munos – INRIA Sequel – France
  • Philippe Preux – University Lille 3 – INRIA Sequel – France
  • Thomas Rückstieß – Technische Universität München – Germany
  • Csaba Szepesvári – University of Alberta – Canada
  • Kilian Weinberger – Washington University – USA

References

Benbouzid, D., R. Busa-Fekete, and B. Kégl. 2011. “MDDAG: Learning deep decision DAGs in a Markov decision process setup.” In NIPS’11 Workshop on Deep Learning and Unsupervised Feature Learning.

Benbouzid, D., R. Busa-Fekete, and B. Kégl. 2012. “Fast classification using sparse decision DAGs.” In Proceedings of the 29th International Conference on Machine Learning.

Chen, M., Z. Xu, K. Q. Weinberger, O. Chapelle, and D. Kedem. 2012. “Classifier cascade for minimizing feature evaluation cost.” In AISTATS, 15:218–226.

Daumé III, H., J. Langford, and D. Marcu. 2009. “Search-based structured prediction.” Machine Learning 75 (3): 297–325.

Dulac-Arnold, G., L. Denoyer, P. Preux, and P. Gallinari. 2012. “Sequential approaches for learning datum-wise sparse representations.” Machine Learning: 1–36.

Dulac-Arnold, G., L. Denoyer, and P. Gallinari. 2011. “Text classification: A sequential reading approach.” In ECIR, 411–423.

Gao, T., and D. Koller. 2011. “Active classification based on value of classifier.” In NIPS, 1062–1070.

Gaudel, R., and M. Sebag. 2010. “Feature selection as a one-player game.” In Proceedings of the 27th International Conference on Machine Learning.

Kanani, P. H., and A. McCallum. 2012. “Selecting actions for resource-bounded information extraction using reinforcement learning.” In WSDM, 253–262.

Larochelle, H., and G. Hinton. 2010. “Learning to combine foveal glimpses with a third-order Boltzmann machine.” In Advances in Neural Information Processing Systems 23, 1243–1251. MIT Press.

Maes, F., L. Denoyer, and P. Gallinari. 2009. “Structured prediction with reinforcement learning.” Machine Learning 77 (2-3): 271–301.

Neu, G., and C. Szepesvári. 2009. “Training parsers by inverse reinforcement learning.” Machine Learning 77 (2-3): 303–337.

Póczos, B., Y. Abbasi-Yadkori, Cs. Szepesvári, R. Greiner, and N. Sturtevant. 2009. “Learning when to stop thinking and do something!” In Proceedings of the 26th International Conference on Machine Learning, 825–832.