08:30  Introduction and overview Gabriel Dulac-Arnold & Djalel Benbouzid
08:45  Invited talk: John Langford 
09:30  Learning Sequences of Controllers for Complex Manipulation Tasks Jaeyong Sung, Bart Selman and Ashutosh Saxena 
10:00  Coffee 
10:30  Invited talk: Hugo Larochelle 
11:30  Cost-Sensitive Tree of Classifiers Kilian Weinberger
12:00  Lunch 
14:00  Invited talk: Csaba Szepesvári 
15:00  Dynamic Feature Selection for Classification on a Budget Sergey Karayev, Mario Fritz and Trevor Darrell 
15:30  Coffee 
16:00  Round table. Participants:

Other accepted papers:
Building Bridges: Viewing Active Learning from the Multi-Armed Bandit Lens
Abstracts of invited talks
John Langford: Imperative Learning
Abstract: Machine learning is often applied deep in the bowels of a process. Its application is particularly difficult when there is a complex decision process requiring multiple predictions. I will discuss a new method for machine learning that makes a blend of programming and machine learning much more natural. In essence, you can freely mix prediction and programming (whether functional or imperative) in complex structured prediction problems.
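The abstract gives no implementation details, so the following is only a generic illustration of what "mixing prediction and programming" can look like, not Langford's actual system. The names (`Predictor`, `label_sequence`) and the tiny copy-the-bit task are invented for the sketch: a learned predictor is called as an ordinary subroutine inside an imperative loop, with each prediction feeding the next step's features.

```python
import numpy as np

class Predictor:
    """Tiny linear multiclass predictor used as a subroutine inside a program."""
    def __init__(self, n_feats, n_classes):
        self.W = np.zeros((n_classes, n_feats))
    def predict(self, f):
        return int(np.argmax(self.W @ f))
    def update(self, f, y, y_hat, lr=1.0):
        if y != y_hat:                 # perceptron-style correction on mistakes
            self.W[y] += lr * f
            self.W[y_hat] -= lr * f

def label_sequence(tokens, clf, gold=None):
    """Imperative code with embedded predictions: each step's features depend
    on the previous prediction; during training the predictor is updated in place."""
    prev, out = 0, []
    for t, tok in enumerate(tokens):
        f = np.zeros(4)
        f[tok] = 1.0          # current-token feature
        f[2 + prev] = 1.0     # previous-label feature
        y_hat = clf.predict(f)
        if gold is not None:
            clf.update(f, gold[t], y_hat)
            y_hat = gold[t]   # follow the gold label while learning
        out.append(y_hat)
        prev = y_hat
    return out

# Toy task: the correct label of each token is the token itself (0 or 1).
clf = Predictor(n_feats=4, n_classes=2)
data = [([0, 1, 1, 0], [0, 1, 1, 0]), ([1, 0, 0, 1], [1, 0, 0, 1])]
for _ in range(5):
    for toks, gold in data:
        label_sequence(toks, clf, gold)
print(label_sequence([1, 1, 0, 1], clf))  # [1, 1, 0, 1]
```

The point of the sketch is only structural: the dependent predictions appear as plain function calls inside ordinary control flow, rather than as a monolithic structured-prediction model.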
Hugo Larochelle: The Neural Autoregressive Distribution Estimator: Distribution Modeling as Sequential Prediction
Abstract: The problem of estimating the distribution of multivariate data is one of the most general problems addressed by machine learning research. Good models of multivariate distributions can be used to extract meaningful features from unlabeled data, to provide a good prior over some output space in a structured prediction task (e.g. image denoising) or to provide regularization cues to a supervised model (e.g. a neural network classifier).
The most successful distribution/density models are usually based on graphical models, which represent the joint distribution p(x) of observations x using latent variables that explain the statistical structure within x. Mixture models are a well-known example and rely on a latent representation with mutually exclusive components. For high-dimensional data, however, models that rely on a distributed representation, i.e. a representation whose components can vary separately, are usually preferred. These include the very successful restricted Boltzmann machine (RBM) model. However, unlike mixture models, the RBM is not a tractable distribution estimator, in the sense that the probability of observations p(x) can only be computed up to a multiplicative constant and must be approximated.
In this talk, I’ll describe the Neural Autoregressive Distribution Estimator (NADE), which combines the tractability of mixture models with the modeling power of the RBM, by relying on distributed representations. The main idea in NADE is to use the probability product rule to decompose the joint distribution p(x) into the sequential product of its conditionals, and model all these conditionals using a single neural network. Thus, the task of distribution modeling is now tackled as the task of predicting the observations within x sequentially, in some given (usually random) order.
I’ll describe how to model binary, real-valued and categorical observations with NADE. We’ll see that NADE can achieve state-of-the-art performance on many datasets, including datasets of document bags-of-words and speech spectrograms. I’ll also discuss how NADE can be used successfully for document information retrieval, image classification and scene annotation.
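The sequential factorization described in the abstract can be sketched in a few lines. This is a minimal NumPy illustration for binary observations, not Larochelle's implementation; the weight shapes (W, V, b, c) and model sizes are assumptions made for the sketch. Each conditional p(x_d = 1 | x_<d) is computed from a hidden layer that reuses the shared weights W on the already-observed prefix.

```python
import itertools
import numpy as np

def nade_log_prob(x, W, V, b, c):
    """Log-probability of a binary vector x under a NADE-style model.

    The joint p(x) is factored by the product rule into the product over d
    of p(x_d | x_<d), with all conditionals sharing one set of weights.
    """
    logp, a = 0.0, c.copy()            # a accumulates c + W[:, :d] @ x[:d]
    for d in range(len(x)):
        h = 1.0 / (1.0 + np.exp(-a))                      # hidden units for step d
        p = 1.0 / (1.0 + np.exp(-(b[d] + V[d] @ h)))      # p(x_d = 1 | x_<d)
        logp += np.log(p) if x[d] == 1 else np.log(1.0 - p)
        a += W[:, d] * x[d]            # extend the observed prefix, O(H) per step
    return logp

# Tiny model with made-up sizes: D observed bits, H hidden units.
rng = np.random.default_rng(0)
D, H = 3, 5
W = rng.normal(size=(H, D))
V = rng.normal(size=(D, H))
b, c = rng.normal(size=D), rng.normal(size=H)

# Tractability check: unlike an RBM, the 2^D probabilities sum to one exactly.
total = sum(np.exp(nade_log_prob(np.array(bits), W, V, b, c))
            for bits in itertools.product([0, 1], repeat=D))
print(round(total, 6))  # 1.0
```

The final check is the point of the talk's comparison with the RBM: because each conditional is a properly normalized sigmoid, p(x) needs no intractable partition function.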
Csaba Szepesvári: Online Learning with Costly Features and Labels
Abstract: In this talk, we consider the online probing problem: in each round, the learner is able to purchase the values of a subset of the features. After the learner uses this information to come up with a prediction for the given round, he then has the option of paying to see the loss that he is evaluated against. Either way, the learner pays for the imperfections of his predictions and for whatever he chooses to observe, including the cost of observing the loss function for the given round and the cost of the observed features. We consider two variants of this problem, depending on whether the learner can observe the label for free or not. We provide algorithms and upper and lower bounds on the regret for both variants. We show that a positive cost for observing the label significantly increases the regret of the problem.
This is joint work with Navid Zolghadr, Andras Gyorgy and Russ Greiner.
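To make the protocol concrete, here is a sketch of a single round of a probing-style interaction. The linear environment (w_true), the particular costs and the squared loss are all made-up choices for illustration, not the paper's setting or algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
feature_cost = np.array([0.10, 0.20, 0.05, 0.30])  # price of each feature value
label_cost = 0.50                                  # price of observing the loss

w_true = np.array([1.0, -2.0, 0.5, 0.0])           # environment, hidden from the learner

def play_round(chosen, w_hat, buy_loss=True):
    """One round: pay for the chosen features, predict, optionally buy the loss."""
    x = rng.normal(size=d)
    x_seen = np.where(chosen, x, 0.0)      # unpurchased features stay unobserved
    y_hat = w_hat @ x_seen                 # learner's prediction from what was bought
    loss = (y_hat - w_true @ x) ** 2       # loss the learner is evaluated against
    total_paid = loss + feature_cost[chosen].sum() \
        + (label_cost if buy_loss else 0.0)
    feedback = loss if buy_loss else None  # the loss is revealed only if paid for
    return total_paid, feedback

# Buying every feature and predicting with the true weights incurs zero loss,
# so the learner pays only the observation costs (0.65 for features + 0.50).
paid, fb = play_round(np.ones(d, dtype=bool), w_true)
print(round(paid, 6), fb)  # 1.15 0.0
```

The trade-off the talk studies is visible even in this toy loop: buying more features (or the loss) improves the learner's information but adds directly to what he pays each round.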