Machine Learning Seminars



Machine Learning seminars are held every Thursday from 11.15am to 12.00pm and are broadcast to the NICTA labs:


CRL - Seminar Room, Ground Floor

ATP - Seminar Room, Level 4

NRL - Seminar Room, Level 1 West

VRL - Seminar Room


Depending on the equipment used, seminars can also be broadcast live on AARNET; to watch, click on "Live Now".

Visitors are welcome.

Current

Date: 17 April 2014
Time: 11.15am-12.00pm
Where: NICTA CRL Seminar Room
Presenter: Nishant Mehta
Title: On the Sample Complexity of Predictive Sparse Coding
Abstract:
In this talk I'll present the first generalization error bounds for predictive sparse coding. This method seeks to learn a representation of examples as sparse linear combinations of elements from a dictionary, such that a learned hypothesis linear in the new representation performs well on a predictive task. Predictive sparse coding has demonstrated impressive performance on a variety of supervised tasks. The bounds I present hold in the overcomplete setting, where the number of features k exceeds the original dimensionality d. The learning bound decays as O(sqrt(d k/m)) with respect to d, k, and the size m of the training sample.  It depends intimately on stability properties of the learned sparse encoder, as measured on the training sample, and hence some luckiness-style arguments will be in play. I'll also present a fundamental stability result for the LASSO, a result that characterizes the stability of the sparse codes with respect to dictionary perturbations.
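For readers who want to experiment with the unsupervised two-stage version of this pipeline (learn a dictionary, sparse-encode the examples, then fit a linear predictor on the codes), here is a minimal Python sketch using scikit-learn. It is not the predictive formulation analysed in the talk, which learns the dictionary and predictor jointly; the dictionary size k, penalty alpha, and the ridge classifier are illustrative assumptions.

import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode
from sklearn.linear_model import RidgeClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 20))        # m = 200 examples, original dimensionality d = 20
y_train = (X_train[:, 0] > 0).astype(int)   # toy labels

k = 50                                      # overcomplete setting: k > d
dico = DictionaryLearning(n_components=k, alpha=0.5, max_iter=100, random_state=0).fit(X_train)

# Sparse codes: each example as a sparse linear combination of dictionary atoms
codes = sparse_encode(X_train, dico.components_, alpha=0.5)

# Linear hypothesis in the new (sparse) representation
clf = RidgeClassifier().fit(codes, y_train)
print("training accuracy:", clf.score(codes, y_train))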
Bio:
Nishant is a postdoc at ANU working with Bob Williamson. His current research focuses on connections between online learning and statistical learning and fundamental results in each. In 2013 he finished his PhD at Georgia Institute of Technology under Alex Gray. His thesis investigated learning theory for learning sparse representations, multi-task learning, and meta learning, and he continues to be interested in a theory of representation learning in the context of distributions or sequences over tasks.

Upcoming

Date: 24 April 2014
Time: 11.15am-12.00pm
Where: NICTA CRL Seminar Room
Presenter: Scott Sanner
Title: Fast Bayesian Inference in Piecewise Graphical Models
Abstract:
Many real-world Bayesian inference problems such as preference learning, competitive skill learning, and Bayesian belief updating with state constraints naturally use piecewise transition, likelihood or prior models. Unfortunately, exact closed-form inference in these graphical models is intractable in the general case, and existing approximation techniques provide few guarantees on either approximation quality or efficiency. While (Markov Chain) Monte Carlo sampling provides an attractive asymptotically unbiased approximation approach, rejection sampling and Metropolis–Hastings both prove inefficient in practice, and analytical derivation of Gibbs samplers requires exponential space and time in the amount of data or other quantities relating to graphical model size. In this work, we show how to convert all piecewise graphical models to equivalent mixture models and then provide a blocked Gibbs sampling approach for this transformed model that achieves an exponential-to-linear reduction in space and time compared to a standard Gibbs sampler. This enables fast, asymptotically unbiased inference in a new expressive class of piecewise graphical models. This talk describes part of Hadi Afshar's thesis work with additional contributions from Ehsan Abbasnejad.
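As a point of reference for the kind of model discussed above, the following toy Python sketch builds a piecewise posterior (a Gaussian likelihood combined with an order-constraint prior, as in preference or skill learning) and samples it with random-walk Metropolis-Hastings, one of the generic approaches the abstract notes is inefficient in practice. It is not the mixture-model conversion or the blocked Gibbs sampler from the talk; all densities and parameters are made-up assumptions.

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=[1.0, 0.2], scale=0.5, size=(30, 2))  # toy observations of two skills

def log_posterior(theta):
    # Piecewise prior: zero density unless theta[0] > theta[1] (an order constraint)
    if theta[0] <= theta[1]:
        return -np.inf
    # Gaussian likelihood of the observations around theta
    return -0.5 * np.sum((data - theta) ** 2) / 0.5 ** 2

theta = np.array([1.0, 0.0])
samples = []
for _ in range(5000):
    proposal = theta + rng.normal(scale=0.1, size=2)  # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta)

print("posterior mean estimate:", np.mean(samples, axis=0))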

Past 

Past seminar details are stored by year: 2014, 2013, 2012, 2011, 2010, 2009, 2008