Learning with Many Reproducing Kernel Hilbert Spaces
In this talk, we consider the problem of learning a target function that belongs to the linear span of a large number of reproducing kernel Hilbert spaces. Such a problem arises naturally in many practical situations, with ANOVA decompositions, additive models, and multiple kernel learning as the best-known and most important examples. We investigate approaches based on l1-type complexity regularization and on the nonnegative garrote. We show that both procedures can be computed efficiently, and that the nonnegative garrote can be more favorable at times. We also study their theoretical properties from both the variable-selection and estimation perspectives. We establish several probabilistic inequalities providing bounds on the excess risk and the L2-error that depend on the sparsity of the problem. Part of the talk is based on joint work with Vladimir Koltchinskii.
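To make the second procedure concrete, here is a minimal sketch of a nonnegative-garrote-style estimator for a sparse additive model, one Gaussian RKHS per input coordinate. All choices (kernel bandwidth, ridge level, garrote penalty, the coordinate-descent solver) are illustrative assumptions, not the specific construction analyzed in the talk: we first fit an initial kernel ridge estimate per coordinate, then shrink the components with nonnegative weights under an l1 penalty.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10  # n samples, p candidate components (one RKHS per coordinate)
X = rng.uniform(-1, 1, size=(n, p))
# Sparse additive target: only the first two coordinates matter
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(n)

def gaussian_kernel(a, b, gamma=2.0):
    # 1-D Gaussian (RBF) kernel matrix; gamma is an assumed bandwidth
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

# Step 1: initial per-coordinate kernel ridge fits f_j
ridge = 1e-2
F = np.empty((n, p))
for j in range(p):
    K = gaussian_kernel(X[:, j], X[:, j])
    alpha = np.linalg.solve(K + ridge * n * np.eye(n), y)
    F[:, j] = K @ alpha  # fitted values of the j-th component

# Step 2: nonnegative garrote — rescale each component by c_j >= 0,
#   min_c  ||y - F c||^2 / (2n) + lam * sum_j c_j,   c_j >= 0,
# solved here by simple cyclic coordinate descent
lam = 0.05
c = np.ones(p)
for _ in range(200):
    for j in range(p):
        r = y - F @ c + F[:, j] * c[j]          # partial residual excluding j
        num = F[:, j] @ r / n - lam             # soft-thresholded correlation
        c[j] = max(0.0, num / (F[:, j] @ F[:, j] / n))

selected = np.nonzero(c > 1e-8)[0]
print("garrote weights:", np.round(c, 3))
print("selected components:", selected)
```

The l1 penalty on the nonnegative weights drives the weights of irrelevant components to exactly zero, which is what gives the garrote its variable-selection behavior; the l1-type complexity regularization alternative would instead penalize the RKHS norms of the components directly.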