
Multi-task Learning

Aug 26, 2013 · 8076 views

A fundamental limitation of standard machine learning methods is the cost incurred by preparing the large training samples required for good generalization. A potential remedy is offered by multi-task learning: in many cases, while individual sample sizes are rather small, there are samples representing a large number of learning tasks (linear regression problems) which share some constraining or generative property. If this property is sufficiently simple, it should allow for better learning of the individual tasks despite their small individual sample sizes. In this talk I will review a wide class of multi-task learning methods which encourage low-dimensional representations of the regression vectors. I will describe techniques to solve the underlying optimization problems and present an analysis of the generalization performance of these learning methods which provides a proof of the superiority of multi-task learning under specific conditions.
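One common way to encourage a shared low-dimensional representation across regression tasks, in the spirit of the methods discussed in the talk, is trace-norm (nuclear-norm) regularization of the stacked weight matrix, solved by proximal gradient descent. The sketch below is an illustrative assumption, not the speaker's specific algorithm: the function name, step size, and regularization strength are hypothetical choices.

```python
import numpy as np

def multitask_trace_norm(Xs, ys, lam=0.1, lr=0.05, n_iter=300):
    """Multi-task linear regression with trace-norm regularization.

    Xs, ys: lists of per-task design matrices (n_t x d) and targets (n_t,).
    Stacking the task weight vectors into a d x T matrix W and penalizing
    its trace norm (sum of singular values) encourages W to be low-rank,
    i.e. the tasks share a low-dimensional representation.

    Proximal gradient: a gradient step on the summed squared losses,
    followed by singular-value soft-thresholding (the prox of the
    nuclear norm).
    """
    d = Xs[0].shape[1]
    T = len(Xs)
    W = np.zeros((d, T))
    for _ in range(n_iter):
        # Gradient of the averaged least-squares loss, task by task.
        G = np.zeros_like(W)
        for t in range(T):
            n_t = len(ys[t])
            G[:, t] = Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) / n_t
        W = W - lr * G
        # Proximal step: soft-threshold the singular values of W.
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        s = np.maximum(s - lr * lam, 0.0)
        W = (U * s) @ Vt
    return W
```

As a usage sketch: if the true per-task weight vectors all lie along one shared direction, the recovered matrix W will be approximately rank one, and each small-sample task benefits from the pooled structure.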


Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International license.