More data speeds up training time in learning halfspaces over sparse vectors

Nov 7, 2014 · 1809 views

The increased availability of data in recent years has led several authors to ask whether it is possible to use data as a computational resource. That is, if more data is available beyond the sample complexity limit, is it possible to use the extra examples to speed up the computation required to perform the learning task? We give the first positive answer to this question for a natural supervised learning problem: agnostic PAC learning of halfspaces over 3-sparse vectors in {−1, 1, 0}^n. This class is learnable, albeit inefficiently, using O(n/ε²) examples. Our main contribution is a novel, non-cryptographic methodology for establishing computational-statistical gaps, which allows us to show that, under the widely believed assumption that refuting random 3CNF formulas is hard, efficiently learning this class using O(n/ε²) examples is impossible. We further show that, under stronger hardness assumptions, even O(n^1.499/ε²) examples do not suffice. On the other hand, we give a new algorithm that learns this class efficiently using Ω̃(n²/ε²) examples. This formally establishes a tradeoff between sample complexity and computational complexity for a natural supervised learning problem.
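To make the problem setting concrete, here is a minimal, illustrative sketch (not the paper's algorithm) of the hypothesis class in question: inputs are 3-sparse vectors in {−1, 1, 0}^n, and a halfspace labels each input by the sign of an inner product with a weight vector. The dimension `n`, the sampling scheme, and the weight vector below are all arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20  # ambient dimension; the paper's results are asymptotic in n

def random_3sparse(n, rng):
    """Draw a 3-sparse vector in {-1, 0, 1}^n:
    pick three distinct coordinates and set each to +1 or -1."""
    x = np.zeros(n)
    idx = rng.choice(n, size=3, replace=False)
    x[idx] = rng.choice([-1, 1], size=3)
    return x

# A halfspace hypothesis h_w(x) = sign(<w, x>) for some weight vector w.
w = rng.standard_normal(n)
X = np.array([random_3sparse(n, rng) for _ in range(5)])
labels = np.sign(X @ w)
```

In the agnostic PAC setting studied in the talk, the learner sees labeled examples from an arbitrary distribution over such sparse vectors and must compete with the best halfspace; the result is that the number of examples needed for this to be done *efficiently* is provably larger (under the stated hardness assumptions) than the information-theoretic O(n/ε²) bound.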


Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International license.