Steepest descent analysis for unregularized linear prediction with strictly convex penalties
This manuscript presents a convergence analysis of steepest descent for unregularized linear prediction, generalizing a prior study of boosting. Here the empirical risk, formed by strictly convex penalties composed with a linear term, may fail to be strongly convex, or even to attain a minimizer. The analysis is demonstrated on linear regression, decomposable objectives, and boosting.
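The setting described above can be illustrated with a minimal sketch, assuming a concrete instantiation not taken from the manuscript: the exponential penalty composed with a linear predictor on linearly separable data. The empirical risk is then strictly convex in the margins but has infimum 0 attained by no finite weight vector, yet plain gradient descent still drives the risk toward that infimum while the iterates diverge in norm. The data matrix, labels, and step size here are hypothetical choices for demonstration only.

```python
import numpy as np

# Hypothetical separable data: rows are examples, y holds +/-1 labels.
X = np.array([[1.0, 0.5], [0.8, 1.0], [-1.0, -0.3]])
y = np.array([1.0, 1.0, -1.0])

def risk(w):
    # Empirical risk: strictly convex penalty (exp) composed with the
    # linear term y_i * <x_i, w>; infimum 0 is not attained by any finite w.
    return np.mean(np.exp(-y * (X @ w)))

def grad(w):
    # Gradient of the empirical risk with respect to w.
    return X.T @ (-y * np.exp(-y * (X @ w))) / len(y)

w = np.zeros(2)
for _ in range(2000):
    w -= 0.1 * grad(w)

# The risk decreases toward its unattained infimum while ||w|| keeps growing,
# so convergence must be measured in risk values, not in distance to a minimizer.
final_risk, final_norm = risk(w), np.linalg.norm(w)
```

This is the phenomenon the abstract alludes to: without strong convexity or an attained minimizer, the analysis tracks the suboptimality of the risk along the descent path rather than convergence of the iterates themselves.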