Efficiency of Quasi-Newton Methods on Strictly Positive Functions
In this talk we consider a new class of convex optimization problems that admit faster black-box optimization schemes. For analyzing their rate of convergence, we introduce the notion of mixed accuracy of an approximate solution, a convenient generalization of absolute and relative accuracy. We show that for our problem class, a natural Quasi-Newton method is always faster than the standard gradient method. At the same time, after an appropriate normalization, our results can be extended to general convex unconstrained minimization problems.
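For orientation, recall the two standard accuracy criteria that a mixed notion would generalize. For an approximate minimizer $\bar{x}$ of $f$ with optimal value $f^* > 0$, absolute and relative accuracy read as follows; the third line is only one natural way to interpolate between them, shown for illustration — the precise definition used in the talk may differ.

```latex
% Absolute accuracy: additive gap bounded by a tolerance.
f(\bar{x}) - f^* \;\le\; \varepsilon

% Relative accuracy: gap bounded in proportion to the optimal value
% (meaningful when f is strictly positive, so f^* > 0).
f(\bar{x}) - f^* \;\le\; \delta \, f^*

% One illustrative "mixed" criterion combining both regimes
% (an assumption for exposition, not the talk's definition):
f(\bar{x}) - f^* \;\le\; \varepsilon + \delta \, f^*
```

Setting $\delta = 0$ recovers the absolute criterion and $\varepsilon = 0$ the relative one, which is the sense in which a mixed notion subsumes both.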