For roughly six decades since the seminal paper of Robbins and Monro (1951), Stochastic Approximation has dominated the landscape of algorithms for solving root-finding and optimization problems with Monte Carlo observable functions. Recently, however, inspired by the rise of parallel computing and advances in nonlinear programming methods, there has been increasing interest in alternative sampling-based frameworks. Such frameworks are convenient in that they use an existing recursive method, e.g., a quasi-Newton or trust-region recursion, with embedded Monte Carlo estimators of objects appearing within the recursion. In this talk, after reviewing some recent results on optimal sampling rates, we consider the question of how to adaptively sample within stochastic recursions. Specifically, we will demonstrate that a simple adaptive scheme, with deep connections to proportional-width sequential confidence intervals, endows stochastic recursions with convergence rates that are arbitrarily close to optimal. Intriguingly, the adaptive sampling schemes we advocate were independently discovered by Nocedal et al. through heuristic (but sound) arguments and numerical experimentation.
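To make the flavor of such a scheme concrete, here is a minimal Python sketch of one plausible instance, not the speakers' actual algorithm: at each iterate, the Monte Carlo sample grows until the standard error of the averaged gradient estimate falls below a fixed fraction theta of the estimate's norm (a proportional-width stopping rule), after which an ordinary gradient step is taken. All names (adaptive_sample_gradient, noisy_grad, theta, n0) are illustrative assumptions.

```python
import numpy as np

def adaptive_sample_gradient(grad_oracle, x, theta=0.5, n0=16, n_max=2**16, rng=None):
    """Grow the sample until the standard error of the sample-mean gradient
    is at most theta times its norm (a proportional-width stopping rule);
    return the averaged gradient and the sample size used. Illustrative only."""
    rng = np.random.default_rng() if rng is None else rng
    samples = [grad_oracle(x, rng) for _ in range(n0)]
    while len(samples) < n_max:
        g = np.mean(samples, axis=0)
        # Squared standard error of the sample mean: per-coordinate sample
        # variances, summed, divided by the current sample size.
        se2 = np.var(samples, axis=0, ddof=1).sum() / len(samples)
        if se2 <= (theta * np.linalg.norm(g)) ** 2:
            return g, len(samples)
        # Roughly double the sample between tests, as in sequential schemes.
        samples.extend(grad_oracle(x, rng) for _ in range(len(samples)))
    return np.mean(samples, axis=0), len(samples)

# Toy problem: minimize f(x) = 0.5 * ||x||^2 from noisy gradient observations.
def noisy_grad(x, rng):
    return x + rng.normal(scale=1.0, size=x.shape)

rng = np.random.default_rng(0)
x = np.full(5, 10.0)
for k in range(50):
    g, n = adaptive_sample_gradient(noisy_grad, x, theta=0.5, rng=rng)
    x -= 0.5 * g  # plain gradient step using the adaptively averaged estimate
print("final iterate norm:", np.linalg.norm(x))
```

The intended behavior is that sample sizes stay small while the gradient norm is large (a loose confidence interval suffices) and grow as the iterates approach a root, which is the mechanism the proportional-width connection formalizes.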
This is joint work with Fatemeh Hashemi (Virginia Tech), Soumyadip Ghosh (IBM Research), and Peter Glynn (Stanford University).