Convergence and Convergence Rate of Stochastic Gradient Search in the Case of Multiple and Non-Isolated Extrema

The asymptotic behavior of stochastic gradient algorithms is studied. Relying on results from differential geometry (the Lojasiewicz gradient inequality), almost sure point-convergence is demonstrated and relatively tight almost sure bounds on the convergence rate are derived. In sharp contrast to all existing results of this kind, the asymptotic results obtained here do not require …

Convergence Rate of Stochastic Gradient Search in the Case of Multiple and Non-Isolated Minima

The convergence rate of stochastic gradient search is analyzed in this paper. Using arguments based on differential geometry and Lojasiewicz inequalities, tight bounds on the convergence rate of general stochastic gradient algorithms are derived. Unlike existing results, the results presented in this paper allow the objective function to have multiple, non-isolated minima, …
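The setting of the two abstracts above can be illustrated with a minimal sketch: plain stochastic gradient descent on f(x, y) = (x² + y² − 1)², whose minima form the entire unit circle, a non-isolated set of minima. Everything here (the objective, the noise model, the step-size schedule) is an assumed toy example, not the papers' actual analysis.

```python
import math
import random

# Toy objective f(x, y) = (x^2 + y^2 - 1)^2: every point of the unit
# circle is a minimum, so the minima are multiple and non-isolated.

def noisy_grad(x, y, noise=0.01):
    """Gradient of f plus zero-mean Gaussian noise (a stochastic oracle)."""
    common = 4.0 * (x * x + y * y - 1.0)
    gx = common * x + random.gauss(0.0, noise)
    gy = common * y + random.gauss(0.0, noise)
    return gx, gy

def sgd(x, y, steps=5000):
    """Stochastic gradient descent with a diminishing step size."""
    for n in range(1, steps + 1):
        step = 1.0 / (n + 10)   # damped harmonic schedule, a standard choice
        gx, gy = noisy_grad(x, y)
        x -= step * gx
        y -= step * gy
    return x, y

random.seed(0)
x, y = sgd(1.5, 0.2)
# The iterate settles near *some* point of the unit circle; which point
# depends on the initialization and the noise realization.
print(math.hypot(x, y))
```

Because the minimizing set is a continuum, classical analyses that assume a single isolated minimum do not directly apply, which is the gap the abstracts' Lojasiewicz-inequality arguments address.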

Automated Tuning of Optimization Software Parameters

We present a method to tune software parameters using ideas from software testing and machine learning. The method is based on the key observation that for many classes of instances, the software shows improved performance if a few critical parameters have “good” values, although which parameters are critical depends on the class of instances. Our …
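The key observation above can be sketched in a few lines: search a small parameter grid per instance class and note that a different parameter turns out to be critical for each class. The `solver_runtime` surrogate below is entirely hypothetical, a stand-in for timing real optimization software, not the paper's method.

```python
import itertools

def solver_runtime(instance_class, params):
    """Hypothetical surrogate cost (lower is better): each instance
    class is sensitive to a different 'critical' parameter."""
    if instance_class == "sparse":
        # Dominated by the pivot tolerance; thread count barely matters.
        return abs(params["pivot_tol"] - 1e-6) * 1e6 + 0.01 * params["threads"]
    else:  # "dense"
        # Dominated by the thread count; pivot tolerance barely matters.
        return abs(params["threads"] - 8) + 1e5 * params["pivot_tol"]

def tune(instance_class, grid):
    """Exhaustive search over a small grid for one instance class."""
    keys = sorted(grid)
    return min(
        (dict(zip(keys, values))
         for values in itertools.product(*(grid[k] for k in keys))),
        key=lambda p: solver_runtime(instance_class, p),
    )

grid = {"pivot_tol": [1e-8, 1e-6, 1e-4], "threads": [1, 4, 8]}
print(tune("sparse", grid))  # the pivot tolerance is critical here
print(tune("dense", grid))   # the thread count is critical here
```

In practice the grid search would be replaced by the paper's testing- and learning-based procedure; the sketch only shows why per-class identification of critical parameters pays off.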

Enclosing Machine Learning

This report introduces a new machine learning paradigm called enclosing machine learning for data mining. This novel method draws on human cognition, imitating, from a macroscopic view, its two basic principles: cognizing things of the same kind and recognizing things of a new kind …
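One way to picture the "enclosing" idea described above is a per-class enclosing boundary: points inside a known class's boundary are cognized as that kind, and points outside every boundary are recognized as a new kind. The sketch below uses a crude bounding sphere (class mean plus maximal sample distance) purely as an assumed illustration, not the report's actual construction.

```python
import math

def bounding_sphere(points):
    """Crude enclosing ball for one class: centroid + farthest sample."""
    dim = len(points[0])
    center = [sum(p[i] for p in points) / len(points) for i in range(dim)]
    radius = max(math.dist(p, center) for p in points)
    return center, radius

def classify(spheres, x):
    """Cognize x as a known kind if some boundary encloses it,
    otherwise recognize it as a new kind."""
    for label, (center, radius) in spheres.items():
        if math.dist(x, center) <= radius:
            return label
    return "new kind"

spheres = {
    "A": bounding_sphere([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]),
    "B": bounding_sphere([(5.0, 5.0), (6.0, 5.0), (5.0, 6.0)]),
}
print(classify(spheres, (0.3, 0.3)))    # enclosed by class A's boundary
print(classify(spheres, (20.0, 20.0)))  # outside every boundary
```

Unlike a discriminative classifier, each boundary is learned from one class alone, which is what lets the scheme flag samples of a genuinely new kind instead of forcing them into a known class.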