Novel stepsize for some accelerated and stochastic optimization methods

First-order methods must continually be improved to keep pace with ongoing developments in machine learning and mathematics, as they are among the most widely used tools for solving optimization problems. Within this class, methods based on gradient descent have advanced rapidly and achieved strong results. Following that trend, in this article we study a novel stepsize combined with acceleration techniques to obtain improved results for optimization problems commonly encountered in machine learning. Moreover, given the remarkable growth in the number of parameters of recent deep learning models, we also study stochastic variants to speed up the optimization of model parameters. The article provides convergence proofs for the proposed algorithms as well as experiments demonstrating their effectiveness.
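The abstract does not state the proposed stepsize rule itself; as a point of reference for the class of methods discussed, the sketch below shows a standard Nesterov-accelerated stochastic gradient step with a fixed stepsize, the kind of baseline that accelerated and stochastic variants build on. The quadratic objective, the mini-batch gradient, and the hyperparameters `lr` and `momentum` are illustrative placeholders, not the article's method.

```python
# Minimal sketch: Nesterov-accelerated stochastic gradient descent with a fixed
# stepsize, as a generic baseline (not the paper's novel stepsize).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)

def stochastic_grad(x, batch_size=20):
    """Mini-batch gradient of the least-squares objective 0.5 * mean((A x - b)^2)."""
    idx = rng.choice(A.shape[0], size=batch_size, replace=False)
    Ab, bb = A[idx], b[idx]
    return Ab.T @ (Ab @ x - bb) / batch_size

x = np.zeros(50)            # iterate
v = np.zeros(50)            # velocity (momentum buffer)
lr, momentum = 1e-3, 0.9    # fixed stepsize and momentum coefficient (placeholders)

for t in range(1000):
    # Nesterov step: evaluate the gradient at the look-ahead point x + momentum * v
    g = stochastic_grad(x + momentum * v)
    v = momentum * v - lr * g
    x = x + v

print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))
```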
