Equipping Barzilai-Borwein method with two dimensional quadratic termination property

A new gradient stepsize is derived with the motivation of equipping the Barzilai-Borwein (BB) method with the two-dimensional quadratic termination property. A remarkable feature of the new stepsize is that its computation depends only on the BB stepsizes of previous iterations, without the use of exact line searches or the Hessian, and hence it can easily …
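
The paper's new stepsize is built from previous BB stepsizes and is not reproduced here; as background, the following is a minimal sketch of gradient descent with the classical BB1 stepsize on a convex quadratic. The function name bb_gradient and the toy data are illustrative, not from the paper.

```python
import numpy as np

def bb_gradient(A, b, x0, max_iter=100, tol=1e-8):
    """Gradient descent with the classical BB1 stepsize on f(x) = 0.5 x^T A x - b^T x."""
    x = x0.astype(float)
    g = A @ x - b                       # gradient of the quadratic
    alpha = 1.0 / np.linalg.norm(g)     # conservative first step (no BB pair yet)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g     # BB pair from the last step
        alpha = (s @ s) / (s @ y)       # BB1 stepsize s^T s / s^T y
        x, g = x_new, g_new
    return x

# toy symmetric positive definite quadratic
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
b = rng.standard_normal(5)
x = bb_gradient(A, b, np.zeros(5))
print(np.linalg.norm(A @ x - b))        # residual should be near zero
```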

On the asymptotic convergence and acceleration of gradient methods

We consider the asymptotic behavior of a family of gradient methods, which includes the steepest descent and minimal gradient methods as special instances. It is proved that each method in the family asymptotically zigzags between two directions. Asymptotic convergence results for the objective value, gradient norm, and stepsize are presented as well. To accelerate …
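
The zigzag behavior is easy to observe numerically. A minimal sketch, assuming steepest descent with the exact Cauchy stepsize on a two-dimensional quadratic (other members of the family, such as the minimal gradient method, behave analogously): the normalized gradients settle into two alternating directions.

```python
import numpy as np

# Steepest descent on a 2-D quadratic: successive gradients asymptotically
# alternate (zigzag) between two fixed directions.
A = np.diag([1.0, 10.0])
x = np.array([1.0, 1.0])
dirs = []
for k in range(30):
    g = A @ x                           # gradient of 0.5 x^T A x
    alpha = (g @ g) / (g @ A @ g)       # exact steepest-descent (Cauchy) stepsize
    x = x - alpha * g
    dirs.append(g / np.linalg.norm(g))

# dirs[-4] matches dirs[-2] and dirs[-3] matches dirs[-1]: a two-direction zigzag
print(np.round(dirs[-4], 6), np.round(dirs[-2], 6))
print(np.round(dirs[-3], 6), np.round(dirs[-1], 6))
```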

Barzilai-Borwein Step Size for Stochastic Gradient Descent

One of the major issues in stochastic gradient descent (SGD) methods is how to choose an appropriate step size while running the algorithm. Since the traditional line search technique does not apply to stochastic optimization algorithms, the common practice in SGD is either to use a diminishing step size or to tune a fixed step …
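
A simplified sketch of the idea, not the paper's exact SGD-BB algorithm: run SGD in epochs and, after each epoch, recompute the step size with a BB formula from the change in iterate and gradient between epoch endpoints, scaled down by the epoch length to keep single-sample steps stable. The least-squares data and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true

def full_grad(w):                       # least-squares gradient (for the BB pair)
    return X.T @ (X @ w - y) / n

w = np.zeros(d)
alpha = 1e-3                            # initial step size for the first epoch
w_prev, g_prev = w.copy(), full_grad(w)
for epoch in range(20):
    for i in rng.permutation(n):        # one pass of single-sample SGD
        gi = X[i] * (X[i] @ w - y[i])   # stochastic gradient of sample i
        w -= alpha * gi
    g = full_grad(w)
    s, yv = w - w_prev, g - g_prev
    if s @ yv > 0:                      # keep the BB step size well defined
        alpha = (s @ s) / (n * (s @ yv))  # BB1 formula, scaled by epoch length
    w_prev, g_prev = w.copy(), g
print(np.linalg.norm(w - w_true))       # should shrink toward zero
```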

$L_p$-norm regularization algorithms for optimization over permutation matrices

Optimization problems over permutation matrices appear widely in facility layout, chip design, scheduling, pattern recognition, computer vision, graph matching, etc. Since this problem is NP-hard due to the combinatorial nature of permutation matrices, we relax the variable to the more tractable set of doubly stochastic matrices and add an $L_p$-norm ($0 < p < 1$) regularization …
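
To see why an $L_p$ penalty with $0 < p < 1$ pushes a doubly stochastic matrix toward a permutation matrix, note that among doubly stochastic matrices, $\sum_{ij} X_{ij}^p$ is minimized exactly at permutation matrices. A tiny numerical check (the choice p = 0.5 and the 4x4 size are arbitrary):

```python
import numpy as np

p, n = 0.5, 4
perm = np.eye(n)                        # a permutation matrix
uniform = np.full((n, n), 1.0 / n)      # the "most mixed" doubly stochastic matrix
lp = lambda X: np.sum(np.abs(X) ** p)   # the L_p regularization term
print(lp(perm), lp(uniform))            # 4.0 vs 8.0: smaller at the permutation
```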

A New Trust Region Method with Simple Model for Large-Scale Optimization

In this paper, a new trust region method with a simple model for solving large-scale unconstrained nonlinear optimization problems is proposed. By using the generalized weak quasi-Newton equations, we derive several schemes to determine an appropriate scalar matrix as the Hessian approximation. Under some reasonable conditions and within the framework of the trust-region method, the global convergence …
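
A minimal trust-region sketch in the same spirit, with the scalar-matrix model B_k = gamma_k * I, where gamma_k comes from the weak secant condition s^T B s = s^T y. This is a simplified stand-in for the paper's generalized weak quasi-Newton schemes, and the Rosenbrock test problem and parameter values are illustrative.

```python
import numpy as np

def tr_scalar(f, grad, x, delta=1.0, max_iter=2000, tol=1e-8):
    """Trust-region method whose model Hessian is the scalar matrix gamma * I."""
    g, gamma = grad(x), 1.0
    for _ in range(max_iter):
        gn = np.linalg.norm(g)
        if gn < tol:
            break
        # subproblem min g^T d + 0.5*gamma*||d||^2 s.t. ||d|| <= delta:
        # the solution is a step along -g, truncated at the trust-region boundary
        tau = delta / gn if gamma <= 0 else min(delta / gn, 1.0 / gamma)
        d = -tau * g
        pred = -(g @ d + 0.5 * gamma * (d @ d))   # predicted reduction (> 0 here)
        rho = (f(x) - f(x + d)) / pred            # actual vs predicted reduction
        if rho > 0.1:                             # accept the step
            g_new = grad(x + d)
            s, y = d, g_new - g
            if s @ y > 0:
                gamma = (s @ y) / (s @ s)         # weak secant: s^T (gamma I) s = s^T y
            x, g = x + d, g_new
        # standard radius update
        delta = 2 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
    return x

rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rosen_g = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                              200 * (x[1] - x[0]**2)])
print(tr_scalar(rosen, rosen_g, np.array([-1.2, 1.0])))   # approaches (1, 1)
```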

Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization

In this paper we study stochastic quasi-Newton methods for nonconvex stochastic optimization, where we assume that only stochastic information about the gradients of the objective function is available via a stochastic first-order oracle (SFO). First, we propose a general framework of stochastic quasi-Newton methods for solving nonconvex stochastic optimization. The proposed framework extends the classic …
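
A sketch of the general flavor of such methods, not the paper's specific algorithms: stochastic gradient steps are corrected by an L-BFGS two-loop recursion whose (s, y) pairs are refreshed periodically from same-batch gradient differences (so only the SFO is used), with pairs violating the curvature condition skipped. The toy problem, the mildly nonconvex regularizer, and all parameter values are illustrative assumptions.

```python
import numpy as np
from collections import deque

def two_loop(g, pairs):
    """Apply the limited-memory BFGS inverse-Hessian approximation to g."""
    q, alphas = g.copy(), []
    for s, y, rho in reversed(pairs):             # newest pair first
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if pairs:
        s, y, _ = pairs[-1]
        q *= (s @ y) / (y @ y)                    # initial scaling H0 = gamma * I
    for (s, y, rho), a in zip(pairs, reversed(alphas)):
        q += (a - rho * (y @ q)) * s
    return q

rng = np.random.default_rng(2)
n, d, lam = 300, 8, 0.01
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
t = X @ w_star

def bgrad(w, idx):    # minibatch least-squares gradient plus a nonconvex regularizer
    return X[idx].T @ (X[idx] @ w - t[idx]) / len(idx) + lam * 2 * w / (1 + w**2)**2

w, pairs, eta, L, m = np.zeros(d), deque(maxlen=10), 0.05, 10, 32
w_anchor = w.copy()
for k in range(1, 3001):
    g = bgrad(w, rng.integers(n, size=1))         # single-sample stochastic gradient
    w = w - eta * two_loop(g, list(pairs))        # quasi-Newton-corrected SGD step
    if k % L == 0:                                # refresh the (s, y) pair periodically
        idx = rng.integers(n, size=m)             # same batch at both points
        s = w - w_anchor
        y = bgrad(w, idx) - bgrad(w_anchor, idx)
        if s @ y > 1e-12:                         # skip pairs violating curvature
            pairs.append((s, y, 1.0 / (s @ y)))
        w_anchor = w.copy()
print(np.linalg.norm(w - w_star))                 # should end up small
```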

An Efficient Augmented Lagrangian Method with Applications to Total Variation Minimization

Based on the classic augmented Lagrangian multiplier method, we propose, analyze, and test an algorithm for solving a class of equality-constrained non-smooth optimization problems (chiefly but not necessarily convex programs) with a particular structure. The algorithm effectively combines an alternating direction technique with a nonmonotone line search to minimize the augmented Lagrangian function at each …
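
A minimal sketch of the alternating-direction idea on one-dimensional total variation denoising, min_u ||D u||_1 + (mu/2)||u - f||^2 with the splitting w = D u: alternate a shrinkage step for w, a linear solve for u, and a multiplier update. The plain monotone iteration (no nonmonotone line search) and the parameter values mu, beta are simplifying assumptions.

```python
import numpy as np

def tv_denoise_1d(f, mu=20.0, beta=10.0, iters=200):
    n = len(f)
    D = np.diff(np.eye(n), axis=0)          # forward-difference operator ((n-1) x n)
    u, w, lam = f.copy(), D @ f, np.zeros(n - 1)
    K = mu * np.eye(n) + beta * D.T @ D     # u-subproblem system matrix (fixed)
    for _ in range(iters):
        # w-update: soft-thresholding (shrinkage) of D u + lam/beta
        v = D @ u + lam / beta
        w = np.sign(v) * np.maximum(np.abs(v) - 1.0 / beta, 0.0)
        # u-update: minimize the augmented Lagrangian in u (a linear solve)
        u = np.linalg.solve(K, mu * f + D.T @ (beta * w - lam))
        # multiplier update
        lam = lam + beta * (D @ u - w)
    return u

# piecewise-constant signal plus noise
rng = np.random.default_rng(3)
clean = np.concatenate([np.zeros(30), np.ones(30), -np.ones(30)])
f = clean + 0.1 * rng.standard_normal(90)
u = tv_denoise_1d(f)
print(np.linalg.norm(u - clean), np.linalg.norm(f - clean))  # denoised vs noisy error
```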