On the acceleration of the Barzilai-Borwein method

The Barzilai-Borwein (BB) gradient method is efficient for solving large-scale unconstrained problems to modest accuracy and has the great advantage of being easily extended to solve a wide class of constrained optimization problems. In this paper, we propose a new stepsize to accelerate the BB method by requiring finite termination for minimizing two-dimensional strongly …
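
As a point of reference for what is being accelerated, here is a minimal sketch of the classical BB iteration on a convex quadratic; the test problem and the choice of the BB1 stepsize are illustrative assumptions, not the new stepsize proposed in the paper.

    # Classical BB gradient method on f(x) = 0.5 x^T A x - b^T x (toy data).
    import numpy as np

    def bb_gradient(A, b, x0, max_iter=500, tol=1e-8):
        x = x0.copy()
        g = A @ x - b                      # gradient of the quadratic
        alpha = 1.0 / np.linalg.norm(A)    # conservative first stepsize
        for _ in range(max_iter):
            x_new = x - alpha * g
            g_new = A @ x_new - b
            if np.linalg.norm(g_new) < tol:
                return x_new
            s, y = x_new - x, g_new - g    # iterate and gradient differences
            alpha = (s @ s) / (s @ y)      # BB1 stepsize: s_k^T s_k / s_k^T y_k
            x, g = x_new, g_new
        return x

    A = np.diag([1.0, 10.0, 100.0])
    b = np.ones(3)
    print(bb_gradient(A, b, np.zeros(3)))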

Inexact proximal stochastic second-order methods for nonconvex composite optimization

In this paper, we propose a framework of Inexact Proximal Stochastic Second-order (IPSS) methods for solving nonconvex optimization problems whose objective function consists of an average of finitely many, possibly weakly smooth, functions and a convex but possibly nonsmooth function. At each iteration, IPSS inexactly solves a proximal subproblem constructed by using some positive …
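
The flavor of such a proximal step can be sketched as follows. As simplifying assumptions, the second-order term is taken to be a positive scalar multiple of the identity and the subproblem is solved exactly by soft-thresholding, whereas IPSS only requires an inexact subproblem solve; the toy data below are likewise assumed for illustration.

    # One proximal step for min f(x) + mu ||x||_1 with model Hessian lam * I.
    import numpy as np

    def soft_threshold(z, tau):
        return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

    def proximal_step(x, grad_f, lam, mu):
        """argmin_y grad^T (y - x) + lam/2 ||y - x||^2 + mu ||y||_1."""
        return soft_threshold(x - grad_f / lam, mu / lam)

    # toy smooth part f(x) = 0.5 ||x - c||^2, so grad f(x) = x - c
    c = np.array([3.0, -0.2, 0.0, 1.5])
    x = np.zeros(4)
    for _ in range(50):
        x = proximal_step(x, x - c, lam=1.0, mu=0.5)
    print(x)   # entries of c with |c_i| <= 0.5 are shrunk to zero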

On the asymptotic convergence and acceleration of gradient methods

We consider the asymptotic behavior of a family of gradient methods, which includes the steepest descent and minimal gradient methods as special instances. It is proved that each method in the family will asymptotically zigzag between two directions. Asymptotic convergence results for the objective value, gradient norm, and stepsize are presented as well. To accelerate …
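
The zigzag phenomenon is easy to reproduce numerically. A small sketch with steepest descent under exact line search on an assumed 2-D quadratic: the successive normalized gradients quickly settle into two alternating directions.

    import numpy as np

    A = np.diag([1.0, 20.0])
    x = np.array([1.0, 1.0])
    for k in range(8):
        g = A @ x                          # gradient of 0.5 x^T A x
        alpha = (g @ g) / (g @ A @ g)      # exact (Cauchy) stepsize
        x = x - alpha * g
        print(k, g / np.linalg.norm(g))    # alternates between two directions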

Gradient methods exploiting spectral properties

We propose a new stepsize for the gradient method. It is shown that this new stepsize converges to the reciprocal of the largest eigenvalue of the Hessian when Dai-Yang's asymptotically optimal gradient method (Computational Optimization and Applications, 2006, 33(1): 73-88) is applied to minimize quadratic objective functions. Based on this spectral property, we develop …
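
The spectral idea behind such stepsizes can be illustrated as follows: for a quadratic, gradient differences satisfy $y_k = A s_k$, so Rayleigh-type quotients built from $(s_k, y_k)$ estimate Hessian eigenvalues. The power iteration below, which recovers $1/\lambda_{\max}$, is a stand-in illustration, not Dai-Yang's method or the paper's stepsize.

    import numpy as np

    A = np.diag([1.0, 4.0, 25.0])
    s = np.random.default_rng(0).standard_normal(3)
    for _ in range(100):
        y = A @ s                      # plays the role of y_k = g_{k+1} - g_k
        lam_max = (s @ y) / (s @ s)    # Rayleigh quotient -> largest eigenvalue
        s = y / np.linalg.norm(y)      # power-method normalization
    print(1.0 / lam_max)               # ~ 1/25, the reciprocal of lambda_max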

A One-Parameter Family of Middle Proximal ADMM for Constrained Separable Convex Optimization

This work is devoted to studying a family of Middle Proximal Alternating Direction Methods of Multipliers (MP-ADM) for solving multi-block constrained separable convex optimization. This one-parameter family of MP-ADM combines both Jacobi- and Gauss-Seidel-type alternating direction updates, and proximal point techniques are applied only to the middle subproblems to promote convergence. We …
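
A hedged toy instance of this structure: three scalar blocks updated Gauss-Seidel style, with a proximal term added only to the middle subproblem. All data (the quadratic objectives $f_i(x_i) = 0.5(x_i - d_i)^2$, the coupling constraint $x_1 + x_2 + x_3 = c$, and the parameters beta, tau) are assumptions for illustration, not the paper's general setting.

    d1, d2, d3, c = 1.0, 2.0, 3.0, 2.0
    beta, tau = 1.0, 1.0                   # penalty and middle-proximal parameters
    x1 = x2 = x3 = lam = 0.0
    for _ in range(200):
        # sequential (Gauss-Seidel) minimization of the augmented Lagrangian
        x1 = (d1 - lam + beta * (c - x2 - x3)) / (1 + beta)
        # middle block carries the extra proximal term tau/2 * (x2 - x2_old)^2
        x2 = (d2 - lam + tau * x2 + beta * (c - x1 - x3)) / (1 + tau + beta)
        x3 = (d3 - lam + beta * (c - x1 - x2)) / (1 + beta)
        lam += beta * (x1 + x2 + x3 - c)   # multiplier ascent step
    print(x1, x2, x3)                      # -> -1/3, 2/3, 5/3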

A Parameterized Proximal Point Algorithm for Separable Convex Optimization

In this paper, we develop a Parameterized Proximal Point Algorithm (P-PPA) for solving a class of separable convex programming problems subject to linear and convex constraints. The proposed algorithm is proved to be globally convergent with a worst-case $O(1/t)$ convergence rate, where $t$ is the iteration number. By properly choosing the algorithm parameters, numerical experiments …
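
The proximal point iteration underlying PPA-type methods is $x_{k+1} = \arg\min_x f(x) + \frac{1}{2r}\|x - x_k\|^2$. A one-dimensional sketch with the assumed test function $f(x) = |x - 1|$, whose proximal map is available in closed form; the P-PPA itself carries additional algorithm parameters not modeled here.

    def prox_abs_shift(x, r, a=1.0):
        """Closed-form prox of f(x) = |x - a|: soft-thresholding around a."""
        z = x - a
        return a + max(abs(z) - r, 0.0) * (1 if z > 0 else -1)

    x, r = 5.0, 0.4
    for k in range(20):
        x = prox_abs_shift(x, r)
    print(x)   # converges to the minimizer x* = 1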

Generalized Symmetric ADMM for Separable Convex Optimization

The Alternating Direction Method of Multipliers (ADMM) has proved to be effective for solving separable convex optimization problems subject to linear constraints. In this paper, we propose a Generalized Symmetric ADMM (GS-ADMM), which updates the Lagrange multiplier twice with suitable stepsizes, to solve multi-block separable convex programs. This GS-ADMM partitions the data into two …
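
A hedged two-block toy showing the distinctive feature, the multiplier updated twice per sweep with stepsizes (s, t): the quadratic data $f(x) = 0.5(x-a)^2$, $g(y) = 0.5(y-b)^2$, the constraint $x + y = c$, and the parameter values are assumptions for illustration.

    a, b, c = 1.0, 3.0, 2.0
    beta, s, t = 1.0, 0.5, 0.5             # penalty and the two dual stepsizes
    x = y = lam = 0.0
    for _ in range(200):
        x = (a - lam + beta * (c - y)) / (1 + beta)
        lam += s * beta * (x + y - c)      # first multiplier update
        y = (b - lam + beta * (c - x)) / (1 + beta)
        lam += t * beta * (x + y - c)      # second multiplier update
    print(x, y)                            # -> 0.0 and 2.0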

Generalized Uniformly Optimal Methods for Nonlinear Programming

In this paper, we present a generic framework to extend existing uniformly optimal convex programming algorithms to solve more general nonlinear, possibly nonconvex, optimization problems. The basic idea is to incorporate a local search step (gradient descent or quasi-Newton iteration) into these uniformly optimal convex programming methods, and then enforce a monotone decreasing property of …
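
A rough sketch of the monotonicity device on an assumed quadratic: take the accelerated candidate only when it beats a plain gradient-descent candidate, so the objective values decrease monotonically. The constant stepsize 1/L, the momentum rule, and the test problem are assumptions, not the paper's exact scheme.

    import numpy as np

    A = np.diag([1.0, 10.0, 100.0]); b = np.ones(3)
    f = lambda x: 0.5 * x @ A @ x - b @ x
    grad = lambda x: A @ x - b
    L = 100.0                              # gradient Lipschitz constant (= lambda_max)

    x = x_prev = np.zeros(3)
    for k in range(1, 300):
        z = x + (k - 1) / (k + 2) * (x - x_prev)   # Nesterov extrapolation
        x_acc = z - grad(z) / L            # accelerated candidate
        x_gd = x - grad(x) / L             # local-search (gradient) candidate
        x_prev, x = x, x_acc if f(x_acc) < f(x_gd) else x_gd
    print(f(x))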

A New Trust Region Method with Simple Model for Large-Scale Optimization

In this paper, a new trust region method with a simple model for solving large-scale unconstrained nonlinear optimization problems is proposed. By using the generalized weak quasi-Newton equations, we derive several schemes to determine an appropriate scalar matrix as the Hessian approximation. Under some reasonable conditions and within the framework of the trust region method, the global convergence …
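
With a scalar-matrix model Hessian $\gamma_k I$, the trust region subproblem $\min_s\, g^T s + \frac{\gamma_k}{2}\|s\|^2$ s.t. $\|s\| \le \Delta_k$ has a closed-form solution, which is the main computational appeal. A hedged sketch, where the test function and the update $\gamma_k = s_k^T y_k / s_k^T s_k$ are illustrative assumptions, not necessarily the paper's scheme:

    import numpy as np

    def solve_subproblem(g, gamma, delta):
        s = -g / gamma                               # unconstrained model minimizer
        if np.linalg.norm(s) > delta:                # project onto the trust region
            s = -delta * g / np.linalg.norm(g)
        return s

    f = lambda x: 0.5 * (x[0]**2 + 10.0 * x[1]**2)
    grad = lambda x: np.array([x[0], 10.0 * x[1]])

    x, gamma, delta = np.array([5.0, 3.0]), 1.0, 1.0
    for _ in range(100):
        g = grad(x)
        if np.linalg.norm(g) < 1e-8:
            break
        s = solve_subproblem(g, gamma, delta)
        pred = -(g @ s + 0.5 * gamma * (s @ s))      # predicted model reduction
        rho = (f(x) - f(x + s)) / pred               # actual vs. predicted agreement
        if rho > 0.1:                                # accept the trial step
            y = grad(x + s) - g
            gamma = max((s @ y) / (s @ s), 1e-3)     # scalar Hessian estimate
            x = x + s
        delta = 2.0 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
    print(x, f(x))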

An O(1/k) Convergence Rate for the Variable Stepsize Bregman Operator Splitting Algorithm

An earlier paper proved the convergence of a variable stepsize Bregman operator splitting algorithm (BOSVS) for minimizing $\phi(Bu)+H(u)$, where $H$ and $\phi$ are convex functions and $\phi$ is possibly nonsmooth. The algorithm was shown to be relatively efficient when applied to partially parallel magnetic resonance image reconstruction problems. In this paper, the convergence rate of …
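
The structure of one splitting step can be sketched under strong simplifications: $H$ is linearized at $u_k$ and a proximal term with stepsize $\delta_k$ is added, and with the assumptions $B = I$ and $\phi = \mu\|\cdot\|_1$ each step reduces to soft-thresholding. BOSVS itself treats general $B$ and selects the stepsize adaptively (held fixed in this sketch).

    import numpy as np

    def soft(z, tau):
        return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

    # smooth part H(u) = 0.5 ||u - d||^2, nonsmooth part phi(u) = mu ||u||_1
    d, mu = np.array([2.0, 0.3, -1.0]), 0.5
    u = np.zeros(3)
    delta = 1.0                                # variable stepsize (fixed here)
    for _ in range(50):
        grad_H = u - d                         # gradient of the linearized part
        u = soft(u - grad_H / delta, mu / delta)
    print(u)                                   # -> soft(d, 0.5) = [1.5, 0, -0.5]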