An Infeasible Interior-point Arc-search Algorithm for Nonlinear Constrained Optimization

In this paper, we propose an infeasible arc-search interior-point algorithm for solving nonlinear programming problems. Most algorithms based on interior-point methods are categorized as line-search methods in the sense that they compute the next iterate on a straight line determined by a search direction that approximates the central path. The proposed arc-search interior-point algorithm uses … Read more
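The abstract is truncated, but the central contrast is between stepping along a straight line and stepping along an arc that approximates the central path. Below is a minimal, hypothetical sketch of that contrast using the common ellipsoidal arc parameterization x(α) = x − ẋ sin α + ẍ (1 − cos α); the vectors xdot and xddot are illustrative stand-ins for central-path derivatives, not quantities computed from the paper's KKT system.

```python
# Minimal sketch (not the authors' algorithm): a line-search trial point versus
# an ellipsoidal arc-search trial point x(alpha) = x - xdot*sin(alpha)
# + xddot*(1 - cos(alpha)).  xdot and xddot stand in for first and second
# derivatives of the central path; here they are illustrative inputs only.
import numpy as np

def line_step(x, dx, alpha):
    """Classical line-search trial point: x + alpha * dx."""
    return x + alpha * dx

def arc_step(x, xdot, xddot, alpha):
    """Ellipsoidal arc trial point approximating the central path."""
    return x - xdot * np.sin(alpha) + xddot * (1.0 - np.cos(alpha))

def largest_feasible_alpha(x, xdot, xddot, alpha=np.pi / 2, shrink=0.7, tol=1e-12):
    """Backtrack along the arc until the trial point stays strictly positive,
    i.e. remains in the interior with respect to x > 0."""
    while alpha > tol:
        if np.all(arc_step(x, xdot, xddot, alpha) > 0):
            return alpha
        alpha *= shrink
    return 0.0

if __name__ == "__main__":
    x = np.array([1.0, 0.5, 2.0])          # current strictly interior iterate
    xdot = np.array([0.8, 0.6, -0.3])      # stand-in first derivative
    xddot = np.array([0.1, -0.05, 0.02])   # stand-in second derivative
    a = largest_feasible_alpha(x, xdot, xddot)
    print("step size:", a)
    print("line trial point:", line_step(x, -xdot, a))
    print("arc trial point :", arc_step(x, xdot, xddot, a))
```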

An Average Curvature Accelerated Composite Gradient Method for Nonconvex Smooth Composite Optimization Problems

This paper presents an accelerated composite gradient (ACG) variant, referred to as the AC-ACG method, for solving nonconvex smooth composite minimization problems. As opposed to well-known ACG variants that are based either on a known Lipschitz gradient constant or on a sequence of maximum observed curvatures, the current one is based on a sequence of average … Read more
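A rough, hypothetical sketch of the underlying idea: drive the composite-gradient stepsize by the running average of curvatures observed along past steps, instead of a known Lipschitz constant. The toy problem, the update details and the safeguards below are assumptions for illustration, not the paper's AC-ACG method.

```python
# Hypothetical sketch: a composite gradient method whose stepsize parameter M
# is the running *average* of observed local curvatures rather than a known
# Lipschitz constant.  Here f is smooth and h = lam*||.||_1; the real AC-ACG
# additionally handles the "bad" iterations such an average may cause.
import numpy as np

def prox_l1(z, t, lam):
    """Proximal map of t*lam*||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)

def avg_curvature_cgm(f, grad_f, x0, lam=0.1, iters=200):
    x, curvatures = x0.copy(), [1.0]
    for _ in range(iters):
        M = np.mean(curvatures)                       # average observed curvature
        g = grad_f(x)
        x_new = prox_l1(x - g / M, 1.0 / M, lam)      # composite gradient step
        s = x_new - x
        if np.linalg.norm(s) > 1e-12:
            # curvature of f observed along the step just taken
            C_obs = 2.0 * (f(x_new) - f(x) - g @ s) / (s @ s)
            curvatures.append(max(abs(C_obs), 1e-8))
        x = x_new
    return x

if __name__ == "__main__":
    d = np.linspace(0.7, 1.3, 30)                     # mildly varying curvature
    f = lambda x: 0.5 * np.sum((d * x - 1.0) ** 2)
    grad_f = lambda x: d * (d * x - 1.0)
    print(np.round(avg_curvature_cgm(f, grad_f, np.zeros(30))[:5], 3))
```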

Accelerated Symmetric ADMM and Its Applications in Signal Processing

The alternating direction method of multipliers (ADMM) has been extensively investigated over the past decades for solving separable convex optimization problems. Fewer researchers have focused on its convergence properties in the nonconvex case, although the method performs surprisingly efficiently there. In this paper, we propose a symmetric ADMM based on different acceleration techniques for a family of potentially … Read more
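As a reference point for the template being accelerated, here is a hedged sketch of a plain symmetric ADMM (two dual updates per iteration, with factors r and s) on a convex LASSO splitting x − y = 0; the acceleration techniques and the nonconvex analysis of the paper are not reproduced.

```python
# Hedged sketch of the symmetric ADMM template on min 0.5||Ax-b||^2 + lam*||y||_1
# subject to x - y = 0, with two dual updates per iteration (factors r and s).
# Illustrative only, not the paper's accelerated nonconvex scheme.
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def symmetric_admm(A, b, lam=0.1, beta=1.0, r=0.9, s=0.9, iters=300):
    n = A.shape[1]
    x, y, u = np.zeros(n), np.zeros(n), np.zeros(n)    # u: scaled dual variable
    M = np.linalg.inv(A.T @ A + beta * np.eye(n))      # x-subproblem system, factored once
    Atb = A.T @ b
    for _ in range(iters):
        x = M @ (Atb + beta * (y - u))                 # x-step: quadratic subproblem
        u = u + r * (x - y)                            # first (partial) dual update
        y = soft(x + u, lam / beta)                    # y-step: prox of lam*||.||_1
        u = u + s * (x - y)                            # second dual update
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 80))
    b = A @ (rng.standard_normal(80) * (rng.random(80) < 0.1))
    print("nonzeros in solution:", np.count_nonzero(symmetric_admm(A, b)))
```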

An accelerated inexact proximal point method for solving nonconvex-concave min-max problems

This paper presents a quadratic-penalty-type method for solving linearly constrained composite nonconvex-concave min-max problems. The method consists of solving a sequence of penalty subproblems which, due to the min-max structure of the problem, are potentially nonsmooth but can be approximated by smooth composite nonconvex minimization problems. Each of these penalty subproblems is then solved … Read more
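A hedged toy illustration of the penalty idea: the linear constraint is moved into a quadratic penalty, and the (nonsmooth) inner max is replaced by a smooth log-sum-exp surrogate, yielding a smooth composite subproblem. The data, the smoothing choice, and the plain gradient loop below are assumptions; the paper solves its subproblems with an accelerated inexact proximal point method instead.

```python
# Toy sketch (not the paper's method): handle Ax = b by a quadratic penalty
# (c/2)||Ax - b||^2 and smooth the inner max over finitely many pieces with a
# log-sum-exp surrogate, then run plain gradient steps on the smooth subproblem.
import numpy as np

def smoothed_max(z, mu):
    """Smooth approximation of max_i z_i (log-sum-exp with parameter mu)."""
    m = np.max(z)
    return mu * np.log(np.sum(np.exp((z - m) / mu))) + m

def grad_smoothed_max(z, mu):
    m = np.max(z)
    w = np.exp((z - m) / mu)
    return w / np.sum(w)                    # softmax weights

def penalty_gradient_step(x, F, JF, A, b, c, mu, step):
    """One gradient step on  smoothed max_i F_i(x) + (c/2)||Ax - b||^2."""
    g = JF(x).T @ grad_smoothed_max(F(x), mu) + c * A.T @ (A @ x - b)
    return x - step * g

if __name__ == "__main__":
    A = np.array([[1.0, 1.0, 1.0]]); b = np.array([1.0])
    F = lambda x: np.array([x @ x, (x - 1.0) @ (x - 1.0)])   # two smooth pieces
    JF = lambda x: np.vstack([2.0 * x, 2.0 * (x - 1.0)])     # their Jacobian
    x = np.zeros(3)
    for _ in range(800):
        x = penalty_gradient_step(x, F, JF, A, b, c=10.0, mu=0.05, step=0.01)
    print("x =", np.round(x, 3), " Ax - b =", np.round(A @ x - b, 3))
```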

A FISTA-type accelerated gradient algorithm for solving smooth nonconvex composite optimization problems

In this paper, we describe two accelerated composite gradient (ACG) variants and establish their iteration complexity for solving a smooth nonconvex composite optimization problem whose objective function is the sum of a nonconvex differentiable function f with a Lipschitz continuous gradient and a simple nonsmooth closed convex function h. When f is convex, the first ACG … Read more
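For orientation, here is a minimal FISTA-style sketch for min f(x) + h(x) with h = lam*||.||_1 and a known Lipschitz constant; the ACG variants analyzed in the paper add the tests and safeguards needed when f is nonconvex, which are omitted here.

```python
# Minimal FISTA-style sketch for min f(x) + h(x), h = lam*||.||_1, assuming a
# known Lipschitz constant L of grad f.  The nonconvex-capable variants in the
# paper add extra checks; this plain convex version only shows the ACG template.
import numpy as np

def prox_l1(z, t, lam):
    return np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)

def fista(grad_f, L, x0, lam=0.1, iters=300):
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = prox_l1(y - grad_f(y) / L, 1.0 / L, lam)   # composite gradient step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))   # Nesterov momentum weight
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)      # extrapolated point
        x, t = x_new, t_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 60))
    b = A @ (rng.standard_normal(60) * (rng.random(60) < 0.1))
    L = np.linalg.norm(A, 2) ** 2                          # Lipschitz constant of grad f
    x = fista(lambda y: A.T @ (A @ y - b), L, np.zeros(60))
    print("nonzeros:", np.count_nonzero(x))
```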

A Log-Barrier Newton-CG Method for Bound Constrained Optimization with Complexity Guarantees

We describe an algorithm based on a logarithmic barrier function, Newton’s method, and linear conjugate gradients, that obtains an approximate minimizer of a smooth function over the nonnegative orthant. We develop a bound on the complexity of the approach, stated in terms of the required accuracy and the cost of a single gradient evaluation of … Read more
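A hedged sketch of the three ingredients named in the abstract, on a tiny convex quadratic: a log-barrier reformulation of x ≥ 0, Newton steps, and linear conjugate gradients for the Newton systems. The schedules, tolerances and the fraction-to-boundary rule are illustrative assumptions, and the paper's treatment of negative curvature is omitted.

```python
# Hedged sketch: approximately minimize f(x) - mu*sum(log x) with Newton steps
# whose linear systems are solved by conjugate gradients, then shrink mu.
# Parameters and the test problem are illustrative; negative-curvature handling
# from the paper is omitted (f below is convex).
import numpy as np

def cg(Hmv, g, tol=1e-8, maxit=200):
    """Plain linear conjugate gradients for H d = -g, with H as a mat-vec map."""
    d, r = np.zeros_like(g), -g.copy()
    p, rs = r.copy(), r @ r
    for _ in range(maxit):
        Hp = Hmv(p)
        alpha = rs / (p @ Hp)
        d, r = d + alpha * p, r - alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p, rs = r + (rs_new / rs) * p, rs_new
    return d

def barrier_newton_cg(grad_f, hess_f, x0, mu=1.0, outer=8, inner=20):
    x = x0.copy()
    for _ in range(outer):
        for _ in range(inner):
            g = grad_f(x) - mu / x                            # barrier gradient
            Hmv = lambda v: hess_f(x) @ v + (mu / x**2) * v   # barrier Hessian action
            d = cg(Hmv, g)
            neg = d < 0                                       # fraction-to-boundary rule
            alpha = 1.0 if not np.any(neg) else min(1.0, 0.99 * np.min(-x[neg] / d[neg]))
            x = x + alpha * d                                 # keeps x > 0
        mu *= 0.2                                             # shrink barrier parameter
    return x

if __name__ == "__main__":
    Q = np.array([[2.0, 0.5], [0.5, 1.0]]); c = np.array([-1.0, 1.0])
    x = barrier_newton_cg(lambda x: Q @ x + c, lambda x: Q, np.array([1.0, 1.0]))
    print("approximate minimizer over x >= 0:", np.round(x, 4))
```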

Inertial Block Mirror Descent Method for Non-Convex Non-Smooth Optimization

In this paper, we propose inertial versions of block coordinate descent methods for solving non-convex non-smooth composite optimization problems. We use the general framework of Bregman distance functions to compute the proximal maps. Our method not only allows the use of two different extrapolation points to evaluate the gradients and to add the inertial force, but also takes advantage … Read more
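A hedged two-block sketch of the ingredients mentioned above: mirror (Bregman proximal) steps with an entropy kernel on one block and a Euclidean kernel on the other, using one extrapolation point to evaluate the gradient and a different one to carry the inertia. The toy objective, kernels and parameters are illustrative assumptions, not the method of the paper.

```python
# Hedged sketch: inertial two-block mirror descent on 0.5*||Ax - y - b||^2 with
# x on the simplex (entropy-kernel Bregman step) and y unconstrained (Euclidean
# step).  Each block uses one extrapolated point for the gradient and a second,
# different extrapolated point for the inertial term.  Illustrative only.
import numpy as np

def mirror_step_entropy(x, g, t):
    """Bregman proximal step on the simplex with the entropy kernel."""
    z = x * np.exp(-t * g)
    return z / np.sum(z)

def mirror_step_euclidean(y, g, t):
    """Bregman step with the squared-Euclidean kernel (a plain gradient step)."""
    return y - t * g

def inertial_block_mirror_descent(A, b, iters=300, t=0.1, bg=0.3, bp=0.2):
    m, n = A.shape
    x, y = np.full(n, 1.0 / n), np.zeros(m)
    x_prev, y_prev = x.copy(), y.copy()
    for _ in range(iters):
        # block x: gradient at one extrapolated point, inertia from another
        x_grad_pt = x + bg * (x - x_prev)
        x_inertia = np.maximum(x + bp * (x - x_prev), 1e-12)   # stay in entropy domain
        gx = A.T @ (A @ x_grad_pt - y - b)
        x_prev, x = x, mirror_step_entropy(x_inertia, gx, t)
        # block y: same pattern with the Euclidean kernel
        y_grad_pt = y + bg * (y - y_prev)
        y_inertia = y + bp * (y - y_prev)
        gy = -(A @ x - y_grad_pt - b)
        y_prev, y = y, mirror_step_euclidean(y_inertia, gy, t)
    return x, y

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    A, b = rng.standard_normal((15, 10)), rng.standard_normal(15)
    x, y = inertial_block_mirror_descent(A, b)
    print("sum(x) =", round(float(x.sum()), 6),
          " residual =", round(float(np.linalg.norm(A @ x - y - b)), 6))
```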

Minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity

An adaptive regularization algorithm using inexact function and derivative evaluations is proposed for the solution of composite nonsmooth nonconvex optimization problems. It is shown that this algorithm needs at most $O(|\log(\epsilon)|\,\epsilon^{-2})$ evaluations of the problem's functions and their derivatives to find an $\epsilon$-approximate first-order stationary point. This complexity bound therefore generalizes the one provided by [Bellavia, Gurioli, … Read more
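A hedged sketch of one ingredient on a toy problem: an adaptive-regularization step whose model is minimized along the steepest-descent direction, whose regularization weight sigma is updated from the achieved-versus-predicted decrease, and whose function values carry a bounded evaluation error. The noise level, update factors, simplified model (curvature term dropped) and test function are illustrative assumptions, not the algorithm analyzed in the paper.

```python
# Hedged sketch of an adaptive-regularization step under *inexact* evaluations:
# a cubic-regularized model is minimized along -g (second-order term dropped for
# brevity), sigma adapts via achieved-vs-predicted decrease, and every function
# value carries a bounded error.  Illustrative only, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(3)

def f_exact(x):                        # smooth nonconvex test function
    return (1.0 - x[0])**2 + 5.0 * (x[1] - x[0]**2)**2

def grad_exact(x):
    return np.array([-2.0 * (1.0 - x[0]) - 20.0 * x[0] * (x[1] - x[0]**2),
                     10.0 * (x[1] - x[0]**2)])

def inexact(val, level):
    return val + level * rng.uniform(-1.0, 1.0)          # bounded evaluation error

def ar_step(x, sigma, noise=1e-4):
    g = grad_exact(x) + noise * rng.uniform(-1.0, 1.0, size=2)
    gn = np.linalg.norm(g)
    a = 1.0 / np.sqrt(sigma * gn)                        # minimizes -a*gn^2 + (sigma/3)*(a*gn)^3
    s = -a * g
    pred = a * gn**2 - (sigma / 3.0) * (a * gn)**3       # predicted decrease (> 0)
    rho = (inexact(f_exact(x), noise) - inexact(f_exact(x + s), noise)) / pred
    if rho >= 0.1:                                       # successful step: accept, relax sigma
        return x + s, max(sigma / 2.0, 1e-3)
    return x, 2.0 * sigma                                # unsuccessful: reject, tighten sigma

if __name__ == "__main__":
    x, sigma = np.array([-1.0, 1.0]), 1.0
    for _ in range(500):
        x, sigma = ar_step(x, sigma)
    print("x =", np.round(x, 3), " f(x) =", round(float(f_exact(x)), 5))
```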

When a maximal angle among cones is nonobtuse

Principal angles between linear subspaces have been studied for their application to statistics, numerical linear algebra, and other areas. In 2005, Iusem and Seeger defined critical angles within a single convex cone as an extension of antipodality in a compact set. Then, in 2016, Seeger and Sossa extended that notion to two cones. This was … Read more

Zeroth-order Nonconvex Stochastic Optimization: Handling Constraints, High-Dimensionality, and Saddle-Points

In this paper, we propose and analyze zeroth-order stochastic approximation algorithms for nonconvex and convex optimization, with a focus on constrained optimization, high-dimensional settings, and saddle-point avoidance. To handle constrained optimization, we first propose generalizations of the conditional gradient algorithm that achieve rates similar to those of the standard stochastic gradient algorithm while using only zeroth-order information. To … Read more
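A hedged sketch of the zeroth-order conditional-gradient idea on a toy simplex-constrained problem: the gradient is estimated purely from (slightly noisy) function values via Gaussian smoothing, and the linear minimization oracle over the simplex picks the best vertex. Batch size, smoothing radius and the objective are illustrative assumptions.

```python
# Hedged sketch: zeroth-order conditional gradient (Frank-Wolfe) over the
# probability simplex.  Gradients are estimated only from noisy function values
# by Gaussian smoothing; the LMO returns the best simplex vertex.  Illustrative.
import numpy as np

rng = np.random.default_rng(4)

def zo_gradient(f, x, mu=0.01, batch=20):
    """Gaussian-smoothing gradient estimate from function evaluations only."""
    g = np.zeros_like(x)
    for _ in range(batch):
        u = rng.standard_normal(x.size)
        g += (f(x + mu * u) - f(x)) / mu * u
    return g / batch

def zo_frank_wolfe(f, n, iters=200):
    x = np.full(n, 1.0 / n)                       # start at the simplex barycenter
    for k in range(iters):
        g = zo_gradient(f, x)
        v = np.zeros(n)
        v[np.argmin(g)] = 1.0                     # LMO over the simplex: best vertex
        gamma = 2.0 / (k + 2.0)                   # standard Frank-Wolfe stepsize
        x = (1.0 - gamma) * x + gamma * v
    return x

if __name__ == "__main__":
    target = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
    f = lambda x: np.sum((x - target) ** 2) + 1e-4 * rng.standard_normal()   # noisy values
    print("x =", np.round(zo_frank_wolfe(f, n=5), 3))
```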