An Oblivious Ellipsoid Algorithm for Solving a System of (In)Feasible Linear Inequalities

The ellipsoid algorithm is a fundamental algorithm for computing a solution to the system of m linear inequalities in n variables (P) when its set of solutions has positive volume. However, when (P) is infeasible, the ellipsoid algorithm has no mechanism for proving that (P) is infeasible. This is in contrast to the other two … Read more
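
For orientation, the sketch below shows one central-cut iteration of the classical ellipsoid method for a feasibility system Ax <= b. This is the standard method whose lack of an infeasibility certificate motivates the paper, not the oblivious variant the paper proposes; the starting radius R, tolerance, and iteration cap are illustrative assumptions.

```python
import numpy as np

def ellipsoid_feasibility(A, b, R=1e3, max_iters=10_000, tol=1e-9):
    """Central-cut ellipsoid sketch for finding x with A @ x <= b.

    Starts from the ball of radius R (assumed to contain the feasible
    region) and requires n >= 2. Returns a feasible point, or None when
    the budget runs out -- no infeasibility certificate is produced,
    which is exactly the limitation discussed above.
    """
    m, n = A.shape
    x = np.zeros(n)            # ellipsoid center
    Q = (R ** 2) * np.eye(n)   # shape matrix: E = {y : (y-x)^T Q^{-1} (y-x) <= 1}
    for _ in range(max_iters):
        violation = A @ x - b
        i = int(np.argmax(violation))
        if violation[i] <= tol:
            return x                           # all inequalities hold at the center
        a = A[i]                               # most-violated constraint
        g = Q @ a / np.sqrt(a @ Q @ a)         # normalized cut direction
        x = x - g / (n + 1)                    # move center into the kept half-space
        Q = (n**2 / (n**2 - 1.0)) * (Q - (2.0 / (n + 1)) * np.outer(g, g))
    return None
```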

A family of multi-parameterized proximal point algorithms

In this paper, a multi-parameterized proximal point algorithm combined with a relaxation step is developed for solving convex minimization problems subject to linear constraints. We establish its global convergence and sublinear convergence rate from the perspective of variational inequalities. Preliminary numerical experiments on a sparse minimization problem from signal processing indicate that the proposed … Read more
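
As a minimal illustration of the relaxation idea, consider the single-parameter iteration x_{k+1} = x_k + rho * (prox_{lam f}(x_k) - x_k) applied to f = ||.||_1; this is a toy sketch under that assumption, not the paper's multi-parameterized scheme for linearly constrained problems.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def relaxed_proximal_point(prox, x0, lam=1.0, rho=1.5, iters=100):
    """Proximal point iteration with an over-relaxation step, rho in (0, 2):

        x_{k+1} = x_k + rho * (prox_{lam * f}(x_k) - x_k)
    """
    x = x0.copy()
    for _ in range(iters):
        x = x + rho * (prox(x, lam) - x)
    return x

# Toy usage: f = ||.||_1, whose minimizer is the origin.
x = relaxed_proximal_point(soft_threshold, np.array([3.0, -0.2, 1.5]))
```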

Complexity and performance of an Augmented Lagrangian algorithm

Algencan is a well-established safeguarded Augmented Lagrangian algorithm introduced in [R. Andreani, E. G. Birgin, J. M. Martínez and M. L. Schuverdt, On Augmented Lagrangian methods with general lower-level constraints, SIAM Journal on Optimization 18, pp. 1286-1309, 2008]. Complexity results that report its worst-case behavior in terms of iterations and evaluations of functions and … Read more
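
A bare-bones augmented Lagrangian loop with safeguarded (clipped) multipliers conveys the mechanism Algencan builds on. The sketch below is schematic and assumes equality constraints only, whereas Algencan handles general lower-level constraints with more refined update rules; all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, iters=20, rho=10.0, lam_max=1e6):
    """Bare-bones augmented Lagrangian loop for min f(x) s.t. h(x) = 0,
    with safeguarded (clipped) multiplier updates."""
    x = np.asarray(x0, dtype=float)
    lam = np.zeros_like(h(x))
    for _ in range(iters):
        def L(z):  # augmented Lagrangian for the current (lam, rho)
            hz = h(z)
            return f(z) + lam @ hz + 0.5 * rho * hz @ hz
        x = minimize(L, x).x                                  # approximate inner solve
        lam = np.clip(lam + rho * h(x), -lam_max, lam_max)    # safeguarded update
        if np.linalg.norm(h(x)) > 1e-8:
            rho *= 2.0                                        # tighten the penalty
    return x, lam

# Example: min x0^2 + x1^2  s.t.  x0 + x1 = 1 (solution (0.5, 0.5), lam = -1).
x, lam = augmented_lagrangian(lambda z: z @ z,
                              lambda z: np.array([z[0] + z[1] - 1.0]),
                              x0=np.zeros(2))
```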

Accelerated Symmetric ADMM and Its Applications in Signal Processing

The alternating direction method of multipliers (ADMM) has been extensively investigated in the past decades for solving separable convex optimization problems. Fewer researchers have explored its convergence properties in the nonconvex case, although it often performs surprisingly efficiently there. In this paper, we propose a symmetric ADMM based on different acceleration techniques for a family of potentially … Read more
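
The "symmetric" feature, two dual updates per sweep with step sizes r and s, can be seen in the toy sketch below for min 0.5*||x - a||^2 + ||z||_1 s.t. x = z; the problem instance and step sizes are illustrative assumptions, and the paper's acceleration techniques are not reproduced.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def symmetric_admm(a, beta=1.0, r=0.9, s=0.9, iters=200):
    """Symmetric ADMM on  min 0.5*||x - a||^2 + ||z||_1  s.t.  x - z = 0.

    The scaled dual variable u is updated twice per sweep, with step
    sizes r and s -- the 'symmetric' feature.
    """
    x = np.zeros_like(a); z = np.zeros_like(a); u = np.zeros_like(a)
    for _ in range(iters):
        x = (a + beta * (z - u)) / (1.0 + beta)   # x-subproblem (quadratic)
        u = u + r * (x - z)                       # first (intermediate) dual update
        z = soft_threshold(x + u, 1.0 / beta)     # z-subproblem (prox of ||.||_1)
        u = u + s * (x - z)                       # second dual update
    return x, z

x, z = symmetric_admm(np.array([2.0, -0.3, 0.8]))  # z -> soft_threshold(a, 1)
```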

A linearly convergent stochastic recursive gradient method for convex optimization

The stochastic recursive gradient algorithm (SARAH) [8] has attracted much interest recently. It admits a simple recursive framework for updating stochastic gradient estimates. Motivated by this, in this paper we propose a SARAH-I method incorporating importance sampling, whose linear convergence rate for the sequence of distances between the iterates and the optimal set is proven under … Read more
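
The recursive estimator at SARAH's core is compact: an outer loop resets v to the full gradient, and each inner step updates v <- grad_i(w) - grad_i(w_prev) + v. The sketch below uses uniform sampling for simplicity; the SARAH-I importance sampling discussed above is not modeled, and the step size and epoch counts are illustrative.

```python
import numpy as np

def sarah(grad_full, grad_i, w0, n, eta=0.05, epochs=20, inner=None, rng=None):
    """SARAH: outer loop resets v to the full gradient; the inner loop
    keeps a recursive estimate v <- grad_i(w) - grad_i(w_prev) + v."""
    rng = rng or np.random.default_rng(0)
    inner = inner or n
    w = np.asarray(w0, dtype=float)
    for _ in range(epochs):
        v = grad_full(w)
        w_prev, w = w, w - eta * v
        for _ in range(inner):
            i = rng.integers(n)                    # uniform sampling (not SARAH-I)
            v = grad_i(i, w) - grad_i(i, w_prev) + v
            w_prev, w = w, w - eta * v
    return w

# Example: least squares, f(w) = (1/2n) * sum_i (a_i @ w - b_i)^2.
rng = np.random.default_rng(1)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
w = sarah(lambda w: A.T @ (A @ w - b) / len(b),
          lambda i, w: A[i] * (A[i] @ w - b[i]),
          np.zeros(5), n=100)
```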

High-Order Evaluation Complexity for Convexly-Constrained Optimization with Non-Lipschitzian Group Sparsity Terms

This paper studies high-order evaluation complexity for partially separable convexly-constrained optimization involving non-Lipschitzian group sparsity terms in a nonconvex objective function. We propose a partially separable adaptive regularization algorithm using a $p$-th order Taylor model and show that the algorithm can produce an $(\epsilon,\delta)$-approximate $q$-th-order stationary point in at most $O(\epsilon^{-(p+1)/(p-q+1)})$ evaluations of the objective … Read more

Time-Dependent Shortest Path Problems with Penalties and Limits on Waiting

Waiting at the right location at the right time can be critically important in certain variants of time-dependent shortest path problems. We investigate the computational complexity of time-dependent shortest path problems in which there is either a penalty on waiting or a limit on the total time spent waiting at a given subset of the … Read more
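
One concrete way to model penalized waiting is a time-expanded Dijkstra search in which a "wait" move stays at a node for one time unit at a cost. The discrete-time model, names, and parameters below are illustrative assumptions for intuition, not the paper's formulation or complexity analysis.

```python
import heapq

def td_shortest_path(succ, travel, source, target, t0=0, horizon=50, wait_penalty=1.0):
    """Time-expanded Dijkstra over states (node, time). Cost = travel
    time plus wait_penalty per unit of waiting; a 'wait' move stays at
    the current node for one time unit."""
    dist = {(source, t0): 0.0}
    pq = [(0.0, source, t0)]
    while pq:
        c, v, t = heapq.heappop(pq)
        if v == target:
            return c, t                      # (total cost, arrival time)
        if c > dist.get((v, t), float("inf")):
            continue                         # stale heap entry
        moves = [(w, t + travel(v, w, t), float(travel(v, w, t)))
                 for w in succ.get(v, [])]
        moves.append((v, t + 1, wait_penalty))   # wait one unit, pay the penalty
        for w, t2, step in moves:
            if t2 <= horizon and c + step < dist.get((w, t2), float("inf")):
                dist[(w, t2)] = c + step
                heapq.heappush(pq, (c + step, w, t2))
    return None

# Example: the single arc is slow before t = 3, so waiting pays off.
succ = {"s": ["t"]}
travel = lambda v, w, t: 10 if t < 3 else 2
print(td_shortest_path(succ, travel, "s", "t"))   # (5.0, 5): wait 3, then travel 2
```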

Minimum Color-Degree Perfect b-Matchings

The minimum color-degree perfect b-matching problem (Col-BM) is a new extension of the perfect b-matching problem to edge-colored graphs. The objective of Col-BM is to minimize the maximum number of differently colored edges in a perfect b-matching that are incident to the same node. We show that Col-BM is NP-hard on bipartite graphs by a … Read more

Zeroth-order Nonconvex Stochastic Optimization: Handling Constraints, High-Dimensionality, and Saddle-Points

In this paper, we propose and analyze zeroth-order stochastic approximation algorithms for nonconvex and convex optimization, with a focus on constrained optimization, high-dimensional settings, and saddle-point avoidance. To handle constrained optimization, we first propose generalizations of the conditional gradient algorithm achieving rates similar to the standard stochastic gradient algorithm using only zeroth-order information. To … Read more
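
The basic zeroth-order ingredient is a gradient estimate built from function values alone, e.g. Gaussian smoothing: the average of (f(x + mu*u) - f(x))/mu * u over u ~ N(0, I) approximates the gradient. A minimal sketch follows, plugged into a plain unconstrained gradient step rather than the conditional-gradient updates of the paper; parameters are illustrative.

```python
import numpy as np

def zo_gradient(f, x, rng, mu=1e-4, samples=20):
    """Gaussian-smoothing gradient estimate from function values only:
    average of (f(x + mu*u) - f(x)) / mu * u over u ~ N(0, I)."""
    fx = f(x)
    g = np.zeros_like(x)
    for _ in range(samples):
        u = rng.normal(size=x.shape)
        g += (f(x + mu * u) - fx) / mu * u
    return g / samples

def zo_sgd(f, x0, lr=0.1, iters=200, seed=0):
    """Plain zeroth-order gradient descent using the estimate above."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        x = x - lr * zo_gradient(f, x, rng)
    return x

x = zo_sgd(lambda z: np.sum((z - 1.0) ** 2), np.zeros(4))   # -> approx all-ones
```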

On High-order Model Regularization for Multiobjective Optimization

A $p$-order regularization method for finding weak stationary points of multiobjective optimization problems with constraints is introduced. Under Hölder conditions on the derivatives of the objective functions, complexity results are obtained that generalize properties recently proved for scalar optimization.