Post-Processing with Projection and Rescaling Algorithms for Semidefinite Programming

We propose an algorithm that solves symmetric cone programs (SCPs) by iteratively calling the projection and rescaling method, an algorithm for solving exceptional cases of SCPs. Although our algorithm can solve SCPs by itself, we intend it to be used as a post-processing step for interior point methods, since it can solve the … Read more
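For orientation (an assumption drawn from the standard projection-and-rescaling literature, not stated in this snippet): such methods target conic feasibility problems of the schematic form

\[ \text{find } x \in L \cap \operatorname{int} K, \]

where $L$ is a linear subspace and $K$ is a symmetric cone.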

Weakly convex Douglas-Rachford splitting avoids strict saddle points

We prove that the Douglas-Rachford splitting method converges, almost surely, to local minimizers of semialgebraic weakly convex optimization problems, under the assumption of the strict saddle property. The approach consists of two steps: first, we prove a manifold identification result and establish local smoothness of the involved iteration operator. Then, we proceed to show that strict … Read more
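For context, a minimal sketch of the classical Douglas-Rachford splitting iteration is given below; the simple convex example and function names are illustrative assumptions, and the weakly convex, strict-saddle analysis of the paper is not reflected here.

```python
import numpy as np

def douglas_rachford(prox_f, prox_g, z0, iters=500):
    """Classical Douglas-Rachford splitting for min_x f(x) + g(x).

    prox_f and prox_g are the proximal maps of gamma*f and gamma*g
    for some fixed gamma > 0 (baked into the callables here).
    """
    z = z0.copy()
    for _ in range(iters):
        x = prox_f(z)             # backward step on f
        y = prox_g(2.0 * x - z)   # backward step on g at the reflected point
        z = z + (y - x)           # update of the governing (shadow) sequence
    return prox_f(z)

# Illustrative convex example (assumed, not from the paper):
# minimize 0.5*||x - b||^2 subject to x >= 0, with gamma = 1.
b = np.array([1.0, -2.0, 0.5])
prox_f = lambda v: (v + b) / 2.0            # prox of 0.5*||x - b||^2
prox_g = lambda v: np.maximum(v, 0.0)       # projection onto the nonnegative orthant
print(douglas_rachford(prox_f, prox_g, np.zeros(3)))   # approaches [1, 0, 0.5]
```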

A Cutting-Plane Global Optimization Algorithm for a Special Non-Convex Problem

This study establishes the convergence of a cutting-plane algorithm tailored to a specific non-convex optimization problem. The presentation begins with the problem definition, accompanied by the hypotheses that justify the application of a cutting-plane approach. Following this, we develop an algorithm designed to tackle the problem. Lastly, we prove that the sequence … Read more
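Since the specific non-convex problem and its hypotheses are not included in this snippet, the following is only a generic sketch of a Kelley-style cutting-plane loop on a simple convex model problem, to illustrate the mechanics of accumulating cuts; it is not the algorithm developed in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def kelley_cutting_plane(f, grad, lb, ub, x0, iters=30):
    """Generic Kelley cutting-plane loop for min f(x) on the box [lb, ub]^n.

    Each iteration adds the cut  t >= f(xk) + grad(xk)^T (x - xk)  and
    minimizes t over the accumulated cuts with an LP.
    """
    n = len(x0)
    cuts_A, cuts_b = [], []
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        cuts_A.append(np.append(g, -1.0))      # cut: g^T x - t <= g^T xk - f(xk)
        cuts_b.append(g @ x - f(x))
        c = np.append(np.zeros(n), 1.0)                 # minimize t
        bounds = [(lb, ub)] * n + [(None, None)]        # x in the box, t free
        res = linprog(c, A_ub=np.array(cuts_A), b_ub=np.array(cuts_b), bounds=bounds)
        x = res.x[:n]
    return x

# Illustrative run on f(x) = ||x||^2 over [-2, 2]^2 (assumed example).
print(kelley_cutting_plane(lambda x: x @ x, lambda x: 2 * x, -2.0, 2.0, np.array([2.0, -1.5])))
```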

AdaBB: Adaptive Barzilai-Borwein Method for Convex Optimization

In this paper, we propose AdaBB, an adaptive gradient method based on the Barzilai-Borwein stepsize. The algorithm is line-search-free and parameter-free, and essentially provides a convergent variant of the Barzilai-Borwein method for general unconstrained convex optimization. We analyze the ergodic convergence of the objective function value and the convergence of the iterates for solving general … Read more
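For reference, a minimal sketch of the classical (long) Barzilai-Borwein stepsize on a smooth convex quadratic is given below; AdaBB's specific adaptive, line-search-free safeguards are not part of this snippet, so this is only the textbook baseline with illustrative names.

```python
import numpy as np

def bb_gradient_method(grad, x0, alpha0=1.0, iters=100):
    """Gradient descent with the long Barzilai-Borwein stepsize
    alpha_k = (s_k^T s_k) / (s_k^T y_k), where s_k = x_k - x_{k-1}
    and y_k = grad(x_k) - grad(x_{k-1})."""
    x_prev, g_prev = x0.copy(), grad(x0)
    x = x_prev - alpha0 * g_prev          # one plain gradient step to initialize
    for _ in range(iters):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else alpha0   # fall back if curvature is tiny
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

# Illustrative quadratic: f(x) = 0.5 x^T A x - b^T x (assumed example).
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
print(bb_gradient_method(lambda x: A @ x - b, np.zeros(2)))
print(np.linalg.solve(A, b))     # exact minimizer for comparison
```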

Weak convexity and approximate subdifferentials

We explore and construct an enlarged subdifferential for weakly convex functions. The resulting object turns out to be continuous with respect to both the function argument and the enlargement parameter. We carefully analyze connections with other constructs in the literature and particularize well-known variational principles to the weakly convex setting. By resorting to the new … Read more
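As background (standard terminology assumed, not spelled out in this snippet): a function $f$ is $\rho$-weakly convex when adding a quadratic of modulus $\rho$ restores convexity,

\[ x \mapsto f(x) + \tfrac{\rho}{2}\,\|x\|^2 \ \text{ is convex for some } \rho \ge 0; \]

for instance, $f(x) = |x^2 - 1|$ is $2$-weakly convex on $\mathbb{R}$ but not convex.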

Block Majorization Minimization with Extrapolation and Application to $\beta$-NMF

We propose a Block Majorization Minimization method with Extrapolation (BMMe) for solving a class of multi-convex optimization problems. The extrapolation parameters of BMMe are updated using a novel adaptive update rule. By showing that block majorization minimization can be reformulated as a block mirror descent method, with the Bregman divergence adaptively updated at each iteration, … Read more
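To ground the majorization-minimization angle, a minimal sketch of the classical multiplicative MM updates for Frobenius-norm NMF (the $\beta = 2$ case) is below; BMMe's extrapolation step and its adaptive parameter rule are not shown here, and the function names are illustrative assumptions.

```python
import numpy as np

def nmf_mu(V, r, iters=200, eps=1e-12, seed=0):
    """Classical multiplicative (majorization-minimization) updates for
    min_{W,H >= 0} ||V - W H||_F^2. Each block update is an MM step."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # MM update of block H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # MM update of block W
    return W, H

# Illustrative low-rank nonnegative data (assumed example).
rng = np.random.default_rng(1)
V = rng.random((20, 3)) @ rng.random((3, 15))
W, H = nmf_mu(V, r=3)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative reconstruction error
```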

An Inexact Restoration Direct Multisearch Filter Approach to Multiobjective Constrained Derivative-free Optimization

Direct Multisearch (DMS) is a well-established class of methods for multiobjective derivative-free optimization, where constraints are addressed by an extreme barrier approach, only evaluating feasible points. In this work, we propose a filter approach, combined with an inexact feasibility restoration step, to address constraints in the DMS framework. The filter approach treats feasibility as an … Read more
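For orientation only: the extreme barrier handling mentioned above simply assigns $+\infty$ to infeasible trial points, as in the single-objective coordinate-search poll sketched below; the filter and inexact restoration machinery proposed in the paper is not represented, and the function names are illustrative assumptions.

```python
import numpy as np

def coordinate_search_extreme_barrier(f, feasible, x0, step=1.0, iters=100):
    """Minimal coordinate-search poll with the extreme barrier approach:
    infeasible trial points get value +inf and are never accepted."""
    def fx(x):
        return f(x) if feasible(x) else np.inf
    x = np.asarray(x0, float)
    fbest = fx(x)
    for _ in range(iters):
        improved = False
        for d in np.vstack([np.eye(len(x)), -np.eye(len(x))]):   # poll directions +/- e_i
            trial = x + step * d
            ft = fx(trial)
            if ft < fbest:
                x, fbest, improved = trial, ft, True
                break
        if not improved:
            step *= 0.5          # shrink the mesh after an unsuccessful poll
    return x, fbest

# Illustrative constrained problem (assumed): min (x1-1)^2 + (x2-2)^2 s.t. x1 + x2 <= 2.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
feasible = lambda x: x[0] + x[1] <= 2.0
print(coordinate_search_extreme_barrier(f, feasible, [0.0, 0.0]))
```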

Greedy Newton: Newton’s Method with Exact Line Search

A defining characteristic of Newton’s method is local superlinear convergence within a neighborhood of a strict local minimum. However, outside this neighborhood Newton’s method can converge slowly or even diverge. A common approach to dealing with non-convergence is using a step size that is set by an Armijo backtracking line search. With suitable initialization the … Read more
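A minimal sketch of the idea in the title (the Newton direction combined with an exact one-dimensional line search) is below; the line-search subproblem is solved numerically with scipy here, which is an illustrative choice rather than the paper's procedure.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def newton_exact_linesearch(f, grad, hess, x0, iters=50, tol=1e-10):
    """Newton's method where the step size along the Newton direction is
    chosen by an exact (numerical) one-dimensional line search."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(x), -g)                      # Newton direction
        t = minimize_scalar(lambda t: f(x + t * d)).x         # exact line search in t
        x = x + t * d
    return x

# Illustrative strictly convex example (assumed): f(x) = sum(exp(x)) + ||x||^2.
f = lambda x: np.sum(np.exp(x)) + x @ x
grad = lambda x: np.exp(x) + 2 * x
hess = lambda x: np.diag(np.exp(x)) + 2 * np.eye(len(x))
print(newton_exact_linesearch(f, grad, hess, np.array([2.0, -3.0])))
```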

Solving separable convex optimization problems: Faster prediction-correction framework

He and Yuan’s prediction-correction framework [SIAM J. Numer. Anal. 50: 700-709, 2012] provides convergent algorithms for solving separable convex optimization problems at a rate of $O(1/t)$ (where $t$ denotes the iteration count) in both the ergodic (averaged-iterate) and pointwise senses. This paper presents a faster prediction-correction framework at a rate of $O(1/t)$ … Read more
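As a schematic reminder of the framework being referenced (a hedged recollection of the standard formulation, not a statement of this paper's faster variant): each iteration first produces a predictor $\tilde{w}^k$ from $w^k$ (for example, by one sweep of an ADMM-type scheme) and then corrects it via

\[ w^{k+1} = w^k - M\,(w^k - \tilde{w}^k), \]

where $M$ is a matrix chosen so that a suitable contraction property holds.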

Computational Guarantees for Restarted PDHG for LP based on “Limiting Error Ratios” and LP Sharpness

In recent years, there has been growing interest in solving linear optimization problems – or more simply “LP” – using first-order methods in order to avoid the costly matrix factorizations of traditional methods for huge-scale LP instances. The restarted primal-dual hybrid gradient method (PDHG) – together with some heuristic techniques – has emerged as a … Read more
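To fix notation, a minimal restarted-PDHG sketch for a standard-form LP $\min\{c^\top x : Ax = b,\ x \ge 0\}$ is given below, with a simple fixed-frequency restart to the running average; the paper's restart criterion based on limiting error ratios and LP sharpness is not reproduced, and the function names are illustrative assumptions.

```python
import numpy as np

def restarted_pdhg_lp(c, A, b, iters=2000, restart_every=100):
    """PDHG on the saddle point min_{x>=0} max_y c^T x + y^T (b - A x),
    restarted periodically from the running average of the iterates."""
    m, n = A.shape
    eta = 0.9 / np.linalg.norm(A, 2)          # step sizes with tau*sigma*||A||^2 < 1
    x, y = np.zeros(n), np.zeros(m)
    x_avg, y_avg, count = np.zeros(n), np.zeros(m), 0
    for k in range(1, iters + 1):
        x_new = np.maximum(x - eta * (c - A.T @ y), 0.0)      # projected primal step
        y = y + eta * (b - A @ (2 * x_new - x))               # dual step with extrapolation
        x = x_new
        x_avg, y_avg, count = x_avg + x, y_avg + y, count + 1
        if k % restart_every == 0:                            # naive fixed-frequency restart
            x, y = x_avg / count, y_avg / count
            x_avg, y_avg, count = np.zeros(n), np.zeros(m), 0
    return x, y

# Illustrative tiny LP (assumed example): min x1 + 2 x2 s.t. x1 + x2 = 1, x >= 0.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
print(restarted_pdhg_lp(c, A, b)[0])    # expect a point near [1, 0]
```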