Globally convergent Newton-type methods for multiobjective optimization

We propose two Newton-type methods for solving (possibly) nonconvex unconstrained multiobjective optimization problems. The first is directly inspired by the Newton method designed to solve convex problems, whereas the second uses second-order information of the objective functions with ingredients of the steepest descent method. One of the key points of our approaches is to impose …
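
As context for the kind of subproblem such methods solve, the sketch below computes the classical multiobjective Newton direction of Fliege, Graña Drummond and Svaiter by minimizing the worst-case second-order model of the objectives. The bi-objective test instance and the use of SciPy's SLSQP solver are illustrative choices; the paper's two methods may define or modify this subproblem differently.

```python
import numpy as np
from scipy.optimize import minimize

def newton_direction(x, grads, hessians):
    """Multiobjective Newton direction: minimize over d the worst-case model
    max_i ( g_i^T d + 0.5 d^T H_i d ), written in epigraph form
    min_{d,t} t  subject to  g_i^T d + 0.5 d^T H_i d <= t  for every objective i."""
    n = len(x)
    cons = [{"type": "ineq",
             "fun": lambda z, g=g, H=H: z[n] - (g @ z[:n] + 0.5 * z[:n] @ (H @ z[:n]))}
            for g, H in zip(grads, hessians)]
    res = minimize(lambda z: z[n], np.zeros(n + 1), constraints=cons, method="SLSQP")
    return res.x[:n]

# Illustrative bi-objective instance: f1(x) = ||x||^2 and f2(x) = ||x - e||^2.
x = np.array([2.0, -1.0])
e = np.ones(2)
grads = [2 * x, 2 * (x - e)]
hessians = [2 * np.eye(2), 2 * np.eye(2)]
print("Newton-type direction:", newton_direction(x, grads, hessians))
```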

Beyond Alternating Updates for Matrix Factorization with Inertial Bregman Proximal Gradient Algorithms

Matrix factorization is a popular problem with a non-convex objective, for which alternating minimization schemes are mostly used. They usually suffer from the major drawback that the solution is biased towards one of the optimization variables. A remedy is offered by non-alternating schemes. However, due to the lack of Lipschitz continuity of the gradient in matrix factorization problems, convergence cannot …
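
To make the alternating/non-alternating distinction concrete, here is a toy comparison on minimizing ||A - U V^T||_F^2 over (U, V): an alternating scheme that solves each block exactly with the other frozen, and a non-alternating scheme in which both blocks move along the joint gradient. The step size, problem sizes, and plain joint gradient step are illustrative assumptions; the paper's inertial Bregman proximal gradient scheme is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 30, 20, 5
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # exactly rank-r data
U0, V0 = rng.standard_normal((m, r)), rng.standard_normal((n, r))

def loss(U, V):
    return 0.5 * np.linalg.norm(A - U @ V.T) ** 2

# Alternating scheme: each block is minimized while the other stays frozen,
# which is where the bias towards the most recently updated block comes from.
def alternating_step(U, V):
    U = np.linalg.lstsq(V, A.T, rcond=None)[0].T
    V = np.linalg.lstsq(U, A, rcond=None)[0].T
    return U, V

# Non-alternating scheme: both blocks move simultaneously along the joint gradient.
# That joint gradient is not globally Lipschitz continuous, which is the analytical
# difficulty the abstract refers to.
def joint_gradient_step(U, V, step=1e-3):
    R = U @ V.T - A
    return U - step * R @ V, V - step * R.T @ U

U1, V1 = U0.copy(), V0.copy()
U2, V2 = U0.copy(), V0.copy()
for _ in range(200):
    U1, V1 = alternating_step(U1, V1)
    U2, V2 = joint_gradient_step(U2, V2)
print("alternating loss:", loss(U1, V1), " joint-update loss:", loss(U2, V2))
```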

Burer-Monteiro guarantees for general semidefinite programs

Consider a semidefinite program (SDP) with $m$ constraints, involving an $n\times n$ positive semidefinite matrix $X$. The Burer-Monteiro method consists in solving a nonconvex program in $Y$, where $Y$ is an $n\times p$ matrix such that $X = Y Y^T$. Despite nonconvexity, Boumal et al. showed that the method provably solves generic equality-constrained SDPs when $p > \sqrt{2m}$, …
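
The sketch below illustrates the substitution on a tiny equality-constrained SDP: the PSD variable $X$ is replaced by a factor $Y$ with $p > \sqrt{2m}$, and the resulting nonconvex problem in $Y$ is solved here with a simple quadratic penalty on the constraints and an off-the-shelf BFGS routine. The penalty treatment, the data generation, and the use of scipy.optimize.minimize are illustrative assumptions, not the setting in which the paper's guarantees are stated (those concern the exactly constrained factorized problem).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, m = 10, 4
p = int(np.ceil(np.sqrt(2 * m))) + 1        # factorization rank with p > sqrt(2m)

# Toy SDP data:  min <C, X>  s.t.  <A_i, X> = b_i,  X positive semidefinite.
M = rng.standard_normal((n, n))
C = M @ M.T / n                              # PSD cost keeps the toy problem bounded below
A_list = []
for _ in range(m):
    S = rng.standard_normal((n, n))
    A_list.append((S + S.T) / 2)
Z = rng.standard_normal((n, p))
b = np.array([np.tensordot(Ai, Z @ Z.T) for Ai in A_list])   # feasible right-hand side

# Burer-Monteiro substitution X = Y Y^T: optimize over the n x p factor Y, which
# enforces positive semidefiniteness for free at the price of nonconvexity.
def penalized(y_flat, rho=10.0):
    Y = y_flat.reshape(n, p)
    X = Y @ Y.T
    viol = np.array([np.tensordot(Ai, X) - bi for Ai, bi in zip(A_list, b)])
    return np.tensordot(C, X) + 0.5 * rho * viol @ viol

res = minimize(penalized, rng.standard_normal(n * p), method="BFGS")
X = res.x.reshape(n, p) @ res.x.reshape(n, p).T
print("max constraint violation:",
      max(abs(np.tensordot(Ai, X) - bi) for Ai, bi in zip(A_list, b)))
```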

Convex-Concave Backtracking for Inertial Bregman Proximal Gradient Algorithms in Non-Convex Optimization

Backtracking line-search is an old yet powerful strategy for finding a better step size to be used in proximal gradient algorithms. The main principle is to locally find a simple convex upper bound of the objective function, which in turn controls the step size that is used. In the case of inertial proximal gradient algorithms, the situation …
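
The principle is easy to state in code: increase a local Lipschitz estimate L until a convex quadratic upper bound on the smooth part holds at the candidate point, then take the proximal gradient step with step size 1/L. Below is a minimal sketch for a plain (non-inertial) proximal gradient method on an l1-regularized least-squares toy problem; the problem data and the simple doubling/halving rule for L are illustrative, and the convex-concave backtracking of the paper, which also handles the inertial term, is not reproduced here.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_grad_backtracking(f, grad_f, x0, lam=0.1, L0=1.0, iters=100):
    """Proximal gradient with backtracking on the smooth part f:
    the step 1/L is accepted once
    f(x+) <= f(x) + <grad f(x), x+ - x> + (L/2) ||x+ - x||^2 holds."""
    x, L = x0, L0
    for _ in range(iters):
        g = grad_f(x)
        while True:
            x_new = soft_threshold(x - g / L, lam / L)   # prox of lam * ||.||_1
            diff = x_new - x
            if f(x_new) <= f(x) + g @ diff + 0.5 * L * (diff @ diff):
                break
            L *= 2.0                       # upper bound violated: make the model steeper
        x, L = x_new, max(L0, L / 2.0)     # allow L to adapt downward again
    return x

# Toy l1-regularized least-squares instance (sizes and data are arbitrary).
rng = np.random.default_rng(2)
A, b = rng.standard_normal((40, 100)), rng.standard_normal(40)
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
grad_f = lambda x: A.T @ (A @ x - b)
x = prox_grad_backtracking(f, grad_f, np.zeros(100))
print("nonzero coefficients:", np.count_nonzero(x))
```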

Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact

In many cases in which one wishes to minimize a complicated or expensive function, it is convenient to employ cheap approximations, at least when the current approximation to the solution is poor. Adequate strategies for deciding the accuracy desired at each stage of optimization are crucial for the global convergence and overall efficiency of the …
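
As a cartoon of such a strategy (and only that; the rules analyzed in the paper are not reproduced here), one can tie the accuracy requested from an inexact oracle to the current measure of progress, so that cheap, crude evaluations are used far from a solution and precision is tightened as the iterates improve. In the sketch below the noisy oracle, the step size, and the rule linking accuracy to the approximate gradient norm are all illustrative assumptions.

```python
import numpy as np

def inexact_gradient(grad, x, accuracy, rng):
    """Stand-in for an oracle whose cost grows as `accuracy` shrinks:
    it returns the true gradient corrupted by noise of size `accuracy`."""
    g = grad(x)
    return g + accuracy * rng.standard_normal(g.shape)

def adaptive_accuracy_descent(grad, x, step=0.1, iters=100, theta=0.1):
    rng = np.random.default_rng(3)
    accuracy = 1.0                        # start with a very cheap, crude oracle
    for _ in range(iters):
        g = inexact_gradient(grad, x, accuracy, rng)
        x = x - step * g
        # Tighten the requested accuracy as the (approximate) gradient shrinks,
        # so that high precision is only paid for near a solution.
        accuracy = min(accuracy, theta * np.linalg.norm(g))
    return x

grad = lambda x: 2.0 * (x - 1.0)          # gradient of ||x - 1||^2, for illustration
print(adaptive_accuracy_descent(grad, np.zeros(5)))
```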

An extragradient method for solving variational inequalities without monotonicity

A new extragradient projection method is devised in this paper, which does not require generalized monotonicity and assumes only that the so-called dual variational inequality has a solution in order to ensure its global convergence. In particular, it applies to quasimonotone variational inequalities having a nontrivial solution.
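
For reference, the classical extragradient iteration for a variational inequality VI$(F, C)$ takes a trial projected step and then re-projects using the operator value at the trial point. The sketch below runs this plain (Korpelevich-type) version on a toy affine operator over a box; the operator, step size, and constraint set are arbitrary, and the modifications that let the paper's method dispense with generalized monotonicity are not shown.

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    return np.clip(x, lo, hi)

def extragradient(F, x0, tau=0.1, iters=200, project=project_box):
    """Classical extragradient iteration:
       y_k     = P_C(x_k - tau * F(x_k))   (trial step)
       x_{k+1} = P_C(x_k - tau * F(y_k))   (correction using F at the trial point)."""
    x = x0
    for _ in range(iters):
        y = project(x - tau * F(x))
        x = project(x - tau * F(y))
    return x

# Toy affine operator, chosen only to have something concrete to iterate on.
M = np.array([[1.0, 2.0], [-2.0, 1.0]])
q = np.array([-0.5, 0.5])
F = lambda x: M @ x + q
print(extragradient(F, np.zeros(2)))
```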

A class of derivative-free CG projection methods for nonsmooth equations with an application to the LASSO problem

In this paper, based on a modified Gram-Schmidt (MGS) process, we propose a class of derivative-free conjugate gradient (CG) projection methods for nonsmooth equations with convex constraints. Two attractive features of the new class of methods are: (i) the generated direction contains a free vector, which can be set as any vector such that the …
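
To situate this class of methods, the generic derivative-free projection framework for a constrained system F(x) = 0, x in C, combines a search direction, a derivative-free line search along it, and a hyperplane projection step followed by a projection back onto C. The sketch below uses the plain residual direction d = -F(x) on a toy monotone affine map over a box; it illustrates only the projection mechanism, not the MGS-based CG directions with a free vector that the paper proposes.

```python
import numpy as np

def project_box(x, lo=0.0, hi=10.0):
    return np.clip(x, lo, hi)

def df_projection_method(F, x0, iters=100, sigma=1e-4, beta=0.5):
    """Generic derivative-free projection framework for monotone equations,
    with the simple direction d = -F(x) in place of a CG-type direction."""
    x = x0
    for _ in range(iters):
        Fx = F(x)
        if np.linalg.norm(Fx) < 1e-8:
            break
        d = -Fx
        # Derivative-free line search: shrink t until -<F(x + t d), d> >= sigma * t * ||d||^2.
        t = 1.0
        while -(F(x + t * d) @ d) < sigma * t * (d @ d):
            t *= beta
        z = x + t * d
        Fz = F(z)
        if np.linalg.norm(Fz) < 1e-12:
            return z
        # Hyperplane projection step, then projection back onto the feasible box.
        x = project_box(x - (Fz @ (x - z)) / (Fz @ Fz) * Fz)
    return x

# Toy monotone map: an affine operator with a symmetric positive definite matrix.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])                 # chosen so the root (2, 3) lies inside the box
F = lambda x: A @ x - b
print(df_projection_method(F, np.array([5.0, 5.0])))
```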

Global Convergence in Deep Learning with Variable Splitting via the Kurdyka-Łojasiewicz Property

Deep learning has recently attracted a significant amount of attention due to its great empirical success. However, the effectiveness of training deep neural networks (DNNs) remains a mystery from the viewpoint of the associated nonconvex optimization problems. In this paper, we aim to provide some theoretical understanding of such optimization problems. In particular, the Kurdyka-Łojasiewicz (KL) property is established …
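
For context only (this is the standard definition, not a statement of the paper's results): a proper lower semicontinuous function $f$ satisfies the KL property at $\bar{x} \in \mathrm{dom}\,\partial f$ if there exist $\eta > 0$, a neighbourhood $U$ of $\bar{x}$, and a concave desingularizing function $\varphi \in C[0,\eta) \cap C^1(0,\eta)$ with $\varphi(0) = 0$ and $\varphi' > 0$ on $(0,\eta)$ such that $\varphi'\bigl(f(x) - f(\bar{x})\bigr)\,\mathrm{dist}\bigl(0, \partial f(x)\bigr) \ge 1$ for all $x \in U$ with $f(\bar{x}) < f(x) < f(\bar{x}) + \eta$. Establishing this property for a splitting formulation is what underpins the global convergence analysis referred to in the title.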

Douglas-Rachford method for the feasibility problem involving a circle and a disc

The Douglas-Rachford algorithm is a classical and successful method for solving feasibility problems. Here, we provide a region of global convergence of the algorithm for the feasibility problem involving a disc and a circle in the two-dimensional Euclidean space.
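
Both projections involved are explicit, so the Douglas-Rachford iteration for this particular feasibility problem takes only a few lines. In the sketch below the centres, radii, and starting point are arbitrary placeholders rather than the configuration analyzed in the paper; whether the iteration converges from a given starting point is precisely the kind of question the paper's convergence region addresses.

```python
import numpy as np

def proj_circle(x, center, radius):
    """Nearest point on the circle (a nonconvex set) with given center and radius."""
    d = x - center
    nrm = np.linalg.norm(d)
    if nrm == 0.0:                          # every circle point is nearest; pick one
        return center + np.array([radius, 0.0])
    return center + radius * d / nrm

def proj_disc(x, center, radius):
    """Nearest point in the closed disc with given center and radius."""
    d = x - center
    nrm = np.linalg.norm(d)
    return x if nrm <= radius else center + radius * d / nrm

def douglas_rachford(x, proj_A, proj_B, iters=200):
    """Plain DR iteration x <- x + P_B(2 P_A(x) - x) - P_A(x);
    the candidate solution (the 'shadow' iterate) is P_A(x)."""
    for _ in range(iters):
        a = proj_A(x)
        x = x + proj_B(2 * a - x) - a
    return proj_A(x)

# Illustrative instance: the unit circle at the origin and an overlapping disc.
cA, rA = np.array([0.0, 0.0]), 1.0
cB, rB = np.array([1.0, 0.0]), 0.8
sol = douglas_rachford(np.array([2.0, 2.0]),
                       lambda z: proj_circle(z, cA, rA),
                       lambda z: proj_disc(z, cB, rB))
print(sol, np.linalg.norm(sol - cA), np.linalg.norm(sol - cB))
```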

A limited-memory optimization method using the infinitely many times repeated BNS update and conjugate directions

To improve the performance of the limited-memory variable metric L-BFGS method for large-scale unconstrained optimization, repeating some BFGS updates was proposed in [1, 2]. However, the suitable extra updates need to be selected carefully, since the repeating process can be time-consuming. We show that for the limited-memory variable metric BNS method, matrix …
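
For background, the standard L-BFGS machinery that such variants build on is the two-loop recursion, which applies the limited-memory inverse-Hessian approximation stored as (s, y) pairs to a vector. The sketch below shows that baseline recursion only; the repeated BNS updates and the conjugate-direction construction of the paper are not reproduced, and the tiny usage data are purely illustrative.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns -H * grad, where H is the
    limited-memory inverse-Hessian approximation built from the stored pairs
    s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i (most recent pair last)."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        alphas.append(alpha)
        q -= alpha * y
    if s_list:                                # standard scaling of the initial matrix
        gamma = (s_list[-1] @ y_list[-1]) / (y_list[-1] @ y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    for (s, y), alpha in zip(zip(s_list, y_list), reversed(alphas)):
        rho = 1.0 / (y @ s)
        beta = rho * (y @ r)
        r += (alpha - beta) * s
    return -r

# Tiny usage example with two stored pairs (values are arbitrary but satisfy y @ s > 0).
s_list = [np.array([1.0, 0.0]), np.array([0.0, 0.5])]
y_list = [np.array([2.0, 0.0]), np.array([0.0, 1.0])]
print(lbfgs_direction(np.array([1.0, 1.0]), s_list, y_list))
```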