Global Convergence of Algorithms Under Constant Rank Conditions for Nonlinear Second-Order Cone Programming

In [R. Andreani, G. Haeser, L. M. Mito, H. Ramírez C., Weak notions of nondegeneracy in nonlinear semidefinite programming, arXiv:2012.14810, 2020], the classical notion of nondegeneracy (or transversality) and Robinson’s constraint qualification have been revisited in the context of nonlinear semidefinite programming by exploiting the structure of the problem, namely its eigendecomposition. This allows formulating the … Read more
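To make the object under discussion concrete (the notation below is generic, not necessarily the paper’s), a nonlinear semidefinite constraint and the eigendecomposition exploited by this line of work can be written as

$$ G(x) \succeq 0, \qquad G(x) = \sum_{i=1}^{m} \lambda_i(x)\, u_i(x)\, u_i(x)^{\top}, $$

where $\lambda_i(x)$ are the eigenvalues and $u_i(x)$ an associated orthonormal set of eigenvectors of the constraint matrix $G(x)$; nondegeneracy-type conditions are then stated in terms of this spectral structure.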

An Accelerated Inexact Dampened Augmented Lagrangian Method for Linearly-Constrained Nonconvex Composite Optimization Problems

This paper proposes and analyzes an accelerated inexact dampened augmented Lagrangian (AIDAL) method for solving linearly-constrained nonconvex composite optimization problems. Each iteration of the AIDAL method consists of: (i) inexactly solving a dampened proximal augmented Lagrangian (AL) subproblem by calling an accelerated composite gradient (ACG) subroutine; (ii) applying a dampened and under-relaxed Lagrange multiplier update; … Read more
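For orientation only (the abstract is truncated and the exact parameters are the paper’s), the generic ingredients named above can be sketched as follows: for a linearly-constrained composite problem $\min_x \{ f(x) + h(x) : Ax = b \}$, the augmented Lagrangian is

$$ \mathcal{L}_c(x; p) = f(x) + h(x) + \langle p, Ax - b \rangle + \tfrac{c}{2}\,\|Ax - b\|^2, $$

and a dampened, under-relaxed multiplier update takes the generic form $p_{k+1} = (1-\theta)\, p_k + \chi\, c\,(A x_{k+1} - b)$ with dampening and relaxation factors $\theta, \chi \in (0,1)$; the specific choices and the ACG inner solver are detailed in the paper.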

Constrained Optimization in the Presence of Noise

The problem of interest is the minimization of a nonlinear function subject to nonlinear equality constraints using a sequential quadratic programming (SQP) method. The minimization must be performed while observing only noisy evaluations of the objective and constraint functions. In order to obtain stability, the classical SQP method is modified by relaxing the standard Armijo … Read more
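A minimal sketch of the kind of noise-aware relaxation referred to above (the function names, the noise bound eps_f, and the 2*eps_f slack are illustrative assumptions, not the paper’s exact rule):

    def relaxed_armijo(phi, x, d, dphi, eps_f, c1=1e-4, tau=0.5, alpha=1.0, max_iter=30):
        """Backtracking line search with an Armijo condition relaxed by the noise level.

        phi   : callable returning a (noisy) merit-function value
        x, d  : current iterate and search direction (e.g., numpy arrays)
        dphi  : estimate of the directional derivative of the merit function along d
        eps_f : assumed bound on the noise in evaluations of phi
        """
        phi_x = phi(x)
        for _ in range(max_iter):
            # Standard Armijo sufficient decrease, loosened by 2*eps_f so that
            # noisy evaluations cannot reject arbitrarily good steps.
            if phi(x + alpha * d) <= phi_x + c1 * alpha * dphi + 2.0 * eps_f:
                return alpha
            alpha *= tau  # backtrack
        return alpha

In an SQP setting, d would be the step computed from the quadratic subproblem and phi a merit function combining the objective and the constraint violation.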

Dual descent ALM and ADMM

Classical primal-dual algorithms attempt to solve $\max_{\mu}\min_{x} \mathcal{L}(x,\mu)$ by alternately minimizing over the primal variable $x$ through primal descent and maximizing over the dual variable $\mu$ through dual ascent. However, when $\mathcal{L}(x,\mu)$ is highly nonconvex with complex constraints in $x$, the minimization over $x$ may not achieve global optimality, and hence the dual ascent step … Read more
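In symbols, the classical alternation described above reads (with generic notation)

$$ x_{k+1} \in \arg\min_{x}\ \mathcal{L}(x, \mu_k), \qquad \mu_{k+1} = \mu_k + \rho\, \nabla_{\mu}\mathcal{L}(x_{k+1}, \mu_k), $$

where $\rho > 0$ is a dual step size and $\nabla_{\mu}\mathcal{L}$ reduces to the constraint residual for an (augmented) Lagrangian; when the $x$-minimization is only approximate, the ascent interpretation of the $\mu$-update breaks down, which is the issue the abstract points to.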

A Local MM Subspace Method for Solving Constrained Variational Problems in Image Recovery

This article introduces a new Penalized Majorization-Minimization Subspace algorithm (P-MMS) for solving smooth, constrained optimization problems. In short, our approach consists of embedding a subspace algorithm in an inexact exterior penalty procedure. The subspace strategy, combined with a Majorization-Minimization step-size search, takes great advantage of the smoothness of the penalized cost function, while the penalty … Read more
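As a hedged illustration of the penalty embedding (the quadratic exterior penalty below is a standard choice; the precise penalty used in the paper may differ), the constrained problem $\min_x f(x)$ subject to $g_i(x) \le 0$ is replaced by a sequence of smooth subproblems

$$ \min_{x}\ F_{\lambda_k}(x) = f(x) + \lambda_k \sum_{i} \max\{0,\, g_i(x)\}^{2}, $$

each solved inexactly by the subspace MM scheme while the penalty parameter $\lambda_k$ is driven upward.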

Nonlinear matrix recovery using optimization on the Grassmann manifold

We investigate the problem of recovering a partially observed high-rank matrix whose columns obey a nonlinear structure, such as lying in a union of subspaces or an algebraic variety, or being grouped in clusters. The recovery problem is formulated as the rank minimization of a nonlinear feature map applied to the original matrix, which is then further approximated by … Read more
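A generic way to write such a formulation (the notation here is illustrative; the concrete feature map and subsequent approximation are the paper’s) is

$$ \min_{X}\ \operatorname{rank}\big(\Phi(X)\big) \quad \text{s.t.} \quad X_{ij} = M_{ij} \ \ \text{for } (i,j) \in \Omega, $$

where $M$ is the partially observed matrix, $\Omega$ the set of observed entries, and $\Phi$ a nonlinear feature map (e.g., monomial features) under which the structured data become low-rank; optimizing over the column space of $\Phi(X)$, viewed as a point on a Grassmann manifold, is what the title refers to.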

Developments in mathematical algorithms and computational tools for proton CT and particle therapy treatment planning

We summarize recent results and ongoing activities in mathematical algorithms and computer science methods related to proton computed tomography (pCT) and intensity-modulated particle therapy (IMPT) treatment planning. Proton therapy necessitates a high level of delivery accuracy to exploit the selective targeting imparted by the Bragg peak. For this purpose, pCT utilizes the proton beam itself … Read more

QCQP with Extra Constant Modulus Constraints: Theory and Applications on QoS Constrained Hybrid Beamforming for mmWave MU-MIMO

The constant modulus constraint is widely used in analog beamforming, hybrid beamforming, intelligent reflecting surface design, and radar waveform design. The quadratically constrained quadratic programming (QCQP) problem is also widely used in signal processing. However, the QCQP with extra constant modulus constraints has not been systematically studied in mathematical programming and signal processing. For example, the … Read more
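For concreteness (a generic instance, not necessarily the exact model studied in the paper), a QCQP with extra constant modulus constraints has the form

$$ \min_{x \in \mathbb{C}^{n}}\ x^{H} A\, x \quad \text{s.t.} \quad x^{H} B_i\, x \le c_i,\ i = 1,\dots,m, \qquad |x_j| = 1,\ j = 1,\dots,n, $$

where the unit-modulus constraints model, e.g., the phase-only entries of an analog beamformer, and the quadratic constraints can encode QoS requirements.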

Minimization over the l1-ball using an active-set non-monotone projected gradient

The l1-ball is a nicely structured feasible set that is widely used in many fields (e.g., machine learning, statistics and signal analysis) to enforce some sparsity in the model solutions. In this paper, we devise an active-set strategy for efficiently dealing with minimization problems over the l1-ball and embed it into a tailored algorithmic scheme … Read more
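For context, the basic building block behind projected-gradient schemes on this set is the Euclidean projection onto the l1-ball; the classical sort-based routine is sketched below (standard background, not the active-set strategy proposed in the paper):

    import numpy as np

    def project_l1_ball(v, radius=1.0):
        """Euclidean projection of v onto the l1-ball of the given radius
        (classical sort-based routine, cf. Duchi et al., 2008)."""
        if np.abs(v).sum() <= radius:
            return v.copy()                        # already feasible
        u = np.sort(np.abs(v))[::-1]               # magnitudes in decreasing order
        css = np.cumsum(u)
        # Largest index k with u[k] * (k+1) > css[k] - radius fixes the threshold.
        k = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
        theta = (css[k] - radius) / (k + 1.0)      # soft-thresholding level
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)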

A spectral PALM algorithm for matrix and tensor-train based Dictionary Learning

Dictionary Learning (DL) is one of the leading sparsity-promoting techniques in the context of image classification, where the “dictionary” matrix D of images and the sparse matrix X are determined so as to represent a redundant image dataset. The resulting constrained optimization problem is nonconvex and non-smooth, posing several computational challenges for its solution. … Read more
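In the generic PALM template that such methods build on (shown here only for orientation, with $H(D,X) = \tfrac12\|Y - DX\|_F^2$ for a data matrix $Y$, and with $f$, $g$ the nonsmooth constraint or regularization terms on $D$ and $X$), each iteration performs alternating proximal gradient steps

$$ X_{k+1} \in \operatorname{prox}_{g/c_k}\!\Big(X_k - \tfrac{1}{c_k}\, D_k^{\top}(D_k X_k - Y)\Big), \qquad D_{k+1} \in \operatorname{prox}_{f/d_k}\!\Big(D_k - \tfrac{1}{d_k}\,(D_k X_{k+1} - Y)\, X_{k+1}^{\top}\Big), $$

where $c_k, d_k$ are step-size parameters tied to the partial Lipschitz constants; the “spectral” variant in the title presumably concerns how these step sizes are chosen.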