Rounding on the standard simplex: regular grids for global optimization

Given a point on the standard simplex, we calculate a proximal point on the regular grid which is closest with respect to any norm in a large class, including all $\ell^p$-norms for $p\ge 1$. We show that the minimal $\ell^p$-distance to the regular grid on the standard simplex can exceed one, even for very fine … Read more
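
As a rough illustration of the rounding problem in question (not the authors' algorithm), the sketch below snaps a point $x$ on the standard simplex to the regular grid of points whose coordinates are integer multiples of $1/k$, using a simple largest-remainder rule; the function name and the choice of rule are assumptions made for this example.

```python
import numpy as np

def round_to_regular_grid(x, k):
    """Round a point x on the standard simplex to the grid of points whose
    coordinates are integer multiples of 1/k (largest-remainder rule).
    Illustrative heuristic only, not the paper's algorithm."""
    x = np.asarray(x, dtype=float)
    scaled = k * x
    floors = np.floor(scaled).astype(int)
    deficit = k - floors.sum()            # grid units still to distribute
    # give one extra unit to the coordinates with the largest fractional parts
    order = np.argsort(-(scaled - floors))
    floors[order[:deficit]] += 1
    return floors / k

x = np.array([0.275, 0.4, 0.325])         # a point on the standard simplex
print(round_to_regular_grid(x, 10))       # [0.3 0.4 0.3]
```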

The Euclidean distance degree of an algebraic variety

The nearest point map of a real algebraic variety with respect to Euclidean distance is an algebraic function. For instance, for varieties of low rank matrices, the Eckart-Young Theorem states that this map is given by the singular value decomposition. This article develops a theory of such nearest point maps from the perspective of computational … Read more
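
The Eckart-Young statement quoted above has a direct computational counterpart: the nearest matrix of rank at most $r$, in the Frobenius or spectral norm, is obtained by truncating the singular value decomposition. A minimal NumPy sketch of that fact (the function name and test matrix are illustrative):

```python
import numpy as np

def nearest_rank_r(A, r):
    """Nearest rank-r matrix to A in Frobenius/spectral norm (Eckart-Young):
    keep the r largest singular values and zero out the rest."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s[r:] = 0.0
    return (U * s) @ Vt

A = np.random.default_rng(0).standard_normal((5, 4))
A2 = nearest_rank_r(A, 2)
print(np.linalg.matrix_rank(A2))   # 2
```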

A new formulation of protein evolutionary models that account for structural constraints

Despite the importance of a thermodynamically stable structure with a conserved fold for protein function, almost all evolutionary models neglect site-site correlations that arise from physical interactions between neighboring amino acid sites. This is mainly due to the difficulty in formulating a computationally tractable model since rate matrices can no longer be used. Here we … Read more

String-Averaging Projected Subgradient Methods for Constrained Minimization

We consider constrained minimization problems and propose to replace the projection onto the entire feasible region, required in the Projected Subgradient Method (PSM), by projections onto the individual sets whose intersection forms the entire feasible region. Specifically, we propose to perform such projections onto the individual sets in an algorithmic regime of a feasibility-seeking iterative … Read more
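
To make the structure concrete, here is a minimal sketch of one iteration of this idea under simplifying assumptions: Euclidean projections onto the individual sets are available as callables, each "string" visits the sets sequentially, and the string end-points are averaged. The step-size rule, the toy constraints, and the function names are placeholders, not the paper's algorithmic regime.

```python
import numpy as np

def string_averaging_step(x, subgrad, step, strings, projections):
    """One illustrative iteration: a subgradient step on the objective
    followed by a string-averaging feasibility step.  `strings` is a list of
    index sequences; `projections[i]` projects onto the i-th constraint set."""
    y = x - step * subgrad(x)                 # subgradient step
    ends = []
    for string in strings:                    # project sequentially along each string
        z = y
        for i in string:
            z = projections[i](z)
        ends.append(z)
    return np.mean(ends, axis=0)              # average the string end-points

# toy example: two half-spaces x[0] >= 1 and x[1] >= 2, objective f(x) = ||x||_1
proj = [lambda v: np.array([max(v[0], 1.0), v[1]]),
        lambda v: np.array([v[0], max(v[1], 2.0)])]
subgrad = lambda v: np.sign(v)
x = np.array([5.0, 5.0])
for t in range(1, 200):
    x = string_averaging_step(x, subgrad, 1.0 / t, [[0, 1], [1, 0]], proj)
print(x)   # settles at the constrained minimizer [1, 2]
```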

A Flexible Inexact Restoration Method and Application to Optimization with Multiobjective Constraints under Weighted-Sum Scalarization

We introduce a new flexible Inexact-Restoration (IR) algorithm and an application to problems with multiobjective constraints (MOCP) under the weighted-sum scalarization approach. In IR methods each iteration has two phases. In the first phase one aims to improve the feasibility and, in the second phase, one minimizes a suitable objective function. This is done in … Read more
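
For readers unfamiliar with the two-phase structure, the toy sketch below mimics it on a single linear equality constraint: the restoration phase takes a Gauss-Newton step on the constraint residual, and the optimization phase takes a gradient step restricted to the tangent space of the constraint. The step size, the example problem, and the omission of any merit or penalty control (which is precisely what the flexible IR framework governs) are simplifications for illustration.

```python
import numpy as np

f_grad = lambda x: 2.0 * x                      # gradient of f(x) = ||x||^2
c = lambda x: np.array([x[0] + x[1] - 1.0])     # equality constraint c(x) = 0
J = lambda x: np.array([[1.0, 1.0]])            # constraint Jacobian

x = np.array([2.0, -1.0])
for _ in range(50):
    # Phase 1 (restoration): improve feasibility with a Gauss-Newton step on c
    Jx = J(x)
    y = x - Jx.T @ np.linalg.solve(Jx @ Jx.T, c(x))
    # Phase 2 (optimization): gradient step on f in the tangent space of c at y
    Jy = J(y)
    P = np.eye(2) - Jy.T @ np.linalg.solve(Jy @ Jy.T, Jy)
    x = y - 0.1 * P @ f_grad(y)
print(x)   # converges to the constrained minimizer [0.5, 0.5]
```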

Mini-batch Stochastic Approximation Methods for Nonconvex Stochastic Composite Optimization

This paper considers a class of constrained stochastic composite optimization problems whose objective function is given by the sum of a differentiable (possibly nonconvex) component and a certain non-differentiable (but convex) component. In order to solve these problems, we propose a randomized stochastic projected gradient (RSPG) algorithm, in which a proper mini-batch of samples is … Read more
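
A bare-bones illustration of the projected stochastic gradient structure described here, with a mini-batch gradient estimate followed by a projection onto the constraint set; the batch size, step size, synthetic data, and the simple box constraint are placeholders, and none of the RSPG-specific randomization or complexity analysis is reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 5)) + 2.0     # synthetic samples with mean ~2

def minibatch_grad(x, batch):
    # stochastic gradient of f(x) = E||x - xi||^2 / 2 estimated on a mini-batch
    return np.mean(x - batch, axis=0)

project = lambda x: np.clip(x, 0.0, 1.0)        # projection onto the box [0, 1]^5

x = np.zeros(5)
for t in range(1, 501):
    batch = data[rng.choice(len(data), size=32, replace=False)]
    x = project(x - 0.1 * minibatch_grad(x, batch))
print(x)   # close to the projected mean of the data, i.e. roughly all ones
```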

A Sequential Quadratic Optimization Algorithm with Rapid Infeasibility Detection

We present a sequential quadratic optimization (SQO) algorithm for nonlinear constrained optimization. The method attains all of the strong global and fast local convergence guarantees of classical SQO methods, but has the important additional feature that fast local convergence is guaranteed when the algorithm is employed to solve infeasible instances. A two-phase strategy, carefully constructed … Read more

Local Convergence of the Method of Multipliers for Variational and Optimization Problems under the Sole Noncriticality Assumption

We present a local convergence analysis of the method of multipliers for equality-constrained variational problems (in the special case of optimization, also called the augmented Lagrangian method) under the sole assumption that the dual starting point is close to a noncritical Lagrange multiplier (which is weaker than second-order sufficiency). Local superlinear convergence is established under the … Read more
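
To fix notation, a minimal method-of-multipliers (augmented Lagrangian) loop for an equality-constrained toy problem is sketched below; the penalty value, the exact inner minimization (possible here because the augmented Lagrangian is quadratic), and the iteration count are placeholder choices, and the sketch says nothing about the noncriticality analysis itself.

```python
import numpy as np

# toy problem: minimize x0^2 + 2*x1^2  subject to  x0 + x1 = 1
rho, lam = 10.0, 0.0
for _ in range(20):
    # inner step: minimize the augmented Lagrangian exactly (it is quadratic here)
    H = np.array([[2.0 + rho, rho], [rho, 4.0 + rho]])
    rhs = np.array([rho - lam, rho - lam])
    x = np.linalg.solve(H, rhs)
    # multiplier update of the method of multipliers
    lam = lam + rho * (x[0] + x[1] - 1.0)
print(x, lam)   # approaches x = [2/3, 1/3] and multiplier lambda = -4/3
```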

Projected subgradient minimization versus superiorization

The projected subgradient method for constrained minimization repeatedly interlaces subgradient steps for the objective function with projections onto the feasible region, which is the intersection of closed and convex constraint sets, to regain feasibility. The latter poses a computational difficulty and, therefore, the projected subgradient method is applicable only when the feasible region is “simple … Read more
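
As a reference point for the comparison, a textbook projected subgradient iteration on a "simple" feasible region, here the unit Euclidean ball whose projection has a closed form, looks as follows; the nonsmooth objective, the diminishing step-size rule, and the constraint set are illustrative choices only.

```python
import numpy as np

f = lambda x: np.abs(x - np.array([2.0, 0.0])).sum()      # nonsmooth objective ||x - a||_1
subgrad = lambda x: np.sign(x - np.array([2.0, 0.0]))      # a subgradient of f at x
project = lambda x: x / max(1.0, np.linalg.norm(x))        # projection onto the unit ball

x = np.zeros(2)
best, best_val = x, f(x)
for t in range(1, 2001):
    x = project(x - (1.0 / np.sqrt(t)) * subgrad(x))       # subgradient step, then projection
    if f(x) < best_val:
        best, best_val = x, f(x)
print(best, best_val)    # near [1, 0] with objective value about 1
```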

Theoretical aspects of adopting exact penalty elements within sequential methods for nonlinear programming

In the context of sequential methods for solving general nonlinear programming problems, it is usual to work with augmented subproblems instead of the original ones, tackled via the $\ell_1$-penalty function together with the shortcut of using a convenient penalty parameter. This paper addresses the theoretical reasoning behind handling the original subproblems by such an augmentation … Read more
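
For concreteness, the $\ell_1$-penalty function referred to here, for equality constraints $h(x)=0$ and inequality constraints $g(x)\le 0$, is the objective plus a multiple $\rho$ of the $\ell_1$ measure of constraint violation. A small numeric sketch follows; the example problem, the value of $\rho$, and the function names are placeholders.

```python
import numpy as np

def ell1_penalty(f, h, g, x, rho):
    """Classical l1 (exact) penalty: objective plus rho times the l1 measure
    of violation of h(x) = 0 and g(x) <= 0."""
    return f(x) + rho * (np.abs(h(x)).sum() + np.maximum(g(x), 0.0).sum())

# placeholder problem: min x0^2 + x1^2  s.t.  x0 + x1 = 1,  x0 >= 0.25
f = lambda x: x[0]**2 + x[1]**2
h = lambda x: np.array([x[0] + x[1] - 1.0])
g = lambda x: np.array([0.25 - x[0]])
print(ell1_penalty(f, h, g, np.array([0.1, 0.2]), rho=10.0))   # 0.05 + 10*(0.7 + 0.15) = 8.55
```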