Full Convergence of Regularized Methods for Unconstrained Optimization

In general, the sequence of points generated by an optimization algorithm may have multiple limit points. Under convexity assumptions, however, (sub)gradient methods are known to generate a convergent sequence of points. In this paper, we extend the latter property to a broader class of algorithms. Specifically, we study unconstrained optimization methods that use local quadratic models …
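As a concrete, hypothetical instance of the class of methods just described, the sketch below minimizes a local quadratic model with an added quadratic regularization term at each iterate, adapting the regularization weight through a simple acceptance test (the paper's exact model and update rules may differ):

import numpy as np

def regularized_newton(f, grad, hess, x, sigma=1.0, tol=1e-8, max_iter=100):
    # Minimize the local quadratic model g^T s + 0.5 s^T H s + 0.5*sigma*||s||^2;
    # its minimizer solves the linear system (H + sigma*I) s = -g.
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        s = np.linalg.solve(H + sigma * np.eye(x.size), -g)
        # Accept the step only if it decreases f; otherwise increase sigma.
        if f(x + s) < f(x):
            x, sigma = x + s, max(sigma / 2, 1e-8)
        else:
            sigma *= 2
    return x

# Toy usage: a smooth convex function with a unique minimizer at the origin.
f = lambda x: 0.5 * x @ x + 0.25 * np.sum(x**4)
grad = lambda x: x + x**3
hess = lambda x: np.eye(x.size) + np.diag(3 * x**2)
print(regularized_newton(f, grad, hess, np.array([2.0, -1.5])))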

Worst-Case Complexity of High-Order Algorithms for Pareto-Front Reconstruction

In this paper, we are concerned with a worst-case complexity analysis of a posteriori algorithms for unconstrained multiobjective optimization. Specifically, we propose an algorithmic framework that generates sets of points by means of $p$th-order models regularized with a power $p+1$ of the norm of the step. Through a tailored search procedure, several trial points are generated …
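For reference, the $p$th-order regularized model mentioned above is typically the truncated Taylor expansion with a $(p+1)$st-power penalty, a standard construction in adaptive high-order regularization (the paper's multiobjective variant is not reproduced here):

$$ m_k(s) = f(x_k) + \sum_{j=1}^{p} \frac{1}{j!}\, \nabla^j f(x_k)[s]^j + \frac{\sigma_k}{p+1}\, \|s\|^{p+1}, $$

where $\nabla^j f(x_k)[s]^j$ is the $j$th-order derivative tensor applied to $j$ copies of the step $s$, and $\sigma_k > 0$ is the regularization parameter.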

Relaxations of KKT Conditions do not Strengthen Finite RLT and SDP-RLT Bounds for Nonconvex Quadratic Programs

We consider linear and semidefinite programming relaxations of nonconvex quadratic programs given by the reformulation-linearization technique (RLT relaxation), and the Shor relaxation combined with the RLT relaxation (SDP-RLT relaxation). By incorporating the first-order optimality conditions, a quadratic program can be formulated as an optimization problem with complementarity constraints. We investigate the effect of incorporating optimality …
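As background on the RLT construction itself (standard material, not a contribution of this paper): for a quadratic program with bounds $l \le x \le u$, a matrix variable $X$ stands in for $xx^\top$, and pairwise products of bound constraints are linearized. For instance, $(x_i - l_i)(x_j - l_j) \ge 0$ becomes, after replacing $x_i x_j$ by $X_{ij}$,

$$ X_{ij} - l_i x_j - l_j x_i + l_i l_j \ge 0, $$

with analogous inequalities for the other sign combinations. The SDP-RLT relaxation additionally imposes $\begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix} \succeq 0$, the Shor positive semidefiniteness condition that convexly relaxes $X = xx^\top$.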

Asymptotically Fair and Truthful Allocation of Public Goods

We study the fair and truthful allocation of $m$ divisible public items among $n$ agents, each with distinct preferences for the items. To aggregate agents’ preferences fairly, we focus on finding a core solution. For divisible items, a core solution always exists and can be calculated by maximizing the Nash welfare objective. However, such a …
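To make the Nash-welfare route to a core solution concrete, here is a minimal sketch with hypothetical linear utilities and a unit budget over item shares (the paper's model details and truthfulness machinery are not reproduced):

import numpy as np
from scipy.optimize import minimize

# Hypothetical instance: n agents, m divisible public items, linear utilities
# u_i(x) = v[i] @ x over item shares x that sum to a unit budget.
rng = np.random.default_rng(0)
n, m = 4, 3
v = rng.uniform(0.1, 1.0, size=(n, m))

def neg_nash_welfare(x):
    # Nash welfare is the sum of log utilities; we minimize its negative.
    return -np.sum(np.log(v @ x))

res = minimize(
    neg_nash_welfare,
    x0=np.full(m, 1.0 / m),
    method="SLSQP",
    bounds=[(1e-9, 1.0)] * m,  # nonnegative shares, kept interior for the log
    constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}],  # unit budget
)
print("max-Nash-welfare (core) allocation:", res.x)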

Gradient Methods with Online Scaling Part I. Theoretical Foundations

This paper establishes the theoretical foundations of online scaled gradient methods (OSGM), a framework that uses online learning to adapt stepsizes and provably accelerate first-order methods. OSGM quantifies the effectiveness of a stepsize by a feedback function motivated by a convergence measure and uses this feedback to adjust the stepsize through an online learning …
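A minimal caricature of the feedback idea follows: the stepsize is treated as the decision of a one-dimensional online learner whose loss is the one-step progress, and is updated with the hypergradient of that progress (this illustrates the flavor of OSGM, not the framework's actual feedback functions or guarantees):

import numpy as np

def osgm_sketch(grad, x, eta=0.1, lr=0.01, iters=200):
    # After each gradient step x_new = x - eta * g, adjust eta using the
    # derivative of f(x - eta * g) with respect to eta, which equals
    # -grad(x_new) @ g (a hypergradient serving as online feedback).
    for _ in range(iters):
        g = grad(x)
        x_new = x - eta * g
        hypergrad = -grad(x_new) @ g
        eta = max(eta - lr * hypergrad, 1e-12)  # projected online gradient step
        x = x_new
    return x, eta

# Toy usage on an ill-conditioned convex quadratic.
A = np.diag([1.0, 10.0])
print(osgm_sketch(lambda x: A @ x, np.array([1.0, 1.0])))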

A variable dimension sketching strategy for nonlinear least-squares

We present a stochastic inexact Gauss-Newton method for the solution of nonlinear least-squares problems. To reduce the computational cost relative to the classical method, at each iteration the proposed algorithm approximately minimizes the local model over a random subspace. The dimension of the subspace varies across iterations, and two strategies are considered for its …
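A minimal sketch of one such iteration, assuming a Gaussian sketching matrix and an exact solve of the reduced model (the paper's inexactness conditions and its two dimension-update strategies are not reproduced):

import numpy as np

def sketched_gauss_newton_step(residual, jac, x, sketch_dim, rng):
    # Restrict the linearized least-squares model to a random subspace
    # spanned by the columns of S, solve the reduced problem in z, and
    # lift the step back to the full space as S @ z.
    r, J = residual(x), jac(x)
    S = rng.standard_normal((x.size, sketch_dim)) / np.sqrt(sketch_dim)
    z, *_ = np.linalg.lstsq(J @ S, -r, rcond=None)
    return x + S @ z

# Toy usage: linear residuals r(x) = A x - b, solved in 4-dimensional subspaces.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
x = np.zeros(10)
for _ in range(20):
    x = sketched_gauss_newton_step(lambda x: A @ x - b, lambda x: A, x, 4, rng)
print("residual norm:", np.linalg.norm(A @ x - b))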

Solving a linear program via a single unconstrained minimization

This paper proposes a novel approach for solving linear programs. We reformulate a primal-dual linear program as an unconstrained minimization of a convex and twice continuously differentiable merit function. When the optimal set of the primal-dual pair is nonempty, it coincides with the set of minimizers of the proposed merit function. Minimizing this …
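The truncated abstract does not reproduce the merit function, but as a purely illustrative example of how such an object can be built (our construction, not necessarily the paper's): for the pair $\min\{c^\top x : Ax = b,\ x \ge 0\}$ and $\max\{b^\top y : A^\top y \le c\}$, one may penalize infeasibility and the duality gap with twice continuously differentiable hinge powers,

$$ \phi(x, y) = \|Ax - b\|^2 + (c^\top x - b^\top y)^2 + \sum_i \max(0, -x_i)^3 + \sum_i \max\big(0, (A^\top y - c)_i\big)^3. $$

Each term is convex and $C^2$ (the second derivative of $\max(0,t)^3$ is $6\max(0,t)$, which is continuous), $\phi \ge 0$ everywhere, and by LP duality $\phi(x, y) = 0$ exactly at primal-dual optimal pairs, so the minimizers of $\phi$ coincide with the optimal set whenever the latter is nonempty.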

Retrospective Approximation Sequential Quadratic Programming for Stochastic Optimization with General Deterministic Nonlinear Constraints

In this paper, we propose a framework based on the Retrospective Approximation (RA) paradigm to solve optimization problems with a stochastic objective function and general nonlinear deterministic constraints. This framework sequentially constructs increasingly accurate approximations of the true problem, each solved to a specified accuracy via a deterministic solver, thereby decoupling the uncertainty from …
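A generic sketch of the RA outer loop under these assumptions, with growing sample sizes and tightening solver tolerances (the paper couples this loop with an SQP solver handling nonlinear deterministic constraints, which is omitted here):

import numpy as np
from scipy.optimize import minimize

def retrospective_approximation(sample_obj, x0, sample_sizes=(10, 100, 1000),
                                tols=(1e-2, 1e-4, 1e-6), seed=0):
    # Solve a sequence of sample-average approximations, each with a larger
    # sample and a tighter tolerance, warm-starting at the previous solution.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for N, tol in zip(sample_sizes, tols):
        xi = rng.standard_normal(N)  # sample is fixed within an outer iteration
        saa = lambda x, xi=xi: np.mean(sample_obj(x, xi))
        x = minimize(saa, x, method="BFGS", options={"gtol": tol}).x
    return x

# Toy usage: minimize E[(x - xi)^2] with xi ~ N(0, 1); the solution is x = 0.
print(retrospective_approximation(lambda x, xi: (x - xi) ** 2, x0=[3.0]))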

Subgradient Regularization: A Descent-Oriented Subgradient Method for Nonsmooth Optimization

In nonsmooth optimization, a negative subgradient is not necessarily a descent direction, making the design of convergent descent methods based on zeroth-order and first-order information a challenging task. The well-studied bundle methods and gradient sampling algorithms construct descent directions by aggregating subgradients at nearby points in seemingly different ways, and are often complicated or lack …
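For context, the aggregation step common to those methods can be sketched as follows: collect subgradients at nearby points, take the minimum-norm element $g^*$ of their convex hull, and search along $-g^*$ (a generic gradient-sampling/bundle-style illustration, not the paper's subgradient-regularization scheme):

import numpy as np
from scipy.optimize import minimize

def min_norm_direction(subgrads):
    # Find the minimum-norm convex combination lam @ G of the sampled
    # subgradients (rows of G) by solving a small simplex-constrained QP.
    G = np.asarray(subgrads)
    k = G.shape[0]
    res = minimize(lambda lam: 0.5 * np.sum((lam @ G) ** 2),
                   np.full(k, 1.0 / k), method="SLSQP",
                   bounds=[(0.0, 1.0)] * k,
                   constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}])
    return -(res.x @ G)

# Toy usage on f(x) = |x1| + x2: subgradients sampled on both sides of the
# kink x1 = 0 aggregate to (0, 1), giving the descent direction (0, -1).
print(min_norm_direction([[1.0, 1.0], [-1.0, 1.0]]))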

An inexact alternating projection method with application to matrix completion

We develop and analyze an inexact regularized alternating projection method for nonconvex feasibility problems. The method employs inexact projections onto one of the two sets, according to a set of well-defined conditions. We prove global convergence of the algorithm, provided that a certain merit function satisfies the Kurdyka-Łojasiewicz property on its domain. The …
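A plain exact-projection sketch of the underlying scheme for matrix completion, alternating between the rank-$r$ set (truncated SVD, which an inexact or randomized SVD would approximate) and the affine set of matrices matching the observed entries (the paper's inexactness conditions and regularization are not reproduced):

import numpy as np

def alternating_projection_completion(M_obs, mask, rank, iters=200):
    X = np.where(mask, M_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # project onto the rank-r set
        X = np.where(mask, M_obs, X)              # restore the observed entries
    return X

# Toy usage: recover a random rank-2 matrix from 60% of its entries.
rng = np.random.default_rng(2)
M = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
mask = rng.random(M.shape) < 0.6
X = alternating_projection_completion(M, mask, rank=2)
print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))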