Stochastic Ratios Tracking Algorithm for Large Scale Machine Learning Problems

Many machine learning applications and tasks rely on the stochastic gradient descent (SGD) algorithm and its variants. Effective step length selection is crucial for the success of these algorithms, which has motivated the development of algorithms such as ADAM or AdaGrad. In this paper, we propose a novel algorithm for adaptive step length selection in …
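
As generic context only (this is not the paper's adaptive step-length rule), the sketch below runs plain SGD on a synthetic least-squares problem with a simple decaying step length; all names and constants are illustrative.

import numpy as np

# Plain SGD with a 1/sqrt(t) step-length schedule on synthetic least squares;
# a stand-in for adaptive step-length selection, not the proposed algorithm.
rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 20))
x_true = rng.normal(size=20)
b = A @ x_true + 0.1 * rng.normal(size=1000)

x = np.zeros(20)
alpha0 = 0.1                              # illustrative initial step length
for t in range(1, 2001):
    i = rng.integers(0, A.shape[0])       # sample one data point
    g = (A[i] @ x - b[i]) * A[i]          # stochastic gradient estimate
    x -= (alpha0 / np.sqrt(t)) * g        # decaying step length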

On the Number of Pivots of Dantzig’s Simplex Methods for Linear and Convex Quadratic Programs

Refining and extending works by Ye and Kitahara-Mizuno, this paper presents new results on the number of pivots of simplex-type methods for solving linear programs of the Leontief kind, certain linear complementarity problems of the P kind, and nonnegatively constrained convex quadratic programs. Our results contribute to the further understanding of the complexity and efficiency …
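
For background (standard material rather than a result of the paper), Dantzig's simplex method applies to the linear program in standard form and pivots on a most negative reduced cost:

\[
\min_{x}\ c^{\top}x \quad \text{s.t.}\quad Ax = b,\ x \ge 0,
\]
\[
\text{entering index}\quad j \in \arg\min_{j \notin B} \bar{c}_j, \qquad \bar{c}_j = c_j - c_B^{\top} A_B^{-1} A_j,
\]
with a pivot performed only while some \(\bar{c}_j < 0\). The paper's bounds count such pivots for the problem classes listed above.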

First-Order Methods for Nonsmooth Nonconvex Functional Constrained Optimization with or without Slater Points

Constrained optimization problems where both the objective and constraints may be nonsmooth and nonconvex arise across many learning and data science settings. In this paper, we show a simple first-order method finds a feasible, ϵ-stationary point at a convergence rate of O(ϵ^{-4}) without relying on compactness or a Constraint Qualification (CQ). When CQ holds, this convergence is measured by …
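
Read generically (the paper's precise stationarity measure may differ), the problem class and guarantee are:

\[
\min_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.}\quad g(x) \le 0,
\]
with \(f\) and \(g\) possibly nonsmooth and nonconvex, and the method returning a feasible \(\epsilon\)-stationary point, the \(O(\epsilon^{-4})\) rate being conventionally read as the number of first-order iterations required.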

A Stochastic-Gradient-based Interior-Point Algorithm for Solving Smooth Bound-Constrained Optimization Problems

A stochastic-gradient-based interior-point algorithm for minimizing a continuously differentiable objective function (that may be nonconvex) subject to bound constraints is presented, analyzed, and demonstrated through experimental results. The algorithm differs from other interior-point methods for solving smooth (nonconvex) optimization problems in that the search directions are computed using stochastic gradient estimates. It is also unique …
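
A minimal sketch under explicit assumptions: the bounds are handled with a standard logarithmic barrier, the barrier gradient uses a stochastic gradient estimate of the objective, and a fraction-to-boundary rule keeps the iterates strictly interior. The objective, constants, and barrier schedule below are hypothetical and this is not the paper's algorithm.

import numpy as np

# Log-barrier treatment of the bounds l <= x <= u with stochastic gradient steps
# and a fraction-to-boundary safeguard (illustrative only).
rng = np.random.default_rng(1)
l, u = np.zeros(5), np.ones(5)
x = 0.5 * np.ones(5)                      # strictly interior starting point
mu, alpha = 0.1, 0.01                     # barrier parameter and step size

def stochastic_grad(x):
    # noisy gradient estimate of a hypothetical smooth objective
    return 2.0 * (x - 0.3) + 0.05 * rng.normal(size=x.size)

for k in range(500):
    g = stochastic_grad(x) - mu / (x - l) + mu / (u - x)   # gradient of the barrier model
    step = -alpha * g
    t, pos, neg = 1.0, step > 0, step < 0                  # fraction-to-boundary safeguard
    if pos.any():
        t = min(t, 0.99 * np.min((u[pos] - x[pos]) / step[pos]))
    if neg.any():
        t = min(t, 0.99 * np.min((l[neg] - x[neg]) / step[neg]))
    x = x + t * step
    if (k + 1) % 100 == 0:
        mu *= 0.5                          # gradually drive the barrier parameter to zero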

A New Inexact Proximal Linear Algorithm with Adaptive Stopping Criteria for Robust Phase Retrieval

This paper considers the robust phase retrieval problem, which can be cast as a nonsmooth and nonconvex optimization problem. We propose a new inexact proximal linear algorithm in which the subproblems are solved inexactly. Our contributions are two adaptive stopping criteria for the subproblem. The convergence behavior of the proposed methods is analyzed. Through experiments on …
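
As commonly formulated in the literature (the paper's setup may differ in details), robust phase retrieval and the proximal linear subproblem read:

\[
\min_{x \in \mathbb{R}^n}\ \frac{1}{m} \sum_{i=1}^{m} \bigl| (a_i^{\top} x)^2 - b_i \bigr|,
\]
\[
x_{k+1} \approx \arg\min_{x}\ \frac{1}{m} \sum_{i=1}^{m} \bigl| (a_i^{\top} x_k)^2 + 2 (a_i^{\top} x_k)\, a_i^{\top}(x - x_k) - b_i \bigr| + \frac{1}{2t} \| x - x_k \|^2,
\]
where the approximate minimization reflects the inexact subproblem solves governed by the adaptive stopping criteria.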

A minimal face constant rank constraint qualification for reducible conic programming

In a previous paper [R. Andreani, G. Haeser, L. M. Mito, H. Ramírez, T. P. Silveira. First- and second-order optimality conditions for second-order cone and semidefinite programming under a constant rank condition. Mathematical Programming, 2023. DOI: 10.1007/s10107-023-01942-8] we introduced a constant rank constraint qualification for nonlinear semidefinite and second-order cone programming by considering all faces …
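
For orientation (standard background, not a claim from the paper), nonlinear semidefinite and second-order cone programs are instances of the conic program

\[
\min_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.}\quad g(x) \in \mathcal{K},
\]
where \(\mathcal{K}\) is a closed convex cone such as \(\mathbb{S}^m_{+}\) or a product of second-order cones, and the constant rank condition in question is formulated in terms of the faces of \(\mathcal{K}\).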

A descent method for nonsmooth multiobjective optimization problems on Riemannian manifolds

In this paper, a descent method for nonsmooth multiobjective optimization problems on complete Riemannian manifolds is proposed. The objective functions are only assumed to be locally Lipschitz continuous, instead of the convexity assumed in existing methods. A necessary condition for Pareto optimality in Euclidean space is generalized to the Riemannian setting. At every iteration, an acceptable …
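
For reference (a standard definition rather than something specific to the paper), the multiobjective problem and Pareto optimality on a Riemannian manifold \(\mathcal{M}\) read:

\[
\min_{x \in \mathcal{M}}\ F(x) = \bigl( f_1(x), \dots, f_m(x) \bigr),
\]
with each \(f_i\) locally Lipschitz; a point \(x^{\ast} \in \mathcal{M}\) is Pareto optimal if there is no \(x \in \mathcal{M}\) with \(f_i(x) \le f_i(x^{\ast})\) for all \(i\) and \(f_j(x) < f_j(x^{\ast})\) for some \(j\).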

Per-RMAP: Feasibility-Seeking and Superiorization Methods for Floorplanning with I/O Assignment

The feasibility-seeking approach provides a systematic scheme to manage and solve complex constraints for continuous problems, and we explore it for floorplanning problems with increasingly heterogeneous constraints. The classic legality constraints can be formulated as the union of convex sets. However, the convergence of conventional projection-based algorithms is not guaranteed when the constraint sets …
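
To make the projection-based idea concrete, here is a minimal simultaneous-projection feasibility-seeking sketch; it is not Per-RMAP, and the two box constraints are hypothetical stand-ins for legality constraints.

import numpy as np

# Simultaneous-projection (Cimmino-type) feasibility seeking: average the
# projections onto all constraint sets at each iteration (illustrative only).
boxes = [(np.array([0.0, 0.0]), np.array([2.0, 2.0])),
         (np.array([1.0, 1.0]), np.array([3.0, 3.0]))]

def project_box(x, lo, hi):
    return np.minimum(np.maximum(x, lo), hi)

x = np.array([5.0, -3.0])
for _ in range(100):
    x = np.mean([project_box(x, lo, hi) for lo, hi in boxes], axis=0)   # average of projections
# x now lies (approximately) in the intersection [1, 2] x [1, 2]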

Heuristic methods for noisy derivative-free bound-constrained mixed-integer optimization

This paper discusses MATRS, a new matrix adaptation trust region strategy for solving noisy derivative-free mixed-integer optimization problems with simple bounds. MATRS repeatedly cycles through five phases: mutation, selection, recombination, trust-region, and mixed-integer, in this order. But if, in the mutation phase, a new best point (the point with the lowest inexact function value among all evaluated …
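
Purely as a structural sketch of the phase cycle described above (the phase bodies are hypothetical stubs, since the abstract does not specify their updates):

def mutation(state):      return state      # stub for MATRS's mutation phase
def selection(state):     return state      # stub for the selection phase
def recombination(state): return state      # stub for the recombination phase
def trust_region(state):  return state      # stub for the trust-region phase
def mixed_integer(state): return state      # stub for the mixed-integer phase

state = {"best_x": None, "best_f": float("inf")}
for cycle in range(10):
    # the five phases are visited repeatedly, in this fixed order
    for phase in (mutation, selection, recombination, trust_region, mixed_integer):
        state = phase(state)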

The alternating simultaneous Halpern-Lions-Wittmann-Bauschke algorithm for finding the best approximation pair for two disjoint intersections of convex sets

Given two nonempty and disjoint intersections of closed and convex subsets, we look for a best approximation pair relative to them, i.e., a pair of points, one in each intersection, attaining the minimum distance between the disjoint intersections. We propose an iterative process based on projections onto the subsets that generate the intersections. The process …
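
For intuition, a minimal sketch of classical alternating projections between two disjoint convex sets; the paper's alternating simultaneous HLWB process instead works only with projections onto the subsets that generate each intersection, and the two boxes below are hypothetical.

import numpy as np

# Alternating projections between two disjoint boxes A and B; the limiting pair
# (a, b) attains the minimum distance between the sets (illustrative only).
A_lo, A_hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
B_lo, B_hi = np.array([3.0, 2.0]), np.array([4.0, 3.0])

def proj(x, lo, hi):
    return np.clip(x, lo, hi)              # projection onto the box [lo, hi]

a = np.array([0.5, 0.5])
for _ in range(50):
    b = proj(a, B_lo, B_hi)                # project the current point onto B
    a = proj(b, A_lo, A_hi)                # project back onto A
# (a, b) approximates a best approximation pair; np.linalg.norm(a - b) is the gap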