SLiSeS: Subsampled Line Search Spectral Gradient Method for Finite Sums
We explore Maximum a Posteriori inference of Bayesian Hierarchical Models (BHMs) with intractable normalizers, which are increasingly prevalent in contemporary applications and pose computational challenges when combined with nonconvexity and nondifferentiability. To address these challenges, we propose the Adaptive Importance Sampling-based Surrogation method, which efficiently handles nonconvexity and nondifferentiability while improving the sampling approximation of the … Read more
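As an illustration of the sampling ingredient, the following is a minimal sketch of estimating an intractable normalizer by adaptive importance sampling on a one-dimensional toy target; the Gaussian proposal, the moment-matching adaptation, and the target itself are assumptions for illustration, not the surrogation method from the paper.

```python
import numpy as np

# Toy target: unnormalized density exp(-theta * x^2), whose true normalizer
# is Z(theta) = sqrt(pi / theta).  All choices below are illustrative.
rng = np.random.default_rng(0)
theta = 1.5
log_unnorm = lambda x: -theta * x**2

mu, sigma = 0.0, 3.0                              # initial proposal N(mu, sigma^2)
for _ in range(5):
    x = rng.normal(mu, sigma, size=5000)
    log_q = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    log_w = log_unnorm(x) - log_q                 # log importance weights
    w = np.exp(log_w - log_w.max())               # stabilized weights
    Z_hat = np.exp(log_w.max()) * w.mean()        # estimate of the normalizer
    wn = w / w.sum()
    mu = wn @ x                                   # adapt the proposal by
    sigma = np.sqrt(wn @ (x - mu) ** 2) + 1e-6    # matching weighted moments

print(Z_hat, np.sqrt(np.pi / theta))              # estimate vs. closed form
```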
Many machine learning applications and tasks rely on the stochastic gradient descent (SGD) algorithm and its variants. Effective step length selection is crucial to the success of these algorithms, which has motivated the development of methods such as Adam and AdaGrad. In this paper, we propose a novel algorithm for adaptive step length selection in … Read more
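Since the post's title concerns a subsampled spectral gradient method, here is a minimal sketch of a subsampled Barzilai-Borwein (spectral) step length on a least-squares finite sum; the batch size, the inner window over which the sample is held fixed, the safeguards, and the omission of the line search are all illustrative assumptions rather than the SLiSeS algorithm itself.

```python
import numpy as np

# Finite sum f(x) = (1/n) * sum_i (a_i . x - b_i)^2 / 2 (illustrative choice).
rng = np.random.default_rng(0)
n, d = 1000, 20
A, b = rng.normal(size=(n, d)), rng.normal(size=n)

def batch_grad(x, idx):
    """Gradient of the subsampled objective over the batch idx."""
    r = A[idx] @ x - b[idx]
    return A[idx].T @ r / len(idx)

x = np.zeros(d)
alpha = 1e-3                                      # initial step length
for outer in range(20):
    idx = rng.choice(n, size=64, replace=False)   # fresh subsample
    g = batch_grad(x, idx)
    for inner in range(10):                       # keep the batch fixed so the
        x_new = x - alpha * g                     # spectral quotient compares
        g_new = batch_grad(x_new, idx)            # consistent gradients
        s, y = x_new - x, g_new - g
        if s @ y > 0:                             # BB1 spectral step: s's / s'y
            alpha = np.clip(s @ s / (s @ y), 1e-6, 1e2)
        x, g = x_new, g_new
```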
Refining and extending works by Ye and Kitahara-Mizuno, this paper presents new results on the number of pivots of simplex-type methods for solving linear programs of the Leontief kind, certain linear complementarity problems of the P kind, and nonnegative constrained convex quadratic programs. Our results contribute to the further understanding of the complexity and efficiency … Read more
Constrained optimization problems where both the objective and constraints may be nonsmooth and nonconvex arise across many learning and data science settings. In this paper, we show that a simple first-order method finds a feasible, ϵ-stationary point at a convergence rate of O(ϵ⁻⁴) without relying on compactness or a constraint qualification (CQ). When a CQ holds, this convergence is measured by … Read more
A stochastic-gradient-based interior-point algorithm for minimizing a continuously differentiable objective function (that may be nonconvex) subject to bound constraints is presented, analyzed, and demonstrated through experimental results. The algorithm differs from other interior-point methods for solving smooth (nonconvex) optimization problems in that the search directions are computed using stochastic gradient estimates. It is also unique … Read more
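As a rough illustration of the idea, the sketch below takes stochastic-gradient steps on a log-barrier reformulation of a bound-constrained toy problem, with a fraction-to-the-boundary rule keeping the iterates strictly interior; the quadratic objective, noise model, and barrier schedule are assumptions for illustration, not the algorithm analyzed in the paper.

```python
import numpy as np

# Toy problem: min f(x) = ||x - x_star||^2 / 2  s.t.  0 <= x <= 1.
rng = np.random.default_rng(0)
d = 10
l, u = np.zeros(d), np.ones(d)
x_star = rng.uniform(0.2, 0.8, size=d)

def stoch_grad(x):
    """Noisy gradient estimate of f (Gaussian noise is an assumption)."""
    return (x - x_star) + 0.1 * rng.normal(size=d)

x = 0.5 * np.ones(d)                        # strictly feasible start
mu, eta, tau = 1e-1, 1e-2, 0.995
for k in range(2000):
    # Barrier-augmented search direction from a stochastic gradient estimate.
    g = stoch_grad(x) - mu / (x - l) + mu / (u - x)
    step = -eta * g
    # Fraction-to-the-boundary: shrink the step so x stays strictly interior.
    hit = np.concatenate([-(x - l)[step < 0] / step[step < 0],
                          (u - x)[step > 0] / step[step > 0]])
    t = min(1.0, tau * hit.min()) if hit.size else 1.0
    x = x + t * step
    mu = max(mu * 0.999, 1e-6)              # slowly drive the barrier down
```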
This paper considers the robust phase retrieval problem, which can be cast as a nonsmooth and nonconvex optimization problem. We propose a new inexact proximal linear algorithm in which the subproblem is solved only approximately. Our contributions are two adaptive stopping criteria for the subproblem. The convergence behavior of the proposed methods is analyzed. Through experiments on … Read more
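For concreteness, here is a minimal sketch of the prox-linear template for the robust phase retrieval objective (1/m)·Σᵢ |⟨aᵢ, x⟩² − bᵢ|, with each convex subproblem solved inexactly by a fixed budget of subgradient steps; the data, step sizes, and that fixed budget (standing in for the paper's adaptive stopping criteria) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 200, 10
A = rng.normal(size=(m, d))
x_true = rng.normal(size=d)
b = (A @ x_true) ** 2                       # noiseless measurements (toy setup)

def prox_linear_step(x, t=1.0, inner_iters=100):
    """Linearize <a_i, x>^2 around x and solve the convex model inexactly."""
    Ax = A @ x
    z = x.copy()
    for j in range(1, inner_iters + 1):
        # Residual of the linearized measurements at z.
        r = Ax**2 + 2 * Ax * (A @ (z - x)) - b
        # Subgradient of the model plus the proximal term.
        sub = A.T @ (2 * Ax * np.sign(r)) / m + (z - x) / t
        z -= (0.1 / np.sqrt(j)) * sub       # diminishing subgradient steps
    return z

x = rng.normal(size=d)
for _ in range(30):
    x = prox_linear_step(x)
print(np.mean(np.abs((A @ x) ** 2 - b)))    # robust residual (sign ambiguity remains)
```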
In a previous paper [R. Andreani, G. Haeser, L. M. Mito, H. Ramírez, T. P. Silveira. First- and second-order optimality conditions for second-order cone and semidefinite programming under a constant rank condition. Mathematical Programming, 2023. DOI: 10.1007/s10107-023-01942-8] we introduced a constant rank constraint qualification for nonlinear semidefinite and second-order cone programming by considering all faces … Read more
In this paper, a descent method for nonsmooth multiobjective optimization problems on complete Riemannian manifolds is proposed. The objective functions are only assumed to be locally Lipschitz continuous, rather than convex as assumed in existing methods. A necessary condition for Pareto optimality in Euclidean space is generalized to the Riemannian setting. At every iteration, an acceptable … Read more
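To make the ingredients concrete, the following sketches descent iterations for two locally Lipschitz objectives on the unit sphere: Riemannian subgradients are obtained by tangent-space projection, a common descent direction comes from the minimum-norm element of their convex hull, and a retraction maps the step back onto the manifold; the objectives, the fixed step size (in place of the paper's acceptance rule), and the sphere geometry are illustrative assumptions.

```python
import numpy as np

def tangent_project(x, g):
    """Project a Euclidean (sub)gradient onto the tangent space at x on the sphere."""
    return g - (g @ x) * x

def min_norm_convex(g1, g2):
    """Minimum-norm element of conv{g1, g2} (closed form for two vectors)."""
    diff = g1 - g2
    denom = diff @ diff
    t = 0.5 if denom == 0 else np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0)
    return t * g1 + (1 - t) * g2

def retract(x, d):
    """Metric-projection retraction back onto the unit sphere."""
    y = x + d
    return y / np.linalg.norm(y)

# Hypothetical objectives: f1(x) = ||x||_1 (nonsmooth), f2(x) = ||x - c||^2.
n = 5
c = np.ones(n) / np.sqrt(n)
f1_sub = lambda x: np.sign(x)               # a subgradient of the l1 norm
f2_grad = lambda x: 2 * (x - c)

x = np.random.default_rng(0).normal(size=n)
x /= np.linalg.norm(x)
for _ in range(50):
    v = min_norm_convex(tangent_project(x, f1_sub(x)),
                        tangent_project(x, f2_grad(x)))
    if np.linalg.norm(v) < 1e-8:            # approximate Pareto criticality
        break
    x = retract(x, -0.1 * v)                # fixed step in lieu of a line search
```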
The feasibility-seeking approach provides a systematic scheme for managing and solving complex constraints in continuous problems, and we explore it for floorplanning problems with increasingly heterogeneous constraints. The classic legality constraints can be formulated as the union of convex sets. However, the convergence of conventional projection-based algorithms is not guaranteed when the constraint sets … Read more
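The mechanics can be illustrated with a toy alternating-projection loop in which one constraint is a union of two convex sets, projected onto by taking the nearest of the member projections; the box and half-plane constraints below are assumptions for illustration, not the floorplanning formulation, and, as the abstract notes, convergence of such projections onto nonconvex unions is not guaranteed in general.

```python
import numpy as np

def proj_box(p, lo, hi):
    """Projection onto an axis-aligned box (e.g., the chip outline)."""
    return np.clip(p, lo, hi)

def proj_union_halfplanes(p, gap=1.0):
    """Projection onto {x <= -gap} ∪ {x >= gap}: a toy 'non-overlap' constraint.
    Project onto each convex member, then keep the nearest image."""
    cands = [np.array([min(p[0], -gap), p[1]]),   # projection onto the left part
             np.array([max(p[0], gap), p[1]])]    # projection onto the right part
    return min(cands, key=lambda q: np.linalg.norm(q - p))

p = np.array([0.3, 5.0])
for _ in range(100):
    p = proj_union_halfplanes(proj_box(p, lo=-2.0, hi=2.0))
print(p)   # a point satisfying both the box and the union constraint
```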