Conjugate Gradient with Subspace Optimization

In this paper we present a variant of the conjugate gradient (CG) algorithm in which we solve a subspace minimization subproblem at each iteration. We call this algorithm CGSO, for “conjugate gradient with subspace optimization”. It is related to earlier work by Nemirovsky and Yudin. We apply the algorithm to solve unconstrained strictly convex problems. …
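As a concrete illustration of the idea, here is a minimal sketch (ours, not the paper's implementation) for the strictly convex quadratic $f(x)=\frac{1}{2}x^TAx-b^Tx$: at each iteration $f$ is minimized exactly over the affine subspace spanned by the current gradient and the previous step. The function name and the particular two-dimensional subspace are our assumptions.

```python
import numpy as np

def cgso_quadratic(A, b, x0, tol=1e-8, max_iter=200):
    """Sketch: CG-style iteration that minimizes the strictly convex
    quadratic f(x) = 0.5*x'Ax - b'x (A symmetric positive definite)
    exactly over x + span{gradient, previous step} at every iteration.
    For quadratics this reproduces classical CG; CGSO itself targets
    general strictly convex objectives."""
    x = np.asarray(x0, dtype=float).copy()
    prev_step = None
    for _ in range(max_iter):
        g = A @ x - b                                  # gradient of f at x
        if np.linalg.norm(g) < tol:
            break
        B = g[:, None] if prev_step is None else np.column_stack([g, prev_step])
        # Exact subspace minimizer: solve the reduced system (B'AB) t = -B'g.
        t = np.linalg.solve(B.T @ A @ B, -B.T @ g)
        step = B @ t
        x = x + step
        prev_step = step
    return x
```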

Smoothing SQP Algorithm for Non-Lipschitz Optimization with Complexity Analysis

In this paper, we propose a smoothing sequential quadratic programming (SSQP) algorithm for solving a class of nonsmooth, nonconvex, and possibly non-Lipschitz minimization problems, which has wide applications in statistics and sparse reconstruction. At each step, the SSQP algorithm solves a strongly convex quadratic minimization problem with a diagonal Hessian matrix, which has a simple …
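To make concrete why a diagonal Hessian keeps the step cheap, a hedged sketch (the paper's actual subproblem may carry additional terms we omit here): minimizing $q(d)=g^Td+\frac{1}{2}d^T\mathrm{diag}(h)\,d$ with $h>0$ decouples across coordinates.

```python
import numpy as np

def diagonal_qp_step(g, h):
    """Closed-form minimizer of q(d) = g'd + 0.5 * d' diag(h) d with
    h > 0 componentwise: the optimality condition g + h*d = 0 splits
    coordinate-wise, so the step costs O(n).  Illustrative only, not
    the paper's exact subproblem."""
    g, h = np.asarray(g, dtype=float), np.asarray(h, dtype=float)
    return -g / h
```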

Hölder Metric Subregularity with Applications to Proximal Point Method

This paper is mainly devoted to the study and applications of Hölder metric subregularity (or metric $q$-subregularity of order $q\in(0,1]$) for general set-valued mappings between infinite-dimensional spaces. Employing advanced techniques of variational analysis and generalized differentiation, we derive neighborhood and point-based sufficient conditions, as well as necessary conditions, for $q$-metric subregularity with evaluating the exact …
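For orientation, metric $q$-subregularity is usually stated as follows (our notation, which may differ from the paper's): $F\colon X\rightrightarrows Y$ is metrically $q$-subregular at $(\bar x,\bar y)\in\operatorname{gph}F$ with $q\in(0,1]$ if there exist $\kappa>0$ and a neighborhood $U$ of $\bar x$ such that
\[
d\big(x, F^{-1}(\bar y)\big) \le \kappa\, d\big(\bar y, F(x)\big)^{q} \quad \text{for all } x \in U.
\]
The case $q=1$ recovers ordinary metric subregularity.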

Bundle method for non-convex minimization with inexact subgradients and function values

We discuss a bundle method to minimize non-smooth and non-convex locally Lipschitz functions. We analyze situations where only inexact subgradients or function values are available. For suitable classes of non-smooth functions we prove convergence of our algorithm to approximate critical points.

Citation: To appear in Computational and Analytical Mathematics, Springer Proceedings in Mathematics.
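As a toy illustration of the central object such methods maintain (a sketch under our own assumptions; the names and error model are not the paper's), the piecewise-linear cutting-plane model built from inexact data:

```python
import numpy as np

def model_value(x, bundle):
    """Cutting-plane model m(x) = max_i { f_i + g_i'(x - x_i) }, built
    from bundle triples (x_i, f_i, g_i) of trial points, inexact function
    values, and inexact subgradients.  When the stored values and
    subgradients are only eps-accurate, the model itself is accurate only
    up to eps, which is why convergence is to approximate critical points
    rather than exact ones."""
    x = np.asarray(x, dtype=float)
    return max(fi + np.asarray(gi) @ (x - np.asarray(xi)) for xi, fi, gi in bundle)
```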

Reformulation of a model for hierarchical divisive graph modularity maximization

Finding clusters, or communities, in a graph or network is an important problem that arises in many domains. Several models have been proposed for its solution. One of the most studied and exploited is the maximization of the so-called modularity, which represents the sum over all communities of the fraction of edges within these …
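For reference, the modularity being maximized is usually written (Newman-Girvan form, in our notation) as
\[
Q = \sum_{c} \left[ \frac{m_c}{m} - \left( \frac{d_c}{2m} \right)^{2} \right],
\]
where $m$ is the total number of edges, $m_c$ the number of edges inside community $c$, and $d_c$ the sum of the degrees of the vertices of $c$.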

Subgradient methods for huge-scale optimization problems

We consider a new class of huge-scale problems, the problems with {\em sparse subgradients}. The most important functions of this type are piecewise linear. For optimization problems with uniform sparsity of the corresponding linear operators, we suggest a very efficient implementation of subgradient iterations whose total cost depends {\em logarithmically} on the dimension. This technique is …
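A hedged sketch of the data-structure idea behind such logarithmic costs (ours, not the paper's code): keep the $n$ values $a_i^Tx$ in the leaves of a binary tree with running maxima in the internal nodes, so a sparse change to $x$ propagates to the objective in $O(\log n)$ per touched leaf.

```python
import math

class MaxTree:
    """Complete binary tree holding n values in its leaves and running
    maxima in its internal nodes.  update() recomputes only the path to
    the root in O(log n), which is the mechanism that lets subgradient
    iterations for functions like max_i a_i'x stay cheap under sparse
    updates (illustrative sketch only)."""

    def __init__(self, values):
        self.n = 1 << math.ceil(math.log2(max(len(values), 1)))
        self.tree = [-math.inf] * (2 * self.n)
        self.tree[self.n:self.n + len(values)] = list(values)
        for i in range(self.n - 1, 0, -1):
            self.tree[i] = max(self.tree[2 * i], self.tree[2 * i + 1])

    def update(self, i, value):
        i += self.n
        self.tree[i] = value
        i //= 2
        while i:
            self.tree[i] = max(self.tree[2 * i], self.tree[2 * i + 1])
            i //= 2

    def max(self):
        return self.tree[1]
```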

Smoothing and Worst Case Complexity for Direct-Search Methods in Non-Smooth Optimization

For smooth objective functions it has been shown that the worst-case cost of direct-search methods is of the same order as that of steepest descent, when measured in the number of iterations needed to achieve a certain threshold of stationarity. Motivated by the lack of such a result in the non-smooth case, we propose, analyze, …
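For readers unfamiliar with the class, a bare-bones directional direct-search method looks like the following sketch (our illustrative code; poll sets and step-size rules vary across the literature):

```python
import numpy as np

def coordinate_search(f, x0, step=1.0, tol=1e-8, max_iter=10**5):
    """Poll the 2n signed coordinate directions; accept any point of
    simple decrease, otherwise halve the step.  Worst-case complexity
    analyses of direct search count how many such iterations are needed
    to drive a stationarity measure below a threshold."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    directions = np.vstack([np.eye(n), -np.eye(n)])
    for _ in range(max_iter):
        if step < tol:
            break
        for d in directions:
            y = x + step * d
            fy = f(y)
            if fy < fx:            # successful poll: move, keep the step
                x, fx = y, fy
                break
        else:                      # unsuccessful poll: shrink the step
            step *= 0.5
    return x, fx
```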

Slopes of multifunctions and extensions of metric regularity

This article aims to demonstrate how the definitions of slopes can be extended to multi-valued mappings between metric spaces and applied to characterize metric regularity. Several kinds of local and nonlocal slopes are defined, and several metric regularity properties for set-valued mappings between metric spaces are investigated.

Citation: Vietnam Journal of Mathematics 40(2&3) (2012), 355-369.
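As background, the classical (local) slope that such definitions generalize is, for an extended-real-valued function $f$ on a metric space (our notation),
\[
|\nabla f|(x) = \limsup_{y \to x,\ y \neq x} \frac{\max\{f(x) - f(y),\, 0\}}{d(x, y)},
\]
and the paper's constructions extend quantities of this kind to set-valued mappings and relate them to metric regularity.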

Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm

This paper studies the long-standing idea of adding a nice smooth function to “smooth” a non-differentiable objective function in the context of sparse optimization, in particular the minimization of $\|x\|_1+\frac{1}{2\alpha}\|x\|_2^2$, where $x$ is a vector, as well as the minimization of $\|X\|_*+\frac{1}{2\alpha}\|X\|_F^2$, where $X$ is a matrix and $\|X\|_*$ and $\|X\|_F$ are the …
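A hedged sketch of one way such augmented models can be solved (our code and parameter choices, not necessarily the paper's algorithm): the added quadratic makes the Lagrange dual differentiable, and plain gradient ascent on the dual yields a linearized-Bregman-type iteration.

```python
import numpy as np

def shrink(z, t):
    """Soft-thresholding, the proximal map of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def augmented_l1(A, b, alpha, tau, iters=1000):
    """Dual gradient ascent for  min ||x||_1 + (1/(2*alpha))*||x||_2^2
    subject to Ax = b.  The inner minimization over x has the closed
    form x(y) = alpha * shrink(A'y, 1), so each iteration costs two
    matrix-vector products.  A step size tau on the order of
    1/(alpha*||A||^2) is a reasonable illustrative choice."""
    y = np.zeros(A.shape[0])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = alpha * shrink(A.T @ y, 1.0)   # primal point induced by dual y
        y = y + tau * (b - A @ x)          # gradient step on the dual
    return x
```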

Stochastic first-order methods in smooth convex optimization

In this paper, we are interested in the development of efficient first-order methods for convex optimization problems in the simultaneous presence of smoothness of the objective function and stochasticity in the first-order information. First, we consider the Stochastic Primal Gradient method, which is nothing but the Mirror Descent SA method applied to a smooth …
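As a bare-bones reference point (our sketch; the paper's methods and step-size policies are more refined), a stochastic gradient iteration with iterate averaging for a smooth convex objective:

```python
import numpy as np

def stochastic_gradient(grad_sample, x0, steps, gamma):
    """grad_sample(x) must return an unbiased estimate of the true
    gradient at x; gamma is a constant step size used purely for
    illustration.  Returning the running average of the iterates is the
    standard stochastic-approximation device for smooth convex problems."""
    x = np.asarray(x0, dtype=float)
    avg = x.copy()
    for k in range(1, steps + 1):
        x = x - gamma * grad_sample(x)
        avg += (x - avg) / (k + 1)     # online mean of x_0, ..., x_k
    return avg
```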