Reformulation of a model for hierarchical divisive graph modularity maximization

Finding clusters, or communities, in a graph or network is an important problem that arises in many domains. Several models have been proposed for its solution. One of the most studied and exploited is the maximization of the so-called modularity, which represents the sum over all communities of the fraction of edges within these …
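As a hedged illustration of the quantity being maximized: the modularity referred to here is the standard Newman-Girvan measure, computed in the short Python sketch below (the function and variable names are ours for illustration, not the paper's).

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman-Girvan modularity: for each community c, the fraction of
    edges inside c minus the expected fraction under random rewiring."""
    m = len(edges)
    internal = defaultdict(int)  # edges with both endpoints in c
    degree = defaultdict(int)    # total degree of nodes in c
    for u, v in edges:
        degree[community[u]] += 1
        degree[community[v]] += 1
        if community[u] == community[v]:
            internal[community[u]] += 1
    return sum(internal[c] / m - (degree[c] / (2 * m)) ** 2 for c in degree)

# A 4-cycle split into two adjacent pairs has modularity 0.
print(modularity([(0, 1), (1, 2), (2, 3), (3, 0)],
                 {0: "A", 1: "A", 2: "B", 3: "B"}))
```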

Subgradient methods for huge-scale optimization problems

We consider a new class of huge-scale problems: problems with {\em sparse subgradients}. The most important functions of this type are piecewise linear. For optimization problems with uniform sparsity of the corresponding linear operators, we suggest a very efficient implementation of subgradient iterations whose total cost depends {\em logarithmically} on the dimension. This technique is …
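To make the sparsity idea concrete, here is a minimal Python sketch of one subgradient step for a piecewise linear function $f(x)=\max_i (a_i^T x + b_i)$ with sparse rows $a_i$: the update touches only the support of the active row. This does not reproduce the paper's logarithmic-cost data structures; it only shows why sparse subgradients make the per-iteration update cheap.

```python
def sparse_subgradient_step(x, rows, b, step):
    """One subgradient step for f(x) = max_i (a_i . x + b_i), where each
    row a_i is stored sparsely as (indices, values). The paper maintains
    the max incrementally; here it is recomputed for clarity."""
    vals = [sum(v * x[j] for j, v in zip(idx, val)) + bi
            for (idx, val), bi in zip(rows, b)]
    i = max(range(len(vals)), key=vals.__getitem__)
    idx, val = rows[i]
    for j, v in zip(idx, val):        # only the nonzeros of a_i move x
        x[j] -= step * v
    return x, vals[i]

# f(x) = max(|x0|, |x1|) encoded as four sparse rows.
rows = [([0], [1.0]), ([0], [-1.0]), ([1], [1.0]), ([1], [-1.0])]
x, fx = sparse_subgradient_step([2.0, -0.5], rows, [0.0] * 4, step=0.5)
print(x, fx)   # -> [1.5, -0.5] 2.0
```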

Smoothing and Worst Case Complexity for Direct-Search Methods in Non-Smooth Optimization

For smooth objective functions it has been shown that the worst-case cost of direct-search methods is of the same order as that of steepest descent, when measured in the number of iterations needed to reach a given threshold of stationarity. Motivated by the lack of such a result in the non-smooth case, we propose, analyze, …
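For orientation, a method of this class polls a set of directions and shrinks the step after unsuccessful polls. The Python sketch below is a generic coordinate-direction direct search under simple decrease, not the smoothing variant the paper proposes.

```python
def direct_search(f, x, alpha=1.0, tol=1e-8, max_iter=10000):
    """Generic directional direct search: poll +/- coordinate directions,
    accept simple decrease, halve the step when no direction improves."""
    x = list(x)
    fx = f(x)
    while alpha >= tol and max_iter > 0:
        max_iter -= 1
        improved = False
        for i in range(len(x)):
            for s in (alpha, -alpha):
                y = list(x)
                y[i] += s
                if f(y) < fx:
                    x, fx, improved = y, f(y), True
                    break
            if improved:
                break
        if not improved:
            alpha /= 2.0   # unsuccessful poll: refine the step size
    return x, fx

# Example on a non-smooth function: f(x) = |x0| + |x1 - 1|.
print(direct_search(lambda z: abs(z[0]) + abs(z[1] - 1), [3.0, -2.0]))
```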

Slopes of multifunctions and extensions of metric regularity

This article aims to demonstrate how the definitions of slopes can be extended to multi-valued mappings between metric spaces and applied to characterize metric regularity. Several kinds of local and nonlocal slopes are defined, and several metric regularity properties of set-valued mappings between metric spaces are investigated. Citation: Vietnam Journal of Mathematics 40:2&3 (2012) …
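For reference, the classical (strong) slope of a function $f$ at a point $\bar{x}$ of a metric space, the scalar notion that the paper extends to multifunctions, is
\[
|\nabla f|(\bar{x}) \;=\; \limsup_{x \to \bar{x},\, x \neq \bar{x}} \frac{\big(f(\bar{x}) - f(x)\big)^{+}}{d(\bar{x}, x)},
\]
where $t^{+} = \max\{t, 0\}$. It measures the maximal rate of descent of $f$ near $\bar{x}$ and is the basic ingredient in slope-based criteria for metric regularity.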

Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm

This paper studies the long-standing idea of adding a nice smooth function to “smooth” a non-differentiable objective function, in the context of sparse optimization: in particular, the minimization of $\|x\|_1+\frac{1}{2\alpha}\|x\|_2^2$, where $x$ is a vector, as well as the minimization of $\|X\|_*+\frac{1}{2\alpha}\|X\|_F^2$, where $X$ is a matrix and $\|X\|_*$ and $\|X\|_F$ are the …
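One well-known way to exploit the added quadratic (a sketch under our own naming, not necessarily the paper's algorithm): the augmented term makes the objective strongly convex and hence the dual smooth, so plain gradient ascent on the dual applies, with the primal iterate recovered in closed form by soft-thresholding.

```python
import numpy as np

def shrink(z, t):
    """Soft-thresholding: sign(z) * max(|z| - t, 0), componentwise."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def augmented_l1(A, b, alpha, step, iters=2000):
    """Dual gradient ascent for  min ||x||_1 + ||x||_2^2 / (2*alpha)
    s.t. Ax = b.  'step' should be on the order of 1/(alpha*||A||^2)
    for the dual gradient iteration to converge."""
    y = np.zeros(A.shape[0])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = alpha * shrink(A.T @ y, 1.0)  # closed-form primal minimizer
        y += step * (b - A @ x)           # gradient step on the dual
    return x
```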

Stochastic first order methods in smooth convex optimization.

In this paper, we are interested in the development of efficient first-order methods for convex optimization problems in the simultaneous presence of smoothness of the objective function and stochasticity in the first-order information. First, we consider the Stochastic Primal Gradient method, which is nothing but the Mirror Descent SA method applied to a smooth …
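As a minimal sketch of the SA setting (our names, Euclidean prox function): a stochastic gradient step combined with the usual averaged iterate, for which SA-type guarantees are typically stated.

```python
import numpy as np

def mirror_descent_sa(grad_oracle, x0, stepsize, iters=10000, seed=0):
    """Euclidean Mirror Descent SA sketch: x_{k+1} = x_k - h_k * g_k with
    g_k an unbiased gradient estimate; returns the averaged iterate."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for k in range(iters):
        g = grad_oracle(x, rng)          # stochastic first-order oracle
        x = x - stepsize(k) * g
        avg += (x - avg) / (k + 1)       # running average of iterates
    return avg

# Minimize E[(x - xi)^2] / 2 with xi ~ N(1, 1); the minimizer is 1.
print(mirror_descent_sa(lambda x, rng: x - rng.normal(1.0, 1.0),
                        [0.0], lambda k: 1.0 / (k + 1)))
```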

On non-ergodic convergence rate of Douglas-Rachford alternating direction method of multipliers

Recently, a worst-case O(1/t) convergence rate was established for the Douglas-Rachford alternating direction method of multipliers in an ergodic sense. This note proposes a novel approach that derives the same convergence rate in a non-ergodic sense.
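For context, a hedged sketch of the scheme being analyzed, in its simplest consensus form min f(x) + g(z) s.t. x = z with a scaled dual variable (prox_f and prox_g are assumed given; the penalty parameter is folded into them):

```python
import numpy as np

def admm(prox_f, prox_g, n, iters=500):
    """ADMM sketch for min f(x) + g(z) s.t. x = z (consensus form,
    scaled dual). The abstract's O(1/t) rate concerns the general
    linearly constrained case; this is the simplest instance."""
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        x = prox_f(z - u)   # x-update: proximal step on f
        z = prox_g(x + u)   # z-update: proximal step on g
        u = u + x - z       # scaled multiplier update
    return x, z
```

With prox_f a soft-thresholding operator and prox_g a least-squares solve, this specializes to the usual ADMM for $\ell_1$-regularized regression.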

Algorithms for Bilevel Pseudomonotone Variational Inequality Problems

We propose easily implementable algorithms for minimizing the norm subject to pseudomonotone variational inequality constraints. This bilevel problem arises in the Tikhonov regularization method for pseudomonotone variational inequalities. Since the solution set of the lower-level variational inequality is not given explicitly, the available methods of mathematical programming and variational inequalities cannot be applied directly. With …
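The lower-level machinery can be sketched as follows (our illustration, not the paper's specific scheme): Korpelevich's extragradient method, which converges for pseudomonotone Lipschitz operators and accesses the feasible set only through projections.

```python
import numpy as np

def extragradient(F, project, x0, tau, iters=1000):
    """Extragradient method for the variational inequality:
    find x* in C with F(x*)^T (x - x*) >= 0 for all x in C,
    where C is accessed only through its projection operator."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project(x - tau * F(x))   # prediction step
        x = project(x - tau * F(y))   # correction step
    return x

# Example: F(x) = x - 1 over C = [0, 0.5] (projection = clipping).
print(extragradient(lambda x: x - 1.0,
                    lambda v: np.clip(v, 0.0, 0.5),
                    np.array([0.0]), tau=0.5))   # -> [0.5]
```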

Algebraic Relaxations and Hardness Results in Polynomial Optimization and Lyapunov Analysis

The contributions of the first half of this thesis concern the computational and algebraic aspects of convexity in polynomial optimization. We show that unless P=NP, there exists no polynomial-time (or even pseudo-polynomial-time) algorithm that can decide whether a multivariate polynomial of degree four (or higher even degree) is globally convex. This solves …
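A hedged illustration of what the hardness result rules out: the sampling test below can certify non-convexity by exhibiting an indefinite Hessian, but a pass proves nothing, and by the thesis' result no polynomial-time exact test is expected to exist.

```python
import numpy as np

def psd_hessian_heuristic(hess, n, trials=1000, seed=0):
    """Sample points and test the Hessian for positive semidefiniteness.
    'False' certifies non-convexity; 'True' is merely inconclusive."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = rng.standard_normal(n)
        if np.linalg.eigvalsh(hess(x)).min() < -1e-9:
            return False
    return True

# f(x, y) = x^4 + y^4 - 4xy is a non-convex quartic; its Hessian
# [[12x^2, -4], [-4, 12y^2]] is indefinite wherever 144 x^2 y^2 < 16.
hess = lambda v: np.array([[12 * v[0]**2, -4.0], [-4.0, 12 * v[1]**2]])
print(psd_hessian_heuristic(hess, 2))   # False
```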