Gradient methods for minimizing composite objective function

In this paper we analyze several new methods for solving optimization problems with the objective function formed as a sum of two convex terms: one is smooth and given by a black-box oracle, and another is general but simple and its structure is known. Despite the bad properties of the sum, such problems, both …
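
A minimal sketch of one composite-minimization step in this spirit: the smooth part g is queried as a black box through its gradient, while the simple part is handled through its proximal operator. The choice of an l1 term for the simple part and the toy least-squares instance are ours, not necessarily the paper's examples.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (the 'simple' term)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_g, prox_h, x0, step, n_iters=100):
    """Basic proximal-gradient iteration: forward step on the smooth
    black-box part g, backward (prox) step on the simple part h."""
    x = x0
    for _ in range(n_iters):
        x = prox_h(x - step * grad_g(x), step)
    return x

# Toy instance: g(x) = 0.5*||Ax - b||^2 (smooth), h(x) = lam * ||x||_1 (simple).
rng = np.random.default_rng(0)
A, b, lam = rng.normal(size=(20, 5)), rng.normal(size=20), 0.1
grad_g = lambda x: A.T @ (A @ x - b)
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L with L = ||A||_2^2
prox_h = lambda v, t: soft_threshold(v, lam * t)
x_star = proximal_gradient(grad_g, prox_h, np.zeros(5), step)
```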

A secant method for nonsmooth optimization

The notion of a secant for locally Lipschitz continuous functions is introduced, and a new algorithm for locally minimizing nonsmooth, nonconvex functions based on secants is developed. We demonstrate that secants can be used to design an algorithm to find descent directions of locally Lipschitz continuous functions. This algorithm is applied to design a …
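
The paper's secant construction is specific; purely as a rough illustration of how finite-difference "secant" slopes can drive a descent-direction search on a nonsmooth function, here is a simplified sketch (not the authors' algorithm, and all constants are our choices):

```python
import numpy as np

def secant_slope(f, x, d, h):
    """Secant approximation of the directional derivative f'(x; d)."""
    return (f(x + h * d) - f(x)) / h

def secant_descent_step(f, x, h=1e-3, n_dirs=50, rng=None):
    """Pick the sampled direction with the most negative secant slope
    and step along it with crude backtracking (illustration only)."""
    rng = rng or np.random.default_rng(0)
    dirs = rng.normal(size=(n_dirs, x.size))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    slopes = np.array([secant_slope(f, x, d, h) for d in dirs])
    if slopes.min() >= 0:               # no sampled descent direction
        return x
    best = dirs[np.argmin(slopes)]
    t = 1.0
    while t > 1e-10 and f(x + t * best) >= f(x):
        t *= 0.5
    return x + t * best if f(x + t * best) < f(x) else x

# Toy nonsmooth, nonconvex test function (our choice).
f = lambda x: abs(x[0]) + min(abs(x[1]), (x[1] - 1.0) ** 2)
rng = np.random.default_rng(0)
x = np.array([1.0, 2.0])
for _ in range(30):
    x = secant_descent_step(f, x, rng=rng)
```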

Max-min separability: incremental approach and application to supervised data classification

A new algorithm for the computation of a piecewise linear function separating two finite point sets in $n$-dimensional space is developed, and the algorithm is applied to solve supervised data classification problems. The algorithm computes hyperplanes incrementally, finding as many hyperplanes as necessary to separate the two sets with respect to some tolerance. An …
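
A max-min separating function has the piecewise linear form phi(x) = max over groups of min over affine pieces <a, x> - b, and points are classified by the sign of phi. A minimal sketch of evaluating such a classifier (the hyperplane data below are hypothetical, not from the paper):

```python
import numpy as np

def maxmin_value(x, hyperplanes):
    """Evaluate phi(x) = max_i min_j (<a_ij, x> - b_ij) for a
    piecewise linear separating function; sign(phi) gives the class."""
    return max(min(a @ x - b for a, b in group) for group in hyperplanes)

# Hypothetical separating function: two groups of affine pieces.
hyperplanes = [
    [(np.array([1.0, 0.0]), 0.5), (np.array([0.0, 1.0]), 0.5)],
    [(np.array([-1.0, -1.0]), -3.0)],
]
x = np.array([1.0, 2.0])
label = 1 if maxmin_value(x, hyperplanes) > 0 else -1
```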

The Speed of Shor’s R-Algorithm

Shor’s r-algorithm is an iterative method for unconstrained optimization, designed for minimizing nonsmooth functions, whose reported success has been considerable. Although some limited convergence results are known, nothing seems to be known about the algorithm’s rate of convergence, even in the smooth case. We study how the method behaves on convex quadratics, proving …
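
The core of the r-algorithm is space dilation along the difference of two successive (sub)gradients. Below is a minimal sketch of such a step on a convex quadratic with exact line search, matching the setting studied here; the notation and constants are ours, and this is not a production implementation of Shor's method:

```python
import numpy as np

def shor_r_quadratic(A, x0, alpha=2.0, n_iters=50):
    """Space dilation along the difference of successive gradients,
    applied to f(x) = 0.5 * x^T A x with exact line search."""
    n = x0.size
    B = np.eye(n)                        # running space transformation
    x = x0.copy()
    g_prev = A @ x
    for _ in range(n_iters):
        d = B @ (B.T @ g_prev)           # search direction, original space
        denom = d @ (A @ d)
        if denom <= 1e-16:
            break
        x = x - ((g_prev @ d) / denom) * d   # exact line search step
        g = A @ x
        r = B.T @ (g - g_prev)
        nr = np.linalg.norm(r)
        if nr > 1e-16:
            xi = r / nr
            # dilate the transformed space by factor 1/alpha along xi
            B = B @ (np.eye(n) + (1.0 / alpha - 1.0) * np.outer(xi, xi))
        g_prev = g
    return x

A = np.diag([1.0, 10.0, 100.0])          # ill-conditioned quadratic
x = shor_r_quadratic(A, np.array([1.0, 1.0, 1.0]))
```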

A Proximal Cutting Plane Method Using Chebychev Center for Nonsmooth Convex Optimization

An algorithm is developed for minimizing nonsmooth convex functions. This algorithm extends the Elzinga-Moore cutting plane algorithm by forcing the search for the next test point to remain not too far from the previous ones, thus removing the compactness assumption. Our method is to Elzinga-Moore’s algorithm what a proximal bundle method is to Kelley’s algorithm. Instead of lower approximations …
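
The Elzinga-Moore step computes the Chebyshev center (the center of the largest ball inscribed in the current localization polyhedron), which reduces to a linear program. A minimal sketch of that subproblem with SciPy; the cut data below are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_center(A, b):
    """Center of the largest ball inscribed in {x : A x <= b}.
    LP formulation: max r  s.t.  A x + ||a_i|| * r <= b."""
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    c = np.zeros(A.shape[1] + 1)
    c[-1] = -1.0                                     # maximize r
    res = linprog(c, A_ub=np.hstack([A, norms]), b_ub=b,
                  bounds=[(None, None)] * A.shape[1] + [(0, None)])
    return res.x[:-1], res.x[-1]                     # center, radius

# Hypothetical localization set built from three cutting planes.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([1.0, 1.0, 1.0])
center, radius = chebyshev_center(A, b)
```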

Discrete gradient method: a derivative free method for nonsmooth optimization

In this paper a new derivative-free method is developed for solving unconstrained nonsmooth optimization problems. This method is based on the notion of a discrete gradient. It is demonstrated that the discrete gradients can be used to approximate subgradients of a broad class of nonsmooth functions. It is also shown that the discrete gradients can …
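
The discrete gradient in the paper has a specific multi-point construction; as a simplified illustration of the idea only, here is a vector of sequential coordinate differences, which tends to the gradient for smooth f as the step vanishes (this is not the authors' exact definition):

```python
import numpy as np

def discrete_gradient(f, x, lam=1e-6):
    """Sequential coordinate differences: component i compares f at
    points perturbed in coordinates 1..i versus 1..i-1.  For smooth f
    this tends to the gradient as lam -> 0; related multi-point
    constructions approximate subgradients of nonsmooth functions."""
    n = x.size
    v = np.empty(n)
    y = x.copy()
    for i in range(n):
        f_prev = f(y)
        y[i] += lam
        v[i] = (f(y) - f_prev) / lam
    return v

f = lambda x: abs(x[0]) + x[1] ** 2
g = discrete_gradient(f, np.array([1.0, 2.0]))   # approx. [1, 4]
```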

A conic interior point decomposition approach for large scale semidefinite programming

We describe a conic interior point decomposition approach for solving large scale semidefinite programs (SDPs) whose primal feasible set is bounded. The idea is to solve such an SDP using existing primal-dual interior point methods, iterating between a master problem and a subproblem. In our case, the master problem …
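
In cutting-plane views of SDP decomposition, one common way a subproblem can feed the master is by testing a candidate matrix for positive semidefiniteness and, on failure, returning a violated linear cut from an eigenvector. A minimal sketch of that cut generation; this is not necessarily the authors' decomposition scheme:

```python
import numpy as np

def psd_cut(X, tol=1e-8):
    """Check whether the master's candidate matrix X is PSD; if not,
    return the matrix vv^T defining the linear cut <vv^T, X> >= 0,
    which the current X violates."""
    w, V = np.linalg.eigh(X)           # eigenvalues in ascending order
    if w[0] >= -tol:
        return None                    # candidate is (numerically) feasible
    v = V[:, 0]                        # eigenvector of most negative eigenvalue
    return np.outer(v, v)              # cut: <vv^T, X> >= 0

X = np.array([[1.0, 2.0], [2.0, 1.0]])   # indefinite candidate
cut = psd_cut(X)                          # vv^T with <vv^T, X> < 0
```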

Computing Proximal Points on Nonconvex Functions

The proximal point mapping is the basis of many optimization techniques for convex functions. By means of variational analysis, the concept of proximal mapping was recently extended to nonconvex functions that are prox-regular and prox-bounded. In such a setting, the proximal point mapping is locally Lipschitz continuous and its set of fixed points coincides with …
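
The proximal point of f at x with parameter r is the minimizer of f(y) + (r/2)*||y - x||^2. A minimal numerical sketch; the solver call and the nonconvex test function are our choices, not the paper's method:

```python
import numpy as np
from scipy.optimize import minimize

def proximal_point(f, x, r):
    """argmin_y f(y) + (r/2) * ||y - x||^2.  For prox-regular,
    prox-bounded f and large enough r this is locally well defined."""
    obj = lambda y: f(y) + 0.5 * r * np.sum((y - x) ** 2)
    return minimize(obj, x).x

# Nonconvex but prox-bounded example: f(y) = sin(y_0) + 0.1 * y_0^2.
f = lambda y: np.sin(y[0]) + 0.1 * y[0] ** 2
p = proximal_point(f, np.array([2.0]), r=5.0)
```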

An incremental method for solving convex finite minmax problems

We introduce a new approach to minimizing a function defined as the pointwise maximum over finitely many convex real functions (hereafter referred to as the “component functions”), with the aim of working on the basis of “incomplete knowledge” of the objective function. In fact, a descent algorithm is proposed which does not necessarily require at …
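
For f(x) = max_i f_i(x), a subgradient of f at x is any subgradient of a component attaining the maximum. The paper's incremental scheme avoids evaluating every component at every iterate; the sketch below evaluates all of them and is meant only to show the active-component subgradient choice, with toy component functions of our own:

```python
import numpy as np

def minmax_subgradient_step(fs, grads, x, step):
    """One subgradient step for f(x) = max_i f_i(x): find a component
    attaining the max and step along the negative of its gradient,
    which is a valid subgradient of f at x."""
    vals = [fi(x) for fi in fs]
    i = int(np.argmax(vals))          # an active component
    return x - step * grads[i](x)

# Toy component functions (our choices): squared distances to a and b.
a, b = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
fs = [lambda x: np.sum((x - a) ** 2), lambda x: np.sum((x - b) ** 2)]
grads = [lambda x: 2 * (x - a), lambda x: 2 * (x - b)]
x = np.array([0.5, 0.5])
for k in range(1, 101):
    x = minmax_subgradient_step(fs, grads, x, step=1.0 / k)
```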