One Class Nonsmooth Discrete Step Control Problem

This paper presents a survey and refinement of recent results in discrete optimal control theory. A step control problem depending on a parameter is investigated. No smoothness of the cost function is assumed, and new versions of the discrete maximum principle for the step control problem are derived. Citation: submitted to the …

Selective Gram-Schmidt orthonormalization for conic cutting surface algorithms

It is not straightforward to find a new feasible solution when several conic constraints are added to a conic optimization problem. Examples of conic constraints include semidefinite constraints and second-order cone constraints. In this paper, a method to slightly modify the constraints is proposed. Because of this modification, a simple procedure to generate strictly …
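
As background, the sketch below shows classical Gram-Schmidt orthonormalization in Python, the textbook routine the title refers to; the paper's selective variant and its role inside a conic cutting surface algorithm are not reproduced here, and the function name and tolerance are illustrative only.

    import numpy as np

    def gram_schmidt(columns, tol=1e-12):
        # Orthonormalize the columns of a matrix with classical Gram-Schmidt.
        # Nearly dependent directions (norm below `tol` after projection) are dropped.
        basis = []
        for v in columns.T:
            w = v - sum(np.dot(v, b) * b for b in basis)
            norm = np.linalg.norm(w)
            if norm > tol:
                basis.append(w / norm)
        return np.column_stack(basis)

    # Example: three vectors in R^3, the third one linearly dependent.
    Q = gram_schmidt(np.array([[1.0, 1.0, 2.0],
                               [0.0, 1.0, 2.0],
                               [0.0, 0.0, 0.0]]))
    # Q.T @ Q is (numerically) the identity on the two retained directions.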

Dini Derivative and a Characterization for Lipschitz and Convex Functions on Riemannian Manifolds

The Dini derivative in the Riemannian manifold setting is studied in this paper. In addition, a characterization of Lipschitz and convex functions defined on Riemannian manifolds, and sufficient optimality conditions for constrained optimization problems in terms of the Dini derivative, are given.
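
For reference, the standard (upper) Dini directional derivative of f at x in a direction v, written here in the Euclidean case, is

    D^+ f(x; v) = \limsup_{t \downarrow 0} \frac{f(x + t v) - f(x)}{t};

in the Riemannian setting the line segment x + t v is replaced by a curve through x with initial velocity v, e.g. t \mapsto \exp_x(t v). This is the textbook definition and may differ in detail from the exact form used in the paper.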

Large Scale Portfolio Optimization with Piecewise Linear Transaction Costs

We consider the fundamental problem of computing an optimal portfolio based on a quadratic mean-variance model of the objective function and a given polyhedral representation of the constraints. The main departure from the classical quadratic programming formulation is the inclusion in the objective function of piecewise linear, separable functions representing the transaction costs. We handle …
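
As an illustration of the kind of model described (notation ours, not the paper's), a mean-variance problem with separable piecewise linear transaction costs can be written as

    \min_{x} \ \tfrac{1}{2}\, x^\top Q x - \mu^\top x + \sum_{i=1}^{n} c_i(x_i - \bar{x}_i)
    \quad \text{s.t.} \quad A x \le b,

where Q is the covariance matrix, \mu the vector of expected returns, \bar{x} the current holdings, A x \le b the polyhedral constraints, and each c_i a convex piecewise linear function of the trade in asset i. Each c_i can be modelled with auxiliary variables, one per linear piece, which preserves the quadratic programming structure.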

Discrete gradient method: a derivative-free method for nonsmooth optimization

In this paper a new derivative-free method is developed for solving unconstrained nonsmooth optimization problems. This method is based on the notion of a discrete gradient. It is demonstrated that the discrete gradients can be used to approximate subgradients of a broad class of nonsmooth functions. It is also shown that the discrete gradients can …
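
The exact discrete-gradient construction is the subject of the paper; as a much simpler stand-in, the Python sketch below forms a forward-difference vector that plays the analogous role of a derivative-free gradient estimate (function name and step size are ours, not the paper's).

    import numpy as np

    def finite_difference_vector(f, x, h=1e-6):
        # Forward differences along the coordinate axes: a derivative-free
        # estimate of a (sub)gradient-like direction. This is only an
        # illustrative simplification, not the paper's discrete gradient.
        x = np.asarray(x, dtype=float)
        fx = f(x)
        g = np.empty_like(x)
        for i in range(x.size):
            step = np.zeros_like(x)
            step[i] = h
            g[i] = (f(x + step) - fx) / h
        return g

    # Example on the nonsmooth function f(x) = |x_1| + x_2^2 at x = (1, 2):
    g = finite_difference_vector(lambda z: abs(z[0]) + z[1] ** 2, [1.0, 2.0])
    # g is approximately (1, 4), matching the subgradient there.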

Benchmark of Some Nonsmooth Optimization Solvers for Computing Nonconvex Proximal Points

The major focus of this work is to compare several methods for computing the proximal point of a nonconvex function via numerical testing. To do this, we introduce two techniques for randomly generating challenging nonconvex test functions, as well as two very specific test functions which should be of future interest for nonconvex optimization benchmarking. …
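
For context, the proximal point of a function f at a point z with parameter \lambda > 0 is any minimizer

    p \in \operatorname*{argmin}_{y} \Big\{ f(y) + \tfrac{1}{2\lambda}\, \|y - z\|^2 \Big\};

for convex f this minimizer is unique, while for nonconvex f it may be nonunique, or fail to exist unless \lambda is small enough relative to the curvature of f, which is what makes this computation a nontrivial benchmark.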

Smooth minimization of two-stage stochastic linear programs

This note presents an application of the smooth optimization technique of Nesterov for solving two-stage stochastic linear programs. It is shown that the original O(1/ε) bound of Nesterov on the number of main iterations required to obtain an ε-optimal solution is retained. Citation: Technical Report, School of Industrial & Systems Engineering, Georgia Institute of Technology, 2006.
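
As a reminder of the technique involved (stated generically, not in the two-stage stochastic setting of the note), Nesterov's smoothing replaces a nonsmooth max-type function

    f(x) = \max_{u \in U} \{ \langle A x, u \rangle - \hat{\phi}(u) \}
    \quad\text{by}\quad
    f_\mu(x) = \max_{u \in U} \{ \langle A x, u \rangle - \hat{\phi}(u) - \mu\, d(u) \},

where d is a strongly convex prox-function on U. The smoothed f_\mu has a Lipschitz-continuous gradient with constant of order \|A\|^2/\mu, and running an accelerated gradient method on f_\mu with \mu of order ε yields an ε-optimal solution of the original problem in O(1/ε) iterations.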

Stationarity and Regularity of Real-Valued Functions

Different stationarity and regularity concepts for extended real-valued functions on metric spaces are considered in the paper. The properties are characterized in terms of certain local constants. A classification scheme for stationarity/regularity constants and corresponding concepts is proposed. The relations between different constants are established. Citation: University of Ballarat, School of Information Technology and Mathematical Sciences, …

A conic interior point decomposition approach for large scale semidefinite programming

We describe a conic interior point decomposition approach for solving large-scale semidefinite programs (SDPs) whose primal feasible set is bounded. The idea is to solve such an SDP using existing primal-dual interior point methods, in an iterative fashion between a master problem and a subproblem. In our case, the master problem …

Primal interior-point method for large sparse minimax optimization

In this paper, we propose an interior-point method for large sparse minimax optimization. After a short introduction, where various barrier terms are discussed, the complete algorithm is introduced and some implementation details are given. We prove that this algorithm is globally convergent under standard mild assumptions. Thus nonconvex problems can be solved successfully. The results …
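
For orientation, a minimax problem minimizes F(x) = \max_{1 \le i \le m} f_i(x); one common smooth surrogate of this max, illustrative of the kind of barrier/smoothing terms such methods discuss, is the exponential (log-sum-exp) function

    F_\mu(x) = \mu \ln \sum_{i=1}^{m} \exp\big(f_i(x)/\mu\big),
    \qquad F(x) \le F_\mu(x) \le F(x) + \mu \ln m,

so driving \mu \to 0 recovers the original minimax objective. The specific barrier terms compared in the paper may differ.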