A note on Legendre-Fenchel conjugate of the product of two positive-definite quadratic forms

The Legendre-Fenchel conjugate of the product of two positive-definite quadratic forms was posed as an open question in the field of nonlinear analysis and optimization by Hiriart-Urruty [Question 11 in SIAM Review 49, 255-273 (2007)]. Under a convexity assumption on the function, it was answered by Zhao [SIAM J. Matrix Analysis & Applications, 31(4), … Read more
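
As a reminder of the two objects named in the title (with generic matrices A and B standing in for the paper's own notation), the function in question is the product of two positive-definite quadratic forms and its Legendre-Fenchel conjugate is defined by the usual supremum:

\[
f(x) \;=\; (x^{\top}Ax)\,(x^{\top}Bx), \qquad A \succ 0,\ B \succ 0,
\]
\[
f^{*}(y) \;=\; \sup_{x\in\mathbb{R}^{n}} \bigl\{ \langle y, x\rangle - f(x) \bigr\}.
\]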

Robust Shortest Path Problems with Two Uncertain Multiplicative Cost Coefficients

We consider a robust shortest path problem in which the cost coefficient is the product of two uncertain factors. We first show that the robust problem can be solved in polynomial time by a dual variable enumeration with shortest path problems as subproblems. We also propose a path enumeration approach using a $K$-shortest-paths algorithm … Read more
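
A minimal Python sketch of the path-enumeration idea, assuming networkx for Yen-style enumeration of the $K$ cheapest simple paths under nominal edge weights; robust_cost below is a hypothetical placeholder for the worst-case evaluation over the two uncertain factors, which the excerpt does not specify.

    from itertools import islice
    import networkx as nx

    def k_shortest_paths(G, source, target, K):
        # Enumerate the K cheapest simple paths under the nominal "weight" attribute.
        return list(islice(nx.shortest_simple_paths(G, source, target, weight="weight"), K))

    def robust_cost(G, path):
        # Hypothetical placeholder: worst-case cost of a path when each edge cost is the
        # product of two uncertain factors; here it simply sums the nominal weights.
        return sum(G[u][v]["weight"] for u, v in zip(path, path[1:]))

    G = nx.DiGraph()
    G.add_weighted_edges_from([("s", "a", 1.0), ("a", "t", 1.0), ("s", "t", 3.0)])
    candidates = k_shortest_paths(G, "s", "t", K=2)
    best = min(candidates, key=lambda p: robust_cost(G, p))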

Convergence of trust-region methods based on probabilistic models

In this paper we consider the use of probabilistic or random models within a classical trust-region framework for optimization of deterministic smooth general nonlinear functions. Our method and setting differ from many stochastic optimization approaches in two principal ways. Firstly, we assume that the value of the function itself can be computed without noise, in … Read more
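
A bare-bones Python sketch of the classical trust-region loop the paper builds on; build_model is a stand-in for the probabilistic model construction that is the paper's actual subject, and here it is just a deterministic finite-difference model with an identity Hessian.

    import numpy as np

    def build_model(f, x, h=1e-2):
        # Stand-in model builder: finite-difference gradient and identity Hessian.
        # In the paper, this is where the probabilistic/random model would enter.
        g = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(len(x))])
        return g, np.eye(len(x))

    def cauchy_step(g, H, delta):
        # Steepest-descent step for the model, truncated to the trust-region radius.
        t = delta / np.linalg.norm(g)
        gHg = g @ H @ g
        if gHg > 0:
            t = min(t, (g @ g) / gHg)
        return -t * g

    def trust_region(f, x, delta=1.0, eta=0.1, iters=100):
        for _ in range(iters):
            g, H = build_model(f, x)
            s = cauchy_step(g, H, delta)
            pred = -(g @ s + 0.5 * s @ H @ s)   # decrease predicted by the model
            ared = f(x) - f(x + s)              # actual decrease (f evaluated noise-free)
            if pred > 0 and ared >= eta * pred:
                x, delta = x + s, 2.0 * delta   # successful step: accept and expand
            else:
                delta *= 0.5                    # unsuccessful step: shrink the radius
        return x

    x_opt = trust_region(lambda z: float(z @ z), np.array([3.0, -4.0]))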

Globally convergent DC trust-region methods

In this paper, we investigate the use of DC (Difference of Convex functions) models and algorithms in the solution of nonlinear optimization problems by trust-region methods. We consider DC local models for the quadratic model of the objective function used to compute the trust-region step, and apply a primal-dual subgradient method to the solution of … Read more
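
One standard way to obtain a DC local model from a quadratic model is to split the Hessian approximation spectrally into positive- and negative-semidefinite parts; whether this is the exact splitting used in the paper is not visible in the excerpt, but it illustrates the structure a DC trust-region model takes:

\[
m_k(s) = f(x_k) + g_k^{\top}s + \tfrac{1}{2}\,s^{\top}H_k s,
\qquad H_k = H_k^{+} - H_k^{-},\quad H_k^{\pm} \succeq 0,
\]
\[
m_k(s) = \underbrace{\Bigl(f(x_k) + g_k^{\top}s + \tfrac{1}{2}\,s^{\top}H_k^{+}s\Bigr)}_{\text{convex}}
\;-\; \underbrace{\tfrac{1}{2}\,s^{\top}H_k^{-}s}_{\text{convex}}.
\]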

Worst case complexity of direct search under convexity

In this paper we prove that the broad class of direct-search methods of directional type, based on imposing sufficient decrease to accept new iterates, exhibits the same global rate or worst-case complexity bound as the gradient method for the unconstrained minimization of a convex and smooth function. More precisely, it will be shown that … Read more
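
For context, the sufficient decrease condition referred to above typically accepts a trial point only if it improves the objective by a forcing-function margin, and the gradient-method-type rate mentioned is of order 1/k on convex smooth functions; the constants and dimension dependence, which are part of the paper's result, are omitted here:

\[
f(x_k + \alpha_k d) \;\le\; f(x_k) - \rho(\alpha_k), \qquad \rho(\alpha) = c\,\alpha^{2},\ c>0,
\qquad\qquad
f(x_k) - f_{*} \;=\; \mathcal{O}(1/k).
\]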

A merit function approach for direct search

In this paper it is proposed to equip direct-search methods with a general procedure to minimize an objective function, possibly non-smooth, without using derivatives and subject to constraints on the variables. The aim is to handle constraints, most likely nonlinear or non-smooth, for which the derivatives of the corresponding functions are also unavailable. The novelty of … Read more
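
As a rough illustration of what a merit function can look like in this setting (the precise form used in the paper is not shown in the excerpt), one common choice penalizes constraint violation on top of the objective:

\[
M(x;\mu) \;=\; f(x) + \mu\,\bigl\| \max\{0,\ c(x)\} \bigr\|,
\]
where c(x) \le 0 collects the constraints and \mu > 0 is a penalty parameter.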

Faster, but Weaker, Relaxations for Quadratically Constrained Quadratic Programs

We introduce a new relaxation framework for nonconvex quadratically constrained quadratic programs (QCQPs). In contrast to existing relaxations based on semidefinite programming (SDP), our relaxations incorporate features of both SDP and second-order cone programming (SOCP) and, as a result, solve more quickly than SDP. A downside is that the calculated bounds are weaker than … Read more
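
The common starting point for such relaxations is the lifting X = xx^{\top}: the full SDP relaxation keeps the lifted matrix positive semidefinite, while cheaper SOCP-representable relaxations keep only selected 2x2 principal-minor conditions. The display below shows this generic lifting; the particular combination of SDP and SOCP features proposed in the paper goes beyond it.

\[
\min_{x}\; x^{\top}Q_{0}x + q_{0}^{\top}x
\quad\text{s.t.}\quad
x^{\top}Q_{i}x + q_{i}^{\top}x \le b_{i},\ \ i=1,\dots,m,
\]
\[
\text{SDP relaxation: } \begin{pmatrix} 1 & x^{\top}\\ x & X\end{pmatrix}\succeq 0,
\qquad
\text{SOCP-type conditions: } X_{ii} \ge x_i^{2},\ \ X_{ii}X_{jj} \ge X_{ij}^{2}.
\]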

A Simple Trolley-Like Model in the Presence of a Nonlinear Friction and a Bounded Fuel Expenditure

We consider the problem of maximizing the distance traveled by a material point in the presence of nonlinear friction, under bounded thrust and fuel expenditure. Using the maximum principle we obtain the form of the optimal control and establish conditions under which it contains a singular subarc. This problem seems to be the … Read more
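
One plausible formalization of the problem described (with illustrative symbols rather than the paper's notation: thrust u, friction \varphi, fuel budget Q) is

\[
\max\; x(T)
\quad\text{s.t.}\quad
\dot{x}=v,\qquad \dot{v}=u-\varphi(v),\qquad |u|\le u_{\max},\qquad \int_{0}^{T}|u|\,dt\le Q,
\]
to which the maximum principle is applied to characterize the optimal thrust program and any singular subarcs.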

Reduction of Two-Stage Probabilistic Optimization Problems with Discrete Distribution of Random Data to Mixed Integer Programming Problems

We consider models of two-stage stochastic programming with a quantile second-stage criterion and optimization models with a chance constraint on the second-stage objective function values. Such models allow one to formalize requirements on the reliability and safety of the system under consideration, and to optimize the system under extreme conditions. We suggest a method of … Read more
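
To illustrate the kind of reduction meant, the textbook reformulation of a chance constraint under a discrete distribution uses one binary variable per scenario and a big-M constant; the paper's quantile-criterion and two-stage models require more structure than this, but the mechanism is the same:

\[
\Pr\{g(x,\xi)\le 0\}\ge \alpha,\qquad \xi\in\{\xi_{1},\dots,\xi_{N}\},\ \ \Pr\{\xi=\xi_{i}\}=p_{i},
\]
becomes
\[
g(x,\xi_{i}) \le M z_{i},\qquad \sum_{i=1}^{N} p_{i} z_{i} \le 1-\alpha,\qquad z_{i}\in\{0,1\},\ \ i=1,\dots,N,
\]
for a sufficiently large constant M.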

Tail bounds for stochastic approximation

Stochastic-approximation gradient methods are attractive for large-scale convex optimization because they offer inexpensive iterations. They are especially popular in data-fitting and machine-learning applications where the data arrives in a continuous stream, or where it is necessary to minimize large sums of functions. It is known that by appropriately decreasing the variance of the error at each … Read more
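
A small illustrative Python loop, on a synthetic least-squares problem that is not taken from the paper, showing one standard way to decrease the variance of the gradient error at each iteration, namely by growing the minibatch:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((1000, 5))
    b = A @ np.ones(5) + 0.1 * rng.standard_normal(1000)   # synthetic data-fitting problem

    x = np.zeros(5)
    step = 0.01
    for k in range(1, 200):
        batch = rng.integers(0, len(b), size=10 * k)        # growing batch -> smaller gradient variance
        Ab, bb = A[batch], b[batch]
        grad = Ab.T @ (Ab @ x - bb) / len(batch)            # unbiased minibatch gradient estimate
        x -= step * grad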