An Interior Proximal Method in Vector Optimization

This paper studies the vector optimization problem of finding weakly efficient points for maps from $\mathbb{R}^n$ to $\mathbb{R}^m$, with respect to the partial order induced by a closed, convex, and pointed cone $C \subset \mathbb{R}^m$, with nonempty interior. We develop for this problem an extension of the proximal point method for scalar-valued convex … Read more
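
For orientation, here is a minimal sketch of the classical proximal point iteration for a single scalar convex objective, which is the starting point that the paper extends to the vector-valued setting; the test function f, the regularization parameter lam, and the use of a generic solver for the proximal subproblem are illustrative assumptions, not the paper's algorithm.

    import numpy as np
    from scipy.optimize import minimize

    def proximal_point(f, x0, lam=1.0, iters=50):
        # Classical scalar proximal point method:
        #   x_{k+1} = argmin_x  f(x) + (1 / (2 * lam)) * ||x - x_k||^2
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            xk = x.copy()
            prox = lambda y, xk=xk: f(y) + np.sum((y - xk) ** 2) / (2.0 * lam)
            x = minimize(prox, xk, method="Nelder-Mead").x
        return x

    # Illustrative convex test function (not from the paper).
    f = lambda x: (x[0] - 1.0) ** 2 + abs(x[1])
    print(proximal_point(f, x0=[5.0, -3.0]))   # approaches (1, 0)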

On the convergence of the projected gradient method for vector optimization

In 2004, Graña Drummond and Iusem proposed an extension of the projected gradient method for constrained vector optimization problems. In that method, an Armijo-like rule, implemented with a backtracking procedure, was used to determine the steplengths. The authors only showed stationarity of all cluster points and, for another version of the algorithm (with … Read more
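
A scalar-valued analogue of such a projected gradient scheme with Armijo backtracking can be sketched as follows; the quadratic objective, the box constraint, and the parameters sigma and beta are illustrative choices, not the vector-valued method of the paper.

    import numpy as np

    def projected_gradient_armijo(f, grad, project, x0, sigma=1e-4, beta=0.5, iters=100):
        # Projected gradient method with an Armijo-like backtracking rule
        # on the steplength (scalar analogue of the constrained vector method).
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            g = grad(x)
            d = project(x - g) - x            # projected-gradient direction
            if np.linalg.norm(d) < 1e-10:     # no progress possible: stationary
                break
            t = 1.0
            while f(x + t * d) > f(x) + sigma * t * g.dot(d):
                t *= beta                     # backtrack until the Armijo condition holds
            x = x + t * d
        return x

    # Illustrative problem: minimize a quadratic over the box [0, 2]^2.
    f = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2
    grad = lambda x: np.array([2.0 * (x[0] - 3.0), 2.0 * (x[1] + 1.0)])
    project = lambda x: np.clip(x, 0.0, 2.0)
    print(projected_gradient_armijo(f, grad, project, [1.0, 1.0]))   # -> [2, 0]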

On the Cone of Nonsymmetric Positive Semidefinite Matrices

In this paper, we analyze and characterize the cone of nonsymmetric positive semidefinite matrices (NS-psd). Firstly, we study basic properties of the geometry of the NS-psd cone and show that it is a hyperbolic but not homogeneous cone. Secondly, we prove that the NS-psd cone is a maximal convex subcone of $P_0$-matrix cone which is … Read more
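
A simple numerical membership test for this cone follows from the standard fact that $x^T A x \ge 0$ for all $x$ exactly when the symmetric part $(A + A^T)/2$ is positive semidefinite; the tolerance and the example matrix below are illustrative.

    import numpy as np

    def is_ns_psd(A, tol=1e-10):
        # A (possibly nonsymmetric) matrix is in the NS-psd cone iff its
        # symmetric part (A + A^T)/2 has no negative eigenvalues.
        S = 0.5 * (A + A.T)
        return np.min(np.linalg.eigvalsh(S)) >= -tol

    A = np.array([[1.0, 2.0],
                  [0.0, 1.0]])    # nonsymmetric, but x^T A x = (x1 + x2)^2 >= 0
    print(is_ns_psd(A))           # True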

A novel ADM for finding Cournot equilibria of a bargaining problem with alternating offers

Bargaining is a basic game in economic practice, and the Cournot duopoly game is an important model in bargaining theory. Recently, asymmetric information [20], incomplete information [19], limited individual rationality [2], and slightly altruistic equilibrium [10] have been introduced into bargaining theory, and computational game theory has emerged as a new research field. In this paper, we … Read more

Stochastic Nash Equilibrium Problems: Sample Average Approximation and Applications

This paper presents a Nash equilibrium model in which the underlying objective functions involve uncertainty and nonsmoothness. The well-known sample average approximation method is applied to solve the problem, and the first-order equilibrium conditions are characterized in terms of Clarke generalized gradients. Under some moderate conditions, it is shown that with probability one, a … Read more
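
The sample average approximation step itself is simple to illustrate on a single player's expected objective: the expectation is replaced by an average over an i.i.d. sample, and the resulting deterministic problem is solved. The quadratic loss, the normal sample xi, and the scalar decision below are hypothetical stand-ins for the equilibrium model of the paper.

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)
    xi = rng.normal(loc=2.0, scale=1.0, size=1000)   # i.i.d. sample of the random parameter

    def saa_objective(x):
        # Sample average approximation of E[(x - xi)^2]:
        # the expectation is replaced by the mean over the drawn sample.
        return np.mean((x - xi) ** 2)

    print(minimize_scalar(saa_objective).x)   # close to E[xi] = 2 for a large sample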

Dido’s Problem and Pareto Optimality

Under study is a new class of geometrical extremal problems in which it is required to achieve the best result in the presence of conflicting goals; e.g., given the surface area of a convex body $\mathfrak{x}$, we try to maximize the volume of $\mathfrak{x}$ and minimize the width of $\mathfrak{x}$ simultaneously. These problems are addressed … Read more
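
In informal notation, the example above can be written as a vector (Pareto) optimization problem; the fixed surface-area value $s$ is introduced here only for illustration:
\[
\operatorname*{Pareto\text{-}min}_{\mathfrak{x}\ \text{convex body}}
\;\bigl(-\operatorname{vol}(\mathfrak{x}),\ \operatorname{width}(\mathfrak{x})\bigr)
\quad\text{subject to}\quad
\operatorname{area}(\partial\mathfrak{x}) = s .
\]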

Asymptotic expansions for interior penalty solutions of control constrained linear-quadratic problems

We consider a quadratic optimal control problem governed by a nonautonomous affine differential equation subject to nonnegativity control constraints. For a general class of interior penalty functions, we show how to compute the principal term of the pointwise expansion of the state and the adjoint state. Our main argument relies on the following fact: If … Read more
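
As one concrete member of such a class of penalties, a logarithmic barrier replaces the constrained problem by a family of unconstrained ones; the barrier choice and the scaling by $\varepsilon$ below are illustrative, not the paper's general setting:
\[
\min_{u(\cdot)\,\ge\,0} J(u)
\quad\text{is replaced by}\quad
\min_{u(\cdot)}\; J(u) \;-\; \varepsilon \int_0^T \log u(t)\,dt ,
\]
with the penalized solutions expected to approach the constrained one as $\varepsilon \downarrow 0$, which is what makes an expansion of the state and adjoint state in $\varepsilon$ meaningful.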

Decomposition of large-scale stochastic optimal control problems

In this paper, we present an Uzawa-based heuristic adapted to a certain class of stochastic optimal control problems. More precisely, we consider dynamical systems that can be divided into small-scale independent subsystems, though linked through a static almost-sure coupling constraint at each time step. This type of problem is common in production/portfolio management … Read more
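
The Uzawa idea behind such a decomposition can be sketched in a small deterministic setting: each subsystem minimizes its own cost given a price (multiplier) on the coupling constraint, and the price is then updated from the constraint residual. The quadratic costs, the single coupling constraint x_1 + ... + x_N = b, and the step size rho below are illustrative assumptions; the stochastic, dynamic structure of the paper is not reproduced.

    import numpy as np

    # N subsystems, each with cost a_i/2 * x_i^2 + c_i * x_i,
    # coupled only through the constraint x_1 + ... + x_N = b.
    a = np.array([1.0, 2.0, 4.0])
    c = np.array([0.5, -1.0, 0.0])
    b = 3.0

    lam = 0.0            # multiplier (price) of the coupling constraint
    rho = 0.5            # dual step size
    for _ in range(200):
        # Each subsystem solves its own problem given the price lam:
        #   min_x  a_i/2 * x^2 + c_i * x + lam * x   =>   x_i = -(c_i + lam) / a_i
        x = -(c + lam) / a
        lam += rho * (x.sum() - b)   # Uzawa (dual ascent) update

    print(x, x.sum())    # x.sum() is approximately b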

Risk averse feasible policies for large-scale multistage stochastic linear programs

We consider risk-averse formulations of stochastic linear programs having a structure that is common in real-life applications. Specifically, the optimization problem corresponds to controlling, over a certain horizon, a system whose dynamics are given by a transition equation depending affinely on an interstage-dependent stochastic process. We put in place a rolling-horizon, time-consistent policy. … Read more
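
A generic rolling-horizon loop looks as follows: at each stage a problem over the next H stages is solved, only the first decision is implemented, and the horizon is shifted forward. The one-dimensional linear dynamics, the quadratic stage cost, and the zero-mean noise forecast are purely illustrative stand-ins for the risk-averse multistage linear programs considered in the paper.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    T, H = 12, 4                     # number of stages and look-ahead horizon
    x = 5.0                          # initial state

    def horizon_cost(u, x0, w_forecast):
        # Deterministic look-ahead cost over the next H stages
        # (stage cost x^2 + u^2, dynamics x_next = x + u + w).
        cost, xt = 0.0, x0
        for ut, wt in zip(u, w_forecast):
            cost += xt ** 2 + ut ** 2
            xt = xt + ut + wt
        return cost + xt ** 2        # terminal cost

    for t in range(T):
        w_forecast = np.zeros(H)                                   # forecast the noise by its mean
        u_plan = minimize(horizon_cost, np.zeros(H), args=(x, w_forecast)).x
        x = x + u_plan[0] + rng.normal(scale=0.1)                  # apply only the first decision
    print(x)                                                       # state is steered towards 0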

Self-correcting geometry in model-based algorithms for derivative-free unconstrained optimization

Several efficient methods for derivative-free optimization (DFO) are based on the construction and maintenance of an interpolation model for the objective function. Most of these algorithms use special “geometry-improving” iterations, in which the geometry (poisedness) of the underlying interpolation set is improved at the cost of one or more function evaluations. We show that such … Read more
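
A minimal surrogate-model step in this spirit is a least-squares quadratic fit through the current interpolation set; the one-dimensional setting, the sample points, and the test function below are simplifications chosen only to illustrate the role of the interpolation set, not the methods discussed in the paper.

    import numpy as np

    def fit_quadratic_model(Y, f_vals):
        # Least-squares fit of a 1-D quadratic model m(y) = c0 + c1*y + c2*y^2
        # through the interpolation set Y (a DFO-style surrogate of f).
        A = np.column_stack([np.ones_like(Y), Y, Y ** 2])
        c, *_ = np.linalg.lstsq(A, f_vals, rcond=None)
        return c

    Y = np.array([0.0, 0.5, 1.0, 2.0])        # interpolation set
    f = lambda y: np.exp(y) - 2.0 * y         # objective, evaluated without derivatives
    c0, c1, c2 = fit_quadratic_model(Y, f(Y))
    print(-c1 / (2.0 * c2))                   # minimizer of the quadratic surrogate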