On the convergence rate of grid search for polynomial optimization over the simplex

We consider the approximate minimization of a given polynomial on the standard simplex, obtained by taking the minimum value over all rational grid points with given denominator $r \in \mathbb{N}$. It was shown in [De Klerk, E., Laurent, M., Sun, Z.: An error analysis for polynomial optimization over the simplex based on the multivariate hypergeometric …
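
The grid in question is concrete: the candidate points are $x = \alpha/r$ with $\alpha \in \mathbb{N}^n$ and $\sum_i \alpha_i = r$. The following Python sketch (not taken from the paper; the test polynomial is an arbitrary choice for illustration) enumerates that grid by stars and bars and returns the smallest value found.

```python
from itertools import combinations
import numpy as np

def simplex_grid(n, r):
    """Yield all points x = alpha / r with alpha in N^n and sum(alpha) = r."""
    # Stars-and-bars enumeration of the compositions of r into n nonnegative parts.
    for bars in combinations(range(r + n - 1), n - 1):
        alpha = np.diff([-1, *bars, r + n - 1]) - 1
        yield alpha / r

def grid_minimum(f, n, r):
    """Approximate the minimum of f over the simplex by exhaustive grid search."""
    return min((f(x), tuple(x)) for x in simplex_grid(n, r))

if __name__ == "__main__":
    # Illustrative polynomial: f(x) = x1*x2 + x2*x3 - x1**2
    f = lambda x: x[0] * x[1] + x[1] * x[2] - x[0] ** 2
    val, point = grid_minimum(f, n=3, r=10)
    print(f"grid value: {val:.4f} at {point}")
```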

A Linear Scalarization Proximal Point Method for Quasiconvex Multiobjective Minimization

In this paper we propose a linear scalarization proximal point algorithm for solving arbitrary lower semicontinuous quasiconvex multiobjective minimization problems. Under some natural assumptions, and using the condition that the proximal parameters are bounded, we prove the convergence of the sequence generated by the algorithm; when the objective functions are continuous, we prove the …
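
A rough sketch of the kind of iteration involved, under strong simplifications (smooth objectives, fixed weights, an off-the-shelf inner solver), is given below; the paper's algorithm handles nonsmooth quasiconvex objectives and inexact subproblem solutions. Each step minimizes the linearly scalarized objective plus a proximal term centered at the current iterate.

```python
import numpy as np
from scipy.optimize import minimize

def scalarized_prox_step(F, x_k, weights, alpha):
    """One proximal point step on the linear scalarization of F = (f_1, ..., f_m)."""
    def subproblem(x):
        scalarized = sum(w * f(x) for w, f in zip(weights, F))
        return scalarized + 0.5 * alpha * np.sum((x - x_k) ** 2)
    return minimize(subproblem, x_k, method="Nelder-Mead").x

if __name__ == "__main__":
    # Two illustrative smooth objectives (the paper treats lower semicontinuous
    # quasiconvex objectives; smoothness is assumed here only for the demo).
    F = [lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2,
         lambda x: x[0] ** 2 + (x[1] - 2.0) ** 2]
    x = np.array([5.0, 5.0])
    for _ in range(30):
        x = scalarized_prox_step(F, x, weights=[0.5, 0.5], alpha=1.0)
    print("approximate solution:", np.round(x, 3))
```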

Manifold Sampling for L1 Nonconvex Optimization

We present a new algorithm, called manifold sampling, for the unconstrained minimization of a nonsmooth composite function $h\circ F$ when $h$ has known structure. In particular, by classifying points in the domain of the nonsmooth function $h$ into manifolds, we adapt search directions within a trust-region framework based on knowledge of manifolds intersecting the current …
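
For the special case $h(z) = \|z\|_1$, the manifolds of $h$ are indexed by sign patterns, and on each manifold $h$ is linear, so the composite $h \circ F$ has the smooth local model $\sum_i s_i F_i(x)$ with gradient $J_F(x)^\top s$. The sketch below illustrates only this classification and gradient assembly (assuming a smooth $F$ with known Jacobian), not the trust-region algorithm of the paper.

```python
import numpy as np

def manifold_gradient(F, J, x, tol=1e-8):
    """Gradient of the smooth selection of h(F(x)) with h = l1-norm.

    The sign pattern s of F(x) identifies the manifold of h; on that manifold,
    h is the linear function z -> s @ z, so the composite gradient is J(x).T @ s.
    Components with |F_i(x)| <= tol lie on a manifold boundary (a kink of h).
    """
    z = F(x)
    s = np.sign(z)
    on_boundary = np.abs(z) <= tol
    return J(x).T @ s, on_boundary

if __name__ == "__main__":
    # Illustrative smooth mapping F: R^2 -> R^2 with known Jacobian.
    F = lambda x: np.array([x[0] ** 2 - 1.0, x[0] + x[1]])
    J = lambda x: np.array([[2.0 * x[0], 0.0], [1.0, 1.0]])
    g, kink = manifold_gradient(F, J, np.array([2.0, -0.5]))
    print("manifold gradient:", g, "components at a kink:", kink)
```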

Optimization over Sparse Symmetric Sets via a Nonmonotone Projected Gradient Method

We consider the problem of minimizing a Lipschitz differentiable function over a class of sparse symmetric sets that has wide applications in engineering and science. For this problem, it is known that any accumulation point of the classical projected gradient (PG) method with a constant stepsize $1/L$ satisfies the $L$-stationarity optimality condition that was introduced …
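
For the simplest member of this set class, $\{x : \|x\|_0 \le k\}$, the projection just keeps the $k$ largest-magnitude entries, and the classical PG iteration with stepsize $1/L$ reads as follows. This is a plain monotone sketch on a toy least-squares objective, not the nonmonotone method analyzed in the paper.

```python
import numpy as np

def project_sparse(x, k):
    """Euclidean projection onto {x : ||x||_0 <= k}: keep the k largest-magnitude entries."""
    z = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    z[keep] = x[keep]
    return z

def projected_gradient(grad, L, x0, k, iters=200):
    """Classical PG with constant stepsize 1/L on a sparsity constraint."""
    x = project_sparse(x0, k)
    for _ in range(iters):
        x = project_sparse(x - grad(x) / L, k)
    return x

if __name__ == "__main__":
    # Illustrative objective: f(x) = 0.5 * ||A x - b||^2 with a sparse ground truth.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true
    grad = lambda x: A.T @ (A @ x - b)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x_hat = projected_gradient(grad, L, np.zeros(100), k=5)
    print("recovery error:", np.linalg.norm(x_hat - x_true))
```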

Quantitative recovery conditions for tree-based compressed sensing

As shown in [9, 1], signals whose wavelet coefficients exhibit a rooted tree structure can be recovered — using specially-adapted compressed sensing algorithms — from just $n=\mathcal{O}(k)$ measurements, where $k$ is the sparsity of the signal. Motivated by these results, we introduce a simplified proportional-dimensional asymptotic framework which enables the quantitative evaluation of recovery guarantees …
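
The structural assumption behind these results is that the support of the wavelet coefficients forms a rooted, connected subtree: a nonzero coefficient implies a nonzero parent. A small check of that property, assuming the usual array layout of a binary tree (root at index 0, parent of node $i$ at $\lfloor (i-1)/2 \rfloor$), is sketched below; the recovery algorithms themselves are not reproduced.

```python
import numpy as np

def is_rooted_tree_support(x, tol=1e-12):
    """Check the tree-sparsity model: every nonzero coefficient has a nonzero parent.

    Coefficients are assumed stored as a binary tree in array order, with the
    root at index 0 and the parent of node i at (i - 1) // 2.
    """
    support = np.flatnonzero(np.abs(x) > tol)
    return all(i == 0 or np.abs(x[(i - 1) // 2]) > tol for i in support)

if __name__ == "__main__":
    x_tree = np.array([1.0, 0.5, 0.0, 0.2, 0.0, 0.0, 0.0])   # support {0, 1, 3}: a rooted subtree
    x_flat = np.array([0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0])   # node 3 has a zero parent
    print(is_rooted_tree_support(x_tree), is_rooted_tree_support(x_flat))
```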

Algorithms for the power-$p$ Steiner tree problem in the Euclidean plane

We study the problem of constructing minimum power-$p$ Euclidean $k$-Steiner trees in the plane. The problem is to find a tree of minimum cost spanning a set of given terminals where, as opposed to the minimum spanning tree problem, at most $k$ additional nodes (Steiner points) may be introduced anywhere in the plane. The cost …
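
The objective is the sum, over the tree edges, of the Euclidean edge lengths raised to the power $p$. The sketch below only evaluates that cost and the zero-Steiner-point baseline (a spanning tree of the terminals that is minimal under power-$p$ edge weights); the placement of the up to $k$ Steiner points, which is the actual algorithmic content, is not reproduced, and the centroid used here is just an illustrative Steiner point.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def power_p_cost(points, edges, p):
    """Sum of Euclidean edge lengths raised to the power p."""
    return sum(np.linalg.norm(points[i] - points[j]) ** p for i, j in edges)

def mst_power_p_cost(terminals, p):
    """Baseline with no Steiner points: MST of the terminals under power-p edge costs."""
    weights = squareform(pdist(terminals)) ** p
    return minimum_spanning_tree(weights).sum()

if __name__ == "__main__":
    terminals = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
    print("MST cost, p = 2:", mst_power_p_cost(terminals, p=2))
    # One illustrative Steiner point at the centroid, connected to every terminal.
    points = np.vstack([terminals, terminals.mean(axis=0)])
    star_edges = [(0, 3), (1, 3), (2, 3)]
    print("star through Steiner point, p = 2:", power_p_cost(points, star_edges, p=2))
```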

Robust Sensitivity Analysis of the Optimal Value of Linear Programming

We propose a framework for sensitivity analysis of linear programs (LPs) in minimization form, allowing for simultaneous perturbations in the objective coefficients and right-hand sides, where the perturbations are modeled in a compact, convex uncertainty set. This framework unifies and extends multiple approaches for LP sensitivity analysis in the literature and has close ties …
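
As a crude numerical counterpart to such a framework, one can probe the optimal-value function directly: re-solve the LP for perturbations $(\Delta c, \Delta b)$ drawn from a compact convex set. The box uncertainty set, the sampling, and the toy LP below are all illustrative choices, not the paper's construction.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_value(c, A_ub, b_ub):
    """Optimal value of min c @ x s.t. A_ub @ x <= b_ub, x >= 0 (inf if infeasible)."""
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    return res.fun if res.success else np.inf

if __name__ == "__main__":
    # Nominal LP data.
    c = np.array([-1.0, -2.0])
    A_ub = np.array([[1.0, 1.0], [1.0, 3.0]])
    b_ub = np.array([4.0, 6.0])
    # Box uncertainty set (illustrative): |dc_i| <= 0.1 and |db_i| <= 0.2.
    rng = np.random.default_rng(1)
    values = [optimal_value(c + rng.uniform(-0.1, 0.1, 2),
                            A_ub, b_ub + rng.uniform(-0.2, 0.2, 2))
              for _ in range(500)]
    print(f"nominal value: {optimal_value(c, A_ub, b_ub):.3f}")
    print(f"sampled range of optimal values: [{min(values):.3f}, {max(values):.3f}]")
```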

Sequential equality-constrained optimization for nonlinear programming

A new method is proposed for solving optimization problems with equality constraints and bounds on the variables. In the spirit of Sequential Quadratic Programming and Sequential Linearly-Constrained Programming, the new method approximately solves, at each iteration, an equality-constrained optimization problem. The bound constraints are handled in outer iterations by means of an Augmented Lagrangian scheme. …
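
A heavily simplified sketch of that outer/inner structure is shown below: the bounds $l \le x \le u$ are written as inequalities and moved into a Powell-Hestenes-Rockafellar augmented Lagrangian, while each inner problem keeps only the equality constraints and is handed to an off-the-shelf solver. The penalty parameter, iteration counts, and test problem are arbitrary illustrative choices; this is not the algorithm proposed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def al_bounds_method(f, h, lb, ub, x0, rho=10.0, outer=15):
    """Outer augmented Lagrangian for the bounds, inner equality-constrained solves."""
    g = lambda y: np.concatenate([lb - y, y - ub])          # g(x) <= 0 encodes the bounds
    mu = np.zeros(2 * len(x0))                               # multipliers for the bounds
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        aug = lambda y: f(y) + (rho / 2.0) * np.sum(np.maximum(0.0, g(y) + mu / rho) ** 2)
        x = minimize(aug, x, method="SLSQP",
                     constraints=[{"type": "eq", "fun": h}]).x
        mu = np.maximum(0.0, mu + rho * g(x))                # first-order multiplier update
    return x

if __name__ == "__main__":
    # Illustrative problem: min (x1-2)^2 + (x2+1)^2  s.t.  x1 + x2 = 1,  0 <= x <= 0.8.
    f = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
    h = lambda x: np.array([x[0] + x[1] - 1.0])
    x = al_bounds_method(f, h, lb=np.zeros(2), ub=np.full(2, 0.8), x0=np.zeros(2))
    print("approximate solution:", np.round(x, 3))           # expect roughly [0.8, 0.2]
```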

A basis-free null space method for solving generalized saddle point problems

Using an augmented Lagrangian matrix approach, we analytically solve in this paper a broad class of linear systems that includes symmetric and nonsymmetric problems in saddle point form. To this end, some mild assumptions are made and a preconditioner is specially designed to improve the sensitivity of the systems before the calculation of their solutions. …
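
For orientation, the systems have the block form $Ax + B^\top y = f$, $Bx = g$. The snippet below shows only the standard augmented Lagrangian identity of adding $\gamma B^\top(Bx - g)$ to the first block row, which leaves the solution unchanged while replacing the $(1,1)$ block by $A + \gamma B^\top B$; it is a generic illustration, not the basis-free null-space method of the paper.

```python
import numpy as np

def solve_saddle_point(A, B, f, g, gamma=1.0):
    """Solve [[A, B.T], [B, 0]] [x; y] = [f; g] via its augmented Lagrangian form.

    Adding gamma * B.T @ (B x - g) to the first block row is an invertible row
    operation, so the augmented system has the same solution; its (1,1) block
    A + gamma * B.T @ B is often better conditioned.
    """
    n, m = A.shape[0], B.shape[0]
    K = np.block([[A + gamma * B.T @ B, B.T],
                  [B, np.zeros((m, m))]])
    rhs = np.concatenate([f + gamma * B.T @ g, g])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 6, 2
    M = rng.standard_normal((n, n))
    A = M @ M.T + np.eye(n)              # symmetric positive definite (1,1) block
    B = rng.standard_normal((m, n))      # full row rank with probability one
    f, g = rng.standard_normal(n), rng.standard_normal(m)
    x, y = solve_saddle_point(A, B, f, g, gamma=5.0)
    print("residuals:", np.linalg.norm(A @ x + B.T @ y - f), np.linalg.norm(B @ x - g))
```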