Variable Neighborhood Search for parameter tuning in Support Vector Machines

As in most Data Mining procedures, how to tune the parameters of a Support Vector Machine (SVM) is a critical, though not sufficiently explored, issue. The default approach is a grid search in the parameter space, which becomes prohibitively time-consuming even when just a few parameters are to be tuned. For this reason, for models …
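
To see why grid search scales so poorly, a minimal sketch (the grids below are hypothetical choices for an RBF-kernel SVM, not the paper's experimental setup) that just counts the model fits an exhaustive search requires:

```python
import itertools

# Hypothetical tuning grids for an RBF-kernel SVM: the penalty C and
# the kernel width gamma are the two parameters most commonly tuned.
C_grid = [2 ** k for k in range(-5, 16, 2)]      # 11 values
gamma_grid = [2 ** k for k in range(-15, 4, 2)]  # 10 values

# Exhaustive grid search trains one model per parameter combination;
# with cross-validation the cost is multiplied again by the fold count.
combinations = list(itertools.product(C_grid, gamma_grid))
folds = 5
print(len(combinations))          # 110 parameter pairs
print(len(combinations) * folds)  # 550 model fits for 5-fold CV
```

Adding a third tuned parameter with a ten-point grid would multiply both counts by ten, which is the combinatorial growth that motivates search heuristics such as Variable Neighborhood Search.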

On RIC bounds of Compressed Sensing Matrices for Approximating Sparse Solutions Using Lq Quasi Norms

This paper follows the recent discussion on sparse solution recovery with the quasi-norms Lq, q \in (0,1), when the sensing matrix possesses a Restricted Isometry Constant (RIC) \delta_{2k}. Our key tool is an improvement on a version of “the converse of a generalized Cauchy-Schwarz inequality” extended to the quasi-norm setting. We show that, if \delta_{2k} \le 1/2, …
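
For reference, the two standard objects the abstract invokes can be written out explicitly (these are the textbook definitions, not the paper's refined bound):

```latex
\[
\|x\|_q = \Bigl(\sum_{i=1}^n |x_i|^q\Bigr)^{1/q}, \qquad q \in (0,1),
\]
\[
(1-\delta_{2k})\,\|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; (1+\delta_{2k})\,\|x\|_2^2
\quad \text{for all } 2k\text{-sparse } x,
\]
```

where \delta_{2k} is the smallest constant for which the second pair of inequalities holds for the sensing matrix A.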

On Stable Piecewise Linearization and Generalized Algorithmic Differentiation

It is shown how functions that are defined by evaluation programs involving the absolute value function (besides smooth elementals) can be approximated locally by piecewise-linear models in the style of algorithmic, or automatic, differentiation (AD). The model can be generated by a minor modification of standard AD tools, and it is Lipschitz continuous with respect …
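
A hedged illustration of the AD machinery the abstract refers to (a minimal forward-mode sketch with invented names, not the paper's piecewise-linear model): away from kinks, overloading `abs` on dual numbers already propagates exact derivatives, and the comment marks the point at the kink where the paper's piecewise-linear refinement differs.

```python
class Dual:
    """Minimal forward-mode AD value: val carries f(x), dot carries f'(x)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__
    def __abs__(self):
        # Away from the kink (val != 0) this is the exact derivative; at
        # val == 0 a piecewise-linear model would keep |increment| instead
        # of committing to one sign, which is the refinement studied here.
        sign = 1.0 if self.val > 0 else (-1.0 if self.val < 0 else 0.0)
        return Dual(abs(self.val), sign * self.dot)

# f(x) = |x - 2| * x evaluated together with its derivative at x = 3
x = Dual(3.0, 1.0)
y = abs(x + (-2.0)) * x
print(y.val, y.dot)  # 3.0 4.0
```

The "minor modification of standard AD tools" in the abstract amounts to changing how the kink case of `__abs__` is recorded.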

Second-order variational analysis and characterizations of tilt-stable optimal solutions in finite and infinite dimensions

The paper is devoted to developing second-order tools of variational analysis and their applications to characterizing tilt-stable local minimizers of constrained optimization problems in finite-dimensional and infinite-dimensional spaces. The importance of tilt stability has been well recognized from both theoretical and numerical aspects of optimization. Based on second-order generalized differentiation, we obtain qualitative and quantitative …

On the Global and Linear Convergence of the Generalized Alternating Direction Method of Multipliers

The formulation min f(x)+g(y) subject to Ax+By=b, where f and g are extended-value convex functions, arises in many application areas such as signal processing, imaging and image processing, statistics, and machine learning either naturally or after variable splitting. In many common problems, one of the two objective functions is strictly convex and has Lipschitz continuous …
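
The splitting structure can be illustrated on the smallest possible instance. A hedged sketch (plain scaled-form ADMM on a one-dimensional lasso-type problem with invented names, not the paper's generalized variant): take f(x) = 0.5*(x - a)**2, g(y) = lam*|y|, and the constraint x - y = 0.

```python
def soft_threshold(v, t):
    """Proximal operator of t*|.| (the shrinkage operator)."""
    return max(v - t, 0.0) - max(-v - t, 0.0)

def admm_scalar_lasso(a, lam, rho=1.0, iters=200):
    # Solve min 0.5*(x - a)**2 + lam*|y|  s.t.  x - y = 0
    # via scaled-form ADMM; u is the scaled dual variable.
    x = y = u = 0.0
    for _ in range(iters):
        x = (a + rho * (y - u)) / (1.0 + rho)   # smooth subproblem
        y = soft_threshold(x + u, lam / rho)    # nonsmooth subproblem
        u += x - y                              # dual update on the residual
    return x, y

x, y = admm_scalar_lasso(a=2.0, lam=0.5)
print(round(x, 6), round(y, 6))  # both converge to soft_threshold(2.0, 0.5) = 1.5
```

Each iteration touches f and g separately, which is exactly what makes the method attractive after variable splitting.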

An Efficient Augmented Lagrangian Method with Applications to Total Variation Minimization

Based on the classic augmented Lagrangian multiplier method, we propose, analyze and test an algorithm for solving a class of equality-constrained non-smooth optimization problems (chiefly but not necessarily convex programs) with a particular structure. The algorithm effectively combines an alternating direction technique with a nonmonotone line search to minimize the augmented Lagrangian function at each …
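
The classic augmented Lagrangian scheme the abstract builds on can be sketched on a toy equality-constrained problem (an illustrative method-of-multipliers loop with invented names, not the paper's algorithm):

```python
def method_of_multipliers(mu=10.0, iters=60):
    # Solve min x**2  s.t.  x = 1 with the classic augmented Lagrangian
    # L(x, lam) = x**2 + lam*(x - 1) + (mu/2)*(x - 1)**2.
    lam = 0.0
    for _ in range(iters):
        # Exact unconstrained minimizer of L in x (set dL/dx = 0).
        x = (mu - lam) / (2.0 + mu)
        # Multiplier update driven by the constraint residual.
        lam += mu * (x - 1.0)
    return x, lam

x, lam = method_of_multipliers()
print(round(x, 6), round(lam, 6))  # 1.0 -2.0, the KKT point of the toy problem
```

The paper replaces the exact inner minimization above with an alternating direction step plus a nonmonotone line search, which is where the efficiency gain comes from.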

A Block Coordinate Descent Method for Regularized Multi-Convex Optimization with Applications to Nonnegative Tensor Factorization and Completion

This paper considers regularized block multi-convex optimization, where the feasible set and objective function are generally non-convex but convex in each block of variables. We review some of its interesting examples and propose a generalized block coordinate descent method. (Using proximal updates, we further allow non-convexity over some blocks.) Under certain conditions, we show that …
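
The block structure can be made concrete with a hedged two-block sketch (a jointly convex toy chosen so each block update is closed-form; the paper's setting also covers jointly non-convex objectives and proximal updates):

```python
def block_coordinate_descent(iters=60):
    # f(x, y) = (x - 1)**2 + (y - 2)**2 + x*y is convex in x for fixed y
    # and convex in y for fixed x, so each block update is an exact
    # one-dimensional minimization.
    x = y = 0.0
    for _ in range(iters):
        x = 1.0 - y / 2.0  # argmin over x: set df/dx = 2(x - 1) + y = 0
        y = 2.0 - x / 2.0  # argmin over y: set df/dy = 2(y - 2) + x = 0
    return x, y

x, y = block_coordinate_descent()
print(round(x, 6), round(y, 6))  # converges to the stationary point (0, 2)
```

In a tensor factorization application each "block" is one factor matrix rather than a scalar, but the alternation pattern is the same.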

Characterizations of Full Stability in Constrained Optimization

This paper is mainly devoted to the study of the so-called full Lipschitzian stability of local solutions to finite-dimensional parameterized problems of constrained optimization, which has been well recognized as a very important property from both viewpoints of optimization theory and its applications. Based on second-order generalized differential tools of variational analysis, we obtain …

Some criteria for error bounds in set optimization

We obtain sufficient and/or necessary conditions for global/local error bounds for the distances to some sets appearing in set optimization, studied with both the set approach and the vector approach (sublevel sets, constraint sets, sets of {\it all} Pareto efficient/Henig proper efficient/super efficient solutions, sets of solutions {\it corresponding to one} Pareto efficient/Henig proper …

An adaptive accelerated first-order method for convex optimization

This paper presents a new accelerated variant of Nesterov’s method for solving composite convex optimization problems in which certain acceleration parameters are adaptively (and aggressively) chosen so as to substantially improve its practical performance compared to existing accelerated variants, while at the same time preserving the optimal iteration-complexity shared by these methods. Computational results are …
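
For context, the non-adaptive baseline being improved upon can be sketched as follows (a standard Nesterov accelerated gradient loop with the classical t_k momentum sequence; the problem instance and all names are invented for illustration):

```python
import math

def nesterov_agd(grad, x0, L, iters=200):
    """Nesterov's accelerated gradient method with the classical
    t_k momentum sequence; L is a Lipschitz constant of grad."""
    x_prev = list(x0)
    y = list(x0)
    t = 1.0
    for _ in range(iters):
        g = grad(y)
        x = [yi - gi / L for yi, gi in zip(y, g)]  # gradient step at y
        t_next = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        beta = (t - 1.0) / t_next                  # momentum weight
        y = [xi + beta * (xi - xpi) for xi, xpi in zip(x, x_prev)]
        x_prev, t = x, t_next
    return x_prev

# Mildly ill-conditioned quadratic f(x) = 0.5*(x1**2 + 4*x2**2), grad Lipschitz L = 4.
grad = lambda v: [v[0], 4.0 * v[1]]
x = nesterov_agd(grad, [1.0, 1.0], L=4.0)
print(max(abs(c) for c in x))  # tiny: converged to the minimizer at the origin
```

The fixed `t_next`/`beta` schedule above is exactly the kind of acceleration parameter that the paper chooses adaptively instead.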