A polynomial-size extended formulation for the multilinear polytope of beta-acyclic hypergraphs
To explore the limits of a stochastic gradient method, it may be useful to consider an example consisting of an infinite number of quadratic functions. In this context, it is appropriate to determine the expected value and the covariance matrix of the stochastic noise, i.e., the difference between the true gradient and the approximate gradient …
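As a toy version of this computation (a finite family of quadratics rather than the infinite one discussed above; all data and parameter values below are illustrative), one can estimate the noise mean and covariance empirically:

# Minimal sketch: for quadratics f_i(x) = 0.5 x^T A_i x - b_i^T x sampled
# uniformly, estimate mean and covariance of the noise g_i(x) - grad F(x).
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 500                                # dimension, number of quadratics
A = rng.standard_normal((m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2           # symmetrize each A_i
b = rng.standard_normal((m, n))

x = rng.standard_normal(n)                   # evaluation point
grads = np.einsum('mij,j->mi', A, x) - b     # g_i(x) = A_i x - b_i
true_grad = grads.mean(axis=0)               # grad F(x) under uniform sampling
noise = grads - true_grad                    # stochastic noise samples

print("E[noise]  ~", noise.mean(axis=0))     # zero by construction
print("Cov[noise] =\n", np.cov(noise.T, bias=True))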
We first provide an inner-approximation hierarchy, described by a sum-of-squares (SOS) constraint, for the copositive (COP) cone over a general symmetric cone. The hierarchy generalizes the one proposed by Parrilo (2000) for the usual COP cone (over the nonnegative orthant). We also discuss its dual. Second, we characterize the COP cone over a …
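For reference, Parrilo's inner hierarchy for the standard COP cone, which is the starting point this abstract generalizes, can be written as follows (the notation \(\mathcal{S}^n\) for the symmetric matrices is mine):

\[
\mathcal{K}^r = \Big\{ M \in \mathcal{S}^n : \Big(\sum_{i=1}^n x_i^2\Big)^{r} \sum_{i,j=1}^n M_{ij}\, x_i^2 x_j^2 \ \text{is SOS} \Big\}, \qquad \mathcal{K}^0 \subseteq \mathcal{K}^1 \subseteq \cdots \subseteq \mathcal{COP}_n .
\]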
This paper proposes and analyzes a subsampled Cubic Regularization Method (CRM) for solving finite-sum optimization problems. The new method uses random subsampling techniques to approximate the functions, gradients and Hessians in order to reduce the overall computational cost of the CRM. Under suitable hypotheses, first- and second-order iteration-complexity bounds and global convergence analyses are presented. …
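A minimal sketch of one such step, under my own simplifying assumptions (a least-squares finite sum, a generic solver for the cubic model, illustrative batch size and regularization weight):

# One subsampled cubic-regularization step on F(x) = (1/N) sum_i 0.5 (a_i^T x - b_i)^2.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N, n, batch, sigma = 1000, 5, 100, 10.0
A = rng.standard_normal((N, n))
b = rng.standard_normal(N)
x = np.zeros(n)

S = rng.choice(N, size=batch, replace=False)      # random subsample
r = A[S] @ x - b[S]
g = A[S].T @ r / batch                            # subsampled gradient
H = A[S].T @ A[S] / batch                         # subsampled Hessian

def cubic_model(s):                               # m(s) = g^T s + 0.5 s^T H s + (sigma/3)||s||^3
    return g @ s + 0.5 * s @ H @ s + sigma / 3 * np.linalg.norm(s) ** 3

step = minimize(cubic_model, np.zeros(n)).x       # approximate model minimizer
x = x + step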
One popular approach to assessing the importance/influence of a group of nodes in a network is based on the notion of centrality. For a given group, its group betweenness centrality is computed by first evaluating, for each node pair in the network, the ratio of shortest paths that are “covered” by at least one node …
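A brute-force sketch of this definition (illustrative only, not the paper's method; recent versions of networkx also ship a group_betweenness_centrality routine):

# Group betweenness: for each node pair, the fraction of shortest paths that
# pass through ("are covered by") at least one node of the group.
import itertools
import networkx as nx

def group_betweenness(G, group):
    group = set(group)
    total = 0.0
    for s, t in itertools.combinations(G.nodes, 2):
        if s in group or t in group:
            continue                                  # endpoints excluded
        paths = list(nx.all_shortest_paths(G, s, t))
        covered = sum(1 for p in paths if group & set(p[1:-1]))
        total += covered / len(paths)
    return total

G = nx.karate_club_graph()
print(group_betweenness(G, {0, 33}))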
We study network flow interdiction problems with nonlinear and nonconvex flow models. The resulting model is a max-min bilevel optimization problem in which the follower’s problem is nonlinear and nonconvex. In this game, the leader attacks a limited number of arcs with the goal of maximizing the load shed, and the follower aims at minimizing …
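Schematically, such a game has the following form (a generic interdiction template under my assumptions, not the paper's exact model):

\[
\max_{\,y \in \{0,1\}^{|A|},\ \sum_{a \in A} y_a \le B}\;\; \min_{\,x \in X(y)}\; \mathrm{shed}(x),
\]

where \(y_a = 1\) means arc \(a\) is attacked, \(B\) is the interdiction budget, and \(X(y)\) is the follower's nonlinear, nonconvex feasible flow set on the surviving arcs.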
Under mild assumptions, stochastic gradient methods asymptotically achieve an optimal rate of convergence if the arithmetic mean of all iterates is returned as an approximate optimal solution. However, in the absence of stochastic noise, the arithmetic mean of all iterates converges considerably more slowly to the optimal solution than the iterates themselves. The same holds in the …
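A small noiseless experiment illustrating this gap (my own toy example on \(f(x) = \tfrac{1}{2}x^2\)):

# Plain gradient descent converges linearly on a noiseless quadratic, while
# the arithmetic mean of the iterates decays only like O(1/k).
import numpy as np

x, lr, iters = 1.0, 0.5, 50
avg = 0.0
for k in range(1, iters + 1):
    x -= lr * x                      # gradient step for f(x) = 0.5 x^2
    avg += (x - avg) / k             # running arithmetic mean of iterates

print("last iterate:", x)            # ~ (1 - lr)^k, essentially zero
print("mean iterate:", avg)          # ~ 1/k: averaging drags old iterates along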
Deep learning methods achieve state-of-the-art performance in many image restoration tasks. Their effectiveness is mostly related to the size of the dataset used for training. Deep Image Prior (DIP) is an energy-function framework that eliminates the dependence on a training set by considering the structure of a neural network as a handcrafted prior …
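A minimal DIP-style loop, as a sketch under strong simplifications (a tiny convolutional net instead of DIP's encoder-decoder, a random toy image, hand-picked step counts):

# Fit network weights so f_theta(z) matches the corrupted image, stopping
# early so the untrained-network structure acts as the prior.
import torch

torch.manual_seed(0)
clean = torch.rand(1, 1, 32, 32)
noisy = clean + 0.1 * torch.randn_like(clean)     # corrupted observation
z = torch.randn(1, 1, 32, 32)                     # fixed random input

net = torch.nn.Sequential(                        # stand-in for DIP's U-Net
    torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):                           # early stopping is essential in DIP
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(net(z), noisy)
    loss.backward()
    opt.step()

restored = net(z).detach()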
We consider the nonconvex set \(S_n = \{(x,X,z): X = x x^T, \; x (1-z) =0,\; x \geq 0,\; z \in \{0,1\}^n\}\), which is closely related to the feasible region of several difficult nonconvex optimization problems, such as best subset selection and constrained portfolio optimization. Utilizing ideas from convex analysis and disjunctive programming, we …
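To make the connection concrete (my illustration of the modeling idea, not text from the paper): with \(z_i\) indicating whether \(x_i\) may be nonzero, the complementarity constraint \(x_i(1-z_i)=0\) forces \(x_i = 0\) whenever \(z_i = 0\), so a sign-constrained best subset selection problem can be written as

\[
\min_{x,z}\ \|Ax-b\|_2^2 \quad \text{s.t.}\quad \sum_{i=1}^n z_i \le k,\quad x_i(1-z_i)=0,\quad x \ge 0,\quad z\in\{0,1\}^n,
\]

and since \(\|Ax-b\|_2^2 = \mathrm{tr}(A^TA\,X) - 2b^TAx + b^Tb\), the objective is linear in \((x, X)\) once \(X = xx^T\) is lifted into \(S_n\).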
We present a novel method for mixed-integer optimization problems with multivariate and Lipschitz continuous nonlinearities. In particular, we do not assume that the nonlinear constraints are given explicitly; we only assume that we can evaluate them and that their global Lipschitz constants are known. The algorithm is a successive linear relaxation method in which we alternate …
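The kind of information such Lipschitz knowledge yields can be sketched as follows (my illustration, not the paper's relaxation scheme): evaluations at sample points pin the black-box function between piecewise under- and overestimators.

# If f has global Lipschitz constant L, then known values f(p_1),...,f(p_k)
# imply max_i f(p_i) - L*d(x,p_i) <= f(x) <= min_i f(p_i) + L*d(x,p_i).
import numpy as np

def lipschitz_bounds(x, pts, vals, L):
    d = np.linalg.norm(pts - x, axis=1)          # distances to sample points
    return np.max(vals - L * d), np.min(vals + L * d)

rng = np.random.default_rng(2)
f = lambda v: np.sin(v[0]) * v[1]                # "black box"; L <= sqrt(2) on [0,1]^2
pts = rng.random((20, 2))
vals = np.array([f(p) for p in pts])
x = np.array([0.5, 0.5])
lo, hi = lipschitz_bounds(x, pts, vals, L=np.sqrt(2))
print(lo, "<=", f(x), "<=", hi)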