Second-Order Strong Optimality and Second-Order Duality for Nonsmooth Constrained Multiobjective Fractional Programming Problems

This paper investigates constrained nonsmooth multiobjective fractional programming problems (NMFP) in real Banach spaces. It derives a quotient calculus rule for computing the first- and second-order Clarke derivatives of fractional functions involving locally Lipschitz functions. A novel second-order Abadie-type regularity condition is presented, defined with the help of the Clarke directional derivative and the …
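
For reference, the Clarke generalized directional derivative that the abstract builds on is the standard construction for a locally Lipschitz function $f$ (the paper's quotient rule for fractional functions is not reproduced here):
\[
f^{\circ}(x; d) \;=\; \limsup_{y \to x,\; t \downarrow 0} \frac{f(y + t d) - f(y)}{t}.
\]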

Fast convergence of the primal-dual dynamical system and algorithms for a nonsmooth bilinearly coupled saddle point problem

This paper is devoted to studying the convergence rates of a second-order dynamical system and its corresponding discretizations associated with a nonsmooth bilinearly coupled convex-concave saddle point problem. We derive the convergence rate of the primal-dual gap for the second-order dynamical system with an asymptotically vanishing damping term. Based on the implicit discretization, we propose …
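
For orientation, a bilinearly coupled convex-concave saddle point problem of the kind referred to here is typically written as
\[
\min_{x} \max_{y} \; f(x) + \langle K x, y \rangle - g(y),
\]
with $f$ and $g$ convex (possibly nonsmooth) and $K$ a linear operator; the specific assumptions and damping schedule of the paper are not reproduced here.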

Scalable Projection-Free Optimization Methods via MultiRadial Duality Theory

Recent works have developed new projection-free first-order methods that use linesearches and normal vector computations to maintain feasibility. These oracles can be cheaper than orthogonal projection or linear optimization subroutines, but they have the drawback of requiring a known strictly feasible point with respect to which the linesearches are performed. In this work, we develop new …

Some Primal-Dual Theory for Subgradient Methods for Strongly Convex Optimization

We consider (stochastic) subgradient methods for strongly convex but potentially nonsmooth non-Lipschitz optimization. We provide new equivalent dual descriptions (in the style of dual averaging) for the classic subgradient method, the proximal subgradient method, and the switching subgradient method. These equivalences enable $O(1/T)$ convergence guarantees in terms of both their classic primal gap and a …
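
As a concrete illustration of the classic subgradient method in the strongly convex, nonsmooth setting, here is a minimal sketch of the textbook scheme with the usual step sizes and a weighted average of the iterates; the objective, step-size rule, and weights below are illustrative assumptions, not the dual descriptions developed in the paper.

import numpy as np

def subgrad(x, b, mu):
    # one subgradient of f(x) = (mu/2)*||x||^2 + ||x - b||_1 at x
    return mu * x + np.sign(x - b)

def subgradient_method(b, mu=1.0, T=2000):
    x = np.zeros_like(b)
    x_avg = np.zeros_like(b)
    weight_sum = 0.0
    for t in range(T):
        x = x - (2.0 / (mu * (t + 2))) * subgrad(x, b, mu)  # step size 2/(mu*(t+2))
        w = t + 1.0                                         # weights proportional to t+1
        weight_sum += w
        x_avg += (w / weight_sum) * (x - x_avg)             # running weighted average
    return x_avg

print(subgradient_method(np.array([2.0, -1.0, 0.5])))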

Weakly convex Douglas-Rachford splitting avoids strict saddle points

We prove that the Douglas-Rachford splitting method converges, almost surely, to local minimizers of semialgebraic weakly convex optimization problems, under the assumption of the strict saddle property. The approach consists of two steps: first, we prove a manifold identification result and local smoothness of the involved iteration operator. Then, we proceed to show that strict …
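
For reference, the Douglas-Rachford splitting iteration for minimizing a sum $f + g$ via proximal maps is the standard fixed-point scheme (the semialgebraic and weakly convex assumptions of the paper are not restated here):
\[
x^{k} = \operatorname{prox}_{\gamma f}(z^{k}), \qquad
z^{k+1} = z^{k} + \operatorname{prox}_{\gamma g}\bigl(2 x^{k} - z^{k}\bigr) - x^{k}.
\]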

Weak convexity and approximate subdifferentials

We explore and construct an enlarged subdifferential for weakly convex functions. The resulting object turns out to be continuous with respect to both the function argument and the enlargement parameter. We carefully analyze connections with other constructs in the literature and extend well-known variational principles to the weakly convex setting. By resorting to the new …
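
For context, a function $f$ is usually called $\rho$-weakly convex when adding a quadratic restores convexity, i.e.
\[
x \;\mapsto\; f(x) + \tfrac{\rho}{2}\,\|x\|^{2} \quad \text{is convex;}
\]
the enlarged subdifferential studied in the paper is defined for this class of functions.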

An Inexact Restoration Direct Multisearch Filter Approach to Multiobjective Constrained Derivative-free Optimization

Direct Multisearch (DMS) is a well-established class of methods for multiobjective derivative-free optimization, where constraints are addressed by an extreme barrier approach, only evaluating feasible points. In this work, we propose a filter approach, combined with an inexact feasibility restoration step, to address constraints in the DMS framework. The filter approach treats feasibility as an …

Solving separable convex optimization problems: Faster prediction-correction framework

He and Yuan's prediction-correction framework [SIAM J. Numer. Anal. 50: 700-709, 2012] provides convergent algorithms for solving separable convex optimization problems at a rate of $O(1/t)$ (where $t$ denotes the iteration count) in both the ergodic sense (the average of the iterates) and the pointwise sense. This paper presents a faster prediction-correction framework at a rate of $O(1/t)$ …
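
Here "ergodic", as is common in this literature, refers to rates stated at the averaged iterate
\[
\bar{w}^{\,t} \;=\; \frac{1}{t} \sum_{k=1}^{t} w^{k},
\]
while pointwise rates are stated at the iterates themselves.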

Convergence Rate of Projected Subgradient Method with Time-varying Step-sizes

We establish the optimal ergodic convergence rate for the classical projected subgradient method with time-varying step-sizes. This convergence rate remains the same even if we slightly increase the weight of the most recent points, thereby relaxing the ergodic sense.
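
A minimal sketch of the classical projected subgradient method with time-varying step-sizes and an ergodic (averaged) output is given below; the objective, feasible set, and step-size constant are illustrative assumptions, not taken from the paper.

import numpy as np

def project_unit_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def projected_subgradient(b, c=1.0, T=5000):
    # minimize ||x - b||_1 over the Euclidean unit ball
    x = np.zeros_like(b)
    x_avg = np.zeros_like(b)
    for t in range(T):
        g = np.sign(x - b)                                   # a subgradient of ||x - b||_1
        x = project_unit_ball(x - (c / np.sqrt(t + 1)) * g)  # time-varying step size c/sqrt(t+1)
        x_avg += (x - x_avg) / (t + 1)                       # running (ergodic) average
    return x_avg

print(projected_subgradient(np.array([2.0, -0.5])))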

Convergence of the Chambolle–Pock Algorithm in the Absence of Monotonicity

The Chambolle-Pock algorithm (CPA), also known as the primal-dual hybrid gradient method (PDHG), has surged in popularity in the last decade due to its success in solving convex/monotone structured problems. This work provides convergence results for problems with varying degrees of (non)monotonicity, quantified through a so-called oblique weak Minty condition on the associated primal-dual operator. …
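
For reference, a minimal sketch of the standard Chambolle-Pock/PDHG iteration on a simple convex instance follows; the objective is an illustrative assumption, and the oblique weak Minty condition and nonmonotone setting analyzed in the paper are not modeled here.

import numpy as np

def chambolle_pock(K, b, T=500):
    # min_x 0.5*||x - b||^2 + ||K x||_1, solved via the primal-dual iteration
    m, n = K.shape
    L = np.linalg.norm(K, 2)          # operator norm of K
    tau = sigma = 1.0 / L             # tau * sigma * ||K||^2 = 1
    x, x_bar, y = np.zeros(n), np.zeros(n), np.zeros(m)
    for _ in range(T):
        y = np.clip(y + sigma * (K @ x_bar), -1.0, 1.0)        # prox of sigma*g*, g = ||.||_1
        x_new = (x - tau * (K.T @ y) + tau * b) / (1.0 + tau)  # prox of tau*f, f = 0.5*||. - b||^2
        x_bar = 2.0 * x_new - x                                # extrapolation step
        x = x_new
    return x

K = np.array([[1.0, -1.0], [0.0, 2.0]])
print(chambolle_pock(K, np.array([1.0, 3.0])))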