An Extension of the Proximal Point Method for Quasiconvex Minimization

In this paper we propose an extension of the proximal point method to solve minimization problems with quasiconvex objective functions on the Euclidean space and on the nonnegative orthant. For the unconstrained minimization problem, assuming that the function is bounded from below and lower semicontinuous, we prove that the iterations $\{x^k\}$ given by $0 \in \hat{\partial}\left(f(\cdot) + \frac{\lambda_k}{2}\|\cdot - x^{k-1}\|^2\right)(x^k)$ are well defined and that, if in addition $f$ is quasiconvex, then $\{f(x^k)\}$ is decreasing and $\{x^k\}$ converges to a point of $U := \{x \in \mathbb{R}^n : f(x) \leq \inf_{j \geq 0} f(x^j)\}$, assumed nonempty. Under the assumption that the sequence of parameters $\{\lambda_k\}$ is bounded and $f$ is continuous, it is proved that $\{x^k\}$ converges to a generalized critical point of $f$. Furthermore, if $\{\lambda_k\}$ converges to zero and the iterates $x^k$ are global minimizers of the regularized subproblems $f(\cdot) + \frac{\lambda_k}{2}\|\cdot - x^{k-1}\|^2$, the sequence converges to an optimal solution.

For quasiconvex minimization on the nonnegative orthant, under the same assumptions as in the unconstrained case and using a general proximal distance (which includes as particular cases Bregman distances, $\varphi$-divergence distances, and second-order homogeneous distances), we find that the iterations $\{x^k\}$ given by $0 \in \hat{\partial}\left(f(\cdot) + \lambda_k d(\cdot, x^{k-1})\right)(x^k)$ are well defined and remain in the positive orthant, and that $\{f(x^k)\}$ is decreasing and convergent. If $U^+ := \{x \in \mathbb{R}^n_+ \cap \operatorname{dom} f : f(x) \leq \inf_{j \geq 0} f(x^j)\}$ is nonempty, $\{\lambda_k\}$ is bounded, and the distance is separable, we obtain convergence to a generalized KKT point of the problem. Furthermore, in the smooth case we introduce a sufficient condition on the proximal distance under which the sequence converges to an optimal solution of the problem. When the parameters converge to zero, we obtain a result analogous to the unconstrained case for self-proximal distances.
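To make the scheme concrete, the following is a minimal numerical sketch, not the paper's implementation: it approximates each regularized subproblem with a generic smooth solver, so it assumes $f$ is smooth enough for scipy's default quasi-Newton method, whereas the paper's analysis only requires the Fréchet-subdifferential inclusion above. The helper `proximal_point`, the distance functions, the test function, and the constant parameter choice are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch only: each outer step approximately solves the
# regularized subproblem  min_x  f(x) + lam_k * d(x, x_prev).

def euclidean(x, y):
    # d(x, y) = (1/2)||x - y||^2: the unconstrained (Euclidean) case,
    # matching the (lambda_k / 2)||. - x^{k-1}||^2 regularization above.
    return 0.5 * np.dot(x - y, x - y)

def phi_divergence(x, y):
    # Separable KL-type phi-divergence, one of the proximal distance
    # classes mentioned in the abstract; finite only for x > 0, which
    # is what keeps iterates in the positive orthant (in practice a
    # bounded solver such as L-BFGS-B with positivity bounds is needed).
    return np.sum(x * np.log(x / y) + y - x)

def proximal_point(f, x0, lambdas, d=euclidean, tol=1e-8):
    """Hypothetical helper: run the proximal point iteration with
    regularization parameters lambdas and proximal distance d."""
    x = np.asarray(x0, dtype=float)
    for lam in lambdas:
        x_prev = x.copy()
        # New iterate: approximate minimizer of the regularized subproblem,
        # i.e. a point where 0 lies in the (sub)gradient of the subproblem.
        x = minimize(lambda z: f(z) + lam * d(z, x_prev), x_prev).x
        if np.linalg.norm(x - x_prev) < tol:  # iterates stagnate: stop
            break
    return x

# Quasiconvex but non-convex test function: an increasing transform
# (log1p) of a convex function is quasiconvex; minimizer at (1, 2).
f = lambda z: np.log1p(np.sum((z - np.array([1.0, 2.0])) ** 2))
x_star = proximal_point(f, x0=[5.0, 5.0], lambdas=[1.0] * 100)
```

With the constant parameters above, the iterates drive the decreasing sequence $\{f(x^k)\}$ toward the minimum; letting the parameters tend to zero corresponds to the regime in which the abstract guarantees convergence to an optimal solution.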
