Globally convergent Newton-type methods for multiobjective optimization

We propose two Newton-type methods for solving (possibly) nonconvex unconstrained multiobjective optimization problems. The first is directly inspired by the Newton method designed to solve convex problems, whereas the second uses second-order information of the objective functions with ingredients of the steepest descent method. One of the key points of our approaches is to impose …
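The steepest-descent ingredient mentioned above rests on a standard building block: a common descent direction that decreases all objectives at once, obtained from the minimum-norm point of the convex hull of the gradients. The sketch below (my illustration, not the paper's algorithm) computes it in closed form for two objectives:

```python
# Common descent direction for two objectives (steepest-descent ingredient).
# Sketch only: d = -argmin_{v in conv{g1, g2}} ||v||^2 satisfies
# g1 . d <= 0 and g2 . d <= 0, so it is a descent direction for both.

def common_descent_direction(g1, g2):
    diff = [a - b for a, b in zip(g1, g2)]
    denom = sum(d * d for d in diff)
    if denom == 0.0:
        lam = 0.0                       # g1 == g2: any convex weight works
    else:
        # minimize ||g2 + lam * (g1 - g2)||^2 over lam in [0, 1]
        lam = -sum(d * g for d, g in zip(diff, g2)) / denom
        lam = min(1.0, max(0.0, lam))
    v = [lam * a + (1.0 - lam) * b for a, b in zip(g1, g2)]
    return [-c for c in v]

g1, g2 = [2.0, 0.0], [-1.0, 1.0]
d = common_descent_direction(g1, g2)
# d has nonpositive inner product with both gradients
```

The Newton-type variants replace the gradients by Newton steps built from the Hessians, but the "combine per-objective information into one direction" pattern is the same.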

Accuracy and fairness trade-offs in machine learning: A stochastic multi-objective approach

In the application of machine learning to real-life decision-making systems, e.g., credit scoring and criminal justice, the prediction outcomes might discriminate against people with sensitive attributes, leading to unfairness. The commonly used strategy in fair machine learning is to include fairness as a constraint or a penalization term in the minimization of the prediction …
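The penalization strategy can be sketched as "prediction loss plus a fairness gap term". The snippet below is a minimal illustration, assuming a squared prediction loss and a demographic-parity-style penalty on the difference of mean predictions between two groups; the names, the loss, and the penalty choice are mine, not the paper's formulation:

```python
# Sketch of "loss + fairness penalty" (illustrative, not the paper's model).

def penalized_loss(preds, labels, groups, lam):
    """Mean squared prediction loss plus
    lam * |mean prediction in group 0 - mean prediction in group 1|."""
    loss = sum((p - y) ** 2 for p, y in zip(preds, labels)) / len(preds)
    m0 = [p for p, g in zip(preds, groups) if g == 0]
    m1 = [p for p, g in zip(preds, groups) if g == 1]
    gap = abs(sum(m0) / len(m0) - sum(m1) / len(m1))
    return loss + lam * gap

labels = [0, 1, 0, 1]
groups = [0, 0, 1, 1]
balanced = [0.5, 0.5, 0.5, 0.5]
# with equal group means the penalty vanishes for any lam
```

Trading off accuracy against fairness then amounts to varying `lam`, which is exactly the kind of multi-objective trade-off the title refers to.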

A conjugate directions-type procedure for quadratic multiobjective optimization

We propose an extension of the real-valued conjugate directions method for unconstrained quadratic multiobjective problems. As in the single-valued counterpart, the procedure requires a set of directions that are simultaneously conjugate with respect to the positive definite matrices of all quadratic objective components. Likewise, the multicriteria version computes the steplength by means of the unconstrained …
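For reference, the single-valued method being extended runs as follows on one quadratic f(x) = ½xᵀAx − bᵀx: given pairwise A-conjugate directions, take the exact steplength along each and reach the minimizer in n steps. A small pure-Python sketch:

```python
# Single-valued conjugate-directions method on f(x) = 0.5 x'Ax - b'x
# (the scalar method the abstract extends; 2x2 toy example).

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_directions(A, b, x, dirs):
    for d in dirs:                       # dirs assumed pairwise A-conjugate
        g = [gi - bi for gi, bi in zip(matvec(A, x), b)]   # gradient Ax - b
        alpha = -dot(g, d) / dot(d, matvec(A, d))          # exact steplength
        x = [xi + alpha * di for xi, di in zip(x, d)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
d0 = [1.0, 0.0]
# make d1 A-conjugate to d0 (Gram-Schmidt in the A-inner product)
t = dot(matvec(A, d0), [0.0, 1.0]) / dot(d0, matvec(A, d0))
d1 = [0.0 - t * d0[0], 1.0 - t * d0[1]]
x = conjugate_directions(A, b, [0.0, 0.0], [d0, d1])
# after n = 2 conjugate steps, x solves Ax = b
```

The multiobjective version requires the directions to be conjugate with respect to every objective's matrix simultaneously, and replaces the scalar steplength rule accordingly.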

An Adaptive Patch Approximation Algorithm for Bicriteria Convex Mixed Integer Problems

Pareto frontiers of bicriteria continuous convex problems can be efficiently computed and optimal theoretical performance bounds have been established. In the case of bicriteria mixed-integer problems, the approximation of the Pareto frontier becomes, however, significantly harder. In this paper, we propose a new algorithm for approximating the Pareto frontier of bicriteria mixed-integer programs with convex …
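Whatever the approximation scheme, any candidate outcome vectors it generates must be filtered for dominance. A minimal sketch of that shared building block for bicriteria minimization (my illustration, not the paper's patch algorithm):

```python
# Nondominance filter for bicriteria minimization: keep each outcome
# vector (f1, f2) not dominated by any other candidate (sketch).

def pareto_filter(points):
    def dominates(p, q):
        return p[0] <= q[0] and p[1] <= q[1] and p != q
    return sorted(p for p in points
                  if not any(dominates(q, p) for q in points))

cands = [(1.0, 5.0), (2.0, 3.0), (2.5, 3.5), (4.0, 1.0), (5.0, 5.0)]
front = pareto_filter(cands)   # (2.5, 3.5) and (5.0, 5.0) are dominated
```

In the mixed-integer case the frontier is typically a union of such nondominated "patches", one per integer assignment, which is what makes the approximation harder than in the purely continuous convex case.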

A Tractable Multi-Leader Multi-Follower Peak-Load-Pricing Model with Strategic Interaction

While single-level Nash equilibrium problems are quite well understood nowadays, less is known about multi-leader multi-follower games. However, these have important applications, e.g., in the analysis of electricity and gas markets, where often a limited number of firms interact on various subsequent markets. In this paper, we consider a special class of two-level multi-leader multi-follower …

A general branch-and-bound framework for continuous global multiobjective optimization

Current generalizations of the central ideas of single-objective branch-and-bound to the multiobjective setting do not seem to follow their train of thought all the way. The present paper complements the various suggestions for generalizations of partial lower bounds and of overall upper bounds by general constructions for overall lower bounds from partial lower bounds, and …
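In the single-objective template that is being generalized, each node of the tree carries a partial lower bound valid on its subbox, and the overall lower bound is simply the minimum of the partial bounds over all open nodes. A toy single-objective sketch of that construction (my illustration; the paper builds the multiobjective analogue):

```python
# 1D Lipschitz branch-and-bound sketch: each interval gets the partial
# lower bound f(mid) - L*w/2; the overall lower bound is the minimum of
# the partial bounds over all open intervals (the heap's top element).

import heapq

def bb_minimize(f, a, b, L, tol=1e-4):
    """Minimize an L-Lipschitz f on [a, b] by interval branch-and-bound."""
    def node(lo, hi):
        mid = 0.5 * (lo + hi)
        return (f(mid) - L * (hi - lo) / 2, lo, hi, f(mid))
    heap = [node(a, b)]
    best = heap[0][3]                       # overall upper bound
    while heap:
        lb, lo, hi, fmid = heapq.heappop(heap)
        if best - lb <= tol:                # overall LB meets the UB: done
            break
        mid = 0.5 * (lo + hi)
        for child in (node(lo, mid), node(mid, hi)):
            best = min(best, child[3])
            heapq.heappush(heap, child)
    return best

val = bb_minimize(lambda x: (x - 0.3) ** 2, 0.0, 1.0, L=2.0)
# val is within tol of the true minimum 0 at x = 0.3
```

In the multiobjective setting the bounds are sets rather than scalars, which is why combining partial lower bounds into an overall one needs the explicit constructions the paper supplies.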

No-regret Learning in Price Competitions under Consumer Reference Effects

We study long-run market stability for repeated price competitions between two firms, where consumer demand depends on firms’ posted prices and on consumers’ price expectations, called reference prices. Consumers’ reference prices vary over time according to a memory-based dynamic, which is a weighted average of all historical prices. We focus on the setting where firms are …
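A weighted average of all historical prices with geometrically decaying memory can be written as a one-line recursive update. The sketch below uses `theta` as an illustrative memory parameter, not the paper's notation:

```python
# Memory-based reference-price dynamic: the reference price is a weighted
# average of past posted prices; with geometric memory weight theta this
# is the recursion r_{t+1} = theta * r_t + (1 - theta) * p_t (sketch).

def update_reference(r, p, theta):
    return theta * r + (1.0 - theta) * p

r = 10.0
for _ in range(50):            # firms repeatedly post price 8
    r = update_reference(r, 8.0, theta=0.9)
# r has drifted from the initial reference 10 toward the posted price 8
```

Under a constant posted price the reference price converges to it, which is the anchor around which the long-run stability questions in the abstract are posed.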

Cut-Sharing Across Trees and Efficient Sequential Sampling for SDDP with Uncertainty in the RHS

In this paper, we show that when a multistage stochastic problem with stage-wise independent realizations has only RHS uncertainties, solving one tree provides a valid lower bound for all trees with the same number of scenarios per stage without any additional computational effort. The only change to the traditional algorithm is the way cuts are …
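The mechanism behind such cut sharing can be seen on a one-dimensional toy recourse problem: by LP duality the dual feasible set does not depend on the right-hand side, so a dual multiplier collected under one RHS realization yields a valid cut for every other realization. A toy illustration (not the paper's algorithm):

```python
# Toy recourse function Q_h(x) = min{ y : y >= h - x, y >= 0 } = max(h - x, 0).
# LP duality gives Q_h(x) = max_{pi in [0,1]} pi * (h - x); the dual feasible
# set [0, 1] is independent of the RHS h, so a multiplier pi obtained on one
# scenario yields a valid cut pi * (h - x) <= Q_h(x) for ANY realization h.

def Q(h, x):
    return max(h - x, 0.0)

pi = 1.0   # dual multiplier obtained from one particular (h, x) pair
for h in [-2.0, 0.0, 1.5, 4.0]:        # other RHS realizations
    for x in [-1.0, 0.0, 2.0]:
        assert pi * (h - x) <= Q(h, x) + 1e-12   # cut stays valid
```

Only the cut's intercept changes with the realization; the dual coefficients transfer unchanged, which is why no extra computation is needed.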

Characterization of an Anomalous Behavior of a Practical Smoothing Technique

A practical smoothing method was analyzed and tested against state-of-the-art solvers for some non-smooth optimization problems in [BSS20a; BSS20b]. This method can be used to smooth the value functions and solution mappings of fully parameterized convex problems under mild conditions. In general, the smoothed value function lies above the true value function …
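The "smoothing lies above the true function" behavior can be illustrated with a standard textbook example, the softplus smoothing of max(0, x); this is an illustration of the phenomenon, not the smoothing technique analyzed in the paper:

```python
import math

# Softplus smoothing s_beta(x) = log(1 + exp(beta*x)) / beta of max(0, x):
# a smooth over-approximation that converges to max(0, x) as beta grows
# (standard example; not the paper's smoothing method).

def softplus(x, beta):
    return math.log1p(math.exp(beta * x)) / beta

xs = [-2.0, -0.5, 0.0, 0.5, 2.0]
gaps = [softplus(x, beta=10.0) - max(0.0, x) for x in xs]
# every gap is positive, and shrinks as beta grows
```

An over-approximation of this kind is often desirable, since minimizing the smoothed function yields a value that upper-bounds progress on the true one; the paper characterizes an anomalous case of this behavior.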

An Integer Programming Approach to Deep Neural Networks with Binary Activation Functions

We study deep neural networks with binary activation functions (BDNN), i.e., the activation function takes only two values. We show that the BDNN can be reformulated as a mixed-integer linear program which can be solved to global optimality by classical integer programming solvers. Additionally, a heuristic solution algorithm is presented and we study the model …
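A standard way to linearize a binary activation a = 1{s ≥ 0} inside a mixed-integer program is a big-M constraint pair; the sketch below (my illustration, not necessarily the paper's exact reformulation) brute-force checks that the constraints pin the binary variable to the activation value:

```python
# Big-M linearization of a binary activation a = 1 if s >= 0 else 0:
# the kind of constraint pair that lets integer programming solvers
# handle binary-activation networks (sketch; EPS breaks the tie for s < 0).

M, EPS = 100.0, 1e-6

def feasible(s, a):
    """Constraints: s >= -M*(1 - a)  and  s <= M*a - EPS*(1 - a)."""
    return s >= -M * (1 - a) and s <= M * a - EPS * (1 - a)

for s in [-3.0, -0.5, 0.0, 0.7, 5.0]:
    allowed = [a for a in (0, 1) if feasible(s, a)]
    # exactly one binary value is feasible, matching the activation
    assert allowed == [1 if s >= 0 else 0]
```

Stacking such constraints layer by layer, with the preactivations s expressed linearly in the previous layer's binary outputs, yields the mixed-integer linear reformulation the abstract describes.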