DIMACS Center, CoRE Building, Rutgers University

**Organizers:**

- **Melike Baykal-Gursoy**, Rutgers University, gursoy at rci.rutgers.edu
- **David Brown**, Duke University, dbbrown at duke.edu
- **Aleksandar Pekec**, Duke University, pekec at duke.edu
- **Andrzej Ruszczynski**, Rutgers University, rusz at business.rutgers.edu
- **Dharmashankar Subramanian**, IBM Watson Labs, dharmash at us.ibm.com

Title: Dynamic Portfolio Optimization with Transaction Costs: Heuristics and Dual Bounds

We study dynamic portfolio choice with transaction costs. This problem is naturally formulated as a dynamic program that is very difficult to solve when there are more than just a few assets. We explore some limited-look-ahead heuristic trading strategies and develop information-relaxation-based dual bounds that provide an upper bound on the performance of an optimal trading strategy. Extensive numerical results suggest that these easy-to-compute heuristic strategies are often nearly optimal.
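For context, the information-relaxation duality underlying such bounds can be sketched in generic notation (this is a schematic statement, not the talk's exact formulation): relaxing the non-anticipativity constraint and charging a penalty $z$ satisfying $\mathbb{E}\big[\sum_t z_t(a)\big] \le 0$ for every feasible non-anticipative strategy $a$ gives

```latex
\sup_{a \in \mathcal{A}_{\mathbb{F}}} \mathbb{E}\Big[\sum_{t=0}^{T} r_t(a)\Big]
\;\le\;
\mathbb{E}\Big[\sup_{a \in \mathcal{A}} \sum_{t=0}^{T} \big(r_t(a) - z_t(a)\big)\Big],
```

so any heuristic policy yields a lower bound while any dual-feasible penalty yields an upper bound, sandwiching the optimal value.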

Title: Risk-Averse Optimal Path Problems for Markov Models

We use Markov risk measures to formulate risk-averse versions of the undiscounted stochastic shortest- and longest-path problems in an absorbing controlled Markov chain. We derive risk-averse dynamic programming equations and show that, when risk measures are employed, a randomized policy may be strictly better than deterministic policies.
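Schematically (notation illustrative, not the talk's exact statement), with a one-step conditional risk mapping $\sigma$ replacing the conditional expectation, the risk-averse shortest-path equations take the form

```latex
v(x) \;=\; \min_{a \in A(x)} \Big\{ c(x,a) + \sigma\big(v(X_1),\, x, a\big) \Big\},
\qquad v(x) = 0 \ \text{for absorbing states } x.
```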

This is joint work with Andrzej Ruszczynski.

Title: Scenario decomposition of risk-averse multistage stochastic programming problems

In the last decade the theory of coherent risk measures has established itself as an alternative to expected utility models of risk-averse preferences in stochastic optimization. Recently, increased attention has been paid to dynamic measures of risk, which allow for risk-averse evaluation of streams of future costs or rewards.
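For reference, a coherent risk measure $\rho$ (in the cost convention, following Artzner, Delbaen, Eber and Heath) satisfies four axioms:

```latex
\begin{aligned}
&\text{Convexity:} && \rho\big(\lambda Z + (1-\lambda) W\big) \le \lambda\,\rho(Z) + (1-\lambda)\,\rho(W), \quad \lambda \in [0,1],\\
&\text{Monotonicity:} && Z \le W \ \Rightarrow\ \rho(Z) \le \rho(W),\\
&\text{Translation equivariance:} && \rho(Z + c) = \rho(Z) + c \quad \text{for constants } c,\\
&\text{Positive homogeneity:} && \rho(t Z) = t\,\rho(Z), \quad t \ge 0.
\end{aligned}
```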

When used in stochastic optimization models, dynamic risk measures lead to a new class of problems, which are significantly more complex than their risk-neutral counterparts. Decomposition, an established and efficient approach to risk-neutral multistage stochastic optimization problems, cannot be directly applied to risk-averse models. With dynamic risk measures, the main feature facilitating decomposition, the integral form of the objective function, is absent.

Our main objective is to overcome this difficulty by exploiting specific structure of dynamic risk measures, and to develop new decomposition methods that extend the ideas of earlier approaches to risk-neutral problems.

In this work we develop generalizations of scenario decomposition methods, in the spirit of J.M. Mulvey and A. Ruszczynski, "A new scenario decomposition method for large-scale stochastic optimization," Operations Research 43, 1995. The key to success is the use of dual properties of dynamic measures of risk to construct a family of risk-neutral approximations of the problem. First, we define and analyze a two-stage risk-averse stochastic optimization problem. Next, we develop methods to solve this problem efficiently. Later, we formally define a multistage risk-averse stochastic optimization problem and discuss its properties. We also develop efficient methods to solve the multistage problem and apply them to an inventory planning and assembly problem.
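The dual properties in question stem from the standard representation of a coherent risk measure as a worst-case expectation over a convex set $\mathcal{A}$ of probability measures (densities):

```latex
\rho(Z) \;=\; \sup_{\mu \in \mathcal{A}} \mathbb{E}_{\mu}[Z].
```

Fixing any particular $\mu \in \mathcal{A}$ produces a risk-neutral approximation of the problem, which is what makes risk-neutral decomposition machinery applicable.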

Finally, we analyze and compare the results of our computational experiments.

Title: Extension of Lyapunov's Convexity Theorem to Subranges

Consider a measurable space with a finite vector measure. This measure defines a mapping of the $\sigma$-field into a Euclidean space.

According to Lyapunov's convexity theorem, the range of this mapping is compact and, if the measure is atomless, this range is convex.
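In symbols: if $\mu = (\mu_1,\dots,\mu_n)$ is a finite vector measure on $(\Omega, \Sigma)$, the theorem concerns the range

```latex
R(\mu) \;=\; \{\, \mu(A) \;:\; A \in \Sigma \,\} \subset \mathbb{R}^n,
```

which, per the statement above, is compact, and is convex when $\mu$ is atomless.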

Similar ranges are also defined for measurable subsets of the space.

We show that the union of the ranges of all subsets having the same given vector measure is also compact and, if the measure is atomless, it is convex. We further provide a geometrically constructed convex compactum in the Euclidean space that contains this union.

We show that, for two-dimensional measures, among all the subsets having the same given vector measure, there exists a set with the maximal range of the vector measure (maximal subset). Furthermore, for two-dimensional measures, the maximal subset, the above-mentioned union, and the above-mentioned compactum are equal sets. We also give counterexamples showing that, in three or higher dimensions, the maximal subset may not exist and these equalities may not hold.

We use the existence of maximal subsets to strengthen the Dvoretzky-Wald-Wolfowitz purification theorem for the case of two measures. We show that there are no similar results for three or higher dimensions.

Title: Tight Dynamically Consistent Approximations of Inconsistent Distortion Risk Measures

A proper framework for measuring and mitigating risk in dynamic settings is of utmost importance, on both a practical and a theoretical level. In recent years, coherent risk measures have emerged as a viable alternative to classical frameworks involving expected utility theory; their properties and axiomatic representation theorems are well understood in static settings, and have recently been extended to dynamic decision problems. In practice, however, since the specification of the resulting dynamic risk measures is typically much more involved, single-period static risk metrics are often used to assess dynamic risk, due to their conciseness and ease of interpretation. This can lead to an over- or under-estimation of the true dynamic risk, as well as to potentially inconsistent behavior.

In this paper, we investigate two different frameworks for assessing the risk in a multi-period decision process: a dynamically inconsistent formulation (whereby a single, static risk measure is applied to the entire sequence of future costs), and a dynamically consistent one, obtained by suitably composing one-step risk mappings. We characterize the class of dynamically consistent measures that provide a tight approximation for a given inconsistent measure, and discuss how the approximation factors can be computed. For the case where the consistent measures are given by Average Value-at-Risk, we derive a polynomial-time algorithm for approximating an arbitrary inconsistent distortion measure. We also present exact analytical bounds for the case where the dynamically inconsistent measure is also given by Average Value-at-Risk. Our theoretical and algorithmic constructions exploit interesting connections between the study of risk measures and the theory of submodularity and lattice programming, which may be of independent interest.
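For concreteness, the Average Value-at-Risk of a loss $Z$ at level $\alpha \in (0,1]$ admits the Rockafellar–Uryasev variational form $\mathrm{AVaR}_\alpha(Z) = \min_t \{\, t + \mathbb{E}[(Z-t)_+]/\alpha \,\}$, which can be evaluated directly on a finite sample. A minimal sketch (the function name is illustrative, not from the paper):

```python
def avar(losses, alpha):
    """Average Value-at-Risk (a.k.a. CVaR) of an equally weighted sample of losses.

    Uses the Rockafellar-Uryasev variational form
        AVaR_alpha(Z) = min_t { t + E[(Z - t)_+] / alpha }.
    The objective is piecewise-linear and convex in t, with kinks only at
    the sample points, so scanning the samples finds the exact minimum.
    """
    n = len(losses)
    best = float("inf")
    for t in losses:
        val = t + sum(max(z - t, 0.0) for z in losses) / (alpha * n)
        best = min(best, val)
    return best

# For an equally weighted sample, AVaR at level alpha equals the mean of
# the worst alpha-fraction of losses:
# avar([1.0, 2.0, 3.0, 4.0], alpha=0.5) -> 3.5  (mean of {3, 4})
```

At $\alpha = 1$ the measure reduces to the plain expectation, and as $\alpha \downarrow 0$ it approaches the worst-case loss.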

This is joint work with Pu Huang, Marek Petrik and Dharmashankar Subramanian.

Title: Multi-Period Production Planning Under Non-Compliance Risk

This paper addresses a production planning setting for pharmaceutical companies under the risk of failing quality inspections that are undertaken by the regulatory authorities to ensure good manufacturing practices. We consider a staged decision model in which the regulatory authority is treated as an adversary with a limited inspection budget, and the chosen inspections themselves have uncertain outcomes, depending on the set of products at a given production site. After reviewing the single-period problem of maximizing the worst-case revenue and the worst-case expectation, we discuss and compare different stochastic programming and dynamic programming formulations for the multi-period version. In particular, we highlight the problem of handling decision-dependent probabilities, which requires solving a mixed-integer nonlinear program (MINLP) for each state in the multi-period dynamic program.

This is joint work with Alwin Haensel, Ban Kawas, Eleni Pratsini, Steve Prestwich and Catalin-Stefan Tiseanu.

Title: An Integrated Framework for Stochastic Programming, Dynamic Programming, Stochastic Search and Simulation-Optimization

Stochastic programming, dynamic programming, stochastic search and simulation-optimization have evolved to a large degree in separate communities (with some overlap), characterized by cosmetic differences in notation and problem representation, and more substantive differences in applications. In this talk, I will present a perspective on policies that brings these fields under a single umbrella, exposing common algorithmic strategies that cut across fields. At the same time, I will describe how differences in problem characteristics lead to fundamentally different algorithmic strategies. Key issues are the dimensionality of the decision variable and the complexity of the state variable. I will describe why problems with discrete actions or low-dimensional vectors can be "hard" while very high-dimensional decision vectors can be "easy." I will describe the exploration vs. exploitation problem, when it is important, and the challenges that arise in high-dimensional problems. Finally, I will close by discussing how advanced machine learning methods can help bridge the gap between dynamic programming and scenario trees.
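One schematic way to phrase the single umbrella over policies (generic notation; a sketch, not the speaker's exact formulation): each of these fields searches for a policy $X^{\pi}$ maximizing cumulative contribution,

```latex
\max_{\pi} \ \mathbb{E}\Big[\sum_{t=0}^{T} C\big(S_t, X^{\pi}(S_t)\big)\Big],
\qquad S_{t+1} = S^{M}\big(S_t, X^{\pi}(S_t), W_{t+1}\big),
```

with the fields differing mainly in how the state $S_t$ is modeled and how the class of policies is searched.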

Title: Multistage stochastic optimization with extended polyhedral risk measures

Extended polyhedral risk measures (EPRMs) are introduced and discussed. Examples of EPRMs are provided for the one-period and multi-period situations. It is also argued that they are time consistent and information monotone. It is shown that EPRMs represent a suitable concept to be incorporated into (linear) multistage stochastic optimization models: after their incorporation, the linearity of the optimization model, its stability properties, and its decomposition structures are preserved. Finally, we show that the risk-averse dynamic programming equations may be used within sampling-based decomposition methods.

Title: Dynamic Risk-Averse Optimization

We present the concept of a dynamic risk measure and discuss its important properties. In particular, we focus on time-consistency of risk measures. Next, we focus on dynamic optimization problems for Markov models. We introduce the concept of a Markov risk measure and we use it to formulate risk-averse control problems for two Markov decision models: a finite horizon model and a discounted infinite horizon model. For both models we derive risk-averse dynamic programming equations and a value iteration method. For the infinite horizon problem we also develop a risk-averse policy iteration method and we prove its convergence. We propose a version of the Newton method to solve a non-smooth equation arising in the policy iteration method and we prove its global convergence. Finally, we discuss relations to Markov games.
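For the discounted infinite-horizon model, the risk-averse dynamic programming equation replaces the conditional expectation in the classical Bellman equation with a one-step conditional risk mapping $\sigma$ (schematic form):

```latex
v(x) \;=\; \min_{a \in A(x)} \Big\{ c(x,a) + \gamma\, \sigma\big(v,\, x, a\big) \Big\},
\qquad \gamma \in (0,1),
```

and value iteration applies this operator repeatedly; the monotonicity and translation properties of coherent risk mappings make the operator a contraction, which underlies the convergence results.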

Title: Risk averse dynamic optimization

In recent years a theory of so-called coherent risk measures has emerged. Its application to static (two-stage) stochastic programming is well established and generally accepted by the scientific community. The multistage case, on the other hand, is much more delicate. In this talk we discuss various approaches to multistage risk-averse stochastic programming. In particular, we discuss the concept of time consistency and the Stochastic Dual Dynamic Programming (SDDP) approach to solving multistage risk-averse stochastic programs.


Document last modified on May 2, 2011.