DIMACS Workshop on Polyhedral Combinatorics of Random Utility

May 24 - 26, 2006
DIMACS Center, CoRE Building, Rutgers University

Jean-Paul Doignon, Univ. Libre de Bruxelles, doignon@ulb.ac.be
Aleksandar Pekec, Fuqua School of Business, Duke University, pekec@duke.edu
Presented under the auspices of the Special Focus on Computation and the Socio-Economic Sciences.

This workshop is dedicated to the memory of Yehuda Vardi, Professor of Statistics, Rutgers University.


Barry Balof, Whitman College

Title: Vertices and Facets of the Semiorder Polytope

A representation of a semiorder (X, P) on n elements assigns to each element x of X a function value f(x), together with a threshold length r. Two elements are incomparable if and only if their function values are within r of each other. The set of all representations of a semiorder on n elements can be viewed as a polytope in n-dimensional space. In 1990, Pirlot proved the existence of a representation that is minimal in the sense that all function values and the interval length are as small as possible. Such a representation corresponds to one vertex of the semiorder polytope, but it need not be the only vertex. In this talk, we look at some properties of the vertices and facets of this polytope, and of certain graphs one can associate with the semiorder.
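The representation condition described above can be illustrated in a few lines of code. This is a minimal sketch, not taken from the talk; the set X, the values f, and the threshold r are hypothetical examples.

```python
# Hypothetical illustration: a numerical representation (f, r) of a
# semiorder, where two elements are incomparable iff their values are
# within the threshold r of each other.

def incomparable(f, r, x, y):
    """x and y are incomparable iff their function values lie within r."""
    return abs(f[x] - f[y]) <= r

# Hypothetical representation on X = {a, b, c}
f = {"a": 0, "b": 1, "c": 3}
r = 1  # threshold (interval length)

print(incomparable(f, r, "a", "b"))  # True: values within r, incomparable
print(incomparable(f, r, "a", "c"))  # False: a is more than r below c
```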


Darius Braziunas and Craig Boutilier, University of Toronto

Title: Bayesian Elicitation of Structured Multiattribute Utility Models

Structured utility models are essential for the effective representation and elicitation of complex multiattribute utility functions. Fishburn's generalized additive independence (GAI) (or interdependent value additive) models provide an attractive structural model of user preferences, offering a balanced tradeoff between simplicity and applicability. GAI models generalize the more widely used linear/utility independent models in important ways. While representation and inference with such models are reasonably well understood, elicitation of the parameters of such models has received less attention.

We propose a procedure to elicit GAI model parameters using only "local" utility queries rather than "global" queries over full outcomes. Our local queries take full advantage of GAI structure and provide a sound framework for extending the elicitation procedure to settings where the uncertainty over utility parameters is represented probabilistically. Query responses impose linear constraints on the space of feasible parameters of the GAI model, and we assume a prior over the parameters of the model, quantifying the probability of any utility function within the resulting (parameter) polytope. Expected decision quality w.r.t. this density over utility functions and expected value of information are used to determine: appropriate queries; when to terminate elicitation; and which decision to recommend. Our procedure also exploits constraints on the feasible decision space to focus attention only on the "relevant" aspects of decision space.

Clint Davis-Stober, University of Illinois at Urbana-Champaign

Title: Frequentist Axiom Testing: Analysis of Choice Data Under Convex Polytope Constraints

There is a long-standing disconnect between algebraic axioms of preference and the stochastic data collected to verify them (Luce & Narens, 1994). Frequently, a probabilistic reformulation of these axioms results in a union of convex polytopes defined by binary choice. The geometric structure of such reformulated axioms and their relationships to operations research have given birth to a growing literature (Fiorini & Fishburn, 2003; Zhang, 2004; Fishburn, 1999). However, numerous statistical difficulties have plagued the empirical analysis of such reformulated axioms (Iverson & Falmagne, 1985; Iverson, 1991). This paper offers a general method for statistically analyzing properly defined measurement axioms. The author presents techniques for maximum likelihood estimation as well as the derivation of the asymptotic distribution of likelihood ratio tests over a general class of inequality constraints defined by convex polytopes. This paper generalizes Iverson and Falmagne's (1985) seminal work on the statistical analysis of measurement axioms.

Jean-Paul Doignon, Universite Libre de Bruxelles

Title: Graphical Inequalities for the Linear Ordering Polytope

Mathieu Koppen found a remarkable way of producing a facet of the linear ordering polytope (also called binary choice polytope) from any stability-critical graph. Recall that a graph is stability critical when its stability number increases whenever two adjacent vertices are made nonadjacent. We generalize Koppen's construction to weighted graphs. As a result, we obtain new facets of the polytope. At the same time, a weighted generalization of stability-critical graphs appears. Because of some duality among the resulting graphs, it happens that most stability-critical graphs deliver not one, but two facets of the polytope. This is joint work with Samuel Fiorini and Gwenael Joret.
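Stability-critical graphs, on which Koppen's construction rests, are easy to check by brute force on small instances. The following sketch is an illustration of the definition only (it is mine, not from the talk): a graph is stability-critical when deleting any edge increases its stability number, and the 5-cycle is a classic example.

```python
# Brute-force check of stability-criticality (illustration only):
# deleting any single edge must increase the stability number.
from itertools import combinations

def alpha(n, edges):
    """Stability (independence) number, by brute force over vertex subsets."""
    edge_set = set(map(frozenset, edges))
    for k in range(n, 0, -1):
        for S in combinations(range(n), k):
            if all(frozenset(p) not in edge_set for p in combinations(S, 2)):
                return k
    return 0

def is_stability_critical(n, edges):
    a = alpha(n, edges)
    return all(alpha(n, [f for f in edges if f != e]) > a for e in edges)

c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(alpha(5, c5))                   # 2
print(is_stability_critical(5, c5))   # True: C5 is stability-critical
```

The 4-cycle, by contrast, is not stability-critical: removing an edge leaves the stability number at 2.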

Jean-Claude Falmagne, University of California, Irvine

Title: Mediatic Graphs

It is well-known that any medium corresponds to an isometric subgraph of the hypercube, and vice versa. Such a characterization, although useful, is not especially revealing of the structure of a particular medium. We propose an axiomatic definition of the concept of a `mediatic graph'. The graph of any medium is a mediatic graph. We also show that, for any (not necessarily finite) set, there exists a bijection between the collection of all the media on that set and the collection of all the mediatic graphs on the same set. Moreover, two media are isomorphic if and only if their representing graphs are also isomorphic.


Samuel Fiorini, Universite Libre de Bruxelles

Title: Surfaces that Define Facets of the Linear Ordering Polytope

The Moebius ladder inequalities play a central role in the study of the linear ordering polytope. Their name is reminiscent of the surface known as the Moebius band. This is not a mere coincidence. There is a natural connection between surfaces and the facets of the linear ordering polytope. Namely, to any (not necessarily coherent) orientation of a triangulated surface one can associate a valid inequality. We prove that no (coherently) orientable surface can give rise to a facet, whereas all nonorientable surfaces admit a facet-defining oriented triangulation.

S. Ghosh and Jayant R. Kalagnanam, IBM T.J. Watson Research Center

Title: Polyhedral Sampling for Multiattribute Preference Elicitation

Multiattribute auctions are now routinely used for B2B negotiations in electronic procurement. A basic requirement for running multiattribute auctions is a utility function of the buyer that trades off nonprice attributes against price. Such a utility function can then be used to design a (strategic) scoring function that communicates to the suppliers how the buyer will evaluate multiattribute bids. One approach to eliciting the preference structure is to model it as a polyhedron in the space of the parameters and to design pairwise comparisons that quickly narrow down the feasible region. In order to perform this in real time, we need efficient techniques for estimating the centroid and a cut that is perpendicular to the "longest axis" of the polyhedron, thereby minimizing the feasible region for the parameters. In this paper, we present the use of a "hit-and-run" algorithm to estimate the centroid and to produce a cut that approximately minimizes the volume of the feasible polyhedron. The advantage of this technique is its relative simplicity: it relies only on matrix algebra and avoids the use of nonlinear optimization techniques. Computational results suggest that this method is fast and accurate.
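The hit-and-run technique referred to above can be sketched in a few lines. This is a minimal generic sampler for a polytope {x : Ax <= b}, not the paper's actual cut-selection procedure; the example polytope (a unit square) and all parameter choices are illustrative assumptions.

```python
# Generic hit-and-run sketch (illustration of the technique, not the
# paper's implementation): sample approximately uniformly from
# {x : Ax <= b}, and estimate the centroid as the sample mean.
import math
import random

def hit_and_run(A, b, x0, steps=2000, seed=0):
    rng = random.Random(seed)
    x = list(x0)
    n = len(x0)
    samples = []
    for _ in range(steps):
        # Pick a uniformly random direction on the unit sphere.
        d = [rng.gauss(0, 1) for _ in range(n)]
        norm = math.sqrt(sum(v * v for v in d))
        d = [v / norm for v in d]
        # Intersect the line {x + t d} with each halfspace a.x <= b_i
        # to find the feasible chord [lo, hi].
        lo, hi = -1e18, 1e18
        for row, bi in zip(A, b):
            ad = sum(r * v for r, v in zip(row, d))
            ax = sum(r * v for r, v in zip(row, x))
            if abs(ad) < 1e-12:
                continue
            t = (bi - ax) / ad
            if ad > 0:
                hi = min(hi, t)
            else:
                lo = max(lo, t)
        # Jump to a uniform point on the chord.
        t = rng.uniform(lo, hi)
        x = [xi + t * di for xi, di in zip(x, d)]
        samples.append(x)
    # Centroid estimate: coordinate-wise mean of the samples.
    return [sum(s[i] for s in samples) / len(samples) for i in range(n)]

# Example polytope: the unit square [0,1]^2.
A = [[1, 0], [0, 1], [-1, 0], [0, -1]]
b = [1, 1, 0, 0]
print(hit_and_run(A, b, [0.5, 0.5]))  # approximately [0.5, 0.5]
```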

Nathanael Hyafil and Craig Boutilier, University of Toronto

Title: Regret-based Partial Revelation Mechanisms

Classic direct mechanisms suffer from the drawback of requiring full type revelation from participating agents (generally, requiring agents to reveal their full utility function to the mechanism). In complex settings with multiattribute utility, assessing utility functions (or even individual parameters of these utility functions) can be very difficult for agents (human or otherwise). As a result, the question of how best to elicit relevant utility information and make decisions with partial utility information in mechanism design is paramount. Most recent work on incremental or partial revelation in mechanism design (for example, in combinatorial auctions) assumes that enough information is elicited to determine optimal social outcomes and VCG payments, thus alleviating incentive issues (ex post). However, even this amount of information can be too demanding. In this work we propose a framework for both one-shot and incremental, partial revelation mechanisms and study the use of minimax regret as an optimization criterion for allocation determination with type uncertainty. We examine the incentive properties of incremental mechanisms under various payment rules and show that bounds on efficiency, incentive compatibility and individual rationality can be derived as a function of minimax regret. This gives us an adaptive elicitation procedure in which loss in efficiency and the degree of manipulability can be controlled.

We illustrate this procedure by using queries in generalized additive independence models whose responses place linear constraints on utility parameters, showing how minimax regret can be computed over the resulting polytope. We then propose specific query strategies for reducing the size of this polytope in "relevant dimensions"--exploiting constraints on the set of feasible decisions--to quickly reduce both loss in efficiency and manipulability of the mechanism with as few queries as possible.

Gwenaël Joret, Universite Libre de Bruxelles

Title: On weighted graphs yielding facets of the linear ordering polytope

The linear ordering polytope, also known as the binary choice polytope, has been the subject of numerous works since its introduction in the 1960s. The main problem is to find new families of facet-defining inequalities, and nowadays a fair number of such families are known. Among them lie the graphical inequalities, introduced recently by Doignon, Fiorini and the speaker. These graphical inequalities generalize previously known inequalities and are built from a specific class of vertex-weighted graphs called facet-defining graphs. Koppen proved that the latter graphs include the well-studied stability-critical graphs as a special case. A nice classification theorem for stability-critical graphs was given by Lovasz, and it is tempting to believe that there is a natural extension of this classification which handles facet-defining graphs. In this talk we report partial results on this problem.

Tony Marley, University of Victoria and J.J. Louviere, University of Technology, Sydney

Title: An Open Problem: Characterizing Random Utility Models of Best-Worst Choice

Over the past decade or so, a choice design in which a person is asked to select both the best and the worst option in an available set of options has been gaining favor over more traditional designs in which the person is asked, for instance, to: select the best option; select the worst option; rank the options; or rate the options. Marley and Louviere (2005) present various probabilistic models of such best-worst choice and pose the open problem of characterizing random utility models of (probabilistic) best-worst choice. We will formulate this open problem and present partial results. If time allows, we will also present preliminary results on the use of best-worst as a voting procedure.


Michel Regenwetter, University of Illinois at Urbana-Champaign

Title: Measurement/Decision Theory Axioms, Mixture Models and Convex Polytopes

Axioms of decision theory and axioms of measurement are typically cast in algebraic terms. Empirical tests of such axioms are usually, by design, probabilistic, as they use random sampling. On a finite set of objects, the collection of all relational structures satisfying a given axiom (or combination of axioms) is finite. This collection can serve as vertices of a convex polytope in a suitably chosen space. The convex polytope represents the collection of all convex combinations among the vertices, thus the collection of all possible probability distributions over the set of structures represented by the vertices. This is called a mixture model and it naturally extends a deterministic model into a probabilistic one. An empirical sample of data violates the mixture model if no point in the convex hull could plausibly be the probability distribution that generated the observed data. In particular, such (model violating) data must lie outside the convex polytope. I will illustrate this approach using transitivity of preference and Amos Tversky's famous (1969) paper on intransitivity of preference.
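For three alternatives, the mixture-model test described above reduces to a well-known finite check: the binary choice probabilities induced by a distribution over linear orders are exactly those satisfying the triangle inequalities. The sketch below is my illustration of this special case, not code from the talk.

```python
# Illustration: for three alternatives, mixtures of linear orders satisfy
# the triangle inequalities 0 <= p(a,b) + p(b,c) - p(a,c) <= 1 for every
# labeling (a, b, c), while intransitive "cyclic" points violate them.
import random
from itertools import permutations

ITEMS = "abc"

def choice_probs(weights):
    """Binary choice probabilities induced by a distribution over rankings."""
    p = {}
    for x in ITEMS:
        for y in ITEMS:
            if x != y:
                p[x, y] = sum(w for order, w in weights.items()
                              if order.index(x) < order.index(y))
    return p

def satisfies_triangle(p):
    return all(0 <= p[a, b] + p[b, c] - p[a, c] <= 1
               for a, b, c in permutations(ITEMS))

# A random mixture over the six linear orders lies inside the polytope...
rng = random.Random(1)
w = {o: rng.random() for o in permutations(ITEMS)}
total = sum(w.values())
w = {o: v / total for o, v in w.items()}
print(satisfies_triangle(choice_probs(w)))  # True

# ...while a cyclic point, p(a,b) = p(b,c) = p(c,a) = 1, lies outside.
cyclic = {("a", "b"): 1, ("b", "a"): 0, ("b", "c"): 1, ("c", "b"): 0,
          ("c", "a"): 1, ("a", "c"): 0}
print(satisfies_triangle(cyclic))  # False
```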

Eva Schuberth, ETH Zurich

Title: Approximate Sorting

In market research, especially in conjoint analysis, one wants to find out a respondent's utility function over a finite set of products. In order to simulate real buying situations, the respondent is presented with pairs of products, from which he has to choose the one he prefers, i.e., he has to perform paired comparisons. As it is not obvious how to deduce a metric utility function from non-metric information, we restrict ourselves to eliciting the respondent's ranking of the products with respect to his preferences. The information-theoretic lower bound on sorting states that, in general, \Omega(n log n) comparisons are needed to deduce the ranking. Even for only moderately large n, this can easily be too much, since respondents often get worn out after a certain number of questions and no longer answer further questions faithfully. On the other hand, it might be enough to know the respondent's ranking approximately.

Here we pursue the question of how many comparisons are necessary and sufficient in order to approximately rank n items. We show that any comparison-based, randomized algorithm to approximate any given ranking of n items within expected Spearman's footrule distance n^2/r needs at least n(min{log r, log n} - 6) comparisons in the worst case. This bound is tight up to a constant factor, since there exists a deterministic algorithm showing that 6n log(r) comparisons are always sufficient.
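Spearman's footrule distance, the approximation measure used above, is simply the sum over all items of the displacement between their positions in the two rankings. A short sketch of this standard definition (the example rankings are mine):

```python
# Spearman's footrule distance between two rankings: the sum over items
# of the absolute difference of their positions.
def footrule(sigma, tau):
    """Sum over items of |position in sigma - position in tau|."""
    pos_tau = {item: i for i, item in enumerate(tau)}
    return sum(abs(i - pos_tau[item]) for i, item in enumerate(sigma))

print(footrule([1, 2, 3, 4], [1, 2, 3, 4]))  # 0: identical rankings
print(footrule([1, 2, 3, 4], [4, 3, 2, 1]))  # 8: the maximum for n = 4
```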

Andreas S. Schulz, Massachusetts Institute of Technology, Sloan School of Management

Title: The interval order polytope of a digraph

We study the polytope that arises as the convex hull of the incidence vectors of all interval orders contained in a given digraph, the so-called interval order polytope. The linear ordering polytope is a face of the interval order polytope on a complete digraph, and inequalities valid for the full-dimensional interval order polytope can be used to derive and shed new light on many classes of facet-inducing inequalities for the linear ordering polytope. While optimizing a linear objective function over the interval order polytope is NP-hard in general, we present a complete linear description of the anti-dominant of the interval order polytope of a series-parallel poset.

Reinhard Suck, University of Osnabrück, FRG

Title: Block-Marschak type conditions for incomplete choice probabilities

We investigate in which cases conditions of the Block-Marschak type for choice probabilities can be used for incomplete data sets (such as binary choice). The suborders of the power set of the choice alternatives for which choice data are available are described by their Möbius functions. The maximal chains of such an order are shown to be important. The conditions under which this technique yields necessary (or perhaps even sufficient) inequalities are investigated, and the potential to characterize interesting polytopes is explored.

Jun Zhang, University of Michigan, Ann Arbor

Title: Borda Scores and Aggregation of Preference: A Geometric-Combinatoric and a Topological Approach

Borda scores are defined over ranking probabilities ("voter profiles"), that is, probability distributions over all rankings of a set of candidates. It will first be shown that the space of Borda scores forms a convex polytope called the "permutahedron". Investigating the various ways a permutahedron as a geometric-combinatoric object may arise allows us to appropriately define Borda scores for binary choice probabilities, for subset choice probabilities, and for rank-position probabilities. A topological approach is then adopted to study aggregation of preference ("social choice") when each voter's preference is given by a utility vector with components defined on an interval scale. It turns out that any well-behaved (continuous, anonymous, and respecting unanimity) aggregation map has to allow for a null outcome in the social choice while not allowing for null preference by an individual voter. This result complements the impossibility theorems of Arrow (1963) and Chichilnisky (1980).
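The permutahedron claim can be seen concretely: the Borda score vector of a ranking distribution is a convex combination of permutations of (0, 1, ..., n-1), so in particular the scores always sum to n(n-1)/2. A small illustration of this (mine, not from the talk), for three hypothetical candidates:

```python
# Illustration: Borda scores computed from a ranking probability
# distribution.  The score vector is a convex combination of the
# permutations of (0, 1, ..., n-1), hence lies in the permutahedron;
# in particular the scores sum to n(n-1)/2.
import random
from itertools import permutations

CANDIDATES = "abc"

def borda(dist):
    """Expected number of candidates ranked below each candidate."""
    n = len(CANDIDATES)
    return {x: sum(w * (n - 1 - order.index(x)) for order, w in dist.items())
            for x in CANDIDATES}

# A random ranking probability ("voter profile") over the six rankings.
rng = random.Random(0)
dist = {o: rng.random() for o in permutations(CANDIDATES)}
total = sum(dist.values())
dist = {o: w / total for o, w in dist.items()}

scores = borda(dist)
print(round(sum(scores.values()), 10))  # 3.0 = n(n-1)/2 for n = 3
```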

Document last modified on April 19, 2006.