DIMACS Mini-Workshop on Mathematical and Computational Approaches to Elections

June 18, 1999

Steven Brams, New York University

Voting Models: Explaining Practice and Recommending Reform

Three topics will be discussed:

1. Resource-allocation models in US presidential campaigns. They predict that presidential candidates will spend disproportionately in large states because of the winner-take-all effect of the Electoral College. Does this happen in practice? Would abolition of the Electoral College better support the principle of one-person, one-vote?

2. Approval voting violates the one-person, one-vote principle, yet it has been recommended as an election reform in multi-candidate elections. How has approval voting worked in the professional societies that have adopted it? Do voters behave in accordance with the predictions of approval-voting models (e.g., to vote for about half the candidates), and does it tend to elect Condorcet candidates (i.e., those who can beat all others in pairwise contests)?

3. A new "paradox of multiple elections" shows that the winning combination in a referendum with multiple propositions, such as yes-yes-no in the case of three propositions, may receive fewer votes than any of the other combinations (seven in the case of three propositions). This paradox generalizes the Condorcet paradox, in which majorities cycle, and is illustrated by actual voting on 28 propositions in the 1990 California general election.
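The questions in item 2 can be made concrete with a small sketch. The ballots below are invented (each voter approves of one or two of the three candidates, roughly half, in line with the model's prediction); the two functions tally the approval winner and test for a Condorcet candidate:

```python
# Hypothetical data: each voter has a strict ranking (best first) and an
# approval ballot (the set of candidates the voter approves of).
rankings = [
    ["A", "B", "C"], ["A", "B", "C"],
    ["B", "C", "A"], ["B", "A", "C"],
    ["C", "B", "A"],
]
approvals = [{"A"}, {"A", "B"}, {"B"}, {"B", "A"}, {"C", "B"}]
candidates = {"A", "B", "C"}

def approval_winner(ballots):
    """Candidate approved of by the most voters."""
    tally = {c: sum(c in b for b in ballots) for c in candidates}
    return max(tally, key=tally.get)

def condorcet_winner(rankings):
    """Candidate who beats every other in pairwise majority contests, if any."""
    for c in candidates:
        if all(
            sum(r.index(c) < r.index(d) for r in rankings) > len(rankings) / 2
            for d in candidates - {c}
        ):
            return c
    return None  # no such candidate: majorities may cycle

print(approval_winner(approvals))   # B
print(condorcet_winner(rankings))   # B
```

In this toy profile the approval winner and the Condorcet candidate coincide; the empirical question in the abstract is how often that happens in real elections.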
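The multiple-elections paradox in item 3 can be checked mechanically. The 13-voter profile below is a hypothetical instance (not the California data): deciding each proposition separately by majority yields no-no-no, a combination that no voter cast, even though every other combination received at least one vote.

```python
from itertools import product

# Votes per combination of (prop1, prop2, prop3); True = yes.
# A hypothetical 13-voter profile exhibiting the paradox.
profile = {
    (True, True, True): 1,
    (True, True, False): 1,
    (True, False, True): 1,
    (False, True, True): 1,
    (True, False, False): 3,
    (False, True, False): 3,
    (False, False, True): 3,
    (False, False, False): 0,
}

def propositionwise_winner(profile):
    """Decide each proposition separately by majority of yes votes."""
    total = sum(profile.values())
    outcome = []
    for i in range(3):
        yes = sum(n for combo, n in profile.items() if combo[i])
        outcome.append(yes > total / 2)
    return tuple(outcome)

winner = propositionwise_winner(profile)
print(winner)           # (False, False, False): "no" carries each proposition
print(profile[winner])  # 0 voters cast exactly this combination
assert all(profile[c] > profile[winner]
           for c in product([True, False], repeat=3) if c != winner)
```

Each proposition gets 6 of 13 yes votes, so all three fail, yet the resulting combination is the only one with zero supporters.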

Peter Fishburn, AT&T Labs - Research

Electoral Axiomatics and Combinatorics

Elections provide a rich field for applications of discrete mathematics because they typically involve finite numbers of voters and candidates in a variety of configurations. We discuss axioms for impossibility and possibility theorems whose analyses feature discrete methods. We then turn to highly combinatorial problems that focus on simple majority cycles and related phenomena.

Michel Regenwetter, Duke University

Probabilistic Models of Social Choice and Preference

Much work on voting and social choice assumes an impartial culture or allows for all preference profiles. In practice, however, individual preferences may be highly correlated.

To evaluate a voting scheme empirically, we need to be able to 1) infer the distribution of preferences in a population from a sample, 2) compute statistics of the inferred distribution that capture relevant normative criteria, and 3) use that information to decide whether a given election satisfied or violated the normative benchmark. Probabilistic choice or ranking models offer a powerful tool to measure, quantify, model and predict social choice behavior.
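The three steps can be sketched minimally, using an invented sample of rankings and the Condorcet criterion as the normative benchmark (the models and data sets in the talk are far richer):

```python
from collections import Counter

# Assumed data: a sample of strict rankings (best first) from a population.
sample = [
    ("A", "B", "C"), ("A", "B", "C"), ("A", "C", "B"),
    ("B", "A", "C"), ("B", "C", "A"), ("C", "B", "A"),
    ("A", "B", "C"), ("B", "A", "C"), ("A", "C", "B"),
]

# 1) Infer the distribution of preferences from the sample.
dist = {r: n / len(sample) for r, n in Counter(sample).items()}

# 2) Compute a statistic capturing a normative criterion: here, the
#    inferred majority margin of each candidate over each other.
def margin(x, y):
    return sum(p for r, p in dist.items() if r.index(x) < r.index(y)) - \
           sum(p for r, p in dist.items() if r.index(y) < r.index(x))

# 3) Decide whether the benchmark is met: does a Condorcet candidate exist?
candidates = {"A", "B", "C"}
condorcet = [c for c in candidates
             if all(margin(c, d) > 0 for d in candidates - {c})]
print(condorcet)  # ['A']
```

Here the empirical distribution is the crudest possible inference step; the probabilistic ranking models in the talk replace it with structured models fitted to data.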

I present various preference and choice models that blend probabilistic modeling with ordinal utility theory, and I apply them to Approval Voting as well as National Election Study (NES) data. Some of these models exploit interesting combinatorial properties of ordinal preference structures.

Fred S. Roberts, DIMACS, Rutgers University

On the Median Procedure

Elections are examples of group decision-making situations. Two of the most widely studied group decision-making procedures are the median procedure, which chooses all those alternatives that minimize the sum of the distances to a given set of alternatives, and the mean procedure, which chooses all those alternatives that minimize the sum of the squares of the distances. This talk will explore the use of the median and mean in problems arising in location theory, graph theory, and molecular biology.
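The two procedures can be sketched on the simplest metric space, integer points on a line (the candidate set and profile below are invented):

```python
# Minimal sketch of the median and mean procedures on a finite metric
# space: integer points on a line with the usual distance.
points = range(0, 11)          # candidate locations
profile = [1, 2, 2, 9]         # the given set of alternatives (with repeats)

def median_procedure(candidates, profile):
    """Alternatives minimizing the sum of distances to the profile."""
    cost = {x: sum(abs(x - p) for p in profile) for x in candidates}
    best = min(cost.values())
    return [x for x in candidates if cost[x] == best]

def mean_procedure(candidates, profile):
    """Alternatives minimizing the sum of squared distances to the profile."""
    cost = {x: sum((x - p) ** 2 for p in profile) for x in candidates}
    best = min(cost.values())
    return [x for x in candidates if cost[x] == best]

print(median_procedure(points, profile))  # [2]
print(mean_procedure(points, profile))    # [3, 4]
```

Note the outlier at 9: it pulls the mean procedure toward it but barely moves the median, and either procedure may return several tied alternatives.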

Donald Saari, Northwestern University

From Arrow's and Sen's Theorems to Resolving All Voting Paradoxes

A major portion of choice and decision theory has been dominated by Arrow's and Sen's Theorems, which, loosely speaking, assert that all procedures have serious flaws. But is this what the theorems really mean? An alternative, quite benign interpretation is presented. It is then shown how the tools used to reinterpret these classical results can be used to understand and characterize all positional voting paradoxes.

Bruno Simeone, La Sapienza University and Rutgers University

Discrete Optimization Methods for Electoral Systems

Discrete optimization provides an effective and versatile methodology for the analysis and design of electoral systems. On one hand, it offers a broad conceptual framework for the systematic study of electoral procedures. On the other, it yields a powerful toolkit for designing the components of an electoral system, such as electoral formulas and electoral districts. These two aspects are reflected in the two parts of the present talk. In the first part, we look at proportional electoral formulas as algorithms for minimizing specific disproportionality indexes. The theory of Schur-convex functions and their generalizations provides a unifying framework for these indexes and suggests that exchange algorithms are the natural setting for their minimization. Some computational complexity issues will also be discussed.
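As one illustration of a proportional formula viewed as minimizing a disproportionality index, here is a sketch of the largest-remainders (Hamilton) formula, which minimizes the total absolute deviation of seat allocations from the exact quotas; the party names and vote counts are invented:

```python
from math import floor

def largest_remainders(votes, seats):
    """Hamilton / largest-remainders apportionment: start from lower quotas,
    then hand the leftover seats to the parties with the largest fractional
    remainders.  This greedy rule minimizes the disproportionality index
    sum_i |s_i - q_i|, where q_i is party i's exact quota."""
    total = sum(votes.values())
    quotas = {p: v * seats / total for p, v in votes.items()}
    alloc = {p: floor(q) for p, q in quotas.items()}
    leftover = seats - sum(alloc.values())
    by_remainder = sorted(quotas, key=lambda p: quotas[p] - alloc[p],
                          reverse=True)
    for p in by_remainder[:leftover]:
        alloc[p] += 1
    return alloc

# Hypothetical vote counts for a 10-seat district.
votes = {"A": 47000, "B": 16000, "C": 15800, "D": 12000, "E": 6100, "F": 3100}
print(largest_remainders(votes, 10))
# {'A': 5, 'B': 2, 'C': 1, 'D': 1, 'E': 1, 'F': 0}
```

Other formulas (d'Hondt, Sainte-Laguë, etc.) correspond to minimizing different indexes, which is the unification the Schur-convexity framework makes precise.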

The second part of the talk is devoted to political districting. A variety of integer programming and combinatorial optimization models have been proposed in the literature. The graph-theoretic model we present here is based on a multi-criteria connected partitioning formulation. Different local search metaheuristics have been tested for its solution. Computational experiments on large real-life test problems indicate that Old Bachelor Acceptance often outperforms other well-known heuristics such as Tabu Search and Simulated Annealing.
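The Old Bachelor Acceptance rule mentioned above can be sketched generically: it is a threshold-acceptance scheme that becomes more demanding after each accepted move and more lenient after each rejection. The toy one-dimensional objective below merely stands in for the multi-criteria districting cost; everything here is illustrative, not the talk's implementation.

```python
import random

def cost(x):
    return (x - 7) ** 2  # toy objective with minimum at x = 7

def neighbor(x, rng):
    return x + rng.choice([-1, 1])  # toy move structure

def old_bachelor_acceptance(x0, steps=500, delta=0.5, seed=0):
    """Simplified Old Bachelor Acceptance local search (a sketch)."""
    rng = random.Random(seed)
    x, best = x0, x0
    threshold = 1.0
    for _ in range(steps):
        y = neighbor(x, rng)
        if cost(y) - cost(x) < threshold:
            x = y                   # accept the move; become pickier
            threshold -= delta
        else:
            threshold += delta      # reject; lower standards over time
        if cost(x) < cost(best):
            best = x
    return best

print(old_bachelor_acceptance(50))
```

The self-adjusting threshold is what distinguishes the rule from Simulated Annealing's fixed cooling schedule and lets it escape local optima without a tuned temperature parameter.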

Mike Trick, Carnegie Mellon University

Computational Complexity and Algorithmic Aspects in Voting Theory

Voting rules can be seen as algorithms, aggregating preference orders into a single preference order. As algorithms, we can use techniques from computational complexity and algorithm theory to show properties of such aggregating procedures. I discuss some issues of complexity in determining winners, manipulating elections (as a voter, as a chair, and as a group), and some topics in mechanism design, including determining an agenda to elect a given alternative.
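The agenda-setting question at the end can be illustrated with the smallest interesting case: under sequential pairwise majority voting, a three-voter Condorcet cycle lets the chair elect any candidate by choosing the agenda (the profile is the textbook cycle; the code is a sketch):

```python
from itertools import permutations

rankings = [  # three voters, best first: a Condorcet cycle A > B > C > A
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def beats(x, y):
    """True if a majority of voters rank x above y."""
    return sum(r.index(x) < r.index(y) for r in rankings) > len(rankings) / 2

def sequential_winner(agenda):
    """Pit alternatives against each other in agenda order; winner advances."""
    survivor = agenda[0]
    for challenger in agenda[1:]:
        survivor = challenger if beats(challenger, survivor) else survivor
    return survivor

winners = {sequential_winner(list(a)) for a in permutations("ABC")}
print(winners)  # every candidate wins under some agenda
```

Because the majority relation cycles, the alternative introduced last always survives, so a chair who controls the agenda controls the outcome; deciding whether such manipulation is feasible in general is one of the complexity questions above.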