DIMACS Workshop on Evidence-based Policy Making

December 2 - 3, 2010
LAMSADE, Université Paris-Dauphine, France

Organizers:
Fred Roberts, Rutgers University, froberts at dimacs.rutgers.edu
Alexis Tsoukiàs, LAMSADE, tsoukias at lamsade.dauphine.fr
Presented under the auspices of the Special Focus on Algorithmic Decision Theory.

This workshop deals to a large extent with the use of expert opinion. Too often, governmental policy making disregards data and theory about the issues involved and relies instead on the intuition (or political philosophy) of a policy maker. Better use of evidence in policy and practice can help save lives; reduce poverty, pollution, and poor health; and improve the allocation of scarce resources.

This workshop will deal with "evidence-based policy making," an idea that took hold in the UK a number of years ago and has since produced a substantial literature. That literature, however, has tended to disregard critical algorithmic decision theory (ADT) challenges surrounding the management of very large databases, data mining and knowledge extraction, the construction of indicators, the development of meaningful comparisons, and structured argumentation and consensus building. The concept of evidence used here depends on expert opinion and the gathering of statistics, but goes further, to the more complex issue of constructing the "reasons" for or against a particular policy. Such reasons can come from research findings, photographs, text archives, observer accounts, sensor data, or even personal experience. Evidence can be quantitative or qualitative, personal or societal.

The challenges ADT faces in providing a basis for evidence-based decision making cannot be met simply by extending existing decision-analytic tools to policy making. This workshop will investigate several of these challenges. For instance, how do we combine scores or indicators on different criteria or from different experts to obtain average scores that can be compared in a meaningful way? This question is widely studied in the theory of measurement, where a key idea is the notion of a meaningful statement: under different ways of combining scores, it may or may not be meaningful to compare the combined scores. The challenge is to develop similar results when scores must be combined rapidly, when the actual scores are uncertain, and when the number of scores to be combined is large.
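To make the meaningfulness issue concrete, here is a minimal Python sketch; the scores and scaling factors are invented for illustration. If each expert scores the alternatives on an independent ratio scale, then multiplying any one expert's scores by a positive constant is an admissible transformation. A comparison of arithmetic means can reverse under such a rescaling, while a comparison of geometric means cannot:

    # Two alternatives A and B, scored by three experts, each on an
    # independent ratio scale (hypothetical numbers).
    A = [50, 40, 30]
    B = [20, 30, 60]

    def amean(xs):
        return sum(xs) / len(xs)

    def gmean(xs):
        p = 1.0
        for x in xs:
            p *= x
        return p ** (1.0 / len(xs))

    # Admissible transformation of independent ratio scales: multiply
    # each expert's scores by a positive constant of that expert's own.
    alphas = [1.0, 1.0, 5.0]
    A2 = [a * x for a, x in zip(alphas, A)]
    B2 = [a * x for a, x in zip(alphas, B)]

    print(amean(A) > amean(B), amean(A2) > amean(B2))  # True False: the comparison flips
    print(gmean(A) > gmean(B), gmean(A2) > gmean(B2))  # True True: the comparison is invariant

The geometric-mean comparison is invariant because the scaling constants factor out of the product, whereas they do not factor out of the sum; so the statement "A's average score exceeds B's" is meaningful under geometric averaging but not under arithmetic averaging on independent ratio scales.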

Another challenge is to construct decision analysis tools that enable shared, collective problem formulation; that integrate decision support with justifications, explanations, and dynamic revision; and that incorporate contributions from artificial intelligence, accounting for new forms of uncertainty and using new data structures to improve algorithmic efficiency. Help may come from new methods of social choice and negotiation theory for dynamic consensus, from extending the notion of consensus to information interpretation and participation rules, and from extending the use of power indices as general information tools for public decision making.
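As a small, hedged illustration of a power index (not a method prescribed by the workshop), the sketch below computes the normalized Banzhaf index of a weighted voting game by enumerating all coalitions and counting how often each voter is critical to a winning one; the weights and quota are hypothetical:

    from itertools import combinations

    def banzhaf(weights, quota):
        """Normalized Banzhaf index: each voter's share of the occasions
        on which it is critical to a winning coalition."""
        n = len(weights)
        counts = [0] * n
        for r in range(n + 1):
            for coalition in combinations(range(n), r):
                total = sum(weights[i] for i in coalition)
                if total >= quota:
                    for i in coalition:
                        if total - weights[i] < quota:  # voter i is critical
                            counts[i] += 1
        s = sum(counts)
        return [c / s for c in counts]

    # Hypothetical voting body with weights 4, 3, 2, 1 and quota 6.
    print(banzhaf([4, 3, 2, 1], 6))  # [5/12, 3/12, 3/12, 1/12]

Brute-force enumeration is exponential in the number of voters, so using such indices as information tools for large public decision bodies would require generating-function or sampling-based approximations.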

A third challenge is to learn from data sets that are sparse or massive, qualitative, and incomplete, and from social networks. This calls for new machine learning and knowledge extraction tools that adapt dynamically.

