DIMACS Workshop on The Science of Expert Opinion

October 24 - 25, 2011
DIMACS Center, CoRE Building, Rutgers University

Organizer:
Cliff Behrens, Telcordia, cliff at research.telcordia.com
Presented under the auspices of the Special Focus on Algorithmic Decision Theory.

Abstracts:

William Batchelder, University of California, Irvine

Title: Cultural Consensus Theory: Detecting Experts and their Shared Knowledge

Assume that a researcher can pose questions and obtain responses from individuals (experts) who are hypothesized to share knowledge or beliefs. Further, assume that a priori the researcher has no information about the specifics of the shared knowledge or the level of expertise of each individual. Cultural Consensus Theory (CCT) consists of cognitive models for aggregating the responses of the experts to determine whether they share a common knowledge base and, if so, to estimate the level of expertise and response bias of each expert as well as the correct (consensus) response to each question. I will illustrate these points by further developing a CCT model known as the General Condorcet Model (GCM; Batchelder & Romney, 1988, Psychometrika; Karabatsos & Batchelder, 2003, Psychometrika). In essence the model is a yes/no signal detection model, except that the 'correct answers' are latent and the hit and false alarm rates are heterogeneous over both experts and questions. Bayesian inference for the model is developed, along with a special statistic for assessing whether the experts are responding on the basis of a single latent answer key. A finite mixture GCM is presented to handle the case where there are two or more latent answer keys.
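
To make the GCM's response process concrete, here is a minimal simulation sketch in Python. The parameter values, the simple know-or-guess process, and the majority-vote estimator at the end are illustrative assumptions only; the talk develops full Bayesian inference rather than majority voting.

    import numpy as np

    rng = np.random.default_rng(0)
    n_experts, n_items = 10, 40

    # Assumed parameters: competence D_i (probability expert i knows the
    # answer), guessing bias g_i, and a latent yes/no answer key Z_k.
    D = rng.uniform(0.3, 0.9, n_experts)
    g = rng.uniform(0.2, 0.8, n_experts)
    Z = rng.integers(0, 2, n_items)

    # GCM-style response process: an expert either knows the answer (with
    # probability D_i) or guesses "yes" with probability g_i.
    know = rng.random((n_experts, n_items)) < D[:, None]
    guess = (rng.random((n_experts, n_items)) < g[:, None]).astype(int)
    X = np.where(know, Z[None, :], guess)

    # Naive consensus estimate: unweighted majority vote per question.
    Z_hat = (X.mean(axis=0) > 0.5).astype(int)
    print("fraction of answer key recovered:", (Z_hat == Z).mean())

Under this process the hit and false alarm rates differ across experts (through D_i and g_i), which is the heterogeneity the abstract refers to.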

Speaker Biosketch:

William H. Batchelder is a Professor of Cognitive Science at the University of California, Irvine (UCI). He received a PhD in psychology in 1966 from Stanford University. He was an Assistant Professor at the University of Illinois at Urbana-Champaign before joining UCI. He has held visiting professorships at the University of Wisconsin, Stanford University, the University of Groningen (Netherlands), the Santa Fe Institute, and the University of Amsterdam. He is an elected member of the Society of Experimental Psychologists, and he received the Lauds and Laurels award for Distinguished Research at UCI. He has served as President of the Society for Mathematical Psychology, as Editor-in-Chief of the Journal of Mathematical Psychology, and as an Associate Editor of the Journal of Experimental Psychology: Learning, Memory & Cognition. His research areas include mathematical modeling and measurement in the social and behavioral sciences, especially cultural anthropology, social network theory, and cognitive psychology.


Roger M. Cooke, Resources for the Future (RFF)

Title: Eating the Pudding (Keynote Talk)

An expert judgment method is not validated by a narrative. It is validated by recognizing that experts who quantify their uncertainty are statistical hypotheses; experts, and combinations of experts, should be analyzed from this point of view. When the respective hypotheses look good, based on data, and ONLY then, can we offer the results as science-based uncertainty quantification. Anything else is parochial: it may appeal to believers. This simple idea underlies the classical model, a performance-based weighting scheme that has seen many non-academic applications - real problems, real experts, real deliverables, real peer review. This talk will review the classical model and touch on some current issues with regard to validation.
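
As background, here is a minimal sketch of the calibration component of the classical model, assuming the common three-quantile (5%, 50%, 95%) elicitation format for a set of 'seed' variables with known realizations. The full model also scores informativeness and optimizes a significance cut-off for the weights, which this sketch omits.

    import numpy as np
    from scipy.stats import chi2

    def calibration_score(quantiles, realizations):
        """Calibration score for one expert.

        quantiles:    (n_seed, 3) array of the expert's 5%/50%/95% quantiles.
        realizations: (n_seed,) array of the true values of the seed variables.
        """
        p = np.array([0.05, 0.45, 0.45, 0.05])  # theoretical bin probabilities
        # Which inter-quantile bin each realization falls into (0..3).
        bins = np.array([np.searchsorted(q, r)
                         for q, r in zip(quantiles, realizations)])
        s = np.bincount(bins, minlength=4) / len(bins)  # empirical frequencies
        # Likelihood-ratio statistic 2*N*KL(s||p), asymptotically chi-square(3).
        mask = s > 0
        stat = 2 * len(bins) * np.sum(s[mask] * np.log(s[mask] / p[mask]))
        return chi2.sf(stat, df=3)  # near 1 = well calibrated, near 0 = not

A well-calibrated expert should see about 5% of realizations fall below the 5% quantile, 45% between the 5% and 50% quantiles, and so on; the score is the p-value of that statistical hypothesis.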

Speaker Biosketch:

Roger Cooke joined Resources for the Future in September 2005 as the first appointee to the Chauncey Starr Chair in Risk Analysis. Prior to joining RFF, Cooke was professor of applied decision theory in the Department of Mathematics at Delft University of Technology in The Netherlands, where he was on the faculty for more than 25 years. His work focuses on methodological issues of risk analysis, uncertainty analysis, and expert judgment. He has also worked in competing risks, the design of reliability databases, and stochastic processes. He has served on the executive board of the European Safety and Reliability Association, serves on the editorial board of Reliability Engineering and System Safety, and has sat on many technical committees on mathematical methods for reliability, probabilistic safety, and accident management; he also served as chairman of the Technical Committee on Uncertainty Modeling of the European Safety and Reliability Association. He founded a master's program in Risk and Environmental Modeling at Delft University of Technology. Cooke has published four books, edited two books, and published 92 articles in international refereed journals and 115 papers in refereed international conference proceedings and books. His book Probabilistic Risk Analysis has been translated into Japanese. Cooke has been principal investigator on many contract research projects for the Dutch government, the Japanese government, the European Union, the US Nuclear Regulatory Commission, the Swedish Nuclear Inspectorate, the German VGB centralized databank, and the Harvard Center for Risk Analysis, as well as for many companies and laboratories. He played a central role in expert elicitation, dependence modeling, and uncertainty propagation in the integrated uncertainty analysis of accident consequence models for nuclear power plants, undertaken jointly by the US Nuclear Regulatory Commission and the European Union. In 2005 Cooke won the Risk Management Oeuvre Award from the Dutch Association for Reliability. In 2006 he served on panels of the National Academy of Sciences and on NASA's Safety Study Team. In 2006-08 he led a mathematical support team on causal modeling for civil aviation and supervised the development of non-parametric continuous-discrete Bayesian belief net software. In 2008 he was contracted by the National Institute of Aerospace to use this modeling tool in analyzing the risk of new merging and spacing protocols, and he was elected a fellow of the Society for Risk Analysis. In 2010 he was named a lead author, for the chapter on risk and uncertainty, in the Fifth Assessment Report of the Intergovernmental Panel on Climate Change.


Simon French, University of Warwick

Title: Expert Judgement and Societal Decision Making in a Web-connected World

Some 25 years ago, societal risk decision making was left to politicians and regulators. The process was essentially one in which supporting analysis and decision making took place behind closed doors; the decision was then announced and defended by the authorities. Nowadays things are more participatory and open. Stakeholders and the public are involved much earlier in the process, and the analysis and debates are open and wide-ranging. While this progress towards deliberative democracy may be applauded, it does have significant implications for the analytic process, and particularly for the use of expert judgement. Moreover, the web has brought further complications to the process. Stakeholders and the public may draw upon a much wider range of 'expert' judgement than the authorities might have used. They will collate information - and misinformation - from a wide range of sources. In this talk we shall discuss these issues from a decision-theoretic perspective, asking how the process should be conducted in an ideal world and considering how it works in practice.

Speaker Biosketch:

Simon French is the Director of the Risk Initiative and Statistical Consultancy Unit in the Department of Statistics at the University of Warwick. Until recently, he was Professor of Information and Decision Sciences at Manchester Business School. He has an international reputation in decision analysis, risk assessment and Bayesian statistics. In all his work the emphasis is on multi-disciplinary approaches to solving real problems and the innovative use of technology in supporting decision making. In 1990-92 he was a member of the International Chernobyl Project, leading the team which looked at the issues driving decision making in the aftermath of the accident and running five decision conferences within the Soviet Union. It was his experience on this project that led him to realise the paramount importance of including good information management and communication as an integral part of risk management, especially in relation to crisis response. Since then he has worked on many multi-disciplinary, national and European projects which address major societal risk management. Currently he is working on e-democracy, e-participation and the wider contexts of societal decision making.


Bob Hetes, Senior Advisor for Regulatory Support, National Health and Environmental Effects Research Laboratory, Environmental Protection Agency

Title: Uncertainty, Expert Judgment, and the Regulatory Process: Challenges and Issues

Quantifying uncertainty is a critical element of the regulatory process, and there are concerns about how to do it and how to use it. In particular, quantifying uncertainty might undermine confidence in regulatory decisions, open a regulation to legal challenge, and provide fertile ground for those wishing to delay the regulatory process. These concerns become more pronounced in the multi-stakeholder and adversarial context in which regulatory decisions are often taken. Expert opinion and expert judgment are often used in the debate over uncertainty: what is its nature and magnitude, and how important is it? The nature of the regulatory process necessitates a high degree of scrutiny, which has implications for how expert opinion is utilized. Expert elicitation (EE) is one of many forms of expert opinion, and one which presents special challenges. An EPA Task Force has explored the utility of EE, addressing a number of important issues related to the conduct and use of expert elicitation within US EPA, especially: when is expert elicitation appropriate relative to other methods of characterizing uncertainty? This presentation will summarize the findings and recommendations of the recently finalized Task Force White Paper (http://www.epa.gov/stpc/pdfs/ee-white-paper-final.pdf).

Speaker Biosketch:

Bob Hetes has served as co-chair of an Intra-Agency Task Force on the Conduct and Use of Expert Elicitation in EPA, as co-chair of the Risk Assessment Forum's Probabilistic Risk Assessment Technical Panel, and as a representative on the EPA Science Policy Council Steering Committee. He also served on the EPA Task Force and was a chapter lead for the 2004 EPA Staff Paper, Risk Assessment Principles and Practices. Prior to joining the EPA in 1997, Hetes spent more than 15 years as an environmental consultant, working in both the private and public sectors, developing and applying innovative risk assessment for regulatory decision making. He received an M.S.P.H. in environmental sciences from the University of North Carolina at Chapel Hill.


D. Frank Hsu, Christina Schweikert and Roger Tsai, Fordham University

Title: Combining Multiple Expert Systems using Combinatorial Fusion Analysis

We present a new methodology for combining the opinions and judgements of multiple experts using the information fusion paradigm of Combinatorial Fusion Analysis (CFA). In our framework, each expert is treated as a decision (scoring) system consisting of a score function s, an induced rank function r, and a rank-score characteristic (RSC) function f computed from s and r. Both score and rank combinations are applied to fuse decisions from multiple experts. Variations in the experts' RSC functions are used to measure the cognitive diversity in their respective scoring/ranking behavior. We introduce a novel way of partitioning the spectrum of values of the target variable into n buckets, where each bucket is a candidate in a contest judged by these multiple experts. Among many domain applications, we include a sales revenue prediction for a computer manufacturer. Our new method outperforms each individual forecaster for each quarter, as well as the average performance over four quarters. We will also discuss future work exploring the relationship between the diversity and the performance of multiple expert systems, in order to derive an optimal combination strategy.
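
A minimal sketch of the three ingredients named above, with toy scores for two hypothetical experts over five candidates (all numbers are assumptions for illustration):

    import numpy as np

    # Toy scores from two "experts" (scoring systems) over five candidates,
    # already on a common [0, 1] scale so the systems are comparable.
    scores = {
        "A": np.array([0.90, 0.40, 0.75, 0.10, 0.55]),
        "B": np.array([0.60, 0.80, 0.20, 0.30, 0.70]),
    }

    def rank_function(s):
        """Induced rank function r: rank 1 = highest score."""
        order = np.argsort(-s)
        r = np.empty_like(order)
        r[order] = np.arange(1, len(s) + 1)
        return r

    def rsc_function(s):
        """Rank-score characteristic f: f(j) = score of the rank-j candidate."""
        return np.sort(s)[::-1]

    # Score combination: average the scores across experts.
    sc = np.mean(list(scores.values()), axis=0)
    # Rank combination: average the ranks (lower combined rank = better).
    rc = np.mean([rank_function(s) for s in scores.values()], axis=0)

    # Cognitive diversity: e.g. L1 distance between the two RSC functions.
    diversity = np.abs(rsc_function(scores["A"]) - rsc_function(scores["B"])).mean()

    print("score-combined ranking:", np.argsort(-sc))
    print("rank-combined ranking :", np.argsort(rc))
    print("RSC diversity         :", round(diversity, 3))

Score and rank combination can produce different orderings of the candidates; CFA studies when each kind of combination outperforms the individual systems.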

Speaker Biosketch:

D. Frank Hsu is the Clavius Distinguished Professor of Science and a Professor of Computer and Information Science at Fordham University in New York City. He has been a visiting professor/scholar at Université Paris-Sud (and CNRS), Taiwan University, Tsing Hua University (Hsinchu, Taiwan), Keio University, JAIST, Boston University, and MIT. Hsu's research interests include combinatorics and graph theory; network interconnection and communications; and computing, informatics, and analytics. An information fusion method he and his colleagues proposed and developed, Combinatorial Fusion Analysis, has been applied to target tracking, internet search, virtual screening, bioinformatics, and brain informatics. Hsu has served on several editorial boards, including those of the Journal of Interconnection Networks, Pattern Recognition Letters, IEEE Transactions on Computers, Networks, the International Journal of Foundations of Computer Science, and the Journal of Ubiquitous Computing and Intelligence. Hsu is a Fellow of the New York Academy of Sciences, the Institute of Combinatorics and its Applications, and the International Society of Intelligent Biological Medicine. He is currently Vice Chair of the New York Chapter of the IEEE Computational Intelligence Society.


James Langenbrunner, Los Alamos National Laboratory, Jane Booker, Los Alamos National Laboratory, and Tim Ross, University of New Mexico

Title: Roles for Elicitation in Physics Information Integration: An Expert's Perspective

Twenty years ago, Meyer and Booker published their practical guide on the formal elicitation of expert knowledge. Their expert-oriented, bias-minimization approach established the important linkage between elicitation and the subsequent analysis of the expert's knowledge in physical science and engineering applications. The NRC's reactor safety study (NUREG-1150) and Los Alamos' nuclear weapons reliability program were among the first to utilize their methods. From those, they formalized the use of expertise to formulate the structure of complex problems - the second role for elicitation of expert knowledge. By 1999, the first Information Integration methodology, PREDICT, had been developed. Elicited knowledge became a primary source of information, along with data and models, and experts' predictions were validated. In today's Information Integration, experts provide multi-faceted products: they take on the role of hunter and gatherer of the data, information, and knowledge to be integrated under a 'waste nothing' philosophy, and they play a prominent role in providing the 'glue' for the integration. As an expert in physics applications, uncertainty management, and information integration methods, I will present, by 'thinking aloud,' two examples demonstrating the roles of expert knowledge in terms of my 'community of practice.'

Speaker Biosketch:

James Langenbrunner of Los Alamos National Laboratory (LANL) earned his B.S. with honors from Stanford University in 1983 and his Ph.D. in nuclear physics from Duke University in 1989. He received Defense Programs Awards of Excellence for contributions to Stockpile Stewardship in 2009 and 2010 while working in the Applied Physics Division at LANL. His work has focused on programmatic deliverables for LANL, including establishing goodness-of-fit for simulation models with historical data, validation and verification, and predictability. He has worked for ten years with Jane Booker, an original architect of the science of eliciting and analyzing expert judgment.


Casey Lichtendahl, Darden School of Business, University of Virginia

Title: The Wisdom of Competitive Crowds

We analyze a winner-take-all forecasting competition in which a prize is awarded to the forecaster whose point forecast is closest to the actual outcome. This competition induces forecasters to report strategically. The question is whether the competitive crowd's forecast, the average of the strategic forecasts, is more accurate than that of a crowd of truthful forecasters. We find the competitive crowd's forecast is more accurate and measure its degree of improvement. To accomplish this, we characterize the nature of strategic forecasting in a set of limiting equilibrium results. As the correlation between the forecasters' private information increases, the forecasters switch from using a pure strategy to a mixed strategy. When each forecaster's private information is comprised of sample data, we show that this mixed-strategy equilibrium is equivalent to simply reporting the sample's last data point. The report-the-last strategy is consistent with the availability heuristic and suggests that the availability heuristic may be well-adapted to competitive forecasting situations. Consequently, winner-take-all forecasting competitions may serve as an attractive alternative to prediction markets: they are easy to organize, easy to participate in, and potentially highly accurate.
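
The following toy Monte Carlo is not the paper's equilibrium analysis; it only illustrates one mechanism by which unshrunk 'report the last data point' forecasts can average out better than individually optimal forecasts. All distributions and parameter values are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n_trials, n_forecasters, n_obs = 20000, 50, 3
    tau2, sigma2 = 0.25, 1.0                    # assumed prior and noise variances
    w = n_obs * tau2 / (n_obs * tau2 + sigma2)  # Bayesian shrinkage weight

    se_truthful = se_competitive = 0.0
    for _ in range(n_trials):
        theta = rng.normal(0, np.sqrt(tau2))    # outcome to be forecast
        data = theta + rng.normal(0, np.sqrt(sigma2), (n_forecasters, n_obs))
        truthful = w * data.mean(axis=1)        # each reports a shrunk posterior mean
        competitive = data[:, -1]               # each reports the last data point
        se_truthful += (truthful.mean() - theta) ** 2
        se_competitive += (competitive.mean() - theta) ** 2

    print("truthful crowd MSE   :", se_truthful / n_trials)
    print("competitive crowd MSE:", se_competitive / n_trials)

With these parameters the truthful crowd inherits a shrinkage bias that averaging cannot remove, while the noise in the raw last-data-point reports averages out across a large crowd.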

Speaker Biosketch:

Casey Lichtendahl is an Assistant Professor at the Darden School of Business, University of Virginia. He received his A.B. from Princeton University, M.B.A. from the University of Virginia, M.S. from Stanford University, and Ph.D. from Duke University. He serves as an Associate Editor for the INFORMS journal Decision Analysis. His research focuses on eliciting, evaluating, and combining probability forecasts, and on multiattribute utility theory.


Thomas Mazzuchi, Department of Engineering Management and Systems Engineering, School of Engineering and Applied Science, The George Washington University

Title: Use of Expert Judgment in Risk Assessments Involving Complex State Spaces

This talk discusses the use of a modified paired comparison technique for eliciting and combining expert judgment in risk analyses where the risk states are multidimensional. The technique can be used to model static probabilities and also to define exponential probability distributions for component failure times. Both technical and operational challenges when using the technique will be discussed. Examples will be taken from actual uses of the technique in several transportation risk analyses in both the maritime and aviation industries.
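
The talk's modified paired-comparison technique is not specified in this abstract, but as background, here is a minimal sketch of the classic Bradley-Terry model, a standard way to turn paired-comparison judgments into relative weights. The comparison counts are invented for illustration.

    import numpy as np

    # wins[i, j] = number of times experts judged alternative i over
    # alternative j (assumed data for three alternatives).
    wins = np.array([[0., 8., 9.],
                     [2., 0., 7.],
                     [1., 3., 0.]])
    n = len(wins)

    # Bradley-Terry: P(i beats j) = v_i / (v_i + v_j).  Fit the worths v_i
    # with the standard MM (minorization-maximization) iteration.
    v = np.ones(n)
    for _ in range(200):
        for i in range(n):
            num = wins[i].sum()
            den = sum((wins[i, j] + wins[j, i]) / (v[i] + v[j])
                      for j in range(n) if j != i)
            v[i] = num / den
        v /= v.sum()   # normalize to relative weights

    print("relative worths:", v.round(3))

In expert-judgment settings the fitted worths can be read as relative weights over the compared risk states, which can then be scaled against an anchor state whose probability is known.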

Speaker Biosketch:

Thomas A. Mazzuchi is a Professor of Operations Research and Engineering Management in the Department of Engineering Management and Systems Engineering of the School of Engineering and Applied Science at the George Washington University. His areas of expertise are applied statistics, Bayesian inference, quality control, reliability and risk analysis, and systems engineering. His previous research projects have focused on the analysis of quality and reliability in the small spacecraft technology initiative, spares provisioning modeling, accelerated life test methodology for high-reliability components, data anomalies in field test reliability data, statistical analysis of non-homogeneous failure data and of composite material failure data, maritime risk analysis, and the use of expert judgment in reliability and risk analysis. Professor Mazzuchi joined the George Washington University in 1985 and has served as Chairman of the Department of Operations Research, Chairman of the Department of Engineering Management and Systems Engineering, and Interim Dean of the School of Engineering and Applied Science.


Jason R. W. Merrick, Department of Statistical Sciences and Operations Research, Virginia Commonwealth University

Title: Overlapping Expert Information: Learning about Dependencies in Expert Judgment

The Bayesian approach to combining expert opinions is well developed, providing a decision maker's posterior beliefs after receiving advice from people with deep knowledge in a given subject. A necessary part of these models is the inclusion of dependencies between the experts' judgments, often justified by an overlap in the information upon which the experts base their judgments. Often these dependencies must be specified a priori and there are insufficient degrees of freedom in the judgments to update. We discuss two mechanisms for learning about expert dependencies. One comes from extended pairwise comparisons used in maritime risk assessment. The other uses a hierarchical structure drawn from Bayesian semiparametric methods. We discuss the Bayesian analysis underlying these methods and illustrate the learning about expert dependencies through applications.
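
A minimal sketch of why dependence matters when combining experts, under a standard normal model that is an assumption here rather than the talk's specific mechanisms: experts give unbiased point estimates of a quantity theta, with common error standard deviation sigma and pairwise correlation rho.

    import numpy as np

    def combine_experts(x, sigma, rho, m0=0.0, s02=100.0):
        """Posterior mean and variance of theta ~ N(m0, s02) given expert
        estimates x with covariance sigma^2 * [(1-rho) I + rho J]."""
        n = len(x)
        Sigma = sigma**2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))
        Sinv1 = np.linalg.solve(Sigma, np.ones(n))   # Sigma^{-1} 1
        prec = 1.0 / s02 + np.ones(n) @ Sinv1        # posterior precision
        mean = (m0 / s02 + x @ Sinv1) / prec
        return mean, 1.0 / prec

    x = np.array([10.2, 11.5, 10.8])                 # three experts' estimates
    for rho in (0.0, 0.5, 0.9):
        m, v = combine_experts(x, sigma=1.0, rho=rho)
        print(f"rho={rho}: posterior mean {m:.2f}, variance {v:.3f}")

As rho grows, the estimates carry less independent information and the posterior variance rises; getting the dependence right - and learning it from the judgments, as the talk proposes - therefore matters for the decision maker's posterior beliefs.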

Speaker Biosketch:

Jason Merrick is a Professor of Operations Research in the Department of Statistical Sciences and Operations Research at Virginia Commonwealth University and program director for the new PhD in Systems Modeling and Analysis. He received his B.A. degree in mathematics and computation from Oxford University, England, and his D.Sc. degree in operations research from the George Washington University, Washington, DC. His research is primarily in the areas of decision analysis, simulation, and Bayesian statistics. He has worked on projects ranging from oil spill risk to watershed management to counter-terrorism, and has received grants from the National Science Foundation, the Federal Aviation Administration, the United States Coast Guard, the Department of Homeland Security, the American Bureau of Shipping, British Petroleum, and Booz Allen Hamilton, amongst others. He has also performed training for Infineon Technologies, Wyeth Pharmaceuticals, and Capital One Services.


Winston R. Sieck, Global Cognition

Title: Explanations as Indicators of Expertise

In this talk, I explore the possibility that explanations can be employed to assess the cognitive competence or expertise of an explainer. Decades of research and inquiry within cognitive science and philosophy have attempted to address the notion of explanation quality and to determine, "What is a good explanation?" In many well-defined areas this is a relatively easy question to address: a good explanation is one that reflects ground truth. For example, within the biological domain, explanations of the human circulatory system are considered good to the extent that they completely and consistently address the physical components involved and the relations between them, without introducing misconceptions. In social and political domains, on the other hand, ground truth is less well defined, and gauging the quality of explanations is more tenuous. Nevertheless, it may still be possible to determine the relative quality of such explanations by examining domain-general characteristics, such as conceptual specificity and integration, explanatory complexity, and evaluative differentiation. I will review efforts along these lines to gauge expertise based on evaluations of explanations, and discuss prospects and pitfalls.

Speaker Biosketch:

Winston R. Sieck is president of Global Cognition, a research organization and e-learning solutions provider located in Yellow Springs, OH. His areas of specialization include culture and cognition, metacognition, cross-cultural competence, decision making, and cognitive skills/expertise. He received his PhD in cognitive psychology and his MA in statistics from the University of Michigan. After serving as a post-doctoral scholar in quantitative psychology at the Ohio State University, he led the cultural programs in naturalistic decision making at Klein Associates for several years, and then founded Global Cognition.


Eric Stone, Wake Forest University

Title: Training to Improve Judgmental Expertise by Using Decompositions of Judgment Accuracy Measures

For judgments both of probabilities and of continuous quantities, there exist overall accuracy measures as well as decompositions of these accuracy measures into component parts. These decompositions are helpful because they tell us not only whether a judge's accuracy is high or low, but also in what ways the judge is performing well or poorly (e.g., is the judge overconfident or underconfident?). Although these decompositions have often been used to evaluate accuracy, they have rarely been used as a means to improve training techniques for increasing accuracy. Our work attempts to do just that by developing training techniques designed to focus on specific components of accuracy. We show that techniques that improve one component of accuracy frequently will not influence other components. Thus, to maximize gains in judgment accuracy, separate training techniques should be used that focus on different accuracy components.
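
The abstract does not name a particular decomposition; as one standard example of the kind it describes, here is the Murphy decomposition of the Brier score for probability judgments, which separates overall accuracy into calibration (reliability), resolution, and outcome uncertainty. The forecasts and outcomes below are invented for illustration.

    import numpy as np

    def murphy_decomposition(p, y):
        """Murphy (1973) decomposition of the Brier score for probability
        forecasts p of binary outcomes y:
            Brier = reliability - resolution + uncertainty."""
        p, y = np.asarray(p, float), np.asarray(y, float)
        n, ybar = len(y), y.mean()
        rel = res = 0.0
        for pk in np.unique(p):
            idx = p == pk
            nk, ok = idx.sum(), y[idx].mean()   # group size, observed frequency
            rel += nk * (pk - ok) ** 2          # miscalibration within the group
            res += nk * (ok - ybar) ** 2        # distance of group from base rate
        unc = ybar * (1 - ybar)                 # outcome uncertainty
        return rel / n, res / n, unc

    p = [0.9, 0.9, 0.7, 0.7, 0.3, 0.3, 0.1, 0.1]
    y = [1, 1, 1, 0, 0, 1, 0, 0]
    rel, res, unc = murphy_decomposition(p, y)
    print("Brier =", rel - res + unc, " (rel, res, unc) =", (rel, res, unc))

A judge can improve reliability (calibration) without improving resolution, which is one way training aimed at one component can leave the others unchanged, as the talk argues.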

Speaker Biosketch:

Eric Stone is an associate professor of psychology at Wake Forest University. His research is in the field of judgment and decision making, with specific emphasis on how to make accurate probability judgments, how to communicate these probabilities, and how people decide differently for others than for themselves. Among other projects, he is presently working on the IARPA-funded ACE program designed to improve forecasting ability about world events.


Alexis Tsoukiàs, LAMSADE-CNRS, Université Paris Dauphine, Paris, France

Title: Justified Opinions are Better than Simple Ones: The Use of Argumentation Theory in Forming Collective Opinions

Recent advances in preference and judgement aggregation suggest that Arrow's theorem remains the crucial obstacle to universal procedures for producing consensus opinions in collective and social choice settings. In the presentation we first review some of the main results in this direction. We then claim that one way to handle the problem of establishing a collective opinion is to use formal argumentation theory in order to produce arguable opinions. The basic idea is that in order to have an "acceptable" collective opinion we do not only need a reasonable aggregation procedure; we also need to be able to explain and justify it. Under such a perspective we introduce a first classification scheme of aggregation procedures considering three basic dimensions: how differences of preferences are handled, how dependencies among preferences are handled, and how negative opinions are considered. We then show how justifications can be automatically constructed in order to obtain arguable conclusions.
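
The obstacle Arrow's theorem formalizes is easy to exhibit. The classic three-voter profile below (a standard textbook example, not taken from the talk) produces a majority cycle, so pairwise majority voting alone cannot justify any alternative as the collective best - one motivation for demanding explicit justifications alongside the aggregation procedure.

    from itertools import combinations

    # Each ballot lists alternatives from most to least preferred.
    ballots = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]

    def prefers(ballot, x, y):
        """True if this ballot ranks x above y."""
        return ballot.index(x) < ballot.index(y)

    # Pairwise majority comparisons.
    for x, y in combinations("abc", 2):
        wins_x = sum(prefers(b, x, y) for b in ballots)
        print(f"{x} vs {y}: {wins_x}-{len(ballots) - wins_x}")

    # Output: a beats b (2-1) and b beats c (2-1), yet c beats a (2-1) -
    # the majority preference cycles.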

Speaker Biosketch:

Alexis Tsoukiàs is a research director at LAMSADE-CNRS (Centre National de la Recherche Scientifique), Université Paris Dauphine. His research interests include multiple criteria decision making, non-conventional preference modeling, applied non-classical logics, ordinal mathematical programming, artificial intelligence, and decision theory. He is the co-author of two books and of more than 70 journal articles and book contributions. He has been vice-president of ROADEF (the French OR society) as well as President of EURO (the European association of OR societies). Besides teaching several post-graduate classes, he occasionally practices decision support. He is the coordinator of the COST Action IC0602, "Algorithmic Decision Theory". He holds a PhD (1989) in Computer Science and Systems Engineering from Politecnico di Torino (Italy), where he also completed his engineering studies.


Carolyn Wong, The RAND Corporation

Title: Consensus Building Using E-DEL+I: Lessons Learned

Informed decision making with multiple stakeholders is a complicated act. Each stakeholder must balance his or her focused interests and specific expertise against the need to collaborate, cooperate, synchronize, or otherwise engage with the activities of other stakeholders who are simultaneously attempting the same balancing act. Making an informed decision in this type of environment requires an awareness of all stakeholders' viewpoints. The Electronic Decision Enhancement Leverager plus Integrator (E-DEL+I™, ©, provisional patents, RAND) is an analytic research capability that can be used to integrate diverse viewpoints and expertise on complex issues, such as those that involve many stakeholders and multiple critical dimensions. E-DEL+I is applicable to such situations because it is designed to support consensus building for solution generation, conflict resolution, and the strategic definition of next steps. E-DEL+I has been used in a variety of applications, including determining the Army technology areas most conducive to collaborative approaches, determining science and technology priorities for the Navy, completing the Functional Area Analysis of the Capability Based Assessment of the Netcentric Operational Environment for the Joint Staff, assessing the Army's long-term strategic and operational alternatives, and determining research needs for the law enforcement community. This talk focuses on lessons learned from applying E-DEL+I.

Speaker Biosketch:

Carolyn Wong is an operations research specialist at The RAND Corporation. Her research focus is in defense acquisition, science, and technology. She pioneered the Electronic Decision Enhancement Leverager plus Integrator technique and the Electronic Policy Improvement Capability (E-DEL+I and EPIC, ™, ©, RAND). Her publications helped create the body of literature on collaborative working arrangements. She has led studies on consensus building; the roles, responsibilities, and authorities of defense officials; strategy for tactical wheeled vehicles; policy development in light of technology advancement; the organizational effectiveness of research management; and cost analysis. Dr. Wong has a Ph.D. in Electrical Engineering, an M.S. in Management, and a B.A. in Mathematics, all from UCLA.

