Human-Machine Collaboration in a Changing World 2022 (HMC22)

Designing Hybrid Crowd+AI Prediction Markets for Estimating Scientific Replicability

December 02, 2022, 1:45 PM - 1:50 PM

Location: Online and Paris, France

Tatiana Chakravorti, Pennsylvania State University

Despite high-profile successes in the field of artificial intelligence (AI) [1-4], machine-driven solutions still suffer from important limitations, particularly on complex tasks that require creativity, common sense, intuition, or learning from limited data [5-8]. Both the promise and the challenges of AI have motivated work exploring frameworks for human-machine collaboration [9-13]. The hope is that we can eventually develop hybrid systems that bring together human intuition and machine rationality to tackle today's grand challenges effectively and efficiently.

In this talk, we will give an overview of ongoing research to develop and test hybrid prediction markets for crowd+AI collaboration. This builds on our own and others' prior work developing fully artificial prediction markets as a novel machine learning algorithm and demonstrating the success of this approach on benchmark classification tasks [14,15]. In an artificial prediction market, algorithmic agents (or bot traders) buy and sell assets tied to the outcomes of future events. Classification decisions can be framed as such outcomes, and accordingly, the price of an asset corresponding to a given classification outcome can be taken as a proxy for the system's confidence in that decision.
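To make the mechanism concrete, here is a minimal sketch of an artificial prediction market for a single binary classification decision. It assumes a logarithmic market scoring rule (LMSR) market maker and a simple belief-threshold trading rule; these are illustrative assumptions, not the specific mechanism used in [14,15], and the names (LMSRMarket, bot_trader) are hypothetical.

```python
import numpy as np

class LMSRMarket:
    """Toy logarithmic market scoring rule (LMSR) market maker for a binary outcome.

    Prices of the two assets ("class 0", "class 1") always sum to 1 and can be
    read as the market's current probability estimate for each outcome.
    """

    def __init__(self, liquidity: float = 10.0):
        self.b = liquidity            # higher liquidity -> prices move more slowly
        self.shares = np.zeros(2)     # net outstanding shares per outcome

    def prices(self) -> np.ndarray:
        exp_q = np.exp(self.shares / self.b)
        return exp_q / exp_q.sum()

    def cost(self, shares: np.ndarray) -> float:
        return self.b * np.log(np.exp(shares / self.b).sum())

    def buy(self, outcome: int, amount: float) -> float:
        """Buy `amount` shares of `outcome`; returns the price charged."""
        new_shares = self.shares.copy()
        new_shares[outcome] += amount
        charge = self.cost(new_shares) - self.cost(self.shares)
        self.shares = new_shares
        return charge

def bot_trader(belief: float, market: LMSRMarket, budget: float = 1.0) -> None:
    """A simple bot: buy the outcome it believes is underpriced, in proportion
    to the gap between its belief and the current market price."""
    p1 = market.prices()[1]
    if belief > p1:
        market.buy(1, budget * (belief - p1))
    else:
        market.buy(0, budget * (p1 - belief))

# Example: three bots (e.g., calibrated base classifiers) trade on one test instance.
market = LMSRMarket(liquidity=5.0)
for belief in [0.8, 0.65, 0.4]:
    bot_trader(belief, market)

# The closing price of the "class 1" asset is read as the system's confidence.
print("market probability of class 1:", market.prices()[1])
```

In this sketch the closing price plays the role of the asset price described above: it summarizes how strongly the pool of traders, taken together, backs each classification outcome.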

The most exciting opportunity artificial prediction markets afford, we suggest, is the ability to integrate human inputs more meaningfully than is currently possible with existing machine learning algorithms. Human traders can participate alongside algorithmic agents during both training and testing, and the efficient markets hypothesis [16] states that the market price reflects the aggregate information available to participants (humans and agents) at least as well as any competing method.
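Under the illustrative LMSR assumption from the sketch above, this aggregation property has a simple form: the closing price of the "yes" asset depends only on the pooled net positions accumulated by all participants, whether human or algorithmic,

$$
p_{\text{yes}} \;=\; \frac{\exp\big((q^{\text{human}}_{\text{yes}} + q^{\text{bot}}_{\text{yes}})/b\big)}{\exp\big((q^{\text{human}}_{\text{yes}} + q^{\text{bot}}_{\text{yes}})/b\big) + \exp\big((q^{\text{human}}_{\text{no}} + q^{\text{bot}}_{\text{no}})/b\big)},
$$

where $q^{\text{human}}$ and $q^{\text{bot}}$ are the net shares purchased by human and algorithmic traders and $b$ is the liquidity parameter. Human judgments thus enter through exactly the same interface as bot trades and are pooled into a single probability estimate.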

We have designed and piloted hybrid prediction markets for the task of estimating scientific replicability (replication markets; see, e.g., [17]). Assets in the market represent “will replicate” and “will not replicate” outcomes of real replication studies for published findings in the social and behavioral sciences. We present the outcomes of our initial experiments with human subjects, compare the performance of artificial, human-alone, and hybrid scenarios, and lay out a research agenda that builds upon these ideas.
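As a sketch of how the three conditions might be compared against ground truth, the snippet below scores market-based "will replicate" prices with the Brier score. The prices and replication outcomes are made-up placeholders for illustration only, not the experimental results presented in the talk.

```python
import numpy as np

def brier_score(predicted_prob: np.ndarray, replicated: np.ndarray) -> float:
    """Mean squared error between the closing 'will replicate' price and the
    binary outcome of the actual replication study (lower is better)."""
    return float(np.mean((predicted_prob - replicated) ** 2))

# Placeholder data: outcomes of five hypothetical replication studies
# (1 = the finding replicated) and closing prices under three conditions.
outcomes = np.array([1, 0, 1, 1, 0])

conditions = {
    "artificial (bots only)": np.array([0.70, 0.45, 0.60, 0.55, 0.40]),
    "human-alone":            np.array([0.80, 0.35, 0.65, 0.70, 0.30]),
    "hybrid crowd+AI":        np.array([0.85, 0.25, 0.70, 0.75, 0.20]),
}

for name, prices in conditions.items():
    print(f"{name:>24}: Brier score = {brier_score(prices, outcomes):.3f}")
```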