This two-and-a-half-day workshop, hosted by DIMACS and the Department of Statistics and Biostatistics at Rutgers University, is the third in a series of workshops on Bayesian, Frequentist, and Fiducial Inferences and Statistical Foundations.
The objective of the workshop is to examine the role and foundations of statistical inference in the era of data science, as well as its applications to fusion learning.
The recent emergence of big data has heightened the need for efficient methodologies for analyzing data and drawing inferences, and it has highlighted the growing importance of statistics. Despite its tremendous progress, statistics is still a young discipline with several different and competing paths in its approaches and foundations. Most notable are the differences among the Bayesian, frequentist, and fiducial (BFF) approaches. While competing approaches are a natural part of the progression of any scientific discipline, differences in the foundations of statistical inference can lead to different interpretations, and possible misuses, of inferences drawn from the same data set. Misuses of statistical inference, and the lack of coherent bridges between the approaches, often lead to mistrust of statistics. Statisticians have long been aware of this hidden danger to the field, and many have stressed the urgent need to build a modern statistical inference that "matches contemporary attitudes" (Kass, 2011, Stat. Sci.) and allows "Bayesian, fiducial and frequentist (BFF) inferences to thrive under one roof as BFFs (Best Friends Forever)" (Meng, 2014, IMS Bulletin).
The differences among the BFF approaches, "unlike most philosophical disputes, have immediate practical consequences" (Efron, 2013, Science). A case in point is the impact of the BFF approaches on fusion learning and the combining of information, areas that are growing in importance as vast quantities of data are routinely collected from diverse sources. The workshop will provide an ideal platform for comparing and connecting methods, building theory, and overcoming barriers to fusing inferences from multiple sources, based on the different but possibly shared BFF perspectives. The ultimate goal is to develop approaches that improve decision making by exploiting combined inferences that are more efficient, and potentially more accurate, than those from any single source.
The workshop will bring together statisticians and data scientists to address issues related to the foundations of statistical inference and its applications to combining information and fusion learning. Professors Jim Berger (Duke University) and Brad Efron (Stanford University) will deliver the keynote addresses, followed by many other talks and discussions. The workshop will help disseminate new approaches to coherent BFF inference and new advances in statistical inference and their applications, both within statistics and across all fields that use statistics.