DIMACS Workshop on Foundation Models, Large Language Models, and Game Theory


When the Majority is Wrong: Modeling Annotator Disagreement for Language Tasks

October 19, 2023, 11:25 AM - 11:45 AM

Location:

DIMACS Center

Rutgers University

CoRE Building

96 Frelinghuysen Road

Piscataway, NJ 08854


Eve Fleisig, University of California, Berkeley

Machine learning methods have long used majority vote among annotators for ground truth labels, but annotator disagreement often reflects real differences in opinion, not noise. This issue is particularly important for training large language models, which perform a wide range of often sensitive tasks for a diverse population. For example, a crucial problem in hate speech detection is whether a statement is offensive to the demographic that it targets, which may constitute a small fraction of the annotator pool. In this talk, I’ll present a model that predicts individual annotators’ ratings on potentially offensive text and combines this information with the predicted group targeted by the text to model the opinions of relevant stakeholders. I’ll also discuss ongoing challenges and opportunities of designing large language models that incorporate human feedback from multiple perspectives.
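
To make the core idea concrete, the following is a minimal, hypothetical sketch (not the architecture presented in the talk) of predicting per-annotator ratings instead of a single majority-vote label, written in PyTorch. The model name, dimensions, and embedding-based design are illustrative assumptions, not details from the abstract.

import torch
import torch.nn as nn

class AnnotatorRatingModel(nn.Module):
    # Illustrative sketch: combine a text representation with a learned
    # annotator embedding to predict that annotator's offensiveness rating,
    # rather than collapsing all annotators into one majority-vote label.
    def __init__(self, text_dim=128, num_annotators=50, annot_dim=16, hidden=64):
        super().__init__()
        self.annotator_emb = nn.Embedding(num_annotators, annot_dim)
        self.mlp = nn.Sequential(
            nn.Linear(text_dim + annot_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # one scalar rating per (text, annotator) pair
        )

    def forward(self, text_emb, annotator_ids):
        a = self.annotator_emb(annotator_ids)
        return self.mlp(torch.cat([text_emb, a], dim=-1)).squeeze(-1)

# Toy usage with stand-in features (real text embeddings would come from an encoder).
model = AnnotatorRatingModel()
text_emb = torch.randn(4, 128)                # 4 example texts
annotator_ids = torch.tensor([3, 7, 7, 12])   # which annotator rates each text
ratings = model(text_emb, annotator_ids)      # predicted rating per (text, annotator)

Training such a model on individual annotators' labels, and then aggregating its predictions over annotators from the demographic the text targets, is one plausible way to weight the stakeholders the abstract describes.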

[Video]

 

Speaker bio: Eve Fleisig is a third-year PhD student at UC Berkeley, advised by Rediet Abebe and Dan Klein. Her research lies at the intersection of natural language processing and AI ethics, with a focus on preventing societal harms of text generation models and improving large language model evaluation. Previously, she received a B.S. in computer science from Princeton University. She is a Berkeley Chancellor’s Fellow and recipient of the NSF Graduate Research Fellowship.