DIMACS Workshop on Foundation Models, Large Language Models, and Game Theory

Fine-tuning Games: Bargaining and Adaptation for General-Purpose Models

October 20, 2023, 11:05 AM - 11:25 AM

Location:

DIMACS Center

Rutgers University

CoRE Building

96 Frelinghuysen Road

Piscataway, NJ 08854

Hoda Heidari, Carnegie Mellon University

Major advances in Machine Learning (ML) and Artificial Intelligence (AI) increasingly take the form of developing and releasing general-purpose models. These models are designed to be adapted by other businesses and agencies to perform a particular, domain-specific function. This process has become known as adaptation or fine-tuning. This paper offers a model of the fine-tuning process where a Generalist brings the technological product (here an ML model) to a certain level of performance, and one or more Domain-specialists adapt it for use in a particular domain. Both entities are profit-seeking and incur costs when they invest in the technology, and they must reach a bargaining agreement on how to share the revenue for the technology to reach the market. For a relatively general class of cost and revenue functions, we characterize the conditions under which the fine-tuning game yields a profit-sharing solution. We observe that any potential Domain-specialist will either contribute, free-ride, or abstain in its uptake of the technology, and we provide conditions yielding these different strategies. We show how methods based on bargaining solutions and sub-game perfect equilibria provide insights into the strategic behavior of firms in these types of interactions, and we find that profit-sharing can still arise even when one firm has significantly higher costs than another. We also provide methods for identifying Pareto-optimal bargaining arrangements for a general set of utility functions.
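To make the two-stage structure concrete, the sketch below is a minimal numerical toy version of such a fine-tuning game, assuming specific functional forms chosen only for illustration: a concave square-root revenue, quadratic investment costs with parameters c_G and c_D, an exogenously fixed revenue share s, and a fixed adaptation cost F for the Domain-specialist. None of these forms or parameters come from the talk itself, which analyzes general cost and revenue functions and endogenizes the bargaining; the sketch only shows how backward induction separates the specialist's three possible regimes (contribute, free-ride, abstain).

```python
import numpy as np

# Toy two-stage fine-tuning game (illustrative assumptions, not the talk's model):
# Stage 1: the Generalist chooses base quality x at cost c_G * x^2.
# Stage 2: the Domain-specialist either abstains, or participates and chooses
#          added quality y at cost c_D * y^2 plus a fixed adaptation cost F.
# Revenue 10*sqrt(x + y) is split by a fixed share s agreed in advance.

def revenue(q):
    return 10.0 * np.sqrt(q)            # assumed concave revenue in total quality

def invest_cost(z, c):
    return c * z ** 2                    # assumed convex investment cost

def specialist_response(x, s, c_D, F, grid):
    """Domain-specialist's best response to base quality x: (y, profit), or (None, 0) if it abstains."""
    profits = (1.0 - s) * revenue(x + grid) - invest_cost(grid, c_D) - F
    i = int(np.argmax(profits))
    if profits[i] < 0:                   # abstain: outside option of zero
        return None, 0.0
    return float(grid[i]), float(profits[i])

def solve_game(s=0.5, c_G=0.3, c_D=0.6, F=2.0, grid=np.linspace(0.0, 5.0, 501)):
    """Backward induction: the Generalist picks x anticipating the specialist's response."""
    best = None
    for x in grid:
        y, pi_D = specialist_response(x, s, c_D, F, grid)
        if y is None:                    # no deal: the technology never reaches the market
            pi_G = -invest_cost(x, c_G)
        else:
            pi_G = s * revenue(x + y) - invest_cost(x, c_G)
        if best is None or pi_G > best[2]:
            best = (float(x), y, pi_G, pi_D)
    return best

if __name__ == "__main__":
    x, y, pi_G, pi_D = solve_game()
    regime = "abstains" if y is None else ("free-rides" if y == 0 else "contributes")
    print(f"Generalist invests x = {x:.2f}; Domain-specialist {regime}"
          + ("" if y is None else f" with y = {y:.2f}"))
    print(f"Profits: Generalist {pi_G:.2f}, Domain-specialist {pi_D:.2f}")
```

Varying the share s, the cost parameters c_G and c_D, or the fixed cost F moves this toy outcome between the three regimes; the talk characterizes such regime boundaries, and the resulting Pareto-optimal profit-sharing arrangements, for general cost and revenue functions.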


Speaker bio: Hoda Heidari is the K&L Gates Career Development Assistant Professor in Ethics and Computational Technologies at Carnegie Mellon University, with joint appointments in Machine Learning and Societal Computing. She is also affiliated with the Human-Computer Interaction Institute, CyLab, the Block Center for Technology and Society, and the Institute for Politics and Strategy. Her research is broadly concerned with the social, ethical, and economic implications of Artificial Intelligence, and in particular with issues of fairness and accountability arising from the use of Machine Learning in socially consequential domains. Her work in this area has won a best paper award at the ACM Conference on Fairness, Accountability, and Transparency (FAccT), an exemplary track award at the ACM Conference on Economics and Computation (EC), and a best paper award at the IEEE Conference on Secure and Trustworthy Machine Learning (SaTML).

Dr. Heidari co-founded and co-leads the university-wide Responsible AI Initiative at CMU. She has organized several scholarly events on topics related to Responsible and Trustworthy AI, including multiple tutorials and workshops at top-tier Artificial Intelligence venues such as NeurIPS, ICLR, and the Web Conference. She is particularly interested in translating research contributions into positive impact on AI policy and practice, and has organized multiple campus-wide events and policy convenings addressing topics such as AI governance and accountability.

Dr. Heidari completed her doctoral studies in Computer and Information Science at the University of Pennsylvania. She holds an M.Sc. degree in Statistics from the Wharton School of Business. Before joining Carnegie Mellon as a faculty member, she was a postdoctoral scholar at the Machine Learning Institute of ETH Zurich, followed by a year at the Artificial Intelligence, Policy, and Practice (AIPP) initiative at Cornell University.