May 12, 2022, 1:15 PM - 3:15 PM
The Heldrich Hotel & Conference Center
10 Livingston Avenue
New Brunswick, NJ 08901
Theia Henderson, Massachusetts Institute of Technology
Farnaz Jahanbakhsh, Massachusetts Institute of Technology
David Karger, Massachusetts Institute of Technology
Part 1: Empowering End Users to Make Their Own Choices on Harassment, Misinformation, Free Expression, and More
Today we mostly rely on platforms and/or administrators to moderate content for us, and are often dissatisfied when they fail to do so effectively. In my group, we are exploring an alternative approach: empowering all individuals on the platform to make their own moderation decisions for themselves. I will argue that existing systems can and should be changed to give individual users greater autonomy in moderation. In this and the next few talks, we'll discuss a few different systems that we've built to explore this idea. I'll start with Squadbox, a system that helps users coordinate their own friends as moderators to protect them from harassment.
Part 2: Fighting Misinformation through a User-Managed Web of Trust
Platform operators have devoted significant effort to combating misinformation on behalf of their users. Users are also stakeholders in this battle, but their own efforts to combat misinformation go unsupported by the platforms. In this work, we consider three new user affordances that give social media users greater power in their fight against misinformation: (1) structured accuracy assessments of posts by users, (2) user-specified indication of trust in other users, and (3) user configuration of social feed filters according to assessed accuracy. Through interviews, a survey, and experiments with a prototype system implementing the affordances, we assess the potential of these affordances to improve the quality of social information-sharing.
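To make the three affordances concrete, here is a minimal sketch in Python. It is a hypothetical illustration, not the prototype described above; the class and function names (`Post`, `User`, `filtered_feed`) and the filtering rule (hide posts a trusted user marked inaccurate; require at least one trusted "accurate" vote; pass through unassessed posts) are assumptions for the sake of example.

```python
# Hypothetical sketch of the three affordances; names and filtering
# policy are illustrative assumptions, not the authors' implementation.
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    author: str
    # Affordance 1: structured accuracy assessments, keyed by assessor.
    assessments: dict = field(default_factory=dict)  # user -> "accurate" | "inaccurate"

@dataclass
class User:
    name: str
    # Affordance 2: user-specified trust in other users.
    trusted: set = field(default_factory=set)

def filtered_feed(user, posts, min_accurate=1):
    """Affordance 3: a user-configured feed filter. Keep posts with no
    trusted 'inaccurate' votes and at least `min_accurate` trusted
    'accurate' votes; posts with no trusted signal stay visible."""
    feed = []
    for post in posts:
        votes = [v for u, v in post.assessments.items() if u in user.trusted]
        if not votes:
            feed.append(post)  # no trusted assessment: leave visible
        elif "inaccurate" not in votes and votes.count("accurate") >= min_accurate:
            feed.append(post)
    return feed
```

The key design point is that filtering is computed per-user from that user's own trust network, rather than from a single platform-wide moderation decision.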
Part 3: Graffiti: The Potential for Anarchy in the Formation of Online Spaces
The social spaces we inhabit online are rarely ideal, but there is little we can do to change them without jeopardizing the social connections that exist within them. We introduce Graffiti, a system that suggests a new ecosystem of online social spaces is possible: one where social data is not siloed into any single space. In this ecosystem, users can adopt whatever interfaces, filters, algorithms, moderators, and broadcast patterns they see fit. The Graffiti API is simple enough that even novice programmers can create or remix social spaces using nothing but HTML and CSS. Graffiti still maintains a separation between contextual spaces, since without any separation between our home, work, and third spaces we would inevitably experience "context collapse." Its power, however, comes from allowing data to exist in large sets of spaces all at once: for communication to occur, those sets need only intersect, not coincide, with the set of spaces someone observes. An ecosystem with this degree of flexibility requires rethinking moderation, identity management, and design interventions, and we discuss how these changes might interplay with existing and future laws.
Part 4: Can it Work?
We’ve discussed three systems that give end users power to substantially control what they encounter. Here we'll step back and think about the pros and cons of this approach.
Theia Henderson is a second-year PhD student in computer science at MIT CSAIL, where she is a member of the Haystack group advised by David Karger. Her research in Human-Computer Interaction focuses on improving social media and other online social tools. As part of her work she builds real-world systems, but these practical designs draw inspiration from her background in theoretical computer science.
Farnaz Jahanbakhsh is currently a PhD student at MIT CSAIL advised by David Karger. Before starting at MIT, she completed a master's in computer science at the University of Illinois at Urbana-Champaign (UIUC) and, before that, a bachelor's in computer engineering at Sharif University of Technology in Tehran, Iran. Her areas of research are Human-Computer Interaction and Social Computing.
David Karger is a Professor in the Computer Science and Artificial Intelligence Laboratory in the EECS department at MIT. His current primary interest is developing tools that help individuals manage information better. This involves studying people and current tools to understand where the problems are, creating and evaluating tools that address those problems, and deploying those tools to learn how people use them, then iterating the whole process. His work draws on whatever fields can help: information retrieval, machine learning, databases, and algorithms, but most often human-computer interaction. Karger began his career in algorithms and continues to be interested in the topic, particularly in the application of algorithms to real-world problems. This has led him to work in systems, networking, and coding and communication.