Do Not Recommend: Reduction Techniques as a Form of Moderation

May 11, 2022, 1:30 PM - 2:30 PM

Location:

The Heldrich Hotel & Conference Center

10 Livingston Avenue

New Brunswick, NJ 08901

https://www.theheldrich.com/directions/

Tarleton Gillespie, Microsoft Research and Cornell University

Public and policy debate about content moderation has overwhelmingly focused on removal: social media platforms deleting content and suspending users, or opting not to. But removal is not the only available remedy. Reducing the visibility of problematic content is becoming an increasingly common part of platform governance. Platforms use machine learning classifiers to identify content that is misleading enough, harmful enough, or offensive enough that, while it does not warrant removal under the site’s guidelines, it does warrant reduced visibility: demoting it in algorithmic rankings and recommendations, or excluding it from them entirely. In this talk, I will make the case that reduction techniques should be understood as part of content moderation, and consider the implications of using recommendation in this way.

[Video]

Bio:

Tarleton Gillespie is a senior principal researcher at Microsoft Research, an affiliated associate professor in the Department of Communication and Department of Information Science at Cornell University, author of Wired Shut: Copyright and the Shape of Digital Culture (MIT, 2007), co-editor of Media Technologies: Essays on Communication, Materiality, and Society (MIT, 2014), and author of Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media (Yale, 2018).