Lunch & Poster Session

May 12, 2022, 11:45 AM - 1:15 PM

Location:

The Heldrich Hotel & Conference Center

10 Livingston Avenue

New Brunswick, NJ 08901

https://www.theheldrich.com/directions/

List of Posters

Public Verification for Private Hash Matching

Presenter: Anunay Kulshrestha, Princeton University

The increasing implementation of end-to-end encryption (E2EE) poses unprecedented challenges for content moderation, because communications services lack access to plaintext content. Efforts to combat child sexual abuse material (CSAM) have become a particular global flashpoint: the predominant method of detection in non-E2EE settings, perceptual hash matching, is unavailable in E2EE settings. Recent advances in applied cryptography enable privacy-preserving hash matching, where a service can identify a match without accessing non-matching content and without disclosing the hash set. These designs, especially a high-profile proposal by Apple for identifying CSAM in its iCloud Photos service, have attracted widespread criticism for creating security, privacy, and free expression risks.

In this work, we aim to advance scholarship and dialog about private hash matching systems. We systematize considerations for deployment of private hash matching systems, developing a set of questions that remain unresolved for implementation. Next, we describe how law and public policy could respond to concerns about private hash matching by providing guardrails for implementations. Finally, we contribute three novel cryptographic protocols for improving public confidence in private hash matching systems: (1) proof that child safety groups approved the hash set; (2) proof that if a user's content is a false positive match, the user will receive eventual notification of the match; and (3) proof that particular lawful content is not present in the hash set. The protocols that we describe are practical, efficient, and compatible with existing constructions for private hash matching.
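
As a purely illustrative aside, the sketch below (Python) shows plaintext perceptual hash matching of the kind used in non-E2EE settings, i.e., the operation that the private hash matching designs discussed above aim to perform without revealing non-matching content or the hash set. The 64-bit hashes, the Hamming-distance threshold, and the example values are hypothetical.

    # Illustrative sketch only: plaintext perceptual hash matching, the operation
    # that private hash matching protocols perform without revealing non-matching
    # content or the hash set. Hashes, threshold, and example values are hypothetical.

    MATCH_THRESHOLD = 6  # assumed maximum number of differing bits still counted as a match

    def hamming_distance(a: int, b: int) -> int:
        """Number of bit positions in which two perceptual hashes differ."""
        return bin(a ^ b).count("1")

    def is_match(content_hash: int, hash_set: set[int]) -> bool:
        """True if the content hash is within the threshold of any hash in the
        provider's set (e.g., a set supplied by child safety groups)."""
        return any(hamming_distance(content_hash, h) <= MATCH_THRESHOLD
                   for h in hash_set)

    # Hypothetical example: one near-duplicate (1 bit flipped) and one unrelated hash.
    known_hashes = {0xA5A5_F0F0_1234_ABCD}
    print(is_match(0xA5A5_F0F0_1234_ABCF, known_hashes))  # True  (near-duplicate)
    print(is_match(0x0123_4567_89AB_CDEF, known_hashes))  # False (unrelated content)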

 

Approaches to Content Moderation in End-to-End Encrypted Systems

Presenter: Lucy Qin, Brown University

This poster presents a published report that assesses current technical proposals for detecting unwanted content in end-to-end encrypted (E2EE) services against the guarantees of E2EE. We find that technical approaches based on user reporting and metadata analysis are the most likely to preserve privacy and security guarantees for end users. Both provide effective tools that can detect significant amounts of different types of problematic content on E2EE services, including abusive and harassing messages, spam, mis- and disinformation, and CSAM. Conversely, we find that other techniques that purport to facilitate content detection in E2EE systems have the effect of undermining key security guarantees of E2EE systems. The full report is available here: https://cdt.org/insights/outside-looking-in-approaches-to-content-moderation-in-end-to-end-encrypted-systems/
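
As a hedged illustration of why metadata analysis can coexist with E2EE, the sketch below (Python) scores a sender using only metadata signals, never message plaintext. The features, weights, and threshold are hypothetical assumptions chosen for illustration and are not drawn from the report.

    # Illustrative sketch only: a metadata-only abuse heuristic compatible with E2EE,
    # since it never reads message content. Features, weights, and the threshold are
    # hypothetical.

    from dataclasses import dataclass

    @dataclass
    class SenderMetadata:
        messages_last_hour: int    # outgoing message volume
        distinct_recipients: int   # fan-out to different accounts
        user_reports: int          # recipient-initiated abuse reports
        account_age_days: int      # newer accounts weighted as riskier

    def abuse_score(m: SenderMetadata) -> float:
        """Combine metadata signals into a single score; no plaintext is inspected."""
        score = 0.02 * m.messages_last_hour
        score += 0.05 * m.distinct_recipients
        score += 1.00 * m.user_reports
        if m.account_age_days < 7:
            score += 0.5
        return score

    def flag_for_review(m: SenderMetadata, threshold: float = 2.0) -> bool:
        return abuse_score(m) >= threshold

    # Hypothetical example: a new account blasting many recipients and drawing reports.
    suspect = SenderMetadata(messages_last_hour=300, distinct_recipients=120,
                             user_reports=3, account_age_days=2)
    print(flag_for_review(suspect))  # True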

 

ε-Differential Privacy, and a Two-Step Test

Presenter: Nathan Reitinger, University of Maryland

Sharing data in the 21st century is fraught with error. Most commonly, data is freely accessible, surreptitiously stolen, and easily capitalized in the pursuit of monetary maximization. But when data does find itself shrouded behind the veil of “personally identifiable information,” data becomes nearly sacrosanct, impenetrable without dense consideration of ambiguous statutory law—inhibiting utility. Both outcomes are abhorrent, unnecessarily stifling innovation or indiscriminately pilfering privacy.

We propose a novel, two-step test which creates future-proof, bright-line rules around the sharing of legally protected data. The crux of our test is identifying a legal comparator between a particular data sanitization standard—differential privacy, which assesses “mechanisms” or “recipes” for data manipulation—and statutory law. Step one identifies a proxy value that may be easily calculated from an ε-differentially private mechanism: “re-identification risk”; step two looks for a corollary in statutory law, assessing the maximum re-identification risk a statute tolerates when permitting confidential data sharing. If the value from step one is lower than or equal to that from step two, any output derived using the mechanism may be considered legally shareable; the mechanism is (statute, ε)-differentially private.
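
A minimal sketch of the two-step comparison follows, with the caveat that the poster's actual risk proxy is not reproduced here: as a stand-in, step one bounds an adversary's posterior probability of singling out one of an assumed 1,000 equally likely candidates (prior odds inflated by at most e^ε under pure ε-differential privacy), and the 9% statutory tolerance in the example is a hypothetical reading of a statute.

    # Illustrative sketch only: the two-step test's comparison logic. The risk proxy
    # below (Bayes odds over an assumed pool of 1,000 candidates) and the statutory
    # threshold are stand-in assumptions, not the poster's actual formulation.

    import math

    def reidentification_risk(epsilon: float, candidates: int = 1000) -> float:
        """Step one: proxy upper bound on the adversary's posterior probability of
        singling out the target, with prior odds inflated by at most e^epsilon."""
        prior_odds = 1.0 / (candidates - 1)
        posterior_odds = prior_odds * math.exp(epsilon)
        return posterior_odds / (1.0 + posterior_odds)

    def legally_shareable(epsilon: float, statutory_max_risk: float,
                          candidates: int = 1000) -> bool:
        """Step two: the mechanism is treated as (statute, epsilon)-differentially
        private when the proxy risk does not exceed the risk the statute tolerates."""
        return reidentification_risk(epsilon, candidates) <= statutory_max_risk

    # Hypothetical example: a statute read as tolerating a 9% re-identification risk.
    for eps in (1.0, 4.0, 8.0):
        print(eps, round(reidentification_risk(eps), 4), legally_shareable(eps, 0.09))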

True, our test lacks a healthy dose of justiciability, making it difficult to predict which particular mechanism attributes will be appropriate for which particular statutes. That this precision is lacking, however, does not displace the true value of our test—its ability to provide confidence to data stewards hosting legally protected data. This confidence, in turn, may give rise to risk-free, privacy-protected data sharing, greasing the wheels of advancements in science and technology rather than stifling innovation with high-penalty, low-description statutes.

 

Resolving Online Content Disputes in the Age of Artificial Intelligence: Legal and Technological Solutions

Presenter: Faye Wang, Brunel University London (presenting remotely)

The common route for seeking redress for copyright infringement on online platforms is that, when rightsholders notice that their copyrighted content has been infringed online, they may initiate a notice and takedown procedure with the operators of online platforms, seek solutions from online dispute resolution (ODR) and alternative dispute resolution (ADR) services, and, when all else fails, file a lawsuit in court. Nowadays, notice and takedown procedures and ODR services may be assisted by AI technology, and technical measures for blocking injunctions may also involve the consideration of appropriate AI technology. It is increasingly common for established online platforms to adopt voluntary AI-assisted technological solutions (such as ‘automated filtering software’ or ‘automated content moderation tools’) to minimise their legal risks for infringing content on their platforms before a notice and takedown procedure takes place.

In Europe, the EC Directive on Electronic Commerce 2000 prohibits imposing general monitoring obligations on intermediaries, but makes intermediaries responsible for removing illegal content under the ‘notice and takedown’ regime. The proposed Digital Services Act 2020 affirms this prohibition of general monitoring obligations. It is arguable, however, that the Copyright Directive 2019 supports the use of automated filtering systems to detect illegal content, which may appear to contradict the principle prohibiting the imposition of ‘general monitoring obligations’. Yet in the most recent European Court of Justice joined cases of YouTube and Elsevier, the Court also upheld the use of automated filtering systems to benefit from the liability exemption.

This poster reviews current regulations on the liability of hosting service providers in notice and takedown procedures, with reference to Europe, the US, and China. It also interprets the meaning of ‘best efforts’ to prevent future uploads by hosting service providers, as newly introduced in Article 17 of the Copyright Directive 2019. It proposes possible legal and technological solutions for resolving copyright-related disputes over the Internet with the assistance of AI.