The Artist Formerly Known As Your Computer: Ensuring Safe and Responsible Human-Machine Collaboration with Automated Media Generation Tools
December 02, 2022, 9:25 AM - 9:35 AM
Location: Online and Paris, France
Ned Cooper, ANU School of Cybernetics
Julian Vido, ANU School of Cybernetics
In 2022, a number of companies released Artificial Intelligence (AI)-enabled tools that generate digital images from text prompts. For example: OpenAI announced DALL-E 2 and released an API with limited access; Google announced Imagen but did not release an API; Midjourney announced a tool of the same name and released an API; and Stability.ai announced and open-sourced Stable Diffusion. These models, also known as text-to-image diffusion models, are effectively automated media generators (AMGs).
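To illustrate the text-prompt interface these tools expose, the sketch below generates an image with the open-sourced Stable Diffusion weights via the Hugging Face diffusers library; the checkpoint name and prompt are illustrative only, and downloading the weights requires accepting the CreativeML OpenRAIL-M license on the Hugging Face Hub.

```python
# Minimal sketch, assuming the diffusers and torch packages and a CUDA GPU:
# a text prompt is all that is needed to produce an image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # example checkpoint; license acceptance required
    torch_dtype=torch.float16,
).to("cuda")

# The prompt is illustrative; any free-text description can be supplied.
image = pipe("a portrait of an astronaut riding a horse").images[0]
image.save("output.png")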
There is great interest in AMGs for creative and commercial pursuits, but they pose challenges to safe and responsible human-machine collaboration. In particular, the Stable Diffusion tool allows users to generate manipulated images of real people, governed only by the CreativeML OpenRAIL-M license, which requires users to self-regulate.
Self-regulation of AMGs contrasts with recent developments in individual protections against automated decision-making. For example, the General Data Protection Regulation affords individuals in the European Union and the European Economic Area (the EU) the right not to be the subject of a decision based solely on automated processing [1], and the Australian Human Rights Commission has recommended that a similar right be established in Australia [2]. No explicit protections exist for individuals in relation to AMGs. However, the EU is currently considering legislation commonly known as the AI Act [3]. In its current form, the AI Act would require users to disclose when media has been generated or manipulated using an AI system.
In this talk, we will consider whether the protections proposed in the AI Act are sufficient to ensure people’s safety in relation to AMGs, or whether more expansive protections are necessary, such as a right not to be the subject of an AMG. With a view to demonstrating the breadth of considerations relevant to the safety and responsibility of human-machine collaboration for AMGs, the talk will also consider: