Paper/Proposal Title
Deep Fakes: Preserving Truth & Human Rights in an Era of Truth Decay
Location
New Media and Imagery
Start Date
10-2-2019 11:30 AM
End Date
10-2-2019 1:00 PM
Keywords
Deep Fakes, artificial intelligence, truth decay, human rights & media, free speech
Abstract
Lawmakers, technology companies, and the general public are increasingly concerned about the prevalence of “deep fake” videos. Often shared on social media platforms, these digitally altered videos are made possible by recent advances in machine learning and artificial intelligence.
Although altered and faked media content is not a new issue, images and videos can now be altered quickly, cheaply, and more convincingly than ever before. An underlying concern is that platforms will be overwhelmed with believable deep fakes, leaving Internet users struggling to discern fact from fiction. Indeed, a future in which no one can tell what is real would threaten democracies everywhere.
Moreover, human rights organizations, journalists, and governments depend on reliable information-gathering and dissemination to raise awareness of rights abuses and hold bad actors accountable. The national security community is likewise concerned about how deep fakes complicate intelligence gathering.
A growing push by lawmakers to regulate deep fakes raises rights issues of its own, including content moderation and free speech. Empirical, scholarly literature on deep fakes can help guide how best to proceed, but debate persists over whether deep fakes present novel problems or merely perpetuate confirmation biases in a new form. At present, we do not know if, how, or to what extent deep fake videos actually affect viewers.
As such, this paper first aims to validate – or contest – prevailing assumptions about the novelty and societal impact of deep fakes. It will begin by surveying the history of similar media-altering technologies, such as Photoshop, and ask to what extent deep fake videos differ from other kinds of altered media. A key question is whether viewers’ ability to distinguish between unaltered and altered content will develop in response to new editing technologies. In addition, this paper will identify possible steps forward, moving past over-simplified, binary solutions and instead considering ecosystems of regulatory, political, and social schemes that both protect the human rights of free speech and expression and preserve truthful information and accurate reporting of events.
Author/Speaker Biographical Statement(s)
Virginia Kozemczak is a Juris Doctor candidate at the Cardozo School of Law in New York. She holds degrees in human rights and the arts from Columbia University and New York University. Virginia has held positions at the New York City Commission on Human Rights, New York Legal Services, and the Open Society Foundations. Her research focuses on the intersection of media, technology, and human rights.
Included in
Human Rights Law Commons, Intellectual Property Law Commons, Law and Society Commons, Science and Technology Law Commons