Presenter/Author Information

Adam Todd, University of Dayton

Location

Kennedy Union 207 (on UD's main campus)

Start Date

11-4-2023 12:00 AM

End Date

11-4-2023 12:00 AM

Keywords

Human rights, technology, artificial intelligence

Abstract

Artificial intelligence (AI) is a new technology with profound implications for law, its practice, and our definitions of legal rights. This presentation examines how generative AI, particularly through its use of large language models like ChatGPT, may affect the social practice of human rights.

AI language models are computer programs trained on billions of pages of material available through the internet; through brute-force processing, they learn the relationships among the language in this raw, text-based data. Drawing on those patterns, the models can provide users with valuable written summaries, analyses, and predictions based on the data they are fed. As a result, the biases and cultural values embedded in the data used by AI language models are reflected, and potentially amplified, in their results. In terms of the concept of “human rights,” AI has the dangerous potential to distort or exclude non-dominant cultures from the defining and understanding of these rights.

This presentation will build on research that has identified the problems of a digital divide and the colonialization of “big data.” It will also examine the concept of “data sovereignty” and the need to “decolonialize” new technologies. International instruments provide guidance, as reflected in the 2016 U.N. General Assembly Article 19 resolution on internet access, the Sustainable Development Framework, and the UNESCO concept of “internet universality.”

As AI shapes our language and ultimately our culture, researchers and practitioners of human rights must closely study, monitor, and take a leading role in regulating these new technologies. Properly regulated, AI offers exciting new ways to find a shared, international rhetoric of human rights that facilitates understanding, empathy, and equity across cultures. But, as one scholar notes, “We should regulate AI before it regulates us.”

Author/Speaker Biographical Statement(s)

Adam Todd is a Professor of Lawyering Skills and Liaison to the Human Rights Center at the University of Dayton School of Law. He earned his J.D. with honors from Rutgers Law School and his Bachelor of Arts degree from Brown University. Prior to coming to the University of Dayton, Professor Todd served as a visiting associate professor of law at Southern Methodist University’s Dedman School of Law, associate professor at the University of Baltimore School of Law, and professor and director of academic support at Salmon P. Chase College of Law. He also served as a Visiting Fulbright Professor at Palacky University in the Czech Republic. Prior to teaching, he was a legal services attorney, and he remains dedicated to public interest law. He has published articles in the areas of postmodern legal theory, legal rhetoric, tort law, and pedagogy.


Artificial Intelligence, Large Language Models, and the Colonialization of Data: Implications for the Rhetoric of Human Rights
