Ethical Implications of Generative AI
The widespread adoption of Generative AI (GenAI) raises numerous ethical questions, including questions about the moral responsibilities of AI developers and corporations, the environmental impact of AI use, and the potential erosion of public trust if AI is deployed in harmful ways. Ensuring ethical AI development and usage requires a critical analysis of its interconnected societal and ecological implications, including issues such as privacy, accountability, integrity, intellectual property, bias, and human labour.
Although GenAI presents significant opportunities for innovation in higher education, it is essential to critically examine and keep up-to-date on ethical concerns related to honesty, fairness, accountability, ownership, privacy, security, equity, and access.
For more on the ethical implications of AI, see: Artificial Intelligence and Ethics: Sixteen Challenges and Opportunities
GenAI is rapidly disrupting many aspects of society, including, but not limited to, education, labour and manufacturing, science and technology, arts and entertainment, the environment, and political life. The scale of this disruption is still uncertain, but there has been much discussion around potential benefits and real concerns about GenAI.
Potential Benefits | Real Concerns
---|---
Time savings and efficiencies by automating repetitive tasks and streamlining workflows, leading to enhanced productivity and a reduction in human error. | Over-reliance on AI can devalue human creativity and increase homogenization, leading to the loss of important skills and decision-making capabilities as tasks become automated.
Personalized instruction and feedback can lead to improved learning outcomes and a more targeted approach to skill development. | Privacy concerns arise when personal data is collected and analyzed to provide tailored experiences. Biased data could result in unfair or inaccurate outcomes, exacerbating existing inequalities.
Language translation and editing to facilitate communication and collaboration across linguistic barriers. | Oversimplification and misinterpretation of cultural nuances can lead to potential miscommunication. The digital divide can potentially exclude certain groups or individuals from these benefits.
Coaching and individualized support in areas such as mental health, wellness, and career development can lead to more effective interventions. | Over-dependence on AI can diminish human-to-human support and empathy. The digital divide can potentially exclude certain groups or individuals from these benefits.
Research, coding, and data analysis assistance can facilitate pattern identification and lead to more accurate insights. | Ownership and attribution of AI-assisted work can lead to an increased risk of plagiarism or academic dishonesty.
Improved healthcare diagnostics and treatment can lead to better patient outcomes, more efficient healthcare delivery, and reduced healthcare costs. | Privacy and consent concerns raise ethical questions about the appropriate use of sensitive medical data.
Additional Resources
To learn more, explore the following resources:
GenAI and Indigenous Knowledges
Archer Pechawis, a scholar and artist from Mistawasis First Nation in Treaty 6 Territory, writes in "Against reduction: Making kin with the machines" that we need to "Ensur[e] understanding of and respect for territory—and the languages and cultures that grow from specific territories—is built into the foundation of AI systems such that they help us care for territory rather than exploit it" (p. 10).
The rapid proliferation and growing global impact of generative AI (GenAI) call for increased critical examination of its educational, linguistic, economic, social, political, cultural, environmental, health, and legal impacts on Indigenous communities. As scholars and Knowledge Keepers have pointed out for many years, artificial intelligence can offer numerous benefits to Indigenous peoples, but its uses must come with proper human oversight and meaningful stakeholder involvement.
To ensure that GenAI is used ethically and in a way that values, respects, and includes Indigenous peoples, users must adhere to responsive and responsible principles, learning about how AI relates to traditional knowledge (TK), traditional cultural expressions (TCEs), genetic resources (GRs), intellectual property rights (IP), the public domain (PD), Indigenous cultural sovereignty, Indigenous data sovereignty, and self-determination (1).
When it comes to the impact of GenAI on Indigenous communities, there is a range of potential benefits and real concerns to consider. Here are just a few examples:
Potential Benefits | Real Concerns
---|---
Language protection and revitalization | Misrepresentation of cultural practices
Indigenous-led environmental conservation: In Sanikiluaq, an Inuit community in Nunavut, a combination of Indigenous knowledge and AI is used to manage natural resources and adapt to the impacts of climate change, demonstrating the potential of AI to complement Indigenous knowledge and practices, rather than being in conflict with them. | Misappropriation of traditional or sacred imagery: There are critical concerns about misuse of generative AI to appropriate Indigenous cultures and knowledge systems without consent, highlighting the importance of data sovereignty, ethical AI development, and increasing Indigenous voices in shaping these technologies.
Indigenous rights and intellectual property: Meaningful Indigenous involvement can help prevent the unethical generation and commercialization of Indigenous art, designs, and intellectual property by AI models. | Cultural appropriation and copyright infringement: Theft and appropriation of Indigenous intellectual property, Indigenous art, and cultural identities by AI systems trained on data scraped from the internet can lead to further exploitation and marginalization of Indigenous peoples' cultures, knowledge systems, and lived realities.
When conducting research related to Indigenous peoples and practices, consult local protocol and USask institutional policies.
(1) See Coffey & Tsosie, 2001; Lewis et al., 2021; Indigenous Protocol and Artificial Intelligence Working Group, 2019; UNESCO, 2023; UNDRIP, 2007; USask Copyright Office’s Indigenous Knowledges and Canadian Copyright Law, 2024; Raine et al., 2019.
Additional Resources
To learn more, explore the following resources:
Indigenous Protocol and Artificial Intelligence Working Group.
Resources, news and events, a position paper, and more. "The Indigenous Protocol and Artificial Intelligence (A.I.) Working Group develops new conceptual and practical approaches to building the next generation of A.I. systems."
Indigenous Protocol and Artificial Intelligence position paper, written by the Indigenous Protocol and Artificial Intelligence Working Group.
Offers insights and guidance for researchers navigating the intersection of GenAI and Indigenous knowledge.
University of Saskatchewan Copyright Office’s resource page on Indigenous Knowledges and Canadian Copyright Law, 2024.
This page provides information on how to respectfully use, share, and protect Indigenous cultural knowledge and intellectual property in accordance with ethical guidelines and legal frameworks.
"Imagining Indigenous AI," in S. Cave & K. Dihal (Eds.), a collection of papers on Indigenous AI published in 2023.
Massachusetts Institute of Technology News "Is it wise to merge indigenous knowledge with modern technology?"
The Indigenous Digital Delegation gathering at MIT met to discuss the potential benefits and challenges of integrating Indigenous knowledges with modern technology to address contemporary issues. Includes a recorded talk by Ojibwe Elder, artist, poet, and scholar Duke Redbird: "Dish with One Spoon."
Decolonizing AI Ethics: Indigenous AI Reflections
Jennafer Roberts discusses citation, understanding AI outside the "Western Technoutilitarian Lens," and imagining decolonization with AI. Published July 10, 2023.
Sand Talk, Indigenous Thinking and Digital Future
Published July 2023. Includes an interview with Tyson Yunkaporta, writer, academic, and Indigenous thinker from the Apalech Clan of the Anangu people, in what is now known as Australia.
Designing ethical AI through Indigenous-centred approaches
UOttawa discussion with Jason Edward Lewis, author of the Indigenous Protocol and Artificial Intelligence Position Paper and "Making Kin with the Machines." Transcript included.
Article: Artificial Intelligence in the Colonial Matrix of Power
This Dec. 2023 paper discusses how the structure and functioning of artificial intelligence (AI) systems are influenced by colonial systems.
Sacred Waveforms: An Indigenous Perspective on the Ethics of Collecting and Usage of Spiritual Data for Machine Learning. "This talk is an introduction to the intersection of revitalizing sacred knowledge and exploitation of this data."
Environmental Considerations
The potential benefits of AI to the environment are numerous, such as improved biodiversity monitoring, precision agriculture, and waste management, to name a few. However, the development and deployment of AI technologies require substantial computational power, leading to significant energy consumption and a considerable carbon footprint. As the demand for AI grows and its availability widens, we are seeing increased environmental impacts.
Potential Benefits of GenAI | Real Concerns about GenAI
---|---
Biodiversity monitoring and protection: AI can help analyze large datasets to monitor species populations, habitats, and ecosystems more efficiently. See: A synergistic future for AI and ecology | Energy consumption during AI model training: The process of developing and training AI models requires significant computational power, which in turn leads to extraordinarily high energy consumption. See: The AI Boom Could Use a Shocking Amount of Electricity
Climate change mitigation: AI can help optimize renewable energy systems, smart grids, industrial processes, and energy-efficient buildings, reducing waste, energy consumption, and greenhouse gas emissions. See: How Ecology Could Inspire Better Artificial Intelligence and Vice Versa | Carbon footprint and greenhouse gas emissions: The energy-intensive nature of AI development and deployment results in a substantial carbon footprint, contributing to global greenhouse gas emissions. See: AI’s carbon footprint – how does the popularity of artificial intelligence affect the climate?
Precision agriculture: AI can be used to optimize farming practices, reducing the need for water, pesticides, and fertilizers. See: How can AI benefit sustainability and protect the environment? | Water-intensive processes: Generative AI requires vast amounts of water for manufacturing microchips and cooling data centres. See: AI water consumption: Generative AI’s unsustainable thirst
Waste management and recycling: AI can help optimize waste collection, sorting, and recycling processes, reducing the amount of waste heading to landfills or polluting the environment. See: How AI is Revolutionizing Solid Waste Management | Electronic waste: E-waste associated with the development of AI technologies is growing, and e-waste recycling is not keeping up, contributing to air, soil, and water pollution. See: Electronic Waste Rising Five Times Faster than Documented E-waste Recycling: UN
Ecosystem restoration and conservation planning: AI can analyze large environmental datasets and satellite imagery to identify areas in need of restoration, map ecosystem boundaries, and help plan and prioritize conservation efforts in a more data-driven and efficient manner. See: Improving biodiversity protection through artificial intelligence | Consequences of mining rare earth elements: The hardware components necessary for AI technologies rely on rare earth elements, the mining of which contaminates soil and groundwater with an array of toxic chemicals. See: Rare earth mining may be key to our renewable energy future. But at what cost?
Given the rapid growth of GenAI, scholars like Aimee van Wynsberghe assert that "this is not a technology that we can afford to ignore the environmental impacts of" (2021), especially considering the lack of publicly available information about the sector’s total energy consumption. In one alarming example, it is estimated that "a search driven by generative AI uses four to five times the energy" of a conventional web search (e.g., Google, Bing).
Economic and Labour Considerations
GenAI is expected to eliminate many existing jobs while also creating new ones.
Administrative and office work, arts and entertainment, IT, and education sectors are predicted to be particularly vulnerable to these disruptions. Higher education has been affected and will continue to see change in numerous ways, including in marketing and campus relations, admissions and enrollment, IT, finance and administration, libraries, student support services, and teaching and learning. In many industries, there is now a growing emphasis on reskilling and upskilling to meet the demands and offset the disruptions of an AI-driven job market.
Potential Benefits | Real Concerns
---|---
Enhanced productivity: Generative AI has the potential to increase productivity by automating tasks and enabling workers to focus on higher-level responsibilities. | Job displacement and automation: The rise of Generative AI may lead to job displacement and automation as machines become capable of performing tasks previously done by humans.
Streamlined work processes: AI technologies can help optimize and streamline various work processes, leading to more efficient operations and reduced costs. | Widening income inequality: The adoption of Generative AI requires workers to adapt to new roles or face job loss, exacerbating income inequality.
Improved efficiency: By automating tasks and optimizing workflows, Generative AI can improve efficiency across industries, allowing businesses to achieve more with fewer resources. | Impact on vulnerable populations: The adoption of Generative AI may reinforce existing biases and disproportionately impact vulnerable populations, such as marginalized communities, who may lack the resources to adapt to the changing job market.
Innovation and technological advancements: Generative AI can catalyze technological breakthroughs and innovations that benefit society and improve people's lives. | Need for social safety nets: As AI technologies disrupt the labour market, there will be a growing need for social safety nets to support displaced workers and ensure a just transition.
For more insights on this topic, refer to the International Monetary Fund report "Generative Artificial Intelligence and the Future of Work."
Examples of current economic and labour concerns related to the development and implementation of GenAI include the following:
Several lawsuits have been brought against companies like OpenAI, Midjourney, Suno, and Udio by writers, artists (including the estate of the late comedian George Carlin), the New York Times, eight US newspaper publishers owned by Alden Global Capital, and, as of June 2024, the world’s largest music labels. All allege copyright infringement, asserting that their work has been unlawfully used to train AI systems without proper consent or compensation. There are also ethical and economic concerns around Indigenous intellectual property, particularly cultural appropriation, misappropriation, and misrepresentation.
Although Generative AI was trained on massive digital datasets, it required human intervention to become usable. During ChatGPT's development, OpenAI outsourced a significant portion of this work to data labellers in Kenya, who were paid less than $2 per hour to sift through often harmful and traumatizing content and filter out toxic material. This situation has given rise to the concept of "digital sweatshops" and raises concerns about the ongoing exploitation of workers in the Global South by Big Tech companies.
Bias and Misinformation - Why Should We Care?
Generative AI is known to reproduce biased content in its outputs. These biases are inherited from its training data. In other words, if the training data contains biases—such as stereotypes or unequal representations of certain groups—these biases are learned and perpetuated by the AI.
AI systems can, therefore, amplify existing societal biases, which can result in unfair treatment and discrimination in various applications, from hiring practices to law enforcement. To address and mitigate biases, developers must carefully select training data and continuously monitor and adjust algorithms. Users must critically evaluate AI outputs to ensure transparency and fairness.
Types of Bias
In the context of generative AI, bias can manifest in several forms, including societal, data, and algorithmic biases.
Societal bias | Refers to the broader societal and cultural prejudices existing in the real world, which can be reflected and perpetuated in AI systems. For instance, societal biases surrounding gender roles or racial stereotypes can be unwittingly incorporated into AI applications, leading to discriminatory outcomes. |
Data bias | Refers to the large datasets that GenAI is trained on. If the data contains biases, such as under-representing or overrepresenting certain groups, the AI can learn and perpetuate these biases. For instance, if a facial recognition system is trained using predominantly lighter-skinned individuals, it may be less accurate when identifying individuals with darker skin tones. |
Algorithmic bias | Refers to the underlying algorithms and techniques used to develop AI models that introduce or amplify existing biases. This happens when developers fail to consider diverse perspectives, cultural contexts, or ethical considerations during the development process. |
Misinformation and Disinformation: Why Should We Care?
The World Economic Forum's 2024 Global Risks Report has identified manipulated and falsified information as “the most severe short-term risk the world faces.”
The potential of GenAI to amplify biases and stereotypes can lead to distorted, incorrect, or harmful representations of race, religion, culture, class, sex, and gender. AI systems have also exhibited political biases in their outputs, making it easier to spread misinformation and disinformation.
"Misinformation" is incorrect or misleading information that is often spread unintentionally.
"Disinformation" is incorrect or misleading information that is deliberately deceptive and intentionally spread.
Examples of misinformation and disinformation include fabricated content, manipulated and misleading content, and deepfakes. According to Goldstein et al. (2023), Generative AI could exacerbate this issue by making it easier, more reliable, and more efficient to spread misinformation and disinformation while also complicating their detection.
Source: Ferrara, E. (2024). GenAI against humanity: Nefarious applications of generative artificial intelligence and large language models. Licensed under a Creative Commons Attribution 4.0 International License.
As the lines between the virtual world and the real world become more blurred, it will become increasingly difficult to discern between authentic and manipulated information, making it crucial for individuals to develop critical thinking and media literacy skills. As demonstrated by the BBC Verify examination of a fake Katy Perry AI-generated image, it is crucial to remain vigilant and address these concerns as AI technologies continue to advance and become more pervasive.
How Can We Protect Ourselves?
We can take several proactive steps to mitigate and limit the spread of harmful information:
Academic and Research Integrity
The rapidly growing use of Generative AI (GenAI) by the academic community—including students, faculty, and staff—has amplified conversations about the importance of academic integrity and the need to ensure that GenAI tools are used appropriately.
Academic integrity, a cornerstone of post-secondary work, is defined by the International Centre for Academic Integrity as a commitment to six fundamental values: honesty, trust, fairness, respect, responsibility, and courage. These values are essential for upholding the quality of education and research within academia and ensuring that the work produced serves the broader public good with integrity and reliability. The antithesis of academic integrity is "academic misconduct," which can be defined simply as "cheating." The University of Saskatchewan defines academic misconduct with examples in section II, B of its Academic Misconduct Regulations.
As universities continue to adapt to the widespread availability and popularity of GenAI, students, faculty, and staff must keep lines of communication open and familiarize themselves with developing and evolving policies and guidelines. Policies and guidelines regarding the ethical use of GenAI in studying, completing assignments, and conducting and disseminating research can be issued by an instructor, supervisor, department, college, program, unit, division, publisher, collaborative entity, or institution.
Some key considerations for appropriate use of GenAI
How can you prepare?
GenAI and User Data
Datafication is the process of transforming all aspects of our lives, including our behaviours and attributes, into measurable digital data that can be collected, stored, analyzed, and used for various purposes.
Datafication is driven by the widespread use of digital technologies, sensors, and connected devices, raising important questions about privacy, ethics, and security. To ensure responsible data collection, usage, and protection, we need appropriate legislation to establish safeguards and guidelines. However, GenAI users should be mindful that in many jurisdictions, online privacy legislation may be absent, incomplete, or out-of-date, so they may have limited recourse to combat bad actors and cyber criminals who use data for nefarious activities.
Privacy Considerations in the Age of GenAI
GenAI systems are often referred to as ‘black boxes,’ meaning that there is uncertainty about how user data and inputs are used by the companies that own these systems.
Assumed uses include
However, these and other uses may not be clearly disclosed to users, including whether information will be shared with or sold to third parties.
Given the lack of transparency around how data is used, users should be very cautious about the nature of information they share with these systems and should not share any sensitive data (for example, personal information, health information, or proprietary or copyrighted material).
Security Considerations in the Age of GenAI
From a security perspective, GenAI increases the potential for creating realistic scams. While malicious actors and cybercriminals have long used the internet and other technologies to deceive, manipulate, or dox individuals, the emergence of GenAI makes it even easier for them to cause harm through more sophisticated and convincing tactics.
For instance, GenAI can enable the production of highly convincing voice clones that mimic a target individual, which can then be used in various scams. These voice clones may be used in romance scams, where a scammer pretends to be a potential romantic partner and tricks the victim into sending money or personal information. Similarly, job scams can involve impersonating a hiring manager or recruiter to gain access to sensitive information or even request payment for fake job opportunities.
GenAI's ability to produce large volumes of convincing content rapidly poses an increased threat as it enables cybercriminals to scale up their malicious activities. As a result, individuals, businesses, and governments must adapt by implementing enhanced security measures, such as multi-factor authentication and increased user awareness and education.
Copyright, Fair Use, and Creative Commons
Generative AI models rely heavily on large amounts of data during their training process and draw on that same material when producing outputs, often with human input but little oversight. This lack of oversight makes it essential to consider the ethical implications for copyright and for the rights of creators, which may have been ignored during the training of these models.
Copyright law is partly designed to protect the rights of creators and their ability to control how their works are used and distributed. Using copyrighted material without permission or compensation can undermine these rights and harm creators through a disruption in income, misrepresentations or distortions of their original work, or damage to their reputation and integrity.
However, copyright also aims to balance the rights of users and creators with the maintenance of a rich public sphere where ideas and creative works can easily be accessed, shared, and discussed. Overly restrictive practices can significantly impede access, stifle innovation, and limit the development of technologies. Sometimes, this means providing exceptions and limitations to copyright, such as fair use and fair dealing, which allow users to access, use, and sometimes even modify copyrighted works under certain conditions.
While AI models can learn from open resources and the public domain, the ethical implications of using other copyrighted materials in generative AI models depend on the specific context and circumstances. See, in particular, important considerations around intellectual property and the public domain when it comes to Indigenous knowledge and traditional cultural expressions.
The goal should be a balance that carefully considers and respects the rights of both creators and users.
In the context of a university setting, using GenAI responsibly means considering the following factors:
There are important considerations around artificial intelligence and using a Creative Commons (CC) license.
Understanding CC Licenses and Generative AI - Creative Commons discusses how CC licenses relate to generative AI, focusing on the legal and ethical issues of using licensed works for AI training and the application of licenses to AI outputs.
The University of Saskatchewan's main campus is situated on Treaty 6 Territory and the Homeland of the Métis.
© University of Saskatchewan