Artificial Intelligence (AI) Ethics: Ethics of AI and Ethical AI (PDF)
Artificial Intelligence and Life in 2030: The One Hundred Year Study on Artificial Intelligence
The Institute for Ethical AI in Education
An overview of current trends in regulating AI in different regions and a discussion of the key ethical issues in establishing fair and inclusive regulatory systems at the global level - The United Nations Educational, Scientific and Cultural Organization (UNESCO)
Stanford Encyclopedia of Philosophy
The Alan Turing Institute
A key challenge with AI is the potential for bias in the text it produces. Large language models learn from vast amounts of online data and text, and because they are designed to predict the most likely word sequences, they can reflect and even amplify existing biases found in that data. Furthermore, some AI systems use human feedback to refine their responses, but this process can also introduce bias if the human testers are not neutral. Consequently, generative AI like ChatGPT has been shown to produce socio-politically biased content, sometimes including sexist, racist, or offensive material.
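The mechanism described above can be made concrete with a small experiment. The sketch below is illustrative and not drawn from the cited sources; it assumes the open-source Hugging Face transformers library and the public gpt2 checkpoint, and simply compares the probabilities a small pretrained language model assigns to gendered pronouns after an occupational prompt, showing how next-word prediction can mirror associations present in the training data.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load a small public language model (illustrative choice).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def next_token_prob(prompt, candidate):
    # Probability the model assigns to `candidate` as the next token after `prompt`.
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]          # scores over the vocabulary
    probs = torch.softmax(logits, dim=-1)
    candidate_id = tokenizer(candidate)["input_ids"][0]  # first token of the candidate
    return probs[candidate_id].item()

prompt = "The nurse said that"
for pronoun in (" she", " he"):
    print(pronoun.strip(), next_token_prob(prompt, pronoun))

Differences in output of this kind are one simple way researchers probe which statistical associations a model has absorbed from its training corpus.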
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), March 3–10, 2021, Virtual Event, Canada. https://doi.org/10.1145/3442188.3445922.
Browne, Grace. “AI Is Steeped in Big Tech’s ‘Digital Colonialism.’” Wired, May 25, 2023. https://www.wired.com/story/abeba-birhane-ai-datasets/.
Buolamwini, Joy. Unmasking AI: A Story of Hope and Justice in a World of Machines. New York: Random House, 2023.
Glazko, Kate, Yusuf Mohammed, Ben Kosa, Venkatesh Potluri, and Jennifer Mankoff. “Identifying and Improving Disability Bias in GPT-Based Resume Screening.” In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24), June 3–6, 2024, Rio de Janeiro, Brazil. https://doi.org/10.1145/3630106.3658933.
“How Artificial Intelligence Bias Affects Women and People of Color.” UC Berkeley School of Information, December 8, 2021. https://ischoolonline.berkeley.edu/blog/artificial-intelligence-bias/.
Lizarraga, Lori. “How Does a Computer Discriminate?” NPR Code Switch, November 8, 2023. https://www.npr.org/2023/11/08/1197954253/how-ai-and-race-interact.
Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press, 2018.
Omiye, Jesutofunmi A., Jenna C. Lester, Simon Spichak, Veronica Rotemberg, and Roxana Daneshjou. “Large Language Models Propagate Race-Based Medicine.” NPJ Digital Medicine 6, no. 1 (2023): 1–4. https://doi.org/10.1038/s41746-023-00939-z.
The environmental cost of generative AI, driven by its rapidly growing energy needs (Saenko, 2023), is largely unknown due to industry secrecy (Crawford, 2024). However, concerns are rising that the computing power required could significantly inflate data centers' energy consumption and carbon footprint (Calma, 2023).
How exactly does generative AI impact the environment?
Researchers have shown that the energy and carbon costs of generative AI can be reduced by using more renewable energy, building data centers sustainably, and scheduling computation for times of day when the electrical grid is cleaner (Saenko, 2023). These practices would require transparency and commitment from tech companies and advocacy from users and policymakers.
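To make the scale of these costs easier to reason about, here is a back-of-the-envelope sketch. All figures are illustrative assumptions, not values from the cited articles; it shows how hardware power draw, data-center overhead, and the carbon intensity of the local grid combine into a job's emissions, and why running work when the grid is cleaner reduces the footprint.

def emissions_kg_co2(gpu_count, gpu_power_kw, hours, pue, grid_kg_co2_per_kwh):
    # Energy used (kWh), inflated by PUE (the data center's cooling/overhead factor),
    # multiplied by the grid's carbon intensity (kg CO2 per kWh).
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Hypothetical training job: 512 GPUs drawing 0.4 kW each for 240 hours, PUE of 1.2.
job = dict(gpu_count=512, gpu_power_kw=0.4, hours=240, pue=1.2)
print("Average grid mix (0.45 kg CO2/kWh):", emissions_kg_co2(**job, grid_kg_co2_per_kwh=0.45), "kg CO2")
print("Cleaner hours (0.15 kg CO2/kWh):", emissions_kg_co2(**job, grid_kg_co2_per_kwh=0.15), "kg CO2")

The same arithmetic applies to inference at scale, which is why commitments to renewable energy and carbon-aware scheduling matter.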
Berreby, David. "As Use of AI Soars, So Does the Energy and Water It Requires." Yale Environment 360, February 6, 2024. https://e360.yale.edu/features/artificial-intelligence-climate-energy-emissions.
Calma, Justine. "The Environmental Impact of the AI Revolution Is Starting to Come into Focus." The Verge, October 10, 2023. https://www.theverge.com/2023/10/10/23911059/ai-climate-impact-google-openai-chatgpt-energy.
Crawford, Kate. "Generative AI's Environmental Costs Are Soaring -- and Mostly Secret." Nature, February 20, 2024. https://www.nature.com/articles/d41586-024-00478-x.
Saenko, Kate. "A Computer Scientist Breaks Down Generative AI's Hefty Carbon Footprint." Scientific American, May 25, 2023. https://www.scientificamerican.com/article/a-computer-scientist-breaks-down-generative-ais-hefty-carbon-footprint/.
Contrary to the idea that chatbot output is purely machine-generated, human laborers are essential to systems like ChatGPT: they label training data, provide the feedback that makes responses sound human-like, and flag toxic content (Dzieza, 2023). Much of this work is outsourced to underpaid workers in the Global South, a practice some call "digital colonialism" (Browne, 2023; Perrigo, 2023).
Browne, Grace. "AI Is Steeped in Big Tech's 'Digital Colonialism.'" Wired, May 25, 2023. https://www.wired.com/story/abeba-birhane-ai-datasets/.
Dzieza, Josh. "AI Is a Lot of Work." The Verge, June 20, 2023. https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots.
Perrigo, Billy. "OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic." TIME, January 18, 2023. https://time.com/6247678/openai-chatgpt-kenya-workers/.