In recent years, artificial intelligence, especially generative AI, has made massive strides in both capability and adoption. With these changes, especially AI's widespread adoption, it has become increasingly important to discuss the ethics of using it.
There are two major ethical concerns when it comes to generative AI usage. For one, the extreme ease with which AI can generate content has led some to raise serious concerns about humans using AI to generate writing and images and passing them off as their own. The second ethical concern is environmental; the massive computational power that AI requires leads to significant energy (and often water) consumption. Thus, using AI with reckless abandon has a large negative impact on the environment.
It’s no staggering revelation that using AI-generated content verbatim and claiming authorship of it isn’t ethically sound. The same goes for paraphrasing AI content; neither of these actions involves any real human input in what was written (save for wording changes in the latter case), so it would be dishonest (and therefore unethical) for a human to claim that they had written AI content.
That raises a question, however: where, then, is the line drawn? Just how involved can AI be in your writing process without compromising your honesty? The ethical boundary, in my mind, lies where human creativity and judgment no longer play a role in the work. For example, if a writer uses AI to compile information on a topic in a digestible way, that would be far more ethical than copying and pasting a piece of the explanation verbatim into the writer’s piece. That said, even this application of AI has theoretical ethical drawbacks. The mere act of reading the AI content could color the writer’s wording when they write their own piece, but this is a very small (and only theoretical) offense.
The crux of AI ethics is this: AI is a tool, not a crutch. If it’s used as a tool (e.g., to overcome writer’s block or process data), and the human user applies their own critical thinking and unique perspective to select, interpret, and draw inspiration from the output to create original work with an original claim, wording, and structure, then the work can be considered ethically and authentically created. When AI’s output makes up the bulk of the content with minimal human intervention, however, it’s dishonest to claim full authorship. AI’s involvement in a work exists on a spectrum, from gathering sources to brainstorming to outlining to drafting to revising, and with each successive step the ethical concerns grow. Using AI to summarize research, for instance, is far more ethical than having it write entire sections of a piece with minimal changes.
Even if a writer properly discloses AI usage, another ethical risk arises. AI-generated content isn’t always accurate, and relying on its outputs without proper verification can lead to the unintentional spread of misinformation. AI models pull information from all available sources (some reliable, some less so) and paraphrase it, usually without citing their sources unless explicitly told to. This, by definition, is plagiarism. Thus, ethical AI use in writing isn’t solely about preserving one’s originality and being transparent about AI’s role in the work; it’s also about ensuring accuracy and avoiding unwitting plagiarism.
An overreliance on AI will have a long-term negative impact on human creativity and the development of critical thinking skills. What’s more, AI-generated content, even when ethically used, diminishes the authenticity and value of human expression. Since the first words were written multiple millennia ago, every piece of writing has had at least one of four purposes: to inform, to persuade, to entertain, or to express oneself. Each of these purposes requires an understanding of humanity. Humans can fulfill each purpose more effectively than AI because we can understand, drawing on our own experience, the most effective ways of getting a point across; AI can only produce an amalgam of human-written work without truly comprehending what it’s saying. As AI plays a role in more and more works, the human experience, and the soul that only a human can provide to a piece, fades from view. This, arguably, is an ethical consequence of AI as well; as we increasingly use AI for creative expression, we erase the core quality that makes art and thought profound: humanity. When creativity is reduced to an algorithm, art is robbed of the experiences, emotions, and personal insights that shaped it and gave it its soul.
If artificial intelligence continues to grow in the art realm, it will continue to draw from old artwork and other pieces of art generated by AI, making the landscape of art far more homogeneous over time. This homogenization massively harms our capacities for critical and innovative thought, both of which are essential for personal development and societal progress. Harriet Beecher Stowe’s “Uncle Tom’s Cabin”, for instance, helped rouse anti-slavery sentiment in the North before the Civil War. AI is incapable of writing such a piece; it can only recombine allegories and metaphors drawn from original human ideas.
I am no Luddite, however; I acknowledge that AI has its appropriate uses. AI is a useful and effective tool for grunt work that doesn’t require (or benefit from) human input. Its ability to rapidly analyze large datasets with little error, for instance, means that researchers can delegate heavy lifting to a model designed for efficiency. AI’s strengths lie in processing and organizing large amounts of data, managing repetitive processes, providing basic rundowns on concepts, guiding users through certain processes, and other tasks that don’t require nuanced interpretation or a personal touch. Tasks that involve critical decision making, empathy, or creativity are best left to humans.
According to an MIT News article written last month[1], AI, and especially generative AI, has massive environmental consequences as well. The article mentions how the computational power required to train generative AI models that often have billions of parameters, such as OpenAI’s GPT-4, can demand a staggering amount of electricity, which leads to increased carbon dioxide emissions and pressures on the electric grid. Furthermore, deploying these models in real-world applications, enabling millions to use generative AI in their daily lives, and then fine-tuning the models to improve their performance draws large amounts of energy long after a model has been developed.
The above paragraph was AI-generated; I gave ChatGPT a copy of the text of the article and told it to summarize it, curious about what it would say. Sure enough, the first paragraph of the summary (which I included above in full) contained two sentences from the MIT News article verbatim, demonstrating that users should beware of unintentional plagiarism from AI. The paragraph still holds merit, however; there are indeed massive environmental concerns related to AI usage due to its high energy demand.
For example, it’s been estimated that one ChatGPT prompt uses up to 100 times as much energy as a Google search [2] (and that estimate is two years old; ChatGPT and other generative AI models likely use even more energy today). This high energy demand heats the computers running the models, leading many operators to install water-based cooling systems to prevent overheating, which in turn drives high water consumption. Both the demand for electricity and the demand for water strain local resources and produce large amounts of carbon dioxide emissions, to the detriment of both the environment and the people living in those areas. These environmental concerns add another layer to the ethics of AI usage.
While AI has great potential, in its current form the cost is extremely high. Arguably, very few uses of AI are “worth” the water consumption, electricity consumption, and added atmospheric carbon dioxide. As AI technology advances, finding ways to reduce or eliminate its ecological impact will be crucial to ensuring that its benefits outweigh its consequences. Until then, most uses of AI are arguably unethical because they aren’t “worth” these losses.
This leaves one more question: how do we address these ethical concerns? AI is far too promising and valuable a tool to abandon outright, so it’s imperative that we mitigate its consequences. Both AI users and developers can take steps to this end.
For AI users, transparency and temperance are paramount. Users should always disclose when AI-generated content is used; this transparency ensures full honesty. AI outputs should be treated as starting points, not finished products: using AI to organize ideas or assist with research is far more ethical than copying AI content and claiming authorship of it. Ethical AI users should also fact-check AI-generated content to prevent the inadvertent spread of misinformation. Finally, users can minimize their environmental impact by limiting AI use to cases of genuine need until AI’s environmental consequences are lessened.
AI developers bear a broader responsibility when it comes to AI ethics. Developers can decrease plagiarism by having AI cite all of its sources. One way that developers have already made strides in ethical AI is by allowing users to report responses as inaccurate, combating the spread of misinformation. Developers can also pursue green AI initiatives by optimizing algorithms, using more efficient hardware, and transitioning to renewable energy. By building models that require less computational power, developers can mitigate AI’s environmental impact while maintaining performance. Together, AI users and developers can advocate for policies and practices that prioritize ethical and sustainable AI. Supporting research into more efficient and sustainable AI practices, adopting rules and regulations that promote transparency in AI use, and requiring generative AI models to credit their sources are all important steps.
As AI technology continues to evolve, people must be aware of the ethical drawbacks of certain applications of AI. It’s a great resource, but it’s important for our environment and the preservation of human creativity that we only use it properly.
Sources
- Zewe, A. (2025, January 17). Explained: Generative AI’s environmental impact. MIT News | Massachusetts Institute of Technology.
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
- van Rijmenam, M. (2023, December 10). Building a greener future: The importance of sustainable AI. The Digital Speaker.
https://www.thedigitalspeaker.com/greener-future-importance-sustainable-ai/