The Dark Side of AI

Jessica McDaniel
Published Sep 2024

I recently attended the Information Architecture Conference in Seattle, and this year’s theme was “IA in the Age of AI.” As you can imagine, the majority of the lectures focused on artificial intelligence…what it’s good at, what it’s bad at, how it can be helpful, how it can be harmful, and much more. One theme that most of the speakers repeated, and that therefore stuck in my brain, was the disturbing social, ethical, and environmental implications of AI. I’ve always been a little skeptical of AI and have understood its shortcomings at a high level, but the information shared piqued my interest and led me down this path of research.

The Bright Side of AI 

Before we get into the details of the negative aspects of AI, there are some positives we should discuss. AI can help free up your time and make you more efficient by taking menial, less important tasks off your plate, letting you concentrate on the more important things on your to-do list. A few things AI can be helpful with include:

  • Organization – AI can streamline organizing and categorizing large amounts of data.
  • Summarization – AI tools can summarize large documents and even conversations, providing you with key information and synthesized notes.
  • Translation & Transcription – AI can take daunting tasks like translation and transcription and get them done quickly and efficiently.

Environmental Impact

Data Centers & Training AI 

AI models require giant data centers to house their servers. These data centers consume a massive amount of energy, leave an astoundingly large carbon footprint, and use an alarming amount of water to keep them cool. One study reports that Microsoft used approximately 700,000 liters of freshwater in its data centers during GPT-3’s training. The world’s data centers currently account for 2.5–3.7% of global greenhouse gas emissions!

And that is just the data centers! The internet already generates about 1.6 billion tons of greenhouse gas emissions annually (4% of the global total), and that number is expected to increase by up to 5 times as search and AI merge.

Now that we’ve covered data centers and the internet, we have to look at the cost of training new AI models. Training a single AI model can emit over 626,000 pounds of carbon, and that appears to describe the smaller models: a recent study reported that training GPT-3 produced 502 metric tons of carbon. GPT-3 has 175 billion parameters, which are the variables the model learns during training. The more parameters, the more accurate the model can be, but it is also suggested that more parameters mean more energy consumed during training. Can you guess how many parameters GPT-4 has? ONE TRILLION!
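
For a sense of where a figure like 502 metric tons comes from, here is a minimal back-of-the-envelope sketch in Python. Both inputs are assumptions for illustration: the ~1,287 MWh training-energy number is one widely cited estimate for GPT-3, and grid carbon intensity varies a lot by region and year.

```python
# Back-of-the-envelope estimate of training emissions.
# emissions (tCO2e) = energy used (MWh) * grid carbon intensity (tCO2e/MWh)
# Both inputs are illustrative assumptions, not measured values.

energy_mwh = 1_287       # one widely cited estimate of GPT-3's training energy
grid_intensity = 0.39    # tCO2e per MWh; varies a lot by region and year

emissions = energy_mwh * grid_intensity
print(f"Estimated training emissions: {emissions:.0f} metric tons CO2e")
# -> Estimated training emissions: 502 metric tons CO2e
```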

AI Inference 

After training is complete, and all of those hundreds of thousands of pounds of CO2 have been emitted, we can move on to the environmental impact of actually using AI. With 4 billion devices already running AI-powered voice assistants and 27% of Americans using AI multiple times a day, it is easy to conclude that training the AI models isn’t the only environmentally unfriendly thing going on. Studies have shown that training is only responsible for about 40% of AI’s energy consumption; the other 60% comes from usage!

Because big tech companies aren’t transparent about the energy usage of their AI models, and because the industry lacks standardized measurement, there are no clear numbers on the total carbon emissions produced by the use of AI. But there are some specific numbers we can look at:

  • Using AI to generate a single image takes as much energy as fully charging your smartphone.
  • A short conversation with ChatGPT (between 20 and 50 questions) requires 500ml of water.
  • Every single message sent on ChatGPT produces 4.32 grams of CO2.
  • Under the assumption that ChatGPT runs on 16 GPUs, its carbon output was estimated at 8.4 tons of CO2 annually; it was recently reported that it actually takes 30,000 GPUs, which means the CO2 output would be significantly higher (see the quick extrapolation after this list)!
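
Here is a naive sketch of that extrapolation in Python, assuming emissions scale linearly with GPU count; real numbers depend on utilization, hardware generation, and data center efficiency, so treat it as an illustration only.

```python
# Naive linear extrapolation of the annual inference-emissions estimate.
# Assumes emissions scale proportionally with GPU count, ignoring
# utilization, hardware generation, and data center efficiency.

co2_at_16_gpus = 8.4    # metric tons of CO2 per year from the 16-GPU estimate
gpus_reported = 30_000  # the more recently reported figure

scaled = co2_at_16_gpus * gpus_reported / 16
print(f"Naive estimate at 30,000 GPUs: ~{scaled:,.0f} tons of CO2 per year")
# -> roughly 15,750 tons per year
```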

Social & Ethical Implications

AI certainly is making an impact on Mother Earth, but what about its impact on us? When I was little, if my brother and I were watching something on TV that my mom didn’t approve of, she’d say, “garbage in, garbage out.” Until I attended IAC I had never heard anyone else say that phrase. It should have been the theme of IAC 2024. The reason is that AI works only on the information given to it…information created by imperfect humans. The problem of bad data grows exponentially when an AI model is scraping the internet for information because, as we all know, the internet is filled with inaccurate and biased information. Hence, garbage in, garbage out.

There are too many examples of AI mishaps, and predictions of manipulation, to list here. So we will just cover some of the big themes in how AI is affecting us humans.

Inequality & Biases 

Inequality and bias are issues we are already contending with around the world, so it is no surprise that the idea of AI making those problems worse is horrifying. People are warning against the potential for an even bigger gap in social inequality due to unequal access. Only a few large companies have access to the really advanced AI, giving them a significant advantage over smaller companies…which could lead to the rich becoming richer and make it harder for anyone else to catch up. Bring your focus down to the individual: a retailer with access to AI tools is going to make more sales (and therefore money) by leveraging AI-generated content such as reviews, descriptions, and imagery than a retailer who has to do all the same things manually.

“As you can see, unequal access to AI can create a snowball effect, widening the gap between different groups in terms of those who have and those who don’t have access to such tools.” – Sherice Jacob, Author at Originality.AI

Biases are already showing up in AI outputs, both because models are trained on imperfect data and because dirty data is intentionally used to change a model’s output. There are many examples of bias in AI out there, but this article shows four pretty disturbing examples committed by big brands and even government programs!

Privacy

With companies using customer data in AI models to increase revenue and decrease overhead, people are concerned about the safety of their data. How is it being used? Where is it being stored? Who is it being shared with? AI even has the capability to create personal data without the consent or knowledge of the individual. People nowadays are paying more attention to their privacy, and here are two eye-opening takeaways from a 2018 Genpact survey that back up that fact:

  • 71% said they don’t want companies to use AI if it infringes on their privacy, even if those technologies improve their customer experiences.
  • 63% said they’re worried AI will make decisions about their lives without their knowledge.

In addition to the security of data, we also have to think about how AI can weaponize private data and use it in a predatory way. In the article 13 Societal Costs of Undetectable AI Content, number 8 discusses surveillance capitalism and how AI could take targeted ads to an extreme by using mental and emotional states to manipulate people into making purchases. That would be a gross misuse of data!

Disinformation 

As if everything before this wasn’t scary enough…this section terrifies me the most! With the help of AI, disinformation and treachery are easier than ever!

Data poisoning 

We’ve already discussed how imperfect data is used to train AI models, producing imperfect and often biased results. Well, data poisoning is kind of like that, except it is on purpose: data is deliberately manipulated to alter the output of an AI model. The spread of disinformation through data poisoning has the potential to leave people misinformed, divided, angry…the list goes on. Not to mention the vast number of scenarios that data poisoning could disrupt or harm.
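
To make the idea concrete, here is a toy sketch in Python (using scikit-learn and made-up data, not any real system) showing one simple form of data poisoning: an attacker flipping a chunk of the training labels. Real attacks are far more subtle, so this is only an illustration.

```python
# Toy demonstration of label-flipping data poisoning.
# We train a classifier twice: once on clean labels, and once after an
# "attacker" flips 30% of the training labels. Accuracy on the clean
# test set typically drops. A simplified illustration, not a real attack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The "poisoning": flip the labels of a random 30% of training examples.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean model accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned model accuracy:", poisoned_model.score(X_test, y_test))
```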

On the flip side, there are some people out there using data poisoning for good. For instance, there has been a report on artists fighting back against the generative AI models that produce imagery. Art posted online gets scraped into AI training sets without the artists’ consent, so these artists are using a tool called Nightshade to poison their art in a way that disrupts the AI’s training set, and therefore its output!

Deepfakes 

It used to be that if someone wanted to go through the trouble of creating a fake photo, it took a lot of time and expertise to do it all manually. Now, with the help of AI, deepfake videos, images, audio, and text can all be created within minutes.

Deepfake content can be created to make it look like someone has done something, been somewhere, or said something when they haven’t. Creating these realistic lies for the masses to consume is very dangerous and harmful. On a large scale, think elections; on a small scale, think about a young person on social media getting bullied. In New Jersey last year, a group of high school boys used AI to create pornographic images of nearly 30 of their female classmates!

Other criminal activities such as identity theft and fraud can be added to the really long list of negative implications of using AI to create deepfakes. 

Human-ness 

There are two main aspects of our human-ness that AI can potentially impact: our personalities and our brains.

A study called “Artificial Intelligence in Communication Impacts Language and Social Relationships” shows that when people think you are using AI to communicate, they are more likely to find you uncooperative and to have negative feelings toward you. Using AI for communication lets whoever controls the AI training sets and algorithms also control our interactions and our own voices, and that can have major impacts on our relationships.

I don’t know about you, but ever since I started using GPS I can’t remember how to get anywhere on my own. The worry is that overreliance on AI will have similar effects. AI tools are becoming available on almost every website, platform, and app out there. AI can write for us, create for us, research for us, talk for us, and so on. So, if we don’t have to write, research, or create for ourselves, it seems like only a matter of time (probably a generation or two) before people won’t be able to problem-solve, design, or innovate for themselves. These are vital skills for a successful and independent society…so what happens when we no longer have them?

How do we move forward?  

No one really has the answer to this question yet. The only things we can do are be prepared and constantly improve. Whatever your interaction with AI is going to be, go in with your eyes wide open to its flaws. If you know its flaws, you can interact with this powerful technology more responsibly. 

If you are creating an AI tool, put guardrails in place to prevent some of the things we’ve talked about, and remember it is never finished! Keep watch and be vigilant; when mistakes happen, correct them. Keep adding guardrails as new things come up. Keep making it better over time.

And always remember to AI responsibly! 😉 

P.S. No AI was used in the writing of this article. 
