The artificial intelligence echo chamber

AI is eating itself alive: What happens when AI is trained on data that it created?

Generative artificial intelligence has spread to every corner of our lives and is approaching critical mass. Students use it to work, teachers use it to teach, employers use it to reject applicants and our president uses it to depict himself as Superman. AI has become a significant part of our culture, and its extremely high usage rate is beginning to cause problems for its continued development — and for the dissemination of information as we understand it.

An article published in The Atlantic this September described AI usage in a public high school classroom. In a discussion about the book “Narrative of the Life of Frederick Douglass,” a student stated, “What was meant to be a reflective, thought-provoking discussion on slavery and human resilience was flattened into copy-paste commentary.” The idea that AI is killing our critical thinking skills is horrifying, and it is becoming a reality. As a computer science major, I know AI is a tool I must adapt to — even my employers say as much — but what are the consequences of our growing reliance on it?

As AI becomes more popular and an irremovable part of the papers, research, dissertations and theses published into our information network, it will become increasingly self-assured. The things it teaches will become its own study material, and eventually, it will become completely detached from reality.

AI cannot critically think about history, but given enough time and enough data, it will begin to write its own. The AI market is growing rapidly, and there is not enough data in the world to satiate its colossal appetite. It indiscriminately consumes anything it can get its hands on, even its own content. Even AI companies have expressed their own powerlessness over their creations.

AI companies use something called “synthetic data” to simulate human research. Synthetic data is AI-generated data created to mimic its real counterparts. It serves as a supplement to real data, helping to fulfill the massive data requirements of these machines. While this works well for objective information, such as math problems, it also causes a lot of issues. Even with the monumental leaps in quality being made in generative tools, many of them remain flawed, riddled with biases, falsehoods and delusions.

Our chatbots are trained on this mixed data, which is handed over to the population, integrated into real-world data and sent right back into the chatbot’s “brain” to be distributed and studied all over again. Researchers call this degradation loop “model collapse,” and as our work incorporates more and more AI, the cracks will begin to show. A medical AI trained on biased historical medical information has already been shown to encourage suboptimal care. Right now, if you ask a chatbot about its faults, it may acknowledge them, but given enough time, will it eventually tell a doctor to treat a female patient differently from an identical male one?
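To make that loop concrete, here is a deliberately toy simulation, written in Python. Nothing about it reflects how any real chatbot is built: the “model” is just a bell curve fitted to a list of numbers, and the cutoff step is a crude stand-in for the documented tendency of generative models to over-produce typical outputs and under-produce rare ones. Even so, it captures the core dynamic of a model trained on the previous model’s output:

import numpy as np

rng = np.random.default_rng(seed=42)

# Generation 0: "human" data with genuine diversity.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

for generation in range(1, 11):
    # "Train" the model: all it learns is the data's mean and spread.
    mu, sigma = data.mean(), data.std()
    # "Generate" new content by sampling from the trained model.
    samples = rng.normal(mu, sigma, size=20_000)
    # Models under-produce their rarest training examples; we mimic that
    # by discarding everything more than two sigma from typical.
    kept = samples[np.abs(samples - mu) < 2 * sigma]
    data = kept[:10_000]  # the next generation trains on model output only
    print(f"generation {generation:2d}: diversity (std) = {data.std():.3f}")

Run it, and the printed diversity shrinks by roughly 12% every generation; after ten rounds, the simulated model retains barely a quarter of the variety it started with. Real systems are vastly more complicated, but the arithmetic of training on your own output points the same direction.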

When AI first made its debut on the market, I made a presentation about it as the great equalizer. My dream for this technology was for it to become a tool that people all across the world could use to easily fill gaps in their education. With a quick message to a chatbot, you could have your own personal chemistry tutor or financial advisor. Unfortunately, instead of becoming a tool to enhance learning, especially for those less fortunate, it has become a thief of individual thought. After all, why spend hours learning a challenging algorithm when I can copy-paste a version of it into my code?

Although the reality I’ve described is rather extreme, AI is part of our lives now, and not in the way I wished. Instead of augmenting our ability to synthesize information, we now need to ensure it doesn’t become the only way we can think. We need to hold fast to our ability to learn, lest we become as blindly self-confident as the AI that speaks through us. As a STEM student, participating in more creative work, like writing columns for the Trinitonian, is how I express my voice. I fear what could happen if we lose the ability to express ourselves. Perhaps we will all become victims of the reality depicted in “WALL-E.” I do, however, have every confidence that AI will eventually become so stupid and cyclical in its learning that we will have to walk away from it or fundamentally change it as we know it. But what will we do if we no longer have the critical thinking skills to solve that problem?
