Let me be clear–I am a huge proponent of Generative AI. It has brought real efficiency gains to my professional and personal life, and I’ve learned new ways to solve all kinds of problems, from meal planning to keyword scrubs. But there is a darker side to this new AI world we’re living in, and it’s only a matter of time before it catches up to us. Below, we’ll review the major threats and consequences of AI use, and what marketers need to know before adopting AI at scale.
AI Business Death
Ever heard of an AI death spiral? I hadn’t–new professional fear unlocked. Coined by Aible founder and CEO Arijit Sengupta, the “AI death spiral” describes what happens when a company adopts AI to improve ROI and ends up circling the drain: chasing non-incremental revenue while profitability tanks. According to Sengupta, AI leans towards conservative outputs in order to preserve its accuracy (recommending only a small handful of prospects to pursue, for example), which can shrink lead volume. This triggers a cycle where the company keeps investing in acquisition tactics but can’t convert, because the AI becomes even more conservative as volume grows.
Conventional AI is usually trained on academic metrics like accuracy rather than what organizations really want: business impact and ROI.
–Arijit Sengupta, “Beware The AI Death Spiral”
How do marketers avoid the death spiral? It comes down to how the model is trained and evaluated. Companies looking to fold AI into their marketing strategy should define their actual business goals (profitability, for example, rather than raw prediction accuracy) and train and score the model against those goals.
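To make that concrete, here is a minimal sketch in Python with scikit-learn. The dollar figures and data are entirely made up for illustration; the point is simply the difference between tuning a lead-scoring model for accuracy and tuning it for profit:

```python
# Hypothetical sketch: all dollar figures and data are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV

REVENUE_PER_CONVERSION = 500.0  # assumed value of a lead that converts
COST_PER_PURSUIT = 50.0         # assumed cost of pursuing a lead

def profit(y_true, y_pred):
    # Net profit: revenue from pursued leads that converted,
    # minus the cost of every lead pursued.
    pursued = (y_pred == 1)
    won = pursued & (y_true == 1)
    return won.sum() * REVENUE_PER_CONVERSION - pursued.sum() * COST_PER_PURSUIT

# Synthetic, imbalanced data: roughly 10% of leads actually convert.
X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)
grid = {"class_weight": [None, "balanced", {0: 1, 1: 5}]}

# Same model family, two different definitions of "best".
by_accuracy = GridSearchCV(LogisticRegression(max_iter=1000), grid,
                           scoring="accuracy").fit(X, y)
by_profit = GridSearchCV(LogisticRegression(max_iter=1000), grid,
                         scoring=make_scorer(profit)).fit(X, y)

print("accuracy-tuned model pursues:", int(by_accuracy.predict(X).sum()), "leads")
print("profit-tuned model pursues:  ", int(by_profit.predict(X).sum()), "leads")
```

On imbalanced data like this, the accuracy-tuned model tends to pursue almost no one, while the profit-tuned model pursues every lead whose expected value beats its cost–exactly the conservatism Sengupta warns about.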
The Slop Problem
Sometimes AI-generated content can be downright cringey. John Oliver recently did a great piece on Last Week Tonight about AI slop–the low-quality, brainrot-style content gaining popularity on social media. While hyper-realistic simulations of what it might look like if Edward and Bella from Twilight ran a couscous cart in Morocco are incredibly fun, some AI slop is genuinely harmful. Misinformation is even more insidious when the content is so realistic that the average person can’t immediately spot its inauthenticity. This leads to more and more “fake news” which, like classic clickbait, platforms are incentivized to amplify because it drives revenue. Slop has made misinformation more pernicious than ever, especially in light of Mark Zuckerberg’s announcement that Meta would be rolling back fact-checking in 2025. Scary stuff.
Your AI Has a Carbon Footprint (and a Water Bill)
In April 2025, OpenAI CEO Sam Altman made headlines when he revealed that users saying “please” and “thank you” to ChatGPT was costing the company millions–a story that made its way around social media as a funny reminder to be polite to your robots. But the unspoken story here is the environmental impact of every interaction with LLMs. Training massive models like the ones behind ChatGPT requires enormous computational power, which translates to substantial energy usage and carbon emissions. And once a model launches, it keeps drawing energy for every query, while the data centers running it consume vast amounts of water for cooling. I could go on for days about the ripple effects on local communities and electronic waste, but suffice it to say that the rise of LLMs has a very real footprint.
We Might Be Getting More Stupider
To absolutely no one’s surprise, recent research suggests that heavy generative AI use may be dulling our thinking. In their April 2025 study, “The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers,” Microsoft researchers surveyed 319 knowledge workers to examine how GenAI affects their critical thinking skills and effort. While increased use of GenAI improved efficiency, it also promoted overreliance on the tool and decreased critical engagement. Employers will have to evolve organizational processes, increase oversight, and rethink performance evaluation metrics to ensure employees are not only efficient, but also critically engaged.
Copyright Chaos
Powerful LLMs require a colossal amount of training data, much of it scraped indiscriminately from the internet. This creates a growing headache for the creators producing original content and IP. Early legal battles are leaning towards the idea that training a model on copyrighted work does not, by itself, constitute infringement, but that doesn’t solve the bigger problem of GenAI users passing off derivative outputs as original.
What happens when an LLM, trained on countless books, articles, and images, starts spitting out content that’s eerily similar to something already copyrighted? Or worse, what if the training data itself was pirated? Current copyright law isn’t prepared for these questions, and we’re living in a legal wild west–the scale of which we haven’t seen since the boom of the internet.
AI is Delusional
Hallucinations, or confidently delivered inaccurate outputs from LLMs, are still a pervasive problem, and may even be getting worse. According to OpenAI’s own testing in April 2025, its newer models hallucinate up to 48% of the time (more than double the rate of its older o1 model). While hallucinations can occasionally lead to more creative content outputs, they undoubtedly cause harm when subtle or hard-to-verify claims are presented as fact. Retrieval-augmented generation (RAG) paired with structured reasoning is often discussed as a mitigation tactic, but there is no full solution to the hallucination problem yet–so be very, very cautious about the information you receive from LLMs.
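For the curious, here is a minimal sketch of the RAG idea in Python. The search_docs and call_llm helpers are hypothetical placeholders (stubbed with canned data here) standing in for a real search index and a real model API; the point is that the model is instructed to answer only from retrieved, citable sources:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    source: str
    text: str

def search_docs(query: str, k: int = 3) -> list[Doc]:
    # Hypothetical stand-in for a real vector or keyword search
    # over a trusted document store.
    corpus = [
        Doc("brand-guide.pdf", "Our brand voice is friendly and direct."),
        Doc("returns-faq.md", "Returns are accepted within 30 days of purchase."),
    ]
    return corpus[:k]

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for your model provider's completion call.
    return "(model response would appear here)"

def answer_with_sources(question: str) -> str:
    # Retrieve relevant documents, then constrain the model to them.
    docs = search_docs(question)
    context = "\n\n".join(f"[{d.source}] {d.text}" for d in docs)
    prompt = (
        "Answer using ONLY the sources below. Cite the [source] for each claim, "
        "and say 'I don't know' if the sources don't cover the question.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer_with_sources("What is our return window?"))
```

Grounding answers in retrieved sources makes claims checkable, but it doesn’t make them true: the model can still misread or ignore its sources, which is why human review remains non-negotiable.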
Where Do Marketers Go From Here?
For marketers, the key to incorporating AI content into a creative strategy in 2025 is going to be restraint. It’s easy to imagine a world where every piece of content or creative asset is pushed out en masse by an AI solution, but just because something is easy doesn’t mean it’s good! Even if you’ve successfully incorporated AI creative into your portfolio, it’s more important than ever to have a strong creative strategy with intentional messaging and heavy human oversight. Consistent review, with multiple checkpoints before any AI-generated creative launches, can help keep your brand from blending into the noise of overused, generic slop.
Marketers also need to be aware that while AI remains largely unregulated (a blog post for another day), there may come a time when public opinion on LLMs and the overuse of AI turns sour. Connecting with your customers on a human level will never be an outdated tactic, so while it’s exciting to uncover a world of automated solutions, brands should continue to invest in lifestyle creative and authentic, human-centric content.
Remember, the human touch isn’t just good for business – it’s essential for survival.