by WISE Digital Partners
January 7, 2025
What does the future hold for AI-powered content creation? As is often the case, you may find clues by following the money trail.
An August 2024 report published by Goldman Sachs estimates that big tech is ready to invest over $1 trillion in AI chips, infrastructure, data centers, and power grids.
The goal is to automate complex tasks and boost productivity enough to justify the immense development costs.
But justifying that cost requires overcoming two key conundrums:
Our conclusion? Hang onto your copywriters and designers.
From our perspective, the future of AI lies somewhere between the hype and hesitation. Yes, AI can and will assist creative humans. But at this point, it is neither financially feasible nor technologically possible to entirely hand off your brand identity to AI.
We’ll leave it to economists to debate the productivity gains of this technology. In the meantime, let’s explore some of the emerging AI trends that are already shaping how we create, consume, and interact with content.
Personalization is the process of using AI and customer data to create individualized content experiences—something most of us experience every day.
The Amazon email that addresses you by name and recommends products based on your purchase history is a perfect example of old-school personalization.
Hyper-personalization takes it a step further, using AI to analyze clicks and engagement times, adjusting content and even tone based on your behavior.
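To make that distinction concrete, here is a minimal Python sketch of how behavioral signals might drive an email variant. The profile fields, thresholds, and copy are our own illustrative assumptions, not a description of any particular platform.

```python
# Hypothetical sketch: choosing an email variant from behavioral signals.
# All field names, thresholds, and copy below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class BehaviorProfile:
    recent_clicks: int           # clicks on recent campaigns
    avg_read_seconds: float      # average time spent reading emails
    last_purchase_category: str  # e.g., "running shoes"

def pick_email_variant(profile: BehaviorProfile) -> dict:
    """Return the subject line, tone, and featured category for one recipient."""
    # Engaged readers get longer, editorial copy; skimmers get a short, punchy version.
    tone = "long-form" if profile.avg_read_seconds > 20 else "short-and-direct"

    # Frequent clickers see a stronger call to action.
    if profile.recent_clicks >= 3:
        subject = f"New picks in {profile.last_purchase_category} just for you"
    else:
        subject = f"Still thinking about {profile.last_purchase_category}?"

    return {
        "subject": subject,
        "tone": tone,
        "featured_category": profile.last_purchase_category,
    }

# Example: a highly engaged reader who recently bought running shoes.
print(pick_email_variant(BehaviorProfile(recent_clicks=5,
                                         avg_read_seconds=34.0,
                                         last_purchase_category="running shoes")))
```

In a real hyper-personalization system, a model would score these signals instead of hand-coded rules, but the human decisions stay the same: someone defines the segments, writes the variants, and approves what goes out.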
In 2025, we expect hyper-personalization to keep spreading across email, streaming services, news sites, social media feeds, and other digital content. What does this mean for humans?
While AI will likely generate different versions of content feeds automatically, humans will still oversee the process:
When OpenAI rolled out ChatGPT in 2022, spotting AI-generated content was relatively easy—particularly to the trained eye.
Copy was often rife with cliches, awkward transitions, and suspiciously perfect grammar that lacked the human touch.
That’s changing.
By 2024, it had become much harder to distinguish AI-generated content from human-created content. And at face value, that sounds awesome, particularly to struggling creatives or budget-conscious business owners who want to automate content creation to trim overhead.
But a closer look reveals a glaring downside: If we cannot distinguish human-written from AI-generated content, we risk eroding trust in all digital content. Looking ahead, we expect to see widespread adoption of:
If standardized, these measures won’t just prevent deepfakes—they'll ensure greater quality control, more accountability, and more authoritative content that’s generated, or at the very least overseen, by experts.
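As a rough illustration of what that kind of labeling could look like, here is a hypothetical Python sketch that bundles a piece of content with a simple disclosure record before it is published. The schema is invented for illustration and does not follow any existing standard.

```python
# Hypothetical sketch: attaching an AI-disclosure record to content before publishing.
# The schema is invented for illustration and does not follow any existing standard.
import hashlib
import json
from datetime import datetime, timezone

def build_disclosure(body: str, ai_assisted: bool, reviewed_by: str) -> dict:
    """Bundle content with a record of how it was produced and who checked it."""
    return {
        "content": body,
        "content_sha256": hashlib.sha256(body.encode()).hexdigest(),  # tamper check
        "ai_assisted": ai_assisted,   # was a generative tool involved?
        "reviewed_by": reviewed_by,   # the human accountable for accuracy
        "published_at": datetime.now(timezone.utc).isoformat(),
    }

record = build_disclosure(
    body="Our January roundup of digital marketing trends...",
    ai_assisted=True,
    reviewed_by="editor@example.com",
)
print(json.dumps(record, indent=2))
```

The specific fields matter less than the accountability they represent: a named human signs off, and that record travels with the content.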
As of 2023, ChatGPT can process and generate text, images, and audio. Results vary, but we see this as a catalyst for what’s to come: seamless creation of text, images, audio, and video from a single hub.
To some degree, our content marketing team is excited about these advancements. Our team uses multimodal features to streamline tedious and time-consuming tasks, but it’s all done with intense oversight. Not only because AI is error-prone but also because its capabilities are limited. Currently, AI can only analyze, reconfigure, and mimic what humans have already created.
No humans, no AI: we give these platforms the data and training they need. And as smart and efficient as it is, AI struggles to process cultural nuances and emotion, the essentials for conveying our clients’ brand identity, voice, and tone.
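For a concrete sense of what that oversight can look like, here is a minimal sketch using the OpenAI Python SDK (our assumption; any vision-capable model and client would work): a single multimodal request drafts alt text for an image, and the result goes to a human editor rather than straight to publication.

```python
# Hypothetical sketch: a multimodal draft that still passes through human review.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def draft_alt_text(image_url: str) -> str:
    """Ask a vision-capable model to draft one sentence of alt text for an image."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice for this sketch
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Write one sentence of descriptive alt text for this image."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content

def human_review(draft: str) -> str:
    """Placeholder for the editorial step: a person approves, edits, or rejects the draft."""
    print(f"DRAFT (needs review): {draft}")
    return draft

if __name__ == "__main__":
    approved = human_review(draft_alt_text("https://example.com/product-photo.jpg"))
```

The review function is deliberately a stub; the point is that a person, not the model, decides what ships under the brand’s name.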
While the advantages of AI are incredible, we cannot predict with certainty how it will impact privacy, content quality, job security, and the economy.
What’s more certain is that both developers and users are throwing caution to the wind. Only time will tell what happens next, but here are a few challenges humans will likely have to overcome in 2025.
It’s tempting to feed AI a prompt, copy and paste the generic output into your system, and then hit publish.
It’s just as easy to scrape copy from a competitor’s site, modify it using automated tools, and claim it as your own.
What many content creators don’t know is that Google’s quality guidelines prohibit these practices.
While the search engine does not punish websites for using AI-powered content creation tools, it does:
Unfortunately, many small business owners do not know this. And while they may use AI to streamline production, their content output may actually inhibit growth—not augment it.
AI platforms have been programmed to restrict hate speech, violence, and overt bias, particularly when it comes to prickly subject matter. But in terms of accuracy, that safeguard has limits.
While many platforms have built-in fact-checking mechanisms, developers cannot keep pace with the rapid proliferation of information. Nor can they verify the soundness of the information they feed AI systems.
Although reliable statistics on the amount of web-based content generated by AI are not currently available, research shows that it is growing.
A 2024 Copyleaks study of 1 million web pages found that the number containing AI-generated content grew from 185 in December 2022 to over 15,000 in 2024. What’s perhaps more concerning is that the same analysis revealed that 59.7% of GPT-3.5’s output contained some form of plagiarism.
What does that say about the content you interact with every day?
That’s difficult to answer for three reasons:
What we do know is that human-generated content is shrinking. And it’s being replaced by content of questionable origins.
AI has a relentless hunger for data, and developers would prefer you not know the origins of its food source.
However, as we learned from an April 2024 New York Times article, OpenAI had already exhausted its data supply of reputable English-language text back in 2021. To harvest more, the company built a speech-recognition tool to transcribe over one million hours of YouTube videos and fed the transcripts to GPT-4. And it reportedly did so knowing it risked violating YouTube’s terms of use.
Meanwhile, Meta, which owns Facebook and Instagram, has its eyes on acquiring another data source: publisher Simon & Schuster. The company has also been loose-lipped about its intention to snag copyrighted material from the web, even if it means facing lawsuits.
In simple terms, AI systems are swallowing data faster than humans can produce it. And if developers have to violate copyright laws to satiate AI’s appetite, they aren’t being shy about it.
What does this mean for creatives—the writers, designers, and video creators whose work is being appropriated for what AI developers call “fair use”?
The Times sued OpenAI and Microsoft in 2023 for using copyrighted articles without permission. So did the Authors Guild, which claims the company scraped more than 100,000 books to train GPT-3. Other lawsuits are pending.
Only time will tell what happens next. What’s certain is that the industry needs extensive regulatory oversight. And until we see more protections, AI execs will do whatever is necessary to feed the beast.
There’s a reason WISE Digital Partners employs a full-time copy team: We deliver content that builds trust, demonstrates expertise, and drives real results for clients. If you’re looking for marketing content that sets your business apart, let’s talk. Contact us today!