We not only have to deal with political lies—here's a story of how AI can lead us astray. One of the ground rules for historians is that you must have two independent sources for every fact to be reasonably sure it actually happened. How disheartening to realize there's yet another source of "misinformation" in today's world that you have to double-check and recheck.
AI Issues Fake New Year’s Invite
Guest Post by Ed Condon
[This story appeared today in The Pillar, a Catholic news substack.]
Spare a thought, if you would, for the thousands of people who gathered in Centenary Square in the English city of Birmingham for a “spectacular midnight show” of live music and aerial pyrotechnics that didn’t just not happen, it never existed at all.
Thousands of people packed the square, including families with kids, to await a promised New Year’s Eve big bash, despite no event ever being planned, announced, or even considered by city authorities. In fact, they came despite several public announcements from city hall and the police advising that no such event was happening.
So why did they go? Apparently because AI told them to.
Several websites and social media feeds carried supposed details of the fictitious event, including round-up articles on “The UK’s Top New Year’s Eve Fireworks Displays to Ring in 2025.”
According to The Times [a leading London newspaper], it seems that the articles were all generated by AI software, which populated websites and Instagram feeds with hundreds of thousands of followers like “@bhamupdates.”
The best guess is that one or more AI algorithms just assumed that since Birmingham is a major city, and because the city used to hold New Year’s Eve firework events (until 2016), and because all these events tend to be pretty boilerplate in their descriptions — “a mix of performances, local food vendors, and a spectacular midnight show” — there would be one, and so it generated a description of what it assumed would happen.
It did not go well. As you would expect. A lot of people were annoyed, to put it mildly, and the city had to manage a huge (and angry) crowd for which it had laid on no preparations.
Now, I have to admit to finding the whole thing pretty funny, from my safe distance. And I have long been a vocal skeptic of AI, so forgive me a little smugness.
But this does go to the serious point that a lot of media sites are using AI to generate copy, and a lot more are tempted to do so. This is apparently the ultimate, farcical example of what you get: literal fake news that leaves thousands out in the cold.
I get how this happens: sites that make their money from clicks and traffic need clickbait listicles to grab eyes quickly and keep the hit count rolling. Ideally, they need that copy to be free to produce, because when a site is free to read, your margins are razor thin and staff is the ultimate luxury item.
That’s where a whole section of the media industry is headed right now. Nobody likes it, but we — all of us, myself included — got used to news being free to read online over the last two decades. But the Google ad revenue bubble that fueled much of that industry has been deflating for a while now, with ruinous consequences for a lot of outlets, both in terms of finances and professional standards.
It’s not good. But it does mean that now, more than ever, you need to know and trust your media. And you can trust me when I say we [The Pillar] aren’t ever going to let robots write our copy or throw a New Year’s party.
While I can chuckle at the absurdity of the event, it does not bode well for a future where generative AI is ubiquitous and a sizable part of the populace shows itself cognitively incapable of distinguishing what's real from what's fake. The past few elections have already shown that many people simply cannot check incendiary news stories for their factual accuracy. Sadly, I can only see things getting worse from here, because the way people use their devices actively erodes their mental faculties: it offers an overabundance of algorithmically curated content designed to entertain or enrage, not to foster critical thought.
Why nobody is trying to regulate this is a mystery to me. It will take just one serious incident caused by fake news, perhaps a riot or a crime, and everyone will be running around like headless chickens trying to shut everything down. AI doesn't even need to reach singularity, or be truly intelligent at all; human stupidity or gullibility is probably enough...