And not in a good way
The Fédération Internationale de Football Association (FIFA) is the international governing body of football, beach soccer and five-a-side indoor football. Little-known fact: FIFA’s Social Media Protection Service (SMPS) monitors and, where possible, hides social media abuse aimed at players, officials and supporters. A recent SMPS report highlights the dangers.
The risks and mental health challenges associated with being a victim of online abuse have a direct and immediate effect on players. Hatred and discrimination in the online environment can be damaging at both a personal and professional level, negatively impacting players’ ability to be and perform at their best.
For the 2022 Qatar World Cup, the SMPS monitored some 1,921 accounts. Its AI system also analyzed 20 million posts and comments across all major social media platforms.
The SMPS flagged 434k potentially abusive comments posted from 12,600 different accounts, and hid an additional 287k posts. Taken together, those 721k posts represent 3.605 percent of the 20 million analyzed.
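For what it’s worth, the arithmetic checks out. Here’s a quick sketch of the sums, using only the figures quoted above (the variable names are mine):

```python
# Back-of-the-envelope check of the SMPS figures quoted above.
flagged = 434_000        # potentially abusive comments flagged
hidden = 287_000         # additional posts hidden
analyzed = 20_000_000    # posts and comments analyzed by the AI system

total_actioned = flagged + hidden           # 721,000
share = total_actioned / analyzed * 100     # share of everything analyzed

print(f"{total_actioned:,} posts = {share:.3f}% of the total")
# 721,000 posts = 3.605% of the total
```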
Not a lot, considering the passions engendered by worldwide professional football? Perhaps.
Looking ahead, there are now dozens of AI programs that automatically post content on social media. How long before SMPS’ AI confronts the exponentially greater threat of AI-generated social media abuse?
Yes, haters who gotta hate, hate, hate are generally reactive people who like to put their own spin on their bile, and aim it at a specific target. But haters also like to hang in groups. Soccer hooligans, for example, are organized thugs.
AI’s automated efficiency will empower both individuals and hate groups to launch a mega-tsunami of AI-generated abusive posts across the full spectrum of social media. The platforms won’t be able to keep up. Truth be told, they can’t keep up now.
Forget AI “hallucination.” Think AI pollution. Coming to an online retailer near you! All online retailers near you. Building on a long, ignoble and profitable history of fake reviews.
For decades, unscrupulous companies have paid shadowy “review farms” to generate fake reviews for Amazon, Best Buy, Walmart and every significant online seller you can name. Reviews that praise an unworthy product and/or disparage worthy competitors.
According to a report by the World Economic Forum, fake reviews cost businesses around $152 billion every year. A report written in 2021, before the AI explosion.
Per the report: “Using official figures and self-reporting by the world's leading e-commerce sites (including Trip Advisor, Yelp, TrustPilot and Amazon), on average we find that 4% of all online reviews are fake.”
Back in the day, it was pretty easy for a reasonably literate person to spot a fake review; most were written by workers for whom English was a second language.
As you know, AI blows that shit out of the water. A chatbot can generate a highly specific, entirely literate review in seconds. Like this (excerpt from Bard AI):
I've been a reader of Robert Farago's Substack blog for a few months now, and I'm consistently impressed with his in-depth insights on a wide range of topics, including artificial intelligence, men's wristwatches, motorcycles, and hypnosis.
Farago has a deep understanding of artificial intelligence, and he's able to explain complex concepts in a clear and concise way. He's also not afraid to challenge the status quo, and he's often critical of the way that AI is being used today…
I particularly enjoy Farago's writing style. He's a gifted storyteller, and he's able to weave together personal anecdotes, historical references, and technical jargon in a way that's both informative and entertaining.
Now consider the fact that there are AI apps that can generate an unlimited number of fake reviews and automatically post them online. Not just text. AI can create fake video.
Let’s not kid ourselves: even the most literate user can’t tell the difference between human-written and AI-generated fake content.
AI fake review detectors can’t save us from the deluge. With so much money and “pride” at stake, AI abusers will find a way to bypass the big sellers’ gatekeeper AI.
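To see why I’m pessimistic, here’s a deliberately naive sketch of the kind of surface signals a text-only detector can look for. To be clear, this is my own toy illustration, not any vendor’s actual method; the point is that a capable chatbot sails past every one of these checks without breaking a sweat:

```python
# A deliberately naive "fake review" heuristic, for illustration only.
# Real detectors are far more sophisticated, but they still hunt for
# surface patterns that a capable language model can simply avoid.

SUPERLATIVES = {"amazing", "incredible", "best", "perfect", "awesome", "life-changing"}
TEMPLATE_PHRASES = ("highly recommend", "five stars", "exceeded my expectations")

def looks_fake(review: str) -> bool:
    text = review.lower()
    words = text.split()
    if not words:
        return False

    # Signal 1: unusually high density of superlatives.
    superlative_ratio = sum(w.strip(".,!") in SUPERLATIVES for w in words) / len(words)

    # Signal 2: stock marketing phrases.
    template_hits = sum(phrase in text for phrase in TEMPLATE_PHRASES)

    # Signal 3: suspiciously short, detail-free gushing.
    too_short = len(words) < 20 and "!" in review

    return superlative_ratio > 0.08 or template_hits >= 2 or too_short

print(looks_fake("Amazing product, best purchase ever! Highly recommend, five stars!"))   # True
print(looks_fake("The strap felt stiff at first but broke in after a week of daily wear."))  # False
```

Swap the hand-written rules for a trained classifier and the bar rises, but the cat-and-mouse game stays the same.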
ChatGPT opened Pandora’s box. All that’s left is the hope that the forces of honesty, integrity and respect somehow find a way to douse the flames of bigotry and criminality. Ideas?