Robert Farago

AI Caught Faking Michael Schumacher Interview

Sometimes Being Prompt is a Bad Idea - Even For A German


It's not entirely fair to say character.ai faked an interview with semi-comatose F1 driver Michael Schumacher. For one thing, "faked" isn't the AI industry's preferred term for output that isn't factual or, in this case, true. AI's guardians like to call this kind of thing a "hallucination." For another, in this case, AI was just doing what it was told.


The article was the intentional result of a prompt by Die Aktuelle Editor-in-Chief Anne Hoffmann or one of her minions. The German EIC's been fired and a public apology issued. "This tasteless and misleading article should never have appeared," Funke media group MD Bianca Pohlmann admitted in a statement. "It in no way meets the standards of journalism that we - and our readers - expect."


Surely Ms. Hoffmann didn't think her magazine could get away with a fake interview with someone millions of F1 fans know is mentally damaged. That said, the “long road to recovery” theme tugs on his fans’ heartstrings but good, and the AI-generated quotes were milquetoast enough to pass the smell test.



"I can with the help of my team actually stand by myself and even slowly walk a few steps . . . My wife and my children were a blessing to me and without them I would not have managed it. Naturally they are also very sad, how it has all happened... They support me and are standing firmly at my side."


Including the Schumacher family lawyers. There are at least two potential defenses against the forthcoming lawsuit. The mag can claim it was Ein Witz! – an attempt at humor from the country that invented the word schadenfreude. Alternatively, it was a War of the Worlds warning about AI’s dangers to journalists, readers and society. The strapline tells the tale: "It sounded deceptively real."


The new reality: newsrooms are already using AI to crank out more stories faster than human journalists could produce on their own. Insider global editor-in-chief Nicholas Carlson admitted to Axios that the outlet is "experimenting with ways to leverage AI in its journalism." Leverage as in reduce costs, increase profit and remove the human touch.


Meanwhile, NewsGPT is up and running. The “all AI all the time” website's press release claims the service is… wait for it… unbiased. “For too long, news channels have been plagued by bias and subjective reporting,” CEO Alan Levy kvetched. “With NewsGPT, we are able to provide viewers with the facts and the truth, without any hidden agendas or biases.”


It's bad enough that experienced journalists are being shown the door. The idea that AI is somehow "the truth" is worse. Hello? As we reported, ChatGPT won't praise fossil fuels without making a pitch for renewable resources. It will write a poem praising President Biden but refuses to do the same for Donald Trump. In other words, AI bias is baked in.


Well of course it is. While NewsGPT's coverage seems free from political dog whistles, deciding which stories to cover and which to ignore is a form of bias. Does AI make that call? If it does, on what basis? Did AI decide to label a headline over Russia accidentally bombing Russia "shocking"? Is that a sign that AI has sensationalism in its code? If it doesn't now, it's only a matter of time before it will. Because money.



And then there's the issue of "deep fakes." Not just faked photos – an obvious challenge in the increasingly illiterate social media world in which we live. What about fake news? With unattributed AI interviewing people who can’t or won’t be interviewed, are we at the point where the media is inherently untrustworthy?


Mike Rowe thinks so, and he's OK with that. The Dirty Jobs celeb asserted that the existence of astoundingly "real" AI deep fakes will make the public skeptical about anything they read, hear or see. Fewer mindless sheep! Few experts share Mr. Rowe's optimism – if indeed it belongs to Mr. Rowe.


Hence, unions. Unionized journalists – journalists whose work is Certified Human – will carve out a valued space in the AI news world of tomorrow. There will come a day when you can know that what you read, see or hear was made by humans for humans. Will it make the information more or less trustworthy? Yes. Yes it will.
