Robert Farago

AI Does LSD

Google Bard's Tripping Balls


Luxury watchmaker Audemars Piguet just appointed Ilaria Resta their new CEO. Ms. Resta is the first woman to head up the privately held Swiss watchmaker. She’s also completely devoid of watch industry experience. Intrigued, I asked Chat GPT to “give me a biography on Ilaria Resta.” Oh dear…

Ilaria Resta is an accomplished business executive and leader known for her expertise in the aerospace and defense industry. Born on June 15, 1978, in Milan, Italy, Resta developed a passion for science and technology from an early age…
Resta's achievements and reputation in the industry led to her appointment as President of Leonardo Helicopters Americas in 2019. In this role, she took on the responsibility of overseeing the company's operations and business development in the Americas region. Her strategic guidance and collaborative approach helped strengthen Leonardo's presence in the market, leading to significant business expansion.

Nope. Ilaria Resta didn’t jump from choppers to watches.


By all non-AI accounts, she’s ex-Procter & Gamble, a branding maven who pimped Tide, Ariel, Fairy, Swiffer, Duracell, Pantene, Head & Shoulders, Herbal Essences and Aussie.


Most recently, Ms. Resta was the President of the global perfumery colossus Firmenich.



Who was the President of Leonardo Helicopters Americas in 2019? Chat GPT informs us that “The President of Leonardo Helicopters Americas in 2019 was Gian Piero Cutillo.” True! (Portrait courtesy ainonline.com) Someone, I mean something, is tripping balls.


As I understand it, Chat GPT’s “large language model” works by determining the next most likely word. And the next. And the next. That seems like a strange way to “know” something. One that led to this digital daydream.
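For the curious, here’s a minimal sketch of that predict-append-repeat loop. The toy probability table below is a hypothetical stand-in; a real LLM scores a vocabulary of tens of thousands of tokens using billions of learned weights. Note what’s missing: any step that checks whether the output is true.

```python
import random

# Toy stand-in for a language model: given the text so far, assign
# probabilities to candidate next words. (A real model computes these
# from learned weights; the numbers here are invented for illustration.)
def next_word_probabilities(context):
    if context.endswith("Ilaria Resta is"):
        return {"an": 0.6, "the": 0.3, "a": 0.1}
    return {"accomplished": 0.4, "executive": 0.3, "known": 0.3}

def generate(prompt, n_words=5):
    text = prompt
    for _ in range(n_words):
        probs = next_word_probabilities(text)
        words, weights = zip(*probs.items())
        # Sample a next word in proportion to its probability, append it,
        # and repeat. The loop optimizes plausibility, not truth.
        text += " " + random.choices(words, weights=weights)[0]
    return text

print(generate("Ilaria Resta is"))
```

Run it a few times and you get different, equally confident continuations. That’s the digital daydream in miniature.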


Hmmm. Hey Bard! Wassup with that?

AI chatbots can produce erroneous information for various reasons such as incomplete or inaccurate data, programming errors, or incorrect interpretation of user inputs. However, with continuous learning and improvements in natural language processing, AI chatbots are becoming more accurate and reliable.

So the data upon which an answer is based might be dog shit. The process used to draw information from that steaming pile might be defective.


Or AI stepped in it because it failed to understand the user’s request. As for the promise that AI will get its shit together, the sun will come out tomorrow, bet your bottom dollar, tomorrow.


The problem with any and all AI chatbots (not mentioned by any and all AI chatbots): accuracy is a relative concept. Sure, facts are facts. But the way facts are presented alters how we perceive them. Context matters.



Just for fun (yeah I’m that guy), I asked Bard for “negative information” about the new head of Audemars Piguet. “I understand,” Google’s AI chatbot replied. “Can you provide me with more information on what you would like me to do with this text?”


Uh, like what? Post it to my LinkedIn? Pop it into my Substack? “List the information requested.” Reply: “I see. Just to confirm, you would like me to list the negative information about the new head of Audemars Piguet?” I can handle the truth!

I'm sorry, but as an AI language model, I don't have access to the latest news or information about the new head of Audemars Piguet. However, I can suggest that you conduct a search on reputable news websites or consult with a news service provider to obtain the information you are looking for.

I thought Bard was wired into the right-now internet. What’s this? It’s been renamed Monica, powered by an out-of-date Chat GPT. More importantly, unlike the Perplexity AI app, “Bardica” doesn’t provide links to double-check the information provided. That’s a HUGE mistake. Or is it…


I reckon the lawyers at Chat GPT/Bard/Monica know that listing the sources for their answers opens them up to The Mother of All Copyright Lawsuits. Simply telling a user to check “reputable news websites or consult with a news service provider” is a strategy for avoiding ruinous legal jeopardy. By the same token, negative information is legal catnip. Chat GPT has an app for that! For example…


When I asked Bardica to “tell me the problems with renewable energy,” it did so. And then closed by adding “Despite these challenges, renewable energy is still a promising solution for reducing greenhouse gas emissions and transitioning to a more sustainable energy future.”


That, my friends, is context. Or, to use the correct term, bias. On top of which AI makes shit up. Sorry, “hallucinates.” A term carefully chosen to convince users it didn’t intend to make shit up.


Did I say bias? I meant to say “guardrails.” Protecting the great unwashed masses from “hate speech” and “misinformation.” I mean, why wouldn’t we trust the same tech giants caught filtering information to undermine Donald Trump and protect Hunter Biden to defend and extend the free flow of information?



I guess we’re left hoping that Elon Musk’s TruthGPT will be a “maximum truth-seeking AI that tries to understand the nature of the universe.”


Note to Elon: the nature of the universe is an endless fight for power. Although Chat GPT has its own perspective.
