Existential threat my ass
Back in May, The New York Times reported that “A group of industry leaders warned that the artificial intelligence technology they were building might one day pose an existential threat to humanity and should be considered a societal risk on a par with pandemics and nuclear wars.” And there I was thinking AI was dangerous.
The COVID-19 pandemic killed around 9.5m people. Our planet is home to 8.5b souls. The Wuhan flu took out a mere 0.1118 percent of the population. Most of whom were overweight. If the AI bigwigs had compared AI to refined sugar and processed foods, then I’d be scared.
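For the pedants, here’s a back-of-envelope check of that figure — a quick sketch using the round numbers above (roughly 9.5 million deaths against 8.5 billion people):

```python
# Back-of-envelope mortality math, using the round figures cited above.
deaths = 9.5e6        # ~9.5 million COVID-19 deaths (the figure used here)
population = 8.5e9    # ~8.5 billion people on Earth (the figure used here)
print(f"{deaths / population:.4%}")  # prints 0.1118%
```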
Yeah, I know: the Bubonic Plague. The Black Death wiped out as much as 60 percent of the European population. But that’s not 100 percent, and look what it did for wage growth amongst peasants.
Also note: the Plague wreaked havoc centuries before Pfizer, Moderna, AstraZeneca and Johnson & Johnson discovered force majeure. I mean, used modern science to save lives.
As for AI and nuclear war, the comparison reminds me of a bumper sticker from the ’70s: “One nuclear bomb can ruin your whole day.” I said it then, I’ll say it now: that depends. Nuking Ukraine would suck, but I don’t think my local Starbucks would be affected.
In fact, when it comes to nuclear bombs, the more the merrier! If it weren’t for the threat of mutually assured destruction, we would’ve lived Red Dawn a long time ago. It’s more like “One nuclear bomb can stop another nuclear bomb.”
I don’t mean to downplay the cataclysmic danger posed by nuclear war. Much. After all…
Last January, the Science and Security Board of the Bulletin of the Atomic Scientists reset the “Doomsday Clock.”
Their latest estimate: it’s 90 seconds to midnight - and nothing good happens after midnight. Hang on! A minute thirty? That’s worryingly precise. How’s that then?
The Doomsday Clock was originally devised to gauge how close the world stood to nuclear annihilation. It has since been expanded to cover climate change and other threats driving us ever closer to extinction.
This mission creep intel was brought to you by the Electric Infrastructure Security Council. The EIS Council wants you to know that a Black Sky Event – the collapse of the worldwide electrical grid – is a greater threat than both nukes and AI.
They would say that, wouldn’t they? Still…
Even if AI does try to kill us all because we’re annoying and inconvenient and stand in the way of efficient paper clip production (apparently), meh. If there’s one thing humans are good at, it’s surviving.
Ipso facto: all the secret underground shelters Davos congregants built to survive AI petulance, nuclear war, plague, black sky events and the next Marvel movie.
Bottom line: we beat Skynet once. We can do it again.
Here in the real world, Elon Musk’s Neuralink is developing implantable brain-machine interfaces (BMIs) to connect human brains to computers and other digital devices. Ba-bam! We won’t fight AI. We’ll be AI.
Meanwhile, aliens…
There’s this strange belief that any alien civilization advanced enough to conquer the vastness of the universe would have evolved not to be murderous, uh, whatevers. This accounts for mankind’s greatest mistake (so far): putting a map to Earth on the Pioneer 10 and Pioneer 11 spacecraft.
Truth be told, life – all life – is about obtaining resources. Other than the alien equivalent of selfies, there’s only one reason they’d visit Earth: to take our shit.
While we might cut a deal, chances are aliens will get the better part of it. They’ll suck up our oceans, farm us for food, or do something else that could spell End of Days™.
Fear not. If we face an existential threat from a more technologically advanced life form, we can do what the stone-age Jivaro did when confronted by more advanced civilizations: kill our enemies and eat them. (Head shrinking’s a bonus.)
Too glib? Rest assured there are some fine minds working on kicking alien ass. An Introduction to Planetary Defense: A Study of Modern Warfare Applied to Extra-Terrestrial Invasion is an excellent guide.
Speaking to Reuters, author Travis S. Taylor gives us some invaluable insight into protecting our species.
Failure to prepare may mean mankind will have to dig in and fight with improvised weapons and hit-and-run tactics, much the same way Islamic extremists have battled the U.S. military in Iraq, Taylor said.
“You’d have to create an insurgency, a mujahideen-type resistance,” Taylor said. “The insurgents know how to win this war against us. It also tells us that if we were attacked by aliens, this is our best defense.”
Terminator redux!
Along those lines, I watched a documentary where a scientist suggested digging a big hole, putting a huge iron cover over it, then blasting it into space to destroy alien space ships. And what explosive would we use for this mammoth manhole cover? A nuclear bomb!
Breathe. Humans were singing it’s the end of the world as we know it long before R.E.M. and AI. Plague, famine, tsunamis, genocide, supervolcanoes, the Ice Age, Marvel movies – you name it, we’ve survived it. I see no reason why humanity can’t overcome Frankensteinian AI.
It won’t be easy. It won’t be fun. But yes we can! Is the risk of AI turning against us worth developing better video games and eliminating the drudgery of PowerPoint presentations? What do you think?