Robert Farago

How The Government Should Regulate AI

The Supreme Court's Warhol v. Goldsmith ruling is in...




OpenAI CEO Sam Altman stepped up to the microphone. The man gave him the news. He said, "You must be joking, son. Where did you get those shoes?" Those are lyrics from Steely Dan's Pretzel Logic. I can use them because a) no one's coming for me and b) they're allowable under "fair use" provisions. As for ChatGPT…



If I haven't said it before, I'll say it now: AI chatbots are the greatest theft in human history. OpenAI, Google and the rest have scraped the entirety of the World Wide Web since day one – hoovering up billions of man-hours of other people's work. They use it to sell their services without paying anyone a dime.


They stole – yes, stole – hundreds of millions of websites' copyrighted content. Yeah, about that…

Yesterday, the Supreme Court issued its ruling on Warhol v. Goldsmith.


In a 7-2 decision, the Supremes ruled that Warhol's images of Prince – based on photographer Lynn Goldsmith's portrait – did not constitute "fair use" under U.S. copyright law. It's a blow to Midjourney and other AI image apps that modify copyrighted imagery, which are already facing lawsuits.


"The ruling will cramp artists and creativity because any artwork that uses existing materials is in danger of violating copyright law," Justice Kagan wrote in dissent. So… all of AI? Here's hoping.

The only way to "regulate" AI is to make these companies pay up. The amount of money required to right that wrong runs to tens of billions of dollars. No wonder Altman et al. are already spending tens of millions lobbying Congress to increase government oversight. You know, instead.



Mr. Altman is asking the government to create AI regulations that have nothing to do with copyright. According to Bard (Google’s AI), Altman called for politicians to…


  1. Create a new agency to oversee AI. Altman called for the creation of a new agency, similar to the Food and Drug Administration (FDA), that would be responsible for regulating AI. This agency would set safety standards for AI systems, review new AI systems before they're released to the public, and investigate any potential harms caused by AI systems.

  2. Require AI systems to be transparent. Altman called for AI systems to be transparent, so that users can understand how they work and how they make decisions. This would help to ensure that AI systems are not used in discriminatory or harmful ways.

  3. Hold AI developers accountable. Altman called for AI developers to be held accountable for the harms caused by their systems. This would help to ensure that AI developers take steps to prevent their systems from causing harm.


Government bureaucracy! What could possibly go right? Accountability for harm! Nothing like inviting Uncle Sam to the table to stifle innovation (i.e., competition for Mr. Altman). Transparency? Run that by me again, will ya, Bard?


  1. Require AI developers to publish information about their systems. This information should include the data that the system was trained on, the algorithms that the system uses, and the system's performance on a variety of tasks.

  2. Require AI developers to provide users with control over how their data is used. Users should be able to choose whether or not their data is used to train AI systems, and they should be able to see how their data is being used.

  3. Create a public database of AI systems. This database would list all AI systems that are currently in use, along with information about the systems' developers, the systems' capabilities, and the systems' potential risks.

  4. Create user interfaces that allow users to understand how AI systems work. These interfaces could be used to explain the system's decisions, to show the system's reasoning process, and to allow users to experiment with the system's parameters.

  5. Develop tools that can be used to audit AI systems for bias and discrimination. These tools could be used to identify potential problems with the system's training data, the system's algorithms, or the system's outputs.

  6. Establish standards for AI transparency. These standards could be used to ensure that AI systems are developed and used in a transparent and accountable manner.


Uh, tools to audit bias and discrimination? Who decides what constitutes bias or discrimination? The government, of course! With the help of media moguls, who did such a good job on COVID, the Trump-Russia investigation and the Hunter Biden laptop coverage.


But again, show me the money! You know, for content creators. *crickets chirping* The simple truth is that big companies love them some regulation. As the lawsuits above suggest, the more regs in an industry, the fewer the players.


Regulation costs money. BIG money. That's why there are only ten major carmakers doing business in the United States: General Motors, Ford, Toyota, Honda, Volkswagen, Hyundai, Nissan, Chrysler, Kia and BMW. In the 1940s, before Uncle Sam got heavily involved, there were 100. Regs decimated the industry.


So Sam reckons promoting government AI regulations is the best investment he can make in protecting his company’s lead in the AI chatbot industry. There are dozens if not hundreds of new AI products launching right now. Regulation will cut the majority off at the knees.


As you can tell from this article, I have nothing against AI per se. Hallucinatory as it sometimes is, AI is now the research tool to end all research tools. The summary above saved me hours wading through news websites that can't be trusted. Bard provided concise and, I hope, accurate information in the blink of an eye.

That doesn’t change the simple fact that Bard, OpenAI and the rest are based entirely on out-and-out thievery. The Mother of All Class Action Lawsuits is the only way to stop The Mother of All Copyright Theft. The Supremes opened that door.


That metaphor brings to mind the old adage about shutting the barn door after the horses have bolted. AI chatbots are well on their way to a billion users. Not only that, but you can't stop the signal.


Which brings me to my number one recommendation for government AI oversight: fuck off.

Let the free market decide.


If and when AI decides to take over the world, oh well. Politicians, like the rest of us, will be out of a job. That might be better than letting politicians decide how, when and what AI can do. Pretzel logic?

