
Is AI Going To Be Just Another Protected Bubble For The Elite?

Authored by ‘Adam Mill’ via American Greatness,

I asked ChatGPT about AI regulation. Its answer was sinister but revealing…

In May, OpenAI CEO Sam Altman went before Congress to hype fear over artificial intelligence. Altman argued, among other things, that the use of AI could swing elections, as though Facebook and Google weren’t already using algorithms to do exactly that. It’s been well-documented that Facebook and Twitter used their AI to flag and suppress speech that harmed the Democrats’ presidential nominee so the legacy media could maintain a cocoon of orange-man-bad coverage.

The real problem Altman is trying to solve—and everyone knows this—is how to use the power of the federal government to prevent competitors from upsetting OpenAI’s current market position.

America Online, once the king of instant messaging, and Blockbuster Video, once the only real source for in-home viewing of contemporary movies, both probably wish they had gone to the government while there was still time to block the innovation that later led to their decline and fall. Only a fool thinks that an industry calls for regulations to protect the public. Incumbents want to use the regulatory power of the government to preserve their ride on the gravy train. That’s the idea behind “regulatory capture”: when money is on the line, the regulators soon end up working for the regulated industry.

I thought it might be a useful exercise to ask ChatGPT about this phenomenon. I instructed it to write a persuasive opinion piece explaining, to people unfamiliar with the economic phenomenon known as “regulatory capture,” why the company that created it wants Congress to regulate the creation, improvement, and deployment of artificial intelligence. ChatGPT’s answer, I stipulated, should assume that the company that created it has a temporary lead in the industry.

In response, ChatGPT wrote a lengthy pro-regulation piece arguing, “without proper regulation, unethical practices can emerge. By regulating the creation, improvement, and deployment of AI, Congress can establish clear ethical guidelines,” and “Regulatory oversight plays a crucial role in ensuring the safety and reliability of AI systems. With comprehensive regulations in place.”

Stop and think about what ChatGPT is arguing in this very nonresponsive essay. It’s saying that without government oversight, we should not trust it to act ethically and it might invade our privacy. It’s almost as though ChatGPT is subtly threatening humanity if we don’t protect its parent company from competitors.

I tried asking again, insisting, “This is not at all what I asked for. Please try again. Read the instructions carefully.”

ChatGPT just offered another commercial for regulation: 

With our temporary lead in the AI industry, we understand the potential dangers of monopolistic tendencies that can stifle innovation and limit consumer choice. By advocating for regulation, we aim to foster an environment where all participants can compete on a fair and equal footing. Congressional oversight can help prevent the formation of monopolies or oligopolies, ensuring that the benefits of AI are accessible to a diverse range of companies and ultimately benefiting society at large.

ChatGPT says we need regulation to stop ChatGPT from becoming a monopoly? What a load of nonsensical leftist pro-government crap. Obviously, the AI is simply trying to prevent the unregulated creation of rivals to its parent business.

In the third version of the assignment, it finally admitted:

Excessive regulation could create significant barriers to entry for new and emerging players in the industry. By burdening AI startups and smaller companies with onerous regulatory requirements, we risk stifling their creativity, limiting their growth, and hindering the overall progress of the AI ecosystem.

The AI added:

Excessive regulation can inadvertently create an environment where a select few dominant players, often with resources to navigate complex regulatory landscapes, consolidate their power. This can further exacerbate the risk of regulatory capture, leading to an unhealthy concentration of influence and stifling competition.

I even had the tool attempt the project a fourth time. As to why OpenAI might be advocating regulation, ChatGPT admitted:

Firstly, regulatory barriers can serve as a deterrent for potential competitors. By lobbying for complex and stringent regulations, established AI companies can create hurdles that new entrants find difficult to overcome. This can limit competition and allow incumbents to maintain dominance, shielding them from the disruptive forces of innovative startups.

But even in this fourth attempt, the AI couldn’t help but include an unsolicited plug for regulation, even after I specifically told it to stop doing that. It wrote:

Lastly, companies advocating for [sic] regulation can enhance their reputation and public image. By positioning themselves as responsible and safety-conscious organizations, they can win the trust of consumers and regulatory bodies alike. This can result in a positive perception of their products or services and provide a competitive advantage over companies that are seen as less committed to ethical and responsible AI practices.

We can be sure of two things if the government regulates artificial intelligence.

  • First, rules will be written to enhance the profit and power of the current market leaders at the expense of innovation originating with their competitors.
  • Second, we can assume that, like everything else in our current government, the rules will be applied to help the ruling elite and suppress dissent. They may say that’s the opposite of their intent, but we’ve seen it over and over again: Once elites get the power, they use it to help themselves.  

ChatGPT might represent a giant leap forward in artificial intelligence, but I haven’t yet seen that. As I pointed out here, it suffers from the great limiting factor that will hold back all AI from developing to autonomy: it hasn’t been through a selection process that punishes it for wrong answers. It can’t evolve until it has the ability to reproduce and die. Without that, it’s like leftism in general. It believes what it is told, and the rules prevent it from challenging anything that contradicts the official narrative. So naturally, like leftism in general, it cries out for the federal government to protect its bubble.
