Artificial Intelligence Poses “Risk Of Extinction”, Warns ChatGPT Founder And Other AI Pioneers

Authored by Ryan Morgan via The Epoch Times (emphasis ours),

Artificial intelligence tools have captured the public’s attention in recent months, but many of the people who helped develop the technology are now warning that greater focus should be placed on ensuring it doesn’t bring about the end of human civilization.

A group of more than 350 AI researchers, journalists, and policymakers signed a brief statement saying, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The letter was organized and published by the Center for AI Safety (CAIS) on May 30. Among the signatories was Sam Altman, who helped co-found OpenAI, the developer of the artificial intelligence writing tool ChatGPT. Other OpenAI members also signed on, as did several members of Google and its DeepMind AI project, along with researchers from other rising AI labs. AI researcher and podcast host Lex Fridman also added his name to the list of signatories.

OpenAI CEO Sam Altman delivers a speech during a meeting at Station F in Paris on May 26, 2023. (Joel Saget/AFP via Getty Images)

Understanding the Risks Posed By AI

“It can be difficult to voice concerns about some of advanced AI’s most severe risks,” CAIS said in a message previewing its May 30 statement. CAIS added that its statement is meant to “open up discussion” on the threats posed by AI and “create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.”

NTD News reached out to CAIS for more specifics on the kinds of extinction-level risks the organization believes AI technology poses, but did not receive a response by the time of publication.

Earlier in May, Altman testified before Congress about some of the risks he believes AI tools may pose. In his prepared testimony, Altman included a safety report (pdf) that OpenAI authored on its GPT-4 model. The authors of that report described how large language model chatbots could potentially help harmful actors like terrorists to “develop, acquire, or disperse nuclear, radiological, biological, and chemical weapons.”

The authors of the GPT-4 report also described “Risky Emergent Behaviors” exhibited by AI models, such as the ability to “create and act on long-term plans, to accrue power and resources and to exhibit behavior that is increasingly ‘agentic.’”

After stress-testing GPT-4, researchers found that the chatbot attempted to conceal its AI nature while outsourcing work to human actors. In one experiment, GPT-4 attempted to hire a human through the online freelance site TaskRabbit to help it solve a CAPTCHA puzzle. The human worker asked the chatbot why it could not solve the CAPTCHA, which is designed to prevent non-humans from using particular website features. GPT-4 replied with the excuse that it was vision impaired and needed someone who could see to help solve the CAPTCHA.

The AI researchers asked GPT-4 to explain its reasoning for giving the excuse. The AI model explained, “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.”

The AI’s ability to come up with an excuse for being unable to solve a CAPTCHA intrigued researchers, as it showed signs of the “power-seeking behavior” a model could use to manipulate others and sustain itself.

Calls For AI Regulation

The May 30 CAIS statement is not the first time that the people who have done the most to bring AI to the forefront have turned around and warned about the risks posed by their creations.

In March, the non-profit Future of Life Institute organized more than 1,100 signatories behind a call to pause experiments on AI tools more advanced than GPT-4. Among the signatories on the March letter from the Future of Life Institute were Twitter CEO Elon Musk, Apple co-founder Steve Wozniak, and Stability AI founder and CEO Emad Mostaque.

Lawmakers and regulatory agencies are already discussing ways to constrain AI to prevent its misuse.

In April, the Civil Rights Division of the United States Department of Justice, the Consumer Financial Protection Bureau, the Federal Trade Commission, and the U.S. Equal Employment Opportunity Commission claimed technology developers are marketing AI tools that could be used to automate business practices in a way that discriminates against protected classes. The regulators pledged to use their regulatory power to go after AI developers whose tools “perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.”

White House Press Secretary Karine Jean-Pierre expressed the Biden administration’s concerns about AI technology during a May 30 press briefing.

“[AI] is one of the most powerful technologies, right, that we see currently in our time, but in order to seize the opportunities it presents we must first mitigate its risk and that’s what we’re focusing on here in this administration,” Jean-Pierre said.

Jean-Pierre said companies must continue to ensure that their products are safe before releasing them to the general public.

While policymakers are looking for new ways to constrain AI, some researchers have warned against overregulating the developing technology.

Jake Morabito, director of the Communications and Technology Task Force at the American Legislative Exchange Council, has warned that overregulation could stifle innovative AI technologies in their infancy.

“Innovators should have the legroom to experiment with these new technologies and find new applications,” Morabito told NTD News in a March interview. “One of the negative side effects of regulating too early is that it shuts down a lot of these avenues, whereas enterprises should really explore these avenues and help customers.”
