More OpenAI researchers slam company on safety, call for ‘right to warn’ to avert ‘human extinction’

Jun 4, 2024 | Technology


A group of 11 current and former OpenAI researchers, joined by a current Google DeepMind employee who previously worked at Anthropic and another former DeepMind researcher, have signed a new open letter calling on OpenAI and similar companies to commit to four principles that would protect whistleblowers and critics who raise AI safety concerns.

“We also understand the serious risks posed by these technologies,” the letter, titled “Right to Warn,” states. “These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”

What is a ‘Right to Warn’ for AI systems?

Among the concerns expressed in the letter are the lack of proper oversight, the influence of profit motives, and the suppression of dissenting voices within organizations working on cutting-edge AI technologies.

To address these concerns, the signatories call on AI companies to voluntarily commit to the following four principles:


Refraining from enforcing agreements that prohibit criticism of the company over risk-related concerns, and from retaliating against employees who raise such criticism

Establishing a verifiably anonymous process for employees to raise risk-related concerns to the company’s board, to regulators, and to independent organizations

Encouraging a c …


