OpenAI’s former superalignment leader blasts company: ‘safety culture and processes have taken a backseat’

May 17, 2024 | Technology


Earlier this week, the two co-leaders of OpenAI’s superalignment team — Ilya Sutskever, former chief scientist, and Jan Leike, a researcher — announced within hours of each other that they were resigning from the company.

This was notable not only because of their seniority at OpenAI (Sutskever was a co-founder), but because of what they were working on: superalignment, the development of systems and processes to control superintelligent AI models, ones that exceed human intelligence.

Following their departures, OpenAI’s superalignment team has been disbanded, according to a new article from Wired (where my wife works as editor-in-chief).

Today, Leike took to his personal account on X to post a lengthy thread excoriating OpenAI and its leadership for neglecting “safety” in favor of “shiny products.”


As he put it in one message of his thread on X: “over the past years, safety culture and processes have taken a backseat to shiny products.”



