Is OpenAI’s superalignment team dead in the water after two key departures?

May 15, 2024 | Technology

Join us in returning to NYC on June 5th to collaborate with executive leaders in exploring comprehensive methods for auditing AI models regarding bias, performance, and ethical compliance across diverse organizations. Find out how you can attend here.

It wasn’t just Ilya Sutskever, the former Chief Scientist and co-founder of OpenAI, who departed the company yesterday.

Sutskever was followed out the door shortly afterward by his colleague Jan Leike, co-lead of OpenAI’s “superalignment” team, who announced his departure on X with the simple message “I resigned.”

Leike joined OpenAI in early 2021, posting on X at the time that he “love[d] the work that OpenAI has been doing on reward modeling, most notably aligning #gpt3 using human preferences. Looking forward to building on it!” and linking to an OpenAI blog post.

Leike described some of his work at OpenAI over on his own Substack account “Aligned,” posting in December 2022 that he was “optimistic about our alignment approach” at the company.


Prior to joining OpenAI, Leike worked at Google’s DeepMind AI laboratory.

The departure of the two co-leads of OpenAI’s superalignment team had many on X cracking jokes and wondering whether the company has given up on, or is struggling with, its effort to design ways to control powerful new AI systems, including OpenAI’s even …


