Money and politics continue to merge in AI safety — including a new Super PAC | The AI Beat

Mar 12, 2024 | Technology

Back in January, I spoke to Mark Beall, a co-founder and then-CEO of Gladstone AI, a consulting firm that released a bombshell AI safety report yesterday, commissioned by the State Department. The announcement was first covered by TIME, which highlighted the report’s AI safety action-plan recommendations — that is, “how the US should respond to what it argues are significant national security risks posed by advanced AI.”

When I first spoke to Beall, we chatted for a story I was writing about the debate among AI and policy leaders about a “web” of effective altruism adherents in AI security circles in Washington, DC. There was no doubt that Beall, who told me he was a former head of AI policy at the U.S. Department of Defense, felt strongly about the need to manage the potential catastrophic threats of AI. In a post on X that shared my story, Beall wrote that “common sense safeguards are needed urgently before we get an AI 9/11.”

For many, the term “AI safety” is synonymous with tackling the “existential” risks of AI — some may be drawn to those concerns through belief systems such as effective altruism (EA), or, as the report maintained, from working in ‘frontier’ AI labs like OpenAI, Google DeepMind, Anthropic and Meta. The Gladstone AI authors of the report said they spoke with more than 200 government employees, experts, and workers at frontier AI companies as part of their year-long research.

However, others pushed back on the report’s findings on social media: Communication researcher Nirit Weiss-Blatt pointed out that Gladstone AI co-author Edouard Harris has weighed in on what many consider a far-out, unlikely “doomer” scenario called the “paperclip maximizer” problem. On the community blog …
