OpenAI powers a robot that can hand people food, pick up trash, put away dishes, and more

Mar 13, 2024 | Technology


Despite reports of enterprises getting cold feet about generative AI due to cost and accuracy issues, it’s clear that in the world of robotics, the AI age is just starting to take off.

Today, Figure, a robotics startup valued at $2.6 billion and founded less than two years ago by former employees of Boston Dynamics, Tesla, Google DeepMind, and Archer Aviation, showed off its first collaboration with new investor and partner OpenAI, the maker of ChatGPT, and it is undeniably impressive.


Figure co-founder and CEO Brett Adcock posted a video demo on the social platform X showing the company’s full-sized humanoid robot, the Figure 01 (pronounced “Figure One”), interacting with a nearby human and its environment. In the video, the robot follows the person’s instructions, locates and hands over an object (an apple, in this case), describes what it is doing, and converses with the person (albeit with a slightly longer response delay than we would expect in a typical human-to-human conversation). It also identifies, plans, and carries out helpful tasks on its own, in this case picking up trash and putting dishes into a drying rack.

OpenAI + Figure

conversations with humans, on end-to-end neural networks:

→ OpenAI is providing visual reasoning & language understanding
→ Figure’s neural networks are delivering fast, low level, dexterous robot actions

(thread below) pic.twitter.com/trOV2xBoax

— Brett Adcock (@adcock_brett) March 13, 2024

In a scene straight out of a sci-fi film, the video begins with the human saying “Hey Figure One, what do you see right now?” The robot responds: “I see a red apple on the plate in the center of the table, a drying rack with cups and a plate, and you standing nearby with your hand on the table.”



