Humans learn new skills from videos all the time, so why not robots? That’s the crux of a new preprint paper (“”) published by researchers at the Istituto Italiano di Tecnologia in Genova, Italy, and the Australian Centre for Robotic Vision, which describes a deep learning framework that translates video clips into natural language commands that can be used to train semiautonomous machines.
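The paper’s code isn’t reproduced here, but the general recipe it describes, encoding a video clip and then decoding a short natural language command a robot can act on, can be sketched roughly as below. Everything in this sketch (the GRU encoder-decoder, layer sizes, the toy vocabulary, and the `VideoToCommand` name) is an illustrative assumption rather than the authors’ actual architecture.

```python
# Hypothetical sketch, not the authors' code: a minimal video-to-command
# encoder-decoder, assuming pre-extracted per-frame visual features and a
# small command vocabulary. All dimensions are illustrative.
import torch
import torch.nn as nn

class VideoToCommand(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=256, vocab_size=100):
        super().__init__()
        # Encoder: summarize a sequence of frame features into a single state.
        self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Decoder: emit command tokens (e.g. "pick", "up", "cup") one at a time.
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.decoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, frame_feats, command_tokens):
        # frame_feats: (batch, num_frames, feat_dim) visual features per frame
        # command_tokens: (batch, cmd_len) token ids of the target command
        _, h = self.encoder(frame_feats)       # h: (1, batch, hidden_dim)
        emb = self.embed(command_tokens)       # (batch, cmd_len, hidden_dim)
        dec_out, _ = self.decoder(emb, h)      # condition the decoder on the video state
        return self.out(dec_out)               # (batch, cmd_len, vocab_size) logits

# Usage sketch: train with cross-entropy to predict the next command token.
model = VideoToCommand()
feats = torch.randn(2, 30, 512)                # 2 clips, 30 frames each
tokens = torch.randint(0, 100, (2, 6))         # 2 commands, 6 tokens each
logits = model(feats, tokens[:, :-1])          # predict tokens 1..5 from 0..4
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 100), tokens[:, 1:].reshape(-1))
```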

“While humans can effortlessly understand the actions and imitate the tasks by just watching someone else, making the robots to be able to perform actions based on observations of human activities is still a major challenge in robotics,” the paper’s authors wrote. “In this work, we argue that there are two main capabilities that a robot must develop to be able to replicate …”
