Toyota Research Institute used generative AI in a 'kindergarten for robots' to teach robots how to make breakfast - or at least the individual tasks required to do so - and it didn't take hundreds of hours of coding, trial and error, and bug fixing. The researchers did it by giving the robots a sense of touch, plugging them into an AI model, and then demonstrating each task.
The sense of touch is a crucial enabler, according to the researchers. By giving the robots the big, pillowy thumb that you see in the video below, the model can 'feel' what it's doing, giving it far more information to work with. The researchers say they're attempting to create 'Large Behavior Models': similar to how LLMs are trained by noting patterns in human writing, Toyota's LBMs would learn by observation, then 'generalize, performing a new skill that they've never been taught,' says Russ Tedrake, MIT robotics professor and VP of robotics research at TRI.
Using this process, the researchers say, they have trained robots in more than 60 challenging skills, such as pouring liquids, using tools, and manipulating deformable objects. By the end of 2024, they want to increase that number to 1,000.
Google's robots, similar to the approach of Toyota's researchers, use the knowledge they have been given to figure out how to do things. Theoretically, a robot trained this way could eventually carry out tasks with little to no instruction beyond the kind of general direction you would give a human being.
But Google's robots, at least, have a way to go, according to the New York Times, which has reported on the search giant's research. The Times notes that this type of work is usually slow and labor-intensive, and that providing enough training data is much harder than simply feeding an AI model gobs of data downloaded from the internet - as illustrated by a robot in the article that identified a banana's color as white.