Multitask AI models help robots tackle complex tasks with improved efficiency

New study finds multitask-trained robots learn faster and adapt to complex tasks with less training.

Robots may soon learn new skills faster and with far less training, thanks to advances in imitation learning. New research from a team at the Toyota Research Institute (TRI) shows that policies trained on large, multitask datasets—known as large behavior models (LBMs)—significantly outperform traditional single-task approaches.

Using nearly 1,700 hours of training data and 1,800 real-world trials, these systems demonstrated the ability to handle complex manipulation tasks, from assembling a breakfast tray to installing a bicycle brake rotor. According to the researchers, the findings highlight how multitask learning could accelerate the deployment of more adaptable, efficient robotic systems.

“Our findings largely support the recent surge in popularity of LBM-style robot foundation models, adding to evidence that large-scale pretraining on diverse robot data is a viable path toward more capable robots,” said Jose Barreiros, a researcher at TRI, in a statement.

Smarter robot learning

Researchers are advancing toward general-purpose robots capable of operating in real-world environments. While robots are physically capable, true autonomy remains limited. Visuomotor learning—especially behavior cloning from human demonstrations—is helping bridge this gap, enabling robots to perform complex tasks in challenging conditions without explicit programming.

However, traditional single-task models often struggle to generalize beyond their training scenarios, limiting their adaptability. To address this, researchers are increasingly turning to LBMs, which are trained on extensive multitask datasets.

In the new study, scientists trained multiple LBMs on approximately 1,700 hours of robot demonstrations spanning more than 500 diverse tasks, combining both proprietary and publicly available data. Tasks ranged from basic pick-and-place actions to more advanced, multi-step activities such as slicing an apple or assembling a breakfast tray.

According to the researchers, the models were rigorously evaluated through 1,800 real-world trials and large-scale simulations, including complex, multi-step tasks requiring precision and tool use.

Efficient AI robotics

The results of the study showed that fine-tuning LBMs into task-specific specialists delivers stronger performance than training models from scratch.
With the same amount of data, fine-tuned models perform better, and in many cases achieve similar results using three to five times fewer demonstrations. This data efficiency is particularly valuable for robotics, where collecting task-specific demonstrations can be costly and time-consuming.

Researchers found that LBMs trained on diverse, multitask datasets adapt more effectively to new tasks and unfamiliar conditions. Their performance advantage becomes even more evident under distribution shift, when real-world scenarios differ from the training environment. The models also showed steady improvement as pretraining data increased, with no clear performance plateau at the tested scale.

However, multitask models without fine-tuning did not consistently outperform single-task systems. This limitation is partly linked to weaker language guidance in current architectures, though larger vision-language-action models may help address the issue in future work.

According to the TRI team, the study also underscores challenges in evaluating robotic systems. Despite extensive trials, factors such as environmental variability and training differences can influence outcomes. The researchers highlight the importance of large sample sizes, controlled experiments, and rigorous statistical methods to ensure reliable comparisons.

Overall, the team claims the findings reinforce multitask pretraining as a promising approach for building more adaptable and efficient robotic systems.
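The pretrain-then-fine-tune pattern the study describes can be sketched with a deliberately tiny linear "policy". Nothing below comes from TRI's models or data: the linear model, dimensions, noise levels, and demo counts are all illustrative assumptions, chosen only to show the mechanics of warm-starting from pooled multitask demonstrations versus training from scratch on a handful of task-specific ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_policy(S, A, reg=1e-3):
    """Closed-form ridge regression: a toy stand-in for behavior cloning,
    mapping observed states S to demonstrated actions A."""
    d = S.shape[1]
    return np.linalg.solve(S.T @ S + reg * np.eye(d), S.T @ A)

def finetune(W, S, A, steps=200, lr=0.01):
    """A few gradient steps on squared error, starting from weights W."""
    for _ in range(steps):
        W = W - lr * (S.T @ (S @ W - A)) / len(S)
    return W

# 1) Multitask "pretraining": plentiful pooled demos that share structure.
W_shared = rng.normal(size=(8, 2))                    # shared task structure
S_pre = rng.normal(size=(1000, 8))                    # states
A_pre = S_pre @ W_shared + 0.05 * rng.normal(size=(1000, 2))
W_pre = fit_policy(S_pre, A_pre)

# 2) New task: close to the shared structure, but only 10 demos available.
W_task = W_shared + 0.1 * rng.normal(size=(8, 2))
S_new = rng.normal(size=(10, 8))
A_new = S_new @ W_task + 0.05 * rng.normal(size=(10, 2))

W_ft = finetune(W_pre, S_new, A_new)                  # warm start (pretrained)
W_scratch = finetune(np.zeros((8, 2)), S_new, A_new)  # same data, no pretraining

# Evaluate both on held-out states from the new task.
S_test = rng.normal(size=(500, 8))
A_test = S_test @ W_task
err_ft = float(np.mean((S_test @ W_ft - A_test) ** 2))
err_scratch = float(np.mean((S_test @ W_scratch - A_test) ** 2))
print(f"fine-tuned MSE: {err_ft:.3f}  from-scratch MSE: {err_scratch:.3f}")
```

Because the new task shares structure with the pretraining pool, the warm-started weights typically reach much lower held-out error than training from scratch on the same ten demonstrations, loosely mirroring the data-efficiency effect the study reports at far larger scale.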


Source: IntEngineering
