Inside Google’s 7-year mission to give AI a robotic body

Often during the evenings, and sometimes on weekends, when the robots weren’t busy with their daily tasks, Catie and her makeshift team would assemble a dozen of them in a large atrium in the middle of X. The flock of robots moved together, sometimes hesitantly, but always in interesting patterns, with what often seemed like curiosity and sometimes even grace and beauty. Tom Engbersen, a roboticist from the Netherlands who paints replicas of classical masterpieces in his spare time, began a side project with Catie exploring how dancing robots might respond to music, or even play an instrument. At one point he had a novel idea: What if the robots became instruments? That kicked off an exploration in which each joint of a robot played a sound when it moved. When the base moved, it played a bass tone; when a claw opened and closed, it produced a bell sound. When we turned on music mode, the robots created unique orchestral scores every time they moved. Whether they were walking down a hallway, sorting trash, cleaning tables, or “dancing” as a flock, the robots moved and sounded like a new kind of approachable creature, unlike anything I had experienced before.
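To make the idea concrete, the joint-to-sound mapping can be pictured as a simple lookup from joint movements to instrument voices. The sketch below is purely illustrative and is not Everyday Robots' actual code; the joint names, the tone frequencies, and the `tone`/`render_motion` helpers are assumptions made for the example, which just writes one short note per joint event to a WAV file.

```python
# Illustrative sketch only: map robot joint movements to sounds, in the spirit
# of the "robots as instruments" experiment described above. Joint names,
# frequencies, and the WAV-file output are assumptions, not the real system.
import math
import struct
import wave

SAMPLE_RATE = 44100

# Hypothetical mapping from joint name to a tone frequency (Hz).
JOINT_VOICES = {
    "base": 110.0,      # low, bass-like tone when the base moves
    "shoulder": 220.0,
    "elbow": 330.0,
    "claw": 880.0,      # high, bell-like tone when the claw opens or closes
}

def tone(frequency: float, duration: float = 0.25, volume: float = 0.4) -> bytes:
    """Render a short sine tone as 16-bit mono PCM samples."""
    n = int(SAMPLE_RATE * duration)
    samples = (
        int(volume * 32767 * math.sin(2 * math.pi * frequency * i / SAMPLE_RATE))
        for i in range(n)
    )
    return b"".join(struct.pack("<h", s) for s in samples)

def render_motion(joint_events: list[str], path: str = "motion.wav") -> None:
    """Write one tone per joint event to a WAV file, in order of movement."""
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(SAMPLE_RATE)
        for joint in joint_events:
            wav.writeframes(tone(JOINT_VOICES.get(joint, 440.0)))

# Example: the base moving and then the claw closing produce a bass note
# followed by a bell-like note.
render_motion(["base", "claw"])
```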

This is just the beginning

By late 2022, the debate over whether to pursue an end-to-end learning system or a hybrid one was still intense. Peter and his teammates, along with our colleagues at Google Brain, had been working on applying reinforcement learning, imitation learning, and transformers (the architecture behind LLMs) to a variety of robot tasks. They were making good progress in showing that robots could learn tasks in ways that made them general, robust, and resilient. Meanwhile, the applications team led by Benjie was working on combining AI models with traditional programming to create robot prototypes and services that could be deployed among people in real-world settings.

Meanwhile, Project Starling, as Catie’s multi-robot installation came to be called, was changing the way I felt about these machines. I noticed how people were drawn to the robots with wonder, joy, and curiosity. It helped me understand that how robots move among us, and how they sound, will provoke deep human emotions; it will be an important factor in how, or even whether, we welcome them into our daily lives.

In other words, we were about to cash in on the biggest bet we’d ever made: AI-powered robots. AI was giving them the ability to understand what they heard (spoken and written language) and translate it into actions, and to understand what they saw (camera images) and translate it into scenes and objects they could act on. And as Peter’s team had shown, robots had learned to pick up objects. After more than seven years, we were deploying fleets of robots in several Google buildings. A single type of robot performed a variety of services: autonomously cleaning cafeteria tables, inspecting conference rooms, sorting trash, and more.

That’s when, in January 2023, two months after OpenAI introduced ChatGPT, Google shut down Everyday Robots, citing cost concerns. The robots, along with a small number of people, ended up at Google DeepMind to carry on the research. Though cost and timelines were the stated reasons, everyone involved was left shaken.

A national imperative

In 1970, for every person over 64 in the world, there were 10 people of working age. By 2050, there will likely be fewer than four. We are running out of workers. Who will care for the elderly? Who will work in factories, hospitals, and restaurants? Who will drive trucks and taxis? Countries like Japan, China, and South Korea understand the immediacy of this problem. For them, robots are not optional; those nations have made investing in robotic technologies a national imperative.

Embodying AI in the real world is both a matter of national security and a massive economic opportunity. If a tech company like Google decides it can’t invest in moonshot initiatives like AI-powered robots that will complement and supplement the workers of the future, who will? Will Silicon Valley or other startup ecosystems jump on board, and if so, will there be access to patient, long-term capital? I have my doubts. There is a reason Everyday Robots wasn’t a venture-backed startup: building highly complex systems at this scale far exceeds the patience that venture funding has historically had. While the United States is at the forefront of AI, building its physical manifestation, robots, requires skills and infrastructure in which other nations, most notably China, are already ahead.

Robots didn’t arrive in time to help my mother, who passed away in early 2021. Our frequent conversations toward the end of her life convinced me more than ever that a future version of what we started at Everyday Robots is coming. In fact, it can’t come soon enough. So the question we’re left with is how, and whether, this kind of change and this kind of future will happen. I remain curious and concerned.


