The Everyday Robot project originated from a multimillion-dollar mess. In 2013, Google executive Andy Rubin stepped down from leading the company’s Android mobile software division and went on a robot spending spree with the company checkbook. Google acquired a gaggle of startups with technologies ranging from full humanoids to industrial robotic arms to the prancing legged creations of MIT spinout Boston Dynamics.
Rubin never publicly articulated a clear strategy for that mechanical menagerie. The problem was left to others when he departed Google in late 2014, an exit later reported to have been caused by sexual assault allegations against him.
Brondmo joined X in 2016, after Alphabet’s leaders decided the lab was the best home for much of its disjointed robotics talent and technology. (Boston Dynamics was sold to Japanese conglomerate SoftBank in 2017.) X’s leadership created multiple moonshot projects from Google’s robot leftovers. Everyday Robot, led by Brondmo, is the first to become public.
The project’s heart, on the second floor of the X building, could be seen as a satire on office life. Mixed in with the desks of X engineers, in a prime spot near a window overlooking turning foliage, nearly 30 gray, one-armed robots toil at individual workstations. Each stands before three trays filled with trash, and spends the day sorting it into trays for recycling, compost, and landfill. When a robot has put everything in its place, it lifts a handle on each tray to tip the sorted trash into a bin below, and a human supervisor places a new collection of refuse to sort. X engineers call it the playpen.
The Sisyphean trash sorting is a test of X’s plan to make robots useful by having them learn from experience. Robots traditionally follow specific instructions written by human coders. That works in controlled environments such as factories, but a robot assisting people in a home or office faces too many varying circumstances for coders to anticipate and respond to them all. “It just becomes this game of whack-a-mole,” says Benjie Holson, a bearded software engineer with a tin robot on his shirt, as the robots sort away in the playpen. “Our big bet is to write programs that have the robot play whack-a-mole by practicing in the field.”
Google’s artificial intelligence research group helped stake that bet. It specializes in machine learning—algorithms that pick up skills from example data—and began applying it to robot control around five years ago. X engineers collaborated on the project and hosted the hardware.
The first fruit of the collaboration was dubbed the arm farm: fourteen industrial robot arms with simple grippers in front of trays of miscellaneous items such as pens, plush toys, and paint brushes. Researchers wrote some initial code to direct the robots to grasp the objects and set them doing it over and over. Data from their successes and failures fed machine learning algorithms that gradually refined the robots’ abilities. After two months and 800,000 attempts to grab things, the robots succeeded in grasping objects more than 80 percent of the time.