On Wednesday, researchers from DeepMind released a paper ostensibly about using deep reinforcement learning to train miniature humanoid robots in complex movement skills and strategic understanding, resulting in efficient performance in one-on-one soccer matches, with skills trained in simulation and transferred to real robots.
But few paid attention to those details, because alongside the paper, the researchers also released a 27-second video showing an experimenter repeatedly pushing a tiny humanoid robot to the ground as it attempts to score. Despite the interference (which no doubt violates the rules of soccer), the tiny robot manages to punt the ball into the goal anyway, marking a small but notable victory for underdogs everywhere.
On the demo website for “Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning,” the researchers frame the merciless toppling of the robots as a key part of a “robustness to pushes” evaluation, writing, “Although the robots are inherently fragile, minor hardware modifications together with basic regularization of the behavior during training lead to safe and effective movements while still being able to perform in a dynamic and agile way.”
Cutting through the technical jargon, machine learning experts will undoubtedly find a technical breakthrough somewhere in there. But like us, people on social media instead focused on the obvious: Can’t they leave those cute little guys alone?
“Just let him play soccer in peace,” tweeted Kenneth Cassel.
The viewer reaction reminds us of the famous Boston Dynamics demo videos in which robots repeatedly get prodded with sticks, tripped, and otherwise thwarted. All in the name of testing, of course.
So, back to DeepMind, where we’ll be serious for a second. What’s behind the little robot’s ability to keep getting up in its relentless drive to score? The researchers used deep reinforcement learning, a type of AI training that rewards desired behaviors through trial and error, to teach the humanoid robots to play a one-on-one soccer game. They first trained individual skills, such as getting up and kicking, in isolation, then composed those skills into full gameplay in a match setting. (Other demonstration videos on the paper’s demo site show two of the tiny metal humanoids playing soccer against each other.)
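For the curious, here is a deliberately tiny sketch of that “train skills in isolation, then compose them” idea. To be clear, this is not DeepMind’s code: the paper uses deep reinforcement learning with neural network policies and physics simulation, while this toy uses simple tabular Q-learning on a one-dimensional “pitch,” and every name and number in it is our own invention.

```python
# Toy sketch (not DeepMind's method): train two low-level "skills" with
# tabular Q-learning, then compose them with a simple high-level switch.
import random

ACTIONS = [-1, 0, 1]  # move left / stay / move right on a 1-D pitch


def train_skill(reward_fn, episodes=2000, size=10, eps=0.1, alpha=0.5, gamma=0.9):
    """Q-learning for one isolated skill on a line of `size` positions."""
    q = {(s, a): 0.0 for s in range(size) for a in ACTIONS}
    for _ in range(episodes):
        s = random.randrange(size)
        for _ in range(20):
            # Epsilon-greedy action selection.
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), size - 1)
            r = reward_fn(s2)
            # Standard Q-learning update.
            best_next = max(q[(s2, act)] for act in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q


# Skill 1: "approach the ball" (ball fixed at cell 7 in this toy).
chase = train_skill(lambda s: 1.0 if s == 7 else -0.01)
# Skill 2: "fall back toward our own goal" (goal at cell 0).
defend = train_skill(lambda s: 1.0 if s == 0 else -0.01)


def composed_policy(state, have_ball):
    """High-level switch: chase the ball when we lack it, defend otherwise."""
    q = chase if not have_ball else defend
    return max(ACTIONS, key=lambda act: q[(state, act)])


print(composed_policy(3, have_ball=False))  # likely 1: a step toward the ball
```

In the real system the composition step is far richer, with learned transitions between skills rather than a hard-coded rule, but the basic shape, reusable skills glued together by a higher-level policy, is the same.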
“The resulting policy exhibits robust and dynamic movement skills such as rapid fall recovery, walking, turning, kicking and more; and transitions between them in a smooth, stable, and efficient manner—well beyond what is intuitively expected from the robot,” the researchers write. “The agents also developed a basic strategic understanding of the game, and learned, for instance, to anticipate ball movements and to block opponent shots.”
It’s impressive work, but one cannot help but think that if, someday, a machine intelligence becomes embodied and aware enough to look back and understand its history, it might prove unwise to have toppled the little guys with such glee. Or, as someone on Twitter put it, “It’s all fun and games until the robot starts pushing back.”