Should we learn or optimize the movements of our robots?
Speaker: Nicolas Mansard, LAAS-CNRS / ANITI
Generating fine movements of advanced robots, for both locomotion and manipulation, remains an open challenge. Recent progress in legged locomotion suggests that a solution may be within reach, though its exact form remains to be defined. Two paradigms of motion generation are being explored in parallel, both casting the robot's behavior as the solution of a so-called optimal control problem: either optimizing a prediction of the robot's movement in the near future while it is moving (predictive control), or optimizing the control policy off-line (reinforcement learning). In this presentation, we will discuss both approaches and show that the future may lie in the convergence of the two paradigms into a single numerical approach, which would learn off-line and generalize on-line. We will also emphasize that much of the recent progress in robotics stems from novelties in robot hardware, where artificial intelligence is also expected to help in optimizing the design of our robots.