(alphabetical order)

Workshop in Honor of Jean-Paul Laumond

11 July 2022, Collège de France, Paris

Alin Albu-Schäffer

Alin Albu-Schäffer received his M.S. in electrical engineering from the Technical University of Timisoara, Romania in 1993 and his Ph.D. in automatic control from the Technical University of Munich in 2002. Since 2012 he has been the head of the Institute of Robotics and Mechatronics at the German Aerospace Center (DLR). Moreover, he is a professor at the Technical University of Munich, holding the Chair for “Sensor Based Robotic Systems and Intelligent Assistance Systems” at the Computer Science Department. His research interests include robot design, modeling and control, nonlinear control, flexible-joint and variable-compliance robots, impedance and force control, physical human-robot interaction, and bio-inspired robot design and control. He has received several awards, including the IEEE King-Sun Fu Best Paper Award of the Transactions on Robotics in 2012 and 2014, several ICRA and IROS Best Paper Awards, and the DLR Science Award. He was strongly involved in the development of the DLR light-weight robot and its commercialization through technology transfer to KUKA.

Title: A geometric perspective to robot nonlinear oscillations with application to locomotion

Controlling motion at low energetic cost, from both a mechanical and a computational point of view, certainly constitutes one of the major locomotion challenges in biology and robotics. Jean-Paul Laumond inspired the robotics research community with his geometric approach to understanding biological locomotion principles and transferring them to robots. Along this line of thought, we attempt to demonstrate that robots can be designed and controlled to walk highly efficiently by exploiting resonance effects of the body, increasing performance compared to rigid-body designs. To do so, however, legged robots need to achieve limit-cycle motions of the highly coupled, nonlinear body dynamics. This led us to research the still poorly understood phenomenon of nonlinear modal oscillations. It turns out that differential geometry is the adequate tool to frame this theory. I will present recent results in this direction from my ERC Advanced Grant project M-Runners.

Daniel Andler

Daniel Andler is emeritus professor at Sorbonne Université and a member of the Académie des sciences morales et politiques. He began his academic career as a mathematician, specializing in logic, and then turned to philosophy of science. He is chiefly interested in cognitive science and artificial intelligence, and in their import for education, decision-making and public policy. He was the founder and first director of the department of cognitive studies at the École normale supérieure in Paris. His latest books are La Silhouette de l’humain, quelle place pour le naturalisme aujourd’hui ? and La Cognition, du neurone à la société (co-authored). His book on the significance of the present surge of artificial intelligence is forthcoming.

Title: Is having a body a sufficient and/or necessary condition for an AI system to attain intelligence?

Artificial intelligence is of two minds as regards the issue of intelligence tout court. According to the official position, to ask whether AI systems are truly intelligent is pointless, either because no one has a clear enough notion of what intelligence consists in, or because AI’s aim is not to make intelligent machines, but to make machines which can fill in for intelligent behavior. At the same time, some top AI scientists like to point out that AI systems are “still very stupid”, and the field seems intent on making them less so. Either way, one can ask whether machines could be intelligent. It is often speculated that the body plays an essential role in human cognition, and thus by parity that an AI system requires a body of its own to be intelligent. Is such a body actually necessary? Would it be sufficient? No clear answers are available at this time, but much hangs on what having a body or being embodied consists in for a machine.

Antonio Bicchi

Antonio Bicchi is a scientist interested in robotics and intelligent machines. After graduating in Pisa and receiving a Ph.D. from the University of Bologna, he spent a few years at the MIT AI Lab in Cambridge before taking the first chair in Robotics at the University of Pisa. In 2009 he founded the Soft Robotics Laboratory at the Italian Institute of Technology in Genova, which he still leads. Since 2013 he has been Adjunct Professor at Arizona State University, Tempe, AZ. His work has been recognized with many international awards and has earned him five prestigious grants from the European Research Council (ERC). He launched initiatives such as the WorldHaptics conference series, the IEEE Robotics and Automation Letters, and the Italian Institute of Robotics and Intelligent Machines.

Title: Of bots and men

The Embodied Intelligence philosophy sees cognition as determined and anticipated by the physics of the body which mediates our interaction with the world. Bio-inspired robotics – and probably all robotics – looks at natural systems and tries to reproduce their functions in artifacts. But how can we learn any lesson about the intelligent control of machines made of silicon, steel, or polymers by studying a completely different body, made of neurons and muscles?

Furthermore, the impressive evolution recently undergone by artificial intelligence, virtual reality, and robotics has reached a point where it is now possible to fuse these technologies and create another body for the self. This possibility poses new questions at the core of embodied intelligence.

I will discuss these paradoxes through a few examples, showing how mathematical models of reality can help us abstract from the complexity of natural systems and bring these ideas to bear on novel robotic dimensions. I will also briefly examine a few of the technical, scientific, and philosophical issues related to using robots as avatars and “being a bot”.

John Canny

John Canny is a professor of computer science at UC Berkeley. He is an ACM dissertation award winner and a Packard Fellow. He has worked in computer vision, robotics, machine learning, and human-computer interaction. In robotics, he has worked on algebraic roadmap methods, non-holonomic motion planning, and grasping, interests he shared with Jean-Paul Laumond. His most recent work is on explainable and advisable self-driving vehicle control. This work seeks to improve the acceptance of and trust in these new systems and to make their workings more accessible and malleable for humans.

Title: “Explainable and Advisable Self-Driving Vehicle Control”

Robots tomorrow will need to interact seamlessly with humans in a variety of settings. This is also true of some robots today, and issues of mutual understanding, trust, and negotiating actions are already important for self-driving vehicles. In this talk I will cover our work on explainable and advisable self-driving vehicle models, which spans these issues. Visual attention provides the first mechanism for understanding which objects in a scene influence a controller’s behavior. Raw visual attention was augmented with causal filtering and object-level attention to improve interpretability. A more human-friendly presentation of the model’s attention is via text explanations such as “I slowed down because the light turned yellow”, which still rely on visual attention for grounding. Our text-explanation work highlighted disparities between human and machine drivers, showing that the latter under-attended to cues such as pedestrians. To augment machine models with better real-world understanding, we developed new models which accept “advice” from humans, using the same visual grounding mechanism and training data. I will close with a proposal to systematically evaluate the quality of explanations.

Justin Carpentier

Justin Carpentier is a researcher in Robotics at Inria and École Normale Supérieure, in the Willow research team. His research lies at the interface of optimisation, machine learning, vision, and control for robotics, with dexterous manipulation and agile locomotion as the primary target applications. His research activities also aim to provide roboticists with advanced and versatile software solutions for controlling robots. Previously, he was a PhD candidate and then a post-doc at LAAS-CNRS in the Gepetto team, which specializes in the movement of anthropomorphic systems. His thesis, « Computational foundations of anthropomorphic locomotion », aimed to highlight and question the underlying principles behind the locomotion of natural and artificial anthropomorphic systems.

Title: Robotics: computational foundations of artificial motion

I met Jean-Paul ten years ago during his lecture at the Collège de France. At that time, my vision of robotics was very candid: it was simply about moving a mechanical system with motors from one position to another to automate repetitive tasks. Jean-Paul’s course taught me that robotics is much more than that!  Uniquely, Jean-Paul could convey and show that the language of mathematics was a natural substrate to describe the motion generation problem in robotics, allowing us to perceive robotics as the science of artificial motion, a science in its own right at the interface with other disciplines.

Today, it is this particular vision of robotics, which Jean-Paul helped to establish and promote throughout his career, that I would like to share with you. During this presentation, I will introduce the research work we conducted together within the Gepetto team on the computational foundations of anthropomorphic locomotion, which aimed to establish the links between natural and artificial movement. This will allow me to move on to the significant scientific and societal challenges that animated the debates and reflections I recently had with Jean-Paul.

Alessandro De Luca

Alessandro De Luca is Professor of Robotics and Automation at DIAG, Sapienza University of Rome. He has been the first Editor-in-Chief of the IEEE Transactions on Robotics (2004-08), RAS Vice-President for Publication Activities in 2012-13, General Chair of ICRA 2007, and Program Chair of ICRA 2016. He has been the founding Director of the Master in Control Engineering at Sapienza (2014-19). He received three conference awards (Best paper at ICRA 1998 and BioRob 2012, Best application paper at IROS 2008), the Helmholtz Humboldt Research Award in 2005, the IEEE-RAS Distinguished Service Award in 2009, and the IEEE George Saridis Leadership Award in Robotics and Automation in 2019. He is an IEEE Fellow, class of 2007. His research interests cover modeling, motion planning, and control of robotic systems (flexible manipulators, kinematically redundant arms, underactuated robots, wheeled mobile robots), as well as physical human-robot interaction. He was the scientific coordinator of the FP7 project SAPHARI – Safe and Autonomous Physical Human-Aware Robot Interaction (2011-15). More info at www.diag.uniroma1.it/deluca.

Title: On the Control of Physical Human-Robot Interaction

I will survey the research activities performed in the last years at Sapienza University of Rome on physical Human-Robot Interaction (pHRI), ranging from on-line collision avoidance to collision detection, isolation and reaction, up to the safe handling of intentional contacts for collaborative applications. The solutions have been obtained within a hierarchical control architecture that generates consistent robot behaviors, organized in three layers for safety, coexistence, and active collaboration. Safety requires reliable detection of physical contacts that may occur anywhere on the robot body, preferably using only proprioceptive sensing (model-based residuals or motor currents), with a fast robot reaction to the event. Human-robot coexistence needs workspace monitoring and efficient collision avoidance schemes, driven by exteroceptive sensors (e.g., RGB-D sensors). When an active physical collaboration is engaged between the human and a robot, the exchanged forces and the common motion at the contact should be estimated, possibly without using tactile or force sensors, and then regulated in a task-oriented way for the specific application. I will illustrate the performance of the developed basic control methods on lightweight manipulators (a KUKA LWR4 and a Universal Robots UR10) and on two industrial robots (a medium-size KUKA KR5 Sixx and an ABB IRB1600).

Ken Goldberg

Ken Goldberg is the William S. Floyd Distinguished Chair in Engineering at UC Berkeley and co-Founder and Chief Scientist at Ambi Robotics. Ken holds joint appointments in EECS, Art Practice, and the School of Information at UC Berkeley, and Radiation Oncology at UCSF. He and his students have authored over 300 peer-reviewed papers and 9 US Patents in robotics and AI. Ambi Robotics, based in Berkeley, provides state-of-the-art package handling robots to address increasing demand in the supply chain. Ken co-founded the Berkeley AI Research (BAIR) Lab, the Berkeley Center for New Media (BCNM) and the African Robotics Network (he was born in Nigeria). Ken is also an award-winning artist, filmmaker, and public speaker on AI and robotics. His artwork has been featured in 70 art exhibits including the 2000 Whitney Biennial and Sundance Film Festival, and he co-directed (with his wife Tiffany Shlain) the Emmy-Nominated short documentary film “Why We Love Robots.” Ken earned dual degrees in Electrical Engineering and Economics from the University of Pennsylvania (1984) and MS and PhD degrees from Carnegie Mellon University (1990). Ken was awarded the Presidential Faculty Fellowship in 1995 by Bill Clinton and the Joseph Engelberger Robotics Award in 2000, and was elected IEEE Fellow in 2005.

Website and social media links:
Twitter: @Ken_Goldberg
Website: http://goldberg.berkeley.edu

Title: Art and Robots: A Tribute to JPL

200 years after Mary Shelley’s masterwork appeared in print, “Artificial Intelligence” is running amok, provoking extreme claims of opportunities and threats. Many assert that AI is an “exponential technology”, a “new electricity” that will transform every industry. Advocates claim that fully autonomous cars and robots with human dexterity are just around the corner. At the same time, headlines report that robots will soon steal the majority of our jobs. A number of well-known and otherwise reasonable scientists and technologists state with confidence that AI will soon achieve a “Singularity” where Artificial General Intelligence (AGI) will vastly surpass human abilities and that such “superintelligence” is an existential threat to humanity.

Goldberg traces the history of Frankenstein to the 16th-century Jewish myth of the Golem and to E.T.A. Hoffmann’s classic The Sandman from 1816. In 1919, a year before the word “robot” was coined, Sigmund Freud published his influential essay, Das Unheimliche, later translated into English as “The Uncanny”, which later led to what Martin Jay described as the “master trope” of critical theory in the 1990s. Professor Goldberg links the Uncanny with contemporary robotics through the concept of the Uncanny Valley.

Potential reading: Cultivating the Uncanny: The Telegarden and Other Oddities. Elizabeth Jochum and Ken Goldberg. Chapter 8 of Robots and Art: Exploring an Unlikely Symbiosis. Edited by Damith Herath, Christian Kroos, and Stelarc. Springer Press, Summer 2016.

Vincent Hayward

Vincent Hayward joined the Department of Electrical and Computer Engineering at McGill University in 1987, becoming full professor in 2006. He joined the Université Pierre et Marie Curie in 2008 and took a leave of absence in 2017-2018 to be Professor of Tactile Perception and Technology at the School of Advanced Studies of the University of London, supported by a Leverhulme Trust Fellowship, following a six-year period as an advanced ERC grantee. His main research interests are touch and haptics, robotics, and control. Since 2016, he has spent part of his time contributing to the development of a start-up company in Paris, Actronika SAS, dedicated to the development of haptic technology. He was elected a Fellow of the IEEE in 2008 and a member of the French Academy of Sciences in 2019.

Title: Tactile mechanics and its impact on the messages sent to the brain.

There is no clear picture of the computational problems that the tactile neural system solves. Hearing begins in the middle ear and vision in the retina. Touch begins with the tactile mechanics of the skin. The tactile counterpart of the retina is a cluster of nuclei at the top of the spinal cord that receive inputs from far-flung skin receptors. This organisation places the neurones of these nuclei in a relationship analogous to that of the bipolar and ganglion cells of the retina relative to the primary receptors. It is however glaringly clear that the organisation of the early touch nuclei bears little resemblance to the vertebrate retina or the olivary nuclei in audition.

Martial Hebert

Martial Hebert is University Professor at Carnegie Mellon University, where he is currently Dean of the School of Computer Science and was previously Director of the Robotics Institute. His research interests include computer vision, machine learning for computer vision, 3-D representations and 3-D computer vision, and autonomous systems.

Steven M. LaValle

Steven M. LaValle is Professor of Computer Science and Engineering, in Particular Robotics and Virtual Reality, at the University of Oulu. Since 2001, he has also been a professor in the Department of Computer Science at the University of Illinois. He has also held positions at Stanford University and Iowa State University. His research interests include robotics, virtual and augmented reality, sensing, planning algorithms, computational geometry, and control theory. In research, he is best known for his introduction of the Rapidly-exploring Random Tree (RRT) algorithm, which is widely used in robotics and other engineering fields. In industry, he was an early founder and chief scientist of Oculus VR, acquired by Facebook in 2014, where he developed patented tracking technology for consumer virtual reality and led a team of perceptual psychologists to provide principled approaches to virtual reality system calibration, health and safety, and the design of comfortable user experiences. From 2016 to 2017 he was Vice President and Chief Scientist of VR/AR/MR at Huawei Technologies, Ltd. He has authored the books Planning Algorithms, Sensing and Filtering, and Virtual Reality. He currently leads an Advanced Grant from the European Research Council on the Foundations of Perception Engineering.

Stéphane Mallat

Nicolas Mansard

Nicolas Mansard is a researcher in robotics in Gepetto, LAAS-CNRS, where he mostly studies advanced numerical methods (constrained optimization, optimal control, learning) for robot control and legged locomotion. He leads the Chair “Artificial and Natural Motion” at ANITI, the AI institute of Toulouse, and serves as coordinator of the EU projects “Memory of Motion” and “Agimus”. He was mentored early in his career by Jean-Paul, and is proud to act as one of the chairmen for this day in his memory.

Matt Mason

Matt Mason is Professor Emeritus of Robotics and Computer Science at Carnegie Mellon University, and the Chief Scientist at Berkshire Grey.  He earned the BS, MS, and PhD degrees in Computer Science at MIT.  He was the Director of the CMU Robotics Institute from 2004 to 2014.  He is a Fellow of the AAAI, the IEEE, and the ACM.  He is a winner of the System Development Foundation Prize, the IEEE Robotics and Automation Society’s Pioneer Award, and the IEEE Technical Field Award in Robotics and Automation.  Mason’s research interests are in autonomous robotic manipulation.

Title:  How should a wheelchair think about distance? In memory of Jean-Paul Laumond

In 1998, despite my best efforts to avoid it, I discovered that I needed optimal control theory. This talk describes a project with Devin Balkcom, inspired and aided by the work of Jean-Paul Laumond and his colleagues Philippe Souères and Jean-Daniel Boissonnat. Through this work and through my interactions with Jean-Paul, I came to be a believer in and enthusiast for optimal control theory.

Katja Mombaur

Katja Mombaur is Professor and Canada Excellence Research Chair in Human-Centred Robotics and Machine Intelligence at the University of Waterloo in Canada. She holds a diploma in Aerospace Engineering from the University of Stuttgart and a PhD in Mathematics from Heidelberg University. Katja spent two years as a visiting researcher at LAAS-CNRS in Toulouse working with Jean-Paul Laumond prior to becoming a professor at Heidelberg University in Germany. Her research focuses on understanding human movement through a combined approach of model-based optimization and experiments, and on using this knowledge to improve the motions of humanoid robots and the interactions of humans with exoskeletons, prostheses and external physical devices.

Title: From optimality principles in human movement to motion intelligence in human-centred robots

From 2008 to 2010 I had the great pleasure of working with Jean-Paul Laumond within the Locanthrope project and of being part of the Gepetto team in Toulouse. Coming from a scientific computing background, this project allowed me to bring mathematical models and optimization together with human experiments, and to develop a wider perspective on human movement, combining aspects of (bio)mechanics and neuroscience. Together with Jean-Paul, I developed the inverse optimal control approach to identify the optimality principles underlying human movement from recorded motion data. These optimality criteria can then be transferred to humanoid robots or used for human movement prediction in human-robot interaction. In this talk, I will present some of our joint research as well as several follow-up projects inspired by the fruitful discussions with Jean-Paul, leading towards motion intelligence in human-centred robots.

Céline Pieters

Céline Pieters is a postdoctoral researcher in Rhetoric at the University of Vienna. She is currently involved in a project about responsible entrepreneurship and new technology within the research group “Philosophy of Media and Technology” directed by Prof. Mark Coeckelbergh. Previously, she carried out her doctoral research at LAAS-CNRS in Toulouse, France, within the GEPETTO team, specialized in the movement of anthropomorphic systems. In 2020, she received the Ph.D. degree in rhetoric and robotics from the ULB, Belgium, and the INSA Toulouse. Her thesis, “The words of robotics: a rhetorical approach”, questioned the role of language in our representations of robots. She also co-edited the book Wording Robotics (Springer, 2019) following the 4th Anthropomorphic Motion Factory at LAAS-CNRS Toulouse (2017). Her research interests include linguistics, rhetoric, creative writing, epistemology, and ethics of technology.

Title: Talking about moving machines: language and representations

Intelligent machines, autonomous robots, moral agents… How do we talk about robotics? How do the words used to describe robots shape our understanding of their capacities?

On the one hand, the state-of-the-art shows the intrinsic link between the cognitive process of the attribution of intentions to moving machines (also known as mentalizing) and the agentive lexicon. Beyond the matter of anthropomorphism, our perception of movement impacts our representations of the world and the way that we talk about the world. On the other hand, natural language plays an important role in the way that we figure and represent the movements and actions of robots. Consequently, when talking about moral machines for instance, there is a concern that the words used to describe robots might sow confusion between the machines and the living. Are there alternatives to think and talk about robots? How can we debate about robotics beyond the dreams and nightmares?

Jean Ponce

Jean Ponce is a research director at Inria and a visiting researcher at the NYU Center for Data Science, on leave from École Normale Supérieure (ENS) / PSL Research University, where he is a professor, and served as director of the computer science department from 2011 to 2017. Before joining ENS and Inria, Jean Ponce held positions at MIT, Stanford, and the University of Illinois at Urbana-Champaign, where he was a full professor until 2005. Jean Ponce is an IEEE Fellow and an ELLIS Fellow. He chaired the IEEE Conference on Computer Vision and Pattern Recognition in 1997 and 2000, and the European Conference on Computer Vision in 2008, and will serve as General Chair for the 2023 International Conference on Computer Vision. He currently serves as Sr. Editor-in-Chief of the International Journal of Computer Vision and as scientific director of the PRAIRIE Interdisciplinary AI Research Institute in Paris. Jean Ponce is the recipient of two US patents, an ERC advanced grant, the 2016 and 2020 IEEE CVPR Longuet-Higgins prizes, and the 2019 ICML test-of-time award. He is the author of “Computer Vision: A Modern Approach”, a textbook translated into Chinese, Japanese, and Russian.

Title: A swept-volume metric structure for configuration space

Abstract: Borrowing elementary ideas from solid mechanics and differential geometry, I will show that the volume swept by a regular solid undergoing a wide class of volume-preserving deformations induces a rather natural metric structure, with well-defined and computable geodesics, on its configuration space. This general result applies to concrete classes of articulated objects such as robot manipulators, and I will demonstrate as a proof of concept the computation of geodesic paths for a rotating and translating rod. This is joint work with Yann de Mont-Marin, Jean-Paul Laumond’s last PhD student, and Jean-Paul himself.

Thierry (Nic) Siméon

Thierry (Nic) Siméon is a senior researcher at LAAS-CNRS. After receiving a PhD in Robotics from the University of Toulouse, he spent a postdoctoral year at UPENN in Philadelphia before joining CNRS in 1990 as a researcher at LAAS. His main research interests include algorithmic motion planning for robotics and its applications to computational biology. Together with Jean-Paul Laumond he co-founded the spin-off company Kineo CAM, created in 2001 (now Kineo-Siemens). He has participated in several European projects in robot motion planning since the MOLOG project (1999-2001), also co-coordinated with Jean-Paul. He served as co-chair of the IEEE RAS Technical Committee on Robot Motion Planning (2006-09) and as associate editor for the IEEE Transactions on Robotics (2009-12). More recently, he served CNRS at the national level as a member of the CoNRS committee (2012-16) and as scientific advisor for Robotics at the INS2I-CNRS institute (2019-2021).

Philippe Souères

Philippe Souères is director of research at CNRS and, since 2016, head of the Robotics Department of LAAS-CNRS in Toulouse. He was the second PhD student of Jean-Paul Laumond, from 1990 to 1993. After several years spent studying motion planning and control of wheeled robots, aerial robots, and sensor-based control, he joined the Brain and Cognition Laboratory of Toulouse in 2005-2006 to study multisensory integration and sensor-based control in primates. In this context he led several interdisciplinary projects at the interface of robotics and neuroscience. In 2006 he contributed with Jean-Paul Laumond to the creation of the Gepetto team at LAAS, specialized in the study of the motion of anthropomorphic systems and humanoid robots. He then led this team from 2011 to 2020. His research interests include the control of humanoid and legged robots, motor control, and the modeling and analysis of human motion.

Title: The adventure of optimal paths for nonholonomic wheeled robots.

Abstract: During the 90s I had the pleasure of working with Jean-Paul Laumond at the interface of motion planning and optimal control of wheeled robots. Contrary to manipulator arms with a fixed base, these machines were called “mobile robots” because they could move freely on the ground. The Hilare robot of LAAS-CNRS was one of the pioneers in this domain. However, due to the rolling-without-slipping constraints of the wheels on the floor and the limit on the turning angle, the problem of automating maneuvers for this type of vehicle turned out to be very difficult. The search for optimal trajectories quickly appeared to be the key to deriving new metrics and providing the foundations of the corresponding algorithms. This presentation will recall some of the main steps of this adventure and a set of results obtained in collaboration with specialists in automatic control, optimal control, and robotics.

Eiichi Yoshida

Eiichi Yoshida received his Ph.D. degree from the University of Tokyo and joined AIST in 1996. He served as Co-Director of the AIST-CNRS JRL (Joint Robotics Laboratory) at LAAS-CNRS, Toulouse, France, from 2004 to 2008, and at AIST, Tsukuba, Japan, from 2009 to 2021. Since 2022, he has been Professor at the Faculty of Advanced Engineering of Tokyo University of Science. He was awarded the title of Chevalier de l’Ordre National du Mérite and is an IEEE Fellow. His research interests include robot task and motion planning, human modeling, and humanoid robotics.

Title: Human and humanoid together – looking back on our scientific adventure and into the future

The days I spent with Jean-Paul Laumond at the French-Japanese joint laboratory at LAAS-CNRS were undoubtedly the most enjoyable moments of my research life. Almost twenty years ago, I had the privilege of undertaking with him an adventure in humanoid research, which was then a minor topic in Europe. I was looking only at robotics, but he already had a much broader view of anthropomorphic systems that I am only now starting to understand. He introduced a unified approach to nonholonomic vehicles, human walking, and humanoid motion. I always admired the supreme mindset of this exceptional scientist for pushing scientific frontiers. In this presentation, I will try to retrace this fantastic experience and provide a perspective on the science of human and humanoid.