Buy Learning-from-Observation 2.0 by Katsushi Ikeuchi
Learning-from-Observation 2.0: Automatic Acquisition of Robot Behavior from Human Demonstration(Synthesis Lectures on Computer Vision)

Out of Stock


About the Book

This book presents recent breakthroughs in the field of Learning-from-Observation (LfO) resulting from advances in large language models (LLMs) and reinforcement learning (RL), and positions them in the context of the area's historical development. LfO involves observing human behaviors and generating robot actions that mimic them. While LfO may appear similar, on the surface, to Imitation Learning (IL) in the machine learning community and Programming-by-Demonstration (PbD) in the robotics community, a significant difference lies in the fact that those methods directly imitate human hand movements, whereas LfO encodes human behaviors into abstract representations and then maps these representations onto the currently available hardware (individual body) of the robot, thus mimicking them indirectly. This indirect imitation absorbs changes in the surrounding environment and differences in robot hardware. Additionally, passing through the abstract representation acts as a filter, distinguishing the important aspects of human behavior from the less important ones and enabling imitation from fewer and less demanding demonstrations.

The authors have been researching the LfO paradigm for the past decade or so. Previously, the focus was primarily on designing necessary and sufficient task representations for specific task domains such as the assembly of machine parts, knot-tying, and human dance movements. Recent advances in Generative Pre-trained Transformers (GPT) and RL have led to groundbreaking developments in methods for obtaining and mapping these abstract representations. By utilizing GPT, the authors can automatically generate abstract representations from videos, and by employing RL-trained agent libraries, implementing the corresponding robot actions becomes more feasible.
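The two-stage pipeline described above — encoding a demonstration into a hardware-independent task representation, then mapping that representation onto a particular robot's skill library — can be sketched in a few lines. This is only a conceptual illustration of indirect imitation; all class names, verbs, and skills here are hypothetical and are not the authors' actual API.

```python
# Conceptual sketch of the LfO pipeline: a demonstration is first encoded
# into an abstract, hardware-independent task sequence, which is then mapped
# onto whatever skill library the current robot embodiment provides.
from dataclasses import dataclass


@dataclass(frozen=True)
class AbstractAction:
    """A hardware-independent task step, e.g. (verb='grasp', target='bolt')."""
    verb: str
    target: str


def encode_demonstration(events):
    """Stand-in for the recognition stage (in the book, GPT-based video
    understanding): keep only task-relevant events, dropping incidental
    motion -- the 'filtering' enabled by the abstract representation."""
    relevant = {"grasp", "move", "release"}
    return [AbstractAction(verb, target) for verb, target in events if verb in relevant]


class TwoFingerGripperRobot:
    """One possible embodiment; a different robot would register different
    skill implementations for the same abstract verbs."""

    def __init__(self):
        self.log = []
        self.skills = {
            "grasp": lambda t: self.log.append(f"close gripper on {t}"),
            "move": lambda t: self.log.append(f"plan trajectory to {t}"),
            "release": lambda t: self.log.append(f"open gripper at {t}"),
        }

    def execute(self, plan):
        for action in plan:
            self.skills[action.verb](action.target)


# A noisy demonstration: the incidental 'wiggle' event is filtered out.
demo = [("grasp", "bolt"), ("wiggle", "hand"), ("move", "hole"), ("release", "hole")]
plan = encode_demonstration(demo)
robot = TwoFingerGripperRobot()
robot.execute(plan)
print(robot.log)
```

Because the plan contains only abstract verbs, swapping in a robot with a different body (say, a suction gripper) only requires registering new skill implementations; the encoded demonstration is reused unchanged.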

About the Authors:
Katsushi Ikeuchi received his Ph.D. in Information Engineering from the University of Tokyo. He worked at MIT-AI as a postdoctoral researcher, at CMU-RI as a research professor, and at the University of Tokyo as a professor before joining Microsoft in 2015. At MIT-AI, he was engaged in developing algorithms for the world's first bin-picking system using photometric stereo. At CMU-RI, he started the Learning-from-Observation project, focusing on developing robots that can acquire behavior from human demonstrations. At the University of Tokyo, he applied LfO to develop humanoid robots that can perform the Aizu Bandaisan dance, tie knots, and assemble mechanical parts. He has served as general or program chair of a dozen international conferences, including IROS 1995, CVPR 1996, ICRA 2009, and ICCV 2017, and served as Editor-in-Chief of Springer's IJCV for more than 10 years. He has received the Distinguished Researcher Award from the IEEE PAMI Technical Committee and the Medal of Honor with Purple Ribbon from the Emperor of Japan. He is a Fellow of IEEE, IEICE, IPSJ, RSJ, and IAPR.

Naoki Wake received his Ph.D. in Information Science and Technology from the University of Tokyo, Japan, in 2019. He currently works at Microsoft as a Research Scientist for Industrial Solutions and Engineering. His current research involves the development of multimodal perception systems for robots and co-speech gesturing systems. His past research has spanned auditory neuroscience, neurorehabilitation, and speech processing.

Jun Takamatsu received his Ph.D. in Computer Science from the University of Tokyo, Japan, in 2004. From 2004 to 2008, he was with the Institute of Industrial Science, the University of Tokyo. In 2007, he was with Microsoft Research Asia as a Visiting Researcher. From 2008 to 2021, he was with the Robotics Laboratory, Nara Institute of Science and Technology, Japan, as an Associate Professor. He was also with Carnegie Mellon University as a Visitor in 2012 and 2013, and with Microsoft as a Visiting Scientist in 2018. He is now with Microsoft as a Senior Research Scientist. His research interests are in robotics, including learning-from-observation, task/motion planning, feasible motion analysis, 3D shape modeling and analysis, and physics-based vision.

Kazuhiro Sasabuchi received his Ph.D. in Information Science and Technology from the University of Tokyo, Japan, in 2019. He has worked across various fields in robotics, including human-robot interaction, hardware design, field robotics, robot systems, robot teaching, reinforcement learning, and mobile manipulation. He currently works at Microsoft as a Research Scientist for Industrial Solutions and Engineering. His interests are in practical robot systems that leverage composable skills, cloud operations, large language models, human interaction, simulation, and machine learning.


Product Details
  • ISBN-13: 9783032034441
  • ISBN-10: 3032034442
  • Publisher: Springer Nature Switzerland AG
  • Publisher Imprint: Springer Nature Switzerland AG
  • Publisher Date: 05 Dec 2025
  • Series Title: Synthesis Lectures on Computer Vision
  • Sub Title: Automatic Acquisition of Robot Behavior from Human Demonstration
  • Binding: Hardback
  • Language: English
  • No of Pages: 204
  • Height: 240 mm
  • Width: 168 mm
  • Returnable: N

