Multi-Agent Coordination
Multi-Agent Coordination: A Reinforcement Learning Approach (IEEE Press)


International Edition


About the Book

Discover the latest developments in multi-robot coordination techniques with this insightful and original resource. Multi-Agent Coordination: A Reinforcement Learning Approach delivers a comprehensive and unique treatment of the development of multi-robot coordination algorithms with minimal computational burden and reduced storage requirements compared to traditional algorithms. The accomplished academics and engineers who authored the book provide readers with both a high-level introduction to, and overview of, multi-robot coordination, as well as in-depth analyses of learning-based planning algorithms. You'll learn how to accelerate exploration of the team goal, along with alternative approaches to speeding up the convergence of TMAQL by identifying the preferred joint action for the team. The authors also propose novel approaches to consensus Q-learning that address the equilibrium selection problem, and a new way of evaluating the threshold value for uniting empires without imposing any significant computational overhead. Finally, the book concludes with an examination of the likely direction of future research in this rapidly developing field.
Readers will discover cutting-edge techniques for multi-agent coordination, including:
  • An introduction to multi-agent coordination by reinforcement learning and evolutionary algorithms, including topics such as the Nash equilibrium and correlated equilibrium
  • Improving the convergence speed of multi-agent Q-learning for cooperative task planning
  • Consensus Q-learning for multi-agent cooperative planning
  • Efficient computing of correlated equilibrium for cooperative Q-learning-based multi-agent planning
  • A modified imperialist competitive algorithm for multi-agent stick-carrying applications
Perfect for academics, engineers, and professionals who regularly work with multi-agent learning algorithms, Multi-Agent Coordination: A Reinforcement Learning Approach also belongs on the bookshelves of anyone with an advanced interest in machine learning and artificial intelligence as it applies to cooperative or competitive robotics.
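All of the multi-agent techniques described above build on single-agent Q-learning. As a rough, illustrative sketch only (not code from the book), here is a minimal tabular Q-learning loop on a hypothetical five-cell corridor world; the names `q_learning` and `corridor` are invented for this example:

```python
import random

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular single-agent Q-learning: the basic building block that
    equilibrium-based multi-agent Q-learning generalizes."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < epsilon:  # explore
                a = rng.randrange(n_actions)
            else:                       # exploit, breaking ties at random
                best = max(Q[s])
                a = rng.choice([i for i in range(n_actions) if Q[s][i] == best])
            s2, r, done = step(s, a)
            # Bellman backup toward the one-step target
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Toy environment: a five-cell corridor; action 0 moves left, action 1
# moves right; reaching the rightmost cell pays reward 1 and ends the episode.
def corridor(s, a):
    s2 = max(0, s - 1) if a == 0 else s + 1
    return (s2, 1.0, True) if s2 == 4 else (s2, 0.0, False)

Q = q_learning(5, 2, corridor)
print([round(max(q), 2) for q in Q])  # state values grow toward the goal
```

The multi-agent extensions the book develops replace the `max` over one agent's actions with an equilibrium (Nash, correlated, or consensus-based) computed over the joint action space of all agents.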

Table of Contents:
Preface xi
Acknowledgments xix
About the Authors xxi
1 Introduction: Multi-agent Coordination by Reinforcement Learning and Evolutionary Algorithms 1
  1.1 Introduction 2
  1.2 Single Agent Planning 4
    1.2.1 Terminologies Used in Single Agent Planning 4
    1.2.2 Single Agent Search-Based Planning Algorithms 10
      1.2.2.1 Dijkstra's Algorithm 10
      1.2.2.2 A∗ (A-star) Algorithm 11
      1.2.2.3 D∗ (D-star) Algorithm 15
      1.2.2.4 Planning by STRIPS-Like Language 15
    1.2.3 Single Agent RL 17
      1.2.3.1 Multiarmed Bandit Problem 17
      1.2.3.2 DP and Bellman Equation 20
      1.2.3.3 Correlation Between RL and DP 21
      1.2.3.4 Single Agent Q-Learning 21
      1.2.3.5 Single Agent Planning Using Q-Learning 24
  1.3 Multi-agent Planning and Coordination 25
    1.3.1 Terminologies Related to Multi-agent Coordination 25
    1.3.2 Classification of MAS 26
    1.3.3 Game Theory for Multi-agent Coordination 28
      1.3.3.1 Nash Equilibrium 31
      1.3.3.2 Correlated Equilibrium 36
      1.3.3.3 Static Game Examples 38
    1.3.4 Correlation Among RL, DP, and GT 40
    1.3.5 Classification of MARL 40
      1.3.5.1 Cooperative MARL 42
      1.3.5.2 Competitive MARL 56
      1.3.5.3 Mixed MARL 59
    1.3.6 Coordination and Planning by MAQL 84
    1.3.7 Performance Analysis of MAQL and MAQL-Based Coordination 85
  1.4 Coordination by Optimization Algorithm 87
    1.4.1 PSO Algorithm 88
    1.4.2 Firefly Algorithm 91
      1.4.2.1 Initialization 92
      1.4.2.2 Attraction to Brighter Fireflies 92
      1.4.2.3 Movement of Fireflies 93
    1.4.3 Imperialist Competitive Algorithm 93
      1.4.3.1 Initialization 94
      1.4.3.2 Selection of Imperialists and Colonies 95
      1.4.3.3 Formation of Empires 95
      1.4.3.4 Assimilation of Colonies 96
      1.4.3.5 Revolution 96
      1.4.3.6 Imperialistic Competition 97
    1.4.4 Differential Evolution Algorithm 98
      1.4.4.1 Initialization 99
      1.4.4.2 Mutation 99
      1.4.4.3 Recombination 99
      1.4.4.4 Selection 99
    1.4.5 Off-line Optimization 99
    1.4.6 Performance Analysis of Optimization Algorithms 99
      1.4.6.1 Friedman Test 100
      1.4.6.2 Iman–Davenport Test 100
  1.5 Summary 101
  References 101
2 Improve Convergence Speed of Multi-Agent Q-Learning for Cooperative Task Planning 111
  2.1 Introduction 112
  2.2 Literature Review 116
  2.3 Preliminaries 118
    2.3.1 Single Agent Q-learning 119
    2.3.2 Multi-agent Q-learning 119
  2.4 Proposed MAQL 123
    2.4.1 Two Useful Properties 124
  2.5 Proposed FCMQL Algorithms and Their Convergence Analysis 128
    2.5.1 Proposed FCMQL Algorithms 129
    2.5.2 Convergence Analysis of the Proposed FCMQL Algorithms 130
  2.6 FCMQL-Based Cooperative Multi-agent Planning 131
  2.7 Experiments and Results 134
  2.8 Conclusions 141
  2.9 Summary 143
  2.A More Details on Experimental Results 144
    2.A.1 Additional Details of Experiment 2.1 144
    2.A.2 Additional Details of Experiment 2.2 159
    2.A.3 Additional Details of Experiment 2.4 161
  References 162
3 Consensus Q-Learning for Multi-agent Cooperative Planning 167
  3.1 Introduction 167
  3.2 Preliminaries 169
    3.2.1 Single Agent Q-Learning 169
    3.2.2 Equilibrium-Based Multi-agent Q-Learning 170
  3.3 Consensus 171
  3.4 Proposed CoQL and Planning 173
    3.4.1 Consensus Q-Learning 173
    3.4.2 Consensus-Based Multi-robot Planning 175
  3.5 Experiments and Results 176
    3.5.1 Experimental Setup 176
    3.5.2 Experiments for CoQL 177
    3.5.3 Experiments for Consensus-Based Planning 177
  3.6 Conclusions 179
  3.7 Summary 180
  References 180
4 An Efficient Computing of Correlated Equilibrium for Cooperative Q-Learning-Based Multi-Robot Planning 183
  4.1 Introduction 183
  4.2 Single-Agent Q-Learning and Equilibrium-Based MAQL 186
    4.2.1 Single Agent Q-Learning 187
    4.2.2 Equilibrium-Based MAQL 187
  4.3 Proposed Cooperative MAQL and Planning 188
    4.3.1 Proposed Schemes with Their Applicability 189
    4.3.2 Immediate Rewards in Scheme-I and -II 190
    4.3.3 Scheme-I-Induced MAQL 190
    4.3.4 Scheme-II-Induced MAQL 193
    4.3.5 Algorithms for Scheme-I and II 200
    4.3.6 Constraint ΩQL-I/ΩQL-II (CΩQL-I/CΩQL-II) 201
    4.3.7 Convergence 201
    4.3.8 Multi-agent Planning 207
  4.4 Complexity Analysis 209
    4.4.1 Complexity of CQL 210
      4.4.1.1 Space Complexity 210
      4.4.1.2 Time Complexity 210
    4.4.2 Complexity of the Proposed Algorithms 210
      4.4.2.1 Space Complexity 211
      4.4.2.2 Time Complexity 211
    4.4.3 Complexity Comparison 213
      4.4.3.1 Space Complexity 213
      4.4.3.2 Time Complexity 214
  4.5 Simulation and Experimental Results 215
    4.5.1 Experimental Platform 215
      4.5.1.1 Simulation 215
      4.5.1.2 Hardware 216
    4.5.2 Experimental Approach 217
      4.5.2.1 Learning Phase 217
      4.5.2.2 Planning Phase 217
    4.5.3 Experimental Results 218
  4.6 Conclusion 226
  4.7 Summary 226
  4.A Supporting Algorithm and Mathematical Analysis 227
  References 228
5 A Modified Imperialist Competitive Algorithm for Multi-Robot Stick-Carrying Application 233
  5.1 Introduction 234
  5.2 Problem Formulation for Multi-Robot Stick-Carrying 239
  5.3 Proposed Hybrid Algorithm 242
    5.3.1 An Overview of ICA 242
      5.3.1.1 Initialization 242
      5.3.1.2 Selection of Imperialists and Colonies 243
      5.3.1.3 Formation of Empires 243
      5.3.1.4 Assimilation of Colonies 244
      5.3.1.5 Revolution 244
      5.3.1.6 Imperialistic Competition 245
  5.4 An Overview of FA 247
    5.4.1 Initialization 247
    5.4.2 Attraction to Brighter Fireflies 247
    5.4.3 Movement of Fireflies 248
  5.5 Proposed ICFA 248
    5.5.1 Assimilation of Colonies 251
      5.5.1.1 Attraction to Powerful Colonies 251
      5.5.1.2 Modification of Empire Behavior 251
      5.5.1.3 Union of Empires 252
  5.6 Simulation Results 254
    5.6.1 Comparative Framework 254
    5.6.2 Parameter Settings 254
    5.6.3 Analysis on Explorative Power of ICFA 254
    5.6.4 Comparison of Quality of the Final Solution 255
    5.6.5 Performance Analysis 258
  5.7 Computer Simulation and Experiment 265
    5.7.1 Average Total Path Deviation (ATPD) 265
    5.7.2 Average Uncovered Target Distance (AUTD) 265
    5.7.3 Experimental Setup in Simulation Environment 265
    5.7.4 Experimental Results in Simulation Environment 266
    5.7.5 Experimental Setup with Khepera Robots 268
    5.7.6 Experimental Results with Khepera Robots 269
  5.8 Conclusion 270
  5.9 Summary 272
  5.A Additional Comparison of ICFA 272
  References 275
6 Conclusions and Future Directions 281
  6.1 Conclusions 281
  6.2 Future Directions 283
Index 285


Product Details
  • ISBN-13: 9781119699033
  • Publisher: John Wiley & Sons Inc
  • Publisher Imprint: Wiley-IEEE Press
  • Height: 10 mm
  • No of Pages: 320
  • Returnable: N
  • Spine Width: 10 mm
  • Weight: 648 gr
  • ISBN-10: 1119699037
  • Publisher Date: 22 Jan 2021
  • Binding: Hardback
  • Language: English
  • Series Title: IEEE Press
  • Sub Title: A Reinforcement Learning Approach
  • Width: 10 mm

