Data Orchestration in Deep Learning Accelerators - Bookswagon
About the Book

This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth of deep learning and other AI applications, has led to the emergence of custom Deep Neural Network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of parameters and involve billions of computations, which necessitates extensive data movement from memory to on-chip processing engines. It is well known that the cost of data movement today surpasses the cost of the actual computation; DNN accelerators therefore require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with the data orchestration challenges posed by compressed and sparse DNNs, and with future trends. The target audience is students, engineers, and researchers interested in designing high-performance, low-energy accelerators for DNN inference.
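The data-reuse idea at the heart of the book can be illustrated with a toy back-of-the-envelope model (a hypothetical sketch, not taken from the book): a tiled matrix multiplication in which an assumed on-chip buffer holds T×T tiles, so each operand fetched from DRAM is reused T times instead of being re-fetched for every multiply-accumulate.

```python
# Toy model of data reuse (illustrative only, not from the book):
# count DRAM operand fetches for C = A @ B, both M x M matrices.
M = 64   # matrix dimension (assumed for illustration)
T = 8    # tile size: one T x T tile of A and of B fits in the on-chip buffer

# Naive dataflow: every multiply-accumulate re-fetches both operands from DRAM.
naive_fetches = 2 * M ** 3            # 2 operands per MAC, M^3 MACs

# Tiled dataflow: each A-tile and B-tile is fetched once per tile-level
# product (T^2 elements each) and then reused T times from the buffer.
tiles = M // T                        # tiles per matrix dimension
tiled_fetches = 2 * tiles ** 3 * T ** 2   # equals 2 * M**3 / T

print(naive_fetches)                  # 524288
print(tiled_fetches)                  # 65536
print(naive_fetches // tiled_fetches) # 8  -> T-fold fewer DRAM accesses
```

The ratio equals the tile size T: larger on-chip buffers buy proportionally more reuse, which is exactly the buffer-hierarchy/dataflow trade-off the chapters on data reuse and buffer hierarchies explore in detail.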

Table of Contents:
Preface.- Acknowledgments.- Introduction to Data Orchestration.- Dataflow and Data Reuse.- Buffer Hierarchies.- Networks-on-Chip.- Putting it Together: Architecting a DNN Accelerator.- Modeling Accelerator Design Space.- Orchestrating Compressed-Sparse Data.- Conclusions.- Bibliography.- Authors' Biographies.

About the Author:
Tushar Krishna is an Assistant Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. He received a Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2014. Prior to that, he received an M.S.E. in Electrical Engineering from Princeton University in 2009 and a B.Tech. in Electrical Engineering from the Indian Institute of Technology (IIT), Delhi in 2007. Before joining Georgia Tech in 2015, he worked as a researcher in the VSSAD group at Intel in Massachusetts. Dr. Krishna's research spans computer architecture, interconnection networks, networks-on-chip (NoC), and deep learning accelerators, with a focus on optimizing data movement in modern computing systems. Three of his papers have been selected for IEEE Micro's Top Picks from Computer Architecture, one more received an honorable mention, and three have won best paper awards. He received the National Science Foundation (NSF) CRII award in 2018 and both a Google Faculty Award and a Facebook Faculty Award in 2019.

Hyoukjun Kwon is a research scientist at Facebook AR/VR. He received his Ph.D. in Computer Science from the Georgia Institute of Technology in 2020, advised by Dr. Tushar Krishna. He received B.S. degrees in Environmental Materials Science and in Computer Science and Engineering from Seoul National University in 2015. His research interests include communication-centric DNN accelerator designs, modeling of DNN accelerator architecture and mapping, NoCs for accelerators, and co-optimization of DNN model, mapping, and accelerator architecture. He is actively leading the development of multiple open-source tools and RTLs in the DNN accelerator domain, including MAESTRO, MAERI, Microswitch NoC, and OpenSMART. One of his papers was selected for IEEE Micro's Top Picks from computer architecture in 2019, one received an honorable mention in 2018, and another won the best paper award at HPCA 2020.

Angshuman Parashar is a Senior Research Scientist at NVIDIA. His research interests are in building, evaluating, and programming spatial and data-parallel architectures, with a present focus on automated mapping of machine learning algorithms onto architectures based on explicit decoupled data orchestration. Prior to NVIDIA, he was a member of the VSSAD group at Intel, where he worked with a small team of experts in architecture, languages, workloads, and implementation to design and evaluate a new spatial architecture. Dr. Parashar received his Ph.D. in Computer Science and Engineering from the Pennsylvania State University in 2007, and his B.Tech. in Computer Science and Engineering from the Indian Institute of Technology, Delhi in 2002.

Michael Pellauer is a Senior Research Scientist at NVIDIA. His research interest is building domain-specific accelerators, with a special emphasis on deep learning and sparse tensor algebra. Prior to NVIDIA, he was a member of the VSSAD group at Intel, where he performed research and advanced development on customized spatial accelerators. Dr. Pellauer holds a Ph.D. from the Massachusetts Institute of Technology in Cambridge, Massachusetts (2010), a Master's from Chalmers University of Technology in Gothenburg, Sweden (2003), and a Bachelor's from Brown University in Providence, Rhode Island (1999).

Ananda Samajdar is a Ph.D. student in the School of Electrical and Computer Engineering (ECE) at the Georgia Institute of Technology. He completed his B.Tech. (Hons.) in Electronics and Communication Engineering (ECE) at the Indian Institute of Information Technology, Allahabad, India (IIIT-A) in 2013. Before joining Georgia Tech, Anand worked as a VLSI design engineer at Qualcomm Bangalore for three years. Anand's research interests include designing custom architectures for efficient deep learning systems. He has authored a number of papers in top-tier computer architecture conferences. Two of his papers received honorable mentions in the IEEE Micro Top Picks 2019, and one was awarded the best paper award at HPCA 2020. He is also the recipient of the silver medal in the ACM Student Research Competition at ASPLOS 2019.


Product Details
  • ISBN-13: 9783031006395
  • Publisher: Springer International Publishing AG
  • Publisher Imprint: Springer International Publishing AG
  • Height: 235 mm
  • No of Pages: 146
  • Returnable: Y
  • Width: 191 mm
  • ISBN-10: 3031006399
  • Publisher Date: 18 Aug 2020
  • Binding: Paperback
  • Language: English
  • Series Title: Synthesis Lectures on Computer Architecture

