AI-Native LLM Security by Ads Dawson at Bookstore UAE
AI-Native LLM Security: Threats, defenses, and best practices for building safe and trustworthy AI

International Edition


About the Book

Unlock the secrets to safeguarding AI by exploring the top risks, essential frameworks, and cutting-edge strategies, featuring the OWASP Top 10 for LLM Applications and Generative AI. Includes a DRM-free PDF version and access to Packt's next-gen Reader.*

Key Features
  • Understand adversarial AI attacks to strengthen your AI security posture effectively
  • Leverage insights from LLM security experts to navigate emerging threats and challenges
  • Implement secure-by-design strategies and MLSecOps practices for robust AI system protection
  • Purchase of the print or Kindle book includes a free PDF eBook

Book Description
Adversarial AI attacks present a unique set of security challenges, exploiting the very foundation of how AI learns. This book explores these threats in depth, equipping cybersecurity professionals with the tools needed to secure generative AI and LLM applications. Rather than skimming the surface of emerging risks, it focuses on practical strategies, industry standards, and recent research to build a robust defense framework.

Structured around actionable insights, the chapters introduce a secure-by-design methodology, integrating threat modeling and MLSecOps practices to fortify AI systems. You'll discover how to leverage established taxonomies from OWASP, NIST, and MITRE to identify and mitigate vulnerabilities. Through real-world examples, the book highlights best practices for incorporating security controls into AI development life cycles, covering key areas such as CI/CD, MLOps, and open-access LLMs.

Built on the expertise of its co-authors, pioneers in the OWASP Top 10 for LLM Applications, this guide also addresses the ethical implications of AI security, contributing to the broader conversation on trustworthy AI. By the end of this book, you'll be able to develop, deploy, and secure AI technologies with confidence and clarity.

*Email sign-up and proof of purchase required.

What You Will Learn
  • Understand the unique security risks posed by LLMs
  • Identify vulnerabilities and attack vectors using threat modeling
  • Detect and respond to security incidents in operational LLM deployments
  • Navigate the complex legal and ethical landscape of LLM security
  • Develop strategies for ongoing governance and continuous improvement
  • Mitigate risks across the LLM life cycle, from data curation to operations
  • Design secure LLM architectures with isolation and access controls

Who This Book Is For
This book is essential for cybersecurity professionals, AI practitioners, and leaders responsible for developing and securing AI systems powered by large language models. It is ideal for CISOs, security architects, ML engineers, data scientists, and DevOps professionals, providing insights into securing AI applications. Managers and executives overseeing AI initiatives will also benefit from understanding the risks and best practices outlined in this guide to ensure the integrity of their AI projects. A basic understanding of security concepts and AI fundamentals is assumed.
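To give a flavor of the risk categories the book works through, here is a minimal, hypothetical sketch of a first-line input guard of the kind associated with OWASP's prompt-injection risk (LLM01). The function name and pattern list are illustrative assumptions, not code from the book:

```python
# Illustrative only: a naive pattern-based guard for known
# prompt-injection phrasings (OWASP LLM01). The patterns below are
# hypothetical examples, not an exhaustive or book-provided list.
import re

SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your )?system prompt",
    r"you are now (in )?developer mode",
]

def flag_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in SUSPICIOUS_PATTERNS)
```

A pattern filter like this is only a first-line control and is easily bypassed by paraphrasing; the book's broader argument is that such checks belong inside a layered, secure-by-design architecture with trust boundaries, isolation, and monitoring.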

Table of Contents:

  1. Fundamentals and Introduction to Large Language Models
  2. Securing Large Language Models
  3. The Dual Nature of LLM Risks: Inherent Vulnerabilities and Malicious Actors
  4. Mapping Trust Boundaries in LLM Architectures
  5. Aligning LLM Security with Organizational Objectives and Regulatory Landscapes
  6. Identifying and Prioritizing LLM Security Risks with OWASP
  7. Diving Deep: Profiles of the Top 10 LLM Security Risks
  8. Mitigating LLM Risks: Strategies and Techniques for Each OWASP Category
  9. Adapting the OWASP Top 10 to Diverse Deployment Scenarios
  10. Designing LLM Systems for Security: Architecture, Controls, and Best Practices
  11. Integrating Security into the LLM Development Life Cycle: From Data Curation to Deployment
  12. Operational Resilience: Monitoring, Incident Response, and Continuous Improvement
  13. The Future of LLM Security: Emerging Threats, Promising Defenses, and the Path Forward
  14. Appendix A
  15. Appendix B


About the Authors:
Vaibhav Malik is a security leader with over 14 years of industry experience. He partners with global technology leaders to architect and deploy comprehensive security solutions for enterprise clients worldwide. As a recognized thought leader in Zero Trust security architecture, Vaibhav brings deep expertise from previous roles at leading service providers and security companies, where he guided Fortune 500 organizations through complex network, security, and cloud transformation initiatives. Vaibhav champions an identity- and data-centric approach to cybersecurity and is a frequent speaker at industry conferences. He holds a Master's degree in Networking from the University of Colorado Boulder and an MBA from the University of Illinois Urbana-Champaign, and maintains his CISSP certification. His extensive hands-on experience and strategic vision make him a trusted advisor for organizations navigating today's evolving threat landscape and implementing modern security architectures.

Ken Huang is a prolific author and renowned expert in AI and Web3, with numerous published books spanning business and technical guides as well as cutting-edge research. He is a Research Fellow and Co-Chair of the AI Safety Working Groups at the Cloud Security Alliance, Co-Chair of the OWASP AIVSS project, and Co-Chair of the AI STR Working Group at the World Digital Technology Academy. He is also an Adjunct Professor at the University of San Francisco, where he teaches a graduate course on generative AI for data security. Huang serves as CEO and Chief AI Officer (CAIO) of DistributedApps.ai, a firm specializing in generative AI-related training and consulting. His technical leadership is further reflected in his role as a core contributor to OWASP's Top 10 Risks for LLM Applications and his participation in the NIST Generative AI Public Working Group. A globally sought-after speaker, Ken has presented at events hosted by RSA, OWASP, ISC2, the Davos WEF, ACM, IEEE, Consensus, the CSA AI Summit, the Depository Trust & Clearing Corporation, and the World Bank. He is also a member of the OpenAI Forum, contributing to the global dialogue on secure and responsible AI development.

Ads Dawson is a self-described "meticulous dude" who lives by the philosophy "harness code to conjure creative chaos: think evil; do good." He is a recognized expert in offensive AI security, specializing in adversarial machine learning exploitation and autonomous red teaming, with a talent for demonstrating capabilities in offensive-security-focused tasks using agents. As Staff AI Security Researcher at Dreadnode and founding Technical Lead for the OWASP LLM Applications Project, he architects next-gen evaluation harnesses for cyber operations and AI red teaming. Based in Toronto, Canada, and an avid bug bounty hunter, he bridges traditional AppSec with cutting-edge AI vulnerability research, positioning him among the few experts capable of conducting full-spectrum adversarial assessments across AI-integrated critical systems.




Product Details
  • ISBN-13: 9781836203759
  • ISBN-10: 1836203756
  • Publisher: Packt Publishing Limited
  • Publisher Imprint: Packt Publishing Limited
  • Publisher Date: 12 Dec 2025
  • Sub Title: Threats, defenses, and best practices for building safe and trustworthy AI
  • Binding: Paperback
  • Language: English
  • No of Pages: 416
  • Height: 235 mm
  • Width: 191 mm
  • Returnable: N

