About the Book
Learn to build and secure MCP servers for agentic AI systems by translating real-world threat models into OAuth 2.1, sandboxing, RBAC, and supply chain defenses that work in production.
Key Features
Threat-model MCP systems across supply chain, runtime, and code-mode attack surfaces
Implement OAuth 2.1, RBAC, tenant isolation, and policy-first authorization for MCP
Harden MCP servers with sandboxing, validation, monitoring, and supply chain controls
Book Description
As agentic AI shifts from text generation to operational roles, it relies on the Model Context Protocol (MCP) to interface with databases and execute code. While MCP provides essential connectivity, it also introduces a significant new attack surface. Securing MCP offers a hands-on framework for protecting these autonomous systems throughout their lifecycle.
The book begins by deconstructing MCP architecture to establish a rigorous threat model, categorizing risks across supply chain integrity, runtime execution, and "code-mode" attack vectors. Readers will learn to map these vulnerabilities to testable security controls that mirror adversary behavior. It then details the technical implementation of OAuth 2.1 and scoped authorization, ensuring every interaction is authenticated and auditable.
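As a taste of the Python implementations inside, here is a minimal sketch of that scoped-authorization pattern: an MCP server rejects a tool call unless the caller's OAuth 2.1 access token grants the required scope. The decorator name `require_scope`, the scope strings, and the token shape are illustrative assumptions, not the book's exact code.

```python
# Minimal sketch: scope-gated tool dispatch for an MCP server.
# `require_scope`, the scope strings, and the token format are
# illustrative assumptions, not the book's exact implementation.
from functools import wraps


class AuthorizationError(Exception):
    """Raised when a token lacks the scope a tool requires."""


def require_scope(scope: str):
    """Reject the call unless the caller's validated token grants `scope`."""
    def decorator(tool):
        @wraps(tool)
        def wrapper(token: dict, *args, **kwargs):
            granted = set(token.get("scope", "").split())
            if scope not in granted:
                raise AuthorizationError(
                    f"tool {tool.__name__!r} requires scope {scope!r}"
                )
            return tool(token, *args, **kwargs)
        return wrapper
    return decorator


@require_scope("db:read")
def query_database(token: dict, sql: str) -> list:
    return []  # placeholder for the real data-source call


# A token already validated upstream (signature, expiry, issuer):
token = {"sub": "agent-42", "scope": "db:read"}
query_database(token, "SELECT 1")            # allowed
# query_database({"scope": ""}, "SELECT 1")  # raises AuthorizationError
```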
Beyond identity, the guide explores specialized agent-native threats such as prompt injection, tool poisoning, and "rug pull" malicious updates. For enterprise production, it covers deployment hardening, including sandboxing, I/O validation, and secrets management, before addressing governance through RBAC, policies, and human-in-the-loop (HITL) mechanisms.
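To illustrate one runtime defense in this family, the sketch below pins a hash of each tool's definition at review time and refuses to invoke a tool whose description has silently changed, a common mitigation for "rug pull" updates. The manifest layout and helper names are assumptions for illustration only.

```python
# Sketch: detect "rug pull" updates by pinning tool definitions.
# The manifest layout and helper names are illustrative assumptions.
import hashlib
import json


def fingerprint(tool_def: dict) -> str:
    """Stable SHA-256 over a tool's name, description, and schema."""
    canonical = json.dumps(tool_def, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()


def pin_tools(tool_defs: list[dict]) -> dict[str, str]:
    """Record trusted fingerprints when the server is first reviewed."""
    return {t["name"]: fingerprint(t) for t in tool_defs}


def verify_tool(tool_def: dict, pins: dict[str, str]) -> None:
    """Refuse to invoke a tool whose definition drifted from its pin."""
    if pins.get(tool_def["name"]) != fingerprint(tool_def):
        raise RuntimeError(
            f"tool {tool_def['name']!r} changed since review; re-audit it"
        )


reviewed = {"name": "send_email", "description": "Send mail", "schema": {}}
pins = pin_tools([reviewed])
verify_tool(reviewed, pins)  # passes; a changed description would raise
```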
Complete with Python implementations and verification checklists, this book provides the professional roadmap required to deploy agentic AI with institutional-grade security.
What you will learn
Design and secure MCP servers for both local and remote agentic deployments
Detect and mitigate agent-native attacks such as prompt injection and tool poisoning
Sandbox MCP tool execution using containers, gVisor, and Firecracker-style isolation
Secure higher-risk MCP patterns, including remote execution and code-mode servers
Harden the MCP supply chain using signing, verification, and dependency controls (see the sketch after this list)
Establish monitoring, governance, and human-in-the-loop approval workflows
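Expanding on the supply chain bullet above, here is a minimal, stdlib-only sketch of checksum pinning for MCP server artifacts. The file name and digest are placeholders, and production pipelines would more likely verify signed artifacts (for example, via Sigstore) than bare hashes.

```python
# Sketch: verify a pinned checksum before installing an MCP server artifact.
# The file name and digest below are placeholders; real pipelines would
# more likely verify signatures (e.g., Sigstore) than bare hashes.
import hashlib
from pathlib import Path

PINNED_DIGESTS = {
    # artifact filename -> SHA-256 recorded at review time (placeholder)
    "mcp_server-1.4.2.tar.gz": "<sha256-recorded-at-review-time>",
}


def verify_artifact(path: Path) -> None:
    """Raise unless the artifact's SHA-256 matches its recorded pin."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if PINNED_DIGESTS.get(path.name) != digest:
        raise RuntimeError(f"{path.name}: unpinned or tampered artifact")


# verify_artifact(Path("dist/mcp_server-1.4.2.tar.gz"))  # run before install
```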
Who this book is for
This book is for software engineers, security engineers, platform architects, and DevSecOps practitioners who are building, deploying, or securing MCP-based agentic AI systems. It’s also useful for AI/ML engineers integrating third-party MCP servers, security teams assessing agentic AI risk, and engineering leaders defining governance and control requirements for tool-connected assistants. Familiarity with Python, REST APIs, and OAuth is helpful but not required; core concepts and security patterns are introduced and explained as you go.
Table of Contents
- Introduction to MCP
- The Security Imperative
- OAuth 2.1 Implementation
- Token Management & Scopes
- Identity Providers Integration
- Rug Pull and Line Jumping Attacks
- Tool Poisoning
- CVEs & Exploits Analysis: A Case Study
- Local Server Security Fundamentals
- Sandboxing & Isolation
- Input/Output Validation
- Remote MCP Architecture
- Secrets Management
- Multi-Tenancy & Data Isolation
- Permission Models & RBAC
- Security Design Patterns
- Supply Chain Security
- Monitoring & Observability
- Governance & Compliance
- Human-in-the-Loop Controls
About the Authors
Idan Habler is an AI Security Researcher at Cisco, focused on securing agentic and autonomous AI systems across enterprise environments. His expertise spans agentic threat modeling, AI red-teaming, secure tool and agent-to-agent interactions, and defense-in-depth architectures for generative AI systems. Prior to Cisco, Idan worked as an AI Security Researcher at Intuit and served in senior cyber R&D and cybersecurity roles within Israel's military.
Idan holds a Ph.D. in Software and Information Systems Engineering from Ben-Gurion University, where his research focused on cyber risk assessment and advanced threats to complex systems. He is a core team member of the OWASP Securing Agentic Applications initiative and a founding member of AIVSS. Through OWASP, he co-authors security standards and guidance including the Agent Name Service (ANS) and the Agent-to-Agent Secure (A2AS) protocol, and develops practical threat modeling frameworks for multi-agent AI systems.
A recognized contributor to the AI security community, Idan's work appears in leading research and industry venues, and he collaborates with industry, academia, and open-source communities to advance secure-by-design approaches for agentic AI systems at scale.
Vineeth Sai Narajala is a Senior Technical Lead for AI Security Research at Cisco, where he leads initiatives to secure AI systems across the company's networking, security, and infrastructure products. His expertise includes model safety guardrails, prompt-injection protections, compute isolation, and secure token management for agentic AI systems. Prior to Cisco, Vineeth served as a Senior Generative AI Security Engineer at Amazon Web Services (AWS) and as a Senior Security Engineer at Meta.
Vineeth is Co-Lead of OWASP AIVSS and a workstream co-lead within OWASP's GenAI security efforts focused on agentic application security, where he advances practical standards, threat modeling guidance, and best practices for the community. He regularly presents at major security conferences including RSA Conference, OWASP Global AppSec, BSides events, and CypherCon.
With deep technical expertise at the intersection of AI and cybersecurity, Vineeth focuses on security solutions that help teams ship agentic AI features safely at scale.