About the Book
A practical guide to applying large language models across the DevOps lifecycle to improve automation, efficiency, and reliability
Key Features
Use large language models to enhance DevOps workflows across development, testing, and operations
Implement GPT, fine-tuning, RAG, and agent-based systems with practical enterprise examples
Boost R&D efficiency, automation, and reliability in modern software delivery pipelines
Book Description
Large language models (LLMs) are rapidly transforming how software is built, delivered, and operated. Practice of Research and Development Efficiency Driven by Large Models provides a comprehensive, practice-oriented guide to applying LLMs across the full DevOps lifecycle, from development and testing to operations, security, and project management.
Starting with the foundations of large language models, Transformers, and GPT architectures, you will progress to advanced topics such as fine-tuning techniques (LoRA, QLoRA, PEFT), retrieval-augmented generation (RAG), and agent-based systems. You will then see how these technologies apply in real-world DevOps scenarios, including intelligent operations, automated testing, code generation, incident analysis, and project delivery optimization.
With extensive case studies drawn from enterprise environments, this book bridges theory and practice, helping you improve R&D efficiency, automation, reliability, and decision-making using large language models.
What you will learn
Understand the evolution of large language models and Transformer-based architectures
Build and optimize GPT-style models, including fine-tuning and reinforcement learning techniques
Apply RAG and agent architectures to enterprise DevOps and platform engineering scenarios
Use LLMs to automate operations tasks such as log analysis, ticket handling, and root cause analysis
Enhance testing, programming, and CI/CD workflows with large language models
Apply LLMs to project management, risk analysis, and security use cases in DevOps environments
Who this book is for
This book is for DevOps practitioners, AI researchers, developers, project managers, and operations engineers. It explores how large language models enhance automation, CI/CD pipelines, software delivery, and operational reliability across modern DevOps environments.
Table of Contents:
- Introduction to Large Language Models
- The Cornerstone of Large Language Models—Transformer
- From Transformer to ChatGPT
- Fine-Tuning Techniques for Large Language Models
- Enterprise AI Application Technology—RAG
- Three Foundational Pillars of Software Delivery
- Practical Applications of Large Language Models in Operations Scenarios
- Practical Applications of Large Language Models in Testing Scenarios
- Practical Applications of Large Language Models in Programming Scenarios
- Practical Applications of Large Language Models in Project Management Scenarios
- Practical Applications of Large Language Models in Security Scenarios
About the Authors:
Huangliang Gu is a senior DevOps/R&D efficiency specialist with extensive experience in operations and development. Gu focuses on enterprise IT digital transformation and implementation and is dedicated to building intelligent operations systems for businesses, currently at a licensed financial institution. His appointments include: member of the China Commerce Association Expert Think Tank; Deputy Director of the Think Tank Expert Committee of the National Internet Data Center Industry Technology Innovation Strategic Alliance; candidate expert for the Jiangsu Banking and Insurance Industry Fintech Expert Committee; specially appointed expert for the Ministry of Industry and Information Technology's Enterprise Digital Transformation IOMM Committee; specially appointed expert for the China Academy of Information and Communications Technology's Trusted Cloud Standard and its Low-Code/No-Code Promotion Center; Tencent Cloud Most Valuable Professional (TVP); and Alibaba Cloud Most Valuable Professional (MVP). Gu is the author of the best-selling books DevOps Authoritative Guide and Enterprise-Level DevOps Practical Cases: Continuous Delivery Edition, a core author of the DevOps Capability Maturity Model and the Enterprise IT Operations Development White Paper, and a frequent speaker at technology summits.

Qingzheng Zheng is a senior researcher at the FinTech Research Center, holding a Ph.D. in Computer Science from Durham University, UK, and an M.Sc. in Computer Software Engineering from Swansea University, UK. Zheng formerly served as a technical planning engineer and image research engineer at Huawei, focuses on financial big data risk control and machine vision, and has participated in developing facial recognition, telecom CRM, and in-memory database systems. Zheng has published 3 papers and holds 3 authorized patents.

Xiaoling Niu is the chair of the DevOps Standards Working Group and an editor of DevOps international standards.
Niu is a long-term researcher in DevOps, including reviews of cloud service operations management systems, and has contributed to over 20 domestic and international standards, including:
- Cloud Computing Service Agreement Reference Framework
- Object Storage
- Cloud Database
- DevOps Capability Maturity Model
- Y.3525 Cloud Computing - Requirements for Cloud Service Development and Operation Management
- General Evaluation Method for Intelligent Cloud Computing Operations
Niu has conducted DevOps maturity assessments for over 50 projects and has extensive experience in standard development and evaluation testing.

Xin Che is a deputy director of the Government and Enterprise Digital Transformation Department at the China Academy of Information and Communications Technology (CAICT) Cloud Computing and Big Data Research Institute. Che is primarily engaged in technical research and transformation consulting and planning for areas including the Enterprise Digital Transformation Maturity Model (IOMM), trusted digital services, integrated cloud platforms for digital infrastructure, middleware series, low/no-code, modularization, safe production, and smart operations, and is responsible for developing relevant standards, conducting evaluation and testing, and organizing technical practice exchanges.