About the Book
Master end-to-end data engineering on Azure Databricks. From data ingestion and Delta Lake to CI/CD and real-time streaming, build secure, scalable, and performant data solutions with Spark, Unity Catalog, and ML tools.
Key Features
Build scalable data pipelines using Apache Spark and Delta Lake
Automate workflows and manage data governance with Unity Catalog
Learn real-time processing and structured streaming with practical use cases
Implement CI/CD, DevOps, and security for production-ready data solutions
Explore Databricks-native ML, AutoML, and Generative AI integration
Book Description"Data Engineering with Azure Databricks" is your essential guide to building scalable, secure, and high-performing data pipelines using the powerful Databricks platform on Azure. Designed for data engineers, architects, and developers, this book demystifies the complexities of Spark-based workloads, Delta Lake, Unity Catalog, and real-time data processing.
Beginning with the foundational role of Azure Databricks in modern data engineering, you’ll explore how to set up robust environments, manage data ingestion with Auto Loader, optimize Spark performance, and orchestrate complex workflows using tools like Azure Data Factory and Airflow.
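To give a flavor of the ingestion topics, here is a minimal Auto Loader sketch in PySpark (illustrative, not taken from the book); the source path, schema location, checkpoint, and target table are hypothetical placeholders:

```python
# Minimal Auto Loader sketch (PySpark on Databricks; `spark` is the session
# a Databricks notebook provides). All paths and names are hypothetical.
df = (
    spark.readStream.format("cloudFiles")            # Auto Loader source
    .option("cloudFiles.format", "json")             # format of incoming files
    .option("cloudFiles.schemaLocation", "/tmp/_schemas/events")
    .load("/mnt/raw/events")                         # hypothetical landing zone
)

(
    df.writeStream
    .option("checkpointLocation", "/tmp/_checkpoints/events")
    .trigger(availableNow=True)                      # incremental, batch-style run
    .toTable("bronze.events")                        # hypothetical Delta target
)
```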
The book offers deep dives into structured streaming, Delta Live Tables, and Delta Lake’s ACID features for data reliability and schema evolution. You’ll also learn how to manage security, compliance, and access controls using Unity Catalog, and gain insights into managing CI/CD pipelines with Azure DevOps and Terraform.
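As a taste of the Delta Lake coverage, the sketch below shows an idempotent upsert (MERGE) via the delta-spark Python API; the table name, join key, and the `updates_df` DataFrame are assumptions for illustration:

```python
# Minimal Delta Lake MERGE sketch; assumes an existing Delta table
# `silver.customers` and a DataFrame of changed rows called `updates_df`.
from delta.tables import DeltaTable

# Optional: allow MERGE to evolve the target schema when new columns arrive.
spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")

target = DeltaTable.forName(spark, "silver.customers")
(
    target.alias("t")
    .merge(updates_df.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()       # update rows that already exist
    .whenNotMatchedInsertAll()    # insert rows that are new
    .execute()                    # runs as a single ACID transaction
)
```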
With a special focus on machine learning and generative AI, the final chapters guide you in automating model workflows, leveraging MLflow, and fine-tuning large language models on Databricks. Whether you're building a modern data lakehouse or operationalizing analytics at scale, this book provides the tools and insights you need.
What you will learn
Set up a full-featured Azure Databricks environment
Implement batch and streaming ingestion using Auto Loader
Optimize Spark jobs with partitioning and caching
Build real-time pipelines with structured streaming and DLT
Manage data governance using Unity Catalog
Orchestrate production workflows with jobs and ADF
Apply CI/CD best practices with Azure DevOps and Git
Secure data with RBAC, encryption, and compliance standards
Use MLflow and Feature Store for ML pipelines (see the sketch after this list)
Build generative AI applications in Databricks
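For the MLflow item above, a minimal experiment-tracking sketch, assuming scikit-learn is installed and MLflow tracking is available (as it is in a Databricks workspace); the toy dataset, model, and metric are illustrative only:

```python
# Minimal MLflow tracking sketch; dataset, model, and metric are
# illustrative, not from the book.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=42)

with mlflow.start_run():
    model = LogisticRegression().fit(X, y)
    mlflow.log_metric("train_accuracy", model.score(X, y))  # log a metric
    mlflow.sklearn.log_model(model, "model")                # log the model artifact
```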
Who this book is for
This book is for data engineers, solution architects, cloud professionals, and software engineers seeking to build robust and scalable data pipelines using Azure Databricks. Whether you're migrating legacy systems, implementing a modern lakehouse architecture, or optimizing data workflows for performance, this guide will help you leverage the full power of Databricks on Azure. A basic understanding of Python, Spark, and cloud infrastructure is recommended.
Table of Contents
- The role of Azure Databricks in modern data engineering
- Setting up an end-to-end Azure Databricks environment
- Data ingestion strategies for Azure Databricks
- Deep dive into Apache Spark on Azure Databricks
- Streaming architectures with structured streaming
- Working with Delta Lake: ACID transactions & schema evolution
- Automating data pipelines with Delta Live Tables (DLT)
- Orchestrating data workflows: from notebooks to production
- CI/CD and DevOps for Azure Databricks
- Optimizing query performance and cost management
- Security, compliance, and data governance
- Machine learning, AutoML, and generative AI in Databricks
About the Authors
Dmitry Foshin is a business intelligence team leader whose main goal is delivering business insights to management through data engineering, analytics, and visualization. He has led and executed complex full-stack BI solutions (from ETL processes to building data warehouses and reporting) using Azure technologies, Data Lake, Data Factory, Databricks, MS Office 365, Power BI, and Tableau. He has also successfully launched numerous data analytics projects, both on-premises and in the cloud, that help achieve corporate goals in international FMCG companies, banking, and manufacturing industries.
Dmitry Anoshin is a data-centric technologist and a recognized expert in building and implementing big data and analytics solutions. He has a successful track record of implementing business and digital intelligence projects in numerous industries, including retail, finance, marketing, and e-commerce. Dmitry possesses in-depth knowledge of digital/business intelligence, ETL, data warehousing, and big data technologies; he has extensive experience with data integration and is proficient in various data warehousing methodologies. He has consistently exceeded project expectations in the financial, machine tool, and retail industries, and has completed a number of full life cycle BI/DI implementations for multinational clients. With expertise in data modeling, Dmitry also has a background and business experience in multiple relational databases, OLAP systems, and NoSQL databases. He is an active speaker at data conferences and helps people adopt cloud analytics.
Tonya Chernyshova is an experienced data engineer with over 10 years in the field, including time at Amazon. Specializing in data modeling, automation, cloud computing (AWS and Azure), and data visualization, she has a strong track record of delivering scalable, maintainable data products. Her expertise drives data-driven insights and business growth, showcasing her proficiency in leveraging cloud technologies to enhance data capabilities.
Xenia Ireton is a Senior Software Engineer at Microsoft. She has extensive knowledge of building distributed services, data pipelines, and data warehouses.