Build a Text-to-Image Generator (from Scratch): With Transformers and Diffusions
About the Book

AI images flood feeds, yet the models behind them feel mysterious. Relying on black boxes risks bias, errors, and costly creative dead ends. You deserve hands-on skills to build, audit, and improve these generators yourself. This book starts from a blank notebook and guides you through every line of Python code. Learn transformers for vision, then craft diffusion models that sharpen noise into art. Finish with a custom system that generates high-resolution images from any text prompt. 

  • Vision transformer anatomy: Decode image patches and attention flows for transparent decision paths. 
  • End-to-end diffusion pipeline: Transform random noise into detailed, photorealistic pictures you can trust. 
  • Captioning and classification builds: Extend models to describe or categorize images for downstream tasks. 
  • Fine-tuning walkthroughs: Adapt pretrained networks quickly, saving compute while boosting domain accuracy. 
  • Deepfake detection skills: Differentiate authentic photos from generated fakes to safeguard projects and brands. 
  • Fully runnable notebooks: Experiment, tweak, and visualize results without configuration hassles. 
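The first bullet, decoding image patches, is the entry point of any vision transformer: the image is cut into non-overlapping patches and each patch is flattened into a token vector. A minimal NumPy sketch of that step (illustrative only, not the book's code):

```python
# Split an (H, W, C) image into non-overlapping patches and flatten
# each patch into one token vector -- the first step of a ViT.
import numpy as np

def patchify(image: np.ndarray, patch: int) -> np.ndarray:
    """Turn an (H, W, C) image into (num_patches, patch*patch*C) tokens."""
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0, "image must divide evenly"
    return (
        image.reshape(h // patch, patch, w // patch, patch, c)
        .transpose(0, 2, 1, 3, 4)        # group each patch's pixels together
        .reshape(-1, patch * patch * c)  # flatten each patch into a token
    )

img = np.arange(32 * 32 * 3, dtype=np.float32).reshape(32, 32, 3)
tok = patchify(img, 8)
print(tok.shape)  # (16, 192): a 4x4 grid of 8x8x3 patches
```

In a real ViT these tokens are then linearly projected and fed through self-attention layers, which is where the "attention flows" the bullet mentions come in.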

In Build a Text-to-Image Generator (from Scratch), the author combines clear prose, diagrams, and production-ready Python to deliver practical authority.

Starting with patch tokenization, you implement a vision transformer, then pivot to diffusion. Step-by-step chapters layer theory, code, and visual outputs, ensuring concepts click before you move on. By the final page you can craft, tune, and deploy image generators that suit your data, budget, and ethical standards. You control every hyperparameter and understand every pixel produced. 
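The diffusion half of that journey rests on one idea: a forward process gradually blends an image with Gaussian noise, and a model learns to predict that noise so sampling can run the process in reverse. A toy sketch of the forward step, using the common DDPM naming convention (`betas`, `alpha_bar`) rather than the book's own code:

```python
# Forward diffusion step: x_t = sqrt(a_bar)*x0 + sqrt(1 - a_bar)*eps.
# A denoising model would be trained to recover eps from x_t.
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)  # cumulative signal fraction per step

def add_noise(x0: np.ndarray, t: int) -> tuple[np.ndarray, np.ndarray]:
    """Sample x_t from the forward process at timestep t."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

x0 = rng.standard_normal((8, 8))       # stand-in for a real image
xt, eps = add_noise(x0, t=T - 1)       # by the last step, nearly pure noise
print(xt.shape)
```

The schedule makes `alpha_bar` shrink toward zero, so late-step samples are dominated by noise; generation reverses this chain one predicted-noise step at a time.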

Ideal for data scientists and Python-savvy enthusiasts eager to master state-of-the-art image generation. 



Table of Contents:

PART 1: UNDERSTANDING ATTENTION AND TRANSFORMERS 

1 A TALE OF TWO MODELS: TRANSFORMERS AND DIFFUSIONS 

2 BUILD A TRANSFORMER 

3 CLASSIFY IMAGES WITH A VISION TRANSFORMER (VIT)

4 ADD CAPTIONS TO IMAGES 

PART 2: INTRODUCTION TO DIFFUSION MODELS 

5 GENERATE IMAGES WITH DIFFUSION MODELS 

6 CONTROL WHAT IMAGES TO GENERATE IN DIFFUSION MODELS 

7 GENERATE HIGH-RESOLUTION IMAGES WITH DIFFUSION MODELS 

PART 3: TEXT-TO-IMAGE GENERATION WITH DIFFUSION MODELS 

8 CLIP: A MODEL TO MEASURE THE SIMILARITY BETWEEN IMAGE AND TEXT 

9 TEXT-TO-IMAGE GENERATION WITH LATENT DIFFUSION 

10 A DEEP DIVE INTO STABLE DIFFUSION 

PART 4: TEXT-TO-IMAGE GENERATION WITH TRANSFORMERS 

11 VQGAN: CONVERT IMAGES INTO SEQUENCES OF INTEGERS 

12 A MINIMAL IMPLEMENTATION OF DALL-E 

PART 5: NEW DEVELOPMENTS AND CHALLENGES 

13 NEW DEVELOPMENTS AND CHALLENGES IN TEXT-TO-IMAGE GENERATION 

APPENDIX 

INSTALL PYTORCH AND ENABLE GPU TRAINING LOCALLY AND IN COLAB 
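Chapter 8's CLIP model scores how well a caption matches an image by comparing their embeddings with cosine similarity. A toy illustration of that scoring idea, where random vectors stand in for the learned image and text encoders (this is not CLIP itself, just the similarity measure it uses):

```python
# Cosine similarity between embedding vectors, the score CLIP uses
# to rank image-text pairs. Random vectors stand in for real encoders.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(42)
image_emb = rng.standard_normal(512)
text_match = image_emb + 0.1 * rng.standard_normal(512)  # "close" caption
text_other = rng.standard_normal(512)                    # unrelated caption

print(cosine_similarity(image_emb, text_match) >
      cosine_similarity(image_emb, text_other))
```

In the real model, both encoders are trained jointly so matching pairs score high, which is what lets CLIP steer text-to-image generation in the later chapters.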



About the Author:

Mark Liu is a professor and program director known for translating cutting-edge AI into practical curricula. With years mentoring graduate students and professionals, Mark brings clarity, rigor, and enthusiasm to every page. He distills deep generative-model expertise into step-by-step guidance that empowers readers to build powerful visual AI systems. 



Reviews:
  • This book stands out for its hands-on, no-fluff approach to text-to-image generation—perfect for practitioners who want to build rather than just theorize. The clear PyTorch implementations, Colab-friendly examples, and practical exercises make even advanced concepts like Diffusion Models feel achievable.
    Simeon Leyzerzon, President, Excelsior Software Ltd. 
  • This book is a great hands-on intro to how text-to-image models like Stable Diffusion actually work under the hood. It explains the roles of transformers, VAEs, and denoising U-Nets in a super approachable way, with lots of code you can run yourself. If you’re curious about generative AI and want to build or tweak your own models, this is a solid place to start.
    Ravikumar Sanapala, Product Manager, Reality Labs, Meta 


Product Details
  • ISBN-13: 9781633435421
  • Publisher: Manning Publications
  • Publisher Imprint: Manning Publications
  • Height: 235 mm
  • No of Pages: 360
  • Spine Width: 20 mm
  • Weight: 689 g
  • ISBN-10: 1633435423
  • Publisher Date: 23 Jan 2026
  • Binding: Hardback
  • Language: English
  • Returnable: Y
  • Sub Title: With Transformers and Diffusions
  • Width: 190 mm

