Mastering Computer Vision with PyTorch and Machine Learning
About the Book

This book, together with the accompanying Python code, provides a thorough and extensive guide to mastering advanced computer vision techniques for image processing with the open-source machine learning framework PyTorch. Known for its user-friendly interface and Pythonic programming style, PyTorch is accessible and one of the most popular tools among researchers and practitioners in the field of artificial intelligence.

Key Features:
  • Hands-on approach built around practical, accompanying code examples
  • Code for every project in the book follows the same four-part structure: data input, data display, data processing and data output (see the sketch after this list)
  • Emphasis on practical application development for computer vision
  • Covers the latest computer vision technologies
  • Offers practical guidance on advanced computer vision techniques
  • Uses freely available and open-source resources such as Kaggle and Google Colab
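As an illustration of that four-part layout, the following is a minimal sketch of how such a project script could be organised, assuming a standard torchvision MNIST workflow; the model, file names and hyperparameters are illustrative placeholders, not the book's actual code.

# Illustrative sketch of the four-part project layout (not taken from the book).
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# 1) Data input: download MNIST and wrap it in a DataLoader.
train_set = datasets.MNIST(root="data", train=True, download=True,
                           transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

# 2) Data display: show one sample image and its label.
image, label = train_set[0]
plt.imshow(image.squeeze(), cmap="gray")
plt.title(f"label = {label}")
plt.show()

# 3) Data processing: train a small fully connected classifier for one epoch.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                      nn.ReLU(), nn.Linear(128, 10)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for images, labels in train_loader:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# 4) Data output: save the trained weights to disk.
torch.save(model.state_dict(), "mnist_fc.pt")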

Table of Contents:
Preface
Acknowledgements
Author biography
1 Mathematical tools for computer vision
  1.1 Probability, entropy and Kullback–Leibler divergence
    1.1.1 Probability and Shannon entropy
    1.1.2 Kullback–Leibler divergence and cross entropy
    1.1.3 Conditional probability and joint entropies
    1.1.4 Jensen's inequality
    1.1.5 Maximum likelihood estimation and overfitting
    1.1.6 Application of the expectation–maximization algorithm to find a PDF
  1.2 Using a gradient descent algorithm for linear regression
  1.3 Automatic gradient calculations and learning rate schedulers
  1.4 Dataset, DataLoader, GPU and model saving
  1.5 Activation functions for nonlinear regressions
  References
2 Image classification by convolutional neural networks
  2.1 Classification of handwritten digits in the MNIST database
  2.2 Mathematical operations of a convolution
  2.3 Using ResNet9 for CIFAR-10 classification
  2.4 Transfer learning with ResNet for the STL-10 dataset
  References
3 Image generation by GANs
  3.1 The GAN theory
    3.1.1 Implementing a GAN for quadratic curve generation
    3.1.2 Using a GAN with two fully connected layers to generate MNIST images
  3.2 Applications of deep convolutional GANs
    3.2.1 Mathematical operations of ConvTranspose2d
    3.2.2 Applications of a DCGAN for MNIST and Fashion MNIST
    3.2.3 Using a DCGAN to generate fake anime-face and fake CelebA images
  3.3 Conditional deep convolutional GANs
    3.3.1 Applications of a cDCGAN to the MNIST and Fashion MNIST datasets
    3.3.2 Applications of a cDCGAN to generate fake Rock Paper Scissors images
  References
4 Image generation by WGANs with gradient penalty
  4.1 Using a WGAN or a WGAN-GP for generation of fake quadratic curves
  4.2 Using a WGAN-GP for Fashion MNIST
  4.3 WGAN-GP for the CelebA dataset and Anime Face dataset
  4.4 Implementation of a cWGAN-GP for the Rock Paper Scissors dataset
  References
5 Image generation by VAEs
  5.1 VAE and beta-VAE
  5.2 Application of beta-VAE to fake quadratic curves
  5.3 Application of beta-VAE to the MNIST dataset
  5.4 Using VAE-GAN for fake images of MNIST and Fashion MNIST
  References
6 Image generation by infoGANs
  6.1 Using infoGAN to generate quadratic curves
  6.2 Implementation of infoGAN for the MNIST dataset
  6.3 infoGAN for fake Anime Face dataset images
  6.4 Implementation of infoGAN for the Rock Paper Scissors dataset
  Reference
7 Object detection by YOLOv1/YOLOv3 models
  7.1 Bounding boxes of the Pascal VOC database for YOLOv1
  7.2 Encoding VOC images with bounding boxes for YOLOv1
    7.2.1 VOC image augmentations with bounding boxes
    7.2.2 Encoding bounding boxes to grid cells for YOLOv1 model training
    7.2.3 Chess pieces dataset from Roboflow
  7.3 ResNet18 model, IOU and a loss function
    7.3.1 Using ResNet18 to replace the YOLOv1 model
    7.3.2 Intersection over union (IOU)
    7.3.3 Loss function
  7.4 Utility functions for model training
  7.5 Applications of YOLOv3 for real-time object detection
  References
8 YOLOv7 and YOLOv8 models
  8.1 YOLOv7 for object detection on a custom dataset: MNIST4yolo
  8.2 YOLOv7 for instance segmentation
  8.3 Using YOLOv7 for human pose estimation (key point detection)
  8.4 Applications of YOLOv8 models
    8.4.1 Image object detection, segmentation, classification and pose estimation
    8.4.2 Object counting on an image or a video frame
    8.4.3 Car tracking and counting for a video file
    8.4.4 Fine-tuning YOLOv8 for object detection and annotation of a custom dataset
  8.5 Using YOLO-NAS models for object detection
  References
9 U-Nets for image segmentation and diffusion models for image generation
  9.1 Retinal vessel segmentation by a U-Net for the DRIVE dataset
  9.2 Using an attention U-Net diffusion model for quadratic curve generation
    9.2.1 The forward process in a DDPM
    9.2.2 The backward process in the DDPM
  9.3 Using a pre-trained U-Net from Hugging Face to generate images
  9.4 Generating photorealistic images from text prompts by stable diffusion
  References
10 Applications of vision transformers
  10.1 The architecture of a basic ViT model
  10.2 Hugging Face ViT for CIFAR-10 image classification
  10.3 Zero-shot image classification by OpenAI CLIP
  10.4 Zero-shot object detection by Hugging Face's OWL-ViT
  References
11 Knowledge distillation, DINO, SAM, MiDaS and NeRF
  11.1 Knowledge distillation for neural network compression
  11.2 DINO: emerging properties in self-supervised vision transformers
  11.3 DINOv2 for image retrieval, classification and feature visualization
  11.4 Segment anything model: SAM and FastSAM
  11.5 MiDaS for image depth estimation
  11.6 Neural radiance fields for synthesis of 3D scenes
    11.6.1 Camera intrinsic and extrinsic matrices
    11.6.2 Using MLP with Gaussian Fourier feature mapping to reconstruct images
    11.6.3 The physics principle of rendering volume density in NeRF
  References
Appendix

About the Author:
Dr Caide Xiao was born in China. He obtained his bachelor's degree in physics from Central China Normal University in 1979 and was a lecturer in medical physics at Yunyang University. After completing his PhD in optical biosensors at Tsinghua University, he has been a research fellow and visiting scholar at the Biotechnology Research Institute in Montreal, Oakland University in Rochester, West Virginia University and the University of Calgary.


Product Details
  • ISBN-13: 9780750362443
  • Publisher: Institute of Physics Publishing
  • Publisher Imprint: Institute of Physics Publishing
  • Language: English
  • ISBN-10: 0750362448
  • Publisher Date: 29 Apr 2024
  • Binding: Digital (delivered electronically)
  • Series Title: IOP ebooks

