Pattern Recognition and Computer Vision
Pattern Recognition and Computer Vision: 5th Chinese Conference, PRCV 2022, Shenzhen, China, November 4–7, 2022, Proceedings, Part II (Lecture Notes in Computer Science 13535)

About the Book

The 4-volume set LNCS 13534, 13535, 13536 and 13537 constitutes the refereed proceedings of the 5th Chinese Conference on Pattern Recognition and Computer Vision, PRCV 2022, held in Shenzhen, China, in November 2022. The 233 full papers presented were carefully reviewed and selected from 564 submissions. The papers have been organized in the following topical sections: Theories and Feature Extraction; Machine Learning, Multimedia and Multimodal; Optimization and Neural Network and Deep Learning; Biomedical Image Processing and Analysis; Pattern Classification and Clustering; 3D Computer Vision and Reconstruction, Robots and Autonomous Driving; Recognition, Remote Sensing; Vision Analysis and Understanding; Image Processing and Low-level Vision; Object Detection, Segmentation and Tracking.

Table of Contents:

Biomedical Image Processing and Analysis
  • ED-AnoNet: Elastic Distortion-Based Unsupervised Network for OCT Image Anomaly Detection
  • BiDFNet: Bi-decoder and Feedback Network for Automatic Polyp Segmentation with Vision Transformers
  • FundusGAN: A One-Stage Single Input GAN for Fundus Synthesis
  • DIT-NET: Joint Deformable Network and Intra-class Transfer GAN for Cross-domain 3D Neonatal Brain MRI Segmentation
  • Classification of sMRI Images for Alzheimer's Disease by Using Neural Networks
  • Semi-Supervised Distillation Learning Based on Swin Transformer for MRI Reconstruction
  • Multi-Scale Multi-Target Domain Adaptation for Angle Closure Classification
  • Automatic Glottis Segmentation Method Based on Lightweight U-net
  • Decouple U-Net: A Method for the Segmentation and Counting of Macrophages in Whole Slide Imaging
  • A Zero-training Method for RSVP-based Brain Computer Interface
  • An Improved Tensor Network for Image Classification in Histopathology
  • DeepEnReg: Joint Enhancement and Affine Registration for Low-contrast Medical Images
  • Fluorescence Microscopy Images Segmentation Based on Prototypical Networks with a Few Annotations
  • SuperVessel: Segmenting High-resolution Vessel from Low-resolution Retinal Image
  • Cascade Multiscale Swin-Conv Network for Fast MRI Reconstruction
  • DEST: Deep Enhanced Swin Transformer Toward Better Scoring for NAFLD
  • CTCNet: A Bi-directional Cascaded Segmentation Network Combining Transformers with CNNs for Skin Lesions
  • MR Image Denoising Based on Improved Multipath Matching Pursuit Algorithm
  • Statistical Characteristics of 3-D PET Imaging: A Comparison Between Conventional and Total-body PET Scanners
  • Unsupervised Medical Image Registration Based on Multi-scale Cascade Network
  • A Novel Local-global Spatial Attention Network for Cortical Cataract Classification in AS-OCT
  • PRGAN: A Progressive Refined GAN for Lesion Localization and Segmentation on High-Resolution Retinal Fundus Photography
  • Multiscale Autoencoder with Structural-Functional Attention Network for Alzheimer's Disease Prediction
  • Robust Liver Segmentation Using Boundary Preserving Dual Attention Network
  • msFormer: Adaptive Multi-Modality 3D Transformer for Medical Image Segmentation
  • Semi-supervised Medical Image Segmentation with Semantic Distance Distribution Consistency Learning
  • MultiGAN: Multi-domain Image Translation from OCT to OCTA
  • TransPND: A Transformer-Based Pulmonary Nodule Diagnosis Method on CT Image
  • Adversarial Learning Based Structural Brain-network Generative Model for Analyzing Mild Cognitive Impairment
  • A 2.5D Coarse-to-fine Framework for 3D Cardiac CT View Planning
  • Weakly Supervised Semantic Segmentation of Echocardiography Videos via Multi-level Features Selection
  • DPformer: Dual-path Transformers for Geometric and Appearance Features Reasoning in Diabetic Retinopathy Grading
  • Deep Supervoxel Mapping Learning for Dense Correspondence of Cone-Beam Computed Tomography
  • Manifold-Driven and Feature Replay Lifelong Representation Learning on Person ReID
  • Multi-source Information-shared Domain Adaptation for EEG Emotion Recognition
  • Spatial-Channel Mixed Attention Based Network for Remote Heart Rate Estimation
  • Weighted Graph Based Feature Representation for Finger-Vein Recognition
  • Self-Supervised Face Anti-Spoofing via Anti-Contrastive Learning
  • Counterfactual Image Enhancement for Explanation of Face Swap Deepfakes
  • Improving Pre-trained Masked Autoencoder with Locality Enhancement for Person Re-identification
  • MINIPI: A MultI-scale Neural Network Based Impulse Radio Ultra-wideband Radar Indoor Personnel Identification Method
  • PSU-Net: Paired Spatial U-Net for Hand Segmentation with Complex Backgrounds

Pattern Classification and Clustering
  • Human Knowledge-Guided and Task-Augmented Deep Learning for Glioma Grading
  • Learning to Cluster Faces with Mixed Face Quality
  • Capturing Prior Knowledge in Soft Labels for Classification with Limited or Imbalanced Data
  • Coupled Learning for Kernel Representation and Graph Tensor in Multi-view Subspace Clustering
  • Combating Noisy Labels via Contrastive Learning with Challenging Pairs
  • Semantic Center Guided Windows Attention Fusion Framework for Food Recognition
  • Adversarial Bidirectional Feature Generation for Generalized Zero-Shot Learning under Unreliable Semantics
  • Exploiting Robust Memory Features for Unsupervised Reidentification
  • TIR: A Two-stage Insect Recognition Method for Convolutional Neural Network
  • Discerning Coteaching: A Deep Framework for Automatic Identification of Noise Labels
  • VDSSA: Ventral & Dorsal Sequential Self-attention AutoEncoder for Cognitive-Consistency Disentanglement
  • Bayesian Neural Networks with Covariate Shift Correction for Classification in γ-ray Astrophysics


Product Details
  • ISBN-13: 9783031189098
  • Publisher: Springer International Publishing AG
  • Publisher Imprint: Springer International Publishing AG
  • Height: 235 mm
  • No of Pages: 723
  • Returnable: N
  • Sub Title: 5th Chinese Conference, PRCV 2022, Shenzhen, China, November 4–7, 2022, Proceedings, Part II
  • ISBN-10: 3031189094
  • Publisher Date: 13 Oct 2022
  • Binding: Paperback
  • Language: English
  • Series Title: 13535 Lecture Notes in Computer Science
  • Width: 155 mm


Customer Reviews

Writing guidelines
We want to publish your review, so please:
  • Keep your review focused on the product. Reviews that defame an author's character will be rejected.
  • Avoid writing about customer service; contact us instead if you have an issue requiring immediate attention.
  • Refrain from mentioning competitors or the specific price you paid for the product.
  • Do not include any personally identifiable information, such as full names.


    CUSTOMER RATINGS AND REVIEWS AND QUESTIONS AND ANSWERS TERMS OF USE

    These Terms of Use govern your conduct associated with the Customer Ratings and Reviews and/or Questions and Answers service offered by Bookswagon (the "CRR Service").


    By submitting any content to Bookswagon, you guarantee that:
    • You are the sole author and owner of the intellectual property rights in the content;
    • All "moral rights" that you may have in such content have been voluntarily waived by you;
    • All content that you post is accurate;
    • You are at least 13 years old;
    • Use of the content you supply does not violate these Terms of Use and will not cause injury to any person or entity.
    You further agree that you may not submit any content:
    • That is known by you to be false, inaccurate or misleading;
    • That infringes any third party's copyright, patent, trademark, trade secret or other proprietary rights or rights of publicity or privacy;
    • That violates any law, statute, ordinance or regulation (including, but not limited to, those governing consumer protection, unfair competition, anti-discrimination or false advertising);
    • That is, or may reasonably be considered to be, defamatory, libelous, hateful, racially or religiously biased or offensive, unlawfully threatening or unlawfully harassing to any individual, partnership or corporation;
    • For which you were compensated or granted any consideration by any unapproved third party;
    • That includes any information that references other websites, addresses, email addresses, contact information or phone numbers;
    • That contains any computer viruses, worms or other potentially damaging computer programs or files.
    You agree to indemnify and hold Bookswagon (and its officers, directors, agents, subsidiaries, joint ventures, employees and third-party service providers, including but not limited to Bazaarvoice, Inc.), harmless from all claims, demands, and damages (actual and consequential) of every kind and nature, known and unknown including reasonable attorneys' fees, arising out of a breach of your representations and warranties set forth above, or your violation of any law or the rights of a third party.


    For any content that you submit, you grant Bookswagon a perpetual, irrevocable, royalty-free, transferable right and license to use, copy, modify, delete in its entirety, adapt, publish, translate, create derivative works from and/or sell, transfer, and/or distribute such content and/or incorporate such content into any form, medium or technology throughout the world without compensation to you. Additionally, Bookswagon may transfer or share any personal information that you submit with its third-party service providers, including but not limited to Bazaarvoice, Inc., in accordance with its Privacy Policy.


    All content that you submit may be used at Bookswagon's sole discretion. Bookswagon reserves the right to change, condense, withhold publication, remove or delete any content on Bookswagon's website that Bookswagon deems, in its sole discretion, to violate the content guidelines or any other provision of these Terms of Use. Bookswagon does not guarantee that you will have any recourse through Bookswagon to edit or delete any content you have submitted. Ratings and written comments are generally posted within two to four business days. However, Bookswagon reserves the right to remove or to refuse to post any submission to the extent authorized by law. You acknowledge that you, not Bookswagon, are responsible for the contents of your submission. None of the content that you submit shall be subject to any obligation of confidence on the part of Bookswagon, its agents, subsidiaries, affiliates, partners or third-party service providers (including but not limited to Bazaarvoice, Inc.) and their respective directors, officers and employees.

