Deep Generative Inpainting with Comparative Sample Augmentation
DOI: https://doi.org/10.47852/bonviewJCCE2202319

Keywords: comparative data augmentation, deep neural network, generative adversarial networks

Abstract
Recent advances in deep learning techniques such as Convolutional Neural Networks (CNN) and Generative Adversarial Networks (GAN) have achieved breakthroughs in the problem of semantic image inpainting, the task of reconstructing missing pixels. While more effective than conventional approaches, deep learning models require large datasets and computational resources for training, and inpainting quality varies considerably when training data differs in size and diversity. To address these problems, we present an inpainting strategy called Comparative Sample Augmentation, which enhances the quality of the training set by filtering irrelevant images and constructing additional images using information about the surrounding regions of the target image. Experiments on multiple datasets demonstrate that our method extends the applicability of deep inpainting models to training sets with varying levels of diversity, while enhancing the inpainting quality as measured by qualitative and quantitative metrics for a large class of deep models, with little need for model-specific consideration.
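The abstract names two components of Comparative Sample Augmentation: filtering irrelevant images out of the training set, and constructing additional images from the surroundings of the target image. The paper's implementation details are not given on this page, so the Python sketch below is only an illustration of those two ideas under stated assumptions: the function names (filter_irrelevant, augment_from_surroundings), the histogram-intersection similarity measure, the patch re-masking scheme, and all parameter values are hypothetical, not the authors' method.

import numpy as np

def filter_irrelevant(candidates, context, threshold=0.6):
    """Drop candidate training images whose intensity histogram differs
    too much from the unmasked context of the target image.
    Histogram intersection is an assumed similarity measure; the paper
    does not specify one."""
    def hist(x):
        h = np.histogram(x, bins=32, range=(0.0, 1.0))[0].astype(float)
        return h / max(h.sum(), 1.0)
    ctx = hist(context)
    return [img for img in candidates
            if np.minimum(ctx, hist(img)).sum() >= threshold]

def augment_from_surroundings(target, mask, n_samples=4, patch=16, seed=0):
    """Build extra training samples by cutting synthetic holes in the
    known (unmasked) surroundings of the target, so the model sees
    hole/context pairs drawn from the same image statistics."""
    rng = np.random.default_rng(seed)
    h, w = mask.shape
    samples = []
    while len(samples) < n_samples:
        y = rng.integers(0, h - patch)
        x = rng.integers(0, w - patch)
        if mask[y:y + patch, x:x + patch].any():
            continue  # skip patches overlapping the real missing region
        aug_mask = np.zeros_like(mask)
        aug_mask[y:y + patch, x:x + patch] = True  # synthetic hole
        samples.append((target.copy(), aug_mask))
    return samples

# Hypothetical usage with random arrays standing in for real images.
rng = np.random.default_rng(42)
target = rng.random((64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[24:40, 24:40] = True                # the region to inpaint
context = target[~mask]                  # known surrounding pixels
pool = [rng.random((64, 64)) for _ in range(10)]
train_set = filter_irrelevant(pool, context)
extra = augment_from_surroundings(target, mask)
print(len(train_set), "filtered images;", len(extra), "synthetic samples")

Note that this sketch filters on low-level pixel statistics only; the actual method may rely on learned features or other context information, which the abstract does not specify.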
Received: 18 July 2022 | Revised: 22 August 2022 | Accepted: 24 August 2022
Conflicts of Interest
The authors declare that they have no conflicts of interest regarding this work.
License
Copyright (c) 2022 Authors
This work is licensed under a Creative Commons Attribution 4.0 International License.