Deep Generative Inpainting with Comparative Sample Augmentation

Authors

  • Boli Fang, Indiana University, USA (https://orcid.org/0000-0002-3582-550X)
  • Miao Jiang, Indiana University Bloomington, USA
  • Jerry Shen, University of Southern California, USA
  • Björn Stenger, Rakuten Institute of Technology, Tokyo, Japan

DOI:

https://doi.org/10.47852/bonviewJCCE2202319

Keywords:

comparative data augmentation, generative adversarial networks

Abstract

Recent advances in deep learning techniques such as Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs) have achieved breakthroughs in semantic image inpainting, the task of reconstructing missing pixels. While more effective than conventional approaches, deep learning models require large datasets and substantial computational resources for training, and inpainting quality varies considerably when training data differs in size and diversity. To address these problems, we present an inpainting strategy called Comparative Sample Augmentation, which enhances the quality of the training set by filtering out irrelevant images and constructing additional images using information about the surrounding regions of the target image. Experiments on multiple datasets demonstrate that our method extends the applicability of deep inpainting models to training sets with varying levels of diversity. It also enhances inpainting quality, as measured by both qualitative and quantitative metrics, for a large class of deep models, with little model-specific tuning required.
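The abstract describes the method only at a high level, so the Python sketch below is purely illustrative of the two steps it names: filtering irrelevant training images and constructing extra samples from the region surrounding the missing pixels. The function names (filter_irrelevant, augment_from_border), the L1 histogram-distance criterion, and the patch-copy augmentation are assumptions standing in for the authors' actual procedure, which is given in the full text behind the DOI above.

```python
import numpy as np

def dilate(mask: np.ndarray, iters: int = 8) -> np.ndarray:
    """Binary dilation with a 3x3 cross, via np.roll (edge wrap-around ignored)."""
    out = mask.copy()
    for _ in range(iters):
        out = (out
               | np.roll(out, 1, 0) | np.roll(out, -1, 0)
               | np.roll(out, 1, 1) | np.roll(out, -1, 1))
    return out

def color_histogram(pixels: np.ndarray, bins: int = 16) -> np.ndarray:
    """Per-channel color histogram, normalized to sum to one."""
    hist = np.concatenate([
        np.histogram(pixels[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(np.float64)
    return hist / max(hist.sum(), 1.0)

def border_histogram(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Histogram of the ring of pixels just outside the missing region."""
    ring = dilate(mask) & ~mask
    return color_histogram(image[ring])

def filter_irrelevant(candidates, target, mask, keep_ratio=0.5):
    """Keep training images whose color statistics best match the target's
    surrounding region; drop the rest as 'irrelevant' (assumed criterion)."""
    ref = border_histogram(target, mask)
    ranked = sorted(candidates,
                    key=lambda img: np.abs(color_histogram(img) - ref).sum())
    return ranked[: max(1, int(len(candidates) * keep_ratio))]

def augment_from_border(target, mask, n_samples=4, patch=16, seed=0):
    """Construct extra samples by copying patches from the border ring into
    random positions (a stand-in for the paper's construction step)."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(dilate(mask) & ~mask)   # border-ring coordinates
    h, w, _ = target.shape
    samples = []
    for _ in range(n_samples):
        aug = target.copy()
        i = rng.integers(len(ys))
        y0, x0 = min(ys[i], h - patch), min(xs[i], w - patch)
        ty, tx = rng.integers(h - patch), rng.integers(w - patch)
        aug[ty:ty + patch, tx:tx + patch] = target[y0:y0 + patch, x0:x0 + patch]
        samples.append(aug)
    return samples

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
    mask = np.zeros((64, 64), dtype=bool)
    mask[24:40, 24:40] = True                   # square hole to inpaint
    pool = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(10)]
    print(len(filter_irrelevant(pool, target, mask)),
          len(augment_from_border(target, mask)))   # e.g. "5 4"
```

In a full pipeline, these two steps would run before GAN training: the filtered and augmented set replaces the raw training set fed to whichever deep inpainting model is used, which is what makes the approach largely model-agnostic.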

Published

2022-09-28

How to Cite

Fang, B., Jiang, M., Shen, J., & Stenger, B. (2022). Deep Generative Inpainting with Comparative Sample Augmentation. Journal of Computational and Cognitive Engineering, 1(4), 174–180. https://doi.org/10.47852/bonviewJCCE2202319

Issue

Volume 1, Issue 4 (2022)

Section

Research Articles