# Inpainting Models

## DeepFillv1 (CVPR’2018)

### Abstract

Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to the ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when textures need to be borrowed from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feed-forward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes at test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing methods.
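The mechanism that does this explicit borrowing is the paper's contextual attention layer. As a rough single-image PyTorch sketch of the idea (the function name, patch size, and scaling factor below are ours, not the authors' implementation, and every location is allowed to act as a query, which is a simplification): each location's feature patch is matched against every known-region patch by cosine similarity, the scores are softmaxed, and background patches are pasted back with those attention weights.

```python
import torch
import torch.nn.functional as F

def contextual_attention(features, mask, patch=3, softmax_scale=10.0):
    """Hedged sketch of contextual attention: match each location's
    feature patch against every background patch by cosine similarity,
    softmax the scores, and rebuild it as a weighted paste of patches."""
    n, c, h, w = features.shape
    assert n == 1, "sketch handles a single image"
    # Overlapping patches serve both as matching filters and paste sources.
    patches = F.unfold(features, patch, padding=patch // 2)   # (1, c*p*p, h*w)
    patches = patches.transpose(1, 2).reshape(h * w, c, patch, patch)
    # Unit-normalize patches so the convolution yields cosine similarity.
    norms = patches.flatten(1).norm(dim=1).clamp(min=1e-4)
    scores = F.conv2d(features, patches / norms.view(-1, 1, 1, 1),
                      padding=patch // 2)                     # (1, h*w, h, w)
    # A source patch is usable only if it contains no hole pixels.
    touched = F.max_pool2d(mask, patch, 1, patch // 2)        # 1 if any hole
    invalid = (touched.flatten() > 0.5).view(1, -1, 1, 1)
    attn = torch.softmax(
        (scores * softmax_scale).masked_fill(invalid, -1e4), dim=1)
    # Paste patches back weighted by attention; average the p*p overlaps.
    return F.conv_transpose2d(attn, patches, padding=patch // 2) / patch ** 2

# Illustrative usage on a small feature map with a square hole (mask=1 in hole).
feat = torch.randn(1, 32, 32, 32)
hole = torch.zeros(1, 1, 32, 32)
hole[:, :, 8:24, 8:24] = 1.0
filled = contextual_attention(feat, hole)   # (1, 32, 32, 32)
```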

### Citation

```bibtex
@inproceedings{yu2018generative,
  title={Generative image inpainting with contextual attention},
  author={Yu, Jiahui and Lin, Zhe and Yang, Jimei and Shen, Xiaohui and Lu, Xin and Huang, Thomas S},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={5505--5514},
  year={2018}
}
```


### Results and models

**Places365-Challenge**

| Method | Mask Type | Resolution | Train Iters | Test Set | l1 error | PSNR | SSIM | Download |
| :----- | :-------- | :--------- | :---------- | :------- | -------: | ---: | ---: | :------- |
| DeepFillv1 | square bbox | 256x256 | 3500k | Places365-val | 11.019 | 23.429 | 0.862 | model \| log |

**CelebA-HQ**

| Method | Mask Type | Resolution | Train Iters | Test Set | l1 error | PSNR | SSIM | Download |
| :----- | :-------- | :--------- | :---------- | :------- | -------: | ---: | ---: | :------- |
| DeepFillv1 | square bbox | 256x256 | 1500k | CelebA-val | 6.677 | 26.878 | 0.911 | model \| log |

## DeepFillv2 (CVPR’2019)

### Abstract

We present a generative image inpainting system to complete images with free-form masks and guidance. The system is based on gated convolutions learned from millions of images without additional labelling efforts. The proposed gated convolution solves the issue of vanilla convolution, which treats all input pixels as valid ones, and generalizes partial convolution by providing a learnable dynamic feature selection mechanism for each channel at each spatial location across all layers. Moreover, as free-form masks may appear anywhere in images with any shape, global and local GANs designed for a single rectangular mask are not applicable. Thus, we also present a patch-based GAN loss, named SN-PatchGAN, by applying a spectral-normalized discriminator on dense image patches. SN-PatchGAN is simple in formulation, fast and stable in training. Results on automatic image inpainting and user-guided extension demonstrate that our system generates higher-quality and more flexible results than previous methods. Our system helps users quickly remove distracting objects, modify image layouts, clear watermarks and edit faces.
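The gating idea itself is compact: one convolution produces features while a parallel convolution produces a sigmoid gate per channel and per spatial location, and the two are multiplied. Below is a minimal PyTorch sketch with layer sizes and names of our choosing, not the released code:

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Minimal sketch of a gated convolution: a learnable soft gate
    modulates the feature response at every channel and location."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        # One convolution produces features, a parallel one produces gates.
        self.feature_conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.gate_conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.activation = nn.ELU()

    def forward(self, x):
        features = self.activation(self.feature_conv(x))
        gates = torch.sigmoid(self.gate_conv(x))   # dynamic feature selection
        return features * gates

# Illustrative usage: a masked RGB image with its mask as a fourth channel.
layer = GatedConv2d(4, 32)
out = layer(torch.randn(1, 4, 256, 256))   # (1, 32, 256, 256)
```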

### Citation

```bibtex
@inproceedings{yu2019free,
  title={Free-form image inpainting with gated convolution},
  author={Yu, Jiahui and Lin, Zhe and Yang, Jimei and Shen, Xiaohui and Lu, Xin and Huang, Thomas S},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  pages={4471--4480},
  year={2019}
}
```


### Results and models

**Places365-Challenge**

| Method | Mask Type | Resolution | Train Iters | Test Set | l1 error | PSNR | SSIM | Download |
| :----- | :-------- | :--------- | :---------- | :------- | -------: | ---: | ---: | :------- |
| DeepFillv2 | free-form | 256x256 | 100k | Places365-val | 8.635 | 22.398 | 0.815 | model \| log |

**CelebA-HQ**

| Method | Mask Type | Resolution | Train Iters | Test Set | l1 error | PSNR | SSIM | Download |
| :----- | :-------- | :--------- | :---------- | :------- | -------: | ---: | ---: | :------- |
| DeepFillv2 | free-form | 256x256 | 20k | CelebA-val | 5.411 | 25.721 | 0.871 | model \| log |

## Global&Local (ToG’2017)

### Abstract

We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess if it is coherent as a whole, while the local discriminator looks only at a small area centered at the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool both context discriminator networks, which requires it to generate images that are indistinguishable from real ones with regard to overall consistency as well as fine details. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete the image.
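As a rough sketch of that two-discriminator objective, the PyTorch snippet below scores the full image with a global discriminator and a crop around the completed region with a local discriminator. In the paper this adversarial term is combined with a weighted MSE reconstruction loss; all names here are illustrative, and the discriminators are assumed to output real/fake logits.

```python
import torch
import torch.nn.functional as F

def d_loss(d_global, d_local, real, completed, bbox):
    """Both discriminators learn to separate real from completed images."""
    t, l, s = bbox  # top, left, size of the local crop around the hole
    pairs = [
        (d_global(real), 1.0),
        (d_global(completed.detach()), 0.0),
        (d_local(real[:, :, t:t + s, l:l + s]), 1.0),
        (d_local(completed.detach()[:, :, t:t + s, l:l + s]), 0.0),
    ]
    return sum(F.binary_cross_entropy_with_logits(logit, torch.full_like(logit, y))
               for logit, y in pairs)

def g_loss(d_global, d_local, completed, bbox):
    """The completion network is trained to fool both discriminators at once."""
    t, l, s = bbox
    g = d_global(completed)
    lc = d_local(completed[:, :, t:t + s, l:l + s])
    return (F.binary_cross_entropy_with_logits(g, torch.ones_like(g)) +
            F.binary_cross_entropy_with_logits(lc, torch.ones_like(lc)))
```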

### Citation

```bibtex
@article{iizuka2017globally,
  title={Globally and locally consistent image completion},
  author={Iizuka, Satoshi and Simo-Serra, Edgar and Ishikawa, Hiroshi},
  journal={ACM Transactions on Graphics (ToG)},
  volume={36},
  number={4},
  pages={1--14},
  year={2017},
  publisher={ACM New York, NY, USA}
}
```


### Results and models

Note that we do not apply the post-processing module in Global&Local for a fair comparison with current deep inpainting methods.

**Places365-Challenge**

| Method | Mask Type | Resolution | Train Iters | Test Set | l1 error | PSNR | SSIM | Download |
| :----- | :-------- | :--------- | :---------- | :------- | -------: | ---: | ---: | :------- |
| Global&Local | square bbox | 256x256 | 500k | Places365-val | 11.164 | 23.152 | 0.862 | model \| log |

**CelebA-HQ**

| Method | Mask Type | Resolution | Train Iters | Test Set | l1 error | PSNR | SSIM | Download |
| :----- | :-------- | :--------- | :---------- | :------- | -------: | ---: | ---: | :------- |
| Global&Local | square bbox | 256x256 | 500k | CelebA-val | 6.678 | 26.780 | 0.904 | model \| log |

## PConv (ECCV’2018)

### Abstract

Existing deep learning based image inpainting methods apply a standard convolutional network over the corrupted image, with convolutional filter responses conditioned on both valid pixels and the substitute values in the masked holes (typically the mean value). This often leads to artifacts such as color discrepancy and blurriness. Post-processing is usually used to reduce such artifacts, but it is expensive and may fail. We propose the use of partial convolutions, where the convolution is masked and renormalized to be conditioned on only valid pixels. We further include a mechanism to automatically generate an updated mask for the next layer as part of the forward pass. Our model outperforms other methods for irregular masks. We show qualitative and quantitative comparisons with other methods to validate our approach.
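The renormalization and mask-update rules translate almost directly into code: the output at each location is W^T(X ⊙ M) rescaled by sum(1)/sum(M) plus bias when the window contains any valid pixel, and that location becomes valid in the updated mask. A minimal PyTorch sketch follows, with layer names and sizes of our choosing rather than the authors' released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Minimal sketch of a partial convolution with mask update."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        # Fixed all-ones kernel that counts valid pixels under each window.
        self.register_buffer("ones", torch.ones(1, 1, kernel_size, kernel_size))
        self.stride, self.padding = stride, padding

    def forward(self, x, mask):
        # mask: (N, 1, H, W), 1 for valid pixels, 0 inside holes.
        with torch.no_grad():
            valid = F.conv2d(mask, self.ones, stride=self.stride,
                             padding=self.padding)
        out = self.conv(x * mask)                    # W^T(X ⊙ M) + b
        bias = self.conv.bias.view(1, -1, 1, 1)
        scale = self.ones.numel() / valid.clamp(min=1.0)
        out = (out - bias) * scale + bias            # renormalize, re-add bias
        new_mask = (valid > 0).float()               # mask-update rule
        return out * new_mask, new_mask

# Illustrative usage: one layer over an image with a square hole.
layer = PartialConv2d(3, 16)
img = torch.randn(1, 3, 64, 64)
mask = torch.ones(1, 1, 64, 64)
mask[:, :, 16:48, 16:48] = 0
out, updated_mask = layer(img, mask)   # the hole shrinks in updated_mask
```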

### Citation

```bibtex
@inproceedings{liu2018image,
  title={Image inpainting for irregular holes using partial convolutions},
  author={Liu, Guilin and Reda, Fitsum A and Shih, Kevin J and Wang, Ting-Chun and Tao, Andrew and Catanzaro, Bryan},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={85--100},
  year={2018}
}
```


### Results and models

**Places365-Challenge**