Note: You are reading the documentation for MMEditing 0.x, which will be deprecated by the end of 2022. We recommend you upgrade to MMEditing 1.0 to enjoy the new features and better performance brought by OpenMMLab 2.0. Check out the changelog, code, and documentation of MMEditing 1.0 for more details.
Inpainting Models
AOT-GAN (TVCG’2021)
Abstract
State-of-the-art image inpainting approaches can suffer from generating distorted structures and blurry textures in high-resolution images (e.g., 512x512). The challenges mainly stem from (1) image content reasoning from distant contexts, and (2) fine-grained texture synthesis for a large missing region. To overcome these two challenges, we propose an enhanced GAN-based model, named Aggregated COntextual-Transformation GAN (AOT-GAN), for high-resolution image inpainting. Specifically, to enhance context reasoning, we construct the generator of AOT-GAN by stacking multiple layers of a proposed AOT block. The AOT blocks aggregate contextual transformations from various receptive fields, allowing the model to capture both informative distant image contexts and rich patterns of interest for context reasoning. To improve texture synthesis, we enhance the discriminator of AOT-GAN by training it with a tailored mask-prediction task. Such a training objective forces the discriminator to distinguish the detailed appearances of real and synthesized patches, which in turn helps the generator synthesize clear textures. Extensive comparisons on Places2, the most challenging benchmark with 1.8 million high-resolution images of 365 complex scenes, show that our model outperforms the state of the art by a significant margin in terms of FID, with a 38.60% relative improvement. A user study including more than 30 subjects further validates the superiority of AOT-GAN. We further evaluate the proposed AOT-GAN in practical applications, e.g., logo removal, face editing, and object removal. Results show that our model achieves promising completions in the real world. We release code and models at this https URL.
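The core of the generator is the AOT block: several parallel dilated 3x3 convolutions whose outputs are concatenated, fused, and blended with the input through a learned spatial gate. The following PyTorch sketch illustrates that idea only; the channel split, dilation rates, and gating details are assumptions for illustration, not the exact MMEditing implementation.

```python
import torch
import torch.nn as nn


class AOTBlockSketch(nn.Module):
    """Aggregated contextual transformations: parallel dilated convs + gated residual.

    Simplified sketch of the AOT block idea; dilation rates follow the usual
    (1, 2, 4, 8) choice, other hyper-parameters are assumptions.
    """

    def __init__(self, channels=256, dilations=(1, 2, 4, 8)):
        super().__init__()
        # One dilated 3x3 branch per rate; each produces channels // len(dilations) features.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels // len(dilations), 3,
                          padding=d, dilation=d),
                nn.ReLU(inplace=True))
            for d in dilations
        ])
        # Fuse the concatenated branch outputs back to `channels`.
        self.fuse = nn.Conv2d(channels, channels, 3, padding=1)
        # Spatial gate deciding how much of the transformation to keep per location.
        self.gate = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        # Aggregate contextual transformations from different receptive fields.
        out = torch.cat([branch(x) for branch in self.branches], dim=1)
        out = self.fuse(out)
        # Gated residual: blend the input and the transformed features per pixel.
        g = torch.sigmoid(self.gate(x))
        return x * (1 - g) + out * g


if __name__ == '__main__':
    block = AOTBlockSketch(channels=256)
    feat = torch.randn(1, 256, 64, 64)
    print(block(feat).shape)  # torch.Size([1, 256, 64, 64])
```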

Results and models
Places365-Challenge

| Method | Mask Type | Resolution | Train Iters | Test Set | l1 error | PSNR | SSIM | Download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AOT-GAN | free-form (50-60%) | 512x512 | 500k | Places365-val | 7.07 | 19.01 | 0.682 | model \| log |
More results for different mask areas:

| Metric | Mask Area | Paper Results | Reimplemented Results |
| --- | --- | --- | --- |
| L1 (10^-2) (lower better) | 1 – 10% | 0.55 | 0.54 |
| | 10 – 20% | 1.19 | 1.47 |
| | 20 – 30% | 2.11 | 2.79 |
| | 30 – 40% | 3.20 | 4.38 |
| | 40 – 50% | 4.51 | 6.28 |
| | 50 – 60% | 7.07 | 10.16 |
| PSNR (higher better) | 1 – 10% | 34.79 | inf |
| | 10 – 20% | 29.49 | 31.22 |
| | 20 – 30% | 26.03 | 27.65 |
| | 30 – 40% | 23.58 | 25.06 |
| | 40 – 50% | 21.65 | 23.01 |
| | 50 – 60% | 19.01 | 20.05 |
| SSIM (higher better) | 1 – 10% | 0.976 | 0.982 |
| | 10 – 20% | 0.940 | 0.951 |
| | 20 – 30% | 0.890 | 0.911 |
| | 30 – 40% | 0.835 | 0.866 |
| | 40 – 50% | 0.773 | 0.815 |
| | 50 – 60% | 0.682 | 0.739 |
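For reference, a released checkpoint can be run through MMEditing 0.x's high-level inference API. The snippet below is a sketch that assumes the `init_model` and `inpainting_inference` functions in `mmedit.apis` (as used by the 0.x inpainting demo) together with `mmedit.core.tensor2img`; the config, checkpoint, and image paths are placeholders to adjust to your local files.

```python
import mmcv
import torch

# Assumed MMEditing 0.x high-level API (matches the 0.x inpainting demo);
# verify the names against your installed version.
from mmedit.apis import init_model, inpainting_inference
from mmedit.core import tensor2img

# Placeholder paths: point these at the AOT-GAN config and downloaded checkpoint.
config = 'configs/inpainting/AOT-GAN/AOT-GAN_512x512_4x12_places.py'
checkpoint = 'checkpoints/aot-gan_places.pth'

device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
model = init_model(config, checkpoint, device=device)

# The API takes the masked image and the corresponding binary mask on disk.
result = inpainting_inference(model, 'masked_input.png', 'mask.png')
result = tensor2img(result, min_max=(-1, 1))[..., ::-1]  # CHW tensor -> BGR image
mmcv.imwrite(result, 'inpainted.png')
```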
Citation
@inproceedings{yan2021agg,
  author = {Zeng, Yanhong and Fu, Jianlong and Chao, Hongyang and Guo, Baining},
  title = {Aggregated Contextual Transformations for High-Resolution Image Inpainting},
  booktitle = {Arxiv},
  pages = {-},
  year = {2020}
}
DeepFillv1 (CVPR’2018)
Abstract
Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to the ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when textures need to be borrowed from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feed-forward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes at test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones.
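The mechanism behind "explicitly borrowing information from distant spatial locations" is contextual attention: foreground (hole) features are matched against background patches by cosine similarity, the scores are softmaxed, and the hole is reconstructed as a weighted sum of those patches. The sketch below is a single-image toy version of that idea; the patch size, softmax scale, and the absence of masking and downscaling tricks are simplifying assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F


def contextual_attention_sketch(foreground, background, patch_size=3, softmax_scale=10.0):
    """Toy single-image contextual attention.

    Matches each foreground location against background patches (cosine
    similarity via convolution), softmaxes the scores, and reconstructs the
    foreground as a weighted sum of background patches (transposed conv).
    """
    b, c, h, w = background.shape
    assert b == 1, 'sketch handles a single image for clarity'

    # Extract background patches and reshape them into convolution kernels.
    patches = F.unfold(background, kernel_size=patch_size, padding=patch_size // 2)
    patches = patches.transpose(1, 2).reshape(-1, c, patch_size, patch_size)  # (h*w, c, k, k)

    # Cosine similarity between foreground features and every background patch.
    normed = patches / (patches.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-8)
    scores = F.conv2d(foreground, normed, padding=patch_size // 2)  # (1, h*w, H, W)
    attention = F.softmax(scores * softmax_scale, dim=1)

    # Reconstruct the foreground as an attention-weighted sum of raw patches.
    out = F.conv_transpose2d(attention, patches, padding=patch_size // 2)
    return out / (patch_size ** 2)  # average the overlapping patch contributions


if __name__ == '__main__':
    fg = torch.randn(1, 64, 32, 32)
    bg = torch.randn(1, 64, 32, 32)
    print(contextual_attention_sketch(fg, bg).shape)  # torch.Size([1, 64, 32, 32])
```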

Results and models
Places365-Challenge

| Method | Mask Type | Resolution | Train Iters | Test Set | l1 error | PSNR | SSIM | Download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DeepFillv1 | square bbox | 256x256 | 3500k | Places365-val | 11.019 | 23.429 | 0.862 | model \| log |

CelebA-HQ

| Method | Mask Type | Resolution | Train Iters | Test Set | l1 error | PSNR | SSIM | Download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DeepFillv1 | square bbox | 256x256 | 1500k | CelebA-val | 6.677 | 26.878 | 0.911 | model \| log |
Citation
@inproceedings{yu2018generative,
  title={Generative image inpainting with contextual attention},
  author={Yu, Jiahui and Lin, Zhe and Yang, Jimei and Shen, Xiaohui and Lu, Xin and Huang, Thomas S},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={5505--5514},
  year={2018}
}
DeepFillv2 (CVPR’2019)
Abstract
We present a generative image inpainting system to complete images with free-form masks and guidance. The system is based on gated convolutions learned from millions of images without additional labelling efforts. The proposed gated convolution solves the issue of vanilla convolution, which treats all input pixels as valid ones, and generalizes partial convolution by providing a learnable dynamic feature selection mechanism for each channel at each spatial location across all layers. Moreover, as free-form masks may appear anywhere in images with any shape, global and local GANs designed for a single rectangular mask are not applicable. Thus, we also present a patch-based GAN loss, named SN-PatchGAN, by applying a spectral-normalized discriminator on dense image patches. SN-PatchGAN is simple in formulation, fast and stable in training. Results on automatic image inpainting and user-guided extension demonstrate that our system generates higher-quality and more flexible results than previous methods. Our system helps users quickly remove distracting objects, modify image layouts, clear watermarks and edit faces.
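The two mechanisms described above are compact enough to express directly: a gated convolution runs two sibling convolutions, one producing features and one producing a per-pixel, per-channel soft gate, and SN-PatchGAN is a stack of spectral-normalized strided convolutions that outputs a dense map of per-patch real/fake scores. The PyTorch sketch below shows both; the activation choices, layer widths, and input channel layout are assumptions for illustration rather than MMEditing's exact code.

```python
import torch
import torch.nn as nn


class GatedConv2d(nn.Module):
    """Gated convolution sketch: a learnable soft mask gates the features."""

    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.feature_conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                                      stride=stride, padding=padding)
        self.gate_conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                                   stride=stride, padding=padding)
        self.activation = nn.ELU(inplace=True)

    def forward(self, x):
        feature = self.activation(self.feature_conv(x))
        # The gate is learned per channel and per spatial location, replacing
        # the hard 0/1 validity mask of vanilla/partial convolution.
        gate = torch.sigmoid(self.gate_conv(x))
        return feature * gate


def sn_patch_discriminator(in_channels=4, base_channels=64):
    """SN-PatchGAN idea: spectral-normalized strided convs, no global pooling,
    so the output is a dense map of per-patch real/fake scores."""
    layers, channels = [], in_channels
    for mult in (1, 2, 4, 4, 4, 4):
        layers += [nn.utils.spectral_norm(
            nn.Conv2d(channels, base_channels * mult, 5, stride=2, padding=2)),
            nn.LeakyReLU(0.2, inplace=True)]
        channels = base_channels * mult
    return nn.Sequential(*layers)


if __name__ == '__main__':
    x = torch.randn(1, 5, 256, 256)  # e.g. image + mask + guidance channel
    print(GatedConv2d(5, 32)(x).shape)         # torch.Size([1, 32, 256, 256])
    print(sn_patch_discriminator(5)(x).shape)  # per-patch score map, 1x256x4x4
```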

Results and models
Places365-Challenge

| Method | Mask Type | Resolution | Train Iters | Test Set | l1 error | PSNR | SSIM | Download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DeepFillv2 | free-form | 256x256 | 100k | Places365-val | 8.635 | 22.398 | 0.815 | model \| log |

CelebA-HQ

| Method | Mask Type | Resolution | Train Iters | Test Set | l1 error | PSNR | SSIM | Download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DeepFillv2 | free-form | 256x256 | 20k | CelebA-val | 5.411 | 25.721 | 0.871 | model \| log |
Citation
@inproceedings{yu2019free,
  title={Free-form image inpainting with gated convolution},
  author={Yu, Jiahui and Lin, Zhe and Yang, Jimei and Shen, Xiaohui and Lu, Xin and Huang, Thomas S},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  pages={4471--4480},
  year={2019}
}
Global&Local (ToG’2017)
Abstract
We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess if it is coherent as a whole, while the local discriminator looks only at a small area centered at the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool both context discriminator networks, which requires it to generate images that are indistinguishable from real ones with regard to overall consistency as well as in detail. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete the image.
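The adversarial setup can be pictured as two discriminator branches feeding one real/fake score: a global branch over the whole completed image and a local branch over a crop centered on the completed region. The PyTorch sketch below illustrates that structure; the layer widths, the 256x256 / 128x128 input sizes, and the fully connected head are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


def conv_stack(in_channels, widths):
    """Stride-2 conv stack shared by both discriminator branches."""
    layers, channels = [], in_channels
    for width in widths:
        layers += [nn.Conv2d(channels, width, 5, stride=2, padding=2),
                   nn.BatchNorm2d(width), nn.ReLU(inplace=True)]
        channels = width
    return nn.Sequential(*layers)


class GlobalLocalDiscriminator(nn.Module):
    """Sketch of the global + local context discriminators.

    The global branch sees the whole completed image, the local branch a crop
    centered on the completed region; their features are concatenated and
    mapped to a single real/fake score.
    """

    def __init__(self):
        super().__init__()
        self.global_branch = conv_stack(3, (64, 128, 256, 512, 512, 512))  # 256 -> 4
        self.local_branch = conv_stack(3, (64, 128, 256, 512, 512))        # 128 -> 4
        self.global_fc = nn.Linear(512 * 4 * 4, 1024)
        self.local_fc = nn.Linear(512 * 4 * 4, 1024)
        self.classifier = nn.Linear(2048, 1)

    def forward(self, completed_image, local_patch):
        g = self.global_fc(self.global_branch(completed_image).flatten(1))
        l = self.local_fc(self.local_branch(local_patch).flatten(1))
        return self.classifier(torch.cat([g, l], dim=1))


if __name__ == '__main__':
    image = torch.randn(2, 3, 256, 256)
    patch = torch.randn(2, 3, 128, 128)  # crop around the filled region
    print(GlobalLocalDiscriminator()(image, patch).shape)  # torch.Size([2, 1])
```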

Results and models
Note that we do not apply the post-processing module in Global&Local for a fair comparison with current deep inpainting methods.
Places365-Challenge

| Method | Mask Type | Resolution | Train Iters | Test Set | l1 error | PSNR | SSIM | Download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Global&Local | square bbox | 256x256 | 500k | Places365-val | 11.164 | 23.152 | 0.862 | model \| log |

CelebA-HQ

| Method | Mask Type | Resolution | Train Iters | Test Set | l1 error | PSNR | SSIM | Download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Global&Local | square bbox | 256x256 | 500k | CelebA-val | 6.678 | 26.780 | 0.904 | model \| log |
Citation
@article{iizuka2017globally,
  title={Globally and locally consistent image completion},
  author={Iizuka, Satoshi and Simo-Serra, Edgar and Ishikawa, Hiroshi},
  journal={ACM Transactions on Graphics (ToG)},
  volume={36},
  number={4},
  pages={1--14},
  year={2017},
  publisher={ACM New York, NY, USA}
}
PConv (ECCV’2018)
Abstract
Existing deep learning based image inpainting methods apply a standard convolutional network over the corrupted image, with convolutional filter responses conditioned on both valid pixels and the substitute values in the masked holes (typically the mean value). This often leads to artifacts such as color discrepancy and blurriness. Post-processing is usually used to reduce such artifacts, but it is expensive and may fail. We propose the use of partial convolutions, where the convolution is masked and renormalized to be conditioned on only valid pixels. We further include a mechanism to automatically generate an updated mask for the next layer as part of the forward pass. Our model outperforms other methods for irregular masks. We show qualitative and quantitative comparisons with other methods to validate our approach.
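The partial convolution rule is compact enough to sketch directly: multiply the input by the mask, convolve, renormalize each output by the fraction of valid pixels under its window, and mark a location as valid for the next layer once any valid pixel contributed to it. The PyTorch sketch below follows that description; bias handling and other details are assumptions drawn from common reference implementations rather than MMEditing's exact code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PartialConv2dSketch(nn.Module):
    """Partial convolution sketch: convolve only over valid pixels,
    renormalize, and update the mask for the next layer."""

    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              stride=stride, padding=padding, bias=False)
        # Fixed all-ones kernel used to count valid input pixels under each window.
        self.register_buffer(
            'mask_kernel', torch.ones(1, 1, kernel_size, kernel_size))
        self.stride, self.padding = stride, padding

    def forward(self, x, mask):
        # mask: 1 for valid pixels, 0 inside the hole (shape N x 1 x H x W).
        with torch.no_grad():
            valid_count = F.conv2d(mask, self.mask_kernel,
                                   stride=self.stride, padding=self.padding)
        out = self.conv(x * mask)
        # Renormalize by the fraction of valid pixels in each window.
        scale = self.mask_kernel.numel() / valid_count.clamp(min=1.0)
        out = out * scale * (valid_count > 0)
        # Mask update: a location becomes valid once any valid pixel was seen.
        new_mask = (valid_count > 0).float()
        return out, new_mask


if __name__ == '__main__':
    layer = PartialConv2dSketch(3, 16)
    image = torch.randn(1, 3, 64, 64)
    mask = (torch.rand(1, 1, 64, 64) > 0.4).float()
    out, new_mask = layer(image, mask)
    print(out.shape, new_mask.shape)  # (1, 16, 64, 64) and (1, 1, 64, 64)
```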

Results and models
Places365-Challenge

| Method | Mask Type | Resolution | Train Iters | Test Set | l1 error | PSNR | SSIM | Download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PConv | free-form | 256x256 | 500k | Places365-val | 8.776 | 22.762 | 0.801 | model \| log |

CelebA-HQ

| Method | Mask Type | Resolution | Train Iters | Test Set | l1 error | PSNR | SSIM | Download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PConv | free-form | 256x256 | 500k | CelebA-val | 5.990 | 25.404 | 0.853 | model \| log |
Citation
@inproceedings{liu2018image,
  title={Image inpainting for irregular holes using partial convolutions},
  author={Liu, Guilin and Reda, Fitsum A and Shih, Kevin J and Wang, Ting-Chun and Tao, Andrew and Catanzaro, Bryan},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={85--100},
  year={2018}
}