Tutorial 1: Customize Datasets

Supported Data Format

Image Super-Resolution

  • SRAnnotationDataset General paired image dataset with an annotation file for image restoration.

  • SRFolderDataset General paired image folder dataset for image restoration.

  • SRFolderGTDataset General ground-truth image folder dataset for image restoration, where the low-quality images are generated in the pipeline.

  • SRFolderRefDataset General paired image folder dataset for reference-based image restoration.

  • SRLmdbDataset General paired image lmdb dataset for image restoration.

  • SRFacialLandmarkDataset Facial image and landmark dataset with an annotation file.
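For annotation-file datasets such as SRAnnotationDataset, each line of the annotation file typically records one sample, e.g. a ground-truth image name plus its shape. As a hedged, stdlib-only sketch (the exact file format, folder names, and dict keys here are assumptions for illustration, not MMEditing's actual parser), such a file could be turned into a list of data-info dicts like this:

```python
def parse_sr_annotations(text, gt_folder, lq_folder):
    """Parse lines of the form '<gt_name> (<h>,<w>,<c>)' into data-info dicts.

    The format and keys are hypothetical; adapt them to your dataset.
    """
    data_infos = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        gt_name, shape = line.split(' ', 1)
        data_infos.append({
            'gt_path': f'{gt_folder}/{gt_name}',
            'lq_path': f'{lq_folder}/{gt_name}',
            'shape': shape,
        })
    return data_infos

ann = "0001.png (480,480,3)\n0002.png (480,480,3)\n"
infos = parse_sr_annotations(ann, 'data/gt', 'data/lq')
# infos[0]['gt_path'] == 'data/gt/0001.png'
```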

Video Super-Resolution

  • SRFolderMultipleGTDataset General dataset for video super resolution, used for recurrent networks.

  • SRREDSDataset REDS dataset for video super resolution.

  • SRREDSMultipleGTDataset REDS dataset for video super resolution for recurrent networks.

  • SRTestMultipleGTDataset Test dataset for video super resolution for recurrent networks.

  • SRVid4Dataset Vid4 dataset for video super resolution.

  • SRVimeo90KDataset Vimeo90K dataset for video super resolution.

  • SRVimeo90KMultipleGTDataset Vimeo90K dataset for video super resolution for recurrent networks.

Video Frame Interpolation

  • VFIVimeo90KDataset Vimeo90K dataset for video frame interpolation.

Matting

  • AdobeComp1kDataset Adobe composition-1k dataset.


Inpainting

  • ImgInpaintingDataset Image inpainting dataset that only uses the image name information from the annotation file.


Generation

  • GenerationPairedDataset General paired image folder dataset for image generation.

  • GenerationUnpairedDataset General unpaired image folder dataset for image generation.

Support New Data Format

You can reorganize new data formats into an existing format.

Alternatively, you can create a new dataset in mmedit/datasets to load the data.

Inheriting from one of the dataset base classes below will make it easier to create a new dataset:

  • BaseSRDataset

  • BaseVFIDataset

  • BaseMattingDataset

  • BaseGenerationDataset

Here is an example of creating a dataset for video frame interpolation:

import os
import os.path as osp

from .base_vfi_dataset import BaseVFIDataset
from .registry import DATASETS

@DATASETS.register_module()
class NewVFIDataset(BaseVFIDataset):
    """Introduce the dataset

    Examples of file structure.

    Args:
        pipeline (list[dict | callable]): A sequence of data transformations.
        folder (str | :obj:`Path`): Path to the folder.
        ann_file (str | :obj:`Path`): Path to the annotation file.
        test_mode (bool): Store `True` when building test dataset.
            Default: `False`.
    """

    def __init__(self, pipeline, folder, ann_file, test_mode=False):
        super().__init__(pipeline, folder, ann_file, test_mode)
        self.data_infos = self.load_annotations()

    def load_annotations(self):
        """Load annotations for the dataset.

        Returns:
            list[dict]: A list of dicts for paired paths and other information.
        """
        data_infos = []
        ...
        return data_infos
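The body of load_annotations is dataset-specific. As a minimal stdlib-only sketch (the triplet folder layout `im1.png`/`im2.png`/`im3.png` and the dict keys are assumptions borrowed from Vimeo90K-style data, not a fixed MMEditing contract), pairing input frames with the target middle frame could look like:

```python
import os.path as osp

def load_vfi_annotations(folder, clip_names):
    """Build data-info dicts for frame interpolation.

    Assumes each clip folder contains im1.png, im2.png, im3.png,
    where im2.png is the middle frame to be interpolated.
    """
    data_infos = []
    for clip in clip_names:
        clip_dir = osp.join(folder, clip)
        data_infos.append({
            'inputs_path': [osp.join(clip_dir, 'im1.png'),
                            osp.join(clip_dir, 'im3.png')],
            'target_path': osp.join(clip_dir, 'im2.png'),
            'key': clip,
        })
    return data_infos

infos = load_vfi_annotations('data/vimeo_triplet', ['00001/0001'])
```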

If you want to create a dataset for a new low-level CV task (e.g. denoising, deraining, defogging, and de-reflection), you can inherit from BaseDataset.

Here is an example of creating a base dataset for denoising:

import copy
import os.path as osp
from abc import ABCMeta, abstractmethod
from collections import defaultdict
from pathlib import Path

from mmcv import scandir
from torch.utils.data import Dataset

from .base_dataset import BaseDataset
from .pipelines import Compose

IMG_EXTENSIONS = ('.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm',
                  '.PPM', '.bmp', '.BMP', '.tif', '.TIF', '.tiff', '.TIFF')

class BaseDnDataset(BaseDataset):
    """Base class for denoising datasets."""

    # If any extra parameter is required, please rewrite the `__init__`
    # def __init__(self, pipeline, new_para, test_mode=False):
    #     super().__init__(pipeline, test_mode)
    #     self.new_para = new_para
    @staticmethod
    def scan_folder(path):
        """Obtain image path list (including sub-folders) from a given folder.

        Args:
            path (str | :obj:`Path`): Folder path.

        Returns:
            list[str]: Image list obtained from the given folder.
        """
        if isinstance(path, (str, Path)):
            path = str(path)
        else:
            raise TypeError("'path' must be a str or a Path object, "
                            f'but received {type(path)}.')

        images = list(scandir(path, suffix=IMG_EXTENSIONS, recursive=True))
        images = [osp.join(path, v) for v in images]
        assert images, f'{path} has no valid image file.'
        return images

    def __getitem__(self, idx):
        """Get item at each call.

        Args:
            idx (int): Index for getting each item.

        Returns:
            dict: The output dict of pipeline.
        """
        results = copy.deepcopy(self.data_infos[idx])
        return self.pipeline(results)

    def evaluate(self, results, logger=None):
        """Evaluate with different metrics.

        Args:
            results (list[tuple]): The output of forward_test() of the model.

        Returns:
            dict: Evaluation results dict.
        """
        if not isinstance(results, list):
            raise TypeError(f'results must be a list, but got {type(results)}')
        assert len(results) == len(self), (
            'The length of results is not equal to the dataset len: '
            f'{len(results)} != {len(self)}')

        results = [res['eval_result'] for res in results]  # a list of dict
        eval_result = defaultdict(list)  # a dict of list

        for res in results:
            for metric, val in res.items():
                eval_result[metric].append(val)
        for metric, val_list in eval_result.items():
            assert len(val_list) == len(self), (
                f'Length of evaluation result of {metric} is {len(val_list)}, '
                f'should be {len(self)}')

        # average the results
        eval_result = {
            metric: sum(values) / len(self)
            for metric, values in eval_result.items()
        }

        return eval_result
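The averaging logic in evaluate() above can be exercised on its own. Here is a self-contained sketch of that step (the metric names PSNR and SSIM are just illustrative sample values), stripped of the dataset-length checks:

```python
from collections import defaultdict

def average_eval_results(results):
    """Average per-sample metric dicts, mirroring the evaluate() logic."""
    eval_result = defaultdict(list)
    for res in results:
        for metric, val in res.items():
            eval_result[metric].append(val)  # collect per-sample values
    # average each metric over all samples
    return {metric: sum(vals) / len(results)
            for metric, vals in eval_result.items()}

per_sample = [{'PSNR': 30.0, 'SSIM': 0.90},
              {'PSNR': 32.0, 'SSIM': 0.94}]
avg = average_eval_results(per_sample)  # avg['PSNR'] == 31.0
```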

You are welcome to submit new dataset classes to MMEditing.

Customize datasets by dataset wrappers

Repeat dataset

We use RepeatDataset as a wrapper to repeat the dataset. For example, suppose the original dataset is Dataset_A; to repeat it, the config looks like the following:

dataset_A_train = dict(
    type='RepeatDataset',
    times=N,
    dataset=dict(  # This is the original config of Dataset_A
        ...
    )
)
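Conceptually, a repeat wrapper just indexes into the wrapped dataset modulo its length, so one epoch appears `times` as long. A minimal stdlib-only sketch of that idea (not MMEditing's actual implementation):

```python
class RepeatDataset:
    """Sketch of a repeat wrapper: the wrapped dataset is traversed
    `times` times per epoch by indexing modulo its length."""

    def __init__(self, dataset, times):
        self.dataset = dataset
        self.times = times

    def __len__(self):
        return self.times * len(self.dataset)

    def __getitem__(self, idx):
        return self.dataset[idx % len(self.dataset)]

wrapped = RepeatDataset(['a', 'b', 'c'], times=2)
# len(wrapped) == 6; wrapped[4] == 'b'
```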