MaskSplit: Self-supervised Meta-learning for Few-shot Semantic Segmentation


Amac M. S., Sencan A., Baran O. B., Ikizler-Cinbis N., Cinbis R. G.

22nd IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, Hawaii, United States of America, 4 - 08 January 2022, pp. 428-438

  • Publication Type: Conference Paper / Full Text
  • DOI: 10.1109/wacv51458.2022.00050
  • City: Hawaii
  • Country: United States Of America
  • Page Numbers: pp. 428-438
  • Keywords: Few-shot, Grouping and Shape, Semi- and Un-supervised Learning, Segmentation, Transfer

Abstract

© 2022 IEEE. Just like other few-shot learning problems, few-shot segmentation aims to minimize the need for manual annotation, which is particularly costly in segmentation tasks. Even though the few-shot setting reduces this cost for novel test classes, there is still a need to annotate the training data. To alleviate this need, we propose a self-supervised training approach for learning few-shot segmentation models. We first use unsupervised saliency estimation to obtain pseudo-masks on images. We then train a simple prototype-based model over different splits of pseudo-masks and augmentations of images. Our extensive experiments show that the proposed approach achieves promising results, highlighting the potential of self-supervised training. To the best of our knowledge, this is the first work that addresses the unsupervised few-shot segmentation problem on natural images.
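To illustrate the kind of prototype-based segmentation the abstract refers to, the sketch below shows the common masked-average-pooling scheme: a class prototype is pooled from support features under a (pseudo-)mask, and query pixels are scored by cosine similarity to that prototype. This is a minimal illustration, not the authors' implementation; the feature shapes, thresholds, and function names are assumptions for the example.

```python
import numpy as np

def masked_average_pooling(features, mask):
    """Pool a class prototype from support features under a binary pseudo-mask.

    features: (C, H, W) feature map; mask: (H, W) binary mask.
    Returns a (C,) prototype vector averaged over masked locations.
    """
    masked = features * mask[None]                      # zero out background
    return masked.sum(axis=(1, 2)) / (mask.sum() + 1e-8)

def cosine_similarity_map(features, prototype):
    """Score each query pixel by cosine similarity to the prototype."""
    norm_f = features / (np.linalg.norm(features, axis=0, keepdims=True) + 1e-8)
    norm_p = prototype / (np.linalg.norm(prototype) + 1e-8)
    # contract over the channel axis -> (H, W) similarity map
    return np.tensordot(norm_p, norm_f, axes=([0], [0]))

# Toy example with random "features" standing in for a CNN backbone's output.
rng = np.random.default_rng(0)
support_feat = rng.normal(size=(8, 4, 4))               # (C, H, W)
support_mask = np.zeros((4, 4))
support_mask[:2, :] = 1                                 # pseudo-mask from saliency
proto = masked_average_pooling(support_feat, support_mask)

query_feat = rng.normal(size=(8, 4, 4))
sim = cosine_similarity_map(query_feat, proto)          # (H, W) in [-1, 1]
pred_mask = (sim > 0).astype(np.uint8)                  # illustrative threshold
```

In the self-supervised setting described above, both the support and query masks would come from unsupervised saliency estimates rather than human annotation, with different splits and augmentations of the same pseudo-mask playing the roles of support and query.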