MaskSplit: Self-supervised Meta-learning for Few-shot Semantic Segmentation



Amac M. S., Sencan A., Baran O. B., Ikizler-Cinbis N., Cinbis R. G.

22nd IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, Hawaii, United States, 4-8 January 2022, pp. 428-438

  • Publication Type: Conference Paper / Full-Text Paper
  • DOI: 10.1109/wacv51458.2022.00050
  • City of Publication: Hawaii
  • Country of Publication: United States
  • Page Numbers: pp. 428-438
  • Keywords: Few-shot, Grouping and Shape, Semi- and Un-supervised Learning, Segmentation, Transfer
  • Affiliated with Middle East Technical University: Yes

Abstract

© 2022 IEEE. Just like other few-shot learning problems, few-shot segmentation aims to minimize the need for manual annotation, which is particularly costly in segmentation tasks. Even though the few-shot setting reduces this cost for novel test classes, there is still a need to annotate the training data. To alleviate this need, we propose a self-supervised training approach for learning few-shot segmentation models. We first use unsupervised saliency estimation to obtain pseudo-masks on images. We then train a simple prototype-based model over different splits of pseudo-masks and augmentations of images. Our extensive experiments show that the proposed approach achieves promising results, highlighting the potential of self-supervised training. To the best of our knowledge, this is the first work that addresses the unsupervised few-shot segmentation problem on natural images.
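
The entry itself contains no code; as a rough, illustrative sketch of the prototype-based formulation the abstract refers to (pool support features under a saliency-derived pseudo-mask into a prototype, then score query locations against it), the snippet below assumes a PyTorch setup with pre-extracted backbone features. All function names, tensor shapes, and the cosine-similarity scoring are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def prototype_segmentation(support_feat, support_mask, query_feat):
    """Minimal prototype-based few-shot segmentation step (illustrative only).

    support_feat: (C, H, W) support-image features from a frozen backbone
    support_mask: (H, W) binary pseudo-mask (e.g. from unsupervised saliency)
    query_feat:   (C, H, W) query-image features
    Returns an (H, W) foreground score map for the query.
    """
    C, H, W = support_feat.shape

    # Masked average pooling: average support features inside the pseudo-mask
    # to obtain a single foreground prototype vector.
    mask = support_mask.float().unsqueeze(0)                                  # (1, H, W)
    prototype = (support_feat * mask).sum(dim=(1, 2)) / (mask.sum() + 1e-6)   # (C,)

    # Cosine similarity between the prototype and every query location.
    query = query_feat.permute(1, 2, 0).reshape(-1, C)                        # (H*W, C)
    scores = F.cosine_similarity(query, prototype.unsqueeze(0), dim=1)
    return scores.reshape(H, W)


# Toy usage with random tensors standing in for backbone features and a pseudo-mask.
if __name__ == "__main__":
    feat_s = torch.randn(256, 32, 32)
    feat_q = torch.randn(256, 32, 32)
    pseudo_mask = torch.rand(32, 32) > 0.5
    fg_scores = prototype_segmentation(feat_s, pseudo_mask, feat_q)
    print(fg_scores.shape)  # torch.Size([32, 32])
```

In the self-supervised setting described above, the "support" and "query" would come from different splits of a pseudo-mask and augmented views of the same image rather than from annotated class examples.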