Journal of Visual Communication and Image Representation, vol. 100, 2024 (SCI-Expanded)
Recent self-supervised learning methods, in which the instance discrimination task is a fundamental way of pretraining convolutional neural networks (CNNs), excel in transfer learning performance. Although the instance discrimination task is well suited to pretraining for classification owing to its image-level learning, its lack of dense representation learning makes it sub-optimal for localization tasks such as object detection. In this paper, we aim to mitigate this shortcoming of the instance discrimination task by extending it to jointly learn dense representations alongside image-level representations. We add a segmentation branch, parallel to the image-level learning, that predicts class-agnostic masks, enhancing the location-awareness of the representations. We show the effectiveness of our pretraining approach on localization tasks by transferring the learned representations to object detection and segmentation, obtaining relative improvements of up to 1.7% AP on PASCAL VOC object detection, 0.8% AP on COCO object detection, 0.8% AP on COCO instance segmentation, and 3.6% mIoU on PASCAL VOC semantic segmentation.
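To make the two-branch design concrete, the sketch below shows how a shared backbone feature map can feed both an image-level projection head (as in standard instance-discrimination pretraining) and a parallel dense branch that emits class-agnostic mask logits at every spatial location. All shapes, dimensions, and weight names here are illustrative assumptions, not the paper's actual architecture; a NumPy forward pass stands in for the CNN heads.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: C-channel backbone features over an HxW grid,
# D-dimensional image embedding, K class-agnostic masks.
C, H, W, D, K = 64, 7, 7, 32, 8

feats = rng.standard_normal((C, H, W))  # shared backbone output

# Image-level branch: global average pool, then a linear projection,
# producing the representation used by the instance discrimination loss.
W_proj = rng.standard_normal((D, C)) * 0.01
pooled = feats.mean(axis=(1, 2))        # (C,)
embedding = W_proj @ pooled             # (D,) image-level representation

# Dense branch: a 1x1 convolution (a per-location channel-wise linear
# map) predicting K class-agnostic mask logits at each spatial position.
W_mask = rng.standard_normal((K, C)) * 0.01
mask_logits = np.einsum('kc,chw->khw', W_mask, feats)  # (K, H, W)

print(embedding.shape, mask_logits.shape)
```

Because both branches read the same feature map, gradients from the mask-prediction objective flow back into the backbone, which is what injects location-awareness into otherwise image-level representations.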