33rd IEEE Conference on Signal Processing and Communications Applications, SIU 2025, İstanbul, Turkey, 25 - 28 June 2025
Multi-task learning is an approach that aims to use resources more efficiently at inference time by learning multiple tasks concurrently with a single model. Rather than training separate models for different tasks, it seeks to improve generalization and performance through shared feature extraction. However, when working with disjoint datasets, completing the missing labels for each task becomes a costly and time-consuming process. In this study, we add a classification task to a segmentation model trained in an unsupervised manner, and we compare the simultaneous use of independent datasets across different multi-task learning methods. The proposed approach preserves the original labeling structure of the datasets and eliminates the need for multi-label annotation, offering a scalable solution.
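One common way to train on disjoint datasets without re-annotating them, as the abstract describes, is to mask each task's loss so it is computed only on samples that actually carry that task's label. The sketch below is a hypothetical, framework-free illustration of this idea (the function name `multitask_loss` and the per-sample loss values are illustrative assumptions, not taken from the paper):

```python
# Hypothetical sketch: per-task loss masking for disjoint datasets.
# Each sample comes from one dataset and therefore carries a loss for
# only one task; the combined objective averages each task's loss over
# just the samples that have that task's label.

def multitask_loss(samples):
    """samples: list of dicts with optional 'seg_loss' / 'cls_loss' keys."""
    totals = {"seg_loss": 0.0, "cls_loss": 0.0}
    counts = {"seg_loss": 0, "cls_loss": 0}
    for s in samples:
        for task in totals:
            if task in s:           # sample has a label for this task
                totals[task] += s[task]
                counts[task] += 1
    # Average per task so the larger dataset does not dominate the loss.
    return sum(totals[t] / counts[t] for t in totals if counts[t])

# A mixed batch drawn from two independently labeled datasets
# (loss values here are made-up placeholders):
batch = [
    {"seg_loss": 0.8},   # sample from the segmentation dataset
    {"cls_loss": 0.4},   # sample from the classification dataset
    {"seg_loss": 0.6},
]
combined = multitask_loss(batch)   # seg avg 0.7 + cls avg 0.4 ≈ 1.1
```

Because each dataset keeps its original labels, no sample ever needs annotations for both tasks, which is the scalability property the abstract emphasizes.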