Task guided representation learning using compositional models for zero-shot domain adaptation


Liu S., Ozay M.

Neural Networks, vol.165, pp.370-380, 2023 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 165
  • Publication Date: 2023
  • DOI: 10.1016/j.neunet.2023.05.030
  • Journal Name: Neural Networks
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, Applied Science & Technology Source, BIOSIS, Biotechnology Research Abstracts, Communication Abstracts, Compendex, Computer & Applied Sciences, EMBASE, INSPEC, MEDLINE, PsycINFO, zbMATH
  • Page Numbers: pp.370-380
  • Keywords: Domain adaptation, Representation learning, Zero-shot
  • Middle East Technical University Affiliated: Yes

Abstract

Zero-shot domain adaptation (ZDA) methods aim to transfer knowledge about a task learned in a source domain to a target domain when task-relevant data from the target domain are not available. In this work, we address learning feature representations that are invariant across and shared among different domains, taking task characteristics into account for ZDA. To this end, we propose a method for task-guided ZDA (TG-ZDA) that employs multi-branch deep neural networks to learn feature representations by exploiting their domain invariance and shareability properties. The proposed TG-ZDA models can be trained end-to-end, without requiring synthetic tasks or data generated from estimated representations of target domains. TG-ZDA was evaluated on benchmark ZDA tasks over image classification datasets. Experimental results show that TG-ZDA outperforms state-of-the-art ZDA methods across different domains and tasks.
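The abstract does not specify the TG-ZDA architecture in detail, but the core multi-branch idea it describes can be sketched roughly as a shared (domain-invariant) trunk feeding per-domain branches. The following is a minimal NumPy illustration; all names, shapes, and weights are hypothetical and chosen only to show the structure, not the authors' actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical two-branch sketch: one shared trunk whose output is
# intended to be domain-invariant, plus one branch per domain.
W_shared = rng.standard_normal((16, 8)) * 0.1   # shared trunk weights
W_branch = {                                    # domain-specific branch weights
    "source": rng.standard_normal((8, 4)) * 0.1,
    "target": rng.standard_normal((8, 4)) * 0.1,
}

def features(x, domain):
    """Shared representation followed by a domain-specific branch."""
    h = relu(x @ W_shared)          # shared / domain-invariant features
    return relu(h @ W_branch[domain])

x = rng.standard_normal((2, 16))    # a toy batch of 2 inputs
f_src = features(x, "source")
f_tgt = features(x, "target")
print(f_src.shape, f_tgt.shape)     # both branches emit (2, 4) features
```

In an actual ZDA setting the shared trunk would be trained so that its features transfer to the target domain without task-relevant target data; this sketch only shows the branching structure end-to-end training would operate on.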