Towards Zero-shot Sign Language Recognition

Bilge Y. C., CİNBİŞ R. G., Ikizler-Cinbis N.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022 (Peer-Reviewed Journal)

  • Publication Type: Article
  • Publication Date: 2022
  • DOI: 10.1109/TPAMI.2022.3143074
  • Journal Name: IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Journal Indexes: Science Citation Index Expanded, Scopus, Academic Search Premier, PASCAL, ABI/INFORM, Aerospace Database, Applied Science & Technology Source, Business Source Elite, Business Source Premier, Communication Abstracts, Compendex, Computer & Applied Sciences, EMBASE, INSPEC, MEDLINE, Metadex, zbMATH, Civil Engineering Abstracts
  • Keywords: Assistive technologies, Benchmark testing, Gesture recognition, Hidden Markov models, Semantics, Sign language recognition, Videos, Visualization, zero-shot learning

Abstract

This paper tackles the problem of zero-shot sign language recognition (ZSSLR), where the goal is to leverage models learned over the seen sign classes to recognize instances of unseen sign classes. In this context, readily available textual sign descriptions and attributes collected from sign language dictionaries are utilized as semantic class representations for knowledge transfer. For this novel problem setup, we introduce three benchmark datasets with accompanying textual and attribute descriptions to analyze the problem in detail. Our proposed approach builds spatiotemporal models of body and hand regions. By leveraging the descriptive text and attribute embeddings along with these visual representations within a zero-shot learning framework, we show that textual and attribute-based class definitions can provide effective knowledge for the recognition of previously unseen sign classes. We additionally introduce techniques to analyze the influence of binary attributes in correct and incorrect zero-shot predictions. We anticipate that the introduced approaches and the accompanying datasets will provide a basis for further exploration of zero-shot learning in sign language recognition.