Learning Multi-Modal Nonlinear Embeddings: Performance Bounds and an Algorithm


Kaya S., VURAL E.

IEEE TRANSACTIONS ON IMAGE PROCESSING, vol.30, pp.4384-4394, 2021 (Journal Indexed in SCI)

  • Publication Type: Article
  • Volume: 30
  • Publication Date: 2021
  • DOI: 10.1109/TIP.2021.3071688
  • Title of Journal: IEEE TRANSACTIONS ON IMAGE PROCESSING
  • Page Numbers: pp. 4384-4394

Abstract

While many approaches exist in the literature to learn low-dimensional representations for data collections in multiple modalities, the generalizability of multi-modal nonlinear embeddings to previously unseen data is a rather overlooked subject. In this work, we first present a theoretical analysis of learning multi-modal nonlinear embeddings in a supervised setting. Our performance bounds indicate that for successful generalization in multi-modal classification and retrieval problems, the regularity of the interpolation functions extending the embedding to the whole data space is as important as the between-class separation and cross-modal alignment criteria. We then propose a multi-modal nonlinear representation learning algorithm that is motivated by these theoretical findings, where the embeddings of the training samples are optimized jointly with the Lipschitz regularity of the interpolators. Experimental comparison to recent multi-modal and single-modal learning algorithms suggests that the proposed method yields promising performance in multi-modal image classification and cross-modal image-text retrieval applications.
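To make the abstract's main idea concrete, the sketch below illustrates the kind of objective it describes: embeddings of two modalities are scored by cross-modal alignment, between-class separation, and an empirical Lipschitz constant of the map from input features to embeddings. All function names, the toy data, and the exact form of the objective are illustrative assumptions for this sketch, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-modality training set: 20 samples per modality, 5-D features, 2 classes.
# (Purely synthetic; stands in for e.g. image and text features.)
X1 = rng.normal(size=(20, 5))   # modality 1 features
X2 = rng.normal(size=(20, 5))   # modality 2 features
labels = np.repeat([0, 1], 10)

# Candidate 2-D embeddings of the training samples, one set per modality.
Y1 = rng.normal(size=(20, 2))
Y2 = rng.normal(size=(20, 2))

def cross_modal_alignment(Y1, Y2, labels):
    """Mean squared distance between same-class cross-modal embedding pairs (to minimize)."""
    same = labels[:, None] == labels[None, :]
    d2 = ((Y1[:, None, :] - Y2[None, :, :]) ** 2).sum(-1)
    return d2[same].mean()

def between_class_separation(Y, labels):
    """Mean distance between embeddings of different-class samples (to maximize)."""
    diff = labels[:, None] != labels[None, :]
    d = np.sqrt(((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1))
    return d[diff].mean()

def lipschitz_estimate(X, Y):
    """Empirical Lipschitz constant of the map X -> Y over all training pairs."""
    dX = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    dY = np.sqrt(((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1))
    mask = dX > 1e-12                 # exclude zero-distance (diagonal) pairs
    return (dY[mask] / dX[mask]).max()

# Objective in the spirit of the abstract: align the modalities, separate the
# classes, and keep the interpolators' Lipschitz constants small.
objective = (cross_modal_alignment(Y1, Y2, labels)
             - between_class_separation(Y1, labels)
             - between_class_separation(Y2, labels)
             + lipschitz_estimate(X1, Y1)
             + lipschitz_estimate(X2, Y2))
print(f"toy objective value: {objective:.3f}")
```

In the paper's setting, such an objective would be minimized jointly over the training embeddings, so that the resulting interpolation functions extend smoothly to unseen test samples; this sketch only evaluates the terms for fixed random embeddings.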