Multi-modal Egocentric Activity Recognition using Audio-Visual Features


Creative Commons License

Arabacı M. A., Özkan F., Sürer E., Jancovic P., Temizel A.

Other, pp.1-10, 2018

  • Publication Type: Other Publications / Other
  • Publication Date: 2018
  • Page Numbers: pp.1-10
  • Affiliated with Orta Doğu Teknik Üniversitesi: Yes

Abstract

Egocentric activity recognition in first-person videos is of increasing importance for a variety of applications such as lifelogging, video summarization, assisted living and activity tracking. Existing methods for this task interpret various sensor information using pre-determined weights for each feature. In this work, we propose a new framework for the egocentric activity recognition problem that combines audio-visual features with multi-kernel learning (MKL) and multi-kernel boosting (MKBoost). For that purpose, grid optical-flow, virtual-inertia, log-covariance and cuboid features are first extracted from the video. The audio signal is characterized using a "supervector", obtained by Gaussian mixture modelling of frame-level features followed by maximum a-posteriori (MAP) adaptation. The extracted multi-modal features are then adaptively fused by MKL classifiers, in which feature and kernel selection/weighting and the recognition task are performed together. The proposed framework was evaluated on a number of egocentric datasets. The results showed that using multi-modal features with MKL outperforms the existing methods.
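The core idea of the fusion step — building one kernel per modality and combining them with per-kernel weights before classification — can be illustrated with a minimal sketch. This is not the authors' implementation: the toy "visual" and "audio" features, the RBF kernels, the fixed weights `betas` (which MKL would learn jointly with the classifier), and the kernel ridge classifier standing in for an SVM are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances mapped through a Gaussian (RBF) kernel.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_kernel(feats_a, feats_b, betas, gamma=1.0):
    # Convex combination of per-modality kernels: K = sum_m beta_m * K_m.
    Ks = [rbf_kernel(Xa, Xb, gamma) for Xa, Xb in zip(feats_a, feats_b)]
    return sum(b * K for b, K in zip(betas, Ks))

# Toy data: 6 clips with a 4-dim "visual" and a 3-dim "audio" feature, 2 classes.
rng = np.random.default_rng(0)
y = np.array([-1, -1, -1, 1, 1, 1], dtype=float)
visual = rng.normal(0, 0.3, (6, 4)) + y[:, None]
audio = rng.normal(0, 0.3, (6, 3)) + y[:, None]

betas = [0.6, 0.4]  # hypothetical modality weights; MKL learns these from data
K = combined_kernel([visual, audio], [visual, audio], betas)

# Kernel ridge classifier as a simple stand-in for the SVM trained on K.
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(y)), y)
pred = np.sign(K @ alpha)
acc = (pred == y).mean()  # training accuracy on the toy data
```

Because the combined kernel is just a weighted sum of positive semi-definite matrices, it remains a valid kernel, which is what lets MKL optimize the weights and the classifier in a single training problem.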