Bimodal automatic speech segmentation based on audio and visual information fusion


Akdemir E., ÇİLOĞLU T.

SPEECH COMMUNICATION, vol. 53, no. 6, pp. 889-902, 2011 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 53 Issue: 6
  • Publication Date: 2011
  • DOI: 10.1016/j.specom.2011.03.001
  • Journal Name: SPEECH COMMUNICATION
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp. 889-902
  • Middle East Technical University Affiliated: Yes

Abstract

Bimodal automatic speech segmentation using visual information together with audio data is introduced. The accuracy of automatic segmentation directly affects the quality of speech processing systems that use the segmented database. Combining audio and visual data yields a lower average absolute boundary error between the manual and automatic segmentation results. The information from the two modalities is fused at the feature level and used in an HMM-based speech segmentation system. A Turkish audiovisual speech database was prepared and used in the experiments. The average absolute boundary error decreases by up to 18% when different audiovisual feature vectors are used. The benefits of incorporating visual information are discussed for different phoneme boundary types, since each audiovisual feature vector performs differently at different types of phoneme boundaries. Using audiovisual feature vectors selectively for different boundary classes decreases the average absolute boundary error by approximately 25%. The visual data is collected with an ordinary webcam, which makes the proposed method very convenient to use in practice. (C) 2011 Elsevier B.V. All rights reserved.
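The abstract mentions two concrete mechanics: fusing the two modalities at the feature level, and scoring segmentation by the average absolute boundary error against manual labels. The sketch below is a minimal illustration of both, not the authors' implementation; the function names (fuse_features, average_absolute_boundary_error), the feature dimensions, and all numbers are hypothetical placeholders, and it assumes the visual features have already been interpolated to the audio frame rate.

```python
import numpy as np

def fuse_features(audio_feats: np.ndarray, visual_feats: np.ndarray) -> np.ndarray:
    """Feature-level fusion: concatenate per-frame audio and visual vectors.

    audio_feats:  (T, Da) array, e.g. MFCCs per frame.
    visual_feats: (T, Dv) array, e.g. lip-region features per frame,
                  already resampled to the audio frame rate.
    Returns a (T, Da + Dv) audiovisual feature matrix suitable as the
    observation sequence of an HMM-based segmenter.
    """
    assert audio_feats.shape[0] == visual_feats.shape[0], "frame counts must match"
    return np.hstack([audio_feats, visual_feats])

def average_absolute_boundary_error(manual: np.ndarray, automatic: np.ndarray) -> float:
    """Mean absolute difference between manually and automatically placed
    phoneme boundary times (both in the same units, e.g. milliseconds)."""
    manual, automatic = np.asarray(manual), np.asarray(automatic)
    assert manual.shape == automatic.shape, "boundary lists must align one-to-one"
    return float(np.mean(np.abs(manual - automatic)))

# Toy example: 100 frames of 13-dim audio features fused with 4 visual features.
mfcc = np.random.randn(100, 13)   # placeholder audio features
lips = np.random.randn(100, 4)    # placeholder visual features (upsampled)
av = fuse_features(mfcc, lips)    # shape (100, 17)

manual_boundaries = np.array([120.0, 250.0, 390.0])  # ms, hand-labelled (made up)
auto_boundaries = np.array([112.0, 261.0, 384.0])    # ms, from forced alignment (made up)
print(average_absolute_boundary_error(manual_boundaries, auto_boundaries))  # ~8.33 ms
```

Selective use per boundary class, as described in the abstract, would then amount to choosing which fused feature set to trust for each phoneme boundary type when reading off the alignment.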