Bimodal automatic speech segmentation based on audio and visual information fusion


Akdemir E., ÇİLOĞLU T.

SPEECH COMMUNICATION, vol.53, no.6, pp.889-902, 2011 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 53 Issue: 6
  • Publication Date: 2011
  • DOI Number: 10.1016/j.specom.2011.03.001
  • Journal Name: SPEECH COMMUNICATION
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp.889-902
  • Middle East Technical University Affiliated: Yes

Abstract

Bimodal automatic speech segmentation, which uses visual information together with audio data, is introduced. The accuracy of automatic segmentation directly affects the quality of speech processing systems that use the segmented database. Combining audio and visual data yields a lower average absolute boundary error between the manual and automatic segmentation results. The information from the two modalities is fused at the feature level and used in an HMM-based speech segmentation system. A Turkish audiovisual speech database was prepared and used in the experiments. The average absolute boundary error decreases by up to 18% with different audiovisual feature vectors. The benefits of incorporating visual information are discussed for different phoneme boundary types; each audiovisual feature vector performs differently at different types of phoneme boundaries. By using audiovisual feature vectors selectively for different boundary classes, the average absolute boundary error decreases by approximately 25%. Visual data are collected with an ordinary webcam, which makes the proposed method very convenient to use in practice. (C) 2011 Elsevier B.V. All rights reserved.
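
For illustration, below is a minimal Python sketch of the feature-level fusion idea described in the abstract. It is not the paper's implementation: MFCCs (via librosa) stand in for the audio features, and a generic per-video-frame vector stands in for the webcam-derived visual features; the function name, feature choices, and frame rates are illustrative assumptions. The visual stream is interpolated up to the audio frame rate and concatenated frame by frame into a single observation vector that an HMM-based segmenter could consume.

```python
import numpy as np
import librosa

def fuse_features(y, sr, visual_feats, video_fps=25.0,
                  hop_length=160, n_mfcc=13):
    """Feature-level fusion sketch: one concatenated audio+visual
    observation vector per audio frame, suitable as input to an
    HMM-based segmenter.

    visual_feats: (n_video_frames, d_v) array, one row per video frame.
    """
    # Audio features at a 10 ms frame rate (hop_length / sr = 0.01 s).
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                hop_length=hop_length).T  # (n_audio, n_mfcc)

    # A webcam stream is much slower (~25 fps), so linearly interpolate
    # each visual dimension up to the audio frame rate to align the streams.
    t_audio = np.arange(mfcc.shape[0]) * hop_length / sr
    t_video = np.arange(visual_feats.shape[0]) / video_fps
    visual_up = np.column_stack([
        np.interp(t_audio, t_video, visual_feats[:, d])
        for d in range(visual_feats.shape[1])
    ])

    # Feature-level fusion: concatenate the two streams frame by frame.
    return np.hstack([mfcc, visual_up])

# Usage with synthetic stand-in data (illustrative only):
sr = 16000
y = np.random.randn(sr)          # 1 s of noise in place of real speech
lip = np.random.randn(25, 8)     # 25 video frames of 8-dim lip features
obs = fuse_features(y, sr, lip)
print(obs.shape)                 # (n_audio_frames, 13 + 8)
```

The resulting observation sequence could then be used for HMM-based forced alignment; the paper's selective use of different audiovisual feature vectors per boundary class would amount to choosing a different fused representation depending on the phoneme boundary type being refined.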