Generating expressive summaries for speech and musical audio using self-similarity clues

Creative Commons License

Sert M., Baykal B., Yazici A.

IEEE International Conference on Multimedia and Expo (ICME 2006), Toronto, Canada, 9 - 12 July 2006, pp.941-944

  • Publication Type: Conference Paper / Full Text
  • Doi Number: 10.1109/icme.2006.262675
  • City: Toronto
  • Country: Canada
  • Page Numbers: 941-944
  • Middle East Technical University Affiliated: Yes


We present a novel algorithm for the structural analysis of audio that detects repetitive patterns suitable for content-based audio information retrieval systems, since repetitive patterns can provide valuable information about the content of audio, such as a chorus or a concept. The Audio Spectrum Flatness (ASF) feature of the MPEG-7 standard, although less widely studied than other feature types, is utilized and evaluated as the underlying feature set. Expressive summaries are chosen as the longest patterns by the k-means clustering algorithm. The proposed approach is evaluated on a test bed of popular song and speech clips using the ASF feature. The well-known Mel Frequency Cepstral Coefficients (MFCCs) are also considered in the experiments for the evaluation of features. Experiments show that all repetitive patterns and their locations are obtained with accuracies of 93% and 78% for music and speech, respectively.
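The self-similarity analysis underlying this approach can be illustrated with a minimal sketch: given a sequence of per-frame feature vectors (such as ASF or MFCC frames), a frame-to-frame similarity matrix is computed, in which a repeated section appears as a bright off-diagonal stripe. The code below is an illustrative toy example with synthetic feature vectors, not the authors' implementation; the cosine-similarity choice and the A-B-A sequence are assumptions made for demonstration.

```python
import numpy as np

def self_similarity_matrix(features):
    """Cosine self-similarity between all pairs of frame feature vectors.

    features: (n_frames, n_dims) array of per-frame descriptors,
    e.g. MFCC or ASF vectors. Returns an (n_frames, n_frames) matrix.
    """
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.maximum(norms, 1e-12)  # avoid division by zero
    return unit @ unit.T

# Toy "song" whose feature sequence repeats a section (A B A structure).
rng = np.random.default_rng(0)
a = rng.normal(size=(20, 12))   # section A: 20 frames of 12-dim features
b = rng.normal(size=(20, 12))   # section B
seq = np.vstack([a, b, a])      # A B A: frames 40..59 repeat frames 0..19
S = self_similarity_matrix(seq)

# The repetition shows up as an off-diagonal stripe 40 frames above the
# main diagonal; its mean similarity is high for an exact repeat.
stripe = float(np.mean([S[i, i + 40] for i in range(20)]))
print(round(stripe, 3))
```

In practice, the stripe detection would be followed by pattern-length estimation and clustering (the paper uses k-means to select the longest patterns as summaries); this sketch only shows how repetition becomes visible in the similarity matrix.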