Automatic Semantic Content Extraction in Videos Using a Fuzzy Ontology and Rule-Based Model


Yildirim Y., Yazici A., Yilmaz T.

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, vol. 25, no. 1, pp. 47-61, 2013 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 25 Issue: 1
  • Publication Date: 2013
  • DOI: 10.1109/tkde.2011.189
  • Journal Name: IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp. 47-61
  • Keywords: Semantic content extraction, video content modeling, fuzziness, ontology, REPRESENTATION, KNOWLEDGE, DESIGN
  • Middle East Technical University (METU) Affiliated: Yes

Abstract

The recent increase in the use of video-based applications has revealed the need for extracting the content in videos. Raw data and low-level features alone are not sufficient to fulfill the user's needs; a deeper understanding of the content at the semantic level is required. Currently, the gap between low-level representative features and high-level semantic content is bridged by manual techniques, which are inefficient, subjective, time-consuming, and limit querying capabilities. Here, we propose a semantic content extraction system that allows the user to query and retrieve objects, events, and concepts that are extracted automatically. We introduce an ontology-based fuzzy video semantic content model that uses spatial/temporal relations in event and concept definitions. This metaontology definition provides a wide-domain applicable rule construction standard that allows the user to construct an ontology for a given domain. In addition to domain ontologies, we use additional rule definitions (without using ontology) to lower the cost of spatial relation computation and to define some complex situations more effectively. The proposed framework has been fully implemented and tested on three different domains. We have obtained satisfactory precision and recall rates for object, event, and concept extraction.
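
To make the idea of fuzzy spatial relations feeding rule-style event definitions concrete, below is a minimal illustrative sketch, not taken from the paper: bounding boxes of two detected objects are mapped to membership degrees for hypothetical relations such as "above" and "near", and a composite event degree is obtained with the min t-norm. All class, function, and parameter names (Box, fuzzy_above, fuzzy_near, event_membership, softness, scale) are assumptions made for illustration; the actual relations, membership functions, and event rules come from the domain ontologies and rule definitions described in the paper.

```python
# Illustrative sketch only: fuzzy spatial relations between detected objects
# combined into a rule-style event membership degree.
from dataclasses import dataclass


@dataclass
class Box:
    """Axis-aligned bounding box of a detected object (image coordinates)."""
    x: float  # left edge
    y: float  # top edge (y grows downward)
    w: float  # width
    h: float  # height

    @property
    def cx(self) -> float:
        return self.x + self.w / 2.0

    @property
    def cy(self) -> float:
        return self.y + self.h / 2.0


def fuzzy_above(a: Box, b: Box, softness: float = 50.0) -> float:
    """Degree in [0, 1] to which object a lies above object b.

    Uses a simple linear ramp over the vertical gap between a's bottom edge
    and b's top edge; a real system could plug in any membership function.
    """
    gap = b.y - (a.y + a.h)
    return max(0.0, min(1.0, gap / softness))


def fuzzy_near(a: Box, b: Box, scale: float = 200.0) -> float:
    """Degree in [0, 1] to which objects a and b are close to each other."""
    dist = ((a.cx - b.cx) ** 2 + (a.cy - b.cy) ** 2) ** 0.5
    return max(0.0, 1.0 - dist / scale)


def event_membership(obj1: Box, obj2: Box) -> float:
    """Membership of a hypothetical composite event 'obj1 hovers near obj2'.

    The fuzzy relations are combined with the min t-norm; an event instance
    would be asserted when the degree exceeds a chosen threshold.
    """
    return min(fuzzy_above(obj1, obj2), fuzzy_near(obj1, obj2))


if __name__ == "__main__":
    ball = Box(x=100, y=40, w=20, h=20)
    player = Box(x=110, y=120, w=40, h=80)
    print(f"event degree: {event_membership(ball, player):.2f}")
```

In this sketch the thresholding and the choice of t-norm stand in for the richer ontology- and rule-based combination mechanism the abstract refers to; the point is only to show how crisp detections can yield graded event memberships rather than binary decisions.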