Natural language querying for video databases


Erozel G., Cicekli N. K., Cicekli I.

INFORMATION SCIENCES, vol.178, no.12, pp.2534-2552, 2008 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 178 Issue: 12
  • Publication Date: 2008
  • DOI Number: 10.1016/j.ins.2008.02.001
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp.2534-2552
  • Keywords: natural language querying, content-based querying in video databases, link parser, information extraction, conceptual ontology, SEMANTIC SIMILARITY, SYSTEM, IMPLEMENTATION, DESIGN
  • Middle East Technical University Affiliated: Yes


Video databases have become popular in various areas due to recent advances in technology. Video archive systems need user-friendly interfaces to retrieve video frames. This paper describes a user interface to a video database system based on natural language processing (NLP). The video database is built on a content-based spatio-temporal video data model. The data model focuses on semantic content, which includes objects, activities, and spatial properties of objects. Spatio-temporal relationships between video objects, as well as trajectories of moving objects, can be queried with this data model. In this video database system, a natural language interface enables flexible querying. Queries, given as English sentences, are parsed using a link parser. The semantic representations of the queries are extracted from their syntactic structures using information extraction techniques. These semantic representations are then used to call the relevant parts of the underlying video database system and return the query results. With the help of the conceptual ontology module, the database returns not only exact matches but also similar objects and activities. This module is implemented using a distance-based method of semantic similarity search on the domain-independent semantic ontology, WordNet. (C) 2008 Elsevier Inc. All rights reserved.
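The conceptual ontology module's distance-based similarity search can be sketched roughly as follows. This is a minimal, self-contained illustration, not the paper's implementation: it uses a tiny hand-built is-a hierarchy in place of WordNet, and the 1/(1 + d) scoring and the 0.3 threshold are illustrative assumptions.

```python
# Toy is-a hierarchy standing in for WordNet (child -> parent links).
# The concepts here are invented for illustration.
PARENT = {
    "car": "vehicle", "truck": "vehicle", "vehicle": "object",
    "person": "object", "driver": "person",
}

def path_to_root(word):
    """List of concepts from `word` up to the ontology root."""
    path = [word]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    return path

def similarity(w1, w2):
    """Distance-based similarity: 1 / (1 + shortest is-a path length)."""
    p1, p2 = path_to_root(w1), path_to_root(w2)
    common = set(p1) & set(p2)
    if not common:
        return 0.0
    # Shortest path through any common ancestor.
    d = min(p1.index(c) + p2.index(c) for c in common)
    return 1.0 / (1.0 + d)

def expand_query(term, vocabulary, threshold=0.3):
    """Return database terms similar enough to the query term, so that
    similar objects/activities are retrieved, not only exact matches."""
    return [w for w in vocabulary if similarity(term, w) >= threshold]
```

For example, a query for "car" would also retrieve "truck" (a sibling under "vehicle", distance 2, similarity 1/3), while "person" falls below the threshold.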