Supervised approaches for explicit search result diversification

Yigit-Sert S., Altıngövde İ. S., Macdonald C., Ounis I., Ulusoy Ö.

INFORMATION PROCESSING & MANAGEMENT, vol.57, no.6, 2020 (Peer-Reviewed Journal)

  • Publication Type: Article
  • Volume: 57 Issue: 6
  • Publication Date: 2020
  • Doi Number: 10.1016/j.ipm.2020.102356
  • Journal Indexes: Science Citation Index Expanded, Social Sciences Citation Index, Scopus, FRANCIS, ABI/INFORM, Applied Science & Technology Source, Business Source Elite, Business Source Premier, Compendex, Computer & Applied Sciences, EBSCO Education Source, Education Abstracts, Information Science and Technology Abstracts, INSPEC, Library and Information Science Abstracts, Library Literature and Information Science, Linguistics & Language Behavior Abstracts, MLA - Modern Language Association Database, zbMATH, Library, Information Science & Technology Abstracts (LISTA)
  • Keywords: Explicit diversification, Supervised learning, Query performance predictors, Aspect importance


Diversification of web search results aims to promote documents with diverse content (i.e., covering different aspects of a query) to the top-ranked positions, to satisfy more users, enhance fairness and reduce bias. In this work, we focus on explicit diversification methods, which assume that the query aspects are known at diversification time, and leverage supervised learning methods to improve their performance in three different frameworks with different features and goals. First, in the LTRDiv framework, we focus on applying typical learning-to-rank (LTR) algorithms to obtain a ranking where each top-ranked document covers as many aspects as possible. We argue that such rankings optimize various diversification metrics (under certain assumptions), and hence, are likely to achieve diversity in practice. Second, in the AspectRanker framework, we apply LTR for ranking the aspects of a query with the goal of more accurately setting the aspect importance values for diversification. As features, we exploit several pre- and post-retrieval query performance predictors (QPPs) to estimate how well a given aspect is covered among the candidate documents. Finally, in the LmDiv framework, we cast the diversification problem into an alternative fusion task, namely, the supervised merging of rankings per query aspect. We again use QPPs computed over the candidate set for each aspect, and optimize an objective function that is tailored for the diversification goal. We conduct thorough comparative experiments using both the basic systems (based on the well-known BM25 matching function) and the best-performing systems (with more sophisticated retrieval methods) from previous TREC campaigns. Our findings reveal that the proposed frameworks, especially AspectRanker and LmDiv, outperform both non-diversified rankings and two strong diversification baselines (i.e., xQuAD and its variant) in terms of various effectiveness metrics.
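
To illustrate the kind of explicit diversification baseline the abstract refers to, the following is a minimal sketch of xQuAD-style greedy re-ranking. It is not the paper's code; the documents, relevance scores, and aspect-coverage probabilities are toy values chosen for illustration, and the interpolation parameter `lam` plays the role of xQuAD's relevance/diversity trade-off.

```python
# Hedged sketch of xQuAD-style explicit diversification (the baseline named
# in the abstract). All scores, aspects and coverage probabilities below are
# illustrative toy values, not taken from the paper.

def xquad(candidates, rel, aspect_weight, cov, k, lam=0.5):
    """Greedily select k documents, trading off relevance and aspect novelty.

    candidates    : list of document ids
    rel           : dict doc -> relevance score for the whole query
    aspect_weight : dict aspect -> importance P(a|q)
    cov           : dict (doc, aspect) -> coverage probability P(d|a)
    lam           : 0 = pure relevance ranking, 1 = pure diversity
    """
    selected = []
    # Remaining "novelty" per aspect: product of (1 - P(d'|a)) over selected docs.
    novelty = {a: 1.0 for a in aspect_weight}
    pool = list(candidates)
    while pool and len(selected) < k:
        def score(d):
            div = sum(aspect_weight[a] * cov.get((d, a), 0.0) * novelty[a]
                      for a in aspect_weight)
            return (1 - lam) * rel[d] + lam * div
        best = max(pool, key=score)
        pool.remove(best)
        selected.append(best)
        # Aspects covered by the chosen document become less novel.
        for a in aspect_weight:
            novelty[a] *= 1.0 - cov.get((best, a), 0.0)
    return selected

# Toy example: d1 and d2 both cover aspect a1; d3 alone covers aspect a2.
rel = {"d1": 0.9, "d2": 0.85, "d3": 0.6}
w = {"a1": 0.6, "a2": 0.4}
cov = {("d1", "a1"): 0.9, ("d2", "a1"): 0.8, ("d3", "a2"): 0.9}
ranking = xquad(["d1", "d2", "d3"], rel, w, cov, k=3)
# With lam=0.5 the redundant d2 is pushed below d3: ["d1", "d3", "d2"]
```

With `lam=0` the same call degenerates to a plain relevance sort (`["d1", "d2", "d3"]`), which makes the effect of the diversity term easy to check.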