GANILLA: Generative adversarial networks for image to illustration translation


Hicsonmez S., Samet N., Akbaş E., Duygulu Şahin P.

IMAGE AND VISION COMPUTING, vol. 95, 2020 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 95
  • Publication Date: 2020
  • DOI Number: 10.1016/j.imavis.2020.103886
  • Journal Name: IMAGE AND VISION COMPUTING
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Applied Science & Technology Source, Biotechnology Research Abstracts, Compendex, Computer & Applied Sciences, INSPEC
  • Keywords: Generative adversarial networks, Image to image translation, Illustrations style transfer
  • Middle East Technical University Affiliated: Yes

Abstract

In this paper, we explore illustrations in children's books as a new domain in unpaired image-to-image translation. We show that although the current state-of-the-art image-to-image translation models successfully transfer either the style or the content, they fail to transfer both at the same time. We propose a new generator network to address this issue and show that the resulting network strikes a better balance between style and content. There are no well-defined or agreed-upon evaluation metrics for unpaired image-to-image translation. So far, the success of image translation models has been assessed through subjective, qualitative visual comparison on a limited number of images. To address this problem, we propose a new framework for the quantitative evaluation of image-to-illustration models, where both content and style are taken into account using separate classifiers. In this new evaluation framework, our proposed model performs better than the current state-of-the-art models on the illustrations dataset. Our code and pretrained models can be found at https://github.com/giddyyupp/ganilla. © 2020 Elsevier B.V. All rights reserved.
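The abstract's evaluation framework scores translated images with two separate classifiers, one for content preservation and one for style transfer. The sketch below is only a minimal illustration of that idea, not the authors' implementation: the ResNet-18 backbones, the `build_classifier` and `evaluate` helpers, the class counts, and the plain accuracy aggregation are all assumptions made here for clarity.

```python
# Minimal sketch of a content/style evaluation loop for translated images.
# Assumptions (not from the paper): classifier architecture, class counts,
# and the simple accuracy-based scoring are placeholders for illustration.
import torch
import torch.nn as nn
from torchvision import models


def build_classifier(num_classes: int) -> nn.Module:
    # A generic ResNet-18 stands in for the separate content and style
    # classifiers described in the abstract; in practice each would be
    # trained on its own labeled data beforehand.
    net = models.resnet18(weights=None)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net


@torch.no_grad()
def evaluate(translated, content_labels, style_labels, content_clf, style_clf):
    """Score a batch of translated images with both classifiers.

    translated:     (N, 3, H, W) tensor of generator outputs
    content_labels: (N,) content classes of the source photos
    style_labels:   (N,) target illustrator/style classes
    """
    content_clf.eval()
    style_clf.eval()
    content_acc = (content_clf(translated).argmax(1) == content_labels).float().mean()
    style_acc = (style_clf(translated).argmax(1) == style_labels).float().mean()
    # A model that balances the two objectives should keep both scores high:
    # content recognizable from the source photo, style matching the target artist.
    return content_acc.item(), style_acc.item()
```

Under this sketch, a model that only stylizes heavily would score high on style accuracy but low on content accuracy, and vice versa, which is the trade-off the abstract argues existing models fail to balance.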