Deep generation of 3D articulated models and animations from 2D stick figures


Akman A., SAHİLLİOĞLU Y., Sezgin T. M.

COMPUTERS & GRAPHICS-UK, vol.109, pp.65-74, 2022 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 109
  • Publication Date: 2022
  • DOI: 10.1016/j.cag.2022.10.004
  • Journal Name: COMPUTERS & GRAPHICS-UK
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, Aerospace Database, Applied Science & Technology Source, Communication Abstracts, Computer & Applied Sciences, INSPEC, Metadex, Civil Engineering Abstracts
  • Page Numbers: pp.65-74
  • Keywords: Computer graphics, 3D model generation, Deep learning, Sketch-based shape modeling, SHAPES
  • Affiliated with Middle East Technical University: Yes

Abstract

Generating 3D models from 2D images or sketches is an important, widely studied problem in computer graphics. We describe the first method to generate a 3D human model from a single sketched stick figure. In contrast to existing human modeling techniques, our method does not require a statistical body shape model. We exploit Variational Autoencoders to develop a novel framework capable of transitioning from a simple 2D stick figure sketch to a corresponding 3D human model. Our network learns the mapping between the input sketch and the output 3D model, and additionally learns the embedding space around these models. We demonstrate that our network can generate not only 3D models, but also 3D animations, through interpolation and extrapolation in the learned embedding space. In addition to 3D human models, we produce 3D horse models to show the generalization ability of our framework. Extensive experiments show that our model learns to generate 3D models and animations compatible with 2D sketches. (C) 2022 The Author(s). Published by Elsevier Ltd.
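The animation-by-interpolation idea mentioned above can be sketched as follows: given the latent codes of two poses, intermediate frames come from linear blends of the codes, and extrapolation pushes past an endpoint. This is a minimal illustration, not the paper's implementation; the function names and the 64-dimensional latent size are assumptions for the example, and a real pipeline would decode each code with the trained VAE decoder.

```python
import numpy as np

def blend_latents(z_a, z_b, t):
    """Linearly blend two latent codes: t in [0, 1] interpolates
    between the poses, t outside that range extrapolates past them."""
    return (1.0 - t) * z_a + t * z_b

def animation_codes(z_start, z_end, n_frames):
    """Latent codes for an animation: evenly spaced blends from
    z_start to z_end; each code would be decoded into one 3D frame."""
    ts = np.linspace(0.0, 1.0, n_frames)
    return [blend_latents(z_start, z_end, t) for t in ts]

# Two illustrative latent codes in a hypothetical 64-D embedding space.
rng = np.random.default_rng(0)
z0 = rng.normal(size=64)
z1 = rng.normal(size=64)

frames = animation_codes(z0, z1, 10)   # 10 interpolated latent codes
beyond = blend_latents(z0, z1, 1.5)    # extrapolated pose beyond z1
```

Decoding each element of `frames` with the (hypothetical) trained decoder would yield the in-between 3D models that make up the animation.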