Generation of 3D human models and animations using simple sketches


Akman A., SAHİLLİOĞLU Y., Sezgin T. M.

Graphics Interface 2020, GI 2020, Toronto, Virtual, Online, Canada, 28 - 29 May 2020, vol. 2020-May

  • Publication Type: Conference Paper / Full Text
  • Volume: 2020-May
  • City: Toronto, Virtual, Online
  • Country: Canada
  • Keywords: Computing methodologies, Learning latent representations, Neural networks
  • Middle East Technical University Affiliated: Yes

Abstract

© 2020 Canadian Information Processing Society. All rights reserved.

Generating 3D models from 2D images or sketches is a widely studied problem in computer graphics. We describe the first method to generate a 3D human model from a single sketched stick figure. In contrast to existing human modeling techniques, our method requires neither a statistical body shape model nor a rigged 3D character model. We exploit Variational Autoencoders to develop a novel framework capable of transitioning from a simple 2D stick figure sketch to a corresponding 3D human model. Our network learns the mapping between the input sketch and the output 3D model, as well as the embedding space around these models. We demonstrate that our network can generate not only 3D models but also 3D animations, through interpolation and extrapolation in the learned embedding space. Extensive experiments show that our model learns to generate reasonable 3D models and animations.
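The animation idea above can be sketched with a minimal example: given two latent codes (which, in the paper, the trained encoder would produce from stick-figure sketches), intermediate animation frames come from linear interpolation between them, and motion can be extended past the endpoints by extrapolation. The latent dimension, the random stand-in codes, and the helper name below are all hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical latent codes for two poses; in the paper's pipeline these
# would come from encoding two stick-figure sketches, not from a RNG.
rng = np.random.default_rng(0)
z_start = rng.normal(size=16)
z_end = rng.normal(size=16)

def blend_latents(z_a, z_b, n_frames, extrapolate=0.0):
    """Linearly blend two latent vectors into a sequence of frames.

    t in [0, 1] interpolates between z_a and z_b; pushing t beyond 1
    (via `extrapolate`) extends the motion past z_b, mirroring the
    interpolation/extrapolation idea described in the abstract.
    """
    ts = np.linspace(0.0, 1.0 + extrapolate, n_frames)
    return [(1.0 - t) * z_a + t * z_b for t in ts]

frames = blend_latents(z_start, z_end, n_frames=5)
# Each frame would then be decoded into a 3D human model by the trained
# decoder; here we just verify the endpoints of the blend.
print(len(frames))                      # 5
print(np.allclose(frames[0], z_start))  # True
print(np.allclose(frames[-1], z_end))   # True
```

Decoding each blended code with the learned decoder yields one mesh per frame, which is what turns a pair of sketches into an animation sequence.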