Dense depth alignment for human pose and shape estimation


Karagoz B., Suat O., Uguz B., AKBAŞ E.

Signal, Image and Video Processing, vol.18, no.12, pp.8577-8584, 2024 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 18 Issue: 12
  • Publication Date: 2024
  • DOI: 10.1007/s11760-024-03491-9
  • Journal Name: Signal, Image and Video Processing
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, INSPEC, zbMATH
  • Page Numbers: pp.8577-8584
  • Keywords: Dense correspondence estimation, Depth estimation, Human mesh estimation, Human pose and shape estimation, Human pose estimation
  • Middle East Technical University Affiliated: Yes

Abstract

Estimating 3D human pose and shape (HPS) from a monocular image has many applications. However, collecting ground-truth data for this problem is costly and constrained to limited lab environments. Researchers have used priors based on body structure or kinematics, as well as cues obtained from other vision tasks, to mitigate the scarcity of supervision. Despite its apparent potential in this context, monocular depth estimation has yet to be explored for this purpose. In this paper, we propose the Dense Depth Alignment (DDA) method, where we use an estimated dense depth map to create an auxiliary supervision signal for 3D HPS estimation. Specifically, we define a dense mapping between the points on the surface of the human mesh and the points reconstructed from depth estimation. We further introduce the idea of Camera Pretraining, a novel learning strategy where, instead of estimating all parameters simultaneously, learning of camera parameters is prioritized (before pose and shape parameters) to avoid unwanted local minima. Our experiments on the Human3.6M and 3DPW datasets show that our DDA loss and Camera Pretraining significantly improve HPS estimation performance over using only 2D keypoint supervision or 2D and 3D supervision. Code will be provided for research purposes at the following URL: https://terteros.github.io/hmr-depth/.
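The abstract does not give the exact formulation of the DDA loss; the sketch below illustrates one plausible reading of it, in which the estimated dense depth map is back-projected into a 3D point cloud and mesh surface points are pulled toward their corresponding depth points. The function names (back_project, dda_loss), the pinhole back-projection, and the L2 form of the loss are illustrative assumptions, not the authors' implementation; refer to the released code for the actual method.

import torch


def back_project(depth, intrinsics):
    """Lift a dense depth map (H, W) to a 3D point cloud (H*W, 3) with a
    pinhole camera model, given intrinsics [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]."""
    h, w = depth.shape
    fx, fy = intrinsics[0, 0], intrinsics[1, 1]
    cx, cy = intrinsics[0, 2], intrinsics[1, 2]
    v, u = torch.meshgrid(torch.arange(h, dtype=depth.dtype),
                          torch.arange(w, dtype=depth.dtype), indexing="ij")
    z = depth.flatten()
    x = (u.flatten() - cx) * z / fx
    y = (v.flatten() - cy) * z / fy
    return torch.stack([x, y, z], dim=-1)


def dda_loss(mesh_points, depth_points, correspondences):
    """Dense depth alignment, read here as a mean squared distance between mesh
    surface points and the depth-reconstructed points they are mapped to
    (a simplified stand-in for the paper's dense mapping)."""
    matched = depth_points[correspondences]              # (N, 3)
    return ((mesh_points - matched) ** 2).sum(dim=-1).mean()


if __name__ == "__main__":
    depth = torch.rand(8, 8) + 1.0                       # dummy dense depth map
    K = torch.tensor([[500.0, 0.0, 4.0],
                      [0.0, 500.0, 4.0],
                      [0.0, 0.0, 1.0]])
    cloud = back_project(depth, K)                       # (64, 3) point cloud
    mesh_pts = torch.rand(10, 3)                         # dummy mesh surface points
    corr = torch.randint(0, cloud.shape[0], (10,))       # dummy dense correspondences
    print(dda_loss(mesh_pts, cloud, corr))

Camera Pretraining, as described, would correspond to a two-stage schedule in which only the camera-related parameters receive gradient updates at first (e.g., by setting requires_grad to False on the pose and shape heads), and the remaining parameters are unfrozen afterwards; the exact schedule is not specified in the abstract.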