3-D Rigid Body Tracking Using Vision and Depth Sensors


Gedik O. S., Alatan A. A.

IEEE TRANSACTIONS ON CYBERNETICS, vol. 43, pp. 1395-1405, 2013 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 43
  • Publication Date: 2013
  • DOI: 10.1109/tcyb.2013.2272735
  • Journal Name: IEEE TRANSACTIONS ON CYBERNETICS
  • Indexed In: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp. 1395-1405
  • Keywords: 3-D tracking, extended Kalman filter, model-based tracking, RGBD data fusion, POSE ESTIMATION, OBJECT
  • Affiliated with Middle East Technical University: Yes

Abstract

In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required. Accurate pose estimates are needed to increase reliability and decrease jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages, whereas trackers relying on pure depth sensors are not suitable for AR applications. This paper proposes an automated 3-D tracking algorithm based on the fusion of vision and depth sensors via an extended Kalman filter. A novel measurement-tracking scheme, based on estimating optical flow using the intensity and shape index map data of the 3-D point cloud, significantly increases both 2-D and 3-D tracking performance. The proposed method requires neither manual pose initialization nor offline training, while enabling highly accurate 3-D tracking. Its accuracy is tested against a number of conventional techniques, and superior performance is clearly observed, both objectively via error metrics and subjectively in the rendered scenes.
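The abstract mentions building a shape index map from the 3-D point cloud to drive optical flow estimation. As a rough illustration only (not the authors' implementation), the Koenderink-van Doorn shape index can be computed per pixel from principal curvatures, which in turn can be estimated by a local quadratic fit to a depth patch. The function names, window handling, and small-slope approximation below are illustrative assumptions.

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink-van Doorn shape index in [-1, 1].

    k1 >= k2 are the principal curvatures; sign conventions vary
    across papers, so treat the orientation here as illustrative.
    """
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)
    # arctan2 handles the umbilic case k1 == k2 (caps/cups map to +/-1).
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

def curvatures_from_patch(z, spacing=1.0):
    """Estimate principal curvatures at the center of a small depth patch
    by least-squares fitting z ~ a*x^2 + b*x*y + c*y^2 + d*x + e*y + f and
    taking the eigenvalues of the Hessian (valid for small surface slopes)."""
    h, w = z.shape
    y, x = np.mgrid[:h, :w].astype(float)
    x = (x - (w - 1) / 2.0) * spacing
    y = (y - (h - 1) / 2.0) * spacing
    A = np.stack([x**2, x*y, y**2, x, y, np.ones_like(x)], axis=-1).reshape(-1, 6)
    coef, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    a, b, c = coef[0], coef[1], coef[2]
    hessian = np.array([[2 * a, b], [b, 2 * c]])
    k2, k1 = np.sort(np.linalg.eigvalsh(hessian))
    return k1, k2

# A spherical cap is an umbilic point (k1 == k2), a saddle has k1 == -k2.
print(shape_index(1.0, 1.0))    # cap, approx. 1.0
print(shape_index(1.0, -1.0))   # saddle, approx. 0.0
```

In a full pipeline, such a shape index map would be computed densely over the depth image and used alongside the intensity image as a second channel for optical flow, which is the fusion idea the abstract describes.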