A Transformer-Based Network for Full Object Pose Estimation with Depth Refinement


Abdulsalam M., Ahiska K., Aouf N.

ADVANCED INTELLIGENT SYSTEMS, vol. 6, no. 10, 2024 (SCI-Expanded, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 6 Issue: 10
  • Publication Date: 2024
  • DOI: 10.1002/aisy.202400110
  • Journal Name: ADVANCED INTELLIGENT SYSTEMS
  • Indexed in: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Affiliated with Middle East Technical University: No

Abstract

In response to the increasing demand for robotic manipulation, accurate vision-based full pose estimation is essential. While convolutional neural network-based approaches have been introduced, the quest for higher performance continues, especially for precise robotic manipulation, including in the Agri-robotics domain. This article proposes an improved transformer-based pipeline for full pose estimation, incorporating a Depth Refinement Module. Operating solely on monocular images, the architecture features an innovative Lighter Depth Estimation Network that uses a Feature Pyramid with an up-sampling method for depth prediction. A Transformer-based Detection Network with additional prediction heads is employed to directly regress object centers and predict the full poses of the target objects. A novel Depth Refinement Module then uses the predicted centers, full poses, and depth patches to refine the accuracy of the estimated poses. The pipeline's performance is extensively compared with other state-of-the-art methods, and the results are analyzed for fruit-picking applications. The results demonstrate that the pipeline raises pose estimation accuracy to as much as 90.79%, outperforming other methods available in the literature.

© 2024 WILEY-VCH GmbH
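
The abstract only outlines the pipeline at a high level. The following minimal PyTorch-style sketch illustrates how such a three-stage design (a monocular depth branch, a transformer detector with extra center/pose heads, and a depth-patch-based refinement of the translation) could be wired together. All module names, layer sizes, and the median-patch refinement rule are illustrative assumptions, not the authors' published implementation.

    # Illustrative sketch only: architecture details are assumptions,
    # not the implementation described in the article.
    import torch
    import torch.nn as nn

    class LightDepthNet(nn.Module):
        """Toy stand-in for the lighter monocular depth estimation branch."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Simple up-sampling decoder standing in for the feature-pyramid /
            # up-sampling design mentioned in the abstract.
            self.decoder = nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(32, 1, 3, padding=1),
            )

        def forward(self, img):
            return self.decoder(self.encoder(img))  # (B, 1, H, W) depth map

    class PoseDetector(nn.Module):
        """Toy transformer detector with extra heads for 2D centers and 6D poses."""
        def __init__(self, d_model=64, num_queries=8):
            super().__init__()
            self.backbone = nn.Conv2d(3, d_model, 16, stride=16)  # patchify image
            self.transformer = nn.Transformer(d_model, nhead=4,
                                              num_encoder_layers=2,
                                              num_decoder_layers=2,
                                              batch_first=True)
            self.queries = nn.Parameter(torch.randn(num_queries, d_model))
            self.center_head = nn.Linear(d_model, 2)  # normalized (u, v) center
            self.rot_head = nn.Linear(d_model, 4)     # rotation as a quaternion
            self.trans_head = nn.Linear(d_model, 3)   # coarse translation (x, y, z)

        def forward(self, img):
            tokens = self.backbone(img).flatten(2).transpose(1, 2)  # (B, N, d)
            q = self.queries.unsqueeze(0).expand(img.size(0), -1, -1)
            h = self.transformer(tokens, q)
            return self.center_head(h).sigmoid(), self.rot_head(h), self.trans_head(h)

    def refine_translation(trans, centers, depth, patch=5):
        """Toy refinement: replace the predicted z with the median depth in a
        small patch around each predicted object center."""
        B, Q, _ = trans.shape
        _, _, H, W = depth.shape
        refined = trans.clone()
        for b in range(B):
            for q in range(Q):
                u = int(centers[b, q, 0] * (W - 1))
                v = int(centers[b, q, 1] * (H - 1))
                u0, u1 = max(u - patch, 0), min(u + patch + 1, W)
                v0, v1 = max(v - patch, 0), min(v + patch + 1, H)
                refined[b, q, 2] = depth[b, 0, v0:v1, u0:u1].median()
        return refined

    img = torch.randn(1, 3, 64, 64)
    depth = LightDepthNet()(img)
    centers, quats, coarse_t = PoseDetector()(img)
    refined_t = refine_translation(coarse_t, centers, depth)

Note that this toy refinement corrects only the depth component of the translation; the Depth Refinement Module described in the abstract also conditions on the predicted full poses, which the sketch omits.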