DFM: A Performance Baseline for Deep Feature Matching


Efe U., Ince K. G., Alatan A. A.

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual (online), 19-25 June 2021, pp. 4279-4288

  • Publication Type: Conference Paper / Full-Text Conference Paper
  • DOI: 10.1109/cvprw53098.2021.00484
  • Country of Publication: Virtual (online conference)
  • Pages: pp. 4279-4288
  • Affiliated with Middle East Technical University: Yes

Abstract

A novel image matching method is proposed that utilizes learned features extracted by an off-the-shelf deep neural network to obtain promising performance. The proposed method uses a pre-trained VGG architecture as a feature extractor and does not require any additional training specific to the matching task. Inspired by well-established concepts in psychology, such as the Mental Rotation paradigm, an initial warping is performed based on a preliminary estimate of the geometric transformation between the images. This estimate is obtained simply by dense nearest-neighbor matching of the terminal-layer VGG activations of the two images. After this initial alignment, the same matching procedure is repeated between the reference and the aligned image in a hierarchical manner to achieve good localization and matching performance. Our algorithm achieves overall Mean Matching Accuracy (MMA) scores of 0.57 and 0.80 at 1-pixel and 2-pixel thresholds, respectively, on the HPatches dataset [4], indicating better performance than the state-of-the-art.
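
To make the abstract's pipeline concrete, the sketch below illustrates only the initial-alignment stage under stated assumptions: a pre-trained VGG-19 from torchvision as the feature extractor, dense mutual nearest-neighbor matching of the terminal-layer activations, and a RANSAC homography estimated with OpenCV as the preliminary geometric transformation. The function names, layer choice, stride value, and thresholds are illustrative and are not taken from the authors' released implementation.

# Hedged sketch of the initial-alignment stage described in the abstract:
# dense nearest-neighbor matching of terminal-layer VGG features, followed by
# a coarse homography estimate and warping. Assumes PyTorch, torchvision and
# OpenCV; all names and parameters here are illustrative, not the DFM code.
import cv2
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval().to(device)

preprocess = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def terminal_features(img_bgr):
    # Run the full VGG-19 convolutional stack and return the last feature map.
    rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
    x = preprocess(rgb).unsqueeze(0).to(device)
    with torch.no_grad():
        f = vgg(x)                                        # (1, C, H/32, W/32)
    return torch.nn.functional.normalize(f[0], dim=0)     # unit-norm descriptor per cell

def dense_mutual_nn(fa, fb):
    # Mutual nearest-neighbor matches between two (C, H, W) feature maps.
    C, Ha, Wa = fa.shape
    _, Hb, Wb = fb.shape
    da = fa.reshape(C, -1).t()          # (Ha*Wa, C)
    db = fb.reshape(C, -1).t()          # (Hb*Wb, C)
    sim = da @ db.t()                   # cosine similarity (descriptors are unit norm)
    nn_ab = sim.argmax(dim=1)
    nn_ba = sim.argmax(dim=0)
    idx_a = torch.arange(sim.shape[0], device=sim.device)
    mutual = nn_ba[nn_ab] == idx_a      # keep only mutually consistent pairs
    ia, ib = idx_a[mutual], nn_ab[mutual]
    pts_a = torch.stack([ia % Wa, ia // Wa], dim=1).float()
    pts_b = torch.stack([ib % Wb, ib // Wb], dim=1).float()
    return pts_a.cpu().numpy(), pts_b.cpu().numpy()

def initial_alignment(img_a, img_b, stride=32):
    # Estimate a coarse homography from terminal-layer matches and warp img_b onto img_a.
    fa, fb = terminal_features(img_a), terminal_features(img_b)
    pa, pb = dense_mutual_nn(fa, fb)
    # Map feature-grid cells back to approximate image coordinates (stride of the last layer).
    pa_img = (pa + 0.5) * stride
    pb_img = (pb + 0.5) * stride
    H, _ = cv2.findHomography(pb_img, pa_img, cv2.RANSAC, float(stride))
    warped = cv2.warpPerspective(img_b, H, (img_a.shape[1], img_a.shape[0]))
    return H, warped

# Usage (hypothetical file names):
#   img_a = cv2.imread("ref.png"); img_b = cv2.imread("target.png")
#   H0, aligned = initial_alignment(img_a, img_b)
# In the full method, matching would then be repeated between the reference and the
# warped image at progressively shallower layers to refine localization hierarchically.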