Jongmin Lee, Byungjin Kim, Minsu Cho
Pohang University of Science and Technology (POSTECH), South Korea
Detecting robust keypoints from an image is an integral part of many computer vision problems, and the characteristic orientation and scale of keypoints play an important role in keypoint description and matching. Existing learning-based methods for keypoint detection rely on standard translation-equivariant CNNs and often fail to detect reliable keypoints under geometric variations. To learn to detect robust oriented keypoints, we introduce a self-supervised learning framework using rotation-equivariant CNNs. We propose a dense orientation alignment loss that trains a histogram-based orientation map using image pairs generated by synthetic transformations. Our method outperforms previous methods on an image matching benchmark and a camera pose estimation benchmark.
[Figure] (a) Illustration of the dense orientation alignment loss. (b) Visualization of the color-coded orientation maps.
[Figure] (a) Repeatability. (b) Orientation estimation accuracy.
[Figure] (a) Results on HPatches. (b) Results on IMC2021 [1].
[Figure] (a) Outlier filtering using the estimated orientation. (b) Results according to the order of the group.
This work was supported by Samsung Research Funding & Incubation Center of Samsung Electronics under Project Number SRFC-TF2103-02.
Check out our GitHub repository: [GitHub]