In this paper, we propose a method to robustly estimate the fundamental matrix for hybrid camera pairs. In our study, a catadioptric omnidirectional camera and a perspective camera were used to obtain hybrid image pairs. For automatic feature point matching, we employed the Scale Invariant Feature Transform (SIFT) and improved the matching results with the proposed image preprocessing. We also performed matching using virtual camera plane (VCP) images, which are unwarped from the omnidirectional image and carry perspective image properties. Although both approaches produce successful results, we observed that VCP-perspective matching is more robust to an increasing baseline than direct omnidirectional-perspective matching. We implemented RANSAC based on the hybrid epipolar geometry, which enables robust estimation of the fundamental matrix as well as elimination of false matches.
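As a rough illustration of the final step described above (RANSAC estimation of a fundamental matrix that simultaneously rejects false matches), the following is a minimal sketch in standard perspective-perspective form using the normalized eight-point algorithm and the Sampson error. It does not reproduce the paper's hybrid epipolar model; all function names, thresholds, and the synthetic two-camera setup are illustrative assumptions.

```python
import numpy as np

def normalize(pts):
    # Hartley normalization: center points and scale mean distance to sqrt(2)
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    h = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ h.T).T, T

def eight_point(p1, p2):
    # Normalized eight-point algorithm with rank-2 enforcement
    n1, T1 = normalize(p1)
    n2, T2 = normalize(p2)
    A = np.column_stack([n2[:, 0] * n1[:, 0], n2[:, 0] * n1[:, 1], n2[:, 0],
                         n2[:, 1] * n1[:, 0], n2[:, 1] * n1[:, 1], n2[:, 1],
                         n1[:, 0], n1[:, 1], np.ones(len(p1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0                      # a fundamental matrix has rank 2
    return T2.T @ (U @ np.diag(S) @ Vt) @ T1

def sampson_error(F, p1, p2):
    # First-order geometric (Sampson) distance of each correspondence
    h1 = np.hstack([p1, np.ones((len(p1), 1))])
    h2 = np.hstack([p2, np.ones((len(p2), 1))])
    Fx1, Ftx2 = (F @ h1.T).T, (F.T @ h2.T).T
    num = np.sum(h2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den

def ransac_fundamental(p1, p2, iters=500, thresh=1.0, seed=None):
    # RANSAC: hypothesize F from random 8-point samples, keep the model
    # with the largest consensus set, then refit on its inliers
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(p1), bool)
    for _ in range(iters):
        idx = rng.choice(len(p1), 8, replace=False)
        F = eight_point(p1[idx], p2[idx])
        inliers = sampson_error(F, p1, p2) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return eight_point(p1[best_inliers], p2[best_inliers]), best_inliers

# Synthetic demo (hypothetical setup): two perspective views of random
# 3D points, with 15 of 100 matches corrupted to mimic false SIFT matches
rng = np.random.default_rng(0)
X = rng.uniform([-1, -1, 4], [1, 1, 8], (100, 3))
K = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1.0]])
def project(P, X):
    x = (P @ np.hstack([X, np.ones((len(X), 1))]).T).T
    return x[:, :2] / x[:, 2:3]
p1 = project(K @ np.hstack([np.eye(3), np.zeros((3, 1))]), X)
p2 = project(K @ np.hstack([np.eye(3), np.array([[0.5], [0], [0]])]), X)
p2[:15] += rng.uniform(30, 80, (15, 2))   # inject outliers
F, inliers = ransac_fundamental(p1, p2, seed=1)
```

In this sketch the consensus set returned by `ransac_fundamental` is exactly the set of matches consistent with the estimated epipolar geometry, which is how RANSAC doubles as a false-match filter in the pipeline the abstract describes.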