This paper examines how the accuracy of target position and velocity estimates obtained from vision-based tracking algorithms affects the performance of classical guidance laws. Current deep learning-based visual MOT (Multiple Object Tracking) algorithms use a linear Kalman filter to estimate the motion of moving objects. These methods therefore require high frame-rate video to track objects accurately, since they apply the prediction-update steps sequentially and do not rely on prediction results when a measurement is unavailable. As a result, employing the linear Kalman filter causes objects to go untracked or be lost due to occlusion within a short time window. For this reason, we utilize an Unequal Dimension Interacting Multiple Model (UDIMM) based motion estimation filter in deep learning-based visual MOT to increase the estimation accuracy for a moving target. This allows us to keep providing highly accurate state estimates to our quadcopter while the target is occluded and maneuvering. Based on Monte Carlo (MC) simulations, the study presents the performance of guidance methods such as True Proportional Navigation (TPN), Pure Proportional Navigation (PPN), and Effective Pure Proportional Navigation (EPPN) when employing both the linear and UDIMM filters.
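To illustrate the prediction-update cycle discussed above, the following is a minimal sketch of a one-dimensional constant-velocity linear Kalman filter. It is illustrative only: the function names and the process/measurement noise values (`q`, `r`) are assumptions, not the paper's implementation. Coasting (predict-only steps during occlusion) leaves the state extrapolating along a straight line while the covariance grows, which is why a purely linear model loses a maneuvering, occluded target.

```python
# Minimal 1D constant-velocity Kalman filter (illustrative sketch;
# noise parameters q and r are assumed values, not from the paper).
# State x = [position, velocity]; covariance P stored row-major as
# [p00, p01, p10, p11].

def kf_predict(x, P, dt=1.0, q=0.01):
    """Prediction step: x' = F x, P' = F P F^T + Q, with
    F = [[1, dt], [0, 1]] and Q = q * I as a simple assumption."""
    px, vx = x
    x_pred = [px + dt * vx, vx]
    p00, p01, p10, p11 = P
    P_pred = [p00 + dt * (p01 + p10) + dt * dt * p11 + q,
              p01 + dt * p11,
              p10 + dt * p11,
              p11 + q]
    return x_pred, P_pred

def kf_update(x, P, z, r=0.1):
    """Update step with scalar position measurement z, H = [1, 0]."""
    p00, p01, p10, p11 = P
    s = p00 + r                    # innovation covariance S = H P H^T + R
    k0, k1 = p00 / s, p10 / s      # Kalman gain K = P H^T S^-1
    y = z - x[0]                   # innovation (residual)
    x_new = [x[0] + k0 * y, x[1] + k1 * y]
    P_new = [(1 - k0) * p00, (1 - k0) * p01,   # (I - K H) P
             p10 - k1 * p00, p11 - k1 * p01]
    return x_new, P_new

# One predict-update cycle, then "coasting" while the target is occluded:
x, P = [0.0, 1.0], [1.0, 0.0, 0.0, 1.0]
x, P = kf_predict(x, P)
x, P = kf_update(x, P, 1.0)
for _ in range(5):                 # measurement unavailable: predict only
    x, P = kf_predict(x, P)        # state extrapolates, uncertainty grows
```

Note that during the coast the velocity estimate stays frozen at its last updated value, so any target maneuver during occlusion translates directly into position error; the UDIMM approach studied in the paper mitigates this by mixing multiple motion models.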