In this study, the effect of visual context information on the performance of learning-based techniques for the super-resolution problem is analyzed. In addition to a detailed interpretation of the experimental results, the paper also provides their theoretical justification. The experiments use two visual datasets, composed of natural scenes and remote sensing scenes respectively. From the experimental results, we observe that preserving visual context information during parameter learning for convolutional neural networks yields better performance than the baselines. Moreover, we find that fine-tuning pre-trained parameters on data from the related context, even with fewer samples, further improves the results.
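The fine-tuning idea mentioned above can be illustrated with a deliberately minimal sketch: a single 3×3 convolution filter stands in for a pre-trained network, and a handful of context-related patch pairs are used to adapt it by gradient descent on the mean squared error. The toy data, kernel, and hyperparameters here are illustrative assumptions, not the paper's actual architecture or training setup.

```python
import numpy as np

def conv2d(img, k):
    """Valid 2-D correlation of an image with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

def fine_tune(kernel, samples, lr=0.01, epochs=200):
    """Adapt a pre-trained kernel to a few (input, target) pairs
    drawn from the related visual context, minimizing MSE."""
    k = kernel.copy()
    for _ in range(epochs):
        for x, y in samples:
            err = conv2d(x, k) - y
            # Gradient of the MSE with respect to each kernel entry.
            grad = np.zeros_like(k)
            for i in range(err.shape[0]):
                for j in range(err.shape[1]):
                    grad += err[i, j] * x[i:i + 3, j:j + 3]
            k -= lr * grad / err.size
    return k

# Illustrative usage: the "true" context mapping is a Gaussian blur,
# and the pre-trained kernel is a perturbed version of it.
rng = np.random.default_rng(0)
true_k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 16.0
samples = [(x, conv2d(x, true_k))
           for x in (rng.random((8, 8)) for _ in range(3))]
pretrained = true_k + 0.05 * rng.standard_normal((3, 3))
tuned = fine_tune(pretrained, samples)
```

After fine-tuning on only three context-related samples, the reconstruction error on those samples drops relative to the unadapted pre-trained kernel, which is the qualitative effect the abstract reports.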