Content-Based Image Retrieval (CBIR) is an important area in the field of image processing and analysis. This paper proposes a novel method for retrieving visually similar images. The method uses a visual attention model, the SalNet algorithm, to extract the saliency map of a given image. SalNet is based on deep learning, reflecting the success of machine learning algorithms, and Deep Convolutional Neural Networks (DCNNs) in particular, on many difficult computer vision problems. Using this model, the salient region or segment is first detected in each image, and then traditional visual features such as color histograms, texture descriptors, and Histograms of Oriented Gradients (HOG) are computed and stored in the feature database. Saliency detection makes the retrieval process simpler and more accurate, since the salient regions of an image are detected automatically. Hence, the most visually similar images can be retrieved for a given query image, because salient regions closely correspond to what humans visually perceive. The experimental dataset contains 1000 images from the WANG database, including horses, elephants, food, and African people. Our results show that the proposed method is efficient and accurate compared to previously existing models.
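The retrieval pipeline described above (compute visual features per image, store them in a feature database, then rank database images by their distance to the query's features) can be sketched minimally. This is an illustrative assumption-laden sketch, not the paper's implementation: it uses only a per-channel color histogram as the feature and Euclidean distance for ranking, and omits the SalNet saliency-detection step, texture, and HOG features.

```python
import numpy as np

def color_histogram(image, bins=8):
    # image: H x W x 3 uint8 array; concatenated per-channel
    # histograms, L1-normalised into a single feature vector
    feats = []
    for c in range(3):
        h, _ = np.histogram(image[..., c], bins=bins, range=(0, 256))
        feats.append(h)
    v = np.concatenate(feats).astype(float)
    return v / v.sum()

def retrieve(query_feat, db_feats, k=3):
    # rank feature-database rows by Euclidean distance to the query
    d = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(d)[:k]

# Tiny synthetic "database" (hypothetical stand-in for the WANG images):
# one red-tinted, one green-tinted, one blue-tinted image.
rng = np.random.default_rng(0)

def tinted(channel):
    img = rng.integers(0, 60, size=(32, 32, 3), dtype=np.uint8).astype(int)
    img[..., channel] += 180
    return np.clip(img, 0, 255).astype(np.uint8)

db = np.stack([color_histogram(tinted(c)) for c in range(3)])
query = color_histogram(tinted(0))   # another red-tinted query image
print(retrieve(query, db, k=1))      # index of the nearest database image
```

In the proposed method, the features would instead be computed only over the salient region returned by SalNet, so that background clutter does not dominate the comparison.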