Hyperspectral data provides rich spectral information about a scene, since it encapsulates measurements over a wide range of the electromagnetic spectrum. Accordingly, it has been used in various problems, mostly related to identification and detection tasks. However, its main limitation is data availability: far less hyperspectral data is available for training models than visible-range data. In this paper, we tackle an inverse problem, estimating relative lidar depth from hyperspectral data. To mitigate this limitation, we integrate the semantic information present in the data via supervised labels, reducing the risk of parameter overfitting. Moreover, the details of the output responses are enhanced with Laplacian pyramids and attention layers, so that the model makes a prediction at each successive scale instead of a single-shot prediction from the top of the model. In our experiments, we use the 2018 IEEE GRSS Data Fusion Challenge dataset. The experimental results show that using hyperspectral data instead of visible-range data improves performance. Moreover, we show that results improve significantly when a sparse set of depth measurements is used along with the hyperspectral data. Lastly, integrating semantic information into the solution yields more stable and better results compared to the baselines.
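The multi-scale prediction scheme rests on the standard Laplacian-pyramid decomposition, which splits an image into band-pass detail layers plus a coarse residual and is exactly invertible. The following is a minimal NumPy sketch of that decomposition, not the paper's implementation; the 2x2 average pooling and nearest-neighbour upsampling are simplified stand-ins for the usual Gaussian filtering and interpolation:

```python
import numpy as np

def downsample(img):
    # 2x2 average pooling (crude stand-in for Gaussian blur + decimation)
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # Nearest-neighbour upsampling by 2 (crude stand-in for interpolation)
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels=3):
    """Decompose `img` into `levels` detail layers plus a coarse residual."""
    pyramid = []
    current = img
    for _ in range(levels):
        smaller = downsample(current)
        pyramid.append(current - upsample(smaller))  # detail lost at this scale
        current = smaller
    pyramid.append(current)  # coarsest residual
    return pyramid

def reconstruct(pyramid):
    """Invert the pyramid coarse-to-fine, adding detail back at each scale."""
    current = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        current = upsample(current) + detail
    return current
```

A model predicting from each pyramid scale refines a coarse estimate with the detail layers, rather than regressing the full-resolution output in one shot; the decomposition above is lossless, so `reconstruct(laplacian_pyramid(x))` recovers `x` exactly.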