In this paper, we present a binarized neural network structure for inverse problems. This structure significantly reduces memory requirements and computation time with a negligible performance drop compared to full-precision models. To this end, we propose a novel architecture based on residual learning: the network reconstructs only the error between the input and output images, which centralizes the responses around zero. This centralization offers several advantages for binary representation, and the resulting manifold is easier to learn with binarized networks. Experiments are conducted on three different inverse problems, namely super-resolution, denoising, and deblurring, over various datasets. The results validate the effectiveness of the method.
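To give intuition for why predicting the residual suits a 1-bit representation, the following is a minimal NumPy sketch (illustrative synthetic values, not the paper's actual network or data): sign-based binarization with a scaling factor, applied to a zero-centered residual, incurs a far smaller absolute quantization error than the same binarization applied to raw image intensities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a "clean" patch and a mildly corrupted version of it.
clean = rng.uniform(0.0, 1.0, size=(64, 64))
degraded = clean + rng.normal(0.0, 0.05, size=clean.shape)

# Residual-learning target: only the error between input and output images.
# Its responses are centralized around zero, unlike raw intensities (~0.5 here).
residual = clean - degraded

def binarize(x):
    """1-bit representation: sign of x with a per-tensor scaling factor
    (a common scheme in binarized networks)."""
    alpha = np.mean(np.abs(x))
    return alpha * np.sign(x)

# Absolute quantization error is much smaller for the zero-centered residual,
# which is what matters when the binarized output is added back to the input.
err_raw = np.linalg.norm(degraded - binarize(degraded))
err_res = np.linalg.norm(residual - binarize(residual))
print(err_res < err_raw)
```

The comparison uses absolute error because the final reconstruction is formed by adding the predicted residual back to the input, so the residual's small magnitude directly limits the damage done by 1-bit quantization.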