Learned Lossless Image Compression Through Interpolation With Low Complexity


KAMIŞLI F.

IEEE Transactions on Circuits and Systems for Video Technology, vol.33, no.12, pp.7832-7841, 2023 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 33 Issue: 12
  • Publication Date: 2023
  • DOI Number: 10.1109/TCSVT.2023.3273578
  • Journal Name: IEEE Transactions on Circuits and Systems for Video Technology
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, Aerospace Database, Applied Science & Technology Source, Business Source Elite, Business Source Premier, Communication Abstracts, Compendex, Computer & Applied Sciences, INSPEC, Metadex, Civil Engineering Abstracts
  • Page Numbers: pp.7832-7841
  • Keywords: artificial neural networks, entropy coding, image compression, lossless compression
  • Middle East Technical University Affiliated: Yes

Abstract

With the increasing popularity of deep learning in image processing, many learned lossless image compression methods have been proposed recently. One group of algorithms is based on scale-based auto-regressive models and can provide competitive compression performance while also allowing easily parallelized computation and short encoding/decoding times. However, these methods use large neural networks and have high computational requirements. This paper presents an interpolation-based learned lossless image compression method that falls into the scale-based auto-regressive model group. The method achieves compression performance better than or on par with recent scale-based auto-regressive models, yet requires more than 10x fewer neural network parameters (0.19M) and more than 10x less encoding/decoding computation. These gains come from contributions in the overall system and neural network architecture design, such as sharing the interpolator neural networks across scales, using separate neural networks for the different parameters of the probability distribution model, and performing the processing in the YCoCg-R color space instead of the RGB color space.
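
To make the scale-based idea concrete, the sketch below shows one plausible reading of such a pipeline: the image is split into a half-resolution base plus the remaining sub-lattices, the base is coded recursively, and the finer pixels are predicted by an interpolator that is shared across all scales (which is why the parameter count can stay small). Everything here (the 2x2 polyphase split, the function names, the nearest-neighbor stand-in for the learned interpolator) is an illustrative assumption, not the paper's actual architecture.

```python
import numpy as np

def split(x):
    # Polyphase split of each 2x2 block into four sub-lattices (H, W even).
    return x[0::2, 0::2], (x[0::2, 1::2], x[1::2, 0::2], x[1::2, 1::2])

def encode(x, interp, scales=3):
    # Code the coarsest base directly; at every finer scale, code only the
    # residual between each sub-lattice and the shared interpolator's
    # prediction from that scale's base.
    base, rest = split(x)
    out = encode(base, interp, scales - 1) if scales > 1 else [base]
    for sub in rest:
        out.append(sub.astype(np.int32) - interp(base))  # residual stream
    return out

# Nearest-neighbor stand-in for the (shared) learned interpolator network.
nearest = lambda base: base.astype(np.int32)

img = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)
streams = encode(img, nearest, scales=3)  # residuals would be entropy-coded
```

In the actual method, the interpolator would output the parameters of a probability distribution over each pixel (with, per the abstract, separate networks for the different distribution parameters), and the sub-lattice pixels would be entropy-coded under that model; the decoder mirrors the recursion, reconstructing each scale from its base and the decoded residuals.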
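
The YCoCg-R transform mentioned at the end of the abstract is a standard, exactly reversible integer color transform built from lifting steps, which is what makes it admissible in a lossless pipeline. A minimal NumPy sketch of the standard transform (not code from the paper):

```python
import numpy as np

def rgb_to_ycocg_r(rgb):
    """Forward YCoCg-R transform (integer lifting, exactly invertible)."""
    r, g, b = (rgb[..., i].astype(np.int32) for i in range(3))
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return np.stack([y, co, cg], axis=-1)

def ycocg_r_to_rgb(ycocg):
    """Inverse YCoCg-R transform; recovers the original RGB bit-exactly."""
    y, co, cg = (ycocg[..., i] for i in range(3))
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return np.stack([r, g, b], axis=-1).astype(np.uint8)

# Round-trip check on a random image: the transform is lossless.
img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
assert np.array_equal(ycocg_r_to_rgb(rgb_to_ycocg_r(img)), img)
```

Because every step is an integer add/subtract with a shift, the inverse undoes the forward transform exactly; decorrelating the channels this way typically yields samples that are cheaper to entropy-code than raw RGB.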