Autoencoder-Based Error Correction Coding for One-Bit Quantization


Balevi E., Andrews J. G.

IEEE TRANSACTIONS ON COMMUNICATIONS, vol.68, no.6, pp.3440-3451, 2020 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 68 Issue: 6
  • Publication Date: 2020
  • DOI Number: 10.1109/TCOMM.2020.2977280
  • Journal Name: IEEE TRANSACTIONS ON COMMUNICATIONS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, Aerospace Database, Applied Science & Technology Source, Business Source Elite, Business Source Premier, Communication & Mass Media Index, Communication Abstracts, Compendex, Computer & Applied Sciences, INSPEC, Metadex, zbMATH, Civil Engineering Abstracts
  • Page Numbers: pp.3440-3451
  • Keywords: Decoding, Training, Quantization (signal), Turbo codes, Communication systems, Error correction codes, Deep learning, Error correction coding, One-bit quantization, Deep networks
  • Middle East Technical University Affiliated: No

Abstract

This paper proposes a novel deep learning-based error correction coding scheme for AWGN channels under the constraint of one-bit quantization at the receiver. Specifically, it is first shown that the optimum error correction code that minimizes the probability of bit error can be obtained by perfectly training a special autoencoder, where "perfectly" refers to converging to the global minimum. However, perfect training is not possible in most cases. To approach the performance of a perfectly trained autoencoder with suboptimum training, we propose utilizing turbo codes as an implicit regularizer, i.e., using a concatenation of a turbo code and an autoencoder. It is empirically shown that this design gives nearly the same performance as the hypothetical perfectly trained autoencoder, and we also provide a theoretical proof of why that is so. The proposed coding method is as bandwidth efficient as the integrated (outer) turbo code, since the autoencoder exploits the excess bandwidth from pulse shaping and packs signals more intelligently thanks to the sparsity in neural networks. Our results show that the proposed coding scheme at finite block lengths outperforms conventional turbo codes even for QPSK modulation. Furthermore, the proposed coding method can make one-bit quantization operational even for 16-QAM.
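
To illustrate the setup the abstract describes, below is a minimal PyTorch sketch of an autoencoder trained end-to-end over an AWGN channel with a one-bit quantizer at the receiver. It is not the authors' implementation: the layer widths, block sizes k and n, the SNR value, and the straight-through estimator used to backpropagate through the sign() quantizer are all illustrative assumptions, and the turbo-code concatenation proposed in the paper is omitted.

    import torch
    import torch.nn as nn

    class SignSTE(torch.autograd.Function):
        """One-bit quantizer: sign() in the forward pass,
        straight-through (identity) gradient in the backward pass."""
        @staticmethod
        def forward(ctx, x):
            return torch.sign(x)
        @staticmethod
        def backward(ctx, grad_output):
            return grad_output  # pass gradients through unchanged

    class OneBitAutoencoder(nn.Module):
        def __init__(self, k=8, n=16, snr_db=5.0):
            super().__init__()
            # Encoder maps k message symbols to an n-dimensional codeword.
            self.encoder = nn.Sequential(nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, n))
            # Decoder maps the quantized channel output back to k bit logits.
            self.decoder = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, k))
            self.noise_std = 10 ** (-snr_db / 20)  # noise std for unit signal power

        def forward(self, m):
            x = self.encoder(m)
            # Normalize each codeword to unit average power (energy constraint).
            x = x / x.pow(2).mean(dim=-1, keepdim=True).sqrt()
            y = x + self.noise_std * torch.randn_like(x)  # AWGN channel
            r = SignSTE.apply(y)                          # one-bit ADC at the receiver
            return self.decoder(r)

    # Training loop sketch: random message bits, cross-entropy on recovered bits.
    model = OneBitAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for step in range(1000):
        bits = torch.randint(0, 2, (256, 8)).float()
        logits = model(2 * bits - 1)  # map {0,1} bits to {-1,+1} symbols
        loss = loss_fn(logits, bits)
        opt.zero_grad()
        loss.backward()
        opt.step()

The straight-through estimator is the key workaround here: sign() has zero gradient almost everywhere, so without it no gradient would reach the encoder through the one-bit quantizer.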