MetaLabelNet: Learning to Generate Soft-Labels From Noisy-Labels

Creative Commons License

Algan G., Ulusoy I.

IEEE Transactions on Image Processing, vol.31, pp.4352-4362, 2022 (Peer-Reviewed Journal)

  • Publication Type: Article
  • Volume: 31
  • Publication Date: 2022
  • DOI Number: 10.1109/TIP.2022.3183841
  • Journal Name: IEEE Transactions on Image Processing
  • Journal Indexes: Science Citation Index Expanded, Scopus, Academic Search Premier, PASCAL, Aerospace Database, Applied Science & Technology Source, Business Source Elite, Business Source Premier, Communication Abstracts, Compendex, Computer & Applied Sciences, EMBASE, INSPEC, MEDLINE, Metadex, zbMATH, Civil Engineering Abstracts
  • Page Numbers: pp.4352-4362
  • Keywords: Training, Noise measurement, Noise robustness, Feature extraction, Training data, Deep learning, Wide band gap semiconductors, label noise, noise robust, noise cleansing, meta-learning, SET


© 1992-2012 IEEE.

Real-world datasets commonly have noisy labels, which negatively affects the performance of deep neural networks (DNNs). To address this problem, we propose a label-noise-robust learning algorithm in which the base classifier is trained on soft-labels that are produced according to a meta-objective. In each iteration, before conventional training, the meta-training loop updates the soft-labels so that the resulting gradient updates on the base classifier yield minimum loss on the meta-data. Soft-labels are generated from extracted features of the data instances, and the mapping function is learned by a single-layer perceptron (SLP) network, which is called MetaLabelNet. Afterwards, the base classifier is trained on these generated soft-labels. These iterations are repeated for each batch of training data. Our algorithm uses a small amount of clean data as meta-data, which can be obtained effortlessly in many cases. We perform extensive experiments on benchmark datasets with both synthetic and real-world noise. Results show that our approach outperforms existing baselines. The source code of the proposed model is available at
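The bilevel loop described in the abstract can be sketched in a few dozen lines. The following is a toy illustration, not the authors' implementation: the base classifier and MetaLabelNet are both reduced to linear softmax models, all data are synthetic, and the meta-gradient (which the paper would obtain by backpropagating through the virtual update) is approximated here by finite differences for brevity. Every dimension, learning rate, and variable name below is a hypothetical choice.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def xent(p, q):
    # mean cross-entropy between target distribution p and prediction q
    return -np.mean(np.sum(p * np.log(q + 1e-12), axis=1))

rng = np.random.default_rng(0)
n, d, k = 64, 8, 3                     # toy sizes: samples, features, classes
X = rng.normal(size=(n, d))            # features of the (noisy-labelled) training pool
Xm = rng.normal(size=(16, d))          # small clean meta-data, per the abstract
true_w = rng.normal(size=(d, k))
Ym = np.eye(k)[softmax(Xm @ true_w).argmax(axis=1)]   # clean meta-labels

W = np.zeros((d, k))   # base classifier (stand-in for the DNN)
V = np.zeros((d, k))   # MetaLabelNet: single-layer map from features to soft-labels
lr, meta_lr, eps = 0.5, 0.5, 1e-4

def meta_loss(V_):
    # virtual SGD step of the base classifier on soft-labels generated by V_,
    # then cross-entropy of the virtually updated classifier on the meta-data
    soft = softmax(X @ V_)
    grad_W = X.T @ (softmax(X @ W) - soft) / n
    W_virtual = W - lr * grad_W
    return xent(Ym, softmax(Xm @ W_virtual))

initial_meta = meta_loss(V)
for step in range(30):
    # meta-step: finite-difference gradient of the meta-loss w.r.t. MetaLabelNet
    gV = np.zeros_like(V)
    for i in range(d):
        for j in range(k):
            Vp = V.copy(); Vp[i, j] += eps
            Vn = V.copy(); Vn[i, j] -= eps
            gV[i, j] = (meta_loss(Vp) - meta_loss(Vn)) / (2 * eps)
    V -= meta_lr * gV
    # conventional step: train the base classifier on the generated soft-labels
    soft = softmax(X @ V)
    W -= lr * (X.T @ (softmax(X @ W) - soft) / n)
final_meta = meta_loss(V)
```

In this sketch the noisy labels never touch the base classifier directly; it only ever sees the soft-labels emitted by the SLP, which is exactly the mechanism the abstract describes. A real implementation would replace the finite-difference loop with automatic differentiation through the virtual update.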