MetaLabelNet: Learning to Generate Soft-Labels From Noisy-Labels



Algan G., Ulusoy I.

IEEE Transactions on Image Processing, vol. 31, pp. 4352-4362, 2022 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 31
  • Publication Date: 2022
  • DOI: 10.1109/tip.2022.3183841
  • Journal Name: IEEE Transactions on Image Processing
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, Aerospace Database, Applied Science & Technology Source, Business Source Elite, Business Source Premier, Communication Abstracts, Compendex, Computer & Applied Sciences, EMBASE, INSPEC, MEDLINE, Metadex, zbMATH, Civil Engineering Abstracts
  • Page Numbers: pp. 4352-4362
  • Keywords: Training, Noise measurement, Noise robustness, Feature extraction, Training data, Deep learning, Wide band gap semiconductors, label noise, noise robust, noise cleansing, meta-learning, SET
  • Affiliated with Middle East Technical University: Yes

Abstract

© 1992-2012 IEEE. Real-world datasets commonly have noisy labels, which negatively affect the performance of deep neural networks (DNNs). To address this problem, we propose a label-noise-robust learning algorithm in which the base classifier is trained on soft-labels produced according to a meta-objective. In each iteration, before conventional training, the meta-training loop updates the soft-labels so that the resulting gradient updates on the base classifier yield minimum loss on the meta-data. Soft-labels are generated from extracted features of data instances, and the mapping function is learned by a single-layer perceptron (SLP) network called MetaLabelNet. The base classifier is then trained using these generated soft-labels. These iterations are repeated for each batch of training data. Our algorithm uses a small amount of clean data as meta-data, which can be obtained effortlessly in many cases. We perform extensive experiments on benchmark datasets with both synthetic and real-world noise. Results show that our approach outperforms existing baselines. The source code of the proposed model is available at https://github.com/gorkemalgan/MetaLabelNet.
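The bi-level update the abstract describes can be sketched in a toy numpy example. This is not the paper's implementation: linear softmax models stand in for the DNN base classifier and the SLP MetaLabelNet, a finite-difference approximation stands in for autograd's higher-order gradients, and all data, dimensions, and learning rates below are made up for illustration. The loop structure matches the abstract: generate soft-labels from features, simulate one SGD step on the base classifier with those labels, score the updated classifier on clean meta-data, and descend that meta-loss with respect to MetaLabelNet's weights before taking the conventional training step.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(p, q):
    # mean cross-entropy of predictions p against target distribution q
    return -(q * np.log(p + 1e-12)).sum(axis=-1).mean()

d, c = 4, 3                                # feature dim, number of classes (toy values)
X = rng.normal(size=(16, d))               # one batch of noisy-training features
Xm = rng.normal(size=(8, d))               # clean meta-data features
Ym = np.eye(c)[rng.integers(c, size=8)]    # clean one-hot meta-data labels

Wb = rng.normal(scale=0.1, size=(d, c))    # base classifier weights (stand-in for the DNN)
Wm = rng.normal(scale=0.1, size=(d, c))    # MetaLabelNet (SLP) weights
alpha, beta = 0.5, 0.01                    # inner / meta learning rates (assumed)

def inner_step(Wb, Wm):
    # MetaLabelNet maps features to soft-labels; take one SGD step on the base classifier
    soft = softmax(X @ Wm)
    p = softmax(X @ Wb)
    grad_b = X.T @ (p - soft) / len(X)     # CE gradient for a linear-softmax model
    return Wb - alpha * grad_b

def meta_loss(Wm):
    # loss of the hypothetically updated classifier on the clean meta-data
    Wb_new = inner_step(Wb, Wm)
    return cross_entropy(softmax(Xm @ Wb_new), Ym)

# meta-gradient w.r.t. MetaLabelNet via central finite differences (autograd stand-in)
eps = 1e-5
grad_m = np.zeros_like(Wm)
for i in range(d):
    for j in range(c):
        Wp = Wm.copy(); Wp[i, j] += eps
        Wn = Wm.copy(); Wn[i, j] -= eps
        grad_m[i, j] = (meta_loss(Wp) - meta_loss(Wn)) / (2 * eps)

before = meta_loss(Wm)
Wm = Wm - beta * grad_m                    # meta-update: improve the soft-label generator
after = meta_loss(Wm)
Wb = inner_step(Wb, Wm)                    # then the conventional training step
print(before, after)                       # meta-loss should not increase
```

In the actual method this meta-gradient flows through the classifier's update via second-order automatic differentiation, and both networks are far larger; the sketch only shows how the soft-label generator is steered by clean meta-data rather than by the noisy labels themselves.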