Optimized KiU-Net: A Convolutional Autoencoder for Retinal Vessel Segmentation in Medical Images


Bilal H., Bendechache M., DİREKOĞLU C.

IEEE Access, vol.14, pp.2784-2799, 2026 (SCI-Expanded, Scopus)

  • Publication Type: Article
  • Volume: 14
  • Publication Date: 2026
  • DOI Number: 10.1109/access.2025.3648822
  • Journal Name: IEEE Access
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, INSPEC, Directory of Open Access Journals
  • Page Numbers: pp.2784-2799
  • Keywords: Low-contrast, Optimized KiU-Net, Retinal vessel segmentation, U-Net
  • Middle East Technical University Affiliated: Yes

Abstract

Medical image segmentation plays a central role in improving diagnosis, surgical planning, and treatment strategies; this work focuses on segmenting retinal vessels from color fundus images. Precise vessel extraction is essential because vessel morphology reflects several ophthalmic conditions. Widely used neural networks such as U-Net and KiU-Net have improved retinal vessel segmentation; however, thin and low-contrast vessel regions remain difficult to capture. U-Net follows an undercomplete design that limits its ability to retain fine structures, whereas KiU-Net combines undercomplete and overcomplete paths for better detail extraction but still suffers from accuracy limitations and increased computational cost. We present an Optimized KiU-Net model that improves the segmentation of thin and low-contrast retinal vessels while keeping the model lightweight. The design refines convolution channel selection and increases encoder depth, and the final feature fusion uses a single concatenation step, which supports faster convergence with fewer parameters. On the RITE dataset, the model achieves an F1 score of 79.80 and an IoU of 66.30, outperforming U-Net, KiU-Net, and other similarly sized architectures. Compared to KiU-Net, the gains are about four points in F1 and six points in IoU with fewer parameters. Additional evaluation on the GlaS dataset yields F1 and IoU scores of 82.21 and 71.03, demonstrating that the method remains effective against existing approaches of comparable scale.
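The fusion scheme described above can be sketched in outline. The following is a minimal, hypothetical NumPy illustration (not the authors' implementation): features from an undercomplete (U-Net-like) branch and an overcomplete (Kite-Net-like) branch, assumed to be at the same spatial resolution, are merged with a single channel-wise concatenation followed by a 1x1 projection to the vessel map. All shapes and weights here are stand-ins for illustration only.

```python
import numpy as np

def conv1x1(x, w):
    """Pointwise (1x1) convolution: mixes channels at each spatial location.

    x: feature map of shape (C_in, H, W); w: weights of shape (C_out, C_in).
    Returns a map of shape (C_out, H, W).
    """
    return np.tensordot(w, x, axes=([1], [0]))

rng = np.random.default_rng(0)

# Hypothetical branch outputs, both 16 channels at 64x64 resolution:
under_feat = rng.standard_normal((16, 64, 64))  # undercomplete-path features
over_feat = rng.standard_normal((16, 64, 64))   # overcomplete-path features

# Single concatenation step along the channel axis, then a 1x1 projection
# to one output channel (the vessel probability map, before activation).
fused = np.concatenate([under_feat, over_feat], axis=0)  # (32, 64, 64)
w_proj = rng.standard_normal((1, 32)) * 0.1
logits = conv1x1(fused, w_proj)                          # (1, 64, 64)

print(fused.shape, logits.shape)
```

A single concatenation-plus-projection like this adds only `C_out * (C_under + C_over)` weights at the fusion point, which is consistent with the paper's emphasis on keeping the model lightweight.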