IEEE Access, vol.14, pp.2784-2799, 2026 (SCI-Expanded, Scopus)
Medical image segmentation plays a central role in diagnosis, surgical planning, and treatment strategies; this work focuses on segmenting retinal vessels from color fundus images. Precise vessel extraction is essential because vessel morphology reflects several ophthalmic conditions. Deep neural networks such as U-Net and KiU-Net have improved retinal vessel segmentation; however, thin and low-contrast vessel regions remain difficult to capture. U-Net follows an undercomplete design that limits its ability to retain fine structures, whereas KiU-Net combines undercomplete and overcomplete paths to extract finer detail but still suffers from accuracy limitations and increased computational cost. We present an Optimized KiU-Net model that improves the segmentation of thin and low-contrast retinal vessels while keeping the model lightweight. The design refines convolution channel selection, increases encoder depth, and fuses the final features in a single concatenation step, which supports faster convergence with fewer parameters. On the RITE dataset, the model achieves an F1 score of 79.80 and an IoU of 66.30, outperforming U-Net, KiU-Net, and other similarly sized architectures. Compared with KiU-Net, the gains are about four points in F1 and six points in IoU with fewer parameters. Additional evaluation on the GlaS dataset yields F1 and IoU scores of 82.21 and 71.03, showing that the method remains effective against existing approaches of comparable scale.
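The F1 and IoU figures quoted above are standard overlap metrics for binary segmentation masks. As a minimal sketch of how such scores are computed (the function name `f1_and_iou` and the toy masks are illustrative, not taken from the paper):

```python
import numpy as np

def f1_and_iou(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Compute F1 (Dice) and IoU (Jaccard) for binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # vessel pixels correctly predicted
    fp = np.logical_and(pred, ~gt).sum()   # background predicted as vessel
    fn = np.logical_and(~pred, gt).sum()   # vessel pixels missed
    denom_f1 = 2 * tp + fp + fn
    denom_iou = tp + fp + fn
    f1 = 2 * tp / denom_f1 if denom_f1 else 1.0
    iou = tp / denom_iou if denom_iou else 1.0
    return float(f1), float(iou)

# Toy 2x2 masks: one true positive, one false positive, one false negative.
pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [1, 0]])
f1, iou = f1_and_iou(pred, gt)  # f1 = 0.5, iou = 1/3
```

The two metrics are monotonically related by F1 = 2·IoU / (1 + IoU), which is why improvements in one are typically accompanied by improvements in the other, as in the RITE and GlaS results above.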