Brain Tumor Classification using Deep Learning: Robustness Against Adversarial Attacks and Defense Strategies


Khalid H., DİREKOĞLU C.

7th International Congress on Human-Computer Interaction, Optimization and Robotic Applications, ICHORA 2025, Ankara, Turkey, 23 - 24 May 2025

  • Publication Type: Conference Paper / Full Text
  • DOI: 10.1109/ichora65333.2025.11017313
  • City: Ankara
  • Country: Turkey
  • Keywords: Artificial Intelligence (AI), Brain tumor, Classification, Customized Convolutional Neural Network (C-CNN), Deep Learning (DL), Defensive Strategy, Transfer Learning (TL), Adversarial Attacks
  • Middle East Technical University Affiliated: Yes

Abstract

In this paper, we propose a custom convolutional neural network (C-CNN) and a ResNet-50-based transfer learning model to classify three types of brain tumors, meningioma, glioma, and pituitary, using the open-source Figshare brain tumor dataset. We assess the robustness of both C-CNN and ResNet-50 by applying six types of adversarial attacks that simulate real-world challenges encountered in brain MRI scans: Fast Gradient Sign Method (FGSM), Motion Blur, Partial Occlusion, JPEG Compression Artifacts, Gaussian Noise Artifacts, and Adversarial Boundary Noise. To improve robustness against these attacks, we design a defensive strategy, adversarial attack-driven data augmentation, in which adversarially attacked images are integrated into the dataset and the models are evaluated on the test and validation sets. On the clean dataset, with a split of 70% training, 15% validation, and 15% test, C-CNN and ResNet-50 achieve test accuracies of 94.42% and 98.03%, respectively. After applying the defensive strategy, we evaluate both models under two settings. In the first, attacked and clean images are distributed randomly among the training, validation, and test sets, yielding test accuracies of 97.15% for C-CNN and 98.3% for ResNet-50. In the second, attacked and clean images are distributed at a fixed 16:84 attacked-to-clean ratio across all three sets, and the accuracies drop to 95.82% and 97.63%, respectively.
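As a rough illustration of the defensive strategy described above, the sketch below generates FGSM perturbations for a toy model and mixes them into the clean set at a 16:84 attacked-to-clean ratio. It is a minimal NumPy stand-in, not the paper's implementation: the logistic-regression "classifier", the data shapes, and the epsilon value are all illustrative assumptions; the paper's models are a C-CNN and ResNet-50.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image classifier: logistic regression on flattened
# "images". Shapes and values are illustrative, not from the paper.
n, d = 100, 64
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
y = (X @ w > 0).astype(float)  # synthetic binary labels


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def fgsm(x, y_true, w, eps=0.1):
    """FGSM: perturb x by eps times the sign of the loss gradient w.r.t. x.

    For binary cross-entropy with a linear logit z = w . x, the input
    gradient is (sigmoid(z) - y_true) * w.
    """
    grad = (sigmoid(x @ w) - y_true)[:, None] * w[None, :]
    return x + eps * np.sign(grad)


X_adv = fgsm(X, y, w, eps=0.1)

# Attack-driven augmentation: mix adversarial examples into the clean set
# at a fixed 16:84 attacked-to-clean ratio (the paper's second setting).
n_adv = int(round(0.16 * n))
X_aug = np.vstack([X, X_adv[:n_adv]])
y_aug = np.concatenate([y, y[:n_adv]])
```

Because FGSM only adds `eps * sign(grad)`, every adversarial pixel stays within an L-infinity ball of radius `eps` around the clean input, which is why the perturbations remain visually subtle.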