RAIDS: Robust autoencoder-based intrusion detection system model against adversarial attacks


Sarıkaya A., GÜNEL KILIÇ B., DEMİRCİ M.

Computers and Security, vol. 135, 2023 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 135
  • Publication Date: 2023
  • DOI: 10.1016/j.cose.2023.103483
  • Journal Name: Computers and Security
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, PASCAL, ABI/INFORM, Aerospace Database, Applied Science & Technology Source, Business Source Elite, Business Source Premier, Communication Abstracts, Compendex, Computer & Applied Sciences, Criminal Justice Abstracts, INSPEC, Metadex, Civil Engineering Abstracts
  • Keywords: Adversarial attack, Adversarial robustness, Autoencoder, CICIDS 2017, InSDN, Intrusion detection
  • Middle East Technical University Affiliated: Yes

Abstract

Machine learning-based intrusion detection systems (IDS) are essential security functions in conventional and software-defined networks alike. Their success, and the security of the networks they protect, depends on the accuracy of their classification results. Adversarial attacks against machine learning, which seriously threaten any IDS, are still not countered effectively. In this study, we first develop a method that employs generative adversarial networks to produce adversarial attack data. Then, we propose RAIDS, a robust IDS model designed to be resilient against adversarial attacks. In RAIDS, an autoencoder's reconstruction error is used as a prediction value for a classifier. In addition, to prevent an attacker from guessing the feature set, multiple feature sets are created and used to train baseline machine learning classifiers. A LightGBM classifier is then trained on the outputs produced by two autoencoders and an ensemble of baseline machine learning classifiers. The results show that the proposed robust model can increase overall accuracy by at least 13.2% and F1-score by more than 110% against adversarial attacks without the need for adversarial training.
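To make the stacking idea in the abstract concrete, the sketch below illustrates, on synthetic data, how an autoencoder's reconstruction error and the predictions of baseline classifiers trained on different feature subsets can be combined as inputs to a LightGBM meta-classifier. This is not the authors' implementation: the models, layer sizes, feature subsets, and the synthetic dataset are all illustrative assumptions, and RAIDS uses two autoencoders and a larger ensemble than shown here.

```python
# Hedged sketch of an autoencoder + ensemble + LightGBM stacking pipeline.
# All model choices and hyperparameters below are assumptions for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler
from lightgbm import LGBMClassifier

# Synthetic stand-in for flow-based IDS features (not CICIDS 2017 or InSDN).
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

scaler = MinMaxScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# A small autoencoder (here an MLP trained to reproduce its input) fitted on
# benign traffic only; its reconstruction error becomes a prediction value
# passed to the meta-classifier.
benign = X_train_s[y_train == 0]
autoencoder = MLPRegressor(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
autoencoder.fit(benign, benign)

def reconstruction_error(model, X):
    """Mean squared reconstruction error per sample."""
    recon = model.predict(X)
    return np.mean((X - recon) ** 2, axis=1)

# Baseline classifiers trained on different feature subsets, so an attacker
# cannot easily guess which features drive the final decision.
subset_a, subset_b = slice(0, 10), slice(10, 20)
clf_a = RandomForestClassifier(n_estimators=100, random_state=0)
clf_a.fit(X_train_s[:, subset_a], y_train)
clf_b = LogisticRegression(max_iter=1000)
clf_b.fit(X_train_s[:, subset_b], y_train)

def meta_features(X):
    """Stack reconstruction error and baseline predictions as meta-features."""
    return np.column_stack([
        reconstruction_error(autoencoder, X),
        clf_a.predict_proba(X[:, subset_a])[:, 1],
        clf_b.predict_proba(X[:, subset_b])[:, 1],
    ])

# LightGBM meta-classifier trained on the combined outputs.
meta = LGBMClassifier(n_estimators=200, random_state=0)
meta.fit(meta_features(X_train_s), y_train)
print("Meta-classifier accuracy:", meta.score(meta_features(X_test_s), y_test))
```

In this kind of stacking, the meta-classifier only ever sees derived signals (errors and probabilities) rather than raw features, which is one way to read the abstract's claim about limiting what an adversary can infer about the feature set.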