Explainable Artificial Intelligence and Cybersecurity: A Systematic Literature Review




1st Carlos Frederico D'Almeida e Mendes, Institute of Computing, Federal University of Bahia, Salvador, Brazil, carlosfam@ufba.br
2nd Tatiane Nogueira Rios, Institute of Computing, Federal University of Bahia, Salvador, Brazil, tatiane.nogueira@ufba.br

arXiv:2303.01259v1 [cs.CR] 27 Feb 2023

Abstract—Cybersecurity vendors consistently apply AI (Artificial Intelligence) to their solutions, and many cybersecurity domains can benefit from AI technology. However, black-box AI techniques present difficulties in comprehension and adoption by their operators, given that their decisions are not always humanly understandable (as is usually the case with deep neural networks, for example). Since it aims to make the operation of AI algorithms more interpretable for their users and developers, XAI (eXplainable Artificial Intelligence) can be used to address this issue. Through a systematic literature review, this work investigates the current research scenario on XAI applied to cybersecurity, aiming to discover which XAI techniques have been applied in cybersecurity and which areas of cybersecurity have already benefited from this technology.

Index Terms—XAI, explainable artificial intelligence, interpretable artificial intelligence, cybersecurity, cyber security, detection and response, intrusion detection, intrusion prevention, cyber risk, malware

I. INTRODUCTION

A cyber risk can be roughly defined as a factor that can lead to unwanted digital information leakage, sequestration, or destruction; unauthorized use of computer resources; unavailability of computer systems; or that compromises the integrity or confidentiality of digital data in general. Further discussion of this concept can be seen in [1].

The materialization of cyber risk can generate losses that, depending on their severity, can be treated as a cyber incident. In the business environment, the five consequences of a cyber incident identified as the most negative are [2]:
• Customer turnover
• Lost intellectual property (including trade secrets)
• Disruption or damages to critical infrastructure
• Cost of outside consultants and experts
• Lost revenues

Cyber incidents are touted as the biggest risk factor for business in 2022, according to research conducted by the insurer Allianz, even ahead of business interruption, natural catastrophes, and pandemic outbreak [3].

Cybersecurity spans incredibly diverse specialties. The (ISC)² (International Information System Security Certification Consortium), maintainer of the CISSP (Certified Information Systems Security Professional) qualification, organizes its exam into 8 domains:
• Security and Risk Management
• Asset Security
• Security Architecture and Engineering
• Communications and Network Security
• Identity and Access Management
• Security Assessment and Testing
• Security Operations
• Software Development Security

This breadth of required knowledge is one of the factors why the cybersecurity industry is currently facing a shortage of qualified professionals. The worldwide workforce gap is estimated at 2.7 million professionals [4].

To reduce the industry's dependence on these sought-after analysts, cybersecurity vendors make extensive use of AI (Artificial Intelligence) in a variety of products. Some of the technologies already consolidated or being adopted by the market can be seen in [5]–[8], each of them using AI techniques to a greater or lesser extent.

According to the technological research and consulting firm Gartner [9], there are 19 current prominent AI use cases that are directly relevant to security and risk management leaders, some of which are:
• Transaction Fraud Detection
• File-Based Malware Detection
• Process Behavior Analysis
• Abnormal System Behavior Detection
• Account Takeover Identification
• Asset Inventory and Dependency Mapping Optimization
• Web Domain and Reputation Assessment

Unfortunately, there are AI techniques whose operation is not transparent to the user and which do not provide explanations of how they arrived at the generated result, as is usually the case with neural networks, for example. These are called "black box" AI techniques. A better understanding by technology operators is desirable because it allows greater [10]:
• Trust in its decisions
• Social acceptance
• Ease of debugging and auditing
• Fairness (through the ease of bias detection)
• Assessment of the relevance of learned features
With this problem in mind, the concept of XAI (eXplainable Artificial Intelligence) was derived, whose goal is to make the operation of an AI algorithm more understandable for its users and developers. XAI can be defined as the set of AI methods capable of conveying to a suitably specialized observer how they arrived at a classification, regression, or prediction. The discussion about what constitutes "understanding" is heated: there is no settled and widely accepted definition, but attempts at formalization have been developed since at least [11]. For this discussion, reading [12] may be relevant.

Some features are particularly desirable in an XAI application [13]:
• Understandability
• Fidelity: a reasonable representation of what the AI system actually does.
• Sufficiency: detailed enough to justify the AI decision.
• Low Construction Overhead: does not dominate the cost of designing the AI.
• Efficiency: does not slow down the AI significantly.

With regard specifically to the application of XAI to cybersecurity, [14] and [15] address this matter in a high-level way, proposing a so-called desiderata for the area and a general architecture that can serve as a roadmap for guiding research efforts towards the development of XAI-based cybersecurity systems.

One way XAI algorithms can be classified is by whether interpretability is achieved by restricting the complexity of the machine learning model (intrinsic) or by applying methods that analyze the model after training (post hoc). Furthermore, depending on the scope of interpretability, they can be classified as global (explaining the entire model behavior) or local (explaining an individual prediction) [10].

Through a Systematic Literature Review (SLR), this work investigates the current research scenario on XAI applied to cybersecurity.

The SLR follows 3 well-defined steps: search (query), analysis (quantitative and qualitative insights), and conclusion (response to the Main Research Question). In the following sections, each of the SLR phases is presented, with its development and results.

II. SLR PHASE 1: SEARCH

At this stage of the SLR, we defined the Main Research Question (MRQ), a set of Secondary Questions (SQ), the repositories to be searched, the language of the articles to be evaluated, the keywords and search query, and the inclusion and exclusion criteria for returned articles.

In order to investigate the current research scenario on XAI techniques applied to cybersecurity, the Main Research Question we aim to answer is:

What are the XAI techniques used to promote more interpretable automated cyber risk classification?

In addition, some secondary questions were elaborated, seeking to give more detail to the research scenario in the area:
1) Which countries does most research on the subject come from?
2) What is the frequency of published studies on the subject?
3) How are studies in the area divided by type of publication?
4) Which authors and institutions publish the most on the topic?
5) What domains of cybersecurity have already benefited from XAI research?
6) Why is security analysts' ability to interpret AI cyber risk classification important?
7) How are techniques evaluated?
8) What are the limitations of current techniques?

The repositories of scientific articles in which to conduct the search were defined based on their prevalence of use among information technology researchers, in addition to allowing access via the Web and search queries:
1) Scopus (http://www.scopus.com/home.url)
2) ACM Digital Library (http://portal.acm.org/)
3) IEEE Xplore Digital Library (http://ieeexplore.ieee.org/)

Only articles written in English were considered in the scope of this work. In addition, books and panels were disregarded.

A set of keywords was generated, among them the more general "explainable artificial intelligence" and "cybersecurity", and its variation "cyber security" (the forms "cyber-security" and "interpretable artificial intelligence" did not add new results). Aiming to increase the number of results returned, keywords were added referring to specific cybersecurity topics, so that articles that do not mention "cybersecurity" but otherwise belong to the area can be included. The added keywords are:
1) "detection and response"
2) "intrusion detection"
3) "intrusion prevention"
4) "cyber risk"
5) "malware"

Thus, the formulated search string is:

("explainable artificial intelligence") AND ("cybersecurity" OR "cyber security" OR "detection and response" OR "intrusion detection" OR "intrusion prevention" OR "cyber risk" OR "malware")

As inclusion criteria for an article returned by the search to be considered relevant for reading and analysis, the following were established:
1) Is it a primary work, as opposed to another literature review?
2) Does the XAI technique discussed have a cybersecurity domain as its main application?
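Purely as an illustration (the review itself pasted the string into each repository's advanced-search form), the short sketch below shows how such a boolean search string can be assembled programmatically from the keyword lists above; the variable names are ours and not part of the original protocol.

# Illustrative sketch only: building the SLR search string from the keyword lists above.
xai_terms = ['"explainable artificial intelligence"']
domain_terms = [
    '"cybersecurity"', '"cyber security"', '"detection and response"',
    '"intrusion detection"', '"intrusion prevention"', '"cyber risk"', '"malware"',
]

search_string = "({}) AND ({})".format(" OR ".join(xai_terms), " OR ".join(domain_terms))
print(search_string)
# ("explainable artificial intelligence") AND ("cybersecurity" OR "cyber security" OR ...)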
Since the number of returned articles was manageable, there was no need to establish exclusion criteria.

Once all these SLR parameters were defined, the search itself was carried out:
1) The queries were performed in the established repositories
2) Redundant articles were filtered
3) Titles and abstracts were read, considering the inclusion/exclusion criteria
4) The remaining articles were read in their entirety

III. SLR PHASE 2: ANALYSIS

In this phase of the present work, we seek to answer some of the secondary questions, extracting quantitative insights from the results obtained.

During the search phase, 42 valid and unique references were retrieved. Books and panels were considered invalid references. After applying the inclusion/exclusion criteria, 21 papers were considered relevant. More details can be seen in Table I. One of the retrieved works was another systematic review [16], whose references were also included in the scope of this work, totaling 36 relevant papers by the inclusion/exclusion criteria.

Table I: Papers Retrieved from Queries
Repository                     Number of Papers
ACM                            15
IEEE                           13
Scopus                         24
Total                          52
Unique and Valid References    42
After Inclusion Criteria       21

In order to answer secondary question number 1, the analysis of which countries publish the most on the subject was carried out from two perspectives. In Table II we associate each paper with the country of the institution to which the first author is linked. In Table III, instead, we associate each author with the country of the institution to which they are linked (links with more than one country are allowed).

Table II: Papers per Country
Country        Number of Papers
USA            13
China          3
India          3
Italy          3
Germany        2
South Korea    2
Austria        1
Canada         1
Ireland        1
Israel         1
Japan          1
Mexico         1
Poland         1
Qatar          1
UAE            1
UK             1
Total          36

Table III: Authors per Country
Country        Number of Authors
USA            52
Italy          21
China          18
South Korea    7
Germany        6
India          4
Israel         4
Japan          4
Poland         4
Austria        3
Ireland        3
Mexico         3
UK             3
Canada         2
Qatar          2
Spain          2
UAE            2
Czech          1
France         1
Indonesia      1
Yemen          1

To answer secondary question number 2, Table IV was sorted by year, showing the number of studies on the topic in each year. Furthermore, to shed light on secondary question number 3, Table V shows the number of studies by type of publication.

Table IV: Papers per Year
Year    Number of Papers
2022    2
2021    16
2020    15
2019    1
2018    2
Total   36

Table V: Papers per Publication Type
Publication Type         Number of Papers
Journal Article          26
Conference Proceedings   7
Report                   3
Total                    36

Aiming at secondary question number 4, Table VI shows the number of publications by authors with at least two articles. Of the 141 different authors, only 5 published more than one paper within the scope of this work. Also, Table VII shows the number of papers from institutions that published more than one article. Of the 60 institutions that published a paper, 6 published at least two.

Table VI: Papers per Author
Author                  Number of Papers
Islam, Sheikh Rabiul    3
Drichel, Arthur         2
Eberle, William         2
Mane, Shraddha          2
Rao, Dattaraj           2

Table VII: Papers per Institution
Institution                          Number of Papers
Chinese Academy of Sciences          2
Persistent Systems Limited           2
RWTH Aachen University               2
Tennessee Technological University   2
University of California             2
University of Hartford               2

IV. SLR PHASE 3: CONCLUSION

In this phase we explore the XAI techniques that the reviewed papers applied to different areas of cybersecurity. Furthermore, addressing secondary questions 6 to 8, we note the importance given to explainability, the ways of evaluating it, and the limitations found against it. A more detailed summary of all texts can be found in Table VIII (Appendix). In it, the authors' motivation for the use of XAI is transcribed, as well as, when mentioned in the articles, the limitations encountered when seeking explainability and the techniques applied to evaluate it.

In Reyes et al. [17], the SHapley Additive exPlanations (SHAP) technique is used to understand the influence of features on each type of network traffic record in a machine-learning based IDS.

[18]'s Intrusion Detection System uses a hybrid approach to deliver maximum accuracy while still providing explainability. It consists of a Feed Forward Artificial Neural Network (ANN) black-box classifier, named Oracle, and a surrogate explanation module composed of Decision Trees trained using microaggregation. It is model agnostic with local scope explanations.

[19] takes an adversarial approach to generate explanations for incorrect classifications made by an IDS. They use it to find the minimum modifications of the input features required to correctly classify a given set of misclassified samples.

[20] develops a framework using SHapley Additive exPlanations (SHAP) to provide local and global explanations of the functioning of an IDS.

[21] performs an example-based black-box analysis of Android anti-malware solutions, to determine which features a detector relies on for its classifications.

[22] is particularly innovative in utilizing the intrinsically interpretable Symbolic Deep Learning (SDL) method, which constructs cognitive models based on small samples of expert classifications, to provide decision support for non-expert users in the form of explainable suggestions over Intrusion Detection. Human experiment results reveal that SDL can help to reduce missed threats by 25%.

[23] argues strongly for the need to apply XAI to cybersecurity technologies to improve the efficiency of analysts' decision making. In the paper, SHAP and FOS (feature outlier score) techniques are applied to find valuable information in IDS and malware datasets.

In [24], SHAP, LIME, and an auto-encoding-based scheme for LSTM (Long Short-Term Memory) models are applied to an ML-based detection system for cryptomining in a Kubernetes cluster.

In [25], DeNNeS (deep embedded neural network expert system), which extracts refined rules from a trained DNN (deep neural network) to substitute the knowledge base of an expert system, is proposed. It is then applied to Phishing Detection and Malware Classification.

In [26], gradient-based attribution methods are used to explain Android malware classifiers' decisions by identifying the most relevant features. The authors also propose metrics to evaluate the impact of the explanation on the adversarial robustness of the classifiers.

In [27], ML models are infused with domain knowledge for Intrusion Detection. They use six different algorithms for predicting malicious records: a probabilistic classifier based on the Naive Bayes theorem, and five supervised "black box" models. Their finding is that "domain knowledge infusion provides better explainability with negligible compromises in performance".

[28] presents a use case for understanding both what information a human requires for decision-making and what information can be made available by the AI, seeking a guide for the development of future explainable systems. In this particular use case, the XAI takes the role of a junior cyber analyst.

In [29], a DT (Decision Tree) model is used for an Intrusion Detection System. The authors point out that previous works that used Decision Trees in IDS focused on the accuracy of benchmark machine learning algorithms. Conversely, this paper focuses on the interpretability of a widely used benchmark dataset.

[30] applies LIME (Local Interpretable Model-Agnostic Explanations) to Malware Classification. It also offers an extensive discussion of XAI in general, citing important concepts for the evaluation of explainability developed by others, such as Descriptive Accuracy.

In [31], SHAP is used to explain autoencoder anomaly detections. One of the datasets consists of intrusions simulated in a military network environment.

[32] uses a GA (Genetic Algorithm) to promote explanations for a network traffic classifier, which in turn can be used for Intrusion Detection. The GA selects important features from the entire feature set.

In [33], a profusion of XAI techniques is used to improve the interpretability of an Intrusion Detection System. The authors leverage SHAP, LIME, and three other algorithms present in the AIX360 (AI Explainability 360) open-source toolkit by IBM to create a framework that "provides explanations at every stage of machine learning pipeline".
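Several of the intrusion detection studies above ([17], [20], [31]) follow the same post hoc pattern: train a black-box classifier and then attribute each alert to its input features with SHAP. The fragment below is only a minimal sketch of that pattern and is not taken from any of the reviewed papers; the synthetic data, feature names, and random-forest model are placeholder assumptions standing in for a real IDS dataset.

import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for an IDS dataset: rows are flow records, columns are traffic features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # 1 = "attack", 0 = "benign"
feature_names = ["duration", "src_bytes", "dst_bytes", "count", "srv_count", "error_rate"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Post hoc explainer for tree ensembles; the overall workflow is the same for
# the model-agnostic KernelExplainer.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
# Depending on the shap version, sv is either a list with one array per class
# or a single 3-D array; keep the attributions for the "attack" class.
attack_sv = sv[1] if isinstance(sv, list) else sv[..., 1]

# Local explanation: contribution of each feature to one flagged record.
for name, value in zip(feature_names, attack_sv[0]):
    print(f"{name}: {value:+.3f}")

# Global view: the mean absolute SHAP value per feature approximates overall importance.
global_importance = np.abs(attack_sv).mean(axis=0)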
[34] is a notable work on applying XAI to many stages of the human-AI interaction and on evaluating it. The authors propose a framework named FAIXID for improving the explainability and understandability of intrusion detection alerts. Their method has been implemented and evaluated through experiments with real-world datasets and human subjects.

In [35], a visual analytics system for CNN (Convolutional Neural Network) interpretation, using LIME and Saliency Maps, is described and applied in the context of Intrusion Detection.

In [36], Random Forest, a feature-based machine learning model, is used to identify DGAs (Domain-Generation Algorithms), based on domain names, in the context of Intrusion Detection.

In [37], SHAP is applied to a DNN (Deep Neural Network) in the context of host-based Intrusion Detection, with the specific purpose of better understanding the features in order to improve the algorithm's execution time.

In [38], the authors extend their previous work presented in [27], now also applying a proxy task-based explainability quantification method.

In [39], addressing the issue of concept drift in the context of network Intrusion Detection, the authors propose a framework named INSOMNIA, which, among other functionalities, makes use of DALEX (moDel Agnostic Language for Exploration and eXplanation), an open-source XAI package for Python and R, to understand how feature importance changes over time.

In [40], a visual analytics system is applied to interpret two different types of deep learning-based neural nets for Domain-Generation Algorithm (DGA) classification, in the context of Intrusion Detection. It works by clustering the activations of a model's neurons and subsequently leveraging decision trees to explain the constructed clusters. In combination with a 2D projection, the user can explore how the model views the data at different layers.

In [41], LEMNA (Local Explanation Method using Nonlinear Approximation), a novel, high-fidelity explanation method dedicated to security applications, is developed. In the paper, it is used with two deep learning applications in security: Malware Classification and Binary Reverse-Engineering. The authors also take care to demonstrate the practical applications of the explanation method.

In [42], an RL (Reinforcement Learning) adversarial approach is taken to evade PE (Portable Executable) malware classifiers. It is able to shed light on the root cause of the evasions and thus provide feature interpretation.

In [43], four XAI techniques and Open-Source Intelligence (OSINT) are blended to deliver better AI explainability through second-opinion approaches. The techniques are ANCHOR, LIME, SHAP, and Counterfactual Explanations, and they are applied to a Domain-Generation Algorithm (DGA) classifier for Intrusion Detection.

In [44], a decision-tree-based autoencoder is described, designed to detect anomalies and provide the explanations behind its decisions by finding the correlations among different attribute values.

[45] applies heatmaps generated with Grad-CAM to interpret a deep-learning based mobile malware classifier.

In [46], the authors propose a universal XAI model named Transparency Relying Upon Statistical Theory (TRUST), which is model-agnostic, high-performing, and suitable for numerical applications. They demonstrate the effectiveness of TRUST in a case study on the Industrial Internet of Things (IIoT) using three different datasets.

In [47], the authors propose a system to shed light on how an app description reflects privacy-related permission usage, in the context of mobile apps. They apply LIME to their CNN (Convolutional Neural Network) in order to assess the quality of their network and to avoid incomprehensible black-box predictions.

[48] uses SHAP with a multimodal DL-based mobile traffic classifier to evaluate input importance.

In [49], SHAP is used to explain autoencoder anomaly detections.

In [50], aiming to answer the question "does static analysis on packed binaries provide rich enough features to a malware classifier?", the authors use a Random Forest method with feature selection.

In [51], LIME and Saliency Maps are applied successfully to black-box models used for WF (Website Fingerprinting) attacks, to explore the leakage sources. The authors also evaluate the usage of the techniques with the Remove and Retrain (ROAR) metric for explainability.

Finally, [52] addresses the Alarm Flooding problem common to Intrusion Detection and SIEM (Security Information and Event Management) systems by automatically labeling and categorizing the alerts. To this end, the authors use a ZSL (Zero-Shot Learning) method interpreted through SHAP and LIME.
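Many of the malware and traffic classifiers above are explained with LIME's local surrogate models ([30], [35], [47], [51]). As with the SHAP fragment earlier, the sketch below only illustrates the general usage and is not code from any reviewed paper; the feature table, feature names, and gradient-boosting model are placeholder assumptions.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for a tabular malware feature set (e.g., permission and API-call counts).
rng = np.random.default_rng(1)
X = rng.integers(0, 20, size=(400, 5)).astype(float)
y = (X[:, 0] + X[:, 2] > 20).astype(int)        # 1 = "malicious", 0 = "benign"
names = ["api_calls", "permissions", "net_conns", "file_writes", "entropy_bins"]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=names, class_names=["benign", "malicious"], mode="classification"
)
# Local surrogate: perturb one sample and fit an interpretable linear model around it.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(exp.as_list())   # weighted local feature rules, e.g. [("api_calls > 12.0", 0.31), ...]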
V. CONCLUDING REMARKS

Cybersecurity is a growing concern for businesses and governments. Vendors of cybersecurity solutions are increasingly using AI (Artificial Intelligence). The ability to explain the decisions of an AI algorithm brings several benefits, including greater confidence in the system and a better understanding of its operation. In this sense, XAI (eXplainable Artificial Intelligence) is being applied in several areas of cybersecurity. In this systematic literature review, we sought to discover which XAI techniques have been applied in cybersecurity, and which areas of cybersecurity have already benefited from this technology.

Almost all of the works reviewed in this paper explicitly mention some reason why explainability is important. Interestingly, the reasons varied widely, ranging from improving users' trust in the system to enabling researchers to understand the internal mechanisms of the classifier.

As we can see in Table VIII, few papers perform tests that specifically assess the degree of explainability of the technique employed or its practical impact. In this sense, reading [22] and [34] is particularly recommended, due to their excellent description of the techniques used and for carrying out human experiments to evaluate the practical consequences of explainability. [51] is a good example of using the ROAR (Remove and Retrain) explainability metric.

Also referring to Table VIII, only a minority of texts pointed out the limitations caused by adding more explainability to their algorithms. Among the XAI limitations pointed out are the decrease in the accuracy of intrinsically explainable models, performance difficulties, and the lack of formalization of the concept of explanation.

We can see that SHAP and LIME are the most used techniques, perhaps because they have been implemented in open-source frameworks for some time. LEMNA [41] appears to be a promising technique, developed with cybersecurity use cases in mind.

Intrusion Detection, Malware Classification, Phishing Detection, Reverse Engineering, Website Fingerprinting, Domain-Generation Algorithm Detection, and Abuse of Privacy-related Permissions on Mobile Apps are areas of cybersecurity that have already made use of XAI.

The authors hope that this work can encourage and contribute to the adoption of XAI in more areas of cybersecurity.

REFERENCES

[1] A. Refsdal, B. Solhaug, K. Stølen, Cyber-risk management, in: Cyber-risk management, Springer, 2015, pp. 33–47.
[2] 1H'2021 Cyber Risk Index (CRI), Tech. rep., Trend Micro, Ponemon Institute (8 2021).
[3] Allianz Risk Barometer 2022, Tech. rep., Allianz Global Corporate & Specialty (1 2022).
[4] Cybersecurity Workforce Study, Tech. rep., (ISC)² (2021).
[5] Hype Cycle for Endpoint Security, 2021, Tech. rep., Gartner (8 2021).
[6] Hype Cycle for Cloud Security, 2021, Tech. rep., Gartner (7 2021).
[7] Hype Cycle for Network Security, 2021, Tech. rep., Gartner (10 2021).
[8] Hype Cycle for Security Operations, 2021, Tech. rep., Gartner (7 2021).
[9] Infographic: AI Use-Case Prism for Cybersecurity, Tech. rep., Gartner (11 2021).
[10] C. Molnar, Interpretable Machine Learning, 2nd Edition, 2022. URL christophm.github.io/interpretable-ml-book/
[11] A. M. Turing, Computing machinery and intelligence, Mind LIX (1950) 433–460. doi:10.1093/mind/LIX.236.433.
[12] S. Aaronson, Why philosophers should care about computational complexity (8 2011).
[13] L. K. Hansen, L. Rieger, Interpretability in intelligent systems – a new concept?, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 11700 LNCS (2019) 41–49.
[14] L. Viganò, D. Magazzeni, Explainable security (7 2018). URL http://arxiv.org/abs/1807.04178
[15] J. N. Paredes, J. C. L. Teze, G. I. Simari, M. V. Martinez, On the importance of domain-specific explanations in AI-based cybersecurity systems (technical report) (8 2021). URL http://arxiv.org/abs/2108.02006
[16] S. Hariharan, A. Velicheti, A. S. Anagha, C. Thomas, N. Balakrishnan, Explainable artificial intelligence in cybersecurity: A brief review, Institute of Electrical and Electronics Engineers Inc., 2021. doi:10.1109/ISEA-ISAP54304.2021.9689765.
[17] A. A. Reyes, F. D. Vaca, G. A. Aguayo, Q. Niyaz, V. Devabhaktuni, A machine learning based two-stage Wi-Fi network intrusion detection system, Electronics (Switzerland) 9 (2020) 1–18. doi:10.3390/electronics9101689.
[18] M. Szczepanski, M. Choras, M. Pawlicki, R. Kozik, Achieving explainability of intrusion detection system by hybrid oracle-explainer approach, Proceedings of the International Joint Conference on Neural Networks (2020). doi:10.1109/IJCNN48605.2020.9207199.
[19] D. L. Marino, C. S. Wickramasinghe, M. Manic, An adversarial approach for explainable AI in intrusion detection systems (11 2018). URL http://arxiv.org/abs/1811.11705
[20] M. Wang, K. Zheng, Y. Yang, X. Wang, An explainable machine learning framework for intrusion detection systems, IEEE Access 8 (2020) 73127–73141. doi:10.1109/ACCESS.2020.2988359.
[21] G. Nellaivadivelu, F. D. Troia, M. Stamp, Black box analysis of Android malware detectors, Array 6 (2020) 100022. doi:10.1016/j.array.2020.100022.
[22] V. D. Veksler, N. Buchler, C. G. LaFleur, M. S. Yu, C. Lebiere, C. Gonzalez, Cognitive models in cybersecurity: Learning from expert analysts and predicting attacker behavior, Frontiers in Psychology 11 (2020). doi:10.3389/fpsyg.2020.01049.
[23] H. Kim, Y. Lee, E. Lee, T. Lee, Cost-effective valuable data detection based on the reliability of artificial intelligence, IEEE Access 9 (2021) 108959–108974. doi:10.1109/ACCESS.2021.3101257.
[24] R. R. Karn, P. Kudva, H. Huang, S. Suneja, I. M. Elfadel, Cryptomining detection in container clouds using system calls and explainable machine learning, IEEE Transactions on Parallel and Distributed Systems 32 (2021) 674–691. doi:10.1109/TPDS.2020.3029088.
[25] S. Mahdavifar, A. A. Ghorbani, DeNNeS: deep embedded neural network expert system for detecting cyber attacks, Neural Computing and Applications 32 (2020) 14753–14780. doi:10.1007/s00521-020-04830-w.
[26] M. Melis, M. Scalas, A. Demontis, D. Maiorca, B. Biggio, G. Giacinto, F. Roli, Do gradient-based explanations tell anything about adversarial robustness to Android malware? (5 2020). URL http://arxiv.org/abs/2005.01452
[27] S. R. Islam, W. Eberle, S. K. Ghafoor, A. Siraj, M. Rogers, Domain knowledge aided explainable artificial intelligence for intrusion detection and response, CEUR Workshop Proceedings 2600 (2020).
[28] E. Holder, N. Wang, Explainable artificial intelligence (XAI) interactively working with humans as a junior cyber analyst, Human-Intelligent Systems Integration 3 (2021) 139–153. doi:10.1007/s42454-020-00021-z.
[29] B. Mahbooba, M. Timilsina, R. Sahal, M. Serrano, Explainable artificial intelligence (XAI) to enhance trust management in intrusion detection systems using decision tree model, Complexity 2021 (2021). doi:10.1155/2021/6634811.
[30] S. M. Mathews, Explainable artificial intelligence applications in NLP, biomedical, and malware classification: A literature review, Vol. 998, Springer International Publishing, 2019.
[31] L. Antwarg, R. M. Miller, B. Shapira, L. Rokach, Explaining anomalies detected by autoencoders using SHAP (2020).
[32] S. Ahn, J. Kim, S. young Park, S. Cho, Explaining deep learning-based traffic classification using a genetic algorithm, IEEE Access 9 (2020). doi:10.1109/ACCESS.2020.3048348.
[33] S. Mane, D. Rao, Explaining network intrusion detection system using explainable AI framework.
[34] H. Liu, C. Zhong, A. Alnusair, S. R. Islam, FAIXID: A framework for enhancing AI explainability of intrusion detection results using data cleaning techniques, Journal of Network and Systems Management 29 (2021). doi:10.1007/s10922-021-09606-8.
[35] C. Wu, A. Qian, X. Dong, Y. Zhang, Feature-oriented design of visual analytics system for interpretable deep learning based intrusion detection, 2020, pp. 73–80. doi:10.1109/TASE49443.2020.00019.
[36] A. Drichel, N. Faerber, U. Meyer, First step towards explainable DGA multiclass classification, ACM International Conference Proceeding Series (6 2021). doi:10.1145/3465481.3465749.
[37] M. Gu, B. Zhou, F. Du, X. Tang, W. Wang, L. Zang, J. Han, S. Hu, Grasp the key: Towards fast and accurate host-based intrusion detection in data centers, Vol. 12743 LNCS, Springer Science and Business Media Deutschland GmbH, 2021, pp. 181–194.
[38] S. R. Islam, W. Eberle, Implications of combining domain knowledge in explainable artificial intelligence, CEUR Workshop Proceedings 2846 (2021).
[39] G. Andresini, F. Pendlebury, F. Pierazzi, C. Loglisci, A. Appice, L. Cavallaro, INSOMNIA: Towards concept-drift robustness in network intrusion detection, ACM, 2021, pp. 111–122. doi:10.1145/3474369.3486864.
[40] F. Becker, A. Drichel, C. Muller, T. Ertl, Interpretable visualizations of deep neural networks for domain generation algorithm detection, 2020 IEEE Symposium on Visualization for Cyber Security, VizSec 2020 (2020) 25–29. doi:10.1109/VizSec51108.2020.00010.
[41] W. Guo, D. Mu, J. Xu, P. Su, G. Wang, X. Xing, LEMNA: Explaining deep learning based security applications, Association for Computing Machinery, 2018, pp. 364–379. doi:10.1145/3243734.3243792.
[42] W. Song, X. Li, S. Afroz, D. Garg, D. Kuznetsov, H. Yin, MAB-Malware: A reinforcement learning framework for attacking static malware classifiers (3 2020). URL http://arxiv.org/abs/2003.03100
[43] H. Suryotrisongko, Y. Musashi, A. Tsuneda, K. Sugitani, Robust botnet DGA detection: Blending XAI and OSINT for cyber threat intelligence sharing, IEEE Access 10 (2022) 34613–34624. doi:10.1109/access.2022.3162588.
[44] D. L. Aguilar, M. A. M. Perez, O. Loyola-Gonzalez, K. K. R. Choo, E. Bucheli-Susarrey, Towards an interpretable autoencoder: A decision tree-based autoencoder and its application in anomaly detection, IEEE Transactions on Dependable and Secure Computing (2022). doi:10.1109/TDSC.2022.3148331.
[45] G. Iadarola, F. Martinelli, F. Mercaldo, A. Santone, Towards an interpretable deep learning model for mobile malware detection and family identification, Computers and Security 105 (6 2021). doi:10.1016/j.cose.2021.102198.
[46] M. Zolanvari, Z. Yang, K. Khan, R. Jain, N. Meskin, TRUST XAI: Model-agnostic explanations for AI with a case study on IIoT security, IEEE Internet of Things Journal (2021). doi:10.1109/JIOT.2021.3122019.
[47] J. Feichtner, S. Gruber, Understanding privacy awareness in Android app descriptions using deep learning, Association for Computing Machinery, Inc, 2020, pp. 203–214. doi:10.1145/3374664.3375730.
[48] A. Nascita, A. Montieri, G. Aceto, D. Ciuonzo, V. Persico, A. Pescape, Unveiling MIMETIC: Interpreting deep learning traffic classifiers via XAI techniques, IEEE, 2021, pp. 455–460. doi:10.1109/CSR51186.2021.9527948.
[49] K. Roshan, A. Zafar, Utilizing XAI technique to improve autoencoder based model for computer network anomaly detection with SHapley Additive exPlanation (SHAP), International Journal of Computer Networks & Communications 13 (2021) 109–128. doi:10.5121/ijcnc.2021.13607.
[50] H. Aghakhani, F. Gritti, F. Mecca, M. Lindorfer, S. Ortolani, D. Balzarotti, G. Vigna, C. Kruegel, When malware is packin' heat; limits of machine learning classifiers based on static analysis features, Internet Society, 2020. doi:10.14722/ndss.2020.24310.
[51] B. Gulmezoglu, XAI-based microarchitectural side-channel analysis for website fingerprinting attacks and defenses, IEEE Transactions on Dependable and Secure Computing (2021). doi:10.1109/TDSC.2021.3117145.
[52] D. J. Rao, S. Mane, Zero-shot learning approach to adaptive cybersecurity using explainable AI.
[53] S. R. Islam, W. Eberle, S. K. Ghafoor, Towards quantification of explainability in explainable artificial intelligence methods (11 2019).

VI. APPENDIX

The following appendix contains a more detailed summary of all the papers reviewed in this work.
Table VIII: Summary of Papers

[17] Technique: SHAP (SHapley Additive exPlanations). CyberSec Area: Intrusion Detection.
XAI Importance: "XAI was implemented to have an insight for the decisions made by the first stage ML model, mostly for the cases where the records were predicted as impersonation or injection. The features that significantly contribute to their prediction were determined."
Evaluation of Explainability: not reported.
Limitations: not reported.

[18] Technique: Decision Trees with Microaggregation. CyberSec Area: Intrusion Detection.
XAI Importance: "The ability to understand how a system makes a decision is necessary to help develop trust, settle issues of fairness and perform the debugging of a model."
Evaluation of Explainability: not reported.
Limitations: "The derived explanation is not a faithful representation of the opaque classifier function in general", referring to the fidelity feature in the AI desiderata [13].

[19] Technique: Example based, Adversarial. CyberSec Area: Intrusion Detection.
XAI Importance: "It is crucial that the inner workings of data-driven models are transparent for the engineers designing IDSs. Decisions presented by explainable models can be easily interpreted by a human, simplifying the process of knowledge discovery. Explainable approaches help on diagnosing, debugging, and understanding the decisions made by the model, ultimately increasing the trust on the data-driven IDS."
Evaluation of Explainability: not reported.
Limitations: not reported.

[20] Technique: SHAP. CyberSec Area: Intrusion Detection.
XAI Importance: "It is imperative to provide some information about the reasons behind IDSs predictions, and provide cybersecurity personnel with some explanations about the detected intrusions" and also "This framework contributes to a deeper understanding of the predictions made from IDSs, and ultimately help build cyber users' trust in the IDSs."
Evaluation of Explainability: not reported.
Limitations: not reported.

[21] Technique: Example based, Adversarial. CyberSec Area: Malware Classification.
XAI Importance: "Such analysis will help us to understand the robustness of detectors when dealing with minor variants of known malware samples. The second issue concerns the possibility of uncovering important aspects of a malware detection algorithm. Thus, black box analysis of malware detectors can point towards ways to improve on existing malware detectors"
Evaluation of Explainability: not reported.
Limitations: not reported.

[22] Technique: SDL (Symbolic Deep Learning). CyberSec Area: Intrusion Detection.
XAI Importance: "We project that SDL-generated cognitive models of expert analysts will impart a high degree of trust for at least two reasons – (...) SDL promises to be a more transparent technique than DL, one that is able to provide some explainability for each of its suggestions"
Evaluation of Explainability: "Human experiment results reveal that SDL can help to reduce missed threats by 25%."
Limitations: "The major hurdle for symbolic deep models of memory has been a combinatoric explosion of memory."

[23] Technique: SHAP, FOS (Feature Outlier Score). CyberSec Area: Intrusion Detection, Malware Classification.
XAI Importance: "AI for cyber security requires final confirmation by an analyst, e.g. malware misdetection can cause significant adverse side effects. Thus, a human analyst must check all AI predictions, which poses a major obstacle to AI expansion. [XAI] enable analysts with limited daily workload to focus upon valuable data, and quickly verify AI predictions."
Evaluation of Explainability: not reported.
Limitations: not reported.

[24] Technique: SHAP, LIME (Local Interpretable Model-Agnostic Explanations), and an auto-encoding-based scheme for LSTM (Long Short-Term Memory) models. CyberSec Area: Malware Classification.
XAI Importance: "The explanation will justify and support disruptive administrative decisions" and to answer the questions "Why did the ML classify a particular pod as a miner? How does the syscalls sequence change from one pod to another? Which feature has the greatest impact on miner prediction? Is there any way to visualize the ML outcome apart from plotting the evaluation metrics?"
Evaluation of Explainability: "The performance of an autoencoder model is evaluated based on the model's ability to recreate the input sequence. Validation of the autoencoder model also validates the upstream half of the classifier model which, in turn, further strengthens the trust in the classifier's outcome."
Limitations: "Convergence is a major issue with autoencoder design, especially when the dataset size is large, and the centroids of the various classes have significant variance. Also, when convergence is achieved, it is often the case that it is at a local minimum of the loss function. Such difficulties impact the quality of the explainability method."

[25] Technique: Deep Embedded Neural Network Expert System. CyberSec Area: Malware Classification, Phishing Detection.
XAI Importance: "Security experts not only do need to detect the incoming threat but also need to know the incorporating features that cause that particular security incident" and "Adding an explanation feature to a neural network would enhance its trustworthiness and reliability."
Evaluation of Explainability: not reported.
Limitations: not reported.

[26] Technique: Gradient-based Explanations. CyberSec Area: Malware Classification.
XAI Importance: "We investigate whether gradient-based attribution methods used to explain classifiers' decisions provide useful information about the robustness of Android malware detectors against sparse attacks."
Evaluation of Explainability: "We propose and empirically validate a few synthetic metrics that allow correlating the evenness of gradient-based explanations with the classifier robustness to adversarial attacks."
Limitations: not reported.

[27] Technique: Domain Knowledge Infusion. CyberSec Area: Intrusion Detection.
XAI Importance: "The lack of explainability and interpretability of successful AI models is a key stumbling block when trust in a model's prediction is critical. This leads to human intervention, which in turn results in a delayed response or decision"
Evaluation of Explainability: The authors conduct an Explainability Test whose purpose "is to discover the comparative advantages or disadvantages of incorporating domain knowledge in the experiment".
Limitations: The authors stress that "there are some open challenges surrounding explainability and interpretability such as an agreement of what an explanation is and to whom, a formalism for the explanation, and quantifying the human comprehensibility of the explanation".

[28] Technique: none specified. CyberSec Area: Cybersecurity Operations.
XAI Importance: "There are many applications where artificial intelligence (AI) can add a benefit, but this benefit may not be fully realized, if the human cannot understand and interact with the output as required by their context. Allowing AI to explain its decisions can potentially mitigate this issue."
Evaluation of Explainability: not reported.
Limitations: not reported.

[29] Technique: Decision Tree. CyberSec Area: Intrusion Prevention.
XAI Importance: "eXplainable Artificial Intelligence (XAI) has become increasingly important to interpret the machine learning models to enhance trust management by allowing human experts to understand the underlying data evidence and causal reasoning"
Evaluation of Explainability: not reported.
Limitations: "There may be a chance of overfitting when the algorithm captures noise in the dataset", besides that "Information gain in decision trees is biased in favor of those attributes with more levels. This behavior might impact prediction performance."

[30] Technique: LIME. CyberSec Area: Malware Classification.
XAI Importance: "It enables human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners" and also "Explainability in general also helps to identify bias in raw data and strategize the model optimization"
Evaluation of Explainability: The paper defines the concept of "descriptive accuracy" as the ability of the interpretations to properly describe what the model has learned. Although it mentions "descriptive accuracy", it does not evaluate it against the applied technique. Nonetheless, the system is evaluated through use cases, showing an analysis of why a model makes mistakes.
Limitations: The LIME technique does not produce "fixed" feature importance plots (i.e., a general rather than a case-to-case view of which variables are most informative when making a prediction). The explanation reflects the behavior of the classifier "around" the instance being predicted.

[31] Technique: SHAP. CyberSec Area: Anomaly Detection, Intrusion Detection.
XAI Importance: "The manual validation of results becomes challenging without justification or additional clues. An explanation of why an instance is anomalous enables the experts to focus their investigation on the most important anomalies and may increase their trust in the algorithm"
Evaluation of Explainability: not reported.
Limitations: not reported.

[32] Technique: Feature Selection with Genetic Algorithm. CyberSec Area: Intrusion Detection.
XAI Importance: "The mechanism of deep learning is inexplicable. A malfunction of the deep learning model may occur if the training dataset includes malicious or erroneous data. Explainable artificial intelligence (XAI) can give some insight for improving the deep learning model by explaining the cause of the malfunction"
Evaluation of Explainability: not reported.
Limitations: not reported.

[33] Technique: SHAP, LIME, Contrastive Explanations Method (CEM), ProtoDash and Boolean Decision Rules via Column Generation (BRCG). CyberSec Area: Intrusion Detection.
XAI Importance: "Deep neural networks are complex and hard to interpret which makes difficult to use them in production as reasons behind their decisions are unknown." and also "We propose an explainable AI framework along with intrusion detection system which would help analyst to make final decision."
Evaluation of Explainability: not reported.
Limitations: not reported.

[34] Technique: Boolean Rule Column Generation (BRCG), Logistic Rule Regression (LogRR), ProtoDash, Contrastive Explanations Method (CEM). CyberSec Area: Intrusion Detection.
XAI Importance: "The decisions from AI solutions have to be explainable to gain analysts' trust and help analysts in making a confident and accountable decision." and also "We need XAI to improve fairness, accountability, and trust in decisions"
Evaluation of Explainability: The researchers conducted extensive evaluation of explainability using human subject and proxy methods experiments.
Limitations: not reported.

[35] Technique: LIME and Saliency Maps. CyberSec Area: Intrusion Detection.
XAI Importance: "Researchers in the field of network security are the target users of our visual analytics system. The design goal of our visual analytics system is to aid our target users to better interpret the deep learning model."
Evaluation of Explainability: not reported.
Limitations: No case study on real datasets; lack of scalability for larger DL models; limited to CNNs.

[36] Technique: Random Forest. CyberSec Area: DGA Classification, Intrusion Detection.
XAI Importance: "The proposed state-of-the-art classifiers are based on deep learning models. The black box nature of these makes it difficult to evaluate their reasoning. The resulting lack of confidence makes the utilization of such models impracticable"
Evaluation of Explainability: not reported.
Limitations: not reported.

[37] Technique: SHAP. CyberSec Area: Intrusion Detection.
XAI Importance: "Help us understand how deep learning models learn and why they make such decisions for each input" and also "We propose a method to improve detection efficiency by using XAI to reduce the input data"
Evaluation of Explainability: not reported.
Limitations: not reported.

[38] Technique: Domain Knowledge Infusion. CyberSec Area: Intrusion Detection.
XAI Importance: "The lack of explainability leads to a lack of trust in the model and prediction, which can involve ethical and legal issues in critical domains due to the potential implications on human interests, rights, and lives"
Evaluation of Explainability: The authors extend the work made in [27], applying to it the Explainability Quantification Method previously developed by them in [53].
Limitations: not reported.

[39] Technique: DALEX (moDel Agnostic Language for Exploration and eXplanation). CyberSec Area: Intrusion Detection.
XAI Importance: "Apply explainable AI to better interpret how the model reacts to the shifting distribution."
Evaluation of Explainability: not reported.
Limitations: not reported.

[40] Technique: Decision Tree, Visual Analytics. CyberSec Area: DGA Classification, Intrusion Detection.
XAI Importance: "Deep learning models have found wide adoption for many problems. However, their blackbox nature makes it hard to trust their decisions and to evaluate their line of reasoning. In the field of cybersecurity, this lack of trust and understanding poses a significant challenge for the utilization of deep learning models"
Evaluation of Explainability: not reported.
Limitations: not reported.

[41] Technique: LEMNA (Local Explanation Method using Nonlinear Approximation). CyberSec Area: Malware Classification, Binary Reverse Engineering.
XAI Importance: "Security practitioners are concerned about the lack of transparency of the deep learning models and thus hesitated to widely adopt deep learning classifiers in security and safety-critical areas"
Evaluation of Explainability: "The fidelity metrics are computed either by directly comparing the approximated detection boundary with the real one, or running end-to-end feature tests"
Limitations: not reported.

[42] Technique: Adversarial. CyberSec Area: Malware Classification.
XAI Importance: "Researchers should use explanation techniques to understand the behavior of the classifiers and check if the learned features are fragile features that can be easily evaded or if they conflict with expert knowledge"
Evaluation of Explainability: not reported.
Limitations: not reported.

[43] Technique: ANCHOR, LIME, SHAP and Counterfactual Explanations. CyberSec Area: DGA Classification, Intrusion Detection.
XAI Importance: XAI and Open-Source Intelligence can together address "trust problems", serving as "an antidote for skepticism to the shared models and preventing automation bias."
Evaluation of Explainability: not reported.
Limitations: not reported.

[44] Technique: Decision Trees. CyberSec Area: Anomaly Detection.
XAI Importance: They cite other authors stating that "explanations ensure the correct behavior of the algorithm" and "machine learning systems would be more widely accepted once they are capable of providing satisfactory explanations for their decisions."
Evaluation of Explainability: not reported.
Limitations: "First, our architecture should be used in datasets with less than a thousand attributes because it builds a tree for each attribute. Building thousands of trees is time-consuming, although this limitation may be overcome with access to better computing resources. Second, our proposal may fail to build decision trees for attributes with tens of different values in the definition domain."

[45] Technique: Heatmaps. CyberSec Area: Malware Classification.
XAI Importance: "The XAI aims to enable human users to develop understanding and trusts to the model prediction." and also "The effectiveness of these [autonomous] systems is limited by the current inability of machines to explain their decisions and actions to human users. The most important step towards reliable models is the possibility to understand their prediction i.e., the so-called interpretability."
Evaluation of Explainability: not reported.
Limitations: "Despite their usefulness, the cumulative heatmaps, at the time of writing, do not play a role that can be automatized without knowledge on the dataset and the malware code. They help to interpret and understand the outcomes, but they do not provide fixed information that could be used to any user to evaluate models without any prior-knowledge on the architecture or the problem itself."

[46] Technique: TRUST (Transparency Relying Upon Statistical Theory). CyberSec Area: Intrusion Detection.
XAI Importance: "Despite the popularity of AI, it is limited by its current inability to build trust. Researchers and industrial leaders have a hard time explaining the decisions that sophisticated AI algorithms come up with because they (as AI users) cannot fully understand why and how these "black boxes" make their decisions."
Evaluation of Explainability: not reported.
Limitations: "Due to using information gain in picking the representatives, TRUST might overfit to the training set. This would lead to poor performance on unseen data. On the other hand, if the Gaussian assumption cannot be made or the probability distribution of data changes, the output of TRUST would not be reliable. Also, the assumption of samples being drawn independently is very important."

[47] Technique: LIME. CyberSec Area: Abuse of Privacy-related Permissions on Mobile Apps.
XAI Importance: "To assess the quality of our network and to avoid incomprehensible black box predictions, we employ the model explaining algorithm LIME"
Evaluation of Explainability: not reported.
Limitations: not reported.

[48] Technique: Deep SHAP. CyberSec Area: Network Traffic Classifier, Intrusion Detection.
XAI Importance: "The black-box nature of DL techniques hides the reason behind specific classification outcomes. This impacts the understanding of classification errors and the evaluation of the resilience against adversarial manipulation of traffic to impair identification. Moreover, by understanding the behavior of the learned model, performance enhancements can be pursued with much more focused and efficient research, compared with a less-informed exploration of the (typically huge) hyper-parameters space."
Evaluation of Explainability: not reported.
Limitations: not reported.

[49] Technique: SHAP. CyberSec Area: Anomaly Detection, Intrusion Detection.
XAI Importance: "In making life-changing decisions such as disease diagnosis, it's crucial to understand why the system makes such a critical decision. Hence the importance of explaining the AI system. Furthermore, the black-box nature of the AI-based system gives excellent results but without any explanation, and hence, they lose their trust to adapt these systems in critical decision making."
Evaluation of Explainability: not reported.
Limitations: not reported.

[50] Technique: Random Forest. CyberSec Area: Malware Classification.
XAI Importance: The authors only discuss the results of the Random Forest approach in their paper because "Random forest allows for better interpretation of the results compared to neural networks".
Evaluation of Explainability: not reported.
Limitations: not reported.

[51] Technique: LIME and Saliency Maps. CyberSec Area: Website Fingerprinting Detection.
XAI Importance: "The lack of XAI studies on Website Fingerprinting slows down the research on countermeasures against this type of attack since the leakage source is not clearly visible to both attackers and cyber-defenders. Therefore, there is a need for a sophisticated analysis technique to identify the leakage sources in the side-channel data by applying XAI algorithms to trained models."
Evaluation of Explainability: "ROAR metric is implemented on both techniques and it is shown that LIME and saliency map correctly discover the most dominant features in the side-channel measurements."
Limitations: "In this study, LIME cannot be applied on CNN model due to the lack of high performance."

[52] Technique: SHAP, LIME. CyberSec Area: Alarm Flooding, Intrusion Detection.
XAI Importance: "Explanations give us measurable factors as to what features influence the prediction of a cyber-attack and to what degree" and also "Without any prior knowledge of the attack, we try to identify it, decipher the features that contribute to its classification and try to bucketize the attack in a specific category - using explainable AI"
Evaluation of Explainability: not reported.
Limitations: not reported.