South African Institute of Computer Scientists and Information Technologists, Jul 20, 2005
When conducting a computer-based assessment in an educational environment, several infringements of assessment regulations could arise. Examples are illegal communication (e.g. by e-mail, web or cell phone); hiding of computer objects with the aim of accessing or utilising them; impersonation of another student; and presenting the assessment material of another student (e.g. a file containing the answers to a WebCT test, or files that form part of a programming project). To determine beyond reasonable doubt that no infringement has taken place, various tools could be utilised. One such tool, the key logger, is the subject of scrutiny in this study. Key loggers are considered a type of spyware: software that secretly gathers information about a computer's use, usually installed without the user's consent or knowledge, and covertly relays that information back to a third party. This paper reports the results of an exploratory experiment applied to computer-based assessments with the aim of investigating the role of key loggers in computer-based assessment forensics. The experiment was conducted during computer-based assessments of different groups of students in different subjects. The results include a description of the setup of the controlled environment for the computer-based assessment, the execution of the assessment with the accompanying data collection, the preservation of the data, the analysis of the data, the effectiveness of the specific key logger in the forensic process, and the conclusions derived from the data.
When conducting a computer-based assessment, several infringements of assessment regulations could arise. Examples are illegal communication (e.g. by email, web or cell phone), hiding of computer objects with the aim of accessing or utilizing them, impersonation of another learner, and presenting the project of another learner. If an infringement is suspected, a computer forensic investigation should be launched. Almost no academic institution has a computer forensic department that can assist with such an investigation, and the responsibility therefore rests upon the lecturer. The purpose of this project is to apply forensic principles to a computer-based assessment environment in order to facilitate the identification and prosecution of any party that contravenes assessment regulations. The aim of the current paper is to consider the nature of a forensic-ready computer-based assessment environment in more detail. This nature is derived from established computer forensic principles.
2015 International Conference on Computing, Communication and Security (ICCCS), 2015
With the advent of cloud computing systems it has become possible to provision large-scale systems in a short time with little effort. The systems underpinning these cloud systems have to deal with massive amounts of data in order to function. Should an incident occur that requires some form of forensic investigation, it can be very challenging for an investigator to conduct that investigation, due in large part to the volatility of data in cloud systems. In this paper, a model architecture is proposed to enable proactive forensics of cloud computing systems. Using a reference architecture for cloud systems, an add-on system is created to enable the capture and storage of forensic data. The captured data is then available to the investigator should the need for an investigation arise. This must be achieved with minimal alteration or interruption of existing cloud systems. The system is described and a theoretical architectural model is given. An evaluation discusses the possible advantages and disadvantages of such a system and how it can be implemented as a proof of concept, and relates the proposed model to the ISO 27043 standard for forensic investigations.
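The capture-and-store add-on described above could, for example, preserve forensic data in an append-only, hash-chained log, so that the data remains available after volatile cloud resources disappear and any later tampering is detectable. The sketch below is an illustrative assumption, not the paper's actual architecture; the class and method names are hypothetical.

```python
import hashlib
import json
import time

class ForensicLog:
    """Append-only store for captured forensic events; each entry is
    hash-chained to the previous one so tampering is detectable."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def capture(self, source, event):
        # Record the event together with the hash of the previous entry.
        record = {
            "time": time.time(),
            "source": source,
            "event": event,
            "prev_hash": self._prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = digest
        self._prev_hash = digest
        self.entries.append(record)
        return digest

    def verify(self):
        """Recompute the whole chain; True only if no entry was altered."""
        prev = "0" * 64
        for record in self.entries:
            body = {k: v for k, v in record.items() if k != "hash"}
            if record["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != record["hash"]:
                return False
            prev = record["hash"]
        return True
```

An investigator can then treat a log whose `verify()` succeeds as not having been modified since capture, which speaks to the minimal-interruption requirement: the add-on only appends records and never touches the cloud workload itself.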
Existing digital forensic investigation process models have provided guidelines for identifying and preserving potential digital evidence captured from a crime scene. However, for any of the digital forensic investigation process models developed across the world to be adopted and fully applied by the scientific community, it has to be tested. For this reason, the Harmonized Digital Forensic Investigation Process (HDFIP) model, currently a working draft towards becoming an international standard for digital forensic investigations (ISO/IEC 27043), needs to be tested. This paper therefore presents the findings of a case study used to test the HDFIP model as implemented in the ISO/IEC 27043 draft standard. The testing and evaluation process uses an anonymised real-life case to test each subprocess (grouped into classes) of the HDFIP model, showing that the model maintains a structured and precise logical flow that aims to provide acceptance, reliability, usability and flexibility. The case study also helps to analyse the effectiveness of the HDFIP model in ensuring that the principles of validity and admissibility are fulfilled. A process with these properties would reduce the disparities within the field of digital forensic investigations and achieve global acceptance and standardization.
This category involves vulnerabilities concerned with retrieving information about user accounts from a specific system [SMK2 01]. As soon as an intruder has retrieved a list of the user names registered on a specific system, it is often only a matter of time before he/she obtains the passwords by using a password-cracking program, for example L0pht Crack [LOPH 01]. After all, the user names have to be obtained before any attempt can be made to crack passwords.
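The step from a leaked user-name list to a recovered password can be illustrated with a minimal dictionary attack. This is a generic sketch assuming a hypothetical SHA-256 password store; L0pht Crack itself targeted Windows LM/NTLM hashes, so this is the principle, not that tool's implementation.

```python
import hashlib

def dictionary_attack(username, stored_hash, wordlist):
    """Try each word-list candidate against a stored password hash.
    Returns (username, password) on success, (username, None) otherwise."""
    for candidate in wordlist:
        if hashlib.sha256(candidate.encode()).hexdigest() == stored_hash:
            return username, candidate
    return username, None
```

The attack is only feasible once the account name is known, which is why user-name enumeration vulnerabilities matter even though they expose no password directly.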
A legion of vulnerabilities potentially compromises the security status of IT industry infrastructures today. Current state-of-the-art intrusion detection systems (IDSs) can potentially identify some of these vulnerabilities. Each IDS, however, defines its own unique list of vulnerabilities, making it cumbersome for organisations to assess the completeness and reliability of vulnerability scans. This further complicates the matter of determining the degree to which a specific IDS complies with the security requirements of a specific organisation. This paper presents an approach to harmonise the different sets of vulnerabilities currently used by state-of-the-art IDS tools.
South African Institute of Computer Scientists and Information Technologists, Oct 4, 2004
As the transmission of data over the internet increases, the need to protect connected systems also increases. Intrusion Detection Systems (IDSs) are the latest technology used for this purpose. Although the field of IDSs is still developing, the systems that do exist are not yet complete, in the sense that they are not able to detect all types of intrusions. Some attacks detected by tools available today cannot be detected by other products, depending on the types and methods on which they are built. Using a Genetic Algorithm (GA) is one of the methods that IDSs use to detect intrusions; such systems incorporate Darwin's theory of natural selection to detect intrusions. Not much research has been conducted in this area besides the Genetic Algorithm as an Alternative Tool for Security Audit Trails Analysis (GASSATA) tool, and very few IDSs are developed entirely using GAs. The focus of this paper is to introduce the application of GAs in order to improve the effectiveness of IDSs.
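The GA mechanics the abstract refers to (selection, crossover, mutation) can be sketched on a toy problem: evolving a bit-string detection rule toward a hypothetical attack signature. The signature, encoding and parameters below are invented for illustration and do not reflect GASSATA's actual rule encoding.

```python
import random

random.seed(42)  # deterministic run for the sketch

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit attack signature

def fitness(rule):
    # A rule scores higher the more signature bits it matches.
    return sum(r == t for r, t in zip(rule, TARGET))

def evolve(pop_size=20, generations=50, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(TARGET))  # one-point crossover
            child = a[:cut] + b[cut:]
            # Mutation: flip each bit with small probability.
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

In a real GA-based IDS the fitness function would score a rule against labelled audit-trail events rather than a fixed bit pattern, but the evolutionary loop has the same shape.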
Southern Africa Telecommunication Networks and Applications Conference (SATNAC) 2017, 3-10 September 2017, Freedom of the Seas Cruise
A Cloud Forensic Readiness as a Service (CFRaaS) model allows an environment to preemptively accumulate relevant potential digital evidence (PDE) which may be needed during a post-event response process. The benefit of applying a CFRaaS model in a cloud environment is that it is designed to prevent the modification or tampering of the cloud architectures or the infrastructure during the reactive process, which could otherwise have far-reaching implications. The authors of this article present the reactive process as a very costly exercise when the infrastructure must be reprogrammed every time the process is conducted, which may hamper successful investigation from the perspectives of forensic experts and law enforcement agencies. The CFRaaS model, in its current state, has not been presented in a way that can help to classify or visualize the different types of potential evidence across all the cloud deployment models, and this may limit the expectations of what or how the required PDE may be collected. To address this problem, the article presents the CFRaaS from a holistic ontology-driven perspective, which allows forensic experts to apply the CFRaaS based on the simplicity of its concepts, the relationships or semantics between different forms of potential evidence, and how the security of a digital environment being investigated could be upheld. The CFRaaS in this context follows a fundamental ontology engineering approach based on the classical Resource Description Framework (RDF). The proposed ontology-driven approach to CFRaaS is, therefore, a knowledge base that uses layer dependencies, and could be an essential toolkit for digital forensic examiners and other stakeholders in cloud security.
The implementation of this approach could further provide a platform to develop other knowledge base components for cloud forensics and security.
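An RDF-style, layer-dependent classification of potential evidence can be modelled as a set of subject-predicate-object triples. The vocabulary below (layer names, predicates) is invented purely to illustrate the idea and is not the paper's actual ontology.

```python
# Each fact is a (subject, predicate, object) triple, as in RDF.
triples = {
    ("CFRaaS", "collects", "PotentialDigitalEvidence"),
    ("PotentialDigitalEvidence", "storedIn", "SaaS-layer"),
    ("PotentialDigitalEvidence", "storedIn", "IaaS-layer"),
    ("SaaS-layer", "dependsOn", "PaaS-layer"),
    ("PaaS-layer", "dependsOn", "IaaS-layer"),
}

def objects_of(subject, predicate):
    """Query the triple store: all objects linked to a subject
    by a given predicate (e.g. which layers hold the evidence)."""
    return {o for s, p, o in triples if s == subject and p == predicate}
```

A forensic examiner could use queries like this to see which cloud layers hold a given type of evidence and which layers that evidence transitively depends on.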
The Internet constitutes a means of communication in terms of which millions of messages and huge chunks of data are electronically sent millions of miles across the globe each day, thanks to the Transmission Control Protocol/Internet Protocol (TCP/IP). One of the functions of TCP/IP is to break up each of these messages into smaller entities of equal length. Such ...
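The breaking up of a message into equal-length entities can be sketched as follows. This is a deliberate simplification: real TCP sequence numbers count bytes rather than segments, and IP may fragment packets further, but the segment-number-reassemble idea is the same.

```python
def segment(message: bytes, size: int):
    """Split a message into numbered, equal-length segments
    (the last segment may be shorter)."""
    return [
        (seq, message[i : i + size])
        for seq, i in enumerate(range(0, len(message), size))
    ]

def reassemble(segments):
    # Segments may arrive out of order; sequence numbers restore order.
    return b"".join(data for _, data in sorted(segments))
```

Because each segment carries its sequence number, the receiver can rebuild the original message even when the network delivers the segments in a different order.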
In order for digital evidence from a digital forensic investigation to be admissible, one needs to follow a formalised and ideally standardised process. The authors' previous research and initiative within ISO resulted in a new international standard, ISO/IEC 27043:2015, titled “Information technology — Security techniques — Incident investigation principles and processes”, published in March 2015. The standard governs the digital forensic investigation process and covers it from a wide angle, while harmonising existing process models in this field. In this paper, the authors give an analysis of both the standard itself and of related standards, so as to enable the reader to understand the ecosystem of standards relating to the digital forensic investigation process and the role of ISO/IEC 27043:2015.
2019 IEEE Conference on Application, Information and Network Security (AINS), 2019
The threats posed by botnets in cyberspace continue to grow each day, and it has become very hard to detect or infiltrate bots, owing to the fact that botnet developers keep changing their propagation and attack techniques. Currently, most of these attacks are centered on stealing computing energy, theft of personal information and Distributed Denial of Service (DDoS) attacks. In this paper, the authors propose a novel technique that uses the Non-Deterministic Polynomial-Time Hardness (NP-Hard) problem based on the Traveling Salesperson Problem (TSP), which depicts that a given bot, bj, is able to visit each host in a network environment, NE, and then return to the botmaster, in the form of an instruction (command), through optimal minimization over the hosts that are (or may be) attacked. Given that bj represents a piece of malicious code, and that the TSP is an NP-hard problem forming part of combinatorial optimization, the authors present this as an effective approach for the detection of botnets. It is worth noting that this study concentrates on the centralized botnet architecture. This holistic approach shows that botnet detection accuracy can be increased with a degree of certainty while potentially decreasing the chances of false positives. A discussion of the possible applicability and implementation is also given in this paper.
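The TSP formulation above can be made concrete with an exhaustive search over a toy network: find the cheapest route in which the bot visits every host once and returns to the botmaster. The host names and edge costs below are invented for illustration, and brute force is used only because the example is tiny (TSP is NP-hard, hence exponential in the number of hosts).

```python
from itertools import permutations

# Hypothetical symmetric "cost" between hosts (e.g. hop count).
cost = {
    ("botmaster", "h1"): 2, ("botmaster", "h2"): 9, ("botmaster", "h3"): 1,
    ("h1", "h2"): 6, ("h1", "h3"): 4, ("h2", "h3"): 3,
}

def edge(a, b):
    # The cost table is symmetric; look up either orientation.
    return cost.get((a, b)) or cost[(b, a)]

def optimal_tour(hosts, start="botmaster"):
    """Brute-force TSP: cheapest route visiting every host exactly once
    and returning to the starting botmaster node."""
    best_route, best_cost = None, float("inf")
    for order in permutations(hosts):
        route = (start, *order, start)
        total = sum(edge(a, b) for a, b in zip(route, route[1:]))
        if total < best_cost:
            best_route, best_cost = route, total
    return best_route, best_cost
```

A detector built on this idea would compare observed bot traffic against such cost-minimal visiting patterns rather than actually enumerating all permutations on a real network.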
2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT), 2020
The relationship between negative and positive connotations with regard to malware in the cloud is rarely investigated in the prevailing literature. However, there is a significant relationship between the use of positive and negative connotations. A clear distinction between the two emerges when code originally considered malicious is used for a positive purpose, as in the case of capturing keystrokes for proactive forensics. This is done during the collection of digital evidence for Digital Forensic Readiness (DFR) purposes, in preparation for a Digital Forensic Investigation (DFI) process. The paper explores the problem of using keystrokes for positive reasons as potential evidence, through extracting and digitally preserving them as highlighted in ISO/IEC 27037:2012 (security approaches) and ISO/IEC 27043:2015 (legal connotations). In this paper, therefore, the authors present a technique for how DFR can be achieved through the collection of digital information from code originally considered malicious. This is achieved without modifying cloud operations or the infrastructure thereof, while preserving the integrity of the digital information and possibly maintaining the chain of custody at the same time. The paper proposes that the threshold of malicious-code intrusion in the cloud can be transformed into an efficacious process of DFR through logical acquisition and digital preservation of keystrokes. Experiments with the captured keystrokes show a significant approach that could achieve proactive forensics.
A potential security incident may go unsolved if standardized forensic approaches are not applied during lawful investigations. This paper highlights the importance of mapping the digital forensic application requirement specification to an international standard, specifically ISO/IEC 27043. The outcome of this work is projected to contribute to the problem of secure DF tool creation, and in the process to address Software Requirements Specification (SRS) as a process of digital evidence admissibility.
More than ever before, the world is experiencing increased cyber-attacks in all areas of our daily lives. This situation has made combating cybercrimes a daily struggle for both individuals and organisations. Furthermore, this struggle has been aggravated by the fact that today's cybercriminals have gone a step further and are able to employ complicated cyber-attack techniques. Some of those techniques are minuscule and inconspicuous in nature, often camouflaged in the facade of authentic requests and commands. In order to combat this menace, especially after a security incident has happened, cyber security professionals as well as digital forensic investigators are forced to sift through large and complex pools of data, also known as Big Data, in an effort to unveil Potential Digital Evidence (PDE) that can be used to support litigation. Gathered PDE can then be used to help investigators arrive at particular conclusions and/or decisions. In the case of cyber forensics, what makes the process even tougher for investigators is the fact that Big Data often comes from multiple sources and in different file formats. Forensic investigators often have too little time and budget to meet the increased demands of analysing these large amounts of complex data for forensic purposes. It is for this reason that the authors of this paper have realised that Deep Learning (DL), a subset of Artificial Intelligence (AI), has very distinct use-cases in the domain of cyber forensics; even if many would argue that it is not an unrivalled solution, it can help enhance the fight against cybercrime. This paper therefore proposes a generic framework for diverging DL cognitive computing techniques into Cyber Forensics (CF), hereafter referred to as the DLCF Framework.
DL uses machine learning techniques to solve problems through the use of neural networks that simulate human decision-making. On these grounds, DL holds the potential to dramatically change the domain of CF in a variety of ways and to provide solutions to forensic investigators. Such solutions can range from reducing bias in forensic investigations to challenging what evidence is considered admissible in a court of law or any civil hearing, and many more.
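The "neural networks that simulate decision-making" can be illustrated at their smallest scale: a single perceptron trained to flag a record only when two hypothetical indicators co-occur. This is a teaching sketch of the building block DL stacks into deep networks, not part of the DLCF Framework, and the feature names are invented.

```python
def train_perceptron(samples, labels, epochs=20, lr=1.0):
    """Train one artificial neuron with the classic perceptron rule:
    adjust weights by (label - prediction) for each sample."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(model, x):
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical features per event: [many_failed_logins, large_outbound_transfer]
samples = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 0, 0, 1]   # flag only when both indicators co-occur
model = train_perceptron(samples, labels)
```

Deep networks replace this single linear unit with many stacked, non-linear layers, which is what lets them learn far more subtle patterns in large, multi-source forensic data.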
Digital forensics (DF) is a growing field that is gaining popularity among many computer professionals, law enforcement agencies and other stakeholders who must always cooperate in this profession. Unfortunately, this has created an environment replete with semantic disparities within the domain that need to be resolved and/or eliminated. For the purpose of this study, semantic disparity refers to disagreements about the meaning, interpretation, descriptions and the intended use of the same or related data and terminologies. If semantic disparity is not detected and resolved, it may lead to misunderstandings. Even worse, since the people involved may not be from the same neighbourhood, they may not be aware that the semantic disparities exist, and might not easily recognise them. The aim of this paper, therefore, is to discuss semantic disparity in DF and to elaborate on how to manage it. In addition, this paper presents the significance of semantic reconciliation in DF. Semantic reconciliation refers to reconciling the meaning (including the interpretations and descriptions) of terminologies and data used in digital forensics. Managing semantic disparities and the significance of semantic reconciliation in digital forensics constitute the main contributions of this paper.
Cloud computing is a novel computing paradigm that presents new research opportunities in the field of digital forensics. Cloud computing is based on the following principles: on-demand self-service, broad network access, resource pooling, rapid elasticity and measured service. These principles require that cloud computing be distributed internationally. Even if the cloud is hosted locally, it is based on multi-tenancy, which is a challenge when using an advanced "dead" forensic approach. For these reasons, digital forensic investigations in cloud computing need to be performed on live systems. There are challenges in cloud forensics itself, as there are no standardised digital forensic procedures and processes. This paper is part of an effort by the authors to standardise the digital forensic process, and we therefore focus specifically on live forensics. Since cloud computing services are provisioned over the Internet, live forensics and network forensics form an integral part of cloud forensics. In a bid to standardise a digital forensic process in cloud computing, there is a need to first focus on live and network forensics. In this paper we present digital forensic procedures for live forensics that follow the draft international standard for Investigation Principles and Processes. A standardised live digital forensic process will form part of a standardised cloud forensic process.
South African Institute of Computer Scientists and Information Technologists, Jul 20, 2005
When conducting a computer-based assessment in an educational environment, several infringements ... more When conducting a computer-based assessment in an educational environment, several infringements of assessment regulations could arise. Examples are, illegal communication (e.g. by e-mail, web or cell phone); hiding of computer objects with the aim of accessing or utilising it; impersonation of another student and presenting the assessment material (e.g. file containing answers of WebCT test, files that form part of a programming project) of another student. To determine beyond reasonable doubt that no infringement has taken place, various tools could be utilised. One such a tool, the key logger, is the subject of scrutiny for this study. Key loggers are considered a type of spyware. Spyware is software that gathers information secretly about a computer's use, usually installed without the user's consent or knowledge, and relays that information, also covertly, back to a third party. This paper reports the results of an explorative experiment applied to computer-based assessments with the aim to investigate the role of key loggers in computer-based assessment forensics. This exploratory experiment was conducted during computer-based assessments of different groups of students in different subjects. The results include a description of the set up of the controlled environment for the computer-based assessment, execution of the assessment with the accompanying data collection, preserving of the data, analysis of the data, effectiveness of the specific key logger in the forensic process and the conclusions derived from the data.
When conducting a computer-based assessment, several infringements of assessment regulations coul... more When conducting a computer-based assessment, several infringements of assessment regulations could arise. Examples are illegal communication (e.g. by email, web, cell phone), hiding of computer objects with the aim of accessing or utilizing it, impersonation of another learner and presenting the project of another learner. If infringement is suspected, a computer forensic investigation should be launched. Almost no academic institution has a computer forensic department that can assist with a computer forensic investigation and therefore the responsibility rests upon the lecturer. The purpose of this project is to apply forensic principles to a computer-based assessment environment in order to facilitate the identification and prosecution of any party that contravenes assessment regulations. The aim of the current paper is to consider the nature of a forensic ready computer-based assessment environment in more detail. This nature is derived from established computer forensic princip...
2015 International Conference on Computing, Communication and Security (ICCCS), 2015
With the advent of cloud computing systems it has become possible to provision large scale system... more With the advent of cloud computing systems it has become possible to provision large scale systems in a short time with little effort. The systems underpinning these cloud systems have to deal with massive amounts of data in order to function. Should an indecent occur that requires some form of forensic investigation it can be very challenging for an investigator to conduct the investigation. This is due, in large part, to the volatility of data in cloud systems. In this paper, a model architecture is proposed to enable proactive forensics of cloud computing systems. Using a reference architecture for cloud systems, an add-on system is created to enable the capture and storage of forensic data. The captured data is then available to the investigator should the need for an investigation arise. This must be achieved with minimal alteration or interruption of existing cloud systems. The system is described and a theoretical architectural model is given. An evaluation discusses the possible advantages and disadvantages of such a system and how it can be implemented as a proof of concept. It also relates the proposed model to the ISO 27043 standard of forensic investigations.
Existing digital forensic investigation process models have provided guidelines for identifying a... more Existing digital forensic investigation process models have provided guidelines for identifying and preserving potential digital evidence captured from a crime scene. However, for any of the digital forensic investigation process models developed across the world to be adopted and fully applied by the scientific community, it has to be tested. For this reason, the Harmonized Digital Forensic Investigation Process (HDFIP) model, currently a working draft towards becoming an international standard for digital forensic investigations (ISO/IEC 27043), needs to be tested. This paper, therefore, presents the findings of a case study used to test the HDFIP model implemented in the ISO/IEC 27043 draft standard. The testing and evaluation process uses an anonymised real-life case to test each subprocess (grouped in classes) of the HDFIP model to show that it maintains a structured and precise logical flow that aims to provide acceptance, reliability, usability, and flexibility. The case study used also helps to analyse the effectiveness of the HDFIP model to ensure that the principles of validity and admissibility are fulfilled. A process with these properties would reduce the disparities within the field of digital forensic investigations and achieve global acceptance and standardization.
This category involves vulnerabilities concerned with retrieving information of user accounts fro... more This category involves vulnerabilities concerned with retrieving information of user accounts from a specific system [SMK2 01]. As soon as an intruder has retrieved a list of the user names registered on a specific system, it is often only a matter of time before he/she obtains the password by using a passwordcracking program, for example L0pht Crack [LOPH 01]. After all, the user names have to be obtained before any attempt can be made to crack passwords.
A legion of vulnerabilities are potentially compromising the security status of IT industries inf... more A legion of vulnerabilities are potentially compromising the security status of IT industries infrastructures today. Current state-of-the-art intrusion detection systems (IDSs) can potentially identify some of the vulnerabilities. Each IDS defines its own and unique list of vulnerabilities, making it cumbersome for organisations to assess the completeness and reliability of vulnerability scans. What This furthermore complicates the matter of determining the degree to which a specific IDS complies to with the security requirements of a specific organisation. This paper presents an approach to harmonise different sets of vulnerabilities as currently used by state-of-the-art IDS tools.
south african institute of computer scientists and information technologists, Oct 4, 2004
ABSTRACT As the transmission of data over the internet increases, the need to protect connected s... more ABSTRACT As the transmission of data over the internet increases, the need to protect connected systems also increases. Intrusion Detection Systems (IDSs) are the latest technology used for this purpose. Although the field of IDSs is still developing, the systems that do exist are still not complete, in the sense that they are not able to detect all types of intrusions. Some attacks which are detected by various tools available today cannot be detected by other products, depending on the types and methods that they are built on. Using a Genetic Algorithm (GA) is one of the methods that IDSs use to detect intrusions. They incorporate the concept of Darwin's theory and natural selection to detect intrusions. Not much research has been conducted in this area besides the Genetic Algorithm as an Alternative Tool for Security Audit Trails Analysis (GASSATA) tool; there are very few IDSs that are completely developed from using GAs. The focus of this paper is to introduce the application of GA, in order to improve the effectiveness of IDSs.
Southern Africa Telecommunication Networks and Applications Conference (SATNAC) 2017, 3-10 Septem... more Southern Africa Telecommunication Networks and Applications Conference (SATNAC) 2017, 3-10 September 2017, Freedom of the Seas Cruise
A Cloud Forensic Readiness as a Service (CFRaaS) model allows an environment to preemptively accu... more A Cloud Forensic Readiness as a Service (CFRaaS) model allows an environment to preemptively accumulate relevant potential digital evidence (PDE) which may be needed during a post-event response process. The benefit of applying a CFRaaS model in a cloud environment, is that, it is designed to prevent the modification/ tampering of the cloud architectures or the infrastructure during the reactive process, which if it could, may end up having far-reaching implications. The authors of this article present the reactive process as a very costly exercise when the infrastructure must be reprogrammed every time the process is conducted. This may hamper successful investigation from the forensic experts and law enforcement agencies perspectives. The CFRaaS model, in its current state, has not been presented in a way that can help to classify or visualize the different types of potential evidence in all the cloud deployable models, and this may limit the expectations of what or how the required PDE may be collected. To address this problem, the article presents the CFRaaS from a holistic ontology-driven perspective, which allows the forensic experts to be able to apply the CFRaaS based on its simplicity of the concepts, relationship or semantics between different form of potential evidence, as well as how the security of a digital environment being investigated could be upheld. The CFRaaS in this context follows a fundamental ontology engineering approach that is based on the classical Resource Description Framework. The proposed ontology-driven approach to CFRaaS is, therefore, a knowledge-base that uses layer-dependencies, which could be an essential toolkit for digital forensic examiners and other stakeholders in cloud-security. 
The implementation of this approach could further provide a platform to develop other knowledge base components for cloud forensics and security.
The Internet constitutes a means of communication in terms of which millions of messages and huge... more The Internet constitutes a means of communication in terms of which millions of messages and huge chunks of data are electronically sent millions of miles across the globe each day thanks to the Transmission Control Protocol/Internet Protocol (TCP/IP). One of the functions of the TCP/IP is to break up each of these messages into smaller entities of equal length. Such
In order for digital evidence from a digital forensic investigation to be admissible, one needs to follow a formalised and ideally standardised process. The authors' previous research and initiatives within ISO resulted in a new international standard, ISO/IEC 27043:2015, titled "Information technology — Security techniques — Incident investigation principles and processes", published in March 2015. The standard governs the digital forensic investigation process and covers it from a wide angle, while harmonising existing process models in this field. In this paper, the authors analyse both the standard itself and related standards, so as to enable the reader to understand the ecosystem of standards relating to the digital forensic investigation process and the role of ISO/IEC 27043:2015.
2019 IEEE Conference on Application, Information and Network Security (AINS), 2019
The threats posed by botnets in cyberspace continue to grow each day, and it has become very hard to detect or infiltrate bots, owing to the fact that botnet developers keep changing their propagation and attack techniques. Currently, most of these attacks centre on stealing computing power, theft of personal information and Distributed Denial of Service (DDoS) attacks. In this paper, the authors propose a novel technique based on the NP-hard Travelling Salesperson Problem (TSP), in which a given bot, bj, visits each host in a network environment, NE, and then returns to the botmaster in the form of an instruction (command), through optimal minimisation over the hosts that are, or may be, attacked. Given that bj represents a piece of malicious code and that the TSP is an NP-hard problem in combinatorial optimisation, the authors present this as an effective approach for botnet detection. It is worth noting that this study concentrates on the centralised botnet architecture. This holistic approach shows that botnet detection accuracy can be increased with a degree of certainty while potentially decreasing the chance of false positives. A discussion of the possible applicability and implementation is also given in this paper.
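The tour the abstract describes, a bot bj visiting every host in NE and returning to the botmaster, can be sketched with the classic nearest-neighbour heuristic for the TSP. This is a generic heuristic, not the paper's actual algorithm; the host names and planar coordinates standing in for network distances are purely illustrative assumptions.

```python
import math

def nearest_neighbour_tour(hosts: dict[str, tuple[float, float]], start: str) -> list[str]:
    """Greedy TSP heuristic: from `start`, repeatedly visit the closest
    unvisited host, then return to `start` (the 'botmaster')."""
    def dist(a: str, b: str) -> float:
        (x1, y1), (x2, y2) = hosts[a], hosts[b]
        return math.hypot(x1 - x2, y1 - y2)

    tour, unvisited = [start], set(hosts) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda h: dist(tour[-1], h))
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)  # bj returns to the botmaster
    return tour

# Hypothetical network environment NE; coordinates model pairwise "distances".
ne = {"botmaster": (0, 0), "h1": (1, 0), "h2": (2, 1), "h3": (0, 2)}
tour = nearest_neighbour_tour(ne, "botmaster")
```

Nearest-neighbour gives no optimality guarantee, which is exactly the point of framing detection around the NP-hardness of the underlying problem: an exact minimal tour is intractable for large networks, so practical approaches rely on heuristics like this one.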
2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT), 2020
The relationship between negative and positive connotations with regard to malware in the cloud is rarely investigated in the prevailing literature. However, there is a significant relationship between the use of positive and negative connotations. A clear distinction between the two emerges when originally malicious code is used with a positive connotation, as in the case of capturing keystrokes for a proactive forensic purpose. This is done during the collection of digital evidence for Digital Forensic Readiness (DFR) purposes, in preparation for a Digital Forensic Investigation (DFI) process. The paper explores the problem of using keystrokes for positive reasons as potential evidence, through extraction and digital preservation as highlighted in ISO/IEC 27037:2012 (security approaches) and ISO/IEC 27043:2015 (legal connotations). The authors therefore present a technique for how DFR can be achieved through the collection of digital information from originally malicious code. This is achieved without modifying cloud operations or the underlying infrastructure, while preserving the integrity of the digital information and maintaining the chain of custody. The paper proposes that the threat of malicious-code intrusion in the cloud can be transformed into an efficacious DFR process through logical acquisition and digital preservation of keystrokes. Experiments with captured keystrokes show that this approach could achieve proactive forensics.
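One concrete piece of the preservation step the abstract mentions is computing a cryptographic digest over the acquired keystrokes at collection time, so that later verification can show the evidence was not modified. The sketch below is a minimal, hypothetical illustration of that idea (the record fields and log content are assumptions, not the paper's format or an ISO-prescribed schema).

```python
import hashlib
import time

def preserve_keystroke_log(keystrokes: str) -> dict:
    """Package acquired keystrokes with a SHA-256 digest and a UTC
    timestamp, supporting later integrity verification."""
    data = keystrokes.encode("utf-8")
    return {
        "acquired_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "sha256": hashlib.sha256(data).hexdigest(),
        "payload": keystrokes,
    }

def verify(record: dict) -> bool:
    """Re-hash the payload and compare with the stored digest."""
    digest = hashlib.sha256(record["payload"].encode("utf-8")).hexdigest()
    return digest == record["sha256"]

rec = preserve_keystroke_log("user typed: ls -la")  # hypothetical captured input
```

Any later change to the payload makes `verify` fail, which is the basic property a chain of custody relies on; a real deployment would additionally sign the digest and log who handled the record and when.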
A potential security incident may go unsolved if standardized forensic approaches are not applied during lawful investigations. This paper highlights the importance of mapping digital forensic application requirement specifications to an international standard, specifically ISO/IEC 27043. The outcome of this work is projected to contribute to the problem of secure DF tool creation and, in the process, to address Software Requirements Specification (SRS) as part of the process of digital evidence admissibility.
More than ever before, the world is experiencing increased cyber-attacks in all areas of our daily lives. This situation has made combating cybercrime a daily struggle for both individuals and organisations. Furthermore, this struggle has been aggravated by the fact that today's cybercriminals have gone a step further and are able to employ complicated cyber-attack techniques. Some of those techniques are minuscule and inconspicuous in nature, and often camouflage themselves in the facade of authentic requests and commands. In order to combat this menace, especially after a security incident has occurred, cyber security professionals as well as digital forensic investigators are forced to sift through large and complex pools of data, also known as Big Data, in an effort to unveil Potential Digital Evidence (PDE) that can be used to support litigation. Gathered PDE can then help investigators arrive at particular conclusions and/or decisions. In the case of cyber forensics, what makes the process even tougher for investigators is the fact that Big Data often comes from multiple sources and in different file formats. Forensic investigators often have limited time and budget to handle the increased demands of analysing these large amounts of complex data for forensic purposes. It is for this reason that the authors have recognised that Deep Learning (DL), a subset of Artificial Intelligence (AI), has very distinct use cases in the domain of cyber forensics; even if many would argue that it is not an unrivalled solution, it can help enhance the fight against cybercrime. This paper therefore proposes a generic framework for integrating DL cognitive computing techniques into Cyber Forensics (CF), hereafter referred to as the DLCF Framework. DL uses machine learning techniques to solve problems through neural networks that simulate human decision-making. On these grounds, DL holds the potential to change the domain of CF dramatically in a variety of ways, as well as to provide solutions to forensic investigators, ranging from reducing bias in forensic investigations to challenging what evidence is considered admissible in a court of law or any civil hearing.
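The neural decision step the abstract describes can be illustrated at its smallest scale: a single logistic unit turning numeric features into a score that drives a triage decision. This is a generic sketch, not the DLCF Framework itself; the two features and hand-set weights are hypothetical (a real pipeline would learn weights from labelled evidence).

```python
import math

def neuron(features: list[float], weights: list[float], bias: float) -> float:
    """One logistic unit: weighted sum of inputs passed through a sigmoid,
    yielding a score in (0, 1)."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical artifact features (e.g. file entropy, size ratio) and weights.
score = neuron([0.9, 0.4], weights=[2.0, -1.0], bias=-0.5)
label = "flag for review" if score > 0.5 else "low priority"
```

Deep networks stack many such units in layers; the point for forensics is that the resulting score is a ranking aid for investigators, not by itself admissible evidence.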
Digital forensics (DF) is a growing field that is gaining popularity among many computer professionals, law enforcement agencies and other stakeholders, who must always cooperate in this profession. Unfortunately, this has created an environment replete with semantic disparities within the domain that need to be resolved and/or eliminated. For the purpose of this study, semantic disparity refers to disagreements about the meaning, interpretation, descriptions and intended use of the same or related data and terminologies. If semantic disparity is not detected and resolved, it may lead to misunderstandings. Even worse, since the people involved may not come from the same background, they may not be aware that the semantic disparities exist, and might not easily recognise them. The aim of this paper, therefore, is to discuss semantic disparity in DF and to elaborate on how to manage it. In addition, this paper presents the significance of semantic reconciliation in DF. Semantic reconciliation refers to reconciling the meaning (including the interpretations and descriptions) of terminologies and data used in digital forensics. Managing semantic disparities and the significance of semantic reconciliation in digital forensics constitute the main contributions of this paper.
Cloud computing is a novel computing paradigm that presents new research opportunities in the field of digital forensics. Cloud computing is based on the following principles: on-demand self-service, broad network access, resource pooling, rapid elasticity and measured service. These principles require that cloud computing be distributed internationally. Even if the cloud is hosted locally, it is based on multi-tenancy, which is a challenge when using a traditional "dead" forensic approach. For these reasons, digital forensic investigations in cloud computing need to be performed on live systems. There are challenges in cloud forensics itself, as there are no standardised digital forensic procedures and processes. This paper is part of an effort by the authors to standardise the digital forensic process, and we therefore focus specifically on live forensics. Since cloud computing services are provisioned over the Internet, live forensics and network forensics form an integral part of cloud forensics, so a bid to standardise a digital forensic process in cloud computing needs to focus first on live forensics and network forensics. In this paper we present digital forensic procedures for live forensics that follow the draft international standard for Investigation Principles and Processes. A standardised live digital forensic process will form part of a standardised cloud forensic process.
Papers by H. Venter