Design and evaluation of bulk data transfer extensions for the NFComms framework
- Bradshaw, Karen L, Irwin, Barry V W, Pennefather, Sean
- Authors: Bradshaw, Karen L , Irwin, Barry V W , Pennefather, Sean
- Date: 2019
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430369 , vital:72686 , https://hdl.handle.net/10520/EJC-1d75c01e79
- Description: We present the design and implementation of an indirect messaging extension for the existing NFComms framework that provides communication between a network flow processor and host CPU. This extension addresses the bulk throughput limitations of the framework and is intended to work in conjunction with existing communication mediums. Testing of the framework extensions shows an increase in throughput performance of up to 268× that of the current direct message passing framework at the cost of increased single message latency of up to 2×. This trade-off is considered acceptable as the proposed extensions are intended for bulk data transfer only, while the existing message passing functionality of the framework is preserved and can be used in situations where low latency is required for small messages.
- Full Text:
Quantifying the accuracy of small subnet-equivalent sampling of IPv4 internet background radiation datasets
- Chindipha, Stones D, Irwin, Barry V W, Herbert, Alan
- Authors: Chindipha, Stones D , Irwin, Barry V W , Herbert, Alan
- Date: 2019
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430271 , vital:72679 , https://doi.org/10.1145/3351108.3351129
- Description: Network telescopes have been used for over a decade to aid in identifying threats by gathering unsolicited network traffic. This Internet Background Radiation (IBR) data has proved to be a significant source of intelligence in combating emerging threats on the Internet at large. Traditionally, operation has required a significant contiguous block of IP addresses. Continued operation of such sensors by researchers, and their adoption by organisations as part of their operational intelligence, is becoming a challenge due to the global shortage of IPv4 addresses. The pressure is on to use allocated IP addresses for operational purposes. Future use of IBR collection methods is likely to be limited to smaller IP address pools, which may not be contiguous. This paper offers a first step towards evaluating the feasibility of such small sensors. An evaluation is conducted of the random sampling of various subnet-sized equivalents. The accuracy of observable data is compared against a traditional 'small' IPv4 network telescope using a /24 net-block. Results show that for much of the IBR data, sensors consisting of smaller, non-contiguous blocks of addresses are able to achieve high accuracy rates versus the base case. While the results obtained reflect the current nature of IBR, they demonstrate the viability of organisations utilising free IP addresses within their networks for IBR collection and, ultimately, the production of threat intelligence.
- Full Text:
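The sampling evaluation described in the abstract above lends itself to a small illustration. The sketch below uses entirely synthetic data; `sample_accuracy` and the sweep/targeted source labels are hypothetical constructions, not drawn from the paper. It estimates how much of a full /24 telescope's view a random subnet-equivalent sample retains:

```python
import random

def sample_accuracy(observations, sample_size, trials=100, seed=1):
    """Estimate what fraction of distinct source IPs a random sample of
    sensor addresses still observes, relative to the full /24.

    observations: dict mapping each of the 256 sensor addresses to the
    set of source labels that probed it (synthetic stand-in for real IBR).
    """
    rng = random.Random(seed)
    full = set().union(*observations.values())   # sources seen by the /24
    addrs = list(observations)
    acc = 0.0
    for _ in range(trials):
        picked = rng.sample(addrs, sample_size)
        seen = set().union(*(observations[a] for a in picked))
        acc += len(seen) / len(full)
    return acc / trials

# Synthetic IBR: scanners that sweep the whole block are seen by every
# address, so even small samples retain them; a targeted probe aimed at
# one address is only seen if that address lands in the sample.
obs = {a: {"sweep-%d" % s for s in range(50)} for a in range(256)}
obs[7].add("targeted-1")

print(sample_accuracy(obs, 16))   # /28-equivalent sample of a /24
```

Under this toy model, block-wide sweeps dominate accuracy, which mirrors the abstract's finding that small non-contiguous samples can remain highly accurate for much of the IBR data.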
Developing an electromagnetic noise generator to protect a Raspberry Pi from side channel analysis
- Frieslaar, I, Irwin, Barry V W
- Authors: Frieslaar, I , Irwin, Barry V W
- Date: 2018
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429511 , vital:72618 , https://ieeexplore.ieee.org/abstract/document/8531950
- Description: This research investigates the Electromagnetic (EM) side channel leakage of a Raspberry Pi 2 B+. An evaluation is performed on the EM leakage as the device executes the AES-128 cryptographic algorithm contained in the libcrypto++ library in a threaded environment. Four multi-threaded implementations are evaluated. These implementations are Portable Operating System Interface Threads, C++11 threads, Threading Building Blocks, and OpenMP threads. It is demonstrated that the various thread techniques have distinct variations in frequency and shape as EM emanations are leaked from the Raspberry Pi. It is demonstrated that the AES-128 cryptographic implementation within the libcrypto++ library on a Raspberry Pi is vulnerable to Side Channel Analysis (SCA) attacks. The cryptographic process was visible within the EM spectrum, and the data for this process was extracted after digital filtering techniques were applied to the signal. The resultant data was utilised in a Differential Electromagnetic Analysis (DEMA) attack, and the results revealed the 16 sub-keys required to recover the full AES-128 secret key. Based on this discovery, this research introduced a multi-threading approach utilising the Secure Hash Algorithm (SHA) to serve as a software-based countermeasure to mitigate SCA attacks. The proposed countermeasure, known as the FRIES noise generator, executed as a daemon and generated EM noise that was able to hide the cryptographic implementation and prevent the DEMA attack and other statistical analysis.
- Full Text:
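The countermeasure summarised above can be illustrated conceptually. The sketch below is a hypothetical stand-in, not the FRIES implementation: it runs a SHA busy-loop in a daemon thread while a sensitive operation executes, on the premise that on real hardware the sustained hashing activity radiates EM emissions that mask nearby cryptographic operations:

```python
import hashlib
import threading

def sha_noise_worker(stop: threading.Event, chunk: bytes = b"\xff" * 4096):
    """Busy-loop SHA-256 hashing until told to stop. On real hardware
    this sustained activity would radiate masking EM emissions; here it
    simply keeps the CPU busy as a conceptual demonstration."""
    digest = chunk
    while not stop.is_set():
        digest = hashlib.sha256(digest).digest()

def run_with_noise(sensitive_op):
    """Run `sensitive_op` while the hashing 'noise' daemon is active."""
    stop = threading.Event()
    t = threading.Thread(target=sha_noise_worker, args=(stop,), daemon=True)
    t.start()
    try:
        return sensitive_op()
    finally:
        stop.set()   # always silence the noise thread afterwards
        t.join()

# Hypothetical sensitive operation standing in for an AES encryption.
result = run_with_noise(lambda: hashlib.sha1(b"secret payload").hexdigest())
print(result)
```

The daemon-thread structure mirrors the abstract's description of FRIES executing as a daemon alongside the protected workload; the actual noise characteristics would of course depend on the hardware.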
Exploration and design of a synchronous message passing framework for a CPU-NPU heterogeneous architecture
- Pennefather, Sean, Bradshaw, Karen L, Irwin, Barry V W
- Authors: Pennefather, Sean , Bradshaw, Karen L , Irwin, Barry V W
- Date: 2018
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429537 , vital:72620 , https://ieeexplore.ieee.org/abstract/document/8425384
- Description: In this paper we present the development of a framework for communication between an NPU (network processing unit) and a CPU through synchronous message passing that is compliant with the synchronous communication events of the CSP formalism. This framework is designed to be used for passing generic information between application components operating on both architectures and is intended to operate in conjunction with existing datapaths present on the NPU, which in turn are responsible for network traffic transmission. An investigation of different message passing topologies is covered before the proposed message passing fabric is presented. As a proof of concept, an initial implementation of the fabric is developed and tested to determine its viability and correctness. Through testing it is shown that the implemented framework operates as intended. However, it is noted that the throughput of the exploratory implementation is not considered suitable for high-performance applications and further evaluation is required.
- Full Text:
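A CSP-style synchronous communication event can be sketched in plain Python to illustrate the rendezvous semantics the framework targets. This toy `SyncChannel` is purely illustrative and unrelated to the paper's NPU implementation: the sender blocks until the receiver has actually taken the message, so the two sides synchronise on the communication itself.

```python
import queue
import threading

class SyncChannel:
    """Zero-buffer channel: send() blocks until a receiver has taken the
    message, mirroring CSP's synchronous communication event."""
    def __init__(self):
        self._slot = queue.Queue(maxsize=1)
        self._ack = queue.Queue(maxsize=1)

    def send(self, msg):
        self._slot.put(msg)      # offer the message
        self._ack.get()          # block until the receiver acknowledges

    def recv(self):
        msg = self._slot.get()   # take the offered message
        self._ack.put(None)      # release the blocked sender
        return msg

ch = SyncChannel()
out = []
rx = threading.Thread(target=lambda: out.append(ch.recv()))
rx.start()
ch.send({"type": "counter-update", "value": 42})  # returns after rendezvous
rx.join()
print(out[0])
```

The acknowledgement queue is what distinguishes this from ordinary buffered message passing: without it, `send()` would return before the receiver engaged, breaking the synchronous semantics.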
Extending the NFComms framework for bulk data transfers
- Pennefather, Sean, Bradshaw, Karen L, Irwin, Barry V W
- Authors: Pennefather, Sean , Bradshaw, Karen L , Irwin, Barry V W
- Date: 2018
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430152 , vital:72669 , https://doi.org/10.1145/3278681.3278686
- Description: In this paper we present the design and implementation of an indirect messaging extension for the existing NFComms framework that provides communication between a network flow processor and host CPU. This extension addresses the bulk throughput limitations of the framework and is intended to work in conjunction with existing communication mediums. Testing of the framework extensions shows an increase in throughput performance of up to 300× that of the current direct message passing framework at the cost of increased single message latency of up to 2×. This trade-off is considered acceptable as the proposed extensions are intended for bulk data transfer only, while the existing message passing functionality of the framework is preserved and can be used in situations where low latency is required for small messages.
- Full Text:
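The throughput/latency trade-off described above can be illustrated with a toy cost model. The figures below are illustrative placeholders, not measurements from the paper; they merely show why a path with higher per-message latency but much higher throughput wins for bulk payloads while the low-latency direct path remains preferable for small messages:

```python
def transfer_time(size_bytes, latency_s, throughput_Bps):
    """Simple cost model: fixed per-message latency plus serialisation."""
    return latency_s + size_bytes / throughput_Bps

# Illustrative figures only (not values measured in the paper): the
# indirect path has roughly 2x the latency but a far higher data rate.
direct   = dict(latency_s=10e-6, throughput_Bps=1e6)
indirect = dict(latency_s=20e-6, throughput_Bps=300e6)

def best_path(size_bytes):
    d = transfer_time(size_bytes, **direct)
    i = transfer_time(size_bytes, **indirect)
    return "direct" if d <= i else "indirect"

print(best_path(8))        # tiny message: the low-latency path wins
print(best_path(64_000))   # bulk payload: the high-throughput path wins
```

This is the same reasoning the abstract gives for keeping both mechanisms: the crossover point depends on message size, so the direct path is preserved for small, latency-sensitive messages.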
Toward distributed key management for offline authentication
- Linklater, Gregory, Smith, Christian, Herbert, Alan, Irwin, Barry V W
- Authors: Linklater, Gregory , Smith, Christian , Herbert, Alan , Irwin, Barry V W
- Date: 2018
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430283 , vital:72680 , https://doi.org/10.1145/3278681.3278683
- Description: Self-sovereign identity promises prospective users greater control, security, privacy, portability and overall greater convenience; however, the immaturity of current distributed key management solutions results in general disregard of security advisories in favour of convenience and accessibility. This research proposes the use of intermediate certificates as a distributed key management solution. Intermediate certificates will be shown to allow multiple keys to authenticate to a single self-sovereign identity. Keys may be freely added to an identity without requiring a distributed ledger, any other third-party service, or sharing private keys between devices. This research will also show that key rotation is a superior alternative to existing key recovery and escrow systems in helping users recover when their keys are lost or compromised. These features will allow remote credentials to be used to issue, present and appraise remote attestations without relying on a constant Internet connection.
- Full Text:
Violations of good security practices in graphical passwords schemes: Enterprise constraints on scheme-design
- Vorster, Johannes, Irwin, Barry V W, van Heerden, Renier P
- Authors: Vorster, Johannes , Irwin, Barry V W , van Heerden, Renier P
- Date: 2018
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430324 , vital:72683 , https://researchspace.csir.co.za/dspace/bitstream/handle/10204/10919/Vorster_22337_2018.pdf?sequence=1isAllowed=y
- Description: During the past decade, the sophistication and maturity of Enterprise-level Information Security (EIS) Standards and Systems has increased significantly. This maturity, particularly in the handling of enterprise-wide capability models, has led to a set of standards – e.g. ISO/IEC 27001, NIST 800-53, ISO/IEC 27789 and CSA CCM – that propose controls applicable to the implementation of an Information Security Management System (ISMS). By nature, the academic community is fruitful in its endeavour to propose new password schemes, and Graphical Passwords (GPs) have seen many scheme proposals. In this paper, we explore the impact of good security standards and the lessons learnt over the past decade of EIS as a model of constraint on GP schemes. The paper focuses on a number of GP schemes and points out the various security constraints and limitations, if such schemes are to be implemented at the enterprise level.
- Full Text:
Investigating the effects various compilers have on the electromagnetic signature of a cryptographic executable
- Frieslaar, Ibraheem, Irwin, Barry V W
- Authors: Frieslaar, Ibraheem , Irwin, Barry V W
- Date: 2017
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430207 , vital:72673 , https://doi.org/10.1145/3129416.3129436
- Description: This research investigates changes in the electromagnetic (EM) signatures of a cryptographic binary executable based on compile-time parameters to the GNU and clang compilers. The source code was compiled and executed on a Raspberry Pi 2, which utilizes the ARMv7 CPU. Various optimization flags are enabled at compile-time and the binary executable's EM signatures are captured at run-time. It is demonstrated that the GNU and clang compilers produced different EM signatures on program execution. The results indicated that, when the O3 optimization flag is utilized, the EM signature of the program changes. Additionally, the g++ compiler produced an executable requiring fewer instructions, which resulted in fewer EM emissions being leaked. The EM data from the various compilers under different optimization levels was used as input data for a correlation power analysis attack. The results indicated that partial recovery of AES-128 encryption keys was possible. In addition, the fewest subkeys were recovered when the clang compiler was used with level O2 optimization. Finally, the research was able to recover 15 of the 16 subkeys of the AES-128 cryptographic algorithm from the Raspberry Pi.
- Full Text:
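The attack mentioned in the abstract follows the standard correlation power analysis recipe: for each key guess, correlate a Hamming-weight leakage hypothesis with the measured traces and keep the guess with the strongest correlation. The sketch below simulates that recipe end-to-end with a toy S-box and synthetic traces; nothing here reproduces the paper's captures or AES itself.

```python
import random

HW = [bin(x).count("1") for x in range(256)]   # Hamming-weight lookup table

# Toy S-box stand-in: any fixed byte permutation suffices for the sketch.
SBOX = list(range(256))
random.Random(0).shuffle(SBOX)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def cpa_recover_keybyte(plaintexts, traces):
    """Rank key guesses by |correlation| between the predicted
    Hamming-weight leakage and the measured samples."""
    return max(range(256), key=lambda k: abs(pearson(
        [HW[SBOX[p ^ k]] for p in plaintexts], traces)))

# Simulate leaky traces for a secret key byte plus Gaussian noise.
key, rng = 0x3A, random.Random(1)
pts = [rng.randrange(256) for _ in range(500)]
trs = [HW[SBOX[p ^ key]] + rng.gauss(0, 1) for p in pts]

print(hex(cpa_recover_keybyte(pts, trs)))
```

Repeating this per key byte is what yields the per-subkey recovery counts the abstract reports; in practice the traces come from EM capture hardware rather than a simulator.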
Investigating the electromagnetic side channel leakage from a raspberry pi
- Frieslaar, Ibraheem, Irwin, Barry V W
- Authors: Frieslaar, Ibraheem , Irwin, Barry V W
- Date: 2017
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429548 , vital:72621 , https://ieeexplore.ieee.org/abstract/document/8251771
- Description: This research investigates the Electromagnetic (EM) side channel leakage of a Raspberry Pi 2 B+. An evaluation is performed on the EM leakage as the device executes the AES-128 cryptographic algorithm contained in the Crypto++ library in a threaded environment. Four multi-threaded implementations are evaluated. These implementations are Portable Operating System Interface Threads, C++11 threads, Threading Building Blocks, and OpenMP threads. It is demonstrated that the various thread techniques have distinct variations in frequency and shape as EM emanations are leaked from the Raspberry Pi. Additionally, noise is introduced while the cryptographic algorithm executes. The results indicate that it is still possible to visibly see the execution of the cryptographic algorithm. However, on 32 out of 50 occasions the cryptographic execution was not detected. It was further identified that when prime numbers are calculated, the cryptographic algorithm becomes hidden. Furthermore, the analysis indicated that when high prime numbers are calculated there is a window in which the cryptographic algorithm cannot be seen visibly in the EM spectrum.
- Full Text:
Investigating the utilization of the secure hash algorithm to generate electromagnetic noise
- Frieslaar, Ibraheem, Irwin, Barry V W
- Authors: Frieslaar, Ibraheem , Irwin, Barry V W
- Date: 2017
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430222 , vital:72674 , https://doi.org/10.1145/3163080.3163089
- Description: This research introduces an electromagnetic (EM) noise generator known as the FRIES noise generator to mitigate and obfuscate Side Channel Analysis (SCA) attacks against a Raspberry Pi. The FRIES noise generator utilizes the implementation of the Secure Hash Algorithm (SHA) from OpenSSL to generate white noise within the EM spectrum. This research further contributes to the body of knowledge by demonstrating that the SHA implementations of libcrypto++ and OpenSSL have different EM signatures. It was further revealed that as a more secure implementation of the SHA was executed, additional data lines were used, resulting in increased EM emissions. It was demonstrated that the OpenSSL implementation of the SHA is more optimized than the libcrypto++ implementation, utilizing fewer resources and not leaving the device in a bottleneck. The FRIES daemon added noise to the EM leakage, which prevented the visual location of the AES-128 cryptographic implementation. Finally, the cross-correlation test demonstrated that the EM features of the AES-128 algorithm were not detected within the FRIES noise.
- Full Text:
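The cross-correlation test mentioned above can be illustrated with synthetic signals: embed a "feature" in a quiet trace, add strong noise, and compare the normalised correlation peaks. All data below is synthetic and the function is a generic sliding-window correlator, not the authors' tooling:

```python
import random

def xcorr_peak(signal, template):
    """Maximum normalised cross-correlation of template over signal."""
    m = len(template)
    mt = sum(template) / m
    t0 = [t - mt for t in template]
    nt = sum(t * t for t in t0) ** 0.5
    best = 0.0
    for i in range(len(signal) - m + 1):
        w = signal[i:i + m]
        mw = sum(w) / m
        w0 = [x - mw for x in w]
        nw = sum(x * x for x in w0) ** 0.5
        if nw:
            best = max(best, sum(a * b for a, b in zip(t0, w0)) / (nt * nw))
    return best

rng = random.Random(0)
template = [rng.uniform(-1, 1) for _ in range(64)]   # stand-in "AES feature"
quiet = [rng.gauss(0, 0.05) for _ in range(500)]
quiet[200:264] = template                            # leak clearly embedded
noisy = [x + rng.gauss(0, 2) for x in quiet]         # FRIES-style noise added

print(round(xcorr_peak(quiet, template), 2))   # strong peak: feature found
print(round(xcorr_peak(noisy, template), 2))   # weak peak: feature hidden
```

The drop in the correlation peak under added noise is the synthetic analogue of the abstract's finding that the AES-128 features were not detectable within the FRIES noise.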
An overview of linux container based network emulation
- Peach, Schalk, Irwin, Barry V W, van Heerden, Renier
- Authors: Peach, Schalk , Irwin, Barry V W , van Heerden, Renier
- Date: 2016
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430095 , vital:72665 , https://www.proceedings.com/30838.html
- Description: The objective of this paper is to assess the current state of Container-Based Emulator implementations on the Linux platform. Through a narrative overview, a selection of open source Container-Based Emulators are analysed to collect information regarding the technologies used to construct them and to assess the current state of this emerging technology. Container-Based Emulators allow the creation of small emulated networks on commodity hardware through the use of kernel-level virtualization techniques, also referred to as containerisation. Container-Based Emulators act as a management tool to control containers and the applications that execute within them. The ability of Container-Based Emulators to create repeatable and controllable test networks makes them ideal for use as training and experimentation tools in the information security and network management fields. Due to their ease of use and low hardware requirements, these tools present a low-cost alternative to other forms of network experimentation platforms. Through a review of current literature and source code, the current state of Container-Based Emulators is assessed.
- Full Text:
Dridex: Analysis of the traffic and automatic generation of IOCs
- Rudman, Lauren, Irwin, Barry V W
- Authors: Rudman, Lauren , Irwin, Barry V W
- Date: 2016
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429525 , vital:72619 , https://ieeexplore.ieee.org/abstract/document/7802932
- Description: In this paper we present a framework that generates network Indicators of Compromise (IOC) automatically from a malware sample after dynamic runtime analysis. The framework addresses the limitations of manual Indicator of Compromise generation and utilises a sandbox environment in which to perform the malware analysis. We focus on the generation of network-based IOCs from captured traffic files (PCAPs) generated by the dynamic malware analysis. The Cuckoo Sandbox environment is used for the analysis, and the setup is described in detail. Accordingly, we discuss the concept of IOCs and the popular formats used, as there is currently no standard. As an example of how the proof-of-concept framework can be used, we chose 100 Dridex malware samples, evaluated the traffic, and showed what can be used for the generation of network-based IOCs. Results of our system confirm that we can create IOCs from dynamic malware analysis and exclude the legitimate background traffic originating from the sandbox system. We also briefly discuss the sharing and application of the generated IOCs and a number of systems that can be used to share them. Lastly, we discuss how they can be useful in combating cyber threats.
- Full Text:
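The whitelisting step summarised in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the flow records, addresses, and benign list are invented stand-ins for what would be extracted from a Cuckoo Sandbox PCAP.

```python
# Hypothetical destinations considered legitimate sandbox background traffic.
BENIGN_DESTINATIONS = {"8.8.8.8", "239.255.255.250"}  # e.g. DNS resolver, SSDP noise

def extract_network_iocs(flows, benign=BENIGN_DESTINATIONS):
    """Collect (dst_ip, dst_port) pairs from flow records, dropping known
    background traffic so only candidate network IOCs remain."""
    iocs = set()
    for flow in flows:
        if flow["dst_ip"] in benign:
            continue  # legitimate sandbox traffic, not an indicator
        iocs.add((flow["dst_ip"], flow["dst_port"]))
    return iocs

flows = [
    {"dst_ip": "8.8.8.8", "dst_port": 53},       # resolver lookup from the sandbox
    {"dst_ip": "203.0.113.7", "dst_port": 443},  # suspected C2 beacon (example address)
    {"dst_ip": "203.0.113.7", "dst_port": 443},  # repeated beacon, deduplicated by the set
]
print(sorted(extract_network_iocs(flows)))  # [('203.0.113.7', 443)]
```

A real system would then emit these pairs in one of the IOC exchange formats the paper surveys.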
Investigating multi-thread utilization as a software defence mechanism against side channel attacks
- Frieslaar, Ibraheem, Irwin, Barry V W
- Authors: Frieslaar, Ibraheem , Irwin, Barry V W
- Date: 2016
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430190 , vital:72672 , https://doi.org/10.1145/3015166.3015176
- Description: A state-of-the-art software countermeasure to defend against side channel attacks is investigated in this work. The implementation of this novel approach uses multiple threads and a task scheduler on a microcontroller to purposefully leak information at critical points in the cryptographic algorithm and confuse the attacker. This research demonstrates that the approach is capable of outperforming the known countermeasures of hiding and shuffling in terms of preventing secret information from being leaked. Furthermore, the proposed countermeasure mitigates side channel attacks such as correlation power analysis and template attacks.
- Full Text:
DDoS Attack Mitigation Through Control of Inherent Charge Decay of Memory Implementations
- Herbert, Alan, Irwin, Barry V W, van Heerden, Renier P
- Authors: Herbert, Alan , Irwin, Barry V W , van Heerden, Renier P
- Date: 2015
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430339 , vital:72684 , https://www.academic-bookshop.com/ourshop/prod_3774091-ICCWS-2015-10th-International-Conference-on-Cyber-Warfare-and-Security-Kruger-National-Park-South-Africa-PRINT-ver-ISBN-978191030996.html
- Description: DDoS (Distributed Denial of Service) attacks over recent years have been shown to be devastating to the target systems and services made publicly available over the Internet. Furthermore, the backscatter caused by DDoS attacks also affects the available bandwidth and responsiveness of many other hosts within the Internet. The unfortunate reality of these attacks is that the targeted party cannot fight back due to the presence of botnets and malware-driven hosts. The hosts that carry out the attack on a target are usually controlled remotely, and the owner of the device is unaware of it; for this reason one cannot attack back directly, as this would serve little more than to disable an innocent party. A proposed solution to these DDoS attacks is to identify a potential attacking address and ignore communication from that address for a set period of time through time stamping.
- Full Text:
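The time-stamped blocking idea in the abstract can be sketched in software. The class name, window length, and addresses below are hypothetical; the paper's actual proposal exploits charge decay in memory hardware, of which this is only a software analogue.

```python
import time

class AddressBlocker:
    """Ignore traffic from an address for a fixed window after it is flagged,
    then let the entry decay (expire) automatically."""

    def __init__(self, block_seconds: float):
        self.block_seconds = block_seconds
        self._blocked = {}  # address -> timestamp at which it was flagged

    def flag(self, addr: str, now: float = None):
        self._blocked[addr] = time.time() if now is None else now

    def is_blocked(self, addr: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        flagged = self._blocked.get(addr)
        if flagged is None:
            return False
        if now - flagged >= self.block_seconds:
            del self._blocked[addr]  # window expired: forget the address
            return False
        return True

blocker = AddressBlocker(block_seconds=60)
blocker.flag("198.51.100.9", now=0.0)
print(blocker.is_blocked("198.51.100.9", now=30.0))  # True: inside the window
print(blocker.is_blocked("198.51.100.9", now=90.0))  # False: entry has decayed
```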
Multi sensor national cyber security data fusion
- Swart, Ignus, Irwin, Barry V W, Grobler, Marthie
- Authors: Swart, Ignus , Irwin, Barry V W , Grobler, Marthie
- Date: 2015
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430393 , vital:72688 , https://www.academic-bookshop.com/ourshop/prod_3774091-ICCWS-2015-10th-International-Conference-on-Cyber-Warfare-and-Security-Kruger-National-Park-South-Africa-PRINT-ver-ISBN-978191030996.html
- Description: A proliferation of cyber security strategies has recently been published around the world, with as many as thirty-five strategies documented since 2009. These published strategies indicate the growing need to obtain a clear view of a country’s information security posture and to improve on it. The potential attack surface of a nation is extremely large, however, and no single source of cyber security data provides all the required information to accurately describe the cyber security readiness of a nation. There are, however, a variety of specialised data sources that are rich enough in relevant cyber security information to assess the state of a nation in at least key areas such as botnets, spam servers and incorrectly configured hosts present in a country. While informative from both an offensive and a defensive point of view, the data sources vary in factors such as accuracy, completeness, representation, cost and data availability. These factors add complexity when attempting to present a clear view of the combined intelligence of the data.
- Full Text:
Observed correlations of unsolicited network traffic over five distinct IPv4 netblocks
- Nkhumeleni, Thiswilondi M, Irwin, Barry V W
- Authors: Nkhumeleni, Thiswilondi M , Irwin, Barry V W
- Date: 2015
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430408 , vital:72689 , https://www.academic-bookshop.com/ourshop/prod_3774091-ICCWS-2015-10th-International-Conference-on-Cyber-Warfare-and-Security-Kruger-National-Park-South-Africa-PRINT-ver-ISBN-978191030996.html
- Description: Using network telescopes to monitor unused IP address space provides a favorable environment for researchers to study and detect malware, denial of service and scanning activities within the global IPv4 address space. This research focuses on comparative and correlation analysis of traffic activity across a network of telescope sensors. Analysis is done using data collected over a 12-month period on five network telescopes, each with an aperture size of /24, operated in disjoint IPv4 address space. These were considered as two distinct groupings. Time series representing time-based traffic activity observed on these sensors were constructed. Using the cross- and auto-correlation methods of time series analysis, moderate correlation of traffic activity was achieved between telescope sensors in each category. Weak to moderate correlation was calculated when comparing the category A and category B network telescopes’ datasets. Results were significantly improved by considering TCP traffic separately: moderate to strong correlation coefficients in each category were calculated when using TCP traffic only. UDP traffic analysis showed weaker correlation between sensors; however, the uniformity of ICMP traffic showed correlation of traffic activity across all sensors. The results confirmed the visual observation of traffic relativity in telescope sensors within the same category and quantitatively analyzed the correlation of network telescopes’ traffic activity.
- Full Text:
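The correlation measure underlying the analysis above can be illustrated with a plain Pearson coefficient over two short, invented per-interval packet-count series; the actual study applies cross- and auto-correlation over a year of telescope data.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series.
    Assumes neither series is constant (non-zero variance)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two hypothetical sensors observing a similar scanning burst
sensor_a = [10, 12, 30, 55, 40, 12]
sensor_b = [11, 14, 28, 60, 38, 10]
print(round(pearson(sensor_a, sensor_b), 3))  # close to 1.0: strongly correlated
```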
Towards a PHP webshell taxonomy using deobfuscation-assisted similarity analysis
- Wrench, Peter M, Irwin, Barry V W
- Authors: Wrench, Peter M , Irwin, Barry V W
- Date: 2015
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429560 , vital:72622 , 10.1109/ISSA.2015.7335066
- Description: The abundance of PHP-based Remote Access Trojans (or web shells) found in the wild has led malware researchers to develop systems capable of tracking and analysing these shells. In the past, such shells were ably classified using signature matching, a process that is currently unable to cope with the sheer volume and variety of web-based malware in circulation. Although a large percentage of newly-created web shell software incorporates portions of code derived from seminal shells such as c99 and r57, its authors are able to disguise this by making extensive use of obfuscation techniques intended to frustrate any attempts to dissect or reverse engineer the code. This paper presents an approach to shell classification and analysis (based on similarity to a body of known malware) in an attempt to create a comprehensive taxonomy of PHP-based web shells. Several different measures of similarity were used in conjunction with clustering algorithms and visualisation techniques in order to achieve this. Furthermore, an auxiliary component capable of syntactically deobfuscating PHP code is described. This was employed to reverse idiomatic obfuscation constructs used by software authors. It was found that this deobfuscation dramatically increased the observed levels of similarity by exposing additional code for analysis.
- Full Text:
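One similarity measure of the kind the paper combines with clustering can be sketched as a normalized compression distance (NCD). This is an illustrative choice, not necessarily one of the paper's measures, and the sample shell fragments are invented rather than real c99 or r57 code.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for closely related inputs,
    near 1 for unrelated ones. Uses zlib as the compressor."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Two "shells" differing only in a parameter name, plus unrelated data
shell_a = b"<?php eval(base64_decode($_POST['c'])); ?>" * 20
shell_b = b"<?php eval(base64_decode($_POST['k'])); ?>" * 20
unrelated = bytes(range(256)) * 4

# Derived shells compress well together, so they score much closer
print(ncd(shell_a, shell_b) < ncd(shell_a, unrelated))  # True
```

Deobfuscating both inputs before compressing is what exposes the shared code and drives scores like these down, as the paper reports.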
An exploration of geolocation and traffic visualisation using network flows
- Pennefather, Sean, Irwin, Barry V W
- Authors: Pennefather, Sean , Irwin, Barry V W
- Date: 2014
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429597 , vital:72625 , 10.1109/ISSA.2014.6950
- Description: A network flow is a data record that represents characteristics associated with a unidirectional stream of packets transmitted between two hosts using an IP layer protocol. As a network flow only represents statistics relating to the data transferred in the stream, the effectiveness of utilizing network flows for traffic visualization to aid in cyber defense is not immediately apparent and needs further exploration. The goal of this research is to explore the use of network flows for data visualization and geolocation.
- Full Text:
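The flow definition given in the abstract above can be illustrated by aggregating packets into unidirectional records keyed by the 5-tuple; the packet tuples below are invented for the example.

```python
from collections import defaultdict

def build_flows(packets):
    """Aggregate (src, dst, proto, sport, dport, size) packet tuples into
    unidirectional flow records holding packet and byte counts."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for src, dst, proto, sport, dport, size in packets:
        key = (src, dst, proto, sport, dport)
        flows[key]["packets"] += 1
        flows[key]["bytes"] += size
    return dict(flows)

packets = [
    ("10.0.0.1", "192.0.2.5", "TCP", 50231, 80, 60),
    ("10.0.0.1", "192.0.2.5", "TCP", 50231, 80, 1500),
    ("192.0.2.5", "10.0.0.1", "TCP", 80, 50231, 60),  # reverse direction: a separate flow
]
flows = build_flows(packets)
print(len(flows))  # 2: flows are unidirectional, so each direction is its own record
```

Geolocating the source address of each record is then a lookup against the flow key, which is all the visualisation needs from the traffic.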
Human perception of the measurement of a network attack taxonomy in near real-time
- Van Heerden, Renier, Malan, Mercia M, Mouton, Francois, Irwin, Barry V W
- Authors: Van Heerden, Renier , Malan, Mercia M , Mouton, Francois , Irwin, Barry V W
- Date: 2014
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429924 , vital:72652 , https://doi.org/10.1007/978-3-662-44208-1_23
- Description: This paper investigates how the measurement of a network attack taxonomy can be related to human perception. Network attacks do not have a time limitation, but the earlier an attack is detected, the more damage can be prevented and the more preventative actions can be taken. This paper evaluates how elements of network attacks can be measured in near real-time (60 seconds). The taxonomy we use was developed by van Heerden et al. (2012) with over 100 classes. These classes present the attacker’s and defender’s points of view. The degree to which each class can be quantified or measured is determined by investigating the accuracy of various assessment methods. We classify each class as either defined, high, low or not quantifiable. For example, it may not be possible to determine the instigator of an attack (Aggressor), but only that the attack has been launched by a Hacker (Actor). Some classes can only be quantified with low confidence, or not at all, in a short (near real-time) window. The IP address of an attack can easily be faked, reducing the confidence in the information obtained from it, so the origin of an attack can only be determined with low confidence. This determination is itself subjective. All the evaluations of the classes in this paper are subjective, but due to the very basic grouping (High, Low or Not Quantifiable) a subjective value can be used. The complexity of the taxonomy can be significantly reduced if only classes with high perceptive accuracy are used.
- Full Text:
On the viability of pro-active automated PII breach detection: A South African case study
- Swart, Ignus, Irwin, Barry V W, Grobler, Marthie
- Authors: Swart, Ignus , Irwin, Barry V W , Grobler, Marthie
- Date: 2014
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430235 , vital:72676 , https://doi.org/10.1145/2664591.2664600
- Description: Various reasons exist why certain types of information are deemed personal, both by legislation and by society. While crimes such as identity theft and impersonation have always existed, the rise of the internet and social media has exacerbated the problem. South Africa has recently joined the growing ranks of countries passing legislation to ensure the privacy of certain types of data. As is the case with most implemented security enforcement systems, most appointed privacy regulators operate in a reactive way. While this is a completely acceptable method of operation, it is not the most efficient. Research has shown that most data leaks containing personal information remain available for more than a month on average before being detected and reported. Quite often the data is discovered by a third party who elects to notify the responsible organisation but could just as easily copy the data and make use of it. This paper demonstrates the potential benefit a privacy regulator can expect to see by implementing pro-active detection of electronic personally identifiable information (PII). Adopting pro-active detection of PII exposed on public networks can potentially contribute to a significant reduction in exposure time. The results discussed in this paper were obtained by means of experimentation on a custom-created PII detection system.
- Full Text:
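The detection step of such a system can be sketched with simple regular expressions. The patterns, function name, and sample text below are hypothetical, not the paper's implementation; a production scanner would validate candidate matches (for example via checksum digits) before reporting them.

```python
import re

# Illustrative patterns for two PII types found in public data leaks.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SA_ID = re.compile(r"\b\d{13}\b")  # 13-digit shape of a South African ID number

def scan_for_pii(text: str) -> dict:
    """Return candidate PII matches found in a block of text."""
    return {
        "emails": EMAIL.findall(text),
        "id_numbers": SA_ID.findall(text),
    }

leak = "Contact jane@example.org, ID 8001015009087, ref 12345."
found = scan_for_pii(leak)
print(found["emails"], found["id_numbers"])  # ['jane@example.org'] ['8001015009087']
```

Running scans like this continuously against public sources is what shortens the month-long average exposure window the abstract describes.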