An exploration of geolocation and traffic visualisation using network flows
- Pennefather, Sean, Irwin, Barry V W
- Authors: Pennefather, Sean , Irwin, Barry V W
- Date: 2014
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429597 , vital:72625 , 10.1109/ISSA.2014.6950
- Description: A network flow is a data record that represents characteristics associated with a unidirectional stream of packets transmitted between two hosts using an IP layer protocol. As a network flow only represents statistics relating to the data transferred in the stream, the effectiveness of utilizing network flows for traffic visualization to aid in cyber defense is not immediately apparent and needs further exploration. The goal of this research is to explore the use of network flows for data visualization and geolocation.
- Full Text:
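To make the flow abstraction above concrete, the following minimal sketch (not the paper's implementation) shows the kind of fields a unidirectional flow record carries and how a source address might be geolocated. It assumes the MaxMind geoip2 Python library and a local GeoLite2-City.mmdb database; the field names are illustrative.

```python
# Minimal sketch (not the paper's implementation): a unidirectional flow
# record and a geolocation lookup using the MaxMind geoip2 library.
# Assumes a local GeoLite2-City.mmdb database file is available.
from dataclasses import dataclass

import geoip2.database  # pip install geoip2
import geoip2.errors


@dataclass
class FlowRecord:
    """Statistics for one unidirectional stream between two hosts."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int      # IP protocol number, e.g. 6 = TCP
    packets: int
    octets: int
    start_ts: float    # epoch seconds
    end_ts: float


def geolocate(ip: str, reader: geoip2.database.Reader):
    """Return (latitude, longitude, country) for an IP, or None if unknown."""
    try:
        rec = reader.city(ip)
        return rec.location.latitude, rec.location.longitude, rec.country.iso_code
    except geoip2.errors.AddressNotFoundError:
        return None


if __name__ == "__main__":
    flow = FlowRecord("196.21.0.1", "203.0.113.7", 51512, 80, 6, 12, 9400, 0.0, 4.2)
    with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
        print(flow, geolocate(flow.src_ip, reader))
```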
Design of a Network Packet Processing platform
- Pennefather, Sean, Irwin, Barry V W
- Authors: Pennefather, Sean , Irwin, Barry V W
- Date: 2014
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427901 , vital:72472 , https://www.researchgate.net/profile/Barry-Ir-win/publication/327622772_Design_of_a_Network_Packet_Processing_platform/links/5b9a187f92851c4ba8181bd6/Design-of-a-Network-Packet-Processing-platform.pdf
- Description: This paper describes the design considerations investigated in the implementation of a prototype embedded network packet processing platform. The purpose of this system is to provide a means for researchers to process and manipulate network traffic using an embedded standalone hardware platform, with the provision that it be soft-configurable and flexible in its functionality. The performance of the Ethernet layer subsystem, implemented using XMOS MCUs, is investigated. Future applications of this prototype are discussed.
- Full Text:
Human perception of the measurement of a network attack taxonomy in near real-time
- Van Heerden, Renier, Malan, Mercia M, Mouton, Francois, Irwin, Barry V W
- Authors: Van Heerden, Renier , Malan, Mercia M , Mouton, Francois , Irwin, Barry V W
- Date: 2014
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429924 , vital:72652 , https://doi.org/10.1007/978-3-662-44208-1_23
- Description: This paper investigates how the measurement of a network attack taxonomy can be related to human perception. Network attacks do not have a time limitation, but the earlier an attack is detected, the more damage can be prevented and the more preventative actions can be taken. This paper evaluates how elements of network attacks can be measured in near real-time (60 seconds). The taxonomy we use was developed by van Heerden et al (2012) and comprises over 100 classes. These classes represent both the attacker's and the defender's points of view. The degree to which each class can be quantified or measured is determined by investigating the accuracy of various assessment methods. We classify each class as either defined, high, low or not quantifiable. For example, it may not be possible to determine the instigator of an attack (Aggressor), but only that the attack has been launched by a Hacker (Actor). Some classes can only be quantified with low confidence, or not at all, within such a short (near real-time) window. The IP address of an attack can easily be faked, which reduces the confidence in the information obtained from it, so the origin of an attack can only be determined with low confidence. This determination is itself subjective. All the evaluations of the classes in this paper are subjective, but because the grouping is very coarse (High, Low or Not Quantifiable) a subjective value can be used. The complexity of the taxonomy can be significantly reduced if only classes with a high perceptive accuracy are used.
- Full Text:
On the viability of pro-active automated PII breach detection: A South African case study
- Swart, Ignus, Irwin, Barry V W, Grobler, Marthie
- Authors: Swart, Ignus , Irwin, Barry V W , Grobler, Marthie
- Date: 2014
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430235 , vital:72676 , https://doi.org/10.1145/2664591.2664600
- Description: Various reasons exist why certain types of information are deemed personal, both by legislation and by society. While crimes such as identity theft and impersonation have always existed, the rise of the internet and social media has exacerbated the problem. South Africa has recently joined the growing ranks of countries passing legislation to ensure the privacy of certain types of data. As is the case with most implemented security enforcement systems, most appointed privacy regulators operate in a reactive way. While this is a completely acceptable method of operation, it is not the most efficient. Research has shown that most data leaks containing personal information remain available for more than a month on average before being detected and reported. Quite often the data is discovered by a third party who may choose to notify the responsible organisation, but can just as easily copy the data and make use of it. This paper demonstrates the potential benefit a privacy regulator can expect from implementing pro-active detection of electronic personally identifiable information (PII). Adopting pro-active detection of PII exposed on public networks can potentially contribute to a significant reduction in exposure time. The results discussed in this paper were obtained by means of experimentation on a custom created PII detection system.
- Full Text:
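As an illustration of pattern-based PII detection of the kind the paper argues for, the following sketch scans text for e-mail addresses and 13-digit South African ID numbers. The regular expressions are illustrative assumptions and not the authors' detection system.

```python
# Illustrative sketch only (not the authors' detection system): scan text for
# two common South African PII patterns -- email addresses and 13-digit
# national ID numbers -- as a pro-active detector might.
import re

EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
# 13-digit SA ID: YYMMDD followed by 7 digits (structure only; no checksum check here)
SA_ID_RE = re.compile(r"\b\d{2}[01]\d[0-3]\d{8}\b")


def find_pii(text: str) -> dict:
    """Return candidate PII matches grouped by type."""
    return {
        "emails": EMAIL_RE.findall(text),
        "id_numbers": SA_ID_RE.findall(text),
    }


if __name__ == "__main__":
    sample = "Contact jane.doe@example.co.za, ID 8001015009087, for details."
    print(find_pii(sample))
```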
Testing antivirus engines to determine their effectiveness as a security layer
- Haffejee, Jameel, Irwin, Barry V W
- Authors: Haffejee, Jameel , Irwin, Barry V W
- Date: 2014
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429673 , vital:72631 , 10.1109/ISSA.2014.6950496
- Description: This research was undertaken to empirically test the assumption that it is trivial to bypass an antivirus application, and to gauge the effectiveness of antivirus engines when faced with a number of known evasion techniques. A known malicious binary was combined with evasion techniques and deployed against several antivirus engines to test their detection ability. The research also documents the process of setting up an environment for testing antivirus engines, as well as building the evasion techniques used in the tests. This environment facilitated the empirical testing needed to determine whether the assumption that antivirus security controls can easily be bypassed holds. The results of the empirical tests are presented and demonstrate that an attacker can indeed evade multiple antivirus engines without much effort. As such, while an antivirus application is useful for protecting against known threats, it does not work as effectively against unknown threats.
- Full Text:
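The following is a minimal sketch of the kind of harness described, assuming ClamAV's clamscan CLI as a stand-in for the commercial engines actually tested; the sample directory and file naming are hypothetical.

```python
# Minimal harness sketch, not the authors' test environment: scan each payload
# variant with ClamAV's clamscan CLI and record whether it was flagged.
# ClamAV stands in here for the several engines tested in the paper.
import subprocess
from pathlib import Path


def is_detected(sample: Path) -> bool:
    """clamscan exits 0 for clean files and 1 when a signature matches."""
    result = subprocess.run(
        ["clamscan", "--no-summary", str(sample)],
        capture_output=True,
        text=True,
    )
    return result.returncode == 1


if __name__ == "__main__":
    # Hypothetical directory holding the same malicious binary wrapped in
    # different evasion techniques (packing, encoding, etc.).
    samples_dir = Path("evasion_variants")
    for sample in sorted(samples_dir.glob("*.bin")):
        status = "detected" if is_detected(sample) else "evaded"
        print(f"{sample.name}: {status}")
```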
Towards a platform to visualize the state of South Africa's information security
- Swart, Ignus, Irwin, Barry V W, Grobler, Marthie
- Authors: Swart, Ignus , Irwin, Barry V W , Grobler, Marthie
- Date: 2014
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429688 , vital:72632 , 10.1109/ISSA.2014.6950511
- Description: Attacks via the Internet infrastructure are increasingly a daily occurrence, and South Africa is no exception. In response, certain governments have published strategies pertaining to information security on a national level. These policies aim to ensure that critical infrastructure is protected and that there is a move towards a greater state of information security readiness. This is also the case for South Africa, where a variety of policy initiatives have started to gain momentum. While establishing strategy and policy is essential, ensuring their implementation is often difficult and dependent on the availability of resources. This is even more so in the case of information security, since virtually all standardized security improvement processes start by specifying that a proper inventory is required of all hardware, software, people and processes. While this may be achievable at an organizational level, it is far more challenging on a national level. In this paper, the authors examine the possibility of using available data sources to build such an inventory of infrastructure on a national level and to visualize, at least partially, the state of a country's information security.
- Full Text:
Towards a Sandbox for the Deobfuscation and Dissection of PHP Malware
- Wrench, Peter M, Irwin, Barry V W
- Authors: Wrench, Peter M , Irwin, Barry V W
- Date: 2014
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429700 , vital:72633 , 10.1109/ISSA.2014.6950504
- Description: The creation and proliferation of PHP-based Remote Access Trojans (or web shells) used in both the compromise and post exploitation of web platforms has fuelled research into automated methods of dissecting and analysing these shells. Current malware tools disguise themselves by making use of obfuscation techniques designed to frustrate any efforts to dissect or reverse engineer the code. Advanced code engineering can even cause malware to behave differently if it detects that it is not running on the system for which it was originally targeted. To combat these defensive techniques, this paper presents a sandbox-based environment that aims to accurately mimic a vulnerable host and is capable of semi-automatic semantic dissection and syntactic deobfuscation of PHP code.
- Full Text:
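As a small illustration of the syntactic deobfuscation step mentioned above, the sketch below repeatedly unwraps the common eval(base64_decode('...')) pattern found in PHP web shells. It shows only one layer-stripping step, not the sandbox environment the paper presents.

```python
# Tiny illustrative sketch of one syntactic deobfuscation step, not the
# sandbox described in the paper: repeatedly unwrap the common
# eval(base64_decode('...')) pattern found in PHP web shells.
import base64
import re

EVAL_B64_RE = re.compile(
    r"eval\s*\(\s*base64_decode\s*\(\s*['\"]([A-Za-z0-9+/=]+)['\"]\s*\)\s*\)",
    re.IGNORECASE,
)


def strip_layers(php_source: str, max_layers: int = 10) -> str:
    """Decode nested eval(base64_decode(...)) layers until none remain."""
    for _ in range(max_layers):
        match = EVAL_B64_RE.search(php_source)
        if not match:
            break
        php_source = base64.b64decode(match.group(1)).decode("utf-8", "replace")
    return php_source


if __name__ == "__main__":
    inner = "<?php system($_GET['cmd']); ?>"
    encoded = base64.b64encode(inner.encode()).decode()
    shell = f"<?php eval(base64_decode('{encoded}')); ?>"
    print(strip_layers(shell))
```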
A baseline study of potentially malicious activity across five network telescopes
- Authors: Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429714 , vital:72634 , https://ieeexplore.ieee.org/abstract/document/6568378
- Description: This paper explores the Internet Background Radiation (IBR) observed across five distinct network telescopes over a 15 month period. These network telescopes each consist of a /24 netblock and are deployed in IP space administered by TENET, the tertiary education network in South Africa, covering three numerically distant /8 network blocks. The differences and similarities in the observed network traffic are explored. Two anecdotal case studies are presented relating to the MS08-067 and MS12-020 vulnerabilities in the Microsoft Windows platforms. The first of these is related to the Conficker worm outbreak in 2008, and traffic targeting 445/tcp remains one of the top constituents of IBR as observed on the telescopes. The case of MS12-020 is of interest, as a long period of scanning activity targeting 3389/tcp, used by the Microsoft RDP service, was observed, with a significant drop in activity around the release of the security advisory and patch. Other areas of interest are highlighted, particularly where correlation in scanning activity was observed across the sensors. The paper concludes with some discussion on the application of network telescopes as part of a cyber-defence solution.
- Full Text:
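A sketch of the kind of per-port tally behind observations such as the dominance of 445/tcp follows, assuming the Scapy library and a placeholder capture file name; it is not the study's actual toolchain.

```python
# Small sketch (not the study's toolchain): tally TCP destination ports seen
# in a network telescope capture, the kind of count behind the observation
# that 445/tcp dominates Internet Background Radiation.
from collections import Counter

from scapy.all import TCP, rdpcap  # pip install scapy


def port_counts(pcap_path: str) -> Counter:
    counts: Counter = Counter()
    for pkt in rdpcap(pcap_path):
        if TCP in pkt:
            counts[pkt[TCP].dport] += 1
    return counts


if __name__ == "__main__":
    # "telescope.pcap" is a placeholder for one sensor's capture file.
    for port, n in port_counts("telescope.pcap").most_common(10):
        print(f"{port}/tcp: {n}")
```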
A high-level architecture for efficient packet trace analysis on gpu co-processors
- Nottingham, Alastair, Irwin, Barry V W
- Authors: Nottingham, Alastair , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429572 , vital:72623 , 10.1109/ISSA.2013.6641052
- Description: This paper proposes a high-level architecture to support efficient, massively parallel packet classification, filtering and analysis using commodity Graphics Processing Unit (GPU) hardware. The proposed architecture aims to provide a flexible and efficient parallel packet processing and analysis framework, supporting complex programmable filtering, data mining operations, statistical analysis functions and traffic visualisation, with minimal CPU overhead. In particular, this framework aims to provide a robust set of high-speed analysis functions, in order to dramatically reduce the time required to process and analyse extremely large network traces. This architecture derives from initial research, which has shown GPU co-processors to be effective in accelerating packet classification to terabit speeds with minimal CPU overhead, far exceeding the bandwidth capacity between standard long-term storage and the GPU device. This paper provides a high-level overview of the proposed architecture and its primary components, motivated by the results of prior research in the field.
- Full Text:
A kernel-driven framework for high performance internet routing simulation
- Herbert, Alan, Irwin, Barry V W
- Authors: Herbert, Alan , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429585 , vital:72624 , 10.1109/ISSA.2013.6641048
- Description: The ability to simulate packets traversing an internet path is an integral part of providing realistic simulations for network training and cyber defence exercises. This paper builds on previous work and considers an in-kernel approach to solving the routing simulation problem. The in-kernel approach is anticipated to allow the framework to achieve throughput rates of 1GB/s or higher using commodity hardware. Processes that run outside the context of the kernel of most operating systems require context switching to access hardware and kernel modules. This leads to considerable delays in processes, such as network simulators, that frequently access hardware for operations such as hard disk access and network packet handling. To mitigate this problem, as experienced with earlier implementations, this research looks towards implementing a kernel module to handle network routing and simulation within a UNIX-based system. This would remove delays incurred by context switching and allow direct access to the hardware components of the host.
- Full Text:
A source analysis of the conficker outbreak from a network telescope.
- Authors: Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429742 , vital:72636 , 10.23919/SAIEE.2013.8531865
- Description: This paper discusses a dataset of some 16 million packets targeting port 445/tcp collected by a network telescope utilising a /24 netblock in South African IP address space. An initial overview of the collected data is provided. This is followed by a detailed analysis of the packet characteristics observed, including size and TTL. The peculiarities of the observed target selection and the results of the flaw in the Conficker worm's propagation algorithm are presented. An analysis of the 4 million observed source hosts is reported, grouped by both packet counts and the number of distinct hosts per network address block. Address blocks of size /8, /16 and /24 are used for the groupings. The localisation, by geographic region and numerical proximity, of high-ranking aggregate netblocks is highlighted. The shift in geopolitical origins observed during the evolution of the Conficker worm is also discussed. The paper concludes with some overall analyses, and consideration of the application of network telescopes to the monitoring of such outbreaks in the future.
- Full Text:
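The netblock aggregation described above can be illustrated with the standard library alone: the sketch below groups observed source addresses into /8, /16 and /24 blocks and ranks blocks by distinct host count. The sample addresses are placeholders.

```python
# Sketch of the netblock aggregation described above, using only the standard
# library: group observed source IPs into /8, /16 and /24 blocks and rank the
# blocks by distinct host count.
from collections import defaultdict
from ipaddress import ip_network


def aggregate(sources, prefix_len):
    """Map each /prefix_len block to the set of distinct hosts seen in it."""
    blocks = defaultdict(set)
    for ip in sources:
        block = ip_network(f"{ip}/{prefix_len}", strict=False)
        blocks[block].add(ip)
    return blocks


if __name__ == "__main__":
    observed = ["198.51.100.7", "198.51.100.9", "203.0.113.20", "198.51.7.1"]
    for plen in (8, 16, 24):
        ranked = sorted(aggregate(observed, plen).items(),
                        key=lambda kv: len(kv[1]), reverse=True)
        top_block, hosts = ranked[0]
        print(f"/{plen}: top block {top_block} with {len(hosts)} distinct hosts")
```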
Automated classification of computer network attacks
- van Heerden, Renier, Leenen, Louise, Irwin, Barry V W
- Authors: van Heerden, Renier , Leenen, Louise , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429622 , vital:72627 , 10.1109/ICASTech.2013.6707510
- Description: In this paper we demonstrate how an automated reasoner, HermiT, is used to classify instances of computer network-based attacks in conjunction with a network attack ontology. The ontology describes different types of network attacks through classes and inter-class relationships and has previously been implemented in the Protege ontology editor. Two significant recent instances of network-based attacks are presented as individuals in the ontology and correctly classified by the automated reasoner according to the relevant types of attack scenarios depicted in the ontology. The two network attack instances are the Distributed Denial of Service attack on SpamHaus in 2013 and the theft of 42 million Rand ($6.7 million) from South African Postbank in 2012.
- Full Text:
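A minimal sketch of reasoner-driven classification follows, using Owlready2, whose sync_reasoner() bundles the HermiT reasoner; the ontology file name and the individuals shown are placeholders, not the authors' actual attack ontology.

```python
# Minimal sketch of reasoner-driven classification with Owlready2, whose
# sync_reasoner() bundles the HermiT reasoner. The ontology file and the
# individuals printed are placeholders, not the authors' actual ontology.
from owlready2 import get_ontology, sync_reasoner  # pip install owlready2

# Hypothetical local copy of a network attack ontology exported from Protege.
onto = get_ontology("file://network_attack.owl").load()

with onto:
    sync_reasoner()  # runs HermiT and asserts the inferred class hierarchy

# After reasoning, each individual's inferred classes are available.
for individual in onto.individuals():
    classes = [cls.name for cls in individual.is_a if hasattr(cls, "name")]
    print(individual.name, "->", classes)
```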
Classification of security operation centers
- Jacobs, Pierre, Arnab, Alapan, Irwin, Barry V W
- Authors: Jacobs, Pierre , Arnab, Alapan , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429635 , vital:72628 , 10.1109/ISSA.2013.6641054
- Description: Security Operation Centers (SOCs) are a necessary service for organisations that want to address compliance and threat management. While there are frameworks in existence that address the technology aspects of these services, a holistic framework addressing processes, staffing and technology does not currently exist. Additionally, it would be useful for organizations and constituents considering building, buying or selling these services to measure the effectiveness and maturity of the provided services. In this paper, we propose a classification and rating scheme for SOC services, evaluating both the capabilities and the maturity of the services offered.
- Full Text:
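Purely as an illustration of a capability-and-maturity rating of SOC services, the sketch below scores a small service catalogue; the services, levels and weighting are invented for the example and are not the scheme proposed in the paper.

```python
# Illustrative sketch only: scoring a SOC service catalogue on capability and
# maturity. The service names, levels and weighting are invented for the
# example and are not the classification scheme proposed in the paper.
SERVICES = {
    # service: (capability 0-5, maturity 0-5)
    "log collection": (4, 3),
    "incident response": (3, 2),
    "threat intelligence": (2, 1),
    "vulnerability management": (3, 3),
}


def overall_rating(services: dict) -> float:
    """Average of per-service capability x maturity, normalised to 0-1."""
    scores = [cap * mat / 25.0 for cap, mat in services.values()]
    return sum(scores) / len(scores)


if __name__ == "__main__":
    print(f"overall rating: {overall_rating(SERVICES):.2f}")
```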
Classification of security operation centers
- Jacobs, Pierre, Arnab, Alapan, Irwin, Barry V W
- Authors: Jacobs, Pierre , Arnab, Alapan , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429785 , vital:72639 , 10.1109/ISSA.2013.6641054
- Description: Security Operation Centers (SOCs) are a necessary service for organisations that want to address compliance and threat management. While there are frameworks in existence that address the technology aspects of these services, a holistic framework addressing processes, staffing and technology does not currently exist. Additionally, it would be useful for organizations and constituents considering building, buying or selling these services to measure the effectiveness and maturity of the provided services. In this paper, we propose a classification and rating scheme for SOC services, evaluating both the capabilities and the maturity of the services offered.
- Full Text:
Deep Routing Simulation
- Irwin, Barry V W, Herbert, Alan
- Authors: Irwin, Barry V W , Herbert, Alan
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430353 , vital:72685 , https://www.academic-bookshop.com/ourshop/prod_2546879-ICIW-2013-8th-International-Conference-on-Information-Warfare-and-Security.html
- Description: This paper discusses a dataset of some 16 million packets targeting port 445/tcp collected by a network telescope utilising a /24 netblock in South African IP address space. An initial overview of the collected data is provided. This is followed by a detailed analysis of the packet characteristics observed, including size and TTL. The peculiarities of the observed target selection and the results of the flaw in the Conficker worm's propagation algorithm are presented. An analysis of the 4 million observed source hosts is reported, grouped by both packet counts and the number of distinct hosts per network address block. Address blocks of size /8, /16 and /24 are used for the groupings. The localisation, by geographic region and numerical proximity, of high-ranking aggregate netblocks is highlighted. The shift in geopolitical origins observed during the evolution of the Conficker worm is also discussed. The paper concludes with some overall analyses, and consideration of the application of network telescopes to the monitoring of such outbreaks in the future.
- Full Text:
Developing a virtualised testbed environment in preparation for testing of network based attacks
- Van Heerden, Renier, Pieterse, Heloise, Burke, Ivan, Irwin, Barry V W
- Authors: Van Heerden, Renier , Pieterse, Heloise , Burke, Ivan , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429648 , vital:72629 , 10.1109/ICASTech.2013.6707509
- Description: Computer network attacks are difficult to simulate due to the damage they may cause to live networks and the complexity required to simulate a useful network. We constructed a virtualised network within a vSphere ESXi environment which is able to simulate thirty workstations, ten servers, three distinct network segments and the accompanying network traffic. The vSphere environment provided added benefits, such as the ability to pause, restart and snapshot virtual computers. These abilities enabled the authors to reset the simulation environment before each test and mitigated the damage that an attack potentially inflicts on the test network. Without simulated network traffic, the virtualised network was too sterile. This resulted in any network event being a simple task to detect, making network traffic simulation a requirement for an event detection test bed. Five main kinds of traffic were simulated: web browsing, file transfer, e-mail, version control and intranet file traffic. The simulated traffic volumes were pseudo-randomised to represent differing temporal patterns. By building a virtualised network with simulated traffic we were able to test IDSs and other network attack detection sensors in a much more realistic environment before moving them to a live network. The goal of this paper is to present a virtualised testbed environment in which network attacks can safely be tested.
- Full Text:
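The pseudo-randomised traffic scheduling mentioned above might look like the following sketch, which draws exponentially distributed gaps for each of the five traffic classes; the rates are illustrative assumptions, not the testbed's actual generator.

```python
# Sketch of pseudo-randomised background-traffic scheduling of the kind the
# testbed needed; the traffic classes are from the paper, but the rates are
# illustrative and this is not the testbed's actual generator.
import random

TRAFFIC_CLASSES = {
    # class: mean seconds between events (smaller = busier)
    "web browsing": 5,
    "file transfer": 60,
    "e-mail": 120,
    "version control": 300,
    "intranet file": 45,
}


def schedule(duration_s: int, seed: int = 1) -> list:
    """Return (time, traffic_class) events over duration_s seconds."""
    rng = random.Random(seed)
    events = []
    for cls, mean_gap in TRAFFIC_CLASSES.items():
        t = rng.expovariate(1.0 / mean_gap)
        while t < duration_s:
            events.append((round(t, 1), cls))
            t += rng.expovariate(1.0 / mean_gap)
    return sorted(events)


if __name__ == "__main__":
    for t, cls in schedule(300)[:10]:
        print(f"{t:7.1f}s  {cls}")
```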
Real-time distributed malicious traffic monitoring for honeypots and network telescopes
- Hunter, Samuel O, Irwin, Barry V W, Stalmans, Etienne
- Authors: Hunter, Samuel O , Irwin, Barry V W , Stalmans, Etienne
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429660 , vital:72630 , 10.1109/ISSA.2013.6641050
- Description: Network telescopes and honeypots have been used with great success to record malicious network traffic for analysis; however, this is often done off-line, well after the traffic was observed. This has left us with only a cursory understanding of malicious hosts and no knowledge of the software they run, their uptime or other malicious activity they may have participated in. This work covers a messaging framework (rDSN) that was developed to allow for the real-time analysis of malicious traffic. This data was captured from multiple, distributed honeypots and network telescopes. Data was collected over a period of two months from these data sensors. Using this data, new techniques for malicious host analysis and re-identification in dynamic IP address space were explored. An Automated Reconnaissance (AR) Framework was developed to aid the process of data collection; this framework was responsible for gathering information from malicious hosts through both passive and active fingerprinting techniques. From the analysis of this data, correlations between malicious hosts were identified based on characteristics such as operating system, targeted service, location and services running on the malicious hosts. An initial investigation into Latency Based Multilateration (LBM), a novel technique to assist in host re-identification, was tested and proved successful as a supporting metric for host re-identification.
- Full Text:
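To illustrate the kind of real-time distribution a messaging framework such as rDSN provides, the sketch below publishes sensor observations over a pyzmq publish/subscribe channel; it is an analogy under assumed endpoints and field names, not the rDSN API.

```python
# Sketch of real-time event distribution using pyzmq publish/subscribe; this
# only illustrates the idea behind the rDSN messaging framework and is not
# its actual API. Run the two roles in separate processes.
import json
import time

import zmq  # pip install pyzmq


def run_sensor(endpoint: str = "tcp://*:5556") -> None:
    """Publish one observation per malicious packet seen by a sensor."""
    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.bind(endpoint)
    time.sleep(0.2)  # allow subscribers to connect before publishing
    event = {
        "sensor": "telescope-01",      # placeholder sensor name
        "src_ip": "203.0.113.50",
        "dst_port": 445,
        "timestamp": time.time(),
    }
    pub.send_multipart([b"malicious", json.dumps(event).encode()])


def run_analyser(endpoint: str = "tcp://localhost:5556") -> None:
    """Subscribe to observations and hand them to real-time analysis."""
    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.connect(endpoint)
    sub.setsockopt(zmq.SUBSCRIBE, b"malicious")
    topic, payload = sub.recv_multipart()
    print(topic.decode(), json.loads(payload))
```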
Towards a GPU accelerated virtual machine for massively parallel packet classification and filtering
- Nottingham, Alastair, Irwin, Barry V W
- Authors: Nottingham, Alastair , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430295 , vital:72681 , https://doi.org/10.1145/2513456.2513504
- Description: This paper considers the application of GPU co-processors to accelerate the analysis of packet data, particularly within extremely large packet traces spanning months or years of traffic. Discussion focuses on the construction, performance and limitations of the experimental GPF (GPU Packet Filter), which employs a prototype massively-parallel protocol-independent multi-match algorithm to rapidly compare packets against multiple arbitrary filters. The paper concludes with a consideration of mechanisms to expand the flexibility and power of the GPF algorithm to construct a fully programmable GPU packet classification virtual machine, which can perform massively parallel classification, data-mining and data-transformation to explore and analyse packet traces. This virtual machine is a component of a larger framework of capture analysis tools which together provide capture indexing, manipulation, filtering and visualisation functions.
- Full Text:
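The data-parallel multi-match idea behind GPF can be illustrated with a CPU-side NumPy analogy: every packet is compared against every filter in one vectorised operation. This is only an analogy under simplified header fields, not the GPU implementation.

```python
# CPU-side analogy of the data-parallel multi-match idea behind GPF, not the
# GPU implementation itself: compare every packet against every filter in one
# vectorised operation using NumPy.
import numpy as np

# Each packet reduced to (protocol, dst_port); values are illustrative.
packets = np.array([
    (6, 445),   # TCP/445
    (6, 80),    # TCP/80
    (17, 53),   # UDP/53
    (6, 3389),  # TCP/3389
], dtype=[("proto", "u1"), ("dport", "u2")])

# Filters as (protocol, dst_port) pairs; a real classifier supports richer
# predicates, this only shows the all-packets x all-filters comparison.
filters = np.array([(6, 445), (6, 3389), (17, 53)],
                   dtype=[("proto", "u1"), ("dport", "u2")])

# Broadcast to an (n_packets, n_filters) boolean match matrix.
match = (packets["proto"][:, None] == filters["proto"][None, :]) & \
        (packets["dport"][:, None] == filters["dport"][None, :])

for i, row in enumerate(match):
    hits = np.nonzero(row)[0].tolist()
    print(f"packet {i} matches filters {hits}")
```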
Visualization of a data leak
- Swart, Ignus, Grobler, Marthie, Irwin, Barry V W
- Authors: Swart, Ignus , Grobler, Marthie , Irwin, Barry V W
- Date: 2013
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428584 , vital:72522 , 10.1109/ISSA.2013.6641046
- Description: The potential impact that data leakage can have on a country, both on a national level as well as on an individual level, can be wide reaching and potentially catastrophic. In January 2013, several South African companies became the target of a hack attack, resulting in the breach of security measures and the leaking of a claimed 700,000 records. The affected companies are spread across a number of domains, making the leak very wide in its impact. The aim of this paper is to analyze the data released from the South African breach and to visualize the extent of the loss by the companies affected. The value of this work lies in its connection to and interpretation of related South African legislation. The data extracted during the analysis is primarily personally identifiable information, such as defined by the Electronic Communications and Transactions Act of 2002 and the Protection of Personal Information Bill of 2009.
- Full Text:
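As one simple first step towards such a visualization, the sketch below tallies exposed records per e-mail domain from a leaked dataset; the CSV layout and filename are hypothetical.

```python
# Small sketch of one visualisation step for a leaked dataset: tally exposed
# records per e-mail domain. The CSV layout and filename are hypothetical.
import csv
from collections import Counter


def records_per_domain(csv_path: str) -> Counter:
    counts: Counter = Counter()
    with open(csv_path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            email = row.get("email", "")
            if "@" in email:
                counts[email.rsplit("@", 1)[1].lower()] += 1
    return counts


if __name__ == "__main__":
    for domain, n in records_per_domain("leak.csv").most_common(10):
        print(f"{domain:30s} {n:6d}")
```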
A computer network attack taxonomy and ontology
- Van Heerden, Renier P, Irwin, Barry V W, Burke, Ivan D, Leenen, Louise
- Authors: Van Heerden, Renier P , Irwin, Barry V W , Burke, Ivan D , Leenen, Louise
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430064 , vital:72663 , DOI: 10.4018/ijcwt.2012070102
- Description: Computer network attacks differ in the motivation of the entity behind the attack, the execution and the end result. The diversity of attacks has the consequence that no standard classification exists. The benefit of automated classification of attacks is that an attack could be mitigated accordingly. The authors extend a previous, initial taxonomy of computer network attacks, which forms the basis of a proposed network attack ontology in this paper. The objective of this ontology is to automate the classification of a network attack during its early stages. Most published taxonomies present an attack from either the attacker's or defender's point of view. The authors' taxonomy presents both these points of view. The framework for an ontology was developed using a core class, the "Attack Scenario", which can be used to characterize and classify computer network attacks.
- Full Text: