An investigation into the current state of web based cryptominers and cryptojacking
- Authors: Len, Robert
- Date: 2021-04
- Subjects: Cryptocurrencies , Malware (Computer software) , Computer networks -- Security measures , Computer networks -- Monitoring , Cryptomining , Coinhive , Cryptojacking
- Language: English
- Type: thesis , text , Masters , MSc
- Identifier: http://hdl.handle.net/10962/178248 , vital:42924
- Description: The aim of this research was to review the current state and extent of surreptitious cryptomining software and its prevalence as a means of income generation. Income is generated by using a viewer's browser to execute custom JavaScript code that mines cryptocurrencies such as Monero and Bitcoin. The research aimed to measure the prevalence of illicit mining scripts used for "in-browser" cryptojacking, while further analysing the ecosystems that support the cryptomining environment. The scope of the research covers the content (or type) of the sites hosting malicious "in-browser" cryptomining software, the occurrence of the currencies being mined, and the analysis of cryptomining code samples. The research compares the results of previous work with the current state of affairs since the closure of Coinhive in March 2019; Coinhive was at the time the market leader in such web-based mining services. Beyond analysing the prevalence of cryptomining on the web today, the methodologies and techniques used to detect and counteract cryptomining are also investigated. This includes the most recent developments in malicious JavaScript de-obfuscation, as well as cryptomining signature creation and detection. Methodologies for heuristic identification of JavaScript behaviour, and the subsequent identification of potentially malicious outliers, are also included in the countermeasure analysis. The research revealed that, although no longer functional, Coinhive remained the most prevalent script used for "in-browser" cryptomining services. While remaining the most prevalent, there was nevertheless a significant decline in overall occurrences compared to when coinhive.com was operational. The ecosystem hosting "in-browser" mining websites was found to be distributed both geographically and in terms of domain categorisations. , Thesis (MSc) -- Faculty of Science, Computer Science, 2021
- Full Text:
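The detection work summarised above includes signature creation and detection for known mining scripts. As a hedged illustration only (not code from the thesis), the sketch below checks a page's HTML for a few well-known miner loader references such as the retired coinhive.min.js; the signature list and function name are assumptions made for this example.

```python
import re
import urllib.request

# Hypothetical signature list: substrings historically associated with
# in-browser mining loaders (illustrative only, not exhaustive).
MINER_SIGNATURES = [
    r"coinhive\.min\.js",
    r"CoinHive\.Anonymous",
    r"cryptoloot",
    r"coin-hive\.com",
]

def scan_page_for_miners(url: str) -> list[str]:
    """Fetch a page and return the miner signatures found in its HTML."""
    with urllib.request.urlopen(url, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace")
    return [sig for sig in MINER_SIGNATURES if re.search(sig, html, re.IGNORECASE)]

if __name__ == "__main__":
    hits = scan_page_for_miners("http://example.com/")
    print("Possible cryptojacking indicators:", hits or "none")
```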
A comparative study of CERBER, MAKTUB and LOCKY Ransomware using a Hybridised-Malware analysis
- Authors: Schmitt, Veronica
- Date: 2019
- Subjects: Microsoft Windows (Computer file) , Data protection , Computer crimes -- Prevention , Computer security , Computer networks -- Security measures , Computers -- Access control , Malware (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92313 , vital:30702
- Description: There has been a significant increase in the prevalence of Ransomware attacks in the preceding four years to date. This indicates that the battle has not yet been won in defending against this class of malware. This research proposes that by identifying the similarities within the operational framework of Ransomware strains, a better overall understanding of their operation and function can be achieved. This, in turn, will aid in a quicker response to future attacks. With the average Ransomware attack taking two hours to be identified, it is clear that there is not yet a complete understanding as to why these attacks are so successful. Research into Ransomware is limited by what is currently known on the topic. Due to the limitations of the research, the decision was taken to examine only three samples of Ransomware from different families. This was decided due to the complexity and comprehensive nature of the research. The in-depth nature of the research and the time constraints associated with it did not allow for a proof of concept of this framework to be tested on more than three families, but the exploratory work was promising and should be explored further in future research. The aim of the research is to follow the Hybrid-Malware analysis framework, which consists of both static and dynamic analysis phases, in addition to the digital forensic examination of the infected system. This allows for signature-based findings, along with behavioural and forensic findings, all in one. This information allows for a better understanding of how this malware is designed and how it infects and remains persistent on a system. The operating system chosen is Microsoft Windows 7, which is still used by a significant proportion of Windows users, especially in the corporate environment. The experiment process was designed to enable the researcher to collect information regarding the Ransomware and every aspect of its behaviour and communication on a target system. The results can be compared across the three strains to identify the commonalities. The initial hypothesis was that Ransomware variants are all much like an instant cake mix: they consist of specific building blocks which remain the same, with the flavouring of the cake mix being the unique feature.
- Full Text:
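The hybrid analysis described above combines static analysis, dynamic analysis and digital forensics. The following is a minimal sketch of one common static-analysis first step, computing file hashes and extracting printable strings so that artefacts can be compared across strains; it is an assumed workflow for illustration, not the thesis's tooling.

```python
import hashlib
import re
from pathlib import Path

def static_summary(sample_path: str, min_len: int = 6) -> dict:
    """Return basic static-analysis artefacts for a malware sample:
    file hashes plus printable ASCII strings of at least min_len characters."""
    data = Path(sample_path).read_bytes()
    strings = re.findall(rb"[ -~]{%d,}" % min_len, data)
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
        "strings": [s.decode("ascii") for s in strings],
    }

# Example: intersect the string sets of three hypothetical samples to surface commonalities.
# common = set(a["strings"]) & set(b["strings"]) & set(c["strings"])
```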
A framework for scoring and tagging NetFlow data
- Authors: Sweeney, Michael John
- Date: 2019
- Subjects: NetFlow , Big data , High performance computing , Event processing (Computer science)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/65022 , vital:28654
- Description: With the increase in link speeds and the growth of the Internet, the volume of NetFlow data generated has increased significantly over time, and processing these volumes has become a challenge, more specifically a Big Data challenge. With the advent of technologies and architectures designed to handle Big Data volumes, researchers have investigated their application to the processing of NetFlow data. This work builds on prior work, in which a scoring methodology was proposed for identifying anomalies in NetFlow, by proposing and implementing a system that allows for automatic, real-time scoring through the adoption of Big Data stream processing architectures. The first part of the research looks at the means of event detection using the scoring approach, implementing it as a number of individual, standalone components, each responsible for detecting and scoring a single type of flow trait. The second part is the implementation of these scoring components in a framework, named Themis, capable of handling high volumes of data with low-latency processing times. This was tackled using tools, technologies and architectural elements from the world of Big Data stream processing. The framework was shown to achieve good flow throughput at low processing latencies on a single low-end host. The successful demonstration of the framework on a single host opens the way to leveraging the scaling capabilities afforded by the architectures and technologies used. This gives weight to the possibility of using this framework for real-time threat detection using NetFlow data from larger networked environments.
- Full Text:
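The abstract above describes standalone components that each detect and score a single flow trait before the scores are combined. A small sketch of that structure follows; the trait names, thresholds and weights are invented for illustration and are not Themis's actual scorers.

```python
from typing import Callable

# Each scorer inspects one trait of a flow record (a plain dict here) and
# returns a non-negative anomaly score. Trait names and weights are invented.
def score_rare_port(flow: dict) -> float:
    return 2.0 if flow.get("dst_port", 0) > 49151 else 0.0

def score_tiny_flow(flow: dict) -> float:
    return 1.0 if flow.get("bytes", 0) < 64 else 0.0

SCORERS: list[Callable[[dict], float]] = [score_rare_port, score_tiny_flow]

def total_score(flow: dict) -> float:
    """Combine the independent per-trait scores for one NetFlow record."""
    return sum(scorer(flow) for scorer in SCORERS)

print(total_score({"dst_port": 60001, "bytes": 40}))  # -> 3.0
```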
A multi-threading software countermeasure to mitigate side channel analysis in the time domain
- Authors: Frieslaar, Ibraheem
- Date: 2019
- Subjects: Computer security , Data encryption (Computer science) , Noise generators (Electronics)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/71152 , vital:29790
- Description: This research is the first of its kind to investigate the use of a multi-threading software-based countermeasure to mitigate Side Channel Analysis (SCA) attacks, with a particular focus on the AES-128 cryptographic algorithm. This investigation is novel, as, to our knowledge, no previous software-based countermeasure has relied on multi-threading. The research has been tested on Atmel microcontrollers, as well as on a more fully featured system in the form of the popular Raspberry Pi, which utilises the ARM7 processor. The main contribution of this research is the introduction of a multi-threading software-based countermeasure used to mitigate SCA attacks on both an embedded device and a Raspberry Pi. These threads consist of various mathematical operations which are utilised to generate electromagnetic (EM) noise, resulting in the obfuscation of the execution of the AES-128 algorithm. A novel EM noise generator, known as the FRIES noise generator, is implemented to obfuscate data captured in the EM field. FRIES hides the execution of the AES-128 algorithm within the EM noise generated by the SHA-512 Secure Hash Algorithm from the libcrypto++ and OpenSSL libraries. In order to evaluate the proposed countermeasure, a novel attack methodology was developed in which the entire secret AES-128 encryption key was recovered from a Raspberry Pi, which had not been achieved before. The FRIES noise generator was pitted against this new attack vector and other known noise generators. The results showed that the FRIES noise generator withstood this attack while other existing techniques still leaked secret information. The visual location of the AES-128 encryption algorithm in the EM spectrum, and key recovery, were prevented. These results demonstrate that the proposed multi-threading software-based countermeasure is resistant to existing and new forms of attack, thus verifying that a multi-threading software-based countermeasure can serve to mitigate SCA attacks.
- Full Text:
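To give a sense of the countermeasure's shape, the sketch below runs SHA-512 hashing in background threads while a placeholder "sensitive operation" executes, mirroring the idea of hiding the cipher's execution inside hash-generated noise. The thesis targets AES-128 on microcontrollers and a Raspberry Pi, so this Python analogue is purely conceptual and the names used are assumptions.

```python
import hashlib
import os
import threading

stop_noise = threading.Event()

def sha512_noise_worker() -> None:
    """Continuously hash random data; on real hardware this busy work is what
    produces the masking electromagnetic noise while the cipher executes."""
    while not stop_noise.is_set():
        hashlib.sha512(os.urandom(64)).digest()

def sensitive_operation(data: bytes) -> bytes:
    # Placeholder for the protected computation (AES-128 in the thesis).
    return hashlib.sha256(data).digest()

noise_threads = [threading.Thread(target=sha512_noise_worker) for _ in range(4)]
for t in noise_threads:
    t.start()
result = sensitive_operation(b"secret input")
stop_noise.set()
for t in noise_threads:
    t.join()
print(result.hex())
```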
A study of malicious software on the macOS operating system
- Authors: Regensberg, Mark Alan
- Date: 2019
- Subjects: Malware (Computer software) , Computer security , Computer viruses , Mac OS
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92302 , vital:30701
- Description: Much of the published malware research begins with a common refrain: the cost, quantum and complexity of threats are increasing, and research and practice should prioritise efforts to automate and reduce times to detect and prevent malware, while improving the consistency of categories and taxonomies applied to modern malware. Existing work related to malware targeting Apple's macOS platform has not been spared this approach, although limited research has been conducted on the true nature of threats faced by users of the operating system. While the available macOS-focused research consistently notes an increase in macOS users, devices and ultimately in threats, an opportunity exists to understand the real nature of the threats faced by macOS users and to suggest potential avenues for future work. This research provides a view of the current state of macOS malware by analysing and exploring a dataset of malware detections on macOS endpoints captured over a period of eleven months by an anti-malware software vendor. The dataset is augmented with malware information provided by the widely used VirusTotal service, as well as through the application of prior automated malware categorisation work: AVClass to categorise, and SSDeep to cluster and report on, the observed data. With the Windows and Android platforms frequently in the spotlight as targets for highly disruptive malware like botnets, ransomware and cryptominers, research and intuition seem to suggest the threat of malware on this increasingly popular platform should be growing and evolving accordingly. The findings suggest that the direction and nature of this growth and evolution may not be entirely as clear as industry reports suggest. Adware and Potentially Unwanted Applications (PUAs) make up the vast majority of the detected threats, with remote access trojans (RATs), ransomware and cryptocurrency miners comprising a relatively small proportion of the detected malware. This provides a number of avenues for potential future work to compare and contrast with research on other platforms, as well as the identification of key factors that may influence growth in the future.
- Full Text:
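The categorisation and clustering pipeline mentioned above uses SSDeep fuzzy hashes to group related samples. A rough sketch of such a grouping step is shown below; it assumes the python-ssdeep binding is installed, uses an invented similarity threshold, and is not the thesis's actual pipeline.

```python
import ssdeep  # python-ssdeep binding, assumed installed

def cluster_by_ssdeep(samples: dict[str, bytes], threshold: int = 70) -> list[list[str]]:
    """Greedily group samples whose ssdeep fuzzy hashes are at least
    `threshold` percent similar to a cluster's first member."""
    hashes = {name: ssdeep.hash(data) for name, data in samples.items()}
    clusters: list[list[str]] = []
    for name, h in hashes.items():
        for cluster in clusters:
            if ssdeep.compare(hashes[cluster[0]], h) >= threshold:
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters
```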
Bolvedere: a scalable network flow threat analysis system
- Authors: Herbert, Alan
- Date: 2019
- Subjects: Bolvedere (Computer network analysis system) , Computer networks -- Scalability , Computer networks -- Measurement , Computer networks -- Security measures , Telecommunication -- Traffic -- Measurement
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/71557 , vital:29873
- Description: Since the advent of the Internet, and its public availability in the late 1990s, there have been significant advancements in network technologies and thus a significant increase in the bandwidth available to network users, both human and automated. Although this growth is of great value to network users, it has led to an increase in malicious network-based activities, and it is theorised that, as more services become available on the Internet, the volume of such activities will continue to grow. Because of this, there is a need to monitor, comprehend, discern, understand and (where needed) respond to events on networks worldwide. Although this line of thought is simple in its reasoning, undertaking such a task is no small feat. Full packet analysis is a method of network surveillance that seeks out specific characteristics within network traffic that may tell of malicious activity or anomalies in regular network usage. It is carried out within firewalls and implemented through packet classification. In the context of the networks that make up the Internet, this form of packet analysis has become infeasible, as the volume of traffic introduced onto these networks every day is so large that there are simply not enough processing resources to perform such a task on every packet in real time. One could combat this problem by performing post-incident forensics: archiving packets and processing them later. However, as one cannot process all incoming packets, the archive will eventually run out of space. Full packet analysis is also hindered by the fact that some existing, commonly used solutions are designed around a single host and a single thread of execution, an outdated approach that is far slower than necessary on current computing technology. This research explores the conceptual design and implementation of a scalable network traffic analysis system named Bolvedere. The analysis performed by Bolvedere simply asks whether the existence of a connection, coupled with its associated metadata, is enough to conclude something meaningful about that connection. This idea draws away from the traditional processing of every single byte in every single packet monitored on a network link (Deep Packet Inspection) through the concept of working with connection flows. Bolvedere performs its work by leveraging the NetFlow version 9 and IPFIX protocols, but is not limited to these. It is implemented using a modular approach that allows for either complete execution of the system on a single host or the horizontal scaling out of subsystems onto multiple hosts. The use of multiple hosts is achieved through the implementation of Zero Message Queue (ZMQ), which allows Bolvedere to scale out horizontally, resulting in an increase in processing resources and thus an increase in analysis throughput; this is due to the ease of interprocess communication provided by ZMQ. Many underlying mechanisms in Bolvedere have been automated. This is intended to make the system more user-friendly, as the user need only tell Bolvedere what information they wish to analyse, and the system will then rebuild itself in order to achieve this task. Bolvedere has also been hardware-accelerated through the use of Field-Programmable Gate Array (FPGA) technologies, which more than doubled the total throughput of the system.
- Full Text:
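The horizontally scalable design described above passes decoded flow records between subsystems over ZMQ. A minimal pyzmq sketch of one such pattern, a worker pulling records from a collector, is given below; the socket address, record format and function names are assumptions rather than Bolvedere's actual wiring.

```python
import zmq

def analyse(flow: dict) -> None:
    # Placeholder analysis step for a single decoded flow record.
    print("flow observed:", flow.get("src"), "->", flow.get("dst"))

def run_worker(pull_addr: str = "tcp://127.0.0.1:5557") -> None:
    """Pull decoded flow records (as JSON) from a collector and analyse them.
    Additional workers can be started on other hosts to scale out."""
    ctx = zmq.Context()
    sock = ctx.socket(zmq.PULL)
    sock.connect(pull_addr)
    while True:
        flow = sock.recv_json()  # e.g. {"src": "...", "dst": "...", "bytes": 1234}
        analyse(flow)

# The collector side would create a PUSH socket, bind to the same address,
# and call send_json(flow) for every decoded NetFlow v9 / IPFIX record.
if __name__ == "__main__":
    run_worker()
```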
Categorising Network Telescope data using big data enrichment techniques
- Authors: Davis, Michael Reginald
- Date: 2019
- Subjects: Denial of service attacks , Big data , Computer networks -- Security measures
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92941 , vital:30766
- Description: Network Telescopes, Internet backbone sampling, IDS and other forms of network-sourced Threat Intelligence provide researchers with insight into the methods and intent of remote entities by capturing network traffic and analysing the resulting data. This analysis and determination of intent is made difficult by the large amounts of potentially malicious traffic, coupled with the limited amount of knowledge that can be attributed to the source of the incoming data, as the source is known only by its IP address. Due to the lack of commonly available tooling, many researchers start this analysis from the beginning and so repeat and re-iterate previous research as the bulk of their work. As a result, new insight into methods and approaches of analysis is gained at a high cost. Our research approaches this problem by using additional knowledge about the source IP address, such as open ports, reverse and forward DNS, BGP routing tables and more, to enhance the researcher's ability to understand the traffic source. The research is a Big Data experiment, in which large (hundreds of GB) datasets are merged with a two-month section of Network Telescope data using a set of Python scripts. The results are written to a Google BigQuery database table. Analysis of the network data is greatly simplified, with questions about the nature of the source, such as its device class (home routing device or server), potential vulnerabilities (open telnet ports or databases) and location, becoming relatively easy to answer. Using this approach, researchers can focus on the questions that need answering and address them efficiently. This research could be taken further by using additional data sources such as geolocation, WHOIS lookups, Threat Intelligence feeds and many others. Other potential areas of research include real-time categorisation of incoming packets, in order to better inform the configuration of alerting and reporting systems. In conclusion, categorising Network Telescope data in this way provides insight into the intent of the (apparent) originator and as such is a valuable tool for those seeking to understand the purpose and intent of arriving packets. In particular, the ability to remove packets categorised as non-malicious (e.g. those in the Research category) from the data eliminates a known source of 'noise' from the data. This allows the researcher to focus their efforts in a more productive manner.
- Full Text:
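The enrichment step described above joins large auxiliary datasets onto telescope traffic keyed on the source IP address before loading the result into Google BigQuery. The pandas sketch below shows the general shape of such a join; the file names and column names are invented for illustration.

```python
import pandas as pd

# Invented file and column names, for illustration only.
telescope = pd.read_csv("telescope_packets.csv")   # columns: timestamp, src_ip, dst_port
open_ports = pd.read_csv("scan_open_ports.csv")    # columns: src_ip, open_ports
reverse_dns = pd.read_csv("reverse_dns.csv")       # columns: src_ip, ptr_record

# Left-join the enrichment sources onto every telescope packet by source IP.
enriched = (
    telescope
    .merge(open_ports, on="src_ip", how="left")
    .merge(reverse_dns, on="src_ip", how="left")
)

# The enriched frame could then be loaded into BigQuery, e.g. with pandas-gbq:
# enriched.to_gbq("dataset.telescope_enriched", project_id="my-project")
enriched.to_csv("telescope_enriched.csv", index=False)
```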
Modernisation and extension of InetVis: a network security data visualisation tool
- Authors: Johnson, Yestin
- Date: 2019
- Subjects: Data visualization , InetVis (Application software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/69223 , vital:29447
- Description: This research undertook an investigation into the digital archaeology, modernisation, and revitalisation of the InetVis software application, developed at Rhodes University in 2007. InetVis allows users to visualise network traffic in an interactive 3D scatter plot. The software is based on the idea of the Spinning Cube of Potential Doom, introduced by Stephen Lau. The original InetVis research project aimed to extend this concept and implementation, specifically for use in analysing network telescope traffic. The InetVis source code was examined and ported to run on modern operating systems. The porting process involved updating the UI framework, Qt, from version 3 to 5, as well as adding support for 64-bit compilation. This research extended the tool's usefulness with the implementation of new, high-value features and improvements. The most notable new features include the addition of a general settings framework, improved screenshot generation, automated visualisation modes, new keyboard shortcuts, and support for building and running InetVis on macOS. Additional features and improvements were identified for future work. These consist of support for a plug-in architecture and an extended heads-up display. A user survey was then conducted, which determined that respondents found InetVis to be easy to use and useful. The user survey also allowed the identification of the new and proposed features that respondents found most useful. At this point, no other tool offers the simplicity and user-friendliness of InetVis when it comes to the analysis of network packet captures, especially those from network telescopes.
- Full Text:
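InetVis plots traffic as points in a 3D scatter plot in the style of the Spinning Cube of Potential Doom. The matplotlib snippet below is only a rough, non-interactive illustration of that kind of mapping, using randomly generated stand-in points and a simplified choice of axes.

```python
import random
import matplotlib.pyplot as plt

# Random stand-in data: each point is (destination address, source address, destination
# port), with addresses reduced to integers as the cube axes require.
points = [(random.randint(0, 255), random.randint(0, 2**16), random.randint(0, 65535))
          for _ in range(500)]
xs, ys, zs = zip(*points)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(xs, ys, zs, s=2)
ax.set_xlabel("destination address")
ax.set_ylabel("source address")
ax.set_zlabel("destination port")
plt.show()
```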
Gaining cyber security insight through an analysis of open source intelligence data: an East African case study
- Authors: Chindipha, Stones Dalitso
- Date: 2018
- Subjects: Open source intelligence -- Africa, East , Computer security -- Africa, East , Computer networks -- Security measures -- Africa, East , Denial of service attacks -- Africa, East , Sentient Hyper-Optimised Data Access Network (SHODAN) , Internet Background Radiation (IBR)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/60618 , vital:27805
- Description: With each passing year the number of Internet users and connected devices grows, and this is particularly so in Africa. This growth brings with it an increase in the prevalence of cyber-attacks. Looking at the current state of affairs, cybersecurity incidents are likely to increase in African countries, mainly due to the increased prevalence and affordability of broadband connectivity coupled with a lack of online security awareness. The adoption of mobile banking has aggravated the situation, making the continent more attractive to hackers who bank on the malpractices of users. Using Open Source Intelligence (OSINT) data sources such as the Sentient Hyper-Optimised Data Access Network (SHODAN) and Internet Background Radiation (IBR), this research explores the prevalence of vulnerabilities and their accessibility to cyber threat actors. The research focuses on the East African Community (EAC), comprising Tanzania, Kenya, Malawi, and Uganda. An IBR data set collected by a Rhodes University network telescope spanning 72 months was used in this research, along with two snapshot periods of data from the SHODAN project. The findings show that there is a significant risk to systems within the EAC, particularly when using the SHODAN data. The MITRE CVSS threat scoring system was applied in this research using FREAK and Heartbleed as sample vulnerabilities identified in the EAC. When looking at IBR, the research has shown that attackers can use either destination ports or source IP addresses to perform an attack which, if not attended to, may be reused year after year, later moving on to the allocated IP address space once random probing begins. The moment one vulnerable client is found on the network, the attack spreads throughout it like a worm; DDoS is one of the attacks that can be generated from IBR. Since the SHODAN dataset had two collection points, the study shows the changes that occurred in Malawi and Tanzania over a period of 14 months using three variables: device type, operating system, and ports. The research has also identified vulnerable devices in all four countries. Apart from that, the study identified operating systems, products, OpenSSL, ports and ISPs as some of the variables that can be used to identify vulnerabilities in systems. In the case of OpenSSL and products, this research went further by identifying the type of attack that can occur and its associated CVE-ID.
- Full Text:
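Part of the analysis described above tallies exposed ports, products and operating systems per country from SHODAN snapshots. The pandas sketch below shows one way such a tally might be computed from an exported result set; the input file and column names are assumptions, not the thesis's data.

```python
import pandas as pd

# Assumed export format: one row per SHODAN banner, with country, port, product, os columns.
banners = pd.read_csv("shodan_eac_snapshot.csv")

# Count exposed services per country and port, and list the most common products.
per_country_port = (
    banners.groupby(["country", "port"]).size().rename("hosts").reset_index()
)
top_products = banners["product"].value_counts().head(10)

print(per_country_port.sort_values("hosts", ascending=False).head(20))
print(top_products)
```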
Towards a threat assessment framework for consumer health wearables
- Authors: Mnjama, Javan Joshua
- Date: 2018
- Subjects: Activity trackers (Wearable technology) , Computer networks -- Security measures , Data protection , Information storage and retrieval systems -- Security systems , Computer security -- Software , Consumer Health Wearable Threat Assessment Framework , Design Science Research
- Language: English
- Type: text , Thesis , Masters , MCom
- Identifier: http://hdl.handle.net/10962/62649 , vital:28225
- Description: The collection of health data such as physical activity, consumption and physiological data through the use of consumer health wearables such as fitness trackers is very beneficial for the promotion of physical wellness. However, consumer health wearables and their associated applications are known to have privacy and security concerns that can potentially make the collected personal health data vulnerable to hackers. These concerns are attributed to security theoretical frameworks not sufficiently addressing the entirety of privacy and security concerns relating to the diverse technological ecosystem of consumer health wearables. The objective of this research was therefore to develop a threat assessment framework that can be used to guide the detection of vulnerabilities which affect consumer health wearables and their associated applications. To meet this objective, the Design Science Research methodology was used to develop the desired artefact (the Consumer Health Wearable Threat Assessment Framework). The framework comprises fourteen vulnerabilities classified according to Authentication, Authorization, Availability, Confidentiality, Non-Repudiation and Integrity. In developing the artefact, the threat assessment framework was demonstrated on two fitness trackers and their associated applications. It was discovered that the framework was able to identify how these vulnerabilities affected these two test cases, based on the classification categories of the framework. The framework was also evaluated by four security experts, who assessed the quality, utility and efficacy of the framework. The experts supported the use of the framework as a relevant and comprehensive means of guiding the detection of vulnerabilities in consumer health wearables and their associated applications. The implication of this research study is that the framework can be used by developers to better identify the vulnerabilities of consumer health wearables and their associated applications. This will assist in creating a more secure environment for the storage and use of health data by consumer health wearables.
- Full Text:
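The framework described above groups fourteen vulnerabilities under six classification categories and is applied as a structured assessment against a device and its companion application. The sketch below shows a minimal data structure for that idea; the vulnerability names listed are illustrative placeholders rather than the framework's actual fourteen items.

```python
# Illustrative checklist structure: categories map to example vulnerability checks.
FRAMEWORK = {
    "Authentication": ["weak or absent pairing PIN", "no account lockout"],
    "Authorization": ["excessive companion-app permissions"],
    "Availability": ["trivially jammed or crashed sync channel"],
    "Confidentiality": ["unencrypted BLE traffic", "cleartext cloud sync"],
    "Non-Repudiation": ["no audit trail of data access"],
    "Integrity": ["unsigned firmware updates"],
}

def assess(findings: dict[str, set[str]]) -> dict[str, list[str]]:
    """Given the checks that failed for a device (keyed by category),
    return the confirmed vulnerabilities per framework category."""
    return {
        category: [item for item in items if item in findings.get(category, set())]
        for category, items in FRAMEWORK.items()
    }

report = assess({"Confidentiality": {"unencrypted BLE traffic"}})
print(report)
```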
A longitudinal study of DNS traffic: understanding current DNS practice and abuse
- Authors: Van Zyl, Ignus
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3707 , vital:20537
- Description: This thesis examines a dataset spanning 21 months, containing 3.5 billion DNS packets. Traffic on TCP and UDP port 53 was captured on a production /24 IP block. The purpose of this thesis is twofold: the first is to create an understanding of current practice and behavior within the DNS infrastructure, the second to explore current threats faced by the DNS and the various systems that implement it. This is achieved by drawing on analysis and observations from the captured data. Aspects of the operation of DNS on the greater Internet are considered in this research with reference to the observed trends in the dataset. A thorough analysis of current DNS TTL implementation is made with respect to all response traffic, with further sections looking at observed DNS TTL values for .za domain replies and NXDOMAIN-flagged replies. This thesis found that the TTL values implemented are much lower than has been recommended in previous years, and that the TTL decrease is prevalent in most, but not all, TTL implementations. With respect to the nature of DNS operations, this thesis also concerns itself with an analysis of the geolocation of authoritative servers for local (.za) domains, and offers further observations on the latency generated by the choice of authoritative server location for a given .za domain. It was found that the majority of .za domain authoritative servers are international, which results in latencies multiple times greater than those observed for local authoritative servers. Further analysis is done with respect to NXDOMAIN behavior captured across the dataset. These findings outline the cost of DNS misconfiguration as well as highlighting instances of NXDOMAIN generation through malicious practice. With respect to DNS abuse, original research on long-term scanning generated as a result of amplification attack activity on the greater Internet is presented. Many instances of amplification domain scans were captured during the packet capture, and an attempt is made to correlate that activity temporally with known amplification attack reports. The final area that this thesis deals with is the relatively new field of bitflipping and bitsquatting, delivering results on bitflip detection and evaluation over the course of the entire dataset. The detection methodology is outlined, and the final results are compared to findings given in recent bitflip literature.
- Full Text:
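The bitflip detection mentioned above looks for queried names that differ from a legitimate domain by a single flipped bit. As an independent illustration (not the thesis's detection code), the sketch below generates the single-bit-flip variants of a domain that remain valid hostname strings, the usual first step in identifying bitsquatting candidates.

```python
import string

# Characters permitted in a hostname label (case-only flips are excluded by
# restricting to lowercase, since DNS is case-insensitive).
VALID = set(string.ascii_lowercase + string.digits + "-")

def bitflip_variants(domain: str) -> set[str]:
    """Return all domains obtained by flipping exactly one bit of one character,
    keeping only variants whose flipped character is a valid hostname character."""
    variants = set()
    for i, ch in enumerate(domain):
        for bit in range(8):
            flipped = chr(ord(ch) ^ (1 << bit))
            if flipped in VALID and flipped != ch:
                variants.add(domain[:i] + flipped + domain[i + 1:])
    return variants

print(sorted(bitflip_variants("example.com"))[:10])
```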
An investigation into the prevalence and growth of phishing attacks against South African financial targets
- Authors: Lala, Darshan Magan
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3157 , vital:20379
- Description: Phishing in the electronic communications medium is the act of sending unsolicited email messages with the intention of masquerading as a reputable business. The objective is to deceive the recipient into divulging personal and sensitive information such as bank account details, credit card numbers and passwords. Financial services are the most common targets for scammers. Phishing attacks in South Africa have caused substantial financial losses for businesses and consumers. This research investigated existing literature to understand the basic concepts of email, phishing, spam and how these fit together. The research also looks into the increasing growth of phishing worldwide and in particular against South African targets. A quantitative study is performed and reported on; this involves the study and analysis of phishing statistics in a data set provided by the South African Anti-Phishing Working Group. The data set contains phishing URL information, the country code where each site was hosted, the targeted company name, IP address information and the timestamp of the phishing site. The data set contains 161 different phishing targets. The research primarily focuses on the trend in phishing attacks against six South African based financial institutions, but also correlates this with the overall global trend using statistical analysis. The results from the study of the data set are compared to existing statistics and literature regarding the prevalence and growth of phishing in South Africa. The question that this research answers is whether or not the prevalence and growth of phishing in South Africa correlates with the global trend in phishing attacks. The findings indicate that certain correlations exist between some of the South African phishing targets and global phishing trends. (An illustrative sketch of such a correlation follows this record.)
- Full Text:
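The abstract does not specify the schema of the Anti-Phishing Working Group data set or the exact statistical procedure used, so the following is only a minimal sketch of how monthly South African-targeted counts could be correlated with the overall monthly trend. The column names, institution names, and example rows are synthetic assumptions of mine, not the real data.

```python
import pandas as pd

# Synthetic stand-in for the data set: one row per reported phishing site.
# Column names and target names are assumptions, not the APWG schema.
df = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2013-01-05", "2013-01-20", "2013-02-11", "2013-02-15",
         "2013-03-02", "2013-03-09", "2013-03-28", "2013-04-14"]),
    "target": ["Bank A", "Retailer X", "Bank B", "Bank A",
               "Retailer X", "Bank A", "Bank B", "Retailer X"],
})
SA_TARGETS = {"Bank A", "Bank B"}   # placeholder institution names

# Aggregate to monthly counts: overall sites vs. SA-targeted sites.
monthly = (
    df.assign(month=df["timestamp"].dt.to_period("M"))
      .groupby("month")
      .agg(total=("target", "size"),
           sa=("target", lambda s: s.isin(SA_TARGETS).sum()))
)

# Pearson correlation between the two monthly series.
r = monthly["sa"].corr(monthly["total"])
print(monthly)
print(f"Pearson r between SA-targeted and overall monthly counts: {r:.2f}")
```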
FRAME: frame routing and manipulation engine
- Authors: Pennefather, Sean Niel
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3608 , vital:20529
- Description: This research reports on the design and implementation of FRAME, an embedded hardware network processing platform designed to perform network frame manipulation and monitoring at line speeds compliant with the IEEE 802.3 Ethernet standard. The system provides frame manipulation functionality to aid in the development and implementation of network testing environments. Platform cost and ease of use are both considered during design, resulting in the fabrication of hardware and the development of Link, a Domain Specific Language used to create custom applications that are compatible with the platform. Functionality of the resulting platform is shown through conformance testing of the designed modules and application examples. Throughput testing showed that the peak throughput achievable by the platform is 86.4 Mbit/s, comparable to commodity 100 Mbit/s hardware, and the total cost of the prototype platform ranged between $220 and $254. (An illustrative sketch of a frame-manipulation rule follows this record.)
- Full Text:
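Neither the FRAME hardware nor the Link DSL is shown in this record. Purely as a software illustration of the kind of rule such a platform applies to traffic, the sketch below rewrites a field of a raw Ethernet frame (swapping source and destination MAC addresses); it is a toy stand-in of my own, not the platform's implementation.

```python
import struct

def swap_macs(frame: bytes) -> bytes:
    """Return a copy of an Ethernet frame with source and destination
    MAC addresses swapped -- a toy example of a frame-manipulation rule."""
    if len(frame) < 14:
        raise ValueError("frame shorter than an Ethernet header")
    dst, src = frame[0:6], frame[6:12]
    return src + dst + frame[12:]

if __name__ == "__main__":
    # Minimal hand-built frame: dst MAC, src MAC, EtherType 0x0800, padding.
    frame = bytes.fromhex("aabbccddeeff" "112233445566" "0800") + b"\x00" * 46
    out = swap_macs(frame)
    print(out[:12].hex())   # MAC fields now appear in swapped order
```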
GPU Accelerated protocol analysis for large and long-term traffic traces
- Authors: Nottingham, Alastair Timothy
- Date: 2016
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/910 , vital:20002
- Description: This thesis describes the design and implementation of GPF+, a complete general packet classification system developed using Nvidia CUDA for Compute Capability 3.5+ GPUs. This system was developed with the aim of accelerating the analysis of arbitrary network protocols within network traffic traces using inexpensive, massively parallel commodity hardware. GPF+ and its supporting components are specifically intended to support the processing of large, long-term network packet traces such as those produced by network telescopes, which are currently difficult and time-consuming to analyse. The GPF+ classifier is based on prior research in the field, which produced a prototype classifier called GPF, targeted at Compute Capability 1.3 GPUs. GPF+ greatly extends the GPF model, improving runtime flexibility and scalability, whilst maintaining high execution efficiency. GPF+ incorporates a compact, lightweight register-based state machine that supports massively parallel, multi-match filter predicate evaluation, as well as efficient arbitrary field extraction. GPF+ tracks packet composition during execution, and adjusts processing at runtime to avoid redundant memory transactions and unnecessary computation through warp-voting. GPF+ additionally incorporates a 128-bit in-thread cache, accelerated through register shuffling, to accelerate access to packet data in slow GPU global memory. GPF+ uses a high-level DSL to simplify protocol and filter creation, whilst better facilitating protocol reuse. The system is supported by a pipeline of multi-threaded high-performance host components, which communicate asynchronously through 0MQ messaging middleware to buffer, index, and dispatch packet data on the host system. The system was evaluated using high-end Kepler (Nvidia GTX Titan) and entry-level Maxwell (Nvidia GTX 750) GPUs. The results of this evaluation showed high system performance, limited only by device-side I/O (600 MB/s) in all tests. GPF+ maintained high occupancy and device utilisation in all tests, without significant serialisation, and showed improved scaling to more complex filter sets. Results were used to visualise captures of up to 160 GB in seconds, and to extract and pre-filter captures small enough to be easily analysed in applications such as Wireshark. (An illustrative sketch of header-field classification follows this record.)
- Full Text:
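GPF+ itself evaluates filter predicates in parallel on the GPU using CUDA; that code is not reproduced here. The sketch below is a serial, CPU-side illustration of the basic operation a packet classifier performs: reading header fields at known offsets from raw packet bytes and recording every filter that matches. It assumes untagged Ethernet II framing and is not the GPF+ DSL or kernel.

```python
import struct

def classify(packet: bytes) -> list:
    """Evaluate a small, hard-coded filter set over one raw packet.

    A toy, serial version of the packet-classification idea: read header
    fields at known offsets and record multi-match filter results.
    """
    matches = []
    if len(packet) < 14 + 20:
        return matches
    ethertype = struct.unpack_from("!H", packet, 12)[0]
    if ethertype != 0x0800:                      # not IPv4
        return matches
    matches.append("ipv4")
    ihl = (packet[14] & 0x0F) * 4                # IPv4 header length in bytes
    proto = packet[14 + 9]                       # IPv4 protocol field
    if proto in (6, 17) and len(packet) >= 14 + ihl + 4:
        matches.append("tcp" if proto == 6 else "udp")
        sport, dport = struct.unpack_from("!HH", packet, 14 + ihl)
        if 53 in (sport, dport):
            matches.append("dns")
    return matches

if __name__ == "__main__":
    # Hand-built minimal frame: Ethernet header, 20-byte IPv4 header with
    # protocol 17 (UDP), then a UDP header with destination port 53.
    eth = bytes(6) + bytes(6) + b"\x08\x00"
    ip = b"\x45\x00\x00\x21" + bytes(4) + b"\x40\x11\x00\x00" + bytes(8)
    udp = (12345).to_bytes(2, "big") + (53).to_bytes(2, "big") + b"\x00\x0d\x00\x00"
    print(classify(eth + ip + udp))              # ['ipv4', 'udp', 'dns']
```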
Toward an automated botnet analysis framework: a darkcomet case-study
- Authors: du Bruyn, Jeremy Cecil
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/2937 , vital:20344
- Full Text:
An analysis of malware evasion techniques against modern AV engines
- Authors: Haffejee, Jameel
- Date: 2015
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:20979 , http://hdl.handle.net/10962/5821
- Description: This research empirically tested the response of antivirus applications to binaries that use virus-like evasion techniques. To achieve this, a set of binaries was processed using a range of evasion methods and then deployed against several antivirus engines. The research also documents the process of setting up an environment for testing antivirus engines, including building the evasion techniques used in the tests. The results of the empirical tests illustrate that an attacker can evade multiple antivirus engines without much effort using well-known evasion techniques. Furthermore, some antivirus engines may respond to the occurrence of an evasion technique instead of the presence of any malicious code. In practical terms, this shows that while antivirus applications are useful for protecting against known threats, their effectiveness against unknown or modified threats is limited.
- Full Text:
An analysis of the risk exposure of adopting IPV6 in enterprise networks
- Authors: Berko, Istvan Sandor
- Date: 2015
- Subjects: International Workshop on Deploying the Future Infrastructure , Computer networks , Computer networks -- Security measures , Computer network protocols
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4722 , http://hdl.handle.net/10962/d1018918
- Description: The increased address pool of IPv6 presents changes in resource impact to the Enterprise that, if not adequately addressed, can turn risks that are locally significant in IPv4 into risks that impact the Enterprise in its entirety. The expected conclusion is that the IPv6 environment will impose significant changes on the Enterprise environment, which may negatively impact organisational security if the IPv6 nuances are not adequately addressed. This thesis reviews the risks related to the operation of enterprise networks with the introduction of IPv6. The global trends are discussed to provide insight and background to the IPv6 research space. Analysing the current state of readiness in enterprise networks quantifies the value of developing this thesis. The base controls that should be deployed in enterprise networks to prevent the abuse of IPv6 through tunnelling and to protect the enterprise access layer are discussed. A series of case studies is presented which identify and analyse the impact of certain changes in the IPv6 protocol on enterprise networks. The case studies also identify mitigation techniques to reduce risk. (An illustrative sketch of a tunnelling check follows this record.)
- Full Text:
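The base controls themselves are not detailed in the abstract. As a minimal illustration of one common detection heuristic for IPv6 tunnelling in IPv4-only segments, the sketch below flags packets carrying IP protocol 41 (IPv6-in-IPv4, used by 6in4/6to4) or UDP traffic to the well-known Teredo port 3544; the function signature is an assumption of mine, not a control from the thesis.

```python
from typing import Optional

def flag_possible_ipv6_tunnel(ip_protocol: int,
                              dst_port: Optional[int] = None) -> Optional[str]:
    """Return a label if an IPv4 packet looks like tunnelled IPv6, else None.

    ip_protocol -- value of the IPv4 header's protocol field
    dst_port    -- destination port, for UDP packets
    """
    if ip_protocol == 41:                        # IPv6 encapsulated in IPv4 (6in4 / 6to4)
        return "protocol-41 tunnel"
    if ip_protocol == 17 and dst_port == 3544:   # Teredo's well-known UDP port
        return "possible Teredo"
    return None

assert flag_possible_ipv6_tunnel(41) == "protocol-41 tunnel"
assert flag_possible_ipv6_tunnel(17, dst_port=3544) == "possible Teredo"
assert flag_possible_ipv6_tunnel(6, dst_port=443) is None
```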
An investigation into the role played by perceived security concerns in the adoption of mobile money services : a Zimbabwean case study
- Authors: Madebwe, Charles
- Date: 2015
- Subjects: Banks and banking, Mobile -- Zimbabwe , Global system for mobile communications , Cell phones -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4711 , http://hdl.handle.net/10962/d1017933
- Description: The ubiquitous nature of mobile phones and their popularity has led to opportunistic value added services (VAS), such as mobile money, being implemented on the back of this phenomenon. Several studies have been done to find factors that influence the adoption of mobile money and other information systems. This thesis looks at factors determining the uptake of mobile money over cellular networks, with a special emphasis on aspects relating to perceived security, although other factors, namely perceived usefulness, perceived ease of use, perceived trust and perceived cost, were also examined. The research further looks at the security threats introduced to mobile money by virtue of the nature, architecture, standards and protocols of the Global System for Mobile Communications (GSM). The model employed for this research was the Technology Acceptance Model (TAM). A literature review was done on the security of GSM. Data was collected from a sample population around Harare, Zimbabwe using physical questionnaires. Statistical tests were performed on the collected data to find the significance of each construct to mobile money adoption. The research found a positive correlation between perceived security concerns and the adoption of mobile money services over cellular networks. Perceived usefulness was found to be the most important factor in the adoption of mobile money. The research also found that customers need to trust the network service provider and the systems in use for them to adopt mobile money. Other factors driving consumer adoption were found to be perceived ease of use and perceived cost. The findings show that players who intend to introduce mobile money should strive to offer secure and useful systems that are trustworthy without making the service expensive or difficult to use. The literature review showed that there is a possibility of compromising mobile money transactions done over GSM. (An illustrative sketch of construct correlations follows this record.)
- Full Text:
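The questionnaire items and the specific statistical tests are not given in the abstract, so the following sketch only illustrates the general shape of such an analysis: per-respondent construct scores correlated against an adoption-intention score. The column names and values are synthetic assumptions of mine, not data from the study.

```python
import pandas as pd

# Synthetic stand-in for per-respondent construct scores (e.g. means of
# 5-point Likert items). Column names are assumptions, not questionnaire wording.
df = pd.DataFrame({
    "perceived_security":    [4, 2, 5, 3, 4, 1, 5, 3],
    "perceived_usefulness":  [5, 3, 5, 4, 4, 2, 5, 3],
    "perceived_ease_of_use": [4, 3, 4, 4, 3, 2, 5, 3],
    "adoption_intention":    [5, 2, 5, 4, 4, 1, 5, 3],
})

# Rank-based correlation of each construct with adoption intention.
for construct in ["perceived_security", "perceived_usefulness", "perceived_ease_of_use"]:
    rho = df[construct].corr(df["adoption_intention"], method="spearman")
    print(f"{construct:22s} Spearman rho = {rho:+.2f}")
```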
Pro-active visualization of cyber security on a National Level : a South African case study
- Authors: Swart, Ignatius Petrus
- Date: 2015
- Subjects: Internet -- Security measures -- South Africa , Computer security -- Government policy -- South Africa
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4718 , http://hdl.handle.net/10962/d1017940
- Description: The need for increased national cyber security situational awareness is evident from the growing number of published national cyber security strategies. Governments are progressively seen as responsible for cyber security, but at the same time are increasingly constrained by legal, privacy and resource considerations. Infrastructure and services that form part of the national cyber domain are often not under the control of government, necessitating information sharing between governments and commercial partners. While sharing of security information is necessary, it typically requires considerable time to be implemented effectively. In an effort to decrease the time and effort required for cyber security situational awareness, this study considered commercially available data sources relating to a national cyber domain. Open source information is typically used by attackers to gather information with great success. An understanding of the data provided by these sources can also afford decision makers the opportunity to set priorities more effectively. Through the use of an adapted Joint Directors of Laboratories (JDL) fusion model, an experimental system was implemented that visualized the potential that open source intelligence could have for cyber situational awareness. Datasets used in the validation of the model contained information obtained from eight different data sources over a two-year period, with a focus on the South African .co.za sub-domain. Over a million infrastructure devices were examined in this study, along with information pertaining to a potential 88 million vulnerabilities on these devices. During the examination of data sources, a severe lack of information regarding the human aspect in cyber security was identified, which led to the creation of a novel Personally Identifiable Information (PII) detection sensor. The resultant two million records pertaining to PII in the South African domain were incorporated into the data fusion experiment for processing. The results of this processing are discussed in the three case studies. The results offered in this study aim to highlight how data fusion and effective visualization can serve to move national cyber security from a primarily reactive undertaking to a more pro-active model. (An illustrative sketch of PII pattern detection follows this record.)
- Full Text:
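The rules used by the thesis's PII detection sensor are not reproduced in this record. The sketch below shows the general idea with two illustrative patterns of my own choosing: e-mail addresses and 13-digit tokens shaped like South African ID numbers (only the shape is checked, not the check digit).

```python
import re

EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+(?:\.[\w-]+)+\b")
# 13 consecutive digits, the shape of a South African ID number
# (YYMMDDSSSSCAZ). No check-digit validation is performed here.
SA_ID_RE = re.compile(r"\b\d{13}\b")

def find_pii(text: str) -> list:
    """Return (kind, value) pairs for PII-like tokens found in `text`."""
    hits = [("email", m) for m in EMAIL_RE.findall(text)]
    hits += [("sa_id", m) for m in SA_ID_RE.findall(text)]
    return hits

sample = "Contact jane@example.co.za, ID 8001015009087, for access."
print(find_pii(sample))
```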
Pseudo-random access compressed archive for security log data
- Authors: Radley, Johannes Jurgens
- Date: 2015
- Subjects: Computer security , Information storage and retrieval systems , Data compression (Computer science)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4723 , http://hdl.handle.net/10962/d1020019
- Description: We are surrounded by an increasing number of devices and applications that produce a huge quantity of machine-generated data. Almost all machine data contains some element of security information that can be used to discover, monitor and investigate security events. This work proposes a pseudo-random access compressed storage method for log data, to be used with an information retrieval system that in turn provides the ability to search and correlate log data and the corresponding events. We explain the method for converting log files into distinct events and storing the events in a compressed file. This yields an entry identifier for each log entry that provides a pointer that can be used by indexing methods. The research also evaluates the compression performance penalties incurred by using this storage system, including a decreased compression ratio as well as increased compression and decompression times. (An illustrative sketch of block-compressed storage with entry identifiers follows this record.)
- Full Text:
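The storage format itself is not specified in the abstract, so this is a minimal sketch under assumptions of my own: events are compressed in fixed-count blocks with zlib, the entry identifier is a (block number, index within block) pair, and retrieval decompresses only the block that holds the requested entry.

```python
import zlib

BLOCK_EVENTS = 1000  # events per compressed block (an arbitrary choice)

def build_archive(events):
    """Compress events in blocks; return (blocks, ids) where ids[i] is the
    (block_no, index_in_block) entry identifier for events[i]."""
    blocks, ids = [], []
    for start in range(0, len(events), BLOCK_EVENTS):
        chunk = events[start:start + BLOCK_EVENTS]
        blocks.append(zlib.compress("\n".join(chunk).encode()))
        ids.extend((len(blocks) - 1, i) for i in range(len(chunk)))
    return blocks, ids

def fetch(blocks, entry_id):
    """Pseudo-random access: decompress only the block holding the entry."""
    block_no, index = entry_id
    return zlib.decompress(blocks[block_no]).decode().split("\n")[index]

# Synthetic log entries, one event per line.
events = [f"2015-01-01T00:00:{i % 60:02d} host sshd[{i}]: event {i}" for i in range(2500)]
blocks, ids = build_archive(events)
assert fetch(blocks, ids[1234]) == events[1234]
```

The trade-off described in the abstract is visible in this layout: smaller blocks give faster single-entry retrieval but a worse compression ratio than compressing the whole log as one stream.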