A comparative study of CERBER, MAKTUB and LOCKY Ransomware using a Hybridised-Malware analysis
- Authors: Schmitt, Veronica
- Date: 2019
- Subjects: Microsoft Windows (Computer file) , Data protection , Computer crimes -- Prevention , Computer security , Computer networks -- Security measures , Computers -- Access control , Malware (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92313 , vital:30702
- Description: There has been a significant increase in the prevalence of Ransomware attacks over the preceding four years. This indicates that the battle against this class of malware has not yet been won. This research proposes that by identifying the similarities within the operational framework of Ransomware strains, a better overall understanding of their operation and function can be achieved, which will in turn aid a quicker response to future attacks. With the average Ransomware attack taking two hours to be identified, there is clearly not yet an understanding of why these attacks are so successful. Research into Ransomware is limited by what is currently known on the topic. Given these limitations, the decision was taken to examine only three samples of Ransomware from different families, owing to the complexity and comprehensive nature of the research. The in-depth nature of the work and the time constraints associated with it did not allow the framework to be proven against more than three families, but the exploratory work was promising and should be extended in future research. The aim of the research is to follow the Hybrid-Malware analysis framework, which consists of both static and dynamic analysis phases in addition to a digital forensic examination of the infected system. This combines signature-based findings with behavioural and forensic findings, giving a better understanding of how this malware is designed, how it infects a system, and how it remains persistent. The chosen operating system is Microsoft Windows 7, which is still used by a significant proportion of Windows users, especially in corporate environments. The experimental process was designed to enable the researcher to collect information on every aspect of the Ransomware's behaviour and communication on a target system, so that results can be compared across the three strains to identify commonalities. The initial hypothesis was that Ransomware variants are much like an instant cake mix: they consist of specific building blocks that remain the same, with the flavouring being the unique feature. (A static-analysis sketch follows this record.)
- Full Text:
- Date Issued: 2019
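As a companion to the abstract above, here is a minimal sketch of the static phase of a hybrid analysis: hashing a sample and pulling printable strings, the raw material for signature-based findings. It is illustrative only, written under our own assumptions rather than reflecting the thesis's exact tooling.

```python
# Minimal static-analysis triage: cryptographic hashes plus printable strings.
# Real analyses would add PE header parsing, import tables and packing checks.
import hashlib
import re
import sys

def static_summary(path: str) -> dict:
    with open(path, "rb") as f:
        data = f.read()
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
        "size": len(data),
        # ASCII runs of 6+ printable chars often expose C2 URLs or ransom-note text
        "strings": [s.decode() for s in re.findall(rb"[ -~]{6,}", data)[:20]],
    }

if __name__ == "__main__":
    print(static_summary(sys.argv[1]))
```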
A framework for scoring and tagging NetFlow data
- Authors: Sweeney, Michael John
- Date: 2019
- Subjects: NetFlow , Big data , High performance computing , Event processing (Computer science)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/65022 , vital:28654
- Description: With the increase in link speeds and the growth of the Internet, the volume of NetFlow data generated has increased significantly over time, and processing these volumes has become a challenge, more specifically a Big Data challenge. With the advent of technologies and architectures designed to handle Big Data volumes, researchers have investigated their application to the processing of NetFlow data. This work builds on prior work, in which a scoring methodology was proposed for identifying anomalies in NetFlow, by proposing and implementing a system that allows automatic, real-time scoring through the adoption of Big Data stream processing architectures. The first part of the research looks at event detection using the scoring approach, implemented as a number of individual, standalone components, each responsible for detecting and scoring a single type of flow trait (a minimal sketch follows this record). The second part is the implementation of these scoring components in a framework, named Themis, capable of handling high volumes of data with low-latency processing times. This was tackled using tools, technologies and architectural elements from the world of Big Data stream processing. The framework demonstrated good flow throughput at low processing latencies on a single low-end host. This successful demonstration on a single host opens the way to leveraging the scaling capabilities afforded by the architectures and technologies used, and gives weight to the possibility of using this framework for real-time threat detection using NetFlow data from larger networked environments.
- Full Text:
- Date Issued: 2019
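The following sketch illustrates the trait-scoring idea described above: small standalone scorers, each responsible for one flow trait, summed into a final score. The scorer names, traits and thresholds are invented for illustration; they are not Themis internals.

```python
# Each scorer inspects one trait of a NetFlow-style record; the flow's final
# score is the sum of the individual trait scores.
from dataclasses import dataclass

@dataclass
class Flow:
    src_ip: str
    dst_ip: str
    dst_port: int
    packets: int
    octets: int

def score_rare_port(flow: Flow) -> float:
    # Flows to ports outside a small known-service set are mildly suspicious.
    common = {25, 53, 80, 123, 443}
    return 0.0 if flow.dst_port in common else 1.0

def score_tiny_flow(flow: Flow) -> float:
    # Single-packet, low-byte flows are characteristic of scanning.
    return 2.0 if flow.packets <= 2 and flow.octets < 120 else 0.0

SCORERS = [score_rare_port, score_tiny_flow]

def score(flow: Flow) -> float:
    return sum(s(flow) for s in SCORERS)

print(score(Flow("198.51.100.7", "203.0.113.9", 23, 1, 60)))  # -> 3.0
```

In a stream processing deployment each scorer would typically run as its own operator, which is what lets the approach scale horizontally.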
A study of malicious software on the macOS operating system
- Authors: Regensberg, Mark Alan
- Date: 2019
- Subjects: Malware (Computer software) , Computer security , Computer viruses , Mac OS
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92302 , vital:30701
- Description: Much of the published malware research begins with a common refrain: the cost, quantum and complexity of threats are increasing, and research and practice should prioritise efforts to automate and reduce the time to detect and prevent malware, while improving the consistency of the categories and taxonomies applied to modern malware. Existing work on malware targeting Apple's macOS platform has not been spared this approach, although limited research has been conducted on the true nature of the threats faced by users of the operating system. While the available macOS-focused research consistently notes an increase in macOS users, devices and ultimately threats, an opportunity exists to understand the real nature of the threats macOS users face and to suggest avenues for future work. This research provides a view of the current state of macOS malware by analysing and exploring a dataset of malware detections on macOS endpoints, captured over a period of eleven months by an anti-malware software vendor. The dataset is augmented with malware information provided by the widely used VirusTotal service, and prior automated malware categorisation work is applied: AVClass to categorise, and SSDeep to cluster and report on the observed data (an SSDeep clustering sketch follows this record). With the Windows and Android platforms frequently in the spotlight as targets for highly disruptive malware like botnets, ransomware and cryptominers, research and intuition suggest that the threat of malware on this increasingly popular platform should be growing and evolving accordingly. The findings suggest that the direction and nature of this growth and evolution may not be as clear as industry reports suggest. Adware and Potentially Unwanted Applications (PUAs) make up the vast majority of the detected threats, with remote access trojans (RATs), ransomware and cryptocurrency miners comprising a relatively small proportion of the detected malware. This opens a number of avenues for future work to compare and contrast with research on other platforms, and to identify key factors that may influence growth in the future.
- Full Text:
- Date Issued: 2019
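To make the clustering step concrete, here is a small sketch of fuzzy-hash clustering in the spirit of the SSDeep usage above. It assumes the third-party ssdeep Python bindings; the 70% similarity threshold and the greedy single-pass strategy are illustrative choices, not the thesis's parameters.

```python
import ssdeep  # third-party bindings around the ssdeep fuzzy-hashing library

samples = {
    "a.bin": b"spam spam spam lovely spam wonderful spam" * 40,
    "b.bin": b"spam spam spam lovely spam wonderful spam!" * 40,
    "c.bin": b"completely unrelated binary content here" * 40,
}

hashes = {name: ssdeep.hash(data) for name, data in samples.items()}

# Greedy single-pass clustering: join the first cluster whose representative
# is at least 70% similar, otherwise start a new cluster.
clusters: list[list[str]] = []
for name, h in hashes.items():
    for cluster in clusters:
        if ssdeep.compare(hashes[cluster[0]], h) >= 70:
            cluster.append(name)
            break
    else:
        clusters.append([name])

print(clusters)  # expected: a.bin and b.bin together, c.bin on its own
```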
Categorising Network Telescope data using big data enrichment techniques
- Authors: Davis, Michael Reginald
- Date: 2019
- Subjects: Denial of service attacks , Big data , Computer networks -- Security measures
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92941 , vital:30766
- Description: Network Telescopes, Internet backbone sampling, IDS and other forms of network-sourced Threat Intelligence provide researchers with insight into the methods and intent of remote entities by capturing network traffic and analysing the resulting data. This analysis, and the determination of intent, is made difficult by the large amount of potentially malicious traffic, coupled with the limited knowledge that can be attributed to the source of the incoming data, as the source is known only by its IP address. Due to the lack of commonly available tooling, many researchers start this analysis from the beginning, repeating and re-iterating previous research as the bulk of their work; new insight into methods and approaches of analysis is therefore gained at a high cost. Our research approaches this problem by using additional knowledge about the source IP address, such as open ports, reverse and forward DNS, BGP routing tables and more, to enhance the researcher's ability to understand the traffic source. The research is a Big Data experiment in which large (hundreds of GB) datasets are merged with a two-month section of Network Telescope data using a set of Python scripts, and the results are written to a Google BigQuery database table (a BigQuery loading sketch follows this record). Analysis of the network data is greatly simplified, with questions about the nature of the source, such as its device class (home routing device or server), potential vulnerabilities (open telnet ports or databases) and location, becoming relatively easy to answer. Using this approach, researchers can focus on the questions that need answering and address them efficiently. This research could be taken further by using additional data sources such as geolocation, WHOIS lookups, Threat Intelligence feeds and many others. Other potential areas of research include real-time categorisation of incoming packets, in order to better inform the configuration of alerting and reporting systems. In conclusion, categorising Network Telescope data in this way provides insight into the intent of the (apparent) originator and as such is a valuable tool for those seeking to understand the purpose and intent of arriving packets. In particular, the ability to remove packets categorised as non-malicious (e.g. those in the Research category) eliminates a known source of 'noise' from the data, allowing the researcher to focus their efforts more productively.
- Full Text:
- Date Issued: 2019
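The loading step described above might look something like the sketch below, which streams enriched rows into BigQuery. It assumes the google-cloud-bigquery client library and an existing table; the project, dataset, table and schema are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses ambient GCP credentials
table_id = "my-project.telescope.enriched_packets"  # hypothetical table

# One telescope packet joined with enrichment data keyed on the source IP.
rows = [
    {
        "src_ip": "203.0.113.50",
        "dst_port": 23,
        "reverse_dns": "host50.example.net",
        "open_ports": [23, 80],
        "category": "iot-scanner",
    },
]

errors = client.insert_rows_json(table_id, rows)  # streaming insert
if errors:
    print("insert failed:", errors)
```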
Modernisation and extension of InetVis: a network security data visualisation tool
- Authors: Johnson, Yestin
- Date: 2019
- Subjects: Data visualization , InetVis (Application software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/69223 , vital:29447
- Description: This research undertook an investigation into the digital archaeology, modernisation, and revitalisation of the InetVis software application, developed at Rhodes University in 2007. InetVis allows users to visualise network traffic in an interactive 3D scatter plot, an idea based on the Spinning Cube of Potential Doom introduced by Stephen Lau (a sketch of the scatter-plot mapping follows this record). The original InetVis research project aimed to extend this concept and implementation, specifically for use in analysing network telescope traffic. The InetVis source code was examined and ported to run on modern operating systems. The porting process involved updating the UI framework, Qt, from version 3 to 5, as well as adding support for 64-bit compilation. This research extended its usefulness with the implementation of new, high-value features and improvements. The most notable new features include a general settings framework, improved screenshot generation, automated visualisation modes, new keyboard shortcuts, and support for building and running InetVis on macOS. Additional features and improvements were identified for future work, consisting of support for a plug-in architecture and an extended heads-up display. A user survey was then conducted, which determined that respondents found InetVis easy to use and useful, and identified which new and proposed features respondents found most useful. At this point, no other tool offers the simplicity and user-friendliness of InetVis for the analysis of network packet captures, especially those from network telescopes.
- Full Text:
- Date Issued: 2019
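For readers unfamiliar with the visualisation, the sketch below shows the essence of the Spinning Cube of Potential Doom mapping that InetVis builds on: each packet becomes a point in the unit cube, with the monitored destination range on one axis, the destination port on another, and the source address on the third. The axis assignment and home network here are illustrative; InetVis's exact mapping may differ.

```python
import ipaddress

def to_point(src_ip: str, dst_ip: str, dst_port: int,
             home_net: str = "198.51.100.0/24") -> tuple[float, float, float]:
    net = ipaddress.ip_network(home_net)
    dst = ipaddress.ip_address(dst_ip)
    x = (int(dst) - int(net.network_address)) / max(net.num_addresses - 1, 1)
    y = dst_port / 65535.0                               # destination port
    z = int(ipaddress.ip_address(src_ip)) / (2**32 - 1)  # IPv4 source space
    return (x, y, z)

print(to_point("203.0.113.9", "198.51.100.42", 445))
```

Scanning activity shows up strikingly in this projection: a horizontal line is one source sweeping the address range on a single port, while a vertical line is a port scan of a single host.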
Gaining cyber security insight through an analysis of open source intelligence data: an East African case study
- Authors: Chindipha, Stones Dalitso
- Date: 2018
- Subjects: Open source intelligence -- Africa, East , Computer security -- Africa, East , Computer networks -- Security measures -- Africa, East , Denial of service attacks -- Africa, East , Sentient Hyper-Optimised Data Access Network (SHODAN) , Internet Background Radiation (IBR)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/60618 , vital:27805
- Description: With each passing year the number of Internet users and connected devices grows, and this is particularly so in Africa. This growth brings with it an increase in the prevalence of cyber-attacks. Looking at the current state of affairs, cybersecurity incidents are likely to increase in African countries, mainly due to the increased prevalence and affordability of broadband connectivity coupled with a lack of online security awareness. The adoption of mobile banking has aggravated the situation, making the continent more attractive to hackers who bank on the malpractices of users. Using Open Source Intelligence (OSINT) data sources such as the Sentient Hyper-Optimised Data Access Network (SHODAN) and Internet Background Radiation (IBR), this research explores the prevalence of vulnerabilities and their accessibility to cyber threat actors. The research focuses on the East African Community (EAC), comprising Tanzania, Kenya, Malawi and Uganda. An IBR dataset collected by a Rhodes University network telescope, spanning 72 months, was used in this research, along with two snapshot periods of data from the SHODAN project (a SHODAN query sketch follows this record). The findings show that there is a significant risk to systems within the EAC, particularly in the SHODAN data. The MITRE CVSS threat scoring system was applied to this research, using FREAK and Heartbleed as sample vulnerabilities identified in the EAC. Looking at IBR, the research has shown that attackers can use either destination ports or source IP addresses to perform an attack which, if not attended to, may be reused year after year, later moving into the allocated IP address space once random probing begins; the moment one vulnerable client is found on the network, the attack spreads through it like a worm. DDoS is one of the attacks that can be generated from IBR. Since the SHODAN dataset had two collection points, the study shows the changes that occurred in Malawi and Tanzania over a period of 14 months using three variables: device type, operating system and ports. The research also identified vulnerable devices in all four countries. In addition, the study identified operating systems, products, OpenSSL, ports and ISPs as some of the variables that can be used to identify vulnerabilities in systems. In the case of OpenSSL and products, the research went further by identifying the type of attack that can occur and its associated CVE-ID.
- Full Text:
- Date Issued: 2018
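A SHODAN snapshot of the kind used above can be reproduced in miniature with the official Python library, as sketched below. The API key and the query (telnet exposure in Tanzania) are illustrative assumptions, not the thesis's exact searches.

```python
import shodan

api = shodan.Shodan("YOUR_API_KEY")  # hypothetical key

try:
    results = api.search("country:TZ port:23")  # exposed telnet in Tanzania
    print("total results:", results["total"])
    for match in results["matches"][:5]:
        print(match["ip_str"], match.get("os"), match.get("org"))
except shodan.APIError as e:
    print("query failed:", e)
```

Repeating the same query at two points in time, as the thesis does with its two collection points, gives a simple measure of how exposure changes.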
Towards a threat assessment framework for consumer health wearables
- Authors: Mnjama, Javan Joshua
- Date: 2018
- Subjects: Activity trackers (Wearable technology) , Computer networks -- Security measures , Data protection , Information storage and retrieval systems -- Security systems , Computer security -- Software , Consumer Health Wearable Threat Assessment Framework , Design Science Research
- Language: English
- Type: text , Thesis , Masters , MCom
- Identifier: http://hdl.handle.net/10962/62649 , vital:28225
- Description: The collection of health data, such as physical activity, consumption and physiological data, through consumer health wearables such as fitness trackers is very beneficial for the promotion of physical wellness. However, consumer health wearables and their associated applications are known to have privacy and security concerns that can potentially make the collected personal health data vulnerable to hackers. These concerns are attributed to security theoretical frameworks not sufficiently addressing the entirety of privacy and security concerns relating to the diverse technological ecosystem of consumer health wearables. The objective of this research was therefore to develop a threat assessment framework that can be used to guide the detection of vulnerabilities affecting consumer health wearables and their associated applications. To meet this objective, the Design Science Research methodology was used to develop the desired artefact, the Consumer Health Wearable Threat Assessment Framework. The framework comprises fourteen vulnerabilities classified according to Authentication, Authorization, Availability, Confidentiality, Non-Repudiation and Integrity (an illustrative encoding follows this record). In developing the artefact, the threat assessment framework was demonstrated on two fitness trackers and their associated applications, and was able to identify how these vulnerabilities affected the two test cases based on the classification categories of the framework. The framework was also evaluated by four security experts, who assessed its quality, utility and efficacy and supported its use as a relevant and comprehensive framework to guide the detection of vulnerabilities in consumer health wearables and their associated applications. The implication of this research is that the framework can be used by developers to better identify the vulnerabilities of consumer health wearables and their associated applications, assisting in creating a more secure environment for the storage and use of health data.
- Full Text:
- Date Issued: 2018
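One way to picture the artefact is as a checklist of categorised findings; the sketch below encodes that shape. The six category names come from the abstract, but the individual findings shown are invented placeholders, not the thesis's fourteen vulnerabilities.

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    AUTHENTICATION = "Authentication"
    AUTHORIZATION = "Authorization"
    AVAILABILITY = "Availability"
    CONFIDENTIALITY = "Confidentiality"
    NON_REPUDIATION = "Non-Repudiation"
    INTEGRITY = "Integrity"

@dataclass
class Finding:
    category: Category
    description: str
    affected: bool  # did the assessed tracker or app exhibit the vulnerability?

assessment = [
    Finding(Category.CONFIDENTIALITY, "Health data sent without TLS", True),
    Finding(Category.AUTHENTICATION, "No lockout after failed app logins", False),
]

for f in assessment:
    print(f"[{f.category.value}] {'FAIL' if f.affected else 'ok  '} {f.description}")
```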
A longitudinal study of DNS traffic: understanding current DNS practice and abuse
- Authors: Van Zyl, Ignus
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3707 , vital:20537
- Description: This thesis examines a dataset spanning 21 months and containing 3.5 billion DNS packets: traffic on TCP and UDP port 53 captured on a production /24 IP block. The purpose of this thesis is twofold: first, to build an understanding of current practice and behaviour within the DNS infrastructure; second, to explore current threats faced by the DNS and the various systems that implement it. This is achieved by drawing on analysis and observations from the captured data. Aspects of the operation of DNS on the greater Internet are considered with reference to the trends observed in the dataset. A thorough analysis of current DNS TTL implementation is made with respect to all response traffic, with further sections looking at observed DNS TTL values for .za domain replies and for replies flagged NXDOMAIN. This thesis found that the TTL values implemented are much lower than was recommended in previous years, and that the TTL decrease is prevalent in most, but not all, of the observed TTL implementations. With respect to the nature of DNS operations, this thesis also analyses the geolocation of authoritative servers for local (.za) domains, and offers observations on the latency introduced by the choice of authoritative server location for a given .za domain. It was found that the majority of .za domain authoritative servers are hosted internationally, which results in latencies several times greater than those observed for local authoritative servers. Further analysis is done on the NXDOMAIN behaviour captured across the dataset; these findings outline the cost of DNS misconfiguration as well as highlighting instances of NXDOMAIN generation through malicious practice. With respect to DNS abuse, original research is presented on long-term scanning generated as a result of amplification attack activity on the greater Internet. Many instances of amplification domain scans were captured during the packet capture, and an attempt is made to correlate that activity temporally with known amplification attack reports. The final area this thesis deals with is the relatively new field of bitflipping and bitsquatting, delivering results on bitflip detection and evaluation over the course of the entire dataset (a candidate-generation sketch follows this record). The detection methodology is outlined, and the final results are compared with findings in recent bitflip literature.
- Full Text:
- Date Issued: 2016
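Bitsquat candidate generation, the usual first step in bitflip detection, is sketched below: enumerate every name reachable from a target domain by a single bit error and keep only those valid in DNS. The filtering rules follow common practice; the thesis's exact methodology may differ.

```python
import string

VALID = set(string.ascii_lowercase + string.digits + "-")

def bitflips(domain: str) -> set[str]:
    label, _, suffix = domain.partition(".")
    out = set()
    for i, ch in enumerate(label):
        for bit in range(8):
            flipped = chr(ord(ch) ^ (1 << bit)).lower()
            if flipped in VALID and flipped != ch:
                out.add(label[:i] + flipped + label[i + 1:] + "." + suffix)
    return out

candidates = bitflips("example.com")
print(len(candidates), sorted(candidates)[:5])
```

Matching observed query names against such candidate sets is what allows bitflip events to be picked out of billions of packets.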
An investigation into the prevalence and growth of phishing attacks against South African financial targets
- Authors: Lala, Darshan Magan
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3157 , vital:20379
- Description: Phishing in the electronic communications medium is the act of sending unsolicited email messages with the intention of masquerading as a reputable business. The objective is to deceive the recipient into divulging personal and sensitive information such as bank account details, credit card numbers and passwords. Financial services are the most common targets for scammers, and phishing attacks in South Africa have caused businesses and consumers substantial financial losses. This research investigated existing literature to understand the basic concepts of email, phishing and spam, and how these fit together. The research also looks at the growth of phishing worldwide and in particular against South African targets. A quantitative study is performed and reported on, involving the study and analysis of phishing statistics in a dataset provided by the South African Anti-Phishing Working Group. The dataset contains phishing URL information, the country code where each site was hosted, the targeted company name, IP address information and the timestamp of the phishing site, covering 161 different phishing targets. The research primarily focuses on the trend in phishing attacks against six South African financial institutions, and correlates this with the overall global trend using statistical analysis (a correlation sketch follows this record). The results are compared to existing statistics and literature on the prevalence and growth of phishing in South Africa. The question this research answers is whether the prevalence and growth of phishing in South Africa correlates with the global trend in phishing attacks. The findings indicate that certain correlations exist between some of the South African phishing targets and global phishing trends.
- Full Text:
- Date Issued: 2016
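The kind of trend comparison described above reduces to a correlation between two time series, as in the sketch below. The monthly counts are fabricated placeholders purely to make the example runnable; statistics.correlation requires Python 3.10 or newer.

```python
from statistics import correlation  # Pearson's r; Python 3.10+

# Hypothetical monthly phishing-site counts: one SA target vs the global total.
local_monthly = [12, 15, 9, 22, 30, 28, 35, 31, 40, 38, 45, 50]
global_monthly = [1100, 1250, 990, 1800, 2400, 2300,
                  2900, 2750, 3300, 3100, 3600, 4000]

r = correlation(local_monthly, global_monthly)
print(f"Pearson r = {r:.3f}")  # r near +1 means the local trend tracks the global one
```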
FRAME: frame routing and manipulation engine
- Authors: Pennefather, Sean Niel
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3608 , vital:20529
- Description: This research reports on the design and implementation of FRAME, an embedded hardware network processing platform designed to perform network frame manipulation and monitoring at line speeds compliant with the IEEE 802.3 Ethernet standard. The system provides frame manipulation functionality to aid in the development and implementation of network testing environments (a frame-parsing sketch follows this record). Platform cost and ease of use were both considered during design, resulting in the fabrication of hardware and the development of Link, a Domain Specific Language used to create custom applications compatible with the platform. The functionality of the resulting platform is shown through conformance testing of the designed modules and through application examples. Throughput testing showed that the peak throughput achievable by the platform is limited to 86.4 Mbit/s, comparable to commodity 100 Mbit hardware, and the total cost of the prototype platform ranged between $220 and $254.
- Full Text:
- Date Issued: 2016
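As a taste of the frame-level work such a platform performs, the sketch below parses and rewrites an Ethernet II header in plain Python. It is not the Link DSL (whose syntax the abstract does not specify), just an illustration of the underlying manipulation.

```python
import struct

def parse_eth(frame: bytes) -> dict:
    # Ethernet II: 6-byte destination MAC, 6-byte source MAC, 2-byte EtherType.
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return {
        "dst": dst.hex(":"),
        "src": src.hex(":"),
        "ethertype": hex(ethertype),  # 0x800 = IPv4
        "payload": frame[14:],
    }

def rewrite_dst(frame: bytes, new_dst: bytes) -> bytes:
    # Swap in a new destination MAC, leaving the rest of the frame untouched.
    return new_dst + frame[6:]

frame = bytes.fromhex("ffffffffffff 001122334455 0800") + b"\x45" + b"\x00" * 27
info = parse_eth(frame)
print(info["src"], "->", info["dst"], info["ethertype"])
patched = rewrite_dst(frame, bytes.fromhex("00005e005301"))
```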
Toward an automated botnet analysis framework: a darkcomet case-study
- Authors: du Bruyn, Jeremy Cecil
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/2937 , vital:20344
- Full Text:
- Date Issued: 2016
An analysis of malware evasion techniques against modern AV engines
- Authors: Haffejee, Jameel
- Date: 2015
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:20979 , http://hdl.handle.net/10962/5821
- Description: This research empirically tested the response of antivirus applications to binaries that use virus-like evasion techniques. To achieve this, a number of binaries were processed using several evasion methods and then deployed against multiple antivirus engines. The research also documents the process of setting up an environment for testing antivirus engines, including building the evasion techniques used in the tests. The results of the empirical tests illustrate that an attacker can evade multiple antivirus engines without much effort using well-known evasion techniques. Furthermore, some antivirus engines may respond to the occurrence of an evasion technique itself rather than to the presence of any malicious code. In practical terms, this shows that while antivirus applications are useful for protecting against known threats, their effectiveness against unknown or modified threats is limited. (An illustrative test-harness sketch follows this record.)
- Full Text:
- Date Issued: 2015
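A minimal version of such a test harness is sketched below, using the harmless EICAR test string in place of malware and the ClamAV clamscan CLI as the engine (exit code 1 signals a detection). The "evasion" shown, appending junk bytes to change the file hash, is the textbook hash-busting transform, not a technique taken from the thesis; note too that EICAR is only defined for near-exact files, so the padded variant going undetected illustrates signature brittleness only loosely.

```python
import os
import subprocess
import tempfile

EICAR = (b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-"
         b"ANTIVIRUS-TEST-FILE!$H+H*")

def detected(payload: bytes) -> bool:
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(payload)
        path = f.name
    try:
        # clamscan returns 0 for clean files and 1 when something is detected
        result = subprocess.run(["clamscan", "--no-summary", path],
                                capture_output=True)
        return result.returncode == 1
    finally:
        os.unlink(path)

print("baseline detected:   ", detected(EICAR))
print("hash-busted detected:", detected(EICAR + os.urandom(64)))
```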
An analysis of the risk exposure of adopting IPV6 in enterprise networks
- Authors: Berko, Istvan Sandor
- Date: 2015
- Subjects: International Workshop on Deploying the Future Infrastructure , Computer networks , Computer networks -- Security measures , Computer network protocols
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4722 , http://hdl.handle.net/10962/d1018918
- Description: The increased address pool of IPv6 presents changes in resource impact to the Enterprise that, if not adequately addressed, can turn risks that are locally significant in IPv4 into risks that impact the Enterprise in its entirety. The expected conclusion is that the IPv6 environment will impose significant changes on the Enterprise environment, which may negatively impact organisational security if the IPv6 nuances are not adequately addressed. This thesis reviews the risks related to the operation of enterprise networks with the introduction of IPv6. Global trends are discussed to provide insight and background to the IPv6 research space, and an analysis of the current state of readiness in enterprise networks quantifies the value of developing this thesis. The base controls that should be deployed in enterprise networks to prevent the abuse of IPv6 through tunnelling, and to protect the enterprise access layer, are discussed (an illustrative monitoring sketch follows this record). A series of case studies is presented which identify and analyse the impact of certain changes in the IPv6 protocol on enterprise networks; the case studies also identify mitigation techniques to reduce risk.
- Full Text:
- Date Issued: 2015
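One of the tunnelling abuses alluded to above is 6in4 (IPv6 carried inside IPv4 as protocol 41), which can smuggle IPv6 past IPv4-only controls. The sketch below flags such packets; it assumes the third-party Scapy library and packet-capture privileges, and is a monitoring aid of our own devising rather than the thesis's control set.

```python
from scapy.all import IP, sniff  # requires scapy and capture privileges

def report(pkt) -> None:
    ip = pkt[IP]
    print(f"6in4 tunnel candidate: {ip.src} -> {ip.dst}")

# The BPF filter 'ip proto 41' matches IPv4 packets whose payload is IPv6.
sniff(filter="ip proto 41", prn=report, store=False, count=10)
```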
An investigation into the role played by perceived security concerns in the adoption of mobile money services : a Zimbabwean case study
- Authors: Madebwe, Charles
- Date: 2015
- Subjects: Banks and banking, Mobile -- Zimbabwe , Global system for mobile communications , Cell phones -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4711 , http://hdl.handle.net/10962/d1017933
- Description: The ubiquitous nature of mobile phones and their popularity has led to opportunistic value added services (VAS), such as mobile money, being implemented on the back of this phenomenon. Several studies have been done on the factors that influence the adoption of mobile money and other information systems. This thesis looks at the factors determining the uptake of mobile money over cellular networks, with a special emphasis on aspects relating to perceived security, although other factors, namely perceived usefulness, perceived ease of use, perceived trust and perceived cost, were also examined. The research further looks at the security threats introduced to mobile money by the nature, architecture, standards and protocols of the Global System for Mobile Communications (GSM). The model employed for this research was the Technology Acceptance Model (TAM). A literature review was done on the security of GSM. Data was collected from a sample population around Harare, Zimbabwe, using physical questionnaires, and statistical tests were performed on the collected data to find the significance of each construct to mobile money adoption (an illustrative test follows this record). The research found a positive correlation between perceived security concerns and the adoption of mobile money services over cellular networks. Perceived usefulness was found to be the most important factor in the adoption of mobile money. The research also found that customers need to trust the network service provider and the systems in use before they will adopt mobile money. Other factors driving consumer adoption were found to be perceived ease of use and perceived cost. The findings show that players who intend to introduce mobile money should strive to offer secure, useful and trustworthy systems without making the service expensive or difficult to use. The literature review showed that there is a possibility of compromising mobile money transactions done over GSM.
- Full Text:
- Date Issued: 2015
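The construct-level testing mentioned above boils down to correlating questionnaire scores, as in the sketch below. The respondent scores are invented 5-point Likert means and scipy is an assumed dependency; the thesis's actual instruments and tests may differ.

```python
from scipy.stats import pearsonr

# Hypothetical per-respondent mean scores on two TAM constructs (1-5 scale).
perceived_security = [4.2, 3.8, 2.5, 4.6, 3.1, 2.2, 4.8, 3.9, 2.9, 4.1]
adoption_intention = [4.5, 3.6, 2.8, 4.8, 3.3, 2.0, 4.9, 4.1, 3.0, 4.4]

r, p = pearsonr(perceived_security, adoption_intention)
print(f"r = {r:.3f}, p = {p:.4f}")  # a strong positive r supports the finding
```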
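The construct-significance testing described above can be illustrated with a toy example. The sketch below is not the study's analysis or data: the respondent scores, construct names and the choice of a Pearson correlation are assumptions made purely to show the shape of such a test.

```python
from scipy import stats

# Hypothetical Likert-scale (1-5) construct scores per respondent;
# stand-ins for questionnaire data, not the study's actual dataset.
perceived_security   = [4, 2, 5, 3, 4, 1, 5, 3]
perceived_usefulness = [5, 3, 5, 4, 4, 2, 5, 3]
adoption_intention   = [5, 2, 5, 3, 4, 1, 5, 4]

# Test each construct's correlation with adoption intention and report
# the correlation coefficient and its p-value (significance).
for name, scores in [("perceived security", perceived_security),
                     ("perceived usefulness", perceived_usefulness)]:
    r, p = stats.pearsonr(scores, adoption_intention)
    print(f"{name}: r={r:.2f}, p={p:.3f}")
```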
Pseudo-random access compressed archive for security log data
- Authors: Radley, Johannes Jurgens
- Date: 2015
- Subjects: Computer security , Information storage and retrieval systems , Data compression (Computer science)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4723 , http://hdl.handle.net/10962/d1020019
- Description: We are surrounded by an increasing number of devices and applications that produce a huge quantity of machine-generated data. Almost all of this machine data contains some element of security information that can be used to discover, monitor and investigate security events. The work proposes a pseudo-random access compressed storage method for log data, to be used with an information retrieval system that in turn provides the ability to search and correlate log data and the corresponding events (a minimal sketch of such a store follows this record). We explain the method for converting log files into distinct events and storing the events in a compressed file. This yields an entry identifier for each log entry that provides a pointer which can be used by indexing methods. The research also evaluates the compression performance penalties incurred by this storage system, including a decreased compression ratio as well as increased compression and decompression times.
- Full Text:
- Date Issued: 2015
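To make the storage idea above concrete, the following is a minimal sketch of one way such a pseudo-random access store could be laid out: events are zlib-compressed in fixed-size blocks, and an index maps each entry identifier to a block and an offset within it, so fetching one entry decompresses only one block. The block size, in-memory layout and API here are assumptions for illustration, not the thesis's actual format.

```python
import zlib

EVENTS_PER_BLOCK = 1000  # assumed block granularity

class CompressedEventStore:
    def __init__(self):
        self.blocks = []   # zlib-compressed blocks of newline-joined events
        self.pending = []  # events not yet flushed into a block
        self.index = []    # entry id -> (block number, line offset in block)

    def append(self, event: str) -> int:
        """Store one event and return its entry identifier."""
        entry_id = len(self.index)
        self.index.append((len(self.blocks), len(self.pending)))
        self.pending.append(event)
        if len(self.pending) == EVENTS_PER_BLOCK:
            self._flush()
        return entry_id

    def _flush(self):
        data = "\n".join(self.pending).encode("utf-8")
        self.blocks.append(zlib.compress(data))
        self.pending = []

    def get(self, entry_id: int) -> str:
        """Fetch one event, decompressing only the block that holds it."""
        block_no, line_no = self.index[entry_id]
        if block_no == len(self.blocks):   # still in the unflushed tail
            return self.pending[line_no]
        lines = zlib.decompress(self.blocks[block_no]).decode("utf-8")
        return lines.split("\n")[line_no]

store = CompressedEventStore()
ids = [store.append(f"sshd[42]: failed login attempt {i}") for i in range(2500)]
print(store.get(1500))  # touches only the block containing entry 1500
```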
Towards a framework for building security operation centers
- Authors: Jacobs, Pierre Conrad
- Date: 2015
- Subjects: Security systems industry , Systems engineering , Expert systems (Computer science) , COBIT (Information technology management standard) , Computer security
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4710 , http://hdl.handle.net/10962/d1017932
- Description: In this thesis a framework for Security Operation Centers (SOCs) is proposed. It was developed by applying Systems Engineering best practices, combined with industry-accepted standards and frameworks such as the TM Forum’s eTOM framework, COBIT, ITIL and ISO/IEC 27002:2005. The framework encompasses the design considerations, the operational considerations and the means to measure the effectiveness and efficiency of SOCs. The intent is to give consumers guidance on how to compare and measure the capabilities of SOCs provided by disparate service providers, and to give service providers (internal and external) a framework to use when building and improving their offerings. Providing a consistent, measurable and guaranteed service to customers is becoming more important as the focus on holistic management of security increases, which has in turn increased the number of both internal and managed service provider solutions. While some frameworks exist for designing, building and operating the specific security technologies used within SOCs, we did not find any comprehensive framework for designing, building and managing SOCs themselves. Consequently, consumers of SOCs do not enjoy a consistent experience from vendors, and may experience inconsistent services from geographically dispersed offerings provided by the same vendor.
- Full Text:
- Date Issued: 2015
Towards an evaluation and protection strategy for critical infrastructure
- Authors: Gottschalk, Jason Howard
- Date: 2015
- Subjects: Computer crimes -- Prevention , Computer networks -- Security measures , Computer crimes -- Law and legislation -- South Africa , Public works -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4721 , http://hdl.handle.net/10962/d1018793
- Description: Critical Infrastructure is often overlooked from an Information Security perspective as something of high importance to protect, which may leave it at risk of Cyber-related attacks with potentially dire consequences. Furthermore, what constitutes Critical Infrastructure is often a complex discussion, with opinions varying across audiences. Traditional Critical Infrastructure includes power stations, water and sewage pump stations, gas pipelines, power grids and, as a new entrant, the “internet of things”. This list is not complete, and a constant challenge exists in identifying Critical Infrastructure and its interdependencies. The purpose of this research is to highlight the importance of protecting Critical Infrastructure and to propose a high-level framework that aids in its identification and securing. To achieve this, key case studies involving Cyber crime and Cyber warfare were identified and discussed, along with attack vectors against Critical Infrastructure and their impact (where applicable). Furthermore, industry-related material was researched to identify key controls that would aid in protecting Critical Infrastructure, and initiatives that countries are pursuing to that end were identified and discussed. Research was conducted into the various standards, frameworks and methodologies available to aid in the identification, remediation and, ultimately, the protection of Critical Infrastructure. A key output of the research is a hybrid approach to identifying Critical Infrastructure and its associated vulnerabilities, together with an approach for remediation with specific metrics (based on the research performed). The conclusion is that there is often a need and a requirement to identify and protect Critical Infrastructure; however, this is usually initiated or driven by non-owners of Critical Infrastructure (governments, governing bodies, standards bodies and security consultants). Furthermore, where owners do pursue active initiatives, the suggested approaches are very often high-level in nature, with little direct guidance available for very immature environments.
- Full Text:
- Date Issued: 2015
Towards large scale software based network routing simulation
- Authors: Herbert, Alan
- Date: 2015
- Subjects: Routers (Computer networks) , Computer software , Linux
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4709 , http://hdl.handle.net/10962/d1017931
- Description: Software-based routing simulators suffer from large simulation host requirements and are prone to slowdowns caused by resource limitations, as well as by the context switching that user space to kernel space requests incur. Hardware-based simulations, in turn, do not scale with the passing of time, as their available resources are fixed at the time of manufacture. This research aims to provide a software-based, scalable solution to network simulation. It achieves this through a Linux kernel-based solution, implemented as a custom kernel module. This reduces the number of context switches by eliminating the user space context requirement, and remains highly compatible with any host that can run the Linux kernel. Through careful choice of data structures and software component design, the routing simulator achieved throughput of over 7 Gbps across multiple simulated node hops on consumer hardware. Alongside this throughput, the simulator can instantiate and simulate networks in excess of 1 million routing nodes within 1 GB of system memory (a toy user-space sketch of the per-node state idea follows this record).
- Full Text:
- Date Issued: 2015
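The thesis implements its simulator as a custom Linux kernel module in C; none of that kernel code is reproduced here. Purely to illustrate the memory argument, that per-node routing state kept in flat arrays rather than heavyweight objects lets very large node counts fit in little memory, here is a hypothetical user-space Python sketch. The linear next-hop topology and the array layout are assumptions for illustration only.

```python
import array

NUM_NODES = 1_000_000

# One next-hop entry per node in a flat C-style int array, so a million
# nodes cost a few megabytes rather than gigabytes of Python objects.
# The topology here is a simple linear chain purely for demonstration.
next_hop = array.array(
    "i", (min(n + 1, NUM_NODES - 1) for n in range(NUM_NODES))
)

def route(src: int, dst: int, max_hops: int = 64) -> int:
    """Forward a packet from src toward dst, returning the hop count."""
    node, hops = src, 0
    while node != dst and hops < max_hops:
        node = next_hop[node]  # next-hop lookup: the simulator's hot path
        hops += 1
    return hops

print(route(0, 10))  # 10 hops along the linear next-hop chain
```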
Using risk mitigation approaches to define the requirements for software escrow
- Authors: Rode, Karl
- Date: 2015
- Subjects: Escrows , Source code (Computer science)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4714 , http://hdl.handle.net/10962/d1017936
- Description: Two or more parties entering into a contract for services or goods may make use of an escrow of the funds for payment to establish trust in the contract. In such an arrangement the documents or financial instruments, the object(s) in escrow, are held in trust by a trusted third party (the escrow provider) until the specified conditions are fulfilled. In software escrow, the object of escrow is typically the source code, and the specified release conditions usually address scenarios in which the software provider becomes unable to continue providing services (for example, because of bankruptcy or a change in the services provided). The subject of software escrow is not well documented in the academic body of work; the largest information sources, active commentary and supporting papers are provided by commercial software escrow providers, both in South Africa and abroad. This work maps the software escrow topic onto the King III compliance framework in South Africa. This is of value because users of bespoke-developed applications may require extended professional assistance to align with the King III guidelines. The supporting risk assessment model developed in this work serves as a tool to evaluate and motivate for software escrow agreements. It also provides an overview of the various escrow agreement types and shifts the focus to the value proposition that each holds. Initial research indicated that current awareness of software escrow in industry is still very low. This was evidenced by the significant number of approached specialists who declined to participate in the survey, citing their own inexperience in applying the discipline of software escrow within their companies. Moreover, the participants who contributed to the research indicated that they only required software escrow for medium- to highly-critical applications. This demonstrated the value of assessing the various risk factors that bespoke software development introduces, as well as the risk mitigation options available, through tools such as escrow, to reduce the actual and residual risk to a manageable level.
- Full Text:
- Date Issued: 2015
Visualisation of PF firewall logs using open source
- Authors: Coetzee, Dirk
- Date: 2015
- Subjects: Open source software -- South Africa , Firewalls (Computer security) -- South Africa , Data logging -- South Africa , Data integrity -- South Africa , Data protection -- South Africa , Computer crimes -- South Africa , Hacktivism
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4719 , http://hdl.handle.net/10962/d1018552
- Description: If you cannot measure, you cannot manage. This is an age-old saying, but it remains very true, especially within the current South African cybercrime scene and the ever-growing Internet footprint. Owing to the significant increase in cybercrime across the globe, information security specialists are starting to see the intrinsic value of logs that can ‘tell a story’. Logs do not only tell a story; they also provide a tool to measure what is normally a dark force within an organisation. Collecting current logs from installed systems, operating systems and devices is imperative in the event of a hacking attempt, data leak or data theft, whether the attempt is successful or not. No logs mean no evidence, and in many cases not even the opportunity to find the mistake or fault in the organisation’s defence systems. It remains difficult to choose which logs an organisation requires, and a number of questions should be considered: should a centralised or decentralised approach to collecting these logs be followed, or a combination of both? How many events will be collected, how much additional bandwidth will be required, and will the log collection be near real time? How long must the logs be kept, and what hashing and encryption (for data integrity), if any, should be used? Lastly, what system must be used to correlate and analyse the logs and to make alerts and reports available? This thesis addresses these questions, examining the current lack of log analysis and of practical implementations in modern organisations, and showing how the latter need can be met by means of a basic approach. South African organisations must use the technology at hand in order to know what electronic data enters and leaves their networks. Concentrating only on FreeBSD PF firewall logs, this thesis demonstrates that excellent results are possible when logs are collected to obtain a visual display of the data traversing the corporate network and of which parts of that data pose a threat. The threat is easily determined through visual interpretation of statistical outliers (a minimal sketch of the outlier idea follows this record). This thesis aims to show that, in the field of corporate data protection, if you can measure, you can manage.
- Full Text:
- Date Issued: 2015
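The statistical-outlier point above can be illustrated with a toy example. The sketch below assumes the PF logs have already been parsed and aggregated into per-source counts of blocked connections (raw pflog output is binary and is typically read with tcpdump first); the addresses, counts and z-score threshold are all hypothetical, not taken from the thesis.

```python
from statistics import mean, stdev

# Hypothetical per-source counts of blocked connections, as would be
# aggregated from parsed PF firewall log output.
blocked = {
    "203.0.113.9": 120, "198.51.100.4": 3, "192.0.2.77": 5,
    "192.0.2.12": 4, "198.51.100.7": 6, "203.0.113.40": 2,
    "192.0.2.33": 7, "198.51.100.19": 3, "203.0.113.55": 5,
}

# Flag sources whose block counts deviate strongly from the mean;
# a z-score threshold of 2 is an arbitrary illustrative cut-off.
mu, sigma = mean(blocked.values()), stdev(blocked.values())
for src, n in sorted(blocked.items(), key=lambda kv: -kv[1]):
    z = (n - mu) / sigma
    marker = "  <-- outlier" if z > 2 else ""
    print(f"{src:15s} blocked {n:4d} times (z={z:+.2f}){marker}")
```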