Evolving IoT honeypots
- Authors: Genov, Todor Stanislavov
- Date: 2022-10-14
- Subjects: Internet of things , Malware (Computer software) , QEMU , Honeypot , Cowrie
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/362819 , vital:65365
- Description: The Internet of Things (IoT) is the emerging world where arbitrary objects from our everyday lives gain basic computational and networking capabilities to become part of the Internet. Researchers estimate that between 25 and 35 billion devices will be part of the Internet by 2022. Unlike conventional computers, where one hardware platform (Intel x86) and three operating systems (Windows, Linux and OS X) dominate the market, the IoT landscape is far more heterogeneous. To meet the growth in demand, the number of System-on-Chip (SoC) manufacturers has grown correspondingly, making embedded platforms based on ARM, MIPS or SH4 processors abundant. The pursuit of market share is further leading to a price war and cost-cutting, ultimately resulting in cheap systems with limited hardware resources and capabilities. The frugality of IoT hardware has a domino effect. Due to resource constraints, vendors are packaging devices with custom, stripped-down Linux-based firmware optimized for performing the device's primary function. Device management, monitoring and security features are by and large absent from IoT devices. This has created an asymmetry favouring attackers and disadvantaging defenders. This research sets out to reduce the opacity and identify a viable strategy, tactics and tooling for gaining insight into the IoT threat landscape by leveraging honeypots to build and deploy an evolving, world-wide Observatory, based on cloud platforms, to help with studying attacker behaviour and collecting IoT malware samples. The research produces useful tools and techniques for identifying behavioural differences between Medium-Interaction honeypots and real devices by replaying interactive attacker sessions collected from the Honeypot Network. The behavioural delta is used to evolve the Honeypot Network and improve its collection capabilities. Positive results are obtained with respect to the effectiveness of the above technique. Findings by other researchers in the field are also replicated. The complete dataset and source code used for this research is made publicly available on the Open Science Framework website at https://osf.io/vkcrn/. , Thesis (MSc) -- Faculty of Science, Computer Science, 2022
- Full Text:
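The session-replay idea in this abstract lends itself to a short sketch: run the same recorded attacker commands against a Cowrie honeypot and against a reference device over SSH, then diff the outputs to surface the behavioural delta. This is a minimal sketch under stated assumptions, not the thesis's actual tooling; the host names, credentials and command list are hypothetical placeholders.

```python
# Replay a recorded attacker command sequence against a Cowrie honeypot and
# a reference device, then diff the responses to find behavioural deltas.
# Hosts, credentials and COMMANDS are illustrative placeholders.
import difflib
import paramiko

COMMANDS = ["uname -a", "cat /proc/cpuinfo", "busybox"]  # sample replayed session

def run_session(host: str, port: int, user: str, password: str) -> list[str]:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, port=port, username=user, password=password)
    outputs = []
    for cmd in COMMANDS:
        _, stdout, stderr = client.exec_command(cmd)
        outputs.append(stdout.read().decode() + stderr.read().decode())
    client.close()
    return outputs

honeypot = run_session("honeypot.example", 2222, "root", "root")
device = run_session("device.example", 22, "root", "root")
for cmd, hp_out, dev_out in zip(COMMANDS, honeypot, device):
    delta = list(difflib.unified_diff(hp_out.splitlines(), dev_out.splitlines(), lineterm=""))
    if delta:
        print(f"[{cmd}] behavioural delta:", *delta, sep="\n")
```

Any non-empty diff flags a command whose honeypot emulation could be evolved to match the real device.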
An investigation into the current state of web based cryptominers and cryptojacking
- Authors: Len, Robert
- Date: 2021-04
- Subjects: Cryptocurrencies , Malware (Computer software) , Computer networks -- Security measures , Computer networks -- Monitoring , Cryptomining , Coinhive , Cryptojacking
- Language: English
- Type: thesis , text , Masters , MSc
- Identifier: http://hdl.handle.net/10962/178248 , vital:42924
- Description: The aim of this research was to conduct a review of the current state and extent of surreptitious cryptomining software and its prevalence as a means of income generation. Income is generated through the use of a viewer's browser to execute custom JavaScript code to mine cryptocurrencies such as Monero and Bitcoin. The research aimed to measure the prevalence of illicit mining scripts being utilised for "in-browser" cryptojacking while further analysing the ecosystems that support the cryptomining environment. The extent of the research covers aspects such as the content (or type) of the sites hosting malicious "in-browser" cryptomining software, as well as the occurrences of currencies utilised in the cryptographic mining and the analysis of cryptographic mining code samples. This research aims to compare the results of previous work with the current state of affairs since the closure of Coinhive in March 2019. Coinhive was at the time the market leader in such web-based mining services. Beyond the analysis of the prevalence of cryptomining on the web today, research into the methodologies and techniques used to detect and counteract cryptomining is also conducted. This includes the most recent developments in malicious JavaScript de-obfuscation as well as cryptomining signature creation and detection. Methodologies for heuristic JavaScript behaviour identification and subsequent identification of potential malicious outliers are also included within the countermeasure analysis. The research revealed that, although no longer functional, Coinhive remained the most prevalent script being used for "in-browser" cryptomining services. While remaining the most prevalent, there was however a significant decline in overall occurrences compared to when coinhive.com was operational. Analysis of the ecosystem hosting "in-browser" mining websites found it to be distributed both geographically and in terms of domain categorisations. , Thesis (MSc) -- Faculty of Science, Computer Science, 2021
- Full Text:
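The signature-based detection this abstract surveys can be illustrated with a short sketch: fetch a page and search it for well-known miner indicators. A minimal sketch, assuming an illustrative, far-from-exhaustive signature list and a placeholder URL; real crawlers would also follow embedded script sources and handle obfuscation.

```python
# Signature-based "in-browser" miner detection: fetch a page and flag known
# cryptomining indicators. The signature list is a small illustrative sample.
import re
import requests

MINER_SIGNATURES = [
    r"coinhive\.min\.js",    # classic Coinhive loader script
    r"CoinHive\.Anonymous",  # Coinhive JavaScript API constructor
    r"cryptonight",          # CPU mining algorithm commonly referenced
    r"coin-hive\.com",       # legacy hosting domain
]

def scan_page(url: str) -> list[str]:
    html = requests.get(url, timeout=10).text
    return [sig for sig in MINER_SIGNATURES if re.search(sig, html, re.IGNORECASE)]

hits = scan_page("https://example.com")  # placeholder target
print("miner indicators found:" if hits else "no indicators found", hits)
```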
An Analysis of Internet Background Radiation within an African IPv4 netblock
- Authors: Hendricks, Wadeegh
- Date: 2020
- Subjects: Computer networks -- Monitoring –- South Africa , Dark Web , Computer networks -- Security measures –- South Africa , Universities and Colleges -- Computer networks -- Security measures , Malware (Computer software) , TCP/IP (Computer network protocol)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/103791 , vital:32298
- Description: The use of passive network sensors has in the past proven to be quite effective in monitoring and analysing the current state of traffic on a network. Internet traffic destined to a routable, yet unused address block is often referred to as Internet Background Radiation (IBR) and characterised as unsolicited. This unsolicited traffic is however quite valuable to researchers in that it allows them to study the traffic patterns in a covert manner. IBR is largely composed of network and port scanning traffic, backscatter packets from virus and malware activity and, to a lesser extent, misconfiguration of network devices. This research answers the following two questions: (1) What is the current state of IBR within the context of a South African IP address space and (2) Can any anomalies be detected in the traffic, with specific reference to current global malware attacks such as Mirai and similar. Rhodes University operates five IPv4 passive network sensors, commonly known as network telescopes, each monitoring its own /24 IP address block. The oldest of these network telescopes has been collecting traffic for over a decade, with the newest being established in 2011. This research focuses on the in-depth analysis of the traffic captured by one telescope in the 155/8 range over a 12-month period, from January to December 2017. The traffic was analysed and classified according to the protocol, TCP flag, source IP address, destination port, packet count and payload size. Apart from the normal network traffic graphs and tables, a geographic heatmap of source traffic was also created, based on the source IP address. Spikes and noticeable variances in traffic patterns were further investigated and evidence of Mirai-like malware activity was observed. Network and port scanning were found to comprise the largest amount of traffic, accounting for over 90% of the total IBR. Various scanning techniques were identified, including low-level passive scanning and much higher-level active scanning.
- Full Text:
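The classification step described above (protocol, TCP flag, destination port) can be sketched with Scapy. This is a minimal sketch assuming a hypothetical capture file name; a full telescope pipeline would stream packets rather than load an entire year's pcap into memory.

```python
# Tally protocol, TCP flags and destination ports from a telescope capture.
# The pcap file name is a placeholder.
from collections import Counter
from scapy.all import rdpcap, IP, TCP, UDP, ICMP

packets = rdpcap("telescope-2017.pcap")  # hypothetical capture file
protocols, flags, ports = Counter(), Counter(), Counter()
for pkt in packets:
    if IP not in pkt:
        continue
    if TCP in pkt:
        protocols["TCP"] += 1
        flags[str(pkt[TCP].flags)] += 1  # e.g. "S" for SYN-only scan probes
        ports[pkt[TCP].dport] += 1
    elif UDP in pkt:
        protocols["UDP"] += 1
        ports[pkt[UDP].dport] += 1
    elif ICMP in pkt:
        protocols["ICMP"] += 1           # includes backscatter replies

print(protocols.most_common())
print("top targeted ports:", ports.most_common(10))  # Mirai shows up as 23/tcp spikes
```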
An exploration of the overlap between open source threat intelligence and active internet background radiation
- Authors: Pearson, Deon Turner
- Date: 2020
- Subjects: Computer networks -- Security measures , Computer networks -- Monitoring , Malware (Computer software) , TCP/IP (Computer network protocol) , Open source intelligence
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/103802 , vital:32299
- Description: Organisations and individuals are facing increasingly persistent threats on the Internet from worms, port scanners, and malicious software (malware). These threats are constantly evolving as attack techniques are discovered. To aid in the detection and prevention of such threats, and to stay ahead of the adversaries conducting the attacks, security specialists are utilising Threat Intelligence (TI) data in their defence strategies. TI data can be obtained from a variety of different sources such as private routers, firewall logs, public archives, and public or private network telescopes. However, at the rate and ease at which TI is produced and published, specifically Open Source Threat Intelligence (OSINT), the quality is dropping, resulting in fragmented, context-less and variable data. This research utilised two sets of TI data: a collection of OSINT and active Internet Background Radiation (IBR). The data was collected over a period of 12 months, from 37 publicly available OSINT datasets and five IBR datasets. Through the identification and analysis of common data between the OSINT and IBR datasets, this research was able to gain insight into how effective OSINT is at detecting and potentially reducing ongoing malicious Internet traffic. As part of this research, a minimal framework for the collection, processing/analysis, and distribution of OSINT was developed and tested. The research focused on exploring areas in common between the two datasets, with the intention of creating an enriched, contextualised, and reduced set of malicious source IP addresses that could be published for consumers to use in their own environment. The findings of this research pointed towards a persistent group of IP addresses observed in both datasets over the period under research. Using these persistent IP addresses, the research was able to identify specific services being targeted. Amongst these persistent IP addresses were significant packets from Mirai-like IoT malware on ports 23/tcp and 2323/tcp, as well as general scanning activity on port 445/tcp.
- Full Text:
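The core overlap-and-persistence analysis reduces to set operations over the two datasets. A minimal sketch, assuming hypothetical per-month text files of one IP address per line; the thesis's actual framework ingests 37 OSINT feeds and five IBR sources with richer metadata.

```python
# Intersect OSINT-listed addresses with active IBR sources, keeping only
# addresses that persist across every observation window. File names and
# formats are illustrative placeholders.
from pathlib import Path

def load_ips(path: str) -> set[str]:
    return {line.strip() for line in Path(path).read_text().splitlines() if line.strip()}

months = ["2017-01", "2017-02", "2017-03"]  # placeholder observation windows
persistent = None
for month in months:
    osint = load_ips(f"osint-{month}.txt")  # merged OSINT feeds for the month
    ibr = load_ips(f"ibr-{month}.txt")      # telescope source IPs for the month
    overlap = osint & ibr                   # addresses seen in both datasets
    persistent = overlap if persistent is None else persistent & overlap

print(f"{len(persistent)} addresses persist in both datasets across all months")
```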
A comparative study of CERBER, MAKTUB and LOCKY Ransomware using a Hybridised-Malware analysis
- Authors: Schmitt, Veronica
- Date: 2019
- Subjects: Microsoft Windows (Computer file) , Data protection , Computer crimes -- Prevention , Computer security , Computer networks -- Security measures , Computers -- Access control , Malware (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92313 , vital:30702
- Description: There has been a significant increase in the prevalence of Ransomware attacks in the preceding four years to date. This indicates that the battle has not yet been won defending against this class of malware. This research proposes that by identifying the similarities within the operational framework of Ransomware strains, a better overall understanding of their operation and function can be achieved. This, in turn, will aid in a quicker response to future attacks. With the average Ransomware attack taking two hours to be identified, it shows that there is not yet a clear understanding as to why these attacks are so successful. Research into Ransomware is limited by what is currently known on the topic. Due to the limitations of the research, the decision was taken to examine only three samples of Ransomware from different families. This was decided due to the complexities and comprehensive nature of the research. The in-depth nature of the research and the time constraints associated with it did not allow for a proof of concept of this framework to be tested on more than three families, but the exploratory work was promising and should be further explored in future research. The aim of the research is to follow the Hybrid-Malware analysis framework, which consists of both static and dynamic analysis phases, in addition to the digital forensic examination of the infected system. This allows for signature-based findings, along with behavioural and forensic findings, all in one. This information allows for a better understanding of how this malware is designed and how it infects and remains persistent on a system. The operating system chosen is Microsoft Windows 7, which is still utilised by a significant proportion of Windows users, especially in the corporate environment. The experiment process was designed to enable the researcher to collect information regarding the Ransomware and every aspect of its behaviour and communication on a target system. The results can be compared across the three strains to identify the commonalities. The initial hypothesis was that Ransomware variants are all much like an instant cake mix: they consist of specific building blocks which remain the same, with the flavouring of the cake mix being the unique feature.
- Full Text:
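The static phase of a hybridised analysis typically begins by fingerprinting the sample and extracting printable strings for indicator hunting. A minimal sketch of that first step only, assuming a hypothetical sample path; the dynamic detonation and forensic examination phases the abstract describes would follow.

```python
# Static-phase fragment: hash a sample and pull printable strings, similar
# to the Unix `strings` utility. The sample path is a placeholder.
import hashlib
import re
from pathlib import Path

data = Path("sample.bin").read_bytes()  # hypothetical ransomware sample
print("SHA-256:", hashlib.sha256(data).hexdigest())
print("MD5:    ", hashlib.md5(data).hexdigest())

# Printable ASCII runs of 6+ characters often reveal ransom notes,
# C2 URLs or crypto API names embedded in the binary.
for s in re.findall(rb"[\x20-\x7e]{6,}", data)[:20]:
    print(s.decode("ascii"))
```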
A study of malicious software on the macOS operating system
- Authors: Regensberg, Mark Alan
- Date: 2019
- Subjects: Malware (Computer software) , Computer security , Computer viruses , Mac OS
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92302 , vital:30701
- Description: Much of the published malware research begins with a common refrain: the cost, quantum and complexity of threats are increasing, and research and practice should prioritise efforts to automate and reduce times to detect and prevent malware, while improving the consistency of categories and taxonomies applied to modern malware. Existing work related to malware targeting Apple's macOS platform has not been spared this approach, although limited research has been conducted on the true nature of threats faced by users of the operating system. While the available macOS-focused research consistently notes an increase in macOS users, devices and ultimately in threats, an opportunity exists to understand the real nature of threats faced by macOS users and suggest potential avenues for future work. This research provides a view of the current state of macOS malware by analysing and exploring a dataset of malware detections on macOS endpoints captured over a period of eleven months by an anti-malware software vendor. The dataset is augmented with malware information provided by the widely used VirusTotal service, as well as the application of prior automated malware categorisation work: AVClass to categorise and SSDeep to cluster and report on observed data. With Windows and Android platforms frequently in the spotlight as targets for highly disruptive malware like botnets, ransomware and cryptominers, research and intuition seem to suggest the threat of malware on this increasingly popular platform should be growing and evolving accordingly. Findings suggest that the direction and nature of growth and evolution may not be entirely as clear as industry reports suggest. Adware and Potentially Unwanted Applications (PUAs) make up the vast majority of the detected threats, with remote access trojans (RATs), ransomware and cryptocurrency miners comprising a relatively small proportion of the detected malware. This provides a number of avenues for potential future work to compare and contrast with research on other platforms, as well as identification of key factors that may influence malware growth in the future.
- Full Text:
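The SSDeep clustering step mentioned above can be sketched as pairwise fuzzy-hash comparison with a similarity cut-off. A minimal, greedy sketch assuming the `ssdeep` Python bindings, a hypothetical `samples/` directory and an illustrative threshold, not the thesis's exact parameters.

```python
# Fuzzy-hash each sample with ssdeep and greedily group pairs whose
# similarity score exceeds a threshold. Paths and threshold are illustrative.
import itertools
from pathlib import Path
import ssdeep  # pip install ssdeep

samples = {p.name: ssdeep.hash(p.read_bytes()) for p in Path("samples").glob("*")}
THRESHOLD = 60  # ssdeep similarity is 0-100; cut-off chosen for illustration

clusters: list[set[str]] = []
for a, b in itertools.combinations(samples, 2):
    if ssdeep.compare(samples[a], samples[b]) >= THRESHOLD:
        for cluster in clusters:          # greedy: attach to first matching cluster
            if a in cluster or b in cluster:
                cluster.update({a, b})
                break
        else:
            clusters.append({a, b})

for i, cluster in enumerate(clusters):
    print(f"cluster {i}: {sorted(cluster)}")
```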
Towards understanding and mitigating attacks leveraging zero-day exploits
- Authors: Smit, Liam
- Date: 2019
- Subjects: Computer crimes -- Prevention , Data protection , Hacking , Computer security , Computer networks -- Security measures , Computers -- Access control , Malware (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/115718 , vital:34218
- Description: Zero-day vulnerabilities are unknown and therefore not addressed, with the result that they can be exploited by attackers to gain unauthorised system access. In order to understand and mitigate attacks leveraging zero-days or unknown techniques, it is necessary to study the vulnerabilities, exploits and attacks that make use of them. In recent years there have been a number of leaks publishing such attacks using various methods to exploit vulnerabilities. This research seeks to understand what types of vulnerabilities exist, why and how these are exploited, and how to defend against such attacks by mitigating either the vulnerabilities or the method/process of exploiting them. By moving beyond merely remedying the vulnerabilities to defences that are able to prevent or detect the actions taken by attackers, the security of the information system will be better positioned to deal with future unknown threats. An interesting finding is how attackers move beyond the observable bounds of a system to circumvent security defences, for example, compromising syslog servers, or going down to lower system rings to gain access. However, defenders can counter this by employing defences that are external to the system, preventing attackers from disabling them or removing collected evidence after gaining system access. Attackers are able to defeat air-gaps via the leakage of electromagnetic radiation, as well as misdirect attribution by planting false artefacts for forensic analysis and attacking from third-party information systems. They analyse the methods of other attackers to learn new techniques. An example of this is the Umbrage project, whereby malware is analysed to decide whether it should be implemented as a proof of concept. Another important finding is that attackers respect defence mechanisms such as remote syslog (e.g. firewall), core dump files, database auditing, and Tripwire (e.g. SlyHeretic). These defences all have the potential to result in the attacker being discovered. Attackers must either negate the defence mechanism or find unprotected targets. Defenders can use technologies such as encryption to defend against interception and man-in-the-middle attacks. They can also employ honeytokens and honeypots to alarm, misdirect, slow down and learn from attackers. By employing various tactics, defenders are able to increase their chance of detecting, and time to react to, attacks, even those exploiting hitherto unknown vulnerabilities. To summarise the information presented in this thesis and to show its practical importance, an examination is presented of the NSA's network intrusion of the SWIFT organisation. It shows that the firewalls were exploited with remote code execution zero-days. This attack has a striking parallel in the approach used in the recent VPNFilter malware. If nothing else, the leaks provide information to other actors on how to attack and what to avoid. However, by studying state actors, we can gain insight into what other actors with fewer resources can do in the future.
- Full Text:
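One of the external defences the abstract highlights, remote syslog, can be shown in a few lines: ship security events to a collector outside the compromised host's reach so an intruder cannot quietly erase evidence. A minimal sketch assuming a placeholder collector address.

```python
# Ship log records to a remote syslog collector over UDP so evidence of an
# intrusion survives even if the local host is fully compromised.
# The collector hostname is a placeholder.
import logging
import logging.handlers

logger = logging.getLogger("auth-events")
logger.setLevel(logging.INFO)
# External collector: the attacker would also have to compromise this host
# to remove the collected evidence.
handler = logging.handlers.SysLogHandler(address=("syslog.example.internal", 514))
logger.addHandler(handler)

logger.info("sshd: accepted publickey for admin from 198.51.100.7")
```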
A framework for malicious host fingerprinting using distributed network sensors
- Authors: Hunter, Samuel Oswald
- Date: 2018
- Subjects: Computer networks -- Security measures , Malware (Computer software) , Multisensor data fusion , Distributed Sensor Networks , Automated Reconnaissance Framework , Latency Based Multilateration
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/60653 , vital:27811
- Description: Numerous software agents exist and are responsible for increasing volumes of malicious traffic observed on the Internet today. From a technical perspective, the existing techniques for monitoring malicious agents and traffic were not developed to allow for the interrogation of the source of malicious traffic. This interrogation, or reconnaissance, would be considered active analysis, as opposed to existing, mostly passive analysis. Unlike passive analysis, the active techniques are time-sensitive and their results become increasingly inaccurate as the time delta between observation and interrogation increases. In addition to this, some studies have shown that the geographic separation of hosts on the Internet has resulted in pockets of different malicious agents and traffic targeting victims. As such, it is important to perform any kind of data collection over various sources and in distributed IP address space. The data gathering and exposure capabilities of sensors such as honeypots and network telescopes were extended through the development of near-realtime Distributed Sensor Network modules that allowed for the near-realtime analysis of malicious traffic from distributed, heterogeneous monitoring sensors. In order to utilise the data exposed by the near-realtime Distributed Sensor Network modules, an Automated Reconnaissance (AR) Framework was created; this framework was tasked with active and passive information collection and analysis of data in near-realtime and was designed from an adapted Multi Sensor Data Fusion model. The hypothesis was made that if sufficiently different characteristics of a host could be identified, combined they could act as a unique fingerprint for that host, potentially allowing for the re-identification of that host even if its IP address had changed. To this end, the concept of Latency Based Multilateration was introduced, acting as an additional metric for remote host fingerprinting. The vast amount of information gathered by the AR-Framework required the development of visualisation tools which could illustrate this data in near-realtime and also provide various degrees of interaction to accommodate human interpretation of such data. Ultimately, the data collected through the application of the near-realtime Distributed Sensor Network and AR-Framework provided a unique perspective of a malicious host demographic, allowing new correlations to be drawn between attributes such as common open ports and operating systems, location, and the inferred intent of these malicious hosts. The result expands our current understanding of malicious hosts on the Internet and enables further research in the area.
- Full Text:
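The latency measurement underlying Latency Based Multilateration can be sketched as timing TCP handshakes to a target: repeated from several distributed sensors, the resulting latency vector becomes one attribute of the host fingerprint. A toy sketch for a single vantage point, with a placeholder target address and port; it is not the AR-Framework itself.

```python
# Time TCP handshakes to a target host; the median RTT from this vantage
# point is one element of a multilateration-style fingerprint.
# Target address and port are placeholders.
import socket
import statistics
import time

def tcp_rtt(host: str, port: int, samples: int = 5) -> float:
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        # Connection setup time approximates one round trip plus overhead
        with socket.create_connection((host, port), timeout=3):
            rtts.append(time.perf_counter() - start)
    return statistics.median(rtts)

print(f"median handshake RTT: {tcp_rtt('192.0.2.10', 80) * 1000:.1f} ms")
```

Collecting the same measurement from geographically distributed sensors yields a vector of RTTs that stays roughly stable for a host even when its IP address changes.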
An analysis of fusing advanced malware email protection logs, malware intelligence and active directory attributes as an instrument for threat intelligence
- Authors: Vermeulen, Japie
- Date: 2018
- Subjects: Malware (Computer software) , Computer networks Security measures , Data mining , Phishing , Data logging , Quantitative research
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/63922 , vital:28506
- Description: After more than four decades, email is still the most widely used electronic communication medium today. This electronic communication medium has evolved into an electronic weapon of choice for cyber criminals ranging from the novice to the elite. As cyber criminals evolve their tools, tactics and procedures, so too are technology vendors coming forward with a variety of advanced malware protection systems. However, even if an organisation adopts such a system, there is still the daily challenge of interpreting the log data and understanding the type of malicious email attack, including who the target was and what the payload was. This research examines a six-month data set obtained from an advanced malware email protection system at a bank in South Africa. Extensive data fusion techniques are used to provide deeper insight into the data by blending it with malware intelligence and business context. The primary data set is fused with malware intelligence to identify the different malware families associated with the samples. Active Directory attributes such as the business cluster, department and job title of users targeted by malware are also fused into the combined data. This study provides insight into malware attacks experienced in the South African financial services sector. For example, most of the malware samples identified belonged to different types of ransomware families distributed by known botnets. However, indicators of targeted attacks were observed, based on particular employees targeted with exploit code and specific strains of malware. Furthermore, a short time span between newly discovered vulnerabilities and the use of malicious code to exploit such vulnerabilities through email was observed in this study. The fused data set provided the context to answer the "who", "what", "where" and "when". The proposed methodology can be applied to any organisation to provide insight into the malware threats identified by advanced malware email protection systems. In addition, the fused data set provides threat intelligence that could be used to strengthen the cyber defences of an organisation against cyber threats.
- Full Text:
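The fusion step described above amounts to joining protection-log records with Active Directory attributes (on recipient) and malware intelligence (on sample hash). A minimal sketch with pandas; the CSV file names and column names are illustrative placeholders, not the thesis's actual schema.

```python
# Fuse email-protection logs with AD attributes and malware intelligence,
# then summarise which departments are targeted by which malware families.
# File and column names are illustrative.
import pandas as pd

logs = pd.read_csv("email_protection_logs.csv")  # recipient, sha256, subject, ...
ad = pd.read_csv("ad_attributes.csv")            # recipient, cluster, department, title
intel = pd.read_csv("malware_intel.csv")         # sha256, family, botnet

fused = (logs.merge(ad, on="recipient", how="left")
             .merge(intel, on="sha256", how="left"))

# Who is targeted with what: malware family counts per department
summary = fused.groupby(["department", "family"]).size()
print(summary.sort_values(ascending=False).head(10))
```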
NetwIOC: a framework for the automated generation of network-based IOCS for malware information sharing and defence
- Authors: Rudman, Lauren Lynne
- Date: 2018
- Subjects: Malware (Computer software) , Computer networks Security measures , Computer security , Python (Computer program language)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/60639 , vital:27809
- Description: With the substantial number of new malware variants found each day, it is useful to have an efficient way to retrieve Indicators of Compromise (IOCs) from the malware in a format suitable for sharing and detection. In the past, these indicators were manually created after inspection of binary samples and network traffic. The Cuckoo Sandbox is an existing dynamic malware analysis system which meets the requirements for the proposed framework and was extended by adding a few custom modules. This research explored a way to automate the generation of detailed network-based IOCs in a popular format which can be used for sharing. This was done through careful filtering and analysis of the PCAP file generated by the sandbox, and placing these values into the correct type of STIX objects using Python. Through several evaluations, analysis of what type of network traffic can be expected for the creation of IOCs was conducted, including a brief case study that examined the effect of analysis time on the number of IOCs created. Using the automatically generated IOCs to create defence and detection mechanisms for the network was evaluated and proved successful. A proof-of-concept sharing platform developed for the STIX IOCs is showcased at the end of the research.
- Full Text:
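The conversion of an extracted network observable into a shareable STIX object can be sketched with the `stix2` library. A minimal sketch only: the thesis's exact STIX version and object layout may differ, and the C2 address below is a hypothetical value standing in for one extracted from a sandbox PCAP.

```python
# Wrap an extracted network observable in a STIX Indicator and bundle it
# for sharing. The IP address is a placeholder for a value pulled from the
# sandbox-generated PCAP.
from stix2 import Bundle, Indicator

c2_ip = "203.0.113.42"  # hypothetical address extracted during detonation
indicator = Indicator(
    name="Suspected C2 address observed during sandbox detonation",
    pattern=f"[ipv4-addr:value = '{c2_ip}']",
    pattern_type="stix",
)
print(Bundle(indicator).serialize(pretty=True))
```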
DNS traffic based classifiers for the automatic classification of botnet domains
- Authors: Stalmans, Etienne Raymond
- Date: 2014
- Subjects: Denial of service attacks -- Research , Computer security -- Research , Internet -- Security measures -- Research , Malware (Computer software) , Spam (Electronic mail) , Phishing , Command and control systems
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4684 , http://hdl.handle.net/10962/d1007739
- Description: Networks of maliciously compromised computers, known as botnets, consisting of thousands of hosts, have emerged as a serious threat to Internet security in recent years. These compromised systems, under the control of an operator, are used to steal data, distribute malware and spam, launch phishing attacks and take part in Distributed Denial-of-Service (DDoS) attacks. The operators of these botnets use Command and Control (C2) servers to communicate with the members of the botnet and send commands. The communications channels between the C2 nodes and endpoints have employed numerous detection avoidance mechanisms to prevent the shutdown of the C2 servers. Two prevalent detection avoidance techniques used by current botnets are algorithmically generated domain names and DNS Fast-Flux. The use of these mechanisms can however be observed and used to create distinct signatures that in turn can be used to detect DNS domains being used for C2 operation. This report details research conducted into the implementation of three classes of classification techniques that exploit these signatures in order to accurately detect botnet traffic. The techniques described make use of the traffic from DNS query responses created when members of a botnet try to contact the C2 servers. Traffic observation and categorisation is passive from the perspective of the communicating nodes. The first class of classifiers explored employs frequency analysis to detect the algorithmically generated domain names used by botnets. These were found to have a high degree of accuracy with a low false positive rate. The characteristics of Fast-Flux domains are used in the second class of classifiers. It is shown that using these characteristics, Fast-Flux domains can be accurately identified and differentiated from legitimate domains (such as Content Distribution Networks, which exhibit similar behaviour). The final class of classifiers uses spatial autocorrelation to detect Fast-Flux domains based on the geographic distribution of the botnet C2 servers to which the detected domains resolve. It is shown that botnet C2 servers can be detected solely based on their geographic location. This technique is shown to clearly distinguish between malicious and legitimate domains. The implemented classifiers are lightweight and use existing network traffic to detect botnets, and thus do not require major architectural changes to the network. The performance impact of implementing classification of DNS traffic is examined and shown to be at an acceptable level.
- Full Text:
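The frequency-analysis idea behind the first class of classifiers can be illustrated with a toy scorer: compare a domain label's character distribution against expected English letter frequencies, since algorithmically generated names tend to deviate sharply. A minimal sketch; the truncated frequency table, scoring function and example domains are illustrative, not the thesis's trained classifier.

```python
# Score a domain label's deviation from English letter frequencies as a
# crude DGA signal: higher scores suggest algorithmic generation.
# Frequency table (truncated) and threshold choice are illustrative.
from collections import Counter

ENGLISH_FREQ = {"e": .127, "t": .091, "a": .082, "o": .075, "i": .070,
                "n": .067, "s": .063, "h": .061, "r": .060}

def dga_score(domain: str) -> float:
    label = domain.split(".")[0].lower()          # score the leftmost label
    counts = Counter(c for c in label if c.isalpha())
    total = sum(counts.values()) or 1
    # Sum of absolute deviations from expected English letter frequencies
    return sum(abs(counts.get(ch, 0) / total - freq)
               for ch, freq in ENGLISH_FREQ.items())

for d in ["google.com", "xkqzhtvbnwpl.biz"]:      # benign vs DGA-looking
    print(d, round(dga_score(d), 3))
```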