A Baseline Numeric Analysis of Network Telescope Data for Network Incident Discovery
- Cowie, Bradley, Irwin, Barry V W
- Authors: Cowie, Bradley , Irwin, Barry V W
- Date: 2011
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427971 , vital:72477 , https://www.researchgate.net/profile/Barry-Irwin/publication/326225071_An_Evaluation_of_Trading_Bands_as_Indicators_for_Network_Telescope_Datasets/links/5b3f231a4585150d2309e1c0/An-Evaluation-of-Trading-Bands-as-Indicators-for-Network-Telescope-Datasets.pdf
- Description: This paper investigates the value of Network Telescope data as a mechanism for network incident discovery by considering data summarization, simple heuristic identification and deviations from previously observed traffic distributions. It is important to note that the traffic observed is obtained from a Network Telescope and thus does not experience the same fluctuations or vagaries experienced by normal traffic. The datasets used for this analysis were obtained from a Network Telescope, allocated a Class C network address block at Rhodes University, for the period August 2005 to September 2009. The nature of the datasets was considered in terms of simple statistical measures obtained through data summarization, which greatly reduced the processing and observation required to determine whether an incident had occurred. However, this raised issues relating to the time interval used for identification of an incident. A brief discussion of statistical summaries of Network Telescope data as "good" security metrics is provided. The summaries derived were then used to search for signs of anomalous network activity. Anomalous activity detected was then reconciled against incidents that had occurred in the same or a similar time interval. Incidents identified included Conficker, Win32.RinBot, DDoS and Norton Netware vulnerabilities. Detection techniques included identification of rapid growth in packet count, packet size deviations, changes in the composition of the traffic expressed as a ratio of its constituents, and changes in the modality of the data. The appropriateness of this sort of manual analysis is discussed and suggestions towards an automated solution are made.
- Full Text:
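As a sketch of the data-summarization approach this abstract describes, the following Python fragment flags intervals whose packet count deviates sharply from a rolling baseline. It is a minimal illustration under assumed parameters (a seven-interval window, a three-sigma threshold) and synthetic counts, not the paper's actual method or data.

```python
# Minimal rolling-baseline anomaly flagging on per-interval packet counts.
# The daily counts below are synthetic placeholders, not telescope data.
import statistics

def flag_anomalies(counts, window=7, threshold=3.0):
    """Return indices whose value lies more than `threshold` standard
    deviations above the mean of the preceding `window` observations."""
    flagged = []
    for i in range(window, len(counts)):
        history = counts[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history)
        if stdev > 0 and (counts[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# Synthetic example: a stable baseline with a sudden worm-like spike.
daily_packets = [980, 1010, 995, 1005, 990, 1000, 1015, 1002, 4800, 5100]
print(flag_anomalies(daily_packets))  # -> [8]: onset of the spike
```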
A framework for DNS based detection and mitigation of malware infections on a network
- Stalmans, Etienne, Irwin, Barry V W
- Authors: Stalmans, Etienne , Irwin, Barry V W
- Date: 2011
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429827 , vital:72642 , 10.1109/ISSA.2011.6027531
- Description: Modern botnet trends have led to the use of IP and domain fast-fluxing to avoid detection and increase resilience. These techniques bypass traditional detection systems such as blacklists and intrusion detection systems. The Domain Name Service (DNS) is one of the most prevalent protocols on modern networks and is essential for the correct operation of many network activities, including botnet activity. For this reason DNS forms the ideal candidate for monitoring, detecting and mitigating botnet activity. In this paper a system placed at the network edge is developed with the capability to detect fast-flux domains using DNS queries. Multiple domain features were examined to determine which would be most effective in the classification of domains. This is achieved using a C5.0 decision tree classifier and Bayesian statistics, with positive samples being labeled as potentially malicious and negative samples as legitimate domains. The system detects malicious domain names with a high degree of accuracy, minimising the need for blacklists. Statistical methods, namely Naive Bayesian, Bayesian, total variation distance and probability distribution, are applied to detect malicious domain names. The detection techniques are tested against sample traffic and it is shown that malicious traffic can be detected with low false positive rates.
- Full Text:
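The following is a minimal sketch in the spirit of the classification step this abstract describes, scoring domain names with Naive Bayes log-odds over binary lexical features. The feature set, training examples and smoothing are illustrative assumptions; the paper's system pairs its Bayesian statistics with a C5.0 decision tree, which is not reproduced here.

```python
# Toy Naive Bayes domain classifier over assumed binary lexical features.
import math

def features(domain):
    """Binary lexical features of a domain name (illustrative choices)."""
    labels = domain.split(".")
    return {
        "long": len(domain) > 20,
        "many_digits": sum(c.isdigit() for c in domain) > 3,
        "many_labels": len(labels) > 3,
        "has_hyphen": "-" in domain,
    }

class NaiveBayes:
    def __init__(self):
        self.counts = {"mal": {}, "ben": {}}
        self.totals = {"mal": 0, "ben": 0}

    def train(self, domain, label):
        self.totals[label] += 1
        for name, present in features(domain).items():
            if present:
                self.counts[label][name] = self.counts[label].get(name, 0) + 1

    def score(self, domain):
        """Log-odds that the domain is malicious (Laplace-smoothed)."""
        log_odds = math.log((self.totals["mal"] + 1) / (self.totals["ben"] + 1))
        for name, present in features(domain).items():
            if not present:
                continue
            p_mal = (self.counts["mal"].get(name, 0) + 1) / (self.totals["mal"] + 2)
            p_ben = (self.counts["ben"].get(name, 0) + 1) / (self.totals["ben"] + 2)
            log_odds += math.log(p_mal / p_ben)
        return log_odds

nb = NaiveBayes()
for d in ("a1b2c3d4e5.bad-flux.example.net", "x9y8z7w6.evil-cdn.example.org"):
    nb.train(d, "mal")
for d in ("www.example.com", "mail.example.org"):
    nb.train(d, "ben")
print(nb.score("q5r6s7t8u9.fast-flux.example.net") > 0)  # -> True (flagged)
```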
A Framework for DNS Based Detection of Botnets at the ISP Level
- Stalmans, Etienne, Irwin, Barry V W
- Authors: Stalmans, Etienne , Irwin, Barry V W
- Date: 2011
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427984 , vital:72478 , https://www.researchgate.net/profile/Barry-Irwin/publication/327622932_A_Framework_for_DNS_Based_Detection_of_Botnets_at_the_ISP_Level/links/5b9a14e1458515310583fc19/A-Framework-for-DNS-Based-Detection-of-Botnets-at-the-ISP-Level.pdf
- Description: The rapid expansion of networks and the increase in Internet-connected devices have led to a large number of hosts susceptible to virus infection. Infected hosts are controlled by attackers and form so-called botnets. These botnets are used to steal data, mask malicious activity and perform distributed denial of service attacks. Traditional protection mechanisms rely on host-based detection of viruses. These systems are failing due to the rapid increase in the number of vulnerable hosts and attacks that easily bypass detection mechanisms. This paper proposes moving protection from the individual hosts to the Internet Service Provider (ISP), allowing for the detection and prevention of botnet traffic. DNS traffic inspection allows for the development of a lightweight and accurate classifier that has little or no effect on network performance. By preventing botnet activity at the ISP level, it is hoped that the threat of botnets can largely be mitigated.
- Full Text:
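As an illustration of the kind of lightweight DNS inspection such a framework might perform at the ISP edge, the sketch below scores a DNS response on properties commonly associated with fast-flux hosting: very low TTLs, large answer sets and widely scattered addresses. The thresholds and the sample response are assumptions for demonstration, not the paper's classifier.

```python
# Crude fast-flux heuristic over DNS answer properties (assumed thresholds).
def fast_flux_score(ttl, a_records):
    """Score in [0, 3]; higher means more fast-flux-like."""
    score = 0
    if ttl < 300:                # very short TTL forces rapid re-resolution
        score += 1
    if len(a_records) >= 5:      # unusually many A records in one answer
        score += 1
    # Many distinct /8 prefixes suggests hosts scattered across networks.
    if len({ip.split(".")[0] for ip in a_records}) >= 3:
        score += 1
    return score

suspect = fast_flux_score(
    ttl=30,
    a_records=["198.51.100.7", "203.0.113.9", "192.0.2.44",
               "198.18.0.5", "100.64.1.2"],
)
print(suspect >= 2)  # -> True: candidate for blocking or closer inspection
```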
A fuzz testing framework for evaluating and securing network applications
- Zeisberger, Sascha, Irwin, Barry V W
- Authors: Zeisberger, Sascha , Irwin, Barry V W
- Date: 2011
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428000 , vital:72479 , https://www.researchgate.net/profile/Barry-Irwin/publication/327622655_A_Fuzz_Testing_Framework_for_Evaluating_and_Securing_Network_Applications/links/5b9a153b92851c4ba8181b0d/A-Fuzz-Testing-Framework-for-Evaluating-and-Securing-Network-Applications.pdf
- Description: Research has shown that fuzz-testing is an effective means of increasing the quality and security of software and systems. This project proposes the implementation of a testing framework based on numerous fuzz-testing techniques. The framework will allow a user to detect errors in applications and locate the critical areas in the applications that are responsible for the detected errors. The aim is to provide an all-encompassing testing framework that will allow a developer to quickly and effectively deploy fuzz tests on an application and ensure a higher level of quality control before deployment.
- Full Text:
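A minimal sketch of one fuzz-testing technique such a framework would incorporate: mutation-based fuzzing, where a valid sample input is randomly corrupted and inputs that make the target raise are recorded for triage. The `parse_message` target, mutation rate and sample input are all illustrative stand-ins for a real network application under test.

```python
# Minimal mutation fuzzer against a toy parser.
import random

def parse_message(data: bytes):
    """Toy target: expects b'LEN:<n>:<payload>' with a matching length."""
    header, n, payload = data.split(b":", 2)
    if header != b"LEN" or int(n) != len(payload):
        raise ValueError("malformed message")
    return payload

def mutate(sample: bytes, rate=0.05) -> bytes:
    """Randomly replace bytes of a valid sample at the given rate."""
    out = bytearray(sample)
    for i in range(len(out)):
        if random.random() < rate:
            out[i] = random.randrange(256)
    return bytes(out)

def fuzz(target, sample, iterations=1000):
    crashes = []
    for _ in range(iterations):
        case = mutate(sample)
        try:
            target(case)
        except Exception as exc:        # record the failing input and error
            crashes.append((case, type(exc).__name__))
    return crashes

random.seed(1)
found = fuzz(parse_message, b"LEN:5:hello")
print(f"{len(found)} failing cases, e.g. {found[0] if found else None}")
```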
An evaluation of lightweight classification methods for identifying malicious URLs
- Egan, Shaun P, Irwin, Barry V W
- Authors: Egan, Shaun P , Irwin, Barry V W
- Date: 2011
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429839 , vital:72644 , 10.1109/ISSA.2011.6027532
- Description: Recent research has shown that it is possible to identify malicious URLs through lexical analysis of their URL structures alone. This paper explores the effectiveness of these lightweight classification algorithms when working with large real-world datasets, including lists of malicious URLs obtained from Phishtank as well as largely filtered benign URLs obtained from proxy traffic logs. Lightweight algorithms are defined as methods by which URLs are analysed without using external sources of information such as WHOIS lookups, blacklist lookups and content analysis. The parameters used include URL length, number of delimiters, and the number of traversals through the directory structure, and they appear throughout much of the research in the paradigm of lightweight classification. Methods which include external sources of information are often called fully featured classifications and have been shown to be only slightly more effective than a purely lexical analysis when considering both false positives and false negatives. This distinction allows these algorithms to be run client-side without the introduction of additional latency, while still providing a high level of accuracy through the use of modern techniques in training classifiers. Analysis of this type will also be useful in incident response, where large numbers of URLs need to be filtered for potentially malicious URLs as an initial step in information gathering, as well as in end-user implementations such as browser extensions which could help protect the user from following potentially malicious links. Both AROW and CW classifier update methods will be used as prototype implementations and their effectiveness will be compared to fully featured analysis results. These methods are interesting because they are able to train on any labelled data, including instances in which their prediction is correct, allowing them to build confidence in specific lexical features. This makes it possible for them to be trained using noisy input data, making them ideal for real-world applications such as link filtering and information gathering.
- Full Text:
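As a concrete illustration of the purely lexical features the abstract enumerates (URL length, delimiter counts, directory traversals), the sketch below computes a small feature vector from the URL string alone, with no WHOIS, blacklist or content lookups. The exact feature set is an assumption for illustration, not the paper's.

```python
# Lightweight (purely lexical) URL feature extraction.
from urllib.parse import urlparse

def lexical_features(url):
    parsed = urlparse(url)
    return {
        "url_length": len(url),
        "host_length": len(parsed.netloc),
        "num_delimiters": sum(url.count(c) for c in ".-_?=&"),
        "path_depth": parsed.path.count("/"),     # directory traversals
        "num_digits": sum(c.isdigit() for c in url),
        "has_ip_host": parsed.netloc.replace(".", "").isdigit(),
    }

print(lexical_features("http://203.0.113.9/login.php?acc=update&id=9913"))
```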
An Evaluation of Trading Bands as Indicators for Network Telescope Datasets
- Cowie, Bradley, Irwin, Barry V W
- Authors: Cowie, Bradley , Irwin, Barry V W
- Date: 2011
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428013 , vital:72480 , https://www.researchgate.net/profile/Barry-Irwin/publication/326225071_An_Evaluation_of_Trading_Bands_as_Indicators_for_Network_Telescope_Datasets/links/5b3f231a4585150d2309e1c0/An-Evaluation-of-Trading-Bands-as-Indicators-for-Network-Telescope-Datasets.pdf
- Description: Large-scale viral outbreaks such as Conficker, the Code Red worm and the Witty worm illustrate the importance of monitoring malevolent activity on the Internet. Careful monitoring of anomalous traffic allows organizations to react appropriately and in a timely fashion to minimize economic damage. Network telescopes, a type of Internet monitor, provide analysts with a way of decoupling anomalous traffic from legitimate traffic. Data from network telescopes is used by analysts to identify potential incidents by comparing recent trends with historical data. Analysis of network telescope datasets is complicated by the large quantity of data present, the number of subdivisions within the data and the uncertainty associated with received traffic. While there is considerable research being performed in the field of network telescopes, little of this work is concerned with the analysis of alternative methods of incident identification. This paper considers trading bands, a subfield of technical analysis, as an approach to identifying potential Internet incidents such as worms. Trading bands construct boundaries that are used for measuring when certain quantities are high or low relative to recent values. This paper considers Bollinger Bands and the associated Bollinger Indicators, Price Channels and Keltner Channels. These techniques are evaluated as indicators of malevolent activity by considering how they react to incidents identified in the captured data from a network telescope.
- Full Text:
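A minimal sketch of the Bollinger Band indicator evaluated in the paper, applied to a packet count series: a simple moving average with bands at k standard deviations, where excursions above the upper band are flagged as potential incidents. The window, k and the series itself are illustrative choices, not the paper's parameters or data.

```python
# Bollinger Bands over a synthetic packet count series.
import statistics

def bollinger(series, window=5, k=2.0):
    """Yield (index, value, lower_band, upper_band) once the window fills."""
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mid = statistics.mean(hist)
        dev = statistics.pstdev(hist)
        yield i, series[i], mid - k * dev, mid + k * dev

packets = [120, 118, 125, 122, 119, 121, 124, 390, 410, 130]
for i, value, lower, upper in bollinger(packets):
    if value > upper:
        print(f"interval {i}: {value} breaches upper band {upper:.1f}")
```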
High Speed Lexical Classification of Malicious URLs
- Egan, Shaun P, Irwin, Barry V W
- Authors: Egan, Shaun P , Irwin, Barry V W
- Date: 2011
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428055 , vital:72483 , https://www.researchgate.net/profile/Barry-Irwin/publication/326225046_High_Speed_Lexical_Classification_of_Malicious_URLs/links/5b3f20acaca27207851c60f9/High-Speed-Lexical-Classification-of-Malicious-URLs.pdf
- Description: It has been shown in recent research that it is possible to identify malicious URLs through lexical analysis of their URL structures alone. Lightweight algorithms are defined as methods by which URLs are analyzed without using external sources of information such as WHOIS lookups, blacklist lookups and content analysis. The parameters used include URL length, number of delimiters, and the number of traversals through the directory structure, and they appear throughout much of the research in the paradigm of lightweight classification. Methods which include external sources of information are often called fully featured classifications and have been shown to be only slightly more effective than a purely lexical analysis when considering both false positives and false negatives. This distinction allows these algorithms to be run client-side without the introduction of additional latency, while still providing a high level of accuracy through the use of modern techniques in training classifiers. Both AROW and CW classifier update methods will be used as prototype implementations and their effectiveness will be compared to fully featured analysis results. These methods are selected because they are able to train on any labeled data, including instances in which their prediction is correct, allowing them to build confidence in specific lexical features.
- Full Text:
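A simplified sketch of the online, confidence-weighted style of update the abstract refers to, in the spirit of AROW with a diagonal covariance: each weight carries a variance, uncertain features move more, and updates also fire on correct but low-margin predictions, which is what lets such classifiers keep learning from noisy labelled streams. The hyperparameter, feature encoding and toy data are assumptions, not the authors' implementation.

```python
# Diagonal-covariance AROW-style online update (simplified sketch).
def arow_update(w, sigma, x, y, r=1.0):
    """x: feature vector, y: +1 malicious / -1 benign."""
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    conf = sum(s * xi * xi for s, xi in zip(sigma, x))
    if margin < 1.0:                  # update even when correct but unsure
        beta = 1.0 / (conf + r)
        alpha = (1.0 - margin) * beta
        for i, xi in enumerate(x):
            w[i] += alpha * y * sigma[i] * xi           # move weights
            sigma[i] -= beta * (sigma[i] * xi) ** 2     # shrink variance
    return w, sigma

# Tiny demo on 3 lexical features: [url_length/100, digit_ratio, path_depth/10]
w, sigma = [0.0] * 3, [1.0] * 3
data = [([0.9, 0.4, 0.6], +1), ([0.2, 0.0, 0.1], -1), ([0.8, 0.5, 0.5], +1)]
for x, y in data * 20:                # several passes over the toy set
    w, sigma = arow_update(w, sigma, x, y)
print(sum(wi * xi for wi, xi in zip(w, [0.85, 0.45, 0.55])) > 0)  # -> True
```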
Near Real-time Aggregation and Visualisation of Hostile Network Traffic
- Hunter, Samuel O, Irwin, Barry V W
- Authors: Hunter, Samuel O , Irwin, Barry V W
- Date: 2011
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428067 , vital:72484 , https://www.researchgate.net/profile/Barry-Irwin/publication/327622653_Near_Real-time_Aggregation_and_Visualisation_of_Hostile_Network_Traffic/links/5b9a1474a6fdcc59bf8dfcc2/Near-Real-time-Aggregation-and-Visualisation-of-Hostile-Network-Traffic.pdf
- Description: Efficient utilization of hostile network traffic for visualization and defensive purposes requires near real-time availability of such data. Hostile or malicious traffic was obtained through the use of network telescopes and honeypots, as they are effective at capturing mostly illegitimate and nefarious traffic. The data is then exposed in near real-time through a messaging framework and visualized with the help of a geolocation-based visualization tool. Defensive applications with regard to hostile network traffic are explored; these include the dynamic quarantine of malicious hosts internal to a network and the egress filtering of denial of service traffic originating from inside a network.
- Full Text:
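A minimal sketch of the pipeline shape this abstract describes: sensor events are tagged with a (stubbed) geolocation, serialized, pushed through a message queue, and aggregated into per-origin counts that a visualisation front-end could poll. Python's standard-library queue stands in for the messaging framework, the GeoIP lookup is a placeholder table, and the events are synthetic.

```python
# Near real-time aggregation of hostile-traffic events (illustrative).
import json
import queue
from collections import Counter

GEO_STUB = {"198.51.100.7": "ZA", "203.0.113.9": "US"}  # placeholder lookup

bus = queue.Queue()                     # stand-in for the messaging framework

def publish(src_ip, dst_port, sensor):
    event = {
        "src": src_ip,
        "port": dst_port,
        "sensor": sensor,
        "country": GEO_STUB.get(src_ip, "??"),
    }
    bus.put(json.dumps(event))          # serialized for cross-host transport

def consume(counts):
    while not bus.empty():
        event = json.loads(bus.get())
        counts[event["country"]] += 1   # aggregate for the geolocation view

publish("198.51.100.7", 445, "telescope-1")
publish("203.0.113.9", 22, "honeypot-2")
publish("198.51.100.7", 3389, "honeypot-2")
hits = Counter()
consume(hits)
print(hits)  # -> Counter({'ZA': 2, 'US': 1})
```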
Tartarus: A honeypot based malware tracking and mitigation framework
- Hunter, Samuel O, Irwin, Barry V W
- Authors: Hunter, Samuel O , Irwin, Barry V W
- Date: 2011
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428629 , vital:72525 , https://d1wqtxts1xzle7.cloudfront.net/96055420/Hunter-libre.pdf
- Description: On a daily basis many of the hosts connected to the Internet experience continuous probing and attack from malicious entities. Detection and defence against these malicious entities has primarily been the concern of Intrusion Detection Systems, Intrusion Prevention Systems and Anti-Virus software. These systems rely heavily on known signatures to detect nefarious traffic. Due to this reliance on known malicious signatures, these systems have been at a serious disadvantage when it comes to detecting new, never before seen malware. This paper introduces Tartarus, a malware tracking and mitigation framework that makes use of honeypot technology in order to detect malicious traffic. Tartarus implements a dynamic quarantine technique to mitigate the spread of self-propagating malware on a production network. In order to better understand the spread and impact of Internet worms, Tartarus is used to construct a detailed demographic of potentially malicious hosts on the Internet. This host demographic is in turn used as a blacklist for firewall rule creation. The sources of malicious traffic are then illustrated through the use of a geolocation-based visualisation.
- Full Text:
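As an illustration of the blacklist-generation step the abstract describes, the sketch below counts honeypot contacts per source address and emits standard iptables drop rules for repeat offenders. The threshold and event sample are assumptions for demonstration; Tartarus's actual policy and rule format may differ.

```python
# Honeypot hit aggregation -> firewall blacklist rules (illustrative).
from collections import Counter

def build_blacklist(events, threshold=3):
    """events: iterable of source IPs observed by honeypot sensors."""
    hits = Counter(events)
    return [ip for ip, n in hits.items() if n >= threshold]

def iptables_rules(blacklist):
    return [f"iptables -A INPUT -s {ip} -j DROP" for ip in blacklist]

observed = ["198.51.100.7"] * 5 + ["203.0.113.9"] * 2 + ["192.0.2.44"] * 4
for rule in iptables_rules(build_blacklist(observed)):
    print(rule)
# -> drop rules for 198.51.100.7 and 192.0.2.44 only
```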