A network telescope perspective of the Conficker outbreak
- Authors: Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429728 , vital:72635 , 10.1109/ISSA.2012.6320455
- Description: This paper discusses a dataset of some 16 million packets targeting port 445/tcp, collected by a network telescope utilising a /24 netblock in South African IP address space. An initial overview of the collected data is provided, followed by a detailed analysis of the observed packet characteristics, including size and TTL. The peculiarities of the observed target selection, and the results of the flaw in the Conficker worm's propagation algorithm, are presented. An analysis of the 4 million observed source hosts is reported, grouped by both packet counts and the number of distinct hosts per network address block; address blocks of size /8, /16 and /24 are used for the groupings. The localisation, by geographic region and numerical proximity, of high-ranking aggregate netblocks is highlighted. The paper concludes with some overall analyses and a consideration of the application of network telescopes to the monitoring of such outbreaks in the future.
- Full Text:
- Date Issued: 2012
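The /8, /16 and /24 groupings of source hosts described in the abstract can be sketched as a simple prefix aggregation. The addresses below are illustrative only, not drawn from the telescope dataset:

```python
from collections import Counter

def netblock(ip: str, prefix: int) -> str:
    """Collapse a dotted-quad IPv4 address to its /8, /16 or /24 block."""
    octets = ip.split(".")
    kept = octets[: prefix // 8]
    return ".".join(kept) + ".0" * (4 - len(kept)) + f"/{prefix}"

def rank_blocks(sources, prefix, top=3):
    """Count distinct source hosts per netblock and return the busiest blocks."""
    counts = Counter(netblock(ip, prefix) for ip in set(sources))
    return counts.most_common(top)

# Illustrative source addresses, not real telescope data.
sources = ["196.1.2.3", "196.1.2.9", "196.1.7.1", "41.0.0.5"]
print(rank_blocks(sources, 24))
```

Ranking aggregate blocks this way is what allows the localisation of high-ranking netblocks by numerical proximity.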
An Analysis and Implementation of Methods for High Speed Lexical Classification of Malicious URLs
- Egan, Shaun P, Irwin, Barry V W
- Authors: Egan, Shaun P , Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429757 , vital:72637 , https://digifors.cs.up.ac.za/issa/2012/Proceedings/Research/58_ResearchInProgress.pdf
- Description: Several authors have put forward methods of using Artificial Neural Networks (ANNs) to classify URLs as malicious or benign by using lexical features of those URLs. These methods have been compared to other methods of classification, such as blacklisting and spam filtering, and have been found to be comparably accurate; even early attempts proved highly accurate. Fully featured classifiers use lexical features as well as lookups to classify URLs, including (but not limited to) blacklists, spam filters and reputation services. These classifiers are based on the Online Perceptron Model, using a single neuron as a linear combiner, with lexical features that rely on the presence (or absence) of words belonging to a bag-of-words. Several obfuscation-resistant features are also used to increase the positive classification rate of these perceptrons; examples include URL length, the number of directory traversals and the length of arguments passed to the file within the URL. In this paper we describe how we implement the online perceptron model, and the methods we used to try to increase its accuracy through the use of hidden layers and training cost validation. We discuss our results in relation to those of other papers, as well as further analysis performed on the training data and the neural networks themselves, to best understand why they are so effective. Also described are the proposed model for developing these neural networks, how to deploy them in the real world through browser extensions, proxy plugins and spam filters for mail servers, and our current implementation. Finally, work that is still in progress is described, including other methods of increasing accuracy through modern training techniques and testing in a real-world environment.
- Full Text:
- Date Issued: 2012
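A minimal sketch of the online perceptron approach over lexical URL features, assuming a made-up bag-of-words and toy URLs; the paper's actual feature set, bag-of-words and training data are not reproduced here:

```python
# Hypothetical bag-of-words: tokens whose presence suggests a malicious URL.
BAG = ["login", "confirm", "update", "free", "exe"]

def features(url: str) -> list:
    """Lexical feature vector: bag-of-words flags plus obfuscation-resistant
    features (URL length, directory traversals, argument length), scaled."""
    path, _, args = url.partition("?")
    return ([1.0 if w in url.lower() else 0.0 for w in BAG]
            + [len(url) / 100.0, path.count("/") / 10.0, len(args) / 100.0])

class OnlinePerceptron:
    """Single neuron as a linear combiner, updated one example at a time."""
    def __init__(self, n, lr=0.1):
        self.w = [0.0] * n
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s > 0 else 0   # 1 = malicious, 0 = benign

    def update(self, x, y):
        err = y - self.predict(x)
        if err:
            self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * err

# Toy training loop over two illustrative URLs.
p = OnlinePerceptron(len(features("x")))
training = [("http://example.com/index.html", 0),
            ("http://evil.example/free/login-update.exe?acct=1234567890", 1)]
for _ in range(20):
    for url, label in training:
        p.update(features(url), label)
```

Because classification needs only a dot product over cheap lexical features, such a model is fast enough for inline use in a browser extension or proxy plugin, as the abstract proposes.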
Building a Graphical Fuzzing Framework
- Zeisberger, Sascha, Irwin, Barry V W
- Authors: Zeisberger, Sascha , Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429772 , vital:72638 , https://digifors.cs.up.ac.za/issa/2012/Proceedings/Research/59_ResearchInProgress.pdf
- Description: Fuzz testing is a robustness-testing technique that sends malformed data to an application's inputs, in order to test the application's behaviour when presented with input beyond its specification. The main difference between traditional testing techniques and fuzz testing is that most traditional techniques test an application against a specification and rate how well the application conforms to that specification. Fuzz testing tests beyond the scope of a specification by intelligently generating values that may be interpreted by an application in an unintended manner. Fuzz testing has been more prevalent in academic and security communities, despite showing success in production environments. To measure the effectiveness of fuzz testing, an experiment was conducted in which several publicly available applications were fuzzed. In some instances fuzz testing was able to force an application into an invalid state, and it was concluded that fuzz testing is a relevant testing technique that could assist in developing more robust applications. This success prompted a further investigation into fuzz testing in order to compile a list of requirements for an effective fuzzer. This investigation informed the design of a fuzz-testing framework, the goal of which is to make the process more accessible to users outside of academic and security environments. Design methodologies and justifications for the framework are discussed, focusing on the graphical user interface components, as this aspect of the framework is used to increase its usability.
- Full Text:
- Date Issued: 2012
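The core technique can be illustrated with a toy mutational fuzzer; the `parse` target and its length-header bug are invented for illustration and are not from the paper:

```python
import random

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Produce a malformed variant of a valid input by random byte flips."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, max(1, len(data) // 4))):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 1000, rng_seed: int = 0):
    """Drive `target` with mutated inputs; collect the inputs that crash it."""
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            target(case)
        except Exception as exc:
            crashes.append((case, repr(exc)))
    return crashes

# Toy target with a parsing bug: it trusts the length header in byte 0.
def parse(packet: bytes):
    length = packet[0]
    body = packet[1:1 + length]
    if len(body) != length:          # length header beyond the real body
        raise ValueError("truncated body")
    return body

found = fuzz(parse, b"\x04abcd")
```

Each crashing input is evidence of behaviour outside the specification; a graphical framework like the one described would surface these cases to a non-specialist user instead of requiring scripts like this.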
Remote fingerprinting and multisensor data fusion
- Hunter, Samuel O, Stalmans, Etienne, Irwin, Barry V W, Richter, John
- Authors: Hunter, Samuel O , Stalmans, Etienne , Irwin, Barry V W , Richter, John
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429813 , vital:72641 , 10.1109/ISSA.2012.6320449
- Description: Network fingerprinting is the technique by which a device or service is enumerated in order to determine the hardware, software or application characteristics of a targeted attribute. Although fingerprinting can be achieved by a variety of means, the most common technique is the extraction of characteristics from an entity and the correlation thereof against known signatures for verification. In this paper we identify multiple host-defining metrics and propose a process of unique host tracking through the use of two novel fingerprinting techniques. We then illustrate the application of host fingerprinting and tracking for increasing situational awareness of potentially malicious hosts. In order to achieve this we provide an outline of an adapted multisensor data fusion model with the goal of increasing situational awareness through observation of unsolicited network traffic.
- Full Text:
- Date Issued: 2012
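A sketch of signature-based fingerprinting with naive fusion over multiple observations of one host. The TTL/window-size signature table is a textbook passive-fingerprinting heuristic used here for illustration; it is not one of the paper's two novel techniques, and the labels are hypothetical:

```python
# Hypothetical signature table: (initial TTL, TCP window size) pairs.
SIGNATURES = {
    (64, 5840): "Linux 2.6 (heuristic)",
    (128, 65535): "Windows XP (heuristic)",
    (255, 4128): "Cisco IOS (heuristic)",
}

def initial_ttl(observed_ttl: int) -> int:
    """Round an observed TTL up to the nearest common initial value,
    since each router hop decrements the TTL by one."""
    for start in (64, 128, 255):
        if observed_ttl <= start:
            return start
    return 255

def fingerprint(observed_ttl: int, window: int) -> str:
    """Correlate extracted characteristics against known signatures."""
    return SIGNATURES.get((initial_ttl(observed_ttl), window), "unknown")

def fuse(observations):
    """Naive fusion: majority vote over per-packet fingerprints of one host."""
    votes = {}
    for ttl, window in observations:
        label = fingerprint(ttl, window)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

Fusing several noisy per-packet fingerprints into one judgement per host is the simplest instance of the multisensor data fusion idea the abstract applies to unsolicited network traffic.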