Mapping the most significant computer hacking events to a temporal computer attack model
- Authors: Van Heerden, Renier , Pieterse, Heloise , Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429950 , vital:72654 , https://doi.org/10.1007/978-3-642-33332-3_21
- Description: This paper presents eight of the most significant computer hacking events (also known as computer attacks). These events were selected because of their unique impact, methodology, or other properties. A temporal computer attack model is presented that can be used to model computer-based attacks. This model consists of the following stages: Target Identification, Reconnaissance, Attack, and Post-Attack Reconnaissance. The Attack stage is separated into Ramp-up, Damage and Residue. This paper demonstrates how our eight significant hacking events are mapped to the temporal computer attack model. By enabling earlier detection of similar attacks, the temporal computer attack model becomes a valuable asset in the protection of critical infrastructure.
- Full Text:
- Date Issued: 2012
Remote fingerprinting and multisensor data fusion
- Authors: Hunter, Samuel O , Stalmans, Etienne , Irwin, Barry V W , Richter, John
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429813 , vital:72641 , 10.1109/ISSA.2012.6320449
- Description: Network fingerprinting is the technique by which a device or service is enumerated in order to determine the hardware, software or application characteristics of a targeted attribute. Although fingerprinting can be achieved by a variety of means, the most common technique is the extraction of characteristics from an entity and the correlation thereof against known signatures for verification. In this paper we identify multiple host-defining metrics and propose a process of unique host tracking through the use of two novel fingerprinting techniques. We then illustrate the application of host fingerprinting and tracking for increasing situational awareness of potentially malicious hosts. In order to achieve this we provide an outline of an adapted multisensor data fusion model with the goal of increasing situational awareness through observation of unsolicited network traffic.
- Full Text:
- Date Issued: 2012
A framework for DNS based detection and mitigation of malware infections on a network
- Authors: Stalmans, Etienne , Irwin, Barry V W
- Date: 2011
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429827 , vital:72642 , 10.1109/ISSA.2011.6027531
- Description: Modern botnet trends have led to the use of IP and domain fast-fluxing to avoid detection and increase resilience. These techniques bypass traditional detection systems such as blacklists and intrusion detection systems. The Domain Name Service (DNS) is one of the most prevalent protocols on modern networks and is essential for the correct operation of many network activities, including botnet activity. For this reason DNS forms the ideal candidate for monitoring, detecting and mitigating botnet activity. In this paper a system placed at the network edge is developed with the capability to detect fast-flux domains using DNS queries. Multiple domain features were examined to determine which would be most effective in the classification of domains. This is achieved using a C5.0 decision tree classifier and Bayesian statistics, with positive samples being labeled as potentially malicious and negative samples as legitimate domains. The system detects malicious domain names with a high degree of accuracy, minimising the need for blacklists. Statistical methods, namely Naive Bayesian, Bayesian, Total Variation distance and Probability distribution, are applied to detect malicious domain names. The detection techniques are tested against sample traffic and it is shown that malicious traffic can be detected with low false positive rates.
- Full Text:
- Date Issued: 2011
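As a minimal sketch of the kind of lexical domain features such a classifier might consume (the feature names and set here are illustrative assumptions, not the paper's exact features, which fed a C5.0 decision tree and Bayesian statistics):

```python
import math
from collections import Counter

def domain_features(domain: str) -> dict:
    # Hypothetical lexical features; the paper's exact feature set differs.
    name = domain.split(".")[0]              # second-level label only
    counts = Counter(name)
    entropy = -sum((n / len(name)) * math.log2(n / len(name))
                   for n in counts.values())
    return {
        "length": len(name),
        "digit_ratio": sum(c.isdigit() for c in name) / len(name),
        "entropy": round(entropy, 3),        # high entropy suggests auto-generated labels
    }

features = domain_features("example.com")
```

Feature vectors of this shape, labeled malicious or legitimate, are what a decision tree or Bayesian classifier would then be trained on.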
An evaluation of lightweight classification methods for identifying malicious URLs
- Authors: Egan, Shaun P , Irwin, Barry V W
- Date: 2011
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429839 , vital:72644 , 10.1109/ISSA.2011.6027532
- Description: Recent research has shown that it is possible to identify malicious URLs through lexical analysis of their URL structures alone. This paper intends to explore the effectiveness of these lightweight classification algorithms when working with large real world datasets, including lists of malicious URLs obtained from Phishtank as well as largely filtered benign URLs obtained from proxy traffic logs. Lightweight algorithms are defined as methods by which URLs are analysed that do not use external sources of information such as WHOIS lookups, blacklist lookups and content analysis. These parameters include URL length, number of delimiters as well as the number of traversals through the directory structure, and are used throughout much of the research in the paradigm of lightweight classification. Methods which include external sources of information are often called fully featured classifications and have been shown to be only slightly more effective than a purely lexical analysis when considering both false-positives and false-negatives. This distinction allows these algorithms to be run client side without the introduction of additional latency, while still providing a high level of accuracy through the use of modern techniques in training classifiers. Analysis of this type will also be useful in incident response analysis, where large numbers of URLs need to be filtered for potentially malicious URLs as an initial step in information gathering, as well as in end user implementations such as browser extensions which could help protect the user from following potentially malicious links. Both AROW and CW classifier update methods will be used as prototype implementations and their effectiveness will be compared to fully featured analysis results. These methods are interesting because they are able to train on any labelled data, including instances in which their prediction is correct, allowing them to build confidence in specific lexical features. This makes it possible for them to be trained using noisy input data, making them ideal for real world applications such as link filtering and information gathering.
- Full Text:
- Date Issued: 2011
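The purely lexical feature extraction described above (URL length, delimiter count, directory traversals) can be sketched as follows; the function name and exact delimiter set are illustrative assumptions, and an online learner such as AROW or CW would be trained on vectors like these:

```python
import re

def lexical_features(url: str) -> dict:
    # Purely lexical: no WHOIS, blacklist, or content lookups are performed.
    delimiters = re.findall(r"[/?.=&_-]", url)   # common URL delimiter characters
    path = url.split("://", 1)[-1]               # drop the scheme if present
    return {
        "length": len(url),                       # overall URL length
        "delimiter_count": len(delimiters),       # total delimiter characters
        "path_depth": path.count("/"),            # traversals through the directory structure
        "digit_count": sum(c.isdigit() for c in url),
    }

features = lexical_features("http://example.com/login/verify?id=123")
```

Because no external lookup is needed, extraction like this adds no network latency and can run client side, e.g. inside a browser extension.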
Data classification for artificial intelligence construct training to aid in network incident identification using network telescope data
- Authors: Cowie, Bradley , Irwin, Barry V W
- Date: 2010
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430125 , vital:72667 , https://doi.org/10.1145/1899503.1899544
- Description: This paper considers the complexities involved in obtaining training data for use by artificial intelligence constructs to identify potential network incidents using passive network telescope data. While a large amount of data obtained from network telescopes exists, this data is not currently marked for known incidents. Problems related to this marking process include the accuracy of the markings, the validity of the original data and the time involved. In an attempt to solve these issues two methods of training data generation are considered, namely manual identification and automated generation. The manual technique considers heuristics for finding network incidents while the automated technique considers building simulated data sets using existing models of virus propagation and malicious activity. An example artificial intelligence system is then constructed using these marked datasets.
- Full Text:
- Date Issued: 2010
Parallel packet classification using GPU co-processors
- Authors: Nottingham, Alistair , Irwin, Barry V W
- Date: 2010
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430250 , vital:72677 , https://doi.org/10.1145/1899503.1899529
- Description: In the domain of network security, packet filtering for classification purposes is of significant interest. Packet classification provides a mechanism for understanding the composition of packet streams arriving at distinct network interfaces, and is useful in diagnosing threats and uncovering vulnerabilities so as to maximise data integrity and system security. Traditional packet classifiers, such as PCAP, have utilised Control Flow Graphs (CFGs) in representing filter sets, due to both their amenability to optimisation, and their inherent structural applicability to the metaphor of decision-based classification. Unfortunately, CFGs do not map well to cooperative processing implementations, and single-threaded CPU-based implementations have proven too slow for real-time classification against multiple arbitrary filters on next generation networks. In this paper, we consider a novel multithreaded classification algorithm, optimised for execution on GPU co-processors, intended to accelerate classification throughput and maximise processing efficiency in a highly parallel execution context.
- Full Text:
- Date Issued: 2010
A Comparison Of The Resource Requirements Of Snort And Bro In Production Networks
- Authors: Barnett, Richard J , Irwin, Barry V W
- Date: 2009
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430040 , vital:72661 , https://www.iadisportal.org/applied-computing-2009-proceedings
- Description: Intrusion Detection is essential in modern networking. However, with the increasing load on modern networks, the resource requirements of NIDS are significant. This paper explores and compares the requirements of Snort and Bro, and finds that Snort is more efficient at processing network traffic than Bro. It also finds that both systems are capable of analysing current network loads on commodity hardware, but may be unable to do so for higher bandwidth networks. This is beneficial in a South African context due to the increasing international bandwidth that will come online with the launch of the SEACOM Cable, and local projects such as SANREN.
- Full Text:
- Date Issued: 2009
Evaluating text preprocessing to improve compression on maillogs
- Authors: Otten, Fred , Irwin, Barry V W , Thinyane, Hannah
- Date: 2009
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430138 , vital:72668 , https://doi.org/10.1145/1632149.1632157
- Description: Maillogs contain important information about mail which has been sent or received. This information can be used for statistical purposes, to help prevent viruses or to help prevent SPAM. In order to satisfy regulations and follow good security practices, maillogs need to be monitored and archived. Since there is a large quantity of data, some form of data reduction is necessary. Data compression programs such as gzip and bzip2 are commonly used to reduce the quantity of data. Text preprocessing can be used to aid the compression of English text files. This paper evaluates whether text preprocessing, particularly word replacement, can be used to improve the compression of maillogs. It presents an algorithm for constructing a dictionary for word replacement and provides the results of experiments conducted using the ppmd, gzip, bzip2 and 7zip programs. These tests show that text preprocessing improves data compression on maillogs. Improvements of up to 56 percent in compression time and up to 32 percent in compression ratio are achieved. It also shows that a dictionary may be generated and used on other maillogs to yield reductions within half a percent of the results achieved for the maillog used to generate the dictionary.
- Full Text:
- Date Issued: 2009
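The word-replacement idea can be sketched as follows: build a dictionary of the most frequent words in the log, substitute short tokens for them, and only then hand the text to a general-purpose compressor. The token scheme and sample log line below are illustrative assumptions, not the paper's exact dictionary-construction algorithm:

```python
import gzip
from collections import Counter

def build_dictionary(text: str, top_n: int = 16) -> dict:
    # Map the most frequent words to short one-character tokens.
    # Control characters chr(1).. are assumed absent from the log itself.
    words = Counter(text.split())
    return {w: chr(1 + i) for i, (w, _) in enumerate(words.most_common(top_n))}

def preprocess(text: str, dictionary: dict) -> str:
    # Word replacement: substitute frequent words before compression.
    return " ".join(dictionary.get(w, w) for w in text.split())

# Hypothetical, highly repetitive maillog sample.
log = "status=sent relay=mail.example.com status=sent status=deferred " * 500
dictionary = build_dictionary(log)
plain = gzip.compress(log.encode())
reduced = gzip.compress(preprocess(log, dictionary).encode())
```

Because the substitution is reversible via the inverted dictionary, the original log can be recovered exactly after decompression.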
Extending the NFComms framework for bulk data transfers
- Authors: Nottingham, Alastair , Irwin, Barry V W
- Date: 2009
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430164 , vital:72670 , https://doi.org/10.1145/1632149.1632170
- Description: Packet analysis is an important aspect of network security, which typically relies on a flexible packet filtering system to extrapolate important packet information from each processed packet. Packet analysis is a computationally intensive, highly parallelisable task, and as such, classification of large packet sets, such as those collected by a network telescope, can require significant processing time. We wish to improve upon this, through parallel classification on a GPU. In this paper, we first consider the OpenCL architecture and its applicability to packet analysis. We then introduce a number of packet demultiplexing and routing algorithms, and finally present a discussion on how some of these techniques may be leveraged within a GPGPU context to improve packet classification speeds.
- Full Text:
- Date Issued: 2009
Performance Effects of Concurrent Virtual Machine Execution in VMware Workstation 6
- Authors: Barnett, Richard J , Irwin, Barry V W
- Date: 2009
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429966 , vital:72655 , https://doi.org/10.1007/978-90-481-3660-5_56
- Description: The recent trend toward virtualized computing, both as a means of server consolidation and as a powerful desktop computing tool, has led to a wide variety of studies into the performance of hypervisor products. This study has investigated the scalability of VMware Workstation 6 on the desktop platform. We present comparative performance results for the concurrent execution of a number of virtual machines. A thorough statistical analysis of the performance results highlights the performance trends of different numbers of concurrent virtual machines and concludes that VMware Workstation can scale in certain contexts. We find that there are different performance benefits dependent on the application, and that memory intensive applications perform less effectively than those applications which are IO intensive. We also find that running concurrent virtual machines incurs a significant performance decrease, but that the drop thereafter is less significant.
- Full Text:
- Date Issued: 2009
A Canonical Implementation Of The Advanced Encryption Standard On The Graphics Processing Unit
- Authors: Pilkington, Nick , Irwin, Barry V W
- Date: 2008
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430007 , vital:72659 , https://digifors.cs.up.ac.za/issa/2008/Proceedings/Research/47.pdf
- Description: This paper will present an implementation of the Advanced Encryption Standard (AES) on the graphics processing unit (GPU). It investigates the ease of implementation from first principles and the difficulties encountered. It also presents a performance analysis to evaluate whether the GPU is a viable option as a cryptographic platform. The AES implementation is found to yield orders of magnitude increased performance when compared to CPU based implementations. Although the implementation introduces complications, these are quickly being mitigated by the growing accessibility provided by general-purpose programming on graphics processing units (GPGPU) frameworks such as NVIDIA's Compute Unified Device Architecture (CUDA) and AMD/ATI's Close to Metal (CTM).
- Full Text:
- Date Issued: 2008
An Investigation into the Performance of General Sorting on Graphics Processing Units
- Authors: Pilkington, Nick , Irwin, Barry V W
- Date: 2008
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429881 , vital:72648 , https://doi.org/10.1007/978-1-4020-8741-7_65
- Description: Sorting is a fundamental operation in computing and there is a constant need to push the boundaries of performance with different sorting algorithms. With the advent of the programmable graphics pipeline, the parallel nature of graphics processing units has been exposed, allowing programmers to take advantage of it. By transforming the way that data is represented and operated on, parallel sorting algorithms can be implemented on graphics processing units where previously only graphics processing could be performed. This programming paradigm offers potentially large speedups for such algorithms.
- Full Text:
- Date Issued: 2008
Guidelines for Constructing Robust Discrete-Time Computer Network Simulations
- Authors: Richter, John , Irwin, Barry V W
- Date: 2008
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429896 , vital:72649 , https://doi.org/10.1007/978-1-4020-8737-0_69
- Description: Developing network simulations is a complex task that is often performed in research and testing. The components required to build a network simulator are common to many solutions. In order to expedite further simulation development, these components have been outlined and detailed in this paper. The process for generating and using these components is then described, and an example of a simulator that has been implemented using this system is detailed.
- Full Text:
- Date Issued: 2008
High level internet scale traffic visualization using hilbert curve mapping
- Authors: Irwin, Barry V W , Pilkington, Nick
- Date: 2008
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429911 , vital:72650 , https://doi.org/10.1007/978-3-540-78243-8_10
- Description: A high level analysis tool was developed for aiding in the analysis of large volumes of network telescope traffic, and in particular the comparisons of data collected from multiple telescope sources. Providing a visual means for the evaluation of worm propagation algorithms has also been achieved. By using a Hilbert curve as a means of ordering points within the visualization space, the concept of nearness between numerically sequential network blocks was preserved. The design premise and initial results obtained using the tool developed are discussed, and a number of future extensions proposed.
- Full Text:
- Date Issued: 2008
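The core of such a mapping is the standard index-to-coordinate conversion for a Hilbert curve, which keeps numerically sequential indices spatially adjacent. A minimal sketch, under the illustrative assumption that each /16 network block is plotted as one cell of a 256x256 grid:

```python
def hilbert_d2xy(order: int, d: int) -> tuple:
    # Convert 1-D Hilbert-curve index d to (x, y) on a 2^order x 2^order grid,
    # so that numerically sequential blocks stay spatially near each other.
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                          # rotate the quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Plot each /16 network block (index 0..65535) as one cell of a 256x256 grid.
cell = hilbert_d2xy(8, 0)    # block index 0 lands at the origin (0, 0)
```

Consecutive indices always map to neighbouring cells, which is the "nearness" property the visualization relies on.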
Towards a taxonomy of network scanning techniques
- Authors: Barnett, Richard J , Irwin, Barry V W
- Date: 2008
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430310 , vital:72682 , https://doi.org/10.1145/1456659.1456660
- Description: Network scanning is a common reconnaissance activity in network intrusion. Despite this, its classification remains vague and detection systems in current Network Intrusion Detection Systems are incapable of detecting many forms of scanning traffic. This paper presents a classification of network scanning and illustrates how complex and varied this activity is. The presented classification extends previous, well known definitions of scanning traffic in a manner which reflects this complexity.
- Full Text:
- Date Issued: 2008
Using inetvis to evaluate snort and bro scan detection on a network telescope
- Authors: Irwin, Barry V W , van Riel, J P
- Date: 2008
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429981 , vital:72656 , https://doi.org/10.1007/978-3-540-78243-8_17
- Description: This paper presents an investigative analysis of network scans and scan detection algorithms. Visualisation is employed to review network telescope traffic and identify incidents of scan activity. Some of the identified phenomena appear to be novel forms of host discovery. Scan detection algorithms used by the Snort and Bro intrusion detection systems are critiqued by comparing the visualised scans with alert output. Where human assessment disagrees with the alert output, explanations are sought by analysing the detection algorithms. The Snort and Bro algorithms are based on counting unique connection attempts to destination addresses and ports. For Snort, notable false positive and false negative cases result due to a grossly oversimplified method of counting unique destination addresses and ports.
- Full Text:
- Date Issued: 2008
A Digital Forensic investigative model for business organisations
- Authors: Forrester, Jock , Irwin, Barry V W
- Date: 2007
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430078 , vital:72664 , https://www.researchgate.net/profile/Barry-Ir-win/publication/228783555_A_Digital_Forensic_investigative_model_for_business_organisations/links/53e9c5e80cf28f342f414987/A-Digital-Forensic-investigative-model-for-business-organisations.pdf
- Description: When a digital incident occurs there are generally three courses of action that can be taken, dependent on the type of organisation within which the incident occurs, or which is responding to the event. In the case of law enforcement the priority is to secure the crime scene, followed by the identification of evidentiary sources which should be dispatched to a specialist laboratory for analysis. In the case of an incident on military (or similar critical) infrastructure the primary goal becomes one of risk identification and elimination, followed by recovery and possible offensive measures. Where financial impact is caused by an incident and revenue earning potential is adversely affected, as in the case of most commercial organisations, root cause analysis and system remediation are of primary concern, with in-depth analysis of the how and why left until systems have been restored.
- Full Text:
- Date Issued: 2007
Inetvis: a graphical aid for the detection and visualisation of network scans
- Authors: Irwin, Barry V W , van Riel, Jean-Pierre
- Date: 2007
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430381 , vital:72687 , https://www.cs.ru.ac.za/research/g02V2468/publications/Irwin-VizSEC2007_draft.pdf
- Description: This paper presents an investigative analysis of network scans and scan detection algorithms. Visualisation is employed to review network telescope traffic and identify incidents of scan activity. Some of the identified phenomena appear to be novel forms of host discovery. The scan detection algorithms of Snort and Bro are critiqued by comparing the visualised scans with alert output. Where human assessment disagrees with the alert output, explanations are sought by analysing the detection algorithms. The algorithms of the Snort and Bro intrusion detection systems are based on counting unique connection attempts to destination addresses and ports. For Snort, notable false positive and false negative cases result due to a grossly oversimplified method of counting unique destination addresses and ports.
- Full Text:
- Date Issued: 2007
A Discussion Of Wireless Security Technologies
- Authors: Janse van Rensburg, Johanna , Irwin, Barry V W
- Date: 2006
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429852 , vital:72645 , https://www.researchgate.net/profile/Barry-Ir-win/publication/228864029_A_DISCUSSION_OF_WIRELESS_SECURITY_TECHNOLOGIES/links/53e9c5190cf28f342f41492b/A-DISCUSSION-OF-WIRELESS-SECURITY-TECHNOLOGIES.pdf
- Description: The 802.11 standard contains a number of problems, ranging from interference, co-existence issues, exposed terminal problems and regulations to security. Despite all of these it has become a widely deployed technology as an extension of companies' networks to provide mobility. In this paper the focus will be on the security issues of 802.11. Several solutions for the deployment of 802.11 security exist today, ranging from WEP, WPA, VPN and 802.11i, each providing a different level of security. These technologies have pros and cons which need to be understood in order to implement an appropriate solution suited to a specific scenario.
- Full Text:
- Date Issued: 2006
Inetvis, a visual tool for network telescope traffic analysis
- Authors: van Riel, Jean-Pierre , Irwin, Barry V W
- Date: 2006
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430176 , vital:72671 , https://doi.org/10.1145/1108590.1108604
- Description: This article illustrates the merits of visual analysis as it presents preliminary findings using InetVis - an animated 3-D scatter plot visualization of network events. The concepts and features of InetVis are evaluated with reference to related work in the field. Tested against a network scanning tool, anticipated visual signs of port scanning and network mapping serve as a proof of concept. This research also unveils substantial amounts of suspicious activity present in Internet traffic during August 2005, as captured by a class C network telescope. InetVis is found to have promising scalability whilst offering salient depictions of intrusive network activity.
- Full Text:
- Date Issued: 2006