CaptureFoundry: a GPU accelerated packet capture analysis tool
- Authors: Nottingham, Alastair , Richter, John , Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430112 , vital:72666 , https://doi.org/10.1145/2389836.2389877
- Description: Packet captures are used to support a variety of tasks, including network administration, fault diagnosis, and security and network-related research. Despite their usefulness, processing packet capture files is a slow and tedious process that impedes the analysis of large, long-term captures. This paper discusses the primary components and observed performance of CaptureFoundry, a stand-alone capture analysis support tool designed to quickly map, filter and extract packets from large capture files using a combination of indexing techniques and GPU accelerated packet classification. All results are persistent, and may be used to rapidly extract small pre-filtered captures on demand that may be analysed quickly in existing capture analysis applications. Performance results show that CaptureFoundry is capable of generating multiple indexes and classification results for large captures at hundreds of megabytes per second, with minimal CPU and memory overhead and only minor additional storage space requirements. (An illustrative sketch of the offset-indexing idea follows this record.)
- Full Text:
- Date Issued: 2012
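The record above mentions offset indexing. As a minimal sketch, assuming the classic libpcap file layout (not CaptureFoundry's actual on-disk index format, and with an invented function name), the core idea fits in a few lines of Python:

```python
import struct

def build_pcap_index(path):
    """Index the byte offset of every packet record in a classic libpcap file.

    A minimal sketch of offset indexing (not CaptureFoundry's format):
    entry i of the returned list is the file offset of packet i's record
    header, so later extraction needs one seek per packet instead of a
    full re-parse of the capture.
    """
    offsets = []
    with open(path, "rb") as f:
        header = f.read(24)                      # libpcap global header
        magic = struct.unpack("<I", header[:4])[0]
        if magic in (0xA1B2C3D4, 0xA1B23C4D):    # file written little-endian
            endian = "<"
        elif magic in (0xD4C3B2A1, 0x4D3CB2A1):  # file written big-endian
            endian = ">"
        else:
            raise ValueError("not a classic libpcap capture")
        while True:
            pos = f.tell()
            record = f.read(16)                  # ts_sec, ts_usec, incl_len, orig_len
            if len(record) < 16:
                break
            incl_len = struct.unpack(endian + "IIII", record)[2]
            offsets.append(pos)
            f.seek(incl_len, 1)                  # skip the packet bytes
    return offsets
```

Extracting packet i then reduces to a single seek to offsets[i] and a copy of the record, which is what makes on-demand generation of small pre-filtered captures cheap.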
Classifying network attack scenarios using an ontology
- Authors: Van Heerden, Renier , Irwin, Barry V W , Burke, I D
- Date: 2012
- Language: English
- Type: Conference paper
- Identifier: vital:6606 , http://hdl.handle.net/10962/d1009326
- Description: This paper presents a methodology using network attack ontology to classify computer-based attacks. Computer network attacks differ in motivation, execution and end result. Because attacks are diverse, no standard classification exists. If an attack could be classified, it could be mitigated accordingly. A taxonomy of computer network attacks forms the basis of the ontology. Most published taxonomies present an attack from either the attacker's or defender's point of view. This taxonomy presents both views. The main taxonomy classes are: Actor, Actor Location, Aggressor, Attack Goal, Attack Mechanism, Attack Scenario, Automation Level, Effects, Motivation, Phase, Scope and Target. The "Actor" class is the entity executing the attack. The "Actor Location" class is the Actor's country of origin. The "Aggressor" class is the group instigating an attack. The "Attack Goal" class specifies the attacker's goal. The "Attack Mechanism" class defines the attack methodology. The "Automation Level" class indicates the level of human interaction. The "Effects" class describes the consequences of an attack. The "Motivation" class specifies incentives for an attack. The "Scope" class describes the size and utility of the target. The "Target" class is the physical device or entity targeted by an attack. The "Vulnerability" class describes a target vulnerability used by the attacker. The "Phase" class represents an attack model that subdivides an attack into different phases. The ontology was developed using an "Attack Scenario" class, which draws from other classes and can be used to characterize and classify computer network attacks. An "Attack Scenario" consists of phases, has a scope and is attributed to an actor and aggressor which have a goal. The "Attack Scenario" thus represents different classes of attacks. High-profile computer network attacks such as Stuxnet and the Estonia attacks can now be classified through the "Attack Scenario" class. (An illustrative rendering of this class structure follows this record.)
- Full Text:
- Date Issued: 2012
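As a reading aid for the class structure above, the sketch below renders the abstract's taxonomy as a Python dataclass. This is purely illustrative: the paper defines an ontology, not code, the phase names are invented, and the example field values are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Phase(Enum):
    # Phase names are invented; the paper only states that the Phase
    # class subdivides an attack into different phases.
    RECONNAISSANCE = "reconnaissance"
    ATTACK = "attack"
    POST_ATTACK = "post-attack"

@dataclass
class AttackScenario:
    """Sketch of the "Attack Scenario" class drawing on the other taxonomy classes."""
    actor: str               # entity executing the attack
    actor_location: str      # the actor's country of origin
    aggressor: str           # group instigating the attack
    attack_goal: str         # the attacker's goal
    attack_mechanism: str    # attack methodology
    automation_level: str    # level of human interaction
    effects: str             # consequences of the attack
    motivation: str          # incentives for the attack
    scope: str               # size and utility of the target
    target: str              # device or entity targeted
    vulnerability: str       # target vulnerability used by the attacker
    phases: list[Phase]      # the attack model's phases

# Hypothetical classification of a Stuxnet-like incident (values invented):
scenario = AttackScenario(
    actor="worm", actor_location="unknown", aggressor="state-sponsored group",
    attack_goal="sabotage", attack_mechanism="worm with ICS payload",
    automation_level="fully automated", effects="physical damage",
    motivation="political", scope="industrial control network",
    target="PLCs", vulnerability="multiple zero-days",
    phases=[Phase.RECONNAISSANCE, Phase.ATTACK, Phase.POST_ATTACK],
)
```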
Cost-effective realisation of the Internet of Things
- Authors: Andersen, Michael , Irwin, Barry V W
- Date: 2012
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427930 , vital:72474 , https://www.researchgate.net/profile/Barry-Irwin/publication/326225063_Cost-effec-tive_realisation_of_the_Internet_of_Things/links/5b3f2262a6fdcc8506ffe75e/Cost-effective-realisation-of-the-Internet-of-Things.pdf
- Description: A hardware and software platform, created to facilitate power usage and power quality measurements along with direct power line actuation, is under development. Additional general purpose control and sensing interfaces have been integrated. Measurements are persistently stored on each node to allow asynchronous retrieval of data without the need for a central server. The device communicates using an IEEE 802.15.4 radio transceiver to create a self-configuring mesh network. Users can interface with the mesh network by connecting to any node via USB and utilising the developed high-level API and interactive environment.
- Full Text:
- Date Issued: 2012
Geo-spatial autocorrelation as a metric for the detection of fast-flux botnet domains
- Authors: Stalmans, Etienne , Hunter, Samuel O , Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429799 , vital:72640 , 10.1109/ISSA.2012.6320433
- Description: Botnets consist of thousands of hosts infected with malware. Botnet owners communicate with these hosts using Command and Control (C2) servers. These C2 servers are usually infected hosts which the botnet owners do not have physical access to. For this reason botnets can be shut down by taking over or blocking the C2 servers. Botnet owners have employed numerous shutdown avoidance techniques. One of these techniques, DNS Fast-Flux, relies on rapidly changing address records. The addresses returned by the Fast-Flux DNS servers consist of geographically widely distributed hosts. The distributed nature of Fast-Flux botnets differs from legitimate domains, which tend to have geographically clustered server locations. This paper examines the use of spatial autocorrelation techniques based on the geographic distribution of domain servers to detect Fast-Flux domains. Moran's I and Geary's C are used to build classifiers over multiple geographic co-ordinate systems, producing efficient and accurate results. It is shown how Fast-Flux domains can be detected reliably while only a small percentage of false positives is produced. (A generic implementation of the two statistics follows this record.)
- Full Text:
- Date Issued: 2012
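The two statistics named above have standard closed forms. The sketch below is a generic NumPy implementation, not the paper's classifiers; it assumes a precomputed spatial weight matrix w (for instance, inverse great-circle distance between a domain's resolved hosts) and would be applied to, say, the coordinate components of those hosts.

```python
import numpy as np

def morans_i(x, w):
    """Moran's I: near +1 for spatially clustered values, near 0 or below
    for the dispersed pattern typical of fast-flux address records."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    z = x - x.mean()                             # deviations from the mean
    return len(x) / w.sum() * (w * np.outer(z, z)).sum() / (z ** 2).sum()

def gearys_c(x, w):
    """Geary's C: well below 1 for clustering, around or above 1 for dispersion."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    z = x - x.mean()
    pairwise = (x[:, None] - x[None, :]) ** 2    # squared pairwise differences
    return (len(x) - 1) * (w * pairwise).sum() / (2 * w.sum() * (z ** 2).sum())
```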
Mapping the most significant computer hacking events to a temporal computer attack model
- Authors: Van Heerden, Renier , Pieterse, Heloise , Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429950 , vital:72654 , https://doi.org/10.1007/978-3-642-33332-3_21
- Description: This paper presents eight of the most significant computer hacking events (also known as computer attacks). These events were selected because of their unique impact, methodology, or other properties. A temporal computer attack model is presented that can be used to model computer-based attacks. This model consists of the following stages: Target Identification, Reconnaissance, Attack, and Post-Attack Reconnaissance. The Attack stage is separated into Ramp-up, Damage and Residue. This paper demonstrates how our eight significant hacking events are mapped to the temporal computer attack model. The temporal computer attack model becomes a valuable asset in the protection of critical infrastructure by being able to detect similar attacks earlier. (A sketch of the stage structure follows this record.)
- Full Text:
- Date Issued: 2012
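The stage structure above can be restated as a simple enumeration. The sketch below uses the stage names from the abstract; the annotated worm-outbreak mapping is an invented placeholder, not one of the paper's eight events.

```python
from enum import Enum, auto

class Stage(Enum):
    # Stage names from the abstract; Attack subdivides into Ramp-up,
    # Damage and Residue.
    TARGET_IDENTIFICATION = auto()
    RECONNAISSANCE = auto()
    ATTACK_RAMP_UP = auto()
    ATTACK_DAMAGE = auto()
    ATTACK_RESIDUE = auto()
    POST_ATTACK_RECONNAISSANCE = auto()

# Hypothetical mapping of a generic worm outbreak onto the model
# (annotations invented for illustration; see the paper for its events):
worm_outbreak = {
    Stage.TARGET_IDENTIFICATION: "select hosts exposing a vulnerable service",
    Stage.RECONNAISSANCE: "scan address ranges for the service",
    Stage.ATTACK_RAMP_UP: "initial infections seed further scanning",
    Stage.ATTACK_DAMAGE: "payload executes; traffic load peaks",
    Stage.ATTACK_RESIDUE: "dormant copies keep probing",
    Stage.POST_ATTACK_RECONNAISSANCE: "operators enumerate the infected population",
}
```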
Network telescope metrics
- Authors: Irwin, Barry V W
- Date: 2012
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427944 , vital:72475 , https://www.researchgate.net/profile/Barry-Ir-win/publication/265121268_Network_Telescope_Metrics/links/58e23f70a6fdcc41bf973e69/Network-Telescope-Metrics.pdf
- Description: Network telescopes are a means of passive network monitoring, increasingly being used as part of a holistic network security program. One problem encountered by researchers is the sharing of the collected data from these systems, either due to the size of the data, or a need to maintain the privacy of the network address space being used for monitoring. This paper proposes a selection of metrics which can be used to communicate the most salient information contained in the data-set with other researchers, without the need to exchange or disclose the data-sets. Descriptive metrics for the sensor system are discussed along with numerical analysis data. The case for the use of graphical summary data is also presented. (A sketch of such shareable summary metrics follows this record.)
- Full Text:
- Date Issued: 2012
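As noted above, the aim is metrics that can be exchanged without disclosing raw traffic or the monitored address space. A minimal sketch, assuming a hypothetical per-packet record layout rather than anything from the paper:

```python
from collections import Counter

def telescope_metrics(packets):
    """Summarise telescope traffic as shareable aggregate metrics.

    `packets` is an iterable of (timestamp, src_ip, dst_port, proto)
    tuples, a hypothetical record layout. Only aggregates are returned,
    so neither the raw captures nor the monitored address space needs
    to be disclosed to other researchers.
    """
    total = 0
    sources = set()
    ports = Counter()
    protos = Counter()
    for _ts, src, dport, proto in packets:
        total += 1
        sources.add(src)
        ports[dport] += 1
        protos[proto] += 1
    return {
        "packet_count": total,
        "unique_sources": len(sources),
        "top_ports": ports.most_common(10),
        "protocol_mix": dict(protos),
    }
```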
Normandy: A Framework for Implementing High Speed Lexical Classification of Malicious URLs
- Authors: Egan, Shaun P , Irwin, Barry V W
- Date: 2012
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427958 , vital:72476 , https://www.researchgate.net/profile/Barry-Ir-win/publication/326224974_Normandy_A_Framework_for_Implementing_High_Speed_Lexical_Classification_of_Malicious_URLs/links/5b3f21074585150d2309dd50/Normandy-A-Framework-for-Implementing-High-Speed-Lexical-Classification-of-Malicious-URLs.pdf
- Description: Research has shown that it is possible to classify malicious URLs using state-of-the-art techniques to train Artificial Neural Networks (ANN) using only lexical features of a URL. This has the advantage of being high speed and does not add any overhead to classifications, as it does not require look-ups from external services. This paper discusses our method for implementing and testing a framework which automates the generation of these neural networks, as well as the testing involved in optimizing the performance of these ANNs. (An illustrative lexical feature extractor follows this record.)
- Full Text:
- Date Issued: 2012
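A lexical feature extractor of the kind referenced above might look like the sketch below. The feature set is assembled from the examples given in these abstracts (length, delimiter counts, directory traversals); Normandy's actual feature set and network topology are not reproduced here.

```python
from urllib.parse import urlparse

def lexical_features(url):
    """Purely lexical URL features: no WHOIS, blacklist or content lookups.

    Assumes a non-empty URL string; the feature choice is illustrative.
    """
    parsed = urlparse(url)
    path = parsed.path or "/"
    return [
        len(url),                                  # total URL length
        url.count("."),                            # dot delimiters
        url.count("-") + url.count("_"),           # other delimiters
        path.count("/"),                           # directory traversals
        sum(c.isdigit() for c in url) / len(url),  # digit ratio
        len(parsed.netloc),                        # hostname length
    ]

# These vectors would then train an ANN; a scikit-learn MLP is one stand-in:
# MLPClassifier(hidden_layer_sizes=(16,)).fit([lexical_features(u) for u in urls], labels)
```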
Remote fingerprinting and multisensor data fusion
- Authors: Hunter, Samuel O , Stalmans, Etienne , Irwin, Barry V W , Richter, John
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429813 , vital:72641 , 10.1109/ISSA.2012.6320449
- Description: Network fingerprinting is the technique by which a device or service is enumerated in order to determine the hardware, software or application characteristics of a targeted attribute. Although fingerprinting can be achieved by a variety of means, the most common technique is the extraction of characteristics from an entity and the correlation thereof against known signatures for verification. In this paper we identify multiple host-defining metrics and propose a process of unique host tracking through the use of two novel fingerprinting techniques. We then illustrate the application of host fingerprinting and tracking for increasing situational awareness of potentially malicious hosts. In order to achieve this we provide an outline of an adapted multisensor data fusion model with the goal of increasing situational awareness through observation of unsolicited network traffic.
- Full Text:
- Date Issued: 2012
Social recruiting: a next generation social engineering attack
- Authors: Schoeman, A H B , Irwin, Barry V W
- Date: 2012
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428600 , vital:72523 , https://www.jstor.org/stable/26486876
- Description: Social engineering attacks initially experienced success due to the lack of understanding of the attack vector and resultant lack of remedial actions. Due to an increase in media coverage corporate bodies have begun to defend their interests from this vector. This has resulted in a new generation of social engineering attacks that have adapted to the industry response. These new forms of attack take into account the increased likelihood that they will be detected; rendering traditional defences against social engineering attacks moot. This paper highlights these attacks and will explain why traditional defences fail to address them as well as suggest new methods of incident response.
- Full Text:
- Date Issued: 2012
A Baseline Numeric Analysis of Network Telescope Data for Network Incident Discovery
- Authors: Cowie, Bradley , Irwin, Barry V W
- Date: 2011
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427971 , vital:72477 , https://www.researchgate.net/profile/Barry-Ir-win/publication/326225071_An_Evaluation_of_Trading_Bands_as_Indicators_for_Network_Telescope_Datasets/links/5b3f231a4585150d2309e1c0/An-Evaluation-of-Trading-Bands-as-Indicators-for-Network-Telescope-Datasets.pdf
- Description: This paper investigates the value of Network Telescope data as a mechanism for network incident discovery by considering data summarization, simple heuristic identification and deviations from previously observed traffic distributions. It is important to note that the traffic observed is obtained from a Network Telescope and thus does not experience the same fluctuations or vagaries experienced by normal traffic. The datasets used for this analysis were obtained from a Network Telescope, which had been allocated a Class-C network address block at Rhodes University, for the time period August 2005 to September 2009. The nature of the datasets was considered in terms of simple statistical measures obtained through data summarization, which greatly reduced the processing and observation required to determine whether an incident had occurred. However, this raised issues relating to the time interval used for identification of an incident. A brief discussion of statistical summaries of Network Telescope data as "good" security metrics is provided. The summaries derived were then used to seek signs of anomalous network activity. Anomalous activity detected was then reconciled by considering incidents that had occurred in the same or similar time interval. Incidents identified included Conficker, Win32.RinBot, DDoS and Norton Netware vulnerabilities. Detection techniques included identification of rapid growth in packet count, packet size deviations, changes in the composition of the traffic expressed as a ratio of its constituents, and changes in the modality of the data. Discussion of the appropriateness of this sort of manual analysis is provided and suggestions towards an automated solution are discussed. (A sketch of the rapid-growth heuristic follows this record.)
- Full Text:
- Date Issued: 2011
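A minimal sketch of the "rapid growth in packet count" heuristic mentioned above, with an invented window length and threshold:

```python
import statistics

def flag_anomalies(daily_counts, window=28, k=3.0):
    """Flag days whose packet count deviates sharply from recent history.

    A day is flagged when it exceeds the trailing window's mean by k
    standard deviations. Window length and k are illustrative choices,
    not values from the paper.
    """
    flagged = []
    for i in range(window, len(daily_counts)):
        history = daily_counts[i - window:i]
        mu = statistics.fmean(history)
        sigma = statistics.stdev(history)
        if sigma and daily_counts[i] > mu + k * sigma:
            flagged.append(i)
    return flagged
```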
A framework for DNS based detection and mitigation of malware infections on a network
- Authors: Stalmans, Etienne , Irwin, Barry V W
- Date: 2011
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429827 , vital:72642 , 10.1109/ISSA.2011.6027531
- Description: Modern botnet trends have led to the use of IP and domain fast-fluxing to avoid detection and increase resilience. These techniques bypass traditional detection systems such as blacklists and intrusion detection systems. The Domain Name Service (DNS) is one of the most prevalent protocols on modern networks and is essential for the correct operation of many network activities, including botnet activity. For this reason DNS forms the ideal candidate for monitoring, detecting and mitigating botnet activity. In this paper a system placed at the network edge is developed with the capability to detect fast-flux domains using DNS queries. Multiple domain features were examined to determine which would be most effective in the classification of domains. This is achieved using a C5.0 decision tree classifier and Bayesian statistics, with positive samples being labeled as potentially malicious and negative samples as legitimate domains. The system detects malicious domain names with a high degree of accuracy, minimising the need for blacklists. Statistical methods, namely Naive Bayesian, Bayesian, Total Variation distance and Probability distribution, are applied to detect malicious domain names. The detection techniques are tested against sample traffic and it is shown that malicious traffic can be detected with low false positive rates. (An illustrative DNS-feature extractor follows this record.)
- Full Text:
- Date Issued: 2011
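The sketch below illustrates the kind of DNS-response features such a system might extract. The record layout, thresholds and the thresholded stand-in decision are assumptions; the paper's C5.0 and Bayesian classifiers are not reproduced here.

```python
def fastflux_features(response):
    """Features of a DNS response used to score a domain for fast-flux.

    `response` is a hypothetical dict, e.g.
        {"ttl": 120, "a_records": [...], "ns_records": [...], "asns": [...]}
    Low TTLs, many A records and many distinct ASNs are the classic
    fast-flux indicators.
    """
    return {
        "ttl": response["ttl"],
        "num_a_records": len(response["a_records"]),
        "num_ns_records": len(response["ns_records"]),
        "num_asns": len(set(response["asns"])),
    }

def looks_fastflux(f, ttl_max=300, a_min=5, asn_min=3):
    """Toy thresholded decision standing in for a trained classifier."""
    return (f["ttl"] <= ttl_max
            and f["num_a_records"] >= a_min
            and f["num_asns"] >= asn_min)
```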
A Framework for DNS Based Detection of Botnets at the ISP Level
- Authors: Stalmans, Etienne , Irwin, Barry V W
- Date: 2011
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427984 , vital:72478 , https://www.researchgate.net/profile/Barry-Ir-win/publication/327622932_A_Framework_for_DNS_Based_Detection_of_Botnets_at_the_ISP_Level/links/5b9a14e1458515310583fc19/A-Framework-for-DNS-Based-Detection-of-Botnets-at-the-ISP-Level.pdf
- Description: The rapid expansion of networks and increase in internet connected devices has led to a large number of hosts susceptible to virus infection. Infected hosts are controlled by attackers and form so-called botnets. These botnets are used to steal data, mask malicious activity and perform distributed denial of service attacks. Traditional protection mechanisms rely on host-based detection of viruses. These systems are failing due to the rapid increase in the number of vulnerable hosts and attacks that easily bypass detection mechanisms. This paper proposes moving protection from the individual hosts to the Internet Service Provider (ISP), allowing for the detection and prevention of botnet traffic. DNS traffic inspection allows for the development of a lightweight and accurate classifier that has little or no effect on network performance. By preventing botnet activity at the ISP level, it is hoped that the threat of botnets can largely be mitigated.
- Full Text:
- Date Issued: 2011
A fuzz testing framework for evaluating and securing network applications
- Authors: Zeisberger, Sascha , Irwin, Barry V W
- Date: 2011
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428000 , vital:72479 , https://www.researchgate.net/profile/Barry-Ir-win/publication/327622655_A_Fuzz_Testing_Framework_for_Evaluating_and_Securing_Network_Applications/links/5b9a153b92851c4ba8181b0d/A-Fuzz-Testing-Framework-for-Evaluating-and-Securing-Network-Applications.pdf
- Description: Research has shown that fuzz-testing is an effective means of increasing the quality and security of software and systems. This project proposes the implementation of a testing framework based on numerous fuzz-testing techniques. The framework will allow a user to detect errors in applications and locate critical areas in the applications that are responsible for the detected errors. The aim is to provide an all-encompassing testing framework that will allow a developer to quickly and effectively deploy fuzz tests on an application and ensure a higher level of quality control before deployment. (A minimal sketch of the core fuzzing loop follows this record.)
- Full Text:
- Date Issued: 2011
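A fuzz-testing framework is built around a mutate-execute-observe loop, as referenced above. A minimal sketch of that loop (not the project's framework; `target` is any callable accepting bytes):

```python
import random

def mutate(seed: bytes, max_flips: int = 8) -> bytes:
    """Flip a few random bytes of a non-empty seed input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, max_flips)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 1000):
    """Drive `target` with mutated inputs and collect the failures.

    Inputs that raise an exception are kept so the failing cases can be
    replayed later to locate the code responsible for each error.
    """
    failures = []
    for _ in range(iterations):
        case = mutate(seed)
        try:
            target(case)
        except Exception as exc:       # crash-equivalent signal in-process
            failures.append((case, repr(exc)))
    return failures
```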
An evaluation of lightweight classification methods for identifying malicious URLs
- Authors: Egan, Shaun P , Irwin, Barry V W
- Date: 2011
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429839 , vital:72644 , 10.1109/ISSA.2011.6027532
- Description: Recent research has shown that it is possible to identify malicious URLs through lexical analysis of their URL structures alone. This paper intends to explore the effectiveness of these lightweight classification algorithms when working with large real world datasets, including lists of malicious URLs obtained from Phishtank as well as largely filtered benign URLs obtained from proxy traffic logs. Lightweight algorithms are defined as methods by which URLs are analysed that do not use external sources of information such as WHOIS lookups, blacklist lookups and content analysis. These parameters include URL length, number of delimiters as well as the number of traversals through the directory structure, and are used throughout much of the research in the paradigm of lightweight classification. Methods which include external sources of information are often called fully featured classifications and have been shown to be only slightly more effective than a purely lexical analysis when considering both false-positives and false-negatives. This distinction allows these algorithms to be run client side without the introduction of additional latency, but still providing a high level of accuracy through the use of modern techniques in training classifiers. Analysis of this type will also be useful in an incident response analysis where large numbers of URLs need to be filtered for potentially malicious URLs as an initial step in information gathering, as well as in end user implementations such as browser extensions which could help protect the user from following potentially malicious links. Both AROW and CW classifier update methods will be used as prototype implementations and their effectiveness will be compared to fully featured analysis results. These methods are interesting because they are able to train on any labelled data, including instances in which their prediction is correct, allowing them to build a confidence in specific lexical features. This makes it possible for them to be trained using noisy input data, making them ideal for real world applications such as link filtering and information gathering. (A sketch of a diagonal AROW learner follows this record.)
- Full Text:
- Date Issued: 2011
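The AROW update named above has a compact diagonal-covariance form. The sketch below follows the published AROW update rule (Crammer et al.) as an illustration of the classifier family named in the abstract; it is not the authors' implementation, and the regularisation default is arbitrary.

```python
import numpy as np

class DiagonalAROW:
    """Diagonal-covariance AROW online learner (illustrative sketch).

    Updates on every example whose margin is below 1, including
    correctly classified ones, which matches the abstract's point that
    these learners can build confidence from correct predictions too.
    """
    def __init__(self, dim, r=1.0):
        self.mu = np.zeros(dim)      # weight means
        self.sigma = np.ones(dim)    # per-feature confidence (diagonal)
        self.r = r                   # regularisation parameter

    def predict(self, x):
        return 1 if self.mu @ np.asarray(x, dtype=float) >= 0 else -1

    def update(self, x, y):          # y in {-1, +1}
        x = np.asarray(x, dtype=float)
        margin = y * (self.mu @ x)
        if margin >= 1.0:            # confident and correct: no update
            return
        v = (self.sigma * x * x).sum()
        beta = 1.0 / (v + self.r)
        alpha = (1.0 - margin) * beta
        self.mu += alpha * y * self.sigma * x
        self.sigma -= beta * (self.sigma * x) ** 2
```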
An Evaluation of Trading Bands as Indicators for Network Telescope Datasets
- Authors: Cowie, Bradley , Irwin, Barry V W
- Date: 2011
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428013 , vital:72480 , https://www.researchgate.net/profile/Barry-Ir-win/publication/326225071_An_Evaluation_of_Trading_Bands_as_Indicators_for_Network_Telescope_Datasets/links/5b3f231a4585150d2309e1c0/An-Evaluation-of-Trading-Bands-as-Indicators-for-Network-Telescope-Datasets.pdf
- Description: Large scale viral outbreaks such as Conficker, the Code Red worm and the Witty worm illustrate the importance of monitoring malevolent activity on the Internet. Careful monitoring of anomalous traffic allows organizations to react appropriately and in a timely fashion to minimize economic damage. Network telescopes, a type of Internet monitor, provide analysts with a way of decoupling anomalous traffic from legitimate traffic. Data from network telescopes is used by analysts to identify potential incidents by comparing recent trends with historical data. Analysis of network telescope datasets is complicated by the large quantity of data present, the number of subdivisions within the data and the uncertainty associated with received traffic. While there is considerable research being performed in the field of network telescopes, little of this work is concerned with the analysis of alternative methods of incident identification. This paper considers trading bands, a subfield of technical analysis, as an approach to identifying potential Internet incidents such as worms. Trading bands construct boundaries that are used for measuring when certain quantities are high or low relative to recent values. This paper considers Bollinger Bands and associated Bollinger Indicators, Price Channels and Keltner Channels. These techniques are evaluated as indicators of malevolent activity by considering how they react to incidents identified in the captured data from a network telescope. (Sketches of Bollinger Bands and a Price Channel follow this record.)
- Full Text:
- Date Issued: 2011
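The band constructions above are straightforward to compute over, say, daily packet counts from a telescope. The sketch below shows Bollinger Bands and a Price Channel with conventional default parameters (not values from the paper); Keltner Channels differ mainly in deriving band width from an average range measure.

```python
import statistics

def bollinger_bands(series, window=20, k=2.0):
    """(lower, middle, upper) Bollinger Bands from index `window` onward.

    A packet count breaching the upper band is a candidate incident;
    window and k are the conventional defaults.
    """
    out = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mid = statistics.fmean(hist)
        sd = statistics.stdev(hist)
        out.append((mid - k * sd, mid, mid + k * sd))
    return out

def price_channel(series, window=20):
    """(lowest, highest) over the trailing window: the Price Channel analogue."""
    return [(min(series[i - window:i]), max(series[i - window:i]))
            for i in range(window, len(series))]
```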
High Speed Lexical Classification of Malicious URLs
- Authors: Egan, Shaun P , Irwin, Barry V W
- Date: 2011
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428055 , vital:72483 , https://www.researchgate.net/profile/Barry-Ir-win/publication/326225046_High_Speed_Lexical_Classification_of_Malicious_URLs/links/5b3f20acaca27207851c60f9/High-Speed-Lexical-Classification-of-Malicious-URLs.pdf
- Description: It has been shown in recent research that it is possible to identify malicious URLs through lexical analysis of their URL structures alone. Lightweight algorithms are defined as methods by which URLs are analyzed that do not use external sources of information such as WHOIS lookups, blacklist lookups and content analysis. These parameters include URL length, number of delimiters as well as the number of traversals through the directory structure and are used throughout much of the research in the paradigm of lightweight classification. Methods which include external sources of information are often called fully featured classifications and have been shown to be only slightly more effective than a purely lexical analysis when considering both false-positives and false-negatives. This distinction allows these algorithms to be run client side without the introduction of additional latency, but still providing a high level of accuracy through the use of modern techniques in training classifiers. Both AROW and CW classifier update methods will be used as prototype implementations and their effectiveness will be compared to fully featured analysis results. These methods are selected because they are able to train on any labeled data, including instances in which their prediction is correct, allowing them to build a confidence in specific lexical features.
- Full Text:
- Date Issued: 2011
Near Real-time Aggregation and Visualisation of Hostile Network Traffic
- Authors: Hunter, Samuel O , Irwin, Barry V W
- Date: 2011
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428067 , vital:72484 , https://www.researchgate.net/profile/Barry-Irwin/publication/327622653_Near_Real-time_Aggregation_and_Visualisation_of_Hostile_Network_Traffic/links/5b9a1474a6fdcc59bf8dfcc2/Near-Real-time-Aggregation-and-Visualisation-of-Hostile-Network-Traffic.pdf4
- Description: Efficient utilization of hostile network traffic for visualization and defensive purposes requires near real-time availability of such data. Hostile or malicious traffic was obtained through the use of network telescopes and honeypots, as they are effective at capturing mostly illegitimate and nefarious traffic. The data is then exposed in near real-time through a messaging framework and visualized with the help of a geolocation based visualization tool. Defensive applications with regards to hostile network traffic are explored; these include the dynamic quarantine of malicious hosts internal to a network and the egress filtering of denial of service traffic originating from inside a network.
- Full Text:
- Date Issued: 2011
Tartarus: A honeypot based malware tracking and mitigation framework
- Authors: Hunter, Samuel O , Irwin, Barry V W
- Date: 2011
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428629 , vital:72525
- Description: On a daily basis many of the hosts connected to the Internet experience continuous probing and attack from malicious entities. Detection and defence from these malicious entities has primarily been the concern of Intrusion Detection Systems, Intrusion Prevention Systems and Anti-Virus software. These systems rely heavily on known signatures to detect nefarious traffic. Due to the reliance on known malicious signatures, these systems have been at a serious disadvantage when it comes to detecting new, never before seen malware. This paper introduces Tartarus, a malware tracking and mitigation framework that makes use of honeypot technology in order to detect malicious traffic. Tartarus implements a dynamic quarantine technique to mitigate the spread of self propagating malware on a production network. In order to better understand the spread and impact of internet worms Tartarus is used to construct a detailed demographic of potentially malicious hosts on the internet. This host demographic is in turn used as a blacklist for firewall rule creation. The sources of malicious traffic are then illustrated through the use of a geolocation based visualisation. (A sketch of blacklist-to-firewall rule generation follows this record.)
- Full Text:
- Date Issued: 2011
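The blacklist-to-firewall step above can be illustrated with plain iptables rules. The chain name and rule layout below are assumptions for illustration, not Tartarus's actual output format:

```python
def iptables_blacklist(malicious_hosts):
    """Emit iptables drop rules from a honeypot-derived host demographic.

    The QUARANTINE chain name is an invented convention; rules are
    returned as text suitable for iptables-restore style loading.
    """
    rules = ["-N QUARANTINE"]
    for ip in sorted(malicious_hosts):
        rules.append(f"-A QUARANTINE -s {ip} -j DROP")
    rules.append("-A INPUT -j QUARANTINE")   # hook the chain into INPUT
    return "\n".join(rules)

# Example with documentation-range addresses:
print(iptables_blacklist({"198.51.100.7", "203.0.113.42"}))
```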
Bandwidth management and monitoring for community networks
- Authors: Irwin, Barry V W , Siebörger, Ingrid , Wells, Daniel
- Date: 2010
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428040 , vital:72482 , https://www.researchgate.net/profile/Ingrid-Sieboerger/publication/265121154_Bandwidth_management_and_monitoring_for_community_networks/links/5e538b85458515072db7a686/Bandwidth-management-and-monitoring-for-community-networks.pdf
- Description: This paper describes a custom-built system to replace existing routing solutions within an identified community network. The community network in question shares a VSAT Internet connection to provide Internet access to a number of schools and their surrounding communities. This connection provides a limited resource which needs to be managed in order to ensure equitable use by members of the community. The community network originally lacked any form of bandwidth management or monitoring, which often resulted in unfair use and abuse. The solution implemented is based on a client-server architecture. The Community Access Points (CAPs) are the client components located at each school, providing the computers and servers with access to the rest of the community network and the Internet. These nodes also perform a number of monitoring tasks for the computers at the schools. The server component is the Access Concentrator (AC) and connects the CAPs together using encrypted and authenticated PPPoE tunnels. The AC performs several additional monitoring functions, both on the individual links and on the upstream Internet connection. The AC provides a means of effectively and centrally managing and allocating Internet bandwidth between the schools. The system that was developed has a number of features, including Quality of Service adjustments limiting network usage and fairly billing each school for their Internet use. The system provides an effective means for sharing bandwidth between users in a community network. (A sketch of the token-bucket rate limiting underlying such controls follows this record.)
- Full Text:
- Date Issued: 2010
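Per-school bandwidth allocation of the kind described above ultimately rests on rate limiting of the shared link. A token bucket is the classic mechanism for this; the sketch below is illustrative only, not the Access Concentrator's implementation, which the abstract describes only as Quality of Service adjustments.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter (illustrative sketch).

    Tokens accrue at the allocated rate up to a burst capacity; a packet
    is admitted only if enough tokens are available to cover its size.
    """
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False                      # over allocation: queue or drop
```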
Cyber security: Challenges and the way forward
- Authors: Ayofe, Azeez N , Irwin, Barry V W
- Date: 2010
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428613 , vital:72524
- Description: The high level of insecurity on the internet is becoming worrisome, so much so that transactions on the web have become a matter of doubt. Cybercrime is becoming ever more serious and prevalent. Findings from the 2002 Computer Crime and Security Survey show an upward trend that demonstrates a need for a timely review of existing approaches to fighting this new phenomenon in the information age. In this paper, we provide an overview of cybercrime and present an international perspective on fighting cybercrime. This work seeks to define the concept of cyber-crime, explain the tools being used by criminals to perpetrate their evil handiworks, identify reasons for cyber-crime and how it can be eradicated, look at those involved and the reasons for their involvement, examine how best to detect a criminal mail and, in conclusion, proffer recommendations that would help in checking the increasing rate of cyber-crimes and criminals.
- Full Text:
- Date Issued: 2010