An Analysis and Implementation of Methods for High Speed Lexical Classification of Malicious URLs
- Authors: Egan, Shaun P , Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429757 , vital:72637 , https://digifors.cs.up.ac.za/issa/2012/Proceedings/Research/58_ResearchInProgress.pdf
- Description: Several authors have put forward methods of using Artificial Neural Networks (ANNs) to classify URLs as malicious or benign using lexical features of those URLs. These methods have been compared to other methods of classification, such as blacklisting and spam filtering, and early attempts proved to be comparably accurate. Fully featured classifiers use lexical features as well as lookups to classify URLs, and include (but are not limited to) blacklists, spam filters and reputation services. The lexical classifiers are based on the Online Perceptron Model, using a single neuron as a linear combiner, and use lexical features that rely on the presence (or absence) of words belonging to a bag-of-words. Several obfuscation-resistant features are also used to increase the positive classification rate of these perceptrons; examples include URL length, number of directory traversals, and the length of arguments passed to the file within the URL. In this paper we describe how we implemented the online perceptron model and the methods we used to try to increase its accuracy through the use of hidden layers and training cost validation. We discuss our results in relation to those of other papers, as well as further analysis performed on the training data and the neural networks themselves, to better understand why they are so effective. We also describe the proposed model for developing these neural networks, how to deploy them in the real world through browser extensions, proxy plugins and spam filters for mail servers, and our current implementation. Finally, we describe work still in progress, including other methods of increasing accuracy through modern training techniques, and testing in a real-world environment.
- Full Text:
- Date Issued: 2012
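The online perceptron classifier described in the abstract above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the bag-of-words entries, learning rate, and feature ordering here are hypothetical, while the feature categories (word presence, URL length, directory-traversal count, argument length) follow the abstract.

```python
# Sketch of an online perceptron for lexical URL classification.
# BAG_OF_WORDS and the learning rate are illustrative placeholders.
from urllib.parse import urlparse

BAG_OF_WORDS = ["login", "secure", "account", "update", "free"]  # hypothetical

def features(url):
    """Extract bag-of-words presence plus obfuscation-resistant features."""
    parsed = urlparse(url)
    feats = [1.0 if w in url.lower() else 0.0 for w in BAG_OF_WORDS]
    feats.append(float(len(url)))            # URL length
    feats.append(float(parsed.path.count("/")))  # directory traversals
    feats.append(float(len(parsed.query)))   # length of arguments
    return feats

class OnlinePerceptron:
    """Single neuron acting as a linear combiner, trained online."""
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        s = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1 if s > 0 else 0  # 1 = malicious, 0 = benign

    def update(self, x, y):
        """Standard perceptron rule: adjust weights only on misclassification."""
        err = y - self.predict(x)
        if err:
            self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * err

model = OnlinePerceptron(len(BAG_OF_WORDS) + 3)
for url, label in [("http://example.com/index.html", 0),
                   ("http://secure-login.example/free/update?acct=1&tok=abc", 1)]:
    model.update(features(url), label)
```

Because the weight vector is updated one example at a time, a classifier like this can keep learning from a live URL feed, which is what makes the online variant attractive for browser-extension or proxy deployment.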
Building a Graphical Fuzzing Framework
- Authors: Zeisberger, Sascha , Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429772 , vital:72638 , https://digifors.cs.up.ac.za/issa/2012/Proceedings/Research/59_ResearchInProgress.pdf
- Description: Fuzz testing is a robustness testing technique that sends malformed data to an application’s inputs in order to test its behaviour when presented with input beyond its specification. The main difference between traditional testing techniques and fuzz testing is that most traditional techniques test an application against a specification and rate how well it conforms to that specification. Fuzz testing goes beyond the scope of a specification by intelligently generating values that may be interpreted by an application in an unintended manner. Fuzz testing has been most prevalent in academic and security communities, despite having shown success in production environments. To measure its effectiveness, an experiment was conducted in which several publicly available applications were fuzzed. In some instances, fuzz testing was able to force an application into an invalid state, and it was concluded that fuzz testing is a relevant testing technique that could assist in developing more robust applications. This success prompted a further investigation into fuzz testing in order to compile a list of requirements that make an effective fuzzer. This investigation informed the design of a fuzz testing framework, the goal of which is to make the process more accessible to users outside of academic and security environments. Design methodologies and justifications for the framework are discussed, focusing on the graphical user interface components, as this aspect is used to increase the framework's usability.
- Full Text:
- Date Issued: 2012
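The core fuzzing loop described in the abstract above can be sketched as follows. This is a minimal mutation-based sketch under stated assumptions, not the framework the paper describes: `toy_parser` is a hypothetical fragile target standing in for an external application, and the mutation strategy (random byte corruption) is just one simple way of generating out-of-specification input.

```python
# Sketch of mutation-based fuzzing: corrupt valid input, feed it to a
# target, and record inputs that drive the target into an invalid state.
import random

def mutate(data: bytes, n_flips: int = 4, seed: int = 0) -> bytes:
    """Return a copy of `data` with a few random byte-level corruptions."""
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(n_flips):
        i = rng.randrange(len(buf))
        buf[i] = rng.randrange(256)
    return bytes(buf)

def fuzz(target, valid_input: bytes, iterations: int = 100):
    """Run `target` on mutated inputs; collect the cases that raise."""
    crashes = []
    for seed in range(iterations):
        case = mutate(valid_input, seed=seed)
        try:
            target(case)
        except Exception as exc:
            crashes.append((case, exc))
    return crashes

# Hypothetical fragile length-prefixed parser used as a demonstration target.
def toy_parser(data: bytes):
    header, _, body = data.partition(b":")
    length = int(header)          # raises ValueError on a malformed header
    assert length == len(body)    # raises AssertionError on a bad length

crashes = fuzz(toy_parser, b"5:hello")
```

A real framework would replace `toy_parser` with a harness around an external process and detect invalid states via exit codes or crash signals rather than Python exceptions; the loop structure, however, stays the same.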
Design of realistic hybrid marine resource management programs in Oceania
- Authors: Aswani, Shankar , Ruddle, Kenneth
- Date: 2012
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/70615 , vital:29681 , https://doi.org/10.2984/67.3.11
- Description: This review article synthesizes the authors' several decades of multidisciplinary natural science, social science and applied marine resource management experience in the Asia-Pacific region to examine how coastal and marine resource management and conservation can be strengthened through alliances between local communities and external institutions. The objective is to assist the design of resource management and conservation programs that enhance the capacity of coastal communities in Oceania to confront both diminishing marine resources and the effects of climate change, by providing guidelines for protecting marine biodiversity and vulnerable ecosystem functions. This article describes a management framework that hybridizes local beliefs and institutions, expressed in customary management (CM), with such modern management concepts as marine protected areas (MPAs) and ecosystem-based management (EBM). Hybrid management accommodates the social, political, economic, and cultural contexts of Oceanic communities and, compared with recent or conventional management approaches, can therefore better address fundamental local concerns such as coastal degradation, climate change, sea-level rise, weak governance, corruption, limited resources and staff to manage and monitor marine resources, and increasing poverty. Research on the hybridization of management systems demonstrates opportunities to establish context-appropriate EBM and/or other managerial arrangements that include terrestrial and adjacent coastal-marine ecosystems. Formal and informal CM systems are widespread in Oceania and in some parts of Southeast Asia, and if appropriate strategies are employed, rapid progress toward hybrid CM-EBM could be enabled.
- Full Text:
- Date Issued: 2012
Life-history characteristics of an age-validated established invasive African sharptooth catfish, Clarias gariepinus, population in a warm–temperate African impoundment
- Authors: Wartenberg, Reece , Weyl, Olaf L F , Booth, Anthony J , Winker, A Henning
- Date: 2012
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/124921 , vital:35710 , https://doi.org/10.3377/004.048.0225
- Description: The African sharptooth catfish Clarias gariepinus (Burchell, 1822) is a widely distributed fish that has now invaded water bodies in South America, Eastern Europe, Asia and South Africa (Cambray 2003). In South Africa it is native as far south as the Orange-Vaal river system, but inter-basin water transfer schemes (IBWTs), illegal stocking by anglers and escapes from aquaculture have resulted in the establishment of extralimital populations in almost all river systems (van Rensburg et al. 2011). Within the Eastern Cape Province, C. gariepinus has invaded the Great Fish and Sundays rivers through IBWTs that connect the Orange River to the Great Fish River and then to the Sundays River system, which flows directly into Darlington Dam (Kadye & Booth 2013a) (Fig. 1). Soon after the completion of the IBWTs, sharptooth catfish were recorded in Grassridge Dam in 1976 (Laurenson & Hocutt 1985) and later in Darlington Dam in 1981 (Scott et al. 2006). Although Cambray & Jubb (1977) were of the opinion that the species was translocated prior to the IBWT connection, there is now a permanent corridor between the Orange River and its receiving river systems that can facilitate the continued introduction of non-native Orange River fishes and other aquatic biota.
- Full Text:
- Date Issued: 2012