A comparison of web-based technologies to serve images from an Oracle9i database
- Authors: Swales, Dylan
- Date: 2004 , 2013-06-18
- Subjects: Active server pages , Microsoft .NET , JavaServer pages , Oracle (Computer file) , Internet searching , Web site development--Computer programs , World Wide Web , Online information services
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4583 , http://hdl.handle.net/10962/d1004380
- Description: The nature of Internet and Intranet Web applications has changed from a static content-distribution medium into an interactive, dynamic medium, often used to serve multimedia from back-end object-relational databases to Web-enabled clients. Consequently, developers need to make an informed technological choice for developing software that supports a Web-based application for distributing multimedia over networks. This decision is based on several factors. Among the factors are ease of programming, richness of features, scalability, and performance. The research focuses on these key factors when distributing images from an Oracle9i database using Java Servlets, JSP, ASP, and ASP.NET as the server-side development technologies. Prototype applications are developed and tested within each technology: one for single image serving and the other for multiple image serving. A matrix of recommendations is provided to distinguish which technology, or combination of technologies, provides the best performance and development platform for image serving within the studied environment. (An illustrative sketch of the single-image-serving pattern follows this record.)
- Full Text:
- Date Issued: 2004
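The prototypes in the record above were written with Java Servlets, JSP, ASP, and ASP.NET; none of that code is reproduced here. The sketch below only illustrates, in Python, the generic single-image-serving pattern the abstract describes: fetch a BLOB by key, set the content type, and stream the bytes to the client. The cx_Oracle driver, the connection details, and the table and column names are assumptions made for the example, not details taken from the thesis.

```python
# Illustrative single-image-serving sketch in Python (the thesis prototypes
# used Java Servlets, JSP, ASP and ASP.NET). The cx_Oracle driver, the
# connection details, and the IMAGES table layout are assumptions.
from urllib.parse import parse_qs
from wsgiref.simple_server import make_server

import cx_Oracle  # assumed Oracle driver; any DB-API 2.0 driver follows the same pattern

DSN = "localhost/ORCL"            # hypothetical connection details
USER, PASSWORD = "scott", "tiger"

def serve_image(environ, start_response):
    """WSGI handler: GET /?id=<n> streams one JPEG stored as a BLOB."""
    image_id = int(parse_qs(environ.get("QUERY_STRING", "")).get("id", ["0"])[0])
    conn = cx_Oracle.connect(USER, PASSWORD, DSN)
    try:
        cur = conn.cursor()
        cur.execute("SELECT image_blob FROM images WHERE image_id = :id", id=image_id)
        row = cur.fetchone()
        if row is None:
            start_response("404 Not Found", [("Content-Type", "text/plain")])
            return [b"no such image"]
        data = row[0].read()      # materialise the LOB as bytes
    finally:
        conn.close()
    start_response("200 OK", [("Content-Type", "image/jpeg"),
                              ("Content-Length", str(len(data)))])
    return [data]

if __name__ == "__main__":
    make_server("", 8000, serve_image).serve_forever()
```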
Technology in conservation: towards a system for in-field drone detection of invasive vegetation
- Authors: James, Katherine Margaret Frances
- Date: 2020
- Subjects: Drone aircraft in remote sensing , Neural networks (Computer science) , Drone aircraft in remote sensing -- Case studies , Machine learning , Computer vision , Environmental monitoring -- Remote sensing , Invasive plants -- Monitoring
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/143408 , vital:38244
- Description: Remote sensing can assist in monitoring the spread of invasive vegetation. The adoption of camera-carrying unmanned aerial vehicles, commonly referred to as drones, as remote sensing tools has yielded images of higher spatial resolution than traditional techniques. Drones also have the potential to interact with the environment through the delivery of bio-control or herbicide, as seen with their adoption in precision agriculture. Unlike in agricultural applications, however, invasive plants do not have a predictable position relative to each other within the environment. To facilitate the adoption of drones as an environmental monitoring and management tool, drones need to be able to intelligently distinguish between invasive and non-invasive vegetation on the fly. In this thesis, we present the augmentation of a commercially available drone with a deep machine learning model to investigate the viability of differentiating between an invasive shrub and other vegetation. As a case study, this was applied to the shrub genus Hakea, originating in Australia and invasive in several countries including South Africa. However, for this research, the methodology is important, rather than the chosen target plant. A dataset was collected using the available drone and manually annotated to facilitate the supervised training of the model. Two approaches were explored, namely, classification and semantic segmentation. For each of these, several models were trained and evaluated to find the optimal one. The chosen model was then interfaced with the drone via an Android application on a mobile device and its performance was preliminarily evaluated in the field. Based on these findings, refinements were made and thereafter a thorough field evaluation was performed to determine the best conditions for model operation. Results from the classification task show that deep learning models are capable of distinguishing between target and other shrubs in ideal candidate windows. However, classification in this manner is restricted by the proposal of such candidate windows. End-to-end image segmentation using deep learning overcomes this problem, classifying the image in a pixel-wise manner. Furthermore, the use of appropriate loss functions was found to improve model performance. Field tests show that illumination and shadow pose challenges to the model, but that good recall can be achieved when the conditions are ideal. False positive detection remains an issue that could be improved. This approach shows the potential for drones as an environmental monitoring and management tool when coupled with deep machine learning techniques and outlines potential problems that may be encountered.
- Full Text:
- Date Issued: 2020
NetwIOC: a framework for the automated generation of network-based IOCS for malware information sharing and defence
- Authors: Rudman, Lauren Lynne
- Date: 2018
- Subjects: Malware (Computer software) , Computer networks Security measures , Computer security , Python (Computer program language)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/60639 , vital:27809
- Description: With the substantial number of new malware variants found each day, it is useful to have an efficient way to retrieve Indicators of Compromise (IOCs) from the malware in a format suitable for sharing and detection. In the past, these indicators were manually created after inspection of binary samples and network traffic. The Cuckoo Sandbox is an existing dynamic malware analysis system which meets the requirements for the proposed framework and was extended by adding a few custom modules. This research explored a way to automate the generation of detailed network-based IOCs in a popular format which can be used for sharing. This was done through careful filtering and analysis of the PCAP file generated by the sandbox, and placing these values into the correct type of STIX objects using Python. Through several evaluations, analysis of what type of network traffic can be expected for the creation of IOCs was conducted, including a brief case study that examined the effect of analysis time on the number of IOCs created. Using the automatically generated IOCs to create defence and detection mechanisms for the network was evaluated and proved successful. A proof-of-concept sharing platform developed for the STIX IOCs is showcased at the end of the research. (An illustrative sketch of the PCAP-to-STIX step follows this record.)
- Full Text:
- Date Issued: 2018
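The record above states that network values were filtered out of the sandbox PCAP and placed into STIX objects using Python, but it does not name the libraries involved. The sketch below is a minimal illustration of that step, assuming the scapy library for packet parsing and the OASIS stix2 library for STIX 2.1 output; the capture file path is hypothetical.

```python
# Sketch of turning a sandbox PCAP into network IOCs expressed as STIX
# Indicators. The library choices (scapy, stix2) and the file path are
# assumptions; the record above only states that Python and STIX were used.
from datetime import datetime, timezone

from scapy.all import rdpcap, IP, DNSQR    # pip install scapy
from stix2.v21 import Bundle, Indicator    # pip install stix2

def extract_network_values(pcap_path):
    """Collect destination IP addresses and DNS query names seen in the capture."""
    ips, domains = set(), set()
    for pkt in rdpcap(pcap_path):
        if pkt.haslayer(IP):
            ips.add(pkt[IP].dst)
        if pkt.haslayer(DNSQR):
            domains.add(pkt[DNSQR].qname.decode(errors="replace").rstrip("."))
    return ips, domains

def to_indicators(ips, domains):
    """Wrap each extracted value in a STIX 2.1 Indicator with a simple pattern."""
    now = datetime.now(timezone.utc)
    indicators = [Indicator(name=f"Contacted IP {ip}",
                            pattern=f"[ipv4-addr:value = '{ip}']",
                            pattern_type="stix", valid_from=now)
                  for ip in sorted(ips)]
    indicators += [Indicator(name=f"Queried domain {d}",
                             pattern=f"[domain-name:value = '{d}']",
                             pattern_type="stix", valid_from=now)
                   for d in sorted(domains)]
    return Bundle(*indicators)

if __name__ == "__main__":
    ips, domains = extract_network_values("analysis.pcap")  # hypothetical capture file
    print(to_indicators(ips, domains).serialize(pretty=True))
```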
A detailed investigation of interoperability for web services
- Authors: Wright, Madeleine
- Date: 2006
- Subjects: Firefox , Web services , World Wide Web , Computer architecture , C# (Computer program language) , PHP (Computer program language) , Java (Computer program language)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4592 , http://hdl.handle.net/10962/d1004832
- Description: The thesis presents a qualitative survey of web services' interoperability, offering a snapshot of development and trends at the end of 2005. It starts by examining the beginnings of web services in earlier distributed computing and middleware technologies, determining the distance from these approaches evident in current web-services architectures. It establishes a working definition of web services, examining the protocols that now seek to define it and the extent to which they contribute to its most crucial feature, interoperability. The thesis then considers the REST approach to web services as being in a class of its own, concluding that this approach to interoperable distributed computing is not only the simplest but also the most interoperable. It looks briefly at interoperability issues raised by technologies in the wider arena of Service Oriented Architecture. The chapter on protocols is complemented by a chapter that validates the qualitative findings by examining web services in practice. These have been implemented by a variety of toolkits and on different platforms. Included in the study is a preliminary examination of JAX-WS, the replacement for JAX-RPC, which is still under development. Although the main language of implementation is Java, the study includes services in C# and PHP and one implementation of a client using a Firefox extension. The study concludes that different forms of web service may co-exist with earlier middleware technologies. While remaining aware that there are still pitfalls that might yet derail the movement towards greater interoperability, the conclusion sounds an optimistic note that recent cooperation between different vendors may yet result in a solution that achieves interoperability through core web-service standards.
- Full Text:
- Date Issued: 2006
Investigating tools and techniques for improving software performance on multiprocessor computer systems
- Authors: Tristram, Waide Barrington
- Date: 2012
- Subjects: Multiprocessors , Multiprogramming (Electronic computers) , Parallel programming (Computer science) , Linux , Abstract data types (Computer science) , Threads (Computer programs) , Computer programming
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4655 , http://hdl.handle.net/10962/d1006651
- Description: The availability of modern commodity multicore processors and multiprocessor computer systems has resulted in the widespread adoption of parallel computers in a variety of environments, ranging from the home to workstation and server environments in particular. Unfortunately, parallel programming is harder and requires more expertise than the traditional sequential programming model. The variety of tools and parallel programming models available to the programmer further complicates the issue. The primary goal of this research was to identify and describe a selection of parallel programming tools and techniques to aid novice parallel programmers in the process of developing efficient parallel C/C++ programs for the Linux platform. This was achieved by highlighting and describing the key concepts and hardware factors that affect parallel programming, providing a brief survey of commonly available software development tools and parallel programming models and libraries, and presenting structured approaches to software performance tuning and parallel programming. Finally, the performance of several parallel programming models and libraries was investigated, along with the programming effort required to implement solutions using the respective models. A quantitative research methodology was applied to the investigation of the performance and programming effort associated with the selected parallel programming models and libraries, which included automatic parallelisation by the compiler, Boost Threads, Cilk Plus, OpenMP, POSIX threads (Pthreads), and Threading Building Blocks (TBB). Additionally, the performance of the GNU C/C++ and Intel C/C++ compilers was examined. The results revealed that the choice of parallel programming model or library is dependent on the type of problem being solved and that there is no overall best choice for all classes of problem. However, the results also indicate that parallel programming models with higher levels of abstraction require less programming effort and provide similar performance compared to explicit threading models. The principal conclusion was that problem analysis and parallel design are important factors in the selection of the parallel programming model and tools, but that models with higher levels of abstraction, such as OpenMP and Threading Building Blocks, are favoured. (An illustrative speedup-measurement sketch follows this record.)
- Full Text:
- Date Issued: 2012
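The comparison described above was carried out in C/C++ against models such as OpenMP, Pthreads, Cilk Plus, and TBB, and that code is not reproduced here. The sketch below only illustrates, in Python, the kind of serial-versus-parallel speedup measurement such a comparison rests on; the workload and task sizes are arbitrary choices made for the example.

```python
# Illustration of the speedup measurement underlying such comparisons
# (Python multiprocessing; NOT one of the C/C++ models the thesis evaluated).
import time
from multiprocessing import Pool, cpu_count

def work(n):
    """A CPU-bound task: count primes below n by trial division."""
    count = 0
    for candidate in range(2, n):
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    tasks = [20_000] * 16          # arbitrary workload for the demonstration

    start = time.perf_counter()
    serial = [work(n) for n in tasks]
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with Pool(cpu_count()) as pool:
        parallel = pool.map(work, tasks)
    t_parallel = time.perf_counter() - start

    assert serial == parallel      # same answers, different execution strategy
    print(f"serial {t_serial:.2f}s, parallel {t_parallel:.2f}s, "
          f"speedup {t_serial / t_parallel:.2f}x on {cpu_count()} cores")
```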
Towards a collection of cost-effective technologies in support of the NIST cybersecurity framework
- Authors: Shackleton, Bruce Michael Stuart
- Date: 2018
- Subjects: National Institute of Standards and Technology (U.S.) , Computer security , Computer networks Security measures , Small business Information technology Cost effectiveness , Open source software
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/62494 , vital:28199
- Description: The NIST Cybersecurity Framework (CSF) is a specific risk and cybersecurity framework. It provides guidance on controls that can be implemented to help improve an organisation’s cybersecurity risk posture. The CSF Functions consist of Identify, Protect, Detect, Respond, and Recover. Like most Information Technology (IT) frameworks, there are elements of people, processes, and technology. The same elements are required to successfully implement the NIST CSF. This research specifically focuses on the technology element. While there are many commercial technologies available for a small to medium-sized business, the costs can be prohibitive. Therefore, this research investigates cost-effective technologies and assesses their alignment to the NIST CSF. The assessment was made against the NIST CSF subcategories. Each subcategory was analysed to identify where a technology would likely be required. The framework provides a list of Informative References. These Informative References were used to create high-level technology categories, as well as identify the technical controls against which the technologies were measured. The technologies tested were either open source or proprietary. All open source technologies tested were free to use or had a free community edition. Proprietary technologies were either free to use or considered generally available to most organisations, such as components contained within Microsoft platforms. The results from the experimentation demonstrated that there are multiple cost-effective technologies that can support the NIST CSF. Once all technologies were tested, the NIST CSF was extended. Two new columns were added, namely high-level technology category and tested technology. The columns were populated with output from the research. This extended framework begins an initial collection of cost-effective technologies in support of the NIST CSF. (An illustrative sketch of the extended mapping follows this record.)
- Full Text:
- Date Issued: 2018
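The record above describes extending the NIST CSF with two columns, a high-level technology category and a tested technology. The sketch below shows one possible data representation of such an extension. The subcategory identifiers are genuine NIST CSF 1.1 identifiers, but the category and technology pairings are illustrative placeholders, not the mappings produced by the research.

```python
# A possible representation of the extended framework described above.
# Subcategory IDs are from NIST CSF 1.1; the technology pairings are
# illustrative assumptions, not the mappings produced by the thesis.
from dataclasses import dataclass

@dataclass
class SubcategoryMapping:
    subcategory: str          # NIST CSF subcategory identifier
    outcome: str              # abbreviated subcategory outcome
    technology_category: str  # added column 1: high-level technology category
    tested_technology: str    # added column 2: cost-effective technology tested

EXTENDED_CSF = [
    SubcategoryMapping("ID.AM-1", "Physical devices are inventoried",
                       "Asset discovery", "e.g. an open source network scanner"),
    SubcategoryMapping("PR.DS-1", "Data-at-rest is protected",
                       "Disk/volume encryption", "e.g. a free encryption tool"),
    SubcategoryMapping("DE.CM-1", "The network is monitored",
                       "Network intrusion detection", "e.g. an open source NIDS"),
]

def technologies_for(function_prefix):
    """List tested technologies for one CSF Function, e.g. 'DE' for Detect."""
    return [m for m in EXTENDED_CSF if m.subcategory.startswith(function_prefix)]

if __name__ == "__main__":
    for m in technologies_for("DE"):
        print(m.subcategory, "->", m.technology_category, "/", m.tested_technology)
```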
Buffering strategies and bandwidth renegotiation for MPEG video streams
- Authors: Schonken, Nico
- Date: 1999
- Subjects: Video compression , Computer algorithms , Digital video
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4651 , http://hdl.handle.net/10962/d1006620
- Description: This paper confirms the existence of short-term and long-term variation of the required bandwidth for MPEG video streams. We show how the use of a small amount of buffering and GOP grouping can significantly reduce the effect of the short-term variation. By introducing a number of bandwidth renegotiation techniques, which can be applied to MPEG video streams in general, we are able to reduce the effect of long-term variation. These techniques include those that need a priori knowledge of frame sizes as well as one that can renegotiate dynamically. A costing algorithm has also been introduced in order to compare various proposals against each other. (An illustrative GOP-smoothing and renegotiation sketch follows this record.)
- Full Text:
- Date Issued: 1999
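The abstract above argues that GOP grouping with a small buffer absorbs short-term bandwidth variation, while renegotiation tracks long-term variation. The sketch below illustrates both ideas on a synthetic frame-size trace; the frame sizes, GOP length, window, threshold, and max-of-window reservation policy are assumptions made for the example, not the techniques or costing algorithm of the thesis.

```python
# Illustrative sketch of GOP grouping and threshold-based bandwidth
# renegotiation. Frame sizes, GOP length, thresholds and the max-of-window
# policy are assumptions for demonstration, not the thesis's actual values.
import random

FPS, GOP = 25, 12                       # frames per second, frames per GOP
random.seed(1)
frame_bits = [random.randint(40_000, 400_000) for _ in range(25 * FPS)]  # fake trace

def gop_rates(frames, gop=GOP, fps=FPS):
    """Average bit rate of each GOP: smooths frame-to-frame (short-term) variation."""
    rates = []
    for i in range(0, len(frames) - gop + 1, gop):
        bits = sum(frames[i:i + gop])
        rates.append(bits * fps / gop)   # bits per second over one GOP
    return rates

def renegotiate(rates, window=10, threshold=0.2):
    """A-priori renegotiation: reserve the peak rate of the coming window and
    renegotiate only when the next window's need drifts by more than `threshold`."""
    schedule, current = [], None
    for i in range(0, len(rates), window):
        need = max(rates[i:i + window])
        if current is None or abs(need - current) / current > threshold:
            current = need                       # renegotiate the reservation
        schedule.append(current)
    return schedule

if __name__ == "__main__":
    rates = gop_rates(frame_bits)
    print("GOP-level peak/mean ratio:", round(max(rates) / (sum(rates) / len(rates)), 2))
    print("reserved Mbit/s per window:", [round(r / 1e6, 2) for r in renegotiate(rates)])
```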
Towards large scale software based network routing simulation
- Authors: Herbert, Alan
- Date: 2015
- Subjects: Routers (Computer networks) , Computer software , Linux
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4709 , http://hdl.handle.net/10962/d1017931
- Description: Software based routing simulators suffer from large simulation host requirements and are prone to slowdowns because of resource limitations, as well as context switching due to user space to kernel space requests. Furthermore, hardware based simulations do not scale with the passing of time as their available resources are set at the time of manufacture. This research aims to provide a software based, scalable solution to network simulation. It aims to achieve this through a Linux kernel-based solution, by insertion of a custom kernel module. This reduces the number of context switches by eliminating the user space context requirement, and remains highly compatible with any host that can run the Linux kernel. Through careful consideration in data structure choice and software component design, this routing simulator achieved results of over 7 Gbps of throughput over multiple simulated node hops on consumer hardware. Alongside this throughput, this routing simulator also demonstrates scalability, with the ability to instantiate and simulate networks in excess of 1 million routing nodes within 1 GB of system memory.
- Full Text:
- Date Issued: 2015
Classification of the difficulty in accelerating problems using GPUs
- Authors: Tristram, Uvedale Roy
- Date: 2014
- Subjects: Graphics processing units , Computer algorithms , Computer programming , Problem solving -- Data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4699 , http://hdl.handle.net/10962/d1012978
- Description: Scientists continually require additional processing power, as this enables them to compute larger problem sizes, use more complex models and algorithms, and solve problems previously thought computationally impractical. General-purpose computation on graphics processing units (GPGPU) can help in this regard, as there is great potential in using graphics processors to accelerate many scientific models and algorithms. However, some problems are considerably harder to accelerate than others, and it may be challenging for those new to GPGPU to ascertain the difficulty of accelerating a particular problem or seek appropriate optimisation guidance. Through what was learned in the acceleration of a hydrological uncertainty ensemble model, large numbers of k-difference string comparisons, and a radix sort, problem attributes have been identified that can assist in the evaluation of the difficulty in accelerating a problem using GPUs. The identified attributes are inherent parallelism, branch divergence, problem size, required computational parallelism, memory access pattern regularity, data transfer overhead, and thread cooperation. Using these attributes as difficulty indicators, an initial problem difficulty classification framework has been created that aids in GPU acceleration difficulty evaluation. This framework further facilitates directed guidance on suggested optimisations and required knowledge based on problem classification, which has been demonstrated for the aforementioned accelerated problems. It is anticipated that this framework, or a derivative thereof, will prove to be a useful resource for new or novice GPGPU developers in the evaluation of potential problems for GPU acceleration. (An illustrative attribute-scoring sketch follows this record.)
- Full Text:
- Date Issued: 2014
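The record above names seven problem attributes used as difficulty indicators. The sketch below shows one way such indicators could be combined into a coarse difficulty class; the 0-2 scoring scale, equal weighting, and cut-offs are hypothetical, and only the attribute names (paraphrased in their unfavourable direction) come from the abstract.

```python
# One possible realisation of an attribute-based difficulty estimate.
# The 0-2 scales, equal weighting, and class cut-offs are hypothetical;
# the attribute names paraphrase those listed in the record above.
ATTRIBUTES = [                      # higher score = harder to accelerate
    "limited inherent parallelism",
    "branch divergence",
    "small problem size",
    "high required computational parallelism",
    "irregular memory access pattern",
    "large data transfer overhead",
    "extensive thread cooperation",
]

def classify(scores):
    """scores: dict attribute -> 0 (favourable), 1 (moderate) or 2 (unfavourable)."""
    total = sum(scores.get(a, 0) for a in ATTRIBUTES)
    if total <= 4:
        return "straightforward to accelerate", total
    if total <= 9:
        return "moderate effort, targeted optimisation needed", total
    return "difficult; restructuring or an alternative algorithm advised", total

if __name__ == "__main__":
    example_problem = {                       # hypothetical assessment of one problem
        "limited inherent parallelism": 1,
        "branch divergence": 0,
        "irregular memory access pattern": 2,
        "extensive thread cooperation": 2,
    }
    label, score = classify(example_problem)
    print(f"score {score}: {label}")
```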
The implementation of a mobile application to decrease occupational sitting through goal setting and social comparison
- Authors: Tsaoane, Moipone Lipalesa
- Date: 2022-10-14
- Subjects: Uncatalogued
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/365544 , vital:65758
- Description: Background: Feedback proves to be a valuable tool in behaviour change as it is said to increase compliance and improve the effectiveness of interventions. Interventions that focus on decreasing sedentary behaviour as an independent factor from physical activity are necessary, especially for office workers who spend most of their day seated. There is insufficient knowledge regarding the effectiveness of feedback as a tool to decrease sedentary behaviour. This project implemented a tool that can be used to determine this. To take advantage of the cost-effectiveness and scalability of digital technologies, a mobile application was selected as the mode of delivery. Method: The application was designed as an intervention, using the Theoretical Domains Framework. It was then implemented into a fully functioning application through an agile development process, using the Xamarin.Forms framework. Due to challenges with this framework, a second application was developed using the React Native framework. Pilot studies were used for testing, with the final one consisting of Rhodes University employees. Results: The Xamarin.Forms application proved to be unfeasible; some users experienced fatal errors and crashes. The React Native application worked as desired and produced accurate and consistent step count readings, proving feasible from a functionality standpoint. The agile methodology enabled the developer to focus on implementing and testing one component at a time, which made the development process more manageable. Conclusion: Future work must conduct empirical studies to determine if feedback is an effective tool compared to a control group and which type of feedback (between goal-setting and social comparison) is most effective. , Thesis (MSc) -- Faculty of Science, Computer Science, 2022
- Full Text:
- Date Issued: 2022-10-14
A decision-making model to guide securing blockchain deployments
- Authors: Cronje, Gerhard Roets
- Date: 2021-10-29
- Subjects: Blockchains (Databases) , Bitcoin , Cryptocurrencies , Distributed databases , Computer networks Security measures , Computer networks Security measures Decision making , Ethereum
- Language: English
- Type: Masters theses , text
- Identifier: http://hdl.handle.net/10962/188865 , vital:44793
- Description: Satoshi Nakamoto, the pseudo-identity credited with the paper that sparked the implementation of Bitcoin, is famously quoted as remarking, electronically of course, that “If you don’t believe it or don’t get it, I don’t have time to try and convince you, sorry” (Tsapis, 2019, p. 1). What is noticeable, 12 years after the famed Satoshi paper that initiated Bitcoin (Nakamoto, 2008), is that blockchain at the very least has staying power and potentially wide application. A lesser-known figure, Marc Kenisberg, founder of Bitcoin Chaser, one of the many companies formed around the Bitcoin ecosystem, summarised it well, saying “…Blockchain is the tech - Bitcoin is merely the first mainstream manifestation of its potential” (Tsapis, 2019, p. 1). With blockchain still trying to reach its potential and still maturing on its way towards a mainstream technology, the main question that arises for security professionals is how do I ensure we do it securely? This research seeks to address that question by proposing a decision-making model that can be used by a security professional to guide them through ensuring appropriate security for blockchain deployments. This research is certainly not the first attempt at discussing the security of the blockchain and will not be the last, as the technology around blockchain and distributed ledger technology is still rapidly evolving. What this research does try to achieve is not to delve into extremely specific areas of blockchain security, or get bogged down in technical details, but to provide a reference framework that aims to cover all the major areas to be considered. The approach followed was to review the literature regarding blockchain and to identify the main security areas to be addressed. It then proposes a decision-making model and tests the model against a fictitious but relevant real-world example. It concludes with learnings from this research. The reader can be the judge, but the model aims to be a practical, valuable resource to be used by any security professional, to navigate the security aspects logically and understandably when involved in a blockchain deployment. In contrast to the Satoshi quote, this research tries to convince the reader and assist him/her in understanding the security choices related to every blockchain deployment. , Thesis (MSc) -- Faculty of Science, Computer Science, 2021
- Full Text:
- Date Issued: 2021-10-29
An analysis of the use of DNS for malicious payload distribution
- Authors: Dube, Ishmael
- Date: 2019
- Subjects: Internet domain names , Computer networks -- Security measures , Computer security , Computer network protocols , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/97531 , vital:31447
- Description: The Domain Name System (DNS) protocol is a fundamental part of Internet activities that can be abused by cybercriminals to conduct malicious activities. Previous research has shown that cybercriminals use different methods, including the DNS protocol, to distribute malicious content, remain hidden and avoid detection from various technologies that are put in place to detect anomalies. This allows botnets and certain malware families to establish covert communication channels that can be used to send or receive data and also distribute malicious payloads using the DNS queries and responses. Cybercriminals use the DNS to breach highly protected networks, distribute malicious content, and exfiltrate sensitive information without being detected by the security controls in place, by embedding certain strings in DNS packets. This research undertaking broadens this research field and fills in the existing research gap by extending the analysis of DNS being used as a payload distribution channel to detection of domains that are used to distribute different malicious payloads. This research undertaking analysed the use of the DNS in detecting domains and channels that are used for distributing malicious payloads. Passive DNS data, which replicates DNS queries on name servers to detect anomalies in DNS queries, was evaluated and analysed in order to detect malicious payloads. The research characterises the malicious payload distribution channels by analysing passive DNS traffic and modelling the DNS query and response patterns. The research found that it is possible to detect malicious payload distribution channels through the analysis of DNS TXT resource records. (An illustrative TXT-record scoring sketch follows this record.)
- Full Text:
- Date Issued: 2019
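The record above reports that payload distribution channels can be detected by analysing DNS TXT resource records in passive DNS traffic. The sketch below illustrates two signals commonly used for that purpose, record length and Shannon entropy of the TXT payload; the record format, thresholds, and example data are assumptions, not the detection model developed in the thesis.

```python
# Illustrative scoring of DNS TXT records as potential payload carriers.
# The passive-DNS record format, the thresholds, and the two features used
# (length and Shannon entropy) are assumptions for demonstration only.
import base64
import math
import os
from collections import Counter

def shannon_entropy(data: str) -> float:
    """Entropy in bits per character; encoded or encrypted payloads score high."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def suspicious_txt(record, min_len=120, min_entropy=4.0):
    """Flag a passive-DNS TXT record whose payload is long and high-entropy."""
    payload = record["rdata"]
    return len(payload) >= min_len and shannon_entropy(payload) >= min_entropy

if __name__ == "__main__":
    records = [  # hypothetical passive-DNS entries: query name and TXT rdata
        {"qname": "example.com", "rdata": "v=spf1 include:_spf.example.com ~all"},
        {"qname": "c2.example.net",
         "rdata": base64.b64encode(os.urandom(180)).decode()},  # stand-in for an encoded payload
    ]
    for r in records:
        verdict = "suspicious" if suspicious_txt(r) else "benign-looking"
        print(f'{r["qname"]:20} entropy={shannon_entropy(r["rdata"]):.2f}  {verdict}')
```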
Developing high-fidelity mental models of programming concepts using manipulatives and interactive metaphors
- Authors: Funcke, Matthew
- Date: 2015
- Subjects: Computer programming -- Study and teaching (Higher) , Computer programmers
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4707 , http://hdl.handle.net/10962/d1017929
- Description: It is well established that both learning and teaching programming are difficult tasks. Difficulties often occur due to weak mental models and common misconceptions. This study proposes a method of teaching programming that both encourages high-fidelity mental models and attempts to minimise misconceptions in novice programmers, through the use of metaphors and manipulatives. The elements in ActionWorld with which the students interact are realizations of metaphors. By simple example, a variable has a metaphorical representation as a labelled box that can hold a value. The dissertation develops a set of metaphors which have several core requirements: metaphors should avoid causing misconceptions, they need to be high-fidelity so as to avoid failing when used with a new concept, students must be able to relate to them, and finally, they should be usable across multiple educational media. The learning style that ActionWorld supports is one which requires active participation from the student - the system acts as a foundation upon which students are encouraged to build their mental models. This teaching style is achieved by placing the student in the role of code interpreter, the code they need to interpret will not advance until they have demonstrated its meaning via use of the aforementioned metaphors. ActionWorld was developed using an iterative developmental process that consistently improved upon various aspects of the project through a continual evaluation-enhancement cycle. The primary outputs of this project include a unified set of high-fidelity metaphors, a virtual-machine API for use in similar future projects, and two metaphor-testing games. All of the aforementioned deliverables were tested using multiple quality-evaluation criteria, the results of which were consistently positive. ActionWorld and its constituent components contribute to the wide assortment of methods one might use to teach novice programmers.
- Full Text:
- Date Issued: 2015
DRUBIS : a distributed face-identification experimentation framework - design, implementation and performance issues
- Authors: Ndlangisa, Mboneli
- Date: 2004
- Subjects: Principal components analysis , Human face recognition (Computer science) , Image processing , Biometric identification
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4704 , http://hdl.handle.net/10962/d1015768
- Description: We report on the design, implementation and performance issues of the DRUBIS (Distributed Rhodes University Biometric Identification System) experimentation framework. The Principal Component Analysis (PCA) face-recognition approach is used as a case study. DRUBIS is a flexible experimentation framework, distributed over a number of modules that are easily pluggable and swappable, allowing for the easy construction of prototype systems. Web services are the logical means of distributing DRUBIS components, and a number of prototype applications have been implemented from this framework. Several popular PCA face-recognition experiments were used to evaluate our experimentation framework, and recognition performance measures were extracted from these experiments. In particular, we use the framework for a more in-depth study of the suitability of the DFFS (Difference From Face Space) metric as a means for image classification in the area of race and gender determination.
- Full Text:
- Date Issued: 2004
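The DFFS measure mentioned in the record above can be sketched in a few lines of NumPy. This is a generic eigenface/DFFS illustration, not DRUBIS code, and the random arrays stand in for real face images: faces are flattened into vectors, a face space is built with PCA, and DFFS is the reconstruction error of a probe image projected into and out of that space.

    import numpy as np

    # Generic PCA "eigenface" sketch with the Difference From Face Space (DFFS)
    # measure; synthetic data stands in for real face images.

    rng = np.random.default_rng(0)
    train = rng.random((20, 32 * 32))      # 20 training faces, flattened 32x32
    probe = rng.random(32 * 32)            # one probe image

    mean_face = train.mean(axis=0)
    centred = train - mean_face

    # Principal components via SVD; keep the top k eigenfaces.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    k = 10
    eigenfaces = vt[:k]                    # (k, 1024) orthonormal basis

    # Project the probe into face space and reconstruct it.
    weights = eigenfaces @ (probe - mean_face)
    reconstruction = mean_face + eigenfaces.T @ weights

    # DFFS: how far the probe lies from the face space (reconstruction error).
    dffs = np.linalg.norm(probe - reconstruction)
    print("DFFS:", dffs)

A small DFFS indicates the probe is well explained by the face space; large values suggest a non-face or an unusual image, which is why the metric is a candidate for coarse classification tasks.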
An automatic programming system to generate payroll programs
- Authors: Fielding, Elizabeth Vera Catherine
- Date: 1979
- Subjects: Computer software -- Development , Programming (computers) , Software architecture
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4695 , http://hdl.handle.net/10962/d1011829 , Computer software -- Development , Programming (computers) , Software architecture
- Description: The purpose of this project was to investigate one approach to the problem of automatically generating programs from a specification. Rather than following the approach which requires the user to define the problem using some formulation, it was decided to look at a class of problems that have similar solutions but many variations, and to design a system capable of obtaining user requirements and generating solutions tailored to those requirements. The aim was to design the system in such a way that it could be extended to cater for other classes of problems, so that eventually a system which could automatically generate program solutions for a range of problems might be developed. Intro. p. 1.
- Full Text:
- Date Issued: 1979
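As a loose, modern analogue of the approach summarised above (gather user requirements, then emit a tailored program), a template-driven generator might look like the sketch below. This is only an illustration of the idea; it is not the 1979 system, and all names are invented.

    # Hypothetical template-based generator: requirements are supplied as
    # simple answers, and a requirement-specific payroll routine is emitted
    # as source text.

    def generate_payroll_program(requirements):
        lines = ["def calculate_pay(hours, rate):"]
        if requirements.get("overtime"):
            lines += [
                "    normal = min(hours, 40) * rate",
                "    overtime = max(hours - 40, 0) * rate * 1.5",
                "    gross = normal + overtime",
            ]
        else:
            lines += ["    gross = hours * rate"]
        if requirements.get("tax_rate") is not None:
            lines += [f"    return gross * (1 - {requirements['tax_rate']})"]
        else:
            lines += ["    return gross"]
        return "\n".join(lines)

    source = generate_payroll_program({"overtime": True, "tax_rate": 0.25})
    print(source)            # the generated, requirement-specific program
    exec(source)             # load it into this session for a quick check
    print(calculate_pay(45, 100.0))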
A multispectral and machine learning approach to early stress classification in plants
- Authors: Poole, Louise Carmen
- Date: 2022-04-06
- Subjects: Machine learning , Neural networks (Computer science) , Multispectral imaging , Image processing , Plant stress detection
- Language: English
- Type: Master's thesis , text
- Identifier: http://hdl.handle.net/10962/232410 , vital:49989
- Description: Crop loss and failure can impact both a country’s economy and food security, often to devastating effect. Successfully detecting plant stresses early in their development is therefore essential to minimize spread and damage to crop production. Identification of the stress and the stress-causing agent is the most critical and challenging step in plant and crop protection. With the development of, and increased ease of access to, new equipment and technology in recent years, the use of spectroscopy in the early detection of plant diseases has become notably popular. This thesis narrows down the most suitable multispectral imaging techniques and machine learning algorithms for early stress detection. Datasets of visible and multispectral images were collected. Dehydration was selected as the plant stress type for the main experiments, and data was collected from six plant species typically used in agriculture. Key contributions of this thesis include multispectral and visible datasets showing plant dehydration, as well as a separate preliminary dataset on plant disease. Promising results on dehydration showed statistically significant accuracy improvements for multispectral imaging compared to visible imaging in early stress detection, with multispectral input obtaining 92.50% accuracy against visible input’s 77.50% on general plant species. The system was effective at stress detection on known plant species, with multispectral imaging yielding a greater improvement for early stress detection than for advanced stress detection. Furthermore, strong species discrimination was achieved when exclusively testing either early or advanced dehydration against healthy species. , Thesis (MSc) -- Faculty of Science, Ichthyology & Fisheries Sciences, 2022
- Full Text:
- Date Issued: 2022-04-06
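The visible-versus-multispectral comparison in the record above amounts to asking whether extra spectral bands add separating power as classifier features. The toy nearest-centroid sketch below, on entirely synthetic band statistics (not the thesis pipeline or its data), illustrates the mechanism: the visible-only feature set cannot separate the classes, while added bands can.

    import numpy as np

    # Toy nearest-centroid classifier on per-band mean reflectance.
    # Visible input uses 3 features (R, G, B); "multispectral" adds extra
    # bands (e.g. red-edge, NIR). All data here is synthetic and illustrative.

    rng = np.random.default_rng(1)

    def make_samples(n, healthy, n_bands):
        base = 0.55 if healthy else 0.40   # stressed vegetation reflects less NIR
        means = np.r_[np.full(3, 0.3), np.full(n_bands - 3, base)]
        return means + 0.05 * rng.standard_normal((n, n_bands))

    def nearest_centroid(train_x, train_y, test_x):
        centroids = {c: train_x[train_y == c].mean(axis=0) for c in set(train_y)}
        return np.array([min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
                         for x in test_x])

    for n_bands in (3, 5):                 # visible only vs multispectral
        x = np.vstack([make_samples(50, True, n_bands),
                       make_samples(50, False, n_bands)])
        y = np.array([1] * 50 + [0] * 50)
        pred = nearest_centroid(x[::2], y[::2], x[1::2])
        acc = (pred == y[1::2]).mean()
        print(f"{n_bands} bands: accuracy {acc:.2f}")   # extra bands separate the classes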
Detecting derivative malware samples using deobfuscation-assisted similarity analysis
- Authors: Wrench, Peter Mark
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/383 , vital:19954
- Description: The overwhelming popularity of PHP as a hosting platform has made it the language of choice for developers of Remote Access Trojans (RATs or web shells) and other malicious software. These shells are typically used to compromise and monetise web platforms by providing the attacker with basic remote access to the system, including file transfer, command execution, network reconnaissance, and database connectivity. Once infected, compromised systems can be used to defraud users by hosting phishing sites, performing Distributed Denial of Service attacks, or serving as anonymous platforms for sending spam or other malfeasance. The vast majority of these threats are largely derivative, incorporating core capabilities found in more established RATs such as c99 and r57. Authors of malicious software routinely produce new shell variants by modifying the behaviours of these ubiquitous RATs, either to add desired functionality or to avoid detection by signature-based detection systems. Once these modified shells are eventually identified (or additional functionality is required), the process of shell adaptation begins again. The end result of this iterative process is a web of separate but related shell variants, many of which are at least partially derived from one of the more popular and influential RATs. In response to the problem outlined above, the author set out to design and implement a system capable of circumventing common obfuscation techniques and identifying derivative malware samples in a given collection. To begin with, a decoder component was developed to syntactically deobfuscate and normalise PHP code by detecting and reversing idiomatic obfuscation constructs, and to apply uniform formatting conventions to all system inputs. A unified malware analysis framework, called Viper, was then extended to create a modular similarity analysis system comprising individual feature extraction modules, modules responsible for batch processing, a matrix module for comparing sample features, and two visualisation modules capable of generating visual representations of shell similarity. The principal conclusion of the research was that the deobfuscation performed by the decoder component prior to analysis dramatically improved the observed levels of similarity between test samples. This in turn allowed the modular similarity analysis system to identify derivative clusters (or families) within a large collection of shells more accurately. Techniques for isolating and re-rendering these clusters were also developed and demonstrated to be effective at increasing the amount of detail available for evaluating the relative magnitudes of the relationships within each cluster.
- Full Text:
- Date Issued: 2016
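The pipeline summarised in the record above - normalise first, then compare - can be illustrated with a tiny token-based similarity measure. This is a generic sketch (Jaccard similarity over token trigrams of normalised source text), not the Viper-based modules themselves, and the two shell snippets are invented examples.

    import re

    # Generic sketch of deobfuscation-assisted similarity: normalise two code
    # samples, then compare them as sets of token trigrams with the Jaccard
    # index. An illustration of the idea only, not the thesis implementation.

    def normalise(source):
        source = re.sub(r"(#|//).*", "", source)       # strip line comments
        source = re.sub(r"\s+", " ", source).lower()   # collapse whitespace, fold case
        return source.strip()

    def trigrams(source):
        tokens = re.findall(r"\w+|\S", normalise(source))
        return {tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)}

    def jaccard(a, b):
        ta, tb = trigrams(a), trigrams(b)
        return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

    shell_a = "if($_GET['cmd']) { system($_GET['cmd']); } // run command"
    shell_b = "IF ( $_GET['cmd'] )   { system( $_GET['cmd'] ); }"
    print(round(jaccard(shell_a, shell_b), 2))   # high similarity once normalised

Without the normalisation step, superficial differences in case, spacing and comments would depress the score - the same effect, at a much larger scale, that the thesis attributes to obfuscation.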
A knowledge-oriented, context-sensitive architectural framework for service deployment in marginalized rural communities
- Authors: Thinyane, Mamello P
- Date: 2009
- Subjects: Information technology , Expert systems (Computer science) , Software architecture , User interfaces (Computer systems) , Ethnoscience , Social networks , Rural development , Technical assistance -- Developing countries , Information networks -- Developing countries
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4599 , http://hdl.handle.net/10962/d1004843
- Description: The notion of a global knowledge society is somewhat of a misnomer, because large portions of the global community are not participants in this global knowledge society, which is driven, shaped by and socio-technically biased towards a small fraction of the global population. Information and Communication Technology (ICT) is culture-sensitive, and this is a dynamic that is largely ignored in the majority of ICT for Development (ICT4D) interventions, leading to the flaw of technological determinism and ultimately the failure of the undertaken projects. The deployment of ICT solutions, in particular in the context of ICT4D, must be informed by the cultural and socio-technical profile of the deployment environments, and the solutions themselves must be developed with a focus on context-sensitivity and ethnocentricity. In this thesis, we investigate the viability of a software architectural framework for the development of ICT solutions that are context-sensitive and ethnocentric, and so aligned with the cultural and social dynamics within the environment of deployment. The conceptual framework, named PIASK, defines five tiers (presentation, interaction, access, social networking, and knowledge base) which allow for: behavioural completeness of the layer components; a modular and functionally decoupled architecture; and the flexibility to situate and contextualize the developed applications along the dimensions of the User Interface (UI), interaction modalities, usage metaphors, underlying Indigenous Knowledge (IK), and access protocols. We have developed a proof-of-concept service platform, called KnowNet, based on the PIASK architecture. KnowNet is built around the knowledge base layer, which consists of domain ontologies that encapsulate the knowledge in the platform, with an intrinsic flexibility to access secondary knowledge repositories. The domain ontologies constructed (as examples) are for the provisioning of eServices to support societal activities (e.g. commerce, health, agriculture, medicine) within a rural and marginalized area of Dwesa, in the Eastern Cape province of South Africa. The social networking layer allows for situating the platform within the local social systems. Heterogeneity of user profiles and multiplicity of end-user devices are handled through the access and presentation components, and the service logic is implemented by the interaction components. This service platform validates the PIASK architecture for end-to-end provisioning of multi-modal, heterogeneous, ontology-based services. The development of KnowNet was informed on the one hand by the latest trends within service architectures, semantic web technologies and social applications, and on the other hand by contextual considerations based on the profile (IK systems dynamics, infrastructure, usability requirements) of the Dwesa community. The realization of the service platform is based on the JADE Multi-Agent System (MAS), which shows the applicability and adequacy of MASs for service deployment in a rural context, while providing key advantages such as platform fault-tolerance, robustness and flexibility. While PIASK was conceptualized, and KnowNet implemented, in the context of rurality and ICT4D, the applicability of the architecture extends to other similarly heterogeneous and context-sensitive domains. KnowNet has been validated for functional and technical adequacy, and an initial pre-validation for social context sensitivity has also been undertaken. We observe that the five-tier PIASK architecture provides an adequate framework for developing context-sensitive and ethnocentric software: by functionally separating and making explicit the social networking and access tier components, while still maintaining the traditional separation of presentation, business logic and data components.
- Full Text:
- Date Issued: 2009
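The five-tier decomposition described in the record above can be pictured as a chain of small, independently replaceable components, each talking only to the tier below it. The sketch below is a schematic illustration of that layering in the spirit of PIASK; all class names, methods and data are invented here and are not taken from KnowNet.

    # Schematic five-tier, functionally decoupled stack: presentation,
    # interaction, access, social networking, knowledge base (hypothetical names).

    class KnowledgeBase:
        def query(self, topic):
            return {"topic": topic, "advice": "rotate crops after harvest"}

    class SocialNetworking:
        def __init__(self, kb): self.kb = kb
        def contextualise(self, topic, community):
            result = self.kb.query(topic)
            result["community"] = community      # situate the answer locally
            return result

    class Access:
        def __init__(self, social): self.social = social
        def fetch(self, topic, community, device):
            result = self.social.contextualise(topic, community)
            result["device"] = device            # adapt to the client device
            return result

    class Interaction:
        def __init__(self, access): self.access = access
        def handle(self, request):
            return self.access.fetch(request["topic"], request["community"],
                                     request["device"])

    class Presentation:
        def __init__(self, interaction): self.interaction = interaction
        def render(self, request):
            r = self.interaction.handle(request)
            return f"[{r['device']}] {r['community']}: {r['advice']}"

    stack = Presentation(Interaction(Access(SocialNetworking(KnowledgeBase()))))
    print(stack.render({"topic": "agriculture", "community": "Dwesa",
                        "device": "feature phone"}))

Because each tier depends only on the one beneath it, any single tier (for example, the presentation tier for a different device class) can be swapped without disturbing the rest of the stack.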
Static analysis of functional languages
- Authors: Mountjoy, Jon-Dean
- Date: 1994 , 2012-10-10
- Subjects: Functional programming languages
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4669 , http://hdl.handle.net/10962/d1006690 , Functional programming languages
- Description: Static analysis is the name given to a number of compile-time analysis techniques used to automatically generate information which can lead to improvements in the execution performance of functional languages. This thesis provides an introduction to these techniques and their implementation. The abstract interpretation framework is an example of a technique used to extract information from a program by providing the program with an alternative semantics and evaluating this program over a non-standard domain. The elements of this domain represent certain properties of interest. This framework is examined in detail, as well as various extensions and variants of it. The use of binary logical relations and program logics as alternative formulations of the framework, and partial equivalence relations as an extension to it, are also looked at. The projection analysis framework determines how much of a sub-expression can be evaluated by examining the context in which the expression is to be evaluated, and provides an elegant method for finding particular types of information from data structures. This is also examined. The most costly operation in implementing an analysis is the computation of fixed points. Methods developed to make this process more efficient are looked at. This leads to the final chapter, which highlights the dependencies and relationships between the different frameworks and their mathematical disciplines. , KMBT_223
- Full Text:
- Date Issued: 1994
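Abstract interpretation, as summarised in the record above, evaluates a program over a non-standard domain whose elements stand for properties of interest rather than concrete values. A classic miniature example of this (a textbook illustration, not code from the thesis) is sign analysis of arithmetic expressions:

    # Miniature abstract interpretation: evaluate expressions over the abstract
    # domain of signs {NEG, ZERO, POS, TOP} instead of concrete integers.

    NEG, ZERO, POS, TOP = "-", "0", "+", "?"

    def abs_add(a, b):
        if ZERO in (a, b):
            return b if a == ZERO else a
        return a if a == b else TOP      # +/+ is +, -/- is -, mixed is unknown

    def abs_mul(a, b):
        if ZERO in (a, b):
            return ZERO                  # anything times zero is zero
        if TOP in (a, b):
            return TOP
        return POS if a == b else NEG

    def analyse(expr, env):
        """expr: ('num', n) | ('var', x) | ('+', e1, e2) | ('*', e1, e2)."""
        op = expr[0]
        if op == "num":
            n = expr[1]
            return ZERO if n == 0 else (POS if n > 0 else NEG)
        if op == "var":
            return env[expr[1]]
        left, right = analyse(expr[1], env), analyse(expr[2], env)
        return abs_add(left, right) if op == "+" else abs_mul(left, right)

    # x is known to be positive; y's sign is unknown.
    expr = ("+", ("*", ("var", "x"), ("var", "x")), ("num", 1))
    print(analyse(expr, {"x": POS, "y": TOP}))   # '+': x*x + 1 is always positive

The analyses the thesis surveys (strictness analysis in particular) follow the same pattern with richer domains, which is why efficient fixed-point computation over those domains becomes the dominant implementation cost.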
An investigation into some critical computer networking parameters : Internet addressing and routing
- Authors: Isted, Edwin David
- Date: 1996
- Subjects: Computer networks , Internet , Electronic mail systems
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4608 , http://hdl.handle.net/10962/d1004874 , Computer networks , Internet , Electronic mail systems
- Description: This thesis describes the evaluation of several proposals suggested as replacements for the current Internet's TCP/IP protocol suite. The emphasis of this thesis is on how the proposals solve the current routing and addressing problems associated with the Internet. The addressing problem is found to be related to address space depletion, and the routing problem to excessive routing costs. The evaluation is performed based on criteria selected for their applicability as future Internet design criteria. All the protocols are evaluated using these criteria. It is concluded that the most suitable addressing mechanism is an expandable multi-level format, with a logical separation of location and host identification information. Similarly, the most suitable network representation technique is found to be an unrestricted hierarchical structure which uses a suitable abstraction mechanism. It is further found that these two solutions could adequately solve the existing addressing and routing problems and allow substantial growth of the Internet.
- Full Text:
- Date Issued: 1996
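The hierarchical, multi-level addressing the record above argues for is what makes prefix aggregation and longest-prefix matching work: many destinations share one aggregated route entry, keeping routing tables small. The sketch below illustrates the principle with IPv4-style prefixes and Python's standard ipaddress module; it demonstrates the concept only, not the specific proposals evaluated in the thesis, and the addresses are arbitrary examples.

    import ipaddress

    # Longest-prefix matching over a small routing table: a hierarchical
    # address structure lets many destinations share one aggregated entry.

    routes = {
        ipaddress.ip_network("0.0.0.0/0"): "default gateway",
        ipaddress.ip_network("196.21.0.0/16"): "provider aggregate",
        ipaddress.ip_network("196.21.144.0/24"): "campus network",
    }

    def lookup(address):
        dest = ipaddress.ip_address(address)
        matches = [net for net in routes if dest in net]
        best = max(matches, key=lambda net: net.prefixlen)   # longest prefix wins
        return routes[best]

    print(lookup("196.21.144.7"))   # campus network   (most specific route)
    print(lookup("196.21.9.1"))     # provider aggregate
    print(lookup("8.8.8.8"))        # default gateway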