NFComms: A synchronous communication framework for the CPU-NFP heterogeneous system
- Authors: Pennefather, Sean
- Date: 2020
- Subjects: Network processors , Computer programming , Parallel processing (Electronic computers) , Netronome
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/144181 , vital:38318
- Description: This work explores the viability of using a Network Flow Processor (NFP), developed by Netronome, as a coprocessor for the construction of a CPU-NFP heterogeneous platform in the domain of general processing. When considering heterogeneous platforms involving architectures like the NFP, the communication framework provided is typically represented as virtual network interfaces and is thus not suitable for generic communication. To enable a CPU-NFP heterogeneous platform for use in the domain of general computing, a suitable generic communication framework is required. A feasibility study for a suitable communication medium between the two candidate architectures showed that a generic framework that conforms to the mechanisms dictated by Communicating Sequential Processes is achievable. The resulting NFComms framework, which facilitates inter- and intra-architecture communication through the use of synchronous message passing, supports up to 16 unidirectional channels and includes queuing mechanisms for transparently supporting concurrent streams exceeding the channel count. The framework has a minimum latency of between 15.5 μs and 18 μs per synchronous transaction and can sustain a peak throughput of up to 30 Gbit/s. The framework also provides a runtime for interacting with the Go programming language, allowing user-space processes to subscribe channels to the framework for interacting with processes executing on the NFP (a minimal sketch of this channel-subscription model follows this record). The viability of utilising a heterogeneous CPU-NFP system in the domain of general and network computing was explored by introducing a set of problems and applications spanning general computing and network processing. These were implemented on the heterogeneous architecture and benchmarked against equivalent CPU-only and CPU/GPU solutions. The results recorded were used to form an opinion on the viability of using an NFP for general processing. It is the author’s opinion that, beyond very specific use cases, the NFP-400 is not currently a viable solution as a coprocessor in the field of general computing. This does not mean that the proposed framework or the concept of a heterogeneous CPU-NFP system should be discarded, as such a system does have acceptable uses in the fields of network and stream processing. Additionally, when comparing the recorded limitations to those seen during the early stages of general-purpose GPU development, it is clear that general processing on the NFP is currently in a similar state.
- Full Text:
- Date Issued: 2020
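The channel-based, synchronous message-passing model described in the abstract above can be illustrated with a short Go sketch. This is a minimal illustration under stated assumptions, not the NFComms API: the Framework type, Subscribe method and channel identifiers are hypothetical, and the NFP side is simulated by a goroutine.

```go
// Minimal sketch of a CSP-style, channel-based framework in Go.
// All names (Framework, Subscribe, Message) are hypothetical and only
// illustrate the synchronous message-passing model described in the
// abstract; they are not the NFComms API.
package main

import "fmt"

const numChannels = 16 // the framework supports up to 16 unidirectional channels

// Message is an opaque payload passed between CPU and NFP processes.
type Message []byte

// Framework holds the unidirectional channels. Unbuffered Go channels
// give CSP-style synchronous (rendezvous) semantics: a send blocks
// until the matching receive occurs.
type Framework struct {
	channels [numChannels]chan Message
}

func NewFramework() *Framework {
	f := &Framework{}
	for i := range f.channels {
		f.channels[i] = make(chan Message) // unbuffered => synchronous
	}
	return f
}

// Subscribe returns the send side of channel id, so a user-space
// process can push messages towards a (simulated) NFP process.
func (f *Framework) Subscribe(id int) chan<- Message {
	return f.channels[id]
}

func main() {
	f := NewFramework()

	// Simulate an NFP-resident process draining channel 0.
	done := make(chan struct{})
	go func() {
		msg := <-f.channels[0]
		fmt.Printf("NFP side received %d bytes\n", len(msg))
		close(done)
	}()

	// The user-space (CPU) process sends a message; the send blocks until
	// the NFP side accepts it, i.e. one synchronous transaction.
	tx := f.Subscribe(0)
	tx <- Message("hello NFP")
	<-done
}
```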
Technology in conservation: towards a system for in-field drone detection of invasive vegetation
- Authors: James, Katherine Margaret Frances
- Date: 2020
- Subjects: Drone aircraft in remote sensing , Neural networks (Computer science) , Drone aircraft in remote sensing -- Case studies , Machine learning , Computer vision , Environmental monitoring -- Remote sensing , Invasive plants -- Monitoring
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/143408 , vital:38244
- Description: Remote sensing can assist in monitoring the spread of invasive vegetation. The adoption of camera-carrying unmanned aerial vehicles, commonly referred to as drones, as remote sensing tools has yielded images of higher spatial resolution than traditional techniques. Drones also have the potential to interact with the environment through the delivery of bio-control or herbicide, as seen with their adoption in precision agriculture. Unlike in agricultural applications, however, invasive plants do not have a predictable position relative to each other within the environment. To facilitate the adoption of drones as an environmental monitoring and management tool, drones need to be able to intelligently distinguish between invasive and non-invasive vegetation on the fly. In this thesis, we present the augmentation of a commercially available drone with a deep machine learning model to investigate the viability of differentiating between an invasive shrub and other vegetation. As a case study, this was applied to the shrub genus Hakea, which originates in Australia and is invasive in several countries including South Africa. For this research, however, the methodology is more important than the chosen target plant. A dataset was collected using the available drone and manually annotated to facilitate the supervised training of the model. Two approaches were explored, namely classification and semantic segmentation. For each of these, several models were trained and evaluated to find the optimal one. The chosen model was then interfaced with the drone via an Android application on a mobile device, and its performance was preliminarily evaluated in the field. Based on these findings, refinements were made, and thereafter a thorough field evaluation was performed to determine the best conditions for model operation. Results from the classification task show that deep learning models are capable of distinguishing between target and other shrubs in ideal candidate windows. However, classification in this manner is restricted by the proposal of such candidate windows. End-to-end image segmentation using deep learning overcomes this problem by classifying the image in a pixel-wise manner (the contrast between the two approaches is sketched after this record). Furthermore, the use of appropriate loss functions was found to improve model performance. Field tests show that illumination and shadow pose challenges to the model, but that good recall can be achieved when conditions are ideal. False positive detection remains an issue that could be improved. This approach shows the potential of drones as an environmental monitoring and management tool when coupled with deep machine learning techniques, and outlines potential problems that may be encountered.
- Full Text:
- Date Issued: 2020
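The contrast between window-based classification and pixel-wise segmentation drawn in the abstract can be made concrete with a short sketch. It is illustrative only and assumes a hypothetical classifyWindow stand-in for the trained deep model; none of it is the thesis code.

```go
// Illustrative sketch (not the thesis code): window-based classification
// depends on how candidate windows are proposed, whereas semantic
// segmentation labels every pixel of the input directly.
package main

import (
	"fmt"
	"image"
)

// classifyWindow is a hypothetical stand-in for a CNN classifying one crop.
func classifyWindow(r image.Rectangle) bool {
	// arbitrary placeholder decision
	return r.Min.X >= 64 && r.Min.Y >= 64
}

func main() {
	bounds := image.Rect(0, 0, 256, 256)
	const win, stride = 64, 64

	// Approach 1: sliding-window classification. Detections are limited
	// to the granularity and placement of the proposed windows.
	var detections []image.Rectangle
	for y := bounds.Min.Y; y+win <= bounds.Max.Y; y += stride {
		for x := bounds.Min.X; x+win <= bounds.Max.X; x += stride {
			r := image.Rect(x, y, x+win, y+win)
			if classifyWindow(r) {
				detections = append(detections, r)
			}
		}
	}
	fmt.Println("windows flagged as target:", len(detections))

	// Approach 2: semantic segmentation outputs a per-pixel mask of the
	// same spatial size as the input, so no window proposal is needed.
	mask := make([]uint8, bounds.Dx()*bounds.Dy()) // 1 = target, 0 = other
	_ = mask                                       // would be filled by the segmentation model
}
```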
A development method for deriving reusable concurrent programs from verified CSP models
- Authors: Dibley, James
- Date: 2019
- Subjects: CSP (Computer program language) , Sequential processing (Computer science) , Go (Computer program language) , CSPIDER (Open source tool)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/72329 , vital:30035
- Description: This work proposes and demonstrates a novel method for software development that applies formal verification techniques to the design and implementation of concurrent programs. This method is supported by a new software tool, CSPIDER, which translates machine-readable Communicating Sequential Processes (CSP) models into encapsulated, reusable components coded in the Go programming language. In relation to existing CSP implementation techniques, this work is only the second to implement a translator, and it provides original support for some CSP language constructs and modelling approaches. The method is evaluated through three case studies: a concurrent sorting array, a trial-division prime number generator, and a component node for the Ricart-Agrawala distributed mutual exclusion algorithm. Each of these case studies presents the formal verification of safety and functional requirements through CSP model-checking, and it is shown that CSPIDER is capable of generating reusable implementations from each model. The Ricart-Agrawala case study demonstrates the application of the method to the design of a protocol component. This method maintains full compatibility with the primary CSP verification tool. Applying the CSPIDER tool requires minimal commitment to an explicitly defined modelling style and a very small set of pre-translation annotations, but all of these measures can be instated prior to verification. The Go code that CSPIDER produces requires no intervention before it may be used as a component within a larger development. The translator provides a traceable, structured implementation of the CSP model, automatically deriving formal parameters and a channel-based client interface from its interpretation of the CSP model. Each case study demonstrates the use of the translated component within a simple test development (the general shape of such a channel-interfaced Go component is sketched after this record).
- Full Text:
- Date Issued: 2019
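The abstract describes CSPIDER's output as encapsulated, reusable Go components exposing a channel-based client interface derived from the CSP model. The sketch below is a hand-written illustration of that general component style for a trivial copy process (COPY = in?x -> out!x -> COPY); it is not actual CSPIDER output, and the constructor name and signature are assumptions.

```go
// Hand-written illustration of the component style described above: an
// encapsulated, reusable Go component whose client interface is a pair
// of channels derived from a CSP process such as
//   COPY = in?x -> out!x -> COPY
// This is NOT CSPIDER output; NewCopy and its signature are assumptions.
package main

import "fmt"

// NewCopy starts the component as a goroutine and exposes its CSP
// channels (in, out) as the channel-based client interface.
func NewCopy() (in chan<- int, out <-chan int) {
	inCh := make(chan int) // unbuffered: CSP-style synchronisation
	outCh := make(chan int)
	go func() {
		for {
			x := <-inCh // in?x
			outCh <- x  // out!x
		}
	}()
	return inCh, outCh
}

func main() {
	// A simple test development using the component, as in the case studies.
	in, out := NewCopy()
	in <- 42
	fmt.Println(<-out) // prints 42
}
```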
An investigation of the security of passwords derived from African languages
- Authors: Sishi, Sibusiso Teboho
- Date: 2019
- Subjects: Computers -- Access control -- Passwords , Computer users -- Attitudes , Internet -- Access control , Internet -- Security measures , Internet -- Management , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/163273 , vital:41024
- Description: Password authentication has become ubiquitous in the cyber age. To date, there have been several studies on country-based passwords by authors who studied, amongst others, English, Finnish, Italian and Chinese based passwords. However, there has been a lack of focused study on the type of passwords that are being created in Africa and whether there are benefits in creating passwords in an African language. For this research, password databases containing LAN Manager (LM) and NT LAN Manager (NTLM) hashes extracted from South African organisations in a variety of sectors of the economy were obtained to gain an understanding of user behaviour in creating passwords. Analysis of the passwords obtained from these hashes (using several cracking methods) showed that many organisational passwords are based on the English language. This is understandable considering that the business language in South Africa is English, even though South Africa has 11 official languages. African language based passwords were derived from known weak English passwords, and some of the passwords were appended with numbers and special characters (this derivation step is sketched after this record). The African language passwords, created using eight Southern African languages, were then uploaded to the Internet to test the security of passwords based on African languages. Since most of the passwords could be cracked by third-party researchers, we conclude that deriving a password from known weak English words offers no improvement in the security of a password written in an African language, especially in the more widely spoken languages, namely isiZulu, isiXhosa and Setswana.
- Full Text:
- Date Issued: 2019
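The derivation step described in the abstract, taking a base word and appending digits or special characters to form candidate passwords, can be sketched as follows. The base words and suffix rules shown are invented placeholders, not the actual wordlists or languages used in the study.

```go
// Illustrative sketch of the password-derivation step described above:
// base words (placeholders here, standing in for the language-derived
// words used in the study) are appended with digits and special
// characters to produce candidate passwords for cracking or testing.
package main

import "fmt"

func main() {
	baseWords := []string{"examplewordone", "examplewordtwo"} // hypothetical placeholders
	suffixes := []string{"", "1", "123", "2019", "!", "@1"}

	var candidates []string
	for _, w := range baseWords {
		for _, s := range suffixes {
			candidates = append(candidates, w+s)
		}
	}
	fmt.Println(len(candidates), "candidates, e.g.", candidates[:4])
}
```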
Towards understanding and mitigating attacks leveraging zero-day exploits
- Authors: Smit, Liam
- Date: 2019
- Subjects: Computer crimes -- Prevention , Data protection , Hacking , Computer security , Computer networks -- Security measures , Computers -- Access control , Malware (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/115718 , vital:34218
- Description: Zero-day vulnerabilities are unknown and therefore not addressed, with the result that they can be exploited by attackers to gain unauthorised system access. In order to understand and mitigate against attacks leveraging zero-days or unknown techniques, it is necessary to study the vulnerabilities, exploits and attacks that make use of them. In recent years there have been a number of leaks publishing such attacks using various methods to exploit vulnerabilities. This research seeks to understand what types of vulnerabilities exist, why and how these are exploited, and how to defend against such attacks by mitigating either the vulnerabilities or the method / process of exploiting them. By moving beyond merely remedying the vulnerabilities to defences that are able to prevent or detect the actions taken by attackers, the security of the information system will be better positioned to deal with future unknown threats. An interesting finding is how attackers move beyond the observable bounds of a system to circumvent security defences, for example, compromising syslog servers, or going down to lower system rings to gain access. However, defenders can counter this by employing defences that are external to the system, preventing attackers from disabling them or removing collected evidence after gaining system access. Attackers are able to defeat air-gaps via the leakage of electromagnetic radiation, as well as misdirect attribution by planting false artefacts for forensic analysis and attacking from third-party information systems. They analyse the methods of other attackers to learn new techniques. An example of this is the Umbrage project, whereby malware is analysed to decide whether it should be implemented as a proof of concept. Another important finding is that attackers respect defence mechanisms such as remote syslog (e.g. firewall), core dump files, database auditing, and Tripwire (e.g. SlyHeretic). These defences all have the potential to result in the attacker being discovered. Attackers must either negate the defence mechanism or find unprotected targets. Defenders can use technologies such as encryption to defend against interception and man-in-the-middle attacks. They can also employ honeytokens and honeypots to alarm, misdirect, slow down and learn from attackers. By employing various tactics, defenders are able to increase their chance of detecting attacks and the time available to react to them, even attacks exploiting hitherto unknown vulnerabilities. To summarize the information presented in this thesis and to show its practical importance, an examination is presented of the NSA's network intrusion of the SWIFT organisation. It shows that the firewalls were exploited with remote code execution zero-days. This attack has a striking parallel in the approach used in the recent VPNFilter malware. If nothing else, the leaks provide information to other actors on how to attack and what to avoid. However, by studying state actors, we can gain insight into what other actors with fewer resources can do in the future.
- Full Text:
- Date Issued: 2019
Investigating combinations of feature extraction and classification for improved image-based multimodal biometric systems at the feature level
- Authors: Brown, Dane
- Date: 2018
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/63470 , vital:28414
- Description: Multimodal biometrics has become a popular means of overcoming the limitations of unimodal biometric systems. However, the rich information particular to the feature level is of a complex nature, and leveraging its potential without overfitting a classifier is not well studied. This research investigates feature-classifier combinations on the fingerprint, face, palmprint, and iris modalities to effectively fuse their feature vectors for a complementary result (the basic feature-level fusion step is sketched after this record). The effects of different feature-classifier combinations are thus isolated to identify novel or improved algorithms. A new face segmentation algorithm is shown to increase consistency in nominal and extreme scenarios. Moreover, two novel feature extraction techniques demonstrate better adaptation to dynamic lighting conditions, while reducing feature dimensionality to the benefit of classifiers. A comprehensive set of unimodal experiments is carried out to evaluate both verification and identification performance on a variety of datasets using four classifiers, namely Eigen, Fisher, Local Binary Pattern Histogram and linear Support Vector Machine, on various feature extraction methods. The proposed algorithms are shown to outperform the vast majority of related studies in recognition performance when using the same dataset under the same test conditions. In the unimodal comparisons presented, the proposed approaches outperform existing systems even when given a handicap such as fewer training samples or data with a greater number of classes. A separate comprehensive set of experiments on feature fusion shows that combining modality data provides a substantial increase in accuracy, with only a few exceptions that occur when differences in the image data quality of two modalities are substantial. However, when two poor-quality datasets are fused, noticeable gains in recognition performance are realized when using the novel feature extraction approach. Finally, feature-fusion guidelines are proposed to provide the necessary insight to leverage the rich information effectively when fusing multiple biometric modalities at the feature level. These guidelines serve as the foundation to better understand and construct biometric systems that are effective in a variety of applications.
- Full Text:
- Date Issued: 2018
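Feature-level fusion, as investigated in this work, amounts to normalising each modality's feature vector and concatenating the results before a single classifier is trained. The sketch below shows only that fusion step, with min-max normalisation chosen purely as an example; the feature extractors and the Eigen/Fisher/LBPH/SVM classifiers are outside its scope.

```go
// Minimal sketch of feature-level fusion: each modality's feature
// vector is normalised (min-max here, as an example) and the vectors
// are concatenated into one fused vector for a single classifier.
// Extractors and classifiers themselves are not shown.
package main

import "fmt"

// minMaxNormalise rescales a feature vector to the range [0, 1].
func minMaxNormalise(v []float64) []float64 {
	lo, hi := v[0], v[0]
	for _, x := range v {
		if x < lo {
			lo = x
		}
		if x > hi {
			hi = x
		}
	}
	out := make([]float64, len(v))
	if hi == lo {
		return out // constant vector -> all zeros
	}
	for i, x := range v {
		out[i] = (x - lo) / (hi - lo)
	}
	return out
}

// fuse concatenates the normalised per-modality feature vectors.
func fuse(modalities ...[]float64) []float64 {
	var fused []float64
	for _, m := range modalities {
		fused = append(fused, minMaxNormalise(m)...)
	}
	return fused
}

func main() {
	faceFeatures := []float64{0.2, 7.5, 3.1}    // hypothetical values
	fingerprintFeatures := []float64{12.0, 4.0} // hypothetical values
	fmt.Println(fuse(faceFeatures, fingerprintFeatures))
}
```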
Pursuing cost-effective secure network micro-segmentation
- Authors: Fürst, Mark Richard
- Date: 2018
- Subjects: Computer networks -- Security measures , Computer networks -- Access control , Firewalls (Computer security) , IPSec (Computer network protocol) , Network micro-segmentation
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/131106 , vital:36524
- Description: Traditional network segmentation allows discrete trust levels to be defined for different network segments, using physical firewalls or routers that control north-south traffic flowing between different interfaces. This technique reduces the attack surface area should an attacker breach one of the perimeter defences. However, east-west traffic flowing between endpoints within the same network segment does not pass through a firewall, and an attacker may be able to move laterally between endpoints within that segment. Network micro-segmentation was designed to address the challenge of controlling east-west traffic, and various solutions have been released with differing levels of capabilities and feature sets. These approaches range from simple network switch Access Control List based segmentation to complex hypervisor-based software-defined security segments defined down to the individual workload, container or process level, and enforced via policy-based security controls for each segment. Several commercial solutions for network micro-segmentation exist, but these are primarily focused on physical and cloud data centres, and are often accompanied by significant capital outlay and resource requirements. Given these constraints, this research determines whether existing tools provided with operating systems can be re-purposed to implement micro-segmentation and restrict east-west traffic within one or more network segments for a small-to-medium-sized corporate network. To this end, a proof-of-concept lab environment was built with a heterogeneous mix of Windows and Linux virtual servers and workstations deployed in an Active Directory domain. The use of Group Policy Objects to deploy IPsec Server and Domain Isolation for controlling traffic between endpoints is examined, in conjunction with IPsec Authenticated Header and Encapsulating Security Payload modes as an additional layer of security. The outcome of the research shows that revisiting existing tools can enable organisations to implement an additional, cost-effective secure layer of defence in their network.
- Full Text:
- Date Issued: 2018
A risk-based framework for new products: a South African telecommunication’s study
- Authors: Jeffries, Michael
- Date: 2017
- Subjects: Telephone companies -- Risk management -- South Africa , Telephone companies -- South Africa -- Case studies , Telecommunication -- Security measures -- South Africa
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/4765 , vital:20722
- Description: The integrated reports of Vodacom, Telkom and MTN — telecommunication organisations in South Africa — show that they are diversifying their product offerings from traditional voice and data services. These organisations are including new offerings covering financial, health, insurance and mobile education services. The potential exists for these organisations to launch products that are substandard and which either do not take into account customer needs or do not comply with current legislation or regulations. Most telecommunication organisations have a well-defined enterprise risk management program to ensure compliance with King III; however, risk management processes specifically for new products and services might be lacking. The responsibility for implementing robust products usually resides with product managers; however, they do not always have the correct skill set to ensure adherence to governance requirements, and might therefore not be aware of which laws they are failing to adhere to, or fully understand the customers’ requirements. More complex products, additional competition, changes to technology and new business ventures have reinforced the need to manage risk on telecommunication products. Failure to take risk requirements into account could lead to potential fines and damage to the organisation’s reputation, which could lead to customers churning from these service providers. This research analyses three periods of data captured from a mobile telecommunication organisation to assess the current state of risk management maturity within the organisation’s product and service environment. Based on the analysis as well as industry best practices, a risk management framework for products is proposed that can assist product managers in analysing product concepts to ensure adherence to governance requirements. This could help ensure that new product or service offerings in the marketplace do not create a perception of distrust among consumers.
- Full Text:
- Date Issued: 2017
Selecting and augmenting a FOSS development and deployment environment for personalized video-oriented services in a Telco context
- Authors: Shibeshi, Zelalem Sintayehu
- Date: 2016
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/943 , vital:20005
- Description: The great demand for video services on the Internet is one contributing factor that led telecom companies to search for solutions to deliver innovative video services, using the different access technologies managed by them and leveraging the capacity to enforce Quality of Service (QoS). One part of the solution was an infrastructure that guarantees QoS for these services, in the form of the IP Multimedia Subsystem (IMS) framework. The IMS framework was developed for delivering innovative multimedia services, but IMS alone does not provide the required services. This has led to further work in the area of multimedia service architectures. One noteworthy architecture is IPTV. IPTV is more than what its name implies, as it allows the development of various innovative video-oriented services and not just TV. When IPTV was introduced, many thought that it would win back the revenue that telecom companies had lost to over-the-top (OTT) service providers. However, despite all its promises, IPTV has not seen as wide an uptake as one would expect. Although there could be various reasons for the slow penetration of IPTV, one reason could be the technical challenge that IPTV poses to service developers. One of the main motivations for the research reported in this thesis was to identify and select free and open source software (FOSS) based platforms and augment them for easy development and deployment of video-oriented services. The thesis motivates how the IPTV architecture, with some modification, can serve as a good architecture for developing innovative video-oriented services. To better understand and investigate the issues of video-oriented service development on different platforms, we followed an incremental and iterative prototyping method. As a result, various video-oriented services were first developed and implementation-related issues were analyzed. This helped us to identify problems that service developers face, including the requirement to utilize a number of protocols to develop an IPTV-based video-oriented service and the lack of a platform that provides a consistent programming interface to implement them all. The process also helped us to identify new use cases. As part of our selection process, we found that the Mobicents service development platform can be used as the basis for a good service development and deployment environment for video-oriented services. Mobicents is a Java-based service delivery platform for quick development, deployment and management of next generation network applications. Mobicents is a good choice because it provides a consistent programming interface and either supports the various protocols needed or makes it easy to add support for them. We used Mobicents to compose the environment that developers can use to build video-oriented services. Specifically, we developed components and service building blocks that service developers can use to develop various innovative video-oriented services. During our research, we also identified various issues with regard to support from streaming servers in general, and open source streaming servers in particular, as well as with the protocol they use. Specifically, we identified issues with the Real Time Streaming Protocol (RTSP), the protocol specified as the media control protocol in the IPTV specification, and made proposals for solving them. We developed an RTSP proxy to augment the features lacking in current streaming servers and implemented some of the features we proposed in it (a generic relay skeleton for such a proxy is sketched after this record).
- Full Text:
- Date Issued: 2016
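The RTSP proxy mentioned at the end of the abstract sits between clients and a streaming server so that requests and responses can be inspected or augmented in flight. The sketch below shows only the generic TCP relay skeleton such a proxy builds on; the listen and upstream addresses are placeholders, and no RTSP-specific parsing or augmentation is implemented.

```go
// Skeleton of a TCP-level relay of the kind an RTSP proxy builds on:
// accept a client connection, dial the upstream streaming server, and
// copy bytes in both directions. A real proxy would parse and rewrite
// RTSP messages at the marked point; addresses are placeholders.
package main

import (
	"io"
	"log"
	"net"
)

func main() {
	const listenAddr = "127.0.0.1:8554"  // placeholder proxy address
	const upstreamAddr = "127.0.0.1:554" // placeholder streaming server

	ln, err := net.Listen("tcp", listenAddr)
	if err != nil {
		log.Fatal(err)
	}
	for {
		client, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go func(client net.Conn) {
			defer client.Close()
			server, err := net.Dial("tcp", upstreamAddr)
			if err != nil {
				log.Print(err)
				return
			}
			defer server.Close()
			// Relay both directions; an RTSP-aware proxy would intercept
			// and modify requests/responses here instead of copying them
			// verbatim.
			go io.Copy(server, client)
			io.Copy(client, server)
		}(client)
	}
}
```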
A Framework for using Open Source intelligence as a Digital Forensic Investigative tool
- Authors: Rule, Samantha Elizabeth
- Date: 2015
- Subjects: Open source intelligence , Criminal investigation , Electronic evidence
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4715 , http://hdl.handle.net/10962/d1017937
- Description: The proliferation of the Internet has amplified the use of social networking sites by creating a platform that encourages individuals to share information. As a result there is a wealth of information that is publicly and easily accessible. This research explores whether open source intelligence (OSINT), which is freely available, could be used as a digital forensic investigative tool. A survey was created and sent to digital forensic investigators to establish whether they currently use OSINT when performing investigations. The survey results confirm that OSINT is being used by digital forensic investigators when performing investigations, but there are currently no guidelines or frameworks available to support the use thereof. Additionally, the survey results showed a belief amongst those surveyed that evidence gleaned from OSINT sources is considered supplementary rather than evidentiary. The findings of this research led to the development of a framework that identifies and recommends key processes to follow when conducting OSINT investigations. The framework can assist digital forensic investigators to follow a structured and rigorous process, which may lead to the unanimous acceptance of information obtained via OSINT sources as evidentiary rather than supplementary in the near future.
- Full Text:
- Date Issued: 2015
An investigation of ISO/IEC 27001 adoption in South Africa
- Authors: Coetzer, Christo
- Date: 2015
- Subjects: ISO 27001 Standard , Information technology -- Security measures , Computer security , Data protection
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4720 , http://hdl.handle.net/10962/d1018669
- Description: The research objective of this study is to investigate the low adoption of the ISO/IEC 27001 standard in South African organisations. This study does not differentiate between the ISO/IEC 27001:2005 and ISO/IEC 27001:2013 versions, as the focus is on adoption of the ISO/IEC 27001 standard. A survey-based research design was selected as the data collection method. The research instruments used in this study include a web-based questionnaire and in-person interviews with the participants. Based on the findings of this research, the organisations that participated in this study have an understanding of the ISO/IEC 27001 standard; however, fewer than a quarter of these have fully adopted the ISO/IEC 27001 standard. Furthermore, the main business objectives for organisations that have adopted the ISO/IEC 27001 standard were to ensure legal and regulatory compliance, and to fulfil client requirements. An Information Security Management System management guide based on the ISO/IEC 27001 Plan-Do-Check-Act model is developed to help organisations interested in the standard move towards ISO/IEC 27001 compliance.
- Full Text:
- Date Issued: 2015
Pseudo-random access compressed archive for security log data
- Authors: Radley, Johannes Jurgens
- Date: 2015
- Subjects: Computer security , Information storage and retrieval systems , Data compression (Computer science)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4723 , http://hdl.handle.net/10962/d1020019
- Description: We are surrounded by an increasing number of devices and applications that produce a huge quantity of machine-generated data. Almost all of this machine data contains some element of security information that can be used to discover, monitor and investigate security events. The work proposes a pseudo-random access compressed storage method for log data to be used with an information retrieval system that in turn provides the ability to search and correlate log data and the corresponding events. We explain the method for converting log files into distinct events and storing the events in a compressed file. This yields an entry identifier for each log entry that provides a pointer that can be used by indexing methods. The research also evaluates the compression performance penalties encountered by using this storage system, including decreased compression ratio, as well as increased compression and decompression times. (An illustrative sketch of this storage scheme follows this record.)
- Full Text:
- Date Issued: 2015
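To make the storage method summarised in the record above more concrete, the following is a minimal, hypothetical Python sketch (not the thesis's implementation): log lines become discrete events, events are grouped into fixed-size blocks, each block is compressed independently, and an event's entry identifier doubles as a pointer that locates its block, so a single entry can be retrieved by decompressing only that block. The class name, block size and use of zlib are illustrative assumptions.

```python
import zlib

BLOCK_SIZE = 1000  # events per compressed block (illustrative value)

class CompressedLogArchive:
    """Toy pseudo-random access archive: events are stored in independently
    compressed blocks, and an entry identifier (the event's sequence number)
    locates the block that holds it."""

    def __init__(self):
        self.blocks = []    # zlib-compressed blobs of BLOCK_SIZE events each
        self.pending = []   # events not yet flushed into a block
        self.count = 0      # total events stored; also the next entry id

    def append(self, log_line: str) -> int:
        """Store one log event and return its entry identifier."""
        entry_id = self.count
        self.pending.append(log_line.rstrip("\n"))
        self.count += 1
        if len(self.pending) == BLOCK_SIZE:
            self._flush()
        return entry_id

    def _flush(self):
        blob = "\n".join(self.pending).encode("utf-8")
        self.blocks.append(zlib.compress(blob))
        self.pending = []

    def get(self, entry_id: int) -> str:
        """Retrieve one event by decompressing only its containing block."""
        block_no, offset = divmod(entry_id, BLOCK_SIZE)
        if block_no == len(self.blocks):   # entry is still in the unflushed tail
            return self.pending[offset]
        events = zlib.decompress(self.blocks[block_no]).decode("utf-8")
        return events.split("\n")[offset]

if __name__ == "__main__":
    archive = CompressedLogArchive()
    ids = [archive.append(f"event {i}: login from host {i % 7}") for i in range(2500)]
    print(archive.get(ids[1234]))   # decompresses only the second block
```

The block size is the obvious tuning knob in such a scheme: larger blocks improve the compression ratio but increase the cost of each random access, which mirrors the compression and retrieval trade-offs the thesis evaluates.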
An examination of validation practices in relation to the forensic acquisition of digital evidence in South Africa
- Authors: Jordaan, Jason
- Date: 2014
- Subjects: Electronic evidence , Evidence, Criminal , Forensic sciences , Evidence, Criminal -- South Africa -- Law and legislation
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4706 , http://hdl.handle.net/10962/d1016361
- Description: The acquisition of digital evidence is the most crucial part of the entire digital forensics process. During this process, digital evidence is acquired in a forensically sound manner to ensure the legal admissibility and reliability of that evidence in court. In the acquisition process various hardware or software tools are used to acquire the digital evidence. All of the digital forensic standards relating to the acquisition of digital evidence require that the hardware and software tools used in the acquisition process are validated as functioning correctly and reliably, as this lends credibility to the evidence in court. In fact, the Electronic Communications and Transactions Act 25 of 2002 in South Africa specifically requires courts to consider issues such as reliability and the manner in which the integrity of digital evidence is ensured when assessing the evidential weight of digital evidence. Previous research into quality assurance in the practice of digital forensics in South Africa identified that, in general, tool validation was not performed, and as such a hypothesis was proposed that digital forensic practitioners in South Africa make use of hardware and/or software tools for the forensic acquisition of digital evidence whose validity and/or reliability cannot be objectively proven. As such, any digital evidence preserved using those tools is potentially unreliable. This hypothesis was tested in the research through a survey of digital forensic practitioners in South Africa. The research established that the majority of digital forensic practitioners do not use tools in the forensic acquisition of digital evidence that can be proven to be validated and/or reliable. While just under a fifth of digital forensic practitioners can provide some proof of validation and/or reliability, the proof of validation does not meet formal international standards. In essence, this means that digital evidence, which is preserved through the use of specific hardware and/or software tools for subsequent presentation and reliance upon as evidence in a court of law, is preserved by tools whose objective and scientific validity has not been determined. Since South African courts must consider reliability in terms of Section 15(3) of the Electronic Communications and Transactions Act 25 of 2002 when assessing the weight of digital evidence, this is undermined by the current state of practice among digital forensic practitioners in South Africa.
- Full Text:
- Date Issued: 2014
Classification of the difficulty in accelerating problems using GPUs
- Authors: Tristram, Uvedale Roy
- Date: 2014
- Subjects: Graphics processing units , Computer algorithms , Computer programming , Problem solving -- Data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4699 , http://hdl.handle.net/10962/d1012978
- Description: Scientists continually require additional processing power, as this enables them to compute larger problem sizes, use more complex models and algorithms, and solve problems previously thought computationally impractical. General-purpose computation on graphics processing units (GPGPU) can help in this regard, as there is great potential in using graphics processors to accelerate many scientific models and algorithms. However, some problems are considerably harder to accelerate than others, and it may be challenging for those new to GPGPU to ascertain the difficulty of accelerating a particular problem or to seek appropriate optimisation guidance. Through what was learned in the acceleration of a hydrological uncertainty ensemble model, large numbers of k-difference string comparisons, and a radix sort, problem attributes have been identified that can assist in the evaluation of the difficulty in accelerating a problem using GPUs. The identified attributes are inherent parallelism, branch divergence, problem size, required computational parallelism, memory access pattern regularity, data transfer overhead, and thread cooperation. Using these attributes as difficulty indicators, an initial problem difficulty classification framework has been created that aids in GPU acceleration difficulty evaluation. This framework further facilitates directed guidance on suggested optimisations and required knowledge based on problem classification, which has been demonstrated for the aforementioned accelerated problems. It is anticipated that this framework, or a derivative thereof, will prove to be a useful resource for new or novice GPGPU developers in the evaluation of potential problems for GPU acceleration. (An illustrative sketch of such an attribute-based classification follows this record.)
- Full Text:
- Date Issued: 2014
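As a rough illustration of how the seven attributes listed in the record above could be turned into a difficulty indicator, here is a hypothetical Python sketch. The attribute names come from the abstract, but the ratings, weights, thresholds and advice strings are invented for illustration and do not reproduce the thesis's classification framework.

```python
from dataclasses import dataclass

# Each attribute is rated 0 (favourable for GPU acceleration) to 2 (unfavourable).
# The weights below are illustrative assumptions, not values from the thesis.
WEIGHTS = {
    "inherent_parallelism": 3,
    "branch_divergence": 2,
    "problem_size": 1,
    "required_computational_parallelism": 2,
    "memory_access_regularity": 2,
    "data_transfer_overhead": 2,
    "thread_cooperation": 1,
}

@dataclass
class ProblemProfile:
    inherent_parallelism: int
    branch_divergence: int
    problem_size: int
    required_computational_parallelism: int
    memory_access_regularity: int
    data_transfer_overhead: int
    thread_cooperation: int

    def difficulty(self) -> str:
        """Combine the weighted attribute ratings into a coarse difficulty class."""
        score = sum(WEIGHTS[name] * getattr(self, name) for name in WEIGHTS)
        if score <= 6:
            return "straightforward to accelerate"
        if score <= 14:
            return "moderate effort; targeted optimisation likely needed"
        return "hard to accelerate; consider restructuring or a CPU solution"

if __name__ == "__main__":
    # Example ratings for a radix sort: highly parallel, but with thread
    # cooperation and irregular memory access (guesses for illustration only).
    radix_sort = ProblemProfile(
        inherent_parallelism=0, branch_divergence=1, problem_size=0,
        required_computational_parallelism=1, memory_access_regularity=2,
        data_transfer_overhead=1, thread_cooperation=2,
    )
    print(radix_sort.difficulty())
```

A weighted score is only one way to aggregate such indicators; the framework described in the abstract goes further by mapping a problem's classification to suggested optimisations and the knowledge required to apply them.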
The role of computational thinking in introductory computer science
- Authors: Gouws, Lindsey Ann
- Date: 2014
- Subjects: Computer science , Computational complexity , Problem solving -- Study and teaching
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4690 , http://hdl.handle.net/10962/d1011152 , Computer science , Computational complexity , Problem solving -- Study and teaching
- Description: Computational thinking (CT) is gaining recognition as an important skill for students, both in computer science and other disciplines. Although there has been much focus on this field in recent years, it is rarely taught as a formal course, and there is little consensus on what exactly CT entails and how to teach and evaluate it. This research addresses the lack of resources for integrating CT into the introductory computer science curriculum. The question that we aim to answer is whether CT can be evaluated in a meaningful way. A CT framework that outlines the skills and techniques comprising CT and describes the nature of student engagement was developed; this is used as the basis for this research. An assessment (CT test) was then created to gauge the ability of incoming students, and a CT-specific computer game was developed based on the analysis of an existing game. A set of problem solving strategies and practice activities were then recommended based on criteria defined in the framework. The results revealed that the CT abilities of first year university students are relatively poor, but that the students' scores for the CT test could be used as a predictor of their future success in computer science courses. The framework developed for this research proved successful when applied to the test, computer game evaluation, and classification of strategies and activities. Through this research, we established that CT is a skill that first year computer science students are lacking, and that using CT exercises alongside traditional programming instruction can improve students' learning experiences.
- Full Text:
- Date Issued: 2014
An investigation of online threat awareness and behaviour patterns amongst secondary school learners
- Authors: Irwin, Michael Padric
- Date: 2013 , 2013-04-29
- Subjects: Computer security -- South Africa -- Grahamstown , Risk perception -- South Africa -- Grahamstown , High school students -- South Africa -- Grahamstown , Communication -- Sex differences -- South Africa -- Grahamstown , Internet and teenagers -- South Africa -- Grahamstown , Internet and teenagers -- Risk assessment -- South Africa -- Grahamstown , Internet -- Safety measures -- South Africa -- Grahamstown , Online social networks -- Social aspects -- South Africa -- Grahamstown
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4576 , http://hdl.handle.net/10962/d1002965 , Computer security -- South Africa -- Grahamstown , Risk perception -- South Africa -- Grahamstown , High school students -- South Africa -- Grahamstown , Communication -- Sex differences -- South Africa -- Grahamstown , Internet and teenagers -- South Africa -- Grahamstown , Internet and teenagers -- Risk assessment -- South Africa -- Grahamstown , Internet -- Safety measures -- South Africa -- Grahamstown , Online social networks -- Social aspects -- South Africa -- Grahamstown
- Description: The research area of this work is online threat awareness within an information security context. The research was carried out on secondary school learners at boarding schools in Grahamstown. The participating learners were in Grades 8 to 12. The goals of the research included determining the actual levels of awareness, the difference between these and the self-perceived levels of the participants, the assessment of risk in terms of online behaviour, and the determination of any gender differences in the answers provided by the respondents. A review of relevant literature and similar studies was carried out, and data was collected from the participating schools via an online questionnaire. This data was analysed and discussed within the frameworks of awareness of threats, online privacy, social media, sexting, cyberbullying and password habits. The concepts of information security and online privacy are present throughout these discussion chapters, providing the themes for linking the discussion points together. The results of this research show that the respondents have a high level of risk. This is due to the gaps identified between actual awareness and perception, as well as the exhibition of online behaviour patterns that are considered high risk. A strong need for the construction and adoption of threat awareness programmes by these and other schools is identified, as are areas of particular need for inclusion in such programmes. Some gender differences are present, but not to the extent that there is a significant difference between male and female respondents in terms of overall awareness, knowledge and behaviour.
- Full Text:
- Date Issued: 2013
Information technology audits in South African higher education institutions
- Authors: Angus, Lynne
- Date: 2013 , 2013-09-11
- Subjects: Electronic data processing -- Auditing , Delphi method , Education, Higher -- Computer networks -- Security measures , Information technology -- Security measures , COBIT (Information technology management standard) , IT infrastructure library , International Organization for Standardization
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4615 , http://hdl.handle.net/10962/d1006023 , Electronic data processing -- Auditing , Delphi method , Education, Higher -- Computer networks -- Security measures , Information technology -- Security measures , COBIT (Information technology management standard) , IT infrastructure library , International Organization for Standardization
- Description: The use of technology for competitive advantage has become a necessity, not only for corporate organisations, but for higher education institutions (HEIs) as well. Consequently, corporate organisations and HEIs alike must be equipped to protect against the pervasive nature of technology. To do this, they implement controls and undergo audits to ensure these controls are implemented correctly. Although HEIs are a different kind of entity to corporate organisations, HEI information technology (IT) audits are based on the same criteria as those for corporate organisations. The primary aim of this research, therefore, was to develop a set of IT control criteria that are relevant to be tested in IT audits for South African HEIs. The research method used was the Delphi technique. Data was collected, analysed, and used as feedback on which to progress to the next round of data collection. Two lists were obtained: a list of the top IT controls relevant to be tested at any organisation, and a list of the top IT controls relevant to be tested at a South African HEI. Comparison of the two lists shows that although there are some differences in the ranking of criteria used to audit corporate organisations as opposed to HEIs, the final two lists of criteria do not differ significantly. Therefore, it was shown that the same broad IT controls are required to be tested in an IT audit for a South African HEI. However, this research suggests that the risk weighting put on particular IT controls should possibly differ for HEIs, as HEIs face differing IT risks. If further studies can be established which cater for more specific controls, then the combined effect of this study and future ones will be a valuable contribution to knowledge for IT audits in a South African higher education context.
- Full Text:
- Date Issued: 2013
Automated grid fault detection and repair
- Authors: Luyt, Leslie
- Date: 2012 , 2012-05-24
- Subjects: Computational grids (Computer systems) -- Maintenance and repair , Cloud computing -- Maintenance and repair , Computer architecture
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4670 , http://hdl.handle.net/10962/d1006693 , Computational grids (Computer systems) -- Maintenance and repair , Cloud computing -- Maintenance and repair , Computer architecture
- Description: With the rise in interest in the field of grid and cloud computing, it is becoming increasingly necessary for the grid to be easily maintainable. This maintenance of the grid and grid services can be made easier by using an automated system to monitor and repair the grid as necessary. We propose a novel system to perform automated monitoring and repair of grid systems. To the best of our knowledge, no such systems exist. The results show that certain faults can be easily detected and repaired.
- Full Text:
- Date Issued: 2012
Investigating tools and techniques for improving software performance on multiprocessor computer systems
- Authors: Tristram, Waide Barrington
- Date: 2012
- Subjects: Multiprocessors , Multiprogramming (Electronic computers) , Parallel programming (Computer science) , Linux , Abstract data types (Computer science) , Threads (Computer programs) , Computer programming
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4655 , http://hdl.handle.net/10962/d1006651 , Multiprocessors , Multiprogramming (Electronic computers) , Parallel programming (Computer science) , Linux , Abstract data types (Computer science) , Threads (Computer programs) , Computer programming
- Description: The availability of modern commodity multicore processors and multiprocessor computer systems has resulted in the widespread adoption of parallel computers in a variety of environments, ranging from the home to workstation and server environments in particular. Unfortunately, parallel programming is harder and requires more expertise than the traditional sequential programming model. The variety of tools and parallel programming models available to the programmer further complicates the issue. The primary goal of this research was to identify and describe a selection of parallel programming tools and techniques to aid novice parallel programmers in the process of developing efficient parallel C/C++ programs for the Linux platform. This was achieved by highlighting and describing the key concepts and hardware factors that affect parallel programming, providing a brief survey of commonly available software development tools and parallel programming models and libraries, and presenting structured approaches to software performance tuning and parallel programming. Finally, the performance of several parallel programming models and libraries was investigated, along with the programming effort required to implement solutions using the respective models. A quantitative research methodology was applied to the investigation of the performance and programming effort associated with the selected parallel programming models and libraries, which included automatic parallelisation by the compiler, Boost Threads, Cilk Plus, OpenMP, POSIX threads (Pthreads), and Threading Building Blocks (TBB). Additionally, the performance of the GNU C/C++ and Intel C/C++ compilers was examined. The results revealed that the choice of parallel programming model or library depends on the type of problem being solved and that there is no overall best choice for all classes of problem. However, the results also indicate that parallel programming models with higher levels of abstraction require less programming effort and provide similar performance compared to explicit threading models. The principal conclusion was that problem analysis and parallel design are important factors in the selection of the parallel programming model and tools, but that models with higher levels of abstraction, such as OpenMP and Threading Building Blocks, are favoured. (An illustrative sketch contrasting an explicit worker model with a higher-level abstraction follows this record.)
- Full Text:
- Date Issued: 2012
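The thesis evaluates C/C++ models such as Pthreads, OpenMP and Threading Building Blocks. Purely to illustrate the abstraction-versus-effort contrast described in the record above, here is a hypothetical Python sketch that computes the same result twice: first with explicitly managed worker processes and a queue, then with the higher-level multiprocessing.Pool abstraction. It is an analogy, not code from the thesis.

```python
from multiprocessing import Process, Queue, Pool

def count_primes(bounds):
    """CPU-bound task: count the primes in the half-open range [lo, hi)."""
    lo, hi = bounds

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    return sum(1 for n in range(lo, hi) if is_prime(n))

def explicit_worker(bounds, out):
    """Worker used by the explicit model: push a partial result onto a queue."""
    out.put(count_primes(bounds))

if __name__ == "__main__":
    chunks = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]

    # Explicit model: the programmer creates, starts and joins each worker
    # and collects partial results through a shared queue by hand.
    results = Queue()
    workers = [Process(target=explicit_worker, args=(c, results)) for c in chunks]
    for w in workers:
        w.start()
    explicit_total = sum(results.get() for _ in chunks)
    for w in workers:
        w.join()
    print("explicit processes:", explicit_total)

    # Higher-level abstraction: the pool handles worker lifetime,
    # work distribution and result collection.
    with Pool() as pool:
        print("process pool:      ", sum(pool.map(count_primes, chunks)))
```

Both versions print the same count; the difference lies in how much coordination code the programmer must write, which parallels the thesis's finding that higher-level models such as OpenMP and TBB require less effort while delivering comparable performance.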
μCloud : a P2P cloud platform for computing service provision
- Authors: Fouodji Tasse, Ghislain
- Date: 2012 , 2012-08-22
- Subjects: Cloud computing , Peer-to-peer architecture (Computer networks) , Computer architecture , Computer service industry
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4663 , http://hdl.handle.net/10962/d1006669 , Cloud computing , Peer-to-peer architecture (Computer networks) , Computer architecture , Computer service industry
- Description: The advancements in virtualization technologies have provided a large spectrum of computational approaches. Dedicated computations can be run in private environments (virtual machines) created within the same computer. Through capable APIs, this functionality is leveraged for the service we wish to implement: a computer power service (CPS). We target peer-to-peer systems for this service, to exploit the potential of aggregating computing resources. The concept of a P2P network is mostly known for its widespread use in distributed networks for sharing resources such as content files or real-time data. This study adds computing power to the list of shared resources by describing a suitable service composition. Taking into account the dynamic nature of the platform, this CPS provision is achieved using a self-stabilizing clustering algorithm. The resulting system is therefore based on a hierarchical P2P architecture and offers end-to-end consideration of resource provisioning and reliability. We named this system μCloud and characterize it as a self-provisioning cloud service platform. It is designed, implemented and presented in this dissertation. Finally, we assessed our work by showing that μCloud succeeds in providing user-centric services using a P2P computing unit. With this, we conclude that our system would be highly beneficial in both small and massively deployed environments.
- Full Text:
- Date Issued: 2012