Deploying DNSSEC in islands of security
- Authors: Murisa, Wesley Vengayi
- Date: 2013 , 2013-03-31
- Subjects: Internet domain names , Computer security , Computer network protocols , Computer security -- Africa
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4577 , http://hdl.handle.net/10962/d1003053 , Internet domain names , Computer security , Computer network protocols , Computer security -- Africa
- Description: The Domain Name System (DNS), a name resolution protocol, is one of the vulnerable network protocols that has been subjected to many security attacks, such as cache poisoning, denial of service and the 'Kaminsky' spoofing attack. When DNS was designed, security was not incorporated into its design. The DNS Security Extensions (DNSSEC) provide security to the name resolution process by using public key cryptosystems. Although DNSSEC is backward compatible with unsecured zones, it only offers security to clients when communicating with security-aware zones. Widespread deployment of DNSSEC is therefore necessary to secure the name resolution process and provide security to the Internet. Only a few Top Level Domains (TLDs) have deployed DNSSEC, which inherently makes it difficult for their sub-domains to implement the security extensions to the DNS. This study analyses mechanisms that can be used by domains in islands of security to deploy DNSSEC so that the name resolution process can be secured in two specific cases: where the TLD is not signed, or where the domain registrar is not able to support signed domains. The DNS client-side mechanisms evaluated in this study include web browser plug-ins, local validating resolvers and domain look-aside validation. The results of the study show that web browser plug-ins cannot work on their own without local validating resolvers. The web browser validators, however, proved useful in indicating to the user whether a domain has been validated or not. Local resolvers present a more secure option for Internet users who cannot trust the communication channel between their stub resolvers and remote name servers; however, they do not provide a way of showing the user whether a domain name has been correctly validated or not. Based on the results of the tests conducted, it is recommended that local validators be used together with browser validators for visibility and improved security. On the DNS server side, Domain Look-aside Validation (DLV) presents a viable alternative for organisations in islands of security, such as most countries in Africa, where only two country code Top Level Domains (ccTLDs) have deployed DNSSEC. This research recommends the use of DLV by corporates to provide DNS security to both internal and external users accessing their web-based services. (A minimal client-side validation check is sketched after this record.)
- Full Text:
- Date Issued: 2013
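As an illustration of the client-side validation the abstract above discusses, the following minimal Python sketch queries a local validating resolver and checks whether the Authenticated Data (AD) flag is set in the response, which is how a stub client can tell that the resolver performed DNSSEC validation. It assumes the dnspython package; the resolver address and domain name are placeholders, and the thesis itself does not prescribe this tooling.

```python
# Minimal sketch: ask a local validating resolver whether a name validated
# under DNSSEC by inspecting the AD (Authenticated Data) flag.
# Assumes dnspython; resolver address and domain are placeholders.
import dns.flags
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["127.0.0.1"]          # hypothetical local validating resolver
resolver.use_edns(0, dns.flags.DO, 4096)      # set the DO bit to request DNSSEC data

answer = resolver.resolve("example.org.", "A")
validated = bool(answer.response.flags & dns.flags.AD)
print("validated" if validated else "not validated (insecure zone or island of security)")
```

Because the AD flag is only meaningful over a trusted path to the resolver, a sketch like this pairs naturally with the thesis's recommendation to run the validating resolver locally.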
Modernisation and extension of InetVis: a network security data visualisation tool
- Authors: Johnson, Yestin
- Date: 2019
- Subjects: Data visualization , InetVis (Application software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/69223 , vital:29447
- Description: This research undertook an investigation into the digital archaeology, modernisation, and revitalisation of the InetVis software application, developed at Rhodes University in 2007. InetVis allows users to visualise network traffic in an interactive 3D scatter plot. The software is based on the idea of the Spinning Cube of Potential Doom, introduced by Stephen Lau. The original InetVis research project aimed to extend this concept and implementation, specifically for use in analysing network telescope traffic. The InetVis source code was examined and ported to run on modern operating systems. The porting process involved updating the UI framework, Qt, from version 3 to 5, as well as adding support for 64-bit compilation. This research extended the tool's usefulness with the implementation of new, high-value features and improvements. The most notable new features include the addition of a general settings framework, improved screenshot generation, automated visualisation modes, new keyboard shortcuts, and support for building and running InetVis on macOS. Additional features and improvements were identified for future work; these consist of support for a plug-in architecture and an extended heads-up display. A user survey was then conducted, which determined that respondents found InetVis to be easy to use and useful. The survey also allowed the identification of the new and proposed features that respondents found most useful. At this point, no other tool offers the simplicity and user-friendliness of InetVis when it comes to the analysis of network packet captures, especially those from network telescopes. (The scatter-plot mapping is sketched after this record.)
- Full Text:
- Date Issued: 2019
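To make the visualisation idea concrete, here is a small, purely illustrative Python/matplotlib sketch of one common Spinning-Cube-style mapping: destination address, source address and destination port on the three axes. InetVis itself is a Qt desktop application and does not use this code; the packet tuples below are invented placeholders.

```python
# Conceptual sketch of an InetVis-style 3D scatter plot: each packet becomes a
# point at (destination address, source address, destination port).
from ipaddress import IPv4Address
import matplotlib.pyplot as plt

packets = [                          # (dst IP, src IP, dst port) - invented data
    ("196.21.0.5", "41.0.0.9", 22),
    ("196.21.0.5", "41.0.0.10", 80),
    ("196.21.0.200", "102.130.0.7", 445),
]

def ip_to_int(ip: str) -> int:
    return int(IPv4Address(ip))

xs = [ip_to_int(dst) for dst, _, _ in packets]   # destination address axis
ys = [ip_to_int(src) for _, src, _ in packets]   # source address axis
zs = [port for _, _, port in packets]            # destination port axis

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(xs, ys, zs)
ax.set_xlabel("destination IP")
ax.set_ylabel("source IP")
ax.set_zlabel("destination port")
plt.show()
```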
Towards an evaluation and protection strategy for critical infrastructure
- Authors: Gottschalk, Jason Howard
- Date: 2015
- Subjects: Computer crimes -- Prevention , Computer networks -- Security measures , Computer crimes -- Law and legislation -- South Africa , Public works -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4721 , http://hdl.handle.net/10962/d1018793
- Description: Critical Infrastructure is often overlooked from an Information Security perspective, despite its high importance, which may leave it at risk of Cyber related attacks with potentially dire consequences. Furthermore, what is considered Critical Infrastructure is often a complex discussion, with varying opinions across audiences. Traditional Critical Infrastructure includes power stations, water and sewage pump stations, gas pipelines, power grids and a new entrant, the "internet of things". This list is not complete, and a constant challenge exists in identifying Critical Infrastructure and its interdependencies. The purpose of this research is to highlight the importance of protecting Critical Infrastructure and to propose a high-level framework that aids in the identification and securing of Critical Infrastructure. To achieve this, key case studies involving Cyber crime and Cyber warfare were identified and discussed, along with the attack vectors used and their impact on Critical Infrastructure where applicable. Furthermore, industry-related material was researched to identify key controls that would aid in protecting Critical Infrastructure. Initiatives that countries are pursuing to aid in the protection of Critical Infrastructure were also identified and discussed. Research was conducted into the various standards, frameworks and methodologies available to aid in the identification, remediation and ultimately the protection of Critical Infrastructure. A key output of the research was the development of a hybrid approach to identifying Critical Infrastructure and its associated vulnerabilities, together with an approach for remediation with specific metrics (based on the research performed). The conclusion of the research is that there is often a need and a requirement to identify and protect Critical Infrastructure, but this is usually initiated or driven by non-owners of Critical Infrastructure (governments, governing bodies, standards bodies and security consultants). Furthermore, where there are active initiatives by owners, the suggested approaches are often very high level in nature, with little direct guidance available for very immature environments.
- Full Text:
- Date Issued: 2015
Using semantic knowledge to improve compression on log files
- Authors: Otten, Frederick John
- Date: 2009 , 2008-11-19
- Subjects: Computer networks , Data compression (Computer science) , Semantics--Data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4650 , http://hdl.handle.net/10962/d1006619 , Computer networks , Data compression (Computer science) , Semantics--Data processing
- Description: With the move towards global and multi-national companies, information technology infrastructure requirements are increasing. As the size of these computer networks increases, it becomes more and more difficult to monitor, control, and secure them. Networks consist of a number of diverse devices, sensors, and gateways which are often spread over large geographical areas. Each of these devices produces log files which need to be analysed and monitored to provide network security and satisfy regulations. Data compression programs such as gzip and bzip2 are commonly used to reduce the quantity of data for archival purposes after the log files have been rotated. However, many other compression programs exist, each with its own advantages and disadvantages. These programs each use a different amount of memory and take different compression and decompression times to achieve different compression ratios. System log files also contain redundancy which is not necessarily exploited by standard compression programs. Log messages usually use a similar format with a defined syntax: not all ASCII characters are used, and the messages contain certain "phrases" which are often repeated. This thesis investigates the use of compression as a means of data reduction and how the use of semantic knowledge can improve data compression (also applying the results to different scenarios that can occur in a distributed computing environment). It presents the results of a series of tests performed on different log files. It also examines the semantic knowledge which exists in maillog files and how it can be exploited to improve the compression results. The results from a series of text preprocessors which exploit this knowledge are presented and evaluated. These preprocessors include one which replaces the timestamps and IP addresses with their binary equivalents and one which replaces words from a dictionary with unused ASCII characters (a minimal preprocessor of this kind is sketched after this record). In this thesis, data compression is shown to be an effective method of data reduction, producing up to 98 percent reduction in file size on a corpus of log files. The use of preprocessors which exploit semantic knowledge results in up to 56 percent improvement in overall compression time and up to 32 percent reduction in compressed size.
- Full Text:
- Date Issued: 2009
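As a rough illustration of the kind of semantic preprocessing evaluated in this thesis, the sketch below replaces dotted-quad IP addresses in log lines with their 4-byte binary equivalents before handing the stream to a general-purpose compressor (bz2 here, chosen only for the example; the thesis compares several compressors). The sample log line and the sentinel byte are assumptions, not taken from the thesis corpus.

```python
# Sketch of a semantic preprocessor: encode IPv4 addresses as 4 raw bytes
# (tagged with a sentinel) before general-purpose compression.
import bz2
import re
import socket

IP_RE = re.compile(rb"\b(\d{1,3}(?:\.\d{1,3}){3})\b")
SENTINEL = b"\x01"  # assumed to be an unused byte value in the log corpus

def pack_ips(line: bytes) -> bytes:
    # Replace each dotted-quad match with SENTINEL + its 4-byte binary form.
    return IP_RE.sub(lambda m: SENTINEL + socket.inet_aton(m.group(1).decode()), line)

log = b"Oct  3 14:02:11 mail postfix/smtpd[2211]: connect from unknown[192.168.10.45]\n"
preprocessed = pack_ips(log)
print(len(bz2.compress(log)), len(bz2.compress(preprocessed)))
```

On a real rotated log file the preprocessing pass would run over every line before compression, and a matching postprocessor would restore the dotted-quad text after decompression.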
A framework for high speed lexical classification of malicious URLs
- Authors: Egan, Shaun Peter
- Date: 2014
- Subjects: Internet -- Security measures -- Research , Uniform Resource Identifiers -- Security measures -- Research , Neural networks (Computer science) -- Research , Computer security -- Research , Computer crimes -- Prevention , Phishing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4696 , http://hdl.handle.net/10962/d1011933 , Internet -- Security measures -- Research , Uniform Resource Identifiers -- Security measures -- Research , Neural networks (Computer science) -- Research , Computer security -- Research , Computer crimes -- Prevention , Phishing
- Description: Phishing attacks employ social engineering to target end-users, with the goal of stealing identifying or sensitive information. This information is used in activities such as identity theft or financial fraud. During a phishing campaign, attackers distribute URLs which, along with false information, point to fraudulent resources in an attempt to deceive users into requesting the resource. These URLs are made obscure through the use of several techniques which make automated detection difficult. Current methods used to detect malicious URLs face multiple problems which attackers use to their advantage. These problems include the time required to react to new attacks, shifts in trends in URL obfuscation, and usability problems caused by the latency incurred by the lookups these approaches require. A new method of identifying malicious URLs using Artificial Neural Networks (ANNs) has been shown to be effective by several authors. The simple method of classification performed by ANNs results in very high classification speeds with little impact on usability. Samples used for the training, validation and testing of these ANNs are gathered from Phishtank and Open Directory. Words selected from the different sections of the samples are used to create a 'Bag-of-Words' (BOW), which is used as a binary input vector indicating the presence of a word for a given sample. Twenty additional features which measure lexical attributes of the sample are used to increase classification accuracy (a sketch of this pipeline follows this record). A framework that is capable of generating these classifiers in an automated fashion is implemented. These classifiers are automatically stored on a remote update distribution service which has been built to supply updates to classifier implementations. An example browser plugin is created that uses ANNs provided by this service; it is capable of classifying URLs requested by a user in real time and of blocking these requests. The framework is tested in terms of training time and classification accuracy. Classification speed and the effectiveness of compression algorithms on the data required to distribute updates are also tested. It is concluded that it is possible to generate these ANNs frequently, and in a form that is small enough to distribute easily. It is also shown that classifications are made at high speed with high accuracy, resulting in little impact on usability.
- Full Text:
- Date Issued: 2014
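A hedged sketch of the lexical classification pipeline described above: binary bag-of-words features over URL tokens plus a few simple lexical attributes, fed to a small feed-forward network. scikit-learn's MLPClassifier, the toy URLs, the labels and the particular lexical features are all illustrative assumptions; the thesis built its own framework, feature set and update mechanism.

```python
# Illustrative only: bag-of-words over URL tokens plus simple lexical features,
# classified by a small feed-forward neural network.
import re
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

urls = ["http://example.com/index.html",
        "http://paypa1-secure-login.example.biz/verify/account.php",
        "http://rhodes.ac.za/research",
        "http://update-bank.example.ru/confirm?id=77"]
labels = [0, 1, 0, 1]  # invented: 0 = benign, 1 = phishing

def tokenize(u: str) -> list:
    # Split the URL on non-alphanumeric characters to obtain word tokens.
    return [t for t in re.split(r"[\W_]+", u.lower()) if t]

bow = CountVectorizer(tokenizer=tokenize, binary=True)   # binary presence vector
X_bow = bow.fit_transform(urls).toarray()

def lexical_features(u: str) -> list:
    # A handful of simple lexical attributes (the thesis uses twenty).
    return [len(u), u.count("."), u.count("-"), sum(c.isdigit() for c in u)]

X = np.hstack([X_bow, np.array([lexical_features(u) for u in urls])])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, labels)
print(clf.predict(X))
```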
Pseudo-random access compressed archive for security log data
- Authors: Radley, Johannes Jurgens
- Date: 2015
- Subjects: Computer security , Information storage and retrieval systems , Data compression (Computer science)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4723 , http://hdl.handle.net/10962/d1020019
- Description: We are surrounded by an increasing number of devices and applications that produce a huge quantity of machine-generated data. Almost all of this machine data contains some element of security information that can be used to discover, monitor and investigate security events. This work proposes a pseudo-random access compressed storage method for log data, to be used with an information retrieval system that in turn provides the ability to search and correlate log data and the corresponding events. We explain the method for converting log files into distinct events and storing the events in a compressed file. This yields an entry identifier for each log entry that provides a pointer that can be used by indexing methods (a minimal sketch of this layout follows this record). The research also evaluates the compression performance penalties encountered by using this storage system, including a decreased compression ratio, as well as increased compression and decompression times.
- Full Text:
- Date Issued: 2015
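To make the storage idea concrete, here is a minimal sketch (not the thesis implementation) that compresses each log event as an independent zlib block, records the byte offset of each block as its entry identifier, and uses that identifier to decompress a single event later without touching the rest of the archive. The sample events are invented.

```python
# Sketch: per-event zlib blocks plus an offset index give pseudo-random access
# to individual log entries inside a compressed archive.
import zlib

events = ["sshd[811]: Failed password for root from 10.0.0.5",
          "sshd[811]: Accepted password for alice from 10.0.0.9",
          "kernel: firewall drop SRC=10.0.0.77 DPT=23"]

archive = bytearray()
index = []                      # entry identifier -> byte offset of its block
for event in events:
    index.append(len(archive))
    block = zlib.compress(event.encode())
    archive += len(block).to_bytes(4, "big") + block   # length prefix, then data

def read_event(entry_id: int) -> str:
    offset = index[entry_id]
    length = int.from_bytes(archive[offset:offset + 4], "big")
    return zlib.decompress(archive[offset + 4:offset + 4 + length]).decode()

print(read_event(1))            # fetches only the second event's block
```

Compressing each entry independently is exactly what costs compression ratio relative to compressing the whole file at once, which is the trade-off the abstract says the research evaluates.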
Building IKhwezi, a digital platform to capture everyday Indigenous Knowledge for improving educational outcomes in marginalised communities
- Authors: Ntšekhe, Mathe V K
- Date: 2018
- Subjects: Information technology , Knowledge management , Traditional ecological knowledge , Pedagogical content knowledge , Traditional ecological knowledge -- Technological innovations , IKhwezi , ICT4D , Indigenous Technological Pedagogical Content Knowledge (I-TPACK) , Siyakhula Living Lab
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/62505 , vital:28200
- Description: Aptly captured in the name, the broad mandate of Information and Communications Technologies for Development (ICT4D) is to facilitate the use of Information and Communication Technologies (ICTs) in society to support development. Education, as often stated, is the cornerstone for development, imparting knowledge for conceiving and realising development. In this thesis, we explore how everyday Indigenous Knowledge (IK) can be collected digitally, to enhance the educational outcomes of learners from marginalised backgrounds, by stimulating the production of teaching and learning materials that include local imagery so as to resonate with the learners. As part of the exploration, we reviewed a framework known as Technological Pedagogical Content Knowledge (TPACK), which spells out the different kinds of knowledge needed by teachers to teach effectively with ICTs. In this framework, IK is not present explicitly, but only through the concept of context(s). Using Afrocentric and Pan-African scholarship, we argue that this logic is linked to colonialism and that a critical decolonising pedagogy necessarily demands explication of IK: to make visible the cultures of the learners in the margins (e.g. Black rural learners). On the strength of this argument, we have proposed that TPACK be augmented to become Indigenous Technological Pedagogical Content Knowledge (I-TPACK). Through this augmentation, I-TPACK becomes an Afrocentric framework for a multicultural education in the digital era. The design of the digital platform for capturing IK relevant to formal education was done in the Siyakhula Living Lab (SLL). The core idea of a Living Lab (LL) is that users must be understood in the context of their lived everyday reality. Further, they must be involved as co-creators in the design and innovation processes. On a methodological level, the LL environment allowed for the fusing together of multiple methods that can help to create a fitting solution. In this thesis, we followed an iterative user-centred methodology rooted in ethnography and phenomenology. Specifically, through long-term conversations and interactions with teachers and ethnographic observations, we conceptualised a platform, IKhwezi, that facilitates the collection of context-sensitive content, collaboratively, and with cost and convenience in mind. We implemented this platform using MediaWiki, based on a number of considerations. From the ICT4D disciplinary point of view, a major consideration was being open to the possibility that other forms of innovation, and not just 'technovelty' (i.e. technological/technical innovation), can provide a breakthrough or ingenious solution to the problem at hand. In a sense, we were reinforcing the growing sentiment within the discipline that technology is not the goal, but the means to foregrounding the commonality of the human experience in working towards development. Testing confirmed that there is some value in the platform, despite the challenges of onboarding users in pursuit of more content that could bolster the value of everyday IK in improving the educational outcomes of all learners.
- Full Text:
- Date Issued: 2018
A mobile toolkit and customised location server for the creation of cross-referencing location-based services
- Authors: Ndakunda, Shange-Ishiwa Tangeni
- Date: 2013
- Subjects: Location-based services -- Security measures , Mobile communication systems -- Security measures , Digital communications , Java (Computer program language) , Application software -- Development -- Computer programs , User interfaces (Computer systems)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4703 , http://hdl.handle.net/10962/d1013604
- Description: Although there are several Software Development Kits and Application Programming Interfaces for client-side location-based services development, they mostly involve the creation of self-referencing location-based services. Self-referencing location-based services include services such as geocoding, reverse geocoding, route management and navigation, which focus on satisfying the location-based requirements of a single mobile device. There is a lack of open-source Software Development Kits for the development of client-side location-based services that are cross-referencing. Cross-referencing location-based services are designed for the sharing of location information amongst different entities on a given network. This project was undertaken to assemble, through incremental prototyping, a client-side Java Micro Edition location-based services Software Development Kit and a Mobicents location server, to aid mobile network operators and developers alike in the quick creation of cross-referencing location-based applications, with transport and privacy protection, on Session Initiation Protocol bearer networks. The privacy of the location information is protected using geolocation policies. Developers do not need an understanding of Session Initiation Protocol event signaling specifications or of the XML Configuration Access Protocol to use the tools that we put together. The developed tools are later consolidated using two sample applications, the friend-finder and child-tracker services. Developer guidelines are also provided to aid in using the provided tools.
- Full Text:
- Date Issued: 2013
A platform for computer-assisted multilingual literacy development
- Authors: Mudimba, Bwini Chizabubi
- Date: 2011
- Subjects: FundaWethu , Language acquisition -- Computer-assisted instruction , Language arts (Elementary) -- Computer-assisted instruction , Language and education , Education, Bilingual , Computer-assisted instruction , Educational technology , Computers and literacy
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4600 , http://hdl.handle.net/10962/d1004850 , FundaWethu , Language acquisition -- Computer-assisted instruction , Language arts (Elementary) -- Computer-assisted instruction , Language and education , Education, Bilingual , Computer-assisted instruction , Educational technology , Computers and literacy
- Description: FundaWethu is reading software that is designed to deliver reading lessons to Grade R-3 (foundation phase) children who are learning to read in a multilingual context. Starting from the premise that the system should be both educative and entertaining, the system allows literacy researchers or teachers to construct rich multimedia reading lessons, with text, pictures (possibly animated), and audio files. Using the design-based research methodology, which is problem-driven and iterative, we followed a user-centred design process in creating FundaWethu. To promote the sustainability of the software, we chose to bring teachers on board as "co-designers" using the lesson authoring tool. We made the authoring tool simple enough for use by non-computer specialists, but expressive enough to enable a wide range of beginners' reading exercises to be constructed in a number of different languages (indigenous South African languages in particular). This project therefore centred on the use of design-based research to build FundaWethu, the design and construction of FundaWethu, and the usability study carried out to determine the adequacy of FundaWethu.
- Full Text:
- Date Issued: 2011
Culturally-relevant augmented user interfaces for illiterate and semi-literate users
- Authors: Gavaza, Takayedzwa
- Date: 2012 , 2012-06-14
- Subjects: User interfaces (Computer systems) -- Research , Computer software -- Research , Graphical user interfaces (Computer systems) -- Research , Human-computer interaction , Computers and literacy
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4665 , http://hdl.handle.net/10962/d1006679 , User interfaces (Computer systems) -- Research , Computer software -- Research , Graphical user interfaces (Computer systems) -- Research , Human-computer interaction , Computers and literacy
- Description: This thesis discusses guidelines for developers of Augmented User Interfaces that can be used by illiterate and semi-literate users. To discover how illiterate and semi-literate users intuitively understand interaction with a computer, a series of Wizard of Oz experiments were conducted. In the first Wizard of Oz study, users were presented with a standard desktop computer, fitted with a number of input devices, to determine how they assume interaction should occur. This study found that the users preferred the use of speech and gestures, which mirrored findings from other researchers. The study also found that users struggled to understand the tab metaphor which is used frequently in applications. From these findings, a localised culturally-relevant tab interface was developed to determine the feasibility of localised Graphical User Interface components. A second study was undertaken to compare the localised tab interface with the traditional tabbed interface. This study collected both quantitative and qualitative data from the participants. It found that users could interact with a localised tabbed interface faster and more accurately than with its traditional counterpart. More importantly, users stated that they intuitively understood the localised interface component, whereas they did not understand the traditional tab metaphor. These user studies have shown that the use of self-explanatory animations, video feedback, localised tabbed interface metaphors and voice output has a positive impact on enabling illiterate and semi-literate users to access information.
- Full Text:
- Date Issued: 2012
Investigating call control using MGCP in conjunction with SIP and H.323
- Authors: Jacobs, Ashley
- Date: 2005 , 2005-03-14
- Subjects: Communication -- Technological innovations , Digital telephone systems , Computer networks , Computer network protocols , Internet telephony
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4631 , http://hdl.handle.net/10962/d1006516 , Communication -- Technological innovations , Digital telephone systems , Computer networks , Computer network protocols , Internet telephony
- Description: Telephony used to mean using a telephone to call another telephone on the Public Switched Telephone Network (PSTN), and data networks were used purely to allow computers to communicate. However, with the advent of the Internet, telephony services have been extended to run on data networks. Telephone calls within the IP network are known as Voice over IP. These calls are carried by a number of protocols, with the most popular ones currently being the Session Initiation Protocol (SIP) and H.323. Calls can be made from the IP network to the PSTN and vice versa through the use of a gateway. The gateway translates the packets from the IP network to circuits on the PSTN and vice versa to facilitate calls between the two networks. Gateways have evolved and are now split into two entities using the master/slave architecture. The master is an intelligent Media Gateway Controller (MGC) that handles the call control and signalling. The slave is a "dumb" Media Gateway (MG) that handles the translation of the media. The gateway control protocols currently in use are Megaco/H.248, MGCP and Skinny. These protocols have proved themselves on the edge of the network. Furthermore, since they communicate with the call signalling VoIP protocols as well as the PSTN, they have to be the lingua franca between the two networks. Within the VoIP network, the number of call signalling protocols makes it difficult for them to communicate with each other and to create services. This research investigates the use of gateway control protocols as the lowest common denominator between the call signalling protocols SIP and H.323. More specifically, it uses MGCP to investigate service creation (an example of MGCP's text-based command format is sketched after this record). It also considers the use of MGCP as a protocol translator between SIP and H.323. A service was created using MGCP to allow H.323 endpoints to send Short Message Service (SMS) messages. This service was then extended with minimal effort to SIP endpoints. This service was used to investigate MGCP's ability to handle call control from the H.323 and SIP endpoints. An MGC was then successfully used to perform as a protocol translator between SIP and H.323.
- Full Text:
- Date Issued: 2005
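For readers unfamiliar with MGCP's text-based format, the sketch below builds a CreateConnection (CRCX) command of the general shape defined in RFC 3435 and sends it to a media gateway over UDP. The endpoint name, call identifier and gateway address are invented placeholders, and a real call agent would manage transaction IDs and parse the gateway's response properly; this is not the thesis's implementation.

```python
# Illustrative CRCX (CreateConnection) command in MGCP's text format, sent from
# a call agent to a media gateway on UDP port 2427. All identifiers are placeholders.
import socket

GATEWAY = ("192.0.2.10", 2427)        # hypothetical media gateway address
crcx = (
    "CRCX 1234 aaln/1@gw.example.net MGCP 1.0\r\n"
    "C: A3C47F21456789F0\r\n"         # call identifier
    "L: p:20, a:PCMU\r\n"             # local options: packetisation period, codec
    "M: sendrecv\r\n"                 # connection mode
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(crcx.encode("ascii"), GATEWAY)
response, _ = sock.recvfrom(4096)     # expected shape: "200 1234 OK" plus an SDP body
print(response.decode("ascii", errors="replace"))
```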
An investigation into the use of IEEE 1394 for audio and control data distribution in music studio environments
- Authors: Laubscher, Robert Alan
- Date: 1999 , 2011-11-10
- Subjects: Digital electronics , Sound -- Recording and reproducing -- Digital techniques , MIDI (Standard) , Music -- Data processing , Computer sound processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4619 , http://hdl.handle.net/10962/d1006483 , Digital electronics , Sound -- Recording and reproducing -- Digital techniques , MIDI (Standard) , Music -- Data processing , Computer sound processing
- Description: This thesis investigates the feasibility of using a new digital interconnection technology, the IEEE-1394 High Performance Serial Bus, for audio and control data distribution in local and remote music recording studio environments. Current methods for connecting studio devices are described, and the need for a new digital interconnection technology is explained. It is shown how this new interconnection technology and the developing protocol standards make provision for multi-channel audio and control data distribution, routing, copyright protection, and device synchronisation. Feasibility is demonstrated by the implementation of a custom hardware and software solution. Remote music studio connectivity is considered, and the emerging standards and technologies for connecting future music studios utilising this new technology are discussed.
- Full Text:
- Date Issued: 1999
Analyzing communication flow and process placement in Linda programs on transputers
- Authors: De-Heer-Menlah, Frederick Kofi
- Date: 1992 , 2012-11-28
- Subjects: LINDA (Computer system) , Transputers , Parallel programming (Computer science)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4675 , http://hdl.handle.net/10962/d1006698 , LINDA (Computer system) , Transputers , Parallel programming (Computer science)
- Description: With the evolution of parallel and distributed systems, users from diverse disciplines have looked to these systems as a solution to their ever-increasing needs for computer processing resources. Because parallel processing systems currently require a high level of expertise to program, many researchers are investing effort into developing programming approaches which hide some of the difficulties of parallel programming from users. Linda is one such parallel paradigm: it is intuitive to use and provides a high level of decoupling between the distributable components of parallel programs. In Linda, efficiency becomes a concern of the implementation rather than of the programmer. There is a substantial overhead in implementing Linda, an inherently shared-memory model, on a distributed system. This thesis describes the compile-time analysis of tuple space interactions which reduces run-time matching costs and permits the distribution of the tuple space data. A language-independent module which partitions the tuple space data and suggests appropriate storage schemes for the partitions, so as to optimise Linda operations, is presented. The thesis also discusses hiding the network topology from the user by automatically allocating Linda processes and tuple space partitions to nodes in the network of transputers. This is done by introducing a fast placement algorithm developed for Linda. , KMBT_223
- Full Text:
- Date Issued: 1992
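The abstract above centres on Linda's tuple space and the cost of matching tuples at run time. The toy below is a single-process tuple space (not the thesis' transputer implementation) showing the out/rd/in operations and wildcard matching whose cost the compile-time analysis tries to reduce, for example by partitioning tuples on their leading field.

```python
# Toy, single-process Linda-style tuple space illustrating out/rd/in with wildcard matching.
# Real Linda implementations (and the thesis) distribute and partition this store.

ANY = object()  # wildcard used in templates

class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, *tup):
        """Deposit a tuple into the space."""
        self._tuples.append(tup)

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            t is ANY or t == v for t, v in zip(template, tup))

    def rd(self, *template):
        """Read (without removing) the first tuple matching the template."""
        for tup in self._tuples:
            if self._match(template, tup):
                return tup
        return None

    def in_(self, *template):
        """Withdraw (remove and return) the first matching tuple."""
        for i, tup in enumerate(self._tuples):
            if self._match(template, tup):
                return self._tuples.pop(i)
        return None

if __name__ == "__main__":
    ts = TupleSpace()
    ts.out("count", 3)
    ts.out("result", 7, 42)
    print(ts.rd("result", ANY, ANY))   # ('result', 7, 42) stays in the space
    print(ts.in_("count", ANY))        # ('count', 3) is removed
```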
Search engine poisoning and its prevalence in modern search engines
- Authors: Blaauw, Pieter
- Date: 2013
- Subjects: Web search engines , Internet searching , World Wide Web , Malware (Computer software) , Computer viruses , Rootkits (Computer software) , Spyware (Computer software)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4572 , http://hdl.handle.net/10962/d1002037
- Description: The prevalence of Search Engine Poisoning in trending topics and popular search terms on the web within search engines is investigated. Search Engine Poisoning is the act of manipulating search engines in order to display search results from websites infected with malware. Research done between February and August 2012, using both manual and automated techniques, shows how easily the criminal element manages to insert malicious content into web pages related to popular search terms within search engines. In order to provide the reader with a clear overview and understanding of the motives and methods of the operators of Search Engine Poisoning campaigns, an in-depth review of automated and semi-automated web exploit kits is presented, along with a discussion of the motives for running these campaigns. Three high-profile case studies are examined, and the various Search Engine Poisoning campaigns associated with these case studies are discussed in detail. From February to August 2012, data was collected from the top trending topics on Google’s search engine, along with the top listed sites related to these topics, and then passed through various automated tools to discover whether these results had been infiltrated by the operators of Search Engine Poisoning campaigns; the results of these automated scans are then discussed in detail. During the research period, manual searching for Search Engine Poisoning campaigns was also done, using high-profile news events and popular search terms. These results are analysed in detail to determine the methods of attack, the purpose of the attack and the parties behind it.
- Full Text:
- Date Issued: 2013
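The abstract above describes passing pages related to trending search terms through automated tools to spot poisoned results. The sketch below is a much simplified, hypothetical heuristic of that kind, not the tooling used in the thesis: it fetches a page and flags crude signs of injected redirects or obfuscated scripts; the candidate URL list and the patterns are illustrative only.

```python
# Simplified, illustrative heuristic for spotting possible search engine poisoning.
# The URL list and the patterns below are placeholders, not the tools used in the thesis.

import re
import urllib.request

SUSPICIOUS_PATTERNS = [
    re.compile(r'<meta[^>]+http-equiv=["\']refresh', re.I),     # meta-refresh redirect
    re.compile(r'eval\s*\(\s*unescape\s*\(', re.I),             # obfuscated JavaScript
    re.compile(r'document\.write\s*\(\s*unescape\s*\(', re.I),
]

def looks_poisoned(url: str, timeout: int = 10) -> bool:
    """Fetch a page and report whether any crude poisoning indicator matches."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            html = resp.read(200_000).decode("utf-8", errors="replace")
    except OSError:
        return False  # unreachable pages are simply skipped in this toy check
    return any(p.search(html) for p in SUSPICIOUS_PATTERNS)

if __name__ == "__main__":
    for candidate in ["http://example.com/"]:   # placeholder candidate list
        print(candidate, "->", "suspicious" if looks_poisoned(candidate) else "clean")
```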
Adaptive flow management of multimedia data with a variable quality of service
- Authors: Littlejohn, Paul Stephen
- Date: 1999
- Subjects: Multimedia systems , Multimedia systems -- Evaluation
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4605 , http://hdl.handle.net/10962/d1004863 , Multimedia systems , Multimedia systems -- Evaluation
- Description: Much of the current research involving the delivery of multimedia data focuses on the need to maintain a constant Quality of Service (QoS) throughout the lifetime of the connection. Delivery of a constant QoS requires that a guaranteed bandwidth is available for the entire connection. Techniques, such as resource reservation, are able to provide for this. These approaches work well across networks that are fairly homogeneous, and which have sufficient resources to sustain the guarantees, but are not currently viable over either heterogeneous or unreliable networks. To cater for the great number of networks (including the Internet) which do not conform to the ideal conditions required by constant Quality of Service mechanisms, this thesis proposes a different approach, that of dynamically adjusting the QoS in response to changing network conditions. Instead of optimizing the Quality of Service, the approach used in this thesis seeks to ensure the delivery of the information, at the best possible quality, as determined by the carrying ability of the poorest segment in the network link. To illustrate and examine this model, a service-adaptive system is described, which allows for the streaming of multimedia audio data across a network using the Real-time Transport Protocol (RTP). This application continually adjusts its service requests in response to the current network conditions. A client/server model is outlined whereby the server attempts to provide scalable media content, in this case audio data, to a client at the highest possible Quality of Service. The thesis presents and evaluates a number of renegotiation methods for adjusting the Quality of Service between the client and server. An Adjusted QoS renegotiation algorithm is suggested, which delivers the best possible quality within an acceptable loss boundary.
- Full Text:
- Date Issued: 1999
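The abstract above describes a client/server pair that repeatedly renegotiates audio quality against observed network conditions while staying within an acceptable loss boundary. The loop below is a minimal, hypothetical version of such a renegotiation policy (it is not the thesis' Adjusted QoS algorithm): quality is stepped down when loss exceeds the boundary and crept back up when the link is comfortably clear.

```python
# Minimal sketch of a loss-driven quality renegotiation policy.
# The bitrates, thresholds and loss readings are illustrative placeholders.

QUALITY_LEVELS_KBPS = [16, 32, 64, 96, 128]   # selectable audio encodings
LOSS_BOUNDARY = 0.05                          # acceptable packet-loss fraction

def renegotiate(level: int, observed_loss: float) -> int:
    """Return the next quality level index given the current loss measurement."""
    if observed_loss > LOSS_BOUNDARY and level > 0:
        return level - 1                      # back off: lower quality, fewer losses
    if observed_loss < LOSS_BOUNDARY / 2 and level < len(QUALITY_LEVELS_KBPS) - 1:
        return level + 1                      # headroom available: try the next quality up
    return level                              # otherwise hold the current quality

if __name__ == "__main__":
    level = 2
    for loss in [0.01, 0.02, 0.09, 0.12, 0.03, 0.01, 0.0]:   # simulated loss reports
        level = renegotiate(level, loss)
        print(f"loss={loss:.2f} -> {QUALITY_LEVELS_KBPS[level]} kbps")
```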
Implementing non-photorealistic rendering enhancements with real-time performance
- Authors: Winnemöller, Holger
- Date: 2002 , 2013-05-09
- Subjects: Computer animation , Computer graphics , Real-time data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4580 , http://hdl.handle.net/10962/d1003135 , Computer animation , Computer graphics , Real-time data processing
- Description: We describe quality and performance enhancements, which work in real-time, to all well-known Non-photorealistic (NPR) rendering styles for use in an interactive context. These include Comic rendering, Sketch rendering, Hatching and Painterly rendering, but we also attempt and justify a widening of the established definition of what is considered NPR. In the individual Chapters, we identify typical stylistic elements of the different NPR styles. We list problems that need to be solved in order to implement the various renderers. Standard solutions available in the literature are introduced and in all cases extended and optimised. In particular, we extend the lighting model of the comic renderer to include a specular component and introduce multiple inter-related but independent geometric approximations which greatly improve rendering performance. We implement two completely different solutions to random perturbation sketching, solve temporal coherence issues for coal sketching and find an unexpected use for 3D textures to implement hatch-shading. Textured brushes of painterly rendering are extended by properties such as stroke-direction and texture, motion, paint capacity, opacity and emission, making them more flexible and versatile. Brushes are also provided with a minimal amount of intelligence, so that they can help in maximising screen coverage. We furthermore devise a completely new NPR style, which we call super-realistic, and show how sample images can be tweened in real-time to produce an image-based six degree-of-freedom renderer performing at roughly 450 frames per second. Performance values for our other renderers all lie between 10 and over 400 frames per second on home PC hardware, justifying our real-time claim. A large number of sample screen-shots, illustrations and animations demonstrate the visual fidelity of our rendered images. In essence, we successfully achieve our attempted goals of increasing the creative, expressive and communicative potential of individual NPR styles, increasing performance of most of them, adding original and interesting visual qualities, and exploring new techniques or existing ones in novel ways. , KMBT_363 , Adobe Acrobat 9.54 Paper Capture Plug-in
- Full Text:
- Date Issued: 2002
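The abstract above mentions extending the comic renderer's lighting model with a specular component. The function below sketches the usual approach to comic (cel) shading, not necessarily the thesis' exact formulation: the diffuse term N·L is quantised into a few flat tonal bands, and a thresholded specular term adds a hard highlight. The band count, shininess and threshold are illustrative parameters.

```python
# Sketch of comic (cel) shading: quantised diffuse term plus a thresholded specular highlight.
# Parameters (band count, shininess, threshold) are illustrative.

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def comic_shade(normal, light_dir, view_dir, bands=3, shininess=32, spec_threshold=0.9):
    """Return a single intensity in [0, 1] using quantised diffuse + hard specular."""
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    diffuse = max(dot(n, l), 0.0)
    quantised = math.floor(diffuse * bands) / bands            # flat tonal bands
    # Blinn-Phong style half-vector specular, snapped to on/off for a hard highlight.
    h = normalize(tuple(li + vi for li, vi in zip(l, v)))
    specular = max(dot(n, h), 0.0) ** shininess
    highlight = 1.0 if specular > spec_threshold else 0.0
    return min(quantised + 0.5 * highlight, 1.0)

if __name__ == "__main__":
    print(comic_shade(normal=(0, 0, 1), light_dir=(0.3, 0.2, 1.0), view_dir=(0, 0, 1)))
```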
Investigating tools and techniques for improving software performance on multiprocessor computer systems
- Authors: Tristram, Waide Barrington
- Date: 2012
- Subjects: Multiprocessors , Multiprogramming (Electronic computers) , Parallel programming (Computer science) , Linux , Abstract data types (Computer science) , Threads (Computer programs) , Computer programming
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4655 , http://hdl.handle.net/10962/d1006651 , Multiprocessors , Multiprogramming (Electronic computers) , Parallel programming (Computer science) , Linux , Abstract data types (Computer science) , Threads (Computer programs) , Computer programming
- Description: The availability of modern commodity multicore processors and multiprocessor computer systems has resulted in the widespread adoption of parallel computers in a variety of environments, ranging from the home to workstation and server environments in particular. Unfortunately, parallel programming is harder and requires more expertise than the traditional sequential programming model. The variety of tools and parallel programming models available to the programmer further complicates the issue. The primary goal of this research was to identify and describe a selection of parallel programming tools and techniques to aid novice parallel programmers in the process of developing efficient parallel C/C++ programs for the Linux platform. This was achieved by highlighting and describing the key concepts and hardware factors that affect parallel programming, providing a brief survey of commonly available software development tools and parallel programming models and libraries, and presenting structured approaches to software performance tuning and parallel programming. Finally, the performance of several parallel programming models and libraries was investigated, along with the programming effort required to implement solutions using the respective models. A quantitative research methodology was applied to the investigation of the performance and programming effort associated with the selected parallel programming models and libraries, which included automatic parallelisation by the compiler, Boost Threads, Cilk Plus, OpenMP, POSIX threads (Pthreads), and Threading Building Blocks (TBB). Additionally, the performance of the GNU C/C++ and Intel C/C++ compilers was examined. The results revealed that the choice of parallel programming model or library is dependent on the type of problem being solved and that there is no overall best choice for all classes of problem. However, the results also indicate that parallel programming models with higher levels of abstraction require less programming effort and provide performance similar to explicit threading models. The principal conclusion was that problem analysis and parallel design are important factors in the selection of the parallel programming model and tools, but that models with higher levels of abstraction, such as OpenMP and Threading Building Blocks, are favoured.
- Full Text:
- Date Issued: 2012
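The abstract above contrasts explicit threading with higher-abstraction models such as OpenMP and TBB in C/C++. Purely as an analogue of that abstraction gap (the thesis itself evaluated C/C++ models, not Python), the sketch below parallelises the same computation twice: once with hand-managed worker processes and a queue, and once with a pool's map, which hides work division, scheduling and result collection in the way the higher-level models do.

```python
# Illustration of the programming-effort gap between explicit worker management and a
# higher-abstraction pool API. This is a Python analogue, not one of the C/C++ models
# compared in the thesis.

from multiprocessing import Process, Queue, Pool

def square(x: int) -> int:
    return x * x

def worker(chunk, queue):
    """Explicit-model worker: compute its chunk and post the partial result."""
    queue.put([square(x) for x in chunk])

def explicit_workers(data, nworkers=4):
    """Hand-rolled work division: chunk the data, start processes, collect, join."""
    chunks = [data[i::nworkers] for i in range(nworkers)]
    queue = Queue()
    procs = [Process(target=worker, args=(chunk, queue)) for chunk in chunks]
    for p in procs:
        p.start()
    partials = [queue.get() for _ in procs]
    for p in procs:
        p.join()
    return sorted(x for part in partials for x in part)

def pooled(data, nworkers=4):
    """Higher-abstraction version: the pool hides division, scheduling and collection."""
    with Pool(nworkers) as pool:
        return sorted(pool.map(square, data))

if __name__ == "__main__":
    data = list(range(20))
    assert explicit_workers(data) == pooled(data)
    print(pooled(data))
```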
An investigation into XSets of primitive behaviours for emergent behaviour in stigmergic and message passing antlike agents
- Authors: Chibaya, Colin
- Date: 2014
- Subjects: Ants -- Behavior -- Computer programs , Insects -- Behavior -- Computer programs , Ant communities -- Behavior , Insect societies
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4698 , http://hdl.handle.net/10962/d1012965
- Description: Ants are fascinating creatures - not so much because they are intelligent on their own, but because as a group they display compelling emergent behaviour (the extent to which one observes features in a swarm which cannot be traced back to the actions of swarm members). What does each swarm member do which allows deliberate engineering of emergent behaviour? We investigate the development of a language for programming swarms of ant agents towards desired emergent behaviour. Five aspects of stigmergic ant agents (pheromone-sensitive computational devices in which a non-symbolic form of communication, indirectly mediated via the environment, arises) and message passing ant agents (computational devices which rely on implicit communication spaces in which direction vectors are shared one-on-one) are studied. First, we investigate the primitive behaviours which characterize ant agents' discrete actions at individual levels. Ten such primitive behaviours are identified as candidate building blocks of the ant agent language sought. We then study mechanisms by which primitive behaviours are put together into XSets (collections of primitive behaviours, parameter values, and meta-information which spells out how and when primitive behaviours are used). Various permutations of XSets are possible, and these define the search space for the best performer XSets for particular tasks. Genetic programming principles are proposed as a search strategy for best performer XSets that allow particular emergent behaviour to occur. XSets in the search space are evolved over various genetic generations and tested for their ability to allow path finding (as proof of concept). XSets are ranked according to the indices of merit (fitness measures which indicate how well XSets allow particular emergent behaviour to occur) they achieve. Best performer XSets for the path finding task are identified and reported. We validate the results yielded when best performer XSets are used with regard to normality, correlation, similarities in variation, and similarities between mean performances over time. Commonly, the simulation results pass most statistical tests. The last aspect we study is the application of best performer XSets to different problem tasks. Five experiments are administered in this regard. The first experiment assesses XSets' ability to allow the location of multiple targets (ant agents' ability to locate continuous regions of targets), and finds that best performer XSets are problem independent. However, both categories of XSets are sensitive to changes in agent density. We test the influence of individual primitive behaviours and the effects of the sequences of primitive behaviours on the indices of merit of XSets, and find that most primitive behaviours are indispensable, especially when specific sequences are prescribed. The effects of pheromone dissipation on the indices of merit of stigmergic XSets are also scrutinized. Precisely, dissipation is not causal; rather, it enhances convergence. Overall, this work successfully identifies the discrete primitive behaviours of stigmergic and message passing ant-like devices. It successfully puts these primitive behaviours together into XSets which characterize a language for programming ant-like devices towards desired emergent behaviour. This XSets approach is a new ant language representation with which a wider domain of emergent tasks can be resolved.
- Full Text:
- Date Issued: 2014
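The abstract above builds XSets out of primitive behaviours such as sensing pheromone, moving, and depositing. The toy below sketches a stigmergic step of that general kind (it is not one of the thesis' ten primitives verbatim): an agent on a grid picks its next cell with probability weighted by neighbouring pheromone, deposits on arrival, and the whole field evaporates slightly each tick.

```python
# Toy stigmergic primitives on a grid: sense, move (pheromone-weighted), deposit, evaporate.
# Grid size, deposit amount and evaporation rate are illustrative parameters.

import random

SIZE = 10
DEPOSIT = 1.0
EVAPORATION = 0.05

pheromone = [[0.0] * SIZE for _ in range(SIZE)]

def neighbours(x, y):
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    return [((x + dx) % SIZE, (y + dy) % SIZE) for dx, dy in steps]  # wrap-around grid

def move(x, y):
    """Pick the next cell with probability proportional to (pheromone + a small bias)."""
    cells = neighbours(x, y)
    weights = [pheromone[cx][cy] + 0.1 for cx, cy in cells]   # bias keeps exploration alive
    return random.choices(cells, weights=weights, k=1)[0]

def deposit(x, y):
    pheromone[x][y] += DEPOSIT

def evaporate():
    for row in pheromone:
        for i in range(SIZE):
            row[i] *= (1.0 - EVAPORATION)

if __name__ == "__main__":
    agent = (0, 0)
    for _ in range(100):
        agent = move(*agent)
        deposit(*agent)
        evaporate()
    print(f"agent at {agent}, pheromone there = {pheromone[agent[0]][agent[1]]:.2f}")
```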
Automated grid fault detection and repair
- Authors: Luyt, Leslie
- Date: 2012 , 2012-05-24
- Subjects: Computational grids (Computer systems) -- Maintenance and repair , Cloud computing -- Maintenance and repair , Computer architecture
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4670 , http://hdl.handle.net/10962/d1006693 , Computational grids (Computer systems) -- Maintenance and repair , Cloud computing -- Maintenance and repair , Computer architecture
- Description: With the rise in interest in the field of grid and cloud computing, it is becoming increasingly necessary for the grid to be easily maintainable. This maintenance of the grid and grid services can be made easier by using an automated system to monitor and repair the grid as necessary. We propose a novel system to perform automated monitoring and repair of grid systems. To the best of our knowledge, no such systems exist. The results show that certain faults can be easily detected and repaired. , TeX , Adobe Acrobat 9.51 Paper Capture Plug-in
- Full Text:
- Date Issued: 2012
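The abstract above proposes automated monitoring and repair of grid services. The loop below is a generic, hypothetical illustration of that pattern rather than the system built in the thesis: each registered service pairs a health check with a repair action, and a failed check triggers the repair. The service names, checks and repair actions are invented placeholders.

```python
# Generic monitor-and-repair loop: each service pairs a health check with a repair action.
# Service names, checks and repair actions here are invented placeholders.

import time

def check_dummy_service() -> bool:
    """Placeholder health check; a real check might probe a port or query a daemon."""
    return True

def repair_dummy_service() -> None:
    """Placeholder repair action; a real repair might restart the failed service."""
    print("restarting dummy service")

SERVICES = {
    "dummy-service": (check_dummy_service, repair_dummy_service),
}

def monitor(poll_seconds: float = 5.0, cycles: int = 3) -> None:
    """Run a fixed number of monitoring cycles, repairing any service whose check fails."""
    for _ in range(cycles):
        for name, (check, repair) in SERVICES.items():
            healthy = check()
            print(f"{name}: {'ok' if healthy else 'FAILED'}")
            if not healthy:
                repair()
        time.sleep(poll_seconds)

if __name__ == "__main__":
    monitor(poll_seconds=0.1)
```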
A parser generator system to handle complete syntax
- Authors: Ossher, Harold Leon
- Date: 1982
- Subjects: Grammar, Comparative and general -- Syntax , Parsing (Computer grammar) , Programming languages (Electronic computers) , Compilers (Computer programs)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4571 , http://hdl.handle.net/10962/d1002036
- Description: To define a language completely, it is necessary to define both its syntax and semantics. If these definitions are in a suitable form, the parser and code-generator of a compiler, respectively, can be generated from them. This thesis addresses the problem of syntax definition and automatic parser generation.
- Full Text:
- Date Issued: 1982
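The abstract above concerns generating a parser from a syntax definition. As a hand-written counterpart to the routines such a generator would emit, the sketch below is a small recursive-descent parser for a toy expression grammar (E -> T { '+' T }, T -> F { '*' F }, F -> NUMBER | '(' E ')'); the grammar is illustrative and unrelated to the notation used in the thesis.

```python
# Hand-written recursive-descent parser for a toy grammar:
#   E -> T { '+' T }    T -> F { '*' F }    F -> NUMBER | '(' E ')'
# A parser generator would emit routines of roughly this shape from the grammar.

import re

TOKEN = re.compile(r"\s*(?:(\d+)|(.))")

def tokenize(text):
    for number, op in TOKEN.findall(text):
        yield ("NUM", int(number)) if number else ("OP", op)

class Parser:
    def __init__(self, text):
        self.tokens = list(tokenize(text)) + [("EOF", None)]
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos]

    def eat(self, kind, value=None):
        k, v = self.tokens[self.pos]
        if k != kind or (value is not None and v != value):
            raise SyntaxError(f"expected {value or kind}, found {v!r}")
        self.pos += 1
        return v

    def expr(self):                       # E -> T { '+' T }
        value = self.term()
        while self.peek() == ("OP", "+"):
            self.eat("OP", "+")
            value += self.term()
        return value

    def term(self):                       # T -> F { '*' F }
        value = self.factor()
        while self.peek() == ("OP", "*"):
            self.eat("OP", "*")
            value *= self.factor()
        return value

    def factor(self):                     # F -> NUMBER | '(' E ')'
        if self.peek()[0] == "NUM":
            return self.eat("NUM")
        self.eat("OP", "(")
        value = self.expr()
        self.eat("OP", ")")
        return value

if __name__ == "__main__":
    print(Parser("2*(3+4)").expr())   # prints 14
```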