De-identification of personal information for use in software testing to ensure compliance with the Protection of Personal Information Act
- Authors: Mark, Stephen John
- Date: 2018
- Subjects: Data processing , Information technology -- Security measures , Computer security -- South Africa , Data protection -- Law and legislation -- South Africa , Data encryption (Computer science) , Python (Computer program language) , SQL (Computer program language) , Protection of Personal Information Act (POPI)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/63888 , vital:28503
- Description: Encryption of Personally Identifiable Information stored in a Structured Query Language (SQL) database has been difficult for a long time, because block-cipher encryption algorithms change the length and type of the input data, and the resulting ciphertext cannot be stored in the database without altering its structure. As the enactment of the South African Protection of Personal Information Act, No 4 of 2013 (POPI), was set in motion with the appointment of the Information Regulator’s Office in December 2016, South African companies are intensely focused on implementing compliance strategies and processes. The legislation, promulgated in 2013, encompasses the processing and storage of personally identifiable information (PII), ensuring that corporations act responsibly when collecting, storing and using individuals’ personal data. The Act comprises eight broad conditions that will become enforceable once the new Information Regulator’s Office is fully equipped to carry out its duties. POPI requires that individuals’ data be kept confidential from all but those who specifically have permission to access it. This means that not all members of IT teams should have access to the data unless it has been de-identified. This study tests an implementation of the FF1 algorithm from the National Institute of Standards and Technology (NIST) “Special Publication 800-38G: Recommendation for Block Cipher Modes of Operation: Methods for Format-Preserving Encryption”, using the LibFFX Python library. The Python scripting language was used for the experiments. The research shows that it is indeed possible to encrypt data in an SQL database without changing the database schema, using the new format-preserving encryption technique from NIST SP 800-38G; Quality Assurance software testers can then run their full set of tests on the encrypted database. There is no reduction in encryption strength when using the FF1 encryption technique compared to the underlying AES-128 encryption algorithm, and the utility of the data is not lost once it is encrypted.
- Date Issued: 2018
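The key property here is that format-preserving encryption keeps ciphertext in the same length and alphabet as the plaintext, so encrypted values still satisfy the column's type and constraints. The sketch below is a toy balanced-Feistel cipher over decimal strings that illustrates this property; it is not the NIST FF1 algorithm the thesis evaluates (FF1 uses AES-based round functions, tweaks and an unbalanced split), and the key and sample value are hypothetical.

```python
# Toy format-preserving cipher over even-length decimal strings: the
# ciphertext has the same length and alphabet as the plaintext, so it can
# replace the plaintext in an existing database column. Illustrative only;
# NOT NIST SP 800-38G FF1 (which uses AES round functions and tweaks).
import hashlib
import hmac

ROUNDS = 10

def _round_value(key: bytes, rnd: int, half: str, width: int) -> int:
    mac = hmac.new(key, f"{rnd}:{half}".encode(), hashlib.sha256).digest()
    return int.from_bytes(mac[:8], "big") % 10 ** width

def encrypt(key: bytes, digits: str) -> str:
    assert len(digits) % 2 == 0, "toy cipher: even-length digit strings only"
    w = len(digits) // 2
    left, right = digits[:w], digits[w:]
    for rnd in range(ROUNDS):
        f = _round_value(key, rnd, right, w)
        left, right = right, f"{(int(left) + f) % 10 ** w:0{w}d}"
    return left + right

def decrypt(key: bytes, digits: str) -> str:
    w = len(digits) // 2
    left, right = digits[:w], digits[w:]
    for rnd in reversed(range(ROUNDS)):
        f = _round_value(key, rnd, left, w)
        left, right = f"{(int(right) - f) % 10 ** w:0{w}d}", left
    return left + right

token = encrypt(b"hypothetical-key", "4111111111111111")
assert len(token) == 16 and token.isdigit()      # still fits the column
assert decrypt(b"hypothetical-key", token) == "4111111111111111"
```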
A comparison of open source and proprietary digital forensic software
- Authors: Sonnekus, Michael Hendrik
- Date: 2015
- Subjects: Computer crimes , Computer crimes -- Investigation , Electronic evidence , Open source software
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4717 , http://hdl.handle.net/10962/d1017939
- Description: Scrutiny of the capabilities and accuracy of computer forensic tools is increasing as the number of incidents relying on digital evidence, and the weight of that evidence, increase. This thesis describes the capabilities of the leading proprietary and open source digital forensic tools. The capabilities of the tools were tested separately on digital media that had been formatted using Windows and Linux. Experiments were carried out with the intention of establishing whether the capabilities of open source computer forensic tools are similar to those of proprietary computer forensic tools, and whether these tools could complement one another. The tools were tested with regard to their capabilities to make and analyse digital forensic images in a forensically sound manner. The tests were carried out on each media type after deleting data from the media, and then repeated after formatting the media. The results of the experiments demonstrate that both proprietary and open source computer forensic tools have superior capabilities in different scenarios, and that the toolsets can be used to validate and complement one another. The implication of these findings is that investigators have an affordable means of validating their findings and are able to investigate digital media more effectively.
- Date Issued: 2015
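The forensic soundness both tool families were tested for reduces to acquiring a bit-for-bit image of the media and proving the image matches the source with a cryptographic hash. A minimal sketch of that acquire-and-verify step follows, with hypothetical device and image paths; a real acquisition would read through a write blocker, and tools commonly record MD5 or SHA-1 digests.

```python
# Minimal sketch of forensically sound imaging: a bit-for-bit copy whose
# hash is computed during acquisition and re-verified afterwards.
# Paths are hypothetical; real acquisitions go through a write blocker.
import hashlib

CHUNK = 1 << 20  # read 1 MiB at a time

def image_and_hash(source: str, image: str) -> str:
    sha = hashlib.sha256()
    with open(source, "rb") as src, open(image, "wb") as dst:
        while chunk := src.read(CHUNK):
            dst.write(chunk)
            sha.update(chunk)
    return sha.hexdigest()

def verify(image: str, expected: str) -> bool:
    sha = hashlib.sha256()
    with open(image, "rb") as f:
        while chunk := f.read(CHUNK):
            sha.update(chunk)
    return sha.hexdigest() == expected

digest = image_and_hash("/dev/sdb", "evidence.dd")   # acquire
assert verify("evidence.dd", digest)                  # image matches source
```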
An investigation into XSets of primitive behaviours for emergent behaviour in stigmergic and message passing ant-like agents
- Authors: Chibaya, Colin
- Date: 2014
- Subjects: Ants -- Behavior -- Computer programs , Insects -- Behavior -- Computer programs , Ant communities -- Behavior , Insect societies
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4698 , http://hdl.handle.net/10962/d1012965
- Description: Ants are fascinating creatures - not so much because they are intelligent on their own, but because as a group they display compelling emergent behaviour (the extent to which one observes features in a swarm which cannot be traced back to the actions of individual swarm members). What does each swarm member do that allows deliberate engineering of emergent behaviour? We investigate the development of a language for programming swarms of ant agents towards desired emergent behaviour. Five aspects of stigmergic ant agents (pheromone-sensitive computational devices in which a non-symbolic form of communication arises, mediated indirectly via the environment) and message passing ant agents (computational devices which rely on implicit communication spaces in which direction vectors are shared one-on-one) are studied. First, we investigate the primitive behaviours which characterize ant agents' discrete actions at the individual level. Ten such primitive behaviours are identified as candidate building blocks of the ant agent language sought. We then study mechanisms by which primitive behaviours are put together into XSets (collections of primitive behaviours, parameter values, and meta-information which spells out how and when the primitive behaviours are used). Various permutations of XSets are possible, and these define the search space for the best-performing XSets for particular tasks. Genetic programming principles are proposed as a search strategy for the best-performing XSets, those that best allow particular emergent behaviour to occur. XSets in the search space are evolved over successive genetic generations and tested for their ability to support path finding (as proof of concept). XSets are ranked according to the indices of merit (fitness measures which indicate how well XSets allow particular emergent behaviour to occur) they achieve, and the best performers for the path-finding task are identified and reported. We validate the results yielded when the best-performing XSets are used with regard to normality, correlation, similarities in variation, and similarities between mean performances over time; the simulation results commonly pass most statistical tests. The last aspect we study is the application of the best-performing XSets to different problem tasks. Five experiments are administered in this regard. The first experiment assesses XSets' ability to locate multiple targets (continuous regions of targets), and finds that the best-performing XSets are problem independent, although both categories of XSets are sensitive to changes in agent density. We test the influence of individual primitive behaviours, and of the sequence in which primitive behaviours appear, on the indices of merit of XSets, and find that most primitive behaviours are indispensable, especially when specific sequences are prescribed. The effects of pheromone dissipation on the indices of merit of stigmergic XSets are also scrutinized: dissipation is not causal; rather, it enhances convergence. Overall, this work successfully identifies the discrete primitive behaviours of stigmergic and message passing ant-like devices, and puts these primitive behaviours together into XSets which characterize a language for programming ant-like devices towards desired emergent behaviour. This XSets approach is a new ant language representation with which a wider domain of emergent tasks can be resolved.
- Date Issued: 2014
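The search procedure described above can be pictured with a toy genetic algorithm: XSets modelled as ordered lists of primitive behaviours, evolved against an index of merit. The primitive names and the fitness function below are hypothetical stand-ins for the thesis's ten primitives and its path-finding fitness measure.

```python
# Toy genetic search over XSets: an XSet is modelled as an ordered list of
# primitive behaviours; the index of merit is a hypothetical stand-in for
# the thesis's path-finding fitness measure.
import random

PRIMITIVES = ["move_forward", "turn_left", "turn_right", "drop_pheromone",
              "follow_gradient", "random_walk"]  # hypothetical names

def random_xset(length=6):
    return [random.choice(PRIMITIVES) for _ in range(length)]

def index_of_merit(xset):
    # Stand-in fitness: reward gradient-following right after pheromone drops.
    return sum(1 for a, b in zip(xset, xset[1:])
               if a == "drop_pheromone" and b == "follow_gradient")

def evolve(generations=50, pop_size=30):
    population = [random_xset() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=index_of_merit, reverse=True)
        parents = population[: pop_size // 2]        # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:                # point mutation
                child[random.randrange(len(child))] = random.choice(PRIMITIVES)
            children.append(child)
        population = parents + children
    return max(population, key=index_of_merit)

print(evolve())  # best XSet found for the stand-in fitness
```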
Search engine poisoning and its prevalence in modern search engines
- Authors: Blaauw, Pieter
- Date: 2013
- Subjects: Web search engines Internet searching World Wide Web Malware (Computer software) Computer viruses Rootkits (Computer software) Spyware (Computer software)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4572 , http://hdl.handle.net/10962/d1002037
- Description: The prevalence of Search Engine Poisoning in trending topics and popular search terms within search engines is investigated. Search Engine Poisoning is the act of manipulating search engines in order to display search results from websites infected with malware. Research done between February and August 2012, using both manual and automated techniques, shows how easily the criminal element manages to insert malicious content into web pages related to popular search terms. In order to provide the reader with a clear overview and understanding of the motives and methods of the operators of Search Engine Poisoning campaigns, an in-depth review of automated and semi-automated web exploit kits is presented, together with an examination of the motives for running these campaigns. Three high-profile case studies are examined, and the various Search Engine Poisoning campaigns associated with them are discussed in detail. From February to August 2012, data was collected from the top trending topics on Google’s search engine along with the top listed sites related to these topics, and then passed through various automated tools to discover whether these results had been infiltrated by the operators of Search Engine Poisoning campaigns; the results of these automated scans are discussed in detail. During the research period, manual searching for Search Engine Poisoning campaigns was also done, using high-profile news events and popular search terms. These results are analysed in detail to determine the methods of attack, the purpose of the attack and the parties behind it.
- Date Issued: 2013
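The automated scanning loop described above can be sketched minimally: collect the result URLs for each trending term and flag any whose host (or parent domain) appears on a list of known-malicious domains. The terms, URLs and blocklist below are hypothetical placeholders for the feeds used in the research.

```python
# Minimal sketch of the automated scan: flag search-result URLs whose
# host appears on a blocklist of known-malicious domains. The trending
# terms, result URLs and blocklist below are hypothetical placeholders.
from urllib.parse import urlparse

BLOCKLIST = {"malicious.example", "exploit-kit.example"}  # hypothetical feed

def flag_poisoned(results: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Return (term, url) pairs whose host matches the blocklist."""
    hits = []
    for term, urls in results.items():
        for url in urls:
            host = urlparse(url).hostname or ""
            # Match the host itself or any parent domain on the blocklist.
            parts = host.split(".")
            domains = {".".join(parts[i:]) for i in range(len(parts))}
            if domains & BLOCKLIST:
                hits.append((term, url))
    return hits

results = {"royal wedding": ["http://news.example/story",
                             "http://cdn.malicious.example/page.html"]}
print(flag_poisoned(results))  # [('royal wedding', 'http://cdn.malicious...')]
```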
A structural and functional specification of a SCIM for service interaction management and personalisation in the IMS
- Authors: Tsietsi, Mosiuoa Jeremia
- Date: 2012
- Subjects: Internet Protocol multimedia subsystem , Internet Protocol multimedia subsystem -- Specifications , Long-Term Evolution (Telecommunications) , European Telecommunications Standards Institute , Wireless communication systems , Multimedia communications
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4606 , http://hdl.handle.net/10962/d1004864
- Description: The Internet Protocol Multimedia Subsystem (IMS) is a component of the 3G mobile network that has been specified by standards development organisations such as the 3GPP (3rd Generation Partnership Project) and ETSI (European Telecommunications Standards Institute). IMS seeks to guarantee that the telecommunication network of the future provides subscribers with seamless access to services across disparate networks. In order to achieve this, it defines a service architecture that hosts application servers which provide subscribers with value added services. Typically, an application server bundles all the functionality it needs to execute the services it delivers; however, this view is currently being challenged. It is now thought that services should be synthesised from simple building blocks called service capabilities. This decomposition would facilitate the re-use of service capabilities across multiple services and would support the creation of new services that could not have originally been conceived. The shift from monolithic services to those built from service capabilities poses a challenge to the current service model in IMS. To accommodate this, the 3GPP has defined an entity known as a service capability interaction manager (SCIM) that would be responsible for managing the interactions between service capabilities in order to realise complex services. Some of these interactions could potentially lead to undesirable results, which the SCIM must work to avoid. As an added requirement, it is believed that the network should allow policies to be applied to network services, which the SCIM should be responsible for enforcing. At the time of writing, the functional and structural architecture of the SCIM has not yet been standardised. This thesis explores the current service architecture of the IMS in detail. Proposals that address the structure and functions of the SCIM are carefully compared and contrasted. This investigation leads to the presentation of key aspects of the SCIM, and provides solutions that explain how it should interact with service capabilities, manage undesirable interactions and factor user and network operator policies into its execution model. A modified design of the IMS service layer that embeds the SCIM is subsequently presented and described. The design uses existing IMS protocols and requires no change in the behaviour of the standard IMS entities. In order to develop a testbed for experimental verification of the design, the identification of suitable software platforms was required. This thesis presents some of the most popular platforms currently used by developers, such as the Open IMS Core and OpenSER, as well as an open source, Java-based, multimedia communication platform called Mobicents. As a precursor to the development of the SCIM, a converged multimedia service is presented that describes how a video streaming application leveraged by a web portal was implemented for an IMS testbed using Mobicents components. The Mobicents SIP Servlets container was subsequently used to model an initial prototype of the SCIM, using a multi-component telephony service to illustrate the proposed service execution model. The design focuses on SIP-based services only, but should work for other types of IMS application servers as well.
- Date Issued: 2012
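The interaction-management role proposed for the SCIM can be pictured with a toy broker that chains service capabilities for a session, dropping any capability that policy forbids or that would conflict with one already in the chain. The capability names and the conflict rule below are hypothetical illustrations, not standardised 3GPP behaviour.

```python
# Toy SCIM-style broker: chain service capabilities for a session and
# veto undesirable interactions. Capability names and the conflict
# rule are hypothetical illustrations, not 3GPP-standardised behaviour.
CONFLICTS = {("call_forwarding", "call_barring")}  # undesirable together

def compose(requested: list[str], user_policy: set[str]) -> list[str]:
    chain: list[str] = []
    for cap in requested:
        if cap not in user_policy:
            continue  # user/operator policy forbids this capability
        if any((prev, cap) in CONFLICTS or (cap, prev) in CONFLICTS
               for prev in chain):
            continue  # would create an undesirable interaction
        chain.append(cap)
    return chain

# The barring capability is dropped because it conflicts with forwarding.
print(compose(["call_forwarding", "voicemail", "call_barring"],
              user_policy={"call_forwarding", "voicemail", "call_barring"}))
```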
A framework for the application of network telescope sensors in a global IP network
- Authors: Irwin, Barry Vivian William
- Date: 2011
- Subjects: Sensor networks Computer networks TCP/IP (Computer network protocol) Internet Computer security Computers -- Access control Computer networks -- Security measures Computer viruses Malware (Computer software)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4593 , http://hdl.handle.net/10962/d1004835
- Description: The use of Network Telescope systems has become increasingly popular amongst security researchers in recent years. This study provides a framework for the utilisation of this data. The research is based on a primary dataset of 40 million events spanning 50 months, collected using a small (/24) passive network telescope located in African IP space. This research presents a number of differing ways in which the data can be analysed, ranging from low-level protocol-based analysis to higher-level analysis at the geopolitical and network topology level. Anomalous traffic and illustrative anecdotes are explored in detail and highlighted. A discussion relating to bogon traffic observed is also presented. Two novel visualisation tools are presented, which were developed to aid in the analysis of large network telescope datasets. The first is a three-dimensional visualisation tool which allows for live, near-real-time analysis, and the second is a two-dimensional fractal-based plotting scheme which allows plots of the entire IPv4 address space to be produced and manipulated. Using the techniques and tools developed for the analysis of this dataset, a detailed analysis of traffic recorded as destined for port 445/tcp is presented. This includes the evaluation of traffic surrounding the outbreak of the Conficker worm in November 2008. A number of metrics relating to the description and quantification of network telescope configuration and the resultant traffic captures are described, the use of which, it is hoped, will facilitate greater and easier collaboration among researchers utilising this network security technology. The research concludes with suggestions relating to other applications of the data and intelligence that can be extracted from network telescopes, and their use as part of an organisation’s integrated network security systems.
- Date Issued: 2011
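The low-level, protocol-based end of the analysis spectrum can be sketched as a simple aggregation: count telescope events per destination port to surface scanning activity such as the 445/tcp traffic examined around the Conficker outbreak. The CSV column layout below is a hypothetical stand-in for the capture format.

```python
# Minimal sketch of low-level telescope analysis: count events per
# destination port to surface scanning activity (e.g. 445/tcp around
# the Conficker outbreak). The CSV column layout is hypothetical.
import csv
from collections import Counter

def port_counts(path: str) -> Counter:
    counts: Counter = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: src_ip,dst_port,proto
            if row["proto"] == "tcp":
                counts[int(row["dst_port"])] += 1
    return counts

for port, n in port_counts("telescope_events.csv").most_common(10):
    print(f"{port}/tcp\t{n}")
```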
A common analysis framework for simulated streaming-video networks
- Authors: Mulumba, Patrick
- Date: 2009
- Subjects: Computer networks -- Management , Streaming video , Mass media -- Technological innovations
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4590 , http://hdl.handle.net/10962/d1004828
- Description: Distributed media streaming has been driven by the combination of improved media compression techniques and an increase in the availability of bandwidth. This increase has led to the development of various streaming distribution engines (systems/services), which currently provide the majority of the streaming media available throughout the Internet. This study aimed to analyse a range of existing commercial and open-source streaming media distribution engines, and to classify them in such a way as to define a Common Analysis Framework for Simulated Streaming-Video Networks (CAFSS-Net). This common framework was used as the basis for a simulation tool intended to aid in the development and deployment of streaming media networks, and to predict the performance impacts of network configuration changes, video features (scene complexity, resolution) and general scaling. CAFSS-Net consists of six components: the server, the client(s), the network simulator, the video publishing tools, the videos and the evaluation tool-set. Test scenarios are presented consisting of different network configurations, scales and external traffic specifications. From these test scenarios, results were obtained to identify interesting observations and to provide an overview of the different test specifications for this study. From these results, an analysis of the system was performed, yielding relationships between the videos, the different bandwidths, the different measurement tools and the different components of CAFSS-Net. Based on the analysis of the results, the implications for CAFSS-Net highlighted different achievements and proposals for future work for the different components. CAFSS-Net was able to successfully integrate all of its components to evaluate the different streaming scenarios. The streaming server, client and video components accomplished their objectives. It is noted that although the video publishing tool was able to provide the necessary compression/decompression services, the implementation of alternative compression/decompression schemes could serve as a suitable extension. The network simulator and evaluation tool-set components were also successful, but future tests (particularly in low-bandwidth scenarios) are suggested in order to further improve the accuracy of the framework as a whole. CAFSS-Net is especially successful at analysing high-bandwidth connections, with results similar to those of the physical network tests.
- Date Issued: 2009
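The evaluation tool-set's job can be illustrated with a toy post-processing step: derive packet loss and mean throughput from per-second simulation samples. The sample record format below is a hypothetical stand-in for CAFSS-Net's log output.

```python
# Toy evaluation step for a CAFSS-Net-style run: derive packet loss and
# mean throughput from per-second simulation samples. The sample record
# format is a hypothetical stand-in for the framework's logs.
from dataclasses import dataclass

@dataclass
class Sample:
    sent_pkts: int
    received_pkts: int
    received_bytes: int

def summarise(samples: list[Sample]) -> dict[str, float]:
    sent = sum(s.sent_pkts for s in samples)
    recv = sum(s.received_pkts for s in samples)
    return {
        "loss_pct": 100.0 * (sent - recv) / sent if sent else 0.0,
        "throughput_kbps": sum(s.received_bytes for s in samples)
                           * 8 / 1000 / len(samples),  # one sample per second
    }

run = [Sample(120, 118, 150_000), Sample(120, 120, 152_000)]
print(summarise(run))  # {'loss_pct': 0.833..., 'throughput_kbps': 1208.0}
```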
An adaptive approach for optimized opportunistic routing over Delay Tolerant Mobile Ad hoc Networks
- Authors: Zhao, Xiaogeng
- Date: 2008
- Subjects: Ad hoc networks (Computer networks) Computer network architectures Computer networks Routing protocols (Computer network protocols)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4588 , http://hdl.handle.net/10962/d1004822
- Description: This thesis presents a framework for investigating opportunistic routing in Delay Tolerant Mobile Ad hoc Networks (DTMANETs), and introduces the concept of an Opportunistic Confidence Index (OCI). The OCI enables multiple opportunistic routing protocols to be applied as an adaptive group to improve DTMANET routing reliability, performance, and efficiency. The DTMANET is a recently acknowledged network architecture, designed to address the challenging and marginal environments created by adaptive, mobile, and unreliable network node presence. Because of its ad hoc and autonomic nature, routing in a DTMANET is a very challenging problem. The design of routing protocols in such environments, which ensure a high percentage delivery rate (reliability), achieve a reasonable delivery time (performance), and at the same time maintain an acceptable communication overhead (efficiency), is of fundamental consequence to the usefulness of DTMANETs. In recent years, a number of investigations into DTMANET routing have been conducted, resulting in the emergence of a class of routing known as opportunistic routing protocols. Current research into opportunistic routing has exposed opportunities for positive impacts on DTMANET routing. To date, most investigations have concentrated upon one or other of the quality metrics of reliability, performance, or efficiency, while some approaches have pursued a balance of these metrics through assumptions of a high level of global knowledge and/or uniform mobile device behaviours. No prior research that we are aware of has studied the connection between multiple opportunistic elements and their influences upon one another, and none has demonstrated the possibility of modelling and using multiple different opportunistic elements as an adaptive group to aid the routing process in a DTMANET. This thesis investigates OCI opportunities and their viability through the design of an extensible simulation environment, which makes use of methods and techniques such as abstract modelling, opportunistic element simplification and isolation, random attribute generation and assignment, localized knowledge sharing, automated scenario generation, intelligent weight assignment and/or opportunistic element permutation. These methods and techniques are incorporated at both the data acquisition and analysis phases. Our results show a significant improvement in all three metric categories. In one of the most applicable scenarios tested, OCI yielded a 31.05% message delivery increase (reliability improvement), a 22.18% message delivery time reduction (performance improvement), and a 73.64% routing depth decrement (efficiency improvement). We are able to conclude that the OCI approach is feasible across a range of scenarios, and that the use of multiple opportunistic elements to aid decision-making processes in DTMANET environments has value.
- Date Issued: 2008
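The OCI itself can be sketched as a weighted combination of normalised opportunistic element scores, used to rank candidate next hops. The element names and weights below are hypothetical; the thesis assigns and tunes weights through its simulation environment.

```python
# Toy Opportunistic Confidence Index: combine several opportunistic
# element scores into one forwarding decision. Element names and
# weights are hypothetical; the thesis tunes these experimentally.
WEIGHTS = {"encounter_freq": 0.5, "battery": 0.2, "link_quality": 0.3}

def oci(elements: dict[str, float]) -> float:
    """Weighted sum of normalised (0..1) opportunistic element scores."""
    return sum(WEIGHTS[name] * score for name, score in elements.items())

def pick_next_hop(candidates: dict[str, dict[str, float]]) -> str:
    return max(candidates, key=lambda peer: oci(candidates[peer]))

peers = {
    "node_a": {"encounter_freq": 0.9, "battery": 0.4, "link_quality": 0.6},
    "node_b": {"encounter_freq": 0.5, "battery": 0.9, "link_quality": 0.9},
}
print(pick_next_hop(peers))  # node_a: OCI 0.71 vs node_b: 0.70
```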
Prototyping a peer-to-peer session initiation protocol user agent
- Authors: Tsietsi, Mosiuoa Jeremia
- Date: 2008 , 2008-03-10
- Subjects: Computer networks , Computer network protocols -- Standards , Data transmission systems -- Standards , Peer-to-peer architecture (Computer networks) , Computer network architectures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4646 , http://hdl.handle.net/10962/d1006603
- Description: The Session Initiation Protocol (SIP) has in recent years become a popular protocol for the exchange of text, voice and video over IP networks. This thesis proposes the use of a class of structured peer-to-peer protocols - commonly known as Distributed Hash Tables (DHTs) - to provide a SIP overlay with services such as end-point location management and message relay, in the absence of traditional, centralised resources such as SIP proxies and registrars. A peer-to-peer layer named OverCord, which allows interaction with any specific DHT protocol via the use of appropriate plug-ins, was designed, implemented and tested. This layer was then incorporated into a SIP user agent distributed by NIST (National Institute of Standards and Technology, USA). The modified user agent is capable of reliably establishing text, audio and video communication with similarly modified agents (peers) as well as with conventional, centralised SIP overlays.
- Date Issued: 2008
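The DHT service underneath OverCord can be pictured with a toy consistent-hashing lookup: hash each SIP address-of-record onto a ring and store its registration on the successor node, as a Chord-style overlay would. The node names below are hypothetical, and OverCord delegates the real lookup to pluggable DHT protocols.

```python
# Toy DHT-style registrar lookup: hash a SIP address-of-record onto a
# ring and assign it to the successor node, as a Chord-like overlay
# would. Node names are hypothetical; OverCord delegates this step to
# pluggable DHT protocols.
import hashlib
from bisect import bisect_right

def ring_id(name: str) -> int:
    # 32-bit ring position derived from a SHA-1 digest prefix.
    return int.from_bytes(hashlib.sha1(name.encode()).digest()[:4], "big")

def successor(nodes: list[str], key: str) -> str:
    ids = sorted((ring_id(n), n) for n in nodes)
    points = [i for i, _ in ids]
    idx = bisect_right(points, ring_id(key)) % len(ids)  # wrap around the ring
    return ids[idx][1]

nodes = ["peer1.example", "peer2.example", "peer3.example"]
# The node returned is responsible for storing this AOR's contact binding.
print(successor(nodes, "sip:alice@example.org"))
```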
A detailed investigation of interoperability for web services
- Authors: Wright, Madeleine
- Date: 2006
- Subjects: Firefox , Web services , World Wide Web , Computer architecture , C# (Computer program language) , PHP (Computer program language) , Java (Computer program language)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4592 , http://hdl.handle.net/10962/d1004832
- Description: The thesis presents a qualitative survey of web services' interoperability, offering a snapshot of development and trends at the end of 2005. It starts by examining the beginnings of web services in earlier distributed computing and middleware technologies, determining the distance from these approaches evident in current web-services architectures. It establishes a working definition of web services, examining the protocols that now seek to define it and the extent to which they contribute to its most crucial feature, interoperability. The thesis then considers the REST approach to web services as being in a class of its own, concluding that this approach to interoperable distributed computing is not only the simplest but also the most interoperable. It looks briefly at interoperability issues raised by technologies in the wider arena of Service Oriented Architecture. The chapter on protocols is complemented by a chapter that validates the qualitative findings by examining web services in practice. These have been implemented by a variety of toolkits and on different platforms. Included in the study is a preliminary examination of JAX-WS, the replacement for JAX-RPC, which is still under development. Although the main language of implementation is Java, the study includes services in C# and PHP, and one implementation of a client using a Firefox extension. The study concludes that different forms of web service may co-exist with earlier middleware technologies. While there are still pitfalls that might yet derail the movement towards greater interoperability, the conclusion sounds an optimistic note: recent cooperation between different vendors may yet result in a solution that achieves interoperability through core web-service standards.
- Date Issued: 2006
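The thesis's conclusion about REST's simplicity is easy to make concrete: a REST interaction needs nothing beyond plain HTTP and a self-describing payload, with no SOAP envelope or toolkit-generated stubs. A minimal sketch using only the Python standard library follows; the endpoint URL is a hypothetical placeholder.

```python
# Minimal REST-style interaction with nothing but the standard library,
# illustrating the interoperability argument: plain HTTP verbs on URLs
# and a self-describing JSON payload, no SOAP envelope or generated stubs.
# The endpoint is a hypothetical placeholder.
import json
from urllib.request import urlopen

def get_resource(url: str) -> dict:
    with urlopen(url) as resp:     # plain HTTP GET
        return json.load(resp)     # self-describing payload

# Any client on any platform can consume the same resource the same way:
# print(get_resource("https://api.example.org/books/42"))
```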
The role of parallel computing in bioinformatics
- Authors: Akhurst, Timothy John
- Date: 2005
- Subjects: Bioinformatics , Parallel programming (Computer science) , LINDA (Computer system) , Java (Computer program language) , Parallel processing (Electronic computers) , Genomics -- Data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:3986 , http://hdl.handle.net/10962/d1004045
- Description: The need to intelligibly capture, manage and analyse the ever-increasing amount of publicly available genomic data is one of the challenges facing bioinformaticians today. Such analyses are in fact impractical using uniprocessor machines, which has led to an increasing reliance on clusters of commodity-priced computers. An existing network of cheap, commodity PCs was utilised as a single computational resource for parallel computing. The performance of the cluster was investigated using a whole genome-scanning program written in the Java programming language. The TSpaces framework, based on the Linda parallel programming model, was used to parallelise the application. Maximum speedup was achieved at between 30 and 50 processors, depending on the size of the genome being scanned. Together with this, the associated significant reductions in wall-clock time suggest that both parallel computing and Java have a significant role to play in the field of bioinformatics.
- Date Issued: 2005
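The Linda-style master/worker decomposition used with TSpaces can be approximated with Python's multiprocessing queues: the master writes genome chunks into a shared space, and workers take chunks and return hits. The motif scan below is a hypothetical stand-in for the whole-genome scanner, and matches spanning chunk boundaries are ignored for brevity.

```python
# Sketch of the Linda-style master/worker decomposition used with
# TSpaces, approximated with multiprocessing queues: the master "out"s
# genome chunks, workers "in" chunks and emit motif hits. The motif scan
# is a hypothetical stand-in; boundary-spanning matches are ignored.
from multiprocessing import Process, Queue

MOTIF = "GAATTC"  # hypothetical target pattern (EcoRI site)

def worker(tasks: Queue, results: Queue) -> None:
    while (job := tasks.get()) is not None:   # None is the poison pill
        offset, chunk = job
        hits = [offset + i for i in range(len(chunk))
                if chunk.startswith(MOTIF, i)]
        results.put(hits)

if __name__ == "__main__":
    genome = "ACGTGAATTCAGGAATTCTT" * 1000
    chunk_size = 5000
    tasks, results = Queue(), Queue()
    workers = [Process(target=worker, args=(tasks, results)) for _ in range(4)]
    for w in workers:
        w.start()
    njobs = 0
    for off in range(0, len(genome), chunk_size):
        tasks.put((off, genome[off:off + chunk_size]))
        njobs += 1
    for _ in workers:
        tasks.put(None)
    total = sum(len(results.get()) for _ in range(njobs))
    for w in workers:
        w.join()
    print(f"{total} motif hits")
```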
CREWS : a Component-driven, Run-time Extensible Web Service framework
- Authors: Parry, Dominic Charles
- Date: 2004
- Subjects: Component software -- Development , Computer software -- Reusability , Software reengineering , Web services
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4628 , http://hdl.handle.net/10962/d1006501 , Component software -- Development , Computer software -- Reusability , Software reengineering , Web services
- Description: There has been an increased focus in recent years on the development of re-usable software, in the form of objects and software components. This increase, together with pressure on enterprises conducting transactions on the Web to support business interactions at every scale, has encouraged research into the development of easily reconfigurable and highly adaptable Web services. This work investigates the ability of Component-Based Software Development (CBSD) to produce such systems, and proposes a more manageable use of CBSD methodologies. Component-Driven Software Development (CDSD) is introduced to enable better component manageability. Current Web service technologies are also examined to determine their ability to support extensible Web services, and a dynamic Web service architecture is proposed. The work also describes the development of two proof-of-concept systems, DREW Chat and Hamilton Bank, both Web services that support dynamic extension at run time. DREW Chat is implemented on the client side, where the user is given the ability to change the client as required. Hamilton Bank is a server-side implementation, which is run-time customisable by both the user and the party offering the service. In each case, a generic architecture is produced to support dynamic Web services. These architectures are combined to produce CREWS, a Component-driven Runtime Extensible Web Service solution that enables Web services to support the ever-changing needs of enterprises. A discussion of similar work is presented, identifying the strengths and weaknesses of our architecture when compared to other solutions. (A sketch of the run-time extension idea follows this entry.)
- Full Text:
- Date Issued: 2004
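CREWS's internals are not reproduced in the abstract above, so the sketch below illustrates the general run-time extension idea using the JDK's standard ServiceLoader mechanism: the host discovers component implementations at run time without being recompiled. The ServiceComponent contract is a hypothetical stand-in for whatever interfaces CREWS actually defines.

```java
import java.util.ServiceLoader;

// Run-time extension via the JDK's ServiceLoader: implementations found on
// the classpath are discovered and used without recompiling this host.
// ServiceComponent is a hypothetical contract, not a CREWS interface.
public class ExtensibleHost {

    public interface ServiceComponent {
        String name();
        String handle(String request);
    }

    public static void main(String[] args) {
        // Providers are listed in META-INF/services/ExtensibleHost$ServiceComponent
        // inside any jar dropped onto the classpath; new components therefore
        // extend the host without any change to its code.
        ServiceLoader<ServiceComponent> components = ServiceLoader.load(ServiceComponent.class);
        for (ServiceComponent c : components) {
            System.out.println(c.name() + " handled ping: " + c.handle("ping"));
        }
    }
}
```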
Novel approaches to the monitoring of computer networks
- Authors: Halse, G A
- Date: 2003
- Subjects: Computer networks , Computer networks -- Management , Computer networks -- South Africa -- Grahamstown , Rhodes University -- Information Technology Division
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4645 , http://hdl.handle.net/10962/d1006601
- Description: Traditional network monitoring techniques suffer from a number of limitations. They are usually designed to solve the most general case, and as a result often fall short of expectations. This project sets out to provide the network administrator with a set of alternative tools to solve specific, but common, problems. It uses the network at Rhodes University as a case study and addresses a number of issues that arise on this network. Four problematic areas are identified within this network: the automatic determination of network topology and layout, the tracking of network growth, the determination of the physical and logical locations of hosts on the network, and the need for intelligent fault reporting systems. These areas are chosen because other network monitoring techniques have failed to adequately address these problems, and because they present problems that are common across a large number of networks. Each area is examined separately and a solution is sought for each of the problems identified. As a result, a set of tools is developed to solve these problems using a number of novel network monitoring techniques. These tools are designed to be as portable as possible so as not to limit their use to the case study network. Their use within Rhodes, as well as their applicability to other situations, is discussed. In all cases, any limitations and shortfalls in the approaches that were employed are examined. (A minimal host-discovery sketch, one ingredient of automatic topology determination, follows this entry.)
- Full Text:
- Date Issued: 2003
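One ingredient of the automatic topology determination described above is discovering which hosts are alive. The sketch below is a deliberately simple reachability sweep, not one of the thesis's tools; the subnet prefix is a placeholder, and a practical tool would probe addresses concurrently and fall back to other techniques where ICMP is filtered.

```java
import java.net.InetAddress;
import java.util.ArrayList;
import java.util.List;

// A minimal sweep of a /24 subnet for reachable hosts, as a starting point
// for topology discovery. Illustrative only; the prefix is a placeholder.
public class SubnetSweep {
    public static void main(String[] args) throws Exception {
        String prefix = "192.168.1.";            // placeholder subnet
        List<String> alive = new ArrayList<>();
        for (int host = 1; host < 255; host++) {
            InetAddress addr = InetAddress.getByName(prefix + host);
            // isReachable uses ICMP echo when privileged, else a TCP probe.
            if (addr.isReachable(200)) {
                alive.add(addr.getHostAddress());
            }
        }
        System.out.println("Reachable hosts: " + alive);
    }
}
```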
RADGIS - an improved architecture for runtime-extensible, distributed GIS applications
- Authors: Preston, Richard Michael
- Date: 2002
- Subjects: Geographic information systems
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4626 , http://hdl.handle.net/10962/d1006497
- Description: A number of GIS architectures and technologies have emerged recently to facilitate the visualisation and processing of geospatial data over the Web. The work presented in this dissertation builds on these efforts and undertakes to overcome some of the major problems with traditional GIS client architectures, including application bloat, lack of customisability, and lack of interoperability between GIS products. In this dissertation we describe how a new client-side GIS architecture was developed and implemented as a proof-of-concept application called RADGIS, which is based on open standards and emerging distributed component-based software paradigms. RADGIS reflects the current shift in development focus from Web browser-based applications to customised, standards-based clients that make use of distributed Web services. While much attention has been paid to exposing data on the Web, there is growing momentum towards providing “value-added” services. A good example of this is the tremendous industry interest in the provision of location-based services, which has been discussed as a special use-case of our RADGIS architecture. Thus, in the near future client applications will not simply be used to access data transparently, but will also become facilitators for the location-transparent invocation of local and remote services. This flexible architecture will ensure that data can be stored and processed independently of the location of the client that wishes to view or interact with it. Our RADGIS application enables content developers and end-users to create and/or customise GIS applications dynamically at runtime through the incorporation of GIS services. This ensures that the client application has the flexibility to withstand changing levels of expertise or user requirements. These GIS services are implemented as components that execute locally on the client machine, or as remote CORBA Objects or EJBs. Assembly and deployment of these components is achieved using a specialised XML descriptor. This XML descriptor is written using a markup language that we developed specifically for this purpose, called DGCML, which contains deployment information, as well as a GUI specification and links to an XML-based help system that can be merged with the RADGIS client application’s existing help system. Thus, no additional requirements are imposed on object developers by the RADGIS architecture, i.e. there is no need to rewrite existing objects since DGCML acts as a runtime-customisable wrapper, allowing existing objects to be utilised by RADGIS. While the focus of this thesis has been on overcoming the above-mentioned problems with traditional GIS applications, the work described here can also be applied in a much broader context, especially in the development of highly customisable client applications that are able to integrate Web services at runtime. (A sketch of descriptor-driven component instantiation follows this entry.)
- Full Text:
- Date Issued: 2002
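The abstract above describes DGCML, an XML descriptor that tells the RADGIS client which components to assemble at run time. DGCML's actual schema is not given here, so the element and attribute names below are invented; the sketch only illustrates the underlying pattern of reading class names from a descriptor and instantiating them by reflection, which is how a runtime-customisable wrapper can reuse existing objects unchanged.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

// Descriptor-driven assembly: read component class names from an XML
// descriptor and instantiate them by reflection. The element and attribute
// names are invented; DGCML's real schema is not shown in the abstract.
public class DescriptorLoader {
    public static void main(String[] args) throws Exception {
        String descriptor = """
            <components>
              <component class="java.util.ArrayList"/>
              <component class="java.lang.StringBuilder"/>
            </components>
            """;
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(descriptor)));
        NodeList nodes = doc.getElementsByTagName("component");
        for (int i = 0; i < nodes.getLength(); i++) {
            String className = nodes.item(i).getAttributes()
                    .getNamedItem("class").getNodeValue();
            // Existing classes are reused unchanged, as with a run-time wrapper.
            Object component = Class.forName(className)
                    .getDeclaredConstructor().newInstance();
            System.out.println("Loaded component: " + component.getClass().getName());
        }
    }
}
```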
Bandwidth management and monitoring for IP network traffic : an investigation
- Authors: Irwin, Barry Vivian William
- Date: 2001
- Subjects: TCP/IP (Computer network protocol) , Computer networks , Electronic data processing -- Management , Computer networks -- Management
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4624 , http://hdl.handle.net/10962/d1006492 , TCP/IP (Computer network protocol) , Computer networks , Electronic data processing -- Management , Computer networks -- Management
- Description: Bandwidth management is a much-discussed topic, yet relatively little work has been done on compiling a comprehensive set of techniques and methods for managing traffic on a network. What work has been done has concentrated on higher-end networks, rather than the low-bandwidth links commonly available in South Africa and other areas outside the United States. With more organisations making daily use of the Internet, the demand for bandwidth is outstripping the ability of providers to upgrade their infrastructure. This resource is therefore in need of management. In addition, for Internet access to become economically viable for widespread use by schools, NGOs and other academic institutions, the associated costs need to be controlled. Bandwidth management not only impacts on direct cost control, but encompasses the process of engineering a network and network resources to provide as optimal a service as possible, including the provision of user education. Software has been developed for the implementation of traffic quotas, dynamic firewalling and visualisation. The research investigates various methods for monitoring and managing IP traffic, with particular applicability to low-bandwidth links. Several forms of visualisation for the analysis of historical and near-real-time traffic data are also discussed, including the use of three-dimensional landscapes. A number of bandwidth management practices are proposed, and the advantages of their combined and complementary use are highlighted. By implementing these suggested policies, a holistic approach can be taken to bandwidth management on Internet links. (A token-bucket sketch of one way to enforce such traffic quotas follows this entry.)
- Full Text:
- Date Issued: 2001
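The thesis above implements traffic quotas among its management tools. The token bucket below is a standard quota and rate-limiting mechanism, sketched for illustration rather than taken from the thesis's own software: a flow may burst up to the bucket capacity, and is otherwise held to the sustained refill rate.

```java
// A standard token-bucket rate limiter, one common way to enforce per-user
// traffic quotas. Illustrative only; not the thesis's software.
public class TokenBucket {
    private final long capacity;         // burst size, in bytes
    private final double refillPerNano;  // sustained rate, bytes per nanosecond
    private double tokens;
    private long lastRefill;

    public TokenBucket(long capacityBytes, long bytesPerSecond) {
        this.capacity = capacityBytes;
        this.refillPerNano = bytesPerSecond / 1e9;
        this.tokens = capacityBytes;
        this.lastRefill = System.nanoTime();
    }

    /** Returns true if a packet of the given size is within quota. */
    public synchronized boolean tryConsume(int packetBytes) {
        long now = System.nanoTime();
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= packetBytes) {
            tokens -= packetBytes;
            return true;
        }
        return false;   // over quota: queue, drop, or firewall the flow
    }

    public static void main(String[] args) {
        // 10 kB/s sustained, 15 kB burst: the first ~10 full-size packets pass.
        TokenBucket bucket = new TokenBucket(15_000, 10_000);
        for (int i = 0; i < 20; i++) {
            System.out.println("packet " + i + " allowed: " + bucket.tryConsume(1500));
        }
    }
}
```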
Minimal motion capture with inverse kinematics for articulated human figure animation
- Authors: Casanueva, Luis
- Date: 2000
- Subjects: Virtual reality , Image processing -- Digital techniques
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4620 , http://hdl.handle.net/10962/d1006485 , Virtual reality , Image processing -- Digital techniques
- Description: Animating an articulated figure usually requires expensive hardware in terms of motion capture equipment, processing power and rendering power. This implies a high-cost system and thus rules out the use of personal computers to drive avatars in virtual environments. We propose a system to animate an articulated human upper body in real time, using minimal motion capture trackers to provide position and orientation for the limbs. The system has to drive an avatar in a virtual environment on a low-end computer, and the cost of the motion capture equipment must be relatively low (hence the use of minimal trackers). We discuss the various types of motion capture equipment and opt for electromagnetic trackers, which are adequate for our requirements while being reasonably priced. We also discuss the use of inverse kinematics to solve for the articulated chains making up the topology of the articulated figure. Furthermore, we offer a method to describe articulated chains, as well as a process to specify the reach of chains of up to four links with various levels of redundancy for use in articulated figures. We then provide various types of constraints to reduce the redundancy of under-determined articulated chains, specifically for chains found in an articulated human upper body. These include a way to resolve the redundancy in the orientation of the neck link, as well as three different methods to resolve the redundancy of the articulated human arm. The first method eliminates a degree of freedom from the chain, thus reducing its redundancy. The second method calculates the elevation angle of the elbow from the elevation angle of the hand. The third method determines the actual position of the elbow from an average of previous positions of the elbow, according to the position and orientation of the hand; these previous elbow positions are captured during the calibration process. The redundancy of the neck is easily resolved owing to the small amount of redundancy in the chain. When solving the arm, the first method, which in theory should give a perfect result, gives a poor result in practice owing to the limitations of both the motion capture equipment and the design. The second method provides an adequate elbow position in most cases, although it fails in some; still, it benefits from a simple approach and requires very little calibration. The third method is the most accurate of the three for positioning the redundant elbow, although it too fails in some cases, and it requires a long calibration session for each user. The last two methods allow the calibration data to be reused in later sessions, considerably reducing the calibration required. In combination with a virtual reality system, these processes allow the real-time animation of an articulated figure to drive avatars in virtual environments, or low-quality animation on a low-end computer. (The two-link geometry underlying such arm chains is sketched after this entry.)
- Full Text:
- Date Issued: 2000
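The arm methods above all sit on top of the basic two-link inverse-kinematics geometry: given the upper-arm and forearm lengths and the shoulder-to-hand distance, the law of cosines fixes the elbow angle, and the remaining freedom (the elbow's swivel about the shoulder-hand axis) is the redundancy the three methods resolve. The sketch below shows only that shared geometry, with made-up link lengths; the thesis's heuristics are not reproduced.

```java
// Two-link inverse kinematics: the law of cosines gives the interior elbow
// angle from the link lengths and the shoulder-to-hand distance. The elbow's
// swivel about the shoulder-hand axis remains free (the arm's redundancy).
public class TwoLinkIK {

    /** Interior elbow angle in radians (PI = fully extended arm). */
    static double elbowAngle(double upperArm, double forearm, double shoulderToHand) {
        // Clamp to the reachable range so rounding never leaves acos's domain.
        double d = Math.max(Math.abs(upperArm - forearm),
                   Math.min(upperArm + forearm, shoulderToHand));
        double cos = (upperArm * upperArm + forearm * forearm - d * d)
                   / (2 * upperArm * forearm);
        return Math.acos(cos);
    }

    public static void main(String[] args) {
        double upper = 0.30, fore = 0.28;   // made-up link lengths, in metres
        for (double reach : new double[] {0.10, 0.35, 0.58}) {
            System.out.printf("reach %.2f m -> elbow %.1f deg%n",
                    reach, Math.toDegrees(elbowAngle(upper, fore, reach)));
        }
    }
}
```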