A framework for the application of network telescope sensors in a global IP network
- Authors: Irwin, Barry Vivian William
- Date: 2011
- Subjects: Sensor networks Computer networks TCP/IP (Computer network protocol) Internet Computer security Computers -- Access control Computer networks -- Security measures Computer viruses Malware (Computer software)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4593 , http://hdl.handle.net/10962/d1004835
- Description: The use of Network Telescope systems has become increasingly popular amongst security researchers in recent years. This study provides a framework for the utilisation of this data. The research is based on a primary dataset of 40 million events spanning 50 months collected using a small (/24) passive network telescope located in African IP space. This research presents a number of differing ways in which the data can be analysed, ranging from low-level protocol-based analysis to higher-level analysis at the geopolitical and network topology level. Anomalous traffic and illustrative anecdotes are explored in detail and highlighted. A discussion relating to bogon traffic observed is also presented. Two novel visualisation tools are presented, which were developed to aid in the analysis of large network telescope datasets. The first is a three-dimensional visualisation tool which allows for live, near-real-time analysis, and the second is a two-dimensional fractal-based plotting scheme which allows for plots of the entire IPv4 address space to be produced and manipulated. Using the techniques and tools developed for the analysis of this dataset, a detailed analysis of traffic recorded as destined for port 445/tcp is presented. This includes the evaluation of traffic surrounding the outbreak of the Conficker worm in November 2008. A number of metrics relating to the description and quantification of network telescope configuration and the resultant traffic captures are described, the use of which it is hoped will facilitate greater and easier collaboration among researchers utilising this network security technology. The research concludes with suggestions relating to other applications of the data and intelligence that can be extracted from network telescopes, and their use as part of an organisation’s integrated network security systems.
- Full Text:
- Date Issued: 2011
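The two-dimensional fractal-based plotting scheme mentioned in the record above maps the whole IPv4 address space onto a plane. The abstract does not name the curve used, so the following is only a minimal sketch built on a Hilbert curve, a common space-filling mapping for this purpose; the order-8 grid and the one-cell-per-/16 granularity are illustrative assumptions, not the thesis's actual scheme.

```python
def d2xy(order, d):
    """Map a distance d along a Hilbert curve of the given order to (x, y).

    Standard iterative Hilbert-curve conversion; no external libraries needed.
    """
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        # Rotate/reflect the quadrant where required.
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def slash16_to_cell(prefix):
    """Place a /16 prefix (e.g. '196.21.0.0') on a 256x256 Hilbert grid."""
    a, b = (int(octet) for octet in prefix.split(".")[:2])
    index = a * 256 + b          # one cell per /16 prefix, 2^16 cells in total
    return d2xy(8, index)        # an order-8 curve covers exactly 2^16 cells

if __name__ == "__main__":
    print(slash16_to_cell("196.21.0.0"))
```

The appeal of such a mapping is that numerically adjacent prefixes remain spatially adjacent on the plane, which keeps full address-space plots readable.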
A common analysis framework for simulated streaming-video networks
- Authors: Mulumba, Patrick
- Date: 2009
- Subjects: Computer networks -- Management , Streaming video , Mass media -- Technological innovations
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4590 , http://hdl.handle.net/10962/d1004828 , Computer networks -- Management , Streaming video , Mass media -- Technological innovations
- Description: Distributed media streaming has been driven by the combination of improved media compression techniques and an increase in the availability of bandwidth. This increase has led to the development of various streaming distribution engines (systems/services), which currently provide the majority of the streaming media available throughout the Internet. This study aimed to analyse a range of existing commercial and open-source streaming media distribution engines, and classify them in such a way as to define a Common Analysis Framework for Simulated Streaming-Video Networks (CAFSS-Net). This common framework was used as the basis for a simulation tool intended to aid in the development and deployment of streaming media networks and predict the performance impacts of network configuration changes, video features (scene complexity, resolution) and general scaling. CAFSS-Net consists of six components: the server, the client(s), the network simulator, the video publishing tools, the videos and the evaluation tool-set. Test scenarios are presented consisting of different network configurations, scales and external traffic specifications. From these test scenarios, results were obtained to highlight interesting observations and to provide an overview of the different test specifications used in this study. From these results, an analysis of the system was performed, yielding relationships between the videos, the different bandwidths, the different measurement tools and the different components of CAFSS-Net. Based on the analysis of the results, the implications for CAFSS-Net highlighted different achievements and proposals for future work for the different components. CAFSS-Net was able to successfully integrate all of its components to evaluate the different streaming scenarios. The streaming server, client and video components accomplished their objectives. It is noted that although the video publishing tool was able to provide the necessary compression/decompression services, proposals for the implementation of alternative compression/decompression schemes could serve as a suitable extension. The network simulator and evaluation tool-set components were also successful, but future tests (particularly in low bandwidth scenarios) are suggested in order to further improve the accuracy of the framework as a whole. CAFSS-Net is especially successful with analysing high bandwidth connections, with the results being similar to those of the physical network tests.
- Full Text:
- Date Issued: 2009
An adaptive approach for optimized opportunistic routing over Delay Tolerant Mobile Ad hoc Networks
- Authors: Zhao, Xiaogeng
- Date: 2008
- Subjects: Ad hoc networks (Computer networks) Computer network architectures Computer networks Routing protocols (Computer network protocols)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4588 , http://hdl.handle.net/10962/d1004822
- Description: This thesis presents a framework for investigating opportunistic routing in Delay Tolerant Mobile Ad hoc Networks (DTMANETs), and introduces the concept of an Opportunistic Confidence Index (OCI). The OCI enables multiple opportunistic routing protocols to be applied as an adaptive group to improve DTMANET routing reliability, performance, and efficiency. The DTMANET is a recently acknowledged network architecture, which is designed to address the challenging and marginal environments created by adaptive, mobile, and unreliable network node presence. Because of its ad hoc and autonomic nature, routing in a DTMANET is a very challenging problem. The design of routing protocols in such environments, which ensure a high percentage delivery rate (reliability), achieve a reasonable delivery time (performance), and at the same time maintain an acceptable communication overhead (efficiency), is of fundamental consequence to the usefulness of DTMANETs. In recent years, a number of investigations into DTMANET routing have been conducted, resulting in the emergence of a class of routing known as opportunistic routing protocols. Current research into opportunistic routing has exposed opportunities for positive impacts on DTMANET routing. To date, most investigations have concentrated upon one or other of the quality metrics of reliability, performance, or efficiency, while some approaches have pursued a balance of these metrics through assumptions of a high level of global knowledge and/or uniform mobile device behaviours. No prior research that we are aware of has studied the connection between multiple opportunistic elements and their influences upon one another, and none has demonstrated the possibility of modelling and using multiple different opportunistic elements as an adaptive group to aid the routing process in a DTMANET. This thesis investigates OCI opportunities and their viability through the design of an extensible simulation environment, which makes use of methods and techniques such as abstract modelling, opportunistic element simplification and isolation, random attribute generation and assignment, localized knowledge sharing, automated scenario generation, intelligent weight assignment and/or opportunistic element permutation. These methods and techniques are incorporated at both data acquisition and analysis phases. Our results show a significant improvement in all three metric categories. In one of the most applicable scenarios tested, OCI yielded a 31.05% message delivery increase (reliability improvement), 22.18% message delivery time reduction (performance improvement), and 73.64% routing depth decrement (efficiency improvement). We are able to conclude that the OCI approach is feasible across a range of scenarios, and that the use of multiple opportunistic elements to aid decision-making processes in DTMANET environments has value.
- Full Text:
- Date Issued: 2008
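The Opportunistic Confidence Index described in the record above combines several opportunistic elements into a single value used to rank candidate next hops. A minimal sketch of that idea follows; the element names, weights and linear combination are assumptions for illustration, as the abstract does not give the actual OCI formulation.

```python
# Hypothetical opportunistic elements and weights; normalised scores in [0, 1].
WEIGHTS = {
    "encounter_history": 0.4,
    "battery_level": 0.2,
    "buffer_free": 0.2,
    "mobility_towards_dest": 0.2,
}

def confidence_index(scores, weights=WEIGHTS):
    """Weighted sum of per-element scores for one candidate next hop."""
    return sum(weights[name] * scores.get(name, 0.0) for name in weights)

def choose_next_hop(candidates):
    """Pick the neighbour with the highest confidence index.

    `candidates` maps a neighbour id to its dict of element scores.
    """
    return max(candidates, key=lambda n: confidence_index(candidates[n]))

if __name__ == "__main__":
    neighbours = {
        "node_a": {"encounter_history": 0.9, "battery_level": 0.5,
                   "buffer_free": 0.7, "mobility_towards_dest": 0.2},
        "node_b": {"encounter_history": 0.4, "battery_level": 0.9,
                   "buffer_free": 0.9, "mobility_towards_dest": 0.8},
    }
    print(choose_next_hop(neighbours))
```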
Extending the reach of personal area networks by transporting Bluetooth communications over IP networks
- Authors: Mackie, David Sean
- Date: 2007 , 2007-03-29
- Subjects: Bluetooth technology , Communication -- Technological innovations , Communication -- Network analysis , TCP/IP (Computer network protocol) , Computer networks , Computer network protocols , Wireless communication systems
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4637 , http://hdl.handle.net/10962/d1006551 , Bluetooth technology , Communication -- Technological innovations , Communication -- Network analysis , TCP/IP (Computer network protocol) , Computer networks , Computer network protocols , Wireless communication systems
- Description: This thesis presents an investigation of how to extend the reach of a Bluetooth personal area network by introducing the concept of Bluetooth Hotspots. Currently two Bluetooth devices cannot communicate with each other unless they are within radio range, since Bluetooth is designed as a cable-replacement technology for wireless communications over short ranges. An investigation was done into the feasibility of creating Bluetooth hotspots that allow distant Bluetooth devices to communicate with each other by transporting their communications between these hotspots via an alternative network infrastructure such as an IP network. Two approaches were investigated: masquerading of remote devices by the local hotspot to allow seamless communications, and proxying services on remote devices by providing them on a local hotspot using a distributed service discovery database. The latter approach was used to develop applications capable of transporting Bluetooth’s RFCOMM and L2CAP protocols. Quantitative tests were performed to establish the throughput performance and latency of these transport applications. Furthermore, a number of selected Bluetooth services were tested, which led us to conclude that most data-based protocols can be transported by the system.
- Full Text:
- Date Issued: 2007
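At its core, the hotspot approach in the record above relays bytes between a local Bluetooth (RFCOMM or L2CAP) connection and a TCP link to the remote hotspot. The sketch below shows only the generic two-socket relay loop, assuming both sides are already-connected stream sockets; it is not the thesis's implementation, and in practice the Bluetooth side would be an RFCOMM socket rather than a plain TCP one.

```python
import select
import socket

def relay(sock_a, sock_b, bufsize=4096):
    """Copy bytes in both directions between two connected sockets until
    either side closes.

    In the hotspot scenario one socket would be the local RFCOMM connection
    and the other the TCP link carrying the traffic to the remote hotspot.
    """
    sockets = [sock_a, sock_b]
    peer = {sock_a: sock_b, sock_b: sock_a}
    while True:
        readable, _, _ = select.select(sockets, [], [])
        for s in readable:
            data = s.recv(bufsize)
            if not data:              # one side closed; stop relaying
                return
            peer[s].sendall(data)     # forward to the opposite side
```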
A detailed investigation of interoperability for web services
- Authors: Wright, Madeleine
- Date: 2006
- Subjects: Firefox , Web services , World Wide Web , Computer architecture , C# (Computer program language) , PHP (Computer program language) , Java (Computer program language)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4592 , http://hdl.handle.net/10962/d1004832 , Firefox , Web services , World Wide Web , Computer architecture , C# (Computer program language) , PHP (Computer program language) , Java (Computer program language)
- Description: The thesis presents a qualitative survey of web services' interoperability, offering a snapshot of development and trends at the end of 2005. It starts by examining the beginnings of web services in earlier distributed computing and middleware technologies, determining the distance from these approaches evident in current web-services architectures. It establishes a working definition of web services, examining the protocols that now seek to define it and the extent to which they contribute to its most crucial feature, interoperability. The thesis then considers the REST approach to web services as being in a class of its own, concluding that this approach to interoperable distributed computing is not only the simplest but also the most interoperable. It looks briefly at interoperability issues raised by technologies in the wider arena of Service Oriented Architecture. The chapter on protocols is complemented by a chapter that validates the qualitative findings by examining web services in practice. These have been implemented by a variety of toolkits and on different platforms. Included in the study is a preliminary examination of JAX-WS, the replacement for JAX-RPC, which is still under development. Although the main language of implementation is Java, the study includes services in C# and PHP and one implementation of a client using a Firefox extension. The study concludes that different forms of web service may co-exist with earlier middleware technologies. While remaining aware that there are still pitfalls that might yet derail the movement towards greater interoperability, the conclusion sounds an optimistic note that recent cooperation between different vendors may yet result in a solution that achieves interoperability through core web-service standards.
- Full Text:
- Date Issued: 2006
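The record above singles out the REST approach as the simplest and most interoperable style of web service. The sketch below contrasts a plain REST GET with a hand-built SOAP 1.1 envelope for a hypothetical temperature service; the endpoint URLs and operation names are invented for illustration only.

```python
import urllib.request

# REST: the resource is addressed by a URL and fetched with a plain HTTP GET.
def rest_get_temperature(city):
    url = f"http://weather.example.org/cities/{city}/temperature"   # hypothetical endpoint
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()

# SOAP 1.1: the same operation needs an XML envelope POSTed with a SOAPAction header.
SOAP_BODY = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetTemperature xmlns="http://weather.example.org/ws">
      <City>{city}</City>
    </GetTemperature>
  </soap:Body>
</soap:Envelope>"""

def soap_get_temperature(city):
    request = urllib.request.Request(
        "http://weather.example.org/soap",                           # hypothetical endpoint
        data=SOAP_BODY.format(city=city).encode(),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "http://weather.example.org/ws/GetTemperature"},
    )
    with urllib.request.urlopen(request) as resp:
        return resp.read().decode()
```

The size difference between the two calls is a fair proxy for the interoperability argument the thesis makes: the REST call requires nothing beyond HTTP itself.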
A framework for responsive content adaptation in electronic display networks
- Authors: West, Philip
- Date: 2006
- Subjects: Computer networks , Cell phone systems , Wireless communication systems , Mobile communication systems , HTML (Document markup language) , XML (Document markup language)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4589 , http://hdl.handle.net/10962/d1004824 , Computer networks , Cell phone systems , Wireless communication systems , Mobile communication systems , HTML (Document markup language) , XML (Document markup language)
- Description: Recent trends show an increase in the availability and functionality of handheld devices, wireless network technology, and electronic display networks. We propose the novel integration of these technologies to provide wireless access to content delivered to large-screen display systems. Content adaptation is used as a method of reformatting web pages to display more appropriately on handheld devices, and to remove unwanted content. A framework is presented that facilitates content adaptation, implemented as an adaptation layer, which is extended to provide personalization of adaptation settings and response to network conditions. The framework is implemented as a proxy server for a wireless network, and handles HTML and XML documents. Once a document has been requested by a user, the HTML/XML is retrieved and parsed, creating a Document Object Model tree representation. It is then altered according to the user’s personal settings or predefined settings, based on current network usage and the network resources available. Three adaptation techniques were implemented: spatial representation, which generates an image map of the document; text summarization, which creates a tree view representation of a document; and tag extraction, which replaces specific tags with links. Three proof-of-concept systems were developed in order to test the robustness of the framework. A system for use with digital slide shows, a digital signage system, and a generalized system for use with the Internet were implemented. Testing was performed by accessing sample web pages through the content adaptation proxy server. Tag extraction works correctly for all HTML and XML document structures, whereas spatial representation and text summarization are limited to a controlled subset. Results indicate that the adaptive system has the ability to reduce average bandwidth usage, by decreasing the amount of data on the network, thereby allowing a greater number of users access to content. This suggests that responsive content adaptation has a positive influence on network performance metrics.
- Full Text:
- Date Issued: 2006
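Of the three adaptation techniques in the record above, tag extraction is the most mechanical: parse the document into a DOM tree and swap selected elements for lightweight links. The sketch below assumes well-formed XHTML and uses an img-to-link rule purely as an example; the thesis's actual rule set is not given in the abstract.

```python
import xml.etree.ElementTree as ET

def extract_tags(xhtml, tag="img", attr="src"):
    """Replace every <img> element with an <a> link to its source.

    Assumes well-formed XHTML so ElementTree can build the DOM tree;
    the img-to-link mapping is just one example of tag extraction.
    """
    root = ET.fromstring(xhtml)
    for parent in root.iter():
        for i, child in enumerate(list(parent)):
            if child.tag == tag:
                link = ET.Element("a", {"href": child.get(attr, "#")})
                link.text = "[image]"
                link.tail = child.tail
                parent[i] = link      # swap the heavy element for a light link
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    page = '<html><body><p>Logo: <img src="logo.png"/> done.</p></body></html>'
    print(extract_tags(page))
```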
Investigating the viability of a framework for small scale, easily deployable and extensible hotspot management systems
- Authors: Thinyane, Mamello P
- Date: 2006
- Subjects: Local area networks (Computer networks) , Computer networks -- Management , Computer network architectures , Computer network protocols , Wireless communication systems , XML (Document markup language)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4638 , http://hdl.handle.net/10962/d1006553
- Description: The proliferation of PALs (Public Access Locations) is fuelling the development of new standards, protocols, services, and applications for WLANs (Wireless Local Area Networks). PALs are set up at public locations to meet continually changing, multi-service, multi-protocol user requirements. This research investigates the essential infrastructural requirements that will enable further proliferation of PALs, and consequently facilitate ubiquitous computing. Based on these requirements, an extensible architectural framework for PAL management systems that inherently facilitates the provisioning of multiple services and multiple protocols on PALs is derived. The ensuing framework, which is called Xobogel, is based on the microkernel architectural pattern and the IPDR (Internet Protocol Data Record) specification. Xobogel takes into consideration and supports the implementation of diverse business models for PALs, in respect of distinct environmental factors. It also facilitates next-generation network service usage accounting through a simple, flexible, and extensible XML-based usage record. The framework is subsequently validated for service element extensibility and simplicity through the design, implementation, and experimental deployment of SEHS (Small Extensible Hotspot System), a system based on the framework. The robustness and scalability of the framework are observed to be sufficient for SMME deployment, withstanding the stress testing experiments performed on SEHS. The range of service element and charging modules implemented confirms an acceptable level of flexibility and extensibility within the framework.
- Full Text:
- Date Issued: 2006
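The record above mentions usage accounting through a simple, flexible XML record in the spirit of the IPDR specification. The sketch below builds such a record; the element names are illustrative and are neither the real IPDR schema nor the format used by Xobogel/SEHS.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def usage_record(user, service, octets_in, octets_out):
    """Build a minimal XML usage record in the spirit of IPDR.

    Element names are placeholders chosen for readability; a real deployment
    would follow the actual IPDR (or Xobogel) schema.
    """
    rec = ET.Element("UsageRecord")
    ET.SubElement(rec, "Timestamp").text = datetime.now(timezone.utc).isoformat()
    ET.SubElement(rec, "User").text = user
    ET.SubElement(rec, "Service").text = service
    ET.SubElement(rec, "OctetsIn").text = str(octets_in)
    ET.SubElement(rec, "OctetsOut").text = str(octets_out)
    return ET.tostring(rec, encoding="unicode")

if __name__ == "__main__":
    print(usage_record("alice", "http-proxy", 120_000, 48_000))
```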
Investigating call control using MGCP in conjunction with SIP and H.323
- Authors: Jacobs, Ashley
- Date: 2005 , 2005-03-14
- Subjects: Communication -- Technological innovations , Digital telephone systems , Computer networks , Computer network protocols , Internet telephony
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4631 , http://hdl.handle.net/10962/d1006516 , Communication -- Technological innovations , Digital telephone systems , Computer networks , Computer network protocols , Internet telephony
- Description: Telephony used to mean using a telephone to call another telephone on the Public Switched Telephone Network (PSTN), and data networks were used purely to allow computers to communicate. However, with the advent of the Internet, telephony services have been extended to run on data networks. Telephone calls within the IP network are known as Voice over IP. These calls are carried by a number of protocols, with the most popular ones currently being Session Initiation Protocol (SIP) and H.323. Calls can be made from the IP network to the PSTN and vice versa through the use of a gateway. The gateway translates the packets from the IP network to circuits on the PSTN and vice versa to facilitate calls between the two networks. Gateways have evolved and are now split into two entities using the master/slave architecture. The master is an intelligent Media Gateway Controller (MGC) that handles the call control and signalling. The slave is a "dumb" Media Gateway (MG) that handles the translation of the media. The current gateway control protocols in use are Megaco/H.248, MGCP and Skinny. These protocols have proved themselves on the edge of the network. Furthermore, since they communicate with the call signalling VoIP protocols as well as the PSTN, they have to be the lingua franca between the two networks. Within the VoIP network, the number of call signalling protocols makes it difficult for endpoints to communicate with each other and for services to be created. This research investigates the use of Gateway Control Protocols as the lowest common denominator between the call signalling protocols SIP and H.323. More specifically, it uses MGCP to investigate service creation. It also considers the use of MGCP as a protocol translator between SIP and H.323. A service was created using MGCP to allow H.323 endpoints to send Short Message Service (SMS) messages. This service was then extended with minimal effort to SIP endpoints. This service investigated MGCP’s ability to handle call control from the H.323 and SIP endpoints. An MGC was then successfully used to perform as a protocol translator between SIP and H.323.
- Full Text:
- Date Issued: 2005
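MGCP, used in the record above, is a plain-text protocol: each command is a verb, a transaction id, an endpoint name and a version, followed by parameter lines. The sketch below composes a CreateConnection (CRCX) command in that general RFC 3435 layout and sends it to a gateway's default MGCP port; the endpoint name, call id and addresses are placeholders, not messages taken from the thesis.

```python
import socket

def build_crcx(transaction_id, endpoint, call_id):
    """Compose a minimal MGCP CreateConnection (CRCX) command.

    Follows the general RFC 3435 text layout (verb, transaction id,
    endpoint, version, then parameter lines); all values are placeholders.
    """
    lines = [
        f"CRCX {transaction_id} {endpoint} MGCP 1.0",
        f"C: {call_id}",          # call identifier
        "L: p:20, a:PCMU",        # local options: 20 ms packetisation, G.711 mu-law
        "M: sendrecv",            # connection mode
    ]
    return ("\r\n".join(lines) + "\r\n").encode()

def send_to_gateway(message, gateway_host, gateway_port=2427):
    """Send the command to the media gateway's default MGCP port (2427/udp)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, (gateway_host, gateway_port))

if __name__ == "__main__":
    crcx = build_crcx(1001, "aaln/1@gw1.example.net", "A3C47F21")
    send_to_gateway(crcx, "192.0.2.10")   # placeholder gateway address
```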
CREWS : a Component-driven, Run-time Extensible Web Service framework
- Authors: Parry, Dominic Charles
- Date: 2004
- Subjects: Component software -- Development , Computer software -- Reusability , Software reengineering , Web services
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4628 , http://hdl.handle.net/10962/d1006501 , Component software -- Development , Computer software -- Reusability , Software reengineering , Web services
- Description: There has been an increased focus in recent years on the development of re-usable software, in the form of objects and software components. This increase, together with pressures from enterprises conducting transactions on the Web to support all business interactions on all scales, has encouraged research towards the development of easily reconfigurable and highly adaptable Web services. This work investigates the ability of Component-Based Software Development (CBSD) to produce such systems, and proposes a more manageable use of CBSD methodologies. Component-Driven Software Development (CDSD) is introduced to enable better component manageability. Current Web service technologies are also examined to determine their ability to support extensible Web services, and a dynamic Web service architecture is proposed. The work also describes the development of two proof-of-concept systems, DREW Chat and Hamilton Bank. DREW Chat and Hamilton Bank are implementations of Web services that support extension dynamically and at run-time. DREW Chat is implemented on the client side, where the user is given the ability to change the client as required. Hamilton Bank is a server-side implementation, which is run-time customisable by both the user and the party offering the service. In each case, a generic architecture is produced to support dynamic Web services. These architectures are combined to produce CREWS, a Component-driven, Run-time Extensible Web Service solution that enables Web services to support the ever-changing needs of enterprises. A discussion of similar work is presented, identifying the strengths and weaknesses of our architecture when compared to other solutions.
- Full Text:
- Date Issued: 2004
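The record above describes run-time extension driven by an XML descriptor (DGCML, the thesis's own markup language), whose schema is not reproduced in the abstract. The sketch below therefore only illustrates the general idea of a run-time extensible client that reads a descriptor and loads the component it names; the element and attribute names, and the drew.chat module, are hypothetical.

```python
import importlib
import xml.etree.ElementTree as ET

DESCRIPTOR = """
<component name="chat-window" module="drew.chat" factory="create_panel">
  <param key="title" value="DREW Chat"/>
</component>
"""  # hypothetical descriptor; real DGCML uses its own schema

def load_component(descriptor_xml):
    """Parse a component descriptor and instantiate the component it names.

    The named module is imported at run time, so new components can be
    added without rebuilding the client.
    """
    elem = ET.fromstring(descriptor_xml)
    module = importlib.import_module(elem.get("module"))
    factory = getattr(module, elem.get("factory"))
    params = {p.get("key"): p.get("value") for p in elem.findall("param")}
    return factory(**params)
```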
An investigation into the viability of deploying thin client technology to support effective learning in a disadvantaged, rural high school setting
- Authors: Ndwe, Tembalethu Jama
- Date: 2002
- Subjects: Network computers , Education -- Data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4627 , http://hdl.handle.net/10962/d1006500 , Network computers , Education -- Data processing
- Description: Computer Based Training offers many attractive learning opportunities for high school pupils. Its deployment in economically depressed and educationally marginalized rural schools is extremely uncommon due to the high technology skills and costs involved in its deployment and ongoing maintenance. This thesis puts forward thin client technology as a potential solution to the needs of education environments of this kind. A functional business case is developed and evaluated in this thesis, based upon a requirements analysis of media delivery in learning, and upon formal cost/performance models and a deployment field trial. Because of the economic constraints of the envisaged deployment area in rural education, an industrial field trial is used, and the aspects of this trial that can be carried over to the rural school situation have been used to assess performance and cost indicators. Our study finds that thin client technology could be deployed and maintained more cost effectively than conventional fat client solutions in rural schools, that it is capable of supporting the learning elements needed in this deployment area, and that it is able to deliver the predominantly text based applications currently being used in schools. However, we find that technological improvements are needed before future multimedia-intensive applications can be adequately supported.
- Full Text:
- Date Issued: 2002
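The record above refers to formal cost/performance models for comparing lab configurations. The sketch below is a deliberately simple total-cost-of-ownership comparison with entirely hypothetical figures; the thesis's actual model is more detailed and uses locally sourced costs.

```python
def lab_cost(seats, unit_cost, server_cost=0.0, annual_support=0.0, years=5):
    """Very simple total-cost-of-ownership model for a school computer lab.

    All monetary values are placeholders used only to show the shape of
    the comparison between thin-client and fat-client deployments.
    """
    return server_cost + seats * unit_cost + years * annual_support

if __name__ == "__main__":
    thin = lab_cost(seats=20, unit_cost=1500, server_cost=15000, annual_support=2000)
    fat = lab_cost(seats=20, unit_cost=6000, server_cost=0, annual_support=6000)
    print(f"thin-client lab: {thin}  fat-client lab: {fat}")
```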
RADGIS - an improved architecture for runtime-extensible, distributed GIS applications
- Authors: Preston, Richard Michael
- Date: 2002
- Subjects: Geographic information systems
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4626 , http://hdl.handle.net/10962/d1006497
- Description: A number of GIS architectures and technologies have emerged recently to facilitate the visualisation and processing of geospatial data over the Web. The work presented in this dissertation builds on these efforts and undertakes to overcome some of the major problems with traditional GIS client architectures, including application bloat, lack of customisability, and lack of interoperability between GIS products. In this dissertation we describe how a new client-side GIS architecture was developed and implemented as a proof-of-concept application called RADGIS, which is based on open standards and emerging distributed component-based software paradigms. RADGIS reflects the current trend in development focus from Web browser-based applications to customised clients, based on open standards, that make use of distributed Web services. While much attention has been paid to exposing data on the Web, there is growing momentum towards providing “value-added” services. A good example of this is the tremendous industry interest in the provision of location-based services, which has been discussed as a special use-case of our RADGIS architecture. Thus, in the near future client applications will not simply be used to access data transparently, but will also become facilitators for the location-transparent invocation of local and remote services. This flexible architecture will ensure that data can be stored and processed independently of the location of the client that wishes to view or interact with it. Our RADGIS application enables content developers and end-users to create and/or customise GIS applications dynamically at runtime through the incorporation of GIS services. This ensures that the client application has the flexibility to withstand changing levels of expertise or user requirements. These GIS services are implemented as components that execute locally on the client machine, or as remote CORBA Objects or EJBs. Assembly and deployment of these components is achieved using a specialised XML descriptor. This XML descriptor is written using a markup language that we developed specifically for this purpose, called DGCML, which contains deployment information, as well as a GUI specification and links to an XML-based help system that can be merged with the RADGIS client application’s existing help system. Thus, no additional requirements are imposed on object developers by the RADGIS architecture, i.e. there is no need to rewrite existing objects since DGCML acts as a runtime-customisable wrapper, allowing existing objects to be utilised by RADGIS. While the focus of this thesis has been on overcoming the above-mentioned problems with traditional GIS applications, the work described here can also be applied in a much broader context, especially in the development of highly customisable client applications that are able to integrate Web services at runtime.
- Full Text:
- Date Issued: 2002
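A central idea in the record above is that a GIS service named in a descriptor may run locally or as a remote CORBA object or EJB, transparently to the caller. The sketch below illustrates that dispatch decision; the descriptor element names and the corbaloc URI are hypothetical, since the real DGCML schema is not given in the abstract.

```python
import xml.etree.ElementTree as ET

DESCRIPTOR = """
<gis-service name="buffer" location="remote"
             uri="corbaloc::gis.example.org/BufferService"/>
"""  # hypothetical; real DGCML descriptors also carry GUI and help-system links

def resolve_service(descriptor_xml, local_registry, remote_lookup):
    """Return a callable for the named service, local or remote, transparently.

    `local_registry` maps service names to in-process callables;
    `remote_lookup` turns a URI into a remote stub (e.g. a CORBA or EJB proxy).
    """
    elem = ET.fromstring(descriptor_xml)
    if elem.get("location") == "local":
        return local_registry[elem.get("name")]
    return remote_lookup(elem.get("uri"))
```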
Adaptive flow management of multimedia data with a variable quality of service
- Authors: Littlejohn, Paul Stephen
- Date: 1999
- Subjects: Multimedia systems , Multimedia systems -- Evaluation
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4605 , http://hdl.handle.net/10962/d1004863 , Multimedia systems , Multimedia systems -- Evaluation
- Description: Much of the current research involving the delivery of multimedia data focuses on the need to maintain a constant Quality of Service (QoS) throughout the lifetime of the connection. Delivery of a constant QoS requires that a guaranteed bandwidth is available for the entire connection. Techniques, such as resource reservation, are able to provide for this. These approaches work well across networks that are fairly homogeneous, and which have sufficient resources to sustain the guarantees, but are not currently viable over either heterogeneous or unreliable networks. To cater for the great number of networks (including the Internet) which do not conform to the ideal conditions required by constant Quality of Service mechanisms, this thesis proposes a different approach, that of dynamically adjusting the QoS in response to changing network conditions. Instead of optimizing the Quality of Service, the approach used in this thesis seeks to ensure the delivery of the information, at the best possible quality, as determined by the carrying ability of the poorest segment in the network link. To illustrate and examine this model, a service-adaptive system is described, which allows for the streaming of multimedia audio data across a network using the Real-time Transport Protocol. This application continually adjusts its service requests in response to the current network conditions. A client/server model is outlined whereby the server attempts to provide scalable media content, in this case audio data, to a client at the highest possible Quality of Service. The thesis presents and evaluates a number of renegotiation methods for adjusting the Quality of Service between the client and server. An Adjusted QoS renegotiation algorithm is suggested, which delivers the best possible quality within an acceptable loss boundary.
- Full Text:
- Date Issued: 1999
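The renegotiation methods themselves are not detailed in the abstract above; the sketch below only illustrates the general idea of loss-driven quality adjustment, in which a client steps the requested audio quality down or up depending on whether observed packet loss stays inside an acceptable boundary. The quality levels, thresholds and step policy are assumptions for illustration, not the thesis's algorithm.

```python
# Illustrative loss-driven QoS renegotiation loop (not the thesis's algorithm).
# The client requests the highest audio quality whose observed loss stays
# within an acceptable boundary, stepping down when loss exceeds it.

QUALITY_LEVELS_KBPS = [32, 64, 96, 128]   # assumed scalable audio encodings
ACCEPTABLE_LOSS = 0.05                    # assumed 5% loss boundary

def renegotiate(current_index, observed_loss):
    """Return the next quality level index given the measured loss rate."""
    if observed_loss > ACCEPTABLE_LOSS and current_index > 0:
        return current_index - 1          # degrade: the network cannot carry this rate
    if observed_loss < ACCEPTABLE_LOSS / 2 and current_index < len(QUALITY_LEVELS_KBPS) - 1:
        return current_index + 1          # probe upward when loss is comfortably low
    return current_index                  # otherwise hold the current quality

if __name__ == "__main__":
    level, loss_reports = 3, [0.01, 0.09, 0.12, 0.04, 0.01, 0.00]
    for loss in loss_reports:             # simulated per-interval loss measurements
        level = renegotiate(level, loss)
        print(f"loss={loss:.2f} -> request {QUALITY_LEVELS_KBPS[level]} kbps")
```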
Grouping complex systems for classification and parallel simulation
- Authors: Ikram, Ismail Mohamed
- Date: 1997
- Subjects: Digital computer simulation
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4662 , http://hdl.handle.net/10962/d1006665
- Description: This thesis is concerned with grouping complex systems by means of a concurrent model, in order to aid in (i) the formulation of classifications and (ii) the induction of parallel simulation programs. It observes, and seeks to formalize and then exploit, the strong structural resemblance between complex systems and occam programs. The thesis hypothesizes that groups of complex systems may be discriminated according to shared structural and behavioural characteristics. Such an analysis of the complex systems domain may be performed in the abstract with the aid of a model for capturing interesting features of complex systems. The resulting groups would form a classification of complex systems. An additional hypothesis is that, insofar as the model is able to capture sufficient programmatic information, these groups may be used to define, automatically, algorithmic skeletons for the concurrent simulation of complex systems. In order to test these hypotheses, a specification model and an accompanying formal notation are developed. The model expresses properties of complex systems in a mixture of object-oriented and process-oriented styles. The model is then used as the basis for performing both classification and automatic induction of parallel simulation programs. The thesis takes the view that specification models should not be overly complex, especially if the specifications are meant to be executable. Therefore the requirement for explicit consideration of concurrency on the part of specifiers is minimized. The thesis formulates specifications of classes of cellular automata and neural networks according to the proposed model. Procedures for verification and induction of parallel simulation programs are also included.
- Full Text:
- Date Issued: 1997
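As a rough illustration of discriminating groups of complex systems by shared structural and behavioural characteristics, the sketch below describes each system by a small feature tuple and groups identical tuples together. The features chosen (topology, interaction style, update discipline) are assumptions made for illustration and are much simpler than the thesis's specification model.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical structural/behavioural features; the thesis's specification
# model is richer, this only illustrates grouping by shared characteristics.
@dataclass(frozen=True)
class SystemSpec:
    name: str
    topology: str       # e.g. "grid", "layered"
    interaction: str    # e.g. "local-neighbourhood", "weighted-links"
    update: str         # e.g. "synchronous", "asynchronous"

def classify(specs):
    """Group systems that share all structural/behavioural features."""
    groups = defaultdict(list)
    for spec in specs:
        groups[(spec.topology, spec.interaction, spec.update)].append(spec.name)
    return dict(groups)

if __name__ == "__main__":
    specs = [
        SystemSpec("Game of Life", "grid", "local-neighbourhood", "synchronous"),
        SystemSpec("Forest-fire CA", "grid", "local-neighbourhood", "synchronous"),
        SystemSpec("Hopfield net", "layered", "weighted-links", "asynchronous"),
    ]
    for features, members in classify(specs).items():
        print(features, "->", members)   # each group could map to one simulation skeleton
```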
Modelling parallel and distributed virtual reality systems for performance analysis and comparison
- Authors: Bangay, Shaun Douglas
- Date: 1997
- Subjects: Virtual reality Computer simulation
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4657 , http://hdl.handle.net/10962/d1006656
- Description: Most Virtual Reality systems employ some form of parallel processing, making use of multiple processors which are often distributed over large areas geographically, and which communicate via various forms of message passing. The approaches to parallel decomposition differ for each system, as do the performance implications of each approach. Previous comparisons have only identified and categorized the different approaches. None have examined the performance issues involved in the different parallel decompositions. Performance measurement for a Virtual Reality system differs from that of other parallel systems in that some measure of the delays involved with the interaction of the separate components is required, in addition to the measure of the throughput of the system. Existing performance analysis approaches are typically not well suited to providing both these measures. This thesis describes the development of a performance analysis technique that is able to provide measures of both interaction latency and cycle time for a model of a Virtual Reality system. This technique allows performance measures to be generated as symbolic expressions describing the relationships between the delays in the model. It automatically generates constraint regions, specifying the values of the system parameters for which performance characteristics change. The performance analysis technique shows strong agreement with values measured from implementation of three common decomposition strategies on two message passing architectures. The technique is successfully applied to a range of parallel decomposition strategies found in Parallel and Distributed Virtual Reality systems. For each system, the primary decomposition techniques are isolated and analysed to determine their performance characteristics. This analysis allows a comparison of the various decomposition techniques, and in many cases reveals trends in their behaviour that would have gone unnoticed with alternative analysis techniques. The work described in this thesis supports the Performance Analysis and Comparison of Parallel and Distributed Virtual Reality systems. In addition it acts as a reference, describing the performance characteristics of decomposition strategies used in Virtual Reality systems.
- Full Text:
- Date Issued: 1997
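The abstract above describes performance measures generated as symbolic expressions over the delays in the model. Purely as an illustration of that style of result, and not the thesis's actual technique, the sketch below uses SymPy to write the cycle time and end-to-end interaction latency of an assumed three-stage pipeline decomposition (input handling, simulation, rendering) symbolically.

```python
from sympy import symbols, Max

# Delays of a hypothetical three-stage pipeline decomposition of a VR system:
# input handling, simulation, rendering, plus inter-stage communication.
t_in, t_sim, t_rend, t_comm = symbols("t_in t_sim t_rend t_comm", positive=True)

# With pipelining, throughput is limited by the slowest stage...
cycle_time = Max(t_in, t_sim, t_rend)
# ...while a user interaction still traverses every stage before its effect is seen.
interaction_latency = t_in + t_comm + t_sim + t_comm + t_rend

print("cycle time          =", cycle_time)
print("interaction latency =", interaction_latency)
# Substituting concrete delays recovers numeric predictions, e.g.:
print(interaction_latency.subs({t_in: 2, t_sim: 10, t_rend: 8, t_comm: 1}))
```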
A networking approach to sharing music studio resources
- Authors: Foss, Richard John
- Date: 1996
- Subjects: MIDI (Standard) Computer sound processing Sound -- Recording and reproducing -- Digital techniques
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4659 , http://hdl.handle.net/10962/d1006660
- Description: This thesis investigates the extent to which networking technology can be used to provide remote workstation access to a pool of shared music studio resources. A pilot system is described in which MIDI messages, studio control data, and audio signals flow between the workstations and a studio server. A booking and timing facility avoids contention and allows for accurate reports of studio usage. The operation of the system has been evaluated in terms of its ability to satisfy three fundamental goals, namely the remote, shared and centralized access to studio resources. Three essential network configurations have been identified, incorporating a mix of star and bus topologies, and their relative potential for satisfying the fundamental goals has been highlighted.
- Full Text:
- Date Issued: 1996
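The booking and timing facility is only mentioned, not specified, in the abstract above. The sketch below shows one plausible way such a facility could avoid contention: rejecting overlapping reservations for a shared studio resource and accumulating per-user usage for reporting. The data model and all names are assumptions, not the pilot system's design.

```python
from dataclasses import dataclass

@dataclass
class Booking:
    user: str
    start: float   # hours, e.g. 13.0 == 13:00
    end: float

class StudioResource:
    """Hypothetical booking facility: one shared resource, non-overlapping slots."""

    def __init__(self, name):
        self.name = name
        self.bookings = []

    def book(self, user, start, end):
        # Reject any request that overlaps an existing booking (contention avoidance).
        for b in self.bookings:
            if start < b.end and b.start < end:
                raise ValueError(f"{self.name} already booked by {b.user}")
        self.bookings.append(Booking(user, start, end))

    def usage_report(self):
        # Per-user hours, as the abstract's timing facility would report.
        hours = {}
        for b in self.bookings:
            hours[b.user] = hours.get(b.user, 0.0) + (b.end - b.start)
        return hours

if __name__ == "__main__":
    sampler = StudioResource("sampler")
    sampler.book("alice", 9.0, 11.0)
    sampler.book("bob", 11.0, 12.5)
    print(sampler.usage_report())   # {'alice': 2.0, 'bob': 1.5}
```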
An investigation into some critical computer networking parameters : Internet addressing and routing
- Authors: Isted, Edwin David
- Date: 1996
- Subjects: Computer networks , Internet , Electronic mail systems
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4608 , http://hdl.handle.net/10962/d1004874 , Computer networks , Internet , Electronic mail systems
- Description: This thesis describes the evaluation of several proposals suggested as replacements for the current Internet's TCP/IP protocol suite. The emphasis of this thesis is on how the proposals solve the current routing and addressing problems associated with the Internet. The addressing problem is found to be related to address space depletion, and the routing problem related to excessive routing costs. The evaluation is performed based on criteria selected for their applicability as future Internet design criteria. All the protocols are evaluated using the above-mentioned criteria. It is concluded that the most suitable addressing mechanism is an expandable multi-level format, with a logical separation of location and host identification information. Similarly, the most suitable network representation technique is found to be an unrestricted hierarchical structure which uses a suitable abstraction mechanism. It is further found that these two solutions could adequately solve the existing addressing and routing problems and allow substantial growth of the Internet.
- Full Text:
- Date Issued: 1996
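To make the recommended addressing structure concrete, the sketch below models an expandable multi-level address whose locator (location) part is kept logically separate from its host identifier, and shows how routes aggregate by locator prefix at different levels of the hierarchy. The notation and field structure are invented for illustration; they are not a proposal from the thesis.

```python
from collections import defaultdict

# Hypothetical multi-level address: a variable-length locator path (location
# information) logically separated from the host identifier by a "/".
def parse(address):
    locator, host = address.split("/")        # e.g. "za.grahamstown.ru/host42"
    return tuple(locator.split(".")), host

def aggregate(addresses, levels):
    """Count hosts per routing prefix when routes are aggregated at `levels`."""
    table = defaultdict(int)
    for addr in addresses:
        locator, _host = parse(addr)
        table[locator[:levels]] += 1
    return dict(table)

if __name__ == "__main__":
    hosts = ["za.grahamstown.ru/h1", "za.grahamstown.ru/h2", "za.capetown.uct/h9"]
    # A top-level router needs only one entry per first-level prefix, illustrating
    # how a hierarchical, abstracting structure limits routing cost as the network grows.
    print(aggregate(hosts, 1))   # {('za',): 3}
    print(aggregate(hosts, 2))   # {('za', 'grahamstown'): 2, ('za', 'capetown'): 1}
```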
Virtual sculpting : an investigation of directly manipulated free-form deformation in a virtual environment
- Authors: Gain, James Edward
- Date: 1996
- Subjects: Computer simulation , Computer graphics , Virtual reality
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4660 , http://hdl.handle.net/10962/d1006661 , Computer simulation , Computer graphics , Virtual reality
- Description: This thesis presents a Virtual Sculpting system, which addresses the problem of Free-Form Solid Modelling. The disparate elements of a Polygon-Mesh representation, a Directly Manipulated Free-Form Deformation sculpting tool, and a Virtual Environment are drawn into a cohesive whole under the mantle of a clay-sculpting metaphor. This enables a user to mould and manipulate a synthetic solid interactively as if it were composed of malleable clay. The focus of this study is on the interactivity, intuitivity and versatility of such a system. To this end, a range of improvements is investigated which significantly enhances the efficiency and correctness of Directly Manipulated Free-Form Deformation, both separately and as a seamless component of the Virtual Sculpting system.
- Full Text:
- Date Issued: 1996
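Free-Form Deformation itself is a published technique (a trivariate Bernstein control lattice, after Sederberg and Parry), so a minimal sketch of the underlying deformation step is given below: a point expressed in local lattice coordinates is re-evaluated through the Bernstein basis after a control point moves. The lattice size and displacement are arbitrary, and none of the thesis's direct-manipulation or correctness improvements are shown.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * (t ** i) * ((1 - t) ** (n - i))

def ffd(point, lattice):
    """Deform a point (local lattice coordinates in [0,1]^3) through a
    trivariate Bernstein lattice of control points."""
    l, m, n = (d - 1 for d in lattice.shape[:3])
    s, t, u = point
    out = np.zeros(3)
    for i in range(l + 1):
        for j in range(m + 1):
            for k in range(n + 1):
                w = bernstein(l, i, s) * bernstein(m, j, t) * bernstein(n, k, u)
                out += w * lattice[i, j, k]
    return out

if __name__ == "__main__":
    # A 2x2x2 control lattice spanning the unit cube (undeformed positions).
    lattice = np.array([[[[i, j, k] for k in (0.0, 1.0)]
                         for j in (0.0, 1.0)]
                        for i in (0.0, 1.0)])
    centre = np.array([0.5, 0.5, 0.5])
    print("before:", ffd(centre, lattice))     # ~[0.5, 0.5, 0.5]
    lattice[1, 1, 1] += [0.0, 0.0, 0.5]        # "pull" one control point upward
    print("after :", ffd(centre, lattice))     # the embedded point follows the tool
```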
Remora : implementing adaptive parallelism on a heterogeneous cluster of networked workstations
- Authors: Rehmet, Geoffrey Michael
- Date: 1995
- Subjects: LINDA (Computer system) , Local area networks (Computer networks) , Computer networks , Remora (Computer system)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4673 , http://hdl.handle.net/10962/d1006696 , LINDA (Computer system) , Local area networks (Computer networks) , Computer networks , Remora (Computer system)
- Description: Computers connected to a local area network are often only fully utilized for short periods of time. In fact, most workstations are not used at all for a significant portion of the day. The combined "idle time" of the workstations on a network constitutes a significant computing resource, which is generally wasted. If harnessed properly, such a resource could constitute a cheap alternative to expensive high-performance computers. Adaptive parallelism refers to the parallel execution of a computation on a dynamically changing set of processors. This thesis investigates the viability of this approach as a vehicle to harness the "idle cycles" available on a heterogeneous cluster of networked computers. A system, called Remora, which implements adaptive parallelism via the Linda programming paradigm, is presented. Experiments, performed using Remora, show that adaptive parallelism provides an efficient vehicle for using idle processor cycles, without having an adverse effect on the tasks which constitute the normal workload of the computers being used.
- Full Text:
- Date Issued: 1995
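Linda coordinates workers through a shared tuple space, with operations conventionally named out (deposit), in (withdraw a matching tuple) and rd (read). The toy, single-process sketch below only illustrates that style of coordination with a pool of worker threads that can join and withdraw; it is in no way Remora's distributed, adaptive implementation, and the tag-based matching is a simplification.

```python
import queue
import threading

class TupleSpace:
    """Toy Linda-style tuple space: out() deposits a tuple, in_() blocks until
    it can withdraw a tuple whose first field equals the given tag."""
    def __init__(self):
        self._q = queue.Queue()
    def out(self, *tup):
        self._q.put(tup)
    def in_(self, tag):
        while True:
            tup = self._q.get()
            if tup[0] == tag:
                return tup
            self._q.put(tup)          # not a match: return it to the space

def worker(space, results):
    # Each worker repeatedly withdraws a task, computes, and records a result.
    while True:
        _tag, payload = space.in_("task")
        if payload is None:           # poison pill: this worker withdraws from the pool
            return
        results.put(payload * payload)

if __name__ == "__main__":
    space, results = TupleSpace(), queue.Queue()
    for n in range(10):
        space.out("task", n)          # the master deposits work tuples
    threads = [threading.Thread(target=worker, args=(space, results)) for _ in range(3)]
    for t in threads:
        t.start()
    for _ in threads:
        space.out("task", None)       # tell workers to stop once work is exhausted
    for t in threads:
        t.join()
    print(sorted(results.get() for _ in range(10)))
```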
Behavioural model debugging in Linda
- Authors: Sewry, David Andrew
- Date: 1994
- Subjects: LINDA (Computer system) Debugging in computer science
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4674 , http://hdl.handle.net/10962/d1006697
- Description: This thesis investigates event-based behavioural model debugging in Linda. A study is presented of the Linda parallel programming paradigm, its amenability to debugging, and a model for debugging Linda programs using Milner's CCS. In support of the construction of expected behaviour models, a Linda program specification language is proposed. A behaviour recognition engine that is based on such specifications is also discussed. It is shown that Linda's distinctive characteristics make it amenable to debugging without the usual problems associated with parallel debuggers. Furthermore, it is shown that a behavioural model debugger, based on the proposed specification language, effectively exploits the debugging opportunity. The ideas developed in the thesis are demonstrated in an experimental Modula-2 Linda system.
- Full Text:
- Date Issued: 1994
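The specification language and CCS-based model are not reproduced in the abstract above. The sketch below shows only the flavour of event-based behaviour recognition: an observed trace of Linda operations is checked against an expected pattern and the first deviation is reported. The event encoding and pattern notation are invented for illustration, not the thesis's language.

```python
# Toy behaviour recogniser: compare an observed trace of Linda operations
# (operation, tuple tag) against an expected behavioural model given as a
# sequence in which a "*" tag matches anything. Purely illustrative.

def recognise(expected, observed):
    """Return None if the trace conforms to the model, else the index and
    event of the first deviation."""
    for i, (want, got) in enumerate(zip(expected, observed)):
        op_ok = want[0] == got[0]
        tag_ok = want[1] in ("*", got[1])
        if not (op_ok and tag_ok):
            return i, got
    if len(observed) != len(expected):
        return min(len(observed), len(expected)), None
    return None

if __name__ == "__main__":
    model = [("out", "task"), ("in", "task"), ("out", "result")]
    good = [("out", "task"), ("in", "task"), ("out", "result")]
    bad = [("out", "task"), ("rd", "task"), ("out", "result")]
    print(recognise(model, good))   # None: behaviour matches the model
    print(recognise(model, bad))    # (1, ('rd', 'task')): deviation reported
```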
Cogitator : a parallel, fuzzy, database-driven expert system
- Authors: Baise, Paul
- Date: 1994 , 2012-10-08
- Subjects: Expert systems (Computer science) , Artificial intelligence -- Computer programs , System design , Cogitator (Computer system)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4667 , http://hdl.handle.net/10962/d1006684 , Expert systems (Computer science) , Artificial intelligence -- Computer programs , System design , Cogitator (Computer system)
- Description: The quest to build anthropomorphic machines has led researchers to focus on knowledge and the manipulation thereof. Recently, the expert system was proposed as a solution, working well in small, well understood domains. However these initial attempts highlighted the tedious process associated with building systems to display intelligence, the most notable being the Knowledge Acquisition Bottleneck. Attempts to circumvent this problem have led researchers to propose the use of machine learning databases as a source of knowledge. Attempts to utilise databases as sources of knowledge have led to the development of Database-Driven Expert Systems. Furthermore, it has been ascertained that a requisite for intelligent systems is powerful computation. In response to these problems and proposals, a new type of database-driven expert system, Cogitator, is proposed. It is shown to circumvent the Knowledge Acquisition Bottleneck and to possess many other advantages over both traditional expert systems and connectionist systems, whilst suffering no serious disadvantages.
- Full Text:
- Date Issued: 1994
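The abstract above gives little of Cogitator's mechanism, so the sketch below only illustrates, in general terms, what the "fuzzy" ingredient of a fuzzy expert system looks like: graded membership functions and a rule whose conclusion carries a degree of support rather than a crisp yes/no. Nothing here reflects Cogitator's actual design, rule base or parallel, database-driven architecture.

```python
# Generic fuzzy rule evaluation, purely for illustration of fuzzy inference.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: rises over a..b, flat over b..c, falls over c..d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def rule_fever_and_cough(temp_c, cough_strength):
    # IF temperature is high AND cough is strong THEN flu (min acts as fuzzy AND).
    high_temp = trapezoid(temp_c, 37.0, 38.5, 42.0, 43.0)
    strong_cough = cough_strength          # already a membership degree in [0, 1]
    return min(high_temp, strong_cough)

if __name__ == "__main__":
    # Conclusions carry degrees of support rather than crisp answers:
    print(rule_fever_and_cough(39.0, 0.8))   # strong support for the conclusion
    print(rule_fever_and_cough(37.2, 0.8))   # weak support: temperature barely "high"
```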