A common analysis framework for simulated streaming-video networks
- Authors: Mulumba, Patrick
- Date: 2009
- Subjects: Computer networks -- Management , Streaming video , Mass media -- Technological innovations
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4590 , http://hdl.handle.net/10962/d1004828 , Computer networks -- Management , Streaming video , Mass media -- Technological innovations
- Description: Distributed media streaming has been driven by the combination of improved media compression techniques and an increase in the availability of bandwidth. This increase has led to the development of various streaming distribution engines (systems/services), which currently provide the majority of the streaming media available throughout the Internet. This study aimed to analyse a range of existing commercial and open-source streaming media distribution engines, and to classify them in such a way as to define a Common Analysis Framework for Simulated Streaming-Video Networks (CAFSS-Net). This common framework was used as the basis for a simulation tool intended to aid in the development and deployment of streaming media networks and to predict the performance impacts of network configuration changes, video features (scene complexity, resolution) and general scaling. CAFSS-Net consists of six components: the server, the client(s), the network simulator, the video publishing tools, the videos and the evaluation tool-set. Test scenarios are presented consisting of different network configurations, scales and external traffic specifications. From these test scenarios, results were obtained to highlight interesting observations and to provide an overview of the different test specifications for this study. From these results, an analysis of the system was performed, yielding relationships between the videos, the different bandwidths, the different measurement tools and the different components of CAFSS-Net. Based on the analysis of the results, the implications for CAFSS-Net highlighted different achievements and proposals for future work for the different components. CAFSS-Net was able to successfully integrate all of its components to evaluate the different streaming scenarios. The streaming server, client and video components accomplished their objectives. It is noted that although the video publishing tool was able to provide the necessary compression/decompression services, proposals for the implementation of alternative compression/decompression schemes could serve as a suitable extension. The network simulator and evaluation tool-set components were also successful, but future tests (particularly in low-bandwidth scenarios) are suggested in order to further improve the accuracy of the framework as a whole. CAFSS-Net is especially successful at analysing high-bandwidth connections, with results similar to those of the physical network tests.
- Full Text:
- Date Issued: 2009
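A minimal sketch of how a CAFSS-Net-style test scenario might be parameterised and evaluated. All class and field names here are invented for illustration; the thesis's actual simulator is far richer, and the saturation estimate below is a crude stand-in for a real network simulation run.

```python
# Illustrative only: hypothetical wiring of a streaming test scenario.
from dataclasses import dataclass

@dataclass
class TestScenario:
    bandwidth_kbps: int          # simulated link capacity
    external_traffic_kbps: int   # competing background load
    clients: int                 # streaming clients at this scale

@dataclass
class Video:
    name: str
    resolution: str
    scene_complexity: str        # e.g. "low", "high"
    bitrate_kbps: int

def run_scenario(scenario: TestScenario, video: Video) -> dict:
    """Crude stand-in for a simulation run: checks whether the link can
    carry the requested streams and reports the effective per-client rate."""
    demand = video.bitrate_kbps * scenario.clients
    available = scenario.bandwidth_kbps - scenario.external_traffic_kbps
    per_client = min(video.bitrate_kbps, max(available, 0) / scenario.clients)
    return {"video": video.name, "demand_kbps": demand,
            "available_kbps": available, "per_client_kbps": per_client,
            "saturated": demand > available}

scenario = TestScenario(bandwidth_kbps=10_000, external_traffic_kbps=2_000, clients=8)
print(run_scenario(scenario, Video("news_clip", "640x480", "high", 900)))
```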
A detailed investigation of interoperability for web services
- Authors: Wright, Madeleine
- Date: 2006
- Subjects: Firefox , Web services , World Wide Web , Computer architecture , C# (Computer program language) , PHP (Computer program language) , Java (Computer program language)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4592 , http://hdl.handle.net/10962/d1004832 , Firefox , Web services , World Wide Web , Computer architecture , C# (Computer program language) , PHP (Computer program language) , Java (Computer program language)
- Description: The thesis presents a qualitative survey of web services' interoperability, offering a snapshot of development and trends at the end of 2005. It starts by examining the beginnings of web services in earlier distributed computing and middleware technologies, determining the distance from these approaches evident in current web-services architectures. It establishes a working definition of web services, examining the protocols that now seek to define it and the extent to which they contribute to its most crucial feature, interoperability. The thesis then considers the REST approach to web services as being in a class of its own, concluding that this approach to interoperable distributed computing is not only the simplest but also the most interoperable. It looks briefly at interoperability issues raised by technologies in the wider arena of Service Oriented Architecture. The chapter on protocols is complemented by a chapter that validates the qualitative findings by examining web services in practice. These have been implemented by a variety of toolkits and on different platforms. Included in the study is a preliminary examination of JAX-WS, the replacement for JAX-RPC, which is still under development. Although the main language of implementation is Java, the study includes services in C# and PHP and one implementation of a client using a Firefox extension. The study concludes that different forms of web service may co-exist with earlier middleware technologies. While remaining aware that there are still pitfalls that might yet derail the movement towards greater interoperability, the conclusion sounds an optimistic note that recent cooperation between different vendors may yet result in a solution that achieves interoperability through core web-service standards.
- Full Text:
- Date Issued: 2006
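A two-line sketch of the simplicity the thesis attributes to REST: a service call is just an HTTP request on a resource URI, with no SOAP envelope, WSDL stub, or toolkit-generated proxy. The endpoint URL is hypothetical.

```python
# Illustrative only: a REST-style web service call is plain HTTP.
from urllib.request import urlopen

with urlopen("http://example.org/books/0-201-61622-X") as response:
    print(response.read().decode("utf-8"))  # an XML or JSON representation
```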
A framework for responsive content adaptation in electronic display networks
- Authors: West, Philip
- Date: 2006
- Subjects: Computer networks , Cell phone systems , Wireless communication systems , Mobile communication systems , HTML (Document markup language) , XML (Document markup language)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4589 , http://hdl.handle.net/10962/d1004824 , Computer networks , Cell phone systems , Wireless communication systems , Mobile communication systems , HTML (Document markup language) , XML (Document markup language)
- Description: Recent trends show an increase in the availability and functionality of handheld devices, wireless network technology, and electronic display networks. We propose the novel integration of these technologies to provide wireless access to content delivered to large-screen display systems. Content adaptation is used as a method of reformatting web pages to display more appropriately on handheld devices, and to remove unwanted content. A framework is presented that facilitates content adaptation, implemented as an adaptation layer, which is extended to provide personalization of adaptation settings and response to network conditions. The framework is implemented as a proxy server for a wireless network, and handles HTML and XML documents. Once a document has been requested by a user, the HTML/XML is retrieved and parsed, creating a Document Object Model tree representation. It is then altered according to the user’s personal settings or predefined settings, based on current network usage and the network resources available. Three adaptation techniques were implemented: spatial representation, which generates an image map of the document; text summarization, which creates a tree view representation of a document; and tag extraction, which replaces specific tags with links. Three proof-of-concept systems were developed in order to test the robustness of the framework: a system for use with digital slide shows, a digital signage system, and a generalized system for use with the Internet. Testing was performed by accessing sample web pages through the content adaptation proxy server. Tag extraction works correctly for all HTML and XML document structures, whereas spatial representation and text summarization are limited to a controlled subset. Results indicate that the adaptive system has the ability to reduce average bandwidth usage, by decreasing the amount of data on the network, thereby allowing a greater number of users access to content. This suggests that responsive content adaptation has a positive influence on network performance metrics.
- Full Text:
- Date Issued: 2006
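A minimal sketch of the tag extraction technique described in the record above, assuming the concrete rule "replace img elements with plain links to their source". The class and rule are illustrative, not the thesis's code, and the real adaptation layer worked on a DOM tree rather than a streaming parser.

```python
# Illustrative only: rewrite <img src=...> as <a href=...>[image]</a>.
from html.parser import HTMLParser

class TagExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src", "#")
            self.out.append(f'<a href="{src}">[image]</a>')
        else:
            self.out.append(self.get_starttag_text())

    def handle_endtag(self, tag):
        if tag != "img":
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)

page = '<p>Weather map: <img src="/maps/today.png"> updated hourly.</p>'
extractor = TagExtractor()
extractor.feed(page)
print("".join(extractor.out))
# <p>Weather map: <a href="/maps/today.png">[image]</a> updated hourly.</p>
```

Stripping the image and keeping only a link is what saves bandwidth on the handheld device; the other two techniques trade more processing for richer summaries.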
A framework for the application of network telescope sensors in a global IP network
- Authors: Irwin, Barry Vivian William
- Date: 2011
- Subjects: Sensor networks Computer networks TCP/IP (Computer network protocol) Internet Computer security Computers -- Access control Computer networks -- Security measures Computer viruses Malware (Computer software)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4593 , http://hdl.handle.net/10962/d1004835
- Description: The use of Network Telescope systems has become increasingly popular amongst security researchers in recent years. This study provides a framework for the utilisation of this data. The research is based on a primary dataset of 40 million events spanning 50 months, collected using a small (/24) passive network telescope located in African IP space. This research presents a number of differing ways in which the data can be analysed, ranging from low-level protocol-based analysis to higher-level analysis at the geopolitical and network topology level. Anomalous traffic and illustrative anecdotes are explored in detail and highlighted. A discussion relating to the bogon traffic observed is also presented. Two novel visualisation tools are presented, which were developed to aid in the analysis of large network telescope datasets. The first is a three-dimensional visualisation tool which allows for live, near-real-time analysis, and the second is a two-dimensional fractal-based plotting scheme which allows plots of the entire IPv4 address space to be produced and manipulated. Using the techniques and tools developed for the analysis of this dataset, a detailed analysis of traffic recorded as destined for port 445/tcp is presented. This includes the evaluation of traffic surrounding the outbreak of the Conficker worm in November 2008. A number of metrics relating to the description and quantification of network telescope configuration and the resultant traffic captures are described; it is hoped that their use will facilitate greater and easier collaboration among researchers utilising this network security technology. The research concludes with suggestions relating to other applications of the data and intelligence that can be extracted from network telescopes, and their use as part of an organisation’s integrated network security systems.
- Full Text:
- Date Issued: 2011
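A toy aggregation over telescope events, assuming each event is a (timestamp, source IP, destination port, protocol) record; that schema is an assumption of this sketch. The thesis works from roughly 40 million such events, but the shape of a per-port breakdown, such as the 445/tcp analysis it describes, is the same.

```python
# Illustrative only: count telescope events per destination port/protocol.
from collections import Counter

events = [
    ("2008-11-21T10:00:01", "198.51.100.7", 445, "tcp"),
    ("2008-11-21T10:00:02", "203.0.113.9", 445, "tcp"),
    ("2008-11-21T10:00:05", "192.0.2.44", 1434, "udp"),
]

per_port = Counter((port, proto) for _, _, port, proto in events)
for (port, proto), hits in per_port.most_common():
    print(f"{port}/{proto}: {hits} packets")
```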
A machine-independent microprogram development system
- Authors: Ward, Michael John
- Date: 1987 , 2013-03-11
- Subjects: Microprogramming
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4581 , http://hdl.handle.net/10962/d1003738 , Microprogramming
- Description: The aims of this project are twofold. They are, firstly, to implement a microprogram development system that allows the programmer to write microcode for any microprogrammable machine, and secondly, to build a microprogrammable machine, incorporating the user-friendliness of a simulator while still providing the 'hands on' experience obtained from actual hardware. Microprogram development involves a two-stage process. The first step is to describe the target machine, using format descriptions and mnemonic-based template definitions. The second stage involves using the defined mnemonics to write the microcode for the target machine. This includes an assembly phase to translate the mnemonics into the binary microinstructions. Three main components constitute the microprogrammable machine. The Arithmetic and Logic Unit (ALU) is built using chips from Advanced Micro Devices' Am2900 bit-slice family, the action of the Microprogram Control Unit (MCU) is simulated by software running on an IBM Personal Computer, and a section of the IBM PC's main memory acts as the Control Store (CS) for the system. The ALU is built on a prototyping card that plugs into one of the slots on the IBM PC's motherboard. A hardware simulator program, that produces the effect of the ALU, has also been developed. A small assembly language has been developed using the system, to test its various functions. A mini-assembler has also been written to facilitate assembly of the above language. A group of honours students at Rhodes University tested the microprogram development system. Their ideas and suggestions have been tabulated in this report and some of them have been used to enhance the system's performance. The concept of allowing 'inline' microinstructions in the macroprogram is also investigated in this report and a method of implementing this is shown.
- Full Text:
- Date Issued: 1987
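A toy two-stage micro-assembler in the spirit of the record above. The field layout, mnemonics, and word width are all invented; in the real system the machine description comes from the programmer's format descriptions and template definitions.

```python
# Illustrative only: a hypothetical 8-bit microword with three named fields.
# Stage 1: describe the target machine.
# Each field is (most-significant bit offset, width, value table).
FIELDS = {
    "alu":  (0, 4, {"ADD": 0b0001, "SUB": 0b0010, "AND": 0b0011}),
    "src":  (4, 2, {"A": 0b00, "B": 0b01, "MEM": 0b10}),
    "dest": (6, 2, {"A": 0b00, "B": 0b01, "MEM": 0b10}),
}
WORD_WIDTH = 8

def assemble(mnemonics: dict) -> int:
    """Stage 2: translate {'alu': 'ADD', ...} into a binary microinstruction."""
    word = 0
    for name, value in mnemonics.items():
        offset, width, table = FIELDS[name]
        word |= table[value] << (WORD_WIDTH - offset - width)
    return word

micro = assemble({"alu": "ADD", "src": "B", "dest": "A"})
print(f"{micro:0{WORD_WIDTH}b}")  # 00010100
```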
A networking approach to sharing music studio resources
- Authors: Foss, Richard John
- Date: 1996
- Subjects: MIDI (Standard) Computer sound processing Sound -- Recording and reproducing -- Digital techniques
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4659 , http://hdl.handle.net/10962/d1006660
- Description: This thesis investigates the extent to which networking technology can be used to provide remote workstation access to a pool of shared music studio resources. A pilot system is described in which MIDI messages, studio control data, and audio signals flow between the workstations and a studio server. A booking and timing facility avoids contention and allows for accurate reports of studio usage. The operation of the system has been evaluated in terms of its ability to satisfy three fundamental goals, namely the remote, shared and centralized access to studio resources. Three essential network configurations have been identified, incorporating a mix of star and bus topologies, and their relative potential for satisfying the fundamental goals has been highlighted.
- Full Text:
- Date Issued: 1996
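A minimal sketch of the contention-avoiding booking facility the record mentions, assuming bookings are half-open (start, end) time slots on a single shared studio. The representation and interface are invented for illustration.

```python
# Illustrative only: grant a studio slot only if it overlaps no booking.
bookings = []  # list of (start_hour, end_hour, user)

def book(start: int, end: int, user: str) -> bool:
    for s, e, _ in bookings:
        if start < e and s < end:   # half-open interval overlap test
            return False
    bookings.append((start, end, user))
    return True

print(book(9, 11, "alice"))   # True
print(book(10, 12, "bob"))    # False: contends with alice's slot
print(book(11, 12, "bob"))    # True: slots only touch, no overlap
```

Recording who held the studio and when is also what enables the accurate usage reports described above.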
A study of real-time operating systems for microcomputers
- Authors: Wells, George Clifford
- Date: 1990
- Subjects: Operating systems (Computers) , Microcomputers
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4611 , http://hdl.handle.net/10962/d1004896 , Operating systems (Computers) , Microcomputers
- Description: This thesis describes the evaluation of four operating systems for microcomputers. The emphasis of the study is on the suitability of the operating systems for use in real-time applications, such as process control. The evaluation was performed in two sections. The first section was a quantitative assessment of the performance of the real-time features of the operating system. This was performed using benchmarks. The criteria for the benchmarks and their design are discussed. The second section was a qualitative assessment of the suitability of the operating systems for the development and implementation of real-time systems. This was assessed through the implementation of a small simulation of a manufacturing process and its associated control system. The simulation was designed using the Ward and Mellor real-time design method which was extended to handle the special case of a real-time simulation. The operating systems which were selected for the study covered a spectrum from general purpose operating systems to small, specialised real-time operating systems. From the quantitative assessment it emerged that QNX (from Quantum Software Systems) had the best overall performance. Qualitatively, UNIX was found to offer the best system development environment, but it does not have the performance and the characteristics required for real-time applications. This suggests that versions of UNIX that are adapted for real-time applications are worth careful consideration for use both as development systems and implementation systems.
- Full Text:
- Date Issued: 1990
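A tiny latency benchmark in the spirit of the study's quantitative section, measuring the round-trip time of a one-byte message between two processes. This is a sketch only (POSIX-specific, and in Python rather than the systems languages of the era); the thesis's benchmarks targeted primitives such as task switching, interrupt response, and IPC on each operating system.

```python
# Illustrative only: message round-trip timing between parent and child.
import os
import time

ITERATIONS = 10_000
parent_to_child = os.pipe()
child_to_parent = os.pipe()

if os.fork() == 0:                       # child: echo every byte back
    for _ in range(ITERATIONS):
        os.write(child_to_parent[1], os.read(parent_to_child[0], 1))
    os._exit(0)

start = time.perf_counter()
for _ in range(ITERATIONS):
    os.write(parent_to_child[1], b"x")
    os.read(child_to_parent[0], 1)
elapsed = time.perf_counter() - start
print(f"mean round trip: {elapsed / ITERATIONS * 1e6:.1f} microseconds")
```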
Adaptive flow management of multimedia data with a variable quality of service
- Authors: Littlejohn, Paul Stephen
- Date: 1999
- Subjects: Multimedia systems , Multimedia systems -- Evaluation
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4605 , http://hdl.handle.net/10962/d1004863 , Multimedia systems , Multimedia systems -- Evaluation
- Description: Much of the current research involving the delivery of multimedia data focuses on the need to maintain a constant Quality of Service (QoS) throughout the lifetime of the connection. Delivery of a constant QoS requires that a guaranteed bandwidth is available for the entire connection. Techniques, such as resource reservation, are able to provide for this. These approaches work well across networks that are fairly homogeneous, and which have sufficient resources to sustain the guarantees, but are not currently viable over either heterogeneous or unreliable networks. To cater for the great number of networks (including the Internet) which do not conform to the ideal conditions required by constant Quality of Service mechanisms, this thesis proposes a different approach: that of dynamically adjusting the QoS in response to changing network conditions. Instead of optimizing the Quality of Service, the approach used in this thesis seeks to ensure the delivery of the information, at the best possible quality, as determined by the carrying ability of the poorest segment in the network link. To illustrate and examine this model, a service-adaptive system is described, which allows for the streaming of multimedia audio data across a network using the Real-time Transport Protocol. This application continually adjusts its service requests in response to the current network conditions. A client/server model is outlined whereby the server attempts to provide scalable media content, in this case audio data, to a client at the highest possible Quality of Service. The thesis presents and evaluates a number of renegotiation methods for adjusting the Quality of Service between the client and server. An Adjusted QoS renegotiation algorithm is suggested, which delivers the best possible quality within an acceptable loss boundary.
- Full Text:
- Date Issued: 1999
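One possible shape of such a renegotiation rule, assuming the server offers a ladder of audio encodings and the client reports packet loss. The thresholds, bitrates, and step policy below are invented to illustrate the idea of staying within a loss boundary, not the thesis's actual algorithm.

```python
# Illustrative only: step quality down under loss, probe upward when clear.
BITRATES_KBPS = [16, 32, 64, 128]   # hypothetical scalable encodings
ACCEPTABLE_LOSS = 0.02              # loss boundary the client tolerates

def renegotiate(level: int, observed_loss: float) -> int:
    if observed_loss > ACCEPTABLE_LOSS and level > 0:
        return level - 1            # degrade gracefully instead of stalling
    if observed_loss == 0.0 and level < len(BITRATES_KBPS) - 1:
        return level + 1            # conditions improved: try higher quality
    return level

level = 3
for loss in [0.00, 0.05, 0.04, 0.01, 0.00]:
    level = renegotiate(level, loss)
    print(f"loss={loss:.2f} -> streaming at {BITRATES_KBPS[level]} kbps")
```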
Algorithmic skeletons as a method of parallel programming
- Authors: Watkins, Rees Collyer
- Date: 1993
- Subjects: Parallel programming (Computer science) , Algorithms
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4609 , http://hdl.handle.net/10962/d1004889 , Parallel programming (Computer science) , Algorithms
- Description: A new style of abstraction for program development, based on the concept of algorithmic skeletons, has been proposed in the literature. The programmer is offered a variety of independent algorithmic skeletons, each of which describes the structure of a particular style of algorithm. The appropriate skeleton is used by the system to mould the solution. Parallel programs are particularly appropriate for this technique because of their complexity. This thesis investigates algorithmic skeletons as a method of hiding the complexities of parallel programming from the user, and of guiding the user towards efficient solutions. To explore this approach, this thesis describes the implementation and benchmarking of the divide and conquer and task queue paradigms as skeletons. All but one category of problem, as implemented in this thesis, scale well over eight processors. The rate of speed-up tails off when there are significant communication requirements. The results show that, with some user knowledge, efficient parallel programs can be developed using this method. The evaluation explores methods for fine-tuning some skeleton programs to achieve increased efficiency.
- Full Text:
- Date Issued: 1993
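A sequential rendition of the divide-and-conquer skeleton, to make the abstraction concrete: the programmer supplies only the four problem-specific functions, and a parallel implementation would farm the subproblems out to worker processors instead of recursing in place. The function names are this sketch's, not the thesis's.

```python
# Illustrative only: a generic divide-and-conquer skeleton.
from typing import Callable, TypeVar

P, S = TypeVar("P"), TypeVar("S")

def divide_and_conquer(is_trivial: Callable[[P], bool],
                       solve: Callable[[P], S],
                       divide: Callable[[P], list],
                       combine: Callable[[list], S],
                       problem: P) -> S:
    if is_trivial(problem):
        return solve(problem)
    subsolutions = [divide_and_conquer(is_trivial, solve, divide, combine, p)
                    for p in divide(problem)]
    return combine(subsolutions)

# Instantiating the skeleton as mergesort:
def merge(parts):
    left, right, out = parts[0], parts[1], []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right

print(divide_and_conquer(
    is_trivial=lambda xs: len(xs) <= 1,
    solve=lambda xs: xs,
    divide=lambda xs: [xs[:len(xs) // 2], xs[len(xs) // 2:]],
    combine=merge,
    problem=[5, 3, 8, 1, 9, 2],
))  # [1, 2, 3, 5, 8, 9]
```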
An adaptive approach for optimized opportunistic routing over Delay Tolerant Mobile Ad hoc Networks
- Authors: Zhao, Xiaogeng
- Date: 2008
- Subjects: Ad hoc networks (Computer networks) Computer network architectures Computer networks Routing protocols (Computer network protocols)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4588 , http://hdl.handle.net/10962/d1004822
- Description: This thesis presents a framework for investigating opportunistic routing in Delay Tolerant Mobile Ad hoc Networks (DTMANETs), and introduces the concept of an Opportunistic Confidence Index (OCI). The OCI enables multiple opportunistic routing protocols to be applied as an adaptive group to improve DTMANET routing reliability, performance, and efficiency. The DTMANET is a recently acknowledged network architecture, which is designed to address the challenging and marginal environments created by adaptive, mobile, and unreliable network node presence. Because of its ad hoc and autonomic nature, routing in a DTMANET is a very challenging problem. The design of routing protocols in such environments, which ensure a high percentage delivery rate (reliability), achieve a reasonable delivery time (performance), and at the same time maintain an acceptable communication overhead (efficiency), is of fundamental consequence to the usefulness of DTMANETs. In recent years, a number of investigations into DTMANET routing have been conducted, resulting in the emergence of a class of routing known as opportunistic routing protocols. Current research into opportunistic routing has exposed opportunities for positive impacts on DTMANET routing. To date, most investigations have concentrated upon one or other of the quality metrics of reliability, performance, or efficiency, while some approaches have pursued a balance of these metrics through assumptions of a high level of global knowledge and/or uniform mobile device behaviours. No prior research that we are aware of has studied the connection between multiple opportunistic elements and their influences upon one another, and none has demonstrated the possibility of modelling and using multiple different opportunistic elements as an adaptive group to aid the routing process in a DTMANET. This thesis investigates OCI opportunities and their viability through the design of an extensible simulation environment, which makes use of methods and techniques such as abstract modelling, opportunistic element simplification and isolation, random attribute generation and assignment, localized knowledge sharing, automated scenario generation, intelligent weight assignment and/or opportunistic element permutation. These methods and techniques are incorporated at both the data acquisition and analysis phases. Our results show a significant improvement in all three metric categories. In one of the most applicable scenarios tested, OCI yielded a 31.05% message delivery increase (reliability improvement), a 22.18% message delivery time reduction (performance improvement), and a 73.64% routing depth decrement (efficiency improvement). We are able to conclude that the OCI approach is feasible across a range of scenarios, and that the use of multiple opportunistic elements to aid decision-making processes in DTMANET environments has value.
- Full Text:
- Date Issued: 2008
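The OCI is described as combining multiple opportunistic elements into one adaptive index. A sketch of one plausible shape of that combination follows; the elements, weights, and scores below are invented to show the mechanism, not taken from the thesis.

```python
# Illustrative only: weighted sum of normalised per-element scores in [0, 1].
def opportunistic_confidence(scores: dict, weights: dict) -> float:
    return sum(weights[name] * scores[name] for name in weights)

WEIGHTS = {"encounter_history": 0.5, "battery": 0.2, "mobility": 0.3}

# Candidate next hops for a message, each scored per element:
candidates = {
    "node_a": {"encounter_history": 0.9, "battery": 0.4, "mobility": 0.7},
    "node_b": {"encounter_history": 0.3, "battery": 0.9, "mobility": 0.5},
}

best = max(candidates,
           key=lambda n: opportunistic_confidence(candidates[n], WEIGHTS))
print(best)  # node_a: 0.74 vs node_b's 0.48
```

Making the weights adaptive, rather than fixed as here, is what lets a group of opportunistic elements respond to changing network conditions.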
An investigation into some critical computer networking parameters : Internet addressing and routing
- Authors: Isted, Edwin David
- Date: 1996
- Subjects: Computer networks , Internet , Electronic mail systems
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4608 , http://hdl.handle.net/10962/d1004874 , Computer networks , Internet , Electronic mail systems
- Description: This thesis describes the evaluation of several proposals suggested as replacements for the current Internet's TCP/IP protocol suite. The emphasis of this thesis is on how the proposals solve the current routing and addressing problems associated with the Internet. The addressing problem is found to be related to address space depletion, and the routing problem related to excessive routing costs. The evaluation is performed based on criteria selected for their applicability as future Internet design criteria. All the protocols are evaluated using the above-mentioned criteria. It is concluded that the most suitable addressing mechanism is an expandable multi-level format, with a logical separation of location and host identification information. Similarly, the most suitable network representation technique is found to be an unrestricted hierarchical structure which uses a suitable abstraction mechanism. It is further found that these two solutions could adequately solve the existing addressing and routing problems and allow substantial growth of the Internet.
- Full Text:
- Date Issued: 1996
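A toy rendering of the conclusion above: an expandable multi-level address that separates where a host is (a variable-length locator path) from who it is (a stable host identifier). The field names are invented for this sketch.

```python
# Illustrative only: locator/identifier separation in a multi-level address.
from dataclasses import dataclass

@dataclass(frozen=True)
class MultiLevelAddress:
    locator: tuple   # variable-length path through the routing hierarchy
    host_id: str     # stable identity, independent of attachment point

    def rehomed(self, new_locator: tuple) -> "MultiLevelAddress":
        """Moving a host changes its locator but never its identity."""
        return MultiLevelAddress(new_locator, self.host_id)

addr = MultiLevelAddress(locator=("za", "ru", "cs-dept"), host_id="host-7f3a")
moved = addr.rehomed(("za", "uct", "lab-2"))
print(addr.host_id == moved.host_id)  # True: identity survives renumbering
```

The hierarchical locator is also what keeps routing costs down: routers aggregate on locator prefixes instead of tracking individual hosts.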
An investigation into the viability of deploying thin client technology to support effective learning in a disadvantaged, rural high school setting
- Authors: Ndwe, Tembalethu Jama
- Date: 2002
- Subjects: Network computers , Education -- Data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4627 , http://hdl.handle.net/10962/d1006500 , Network computers , Education -- Data processing
- Description: Computer Based Training offers many attractive learning opportunities for high school pupils. Its deployment in economically depressed and educationally marginalized rural schools is extremely uncommon due to the high technology skills and costs involved in its deployment and ongoing maintenance. This thesis puts forward thin client technology as a potential solution to the needs of education environments of this kind. A functional business case is developed and evaluated in this thesis, based upon a requirements analysis of media delivery in learning, and upon formal cost/performance models and a deployment field trial. Because of the economic constraints of the envisaged deployment area in rural education, an industrial field trial is used, and the aspects of this trial that can be carried over to the rural school situation have been used to assess performance and cost indicators. Our study finds that thin client technology could be deployed and maintained more cost effectively than conventional fat client solutions in rural schools, that it is capable of supporting the learning elements needed in this deployment area, and that it is able to deliver the predominantly text-based applications currently being used in schools. However, we find that technological improvements are needed before future multimedia-intensive applications can be adequately supported.
- Full Text:
- Date Issued: 2002
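A toy total-cost comparison in the spirit of the record's cost/performance models. Every figure below is invented; the actual study derived its numbers from the field trial.

```python
# Illustrative only: thin vs fat client lab cost over a planning horizon.
def total_cost(server: float, per_seat: float, per_seat_upkeep: float,
               seats: int, years: int) -> float:
    return server + seats * (per_seat + years * per_seat_upkeep)

seats, years = 20, 5
thin = total_cost(server=4000, per_seat=150, per_seat_upkeep=10,
                  seats=seats, years=years)
fat = total_cost(server=0, per_seat=600, per_seat_upkeep=80,
                 seats=seats, years=years)
print(f"thin client lab: {thin:.0f}, fat client lab: {fat:.0f}")
# Thin clients concentrate cost and maintenance in one server, which is the
# property the thesis argues suits a resource-poor rural school.
```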
Analyzing communication flow and process placement in Linda programs on transputers
- Authors: De-Heer-Menlah, Frederick Kofi
- Date: 1992 , 2012-11-28
- Subjects: LINDA (Computer system) , Transputers , Parallel programming (Computer science)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4675 , http://hdl.handle.net/10962/d1006698 , LINDA (Computer system) , Transputers , Parallel programming (Computer science)
- Description: With the evolution of parallel and distributed systems, users from diverse disciplines have looked to these systems as a solution to their ever increasing needs for computer processing resources. Because parallel processing systems currently require a high level of expertise to program, many researchers are investing effort into developing programming approaches which hide some of the difficulties of parallel programming from users. Linda is one such parallel paradigm, which is intuitive to use, and which provides a high level of decoupling between distributable components of parallel programs. In Linda, efficiency becomes a concern of the implementation rather than of the programmer. There is a substantial overhead in implementing Linda, an inherently shared-memory model, on a distributed system. This thesis describes the compile-time analysis of tuple space interactions which reduces the run-time matching costs, and permits the distribution of the tuple space data. A language-independent module which partitions the tuple space data and suggests appropriate storage schemes for the partitions, so as to optimise Linda operations, is presented. The thesis also discusses hiding the network topology from the user by automatically allocating Linda processes and tuple space partitions to nodes in the network of transputers. This is done by introducing a fast placement algorithm developed for Linda.
- Full Text:
- Date Issued: 1992
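A toy partitioning of tuple space, assuming tuples are grouped by their type signature so that a matching operation only ever searches one partition on one node. The node names and hashing scheme are invented; the thesis's compile-time analysis is considerably more sophisticated.

```python
# Illustrative only: signature-based tuple space partitioning and placement.
import zlib
from collections import defaultdict

NODES = ["t0", "t1", "t2", "t3"]          # transputers in the network

def placement(tup: tuple) -> str:
    """Map a tuple's partition to a node: all candidates for one signature
    live together, so matching never crosses node boundaries."""
    sig = ",".join(type(field).__name__ for field in tup)
    return NODES[zlib.crc32(sig.encode()) % len(NODES)]

partitions = defaultdict(list)
for t in [("count", 1), ("count", 2), ("name", "linda"), (3.14, 2.71)]:
    partitions[placement(t)].append(t)

for node, tuples in sorted(partitions.items()):
    print(node, tuples)
```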
Behavioural model debugging in Linda
- Authors: Sewry, David Andrew
- Date: 1994
- Subjects: LINDA (Computer system) Debugging in computer science
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4674 , http://hdl.handle.net/10962/d1006697
- Description: This thesis investigates event-based behavioural model debugging in Linda. A study is presented of the Linda parallel programming paradigm, its amenability to debugging, and a model for debugging Linda programs using Milner's CCS. In support of the construction of expected behaviour models, a Linda program specification language is proposed. A behaviour recognition engine that is based on such specifications is also discussed. It is shown that Linda's distinctive characteristics make it amenable to debugging without the usual problems associated with parallel debuggers. Furthermore, it is shown that a behavioural model debugger, based on the proposed specification language, effectively exploits the debugging opportunity. The ideas developed in the thesis are demonstrated in an experimental Modula-2 Linda system.
- Full Text:
- Date Issued: 1994
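A toy behaviour recognition engine of the kind the record proposes, assuming an expected-behaviour model written as a sequence of tuple-space events. The event vocabulary and comparison rule are invented for this sketch; the thesis's models are CCS-based specifications, not flat lists.

```python
# Illustrative only: match an observed event trace against an expected model.
expected = [("out", "task"), ("in", "task"), ("out", "result")]

def recognise(model: list, observed: list) -> str:
    """Walk the observed trace against the model; report the first
    divergence, which is where debugging attention should focus."""
    for i, event in enumerate(observed):
        if i >= len(model):
            return f"unexpected extra event {event} at position {i}"
        if event != model[i]:
            return f"divergence at position {i}: expected {model[i]}, saw {event}"
    if len(observed) < len(model):
        return f"trace ended early: still expecting {model[len(observed)]}"
    return "behaviour matches the model"

trace = [("out", "task"), ("out", "result")]
print(recognise(expected, trace))
# divergence at position 1: expected ('in', 'task'), saw ('out', 'result')
```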
Cogitator : a parallel, fuzzy, database-driven expert system
- Authors: Baise, Paul
- Date: 1994 , 2012-10-08
- Subjects: Expert systems (Computer science) , Artificial intelligence -- Computer programs , System design , Cogitator (Computer system)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4667 , http://hdl.handle.net/10962/d1006684 , Expert systems (Computer science) , Artificial intelligence -- Computer programs , System design , Cogitator (Computer system)
- Description: The quest to build anthropomorphic machines has led researchers to focus on knowledge and the manipulation thereof. Recently, the expert system was proposed as a solution, working well in small, well-understood domains. However, these initial attempts highlighted the tedious process associated with building systems to display intelligence, the most notable being the Knowledge Acquisition Bottleneck. Attempts to circumvent this problem have led researchers to propose the use of machine learning databases as a source of knowledge. Attempts to utilise databases as sources of knowledge have led to the development of Database-Driven Expert Systems. Furthermore, it has been ascertained that a requisite for intelligent systems is powerful computation. In response to these problems and proposals, a new type of database-driven expert system, Cogitator, is proposed. It is shown to circumvent the Knowledge Acquisition Bottleneck and to possess many other advantages over both traditional expert systems and connectionist systems, whilst suffering no serious disadvantages. (A loose sketch of database-driven fuzzy classification follows this record.)
- Full Text:
- Date Issued: 1994
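As a loose illustration of the record above (Cogitator's actual machinery is not reproduced here): a database-driven expert system derives its knowledge from stored cases rather than hand-elicited rules. The Python sketch below classifies a reading by fuzzy membership against past records; the membership function and the data are invented.

```python
# Illustrative only: fuzzy classification driven by stored records
# instead of hand-coded rules, sidestepping manual knowledge
# acquisition. Membership functions and data are invented.

def triangular(x, lo, peak, hi):
    """Triangular fuzzy membership function."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x <= peak else (hi - x) / (hi - peak)

# "Database" of past cases: (temperature, label).
records = [(36.8, "healthy"), (38.9, "fever"), (37.1, "healthy"), (39.4, "fever")]

def classify(temp):
    """Degree of membership in each class, taken over stored exemplars."""
    scores = {}
    for value, label in records:
        # Each record induces a fuzzy set centred on its own value.
        degree = triangular(temp, value - 1.0, value, value + 1.0)
        scores[label] = max(scores.get(label, 0.0), degree)
    return scores

print(classify(38.6))  # {'healthy': 0.0, 'fever': 0.7}
```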
CREWS : a Component-driven, Run-time Extensible Web Service framework
- Authors: Parry, Dominic Charles
- Date: 2004
- Subjects: Component software -- Development , Computer software -- Reusability , Software reengineering , Web services
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4628 , http://hdl.handle.net/10962/d1006501 , Component software -- Development , Computer software -- Reusability , Software reengineering , Web services
- Description: There has been an increased focus in recent years on the development of re-usable software, in the form of objects and software components. This increase, together with pressures from enterprises conducting transactions on the Web to support all business interactions on all scales, has encouraged research towards the development of easily reconfigurable and highly adaptable Web services. This work investigates the ability of Component-Based Software Development (CBSD) to produce such systems, and proposes a more manageable use of CBSD methodologies. Component-Driven Software Development (CDSD) is introduced to enable better component manageability. Current Web service technologies are also examined to determine their ability to support extensible Web services, and a dynamic Web service architecture is proposed. The work also describes the development of two proof-of-concept systems, DREW Chat and Hamilton Bank. DREW Chat and Hamilton Bank are implementations of Web services that support extension dynamically and at run-time. DREW Chat is implemented on the client side, where the user is given the ability to change the client as required. Hamilton Bank is a server-side implementation, which is run-time customisable by both the user and the party offering the service. In each case, a generic architecture is produced to support dynamic Web services. These architectures are combined to produce CREWS, a Component-driven, Run-time Extensible Web Service solution that enables Web services to support the ever-changing needs of enterprises. A discussion of similar work is presented, identifying the strengths and weaknesses of our architecture when compared to other solutions. (A generic sketch of run-time component loading follows this record.)
- Full Text:
- Date Issued: 2004
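A generic illustration of the run-time extensibility described above (not the CREWS implementation): the Python sketch below loads a component by name while the service is running and registers it for invocation. The module/attribute convention is hypothetical, and json.dumps merely stands in for a genuine component.

```python
# Illustrative run-time extensibility (not the CREWS implementation):
# a service looks up and loads a component by name while running, so
# new behaviour can be added without redeploying the service.
import importlib

class Service:
    def __init__(self):
        self.components = {}

    def register(self, name, component):
        self.components[name] = component

    def load_component(self, module_name, attr):
        # Hypothetical convention: a component is any callable exported
        # by a module; a real framework would validate an interface.
        module = importlib.import_module(module_name)
        self.register(attr, getattr(module, attr))

    def invoke(self, name, *args):
        return self.components[name](*args)

service = Service()
# 'json.dumps' stands in for a real component module here.
service.load_component("json", "dumps")
print(service.invoke("dumps", {"balance": 42}))  # {"balance": 42}
```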
CSP-i : an implementation of CSP
- Authors: Wrench, Karen Lee
- Date: 1987 , 2013-03-08
- Subjects: Synchronization -- Computers , Programming languages (Electronic computers)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4579 , http://hdl.handle.net/10962/d1003124 , Synchronization -- Computers , Programming languages (Electronic computers)
- Description: CSP (Communicating Sequential Processes) is a notation proposed by Hoare for expressing process communication and synchronization. Although this notation has been widely acclaimed, Hoare himself never implemented it as a computer language. He did, however, produce the necessary correctness proofs, and subsequently the notation has been adopted (in various guises) by the designers of other concurrent languages such as Ada and occam. Only two attempts have been made at a direct and precise implementation of CSP. On closer scrutiny, even these implementations are found to deviate from the specifications expounded by Hoare, and in so doing restrict the original proposal. This thesis comprises two main sections. The first of these includes a brief look at the primitives of concurrent programming, followed by a comparative study of the existing adaptations of CSP and other message-passing languages. The latter section is devoted to a description of the author's attempt at an original implementation of the notation. The result of this attempt is the creation of the CSP-i language and a suitable environment for executing CSP-i programs on an IBM PC. The CSP-i implementation is comparable with other concurrent systems presently available. In some aspects, the primitives featured in CSP-i provide the user with a more efficient and concise notation for expressing concurrent algorithms than several other message-based languages, notably occam. (A loose sketch of CSP-style synchronous message passing follows this record.)
- Full Text:
- Date Issued: 1987
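As a loose analogy for the record above (not CSP-i, and not Hoare's formal semantics): the Python sketch below approximates CSP's synchronous message passing between two sequential processes, using queue.join() so that a send completes only once the receiver has taken the message.

```python
# Loose illustration of CSP-style process communication (not CSP-i):
# two sequential processes synchronise on a channel; queue.join()
# approximates the rendezvous, since a true CSP send completes only
# when the receiver accepts the message.
import threading, queue

channel = queue.Queue(maxsize=1)

def producer():
    for value in ("hello", "world", "stop"):
        channel.put(value)   # blocks if the receiver lags behind
        channel.join()       # wait until the message is consumed

def consumer():
    while True:
        message = channel.get()
        channel.task_done()
        if message == "stop":
            break
        print("received", message)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads: t.start()
for t in threads: t.join()
```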
Extending the reach of personal area networks by transporting Bluetooth communications over IP networks
- Authors: Mackie, David Sean
- Date: 2007 , 2007-03-29
- Subjects: Bluetooth technology , Communication -- Technological innovations , Communication -- Network analysis , TCP/IP (Computer network protocol) , Computer networks , Computer network protocols , Wireless communication systems
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4637 , http://hdl.handle.net/10962/d1006551 , Bluetooth technology , Communication -- Technological innovations , Communication -- Network analysis , TCP/IP (Computer network protocol) , Computer networks , Computer network protocols , Wireless communication systems
- Description: This thesis presents an investigation of how to extend the reach of a Bluetooth personal area network by introducing the concept of Bluetooth Hotspots. Currently two Bluetooth devices cannot communicate with each other unless they are within radio range, since Bluetooth is designed as a cable-replacement technology for wireless communications over short ranges. An investigation was done into the feasibility of creating Bluetooth hotspots that allow distant Bluetooth devices to communicate with each other by transporting their communications between these hotspots via an alternative network infrastructure such as an IP network. Two approaches were investigated: masquerading of remote devices by the local hotspot to allow seamless communications, and proxying of services on remote devices by providing them on a local hotspot using a distributed service discovery database. The latter approach was used to develop applications capable of transporting Bluetooth’s RFCOMM and L2CAP protocols. Quantitative tests were performed to establish the throughput performance and latency of these transport applications. Furthermore, a number of selected Bluetooth services were tested, which led us to conclude that most data-based protocols can be transported by the system. (A generic sketch of the relay pattern involved follows this record.)
- Full Text:
- Date Issued: 2007
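The transport applications described above are not reproduced here; the Python sketch below shows only the generic relay pattern they rely on, pumping bytes between a local connection and a TCP/IP link to the distant hotspot. On a real hotspot the local side would be an RFCOMM socket (e.g. via BlueZ) rather than the ordinary TCP socket assumed here.

```python
# Generic byte relay as used by a tunnelling hotspot (illustrative
# only; a real Bluetooth hotspot would open an RFCOMM socket, where
# this sketch assumes an ordinary TCP connection on both sides).
import socket, threading

def pump(src, dst):
    """Copy bytes one way until either side closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass  # peer closed mid-transfer
    finally:
        dst.close()

def relay(local_conn, remote_host, remote_port):
    """Bridge a local device's connection to the distant hotspot over IP."""
    remote = socket.create_connection((remote_host, remote_port))
    threading.Thread(target=pump, args=(local_conn, remote), daemon=True).start()
    threading.Thread(target=pump, args=(remote, local_conn), daemon=True).start()
```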
Grouping complex systems for classification and parallel simulation
- Authors: Ikram, Ismail Mohamed
- Date: 1997
- Subjects: Digital computer simulation
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4662 , http://hdl.handle.net/10962/d1006665
- Description: This thesis is concerned with grouping complex systems by means of a concurrent model, in order to aid in (i) the formulation of classifications and (ii) the induction of parallel simulation programs. It observes, and seeks to formalize and then exploit, the strong structural resemblance between complex systems and occam programs. The thesis hypothesizes that groups of complex systems may be discriminated according to shared structural and behavioural characteristics. Such an analysis of the complex systems domain may be performed in the abstract with the aid of a model for capturing interesting features of complex systems. The resulting groups would form a classification of complex systems. An additional hypothesis is that, insofar as the model is able to capture sufficient programmatic information, these groups may be used to define, automatically, algorithmic skeletons for the concurrent simulation of complex systems. In order to test these hypotheses, a specification model and an accompanying formal notation are developed. The model expresses properties of complex systems in a mixture of object-oriented and process-oriented styles. The model is then used as the basis for performing both classification and automatic induction of parallel simulation programs. The thesis takes the view that specification models should not be overly complex, especially if the specifications are meant to be executable. Therefore the requirement for explicit consideration of concurrency on the part of specifiers is minimized. The thesis formulates specifications of classes of cellular automata and neural networks according to the proposed model. Procedures for verification and the induction of parallel simulation programs are also included. (A toy sketch of grouping by shared characteristics follows this record.)
- Full Text:
- Date Issued: 1997
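As a toy rendering of the record above (the thesis's specification model is far richer, and these feature names are invented): grouping by shared structural and behavioural characteristics can be pictured as keying each system on a signature of abstract features, with each resulting group a candidate class to which one simulation skeleton could be attached.

```python
# Toy illustration (invented features, not the thesis's model): group
# systems by a signature of structural/behavioural characteristics;
# each group could then share one parallel simulation skeleton.
from collections import defaultdict

systems = {
    "game_of_life": {"discrete_state", "local_interaction", "synchronous"},
    "ising_model":  {"discrete_state", "local_interaction", "synchronous"},
    "hopfield_net": {"continuous_state", "global_interaction"},
}

groups = defaultdict(list)
for name, features in systems.items():
    groups[frozenset(features)].append(name)

for signature, members in groups.items():
    print(sorted(signature), "->", members)
```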
Initial findings of an investigation into the feasibility of a low level image processing workstation using transputers
- Authors: Cooke, Nicholas Duncan
- Date: 1990 , 2013-02-07
- Subjects: Image processing , Computer graphics , Fourier transformations -- Data processing , Transputers
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4679 , http://hdl.handle.net/10962/d1006702 , Image processing , Computer graphics , Fourier transformations -- Data processing , Transputers
- Description: From Introduction: The research concentrates primarily on a feasibility study involving the setting up of an image processing workstation. As broad as this statement concerning the workstation may seem, there are several factors limiting the extent of the research. This project is not concerned with the design and implementation of a fully-fledged image processing workstation. Rather, it concerns an initial feasibility study of such a workstation, centered on the theme of image processing aided by the parallel processing paradigm. In looking at the hardware available for the project, in the context of an image processing environment, a large amount of initial investigation was required prior to that concerned with the transputer and parallel processing. Work was done on the capturing and displaying of images. This formed a vital part of the project. Furthermore, considering that a new architecture was being used as the workhorse within a conventional host architecture, the INTEL 80286, several aspects of the host architecture had also to be investigated. These included the actual processing capabilities of the host, the capturing and storing of the images on the host, and most importantly, the interface between the host and the transputer [C0089]. Benchmarking was important in order for good conclusions to be drawn about the viability of the two types of hardware used, both individually and together. On the subject of the transputer as the workhorse, there were several areas which required investigation. Initial work had to cover the choice of network topology on which the benchmarking of some of the image processing applications was performed. Research into this was based on the previous work of several authors, which introduced features relevant to this investigation. The network used for this investigation was chosen to be generally applicable to a broad spectrum of applications in image processing. It was not chosen for its applicability to a single dedicated application, as has been the case for much of the past research performed in image processing [SAN88] [SCH89]. The concept of image processing techniques being implemented on the transputer required careful consideration in respect of what should be implemented. Image processing is not a new subject, and it encompasses a large spectrum of applications. The transputer, with image processing being highly suited to it, has attracted a good deal of research. It would not be rash to say that the easy research was covered first. The more trivial operations in image processing, requiring matrix-type operations on the pixels, attracted the most coverage. Several researchers in the field of image processing on the transputer have broken the back of this set of problems. Conclusions regarding these operations on the transputer returned a fairly standard answer. An area of image processing which has not produced the same volume of return as that concerning the more trivial operations is the subject of Fourier Analysis, that is, the Fourier Transform. Thus a major part of this project concerns an investigation into the Fourier Transform in image processing, in particular the Fast Fourier Transform. The network chosen for this research has placed some constraint upon the degree of parallelism that can be achieved. It should be emphasized that this project is not concerned with the most efficient implementation of a specific image processing algorithm on a dedicated topology. Rather, it looks at the feasibility of a general system in the domain of image processing, concerned with a highly computationally intensive operation. This has had the effect of testing the processing power of the hardware used, and contributing a widely applicable parallel algorithm for use in Fourier Analysis. These are discussed more fully in Chapter 2, which covers the work related to this project. The results of the investigation are presented along with a discussion of the methods throughout the thesis. The final chapter summarizes the findings of the research, assesses the value of the investigation, and points out areas for future investigation. (A reference sketch of the radix-2 FFT follows this record.)
- Full Text:
- Date Issued: 1990
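For reference alongside the record above: the Python sketch below is the standard recursive radix-2 Cooley-Tukey FFT. Its even/odd decomposition is the natural point at which a parallel implementation, such as one on a transputer network, would farm the two half-size transforms out to different processors; the thesis's own algorithm is not reproduced here.

```python
# Standard recursive radix-2 Cooley-Tukey FFT (illustrative; the
# even/odd split below is where a parallel implementation could hand
# the two half-size transforms to different processors).
import cmath

def fft(x):
    n = len(x)  # n must be a power of two
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)] +
            [even[k] - twiddled[k] for k in range(n // 2)])

# Magnitude spectrum of a simple square pulse.
print([round(abs(v), 3) for v in fft([1, 1, 1, 1, 0, 0, 0, 0])])
```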