Automating the conversion of natural language fiction to multi-modal 3D animated virtual environments
- Authors: Glass, Kevin Robert
- Date: 2009
- Subjects: Virtual computer systems , Virtual storage (Computer science) , Virtual reality , Computer animation , Fiction -- Computer programs , Narration (Rhetoric) -- Computer simulation , Animation (Cinematography) , Natural language processing (Computer Science)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4632 , http://hdl.handle.net/10962/d1006518
- Description: Popular fiction books describe rich visual environments that contain characters, objects, and behaviour. This research develops automated processes for converting text sourced from fiction books into animated virtual environments and multi-modal films. This involves the analysis of unrestricted natural language fiction to identify appropriate visual descriptions, and the interpretation of the identified descriptions for constructing animated 3D virtual environments. The goal of the text analysis stage is the creation of annotated fiction text, which identifies visual descriptions in a structured manner. A hierarchical rule-based learning system is created that induces patterns from example annotations provided by a human, and uses these for the creation of additional annotations. Patterns are expressed as tree structures that abstract the input text on different levels according to structural (token, sentence) and syntactic (parts-of-speech, syntactic function) categories. Patterns are generalized using pair-wise merging, where dissimilar sub-trees are replaced with wild-cards. The result is a small set of generalized patterns that are able to create correct annotations. A set of generalized patterns represents a model of an annotator's mental process regarding a particular annotation category. Annotated text is interpreted automatically for constructing detailed scene descriptions. This includes identifying which scenes to visualize, and identifying the contents and behaviour in each scene. Entity behaviour in a 3D virtual environment is formulated using time-based constraints that are automatically derived from annotations. Constraints are expressed as non-linear symbolic functions that restrict the trajectories of a pair of entities over a continuous interval of time. Solutions to these constraints specify precise behaviour. We create an innovative quantified constraint optimizer for locating sound solutions, which uses interval arithmetic for treating time and space as contiguous quantities. This optimization method uses a technique of constraint relaxation and tightening that allows solution approximations to be located where constraint systems are inconsistent (an ability not previously explored in interval-based quantified constraint solving). 3D virtual environments are populated by automatically selecting geometric models or procedural geometry-creation methods from a library. 3D models are animated according to trajectories derived from constraint solutions. The final animated film is sequenced using a range of modalities including animated 3D graphics, textual subtitles, audio narrations, and foleys. Hierarchical rule-based learning is evaluated over a range of annotation categories. Models are induced for different categories of annotation without modifying the core learning algorithms, and these models are shown to be applicable to different types of books. Models are induced automatically with accuracies ranging between 51.4% and 90.4%, depending on the category. We show that models are refined if further examples are provided, and this supports a boot-strapping process for training the learning mechanism. The task of interpreting annotated fiction text and populating 3D virtual environments is successfully automated using our described techniques. Detailed scene descriptions are created accurately, where between 83% and 96% of the automatically generated descriptions require no manual modification (depending on the type of description). 
The interval-based quantified constraint optimizer fully automates the behaviour specification process. Sample animated multi-modal 3D films are created using extracts from fiction books that are unrestricted in terms of complexity or subject matter (unlike existing text-to-graphics systems). These examples demonstrate that: behaviour is visualized that corresponds to the descriptions in the original text; appropriate geometry is selected (or created) for visualizing entities in each scene; sequences of scenes are created for a film-like presentation of the story; and that multiple modalities are combined to create a coherent multi-modal representation of the fiction text. This research demonstrates that visual descriptions in fiction text can be automatically identified, and that these descriptions can be converted into corresponding animated virtual environments. Unlike existing text-to-graphics systems, we describe techniques that function over unrestricted natural language text and perform the conversion process without the need for manually constructed repositories of world knowledge. This enables the rapid production of animated 3D virtual environments, allowing the human designer to focus on creative aspects.
- Full Text:
- Date Issued: 2009
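The pattern-generalization step described in the Glass abstract above (pair-wise merging of tree-structured patterns, with dissimilar sub-trees replaced by wild-cards) can be illustrated with a short sketch. This is a minimal illustration, assuming a nested-tuple tree representation and a `WILD` marker; it is not the thesis's actual data structures or algorithm code.

```python
# Hypothetical sketch of pair-wise pattern merging with wild-cards.
# Patterns are nested tuples: (label, child, child, ...); leaves are strings.

WILD = "*"  # wild-card marker standing in for any sub-tree

def merge(a, b):
    """Merge two patterns; dissimilar sub-trees collapse to a wild-card."""
    if a == b:
        return a
    # Both are internal nodes with the same label and arity: merge children.
    if (isinstance(a, tuple) and isinstance(b, tuple)
            and a[0] == b[0] and len(a) == len(b)):
        return (a[0],) + tuple(merge(x, y) for x, y in zip(a[1:], b[1:]))
    return WILD  # structures disagree: generalize

def matches(pattern, tree):
    """Check whether a (possibly generalized) pattern covers a tree."""
    if pattern == WILD or pattern == tree:
        return True
    return (isinstance(pattern, tuple) and isinstance(tree, tuple)
            and pattern[0] == tree[0] and len(pattern) == len(tree)
            and all(matches(p, t) for p, t in zip(pattern[1:], tree[1:])))

# Two example annotations over parsed sentence fragments:
p1 = ("S", ("NP", "the", "knight"), ("VP", "entered", ("NP", "the", "hall")))
p2 = ("S", ("NP", "the", "queen"),  ("VP", "entered", ("NP", "the", "garden")))

g = merge(p1, p2)
print(g)  # ('S', ('NP', 'the', '*'), ('VP', 'entered', ('NP', 'the', '*')))
print(matches(g, p1), matches(g, p2))  # True True
```

The merged pattern covers both training examples while remaining specific where they agree, which is the property that lets a small set of generalized patterns annotate unseen text.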
Using semantic knowledge to improve compression on log files
- Authors: Otten, Frederick John
- Date: 2009 , 2008-11-19
- Subjects: Computer networks , Data compression (Computer science) , Semantics--Data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4650 , http://hdl.handle.net/10962/d1006619 , Computer networks , Data compression (Computer science) , Semantics--Data processing
- Description: With the move towards global and multi-national companies, information technology infrastructure requirements are increasing. As the size of these computer networks increases, it becomes more and more difficult to monitor, control, and secure them. Networks consist of a number of diverse devices, sensors, and gateways which are often spread over large geographical areas. Each of these devices produces log files which need to be analysed and monitored to provide network security and satisfy regulations. Data compression programs such as gzip and bzip2 are commonly used to reduce the quantity of data for archival purposes after the log files have been rotated. However, many other compression programs exist, each with its own advantages and disadvantages. These programs each use a different amount of memory and take different compression and decompression times to achieve different compression ratios. System log files also contain redundancy which is not necessarily exploited by standard compression programs. Log messages usually use a similar format with a defined syntax. In the log files, not all ASCII characters are used, and the messages contain certain "phrases" which are often repeated. This thesis investigates the use of compression as a means of data reduction and how the use of semantic knowledge can improve data compression (also applying results to different scenarios that can occur in a distributed computing environment). It presents the results of a series of tests performed on different log files. It also examines the semantic knowledge which exists in maillog files and how it can be exploited to improve the compression results. The results from a series of text preprocessors which exploit this knowledge are presented and evaluated. These preprocessors include: one which replaces the timestamps and IP addresses with their binary equivalents and one which replaces words from a dictionary with unused ASCII characters. In this thesis, data compression is shown to be an effective method of data reduction producing up to 98 percent reduction in filesize on a corpus of log files. The use of preprocessors which exploit semantic knowledge results in up to 56 percent improvement in overall compression time and up to 32 percent reduction in compressed size.
- Full Text:
- Date Issued: 2009
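A minimal sketch of the preprocessing idea in the Otten abstract above: timestamps and IPv4 addresses are replaced with compact binary equivalents, and frequent log "phrases" are mapped to otherwise-unused byte values, before a general-purpose compressor runs. The marker bytes, the phrase dictionary, and the syslog-style timestamp format are illustrative assumptions, not the thesis's actual encoding.

```python
# Hypothetical semantic preprocessor sketch: shrink log lines before gzip/bzip2.
import re
import struct
from datetime import datetime

IP_RE = re.compile(rb"\b(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})\b")
TS_RE = re.compile(rb"^[A-Z][a-z]{2} [ \d]\d \d\d:\d\d:\d\d")  # e.g. "Nov 19 08:15:02"

IP_MARK, TS_MARK = b"\x01", b"\x02"  # assumed-unused marker bytes
PHRASES = {b"connection from": b"\x80",         # assumed frequent phrases mapped
           b"authentication failure": b"\x81"}  # to unused high-ASCII codes

def preprocess(line: bytes, year: int = 2008) -> bytes:
    # Replace the textual timestamp with a 4-byte epoch integer (local time,
    # simplified: syslog omits the year, so one is assumed).
    ts = TS_RE.match(line)
    if ts:
        t = datetime.strptime(f"{year} " + ts.group(0).decode(), "%Y %b %d %H:%M:%S")
        line = TS_MARK + struct.pack(">I", int(t.timestamp())) + line[ts.end():]
    # Replace dotted-quad IP addresses with 4 raw bytes.
    line = IP_RE.sub(lambda m: IP_MARK + bytes(int(g) for g in m.groups()), line)
    # Replace dictionary phrases with single unused byte codes.
    for phrase, code in PHRASES.items():
        line = line.replace(phrase, code)
    return line

raw = b"Nov 19 08:15:02 mail sshd[411]: connection from 146.231.128.6"
packed = preprocess(raw)
print(len(raw), "->", len(packed), "bytes, before general-purpose compression")
```

Each substitution removes redundancy the compressor would otherwise have to discover itself: a 15-byte timestamp becomes 5 bytes, a dotted quad becomes 5, and each dictionary phrase becomes 1.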
An adaptive approach for optimized opportunistic routing over Delay Tolerant Mobile Ad hoc Networks
- Authors: Zhao, Xiaogeng
- Date: 2008
- Subjects: Ad hoc networks (Computer networks) , Computer network architectures , Computer networks , Routing protocols (Computer network protocols)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4588 , http://hdl.handle.net/10962/d1004822
- Description: This thesis presents a framework for investigating opportunistic routing in Delay Tolerant Mobile Ad hoc Networks (DTMANETs), and introduces the concept of an Opportunistic Confidence Index (OCI). The OCI enables multiple opportunistic routing protocols to be applied as an adaptive group to improve DTMANET routing reliability, performance, and efficiency. The DTMANET is a recently acknowledged network architecture, which is designed to address the challenging and marginal environments created by adaptive, mobile, and unreliable network node presence. Because of its ad hoc and autonomic nature, routing in a DTMANET is a very challenging problem. The design of routing protocols in such environments, which ensure a high percentage delivery rate (reliability), achieve a reasonable delivery time (performance), and at the same time maintain an acceptable communication overhead (efficiency), is of fundamental consequence to the usefulness of DTMANETs. In recent years, a number of investigations into DTMANET routing have been conducted, resulting in the emergence of a class of routing known as opportunistic routing protocols. Current research into opportunistic routing has exposed opportunities for positive impacts on DTMANET routing. To date, most investigations have concentrated upon one or other of the quality metrics of reliability, performance, or efficiency, while some approaches have pursued a balance of these metrics through assumptions of a high level of global knowledge and/or uniform mobile device behaviours. No prior research that we are aware of has studied the connection between multiple opportunistic elements and their influences upon one another, and none has demonstrated the possibility of modelling and using multiple different opportunistic elements as an adaptive group to aid the routing process in a DTMANET. This thesis investigates OCI opportunities and their viability through the design of an extensible simulation environment, which makes use of methods and techniques such as abstract modelling, opportunistic element simplification and isolation, random attribute generation and assignment, localized knowledge sharing, automated scenario generation, intelligent weight assignment and/or opportunistic element permutation. These methods and techniques are incorporated at both the data acquisition and analysis phases. Our results show a significant improvement in all three metric categories. In one of the most applicable scenarios tested, the OCI yielded a 31.05% increase in message delivery (reliability improvement), a 22.18% reduction in message delivery time (performance improvement), and a 73.64% reduction in routing depth (efficiency improvement). We are able to conclude that the OCI approach is feasible across a range of scenarios, and that the use of multiple opportunistic elements to aid decision-making processes in DTMANET environments has value.
- Full Text:
- Date Issued: 2008
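The Zhao abstract above combines several opportunistic elements into a single confidence index used to rank forwarding choices. The sketch below shows that combination in its simplest form; the element names, normalisation to [0, 1], and the fixed weight dictionary are illustrative assumptions, whereas the thesis derives weights through intelligent weight assignment.

```python
# Hypothetical OCI sketch: score next-hop candidates from several
# opportunistic elements, each normalised to the range [0, 1].
WEIGHTS = {                 # assumed weights; the thesis assigns these adaptively
    "encounter_freq": 0.4,  # how often this node meets the destination
    "battery":        0.2,  # remaining energy
    "buffer_free":    0.2,  # free message-buffer space
    "link_quality":   0.2,  # recent radio link quality
}

def oci(elements: dict) -> float:
    """Weighted sum of normalised opportunistic elements."""
    return sum(WEIGHTS[name] * value for name, value in elements.items())

candidates = {
    "node_a": {"encounter_freq": 0.9, "battery": 0.3,
               "buffer_free": 0.8, "link_quality": 0.6},
    "node_b": {"encounter_freq": 0.3, "battery": 0.9,
               "buffer_free": 0.9, "link_quality": 0.9},
}

# Forward the message to the neighbour with the highest confidence index.
best = max(candidates, key=lambda n: oci(candidates[n]))
print(best, round(oci(candidates[best]), 3))  # node_a 0.7
```

The point of grouping elements this way is that no single metric dominates: a node that meets the destination often but has a full buffer can be out-scored by a less familiar node in better condition.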
An investigation into interoperable end-to-end mobile web service security
- Authors: Moyo, Thamsanqa
- Date: 2008
- Subjects: Web services , Mobile computing , Smartphones , Internetworking (Telecommunication) , Computer networks -- Security measures , XML (Document markup language) , Microsoft .NET Framework , Java (Computer program language)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4595 , http://hdl.handle.net/10962/d1004838 , Web services , Mobile computing , Smartphones , Internetworking (Telecommunication) , Computer networks -- Security measures , XML (Document markup language) , Microsoft .NET Framework , Java (Computer program language)
- Description: The capacity to engage in web services transactions on smartphones is growing as these devices become increasingly powerful and sophisticated. This capacity for mobile web services is being realised through mobile applications that consume web services hosted on larger computing devices. This thesis investigates the effect that end-to-end web services security has on the interoperability between mobile web services requesters and traditional web services providers. SOAP web services are the preferred web services approach for this investigation. Although WS-Security is recognised as demanding on mobile hardware and network resources, the selection of appropriate WS-Security mechanisms lessens this burden. An attempt to implement such mechanisms on smartphones is carried out via an experiment. Smartphones are selected as the mobile device type used in the experiment. The experiment is conducted on the Java Micro Edition (Java ME) and the .NET Compact Framework (.NET CF) smartphone platforms. The experiment shows that the implementation of interoperable, end-to-end, mobile web services security on both platforms is reliant on third-party libraries. This reliance on third-party libraries results in poor developer support and exposes developers to the complexity of cryptography. The experiment also shows that there are no standard message size optimisation libraries available for both platforms. The implementation carried out on the .NET CF is also shown to rely on the underlying operating system. It is concluded that standard WS-Security APIs must be provided on smartphone platforms to avoid the problems of poor developer support and the additional complexity of cryptography. It is recommended that these APIs include a message optimisation technique. It is further recommended that WS-Security APIs be completely operating system independent when they are implemented in managed code. This thesis contributes by: providing a snapshot of mobile web services security; identifying the smartphone platform state of readiness for end-to-end secure web services; and providing a set of recommendations that may improve this state of readiness. These contributions are of increasing importance as mobile web services evolve from a simple point-to-point environment to the more complex enterprise environment.
- Full Text:
- Date Issued: 2008
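To make concrete what the per-message work behind "end-to-end web services security" in the Moyo abstract above involves, here is a deliberately simplified sketch that inserts a digest-bearing security header into a SOAP envelope using only the Python standard library. It is emphatically not WS-Security-compliant: real WS-Security requires XML canonicalisation, XML-Signature, and key management, which is precisely the complexity the cited third-party libraries hide. The header element names here are hypothetical.

```python
# Simplified, NON-compliant sketch of message-level security for a SOAP body.
# Shown only to illustrate the kind of per-message processing a smartphone
# must perform; real WS-Security uses canonicalised XML-Signature structures.
import base64
import hashlib

def secure_envelope(body_xml: str) -> str:
    # Hash the body and carry the digest in a (hypothetical) security header.
    digest = base64.b64encode(hashlib.sha1(body_xml.encode()).digest()).decode()
    return f"""<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header>
    <Security><BodyDigest alg="SHA-1">{digest}</BodyDigest></Security>
  </soap:Header>
  <soap:Body>{body_xml}</soap:Body>
</soap:Envelope>"""

print(secure_envelope("<GetQuote><symbol>XYZ</symbol></GetQuote>"))
```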
Connection management applications for high-speed audio networking
- Authors: Sibanda, Phathisile
- Date: 2008 , 2008-03-12
- Subjects: Flash (Computer file) , Computer networks , Computer networks -- Management , Digital communications , Computer sound processing , Sound -- Recording and reproducing -- Digital techniques , Broadcast data systems , C# (Computer program language) , C++ (Computer program language) , ActionScript (Computer program language)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4634 , http://hdl.handle.net/10962/d1006532 , Flash (Computer file) , Computer networks , Computer networks -- Management , Digital communications , Computer sound processing , Sound -- Recording and reproducing -- Digital techniques , Broadcast data systems , C# (Computer program language) , C++ (Computer program language) , ActionScript (Computer program language)
- Description: Connection management applications (referred to as patchbays) for high-speed audio networking are traditionally developed using third-generation languages such as C, C# and C++. Due to the rapid increase in distributed audio/video network usage in the world today, connection management applications that control signal routing over these networks have also evolved in complexity to accommodate more functionality. As a result, high-speed audio networking application developers require a tool that will enable them to develop complex connection management applications easily and within the shortest possible time. In addition, this tool should provide them with the reliability and flexibility required to develop applications controlling signal routing in networks carrying real-time data. High-speed audio networks are used for various purposes that include audio/video production and broadcasting. This investigation evaluates the possibility of using Adobe Flash Professional 8, with ActionScript 2.0, for developing connection management applications. Three patchbays, namely the Broadcast patchbay, the Project studio patchbay, and the Hospitality/Convention Centre patchbay, were developed and tested for connection management in three sound installation networks, namely the Broadcast network, the Project studio network, and the Hospitality/Convention Centre network. Findings indicate that complex connection management applications can effectively be implemented using the Adobe Flash IDE and ActionScript 2.0.
- Full Text:
- Date Issued: 2008
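At its core, a patchbay of the kind described in the Sibanda abstract above is an interface over a routing matrix that connects audio sources to destinations. The sketch below models only that routing state (the thesis implements the applications in ActionScript 2.0 over a network control protocol; the class and channel names here are illustrative assumptions).

```python
# Hypothetical routing core of a patchbay: the source-to-destination map
# that a connection-management GUI edits.
class Patchbay:
    def __init__(self):
        self.routes = {}  # destination channel -> source channel

    def connect(self, source: str, destination: str):
        self.routes[destination] = source  # one source feeds a destination

    def disconnect(self, destination: str):
        self.routes.pop(destination, None)

    def source_for(self, destination: str):
        return self.routes.get(destination)

pb = Patchbay()
pb.connect("studio1/mic1", "broadcast/main-L")
pb.connect("studio1/mic1", "broadcast/main-R")  # one source, many destinations
print(pb.source_for("broadcast/main-L"))        # studio1/mic1
```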
Network-layer reservation TDM for ad-hoc 802.11 networks
- Authors: Duff, Kevin Craig
- Date: 2008
- Subjects: Computer networks -- Access control , Computers -- Access control , Computer networks -- Management , Time division multiple access , Ad hoc networks (Computer networks)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4574 , http://hdl.handle.net/10962/d1002773 , Computer networks -- Access control , Computers -- Access control , Computer networks -- Management , Time division multiple access , Ad hoc networks (Computer networks)
- Description: Ad-Hoc mesh networks offer great promise. Low-cost ad-hoc mesh networks can be built using popular IEEE 802.11 equipment, but such networks are unable to guarantee each node a fair share of bandwidth. Furthermore, hidden node problems cause collisions which can cripple the throughput of a network. This research proposes a novel mechanism which is able to overcome hidden node problems and provide fair bandwidth sharing among nodes on ad-hoc 802.11 networks, and can be implemented on existing network devices. The scheme uses TDM (time division multiplexing) with slot reservation. A distributed beacon packet latency measurement mechanism is used to achieve node synchronisation. The distributed nature of the mechanism makes it applicable to ad-hoc 802.11 networks, which can either grow or fragment dynamically.
- Full Text:
- Date Issued: 2008
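The Duff abstract above proposes network-layer TDM: nodes agree on a shared timebase via beacon latency measurements, and each node transmits only in its reserved slot. The sketch below shows the slot arithmetic under assumed parameters; the slot length, slot count, and single-beacon offset estimate are illustrative simplifications of the distributed scheme.

```python
# Hypothetical TDM slot-reservation sketch for an ad-hoc 802.11 network.
SLOT_MS, NUM_SLOTS = 50, 8  # assumed frame layout: 8 slots of 50 ms each

def clock_offset(beacon_sent_ms, beacon_received_ms, latency_ms):
    """Estimate our clock offset from a neighbour using a beacon whose
    one-way latency has been measured, as in the scheme described above."""
    return beacon_received_ms - (beacon_sent_ms + latency_ms)

def current_slot(local_ms, offset_ms):
    shared = local_ms - offset_ms          # translate to the shared timebase
    return (shared // SLOT_MS) % NUM_SLOTS

def may_transmit(local_ms, offset_ms, my_slot):
    """A node sends only during its reserved slot, avoiding hidden-node
    collisions without carrier sensing alone."""
    return current_slot(local_ms, offset_ms) == my_slot

offset = clock_offset(beacon_sent_ms=1_000_000,
                      beacon_received_ms=1_000_213,
                      latency_ms=3)        # we run ~210 ms ahead of the neighbour
print(current_slot(1_000_400, offset))     # 3: slot index in the shared frame
print(may_transmit(1_000_400, offset, my_slot=3))  # True
```

Because every node derives the same slot index from the shared timebase, two nodes that cannot hear each other still avoid transmitting simultaneously at a common neighbour.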
Prototyping a peer-to-peer session initiation protocol user agent
- Authors: Tsietsi, Mosiuoa Jeremia
- Date: 2008 , 2008-03-10
- Subjects: Computer networks , Computer network protocols -- Standards , Data transmission systems -- Standards , Peer-to-peer architecture (Computer networks) , Computer network architectures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4646 , http://hdl.handle.net/10962/d1006603 , Computer networks , Computer network protocols -- Standards , Data transmission systems -- Standards , Peer-to-peer architecture (Computer networks) , Computer network architectures
- Description: The Session Initiation Protocol (SIP) has in recent years become a popular protocol for the exchange of text, voice and video over IP networks. This thesis proposes the use of a class of structured peer-to-peer protocols - commonly known as Distributed Hash Tables (DHTs) - to provide a SIP overlay with services such as end-point location management and message relay, in the absence of traditional, centralised resources such as SIP proxies and registrars. A peer-to-peer layer named OverCord, which allows interaction with any specific DHT protocol via the use of appropriate plug-ins, was designed, implemented and tested. This layer was then incorporated into a SIP user agent distributed by NIST (National Institute of Standards and Technology, USA). The modified user agent is capable of reliably establishing text, audio and video communication with similarly modified agents (peers) as well as with conventional, centralised SIP overlays.
- Full Text:
- Date Issued: 2008
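In the Tsietsi abstract above, the DHT takes over the registrar's job: registration stores a binding under a key derived from the user's address-of-record, and location lookup retrieves it from whichever peer owns that key. The sketch below mimics OverCord's plug-in idea with a trivial in-memory stand-in for a DHT; the class names and SHA-1 keying scheme are illustrative assumptions, not OverCord's actual interfaces.

```python
# Hypothetical sketch of DHT-backed SIP registration and lookup.
import hashlib

class DHTPlugin:
    """Stand-in for a pluggable DHT protocol (Chord, Kademlia, ...).
    A real plug-in would route put/get to the peer responsible for the key."""
    def __init__(self):
        self.store = {}

    def put(self, key: bytes, value: str):
        self.store[key] = value

    def get(self, key: bytes):
        return self.store.get(key)

def aor_key(aor: str) -> bytes:
    return hashlib.sha1(aor.encode()).digest()  # 160-bit DHT key

def register(dht: DHTPlugin, aor: str, contact: str):
    dht.put(aor_key(aor), contact)   # replaces REGISTER at a central registrar

def locate(dht: DHTPlugin, aor: str):
    return dht.get(aor_key(aor))     # replaces a proxy's location-service lookup

overlay = DHTPlugin()
register(overlay, "sip:alice@example.org", "sip:alice@146.231.123.7:5060")
print(locate(overlay, "sip:alice@example.org"))
```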
The development and evaluation of a custom-built synchronous online learning environment for tertiary education in South Africa
- Authors: Halse, Michelle Louise
- Date: 2008 , 2008-02-23
- Subjects: Education, Higher -- South Africa -- Computer-assisted instruction , Universities and colleges -- Computer networks -- South Africa , Internet in higher education -- South Africa , Distance education -- South Africa -- Computer-assisted instruction , Computer-assisted instruction -- South Africa , Computer science -- Study and teaching (Higher) -- South Africa
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4636 , http://hdl.handle.net/10962/d1006545 , Education, Higher -- South Africa -- Computer-assisted instruction , Universities and colleges -- Computer networks -- South Africa , Internet in higher education -- South Africa , Distance education -- South Africa -- Computer-assisted instruction , Computer-assisted instruction -- South Africa , Computer science -- Study and teaching (Higher) -- South Africa
- Description: The Departments of Computer Science and Information Systems at Rhodes University currently share certain honours-level (fourth year) course modules with students from the corresponding departments at the previously disadvantaged University of Fort Hare. These lectures are currently delivered using video-conferencing. This was found to present a number of problems, including challenges in implementing desired pedagogical approaches, inequitable learning experiences, student disengagement at the remote venue, and inflexibility of the video-conferencing system. In order to address these problems, various e-learning modes were investigated, and synchronous e-learning was found to offer a number of advantages over asynchronous e-learning. Live Virtual Classrooms (LVCs) were identified as synchronous e-learning tools that support the pedagogical principles important to the two universities and to the broader context of South African tertiary education, and commercial LVC applications were investigated and evaluated. Informed by the results of this investigation, a small, simple LVC was designed, developed and customised for use in a predominantly academic sphere and deployment in a South African tertiary educational context. Testing and evaluation of this solution was carried out and the results analysed in terms of the LVC’s technical merits and the pedagogical value of the solution as experienced by students and lecturers/facilitators. An evaluation of this solution indicated that the LVC solves a number of the identified problems with video-conferencing and also provides a flexible/customisable/extensible solution that supports highly interactive, collaborative, learner-centred education. The custom LVC solution could be easily adapted to the specific needs of any tertiary educational institution in the country, and results may benefit other tertiary educational institutions involved in or dependent on distance learning.
- Full Text:
- Date Issued: 2008
A model for a context aware machine-based personal memory manager and its implementation using a visual programming environment
- Authors: Tsegaye, Melekam Asrat
- Date: 2007
- Subjects: Visual programming (Computer science) , Memory management (Computer science) , Memory -- Data processing
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4640 , http://hdl.handle.net/10962/d1006563
- Description: Memory is a part of cognition. It is essential for an individual to function normally in society. It encompasses an individual's lifetime experience, thus defining his identity. This thesis develops the concept of a machine-based personal memory manager which captures and manages an individual's day-to-day external memories. Rather than accumulating large amounts of data which has to be mined for useful memories, the machine-based memory manager automatically organizes memories as they are captured to enable their quick retrieval and use. The main functions of the machine-based memory manager envisioned in this thesis are the support and the augmentation of an individual's biological memory system. In the thesis, a model for a machine-based memory manager is developed. A visual programming environment, which can be used to build context aware applications as well as a proof-of-concept machine-based memory manager, is conceptualized and implemented. An experimental machine-based memory manager is implemented and evaluated. The model describes a machine-based memory manager which manages an individual's external memories by context. It addresses the management of external memories which accumulate over long periods of time by proposing a context aware file system which automatically organizes external memories by context. It describes how personal memory management can be facilitated by machine using six entities (life streams, memory producers, memory consumers, a memory manager, memory fragments and context descriptors) and the processes in which these entities participate (memory capture, memory encoding and decoding, and memory retrieval). The visual programming environment represents a development tool which contains facilities that support context aware application programming. For example, it provides facilities which enable the definition and use of virtual sensors. It enables rapid programming with a focus on component re-use and dynamic composition of applications through a visual interface. The experimental machine-based memory manager serves as an example implementation of the machine-based memory manager which is described by the model developed in this thesis. The hardware used in its implementation consists of widely available components such as a camera, microphone and sub-notebook computer which are assembled in the form of a wearable computer. The software is constructed using the visual programming environment developed in this thesis. It contains multiple sensor drivers, context interpreters, a context aware file system as well as memory retrieval and presentation interfaces. The evaluation of the machine-based memory manager shows that it is possible to create a machine which monitors the states of an individual and his environment, and manages his external memories, thus supporting and augmenting his biological memory.
- Full Text:
- Date Issued: 2007
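The Tsegaye abstract above organizes captured "memory fragments" by context rather than by directory path. The sketch below shows that indexing idea: fragments carry context descriptors assigned at capture time, and recall is a query over those descriptors. The descriptor fields and exact-match rule are illustrative assumptions, not the thesis's model.

```python
# Hypothetical sketch of a context-indexed personal memory store.
from dataclasses import dataclass, field

@dataclass
class MemoryFragment:
    payload: str                                  # e.g. path to captured audio/video
    context: dict = field(default_factory=dict)   # descriptors at capture time

class MemoryManager:
    def __init__(self):
        self.fragments = []

    def capture(self, payload, **context):
        """Store a fragment together with its context descriptors."""
        self.fragments.append(MemoryFragment(payload, context))

    def recall(self, **query):
        """Return fragments whose context matches every queried descriptor."""
        return [f for f in self.fragments
                if all(f.context.get(k) == v for k, v in query.items())]

mm = MemoryManager()
mm.capture("clip-041.ogg", place="office", person="Alice", activity="meeting")
mm.capture("clip-042.ogg", place="cafe", person="Bob")
print([f.payload for f in mm.recall(place="office", person="Alice")])
```

Organizing by context at capture time is what removes the later "mine a large archive" step: retrieval becomes a direct query instead of a search.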
An investigation into the deployment of IEEE 802.11 networks
- Authors: Janse van Rensburg, Johanna Hendrina
- Date: 2007
- Subjects: Local area networks (Computer networks)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4596 , http://hdl.handle.net/10962/d1004839 , Local area networks (Computer networks)
- Description: Currently, the IEEE 802.11 standard is the leading technology in the Wireless Local Area Network (WLAN) market. It provides flexibility and mobility to users, which in turn increase productivity. Compared with traditional fixed Local Area Network (LAN) technologies, WLANs are easier to deploy and have lower installation costs. Unfortunately, there are problems inherent in the technology and standard that inhibit its performance. Technological problems can be attributed to the physical medium of a WLAN, the electromagnetic (EM) wave. Standards-based problems include security issues and the MAC layer design. However, the impact of these problems can be mitigated with proper planning and design of the WLAN. To do this, an understanding of WLAN issues and the use of WLAN software tools are necessary. This thesis discusses WLAN issues such as security and electromagnetic wave propagation and introduces software that can aid the planning, deployment and maintenance of a WLAN. Furthermore, the planning, implementation and auditing phases of a WLAN lifecycle are discussed. The aim is to provide an understanding of the complexities involved in deploying and maintaining a secure and reliable WLAN.
- Full Text:
- Date Issued: 2007
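One concrete planning calculation behind the EM-propagation issues mentioned in the Janse van Rensburg abstract above is free-space path loss: FSPL(dB) = 20 log10(d_km) + 20 log10(f_MHz) + 32.44. This is the standard textbook formula, not taken from the thesis; the link parameters below (2.4 GHz 802.11, 20 dBm access point, 50 m range) are assumptions, and real WLAN planning tools add wall and floor attenuation on top of it.

```python
# Free-space path loss for WLAN planning (standard formula, not from the thesis).
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """FSPL in dB for distance in km and frequency in MHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Assumed link budget: 802.11b/g channel 6 (2437 MHz), 20 dBm AP, client 50 m away.
loss = fspl_db(0.05, 2437.0)
rx_dbm = 20 - loss  # ignoring antenna gains and obstacles
print(f"path loss {loss:.1f} dB, received {rx_dbm:.1f} dBm")  # ~74.2 dB, ~-54.2 dBm
```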
Constructing a low-cost, open-source, VoiceXML gateway
- Authors: King, Adam
- Date: 2007 , 2013-07-01
- Subjects: VoiceXML (Document markup language) , Asterisk (Computer file) , Internet telephony , Open source software
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4585 , http://hdl.handle.net/10962/d1004735 , VoiceXML (Document markup language) , Asterisk (Computer file) , Internet telephony , Open source software
- Description: Voice-enabled applications, applications that interact with a user via an audio channel, are used extensively today. Their use is growing as speech related technologies improve, as speech is one of the most natural methods of interaction. They can provide customer support as IVRs, can be used as an assistive technology, or can become an aural interface to the Internet. Given that the telephone is used extensively throughout the globe, the number of potential users of voice-enabled applications is very high. VoiceXML is a popular, open, high-level, standard means of creating voice-enabled applications, designed to bring the benefits of web-based development to such services. While VoiceXML is an ideal language for creating these applications, VoiceXML gateways, the hardware and software responsible for interpreting VoiceXML applications and interfacing with the PSTN, are still expensive and so there is a need for a low-cost gateway. Asterisk, an open-source TDM/VoIP telephony platform, can be used as a low-cost PSTN interface. This thesis investigates adding a VoiceXML service to Asterisk, creating a low-cost VoiceXML prototype gateway which is able to render voice-enabled applications. Following the Component-Based Software Engineering (CBSE) paradigm, the VoiceXML gateway is divided into a set of components which are sourced from the open-source community and integrated to create the gateway. The browser requires a VoiceXML interpreter (OpenVXI), a Text-To-Speech engine (Festival) and a speech recognition engine (Sphinx 4). The integration of the components results in a low-cost, open-source VoiceXML gateway. System tests show that the integration of the components was successful, and that the system can handle concurrent calls. A fully compliant version of the gateway can be used in the real world to render voice-enabled applications at a low cost.
- Full Text:
- Date Issued: 2007
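To illustrate the rendering step such a gateway performs, here is a minimal sketch that walks a VoiceXML form and hands each prompt to a text-to-speech stub. This is not OpenVXI's API: the document, the `speak` stub and the traversal are simplified stand-ins for what the real interpreter and Festival do.

    # Minimal sketch: walk a VoiceXML <form>, rendering each <prompt> via a
    # TTS stub. A real gateway (OpenVXI + Festival + Sphinx) also handles
    # grammars, field filling and telephony I/O.
    import xml.etree.ElementTree as ET

    VXML = """<vxml version="2.0">
      <form id="greet">
        <block>
          <prompt>Welcome to the low-cost gateway demo.</prompt>
        </block>
      </form>
    </vxml>"""

    def speak(text: str) -> None:
        # Stand-in for a Festival TTS call; here we just print.
        print(f"[TTS] {text}")

    root = ET.fromstring(VXML)
    for form in root.iter("form"):
        for prompt in form.iter("prompt"):
            speak(prompt.text.strip())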
Extending the reach of personal area networks by transporting Bluetooth communications over IP networks
- Authors: Mackie, David Sean
- Date: 2007 , 2007-03-29
- Subjects: Bluetooth technology , Communication -- Technological innovations , Communication -- Network analysis , TCP/IP (Computer network protocol) , Computer networks , Computer network protocols , Wireless communication systems
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4637 , http://hdl.handle.net/10962/d1006551 , Bluetooth technology , Communication -- Technological innovations , Communication -- Network analysis , TCP/IP (Computer network protocol) , Computer networks , Computer network protocols , Wireless communication systems
- Description: This thesis presents an investigation of how to extend the reach of a Bluetooth personal area network by introducing the concept of Bluetooth Hotspots. Currently two Bluetooth devices cannot communicate with each other unless they are within radio range, since Bluetooth is designed as a cable-replacement technology for wireless communications over short ranges. An investigation was done into the feasibility of creating Bluetooth hotspots that allow distant Bluetooth devices to communicate with each other by transporting their communications between these hotspots via an alternative network infrastructure such as an IP network. Two approaches were investigated: masquerading of remote devices by the local hotspot to allow seamless communications, and proxying services on remote devices by providing them on a local hotspot using a distributed service discovery database. The latter approach was used to develop applications capable of transporting Bluetooth’s RFCOMM and L2CAP protocols. Quantitative tests were performed to establish the throughput performance and latency of these transport applications. Furthermore, a number of selected Bluetooth services were tested, which led us to conclude that most data-based protocols can be transported by the system. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2007
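The IP leg of the hotspot-to-hotspot transport can be pictured as a byte pump between a local connection (standing in here for an RFCOMM channel) and a TCP link to the remote hotspot. A minimal sketch, not the thesis's implementation; the remote host and port are illustrative, and a real hotspot would open an AF_BLUETOOTH socket on the local side.

    # Sketch of the IP transport leg: copy bytes in both directions between
    # a local connection and a TCP link to the remote hotspot.
    import socket
    import threading

    def pump(src: socket.socket, dst: socket.socket) -> None:
        """Copy bytes from src to dst until src closes."""
        while True:
            data = src.recv(4096)
            if not data:
                dst.close()
                return
            dst.sendall(data)

    def relay(local_conn: socket.socket, remote_host: str, remote_port: int) -> None:
        """Bridge local_conn to the hotspot at remote_host:remote_port."""
        remote = socket.create_connection((remote_host, remote_port))
        threading.Thread(target=pump, args=(local_conn, remote), daemon=True).start()
        pump(remote, local_conn)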
Limiting vulnerability exposure through effective patch management: threat mitigation through vulnerability remediation
- Authors: White, Dominic Stjohn Dolin
- Date: 2007 , 2007-02-08
- Subjects: Computer networks -- Security measures , Computer viruses , Computer security
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4629 , http://hdl.handle.net/10962/d1006510 , Computer networks -- Security measures , Computer viruses , Computer security
- Description: This document aims to provide a complete discussion on vulnerability and patch management. The first chapters look at the trends relating to vulnerabilities, exploits, attacks and patches. These trends describe the drivers of patch and vulnerability management and situate the discussion in the current security climate. The following chapters then aim to present both policy and technical solutions to the problem. The policies described lay out a comprehensive set of steps that can be followed by any organisation to implement its own patch management policy, including practical advice on integration with other policies, managing risk, identifying vulnerabilities, strategies for reducing downtime, and generating metrics to measure progress. Having covered the steps that can be taken by users, a strategy describing how best a vendor should implement a related patch release policy is provided. An argument is made that current monthly patch release schedules are inadequate to allow users to mitigate vulnerabilities effectively and timeously. The final chapters discuss the technical aspects of automating parts of the policies described. In particular, the concept of 'defense in depth' is used to discuss additional strategies for 'buying time' during the patch process. The document then concludes that, in the face of increasing malicious activity and more complex patching, solid frameworks such as those provided in this document are required to ensure an organisation can fully manage the patching process. However, more research is required to fully understand vulnerabilities and exploits. In particular, more attention must be paid to threats, as little work has been done to fully understand threat-agent capabilities and activities on a day-to-day basis. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2007
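One of the metrics the policy chapters call for is the exposure window between a vulnerability's disclosure and the patch being applied. A minimal sketch of such a metric follows; the record fields, CVE identifiers and dates are hypothetical, not data from the thesis.

    # Exposure-window metric: days between disclosure and remediation,
    # averaged over a (hypothetical) patch log.
    from datetime import date
    from statistics import mean

    patch_log = [
        {"cve": "CVE-2006-0001", "disclosed": date(2006, 1, 10), "patched": date(2006, 1, 24)},
        {"cve": "CVE-2006-0002", "disclosed": date(2006, 2, 14), "patched": date(2006, 3, 1)},
    ]

    windows = [(rec["patched"] - rec["disclosed"]).days for rec in patch_log]
    print(f"mean exposure window: {mean(windows):.1f} days")  # 14.5 days

Tracking this number over time gives an organisation a concrete measure of whether its patch management policy is actually reducing exposure.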
Securing media streams in an Asterisk-based environment and evaluating the resulting performance cost
- Authors: Clayton, Bradley
- Date: 2007 , 2007-01-08
- Subjects: Asterisk (Computer file) , Computer networks -- Security measures , Internet telephony -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4647 , http://hdl.handle.net/10962/d1006606 , Asterisk (Computer file) , Computer networks -- Security measures , Internet telephony -- Security measures
- Description: When adding Confidentiality, Integrity and Availability (CIA) to a multi-user VoIP (Voice over IP) system, performance and quality are at risk. The aim of this study is twofold. Firstly, it describes current methods suitable for securing voice streams within a VoIP system and makes them available in an Asterisk-based VoIP environment. (Asterisk is a well-established, open-source, TDM/VoIP PBX.) Secondly, this study evaluates the performance cost incurred after implementing each security method within the Asterisk-based system, using a special testbed suite, named DRAPA, which was developed expressly for this study. The three security methods implemented and studied were IPSec (Internet Protocol Security), SRTP (Secure Real-time Transport Protocol) and SIAX2 (Secure Inter-Asterisk eXchange 2 protocol). From the experiments, it was found that bandwidth and CPU usage were significantly affected by the addition of CIA. In ranking the three security methods in terms of these two resources, it was found that SRTP incurs the least bandwidth overhead, followed by SIAX2 and then IPSec. Where CPU utilisation is concerned, it was found that SIAX2 incurs the least overhead, followed by IPSec and then SRTP. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2007
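A back-of-envelope calculation shows why SRTP's bandwidth overhead is the smallest of the three: it adds only an authentication tag to each RTP packet, whereas IPSec ESP wraps the whole datagram in a new header, padding and ICV. The figures below assume G.711 with 20 ms packets and a 10-byte HMAC-SHA1-80 tag; they are illustrative arithmetic, not the DRAPA measurements.

    # Per-stream bandwidth for G.711 at 20 ms packetisation, with and
    # without the SRTP authentication tag. Figures are illustrative.
    PPS = 50                       # packets per second at 20 ms per packet
    PAYLOAD = 160                  # G.711 bytes per 20 ms
    RTP, UDP, IP = 12, 8, 20       # header sizes in bytes

    def kbps(bytes_per_packet: int) -> float:
        return bytes_per_packet * PPS * 8 / 1000

    plain_rtp = PAYLOAD + RTP + UDP + IP
    srtp = plain_rtp + 10          # SRTP adds a 10-byte HMAC-SHA1-80 tag
    print(f"RTP : {kbps(plain_rtp):.1f} kbps")  # 80.0 kbps
    print(f"SRTP: {kbps(srtp):.1f} kbps")       # 84.0 kbps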
Securing softswitches from malicious attacks
- Authors: Opie, Jake Weyman
- Date: 2007
- Subjects: Internet telephony -- Security measures , Computer networks -- Security measures , Digital telephone systems , Communication -- Technological innovations , Computer network protocols , TCP/IP (Computer network protocol) , Switching theory
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4683 , http://hdl.handle.net/10962/d1007714 , Internet telephony -- Security measures , Computer networks -- Security measures , Digital telephone systems , Communication -- Technological innovations , Computer network protocols , TCP/IP (Computer network protocol) , Switching theory
- Description: Traditionally, real-time communication, such as voice calls, has run on separate, closed networks. Of all the limitations that these networks had, the ability of malicious attacks to cripple communication was not a crucial one. This situation has changed radically now that real-time communication and data have merged to share the same network. The objective of this project is to investigate the securing of softswitches with functionality similar to Private Branch Exchanges (PBXs) from malicious attacks. The focus of the project is a practical investigation of how to secure ILANGA, an ASTERISK-based system under development at Rhodes University. The practical investigation focusing on ILANGA is based on performing six varied experiments on the different components of ILANGA. Before the six experiments are performed, basic preliminary security measures and the restrictions placed on access to the database are discussed. The outcomes of these experiments are discussed and the precise reasons why these attacks were either successful or unsuccessful are given. Suggestions of a theoretical nature on how to defend against the successful attacks are also presented. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2007
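As an example of the kind of basic preliminary security measure discussed, the sketch below throttles repeated authentication failures per source IP, a common first defence for a softswitch's login-facing components. The window and threshold are hypothetical; this is not code from the thesis.

    # Throttle repeated authentication failures per source IP.
    import time
    from collections import defaultdict, deque

    WINDOW_S = 60      # look-back window in seconds (illustrative)
    MAX_FAILURES = 5   # failures tolerated per window (illustrative)

    failures = defaultdict(deque)

    def allow_attempt(src_ip: str) -> bool:
        """Return False once src_ip has exhausted its failure budget."""
        now = time.monotonic()
        q = failures[src_ip]
        while q and now - q[0] > WINDOW_S:
            q.popleft()                  # drop failures outside the window
        return len(q) < MAX_FAILURES

    def record_failure(src_ip: str) -> None:
        failures[src_ip].append(time.monotonic())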
Trust on the semantic web
- Authors: Cloran, Russell Andrew
- Date: 2007 , 2006-08-07
- Subjects: Semantic Web , RDF (Document markup language) , XML (Document markup language) , Knowledge acquisition (Expert systems) , Data protection
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4649 , http://hdl.handle.net/10962/d1006616 , Semantic Web , RDF (Document markup language) , XML (Document markup language) , Knowledge acquisition (Expert systems) , Data protection
- Description: The Semantic Web is a vision to create a “web of knowledge”; an extension of the Web as we know it which will create an information space usable by machines in very rich ways. The technologies which make up the Semantic Web allow machines to reason across information gathered from the Web, presenting only relevant results and inferences to the user. Users of the Web in its current form assess the credibility of the information they gather in a number of different ways. If processing happens without the user being able to check the source and credibility of each piece of information used in the processing, the user must be able to trust that the machine has used trustworthy information at each step. The machine should therefore be able to automatically assess the credibility of each piece of information it gathers from the Web. A case study on advanced checks for website credibility is presented, and the site examined in the case study is found to be credible, despite failing many of the checks presented. A website with a backend based on RDF technologies is constructed. A better understanding of RDF technologies and good knowledge of the RAP and Redland RDF application frameworks is gained. The second aim of constructing the website was to gather information for testing various trust metrics; however, the website did not gain widespread support, and therefore not enough data was gathered for this. Techniques for presenting RDF data to users were also developed during website development, and these are discussed. Experiences in gathering RDF data are presented next. A scutter was successfully developed, and the data smushed to create a database in which uniquely identifiable objects were linked, even where gathered from different sources. Finally, the use of digital signatures as a means of linking an author and content produced by that author is presented. RDF/XML canonicalisation is discussed in the provision of ideal cryptographic checking of RDF graphs, rather than simply checking at the document level. The notion of canonicalisation on the semantic, structural and syntactic levels is proposed. A combination of an existing canonicalisation algorithm and a restricted RDF/XML dialect is presented as a solution to the RDF/XML canonicalisation problem. We conclude that a trusted Semantic Web is possible, with buy-in from publishing and consuming parties. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2007
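The role of canonicalisation can be illustrated by hashing a graph's triples in a fixed order, so that any serialisation of the same graph yields the same digest. This naive sketch deliberately ignores blank nodes, the hard case that the thesis's restricted RDF/XML dialect addresses, and reduces signing to a plain digest; the triples themselves are made up.

    # Naive canonical hash of an RDF graph: sort the triples, serialise as
    # N-Triples-style lines, and digest. Any insertion order of the same
    # triples yields the same hash. Blank nodes would break this scheme.
    import hashlib

    triples = {
        ("<http://example.org/doc>", "<http://purl.org/dc/elements/1.1/creator>",
         '"Russell Cloran"'),
        ("<http://example.org/doc>", "<http://purl.org/dc/elements/1.1/date>",
         '"2007"'),
    }

    canonical = "\n".join(" ".join(t) + " ." for t in sorted(triples))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    print(digest)  # stable digest over the graph, not over one serialisation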
A detailed investigation of interoperability for web services
- Authors: Wright, Madeleine
- Date: 2006
- Subjects: Firefox , Web services , World Wide Web , Computer architecture , C# (Computer program language) , PHP (Computer program language) , Java (Computer program language)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4592 , http://hdl.handle.net/10962/d1004832 , Firefox , Web services , World Wide Web , Computer architecture , C# (Computer program language) , PHP (Computer program language) , Java (Computer program language)
- Description: The thesis presents a qualitative survey of web services' interoperability, offering a snapshot of development and trends at the end of 2005. It starts by examining the beginnings of web services in earlier distributed computing and middleware technologies, determining the distance from these approaches evident in current web-services architectures. It establishes a working definition of web services, examining the protocols that now seek to define it and the extent to which they contribute to its most crucial feature, interoperability. The thesis then considers the REST approach to web services as being in a class of its own, concluding that this approach to interoperable distributed computing is not only the simplest but also the most interoperable. It looks briefly at interoperability issues raised by technologies in the wider arena of Service-Oriented Architecture. The chapter on protocols is complemented by a chapter that validates the qualitative findings by examining web services in practice, implemented by a variety of toolkits and on different platforms. Included in the study is a preliminary examination of JAX-WS, the replacement for JAX-RPC, which was still under development at the time. Although the main language of implementation is Java, the study includes services in C# and PHP and one implementation of a client using a Firefox extension. The study concludes that different forms of web service may co-exist with earlier middleware technologies. While remaining aware that there are still pitfalls that might yet derail the movement towards greater interoperability, the conclusion sounds an optimistic note that recent cooperation between different vendors may yet result in a solution that achieves interoperability through core web-service standards. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2006
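The interoperability contrast at the heart of the survey can be shown in miniature: a REST request is a plain URL, while the equivalent SOAP request wraps its arguments in an XML envelope POSTed to a single endpoint. The service, operation names and URLs below are invented for illustration.

    # REST vs SOAP in miniature. Neither request is actually sent here.
    import urllib.request

    # REST style: the resource and parameters live in the URL itself.
    rest_url = "http://example.org/stock/quote?symbol=IBM"

    # SOAP 1.1 style: the same request as an envelope POSTed to one endpoint.
    soap_body = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetQuote xmlns="http://example.org/stock">
          <symbol>IBM</symbol>
        </GetQuote>
      </soap:Body>
    </soap:Envelope>"""

    req = urllib.request.Request(
        "http://example.org/stock/service",
        data=soap_body.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "http://example.org/stock/GetQuote"},
    )
    # urllib.request.urlopen(rest_url) or urlopen(req) would perform the calls.

The REST call needs no toolkit at all, which is one concrete sense in which the thesis finds it the most interoperable style.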
A framework for responsive content adaptation in electronic display networks
- Authors: West, Philip
- Date: 2006
- Subjects: Computer networks , Cell phone systems , Wireless communication systems , Mobile communication systems , HTML (Document markup language) , XML (Document markup language)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4589 , http://hdl.handle.net/10962/d1004824 , Computer networks , Cell phone systems , Wireless communication systems , Mobile communication systems , HTML (Document markup language) , XML (Document markup language)
- Description: Recent trends show an increase in the availability and functionality of handheld devices, wireless network technology, and electronic display networks. We propose the novel integration of these technologies to provide wireless access to content delivered to large-screen display systems. Content adaptation is used as a method of reformatting web pages to display more appropriately on handheld devices, and to remove unwanted content. A framework is presented that facilitates content adaptation, implemented as an adaptation layer, which is extended to provide personalization of adaptation settings and response to network conditions. The framework is implemented as a proxy server for a wireless network, and handles HTML and XML documents. Once a document has been requested by a user, the HTML/XML is retrieved and parsed, creating a Document Object Model tree representation. It is then altered according to the user’s personal settings or predefined settings, based on current network usage and the network resources available. Three adaptation techniques were implemented: spatial representation, which generates an image map of the document; text summarization, which creates a tree-view representation of a document; and tag extraction, which replaces specific tags with links. Three proof-of-concept systems were developed in order to test the robustness of the framework: a system for use with digital slide shows, a digital signage system, and a generalized system for use with the internet. Testing was performed by accessing sample web pages through the content adaptation proxy server. Tag extraction works correctly for all HTML and XML document structures, whereas spatial representation and text summarization are limited to a controlled subset. Results indicate that the adaptive system can reduce average bandwidth usage by decreasing the amount of data on the network, thereby allowing a greater number of users access to content. This suggests that responsive content adaptation has a positive influence on network performance metrics. (An illustrative sketch of the tag-extraction technique follows this record.)
- Full Text:
- Date Issued: 2006
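The tag-extraction technique can be sketched with Python's standard HTML parser: heavyweight tags (here <img>) are rewritten as plain links, shrinking pages for handheld displays. This is a stand-in for the proxy's DOM-based rewriting, not its actual code.

    # Tag extraction: replace <img> tags with text links.
    from html.parser import HTMLParser

    class TagExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.out = []

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                src = dict(attrs).get("src", "")
                self.out.append(f'<a href="{src}">[image]</a>')
            else:
                self.out.append(self.get_starttag_text())

        def handle_endtag(self, tag):
            if tag != "img":
                self.out.append(f"</{tag}>")

        def handle_data(self, data):
            self.out.append(data)

    p = TagExtractor()
    p.feed('<p>Logo: <img src="logo.png"> done.</p>')
    print("".join(p.out))  # <p>Logo: <a href="logo.png">[image]</a> done.</p>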
Decorating Asterisk : experiments in service creation for a multi-protocol telephony environment using open source tools
- Authors: Hitchcock, Jonathan
- Date: 2006
- Subjects: Asterisk (Computer file) , Internet telephony
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4635 , http://hdl.handle.net/10962/d1006539 , Asterisk (Computer file) , Internet telephony
- Description: As Voice over IP becomes more prevalent, value-adds to the service will become ubiquitous. Voice over IP (VoIP) is no longer a single-service application, but an array of marketable services of increasing depth, which are moving into the non-desktop market. In addition, as the range of devices in general use increases, it will become necessary for all services, including VoIP services, to be accessible from multiple platforms and through varied interfaces. With the recent introduction and growth of the open-source software PBX system named Asterisk, the possibility of achieving these goals has become more concrete. In addition to Asterisk, a number of open-source systems are being developed which facilitate the development of systems that interoperate over a wide variety of platforms and through multiple interfaces. This thesis investigates Asterisk in terms of its viability to provide the depth of services that will be required in a VoIP environment, as well as a number of other open-source systems in terms of what they can offer such a system. In addition, it investigates whether these services can be made available on different devices. Using various systems built as proofs-of-concept, this thesis shows that Asterisk, in conjunction with various other open-source projects such as the Twisted framework, provides a concrete tool which can be used to realise flexible, protocol-independent telephony solutions for a small to medium enterprise. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2006
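One of the simplest service-creation interfaces Asterisk exposes is AGI, where a script reads call variables from stdin and writes commands to stdout. A minimal sketch follows; the thesis's prototypes were built on richer tooling such as the Twisted framework, so this bare-bones script is illustrative only.

    #!/usr/bin/env python3
    # Minimal Asterisk AGI script: read the AGI environment, then answer
    # the call and play a sound file shipped with Asterisk.
    import sys

    def agi_env() -> dict:
        env = {}
        while True:
            line = sys.stdin.readline().strip()
            if not line:
                break                        # blank line ends the environment
            key, _, value = line.partition(": ")
            env[key] = value
        return env

    def command(cmd: str) -> str:
        sys.stdout.write(cmd + "\n")
        sys.stdout.flush()
        return sys.stdin.readline().strip()  # e.g. "200 result=0"

    env = agi_env()
    command("ANSWER")
    command('STREAM FILE demo-congrats ""')
    command("HANGUP")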
Email meets issue-tracking: a prototype implementation
- Authors: Kwinana, Zukhanye N
- Date: 2006 , 2013-06-11
- Subjects: Microsoft Visual studio , Electronic mail systems , Computer networks , eXtreme programming , Computer software -- Development
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4614 , http://hdl.handle.net/10962/d1005644 , Microsoft Visual studio , Electronic mail systems , Computer networks , eXtreme programming , Computer software -- Development
- Description: The use of electronic mail (email) has evolved from sending simple messages to task delegation and management. Most mail clients, however, have not kept up with this evolution and as a result have limited task management features. On the other hand, while issue-tracking systems offer useful task management functionality, they are not as widespread as email and also have a few drawbacks. This thesis reports on the exploration of the integration of the ubiquitous nature of email with the task management features of issue-tracking systems. We explore this using simple ad-hoc as well as semi-automated tasks. With these two working together, tasks can be delegated from email clients without needing to switch between the two environments. It brings some of the benefits of issue-tracking systems closer to our email users. The system is developed using Microsoft Visual Studio .NET, with the code written in C#. The eXtreme Programming (XP) methodology was used during the development of the proof-of-concept prototype that demonstrates the integration of the two environments, since we were faced at first with vague requirements bound to change as we better understood the problem domain. XP allowed us to skip an extended and comprehensive initial design process and to develop the system incrementally, making refinements and extensions as we encountered the need for them. This alleviated the need to make upfront decisions based on minimal knowledge of what to expect during development. This thesis describes the implementation of the prototype and the decisions made with each step taken towards developing an email-based issue-tracking system. With the two environments working together, we can now easily track issues from our email clients without needing to switch to another system. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2006
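The core of the integration can be sketched language-neutrally: poll a mailbox and turn each unseen message into an issue record. The thesis's prototype is written in C# against Visual Studio .NET; the Python below is a simplified stand-in, with the server, credentials and issue store all invented for illustration.

    # Poll a mailbox over IMAP and turn each unseen message into a minimal
    # issue record. Server details and the record shape are illustrative.
    import email
    import imaplib

    def fetch_new_issues(host: str, user: str, password: str) -> list:
        issues = []
        with imaplib.IMAP4_SSL(host) as imap:
            imap.login(user, password)
            imap.select("INBOX")
            _, data = imap.search(None, "UNSEEN")
            for num in data[0].split():
                _, msg_data = imap.fetch(num, "(RFC822)")
                msg = email.message_from_bytes(msg_data[0][1])
                issues.append({
                    "reporter": msg["From"],
                    "title": msg["Subject"],
                    "status": "open",
                })
        return issues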