An investigation of issues of privacy, anonymity and multi-factor authentication in an open environment
- Authors: Miles, Shaun Graeme
- Date: 2012-06-20
- Subjects: Electronic data processing departments -- Security measures , Electronic data processing departments , Privacy, Right of , Computer security , Data protection , Computers -- Access control
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4656 , http://hdl.handle.net/10962/d1006653 , Electronic data processing departments -- Security measures , Electronic data processing departments , Privacy, Right of , Computer security , Data protection , Computers -- Access control
- Description: This thesis investigates issues concerning the broad area of Identity and Access Management, with a focus on open environments. Through literature research the issues of privacy, anonymity and access control are identified. The issue of privacy is an inherent problem due to the nature of the digital network environment. Information can be duplicated and modified regardless of the wishes and intentions of the owner of that information unless proper measures are taken to secure the environment. Once information is published or divulged on the network, there is very little way of controlling its subsequent usage. To address this issue, a model for privacy is presented that follows the user-centric paradigm of meta-identity. The lack of anonymity, where security measures can be thwarted through observation of the environment, is a concern for users and systems. By observing the communication channel and monitoring the interactions between users and systems over a long enough period of time, an attacker can infer knowledge about those users and systems. This knowledge is used to build an identity profile of potential victims for use in subsequent attacks. To address the problem, mechanisms for providing an acceptable level of anonymity while maintaining adequate accountability (from a legal standpoint) are explored. In terms of access control, the inherent weakness of single-factor authentication mechanisms is discussed. The typical mechanism is the username and password pair, which provides a single point of failure. By increasing the number of factors used in authentication, the amount of work required to compromise the system increases non-linearly. Within an open network, several aspects hinder wide-scale adoption and use of multi-factor authentication schemes, such as token management and the impact on usability. A framework is therefore developed from a utopian point of view, with the aim of being applicable to many situations rather than a single specific domain. The framework incorporates multi-factor authentication over multiple paths using mobile phones and GSM networks, and explores the usefulness of such an approach (a minimal illustrative sketch follows this record). The models are in turn analysed, providing a discussion of the assumptions made and the problems faced by each model.
- Full Text:
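The abstract above describes multi-factor authentication delivered over multiple paths: a conventional password on the primary channel, plus a one-time code sent out of band over a GSM/SMS channel. The sketch below is a minimal, hypothetical illustration of that general idea only; the user store, the `send_sms` stub, and the code format are assumptions for the example and not the framework from the thesis.

```python
import hashlib
import hmac
import secrets

# Hypothetical in-memory user store: username -> salted password hash.
USERS = {"alice": hashlib.sha256(b"salt" + b"correct horse").hexdigest()}
PENDING_CODES = {}  # username -> one-time code awaiting confirmation


def send_sms(phone_number: str, message: str) -> None:
    """Stand-in for delivery over the second path (a GSM/SMS gateway)."""
    print(f"SMS to {phone_number}: {message}")


def first_factor(username: str, password: str) -> bool:
    """Factor 1: something the user knows, checked over the primary channel."""
    expected = USERS.get(username)
    supplied = hashlib.sha256(b"salt" + password.encode()).hexdigest()
    return expected is not None and hmac.compare_digest(expected, supplied)


def start_second_factor(username: str, phone_number: str) -> None:
    """Factor 2: something the user has, proven via the out-of-band path."""
    code = f"{secrets.randbelow(10**6):06d}"
    PENDING_CODES[username] = code
    send_sms(phone_number, f"Your login code is {code}")


def second_factor(username: str, code: str) -> bool:
    expected = PENDING_CODES.pop(username, None)
    return expected is not None and hmac.compare_digest(expected, code)


if __name__ == "__main__":
    if first_factor("alice", "correct horse"):
        start_second_factor("alice", "+27000000000")
        # In practice the user reads the code from their phone; simulated here:
        print("authenticated:", second_factor("alice", PENDING_CODES["alice"]))
```

The point of the second path is that an attacker must compromise both the password and the phone/GSM channel, which reflects the non-linear increase in attack effort the abstract refers to.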
An investigation into the design and implementation of an internet-scale network simulator
- Authors: Richter, John Peter Frank
- Date: 2009
- Subjects: Computer simulation , Computer network resources , Computer networks , Internet
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4597 , http://hdl.handle.net/10962/d1004840 , Computer simulation , Computer network resources , Computer networks , Internet
- Description: Simulation is a complex task with many research applications - chiefly as a research tool to test and evaluate hypothetical scenarios. Though many simulations execute similar operations and utilise similar data, there are few simulation frameworks or toolkits that allow researchers to rapidly develop their concepts. Those that are available to researchers are limited in scope, or use old technology that is no longer useful to modern researchers. As a result, many researchers build their own simulations without a framework, wasting time and resources on a system that could already cater for the majority of their simulation's requirements. In this work, a system is proposed for the creation of a scalable, dynamic-resolution network simulation framework that provides scalable scope for researchers, using modern technologies and languages. This framework should allow researchers to rapidly develop a broad range of semantically rich simulations, without the need for super- or grid-computers or clusters. Design and implementation are discussed, and alternative network simulators are compared to the proposed framework. A series of simulations, focusing on malware, is run on an implementation of this framework, and the results are compared to expectations for the outcomes of those simulations (a minimal illustrative sketch follows this record). In conclusion, a critical review of the simulator is made, considering any extensions or shortcomings that need to be addressed.
- Full Text:
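The simulations described above focus on malware spreading across a simulated network. As a rough illustration only (not the framework from the thesis), the sketch below steps a simple infection model over a randomly generated graph; the node count, link count, and infection probability are arbitrary assumptions.

```python
import random


def make_network(n_nodes: int, n_links: int) -> dict[int, set[int]]:
    """Build an undirected random graph as an adjacency map."""
    adj = {node: set() for node in range(n_nodes)}
    while sum(len(v) for v in adj.values()) // 2 < n_links:
        a, b = random.sample(range(n_nodes), 2)
        adj[a].add(b)
        adj[b].add(a)
    return adj


def simulate(adj, p_infect=0.2, steps=20, seed_node=0):
    """Each step, every infected node tries to infect each of its neighbours."""
    infected = {seed_node}
    history = [len(infected)]
    for _ in range(steps):
        newly = {nbr for node in infected for nbr in adj[node]
                 if nbr not in infected and random.random() < p_infect}
        infected |= newly
        history.append(len(infected))
    return history


if __name__ == "__main__":
    net = make_network(n_nodes=200, n_links=400)
    print(simulate(net))  # infected-node count after each step
```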
Using semantic knowledge to improve compression on log files
- Authors: Otten, Frederick John
- Date: 2009 , 2008-11-19
- Subjects: Computer networks , Data compression (Computer science) , Semantics--Data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4650 , http://hdl.handle.net/10962/d1006619 , Computer networks , Data compression (Computer science) , Semantics--Data processing
- Description: With the move towards global and multinational companies, information technology infrastructure requirements are increasing. As the size of these computer networks increases, it becomes more and more difficult to monitor, control, and secure them. Networks consist of a number of diverse devices, sensors, and gateways which are often spread over large geographical areas. Each of these devices produces log files which need to be analysed and monitored to provide network security and satisfy regulations. Data compression programs such as gzip and bzip2 are commonly used to reduce the quantity of data for archival purposes after the log files have been rotated. However, many other compression programs exist, each with their own advantages and disadvantages. These programs each use a different amount of memory and take different compression and decompression times to achieve different compression ratios. System log files also contain redundancy which is not necessarily exploited by standard compression programs. Log messages usually use a similar format with a defined syntax: not all ASCII characters are used, and the messages contain certain "phrases" which are often repeated. This thesis investigates the use of compression as a means of data reduction and how the use of semantic knowledge can improve data compression (also applying the results to different scenarios that can occur in a distributed computing environment). It presents the results of a series of tests performed on different log files. It also examines the semantic knowledge which exists in maillog files and how it can be exploited to improve the compression results. The results from a series of text preprocessors which exploit this knowledge are presented and evaluated. These preprocessors include one which replaces the timestamps and IP addresses with their binary equivalents and one which replaces words from a dictionary with unused ASCII characters (a minimal illustrative sketch follows this record). In this thesis, data compression is shown to be an effective method of data reduction, producing up to 98 percent reduction in file size on a corpus of log files. The use of preprocessors which exploit semantic knowledge results in up to 56 percent improvement in overall compression time and up to 32 percent reduction in compressed size.
- Full Text:
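One of the preprocessors described above replaces timestamps and IP addresses with their binary equivalents before a general-purpose compressor is applied. The sketch below illustrates that idea for IPv4 addresses only; the marker byte, the regular expression, and the gzip back end are assumptions for the example rather than the encoding used in the thesis.

```python
import gzip
import re
import socket

IP_RE = re.compile(rb"(?:\d{1,3}\.){3}\d{1,3}")
IP_MARKER = b"\x01"  # assumed not to occur in the ASCII log text


def _pack_ip(m: re.Match) -> bytes:
    """Replace a dotted-quad IP with a 1-byte marker plus its 4-byte binary form."""
    try:
        return IP_MARKER + socket.inet_aton(m.group().decode())
    except OSError:  # matched text such as "999.1.2.3" is not a valid address
        return m.group()


def preprocess_line(line: bytes) -> bytes:
    return IP_RE.sub(_pack_ip, line)


def preprocess_and_compress(in_path: str, out_path: str) -> None:
    """Rewrite IP addresses in binary form, then gzip the result."""
    with open(in_path, "rb") as src, gzip.open(out_path, "wb") as dst:
        for line in src:
            dst.write(preprocess_line(line))


# Usage (hypothetical file names):
# preprocess_and_compress("maillog", "maillog.pre.gz")
```

A dotted-quad address occupies between 7 and 15 ASCII bytes, while the marker plus binary form occupies 5, and the resulting byte stream may also compress better downstream.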