An analysis of the use of DNS for malicious payload distribution
- Authors: Dube, Ishmael
- Date: 2019
- Subjects: Internet domain names , Computer networks -- Security measures , Computer security , Computer network protocols , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/97531 , vital:31447
- Description: The Domain Name System (DNS) protocol is a fundamental part of Internet activities that can be abused by cybercriminals to conduct malicious activities. Previous research has shown that cybercriminals use different methods, including the DNS protocol, to distribute malicious content, remain hidden and avoid detection by the various technologies that are put in place to detect anomalies. This allows botnets and certain malware families to establish covert communication channels that can be used to send or receive data and also to distribute malicious payloads using DNS queries and responses. By embedding certain strings in DNS packets, cybercriminals use the DNS to breach highly protected networks, distribute malicious content, and exfiltrate sensitive information without being detected by the security controls put in place. This research broadens the field and fills an existing gap by extending the analysis of the DNS as a payload distribution channel to the detection of domains that are used to distribute different malicious payloads. The research analysed the use of the DNS in detecting domains and channels that are used for distributing malicious payloads. Passive DNS data, which replicates the DNS queries seen on name servers, was evaluated and analysed in order to detect anomalies in DNS queries and, through them, malicious payloads. The research characterises the malicious payload distribution channels by analysing passive DNS traffic and modelling the DNS query and response patterns. The research found that it is possible to detect malicious payload distribution channels through the analysis of DNS TXT resource records (an illustrative sketch of such a TXT-record check follows this record).
- Full Text:
- Date Issued: 2019
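The abstract above concludes that malicious payload distribution channels can be detected through analysis of DNS TXT resource records. The following Python snippet is a minimal illustrative sketch of that idea only, not the author's method: it flags TXT records that are unusually long or high in entropy, with thresholds and sample records chosen purely for demonstration.

```python
# Illustrative sketch only, not the thesis' detection method: flag DNS TXT
# records that look like encoded payloads using length and Shannon entropy.
# Thresholds and sample records are assumptions chosen for demonstration.
import math
import random
import string
from collections import Counter


def shannon_entropy(text: str) -> float:
    """Shannon entropy of the string, in bits per character."""
    if not text:
        return 0.0
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())


def is_suspicious_txt(record: str, max_len: int = 255, entropy_limit: float = 5.0) -> bool:
    """Flag TXT records that are unusually long or unusually high in entropy."""
    return len(record) > max_len or shannon_entropy(record) > entropy_limit


if __name__ == "__main__":
    benign = "v=spf1 include:_spf.example.com ~all"
    # Simulate a base64-like payload chunk carried in a TXT record.
    payload = "".join(random.choices(string.ascii_letters + string.digits + "+/", k=300))
    for txt in (benign, payload):
        print(f"suspicious={is_suspicious_txt(txt)} entropy={shannon_entropy(txt):.2f} len={len(txt)}")
```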
A framework for the development of a personal information security agent
- Authors: Stieger, Ewald Andreas
- Date: 2011
- Subjects: Computer networks -- Security measures , Information storage and retrieval systems , Artificial intelligence
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9803 , http://hdl.handle.net/10948/d1012326 , Computer networks -- Security measures , Information storage and retrieval systems , Artificial intelligence
- Description: Nowadays information is everywhere. Organisations process, store and create information in unprecedented quantities to support their business processes. Similarly, people use, share and synthesise information to accomplish their daily tasks. Indeed, information and information technology are the core of business activities, and a part of daily life. Information has become a crucial resource in today's information age, and any corruption, destruction or leakage of information can have a serious negative impact on an organisation. Thus, information should be kept safe. This requires the successful implementation of information security, which ensures that information assets are only used, modified and accessed by authorised people. Information security faces many challenges, and organisations still have not successfully addressed them. One of the main challenges is the human element. Information security depends to a large extent on people and their ability to follow and apply sound security practices. Unfortunately, people are often not very security-conscious in their behaviour, and this is the cause of many security breaches. There are a variety of reasons for this, such as a lack of knowledge and a negative attitude to security. Many organisations are aware of this, and they attempt to remedy the situation by means of information security awareness programs. These programs aim to educate, train and increase the security awareness of individuals. However, information security awareness programs are not always successful. They are not a once-off remedy that can quickly cure information security. The programs need to be implemented effectively, and they require an ongoing effort. Unfortunately, this is where many organisations fail. Furthermore, changing individuals' security behaviour is difficult, due to the complexity of the factors that influence everyday behaviour. In view of the above, this research project proposes an alternative approach in the form of a personal information security agent. The goal of this agent is to influence individuals to adopt more secure behaviour. A variety of factors need to be considered in order to achieve this goal and to positively influence security behaviour. Consequently, this research establishes criteria and principles for such an agent, based on theory and practice. From a theoretical point of view, a variety of factors that influence human behaviour, such as self-efficacy and normative beliefs, were investigated. Furthermore, the field of persuasive technology has provided strategies that can be used by technology to influence individuals. On the practical side, a prototype of a personal information security agent was created and evaluated through a technical software review process. The evaluation of the prototype showed that the theoretical criteria have merit, but that their effectiveness is largely dependent on how they are implemented. The criteria were thus revised, based on the practical findings. The findings also suggest that a personal information security agent, based on the criteria, may be able to positively influence individuals to be more secure in their behaviour. The insights gained by the research are presented in the form of a framework that makes both theoretical and practical recommendations for developing a personal information security agent. One may, consequently, conclude that the purpose of this research is to provide a foundation for the development of a personal information security agent to positively influence computer users to be more security-conscious in their behaviour.
- Full Text:
- Date Issued: 2011
Evaluating the cyber security skills gap relating to penetration testing
- Authors: Beukes, Dirk Johannes
- Date: 2021
- Subjects: Computer networks -- Security measures , Computer networks -- Monitoring , Computer networks -- Management , Data protection , Information technology -- Security measures , Professionals -- Supply and demand , Electronic data personnel -- Supply and demand
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/171120 , vital:42021
- Description: Information Technology (IT) is growing rapidly and has become an integral part of daily life. It provides a vast range of services and opportunities and generates vast amounts of information, which could be abused or exploited. Due to this growth, thousands of new users are added to the grid, using computer systems in static and mobile environments; this fact alone creates endless volumes of data to be exploited and hardware devices to be abused by the wrong people. The growth in the IT environment adds challenges that may affect users in their personal, professional, and business lives. There are constant threats against corporate and private computer networks and computer systems. In the corporate environment, companies try to eliminate the threat by testing networks with penetration tests and by implementing cyber awareness programs to make employees more aware of the cyber threat. Penetration tests and vulnerability assessments are undervalued, seen as a formality, and not used to increase system security; if they were used regularly, computer systems would be more secure and attacks minimized. With the growth in technology, industries all over the globe have become fully dependent on information systems for their day-to-day business. As technology evolves and new technology becomes available, the risk involved in protecting against the dangers that come with it grows. For industry to protect itself against this growth in technology, personnel with a certain skill set are needed. This is where cyber security plays a very important role in the protection of information systems, ensuring the confidentiality, integrity and availability of the information system itself and the data on the system. Due to this drive to secure information systems, the need for cyber security professionals is on the rise as well. It is estimated that there is a shortage of one million cyber security professionals globally. What is the reason for this skills shortage? Will it be possible to close this skills gap? This study is about identifying the skills gap and possible ways to close it. Research was conducted on international cyber security standards, cyber security training at universities and international certifications, focusing specifically on penetration testing, together with an evaluation of what industry requires when recruiting new penetration testers; the study finishes with suggestions on how to fill possible gaps in the skills market, followed by a conclusion.
- Full Text:
- Date Issued: 2021
Categorising Network Telescope data using big data enrichment techniques
- Authors: Davis, Michael Reginald
- Date: 2019
- Subjects: Denial of service attacks , Big data , Computer networks -- Security measures
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92941 , vital:30766
- Description: Network Telescopes, Internet backbone sampling, IDS and other forms of network-sourced Threat Intelligence provide researchers with insight into the methods and intent of remote entities by capturing network traffic and analysing the resulting data. This analysis and determination of intent is made difficult by the large amounts of potentially malicious traffic, coupled with the limited amount of knowledge that can be attributed to the source of the incoming data, as the source is known only by its IP address. Due to the lack of commonly available tooling, many researchers start this analysis from the beginning and so repeat and re-iterate previous research as the bulk of their work. As a result, new insight into methods and approaches of analysis is gained at a high cost. Our research approaches this problem by using additional knowledge about the source IP address, such as open ports, reverse and forward DNS, BGP routing tables and more, to enhance the researcher's ability to understand the traffic source. The research is a BigData experiment in which large (hundreds of GB) datasets are merged with a two-month section of Network Telescope data using a set of Python scripts, and the results are written to a Google BigQuery database table (an illustrative sketch of this enrichment step follows this record). Analysis of the network data is greatly simplified, with questions about the nature of the source, such as its device class (home routing device or server), potential vulnerabilities (open telnet ports or databases) and location, becoming relatively easy to answer. Using this approach, researchers can focus on the questions that need answering and address them efficiently. This research could be taken further by using additional data sources such as Geo-location, WHOIS lookups, Threat Intelligence feeds and many others. Other potential areas of research include real-time categorisation of incoming packets, in order to better inform the configuration of alerting and reporting systems. In conclusion, categorising Network Telescope data in this way provides insight into the intent of the (apparent) originator and as such is a valuable tool for those seeking to understand the purpose and intent of arriving packets. In particular, the ability to remove packets categorised as non-malicious (e.g. those in the Research category) from the data eliminates a known source of 'noise' from the data. This allows the researcher to focus their efforts in a more productive manner.
- Full Text:
- Date Issued: 2019
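The abstract describes Python scripts that merge enrichment data with a two-month Network Telescope capture and write the results to a Google BigQuery table. The sketch below is an assumed simplification of that step, not the thesis' actual scripts: it attaches a best-effort reverse-DNS lookup to each record and streams the rows into a hypothetical BigQuery table; running it requires the google-cloud-bigquery package and configured GCP credentials.

```python
# Simplified, assumed sketch of the enrichment idea described in the abstract:
# attach reverse DNS to telescope source IPs and stream the rows to BigQuery.
# TABLE_ID and the record schema are hypothetical; requires google-cloud-bigquery
# and application-default GCP credentials.
import socket
from typing import Optional

from google.cloud import bigquery

TABLE_ID = "my-project.telescope.enriched_packets"  # hypothetical table


def reverse_dns(ip: str) -> Optional[str]:
    """Best-effort PTR lookup for a source IP; None if the lookup fails."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:
        return None


def enrich(packets: list) -> list:
    """Add a reverse_dns field to each raw telescope record."""
    return [dict(p, reverse_dns=reverse_dns(p["src_ip"])) for p in packets]


if __name__ == "__main__":
    raw = [{"src_ip": "8.8.8.8", "dst_port": 23, "ts": "2019-01-01T00:00:00Z"}]
    client = bigquery.Client()
    errors = client.insert_rows_json(TABLE_ID, enrich(raw))  # streaming insert
    print(errors or "rows inserted")
```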
Limiting vulnerability exposure through effective patch management: threat mitigation through vulnerability remediation
- Authors: White, Dominic Stjohn Dolin
- Date: 2007 , 2007-02-08
- Subjects: Computer networks -- Security measures , Computer viruses , Computer security
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4629 , http://hdl.handle.net/10962/d1006510 , Computer networks -- Security measures , Computer viruses , Computer security
- Description: This document aims to provide a complete discussion of vulnerability and patch management. The first chapters look at the trends relating to vulnerabilities, exploits, attacks and patches. These trends describe the drivers of patch and vulnerability management and situate the discussion in the current security climate. The following chapters then aim to present both policy and technical solutions to the problem. The policies described lay out a comprehensive set of steps that can be followed by any organisation to implement its own patch management policy, including practical advice on integration with other policies, managing risk, identifying vulnerabilities, strategies for reducing downtime and generating metrics to measure progress (a small illustrative metric calculation follows this record). Having covered the steps that can be taken by users, a strategy describing how best a vendor should implement a related patch release policy is provided. An argument is made that current monthly patch release schedules are inadequate to allow users to most effectively and timeously mitigate vulnerabilities. The final chapters discuss the technical aspects of automating parts of the policies described. In particular, the concept of 'defense in depth' is used to discuss additional strategies for 'buying time' during the patch process. The document then concludes that, in the face of increasing malicious activity and more complex patching, solid frameworks such as those provided in this document are required to ensure an organisation can fully manage the patching process. However, more research is required to fully understand vulnerabilities and exploits. In particular, more attention must be paid to threats, as little work has been done to fully understand threat-agent capabilities and activities on a day-to-day basis.
- Full Text:
- Date Issued: 2007
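The abstract mentions generating metrics to measure progress in the patching process. As a small hedged example, not taken from the thesis, the snippet below computes one such metric, mean time to patch, from hypothetical deployment records.

```python
# Hedged example, not from the thesis: computing mean time-to-patch (days from
# vendor release to deployment) from hypothetical patch deployment records.
from datetime import date
from statistics import mean

deployments = [  # hypothetical data: (patch id, released, deployed)
    ("PATCH-001", date(2007, 1, 9), date(2007, 1, 15)),
    ("PATCH-002", date(2007, 1, 9), date(2007, 1, 30)),
    ("PATCH-016", date(2007, 2, 13), date(2007, 2, 20)),
]


def mean_time_to_patch(records) -> float:
    """Average number of days between patch release and deployment."""
    return mean((deployed - released).days for _, released, deployed in records)


if __name__ == "__main__":
    print(f"mean time to patch: {mean_time_to_patch(deployments):.1f} days")
```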
A framework for malicious host fingerprinting using distributed network sensors
- Authors: Hunter, Samuel Oswald
- Date: 2018
- Subjects: Computer networks -- Security measures , Malware (Computer software) , Multisensor data fusion , Distributed Sensor Networks , Automated Reconnaissance Framework , Latency Based Multilateration
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/60653 , vital:27811
- Description: Numerous software agents exist and are responsible for the increasing volumes of malicious traffic observed on the Internet today. From a technical perspective, the existing techniques for monitoring malicious agents and traffic were not developed to allow for the interrogation of the source of malicious traffic. This interrogation, or reconnaissance, would be considered active analysis, as opposed to existing, mostly passive, analysis. Unlike passive analysis, the active techniques are time-sensitive and their results become increasingly inaccurate as the time delta between observation and interrogation increases. In addition to this, some studies have shown that the geographic separation of hosts on the Internet has resulted in pockets of different malicious agents and traffic targeting victims. As such, it would be important to perform any kind of data collection over various sources and in distributed IP address space. The data gathering and exposure capabilities of sensors such as honeypots and network telescopes were extended through the development of near-realtime Distributed Sensor Network modules that allowed for the near-realtime analysis of malicious traffic from distributed, heterogeneous monitoring sensors. In order to utilise the data exposed by the near-realtime Distributed Sensor Network modules, an Automated Reconnaissance Framework was created; this framework was tasked with active and passive information collection and analysis of data in near-realtime, and was designed from an adapted Multi Sensor Data Fusion model. The hypothesis was made that if sufficiently different characteristics of a host could be identified, then, combined, they could act as a unique fingerprint for that host, potentially allowing for the re-identification of that host even if its IP address had changed (an illustrative sketch of this idea follows this record). To this end, the concept of Latency Based Multilateration was introduced, acting as an additional metric for remote host fingerprinting. The vast amount of information gathered by the AR-Framework required the development of visualisation tools which could illustrate this data in near-realtime and also provide various degrees of interaction to accommodate human interpretation of such data. Ultimately, the data collected through the application of the near-realtime Distributed Sensor Network and the AR-Framework provided a unique perspective of the malicious host demographic, allowing new correlations to be drawn between attributes such as common open ports and operating systems, location, and the inferred intent of these malicious hosts. The results expand our current understanding of malicious hosts on the Internet and enable further research in the area.
- Full Text:
- Date Issued: 2018
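The abstract hypothesises that sufficiently different host characteristics, when combined, can act as a fingerprint that survives an IP address change, with Latency Based Multilateration as an additional metric. The sketch below illustrates that idea only; the chosen attributes, latency bucketing and SHA-256 digest are assumptions, not the AR-Framework's implementation.

```python
# Minimal sketch of combining host characteristics into a fingerprint, as the
# abstract describes; the chosen attributes, latency bucketing and hashing are
# illustrative assumptions, not the AR-Framework's actual implementation.
import hashlib


def fingerprint(open_ports, os_guess: str, rtts_ms) -> str:
    """Combine open ports, an OS guess and bucketed latency measurements
    (a stand-in for Latency Based Multilateration) into a stable digest."""
    latency_buckets = sorted(round(rtt / 10) * 10 for rtt in rtts_ms)
    parts = [
        "ports=" + ",".join(str(p) for p in sorted(open_ports)),
        "os=" + os_guess.lower(),
        "lat=" + ",".join(str(b) for b in latency_buckets),
    ]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()


if __name__ == "__main__":
    # The same host observed under two different IP addresses yields the same
    # fingerprint here because its measured characteristics fall in the same buckets.
    a = fingerprint({23, 80, 443}, "Linux", [122.4, 87.1, 140.9])
    b = fingerprint({80, 443, 23}, "linux", [121.0, 88.3, 139.2])
    print(a == b, a[:16])
```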
Digital forensic model for computer networks
- Authors: Sanyamahwe, Tendai
- Date: 2011
- Subjects: Computer crimes -- Investigation , Evidence, Criminal , Computer networks -- Security measures , Electronic evidence , Forensic sciences , Internet -- Security measures
- Language: English
- Type: Thesis , Masters , MCom (Information Systems)
- Identifier: vital:11127 , http://hdl.handle.net/10353/d1000968 , Computer crimes -- Investigation , Evidence, Criminal , Computer networks -- Security measures , Electronic evidence , Forensic sciences , Internet -- Security measures
- Description: The Internet has become important since information is now stored in digital form and transported both within and between organisations in large amounts through computer networks. Nevertheless, there are individuals or groups of people who utilise the Internet to harm other businesses, because they can remain relatively anonymous. To prosecute such criminals, forensic practitioners have to follow a well-defined procedure so that the responsible cyber-criminals can be convicted in a court of law. Log files provide significant digital evidence in computer networks when tracing cyber-criminals (an illustrative log-parsing sketch follows this record). Network log mining is an evolution of typical digital forensics that utilises evidence from network devices such as firewalls, switches and routers. Network log mining is a process supported by prevailing South African laws such as the Computer Evidence Act, 57 of 1983; the Electronic Communications and Transactions (ECT) Act, 25 of 2002; and the Electronic Communications Act, 36 of 2005. Nevertheless, international laws and regulations supporting network log mining include the Sarbanes-Oxley Act and the Foreign Corrupt Practices Act (FCPA) of the USA, and the Bribery Act of the UK. A digital forensic model for computer networks focusing on network log mining has been developed based on the literature reviewed and critical thought. The development of the model followed the Design Science methodology. However, this research project argues that there are some important aspects which are not fully addressed by the prevailing South African legislation supporting digital forensic investigations. With that in mind, this research project proposes a set of Forensic Investigation Precautions. These precautions were developed as part of the proposed model. The Diffusion of Innovations (DOI) Theory is the framework underpinning the development of the model and how it can be assimilated into the community. The model was sent to IT experts for validation, and this provided the qualitative element and the primary data of this research project. From these experts, the study found that the proposed model is unique and comprehensive and adds new knowledge to the field of Information Technology. A paper was also written based on this research project.
- Full Text:
- Date Issued: 2011
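The abstract identifies log files from devices such as firewalls, switches and routers as the digital evidence used in network log mining. The snippet below is an illustrative sketch, assuming an iptables-style syslog line, of how such a line might be parsed into structured evidence fields; it is not part of the proposed forensic model.

```python
# Illustrative sketch, not part of the proposed forensic model: parsing an
# iptables-style firewall log line into structured evidence fields.
# The sample line and field set are assumptions for demonstration.
import re

LOG_PATTERN = re.compile(
    r"^(?P<timestamp>\w{3}\s+\d+\s[\d:]+)\s+(?P<host>\S+).*"
    r"SRC=(?P<src>\S+)\s+DST=(?P<dst>\S+).*PROTO=(?P<proto>\S+).*DPT=(?P<dport>\d+)"
)


def parse_firewall_line(line: str):
    """Return a dict of evidence fields, or None if the line does not match."""
    match = LOG_PATTERN.search(line)
    return match.groupdict() if match else None


if __name__ == "__main__":
    sample = ("Jan 12 10:15:01 gateway kernel: DROP IN=eth0 OUT= "
              "SRC=192.168.1.10 DST=10.0.0.5 PROTO=TCP SPT=51515 DPT=22")
    print(parse_firewall_line(sample))
```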
Targeted attack detection by means of free and open source solutions
- Authors: Bernardo, Louis F
- Date: 2019
- Subjects: Computer networks -- Security measures , Information technology -- Security measures , Computer security -- Management , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92269 , vital:30703
- Description: Compliance requirements are part of everyday business requirements in various areas, such as retail and medical services. As part of compliance, it may be required to have infrastructure in place to monitor the activities in the environment, to ensure that the relevant data and environment are sufficiently protected. At the core of such monitoring solutions one would find some type of data repository, or database, to store and ultimately correlate the captured events. Such solutions are commonly called Security Information and Event Management, or SIEM for short. Larger companies have been known to use commercial solutions such as IBM's QRadar, LogRhythm, or Splunk. However, these come at significant cost and aren't suitable for smaller businesses with limited budgets. These solutions require manual configuration of event correlation for the detection of activities that place the environment in danger, and this usually requires vendor implementation assistance, which also comes at a cost. Alternatively, there are open source solutions that provide the required functionality. This research demonstrates the building of an open source solution, with minimal to no cost for hardware or software, that still maintains the capability of detecting targeted attacks. The solution presented in this research includes Wazuh, which is a combination of OSSEC and the ELK stack, integrated with a Network Intrusion Detection System (NIDS). The success of the integration is determined by measuring positive attack detection for each of the different configuration options (an illustrative alert-counting sketch follows this record). To perform the testing, a deliberately vulnerable platform named Metasploitable is used as the victim host; its vulnerabilities were created specifically to serve as targets for Metasploit. The attacks were generated using the Metasploit Framework on a prebuilt Kali Linux host.
- Full Text:
- Date Issued: 2019
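The abstract measures the success of the Wazuh/NIDS integration by counting positive detections of attacks generated from the Kali Linux host. The snippet below is a hedged illustration of that measurement: the alert file path and field names follow a common Wazuh layout but should be verified against the deployed version, and the attacker IP is a placeholder.

```python
# Hedged illustration: counting Wazuh alerts attributable to the attacking host.
# The alerts.json path and field names follow a common Wazuh layout but should
# be checked against the deployed version; the attacker IP is a placeholder.
import json
from pathlib import Path

ALERTS_FILE = Path("/var/ossec/logs/alerts/alerts.json")  # typical Wazuh location
ATTACKER_IP = "192.168.56.101"                             # placeholder Kali host
MIN_RULE_LEVEL = 7                                         # illustrative threshold


def count_detections(alerts_file: Path, attacker_ip: str, min_level: int) -> int:
    """Count alerts whose source IP matches the attacker and whose rule level
    meets the threshold; each JSON line in the file is one alert."""
    hits = 0
    with alerts_file.open() as fh:
        for line in fh:
            try:
                alert = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partial or malformed lines
            srcip = alert.get("data", {}).get("srcip") or alert.get("srcip")
            level = alert.get("rule", {}).get("level", 0)
            if srcip == attacker_ip and level >= min_level:
                hits += 1
    return hits


if __name__ == "__main__":
    print(f"positive detections: {count_detections(ALERTS_FILE, ATTACKER_IP, MIN_RULE_LEVEL)}")
```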
Towards an evaluation and protection strategy for critical infrastructure
- Authors: Gottschalk, Jason Howard
- Date: 2015
- Subjects: Computer crimes -- Prevention , Computer networks -- Security measures , Computer crimes -- Law and legislation -- South Africa , Public works -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4721 , http://hdl.handle.net/10962/d1018793
- Description: Critical Infrastructure is often overlooked from an Information Security perspective as being of high importance to protect, which may leave it at risk of cyber-related attacks with potentially dire consequences. Furthermore, what is considered Critical Infrastructure is often a complex discussion, with varying opinions across audiences. Traditional Critical Infrastructure includes power stations, water and sewage pump stations, gas pipelines, power grids and a new entrant, the “internet of things”. This list is not complete, and a constant challenge exists in identifying Critical Infrastructure and its interdependencies. The purpose of this research is to highlight the importance of protecting Critical Infrastructure and to propose a high-level framework that aids in the identification and securing of Critical Infrastructure. To achieve this, key case studies involving cyber crime and cyber warfare were identified and discussed, along with the attack vectors used and their impact on Critical Infrastructure (where applicable). Furthermore, industry-related material was researched to identify key controls that would aid in protecting Critical Infrastructure. Initiatives that countries are pursuing to aid in the protection of Critical Infrastructure were also identified and discussed. Research was conducted into the various standards, frameworks and methodologies available to aid in the identification, remediation and, ultimately, the protection of Critical Infrastructure. A key output of the research was the development of a hybrid approach to identifying Critical Infrastructure and its associated vulnerabilities, together with an approach for remediation with specific metrics (based on the research performed). The conclusion based on the research is that there is often a need to identify and protect Critical Infrastructure; however, this is usually initiated or driven by non-owners of Critical Infrastructure (governments, governing bodies, standards bodies and security consultants). Furthermore, where there are active initiatives by owners, the suggested approaches are often very high-level in nature, with little direct guidance available for very immature environments.
- Full Text:
- Date Issued: 2015
A model for information security management and regulatory compliance in the South African health sector
- Authors: Tuyikeze, Tite
- Date: 2005
- Subjects: Computer networks -- Security measures , Public health -- South Africa
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9740 , http://hdl.handle.net/10948/425 , Computer networks -- Security measures , Public health -- South Africa
- Description: Information Security is becoming a part of the core business processes in every organization. Companies are faced with contradictory requirements: to ensure open systems and accessible information while maintaining high protection standards. In addition, the contemporary management of Information Security requires a variety of approaches in different areas, ranging from technological to organizational issues and legislation. These approaches are often isolated, while Security Management requires an integrated approach. Information Technology promises many benefits to healthcare organizations. It helps to make accurate information more readily available to healthcare providers and workers, researchers and patients, and advanced computing and communication technology can improve the quality and lower the costs of healthcare. However, the prospect of storing health information in electronic form raises concerns about patient privacy and security. Healthcare organizations are required to establish a formal Information Security program, for example through the adoption of the ISO 17799 standard, to ensure an appropriate and consistent level of information security for computer-based patient records, both within individual healthcare organizations and throughout the entire healthcare delivery system. However, proper Information Security Management practices alone do not necessarily ensure regulatory compliance. South African healthcare organizations must comply with the South African National Health Act (SANHA) and the Electronic Communications and Transactions Act (ECTA). It is also necessary to consider compliance with the Health Insurance Portability and Accountability Act (HIPAA) to meet international healthcare industry standards. The main purpose of this project is to propose a compliance strategy which ensures full compliance with regulatory requirements and at the same time assures customers that international industry standards are being used. This is preceded by a comparative analysis of the requirements posed by the ISO 17799 standard and the HIPAA, SANHA and ECTA regulations.
- Full Text:
- Date Issued: 2005
A model to measure the maturity of smartphone security at software consultancies
- Authors: Allam, Sean
- Date: 2009
- Subjects: Computer networks -- Security measures , Capability maturity model (Computer software) , Smartphones , Wireless Internet , Mobile communication systems , Mobile computing
- Language: English
- Type: Thesis , Masters , MCom (Information Systems)
- Identifier: vital:11135 , http://hdl.handle.net/10353/281 , Computer networks -- Security measures , Capability maturity model (Computer software) , Smartphones , Wireless Internet , Mobile communication systems , Mobile computing
- Description: Smartphones are proliferating into the workplace at an ever-increasing rate, and the threats that they pose are increasing similarly. In an era of constant connectivity and availability, information is freed from the constraints of time and place. This research project delves into the risks introduced by smartphones, and through multiple case studies a maturity measurement model is formulated. The model is based on recommendations from two leading information security frameworks, the COBIT 4.1 framework and the ISO 27002 code of practice. Ultimately, smartphone-specific risks are integrated with key control recommendations to provide a set of key measurable security maturity components (an illustrative scoring sketch follows this record). The subjective opinions of case study respondents are considered a key component in achieving a solution. The solution addresses the concerns not only of policy makers, but also of the employees subjected to the security policies. Nurturing security awareness into organisational culture through reinforcement and employee acceptance is highlighted in this research project. Software consultancies can use this model to mitigate risks while harnessing the potential strategic advantages of mobile computing through smartphone devices. In addition, this research project identifies the critical components of a smartphone security solution. As a result, a model is provided for software consultancies, due to the intense reliance on information within these types of organisations. The model can be effectively applied to any information-intensive organisation.
- Full Text:
- Date Issued: 2009
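The maturity measurement model summarised in the record above rolls smartphone-specific risks and COBIT 4.1 / ISO27002 control recommendations up into a set of measurable maturity components. The thesis does not publish code, so the short Python sketch below is only a hypothetical illustration of how such component scores might be combined into a single COBIT-style maturity level; the component names, weights and scores are invented for the example.

    # Hypothetical roll-up of smartphone-security maturity components into one
    # COBIT-style maturity level (0 = non-existent ... 5 = optimised). Component
    # names, weights and scores below are invented; they are not from the thesis.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class MaturityComponent:
        name: str
        weight: float   # relative importance of the component
        score: float    # assessed maturity on the 0-5 scale

    def overall_maturity(components: List[MaturityComponent]) -> float:
        """Weighted average of component scores on the 0-5 maturity scale."""
        total_weight = sum(c.weight for c in components)
        return sum(c.weight * c.score for c in components) / total_weight

    if __name__ == "__main__":
        assessment = [
            MaturityComponent("Device encryption policy", 0.25, 3.0),
            MaturityComponent("Remote wipe capability",   0.25, 2.0),
            MaturityComponent("User security awareness",  0.30, 1.5),
            MaturityComponent("Patch and update process", 0.20, 2.5),
        ]
        print(f"Overall maturity: {overall_maturity(assessment):.2f} / 5")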
Data-centric security : towards a utopian model for protecting corporate data on mobile devices
- Authors: Mayisela, Simphiwe Hector
- Date: 2014
- Subjects: Computer security , Computer networks -- Security measures , Business enterprises -- Computer networks -- Security measures , Mobile computing -- Security measures , Mobile communication systems -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4688 , http://hdl.handle.net/10962/d1011094 , Computer security , Computer networks -- Security measures , Business enterprises -- Computer networks -- Security measures , Mobile computing -- Security measures , Mobile communication systems -- Security measures
- Description: Data-centric security is significant in understanding, assessing and mitigating the various risks and impacts of sharing information outside corporate boundaries. Information generally leaves corporate boundaries through mobile devices. Mobile devices continue to evolve as multi-functional tools for everyday life, surpassing their initial intended use. This added capability and increasingly extensive use of mobile devices does not come without a degree of risk - hence the need to guard and protect information as it exists beyond the corporate boundaries and throughout its lifecycle. Literature on existing models crafted to protect data, rather than the infrastructure in which the data resides, is reviewed. Technologies that organisations have implemented to adopt the data-centric model are studied. A utopian model that takes into account the shortcomings of existing technologies and the deficiencies of common theories is proposed. Two sets of qualitative studies are reported: the first is a preliminary online survey to assess the ubiquity of mobile devices and the extent of technology adoption towards implementation of the data-centric model; the second comprises a focus survey and expert interviews pertaining to technologies that organisations have implemented to adopt the data-centric model. The latter study revealed insufficient data at the time of writing for the results to be statistically significant; however, indicative trends supported the assertions documented in the literature review. The question that this research answers is whether or not current technology implementations designed to mitigate risks from mobile devices actually address business requirements. This research question, answered through the two sets of qualitative studies, revealed inconsistencies between the technology implementations and business requirements. The thesis concludes by proposing a realistic model, based on the outcome of the qualitative study, which bridges the gap between the technology implementations and business requirements. Future work which could be conducted in light of the findings and the comments from this research is also considered.
- Full Text:
- Date Issued: 2014
A social networking approach to security awareness in end-user cyber-driven financial transactions
- Authors: Maharaj, Rahul
- Date: 2018
- Subjects: Computer networks -- Security measures
- Language: English
- Type: Thesis , Masters , MIT
- Identifier: http://hdl.handle.net/10948/48824 , vital:41144
- Description: Cyberspace, including the Internet and associated technologies, has become critical to social users in their day-to-day lives. Social users have grown to become reliant on cyberspace and associated cyber services. As such, a culture of users dependent on cyberspace has formed. This cyberculture needs to ensure that its members can make use of cyberspace and associated cyber services in a safe and secure manner. This is particularly true for those social users involved in cyber-driven financial transactions. The aim of this research study is therefore to report on research undertaken to assist said users by providing them with an alternative educational approach to cyber security education, awareness and training.
- Full Text:
GPF : a framework for general packet classification on GPU co-processors
- Authors: Nottingham, Alastair
- Date: 2012
- Subjects: Graphics processing units , Coprocessors , Computer network protocols , Computer networks -- Security measures , NVIDIA Corporation
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4661 , http://hdl.handle.net/10962/d1006662 , Graphics processing units , Coprocessors , Computer network protocols , Computer networks -- Security measures , NVIDIA Corporation
- Description: This thesis explores the design and experimental implementation of GPF, a novel protocol-independent, multi-match packet classification framework. This framework is targeted and optimised for flexible, efficient execution on NVIDIA GPU platforms through the CUDA API, but should not be difficult to port to other platforms, such as OpenCL, in the future. GPF was conceived and developed in order to accelerate classification of large packet capture files, such as those collected by Network Telescopes. It uses a multiphase SIMD classification process which exploits both the parallelism of packet sets and the redundancy in filter programs, in order to classify packet captures against multiple filters at extremely high rates. The resultant framework - comprising classification, compilation and buffering components - efficiently leverages GPU resources to classify arbitrary protocols, and return multiple filter results for each packet. The classification functions described were verified and evaluated by testing an experimental prototype implementation against several filter programs, of varying complexity, on devices from three GPU platform generations. In addition to the significant speedup achieved in processing results, analysis indicates that the prototype classification functions perform predictably, and scale linearly with respect to both packet count and filter complexity. Furthermore, classification throughput (packets/s) remained essentially constant regardless of the underlying packet data, and thus the effective data rate when classifying a particular filter was heavily influenced by the average size of packets in the processed capture. For example: in the trivial case of classifying all IPv4 packets ranging in size from 70 bytes to 1KB, the observed data rate achieved by the GPU classification kernels ranged from 60Gbps to 900Gbps on a GTX 275, and from 220Gbps to 3.3Tbps on a GTX 480. In the less trivial case of identifying all ARP, TCP, UDP and ICMP packets for both IPv4 and IPv6 protocols, the effective data rates ranged from 15Gbps to 220Gbps (GTX 275), and from 50Gbps to 740Gbps (GTX 480), for 70B and 1KB packets respectively. (A simplified multi-match sketch follows this record.) , LaTeX with hyperref package
- Full Text:
- Date Issued: 2012
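GPF itself is a CUDA-based GPU framework and its kernels are not reproduced in the record above. As a rough, CPU-side illustration of the multi-match idea only - every packet is evaluated against every filter and the full result matrix is kept - the Python sketch below uses hypothetical, greatly simplified header-field filters; it is not the GPF filter language or its compiled programs.

    # CPU-side illustration of multi-match packet classification: each packet is
    # tested against every filter and all results are kept (a boolean matrix),
    # rather than stopping at the first match. The filter definitions are toy
    # simplifications; GPF evaluates compiled filter programs on the GPU via CUDA.
    from typing import Callable, Dict, List

    Packet = Dict[str, int]          # toy packet: pre-parsed header fields only
    Filter = Callable[[Packet], bool]

    FILTERS: Dict[str, Filter] = {
        "ipv4":     lambda p: p.get("ethertype") == 0x0800,
        "ipv6":     lambda p: p.get("ethertype") == 0x86DD,
        "tcp":      lambda p: p.get("proto") == 6,
        "udp":      lambda p: p.get("proto") == 17,
        "dns_port": lambda p: p.get("dst_port") == 53,
    }

    def classify(packets: List[Packet]) -> List[Dict[str, bool]]:
        """Return one row of filter results per packet (multi-match)."""
        return [{name: f(pkt) for name, f in FILTERS.items()} for pkt in packets]

    if __name__ == "__main__":
        capture = [
            {"ethertype": 0x0800, "proto": 17, "dst_port": 53},   # IPv4 UDP DNS
            {"ethertype": 0x86DD, "proto": 6,  "dst_port": 443},  # IPv6 TCP
        ]
        for row in classify(capture):
            print(row)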
Distributed authentication for resource control
- Authors: Burdis, Keith Robert
- Date: 2000
- Subjects: Computers -- Access control , Data protection , Computer networks -- Security measures , Electronic data processing departments -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4630 , http://hdl.handle.net/10962/d1006512 , Computers -- Access control , Data protection , Computer networks -- Security measures , Electronic data processing departments -- Security measures
- Description: This thesis examines distributed authentication in the process of controlling computing resources. We investigate user sign-on and two of the main authentication technologies that can be used to control a resource through authentication and to provide additional security services. The problems with the existing sign-on scenario are that users have too much credential information to manage and are prompted for this information too often. Single Sign-On (SSO) is a viable solution to this problem if physical procedures are introduced to minimise the risks associated with its use. The Generic Security Services API (GSS-API) provides security services in a manner independent of the environment in which these security services are used, encapsulating security functionality and insulating users from changes in security technology. The underlying security functionality is provided by GSS-API mechanisms. We developed the Secure Remote Password GSS-API Mechanism (SRPGM) to provide a mechanism that has low infrastructure requirements, is password-based and does not require the use of long-term asymmetric keys. We provide implementations of the Java GSS-API bindings and the LIPKEY and SRPGM GSS-API mechanisms. The Simple Authentication and Security Layer (SASL) provides security to connection-based Internet protocols. After finding deficiencies in existing SASL mechanisms we developed the Secure Remote Password SASL mechanism (SRP-SASL), which provides strong password-based authentication and countermeasures against known attacks, while still being simple and easy to implement. We provide implementations of the Java SASL binding and several SASL mechanisms, including SRP-SASL. (A simplified SRP sketch follows this record.)
- Full Text:
- Date Issued: 2000
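The SRPGM and SRP-SASL mechanisms described above are built on the Secure Remote Password (SRP) protocol; the thesis implementations are in Java and are not reproduced here. The Python sketch below is a toy, deliberately insecure SRP-6-style exchange (tiny group, simplified hashing) intended only to show the core idea that client and server derive the same session secret from a stored verifier without the password crossing the wire; real deployments use the groups and key-derivation steps defined in the SRP specifications.

    # Toy SRP-style exchange (SRP-6 variant, k = 3). Parameters are deliberately
    # small and NOT secure; real deployments use the large groups from RFC 5054
    # and the standard hashing/key-derivation steps. This only illustrates that
    # both sides derive the same secret S from a stored verifier v, without the
    # password ever being transmitted.
    import hashlib
    import secrets

    N = 2**127 - 1          # a prime (Mersenne), toy-sized for illustration only
    g = 3
    k = 3                   # SRP-6 multiplier (SRP-6a instead derives k = H(N, g))

    def H(*parts: bytes) -> int:
        return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big")

    def to_bytes(n: int) -> bytes:
        return n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")

    # --- enrolment: the server stores (salt, v), never the password ----------
    username, password = b"alice", b"correct horse"
    salt = secrets.token_bytes(16)
    x = H(salt, username, b":", password)     # simplified credential hash
    v = pow(g, x, N)                          # password verifier

    # --- one protocol run -----------------------------------------------------
    a = secrets.randbelow(N - 2) + 1
    A = pow(g, a, N)                          # client -> server

    b = secrets.randbelow(N - 2) + 1
    B = (k * v + pow(g, b, N)) % N            # server -> client

    u = H(to_bytes(A), to_bytes(B))           # scrambling parameter, both sides

    S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
    S_server = pow((A * pow(v, u, N)) % N, b, N)

    assert S_client == S_server
    print("shared secret:", hashlib.sha256(to_bytes(S_client)).hexdigest()[:16])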
Securing software development using developer access control
- Authors: Ongers, Grant
- Date: 2020
- Subjects: Computer software -- Development , Computers -- Access control , Computer security -- Software , Computer networks -- Security measures , Source code (Computer science) , Plug-ins (Computer programs) , Data encryption (Computer science) , Network Access Control , Data Loss Prevention , Google’s BeyondCorp , Confidentiality, Integrity and Availability (CIA) triad
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/149022 , vital:38796
- Description: This research is aimed at software development companies and highlights the unique information security concerns in the context of a non-malicious software developer’s work environment; furthermore, it explores an application-driven solution which focuses specifically on providing developer environments with access control for source code repositories. In order to achieve that, five goals were defined, as discussed in section 1.3. The application designed to provide the developer environment with access control to source code repositories was modelled on lessons taken from the principles of Network Access Control (NAC), Data Loss Prevention (DLP), and Google’s BeyondCorp (GBC) for zero-trust end-user computing. The intention of this research is to provide software developers with maximum access to source code without compromising Confidentiality, as per the Confidentiality, Integrity and Availability (CIA) triad. Employing data gleaned from examining the characteristics of DLP, NAC, and BeyondCorp, proof-of-concept code was developed to regulate access to the developer’s environment and source code. The system required sufficient flexibility to support the diversity of software development environments. In order to achieve this, a modular design was selected. The system comprised a client-side agent and a plug-in-ready server component. The client-side agent mounts and dismounts encrypted volumes containing source code. Furthermore, it provides the server with information about the client that is demanded by plug-ins. The server-side service provides encryption keys to facilitate the mounting of the volumes and, through plug-ins, asks questions of the client agent to determine whether access should be granted. The solution was then tested with integration and system testing. There were plans to have it used by development teams, who would then have been surveyed on their view of the proof of concept, but this proved impossible. The conclusion provides a basis by which organisations that develop software can better balance the two corners of the CIA triad most often in conflict: the Confidentiality of their source code against the Availability of the same to developers. (A simplified plug-in sketch follows this record.)
- Full Text:
- Date Issued: 2020
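The record above describes a client-side agent that mounts encrypted source-code volumes only after a plug-in-ready server has questioned the client and released a key. The thesis's proof-of-concept code is not reproduced here; the Python sketch below is a hypothetical rendering of that plug-in-gated pattern, with all class names, client attributes and checks invented for illustration.

    # Hypothetical sketch of the plug-in-gated key release described above: the
    # server consults every registered plug-in about the requesting client agent
    # and only releases the volume decryption key if all of them approve.
    # Plug-in names, client attributes and the key store are invented.
    from typing import Callable, Dict, List

    ClientInfo = Dict[str, str]              # attributes reported by the client agent
    Plugin = Callable[[ClientInfo], bool]    # returns True if access may be granted

    def on_corporate_network(client: ClientInfo) -> bool:
        return client.get("network", "").startswith("10.0.")

    def disk_encryption_enabled(client: ClientInfo) -> bool:
        return client.get("disk_encrypted") == "yes"

    class KeyServer:
        def __init__(self, plugins: List[Plugin], keys: Dict[str, bytes]) -> None:
            self.plugins = plugins
            self.keys = keys                 # repo name -> volume key (dummy values)

        def request_key(self, repo: str, client: ClientInfo) -> bytes:
            if not all(plugin(client) for plugin in self.plugins):
                raise PermissionError("an access-control plug-in denied access")
            return self.keys[repo]

    if __name__ == "__main__":
        server = KeyServer(
            plugins=[on_corporate_network, disk_encryption_enabled],
            keys={"billing-service": b"dummy-volume-key"},
        )
        agent_report = {"network": "10.0.4.17", "disk_encrypted": "yes"}
        print(server.request_key("billing-service", agent_report))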
A comparative study of CERBER, MAKTUB and LOCKY Ransomware using a Hybridised-Malware analysis
- Authors: Schmitt, Veronica
- Date: 2019
- Subjects: Microsoft Windows (Computer file) , Data protection , Computer crimes -- Prevention , Computer security , Computer networks -- Security measures , Computers -- Access control , Malware (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92313 , vital:30702
- Description: There has been a significant increase in the prevalence of Ransomware attacks in the preceding four years to date. This indicates that the battle has not yet been won in defending against this class of malware. This research proposes that by identifying the similarities within the operational framework of Ransomware strains, a better overall understanding of their operation and function can be achieved. This, in turn, will aid in a quicker response to future attacks. With the average Ransomware attack taking two hours to be identified, it is clear that there is not yet a good understanding of why these attacks are so successful. Research into Ransomware is limited by what is currently known on the topic. Due to the complexities and comprehensive nature of the research, the decision was taken to examine only three samples of Ransomware, each from a different family. The in-depth nature of the research and the time constraints associated with it did not allow the proof of concept of this framework to be tested on more than three families, but the exploratory work was promising and should be explored further in future research. The aim of the research is to follow the Hybrid-Malware analysis framework, which consists of both static and dynamic analysis phases, in addition to the digital forensic examination of the infected system. This allows for signature-based findings, along with behavioural and forensic findings, all in one. This information allows for a better understanding of how this malware is designed and how it infects and remains persistent on a system. The operating system chosen is Microsoft Windows 7, which is still utilised by a significant proportion of Windows users, especially in the corporate environment. The experiment process was designed to enable the researcher to collect information regarding the Ransomware and every aspect of its behaviour and communication on a target system. The results can be compared across the three strains to identify the commonalities. The initial hypothesis was that Ransomware variants are all much like an instant cake mix: they consist of specific building blocks which remain the same, with the flavouring being the unique feature. (A simplified static-triage sketch follows this record.)
- Full Text:
- Date Issued: 2019
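The Hybrid-Malware analysis framework referred to above combines static, dynamic and forensic phases. The thesis does not publish tooling, so the Python sketch below only illustrates the sort of first-pass static triage that the static phase implies - hashing a sample and extracting printable strings that can be searched for ransom-note keywords; the sample path and keyword list are placeholders, not artefacts from the thesis.

    # Illustrative first-pass static triage of a suspected ransomware sample:
    # compute identifying hashes and extract printable ASCII strings that can be
    # checked for ransom-note keywords or embedded URLs. The sample path and the
    # keyword list are placeholders.
    import hashlib
    import re
    import sys

    def file_hashes(path: str) -> dict:
        data = open(path, "rb").read()
        return {
            "md5": hashlib.md5(data).hexdigest(),
            "sha1": hashlib.sha1(data).hexdigest(),
            "sha256": hashlib.sha256(data).hexdigest(),
        }

    def printable_strings(path: str, min_len: int = 6) -> list:
        data = open(path, "rb").read()
        return [m.decode("ascii") for m in re.findall(rb"[ -~]{%d,}" % min_len, data)]

    if __name__ == "__main__":
        sample = sys.argv[1]                  # path to the sample under analysis
        print(file_hashes(sample))
        suspicious = [s for s in printable_strings(sample)
                      if any(k in s.lower() for k in ("bitcoin", "decrypt", ".onion"))]
        print("\n".join(suspicious[:20]))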
A framework for assuring conformance of cloud-based email at higher education institutions
- Authors: Willett, Melanie
- Date: 2013
- Subjects: Cloud computing -- Security measures , Computer networks -- Security measures , Web services , Education, Higher -- Technological innovations
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:9815 , http://hdl.handle.net/10948/d1018664
- Description: Cloud computing is a relatively immature computing paradigm that could significantly benefit users. Cloud computing solutions are often associated with potential benefits such as cost reduction, less administrative hassle, flexibility and scalability. For organisations to realize such potential benefits, cloud computing solutions need to be chosen, implemented, managed and governed in a way that is secure, compliant with internal and external requirements and indicative of due diligence. This can be a challenge, given the many concerns and risks commonly associated with cloud computing solutions. One cloud computing solution that is being widely adopted around the world is cloud-based email. Among the foremost adopters of this cloud computing solution are higher education institutions, which stand to benefit greatly from using such services. Cloud-based email can be provisioned to staff and students at these institutions for free. Additionally, cloud service providers (CSPs) are able to provide a better email service than some higher education institutions would be able to provide if they were required to do so in-house; CSPs often provide larger inboxes and many extra services with cloud-based email. Cloud-based email is, therefore, clearly an example of a cloud computing solution that has the potential to benefit organisations. There are, however, risks and challenges associated with the use of this cloud computing solution. Two of these challenges relate to ensuring conformance to internal and external requirements (legal, regulatory and contractual obligations) and to providing a mechanism for assuring that cloud-based email related activities are sound. The lack of structured guidelines for assuring the conformance of cloud-based email is putting this service at risk at higher education institutions in South Africa. This work addresses this problem by promoting a best practice based approach to assuring the conformance of cloud-based email at higher education institutions. To accomplish this, components of applicable standards and best practice guidelines for IT governance, IT assurance and IT conformance are used to construct a framework for assuring the conformance of cloud-based email. The framework is designed and verified using sound design science principles. The utility and value of the framework have been demonstrated at a higher education institution in South Africa. This framework can be used to assist higher education institutions to demonstrate due diligence in assuring that they conform to legal and best practice requirements for the management and governance of cloud-based email. This is a significant contribution in the relatively new field of cloud computing governance.
- Full Text:
- Date Issued: 2013
A cyber security awareness and education framework for South Africa
- Authors: Kortjan, Noloxolo
- Date: 2013
- Subjects: Computer networks -- Security measures , Computer crimes -- Prevention , Computer security
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9811 , http://hdl.handle.net/10948/d1014829
- Description: The Internet is becoming increasingly interwoven in the daily life of many individuals, organisations and nations. It has, to a large extent, had a positive effect on the way people communicate. It has also introduced new avenues for business and has offered nations an opportunity to govern online. Nevertheless, although cyberspace offers an endless list of services and opportunities, it is also accompanied by many risks. One of these risks is cybercrime. The Internet has given criminals a platform on which to grow and proliferate. As a result of the abstract nature of the Internet, it is easy for these criminals to go unpunished. Moreover, many who use the Internet are not aware of such threats; therefore they may themselves be at risk, together with businesses and governmental assets and infrastructure. In view of this, there is a need for cyber security awareness and education initiatives that will foster users who are well versed in the risks associated with the Internet. In this context, it is the role of the government to empower all levels of society by providing the necessary knowledge and expertise to act securely online. However, there is currently a definite lack in South Africa (SA) in this regard, as there are no government-led cyber security awareness and education initiatives. The primary research objective of this study, therefore, is to propose a cyber security awareness and education framework for SA that will assist in creating a cyber-secure culture in SA among all of its users of the Internet.
- Full Text:
- Date Issued: 2013
A holistic approach to network security in OGSA-based grid systems
- Authors: Loutsios, Demetrios
- Date: 2006
- Subjects: Computer networks -- Security measures
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9736 , http://hdl.handle.net/10948/550 , Computer networks -- Security measures
- Description: Grid computing technologies facilitate complex scientific collaborations between globally dispersed parties which make use of heterogeneous technologies and computing systems. However, in recent years the commercial sector has developed a growing interest in Grid technologies. Prominent Grid researchers have predicted that Grids will grow into the commercial mainstream, even though their origins lie in scientific research - much the same way as the Internet started as a vehicle for research collaboration between universities and government institutions and grew into a technology with large commercial applications. Grids facilitate complex trust relationships between globally dispersed business partners, research groups, and non-profit organizations. Almost any dispersed “virtual organization” willing to share computing resources can make use of Grid technologies. Grid computing facilitates the networking of shared services; the inter-connection of a potentially unlimited number of computing resources within a “Grid” is possible. Grid technologies leverage a range of open standards and technologies to provide interoperability between heterogeneous computing systems. Newer Grids build on key capabilities of Web-Service technologies to provide easy and dynamic publishing and discovery of Grid resources. Due to the inter-organisational nature of Grid systems, there is a need to provide adequate security to Grid users and to Grid resources. This research proposes a framework, using a specific brokered pattern, which addresses several common Grid security challenges, which include: providing secure and consistent cross-site Authentication and Authorization; single sign-on capabilities for Grid users; underlying platform and runtime security; and Grid network communications and messaging security. These Grid security challenges can be viewed as comprising two (proposed) logical layers of a Grid: a Common Grid Layer (higher-level Grid interactions) and a Local Resource Layer (lower-level technology security concerns). This research is concerned with providing a generic and holistic security framework to secure both layers. This research makes extensive use of STRIDE - an acronym for Microsoft's approach to categorising security threats - as part of a holistic Grid security framework. STRIDE and key Grid-related standards, such as the Open Grid Services Architecture (OGSA), the Web Services Resource Framework (WSRF), and the Globus Toolkit, are used to formulate the proposed framework.
- Full Text:
- Date Issued: 2006