Assessing program code through static structural similarity
- Authors: Naude, Kevin Alexander
- Date: 2007
- Subjects: Computer networks -- Security measures , Internet -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10478 , http://hdl.handle.net/10948/578 , Computer networks -- Security measures , Internet -- Security measures
- Description: Learning to write software requires much practice and frequent assessment. Consequently, the use of computers to assist in the assessment of computer programs has been important in supporting large classes at universities. The main approaches to the problem are dynamic analysis (testing student programs for expected output) and static analysis (direct analysis of the program code). The former is very sensitive to all kinds of errors in student programs, while the latter has traditionally only been used to assess quality, and not correctness. This research focuses on the application of static analysis, particularly structural similarity, to marking student programs. Traditional measures of similarity are limited in that they are usually only effective on tree structures, and so do not easily support the dependencies found in program code. Contemporary measures of structural similarity, such as similarity flooding, usually rely on an internal normalisation of scores. The effect is that the scores only have relative meaning and cannot be interpreted in isolation, i.e. they are not meaningful for assessment. The SimRank measure is shown to have the same problem, but not because of normalisation: its scores depend on all possible mappings between the children of the vertices being compared. The main contribution of this research is a novel graph similarity measure, the Weighted Assignment Similarity measure. It is related to SimRank, but derives propagation scores from only the locally optimal mapping between child vertices. The resulting similarity scores may be regarded as the percentage of mutual coverage between graphs. The measure is proven to converge for all directed acyclic graphs, and an efficient implementation is outlined for this case. Attributes on graph vertices and edges are often used to capture domain-specific information which is not structural in nature. It has been suggested that these should influence the similarity propagation, but no clear method for doing this has been reported. The second important contribution of this research is a general method for incorporating these local attribute similarities into the larger similarity propagation method. An example of an attribute in program graphs is the identifier name. The choice of identifiers in programs is arbitrary, as they are purely symbolic, so any comparison between programs must contend with the fact that they are unlikely to use the same set of identifiers. A mapping between the identifier sets is therefore required. The third contribution of this research is a method for applying the structural similarity measure in a two-step process to find an optimal identifier mapping. This approach is both novel and valuable as it cleverly reuses the similarity measure as an existing resource. In general, programming assignments allow a large variety of solutions. Assessing student programs through structural similarity is only feasible if the diversity in the solution space can be addressed. This study narrows program diversity through a set of semantics-preserving program transformations that convert programs into a normal form. The application of the Weighted Assignment Similarity measure to marking student programs is investigated, and strong correlations are found with the human marker. It is shown that the most accurate assessment requires that programs be compared not only with a set of good solutions, but with a mixed set of programs of varying levels of correctness. This research represents the first documented successful application of structural similarity to the marking of student programs. An illustrative code sketch of assignment-based similarity propagation follows this record.
- Full Text:
- Date Issued: 2007
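The abstract above describes deriving propagation scores from a locally optimal mapping between child vertices. The sketch below is a minimal, hypothetical Python illustration of that general idea for directed acyclic graphs, using a Hungarian-style assignment over child similarities; the attribute comparison, the weighting and all function names are assumptions for illustration and do not reproduce the thesis's actual Weighted Assignment Similarity formulation.

```python
# Hypothetical sketch of assignment-based similarity propagation on DAGs.
# Not the thesis's Weighted Assignment Similarity measure: the weighting and
# attribute comparison below are illustrative assumptions only.
from itertools import product

import numpy as np
from scipy.optimize import linear_sum_assignment


def topological_order(children):
    """Return the vertices of a DAG (child-list dict) in leaves-first order."""
    order, seen = [], set()

    def visit(v):
        if v in seen:
            return
        seen.add(v)
        for c in children[v]:
            visit(c)
        order.append(v)

    for v in children:
        visit(v)
    return order


def dag_similarity(children_a, labels_a, children_b, labels_b, w_local=0.5):
    """Pairwise vertex similarity for two DAGs via locally optimal child assignment."""
    sim = {}
    for a, b in product(topological_order(children_a), topological_order(children_b)):
        attr = 1.0 if labels_a[a] == labels_b[b] else 0.0     # local attribute similarity
        ca, cb = children_a[a], children_b[b]
        if not ca and not cb:
            coverage = 1.0                                    # two leaves fully cover each other
        elif not ca or not cb:
            coverage = 0.0                                    # nothing to map against
        else:
            scores = np.array([[sim[(x, y)] for y in cb] for x in ca])
            rows, cols = linear_sum_assignment(scores, maximize=True)  # best one-to-one mapping
            coverage = scores[rows, cols].sum() / max(len(ca), len(cb))
        sim[(a, b)] = w_local * attr + (1.0 - w_local) * coverage
    return sim
```

With two DAGs supplied as child-list and label dictionaries, the overall graph score could then be read off as sim[(root_a, root_b)], which stays in [0, 1] and loosely mirrors the "mutual coverage" interpretation described in the abstract.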
Towards an evaluation and protection strategy for critical infrastructure
- Authors: Gottschalk, Jason Howard
- Date: 2015
- Subjects: Computer crimes -- Prevention , Computer networks -- Security measures , Computer crimes -- Law and legislation -- South Africa , Public works -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4721 , http://hdl.handle.net/10962/d1018793
- Description: Critical Infrastructure is often overlooked from an Information Security perspective as something of high importance to protect, which may leave it at risk of cyber-related attacks with potentially dire consequences. Furthermore, what is considered Critical Infrastructure is often a complex discussion, with varying opinions across audiences. Traditional Critical Infrastructure includes power stations, water and sewage pump stations, gas pipelines and power grids, with the “internet of things” as a new entrant. This list is not complete, and a constant challenge exists in identifying Critical Infrastructure and its interdependencies. The purpose of this research is to highlight the importance of protecting Critical Infrastructure and to propose a high-level framework that aids in identifying and securing it. To achieve this, key case studies involving cybercrime and cyber warfare against Critical Infrastructure were identified and discussed, along with the relevant attack vectors and their impact (as applicable to Critical Infrastructure where possible). Furthermore, industry-related material was researched in order to identify key controls that would aid in protecting Critical Infrastructure. Initiatives that countries are pursuing to aid in the protection of Critical Infrastructure were also identified and discussed. Research was conducted into the various standards, frameworks and methodologies available to aid in the identification, remediation and, ultimately, the protection of Critical Infrastructure. A key output of the research was the development of a hybrid approach to identifying Critical Infrastructure and its associated vulnerabilities, together with an approach for remediation with specific metrics (based on the research performed). The conclusion of the research is that, while there is often a need to identify and protect Critical Infrastructure, this is usually initiated or driven by non-owners of Critical Infrastructure (governments, governing bodies, standards bodies and security consultants). Furthermore, where owners do pursue active initiatives, the suggested approaches are very often high level in nature, with little direct guidance available for very immature environments.
- Full Text:
- Date Issued: 2015
A model to measure the maturity of smartphone security at software consultancies
- Authors: Allam, Sean
- Date: 2009
- Subjects: Computer networks -- Security measures , Capability maturity model (Computer software) , Smartphones , Wireless Internet , Mobile communication systems , Mobile computing
- Language: English
- Type: Thesis , Masters , MCom (Information Systems)
- Identifier: vital:11135 , http://hdl.handle.net/10353/281 , Computer networks -- Security measures , Capability maturity model (Computer software) , Smartphones , Wireless Internet , Mobile communication systems , Mobile computing
- Description: Smartphones are proliferating into the workplace at an ever-increasing rate, and the threats that they pose are increasing along with them. In an era of constant connectivity and availability, information is freed from the constraints of time and place. This research project delves into the risks introduced by smartphones and, through multiple case studies, formulates a maturity measurement model. The model is based on recommendations from two leading information security frameworks: the COBIT 4.1 framework and the ISO 27002 code of practice. Ultimately, smartphone-specific risks are integrated with key control recommendations to provide a set of key measurable security maturity components. The subjective opinions of case study respondents are considered a key component in achieving a solution. The solution addresses the concerns not only of policy makers, but also of the employees subject to the security policies. Nurturing security awareness into organisational culture through reinforcement and employee acceptance is highlighted in this research project. Software consultancies can use this model to mitigate risks while harnessing the potential strategic advantages of mobile computing through smartphone devices. In addition, this research project identifies the critical components of a smartphone security solution. As a result, a model is provided for software consultancies due to the intense reliance on information within these types of organisations. The model can be effectively applied to any information-intensive organisation.
- Full Text:
- Date Issued: 2009
Towards a user centric model for identity and access management within the online environment
- Authors: Deas, Matthew Burns
- Date: 2008
- Subjects: Computers -- Access control , Computer networks -- Security measures
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9780 , http://hdl.handle.net/10948/775 , Computers -- Access control , Computer networks -- Security measures
- Description: Today, one is expected to remember multiple user names and passwords for the different domains one wants to access on the Internet. Identity management seeks to solve this problem by creating a digital identity that is exchangeable across organisational boundaries. Through the setup of collaboration agreements between multiple domains, users can easily switch across domains without being required to sign in again. However, use of this technology comes with the risk of user identity and personal information being compromised. Criminals make use of spoofed websites and social engineering techniques to gain illegal access to user information. Because of this, the need for users to be protected from online threats has increased. Two processes are required to protect the user's login information at the time of sign-on. Firstly, the user's information must be protected at the time of sign-on, and secondly, the user requires a simple method for identifying the website. This treatise looks at the process of identifying and verifying user information, and at how the user can verify the system at sign-in. Three models for identity management are analysed, namely Microsoft .NET Passport, Liberty Alliance Federated Identity for Single Sign-on, and the Mozilla TrustBar for system authentication.
- Full Text:
- Date Issued: 2008
Targeted attack detection by means of free and open source solutions
- Authors: Bernardo, Louis F
- Date: 2019
- Subjects: Computer networks -- Security measures , Information technology -- Security measures , Computer security -- Management , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92269 , vital:30703
- Description: Compliance requirements are part of everyday business requirements in various areas, such as retail and medical services. As part of compliance, it may be necessary to have infrastructure in place to monitor activities in the environment and ensure that the relevant data and environment are sufficiently protected. At the core of such monitoring solutions one would find some type of data repository, or database, to store and ultimately correlate the captured events. Such solutions are commonly called Security Information and Event Management, or SIEM for short. Larger companies have been known to use commercial solutions such as IBM's QRadar, LogRhythm, or Splunk. However, these come at significant cost and are not suitable for smaller businesses with limited budgets. These solutions require manual configuration of event correlation for the detection of activities that place the environment in danger, which usually requires vendor implementation assistance that also comes at a cost. Alternatively, there are open source solutions that provide the required functionality. This research demonstrates the building of an open source solution, at minimal to no cost for hardware or software, while still maintaining the capability of detecting targeted attacks. The solution presented in this research includes Wazuh, which is a combination of OSSEC and the ELK stack, integrated with a Network Intrusion Detection System (NIDS). The success of the integration is determined by measuring positive attack detection under each of the different configuration options. To perform the testing, a deliberately vulnerable platform named Metasploitable was used as a victim host; its vulnerabilities were created specifically to serve as targets for Metasploit. The attacks were generated by utilising the Metasploit Framework on a prebuilt Kali Linux host. An illustrative code sketch follows this record.
- Full Text:
- Date Issued: 2019
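The abstract describes measuring positive attack detection from the alerts gathered by the Wazuh/ELK stack after attacks are launched from Kali against Metasploitable. The snippet below is a minimal, hypothetical sketch of how detected alerts might be tallied per rule from Wazuh's JSON alert log for such a test run; the log path, field layout and severity threshold are assumptions, not details taken from the thesis.

```python
# Hypothetical sketch: tally Wazuh/OSSEC alerts per rule from the JSON alert
# log to gauge positive attack detection for a test run. The log path and the
# "rule" -> "id"/"level" field layout are assumptions, not taken from the thesis.
import json
from collections import Counter

ALERT_LOG = "/var/ossec/logs/alerts/alerts.json"  # assumed default Wazuh location
MIN_LEVEL = 7                                     # ignore low-severity noise

counts = Counter()
with open(ALERT_LOG, encoding="utf-8") as fh:
    for line in fh:                               # one JSON alert per line
        try:
            alert = json.loads(line)
        except json.JSONDecodeError:
            continue                              # skip partially written lines
        rule = alert.get("rule", {})
        if int(rule.get("level", 0)) >= MIN_LEVEL:
            counts[rule.get("id", "unknown")] += 1

for rule_id, n in counts.most_common(10):
    print(f"rule {rule_id}: {n} alerts")
```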
A framework to mitigate phishing threats
- Authors: Frauenstein, Edwin Donald
- Date: 2013
- Subjects: Computer networks -- Security measures , Mobile computing -- Security measures , Online social networks -- Security measures
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9832 , http://hdl.handle.net/10948/d1021208
- Description: We live today in the information age with users being able to access and share information freely by using both personal computers and their handheld devices. This, in turn, has been made possible by the Internet. However, this poses security risks as attempts are made to use this same environment in order to compromise the confidentiality, integrity and availability of information. Accordingly, there is an urgent need for users and organisations to protect their information resources from agents posing a security threat. Organisations typically spend large amounts of money as well as dedicating resources to improve their technological defences against general security threats. However, the agents posing these threats are adopting social engineering techniques in order to bypass the technical measures which organisations are putting in place. These social engineering techniques are often effective because they target human behaviour, something which the majority of researchers believe is a far easier alternative than hacking information systems. As such, phishing effectively makes use of a combination of social engineering techniques which involve crafty technical emails and website designs which gain the trust of their victims. Within an organisational context, there are a number of areas which phishers exploit. These areas include human factors, organisational aspects and technological controls. Ironically, these same areas serve simultaneously as security measures against phishing attacks. However, each of these three areas mentioned above are characterised by gaps which arise as a result of human involvement. As a result, the current approach to mitigating phishing threats comprises a single-layer defence model only. However, this study proposes a holistic model which integrates each of these three areas by strengthening the human element in each of these areas by means of a security awareness, training and education programme.
- Full Text:
- Date Issued: 2013
An analysis of the use of DNS for malicious payload distribution
- Authors: Dube, Ishmael
- Date: 2019
- Subjects: Internet domain names , Computer networks -- Security measures , Computer security , Computer network protocols , Data protection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/97531 , vital:31447
- Description: The Domain Name System (DNS) protocol is a fundamental part of Internet activity that can be abused by cybercriminals to conduct malicious activities. Previous research has shown that cybercriminals use different methods, including the DNS protocol, to distribute malicious content, remain hidden and avoid detection by the various technologies put in place to detect anomalies. This allows botnets and certain malware families to establish covert communication channels that can be used to send or receive data and also to distribute malicious payloads via DNS queries and responses. By embedding certain strings in DNS packets, cybercriminals use the DNS to breach highly protected networks, distribute malicious content, and exfiltrate sensitive information without being detected by the security controls put in place. This research broadens the field and fills an existing gap by extending the analysis of DNS as a payload distribution channel to the detection of domains that are used to distribute different malicious payloads. The research analysed the use of the DNS in detecting domains and channels used for distributing malicious payloads. Passive DNS data, which replicates the DNS queries seen on name servers, was evaluated and analysed in order to detect anomalies in DNS queries and thereby detect malicious payloads. The research characterises malicious payload distribution channels by analysing passive DNS traffic and modelling the DNS query and response patterns. The research found that it is possible to detect malicious payload distribution channels through the analysis of DNS TXT resource records. An illustrative code sketch follows this record.
- Full Text:
- Date Issued: 2019
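The abstract reports that malicious payload distribution channels can be detected through analysis of DNS TXT resource records. The sketch below is a minimal, hypothetical Python heuristic over passive DNS records that flags TXT payloads which are unusually long or high in entropy; the record layout, field names and thresholds are illustrative assumptions and not the detection method used in the thesis.

```python
# Hypothetical heuristic over passive-DNS TXT records: flag payloads that are
# unusually long or have high Shannon entropy (typical of encoded/encrypted
# data). Thresholds and the input record layout are illustrative assumptions.
import math
from collections import Counter


def shannon_entropy(data: str) -> float:
    """Bits of entropy per character of the TXT payload."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())


def flag_suspicious_txt(passive_dns_records, max_len=255, min_entropy=4.5):
    """Yield (domain, payload) pairs whose TXT data looks like smuggled content."""
    for rec in passive_dns_records:
        if rec.get("rrtype") != "TXT":
            continue
        payload = rec.get("rdata", "")
        if len(payload) > max_len or shannon_entropy(payload) > min_entropy:
            yield rec.get("rrname", "?"), payload


if __name__ == "__main__":
    sample = [
        {"rrname": "example.com", "rrtype": "TXT", "rdata": "v=spf1 include:_spf.example.com ~all"},
        {"rrname": "bad.example.net", "rrtype": "TXT", "rdata": "TVqQAAMAAAAEAAAA//8AALgAAAAA" * 12},
    ]
    for domain, payload in flag_suspicious_txt(sample):
        print(domain, len(payload), round(shannon_entropy(payload), 2))
```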
Evaluating the cyber security skills gap relating to penetration testing
- Authors: Beukes, Dirk Johannes
- Date: 2021
- Subjects: Computer networks -- Security measures , Computer networks -- Monitoring , Computer networks -- Management , Data protection , Information technology -- Security measures , Professionals -- Supply and demand , Electronic data personnel -- Supply and demand
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/171120 , vital:42021
- Description: Information Technology (IT) is growing rapidly and has become an integral part of daily life. It provides a boundless list of services and opportunities, generating boundless sources of information which could be abused or exploited. Due to this growth, thousands of new users are added to the grid using computer systems in both static and mobile environments; this fact alone creates endless volumes of data to be exploited and hardware devices to be abused by the wrong people. The growth in the IT environment adds challenges that may affect users in their personal, professional, and business lives. There are constant threats against corporate and private computer networks and computer systems. In the corporate environment, companies try to eliminate the threat by testing networks with penetration tests and by implementing cyber awareness programs to make employees more aware of the cyber threat. Penetration tests and vulnerability assessments are undervalued: they are seen as a formality and are not used to increase system security, yet if used regularly they would make computer systems more secure and minimise attacks. With the growth in technology, industries all over the globe have become fully dependent on information systems for their day-to-day business. As technology evolves and new technology becomes available, the risk posed by the dangers which come with it grows ever larger. For industry to protect itself against these dangers, personnel with a certain skill set are needed. This is where cyber security plays a very important role in the protection of information systems, ensuring the confidentiality, integrity and availability of the information system itself and the data on the system. Due to this drive to secure information systems, the need for cyber security professionals is on the rise as well. It is estimated that there is a shortage of one million cyber security professionals globally. What is the reason for this skills shortage? Will it be possible to close this gap? This study is about identifying the skills gap and possible ways to close it. In this study, research was conducted into international cyber security standards, cyber security training at universities and international certification (focusing specifically on penetration testing), and into what industry requires when recruiting new penetration testers; the study finishes with suggestions on how to fill possible gaps in the skills market, followed by a conclusion.
- Full Text:
- Date Issued: 2021
GPF : a framework for general packet classification on GPU co-processors
- Authors: Nottingham, Alastair
- Date: 2012
- Subjects: Graphics processing units , Coprocessors , Computer network protocols , Computer networks -- Security measures , NVIDIA Corporation
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4661 , http://hdl.handle.net/10962/d1006662 , Graphics processing units , Coprocessors , Computer network protocols , Computer networks -- Security measures , NVIDIA Corporation
- Description: This thesis explores the design and experimental implementation of GPF, a novel protocol-independent, multi-match packet classification framework. This framework is targeted and optimised for flexible, efficient execution on NVIDIA GPU platforms through the CUDA API, but should not be difficult to port to other platforms, such as OpenCL, in the future. GPF was conceived and developed in order to accelerate classification of large packet capture files, such as those collected by Network Telescopes. It uses a multiphase SIMD classification process which exploits both the parallelism of packet sets and the redundancy in filter programs, in order to classify packet captures against multiple filters at extremely high rates. The resultant framework - comprising classification, compilation and buffering components - efficiently leverages GPU resources to classify arbitrary protocols, and return multiple filter results for each packet. The classification functions described were verified and evaluated by testing an experimental prototype implementation against several filter programs, of varying complexity, on devices from three GPU platform generations. In addition to the significant speedup achieved in processing results, analysis indicates that the prototype classification functions perform predictably, and scale linearly with respect to both packet count and filter complexity. Furthermore, classification throughput (packets/s) remained essentially constant regardless of the underlying packet data, and thus the effective data rate when classifying a particular filter was heavily influenced by the average size of packets in the processed capture. For example: in the trivial case of classifying all IPv4 packets ranging in size from 70 bytes to 1KB, the observed data rate achieved by the GPU classification kernels ranged from 60Gbps to 900Gbps on a GTX 275, and from 220Gbps to 3.3Tbps on a GTX 480. In the less trivial case of identifying all ARP, TCP, UDP and ICMP packets for both IPv4 and IPv6 protocols, the effective data rates ranged from 15Gbps to 220Gbps (GTX 275), and from 50Gbps to 740Gbps (GTX 480), for 70B and 1KB packets respectively. An illustrative worked example of this throughput/data-rate relationship follows this record. , LaTeX with hyperref package
- Full Text:
- Date Issued: 2012
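The abstract above notes that classification throughput in packets per second stays roughly constant, so the effective data rate is driven by average packet size. The short calculation below makes that relationship explicit using the quoted GTX 275 figures; the derived packets-per-second value is an inference from those figures, not a number reported in the thesis.

```python
# Illustrative check of the relationship data_rate = throughput_pps * packet_bits.
# The ~107 Mpps figure is inferred from the quoted 60 Gbps at 70-byte packets;
# it is not a number reported in the thesis itself.
GBIT = 1e9

def data_rate_gbps(throughput_pps: float, packet_bytes: int) -> float:
    """Effective data rate implied by a fixed classification throughput."""
    return throughput_pps * packet_bytes * 8 / GBIT

# Infer packets/s from the quoted GTX 275 figure: 60 Gbps at 70-byte packets.
pps_gtx275 = 60 * GBIT / (70 * 8)          # ~1.07e8 packets per second

for size in (70, 256, 512, 1024):
    print(f"{size:>5} B packets -> {data_rate_gbps(pps_gtx275, size):7.1f} Gbps")
# At 1024-byte packets this gives ~878 Gbps, consistent with the ~900 Gbps quoted.
```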
A comparative study of CERBER, MAKTUB and LOCKY Ransomware using a Hybridised-Malware analysis
- Authors: Schmitt, Veronica
- Date: 2019
- Subjects: Microsoft Windows (Computer file) , Data protection , Computer crimes -- Prevention , Computer security , Computer networks -- Security measures , Computers -- Access control , Malware (Computer software)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/92313 , vital:30702
- Description: There has been a significant increase in the prevalence of Ransomware attacks over the preceding four years. This indicates that the battle to defend against this class of malware has not yet been won. This research proposes that, by identifying the similarities within the operational framework of Ransomware strains, a better overall understanding of their operation and function can be achieved, which will in turn aid in a quicker response to future attacks. With the average Ransomware attack taking two hours to be identified, there is clearly not yet a full understanding of why these attacks are so successful. Research into Ransomware is limited by what is currently known on the topic. Due to the limitations of the research, the decision was taken to examine only three samples of Ransomware from different families; this was decided because of the complexity and comprehensive nature of the research. The in-depth nature of the research and the time constraints associated with it did not allow the proof of concept of this framework to be tested on more than three families, but the exploratory work was promising and should be explored further in future research. The aim of the research is to follow the Hybrid-Malware analysis framework, which consists of both static and dynamic analysis phases, in addition to the digital forensic examination of the infected system. This allows for signature-based findings, along with behavioural and forensic findings, all in one. This information allows for a better understanding of how this malware is designed and how it infects and remains persistent on a system. The operating system chosen is Microsoft Windows 7, which is still utilised by a significant proportion of Windows users, especially in the corporate environment. The experiment process was designed to enable the researcher to collect information regarding the Ransomware and every aspect of its behaviour and communication on a target system. The results can be compared across the three strains to identify the commonalities. The initial hypothesis was that Ransomware variants are all much like an instant cake mix: they consist of specific building blocks which remain the same, with the flavouring being the unique feature. An illustrative code sketch of a simple static-analysis step follows this record.
- Full Text:
- Date Issued: 2019
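The abstract describes a hybrid workflow combining static analysis, dynamic analysis and digital forensics. As a small illustration of the static side only, the hypothetical sketch below hashes a sample and extracts printable strings, two common first-pass artefacts when profiling malware; the file name and thresholds are placeholders and this is not the tooling used in the thesis.

```python
# Hypothetical illustration of a static-analysis step from a hybrid workflow:
# hash a sample and pull printable strings as simple signature-style artefacts.
# File name and thresholds are placeholders; this is not the thesis's tooling.
import hashlib
import re
import sys


def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def printable_strings(path: str, min_len: int = 6):
    """Extract ASCII strings, a common first pass when profiling a sample."""
    with open(path, "rb") as fh:
        data = fh.read()
    return [m.group().decode() for m in re.finditer(rb"[ -~]{%d,}" % min_len, data)]


if __name__ == "__main__":
    sample = sys.argv[1] if len(sys.argv) > 1 else "sample.bin"   # placeholder path
    print("sha256:", sha256_of(sample))
    interesting = [s for s in printable_strings(sample) if "http" in s.lower()]
    print("url-like strings:", interesting[:10])
```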
An investigation into the current state of web based cryptominers and cryptojacking
- Authors: Len, Robert
- Date: 2021-04
- Subjects: Cryptocurrencies , Malware (Computer software) , Computer networks -- Security measures , Computer networks -- Monitoring , Cryptomining , Coinhive , Cryptojacking
- Language: English
- Type: thesis , text , Masters , MSc
- Identifier: http://hdl.handle.net/10962/178248 , vital:42924
- Description: The aim of this research was to conduct a review of the current state and extent of surreptitious crypto mining software and its prevalence as a means of income generation. Income is generated through the use of a viewer's browser to execute custom JavaScript code that mines cryptocurrencies such as Monero and Bitcoin. The research aimed to measure the prevalence of illicit mining scripts being utilised for "in-browser" cryptojacking, while further analysing the ecosystems that support the cryptomining environment. The extent of the research covers aspects such as the content (or type) of the sites hosting malicious "in-browser" cryptomining software, the occurrences of currencies utilised in the cryptographic mining, and the analysis of cryptographic mining code samples. This research aims to compare the results of previous work with the current state of affairs since the closure of Coinhive in March 2019. Coinhive was at the time the market leader in such web-based mining services. Beyond the analysis of the prevalence of cryptomining on the web today, research into the methodologies and techniques used to detect and counteract cryptomining is also conducted. This includes the most recent developments in malicious JavaScript de-obfuscation as well as cryptomining signature creation and detection. Methodologies for heuristic JavaScript behaviour identification, and the subsequent identification of potentially malicious outliers, are also included within the countermeasure analysis. The research revealed that, although no longer functional, Coinhive remained the most prevalent script used for "in-browser" cryptomining services. While it remained the most prevalent, there was nevertheless a significant decline in overall occurrences compared to when coinhive.com was operational. The ecosystem hosting "in-browser" mining websites was found to be distributed both geographically and in terms of domain categorisation. An illustrative code sketch follows this record. , Thesis (MSc) -- Faculty of Science, Computer Science, 2021
- Full Text:
- Date Issued: 2021-04
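The abstract describes measuring the prevalence of in-browser mining scripts and creating cryptomining signatures. The sketch below is a minimal, hypothetical signature scan of a page's HTML for a few well-known miner-related marker strings; the signature list is a small illustrative sample rather than the survey's actual detection corpus, and the scanned URL is a placeholder.

```python
# Hypothetical signature-based check for in-browser mining scripts in a page's
# HTML. The signature list is a small illustrative sample, not the survey's
# actual detection corpus, and the URL below is a placeholder.
import re
import urllib.request

# A few well-known marker strings associated with browser-mining libraries.
MINER_SIGNATURES = [
    re.compile(p, re.IGNORECASE)
    for p in (r"coinhive\.min\.js", r"CoinHive\.Anonymous", r"cryptonight", r"coin-?imp")
]


def find_miner_signatures(html: str):
    """Return the signature patterns that match the supplied HTML/JavaScript."""
    return [p.pattern for p in MINER_SIGNATURES if p.search(html)]


def scan_url(url: str):
    with urllib.request.urlopen(url, timeout=10) as resp:   # fetch page source
        html = resp.read().decode("utf-8", errors="replace")
    return find_miner_signatures(html)


if __name__ == "__main__":
    hits = scan_url("https://example.com/")                 # placeholder URL
    print("miner signatures found:" if hits else "no miner signatures found", hits)
```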
Bolvedere: a scalable network flow threat analysis system
- Authors: Herbert, Alan
- Date: 2019
- Subjects: Bolvedere (Computer network analysis system) , Computer networks -- Scalability , Computer networks -- Measurement , Computer networks -- Security measures , Telecommunication -- Traffic -- Measurement
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/71557 , vital:29873
- Description: Since the advent of the Internet, and its public availability in the late 1990s, there have been significant advancements in network technologies and thus a significant increase in the bandwidth available to network users, both human and automated. Although this growth is of great value to network users, it has led to an increase in malicious network-based activities, and it is theorised that, as more services become available on the Internet, the volume of such activities will continue to grow. Because of this, there is a need to monitor, comprehend, discern, understand and (where needed) respond to events on networks worldwide. Although this line of thought is simple in its reasoning, undertaking such a task is no small feat. Full packet analysis is a method of network surveillance that seeks out specific characteristics within network traffic that may tell of malicious activity or anomalies in regular network usage. It is carried out within firewalls and implemented through packet classification. In the context of the networks that make up the Internet, this form of packet analysis has become infeasible, as the volume of traffic introduced onto these networks every day is so large that there are simply not enough processing resources to perform such a task on every packet in real time. One could combat this problem by performing post-incident forensics: archiving packets and processing them later. However, as one cannot process all incoming packets, the archive will eventually run out of space. Full packet analysis is also hindered by the fact that some existing, commonly used solutions are designed around a single host and a single thread of execution, an outdated approach that is far slower than necessary on current computing technology. This research explores the conceptual design and implementation of a scalable network traffic analysis system named Bolvedere. The analysis performed by Bolvedere simply asks whether the existence of a connection, coupled with its associated metadata, is enough to conclude something meaningful about that connection. This idea draws away from the traditional processing of every single byte in every single packet monitored on a network link (Deep Packet Inspection) through the concept of working with connection flows. Bolvedere performs its work by leveraging the NetFlow version 9 and IPFIX protocols, but is not limited to these. It is implemented using a modular approach that allows for either complete execution of the system on a single host or the horizontal scaling out of subsystems onto multiple hosts. The use of multiple hosts is achieved through the implementation of Zero Message Queue (ZMQ), whose lightweight interprocess communication allows Bolvedere to scale out horizontally, increasing the processing resources available and thus the analysis throughput. Many underlying mechanisms in Bolvedere have been automated. This is intended to make the system more user-friendly, as the user need only tell Bolvedere what information they wish to analyse, and the system will then rebuild itself in order to achieve this required task. Bolvedere has also been hardware-accelerated through the use of Field-Programmable Gate Array (FPGA) technologies, which more than doubled the total throughput of the system.
- Full Text:
- Date Issued: 2019
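The abstract above describes a modular pipeline in which flow records, rather than full packets, are distributed to analysis subsystems over ZMQ. As a rough illustration only, and not Bolvedere's actual code, the sketch below shows a ZMQ PULL worker of the kind that could be replicated across hosts; the endpoint address, the JSON encoding and the flow field names are assumptions made for the example.

```python
# Illustrative sketch (not Bolvedere's actual code): a minimal ZMQ PULL worker
# in the pipeline pattern the abstract describes. A producer parses NetFlow v9 /
# IPFIX records and PUSHes them as JSON; several copies of this worker can be
# started on different hosts to scale analysis horizontally.
# Requires pyzmq; the endpoint and flow field names are assumptions.
import zmq

def run_worker(endpoint: str = "tcp://127.0.0.1:5557") -> None:
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.PULL)
    sock.connect(endpoint)          # many workers can connect to one producer
    while True:
        flow = sock.recv_json()     # e.g. {"src": ..., "dst": ..., "dport": ..., "bytes": ...}
        if flow.get("dport") == 23 or flow.get("bytes", 0) > 10_000_000:
            # Flag telnet traffic and unusually large flows for further analysis.
            print("suspicious flow:", flow)

if __name__ == "__main__":
    run_worker()
```

Because many PULL workers can connect to a single PUSH producer, adding hosts increases throughput without changing the producer, which mirrors the horizontal scaling described in the abstract.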
Cybersecurity: reducing the attack surface
- Authors: Thomson, Kerry-Lynn
- Subjects: Computer security , Computer networks -- Security measures , f-sa
- Language: English
- Type: Lectures
- Identifier: http://hdl.handle.net/10948/52885 , vital:44319
- Description: Almost 60% of the world's population has access to the internet, and most organisations today rely on internet connectivity to conduct business and carry out daily operations. Further to this, it is estimated that concepts such as the Internet of Things (IoT) will facilitate the connection of over 125 billion ‘things’ by the year 2030. However, as people and devices become more and more interconnected, and more data is shared, the question that must be asked is: are we doing so securely? Each year, cybercriminals cost organisations and individuals millions of dollars, using techniques such as phishing, social engineering, malware and denial-of-service attacks. In particular, together with the Covid-19 pandemic, there has been a so-called ‘cybercrime pandemic’. Threat actors adapted their techniques to target people with Covid-19-themed cyberattacks and phishing campaigns to exploit their stress and anxiety during the pandemic. Cybersecurity and cybercrime exist in a symbiotic relationship in cyberspace: as cybersecurity gets stronger, cybercriminals need to become stronger to overcome those defences; and as the cybercriminals become stronger, so too must the defences. Further, this symbiotic relationship plays out on what is called the attack surface. Attack surfaces are the exposed areas of an organisation that make systems more vulnerable to attacks and are, essentially, all the gaps in an organisation's security that could be compromised by a threat actor. This attack surface is increased as organisations incorporate IoT technologies, migrate to the cloud and decentralise their workforces, as happened during the pandemic when many people worked from home. It is essential that organisations reduce the digital attack surface, and the vulnerabilities introduced through devices connected to the internet, with technical strategies and solutions. However, the focus of cybersecurity is often on the digital attack surface and technical solutions, with less of a focus on the human aspects of cybersecurity. The human attack surface encompasses all the vulnerabilities introduced through the actions and activities of employees. These employees should be given the necessary cybersecurity awareness, training and education to reduce the human attack surface of organisations. However, it is not only employees of organisations who are online. All individuals who interact online should be cybersecurity aware and know how to reduce their own digital and human attack surfaces, or digital footprints. This paper emphasises the importance of utilising people as part of the cybersecurity defence through the cultivation of cybersecurity cultures in organisations and a cybersecurity-conscious society.
- Full Text:
Digital forensic model for computer networks
- Authors: Sanyamahwe, Tendai
- Date: 2011
- Subjects: Computer crimes -- Investigation , Evidence, Criminal , Computer networks -- Security measures , Electronic evidence , Forensic sciences , Internet -- Security measures
- Language: English
- Type: Thesis , Masters , MCom (Information Systems)
- Identifier: vital:11127 , http://hdl.handle.net/10353/d1000968 , Computer crimes -- Investigation , Evidence, Criminal , Computer networks -- Security measures , Electronic evidence , Forensic sciences , Internet -- Security measures
- Description: The Internet has become important since information is now stored in digital form and is transported both within and between organisations in large amounts through computer networks. Nevertheless, there are those individuals or groups of people who utilise the Internet to harm other businesses because they can remain relatively anonymous. To prosecute such criminals, forensic practitioners have to follow a well-defined procedure to convict responsible cyber-criminals in a court of law. Log files provide significant digital evidence in computer networks when tracing cyber-criminals. Network log mining is an evolution of typical digital forensics utilising evidence from network devices such as firewalls, switches and routers. Network log mining is a process supported by prevailing South African laws such as the Computer Evidence Act, 57 of 1983; the Electronic Communications and Transactions (ECT) Act, 25 of 2002; and the Electronic Communications Act, 36 of 2005. International laws and regulations supporting network log mining include the Sarbanes-Oxley Act and the Foreign Corrupt Practices Act (FCPA) of the USA, and the UK's Bribery Act. A digital forensic model for computer networks focusing on network log mining has been developed based on the literature reviewed and critical thought. The development of the model followed the Design Science methodology. However, this research project argues that there are some important aspects which are not fully addressed by the prevailing South African legislation supporting digital forensic investigations. With that in mind, this research project proposes a set of Forensic Investigation Precautions. These precautions were developed as part of the proposed model. The Diffusion of Innovations (DOI) Theory is the framework underpinning the development of the model and how it can be assimilated into the community. The model was sent to IT experts for validation, and this provided the qualitative element and the primary data of this research project. These experts found the proposed model to be unique and comprehensive, adding new knowledge to the field of Information Technology. A paper was also written as part of this research project.
- Full Text:
- Date Issued: 2011
Data-centric security : towards a utopian model for protecting corporate data on mobile devices
- Authors: Mayisela, Simphiwe Hector
- Date: 2014
- Subjects: Computer security , Computer networks -- Security measures , Business enterprises -- Computer networks -- Security measures , Mobile computing -- Security measures , Mobile communication systems -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4688 , http://hdl.handle.net/10962/d1011094 , Computer security , Computer networks -- Security measures , Business enterprises -- Computer networks -- Security measures , Mobile computing -- Security measures , Mobile communication systems -- Security measures
- Description: Data-centric security is significant in understanding, assessing and mitigating the various risks and impacts of sharing information outside corporate boundaries. Information generally leaves corporate boundaries through mobile devices. Mobile devices continue to evolve as multi-functional tools for everyday life, surpassing their initial intended use. This added capability and the increasingly extensive use of mobile devices do not come without a degree of risk, hence the need to guard and protect information as it exists beyond the corporate boundaries and throughout its lifecycle. Literature on existing models crafted to protect data, rather than the infrastructure in which the data resides, is reviewed. Technologies that organisations have implemented to adopt the data-centric model are studied. A utopian model that takes into account the shortcomings of existing technologies and the deficiencies of common theories is proposed. Two sets of qualitative studies are reported: the first is a preliminary online survey to assess the ubiquity of mobile devices and the extent of technology adoption towards implementation of the data-centric model; the second comprises a focus survey and expert interviews pertaining to technologies that organisations have implemented to adopt the data-centric model. The latter study revealed insufficient data at the time of writing for the results to be statistically significant; however, indicative trends supported the assertions documented in the literature review. The question that this research answers is whether or not current technology implementations designed to mitigate risks from mobile devices actually address business requirements. This research question, answered through these two sets of qualitative studies, revealed inconsistencies between the technology implementations and business requirements. The thesis concludes by proposing a realistic model, based on the outcome of the qualitative studies, which bridges the gap between the technology implementations and business requirements. Future work that could be conducted in light of the findings and comments from this research is also considered.
- Full Text:
- Date Issued: 2014
An information security governance model for industrial control systems
- Authors: Webster, Zynn
- Date: 2018
- Subjects: Computer networks -- Security measures , Data protection Computer security Business enterprises -- Computer networks -- Security measures
- Language: English
- Type: Thesis , Masters , MIT
- Identifier: http://hdl.handle.net/10948/36383 , vital:33934
- Description: Industrial Control Systems (ICS) is a term used to describe several types of control systems, including Supervisory Control and Data Acquisition (SCADA) systems, Distributed Control Systems (DCS) and Programmable Logic Controllers (PLC). These systems consist of a combination of control components (e.g. electrical, mechanical, pneumatic) which act together to achieve an industrial objective (e.g. manufacturing, or the transportation of matter or energy). ICS play a fundamental role in critical infrastructures such as electricity grids and the oil, gas and manufacturing industries. Initially, ICS had little resemblance to typical enterprise IT systems; they were isolated and ran proprietary control protocols using specialised hardware and software. However, with initiatives such as Industry 4.0 and the Industrial Internet of Things (IIoT), the nature of ICS has changed significantly. There is an ever-increasing use of commercial operating systems and standard protocols like TCP/IP and Ethernet. Consequently, modern ICS increasingly resemble conventional enterprise IT systems, and it is well known that such systems and networks are vulnerable and require extensive management to ensure Confidentiality, Integrity and Availability. Since ICS are now adopting conventional IT characteristics, they are also inheriting the associated risks. However, owing to the functional area of ICS, the consequences of these threats are much more severe than those for enterprise IT systems. Managing the security of these systems with highly skilled IT personnel has therefore become essential. This research consequently focused on identifying which security controls unique to ICS and enterprise IT systems can be combined and/or tailored to provide an organisation with a single, comprehensive set of security controls. By investigating existing standards and best practices for both enterprise IT and ICS environments, this study produced a single set of security controls and showed how these controls can be integrated into an existing information security governance model, which organisations can use as a basis for a security framework that secures not only their enterprise IT systems but also their ICS.
- Full Text:
- Date Issued: 2018
Evolving a secure grid-enabled, distributed data warehouse : a standards-based perspective
- Authors: Li, Xiao-Yu
- Date: 2007
- Subjects: Computational grids (Computer systems) , Computer networks -- Security measures , Electronic data processing -- Distributed processing
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9738 , http://hdl.handle.net/10948/544 , Computational grids (Computer systems) , Computer networks -- Security measures , Electronic data processing -- Distributed processing
- Description: As digital data collection has increased in scale and number, it has become an important type of resource serving a wide community of researchers. Cross-institutional data-sharing and collaboration offer a suitable approach to support those research institutions that suffer from a lack of data and related IT infrastructure. Grid computing has become a widely adopted approach to enabling cross-institutional resource-sharing and collaboration. It integrates a distributed and heterogeneous collection of locally managed users and resources. This project proposes a distributed data warehouse system which uses Grid technology to enable data access and integration, and collaborative operations across multiple distributed institutions, in the context of HIV/AIDS research. This study is based on wider research into an OGSA-based Grid services architecture, comprising a data-analysis system which utilises a data warehouse, data marts and a near-line operational database hosted by distributed institutions. Within this framework, specific patterns for collaboration, interoperability, resource virtualisation and security are included. The heterogeneous and dynamic nature of the Grid environment introduces a number of security challenges. This study therefore also concerns a set of particular security aspects, including PKI-based authentication, single sign-on, dynamic delegation, and attribute-based authorisation. These mechanisms, as supported by the Globus Toolkit's Grid Security Infrastructure, are used to enable interoperability and establish trust relationships between the various security mechanisms and policies within different institutions; to manage credentials; and to ensure secure interactions.
- Full Text:
- Date Issued: 2007
Governing information security using organisational information security profiles
- Authors: Tyukala, Mkhululi
- Date: 2007
- Subjects: Data protection , Computer security -- Management , Computer networks -- Security measures
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9788 , http://hdl.handle.net/10948/626 , Data protection , Computer security -- Management , Computer networks -- Security measures
- Description: The corporate scandals of the last few years have changed the face of information security and its governance. Information security has been elevated to board of director level due to the legislation and corporate governance regulations resulting from these scandals. Boards of directors now have corporate responsibility to ensure that the information assets of an organisation are secure. They are forced to embrace information security and make it part of business strategies. The new support from the board of directors gives information security the weight, the voice from the top, and the financial muscle that other business activities enjoy. However, as an area made up of specialist activities, information security may not be as easily comprehended at board level as other business-related activities. Yet the board of directors needs to provide oversight of information security; that is, to put an information security programme in place to ensure that information is adequately protected. This raises a number of challenges. One of these challenges is how information security can be understood, and well-informed decisions about it made, at board level. This dissertation provides a mechanism to present information at board level on how information security is implemented according to the vision of the board of directors. This mechanism is built upon well-accepted and well-documented concepts of information security. The mechanism (termed an Organisational Information Security Profile, or OISP) will assist organisations with the initialisation, monitoring, measuring, reporting and reviewing of information security programmes. Ultimately, the OISP will make it possible to know whether the information security endeavours of the organisation are effective or not. If the information security programme is found to be ineffective, the OISP will facilitate the identification of the areas that are ineffective and of what caused the ineffectiveness. This dissertation also presents how the effectiveness or ineffectiveness of information security can be presented at board level using well-known visualisation methods. Finally, the contribution, limitations and areas that need further investigation are provided.
- Full Text:
- Date Issued: 2007
A comparison of exact string search algorithms for deep packet inspection
- Authors: Hunt, Kieran
- Date: 2018
- Subjects: Algorithms , Firewalls (Computer security) , Computer networks -- Security measures , Intrusion detection systems (Computer security) , Deep Packet Inspection
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/60629 , vital:27807
- Description: Every day, computer networks throughout the world face a constant onslaught of attacks. To combat these, network administrators are forced to employ a multitude of mitigating measures. Devices such as firewalls and Intrusion Detection Systems are prevalent today and employ extensive Deep Packet Inspection to scrutinise each piece of network traffic. Systems such as these usually require specialised hardware to meet the demand imposed by high throughput networks. Hardware like this is extremely expensive and singular in its function. It is with this in mind that the string search algorithms are introduced. These algorithms have been proven to perform well when searching through large volumes of text and may be able to perform equally well in the context of Deep Packet Inspection. String search algorithms are designed to match a single pattern to a substring of a given piece of text. This is not unlike the heuristics employed by traditional Deep Packet Inspection systems. This research compares the performance of a large number of string search algorithms during packet processing. Deep Packet Inspection places stringent restrictions on the reliability and speed of the algorithms due to increased performance pressures. A test system had to be designed in order to properly test the string search algorithms in the context of Deep Packet Inspection. The system allowed for precise and repeatable tests of each algorithm and then for their comparison. Of the algorithms tested, the Horspool and Quick Search algorithms posted the best results for both speed and reliability. The Not So Naive and Rabin-Karp algorithms were slowest overall.
- Full Text:
- Date Issued: 2018
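Since the abstract above singles out the Horspool algorithm as one of the strongest performers, a compact sketch of it may help. This is an illustrative implementation of the standard Boyer-Moore-Horspool method, not the thesis's own test harness, and the example payload is invented.

```python
# Illustrative sketch (not the thesis's test harness): the Boyer-Moore-Horspool
# algorithm, one of the best performers reported above, searching for a byte
# pattern inside a packet payload.
def horspool_search(pattern: bytes, text: bytes) -> int:
    """Return the index of the first occurrence of pattern in text, or -1."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1 if m else 0
    # Bad-character shift table: distance from each byte's last occurrence
    # (excluding the final position) to the end of the pattern.
    shift = {b: m - i - 1 for i, b in enumerate(pattern[:-1])}
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i
        i += shift.get(text[i + m - 1], m)   # default shift: whole pattern length
    return -1

# Example: locate an HTTP method inside a captured payload.
payload = b"\x00\x16GET /index.html HTTP/1.1\r\nHost: example.com\r\n"
print(horspool_search(b"GET ", payload))   # -> 2
```

The single shift table keeps the per-alignment work small, which is one reason such algorithms suit high-throughput scanning of packet payloads.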
Educating users about information security by means of game play
- Authors: Monk, Thomas Philippus
- Date: 2011
- Subjects: Computer security , Educational games -- Design , Computer networks -- Security measures
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9748 , http://hdl.handle.net/10948/1493 , Computer security , Educational games -- Design , Computer networks -- Security measures
- Description: Information is necessary for any business to function. However, if one does not manage one's information assets properly, then one's business is likely to be at risk. By implementing Information Security controls, procedures and/or safeguards, one can secure information assets against risks. The risks of an organisation can be mitigated if employees implement safety measures. However, employees are often unable to work securely due to a lack of knowledge. This dissertation evaluates the premise that a computer game could be used to educate employees about Information Security. A game was developed with the aim of educating employees in this regard. If people were motivated to play the game, without external motivation from an organisation, then people would also, indirectly, be motivated to learn about Information Security. Therefore, a secondary aim of this game was to be self-motivating. An experiment was conducted in order to test whether or not these aims were met. The experiment was conducted on a play-test group and a control group. The play-test group played the game before completing a questionnaire that tested the information security knowledge of participants, while the control group simply completed the questionnaire. The two groups' answers were compared in order to obtain results. This dissertation discusses the research design of the experiment and provides an analysis of the results. The game design is also discussed, providing guidelines for future game designers to follow. The experiment indicated that the game is motivational, but perhaps not educational enough. However, the results suggest that a computer game can still be used to teach users about Information Security. Factors that involved consequence and repetition contributed towards the educational value of the game, whilst competitiveness and rewards contributed to its motivational aspect.
- Full Text:
- Date Issued: 2011