An analysis of malware evasion techniques against modern AV engines
- Authors: Haffejee, Jameel
- Date: 2015
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:20979 , http://hdl.handle.net/10962/5821
- Description: This research empirically tested the response of antivirus applications to binaries that use virus-like evasion techniques. To achieve this, a set of binaries was processed using several evasion methods and then deployed against several antivirus engines. The research also documents the process of setting up an environment for testing antivirus engines, including building the evasion techniques used in the tests. The results of the empirical tests illustrate that an attacker can evade multiple antivirus engines without much effort using well-known evasion techniques. Furthermore, some antivirus engines may respond to the occurrence of an evasion technique rather than to the presence of any malicious code. In practical terms, this shows that while antivirus applications are useful for protecting against known threats, their effectiveness against unknown or modified threats is limited.
- Full Text:
An analysis of the risk exposure of adopting IPV6 in enterprise networks
- Authors: Berko, Istvan Sandor
- Date: 2015
- Subjects: International Workshop on Deploying the Future Infrastructure , Computer networks , Computer networks -- Security measures , Computer network protocols
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4722 , http://hdl.handle.net/10962/d1018918
- Description: IPv6's increased address pool presents changes in resource impact to the Enterprise that, if not adequately addressed, can turn risks that are locally significant in IPv4 into risks that can impact the Enterprise in its entirety. The expected conclusion is that the IPv6 environment will impose significant changes in the Enterprise environment, which may negatively impact organisational security if the IPv6 nuances are not adequately addressed. This thesis reviews the risks related to the operation of enterprise networks with the introduction of IPv6. Global trends are discussed to provide insight and background to the IPv6 research space. Analysing the current state of readiness in enterprise networks quantifies the value of developing this thesis. The base controls that should be deployed in enterprise networks to prevent the abuse of IPv6 through tunnelling, and to protect the enterprise access layer, are discussed. A series of case studies is presented that identifies and analyses the impact of certain changes in the IPv6 protocol on enterprise networks. The case studies also identify mitigation techniques to reduce risk.
- Full Text:
An investigation into the role played by perceived security concerns in the adoption of mobile money services : a Zimbabwean case study
- Authors: Madebwe, Charles
- Date: 2015
- Subjects: Banks and banking, Mobile -- Zimbabwe , Global system for mobile communications , Cell phones -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4711 , http://hdl.handle.net/10962/d1017933
- Description: The ubiquitous nature of mobile phones and their popularity has led to opportunistic value added services (VAS), such as mobile money, being implemented on the back of this phenomenon. Several studies have been done to find factors that influence the adoption of mobile money and other information systems. The thesis looks at factors determining the uptake of mobile money over cellular networks, with a special emphasis on aspects relating to perceived security, although other factors, namely perceived usefulness, perceived ease of use, perceived trust and perceived cost, were also examined. The research further looks at the security threats introduced to mobile money by virtue of the nature, architecture, standards and protocols of the Global System for Mobile Communications (GSM). The model employed for this research was the Technology Acceptance Model (TAM). A literature review was done on the security of GSM. Data was collected from a sample population around Harare, Zimbabwe using physical questionnaires. Statistical tests were performed on the collected data to find the significance of each construct to mobile money adoption. The research found a positive correlation between perceived security concerns and the adoption of mobile money services over cellular networks. Perceived usefulness was found to be the most important factor in the adoption of mobile money. The research also found that customers need to trust the network service provider and the systems in use for them to adopt mobile money. Other factors driving consumer adoption were found to be perceived ease of use and perceived cost. The findings show that players who intend to introduce mobile money should strive to offer secure and useful systems that are trustworthy, without making the service expensive or difficult to use. The literature review also showed that it is possible to compromise mobile money transactions carried out over GSM. (An illustrative sketch of a construct correlation test follows this record.)
- Full Text:
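The abstract reports statistical tests of how strongly each construct relates to mobile money adoption. The snippet below is a minimal sketch of one such test, a Pearson correlation between a perceived-security score and an adoption-intention score; the construct scores and sample values are hypothetical and are not taken from the thesis.

```python
# Hedged sketch: Pearson correlation between two hypothetical TAM construct
# scores (averages of Likert-scale questionnaire items per respondent).
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical averaged scores per respondent (1 = strongly disagree, 5 = strongly agree).
perceived_security = [4.2, 3.8, 2.5, 4.6, 3.1, 4.9, 2.2, 3.7]
adoption_intention = [4.0, 3.5, 2.8, 4.8, 3.0, 4.7, 2.5, 3.9]

print(f"r = {pearson(perceived_security, adoption_intention):.2f}")
```

In a study of this kind the same coefficient would be computed, and tested for significance, for every construct (perceived usefulness, ease of use, trust and cost), not just the security pair shown here.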
Pseudo-random access compressed archive for security log data
- Authors: Radley, Johannes Jurgens
- Date: 2015
- Subjects: Computer security , Information storage and retrieval systems , Data compression (Computer science)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4723 , http://hdl.handle.net/10962/d1020019
- Description: We are surrounded by an increasing number of devices and applications that produce a huge quantity of machine-generated data. Almost all machine data contains some element of security information that can be used to discover, monitor and investigate security events. The work proposes a pseudo-random access compressed storage method for log data, to be used with an information retrieval system that in turn provides the ability to search and correlate log data and the corresponding events. We explain the method for converting log files into distinct events and storing the events in a compressed file. This yields an entry identifier for each log entry that provides a pointer that can be used by indexing methods. The research also evaluates the compression performance penalties incurred by using this storage system, including a decreased compression ratio as well as increased compression and decompression times. (A brief sketch of this storage idea follows this record.)
- Full Text:
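The description sets out the core mechanism: log files are split into discrete events, the events are stored compressed, and each event receives an entry identifier that an index can point at. The following is a minimal sketch of that idea only; the class name, block size and API are illustrative assumptions and not the thesis's implementation.

```python
# Illustrative sketch: events are grouped into blocks, each block is compressed
# independently, and every event receives an entry identifier (block number,
# position in block) that an external index can point to.
import zlib

class CompressedLogStore:
    def __init__(self, events_per_block=1000):
        self.events_per_block = events_per_block
        self.blocks = []          # compressed blocks of newline-joined events
        self.pending = []         # events not yet flushed into a block

    def append(self, event: str) -> tuple[int, int]:
        """Store one log event and return its (block, offset) entry identifier."""
        entry_id = (len(self.blocks), len(self.pending))
        self.pending.append(event)
        if len(self.pending) == self.events_per_block:
            self._flush()
        return entry_id

    def _flush(self):
        self.blocks.append(zlib.compress("\n".join(self.pending).encode()))
        self.pending = []

    def get(self, entry_id: tuple[int, int]) -> str:
        """Pseudo-random access: decompress only the block containing the event."""
        block_no, offset = entry_id
        if block_no == len(self.blocks):          # still in the unflushed buffer
            return self.pending[offset]
        events = zlib.decompress(self.blocks[block_no]).decode().split("\n")
        return events[offset]

store = CompressedLogStore(events_per_block=2)
eid = store.append("Jan 01 00:00:01 host sshd[42]: Failed password for root")
store.append("Jan 01 00:00:02 host sshd[42]: Connection closed")
print(store.get(eid))
```

The performance penalties the abstract evaluates fall out of this kind of design: smaller blocks give faster pseudo-random access but hurt the compression ratio, because each block is compressed without context from its neighbours.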
Towards a framework for building security operation centers
- Authors: Jacobs, Pierre Conrad
- Date: 2015
- Subjects: Security systems industry , Systems engineering , Expert systems (Computer science) , COBIT (Information technology management standard) , Computer security
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4710 , http://hdl.handle.net/10962/d1017932
- Description: In this thesis a framework for Security Operation Centers (SOCs) is proposed. It was developed by utilising Systems Engineering best practices combined with industry-accepted standards and frameworks, such as the TM Forum’s eTOM framework, CoBIT, ITIL, and ISO/IEC 27002:2005. This framework encompasses the design considerations, the operational considerations and the means to measure the effectiveness and efficiency of SOCs. The intent is to provide guidance to consumers on how to compare and measure the capabilities of SOCs provided by disparate service providers, and to provide service providers (internal and external) a framework to use when building and improving their offerings. Providing a consistent, measurable and guaranteed service to customers is becoming more important, as there is an increased focus on holistic management of security. This has in turn resulted in an increased number of both internal and managed service provider solutions. While some frameworks exist for designing, building and operating specific security technologies used within SOCs, we did not find any comprehensive framework for designing, building and managing SOCs. Consequently, consumers of SOCs do not enjoy a consistent experience from vendors, and may experience inconsistent services from geographically dispersed offerings provided by the same vendor.
- Full Text:
Towards an evaluation and protection strategy for critical infrastructure
- Authors: Gottschalk, Jason Howard
- Date: 2015
- Subjects: Computer crimes -- Prevention , Computer networks -- Security measures , Computer crimes -- Law and legislation -- South Africa , Public works -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4721 , http://hdl.handle.net/10962/d1018793
- Description: Critical Infrastructure is often overlooked from an Information Security perspective as being of high importance to protect, which may result in Critical Infrastructure being at risk of Cyber-related attacks with potentially dire consequences. Furthermore, what is considered Critical Infrastructure is often a complex discussion, with varying opinions across audiences. Traditional Critical Infrastructure includes power stations, water and sewage pump stations, gas pipelines, power grids and a new entrant, the “internet of things”. This list is not complete, and a constant challenge exists in identifying Critical Infrastructure and its interdependencies. The purpose of this research is to highlight the importance of protecting Critical Infrastructure and to propose a high-level framework that aids in the identification and securing of Critical Infrastructure. To achieve this, key case studies involving Cyber crime and Cyber warfare were identified and discussed, together with the attack vectors used against Critical Infrastructure and their impact (where applicable to Critical Infrastructure). Furthermore, industry-related material was researched to identify key controls that would aid in protecting Critical Infrastructure. Initiatives that countries are pursuing that would aid in the protection of Critical Infrastructure were also identified and discussed. Research was conducted into the various standards, frameworks and methodologies available to aid in the identification, remediation and ultimately the protection of Critical Infrastructure. A key output of the research was the development of a hybrid approach to identifying Critical Infrastructure and its associated vulnerabilities, together with an approach for remediation with specific metrics (based on the research performed). The conclusion based on the research is that there is often a need and a requirement to identify and protect Critical Infrastructure; however, this is usually initiated or driven by non-owners of Critical Infrastructure (governments, governing bodies, standards bodies and security consultants). Furthermore, where there are active initiatives by owners, the suggested approaches are very often high-level in nature, with little direct guidance available for very immature environments.
- Full Text:
Towards large scale software based network routing simulation
- Authors: Herbert, Alan
- Date: 2015
- Subjects: Routers (Computer networks) , Computer software , Linux
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4709 , http://hdl.handle.net/10962/d1017931
- Description: Software-based routing simulators suffer from large simulation host requirements and are prone to slowdowns because of resource limitations, as well as context switching due to user space to kernel space requests. Furthermore, hardware-based simulations do not scale with the passing of time, as their available resources are set at the time of manufacture. This research aims to provide a software-based, scalable solution to network simulation. It aims to achieve this through a Linux kernel-based solution, by inserting a custom kernel module. This reduces the number of context switches by eliminating the user space context requirement, and is highly compatible with any host that can run the Linux kernel. Through careful consideration in data structure choice and software component design, this routing simulator achieved over 7 Gbps of throughput over multiple simulated node hops on consumer hardware. Alongside this throughput, the routing simulator also demonstrates scalability: it is able to instantiate and simulate networks in excess of 1 million routing nodes within 1 GB of system memory. (A simplified illustration of such a simulation follows this record.)
- Full Text:
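The thesis implements its simulator as a custom Linux kernel module, precisely to avoid the user-space overheads mentioned above; the toy sketch below is therefore only a user-space illustration of the general idea of forwarding packets across many simulated routing nodes, with a deliberately trivial line topology, and is not the thesis's design or code.

```python
# Toy illustration only: the thesis's simulator is a Linux kernel module, not
# user-space Python. This sketch just shows forwarding across simulated nodes.

def build_chain(n_nodes: int) -> dict[int, dict[str, int]]:
    """Each simulated node holds a tiny routing table: one next-hop entry."""
    # Node i forwards everything towards node i + 1 (a simple line topology).
    return {i: {"next_hop": i + 1} for i in range(n_nodes)}

def forward(tables: dict, src: int, dst: int) -> int:
    """Forward a packet hop by hop and return the number of hops taken."""
    hops, node = 0, src
    while node != dst:
        node = tables[node]["next_hop"]
        hops += 1
    return hops

tables = build_chain(100_000)
print(forward(tables, 0, 99_999), "hops")
```

The abstract's own numbers hint at why data structure choice matters: over a million nodes in about 1 GB of memory leaves a budget of roughly 1 KB of state per simulated node.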
Using risk mitigation approaches to define the requirements for software escrow
- Authors: Rode, Karl
- Date: 2015
- Subjects: Escrows , Source code (Computer Science)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4714 , http://hdl.handle.net/10962/d1017936
- Description: Two or more parties entering into a contract for services or goods may make use of an escrow of the funds for payment to enable trust in the contract. In such an arrangement the documents or financial instruments, the object(s) in escrow, are held in trust by a trusted third party (the escrow provider) until the specified conditions are fulfilled. In the scenario of software escrow, the object of escrow is typically the source code, and the specified release conditions usually address potential scenarios in which the software provider becomes unable to continue providing services (for example, due to bankruptcy or a change in the services provided). The subject of software escrow is not well documented in the academic body of work, with the largest information sources, active commentary and supporting papers provided by commercial software escrow providers, both in South Africa and abroad. This work maps the software escrow topic onto the King III compliance framework in South Africa. This is of value since users of bespoke developed applications may require extended professional assistance to align with the King III guidelines. The supporting risk assessment model developed in this work serves as a tool to evaluate and motivate for software escrow agreements. It also provides an overview of the various escrow agreement types and shifts the focus to the value proposition that each holds. Initial research indicated that current awareness of software escrow in industry is still very low. This was evidenced by the significant number of approached specialists who declined to participate in the survey due to their own admitted inexperience in applying the discipline of software escrow within their companies. Moreover, the participants who contributed to the research indicated that they only required software escrow for medium to highly critical applications. This proved the value of assessing the various risk factors that bespoke software development introduces, as well as the risk mitigation options available, through tools such as escrow, to reduce the actual and residual risk to a manageable level.
- Full Text:
Visualisation of PF firewall logs using open source
- Authors: Coetzee, Dirk
- Date: 2015
- Subjects: Open source software -- South Africa , Firewalls (Computer security) -- South Africa , Data logging -- South Africa , Data integrity -- South Africa , Data protection -- South Africa , Computer crimes -- South Africa , Hacktivism
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4719 , http://hdl.handle.net/10962/d1018552
- Description: If you cannot measure, you cannot manage. This is an age-old saying, but still very true, especially within the current South African cybercrime scene and the ever-growing Internet footprint. Due to the significant increase in cybercrime across the globe, information security specialists are starting to see the intrinsic value of logs that can ‘tell a story’. Logs do not only tell a story, but also provide a tool to measure a normally dark force within an organisation. The collection of current logs from installed systems, operating systems and devices is imperative in the event of a hacking attempt, data leak or even data theft, whether the attempt is successful or unsuccessful. No logs mean no evidence, and in many cases not even the opportunity to find the mistake or fault in the organisation’s defence systems. Historically, it remains difficult to choose which logs are required by your organisation. A number of questions should be considered: should a centralised or decentralised approach for collecting these logs be followed, or a combination of both? How many events will be collected, how much additional bandwidth will be required, and will the log collection be near real time? How long must the logs be kept, and what hashing and encryption (for integrity of data), if any, should be used? Lastly, what system must be used to correlate and analyse the logs, and to make alerts and reports available? This thesis addresses these questions, examining the current lack of log analysis and practical implementations in modern organisations, and also how this need can be fulfilled by means of a basic approach. South African organisations must use the technology at hand in order to know what electronic data is sent into and out of their networks. Concentrating only on FreeBSD PF firewall logs, it is demonstrated within this thesis that excellent results are possible when logs are collected to obtain a visual display of what data is traversing the corporate network and which parts of this data pose a threat to the corporate network. This threat is easily determined via a visual interpretation of statistical outliers. This thesis aims to show that, in the field of corporate data protection, if you can measure, you can manage. (A brief outlier-counting sketch follows this record.)
- Full Text:
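The thesis builds visual views of PF log data and reads threats off statistical outliers. The snippet below is a minimal sketch of that measurement step only, not the thesis's toolchain: PF writes its log in binary pcap form (typically rendered to text with tcpdump reading /var/log/pflog), so the sketch simply assumes per-source block counts have already been extracted and flags the sources whose counts stand out.

```python
# Minimal sketch: given per-source counts of blocked connections extracted from
# PF log output, flag sources whose counts are statistical outliers.
from statistics import mean, stdev

# Hypothetical per-source block counts extracted from the firewall log.
block_counts = {
    "198.51.100.7": 40, "203.0.113.5": 3, "192.0.2.1": 1, "192.0.2.9": 2,
    "203.0.113.8": 1, "198.51.100.22": 2, "192.0.2.44": 1, "203.0.113.77": 3,
}

values = list(block_counts.values())
threshold = mean(values) + 2 * stdev(values)   # simple outlier cut-off

for src, n in sorted(block_counts.items(), key=lambda kv: kv[1], reverse=True):
    marker = "  <-- outlier" if n > threshold else ""
    print(f"{src:15} {n:4d}{marker}")
```

The thesis presents this kind of result graphically rather than as text, but the underlying measurement is the same: count, establish a baseline, and surface the outliers for a human to interpret.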