An examination of validation practices in relation to the forensic acquisition of digital evidence in South Africa
- Authors: Jordaan, Jason
- Date: 2014
- Subjects: Electronic evidence , Evidence, Criminal , Forensic sciences , Evidence, Criminal -- South Africa -- Law and legislation
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4706 , http://hdl.handle.net/10962/d1016361
- Description: The acquisition of digital evidence is the most crucial part of the entire digital forensics process. During this process, digital evidence is acquired in a forensically sound manner to ensure the legal admissibility and reliability of that evidence in court. In the acquisition process, various hardware or software tools are used to acquire the digital evidence. All of the digital forensic standards relating to the acquisition of digital evidence require that the hardware and software tools used in the acquisition process are validated as functioning correctly and reliably, as this lends credibility to the evidence in court. In fact, the Electronic Communications and Transactions Act 25 of 2002 in South Africa specifically requires courts to consider issues such as reliability and the manner in which the integrity of digital evidence is ensured when assessing the evidential weight of digital evidence. Previous research into quality assurance in the practice of digital forensics in South Africa identified that, in general, tool validation was not performed. A hypothesis was therefore proposed that digital forensic practitioners in South Africa make use of hardware and/or software tools for the forensic acquisition of digital evidence whose validity and/or reliability cannot be objectively proven, and that any digital evidence preserved using those tools is consequently potentially unreliable. This hypothesis was tested through a survey of digital forensic practitioners in South Africa. The research established that the majority of digital forensic practitioners do not use tools in the forensic acquisition of digital evidence that can be proven to be validated and/or reliable. While just under a fifth of digital forensic practitioners can provide some proof of validation and/or reliability, that proof does not meet formal international standards.
In essence, this means that digital evidence, which is preserved through the use of specific hardware and/or software tools for subsequent presentation and reliance upon as evidence in a court of law, is preserved by tools whose objective and scientific validity has not been determined. Since South African courts must consider reliability in terms of Section 15(3) of the Electronic Communications and Transactions Act 25 of 2002 when assessing the weight of digital evidence, that assessment is undermined by the current state of practice among digital forensic practitioners in South Africa.
- Full Text:
- Date Issued: 2014
Classification of the difficulty in accelerating problems using GPUs
- Authors: Tristram, Uvedale Roy
- Date: 2014
- Subjects: Graphics processing units , Computer algorithms , Computer programming , Problem solving -- Data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4699 , http://hdl.handle.net/10962/d1012978
- Description: Scientists continually require additional processing power, as this enables them to compute larger problem sizes, use more complex models and algorithms, and solve problems previously thought computationally impractical. General-purpose computation on graphics processing units (GPGPU) can help in this regard, as there is great potential in using graphics processors to accelerate many scientific models and algorithms. However, some problems are considerably harder to accelerate than others, and it may be challenging for those new to GPGPU to ascertain the difficulty of accelerating a particular problem or seek appropriate optimisation guidance. Through what was learned in the acceleration of a hydrological uncertainty ensemble model, large numbers of k-difference string comparisons, and a radix sort, problem attributes have been identified that can assist in the evaluation of the difficulty in accelerating a problem using GPUs. The identified attributes are inherent parallelism, branch divergence, problem size, required computational parallelism, memory access pattern regularity, data transfer overhead, and thread cooperation. Using these attributes as difficulty indicators, an initial problem difficulty classification framework has been created that aids in GPU acceleration difficulty evaluation. This framework further facilitates directed guidance on suggested optimisations and required knowledge based on problem classification, which has been demonstrated for the aforementioned accelerated problems. It is anticipated that this framework, or a derivative thereof, will prove to be a useful resource for new or novice GPGPU developers in the evaluation of potential problems for GPU acceleration.
- Full Text:
- Date Issued: 2014
The role of computational thinking in introductory computer science
- Authors: Gouws, Lindsey Ann
- Date: 2014
- Subjects: Computer science , Computational complexity , Problem solving -- Study and teaching
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4690 , http://hdl.handle.net/10962/d1011152 , Computer science , Computational complexity , Problem solving -- Study and teaching
- Description: Computational thinking (CT) is gaining recognition as an important skill for students, both in computer science and other disciplines. Although there has been much focus on this field in recent years, it is rarely taught as a formal course, and there is little consensus on what exactly CT entails and how to teach and evaluate it. This research addresses the lack of resources for integrating CT into the introductory computer science curriculum. The question that we aim to answer is whether CT can be evaluated in a meaningful way. A CT framework that outlines the skills and techniques comprising CT and describes the nature of student engagement was developed; this is used as the basis for this research. An assessment (CT test) was then created to gauge the ability of incoming students, and a CT-specific computer game was developed based on the analysis of an existing game. A set of problem-solving strategies and practice activities were then recommended based on criteria defined in the framework. The results revealed that the CT abilities of first-year university students are relatively poor, but that the students' scores for the CT test could be used as a predictor of their future success in computer science courses. The framework developed for this research proved successful when applied to the test, computer game evaluation, and classification of strategies and activities. Through this research, we established that CT is a skill that first-year computer science students are lacking, and that using CT exercises alongside traditional programming instruction can improve students' learning experiences.
- Full Text:
- Date Issued: 2014