Artificial neural networks as simulators for behavioural evolution in evolutionary robotics
- Authors: Pretorius, Christiaan Johannes
- Date: 2010
- Subjects: Neural networks (Computer science) , Robotics
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10462 , http://hdl.handle.net/10948/1476 , Neural networks (Computer science) , Robotics
- Description: Robotic simulators for use in Evolutionary Robotics (ER) have certain challenges associated with the complexity of their construction and the accuracy of predictions made by these simulators. Such robotic simulators are often based on physics models, which have been shown to produce accurate results. However, the construction of physics-based simulators can be complex and time-consuming. Alternative simulation schemes construct robotic simulators from empirically-collected data. Such empirical simulators, however, also have associated challenges: for example, some of these simulators do not generalize well on the data from which they are constructed, since they employ simple interpolation on said data. As a result of the identified challenges in existing robotic simulators for use in ER, this project investigates the potential use of Artificial Neural Networks, henceforth simply referred to as Neural Networks (NNs), as alternative robotic simulators. In contrast to physics models, NN-based simulators can be constructed without needing an explicit mathematical model of the system being modeled, which can simplify simulator development. Furthermore, the generalization capabilities of NNs suggest that NNs could generalize well on data from which these simulators are constructed. These generalization abilities, along with NNs’ noise tolerance, suggest that NNs could be well-suited to application in robotics simulation. Investigating whether NNs can be effectively used as robotic simulators in ER is thus the endeavour of this work. Since not much research has been done in employing NNs as robotic simulators, many aspects of the experimental framework on which this dissertation reports needed to be carefully decided upon. Two robot morphologies were selected on which the NN simulators created in this work were based, namely a differentially steered robot and an inverted pendulum robot. 
Motion tracking and robotic sensor logging were used to acquire data from which the NN simulators were constructed. Furthermore, custom code was written for almost all aspects of the study, namely data acquisition for NN training, the actual NN training process, the evolution of robotic controllers using the created NN simulators, as well as the onboard robotic implementations of evolved controllers. Experimental tests performed in order to determine ideal topologies for each of the NN simulators developed in this study indicated that different NN topologies can lead to large differences in training accuracy. After performing these tests, the training accuracy of the created simulators was analyzed. This analysis showed that the NN simulators generally trained well and could generalize well on data not presented during simulator construction. In order to validate the feasibility of the created NN simulators in the ER process, these simulators were subsequently used to evolve controllers in simulation, similar to controllers developed in related studies. Encouraging results were obtained, with the newly-evolved controllers allowing real-world experimental robots to exhibit obstacle avoidance and light-approaching behaviour with a reasonable degree of success. The created NN simulators furthermore allowed for the successful evolution of a complex inverted pendulum stabilization controller in simulation. It was thus clearly established that NN-based robotic simulators can be successfully employed as alternative simulation schemes in the ER process.
- Full Text:
- Date Issued: 2010
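The NN simulator idea above, learning a robot's behaviour from logged command/response data rather than from a physics model, can be sketched as follows. Everything here is illustrative: the training data come from a made-up differential-drive kinematics function standing in for the motion-tracking logs, and the network topology and learning rate are arbitrary choices, not the ones selected in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for logged motion-tracking data: a differentially
# steered robot's displacement (dx, dtheta) per 0.1 s time step as a
# function of left/right wheel commands (invented kinematics, 0.2 m track).
def robot_step(left, right):
    v = 0.5 * (left + right)            # forward velocity
    w = (right - left) / 0.2            # turn rate
    return np.stack([v * 0.1, w * 0.1], axis=-1)

X = rng.uniform(-1.0, 1.0, size=(500, 2))   # (left, right) commands
Y = robot_step(X[:, 0], X[:, 1])            # "measured" responses

# One-hidden-layer network trained by plain full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 2)); b2 = np.zeros(2)
lr = 0.1
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                # hidden activations
    P = H @ W2 + b2                         # predicted (dx, dtheta)
    err = P - Y
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = err @ W2.T * (1 - H ** 2)          # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Generalization check on commands not seen during training.
Xt = rng.uniform(-1.0, 1.0, size=(100, 2))
pred = np.tanh(Xt @ W1 + b1) @ W2 + b2
rmse = float(np.sqrt(np.mean((pred - robot_step(Xt[:, 0], Xt[:, 1])) ** 2)))
print(rmse)
```

In the ER setting, a network trained this way replaces the physical robot inside the fitness-evaluation loop, so that thousands of candidate controllers can be scored without real-world trials.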
Protein secondary structure prediction using neural networks and support vector machines
- Authors: Tsilo, Lipontseng Cecilia
- Date: 2009
- Subjects: Neural networks (Computer science) , Support vector machines , Proteins -- Structure -- Mathematical models
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5569 , http://hdl.handle.net/10962/d1002809 , Neural networks (Computer science) , Support vector machines , Proteins -- Structure -- Mathematical models
- Description: Predicting the secondary structure of proteins is important in biochemistry because the 3D structure can be determined from the local folds that are found in secondary structures. Moreover, knowing the tertiary structure of proteins can assist in determining their functions. The objective of this thesis is to compare the performance of Neural Networks (NN) and Support Vector Machines (SVM) in predicting the secondary structure of 62 globular proteins from their primary sequence. For each of NN and SVM, we created six binary classifiers to distinguish between the classes helix (H), strand (E), and coil (C). For NN we use Resilient Backpropagation training with and without early stopping. We use NN with either no hidden layer or with one hidden layer with 1, 2, ..., 40 hidden neurons. For SVM we use a Gaussian kernel with kernel parameter γ fixed at 0.1 and varying cost parameter C in the range [0.1, 5]. 10-fold cross-validation is used to obtain overall estimates for the probability of making a correct prediction. Our experiments indicate for NN and SVM that the different binary classifiers have varying accuracies: from 69% correct predictions for coil vs. non-coil up to 80% correct predictions for strand vs. non-strand. It is further demonstrated that NNs with no hidden layer, or with no more than 2 hidden neurons in the hidden layer, are sufficient for better predictions. For SVM we show that the estimated accuracies do not depend on the value of the cost parameter. As a major result, we demonstrate that the accuracy estimates of NN and SVM binary classifiers cannot be distinguished. This contradicts a modern belief in bioinformatics that SVM outperforms other predictors.
- Full Text:
- Date Issued: 2009
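The SVM setup described in the abstract (Gaussian kernel with a fixed kernel parameter of 0.1, cost parameter varied, accuracy estimated by 10-fold cross-validation) can be sketched with scikit-learn. The data below are synthetic stand-ins for the encoded protein sequence windows; the sample size, feature dimension and labels are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic two-class data standing in for one of the six binary
# problems (e.g. "strand vs. non-strand"); not real protein features.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Gaussian (RBF) kernel with the kernel parameter fixed at 0.1 and the
# cost parameter C varied over the abstract's range; 10-fold
# cross-validation gives the accuracy estimate for each C.
accs = {}
for C in (0.1, 1.0, 5.0):
    clf = SVC(kernel="rbf", gamma=0.1, C=C)
    accs[C] = cross_val_score(clf, X, y, cv=10).mean()
    print(f"C={C}: accuracy {accs[C]:.2f}")
```

On data like this, the accuracies across the three C values tend to be close, which mirrors the abstract's observation that the estimates did not depend strongly on the cost parameter.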
Wireless industrial intelligent controller for a non-linear system
- Authors: Fernandes, John Manuel
- Date: 2015
- Subjects: Neural networks (Computer science) , Linear systems
- Language: English
- Type: Thesis , Masters , MEngineering (Mechatronics)
- Identifier: http://hdl.handle.net/10948/9021 , vital:26457
- Description: Modern neural network (NN) based control schemes have surmounted many of the limitations found in traditional control approaches. Nevertheless, these modern control techniques have only recently been introduced for use on high-specification Programmable Logic Controllers (PLCs) and usually at a very high cost in terms of the required software and hardware. This 'intelligent' control in the sector of industrial automation, specifically on standard PLCs, thus remains an area of study that is open to further research and development. The research documented in this thesis examined the effectiveness of linear traditional control schemes such as Proportional Integral Derivative (PID), Lead and Lead-Lag control, in comparison to non-linear NN based control schemes when applied on a strongly non-linear platform. To this end, a mechatronic-type balancing system, namely the Ball-on-Wheel (BOW) system, was designed, constructed and modelled. Thereafter, various traditional and intelligent controllers were implemented in order to control the system. The BOW platform may be taken to represent any single-input, single-output (SISO) non-linear system in use in the real world. The system makes use of current industrial technology, including a standard PLC as the digital computational platform, a servo drive and wireless access for remote control. The results gathered from the research revealed that NN based control schemes (i.e. Pure NN and NN-PID), although comparatively slower in response, have greater advantages over traditional controllers in that they are able to adapt to external system changes as well as system non-linearity through a process of learning. These controllers also reduce the guesswork that is usually involved with the traditional control approaches, where cumbersome modelling, linearization or manual tuning is required. 
Furthermore, the research showed that online-learning adaptive traditional controllers such as the NN-PID controller, which combines the best of both the intelligent and traditional approaches, may be implemented easily and at minimum expense on standard PLCs.
- Full Text:
- Date Issued: 2015
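The traditional PID side of the comparison can be illustrated with a minimal discrete PID loop. The plant below is a toy first-order system, not the Ball-on-Wheel rig, and the gains are illustrative rather than values tuned in the thesis; an NN-PID scheme of the kind described would adjust such gains online through learning.

```python
# Minimal discrete PID controller: proportional, integral and derivative
# terms computed from the tracking error at each time step.
def pid_step(error, state, kp, ki, kd, dt):
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

def simulate(kp=2.0, ki=2.0, kd=0.1, dt=0.01, steps=2000, setpoint=1.0):
    x, state = 0.0, (0.0, 0.0)
    for _ in range(steps):
        u, state = pid_step(setpoint - x, state, kp, ki, kd, dt)
        x += dt * (-x + u)   # toy first-order plant: dx/dt = -x + u
    return x

print(simulate())
```

On a linear plant like this, fixed gains suffice; the thesis's point is that on a strongly non-linear plant such as the BOW system, fixed-gain tuning becomes cumbersome, which is where the learning-based controllers gain their advantage.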
A feasibility study into total electron content prediction using neural networks
- Authors: Habarulema, John Bosco
- Date: 2008
- Subjects: Electrons , Neural networks (Computer science) , Global Positioning System , Ionosphere , Ionospheric electron density
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5466 , http://hdl.handle.net/10962/d1005251 , Electrons , Neural networks (Computer science) , Global Positioning System , Ionosphere , Ionospheric electron density
- Description: Global Positioning System (GPS) networks provide an opportunity to study the dynamics and continuous changes in the ionosphere by supplementing ionospheric measurements which are usually obtained by various techniques such as ionosondes, incoherent scatter radars and satellites. Total electron content (TEC) is one of the physical quantities that can be derived from GPS data, and provides an indication of ionospheric variability. This thesis presents a feasibility study for the development of a Neural Network (NN) based model for the prediction of South African GPS derived TEC. The South African GPS receiver network is operated and maintained by the Chief Directorate Surveys and Mapping (CDSM) in Cape Town, South Africa. Three South African locations were identified and used in the development of an input space and NN architecture for the model. The input space includes the day number (seasonal variation), hour (diurnal variation), sunspot number (a measure of solar activity), and magnetic index (a measure of magnetic activity). An attempt to study the effects of solar wind on TEC variability was carried out using the Advanced Composition Explorer (ACE) data and it is recommended that more study be done using low altitude satellite data. An analysis was done by comparing predicted NN TEC with TEC values from the IRI2001 version of the International Reference Ionosphere (IRI), validating GPS TEC with ionosonde TEC (ITEC) and assessing the performance of the NN model during equinoxes and solstices. Results show that NNs predict GPS TEC more accurately than the IRI at South African GPS locations, but that more good-quality GPS data is required before a truly representative empirical GPS TEC model can be released.
- Full Text:
- Date Issued: 2008
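The NN input space described above (day number, hour, sunspot number, magnetic index) can be sketched as a feature vector. Encoding day and hour as sine/cosine pairs is one common way to expose their cyclic nature to a network, so that day 365 and day 1, or hour 23.9 and hour 0.1, land close together in input space; the thesis's exact encoding and the normalisation constants below are assumptions for illustration, not its actual scheme.

```python
import math

# Hypothetical input encoding for a TEC-prediction network: cyclic
# sin/cos pairs for day number and hour, plus roughly normalised
# sunspot number and magnetic index (divisors are illustrative).
def tec_inputs(day_number, hour, sunspot_number, a_index):
    return [
        math.sin(2 * math.pi * day_number / 365.25),
        math.cos(2 * math.pi * day_number / 365.25),
        math.sin(2 * math.pi * hour / 24),
        math.cos(2 * math.pi * hour / 24),
        sunspot_number / 200.0,
        a_index / 400.0,
    ]

# End of year / start of year, just before / just after midnight:
v1 = tec_inputs(365, 23.9, 80, 15)
v2 = tec_inputs(1, 0.1, 80, 15)
print(max(abs(a - b) for a, b in zip(v1, v2)))  # neighbours stay close
```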
Modelling Ionospheric vertical drifts over the African low latitude region
- Authors: Dubazane, Makhosonke Berthwell
- Date: 2018
- Subjects: Ionospheric drift , Magnetometers , Functions, Orthogonal , Neural networks (Computer science) , Ionospheric electron density -- Africa , Communication and Navigation Outage Forecasting Systems (C/NOFS)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/63356 , vital:28396
- Description: Low/equatorial latitude vertical plasma drifts and electric fields govern the formation and changes of ionospheric density structures, which affect space-based systems such as communications, navigation and positioning. Dynamical and electrodynamical processes play important roles in plasma distribution at different altitudes. Because of the high variability of E × B drift in low latitude regions, coupled with various processes that sometimes originate from high latitudes, especially during geomagnetic storm conditions, it is challenging to develop accurate vertical drift models. This is despite the fact that there are very few instruments dedicated to providing electric field, and hence E × B drift, data in low/equatorial latitude regions. To this effect, there exists no ground-based instrument for direct measurement of E × B drift data in the African sector. This study presents the first investigation aimed at modelling the long-term variability of low latitude vertical E × B drift over the African sector, using a combination of Communication and Navigation Outage Forecasting Systems (C/NOFS) and ground-based magnetometer observations during 2008-2013. Because the approach is based on the estimation of the equatorial electrojet from ground-based magnetometer observations, the developed models are only valid for local daytime. Three modelling techniques have been considered. The application of Empirical Orthogonal Functions and partial least squares has been performed on vertical E × B drift modelling for the first time. Artificial neural networks, which have the advantage of learning underlying relationships between a set of inputs and a known output, were also used in vertical E × B drift modelling. Due to the lack of E × B drift data over the African sector, the developed models were validated using satellite data and the climatological Scherliess-Fejer model incorporated within the International Reference Ionosphere model. 
A maximum correlation coefficient of ∼0.8 was achieved when validating the developed models against C/NOFS E × B drift observations that were not used in any model development. For most of the time, the climatological model overestimates the local daytime vertical E × B drift velocities. The methods and approach presented in this study provide a background for constructing vertical E × B drift databases in longitude sectors that lack radar instrumentation. This will in turn make it possible to study the day-to-day variability of vertical E × B drift and hopefully lead to the development of regional and global models that incorporate local time information in different longitude sectors.
- Full Text:
- Date Issued: 2018
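The Empirical Orthogonal Function technique mentioned above amounts to a singular value decomposition of a mean-removed space-time data matrix, with each mode capturing a share of the variance. The sketch below applies it to synthetic local-daytime drift values, not real C/NOFS or magnetometer data; the diurnal and seasonal shapes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data matrix (local daytime hours x days): a dome-shaped
# diurnal pattern modulated by a seasonal factor, plus noise.
hours = np.arange(7, 18)
days = np.arange(120)
diurnal = np.sin(np.pi * (hours - 7) / 11)
seasonal = 1 + 0.3 * np.sin(2 * np.pi * days / 120)
X = np.outer(diurnal, seasonal) + 0.05 * rng.normal(size=(len(hours), len(days)))

# EOF analysis: remove the time mean, then SVD; squared singular
# values give the variance explained by each mode.
Xa = X - X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Xa, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)

# Reconstruct using only the leading EOF; here most of the variance
# sits in mode 1, so the reconstruction is already close to the data.
X1 = U[:, :1] * s[:1] @ Vt[:1, :] + X.mean(axis=1, keepdims=True)
print(round(float(explained[0]), 2))
```

Truncating to the leading modes like this is what makes EOF-based models compact: the time-varying amplitudes of a few modes can then be regressed on drivers such as season and magnetic activity.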
Application of machine learning, molecular modelling and structural data mining against antiretroviral drug resistance in HIV-1
- Authors: Sheik Amamuddy, Olivier Serge André
- Date: 2020
- Subjects: Machine learning , Molecules -- Models , Data mining , Neural networks (Computer science) , Antiretroviral agents , Protease inhibitors , Drug resistance , Multidrug resistance , Molecular dynamics , Renin-angiotensin system , HIV (Viruses) -- South Africa , HIV (Viruses) -- Social aspects -- South Africa , South African Natural Compounds Database
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/115964 , vital:34282
- Description: Millions are affected by the Human Immunodeficiency Virus (HIV) worldwide, even though the death toll is on the decline. Antiretrovirals (ARVs), more specifically protease inhibitors, have shown tremendous success since their introduction into therapy in the mid-1990s by slowing down progression to the Acquired Immune Deficiency Syndrome (AIDS). However, Drug Resistance Mutations (DRMs) are constantly selected for due to viral adaptation, making drugs less effective over time. The current challenge is to manage the infection optimally with a limited set of drugs, with differing associated levels of toxicities, in the face of a virus that (1) exists as a quasispecies, (2) may transmit acquired DRMs to drug-naive individuals and (3) can manifest class-wide resistance due to similarities in design. The presence of latent reservoirs, unawareness of infection status, education and various socio-economic factors make the problem even more complex. Adequate timing and choice of drug prescription together with treatment adherence are very important, as drug toxicities, drug failure and sub-optimal treatment regimens leave room for further development of drug resistance. While CD4 cell count and the determination of viral load from patients in resource-limited settings are very helpful to track how well a patient’s immune system is able to keep the virus in check, they can be lengthy in determining whether an ARV is effective. Phenosense assay kits address this problem using viruses engineered to contain the patient sequences and evaluating their growth in the presence of different ARVs, but this can be expensive and too involved for routine checks. As a cheaper and faster alternative, genotypic assays provide similar information from HIV pol sequences obtained from blood samples, inferring ARV efficacy on the basis of drug resistance mutation patterns. 
However, these are inherently complex and the various methods of in silico prediction, such as Geno2pheno, REGA and Stanford HIVdb do not always agree in every case, even though this gap decreases as the list of resistance mutations is updated. A major gap in HIV treatment is that the information used for predicting drug resistance is mainly computed from data containing an overwhelming majority of B subtype HIV, when these only comprise about 12% of the worldwide HIV infections. In addition to growing evidence that drug resistance is subtype-related, it is intuitive to hypothesize that as subtyping is a phylogenetic classification, the more divergent a subtype is from the strains used in training prediction models, the less their resistance profiles would correlate. For the aforementioned reasons, we used a multi-faceted approach to attack the virus in multiple ways. This research aimed to (1) improve resistance prediction methods by focusing solely on the available subtype, (2) mine structural information pertaining to resistance in order to find any exploitable weak points and increase knowledge of the mechanistic processes of drug resistance in HIV protease. Finally, (3) we screen for protease inhibitors amongst a database of natural compounds [the South African natural compound database (SANCDB)] to find molecules or molecular properties usable to come up with improved inhibition against the drug target. In this work, structural information was mined using the Anisotropic Network Model, Dynamics Cross-Correlation, Perturbation Response Scanning, residue contact network analysis and the radius of gyration. These methods failed to give any resistance-associated patterns in terms of natural movement, internal correlated motions, residue perturbation response, relational behaviour and global compaction respectively. 
Applications of drug docking, homology modelling and energy minimization for generating features suitable for machine learning were not very promising, and rather suggest that binding energies from Vina may not, by themselves, be very reliable quantitatively. These failures led to a refinement that resulted in a highly sensitive, statistically-guided network construction and analysis, which led to key findings in the early dynamics associated with resistance across all PI drugs. The latter experiment unravelled a conserved lateral expansion motion occurring at the flap elbows, and an associated contraction that drives the base of the dimerization domain towards the catalytic site’s floor in the case of drug resistance. Interestingly, we found that despite the conserved movement, bond angles were degenerate. In parallel, 16 Artificial Neural Network models were optimised for HIV protease and reverse transcriptase inhibitors, with performance on par with Stanford HIVdb. Finally, we prioritised nine compounds with potential protease inhibitory activity using virtual screening and molecular dynamics (MD), and additionally suggest a promising modification to one of the compounds. This yielded another molecule that inhibits both open and closed receptor target conformations equally well; each of the compounds had been selected against an array of multi-drug-resistant receptor variants. While a main hurdle was the lack of non-B subtype data, our findings, especially from the statistically-guided network analysis, may extrapolate to those subtypes to a certain extent, as the level of conservation was very high within subtype B despite all the present variations. This network construction method lays down a sensitive approach for analysing a pair of alternate phenotypes for which complex patterns prevail, given a sufficient number of experimental units. 
During the course of this research, a weighted contact mapping tool was developed to compare renin-angiotensinogen variants and packaged as part of the MD-TASK tool suite. Finally, the functionality, compatibility and performance of the MODE-TASK tool were evaluated and confirmed for both Python 2.7.x and Python 3.x, for the analysis of normal modes from single protein structures and essential modes from MD trajectories. These techniques and tools collectively add to the conventional means of MD analysis.
- Full Text:
- Date Issued: 2020
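The residue contact network analysis mentioned in the abstract above can be illustrated with a minimal sketch. This is hypothetical: the 6.7 Å cutoff and the toy coordinates are assumptions for illustration, not the thesis's weighted MD-TASK implementation.

```python
import numpy as np

def contact_map(coords, cutoff=6.7):
    """Boolean residue contact map: True where two residues' representative
    atoms lie within `cutoff` angstroms (6.7 A is a common C-beta choice;
    the thesis's weighted MD-TASK variant differs in detail)."""
    # Pairwise Euclidean distances between all residue positions.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    contacts = d < cutoff
    np.fill_diagonal(contacts, False)  # a residue is not its own contact
    return contacts

# Three toy residue positions on a line: only the first two are in contact.
coords = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [20.0, 0.0, 0.0]])
cm = contact_map(coords)
```

The contact map is the adjacency matrix of the residue network; graph measures (degree, betweenness, etc.) are then computed on it.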
A hybridisation technique for game playing using the upper confidence for trees algorithm with artificial neural networks
- Authors: Burger, Clayton
- Date: 2014
- Subjects: Neural networks (Computer science) , Computer algorithms
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10948/3957 , vital:20495
- Description: In the domain of strategic game playing, the use of statistical techniques such as the Upper Confidence for Trees (UCT) algorithm has become the norm, as they offer many benefits over classical algorithms. These benefits include requiring no game-specific strategic knowledge and time-scalable performance. UCT does not incorporate any strategic information specific to the game considered, but instead uses repeated sampling to effectively brute-force search through the game tree or search space. The lack of game-specific knowledge in UCT is thus both a benefit and a strategic disadvantage. Pattern recognition techniques, specifically Neural Networks (NNs), were identified as a means of addressing the lack of game-specific knowledge in UCT. Through a novel hybridisation technique which combines UCT and trained NNs for pruning, the UCT-NN algorithm was derived. The NN component of UCT-NN was trained using a UCT self-play scheme to generate game-specific knowledge without the need to construct and manage game databases for training purposes. The UCT-NN algorithm is outlined for pruning in the game of Go-Moku as a candidate case study for this research. The UCT-NN algorithm contained three major parameters, which emerged from the UCT algorithm, the use of NNs and the pruning schemes considered. Suitable methods for finding candidate values for these three parameters were outlined and applied to the game of Go-Moku on a 5 by 5 board. An empirical investigation of the playing performance of UCT-NN was conducted in comparison to UCT through three benchmarks. The benchmarks comprise a common randomly moving opponent, a common UCTmax player which is given a large amount of playing time, and a pair-wise tournament between UCT-NN and UCT. The results of the performance evaluation for 5 by 5 Go-Moku were promising, which prompted an evaluation on a larger 9 by 9 Go-Moku board. 
The results of both evaluations indicate that the time allocated to the UCT-NN algorithm directly affects its performance relative to UCT. The UCT-NN algorithm generally performs better than UCT in games with very limited time constraints in all benchmarks considered, except when playing against a randomly moving player in 9 by 9 Go-Moku. In real-time and near-real-time Go-Moku games, UCT-NN provides statistically significant improvements over UCT. The findings of this research contribute towards applying game-specific knowledge to the UCT algorithm.
- Full Text:
- Date Issued: 2014
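The UCT selection step referred to in the abstract above can be sketched with the standard UCB1 rule. This is a hypothetical minimal illustration: the node layout, the exploration constant c, and the toy move values are assumptions, not the thesis's implementation.

```python
import math

def ucb1(child_value, child_visits, parent_visits, c=1.4):
    """UCB1 score: exploitation (mean value) plus an exploration bonus."""
    if child_visits == 0:
        return float("inf")  # unvisited children are tried first
    return child_value / child_visits + c * math.sqrt(
        math.log(parent_visits) / child_visits
    )

def select_child(children, parent_visits):
    """UCT's selection step: pick the child maximising UCB1. UCT-NN's
    pruning would first remove children the trained NN rates as weak."""
    return max(children, key=lambda ch: ucb1(ch["value"], ch["visits"], parent_visits))

# Two candidate moves: a well-explored one and a barely-explored one.
children = [
    {"move": (2, 2), "value": 6.0, "visits": 10},
    {"move": (0, 1), "value": 1.0, "visits": 2},
]
best = select_child(children, parent_visits=12)
```

With these numbers the barely-explored move wins selection, showing how the exploration term drives repeated sampling across the game tree.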
Predictability of Geomagnetically Induced Currents using neural networks
- Authors: Lotz, Stefan
- Date: 2009
- Subjects: Advanced Composition Explorer (Artificial satellite) , Geomagnetism , Electromagnetic induction , Neural networks (Computer science) , Artificial intelligence
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5483 , http://hdl.handle.net/10962/d1005269 , Advanced Composition Explorer (Artificial satellite) , Geomagnetism , Electromagnetic induction , Neural networks (Computer science) , Artificial intelligence
- Description: It is a well-documented fact that Geomagnetically Induced Currents (GICs) pose a significant threat to ground-based conductor networks such as oil pipelines, railways and power line networks. A study is undertaken to determine the feasibility of using artificial neural network models to predict GIC occurrence in the Southern African power grid. The magnitude of an induced current at a specific location on the Earth’s surface is directly related to the temporal derivative of the geomagnetic field (specifically its horizontal components) at that point. Hence, the focus of the problem is on the prediction of the temporal variations in the horizontal geomagnetic field (∂Bx/∂t and ∂By/∂t). Artificial neural networks are used to predict ∂Bx/∂t and ∂By/∂t measured at Hermanus, South Africa (34.27° S, 19.12° E) with a 30 minute prediction lead time. As input parameters to the neural networks, in situ solar wind measurements made by the Advanced Composition Explorer (ACE) satellite are used. The results presented here compare well with those of similar models developed at high-latitude locations (e.g. Sweden, Finland, Canada) where extensive GIC research has been undertaken. It is concluded that it would indeed be feasible to use a neural network model to predict GIC occurrence in the Southern African power grid, provided that GIC measurements, power line configuration and network parameters are made available.
- Full Text:
- Date Issued: 2009
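A network of the kind described above, mapping solar-wind inputs to the two horizontal-field time derivatives, can be sketched as a single-hidden-layer forward pass. Everything here is an assumption for illustration (layer sizes, tanh activation, random untrained weights), not the thesis's trained model.

```python
import numpy as np

def mlp_forward(x, w1, b1, w2, b2):
    """One-hidden-layer feedforward pass: solar-wind features in,
    estimates of the two horizontal-field derivatives out."""
    h = np.tanh(x @ w1 + b1)   # hidden layer, tanh activation
    return h @ w2 + b2         # linear output layer

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))                      # 5 samples, 3 input features
w1, b1 = rng.normal(size=(3, 8)), np.zeros(8)    # untrained (random) weights
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)
y = mlp_forward(x, w1, b1, w2, b2)               # one row per sample, two outputs
```

Training would fit the weights so that the two outputs track measured ∂Bx/∂t and ∂By/∂t thirty minutes ahead.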
The development of an ionospheric storm-time index for the South African region
- Authors: Tshisaphungo, Mpho
- Date: 2021-04
- Subjects: Ionospheric storms -- South Africa , Global Positioning System , Neural networks (Computer science) , Regression analysis , Ionosondes , Auroral electrojet , Geomagnetic indexes , Magnetic storms -- South Africa
- Language: English
- Type: thesis , text , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/178409 , vital:42937 , 10.21504/10962/178409
- Description: This thesis presents the development of a regional ionospheric storm-time model which forms the foundation of an index providing a quick view of ionospheric storm effects over the South African mid-latitude region. The model is based on foF2 measurements from four South African ionosonde stations. The data coverage for the model development over Grahamstown (33.3°S, 26.5°E), Hermanus (34.42°S, 19.22°E), Louisvale (28.50°S, 21.20°E), and Madimbo (22.39°S, 30.88°E) is 1996-2016, 2009-2016, 2000-2016, and 2000-2016 respectively. Data from the Global Positioning System (GPS) and the radio occultation (RO) technique were used during validation. As the measure of either positive or negative storm effect, the variation of the critical frequency of the F2 layer (foF2) from the monthly median values (denoted ΔfoF2) is modeled. The modeling of ΔfoF2 is based only on storm-time data, selected with the criteria Dst ≤ -50 nT and Kp > 4. The modeling methods used in the study were artificial neural networks (ANN), linear regression (LR) and polynomial functions. The approach taken was to first test the modeling techniques on a single station before expanding the study to cover the regional aspect. The single-station model was developed based on ionosonde data over Grahamstown. Inputs related to seasonal variation, diurnal variation, geomagnetic activity and solar activity were considered for the model. For the geomagnetic activity, three indices, namely the symmetric disturbance in the horizontal component of the Earth’s magnetic field (SYM-H), the Auroral Electrojet (AE) index and the local geomagnetic index A, were included as inputs. The performance of the single-station model revealed that, of the three geomagnetic indices, the SYM-H index has the largest contribution: 41% and 54% based on the ANN and LR techniques respectively. 
The average correlation coefficient (R) for both the ANN and LR models was 0.8 when validated on selected storms falling within the period of model development. When validated using storms that fall outside the period of model development, the model gave R values of 0.6 and 0.5 for ANN and LR respectively. In addition, GPS-derived total electron content (TEC) measurements were used to estimate foF2 data. This is because there are more GPS receivers than ionosonde locations, and utilising this data increases the spatial coverage of the regional model. The estimation of foF2 from GPS TEC was done at GPS-ionosonde co-locations using polynomial functions. Average R values of 0.69 and 0.65 were obtained between actual and derived ΔfoF2 over the co-locations and other GPS stations respectively. Validation of GPS TEC-derived foF2 with RO data, over regions outside the ionospheric pierce point coverage of the ionosonde locations, gave R greater than 0.9 for the selected storm period of 4-8 August 2011. The regional storm-time model was then developed based on the ANN technique using the four South African ionosonde stations. Maximum and minimum R values of 0.6 and 0.5 were obtained over ionosonde and GPS locations respectively. This model forms the basis of the regional ionospheric storm-time index. , Thesis (PhD) -- Faculty of Science, Physics and Electronics, 2021
- Full Text:
- Date Issued: 2021-04
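The storm-time quantity modelled in the abstract above, the deviation of foF2 from its monthly median restricted to Dst ≤ -50 nT and Kp > 4 samples, can be sketched as follows. The percentage scaling is one common convention and the sample values are made up; the thesis's exact definition may differ.

```python
import numpy as np

def delta_foF2(foF2, monthly_median):
    """Percentage deviation of foF2 from the monthly median (one common
    convention for the storm-time deviation; the exact scaling used in
    the thesis may differ)."""
    return 100.0 * (foF2 - monthly_median) / monthly_median

def storm_mask(dst, kp):
    """Storm-time sample selection: Dst <= -50 nT and Kp > 4."""
    return (np.asarray(dst) <= -50) & (np.asarray(kp) > 4)

dev = delta_foF2(8.0, 10.0)                     # a 20% negative storm effect
mask = storm_mask([-60, -60, -20], [5, 3, 5])   # only the first sample qualifies
```

The masked deviations form the target series that the ANN, LR and polynomial models are fitted to.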
Optimization of salbutamol sulfate dissolution from sustained release matrix formulations using an artificial neural network
- Chaibva, Faith A, Burton, Michael, Walker, Roderick B
- Authors: Chaibva, Faith A , Burton, Michael , Walker, Roderick B
- Date: 2010
- Subjects: Neural networks (Computer science)
- Language: English
- Type: Article
- Identifier: vital:6352 , http://hdl.handle.net/10962/d1006034
- Description: An artificial neural network was used to optimize the release of salbutamol sulfate from hydrophilic matrix formulations. Model formulations for training, testing and validating the neural network were manufactured with the aid of a central composite design, with the levels of Methocel® K100M, xanthan gum, Carbopol® 974P and Surelease® varied as the input factors. In vitro dissolution time profiles at six different sampling times were used as target data in training the neural network for formulation optimization. A multilayer perceptron with one hidden layer was constructed using Matlab®, and the number of nodes in the hidden layer was optimized by trial and error to develop a model with the best predictive ability. The results revealed that a neural network with nine nodes was optimal for developing and optimizing formulations. Simulations undertaken with the training data revealed that the constructed model was usable. The optimized neural network was then used to identify a formulation with desirable release characteristics, and the results indicated agreement between the predicted and manufactured formulations. This work illustrates the potential utility of artificial neural networks for the optimization of pharmaceutical formulations with desirable performance characteristics.
- Full Text:
- Date Issued: 2010
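The trial-and-error search for the hidden-layer node count described above can be sketched in outline. The study used Matlab®, so this numpy version is purely illustrative: a random tanh hidden layer with least-squares output weights stands in for full MLP training, and all data are synthetic.

```python
import numpy as np

def fit_eval(n_hidden, X_tr, y_tr, X_val, y_val, seed=0):
    """Cheap stand-in for MLP training: fixed random tanh hidden layer,
    least-squares output weights; returns validation mean-squared error."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X_tr.shape[1], n_hidden))
    H_tr, H_val = np.tanh(X_tr @ W), np.tanh(X_val @ W)
    beta, *_ = np.linalg.lstsq(H_tr, y_tr, rcond=None)
    return float(np.mean((H_val @ beta - y_val) ** 2))

# Synthetic data standing in for the dissolution-profile training set.
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 4))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)
X_tr, y_tr, X_val, y_val = X[:60], y[:60], X[60:], y[60:]

# Trial and error over candidate hidden-layer sizes, keeping the best.
best_n = min(range(1, 13), key=lambda n: fit_eval(n, X_tr, y_tr, X_val, y_val))
```

The same loop structure, with real training in place of `fit_eval`, is how a "nine nodes is optimal" conclusion is reached.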
Tomographic imaging of East African equatorial ionosphere and study of equatorial plasma bubbles
- Authors: Giday, Nigussie Mezgebe
- Date: 2018
- Subjects: Ionosphere -- Africa, Central , Tomography -- Africa, Central , Global Positioning System , Neural networks (Computer science) , Space environment , Multi-Instrument Data Analysis System (MIDAS) , Equatorial plasma bubbles
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/63980 , vital:28516
- Description: Although the African ionospheric equatorial region has the largest ground footprint along the geomagnetic equator, it has not been well studied, owing to the absence of adequate ground-based instruments. This thesis presents research on both tomographic imaging of the African equatorial ionosphere and the study of ionospheric irregularities/equatorial plasma bubbles (EPBs) under varying geomagnetic conditions. The Multi-Instrument Data Analysis System (MIDAS), an inversion algorithm, was investigated for its validity and ability as a tool to reconstruct multi-scaled ionospheric structures under different geomagnetic conditions. This was done for the narrow East African longitude sector with data from the available ground Global Positioning System (GPS) receivers. The MIDAS results were compared to the results of two models, namely the IRI and GIM. MIDAS results compared more favourably with the observed vertical total electron content (VTEC), with a computed maximum correlation coefficient (r) of 0.99 and minimum root-mean-square error (RMSE) of 2.91 TECU, than did the results of the IRI-2012 and GIM models, with maximum r of 0.93 and 0.99, and minimum RMSE of 13.03 TECU and 6.52 TECU, respectively, over all the test stations and validation days. The ability of MIDAS to reconstruct storm-time TEC was also compared with the results produced by an Artificial Neural Network (ANN) for the African low- and mid-latitude regions. In terms of latitude, on average, MIDAS performed 13.44% better than the ANN in the African mid-latitudes, while MIDAS underperformed in the low latitudes. This thesis also reports on the effects of moderate geomagnetic conditions on the evolution of EPBs and/or ionospheric irregularities during their season of occurrence, using data from (or measurements by) space- and ground-based instruments for the East African equatorial sector. 
The study showed that the strength of the daytime equatorial electrojet (EEJ), the steepness of the TEC peak-to-trough gradient and/or the meridional/transequatorial thermospheric winds sometimes have collective/interwoven effects, while at other times one mechanism dominates. In summary, this research offered tomographic results that outperform the results of the commonly used (“standard”) global models (i.e. IRI and GIM) for a longitude sector of importance to space weather, which has not been adequately studied due to a lack of sufficient instrumentation.
- Full Text:
- Date Issued: 2018
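The validation statistics quoted in this record (maximum correlation coefficient r and minimum RMSE in TECU) are standard measures of model agreement with observed VTEC. A minimal sketch of how they are computed, with hypothetical station values rather than data from the thesis:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rmse(model, observed):
    """Root-mean-square error of model values against observations (TECU)."""
    n = len(model)
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, observed)) / n)

# Hypothetical hourly VTEC values (TECU) at one test station -- placeholders,
# not values from the thesis.
observed = [8.0, 12.5, 20.1, 28.4, 25.0, 15.2]
modelled = [7.5, 13.0, 19.0, 27.8, 26.1, 14.9]

r_val = pearson_r(modelled, observed)
err = rmse(modelled, observed)
```

In the thesis, such statistics would be computed per test station and validation day, and the maximum r and minimum RMSE reported across them.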
Updating the ionospheric propagation factor, M(3000)F2, global model using the neural network technique and relevant geophysical input parameters
- Oronsaye, Samuel Iyen Jeffrey
- Authors: Oronsaye, Samuel Iyen Jeffrey
- Date: 2013
- Subjects: Neural networks (Computer science) , Ionospheric radio wave propagation , Ionosphere , Geophysics , Ionosondes
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5434 , http://hdl.handle.net/10962/d1001609 , Neural networks (Computer science) , Ionospheric radio wave propagation , Ionosphere , Geophysics , Ionosondes
- Description: This thesis presents an update to the ionospheric propagation factor, M(3000)F2, global empirical model developed by Oyeyemi et al. (2007) (NNO). An additional aim of this research was to produce the updated model in a form that could be used within the International Reference Ionosphere (IRI) global model without adding to the complexity of the IRI. M(3000)F2 is the highest frequency at which a radio signal can be received over a distance of 3000 km after reflection in the ionosphere. The study employed the artificial neural network (ANN) technique using relevant geophysical input parameters which are known to influence the M(3000)F2 parameter. Ionosonde data from 135 ionospheric stations globally, including a number of equatorial stations, were available for this work. M(3000)F2 hourly values from 1976 to 2008, spanning all periods of low and high solar activity, were used for model development and verification. A preliminary investigation was first carried out using a relatively small dataset to determine the appropriate input parameters for global M(3000)F2 parameter modelling. Inputs representing diurnal variation, seasonal variation, solar variation, modified dip latitude, longitude and latitude were found to be the optimum parameters for modelling the diurnal and seasonal variations of the M(3000)F2 parameter on both a temporal and a spatial basis. The outcome of the preliminary study was applied to the overall dataset to develop a comprehensive ANN M(3000)F2 model which displays a marked improvement over the NNO model as well as the IRI version. The model shows 7.11% and 3.85% improvement over the NNO model, and 13.04% and 10.05% over the IRI M(3000)F2 model, around high and low solar activity periods respectively. A comparison of the diurnal structure of the ANN and the IRI predicted values reveals that the ANN model is more effective in representing the diurnal structure of the M(3000)F2 values than the IRI M(3000)F2 model. 
The capability of the ANN model to reproduce the seasonal variation pattern of the M(3000)F2 values at 00h00UT, 06h00UT, 12h00UT and 18h00UT more appropriately than the IRI version is illustrated in this work. A significant result obtained in this study is the ability of the ANN model to improve the post-sunset predicted values of the M(3000)F2 parameter, which are known to be problematic for the IRI M(3000)F2 model in the low-latitude and equatorial regions. The final M(3000)F2 model provides for an improved equatorial prediction and a simplified input space that allows for easy incorporation into the IRI model.
- Full Text:
- Date Issued: 2013
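The diurnal and seasonal input parameters described in this abstract are periodic quantities, and such inputs are commonly presented to a neural network as sine/cosine pairs so that values at the period boundary (e.g. 23h00 and 00h00, or day 365 and day 1) appear as neighbours. The sketch below illustrates this standard encoding; the function names, scalings and exact input list are illustrative assumptions, not the thesis's actual scheme:

```python
import math

def cyclic_pair(value, period):
    """Encode a periodic quantity as a (sin, cos) pair so the network
    sees values at the period boundary as adjacent."""
    angle = 2.0 * math.pi * value / period
    return math.sin(angle), math.cos(angle)

def m3000f2_inputs(hour, day_number, sunspot_number, modip, lat, lon):
    """Assemble a hypothetical input vector: diurnal and seasonal
    variation as cyclic pairs, plus solar and positional terms.
    The normalisation constants here are illustrative only."""
    hs, hc = cyclic_pair(hour, 24.0)
    ds, dc = cyclic_pair(day_number, 365.25)
    return [hs, hc, ds, dc,
            sunspot_number / 200.0,  # crude solar-activity scaling
            modip / 90.0, lat / 90.0, lon / 180.0]

# Example: noon at mid-year for a hypothetical Southern Hemisphere station.
vec = m3000f2_inputs(hour=12, day_number=172, sunspot_number=80,
                     modip=-30.0, lat=-33.3, lon=26.5)
```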
A comparative study of artificial neural networks and physics models as simulators in evolutionary robotics
- Pretorius, Christiaan Johannes
- Authors: Pretorius, Christiaan Johannes
- Date: 2019
- Subjects: Neural networks (Computer science)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10948/30789 , vital:31131
- Description: The Evolutionary Robotics (ER) process is a technique that applies evolutionary optimization algorithms to the task of automatically developing, or evolving, robotic control programs. These control programs, or simply controllers, are evolved in order to allow a robot to perform a required task. During the ER process, use is often made of robotic simulators to evaluate the performance of candidate controllers that are produced in the course of the controller evolution process. Such simulators accelerate and otherwise simplify the controller evolution process, as opposed to the more arduous process of evaluating controllers in the real world without use of simulation. To date, the vast majority of simulators that have been applied in ER are physics-based models which are constructed by taking into account the underlying physics governing the operation of the robotic system in question. An alternative approach to simulator implementation in ER is the usage of Artificial Neural Networks (ANNs) as simulators in the ER process. Such simulators are referred to as Simulator Neural Networks (SNNs). Previous studies have indicated that SNNs can successfully be used as an alternative to physics-based simulators in the ER process on various robotic platforms. At the commencement of the current study it was not, however, known how this relatively new method of simulation would compare to traditional physics-based simulation approaches in ER. The study presented in this thesis thus endeavoured to quantitatively compare SNNs and physics-based models as simulators in the ER process. In order to conduct this comparative study, both SNNs and physics simulators were constructed for the modelling of three different robotic platforms: a differentially-steered robot, a wheeled inverted pendulum robot and a hexapod robot. Each of these two types of simulation was then used in simulation-based evolution processes to evolve controllers for each robotic platform. 
During these controller evolution processes, the SNNs and physics models were compared in terms of their accuracy in making predictions of robotic behaviour, their computational efficiency in arriving at these predictions, the human effort required to construct each simulator and, most importantly, the real-world performance of controllers evolved by making use of each simulator. The results obtained in this study illustrated experimentally that SNNs were, in the majority of cases, able to make more accurate predictions than the physics-based models and these SNNs were arguably simpler to construct than the physics simulators. Additionally, SNNs were also shown to be a computationally efficient alternative to physics-based simulators in ER and, again in the majority of cases, these SNNs were able to produce controllers which outperformed those evolved in the physics-based simulators, when these controllers were uploaded to the real-world robots. The results of this thesis thus suggest that SNNs are a viable alternative to more commonly-used physics simulators in ER and further investigation of the potential of this simulation technique appears warranted.
- Full Text:
- Date Issued: 2019
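The core idea behind a Simulator Neural Network, as described in this abstract, is that a trained network maps the robot's current state and motor commands to a predicted next state, and a candidate controller is evaluated entirely in simulation by feeding the network's own predictions back as inputs. A minimal sketch, with a single linear layer standing in for the trained MLP and all names and numbers illustrative:

```python
def snn_predict(state, command, weights):
    """One step of a Simulator Neural Network: map (state, command) to
    the predicted next state.  A single linear layer (one (row, bias)
    pair per output) stands in here for the trained network."""
    inputs = list(state) + list(command)
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in weights]

def rollout(initial_state, controller, weights, steps):
    """Evaluate a candidate controller without a physics engine by
    feeding the SNN's predictions back as the next input."""
    state = list(initial_state)
    trajectory = [state]
    for _ in range(steps):
        command = controller(state)
        state = snn_predict(state, command, weights)
        trajectory.append(state)
    return trajectory

# Hypothetical 1-D example: state is position x, command is velocity v,
# and the "trained" network encodes x' = x + 0.1 * v.
weights = [([1.0, 0.1], 0.0)]
trajectory = rollout([0.0], lambda state: [1.0], weights, 5)
```

In an actual ER run, an evolutionary algorithm would score many such rollouts to assign fitness to each candidate controller.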
Universal approximation properties of feedforward artificial neural networks.
- Authors: Redpath, Stuart Frederick
- Date: 2011
- Subjects: Neural networks (Computer science) , Artificial intelligence -- Biological applications , Functional analysis , Weierstrass-Stone Theorem , Banach-Hahn theorem
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5430 , http://hdl.handle.net/10962/d1015869
- Description: In this thesis we summarise several results in the literature which show the approximation capabilities of multilayer feedforward artificial neural networks. We show that multilayer feedforward artificial neural networks are capable of approximating continuous and measurable functions from Rn to R to any degree of accuracy under certain conditions. In particular, making use of the Stone-Weierstrass and Hahn-Banach theorems, we show that a multilayer feedforward artificial neural network can approximate any continuous function to any degree of accuracy, by using either an arbitrary squashing function or any continuous sigmoidal function for activation. Making use of the Stone-Weierstrass Theorem again, we extend these approximation capabilities of multilayer feedforward artificial neural networks to the space of measurable functions under any probability measure.
- Full Text:
- Date Issued: 2011
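The single-hidden-layer form underlying the approximation results summarised in this abstract can be stated compactly: for any continuous f on a compact set K in Rn and any ε > 0, there exist a width N, weight vectors wⱼ, biases θⱼ and coefficients αⱼ such that

```latex
G(x) = \sum_{j=1}^{N} \alpha_j \, \sigma\!\left(w_j^{\top} x + \theta_j\right),
\qquad
\sup_{x \in K} \left| G(x) - f(x) \right| < \varepsilon ,
```

where σ is any continuous sigmoidal (or, more generally, arbitrary squashing) activation function, matching the conditions named in the abstract.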
Forecasting solar cycle 24 using neural networks
- Authors: Uwamahoro, Jean
- Date: 2009
- Subjects: Solar cycle , Neural networks (Computer science) , Ionosphere , Ionospheric electron density , Ionospheric forecasting , Solar thermal energy
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5468 , http://hdl.handle.net/10962/d1005253 , Solar cycle , Neural networks (Computer science) , Ionosphere , Ionospheric electron density , Ionospheric forecasting , Solar thermal energy
- Description: The ability to predict the future behaviour of solar activity has become of extreme importance due to its effect on the near-Earth environment. Predictions of both the amplitude and timing of the next solar cycle will assist in estimating the various consequences of Space Weather. Several prediction techniques have been applied and have achieved varying degrees of success in the domain of solar activity prediction. These techniques include, for example, neural networks and geomagnetic precursor methods. In this thesis, various neural network based models were developed and the model considered to be optimum was used to estimate the shape and timing of solar cycle 24. Given the recent success of the geomagnetic precursor methods, geomagnetic activity as measured by the aa index is considered among the main inputs to the neural network model. The neural network model developed is also provided with the time input parameters defining the year and the month of a particular solar cycle, in order to characterise the temporal behaviour of sunspot number as observed during the last 10 solar cycles. The structure of input-output patterns to the neural network is constructed in such a way that the network learns the relationship between the aa index values of a particular cycle, and the sunspot number values of the following cycle. Assuming January 2008 as the minimum preceding solar cycle 24, the shape and amplitude of solar cycle 24 is estimated in terms of monthly mean and smoothed monthly sunspot number. This new prediction model estimates an average solar cycle 24, with the maximum occurring around June 2012 [± 11 months], with a smoothed monthly maximum sunspot number of 121 ± 9.
- Full Text:
- Date Issued: 2009
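The input-output pattern construction described in this abstract, in which the aa index values of one cycle are paired with the sunspot numbers of the following cycle, can be sketched as below. The flat-list layout and month-index time tag are illustrative assumptions, not the thesis's exact pattern structure:

```python
def make_patterns(aa_cycles, ssn_cycles):
    """Build (input, target) training patterns so a network can learn
    the mapping from the aa index of cycle n to the sunspot numbers of
    cycle n + 1.  Each input carries a time tag (month index within
    the target cycle) plus the whole aa history of the preceding cycle."""
    patterns = []
    for n in range(len(aa_cycles) - 1):
        aa, next_ssn = aa_cycles[n], ssn_cycles[n + 1]
        for month, ssn in enumerate(next_ssn):
            patterns.append(([month] + list(aa), ssn))
    return patterns

# Tiny hypothetical example: two "cycles" of two months each.
aa_cycles = [[10, 12], [14, 16]]
ssn_cycles = [[50, 60], [70, 80]]
patterns = make_patterns(aa_cycles, ssn_cycles)
```

A trained network would then be fed cycle 23's aa index (with the appropriate time tags) to produce the cycle 24 sunspot-number estimates reported above.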
Deep neural networks for robot vision in evolutionary robotics
- Authors: Watt, Nathan
- Date: 2021-04
- Subjects: Gqeberha (South Africa) , Eastern Cape (South Africa) , Neural networks (Computer science)
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/52100 , vital:43448
- Description: Advances in electronics manufacturing have made robots and their sensors cheaper and more accessible. Robots can have a variety of sensors, such as touch sensors, distance sensors and cameras. A robot’s controller is the software which interprets its sensors and determines how the robot will behave. The difficulty in programming robot controllers increases with complex robots and complicated tasks, forming a barrier to deploying robots for real-world applications. Robot controllers can be automatically created with Evolutionary Robotics (ER). ER makes use of an Evolutionary Algorithm (EA) to evolve controllers to complete a particular task. Instead of manually programming controllers, an EA can evolve controllers when provided with the robot’s task. ER has been used to evolve controllers for many different kinds of robots with a variety of sensors; however, the use of robots with on-board camera sensors has been limited. The nature of EAs makes evolving a controller for a camera-equipped robot particularly difficult. There are two main challenges which complicate the evolution of vision-based controllers. First, every image from a camera contains a large amount of information, and a controller needs many parameters to receive that information; however, it is difficult to evolve controllers with such a large number of parameters using EAs. Second, during the process of evolution, it is necessary to evaluate the fitness of many candidate controllers. This is typically done in simulation; however, creating a simulator for a camera sensor is a tedious and time-consuming task, as building a photo-realistic simulated environment requires handcrafted 3-dimensional models, textures and lighting. Two techniques have been used in previous experiments to overcome the challenges associated with evolving vision-based controllers. 
Either the controller was provided with extremely low-resolution images, or a task-specific algorithm was used to preprocess the images, only providing the necessary information to the controller. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2021
- Full Text: false
- Date Issued: 2021-04
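The first of the two techniques mentioned in this abstract, providing the controller with extremely low-resolution images, can be illustrated with simple block averaging, which shrinks the image and hence the number of controller parameters needed to receive it. This is a generic sketch, not the preprocessing used in any particular cited experiment:

```python
def downsample(image, factor):
    """Reduce a grayscale image (a list of rows of pixel values) by
    averaging each factor-by-factor block into one pixel, producing
    the kind of extremely low-resolution input that keeps an evolved
    controller's parameter count small.  Edge rows/columns that do
    not fill a whole block are dropped."""
    h, w = len(image), len(image[0])
    out = []
    for r in range(0, h - h % factor, factor):
        row = []
        for c in range(0, w - w % factor, factor):
            block = [image[r + i][c + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A hypothetical 4x4 image reduced to 2x2.
small = downsample([[0, 0, 4, 4],
                    [0, 0, 4, 4],
                    [8, 8, 12, 12],
                    [8, 8, 12, 12]], 2)
```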
Development of a neural network based model for predicting the occurrence of spread F within the Brazilian sector
- Authors: Paradza, Masimba Wellington
- Date: 2009
- Subjects: Neural networks (Computer science) , Ionosphere , F region
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5460 , http://hdl.handle.net/10962/d1005245 , Neural networks (Computer science) , Ionosphere , F region
- Description: Spread F is a phenomenon of the ionosphere in which the pulses returned from the ionosphere are of a much greater duration than the transmitted ones. The occurrence of spread F can be predicted using the technique of Neural Networks (NNs). This thesis presents the development and evaluation of NN based models (two single station models and a regional model) for predicting the occurrence of spread F over selected stations within the Brazilian sector. The input space for the NNs included the day number (seasonal variation), hour (diurnal variation), sunspot number (measure of the solar activity), magnetic index (measure of the magnetic activity) and magnetic position (latitude, magnetic declination and inclination). Twelve years of spread F data, measured from 1978 to 1989 inclusive at the equatorial site Fortaleza and the low-latitude site Cachoeira Paulista, were used in the development of the input space and NN architecture for the NN models. Spread F data believed to be related to plasma bubble developments (range spread F) were used in the development of the models, while those associated with narrow spectrum irregularities that occur near the F layer (frequency spread F) were excluded. The results of the models show the dependence of the probability of spread F occurrence on local time, season and latitude. The models also illustrate some characteristics of spread F, such as the onset and peak occurrence of spread F as a function of distance from the equator. Results from these models are presented in this thesis and compared to measured data and to modelled data obtained with an empirical model developed for the same purpose.
- Full Text:
- Date Issued: 2009
An analysis of neural networks and time series techniques for demand forecasting
- Authors: Winn, David
- Date: 2007
- Subjects: Time-series analysis , Neural networks (Computer science) , Artificial intelligence , Marketing -- Management , Marketing -- Data processing , Marketing -- Statistical methods , Consumer behaviour
- Language: English
- Type: Thesis , Masters , MCom
- Identifier: vital:5572 , http://hdl.handle.net/10962/d1004362 , Time-series analysis , Neural networks (Computer science) , Artificial intelligence , Marketing -- Management , Marketing -- Data processing , Marketing -- Statistical methods , Consumer behaviour
- Description: This research examines the plausibility of developing demand forecasting techniques that can consistently and accurately predict demand. Time Series Techniques and Artificial Neural Networks are both investigated. Deodorant sales in South Africa are specifically studied in this thesis. Marketing techniques used to influence consumer buying behaviour are considered, and these factors are integrated into the forecasting models wherever possible. The results of this research suggest that Artificial Neural Networks can be developed which consistently outperform both industry forecasting targets and Time Series forecasts, suggesting that producers could reduce costs by adopting this more effective method.
- Full Text:
- Date Issued: 2007
NeGPAIM : a model for the proactive detection of information security intrusions, utilizing fuzzy logic and neural network techniques
- Authors: Botha, Martin
- Date: 2003
- Subjects: Computer security , Fuzzy logic , Neural networks (Computer science)
- Language: English
- Type: Thesis , Doctoral , DTech (Computer Studies)
- Identifier: vital:10792 , http://hdl.handle.net/10948/142 , Computer security , Fuzzy logic , Neural networks (Computer science)
- Description: “Information is the lifeblood of any organisation and everything an organisation does involves using information in some way” (Peppard, 1993, p.5). Therefore, it can be argued that information is an organisation’s most precious asset and, as with all other assets, like equipment, money, personnel, and so on, this asset needs to be protected properly at all times (Whitman & Mattord, 2003, pp.1-14). The introduction of modern technologies, such as e-commerce, will not only increase the value of information, but will also increase the security requirements of those organisations intending to utilize such technologies. Evidence of these requirements can be observed in the 2001 CSI/FBI Computer Crime and Security Survey (Power, 2001). According to this source, the annual financial losses caused by security breaches in 2001 had increased by 277% when compared to the results from 1997. The 2002 and 2003 Computer Crime and Security Surveys confirm this, stating that the threat of computer crime and other related information security breaches continues unabated and that the financial toll is mounting (Richardson, 2003). Information is normally protected by means of a process of identifying, implementing, managing and maintaining a set of information security controls, countermeasures or safeguards (GMITS, 1998). In the rest of this thesis, the term security controls will be utilized when referring to information protection mechanisms or procedures. These security controls can be of a physical (for example, door locks), a technical (for example, passwords) and/or a procedural nature (for example, making back-up copies of critical files) (Pfleeger, 2003, pp.22-23; Stallings, 1995, p.1). 
The effective identification, implementation, management and maintenance of this set of security controls are usually integrated into an Information Security Management Program, the objective of which is to ensure an acceptable level of information confidentiality, integrity and availability within the organisation at all times (Pfleeger, 2003, pp.10-12; Whitman & Mattord, 2003, pp.1-14; Von Solms, 1993). Once the most effective security controls have been identified and implemented, it is important that this level of security be maintained through a process of continued control. For this reason, it is important that proper change management, measurement, audit, monitoring and detection be implemented (Bruce & Dempsey, 1997). Monitoring and detection are important functions and refer to the ability to identify and detect situations where information security policies have been compromised and/or breached or security violations have taken place (BS 7799, 1999; GMITS, 1998; Von Solms, 1993). The Information Security Officer is usually the person responsible for most of the operational tasks in the control process within an Information Security Management Program (Von Solms, 1993). In practice, these tasks could also be performed by a system administrator, network administrator, etc. In the rest of the thesis, the person responsible for these tasks will be referred to as the system administrator. These tasks have proved to be very challenging and demanding. The main reason for this is the rapid advancement of technology in the discipline of Information Technology, for example, the modern distributed computing environment, the Internet, the “freedom” of end-users, the introduction of e-commerce, etc. (Whitman & Mattord, 2003, p.9; Sundaram, 2000, p.1; Moses, 2001, p.6; Allen, 2001, p.1). 
As a result of the importance of this control process, and especially the monitoring and detection tasks, it is vital that the system administrator has proper tools at his/her disposal to perform this task effectively. Many of the tools currently available to the system administrator utilize technical controls, such as audit logs and user profiles. Audit logs are normally used to record all events executed on a system. These logs are simply files that record security and non-security related events that take place on a computer system within an organisation. For this reason, these logs can be used by these tools to gain valuable information on security violations, such as intrusions, and can therefore be used to monitor the current actions of each user (Microsoft, 2002; Smith, 1989, pp. 116-117). User profiles are files that contain information about users’ desktop operating environments and are used by the operating system to structure each user environment so that it is the same each time a user logs onto the system (Microsoft, 2002; Block, 1994, p.54). Thus, a user profile is used to indicate which actions the user is allowed to perform on the system. Both technical controls (audit logs and user profiles) are frequently available in most computer environments (such as UNIX, firewalls, Windows, etc.) (Cooper et al., 1995, p.129). Therefore, seeing that the audit logs record most events taking place on an information system and the user profile indicates the authorized actions of each user, the system administrator could most probably utilise these controls in a more proactive manner.
- Full Text:
- Date Issued: 2003
The effective combating of intrusion attacks through fuzzy logic and neural networks
- Authors: Goss, Robert Melvin
- Date: 2007
- Subjects: Computer security , Fuzzy logic , Neural networks (Computer science)
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9794 , http://hdl.handle.net/10948/512 , http://hdl.handle.net/10948/d1011917 , Computer security , Fuzzy logic , Neural networks (Computer science)
- Description: The importance of properly securing an organization’s information and computing resources has become paramount in modern business. Since the advent of the Internet, securing this organizational information has become increasingly difficult. Organizations deploy many security mechanisms in the protection of their data; intrusion detection systems in particular have an increasingly valuable role to play, and as networks grow, administrators need better ways to monitor their systems. Currently, many intrusion detection systems lack the means to accurately monitor and report on wireless segments within the corporate network. This dissertation proposes an extension to the NeGPAIM model, known as NeGPAIM-W, which allows for the accurate detection of attacks originating on wireless network segments. The NeGPAIM-W model is able to detect both wired and wireless-based attacks and, with the extensions to the original model mentioned previously, also provides for the correlation of intrusion attacks sourced on both wired and wireless network segments. This provides a holistic detection strategy for an organization. Detection is accomplished through the use of fuzzy logic and neural networks. The model works on the assumption that each user has, and leaves, a unique footprint on a computer system. Thus, all intrusive behaviour on the system, and the networks which support it, can be traced back to the user account which was used to perform the intrusive behaviour.
- Full Text:
- Date Issued: 2007