Eliciting and combining expert opinion: an overview and comparison of methods
- Authors: Chinyamakobvu, Mutsa Carole
- Date: 2015
- Subjects: Decision making -- Statistical methods , Expertise , Bayesian statistical decision theory , Statistical decision , Delphi method , Paired comparisons (Statistics)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5579 , http://hdl.handle.net/10962/d1017827
- Description: Decision makers have long relied on experts to inform their decision making. Expert judgment analysis is a way to elicit and combine the opinions of a group of experts to facilitate decision making. The use of expert judgment is most appropriate when there is a lack of data for obtaining reasonable statistical results. The experts are asked for advice by one or more decision makers who face a specific real decision problem. The decision makers are outside the group of experts and are jointly responsible and accountable for the decision and committed to finding solutions that everyone can live with. The emphasis is on the decision makers learning from the experts. The focus of this thesis is an overview and comparison of the various elicitation and combination methods available. These include the traditional committee method, the Delphi method, the paired comparisons method, the negative exponential model, Cooke’s classical model, the histogram technique, using the Dirichlet distribution in the case of a set of uncertain proportions which must sum to one, and the employment of overfitting. The supra-Bayesian approach, the determination of weights for the experts, and combining the opinions of experts where each opinion is associated with a confidence level that represents the expert’s conviction of his own judgment are also considered.
- Full Text:
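As a small illustration of the combination methods surveyed, the base-R sketch below (not taken from the thesis) pools three experts' Dirichlet assessments of a set of proportions that must sum to one, using a weighted linear opinion pool; the parameter vectors and expert weights are hypothetical stand-ins for, e.g., Cooke-style performance weights.

    # Hypothetical Dirichlet parameter vectors elicited from three experts
    # for three proportions that must sum to one.
    alpha <- list(c(8, 3, 1), c(6, 5, 2), c(10, 2, 2))
    w <- c(0.5, 0.3, 0.2)                  # assumed expert weights

    rdirichlet <- function(n, a) {
      x <- matrix(rgamma(n * length(a), shape = a),
                  ncol = length(a), byrow = TRUE)
      x / rowSums(x)                       # normalised gammas are Dirichlet draws
    }

    # Linear opinion pool: sample each expert in proportion to its weight.
    idx   <- sample(seq_along(w), 10000, replace = TRUE, prob = w)
    draws <- t(sapply(idx, function(i) rdirichlet(1, alpha[[i]])))
    colMeans(draws)                        # pooled estimate of the proportions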
Improved tree species discrimination at leaf level with hyperspectral data combining binary classifiers
- Authors: Dastile, Xolani Collen
- Date: 2011
- Subjects: Mathematical statistics , Analysis of variance , Nearest neighbor analysis (Statistics) , Trees--Classification
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5567 , http://hdl.handle.net/10962/d1002807 , Mathematical statistics , Analysis of variance , Nearest neighbor analysis (Statistics) , Trees--Classification
- Description: The purpose of the present thesis is to show that hyperspectral data can be used for discrimination between different tree species. The data set used in this study contains the hyperspectral measurements of leaves of seven savannah tree species. The data is high-dimensional and shows large within-class variability combined with small between-class variability, which makes discrimination between the classes challenging. We employ two classification methods: k-nearest neighbour and feed-forward neural networks. For both methods, direct 7-class prediction results in high misclassification rates. However, binary classification works better. We construct binary classifiers for all possible binary classification problems and combine them with Error Correcting Output Codes. We show in particular that the use of 1-nearest neighbour binary classifiers results in no improvement compared to a direct 1-nearest neighbour 7-class predictor. In contrast to this negative result, the use of neural network binary classifiers improves accuracy by 10% compared to a direct neural network 7-class predictor, and error rates become acceptable. This can be further improved by choosing only suitable binary classifiers for combination.
- Full Text:
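The combination scheme described can be sketched in a few lines of base R; here the iris data stands in for the hyperspectral leaf spectra, and one-vs-one 1-nearest-neighbour binary classifiers are combined by majority vote, the simplest form of output-code decoding (a sketch, not the thesis code).

    set.seed(1)
    tr  <- sample(nrow(iris), 100)
    Xtr <- as.matrix(iris[tr, 1:4]);  ytr <- iris$Species[tr]
    Xte <- as.matrix(iris[-tr, 1:4]); yte <- iris$Species[-tr]

    nn1 <- function(Xtr, ytr, Xte) {       # plain 1-nearest neighbour
      apply(Xte, 1, function(x)
        as.character(ytr[which.min(colSums((t(Xtr) - x)^2))]))
    }

    classes <- levels(ytr)
    votes <- matrix(0, nrow(Xte), length(classes),
                    dimnames = list(NULL, classes))
    for (i in 1:(length(classes) - 1)) for (j in (i + 1):length(classes)) {
      keep <- ytr %in% classes[c(i, j)]     # one binary (pairwise) task
      pred <- nn1(Xtr[keep, ], ytr[keep], Xte)
      for (cl in classes) votes[, cl] <- votes[, cl] + (pred == cl)
    }
    pred <- classes[max.col(votes)]         # combine votes over binary tasks
    mean(pred == as.character(yte))         # overall accuracy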
A modelling approach to the analysis of complex survey data
- Authors: Dlangamandla, Olwethu
- Date: 2021-10-29
- Subjects: Sampling (Statistics) , Linear models (Statistics) , Multilevel models (Statistics) , Logistic regression analysis , Complex survey data
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10962/192955 , vital:45284
- Description: Surveys are an essential tool for collecting data, and most surveys use complex sampling designs to collect the data. Complex sampling designs are used mainly to enhance representativeness in the sample by accounting for the underlying structure of the population. This often results in data that are non-independent and clustered. Ignoring complex design features such as clustering, stratification, multistage and unequal probability sampling may result in inaccurate and incorrect inference. An overview of, and the difference between, design-based and model-based approaches to inference for complex survey data is given. This study adopts a model-based approach. The objective of this study is to discuss and describe the modelling approach in analysing complex survey data. This is specifically done by introducing the principal inference methods under which data from complex surveys may be analysed. In particular, discussions on the theory and methods of model fitting for the analysis of complex survey data are presented. We begin by discussing unique features of complex survey data and explore appropriate methods of analysis that account for the complexity inherent in the survey data. We also explore the widely applied logistic regression modelling of binary data in a complex sample survey context. In particular, four forms of logistic regression models are fitted. These models are generalized linear models, multilevel models, mixed effects models and generalized linear mixed models. Simulated complex survey data are used to illustrate the methods and models. Various R packages are used for the analysis. The results presented and discussed in this thesis indicate that a logistic mixed model with first and second level predictors has a better fit compared to a logistic mixed model with first level predictors only. In addition, a logistic multilevel model with first and second level predictors and nested random effects provides a better fit to the data compared to other fitted logistic multilevel models. Similar results were obtained from fitting a generalized logistic mixed model with first and second level predictor variables and a generalized linear mixed model with first and second level predictors and nested random effects. , Thesis (MSc) -- Faculty of Science, Statistics, 2021
- Full Text:
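As a minimal sketch of the kind of model comparison reported above, the following fits logistic mixed models with first-level and with first- and second-level predictors to simulated clustered data. The lme4 package is an assumption here, one of several R packages that fit such models.

    library(lme4)
    set.seed(42)
    n_clust <- 50; n_per <- 30
    cluster <- rep(1:n_clust, each = n_per)
    x1 <- rnorm(n_clust * n_per)               # level-1 (unit) predictor
    z1 <- rnorm(n_clust)[cluster]               # level-2 (cluster) predictor
    u  <- rnorm(n_clust, sd = 0.8)[cluster]     # cluster random intercept
    y  <- rbinom(n_clust * n_per, 1, plogis(-0.5 + 0.7 * x1 + 0.4 * z1 + u))
    d  <- data.frame(y, x1, z1, cluster)

    m1 <- glmer(y ~ x1 + (1 | cluster), family = binomial, data = d)
    m2 <- glmer(y ~ x1 + z1 + (1 | cluster), family = binomial, data = d)
    AIC(m1, m2)   # the second-level predictor should improve the fit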
Prediction of protein secondary structure using binary classification trees, naive Bayes classifiers and the logistic regression classifier
- Authors: Eldud Omer, Ahmed Abdelkarim
- Date: 2016
- Subjects: Bayesian statistical decision theory , Logistic regression analysis , Biostatistics , Proteins -- Structure
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5581 , http://hdl.handle.net/10962/d1019985
- Description: The secondary structure of proteins is predicted using various binary classifiers. The data are adopted from the RS126 database. The original data consists of protein primary and secondary structure sequences. The original data is encoded using alphabetic letters. These data are encoded into unary vectors comprising ones and zeros only. Different binary classifiers, namely the naive Bayes, logistic regression and classification trees using hold-out and 5-fold cross validation, are trained using the encoded data. For each of the classifiers three classification tasks are considered, namely helix against not helix (H/∼H), sheet against not sheet (S/∼S) and coil against not coil (C/∼C). The performance of these binary classifiers is compared using the overall accuracy in predicting the protein secondary structure for various window sizes. Our results indicate that hold-out cross validation achieved higher accuracy than 5-fold cross validation. The naive Bayes classifier, using 5-fold cross validation, achieved the lowest accuracy for predicting helix against not helix. The classification tree classifiers, using 5-fold cross validation, achieved the lowest accuracies for both coil against not coil and sheet against not sheet classifications. The logistic regression classifier accuracy is dependent on the window size; there is a positive relationship between the accuracy and the window size. The logistic regression classifier approach achieved the highest accuracy when compared to the classification tree and naive Bayes classifiers for each classification task: predicting helix against not helix with accuracy 77.74 percent, sheet against not sheet with accuracy 81.22 percent, and coil against not coil with accuracy 73.39 percent. It is noted that it would be easier to compare classifiers if the classification process could be completely facilitated in R. Alternatively, it would be easier to assess these logistic regression classifiers if SPSS had a function to determine the accuracy of the logistic regression classifier.
- Full Text:
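A minimal sketch of one binary task (helix against not helix) with 5-fold cross-validation, using a logistic regression classifier on simulated 0/1-encoded windows rather than the RS126 data.

    set.seed(7)
    n <- 500; p <- 40                     # p binary indicators, as in a
    X <- matrix(rbinom(n * p, 1, 0.3), n, p)  # unary amino-acid encoding
    beta <- rnorm(p, sd = 0.4)
    y <- rbinom(n, 1, plogis(X %*% beta - 1)) # 1 = helix, 0 = not helix
    d <- data.frame(y = y, X)

    folds <- sample(rep(1:5, length.out = n))
    acc <- sapply(1:5, function(k) {
      fit   <- glm(y ~ ., family = binomial, data = d[folds != k, ])
      p_hat <- predict(fit, newdata = d[folds == k, ], type = "response")
      mean((p_hat > 0.5) == d$y[folds == k])
    })
    mean(acc)   # overall 5-fold accuracy for the H/~H task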
Default in payment: an application of statistical learning techniques
- Authors: Gcakasi, Lulama
- Date: 2020
- Subjects: Credit -- South Africa -- Risk assessment , Risk management -- Statistical methods -- South Africa , Credit -- Management -- Statistical methods , Commercial statistics
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/141547 , vital:37984
- Description: The ability of financial institutions to detect whether a customer will default on their credit card payment is essential for their profitability. To that effect, financial institutions have credit scoring systems in place to estimate the credit risk associated with a customer. Various classification models are used to develop credit scoring systems, such as k-nearest neighbours, logistic regression and classification trees. This study aims to assess the performance of different classification models on the prediction of credit card payment default. Credit data is usually of high dimension, and as a result dimension reduction techniques, namely principal component analysis and linear discriminant analysis, are used in this study as a means to improve model performance. Two classification models are used, namely neural networks and support vector machines. Model performance is evaluated using accuracy and area under the curve (AUC). The neural network classifier performed better than the support vector machine classifier, as it produced higher accuracy rates and AUC values. Dimension reduction techniques were not effective in improving model performance but did result in less computationally expensive models.
- Full Text:
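The pipeline described (dimension reduction followed by classification) can be sketched as follows; the data are simulated stand-ins for the credit data, and the e1071 package is assumed for the support vector machine.

    library(e1071)
    set.seed(3)
    n <- 1000; p <- 30
    X <- matrix(rnorm(n * p), n, p)
    y <- factor(rbinom(n, 1, plogis(X[, 1] - X[, 2])))  # default indicator
    tr <- sample(n, 700)

    pc  <- prcomp(X[tr, ], center = TRUE, scale. = TRUE)
    k   <- which(cumsum(pc$sdev^2) / sum(pc$sdev^2) >= 0.9)[1]  # 90% variance
    Ztr <- pc$x[, 1:k]
    Zte <- predict(pc, X[-tr, ])[, 1:k]

    fit  <- svm(Ztr, y[tr], kernel = "radial")
    pred <- predict(fit, Zte)
    mean(pred == y[-tr])   # holdout accuracy; AUC would need e.g. pROC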
Analytic pricing of American put options
- Authors: Glover, Elistan Nicholas
- Date: 2009
- Subjects: Options (Finance) -- Prices -- Mathematical models , Derivative securities -- Prices -- Mathematical models , Finance -- Mathematical models , Martingales (Mathematics)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5566 , http://hdl.handle.net/10962/d1002804 , Options (Finance) -- Prices -- Mathematical models , Derivative securities -- Prices -- Mathematical models , Finance -- Mathematical models , Martingales (Mathematics)
- Description: American options are the most commonly traded financial derivatives in the market. Pricing these options fairly, so as to avoid arbitrage, is of paramount importance. Closed form solutions for American put options cannot be utilised in practice, and so numerical techniques are employed. This thesis looks at the work done by other researchers to find an analytic solution to the American put option pricing problem and suggests a practical method that uses Monte Carlo simulation to approximate the American put option price. The theory behind option pricing is first discussed using a discrete model. Once the concepts of arbitrage-free pricing and hedging have been dealt with, this model is extended to a continuous-time setting. Martingale theory is introduced to put the option pricing theory in a more formal framework. The construction of a hedging portfolio is discussed in detail, and it is shown how financial derivatives are priced according to a unique risk-neutral probability measure. The Black-Scholes model is discussed and utilised to find closed form solutions for European style options. American options are discussed in detail, and it is shown that under certain conditions American style options can be priced with closed form solutions. Various numerical techniques are presented to approximate the true American put option price. Chief among these is the method developed by Geske and Johnson, which applies Richardson extrapolation to a sequence of Bermudan option prices. This model is extended to a Repeated-Richardson extrapolation technique. Finally, Monte Carlo simulation is used to approximate Bermudan put options. These values are then extrapolated to approximate the price of an American put option. The use of extrapolation techniques was hampered by the presence of non-uniform convergence of the Bermudan put option sequence. When convergence was uniform, the approximations were accurate up to a few cents' difference.
- Full Text:
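The Geske-Johnson idea can be illustrated compactly: price Bermudan puts with 1, 2 and 3 equally spaced exercise dates and Richardson-extrapolate to the American price. In the base-R sketch below a CRR binomial lattice stands in for the thesis's Monte Carlo pricing of the Bermudan options, and the parameters are illustrative.

    bermudan_put <- function(S0, K, r, sigma, T, m, N = 600) {
      dt <- T / N; u <- exp(sigma * sqrt(dt)); d <- 1 / u
      p  <- (exp(r * dt) - d) / (u - d)
      ex <- round(seq_len(m) * N / m)           # the m exercise steps
      S  <- S0 * u^(N:0) * d^(0:N)              # terminal asset prices
      V  <- pmax(K - S, 0)
      for (n in (N - 1):0) {
        S <- S0 * u^(n:0) * d^(0:n)
        V <- exp(-r * dt) * (p * V[1:(n + 1)] + (1 - p) * V[2:(n + 2)])
        if (n %in% ex) V <- pmax(V, K - S)      # early exercise allowed here
      }
      V
    }
    P <- sapply(1:3, function(m) bermudan_put(42, 40, 0.05, 0.3, 1, m))
    # Three-point Richardson extrapolation to the American price:
    P[3] + 3.5 * (P[3] - P[2]) - 0.5 * (P[2] - P[1])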
Statistical analyses of artificial waterpoints: their effect on the herbaceous and woody structure composition within the Kruger National Park
- Authors: Goodall, Victoria Lucy
- Date: 2007
- Subjects: South African National Parks , Ecology -- Statistical methods , Regression analysis , Log-linear models , Game reserves -- South Africa , Kruger National Park (South Africa)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5570 , http://hdl.handle.net/10962/d1002810 , South African National Parks , Ecology -- Statistical methods , Regression analysis , Log-linear models , Game reserves -- South Africa , Kruger National Park (South Africa)
- Description: The objective of this project is to link the statistical theory used in the ecological sciences with an actual project that was developed for the South African National Parks Scientific Services. It investigates the changes that have occurred in the herbaceous and woody structure due to the closure of artificial waterpoints, including the impacts that elephants and other herbivores have on the vegetation of the Kruger National Park. This project was designed in conjunction with South African National Parks (SANP) Scientific Services and is a registered project with this department. The results of this project will be submitted to Scientific Services in accordance with the terms and conditions of a SANP research project. A major concern within the KNP is the declining numbers of rare antelope, and numerous projects have been developed to investigate possible ways of halting this decline and thus protecting the heterogeneity of the Kruger National Park. Three different datasets were investigated, covering three aspects of vegetation structure and composition within the KNP. The first investigated the changes that have occurred since the N'washitsumbe enclosure in the Far Northern KNP was fenced off from the rest of the park. The results show that over the 40 years since the enclosure was built, changes have occurred which have resulted in a significant difference in the abundance of Increaser 2 and Decreaser grass species between the inside and the outside of the enclosure. The Increaser 2 and Decreaser categories are the result of a grass species classification depending on whether the species thrives or is depressed by heavy grazing. The difference in grass species composition and structure between the inside and the outside of the enclosure indicates that the grazing animals within the KNP have influenced the grass composition in a way that favours the dominant animals. This has resulted in a declining roan antelope population, one of the species considered a 'rare antelope'. Many artificial waterpoints (boreholes and dams) have also been closed throughout the KNP in the hope of bringing about a change in vegetation structure and composition in favour of the roan. Veld condition assessment data for 87 boreholes throughout the Park were analyzed to determine whether the veld in the vicinity is beginning to change towards a more Decreaser-dominated sward, which would favour the roan. The results were analyzed for the different regions of the Park, and they indicate that changes are becoming evident; however, the results are not particularly conclusive yet. The majority of the boreholes were closed between 1994 and 1998, which means that not a lot of data were available to be analyzed. A similar study conducted in another 10 years' time might reveal more meaningful results. However, the results are moving in the direction hoped for by the management of the KNP: the grass composition has a higher proportion of Decreaser grasses since the closure of the waterpoints, and the grass biomass around these areas has also improved. The results were analyzed on an individual basis and then on a regional basis, as the minimal data meant that the individual analyses did not provide any significant results. A third study examined the impact of the rapidly increasing elephant population on the vegetation within the riparian zone along three rivers in the Far Northern region of the KNP. The riparian zone is an important part of the landscape in terms of providing food for many animals as well as shade. The elephant population has increased substantially since the termination of the culling program, which means that the feeding requirements of the population have increased; this could result in severe damage to the vegetation, as elephants can be very destructive feeders. The results show surprising differences between the three years of data that were analyzed. They indicate that the elephants target specific height ranges of trees when feeding; however, they do not seem to consistently target specific tree species. This is positive for the diversity of the riparian zone, as this region is very important both ecologically and aesthetically for the tourists who visit the Park.
- Full Text:
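The inside-versus-outside enclosure comparison described above amounts to comparing proportions between two samples; a minimal base-R sketch with hypothetical counts (not the survey data):

    # Assumed tuft counts: Decreaser grasses inside vs outside the enclosure.
    decreaser <- c(inside = 310, outside = 190)
    total     <- c(inside = 500, outside = 500)
    prop.test(decreaser, total)   # H0: equal Decreaser proportions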
Statistical and Mathematical Learning: an application to fraud detection and prevention
- Authors: Hamlomo, Sisipho
- Date: 2022-04-06
- Subjects: Credit card fraud , Bootstrap (Statistics) , Support vector machines , Neural networks (Computer science) , Decision trees , Machine learning , Cross-validation , Imbalanced data
- Language: English
- Type: Master's thesis , text
- Identifier: http://hdl.handle.net/10962/233795 , vital:50128
- Description: Credit card fraud is an ever-growing problem. There has been a rapid increase in the rate of fraudulent activities in recent years, resulting in a considerable loss to several organizations, companies and government agencies. Many researchers have focused on detecting fraudulent behaviours early using advanced machine learning techniques. However, credit card fraud detection is not a straightforward task, since fraudulent behaviours usually differ for each attempt and the dataset is highly imbalanced, that is, the frequency of non-fraudulent cases outnumbers the frequency of fraudulent cases. In the case of the European credit card dataset, we have a ratio of approximately one fraudulent case to five hundred and seventy-eight non-fraudulent cases. Different methods were implemented to overcome this problem, namely random undersampling, one-sided sampling, SMOTE combined with Tomek links, and parameter tuning. Predictive classifiers, namely logistic regression, decision trees, k-nearest neighbour, support vector machines and multilayer perceptrons, are applied to predict whether a transaction is fraudulent or non-fraudulent. The models' performance is evaluated based on recall, precision, F1-score, the area under the receiver operating characteristic curve, the geometric mean and the Matthews correlation coefficient. The results showed that the logistic regression classifier performed better than the other classifiers, except when the dataset was oversampled. , Thesis (MSc) -- Faculty of Science, Statistics, 2022
- Full Text:
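A base-R sketch of the core pipeline: random undersampling of the majority class, a logistic regression classifier, and the imbalance-aware metrics listed above, on simulated data standing in for the European credit card set.

    set.seed(11)
    n <- 20000
    x1 <- rnorm(n); x2 <- rnorm(n)
    y  <- rbinom(n, 1, plogis(-6 + 1.5 * x1 + x2))   # rare fraud class
    d  <- data.frame(y, x1, x2)
    tr <- sample(n, 15000)

    pos <- which(d$y[tr] == 1); neg <- which(d$y[tr] == 0)
    und <- c(pos, sample(neg, length(pos)))          # 1:1 undersample
    fit <- glm(y ~ x1 + x2, binomial, data = d[tr[und], ])

    p_hat <- predict(fit, d[-tr, ], type = "response")
    pred  <- as.integer(p_hat > 0.5)
    tp <- sum(pred == 1 & d$y[-tr] == 1); fp <- sum(pred == 1 & d$y[-tr] == 0)
    fn <- sum(pred == 0 & d$y[-tr] == 1); tn <- sum(pred == 0 & d$y[-tr] == 0)
    recall <- tp / (tp + fn); precision <- tp / (tp + fp)
    f1    <- 2 * precision * recall / (precision + recall)
    gmean <- sqrt(recall * tn / (tn + fp))           # geometric mean
    mcc   <- (tp * tn - fp * fn) /
             sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    c(recall = recall, precision = precision, F1 = f1, Gmean = gmean, MCC = mcc)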
Bayesian accelerated life tests: exponential and Weibull models
- Authors: Izally, Sharkay Ruwade
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3003 , vital:20351
- Description: Reliability life testing is used for life data analysis in which samples are tested under normal conditions to obtain failure time data for reliability assessment. It can be costly and time consuming to obtain failure time data under normal operating conditions if the mean time to failure of a product is long. An alternative is to use failure time data from an accelerated life test (ALT) to extrapolate the reliability under normal conditions. In accelerated life testing, the units are placed under a higher than normal stress condition, such as voltage, current, pressure or temperature, to make the items fail in a shorter period of time. The failure information is then transformed through an acceleration model, commonly known as the time transformation function, to predict the reliability under normal operating conditions. The power law will be used as the time transformation function in this thesis. We will first consider a Bayesian inference model under the assumption that the underlying life distribution in the accelerated life test is exponentially distributed. The maximal data information (MDI) prior, the Ghosh, Mergel and Liu (GML) prior and the Jeffreys prior will be derived for the exponential distribution. The propriety of the posterior distributions will be investigated. Results obtained using these non-informative priors will be compared in a simulation study by looking at the posterior variances. The Weibull distribution as the underlying life distribution in the accelerated life test will also be investigated. The maximal data information prior will be derived for the Weibull distribution using the power law. The uniform prior and a mixture of gamma and uniform priors will be considered. The propriety of these posteriors will also be investigated. The predictive reliability at the use-stress will be computed for these models. The deviance information criterion will be used to compare these priors. As a result of using a time transformation function, Bayesian inference becomes analytically intractable, and Markov chain Monte Carlo (MCMC) methods will be used to alleviate this problem. The Metropolis-Hastings algorithm will be used to sample from the posteriors for the exponential model in the accelerated life test. The adaptive rejection sampling method will be used to sample from the posterior distributions when the Weibull model is considered.
- Full Text:
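The MCMC step can be sketched in base R: a random-walk Metropolis-Hastings sampler for an exponential ALT with the power-law link rate(s) = lambda0 * s^beta, on simulated data. A flat prior on (log lambda0, beta) stands in here for the MDI, GML and Jeffreys priors derived in the thesis.

    set.seed(5)
    s <- rep(c(1.5, 2, 3), each = 30)               # accelerated stress levels
    t <- rexp(length(s), rate = 0.02 * s^2.5)       # simulated failure times

    log_post <- function(th) {                      # th = (log lambda0, beta)
      rate <- exp(th[1]) * s^th[2]
      sum(dexp(t, rate, log = TRUE))                # + log flat prior (zero)
    }

    draws <- matrix(NA, 20000, 2)
    th <- c(log(0.05), 1); lp <- log_post(th)
    for (i in 1:nrow(draws)) {
      prop    <- th + rnorm(2, sd = c(0.15, 0.25))  # random-walk proposal
      lp_prop <- log_post(prop)
      if (log(runif(1)) < lp_prop - lp) { th <- prop; lp <- lp_prop }
      draws[i, ] <- th
    }
    post <- draws[-(1:5000), ]                      # drop burn-in
    colMeans(post)                                  # posterior means
    mean(exp(-exp(post[, 1]) * 50))                 # predictive R(50) at use stress s = 1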
Enhancing the use of large-scale assessment data in South Africa: Multidimensional Item Response Theory
- Authors: Lahoud, Tamlyn Ann
- Date: 2023-03-29
- Subjects: Uncatalogued
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/422389 , vital:71938
- Description: This research aims to enhance the use of large-scale assessment data in South Africa by evaluating assessment validity by means of multidimensional item response theory and its associated statistical techniques, which have been severely underutilised. Data from the 2014 administration of the grade 6 Mathematics annual national assessment were used in this study, and all analyses were conducted using the mirt package in R. A two-parameter logistic item response theory model was developed, which indicated a clear alignment between the model parameters and the difficulty specifications of the test. The test was found to favour learners within the central band on the ability scale. An exploratory five-dimensional item response theory model was then developed to investigate the alignment with the test specifications as evidence for construct validity. Significant discrepancies between the factor structure and the specifications of the test were identified. Notably, the results suggest that some items measured an ability that was not purely mathematical, such as reading ability, which would distort the test’s representation of Mathematics ability, disadvantage learners with lower English literacy, and reduce the construct validity of the test. Further validity evidence was obtained by differential item functioning analyses, which revealed that fourteen items function differently for learners from different provinces. Although possible reasons for the presence of differential item functioning among provinces were not discussed, its presence provided sufficient evidence against the validity of the test. In conclusion, multidimensional item response theory provided an effective and rigorous approach to establishing the validity of a large-scale assessment. To avoid the pitfalls of the annual national assessments, it is recommended that these multidimensional item response theory and differential item functioning techniques be utilised for the development and evaluation of future national assessment instruments in South Africa. , Thesis (MSc) -- Faculty of Science, Statistics, 2023
- Full Text:
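A minimal sketch of the analyses described, using the mirt package named above on simulated responses; the function names are as in current mirt releases, and the grouping variable is a hypothetical stand-in for province.

    library(mirt)
    set.seed(9)
    a <- matrix(rlnorm(20, 0.2, 0.3))          # item discriminations
    d <- matrix(rnorm(20))                     # item easiness parameters
    dat <- simdata(a, d, 1000, itemtype = "2PL")

    fit1 <- mirt(dat, 1, itemtype = "2PL")     # unidimensional 2PL
    coef(fit1, simplify = TRUE)$items          # parameters vs difficulty specs

    fit2 <- mirt(dat, 2)                       # exploratory multidimensional fit
    summary(fit2, rotate = "oblimin")          # factor structure vs test specs

    grp <- sample(c("A", "B"), 1000, replace = TRUE)   # stand-in for province
    mg  <- multipleGroup(dat, 1, group = grp,
                         invariance = c("free_means", "free_var"))
    DIF(mg, which.par = c("a1", "d"))          # screen for DIF across groups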
Reliability analysis: assessment of hardware and human reliability
- Authors: Mafu, Masakheke
- Date: 2017
- Subjects: Bayesian statistical decision theory , Reliability (Engineering) , Human machine systems , Probabilities , Markov processes
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/6280 , vital:21077
- Description: Most reliability analyses involve the analysis of binary data. Practitioners in the field of reliability place great emphasis on analysing the time periods over which items or systems function (failure time analyses), which make use of different statistical models. This study intends to introduce, review and investigate four statistical models for modelling failure times of non-repairable items, and to utilise a Bayesian methodology to achieve this. The exponential, Rayleigh, gamma and Weibull distributions will be considered. The performance of two non-informative priors will be investigated. An application of two failure time distributions will be carried out. To meet these objectives, the failure rate and the reliability functions of the failure time distributions are calculated. Two non-informative priors, the Jeffreys prior and the general divergence prior, and the corresponding posteriors are derived for each distribution. Simulation studies for each distribution are carried out, where the coverage rates and credible interval lengths are calculated, and the results of these are discussed. The gamma distribution and the Weibull distribution are applied to failure time data. The Jeffreys prior is found to have a better coverage rate than the general divergence prior. The general divergence prior shows undercoverage when used with the Rayleigh distribution. The Jeffreys prior produces coverage rates that are conservative when used with the exponential distribution. The two priors give, on average, the same interval lengths, which increase as the value of the parameter increases. Both priors perform similarly when used with the gamma distribution and the Weibull distribution. A thorough discussion and review of human reliability analysis (HRA) techniques is also given. Twenty HRA techniques are discussed, providing a background, description, and advantages and disadvantages for each. Case studies in the nuclear industry, railway industry and aviation industry are presented to show the importance and applications of HRA. Human error has been shown to be the major contributor to system failure.
- Full Text:
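The coverage studies described can be sketched for the simplest case: exponential lifetimes under the Jeffreys prior pi(lambda) proportional to 1/lambda, whose posterior is Gamma(n, sum of the observations). The base-R simulation below checks the frequentist coverage of the 95% credible interval.

    set.seed(2)
    lambda <- 0.5; n <- 20; B <- 5000
    covered <- replicate(B, {
      t  <- rexp(n, lambda)                          # one simulated sample
      ci <- qgamma(c(0.025, 0.975), shape = n, rate = sum(t))
      ci[1] <= lambda && lambda <= ci[2]             # interval covers truth?
    })
    mean(covered)   # should be close to the nominal 0.95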
Application of multiserver queueing to call centres
- Authors: Majakwara, Jacob
- Date: 2010
- Subjects: Call centers , ERLANG (Computer program language) , Queuing theory
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5578 , http://hdl.handle.net/10962/d1015461
- Description: The simplest and most widely used queueing model in call centres is the M/M/k system, sometimes referred to as Erlang-C. For many applications the model is an over-simplification. The Erlang-C model ignores, among other things, busy signals, customer impatience and services that span multiple visits. Although the Erlang-C formula is easily implemented, it is not easy to obtain insight from its answers (for example, to find an approximate answer to questions such as "how many additional agents do I need if the arrival rate doubles?"). An approximation of the Erlang-C formula that gives structural insight into this type of question would be of use to better understand economies of scale in call centre operations. Erlang-C based predictions can also turn out highly inaccurate because of violations of underlying assumptions, and these violations are not straightforward to model. For example, non-exponential service times lead one to the M/G/k queue which, in stark contrast to the M/M/k system, is difficult to analyse. This thesis deals mainly with the general M/GI/k model with abandonment. The arrival process conforms to a Poisson process, service durations are independent and identically distributed with a general distribution, there are k servers, and customer abandonment times are independent and identically distributed with a general distribution. This thesis will endeavour to analyse call centres using the M/GI/k model with abandonment, and the data to be used will be simulated using the EZSIM software. The paper by Brown et al. [3], entitled "Statistical Analysis of a Telephone Call Centre: A Queueing-Science Perspective," will be the basis upon which this thesis is built.
- Full Text:
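The Erlang-C quantities discussed above are straightforward to compute in base R; the sketch below evaluates the probability of waiting via the Erlang-B recursion (which avoids large factorials) and answers the "how many additional agents if the arrival rate doubles?" question for illustrative parameters.

    erlang_c <- function(lambda, mu, k) {
      a <- lambda / mu                          # offered load (Erlangs)
      B <- 1
      for (j in 1:k) B <- a * B / (j + a * B)   # Erlang-B recursion
      rho <- a / k
      B / (1 - rho + rho * B)                   # Erlang-C: P(wait)
    }
    agents_needed <- function(lambda, mu, target = 0.2) {
      k <- ceiling(lambda / mu)                 # start at the offered load
      while (erlang_c(lambda, mu, k) > target) k <- k + 1
      k
    }
    agents_needed(100, 1)   # e.g. 100 calls/min, 1 min mean service
    agents_needed(200, 1)   # arrival rate doubled: fewer than 2x agents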
Stochastic models in finance
- Authors: Mazengera, Hassan
- Date: 2017
- Subjects: Finance -- Mathematical models , C++ (Computer program language) , GARCH model , Lebesgue-Radon-Nikodym theorems , Radon measures , Stochastic models , Stochastic processes , Stochastic processes -- Computer programs , Martingales (Mathematics) , Pricing -- Mathematical models
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/162724 , vital:40976
- Description: Stochastic models for pricing financial securities are developed. First we consider the Black-Scholes model, a classic example of a complete market model, and finally focus on Lévy-driven models. Jumps may render the market incomplete; they are induced in a model by the inclusion of a Poisson process. Lévy-driven models are more realistic for modelling asset price dynamics than the Black-Scholes model. Martingales are central to pricing, especially of derivatives, and we give them due attention in the context of pricing. There is an increasing number of important pricing models for which analytical solutions are not available, so computational methods come in handy; see Broadie and Glasserman (1997). Computational methods are also applicable to models with analytical solutions. We computationally value selected stochastic financial models using C++. Computational methods are also used to value or price complex financial instruments such as path-dependent derivatives, and this pricing procedure is applied in the computational valuation of a stochastic (revenue-based) loan contract. Derivatives with simple payoff functions and models with analytical solutions are considered for illustrative purposes. The Black-Scholes PDE is complex to solve analytically, so finite difference methods are widely used; an explicit finite difference scheme is considered in this thesis for the computational valuation of derivatives modelled by the Black-Scholes PDE (a minimal sketch of such a scheme follows this record). Stochastic modelling of asset prices is important for the valuation of derivatives: Gaussian, exponential and gamma variates are simulated for valuation purposes.
- Full Text:
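Editorial aside: below is a minimal sketch of an explicit finite-difference scheme for the Black-Scholes PDE, of the general kind the abstract names. It is not the thesis's code (which is in C++); grid sizes, parameters and boundary choices are illustrative assumptions, and the large number of time steps is needed because the explicit scheme is only conditionally stable.

```python
# Explicit finite differences for the Black-Scholes PDE: march a European
# call payoff backwards from maturity on an S-grid.
import numpy as np

def explicit_fd_call(S0, K, r, sigma, T, M=200, N=20000):
    S_max = 4 * K                            # assumed far-field truncation
    dS, dt = S_max / M, T / N                # N chosen large for stability
    i = np.arange(1, M)                      # interior nodes, S = i * dS
    a = 0.5 * dt * (sigma**2 * i**2 - r * i) # standard explicit coefficients
    b = 1 - dt * (sigma**2 * i**2 + r)
    c = 0.5 * dt * (sigma**2 * i**2 + r * i)
    S = np.linspace(0, S_max, M + 1)
    V = np.maximum(S - K, 0.0)               # terminal payoff at maturity
    for n in range(N):
        V[1:M] = a * V[0:M-1] + b * V[1:M] + c * V[2:M+1]
        V[0] = 0.0                           # call worthless at S = 0
        tau = (n + 1) * dt                   # time run back from maturity
        V[M] = S_max - K * np.exp(-r * tau)  # deep in-the-money boundary
    return np.interp(S0, S, V)

# should land near the Black-Scholes value of about 10.45
print(explicit_fd_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0))
```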
A review of generalized linear models for count data with emphasis on current geospatial procedures
- Authors: Michell, Justin Walter
- Date: 2016
- Subjects: Spatial analysis (Statistics) , Bayesian statistical decision theory , Geospatial data , Malaria -- Botswana -- Statistics , Malaria -- Botswana -- Research -- Statistical methods
- Language: English
- Type: Thesis , Masters , MCom
- Identifier: vital:5582 , http://hdl.handle.net/10962/d1019989
- Description: Analytical problems caused by over-fitting, confounding and non-independence in the data are a major challenge for variable selection. As more variables are tested against a certain data set, there is a greater risk that some will explain the data merely by chance, but will fail to explain new data. The main aim of this study is to employ a systematic and practicable variable selection process for the spatial analysis and mapping of historical malaria risk in Botswana, using data collected from the MARA (Mapping Malaria Risk in Africa) project and environmental and climatic datasets from various sources. Details of how a spatial database is compiled for a statistical analysis to proceed are provided, and the automation of the entire process is also explored. The final Bayesian spatial model derived from the non-spatial variable selection procedure was fitted to the data using Markov chain Monte Carlo simulation. Winter temperature had the greatest effect on malaria prevalence in Botswana. Summer rainfall, maximum temperature of the warmest month, annual range of temperature, altitude and distance to the closest water source were also significantly associated with malaria prevalence in the final spatial model after accounting for spatial correlation. Using this spatial model, malaria prevalence at unobserved locations was predicted, producing a smooth risk map covering Botswana. The automation of both compiling the spatial database and the variable selection procedure proved challenging and could be achieved only in parts of the process. The non-spatial selection procedure proved practical, identified stable explanatory variables and provided an objective means for selecting one variable over another; ultimately, however, it was not entirely successful, because a unique set of spatial variables could not be selected. (A minimal sketch of fitting a count-data GLM of the kind reviewed follows this record.)
- Full Text:
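Editorial aside: the review's starting point is the generalized linear model for counts. As a hedged illustration only, here is a minimal Poisson log-linear GLM fitted by iteratively reweighted least squares; the data are synthetic and the whole setup is an assumption, not the study's malaria workflow.

```python
# Poisson regression (log link) via IRLS on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([0.5, 0.8, -0.4])
y = rng.poisson(np.exp(X @ beta_true))       # synthetic counts

beta = np.zeros(X.shape[1])
for _ in range(25):                          # IRLS iterations
    eta = X @ beta
    mu = np.exp(eta)                         # mean under the log link
    z = eta + (y - mu) / mu                  # working response
    W = mu                                   # working weights (Var = mu)
    WX = X * W[:, None]
    beta_new = np.linalg.solve(X.T @ WX, WX.T @ z)
    if np.max(np.abs(beta_new - beta)) < 1e-8:
        beta = beta_new
        break
    beta = beta_new

print(beta)                                  # should be close to beta_true
```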
Bayesian accelerated life tests for the Weibull distribution under non-informative priors
- Authors: Mostert, Philip
- Date: 2020
- Subjects: Accelerated life testing -- Statistical methods , Accelerated life testing -- Mathematical models , Failure time data analysis , Bayesian statistical decision theory , Monte Carlo method , Weibull distribution
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/172181 , vital:42173
- Description: In a competitive world where products are designed to last for long periods of time, obtaining time-to-failure data is both difficult and costly. Hence for products with high reliability, accelerated life testing is required to obtain relevant life-data quickly. This is done by placing the products under higher-than-use stress levels, thereby causing the products to fail prematurely. Part of the analysis of accelerated life-data requires a life distribution that describes the lifetime of a product at a given stress level and a life-stress relationship – some function that describes the way in which the life distribution changes across different stress levels. In this thesis it is assumed that the underlying life distribution is the well-known Weibull distribution, with shape parameter constant over all stress levels and scale parameter a log-linear function of stress. The primary objective of this thesis is to obtain estimates from Bayesian analysis, and five types of non-informative prior distributions are considered: Jeffreys' prior, reference priors, the maximal data information prior, the uniform prior and probability matching priors. Since the associated posterior distributions under all the derived non-informative priors are of unknown form, their propriety is assessed to ensure admissible results. For comparison purposes, estimates obtained via the method of maximum likelihood are also considered; finding these estimates requires solving non-linear equations, hence the Newton-Raphson algorithm is used. A simulation study based on the time-to-failure of accelerated data is conducted to compare results between maximum likelihood and Bayesian estimates. As the Bayesian posterior distributions are analytically intractable, two methods to obtain Bayesian estimates are considered: Markov chain Monte Carlo methods and Lindley's approximation technique (a minimal sketch of the former follows this record). In the simulation study the posterior means and the root mean squared error values of the estimates are considered under the symmetric squared error loss function and two asymmetric loss functions, the LINEX loss function and the general entropy loss function. Furthermore, the coverage rates for the Bayesian Markov chain Monte Carlo and maximum likelihood estimates are found and compared by their average interval lengths. A case study using a dataset based on accelerated time-to-failure of an insulating fluid is considered. The fit of the Weibull distribution to these data is studied and compared to that of other popular life distributions. A full simulation study is conducted to illustrate convergence of the proper posterior distributions. Both maximum likelihood and Bayesian estimates are found for these data. The deviance information criterion is used to compare Bayesian estimates between the prior distributions. The case study is concluded by finding reliability estimates of the data at use-stress levels.
- Full Text:
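Editorial aside: as a hedged sketch of the Markov chain Monte Carlo route only. It uses a single-stress Weibull with the simple vague prior p(k, λ) ∝ 1/(kλ), which stands in for (and is not the same as) the thesis's derived non-informative priors, and the data are simulated.

```python
# Random-walk Metropolis for a Weibull(shape k, scale lam) posterior
# under the vague prior p(k, lam) ∝ 1/(k*lam), sampled on the log scale.
import numpy as np

rng = np.random.default_rng(7)
t = rng.weibull(2.0, size=40) * 100.0        # simulated lifetimes

def log_post(log_k, log_lam):
    k, lam = np.exp(log_k), np.exp(log_lam)
    loglik = np.sum(np.log(k) - k * np.log(lam)
                    + (k - 1) * np.log(t) - (t / lam) ** k)
    # the 1/(k*lam) prior and the Jacobian of the log transform cancel,
    # so in (log k, log lam) the log-posterior is just the log-likelihood
    return loglik

draws, cur = [], np.array([0.0, np.log(t.mean())])
cur_lp = log_post(*cur)
for _ in range(20000):
    prop = cur + rng.normal(scale=0.1, size=2)   # random-walk proposal
    prop_lp = log_post(*prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp: # Metropolis acceptance
        cur, cur_lp = prop, prop_lp
    draws.append(cur)

k_post, lam_post = np.exp(np.array(draws)[10000:]).T  # discard burn-in
print(k_post.mean(), lam_post.mean())
```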
The application of Classification Trees in the Banking Sector
- Authors: Mtwa, Sithayanda
- Date: 2021-04
- Subjects: To be added
- Language: English
- Type: thesis , text , Masters , MSc
- Identifier: http://hdl.handle.net/10962/178514 , vital:42946
- Description: Access restricted until April 2026. , Thesis (MSc) -- Faculty of Science, Statistics, 2021
- Full Text:
An analysis of the Libor and Swap market models for pricing interest-rate derivatives
- Authors: Mutengwa, Tafadzwa Isaac
- Date: 2012
- Subjects: LIBOR market model , Monte Carlo method , Interest rates -- Mathematical models , Derivative securities
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5573 , http://hdl.handle.net/10962/d1005535
- Description: This thesis focuses on the arbitrage-free (fair) pricing of interest rate derivatives, in particular caplets and swaptions, using the LIBOR market model (LMM) developed by Brace, Gatarek and Musiela (1997) and the swap market model (SMM) developed by Jamshidian (1997), respectively. Today, in most financial markets, interest rate derivatives are priced using the renowned Black-Scholes formula developed by Black and Scholes (1973). We present new pricing models for caplets and swaptions that can be implemented in financial markets as alternatives to the Black-Scholes model. We theoretically construct these "new market models" and then test their practical aspects. We show that the dynamics of the LMM imply a pricing formula for caplets that has the same structure as the Black-Scholes pricing formula for a caplet used by market practitioners (a minimal sketch of this formula follows this record). For the SMM we also theoretically construct an arbitrage-free interest rate model that implies a pricing formula for swaptions with the same structure as the Black-Scholes pricing formula for swaptions. We empirically compare the pricing performance of the LMM against the Black-Scholes model for pricing caplets using Monte Carlo methods.
- Full Text:
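Editorial aside: the Black-type caplet formula that lognormal forward-LIBOR dynamics imply is standard, and a minimal sketch of it follows. The input values are purely illustrative assumptions: F is the forward LIBOR for the period [T, T+δ], K the cap rate, σ the forward-rate volatility, and P0 the discount bond price P(0, T+δ).

```python
# Black-type caplet pricing formula implied by lognormal forward LIBOR.
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_caplet(F, K, sigma, T, delta, P0):
    d1 = (log(F / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return delta * P0 * (F * norm_cdf(d1) - K * norm_cdf(d2))

# e.g. a caplet on 6-month LIBOR resetting in 1 year (hypothetical inputs)
print(black_caplet(F=0.05, K=0.05, sigma=0.2, T=1.0, delta=0.5, P0=0.93))
```

A Monte Carlo price from simulated lognormal forward-rate paths, as the abstract describes, should converge to this closed-form value.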
Clustering algorithms and their effect on edge preservation in image compression
- Authors: Ndebele, Nothando Elizabeth
- Date: 2009
- Subjects: Image compression , Vector analysis , Cluster analysis , Cluster analysis -- Data processing , Algorithms
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5576 , http://hdl.handle.net/10962/d1008210
- Description: Image compression aims to reduce the amount of data that is stored or transmitted for images. One technique that may be used to this end is vector quantization. Vectors may be used to represent images, and vector quantization reduces the number of vectors required for an image by representing a cluster of similar vectors by one typical vector that is part of a set of vectors referred to as the codebook. For compression, for each image vector only the closest codebook vector is stored or transmitted; for reconstruction, the image vectors are again replaced by the closest codebook vectors. Hence vector quantization is a lossy compression technique, and the quality of the reconstructed image depends strongly on the quality of the codebook. The design of the codebook is therefore an important part of the process. In this thesis we examine three clustering algorithms which can be used for codebook design in image compression: c-means (CM), fuzzy c-means (FCM) and learning vector quantization (LVQ). We give a description of these algorithms and their application to codebook design (a minimal sketch of c-means codebook design follows this record). Edges are an important part of the visual information contained in an image, so it is essential to use codebooks which allow an accurate representation of the edges; one of the shortcomings of vector quantization is poor edge representation. We therefore carry out experiments using these algorithms to compare their edge-preserving qualities. We also investigate the combination of these algorithms with classified vector quantization (CVQ) and the replication method (RM), both of which have been suggested as methods for improving edge representation. We use a cross-validation approach to estimate the mean squared error, which measures the performance of each of the algorithms and the edge-preserving methods. The results reflect that the edges are less accurately represented than the non-edge areas when using CM, FCM and LVQ. The advantage of using CVQ is that the time taken for codebook design is reduced, particularly for CM and FCM. RM is found to be effective where the codebook is trained using a set that has larger proportions of edges than the test set.
- Full Text:
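Editorial aside: a minimal sketch of the first of the three algorithms, c-means (k-means) codebook design, under stated assumptions; the "image vectors" below are random stand-ins for 4x4 pixel blocks rather than blocks extracted from real training images.

```python
# c-means (k-means) codebook design for vector quantization.
import numpy as np

rng = np.random.default_rng(0)
vectors = rng.random((5000, 16))             # stand-in for 4x4 image blocks

def design_codebook(vectors, codebook_size=64, iters=30):
    # initialise the codebook with randomly chosen training vectors
    idx = rng.choice(len(vectors), codebook_size, replace=False)
    codebook = vectors[idx].copy()
    for _ in range(iters):
        # assign each training vector to its nearest codebook entry
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        nearest = d.argmin(axis=1)
        # move each entry to the centroid of its assigned cluster
        for j in range(codebook_size):
            members = vectors[nearest == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

codebook = design_codebook(vectors)
# compression then stores, per image block, only the index of the
# nearest codebook vector; reconstruction looks the vector back up
```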
Pricing exotic options using C++
- Authors: Nhongo, Tawuya D R
- Date: 2007
- Subjects: C++ (Computer program language) , Monte Carlo method , Simulation methods , Options (Finance) -- Mathematical models , Pricing -- Mathematical models
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5577 , http://hdl.handle.net/10962/d1008373
- Description: This document demonstrates the use of the C++ programming language as a simulation tool in the efficient pricing of exotic European options. Extensions to the basic problem of simulation pricing are undertaken, including variance reduction by conditional expectation, control variates and antithetic variates. Ultimately we were able to produce a modularized, easily extendable program which effectively makes use of Monte Carlo simulation techniques to price lookback, Asian and barrier exotic options (a minimal sketch of one such technique follows this record). Theories of variance reduction were validated, except in cases where we used control variates in combination with the other variance reduction techniques, in which case we observed increased variance. The main aim of this half-thesis was to produce a C++ program that yields stable prices for exotic options.
- Full Text:
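Editorial aside: as a hedged sketch of one of the named techniques, here is antithetic-variate Monte Carlo for an arithmetic-average Asian call, written in Python for brevity rather than the thesis's C++. All parameters are illustrative.

```python
# Antithetic variates: each normal draw and its negation give a matched
# pair of GBM paths whose payoffs are averaged before discounting.
import numpy as np

def asian_call_antithetic(S0, K, r, sigma, T, steps=50, paths=100_000):
    rng = np.random.default_rng(42)
    dt = T / steps
    Z = rng.standard_normal((paths, steps))
    drift = (r - 0.5 * sigma**2) * dt
    vol = sigma * np.sqrt(dt)
    S_plus = S0 * np.exp(np.cumsum(drift + vol * Z, axis=1))
    S_minus = S0 * np.exp(np.cumsum(drift - vol * Z, axis=1))  # antithetic
    pay = 0.5 * (np.maximum(S_plus.mean(axis=1) - K, 0.0)
                 + np.maximum(S_minus.mean(axis=1) - K, 0.0))
    disc = np.exp(-r * T) * pay              # pair-averaged, discounted
    return disc.mean(), disc.std(ddof=1) / np.sqrt(paths)

price, stderr = asian_call_antithetic(S0=100, K=100, r=0.05, sigma=0.2, T=1.0)
print(price, stderr)
```

The pair-averaging exploits the negative correlation between a path and its mirror image, which typically shrinks the standard error relative to plain Monte Carlo with the same number of draws.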
Importance of various data sources in deterministic stock assessment models
- Authors: Northrop, Amanda Rosalind
- Date: 2008
- Subjects: Fish stock assessment -- Mathematical models , Fishery management -- Mathematical models , Fish populations -- Mathematical models , Error analysis (Mathematics) , Fishery management -- Statistical methods , Fish stock assessment -- Statistical methods
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5571 , http://hdl.handle.net/10962/d1002811
- Description: In fisheries, advice for the management of fish populations is based upon management quantities that are estimated by stock assessment models. Fisheries stock assessment is a process in which data collected from a fish population are used to generate a model which enables the effects of fishing on a stock to be quantified. This study determined the effects of various data sources, assumptions, error scenarios and sample sizes on the accuracy with which the age-structured production model and the Schaefer model (the assessment models) were able to estimate key management quantities for a fish resource similar to the Cape hakes (Merluccius capensis and M. paradoxus). An age-structured production model was used as the operating model to simulate hypothetical fish resource population dynamics for which management quantities could be determined by the assessment models (a minimal sketch of the Schaefer dynamics follows this record). Different stocks were simulated with various harvest rate histories. These harvest rates produced Downhill trip data, where harvest rates increase over time until the resource is close to collapse, and Good contrast data, where the harvest rate increases over time until the resource is at less than half of its exploitable biomass and then decreases, allowing the resource to rebuild. The accuracy of the assessment models was determined when data were drawn from the operating model with various combinations of error. The age-structured production model was more accurate at estimating maximum sustainable yield, maximum sustainable yield level and the maximum sustainable yield ratio; the Schaefer model gave more accurate estimates of Depletion and Total Allowable Catch. While the assessment models were able to estimate management quantities using Downhill trip data, the estimates improved significantly when the models were tuned with Good contrast data. When autocorrelation in the spawner-recruit curve was not accounted for by the deterministic assessment model, inaccuracy in parameter estimates was high. The assessment model management quantities were not greatly affected by multinomial ageing error in the catch-at-age matrices at a sample size of 5000 otoliths. Assessment model estimates were closer to their true values when log-normal errors were assumed in the catch-at-age matrix, even when the true underlying errors were multinomial; however, the multinomial had smaller coefficients of variation at all sample sizes (between 1000 and 10000 otoliths aged). It was recommended that the assessment model be chosen based on the management quantity of interest. When the underlying error is multinomial, the weighted log-normal likelihood function should be used in the catch-at-age matrix to obtain accurate parameter estimates; however, the multinomial likelihood should be used to minimise the coefficient of variation. Investigation into correcting for autocorrelation in the stock-recruitment relationship should be carried out, as it had a large effect on the accuracy of management quantities.
- Full Text:
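Editorial aside: the Schaefer surplus-production model named above has simple discrete dynamics, sketched below under stated assumptions; r, K and the ramped harvest-rate history are hypothetical and merely mimic the "Downhill trip" pattern, not the study's operating model.

```python
# Schaefer surplus-production dynamics projected under a harvest history.
import numpy as np

def schaefer(r, K, B0, harvest_rates):
    """Project B_{t+1} = B_t + r*B_t*(1 - B_t/K) - C_t with C_t = h_t*B_t."""
    B, catches = B0, []
    for h in harvest_rates:
        C = h * B                          # catch at this harvest rate
        B = max(B + r * B * (1 - B / K) - C, 1e-9)  # keep biomass positive
        catches.append(C)
    return B, catches

r, K = 0.4, 1000.0                         # hypothetical growth and capacity
msy = r * K / 4                            # Schaefer maximum sustainable yield
# a "Downhill trip": harvest rate ramps up toward near-collapse
B_final, catches = schaefer(r, K, B0=K,
                            harvest_rates=np.linspace(0.05, 0.5, 30))
print(msy, B_final)
```

Fitting this model to a simulated catch series is, in essence, what the Schaefer assessment model does when estimating quantities such as maximum sustainable yield and depletion.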