Extreme value theory with applications in finance
- Authors: Matshaya, Aphelele
- Date: 2024-10-11
- Subjects: Uncatalogued
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/465047 , vital:76568
- Description: The development and implementation of extreme value theory models have been very significant as they demonstrate an application of statistics that is very much needed in the analysis of extreme events in a wide range of industries, and more recently the cryptocurrency industry. The crypto industry is booming as the phenomenon of cryptocurrencies is spreading worldwide and constantly drawing the attention of investors, the media, as well as financial institutions. Cryptocurrencies are highly volatile assets whose price fluctuations continually lead to the loss of millions in a variety of currencies in the market. In this thesis, the extreme behaviour in the tail of the distribution of returns of Bitcoin will be examined. High-frequency Bitcoin data spanning periods before as well as after the COVID-19 pandemic will be utilised. The Peaks-over-Threshold method will be used to build models based on the generalised Pareto distribution, and both positive returns and negative returns will be modelled. Several techniques to select appropriate thresholds for the models are explored, and the goodness-of-fit of the models is assessed to determine the extent to which extreme value theory can model Bitcoin returns sufficiently. The analysis is extended and performed on Bitcoin data from a different crypto exchange to ensure model robustness is achieved. Using bivariate extreme value theory, a Gumbel copula is fitted by the method of maximum likelihood with censored data to model the dynamic relationship between Bitcoin returns and trading volumes at the extreme tails. The extreme dependence and correlation structures will be analysed using tail dependence coefficients and the related extreme correlation coefficients. All computations are executed in R and the results are recorded in tabular and graphical formats. Tail-related measures of risk, namely Value-at-Risk and Expected Shortfall, are estimated from the extreme value models. Backtesting procedures are performed on the results from the risk models. A comparison between the negative returns of Bitcoin and those of Gold is carried out to determine which is the less risky asset to invest in during extreme market conditions. Extreme risk is calculated using the same extreme value approach and the results show that Bitcoin is riskier than Gold. , Thesis (MSc) -- Faculty of Science, Statistics, 2024
- Full Text:
- Date Issued: 2024-10-11
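A minimal base-R sketch of the Peaks-over-Threshold workflow described above: fit the generalised Pareto distribution to threshold exceedances by maximum likelihood, then read off Value-at-Risk and Expected Shortfall from the standard POT formulas. The simulated returns, the 95% threshold and the 99% risk level are assumptions made for illustration; this is not the thesis's code.

```r
# Minimal Peaks-over-Threshold sketch (illustrative only; simulated returns).
set.seed(1)
returns <- rt(5000, df = 4) * 0.02                 # placeholder for Bitcoin log-returns
losses  <- -returns                                # study the negative-return (loss) tail

u   <- as.numeric(quantile(losses, 0.95))          # threshold: 95th percentile (a common heuristic)
y   <- losses[losses > u] - u                      # exceedances over the threshold
n   <- length(losses); n_u <- length(y)

# Negative log-likelihood of the generalised Pareto distribution (shape xi != 0, scale beta)
gpd_nll <- function(par) {
  xi <- par[1]; beta <- par[2]
  if (beta <= 0) return(Inf)
  z <- 1 + xi * y / beta
  if (any(z <= 0)) return(Inf)
  n_u * log(beta) + (1 / xi + 1) * sum(log(z))
}
fit <- optim(c(0.1, sd(y)), gpd_nll)
xi  <- fit$par[1]; beta <- fit$par[2]

# Tail risk measures at level p from the fitted GPD (standard POT formulas, valid for xi < 1)
p   <- 0.99
VaR <- u + (beta / xi) * (((1 - p) / (n_u / n))^(-xi) - 1)
ES  <- VaR / (1 - xi) + (beta - xi * u) / (1 - xi)
round(c(shape = xi, scale = beta, VaR = VaR, ES = ES), 4)
```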
Statistical classification, an application to credit default
- Authors: Sikhakhane, Anele Gcina
- Date: 2024-10-11
- Subjects: Uncatalogued
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/465069 , vital:76570
- Description: Statistical learning has been used in both industry and academia to create credit scoring models. These models are used to predict who might default on their loan repayments, thus minimizing the risk financial institutions face. In this study, six traditional classifiers and one more recent classifier, namely kNN, LDA, CART, RF, AdaBoost, XGBoost and SynBoost, were used to predict who might default on their loans. The data set used in this study was imbalanced; thus, sampling and performance evaluation techniques were investigated and used to balance the class distribution and assess the classifiers' performance. In addition to the standard variables and data set, new variables called synthetic variables and synthetic data sets were produced, investigated and used to predict who might default on their loans. This study found that the synthetic data set had strong predictive power and that sampling methods negatively affected the classifiers' performance. The best-performing classifier was XGBoost, with an AUC score of 0.7732. , Thesis (MSc) -- Faculty of Science, Statistics, 2024
- Full Text:
- Date Issued: 2024-10-11
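A hedged sketch of the kind of gradient-boosted default classifier the abstract reports as the best performer, assuming the xgboost package is available; the simulated feature matrix stands in for the study's loan data set, and the AUC is computed directly from the ranks of the predicted scores rather than with a dedicated package.

```r
# Illustrative XGBoost default-prediction sketch (simulated data, not the study's data set).
library(xgboost)

set.seed(42)
n <- 5000; p <- 8
X <- matrix(rnorm(n * p), n, p)
y <- rbinom(n, 1, plogis(-2 + X[, 1] - 0.5 * X[, 2]))   # minority "default" class, roughly 15%

idx   <- sample(n, 0.7 * n)                             # simple 70/30 train-test split
bst   <- xgboost(data = X[idx, ], label = y[idx],
                 objective = "binary:logistic",
                 nrounds = 100, max_depth = 3, eta = 0.1, verbose = 0)
score <- predict(bst, X[-idx, ])

# AUC as the probability that a random defaulter outscores a random non-defaulter
auc <- function(score, label) {
  r  <- rank(score)
  n1 <- sum(label == 1); n0 <- sum(label == 0)
  (sum(r[label == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}
auc(score, y[-idx])
```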
Suspicious activity reports: Enhancing the detection of terrorist financing and suspicious transactions in migrant remittances
- Authors: Mbiva, Stanley Munamato
- Date: 2024-10-11
- Subjects: Uncatalogued
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/465058 , vital:76569
- Description: Migrant remittances have become an important factor in poverty alleviation and microeconomic development in low-income nations. Global migrant remittances are expected to exceed US $630 billion by 2023, according to the World Bank. In addition to offering an alternate source of income that supplements the recipient’s household earnings, they are less likely to be affected by global economic downturns, ensuring stability and a consistent stream of revenue. However, the ease of global migrant remittance financial transfers has attracted the risk of being abused by terrorist organizations to quickly move and conceal operating cash, hence facilitating terrorist financing. This study aims to develop an unsupervised machine-learning model capable of detecting suspicious financial transactions associated with terrorist financing in migrant remittances. The data used in this study came from a World Bank survey of migrant remitters in Belgium. To understand the natural structures and groupings in the dataset, agglomerative hierarchical clustering and k-prototype clustering techniques were employed. This established the number of clusters present in the dataset, making it possible to compare individual migrant remittances with their peers. A Structural Equation Model (SEM) and a Local Outlier Factor - Isolation Forest (LOF-IF) algorithm were applied to analyze and detect suspicious transactions in the dataset. A traditional Rule-Based Method (RBM) was also created as a benchmark algorithm against which model performance was evaluated. The results show that the SEM model classifies a significantly high number of transactions as suspicious, making it prone to producing false positives. Finally, the study applied the proposed ensemble outlier detection model to detect suspicious transactions in the same data set. The proposed ensemble model utilized an Isolation Forest (IF) for pruning and a Local Outlier Factor (LOF) to detect local outliers. The model performed exceptionally well, detecting over 90% of suspicious transactions in the testing data set during model cross-validation. , Thesis (MSc) -- Faculty of Science, Statistics, 2024
- Full Text:
- Date Issued: 2024-10-11
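The peer-grouping step described above (agglomerative hierarchical clustering to establish the natural groups before outlier scoring) can be illustrated entirely in base R. The simulated remittance features and the choice of Ward linkage are assumptions for the sketch; the k-prototype clustering used for mixed-type variables and the LOF-IF ensemble itself are not reproduced here.

```r
# Agglomerative hierarchical clustering sketch for peer-grouping remittances (simulated data).
set.seed(7)
remit <- data.frame(
  amount    = c(rlnorm(150, 5, 0.4), rlnorm(50, 7, 0.4)),   # transaction size
  frequency = c(rpois(150, 2), rpois(50, 8))                # transfers per month
)

Z  <- scale(remit)                          # standardise so both variables contribute equally
hc <- hclust(dist(Z), method = "ward.D2")   # agglomerative clustering, Ward linkage
plot(hc, labels = FALSE, main = "Dendrogram of simulated remittance profiles")

groups <- cutree(hc, k = 2)                 # cut at the number of clusters suggested by the tree
table(groups)

# Within each peer group, transactions far from the group centre are candidates for
# outlier scoring (e.g. by LOF) in the next stage of the ensemble.
centres <- apply(Z, 2, function(col) tapply(col, groups, mean))   # cluster centroids
dev     <- sqrt(rowSums((Z - centres[groups, ])^2))
head(order(dev, decreasing = TRUE))         # most atypical transactions within their peer group
```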
The application of statistical classification to predict sovereign default
- Authors: Vele, Rendani
- Date: 2023-10-13
- Subjects: Uncatalogued
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/424563 , vital:72164
- Description: When considering sovereign loans, it is imperative for a financial institution to have a good understanding of the sovereign it is transacting with. Defaults can occur if proper evaluation steps are not taken. To aid in the prediction of potential sovereign defaults, financial institutions, together with grading companies, quantify the risk associated with issuing a loan to a sovereign by developing sovereign default early warning systems (EWS). Various classification models are considered in this study to develop sovereign default EWS. These models are the binary logit, probit, Bayesian additive regression trees, and artificial neural networks. This study investigates the predictive performance of these classification techniques. Sovereign information is not readily available, so missing data techniques are considered in order to counter the data availability issue. Sovereign defaults are rare, which results in an imbalance in the distribution of the binary dependent variable. To assess data sets with such characteristics, metrics for imbalanced data are considered for model performance comparison. From the findings, the Bayesian additive regression trees technique generated better results than the other techniques when considering a basic data analysis. Moreover, when cross-validation was considered, the neural network technique performed best. In addition, regional models had better results than the global model when considering model predictive capability. The significance of this study is the development of sovereign default prediction models using various classification techniques, enhancing previous literature and analysis through the application of Bayesian additive regression trees. , Thesis (MSc) -- Faculty of Science, Statistics, 2023
- Full Text:
- Date Issued: 2023-10-13
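A minimal base-R sketch of the two simplest classifiers in the study, binary logit and probit early-warning models, evaluated with a metric suited to rare defaults. The simulated covariates and the balanced-accuracy metric are illustrative assumptions, not the thesis's variable set or chosen metrics; BART and the neural networks are not reproduced.

```r
# Logit and probit sovereign-default sketches on simulated, imbalanced data.
set.seed(3)
n <- 2000
d <- data.frame(debt_gdp = rnorm(n, 60, 20), reserves = rnorm(n, 15, 5))
d$default <- rbinom(n, 1, plogis(-4 + 0.04 * d$debt_gdp - 0.08 * d$reserves))   # rare event

logit_fit  <- glm(default ~ debt_gdp + reserves, family = binomial(link = "logit"),  data = d)
probit_fit <- glm(default ~ debt_gdp + reserves, family = binomial(link = "probit"), data = d)

# Balanced accuracy (mean of sensitivity and specificity) is less misleading than raw
# accuracy when defaults are rare; the cut-off is set at the observed default rate.
balanced_accuracy <- function(prob, truth, cut) {
  pred <- as.integer(prob > cut)
  sens <- sum(pred == 1 & truth == 1) / sum(truth == 1)
  spec <- sum(pred == 0 & truth == 0) / sum(truth == 0)
  (sens + spec) / 2
}
cut <- mean(d$default)
c(logit  = balanced_accuracy(fitted(logit_fit),  d$default, cut),
  probit = balanced_accuracy(fitted(probit_fit), d$default, cut))
```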
Enhancing the use of large-scale assessment data in South Africa: Multidimensional Item Response Theory
- Authors: Lahoud, Tamlyn Ann
- Date: 2023-03-29
- Subjects: Differential item functioning , Item response theory , Mathematical ability Testing , Educational tests and measurements , Multidimensional scaling
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/422389 , vital:71938
- Description: This research aims to enhance the use of large-scale assessment data in South Africa by evaluating assessment validity by means of multidimensional item response theory and its associated statistical techniques, which have been severely underutilised. Data from the 2014 administration of the grade 6 Mathematics annual national assessment were used in this study, and all analyses were conducted using the mirt package in R. A two-parameter logistic item response theory model was developed, which indicated a clear alignment between the model parameters and the difficulty specifications of the test. The test was found to favour learners within the central band of the ability scale. An exploratory five-dimensional item response theory model was then developed to investigate the alignment with the test specifications as evidence for construct validity. Significant discrepancies between the factor structure and the specifications of the test were identified. Notably, the results suggest that some items measured an ability that was not purely mathematical, such as reading ability, which would distort the test’s representation of Mathematics ability, disadvantage learners with lower English literacy, and reduce the construct validity of the test. Further validity evidence was obtained by differential item functioning analyses, which revealed that fourteen items function differently for learners from different provinces. Although possible reasons for the presence of differential item functioning among provinces were not discussed, its presence provided sufficient evidence against the validity of the test. In conclusion, multidimensional item response theory provided an effective and rigorous approach to establishing the validity of a large-scale assessment. To avoid the pitfalls of the annual national assessments, it is recommended that these multidimensional item response theory and differential item functioning techniques be utilised for the development and evaluation of future national assessment instruments in South Africa. , Thesis (MSc) -- Faculty of Science, Statistics, 2023
- Full Text:
- Date Issued: 2023-03-29
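A sketch of the mirt workflow described above: a unidimensional two-parameter logistic model followed by an exploratory multidimensional model. It assumes the mirt package is installed and uses simulated dichotomous responses in place of the 2014 annual national assessment data; the item count and the two-factor exploratory model (the thesis fits five dimensions) are placeholders.

```r
# Illustrative mirt workflow (simulated 2PL responses, not the ANA data; mirt assumed installed).
library(mirt)

set.seed(11)
n_person <- 1000; n_item <- 20
theta <- rnorm(n_person)                                   # latent mathematics ability
a     <- rlnorm(n_item, 0.2, 0.3)                          # item discriminations
b     <- rnorm(n_item)                                     # item difficulties
prob  <- plogis(outer(theta, a) - rep(a * b, each = n_person))   # 2PL: P = logit^-1(a*(theta - b))
resp  <- matrix(rbinom(length(prob), 1, prob), n_person, n_item)
colnames(resp) <- paste0("item", 1:n_item)

# Unidimensional two-parameter logistic model
uni <- mirt(resp, model = 1, itemtype = "2PL", verbose = FALSE)
coef(uni, IRTpars = TRUE, simplify = TRUE)$items           # discrimination (a) and difficulty (b)

# Exploratory multidimensional model (two factors here for speed; the thesis fits five)
multi <- mirt(resp, model = 2, itemtype = "2PL", verbose = FALSE)
summary(multi, rotate = "oblimin")                         # rotated loadings vs. the test blueprint
```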
Statistical and Mathematical Learning: an application to fraud detection and prevention
- Authors: Hamlomo, Sisipho
- Date: 2022-04-06
- Subjects: Credit card fraud , Bootstrap (Statistics) , Support vector machines , Neural networks (Computer science) , Decision trees , Machine learning , Cross-validation , Imbalanced data
- Language: English
- Type: Master's thesis , text
- Identifier: http://hdl.handle.net/10962/233795 , vital:50128
- Description: Credit card fraud is an ever-growing problem. There has been a rapid increase in the rate of fraudulent activities in recent years, resulting in considerable losses to several organizations, companies, and government agencies. Many researchers have focused on detecting fraudulent behaviours early using advanced machine learning techniques. However, credit card fraud detection is not a straightforward task since fraudulent behaviours usually differ for each attempt and the dataset is highly imbalanced, that is, the frequency of non-fraudulent cases outnumbers the frequency of fraudulent cases. In the case of the European credit card dataset, we have a ratio of approximately one fraudulent case to five hundred and seventy-eight non-fraudulent cases. Different methods were implemented to overcome this problem, namely random undersampling, one-sided sampling, SMOTE combined with Tomek links, and parameter tuning. Predictive classifiers, namely logistic regression, decision trees, k-nearest neighbours, support vector machines and multilayer perceptrons, are applied to predict whether a transaction is fraudulent or non-fraudulent. The models' performance is evaluated based on recall, precision, F1-score, the area under the receiver operating characteristic curve, the geometric mean and the Matthews correlation coefficient. The results showed that the logistic regression classifier performed better than the other classifiers except when the dataset was oversampled. , Thesis (MSc) -- Faculty of Science, Statistics, 2022
- Full Text:
- Date Issued: 2022-04-06
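The imbalance handling and evaluation metrics listed above can be illustrated with random undersampling and a logistic regression classifier in base R. The simulated data only mimic the roughly 1:578 imbalance of the European credit card data set, and SMOTE with Tomek links is not reproduced here.

```r
# Random undersampling + logistic regression on simulated, highly imbalanced fraud data.
set.seed(5)
n <- 50000
X <- data.frame(v1 = rnorm(n), v2 = rnorm(n), amount = rlnorm(n, 3, 1))
y <- rbinom(n, 1, plogis(-7.5 + 1.5 * X$v1))        # roughly 1 fraud per several hundred legitimate cases

train <- sample(n, 0.7 * n)
d_tr  <- cbind(X[train, ], fraud = y[train])

# Random undersampling: keep every fraud, sample an equal number of non-frauds
fraud_idx <- which(d_tr$fraud == 1)
legit_idx <- sample(which(d_tr$fraud == 0), length(fraud_idx))
d_bal     <- d_tr[c(fraud_idx, legit_idx), ]

fit   <- glm(fraud ~ v1 + v2 + amount, family = binomial, data = d_bal)
prob  <- predict(fit, newdata = X[-train, ], type = "response")
pred  <- as.integer(prob > 0.5)
truth <- y[-train]

TP <- sum(pred == 1 & truth == 1); FP <- sum(pred == 1 & truth == 0)
FN <- sum(pred == 0 & truth == 1); TN <- sum(pred == 0 & truth == 0)
recall    <- TP / (TP + FN)
precision <- TP / (TP + FP)
f1        <- 2 * precision * recall / (precision + recall)
gmean     <- sqrt(recall * TN / (TN + FP))
mcc       <- (TP * TN - FP * FN) /
  sqrt(prod(as.numeric(c(TP + FP, TP + FN, TN + FP, TN + FN))))   # as.numeric avoids integer overflow
round(c(recall = recall, precision = precision, F1 = f1, Gmean = gmean, MCC = mcc), 3)
```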
A modelling approach to the analysis of complex survey data
- Authors: Dlangamandla, Olwethu
- Date: 2021-10-29
- Subjects: Sampling (Statistics) , Linear models (Statistics) , Multilevel models (Statistics) , Logistic regression analysis , Complex survey data
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10962/192955 , vital:45284
- Description: Surveys are an essential tool for collecting data, and most surveys use complex sampling designs to collect the data. Complex sampling designs are used mainly to enhance representativeness in the sample by accounting for the underlying structure of the population. This often results in data that are non-independent and clustered. Ignoring complex design features such as clustering, stratification, multistage sampling and unequal probability sampling may result in inaccurate and incorrect inference. An overview of, and the difference between, design-based and model-based approaches to inference for complex survey data is discussed. This study adopts a model-based approach. The objective of this study is to discuss and describe the modelling approach to analysing complex survey data. This is specifically done by introducing the principal inference methods under which data from complex surveys may be analysed. In particular, discussions on the theory and methods of model fitting for the analysis of complex survey data are presented. We begin by discussing unique features of complex survey data and explore appropriate methods of analysis that account for the complexity inherent in the survey data. We also explore the widely applied logistic regression modelling of binary data in a complex sample survey context. In particular, four forms of logistic regression models are fitted. These models are generalized linear models, multilevel models, mixed effects models and generalized linear mixed models. Simulated complex survey data are used to illustrate the methods and models. Various R packages are used for the analysis. The results presented and discussed in this thesis indicate that a logistic mixed model with first- and second-level predictors has a better fit compared to a logistic mixed model with first-level predictors only. In addition, a logistic multilevel model with first- and second-level predictors and nested random effects provides a better fit to the data compared to other fitted logistic multilevel models. Similar results were obtained from fitting a generalized logistic mixed model with first- and second-level predictor variables and a generalized linear mixed model with first- and second-level predictors and nested random effects. , Thesis (MSc) -- Faculty of Science, Statistics, 2021
- Full Text:
- Date Issued: 2021-10-29
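A brief sketch of the two-level logistic mixed model the abstract singles out, a random intercept with first- and second-level predictors, assuming the lme4 package; the clustered data are simulated and do not follow the thesis's simulation design.

```r
# Two-level logistic mixed model on simulated clustered survey data (lme4 assumed installed).
library(lme4)

set.seed(2)
n_clusters <- 50; m <- 40                          # 50 primary sampling units, 40 respondents each
cluster    <- rep(1:n_clusters, each = m)
u          <- rnorm(n_clusters, sd = 0.8)          # cluster-level random intercepts
x1         <- rnorm(n_clusters * m)                # first-level (respondent) predictor
x2         <- rep(rnorm(n_clusters), each = m)     # second-level (cluster) predictor
y          <- rbinom(n_clusters * m, 1, plogis(-0.5 + 0.7 * x1 + 0.5 * x2 + u[cluster]))
d          <- data.frame(y, x1, x2, cluster = factor(cluster))

# Random-intercept logistic model with first- and second-level predictors
fit  <- glmer(y ~ x1 + x2 + (1 | cluster), family = binomial, data = d)
summary(fit)

# Compare with the model using only the first-level predictor, e.g. by AIC
fit1 <- glmer(y ~ x1 + (1 | cluster), family = binomial, data = d)
AIC(fit1, fit)
```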
The application of Classification Trees in the Banking Sector
- Authors: Mtwa, Sithayanda
- Date: 2021-04
- Subjects: To be added
- Language: English
- Type: thesis , text , Masters , MSc
- Identifier: http://hdl.handle.net/10962/178514 , vital:42946
- Description: Access restricted until April 2026. , Thesis (MSc) -- Faculty of Science, Statistics, 2021
- Full Text:
- Date Issued: 2021-04
Bayesian accelerated life tests for the Weibull distribution under non-informative priors
- Authors: Mostert, Philip
- Date: 2020
- Subjects: Accelerated life testing -- Statistical methods , Accelerated life testing -- Mathematical models , Failure time data analysis , Bayesian statistical decision theory , Monte Carlo method , Weibull distribution
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/172181 , vital:42173
- Description: In a competitive world where products are designed to last for long periods of time, obtaining time-to-failure data is both difficult and costly. Hence, for products with high reliability, accelerated life testing is required to obtain relevant life-data quickly. This is done by placing the products under higher-than-use stress levels, thereby causing the products to fail prematurely. Part of the analysis of accelerated life-data requires a life distribution that describes the lifetime of a product at a given stress level and a life-stress relationship – which is some function that describes the way in which the life distribution changes across different stress levels. In this thesis it is assumed that the underlying life distribution is the well-known Weibull distribution, with shape parameter constant over all stress levels and scale parameter as a log-linear function of stress. The primary objective of this thesis is to obtain estimates from Bayesian analysis, and this thesis considers five types of non-informative prior distributions: Jeffreys’ prior, reference priors, the maximal data information prior, the uniform prior and probability matching priors. Since the associated posterior distributions under all the derived non-informative priors are of an unknown form, the propriety of the posterior distributions is assessed to ensure admissible results. For comparison purposes, estimates obtained via the method of maximum likelihood are also considered. Finding these estimates requires solving non-linear equations, hence the Newton-Raphson algorithm is used to obtain estimates. A simulation study based on the time-to-failure of accelerated data is conducted to compare results between maximum likelihood and Bayesian estimates. As a result of the Bayesian posterior distributions being analytically intractable, two methods to obtain Bayesian estimates are considered: Markov chain Monte Carlo methods and Lindley’s approximation technique. In the simulation study the posterior means and the root mean squared error values of the estimates under the symmetric squared error loss function and two asymmetric loss functions, the LINEX loss function and the general entropy loss function, are considered. Furthermore, the coverage rates for the Bayesian Markov chain Monte Carlo and maximum likelihood estimates are found and are compared by their average interval lengths. A case study using a dataset based on accelerated time-to-failure of an insulating fluid is considered. The fit of these data to the Weibull distribution is studied and is compared to that of other popular life distributions. A full simulation study is conducted to illustrate convergence of the proper posterior distributions. Both maximum likelihood and Bayesian estimates are found for these data. The deviance information criterion is used to compare Bayesian estimates between the prior distributions. The case study is concluded by finding reliability estimates of the data at use-stress levels.
- Full Text:
- Date Issued: 2020
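A compact sketch of the accelerated-life-test model described above: Weibull lifetimes with a shape parameter constant across stress levels and a scale parameter that is log-linear in stress. Maximisation is done here with optim rather than Newton-Raphson, and the Metropolis sampler uses a flat prior as a stand-in for the thesis's derived non-informative priors; the data, stress levels and tuning constants are assumptions.

```r
# Weibull accelerated-life-test sketch: constant shape, scale log-linear in stress (simulated data).
set.seed(9)
stress <- rep(c(1.0, 1.5, 2.0), each = 30)                 # coded stress levels
k_true <- 1.8; theta_true <- exp(5 - 1.2 * stress)         # true shape and scale
time   <- rweibull(length(stress), shape = k_true, scale = theta_true)

loglik <- function(par) {                                  # par = (log k, beta0, beta1)
  k  <- exp(par[1])
  th <- exp(par[2] + par[3] * stress)                      # log-linear life-stress relationship
  sum(dweibull(time, shape = k, scale = th, log = TRUE))
}

# Maximum likelihood via optim (the thesis solves the score equations with Newton-Raphson)
mle <- optim(c(0, 4, -1), function(p) -loglik(p))
mle$par                                                    # estimates of (log k, beta0, beta1)

# Random-walk Metropolis under a flat prior on (log k, beta0, beta1):
# posterior proportional to the likelihood (a stand-in for the thesis's derived priors)
n_iter <- 5000; draws <- matrix(NA, n_iter, 3)
cur <- mle$par; cur_ll <- loglik(cur)
for (i in 1:n_iter) {
  prop    <- cur + rnorm(3, sd = 0.05)
  prop_ll <- loglik(prop)
  if (log(runif(1)) < prop_ll - cur_ll) { cur <- prop; cur_ll <- prop_ll }
  draws[i, ] <- cur
}
apply(draws[-(1:1000), ], 2, quantile, probs = c(0.025, 0.5, 0.975))   # posterior summaries
```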
Default in payment, an application of statistical learning techniques
- Authors: Gcakasi, Lulama
- Date: 2020
- Subjects: Credit -- South Africa -- Risk assessment , Risk management -- Statistical methods -- South Africa , Credit -- Management -- Statistical methods , Commercial statistics
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/141547 , vital:37984
- Description: The ability of financial institutions to detect whether a customer will default on their credit card payment is essential for their profitability. To that effect, financial institutions have credit scoring systems in place to estimate the credit risk associated with a customer. Various classification models, such as k-nearest neighbours, logistic regression and classification trees, are used to develop credit scoring systems. This study aims to assess the performance of different classification models on the prediction of credit card payment default. Credit data are usually of high dimension, and as a result dimension reduction techniques, namely principal component analysis and linear discriminant analysis, are used in this study as a means to improve model performance. Two classification models are used, namely neural networks and support vector machines. Model performance is evaluated using accuracy and the area under the curve (AUC). The neural network classifier performed better than the support vector machine classifier as it produced higher accuracy rates and AUC values. Dimension reduction techniques were not effective in improving model performance but did result in less computationally expensive models.
- Full Text:
- Date Issued: 2020
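A sketch of the dimension-reduction-then-classify pipeline evaluated above, using principal component analysis from base R and a single-hidden-layer network from the nnet package (assumed installed); the simulated data, the five retained components and the network size are illustrative choices.

```r
# PCA followed by a small neural network classifier (nnet assumed installed; simulated data).
library(nnet)

set.seed(8)
n <- 3000; p <- 20
X <- matrix(rnorm(n * p), n, p)
colnames(X) <- paste0("x", 1:p)
y <- factor(rbinom(n, 1, plogis(-1 + X[, 1] + 0.5 * X[, 2])), labels = c("paid", "default"))

train <- sample(n, 0.7 * n)
pca   <- prcomp(X[train, ], center = TRUE, scale. = TRUE)   # fit PCA on the training data only
k     <- 5                                                  # retain the first five components
Ztr   <- data.frame(pca$x[, 1:k], class = y[train])
Zte   <- data.frame(predict(pca, X[-train, ])[, 1:k])

net  <- nnet(class ~ ., data = Ztr, size = 4, decay = 0.01, maxit = 300, trace = FALSE)
prob <- as.vector(predict(net, Zte, type = "raw"))          # probability of the "default" level
mean((prob > 0.5) == (y[-train] == "default"))              # test-set accuracy
```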
Bayesian hierarchical modelling with application in spatial epidemiology
- Authors: Southey, Richard Robert
- Date: 2018
- Subjects: Bayesian statistical decision theory , Spatial analysis (Statistics) , Medical mapping , Pericarditis , Mortality Statistics
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/59489 , vital:27617
- Description: Disease mapping and spatial statistics have become an important part of modern-day statistics and have increased in popularity as the methods and techniques have evolved. The application of disease mapping is not confined to the analysis of diseases, as other applications of disease mapping can be found in the econometric and financial disciplines. This thesis will consider two data sets. These are the Georgia oral cancer 2004 data set and the South African acute pericarditis 2014 data set. The Georgia data set will be used to assess the hyperprior sensitivity of the precision for the uncorrelated heterogeneity and correlated heterogeneity components in a convolution model. The correlated heterogeneity will be modelled by a conditional autoregressive prior distribution and the uncorrelated heterogeneity will be modelled with a zero-mean Gaussian prior distribution. The sensitivity analysis will be performed using three models with a conjugate, Jeffreys' and a fixed parameter prior for the hyperprior distribution of the precision of the uncorrelated heterogeneity component. A simulation study will be done to compare four prior distributions, namely the conjugate, Jeffreys', probability matching and divergence priors. The three models will be fitted in WinBUGS® using a Bayesian approach. The results of the three models will be presented in the form of disease maps, figures and tables. The results show that the hyperprior of the precision for the uncorrelated heterogeneity and correlated heterogeneity components is sensitive to changes and will produce different results depending on the specification of the hyperprior distribution of the precision for the two components in the model. The South African data set will be used to examine whether there is a difference between the proper conditional autoregressive prior and the intrinsic conditional autoregressive prior for the correlated heterogeneity component in a convolution model. Two models will be fitted in WinBUGS® for this comparison. Both the hyperpriors of the precision for the uncorrelated heterogeneity and correlated heterogeneity components will be modelled using a Jeffreys' prior distribution. The results show that there is no significant difference between the results of the model with a proper conditional autoregressive prior and the model with an intrinsic conditional autoregressive prior for the South African data, although there are a few disadvantages of using a proper conditional autoregressive prior for the correlated heterogeneity, which are stated in the conclusion.
- Full Text:
- Date Issued: 2018
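The convolution model at the centre of both analyses can be written out explicitly. The notation below is the standard Besag-York-Mollié formulation with uncorrelated heterogeneity and an intrinsic CAR prior for the correlated heterogeneity; it is an editorial illustration of the model class rather than a transcription of the thesis's WinBUGS code, and the hyperpriors placed on the precisions tau_u and tau_v are exactly the quantities whose sensitivity the thesis studies.

```latex
% Besag-York-Mollie convolution model: observed counts O_i, expected counts E_i,
% uncorrelated heterogeneity v_i and correlated (CAR) heterogeneity u_i for area i.
\begin{align*}
O_i \mid \mu_i &\sim \mathrm{Poisson}\!\left(E_i \, e^{\mu_i}\right), &
\mu_i &= \alpha + u_i + v_i, &
v_i &\sim N\!\left(0, \tau_v^{-1}\right), \\
u_i \mid u_{j \neq i} &\sim N\!\left(\tfrac{1}{n_i}\textstyle\sum_{j \sim i} u_j,\; \tfrac{1}{n_i \tau_u}\right) &
&\text{(intrinsic CAR; } j \sim i \text{ indexes the } n_i \text{ neighbours of area } i\text{)}
\end{align*}
```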
Generalized linear models, with applications in fisheries research
- Authors: Sidumo, Bonelwa
- Date: 2018
- Subjects: Western mosquitofish , Analysis of variance , Fisheries Catch effort South Africa Sundays River (Eastern Cape) , Linear models (Statistics) , Multilevel models (Statistics) , Experimental design
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/61102 , vital:27975
- Description: Gambusia affinis (G. affinis) is an invasive fish species found in the Sundays River Valley of the Eastern Cape, South Africa. The relative abundance and population dynamics of G. affinis were quantified in five interconnected impoundments within the Sundays River Valley. This study utilised a G. affinis data set to demonstrate various classical ANOVA models. Generalized linear models were used to standardize catch per unit effort (CPUE) estimates and to determine environmental variables which influenced the CPUE. Based on the generalized linear model results, dam age, mean temperature, Oreochromis mossambicus abundance and Glossogobius callidus abundance had a significant effect on the G. affinis CPUE. The Albany Angling Association collected data during fishing tag-and-release events. These data were utilized to demonstrate repeated measures designs. Mixed-effects models provide a powerful and flexible tool for analyzing clustered data such as repeated measures data and nested data; hence they have become tremendously popular as a framework for the analysis of bio-behavioral experiments. The results show that the mixed-effects methods proposed in this study are more efficient than those based on generalized linear models. These data were better modeled with mixed-effects models due to their flexibility in handling missing data.
- Full Text:
- Date Issued: 2018
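The CPUE-standardisation step can be sketched with a base R generalised linear model. The gamma family with a log link, the simulated covariates and the fixed covariate values used for the standardised index are assumptions for illustration; the abstract does not state the error structure that was actually used.

```r
# CPUE standardisation sketch with a gamma GLM (simulated data; assumed error structure).
set.seed(4)
n   <- 400
dat <- data.frame(
  dam_age   = sample(3:25, n, replace = TRUE),
  mean_temp = rnorm(n, 22, 3),
  year      = factor(sample(2013:2016, n, replace = TRUE))
)
mu       <- exp(0.5 + 0.03 * dat$dam_age + 0.05 * (dat$mean_temp - 22))
dat$cpue <- rgamma(n, shape = 2, rate = 2 / mu)            # positive catch rates with mean mu

fit <- glm(cpue ~ dam_age + mean_temp + year, family = Gamma(link = "log"), data = dat)
summary(fit)

# Standardised yearly index: predicted CPUE for each year at fixed covariate values
newd <- data.frame(dam_age = 14, mean_temp = 22, year = levels(dat$year))
cbind(newd, index = predict(fit, newdata = newd, type = "response"))
```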
Missing values: a closer look
- Authors: Thorpe, Kerri
- Date: 2017
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/d1017827 , vital:20798
- Description: Problem: In today’s world, missing values are more present than ever. Due to the ever-changing and fast-paced global society in which we live, most business and research data produced around the world contain missing data. This means that locating data which is meticulously precise can be a hard task in itself, but at times may prove essential, as the consequences of making use of incomplete data could be disastrous. The reasons for missing data cropping up in almost all forms of work are numerous and shall be discussed in this dissertation. For example, those being interviewed or polled may choose to simply ignore questions which are posed to them, recording equipment may malfunction or be misplaced, or organisers may not be able to locate the respondent in order to rectify the missing data. Whatever the reasons for data being incomplete, it is necessary to avoid having to use inefficient and incomplete data resulting from the above problems. Therefore, various strategies or methods have been developed in order to handle these missing values. It is important, however, that these strategies or methods are utilised effectively, as missing data treatment can introduce bias into the analysis. This dissertation shall look at these and other problems in more detail by using a data set which consists of records for 581 children who were interviewed in 1990 as part of the National Longitudinal Survey of Youth (NLSY). Approach: As mentioned above, many strategies or methods have been developed in order to deal with missing values. More specifically, traditional methods such as complete case analysis, available case analysis or single imputation are widely used by researchers and shall be discussed herein. Although these methods are simple and easy to implement, they require assumptions about the data that are not often satisfied in practice. Over the years, more up-to-date and relevant methods, such as multiple imputation and maximum likelihood, have been developed. These methods rely on weaker assumptions and have superior statistical properties when compared to the traditional techniques. In this dissertation, these traditional methods shall be reviewed and assessed in SAS and shall be compared to the more modern techniques. Results: The ad hoc techniques for handling missing data such as the complete case and available case methods produce biased parameter estimates when the data are not missing completely at random (MCAR). Single imputation techniques likewise produce biased estimates and result in the underestimation of standard errors. Although the expectation maximisation (EM) algorithm yields unbiased parameter estimates, the lack of convenient standard errors suggests that using this algorithm for hypothesis testing is not a good idea. Multiple imputation, however, yields unbiased parameter estimates and correctly estimates standard errors. Conclusion: Ignoring missing data in any analysis produces biased parameter estimates. Using single imputation to handle missing values is not recommended, as using a single value to replace missing values does not account for the variation that would have been present if the variables were observed. As a result, the variance will be greatly underestimated. The more modern missing data methods such as the EM algorithm and multiple imputation are preferred over the traditional techniques as they require less stringent assumptions and mitigate the downsides of the older methods.
- Full Text:
- Date Issued: 2017
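A short sketch of the multiple-imputation workflow the dissertation favours, written here in R with the mice package rather than in SAS. The variable names imitate a small NLSY-style extract and are placeholders, only one variable is given missing values, and the default predictive mean matching method stands in for whatever settings were actually used.

```r
# Multiple imputation with mice, sketched in R rather than SAS (simulated NLSY-style extract).
library(mice)

set.seed(6)
n <- 581
d <- data.frame(
  anti = rnorm(n),                          # antisocial behaviour score
  self = rnorm(n),                          # self-esteem score
  pov  = rbinom(n, 1, 0.3)                  # poverty indicator
)
d$anti <- d$anti + 0.4 * d$pov
d$self[runif(n) < 0.2 * (1 + d$pov)] <- NA  # self-esteem missing at random, depending on poverty

imp  <- mice(d, m = 5, method = "pmm", printFlag = FALSE)   # five imputed data sets
fits <- with(imp, lm(anti ~ self + pov))                    # analyse each completed data set
pool(fits)                                                  # combine estimates with Rubin's rules
```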
- Authors: Thorpe, Kerri
- Date: 2017
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/d1017827 , vital:20798
- Description: Problem: In today’s world, missing values are more present than ever. Due to the ever-changing and fast paced global society in which we live, most business and research data produced around the world contain missing data. This means that locating data which is meticulously precise can be a hard task in itself, but at times may prove essential as the consequences of making use of incomplete data could be disastrous. The reasons for missing data cropping up in almost all forms of work are numerous and shall be discussed in this dissertation. For example, those being interviewed or polled may choose to simply ignore questions which are posed to them, recording equipment may malfunction or be misplaced, or organisers may not be able to locate the respondent in order to rectify the missing data. Whatever the reasons for data being incomplete, it is necessary to avoid having to use inefficient and incomplete data as a result from the above problems. Therefore, various strategies or methods have been developed in order to handle these missing values. It is important, however, that these strategies or methods are utilised effectively as missing data treatment can introduce bias into the analysis. This dissertation shall look at these and other problems in more detail by using a data set which consists of records for 581 children who were interviewed in 1990 as part of the National Longitudinal Survey of Youth (NLSY). Approach: As mentioned above, many strategies or methods have been developed in order to deal with missing values. More specifically, traditional methods such as complete case analysis, available case analysis or single imputation are widely used by researchers and shall be discussed herein. Although these methods are simple and easy to implement, they require assumptions about the data that are not often satisfied in practice. Over the years, more up to date and relevant methods, such as multiple imputation and maximum likelihood have been developed. These methods rely on weaker assumptions and contain superior statistical properties when compared to the traditional techniques. In this dissertation, these traditional methods shall be reviewed and assessed in SAS and shall be compared to the more modern techniques. Results: The ad hoc techniques for handling missing data such as complete case and available case methods produce biased parameter estimates when the data is not missing completely at random (MCAR). Single imputation techniques likewise produce biased estimates as well as result in the underestimation of standard errors. Although the expectation maximisation (EM) algorithm yields unbiased parameter estimates, the lack of convenient standard errors suggests that using this algorithm for hypothesis testing is not a good idea. Multiple imputation, however, yields unbiased parameter estimates and correctly estimates standard errors. Conclusion: Ignoring missing data in any analysis produces biased parameter estimates. Using single imputation to handle missing values is not recommended, as using a single value to replace missing values does not account for the variation that would have been present if the variables were observed. As a result, the variance will be greatly underestimated. The more modern missing data methods such as the EM algorithm and multiple imputation are preferred over the traditional techniques as they require less stringent assumptions and they also mitigate the downsides of the older methods.
- Full Text:
- Date Issued: 2017
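The bias described in the record above can be shown with a tiny simulation. The sketch below is not the thesis's SAS analysis; it is a hypothetical R example in which an outcome is made missing at random (MAR) with probability depending on an observed covariate, so that the complete-case mean is biased while a single regression imputation repairs the point estimate without reflecting the extra uncertainty (hence the understated standard errors noted above). All parameter values are invented for illustration.

```r
## Minimal sketch (not the thesis's SAS code): complete-case bias under MAR.
set.seed(1)
n <- 5000
x <- rnorm(n)                           # fully observed covariate
y <- 2 + 1.5 * x + rnorm(n)             # outcome of interest, true mean = 2
p_miss <- plogis(1.5 * x)               # missingness probability depends on x (MAR)
y_obs <- ifelse(runif(n) < p_miss, NA, y)

mean(y)                    # full-data benchmark, approximately 2
mean(y_obs, na.rm = TRUE)  # complete-case estimate: biased downwards

## Single (regression) imputation repairs the point estimate, but treating the
## imputed values as if observed ignores imputation uncertainty, so standard
## errors computed from y_imp would be too small.
fit <- lm(y_obs ~ x)                    # complete-case regression is valid under MAR
y_imp <- ifelse(is.na(y_obs), predict(fit, data.frame(x = x)), y_obs)
mean(y_imp)                # close to 2 again
```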
Reliability analysis: assessment of hardware and human reliability
- Authors: Mafu, Masakheke
- Date: 2017
- Subjects: Bayesian statistical decision theory , Reliability (Engineering) , Human machine systems , Probabilities , Markov processes
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/6280 , vital:21077
- Description: Most reliability analyses involve the analysis of binary data. Practitioners in the field of reliability place great emphasis on analysing the time periods over which items or systems function (failure time analyses), which make use of different statistical models. This study intends to introduce, review and investigate four statistical models for modelling failure times of non-repairable items, and to utilise a Bayesian methodology to achieve this. The exponential, Rayleigh, gamma and Weibull distributions will be considered. The performance of two non-informative priors will be investigated, and an application of two failure time distributions will be carried out. To meet these objectives, the failure rate and the reliability functions of the failure time distributions are calculated. Two non-informative priors, the Jeffreys prior and the general divergence prior, and the corresponding posteriors are derived for each distribution. Simulation studies for each distribution are carried out, where the coverage rates and credible interval lengths are calculated and the results discussed. The gamma distribution and the Weibull distribution are applied to failure time data. The Jeffreys prior is found to have better coverage rates than the general divergence prior. The general divergence prior shows undercoverage when used with the Rayleigh distribution, while the Jeffreys prior produces conservative coverage rates when used with the exponential distribution. On average, the two priors give similar interval lengths, which increase as the value of the parameter increases. Both priors perform similarly when used with the gamma distribution and the Weibull distribution. A thorough discussion and review of human reliability analysis (HRA) techniques is also provided. Twenty HRA techniques are discussed, providing a background, a description, and the advantages and disadvantages of each. Case studies in the nuclear, railway and aviation industries are presented to show the importance and applications of HRA. Human error has been shown to be the major contributor to system failure. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2017
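As a small illustration of the Bayesian set-up described in this record (not the thesis's own code), the R sketch below uses the standard result that, for exponential lifetimes with rate lambda and the Jeffreys prior proportional to 1/lambda, the posterior is Gamma(shape = n, rate = sum of the observed times). It then estimates the frequentist coverage of the 95% equal-tailed credible interval by simulation, in the spirit of the coverage studies mentioned above. The sample size and true rate are arbitrary choices.

```r
## Minimal sketch: coverage of the 95% credible interval for an exponential
## failure rate under the Jeffreys prior. For t_1,...,t_n ~ Exp(rate = lambda),
## the Jeffreys prior is proportional to 1/lambda and the posterior is
## Gamma(shape = n, rate = sum(t)).
set.seed(42)
lambda_true <- 0.5
n <- 20
n_sim <- 10000

covered <- replicate(n_sim, {
  t <- rexp(n, rate = lambda_true)
  ci <- qgamma(c(0.025, 0.975), shape = n, rate = sum(t))
  ci[1] <= lambda_true && lambda_true <= ci[2]
})
mean(covered)   # empirical coverage rate, to compare with the nominal 0.95
```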
Stochastic models in finance
- Authors: Mazengera, Hassan
- Date: 2017
- Subjects: Finance -- Mathematical models , C++ (Computer program language) , GARCH model , Lebesgue-Radon-Nikodym theorems , Radon measures , Stochastic models , Stochastic processes , Stochastic processes -- Computer programs , Martingales (Mathematics) , Pricing -- Mathematical models
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/162724 , vital:40976
- Description: Stochastic models for pricing financial securities are developed. First, we consider the Black-Scholes model, which is a classic example of a complete market model, and then focus on Lévy-driven models. Jumps may render the market incomplete and are introduced into a model by including a Poisson process. Lévy-driven models are more realistic for modelling asset price dynamics than the Black-Scholes model. Martingales are central to pricing, especially of derivatives, and we give them due attention in that context. There is an increasing number of important pricing models for which analytical solutions are not available, so computational methods are needed; see Broadie and Glasserman (1997). Computational methods are, of course, also applicable to models with analytical solutions. We computationally value selected stochastic financial models using C++. Computational methods are also used to value or price complex financial instruments such as path-dependent derivatives, and this pricing procedure is applied in the computational valuation of a stochastic (revenue-based) loan contract. Derivatives with simple payoff functions and models with analytical solutions are considered for illustrative purposes. The Black-Scholes PDE is difficult to solve analytically, so finite difference methods are widely used; an explicit finite difference scheme is considered in this thesis for the computational valuation of derivatives modelled by the Black-Scholes PDE. Stochastic modelling of asset prices is important for the valuation of derivatives: Gaussian, exponential and gamma variates are simulated for valuation purposes. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2017
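The thesis carries out its computations in C++; the following is a hypothetical R analogue of one of the simplest cases mentioned above: Monte Carlo valuation of a European call under geometric Brownian motion, checked against the Black-Scholes closed-form price. All parameter values are illustrative assumptions.

```r
## Minimal sketch (an R analogue of the C++ computations described above):
## Monte Carlo price of a European call under risk-neutral GBM versus the
## Black-Scholes formula.
set.seed(7)
S0 <- 100; K <- 105; r <- 0.05; sigma <- 0.2; T <- 1; n_paths <- 1e6

## Risk-neutral terminal price: S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z)
Z  <- rnorm(n_paths)
ST <- S0 * exp((r - sigma^2 / 2) * T + sigma * sqrt(T) * Z)
mc_price <- exp(-r * T) * mean(pmax(ST - K, 0))

## Black-Scholes benchmark
d1 <- (log(S0 / K) + (r + sigma^2 / 2) * T) / (sigma * sqrt(T))
d2 <- d1 - sigma * sqrt(T)
bs_price <- S0 * pnorm(d1) - K * exp(-r * T) * pnorm(d2)

c(monte_carlo = mc_price, black_scholes = bs_price)  # should agree closely
```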
A review of generalized linear models for count data with emphasis on current geospatial procedures
- Authors: Michell, Justin Walter
- Date: 2016
- Subjects: Spatial analysis (Statistics) , Bayesian statistical decision theory , Geospatial data , Malaria -- Botswana -- Statistics , Malaria -- Botswana -- Research -- Statistical methods
- Language: English
- Type: Thesis , Masters , MCom
- Identifier: vital:5582 , http://hdl.handle.net/10962/d1019989
- Description: Analytical problems caused by over-fitting, confounding and non-independence in the data are a major challenge for variable selection. As more variables are tested against a certain data set, there is a greater risk that some will explain the data merely by chance, but will fail to explain new data. The main aim of this study is to employ a systematic and practicable variable selection process for the spatial analysis and mapping of historical malaria risk in Botswana, using data collected from the MARA (Mapping Malaria Risk in Africa) project and environmental and climatic datasets from various sources. Details of how a spatial database is compiled so that a statistical analysis can proceed are provided, and the automation of the entire process is also explored. The final Bayesian spatial model derived from the non-spatial variable selection procedure using Markov Chain Monte Carlo simulation was fitted to the data. Winter temperature had the greatest effect on malaria prevalence in Botswana. Summer rainfall, maximum temperature of the warmest month, annual range of temperature, altitude and distance to the closest water source were also significantly associated with malaria prevalence in the final spatial model after accounting for spatial correlation. Using this spatial model, malaria prevalence at unobserved locations was predicted, producing a smooth risk map covering Botswana. The automation of both compiling the spatial database and the variable selection procedure proved challenging and could only be achieved for parts of the process. The non-spatial selection procedure proved practical and was able to identify stable explanatory variables and provide an objective means for selecting one variable over another; ultimately, however, it was not entirely successful because a unique set of spatial variables could not be selected. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2016
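As a minimal sketch of the MCMC machinery behind such count models (not the thesis's spatial model, which adds spatially correlated random effects), the hypothetical R code below fits a plain Poisson log-linear model to simulated count data with a random-walk Metropolis sampler under flat priors, and compares the posterior means with the maximum likelihood fit from glm.

```r
## Minimal sketch: random-walk Metropolis for a Poisson log-linear model
## (spatial random effects omitted; flat priors, so the posterior is
## proportional to the likelihood).
set.seed(3)
n <- 200
x <- rnorm(n)
beta_true <- c(0.5, 0.8)
y <- rpois(n, exp(beta_true[1] + beta_true[2] * x))

log_post <- function(beta) {
  eta <- beta[1] + beta[2] * x
  sum(dpois(y, exp(eta), log = TRUE))
}

n_iter <- 20000
draws <- matrix(NA, n_iter, 2)
beta <- c(0, 0)
lp <- log_post(beta)
for (i in 1:n_iter) {
  prop <- beta + rnorm(2, sd = 0.05)      # random-walk proposal
  lp_prop <- log_post(prop)
  if (log(runif(1)) < lp_prop - lp) { beta <- prop; lp <- lp_prop }
  draws[i, ] <- beta
}
colMeans(draws[-(1:5000), ])              # posterior means after burn-in
coef(glm(y ~ x, family = poisson))        # maximum likelihood comparison
```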
Bayesian accelerated life tests: exponential and Weibull models
- Authors: Izally, Sharkay Ruwade
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3003 , vital:20351
- Description: Reliability life testing is used for life data analysis in which samples are tested under normal conditions to obtain failure time data for reliability assessment. It can be costly and time-consuming to obtain failure time data under normal operating conditions if the mean time to failure of a product is long. An alternative is to use failure time data from an accelerated life test (ALT) to extrapolate the reliability under normal conditions. In accelerated life testing, the units are placed under a higher-than-normal stress condition, such as voltage, current, pressure or temperature, to make the items fail in a shorter period of time. The failure information is then transformed through an acceleration model, commonly known as the time transformation function, to predict the reliability under normal operating conditions. The power law will be used as the time transformation function in this thesis. We will first consider a Bayesian inference model under the assumption that the underlying life distribution in the accelerated life test is exponential. The maximal data information (MDI) prior, the Ghosh, Mergel and Liu (GML) prior and the Jeffreys prior will be derived for the exponential distribution, and the propriety of the posterior distributions will be investigated. The results of using these non-informative priors will be compared in a simulation study by examining the posterior variances. The Weibull distribution as the underlying life distribution in the accelerated life test will also be investigated. The maximal data information prior will be derived for the Weibull distribution using the power law, and the uniform prior and a mixture of gamma and uniform priors will be considered; the propriety of these posteriors will also be investigated. The predictive reliability at the use-stress will be computed for these models, and the deviance information criterion will be used to compare the priors. As a result of using a time transformation function, Bayesian inference becomes analytically intractable and Markov Chain Monte Carlo (MCMC) methods will be used to alleviate this problem. The Metropolis-Hastings algorithm will be used to sample from the posteriors for the exponential model in the accelerated life test, and the adaptive rejection sampling method will be used to sample from the posterior distributions when the Weibull model is considered. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2016
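A minimal sketch of the accelerated-life set-up described above, under simplifying assumptions: exponential lifetimes whose mean follows an inverse power law in the stress, flat priors on (log C, p) rather than the MDI, GML or Jeffreys priors derived in the thesis, and a random-walk Metropolis-Hastings sampler. All data are simulated and the parameter values, including the use stress of 1, are illustrative.

```r
## Minimal sketch: exponential ALT with inverse power-law mean life
## theta(s) = C / s^p, fitted by random-walk Metropolis-Hastings (flat priors).
set.seed(11)
stress <- rep(c(2, 4, 8), each = 30)        # accelerated stress levels
C_true <- 200; p_true <- 1.5
theta  <- C_true / stress^p_true            # mean life at each stress
t_obs  <- rexp(length(stress), rate = 1 / theta)

log_post <- function(par) {                 # par = c(log C, p); flat priors
  th <- exp(par[1]) / stress^par[2]
  sum(dexp(t_obs, rate = 1 / th, log = TRUE))
}

n_iter <- 30000
draws <- matrix(NA, n_iter, 2)
par <- c(log(100), 1); lp <- log_post(par)
for (i in 1:n_iter) {
  prop <- par + rnorm(2, sd = c(0.15, 0.1))
  lp_prop <- log_post(prop)
  if (log(runif(1)) < lp_prop - lp) { par <- prop; lp <- lp_prop }
  draws[i, ] <- par
}
post <- draws[-(1:10000), ]
c(C = mean(exp(post[, 1])), p = mean(post[, 2]))  # compare with C_true, p_true
## Extrapolated mean life at a hypothetical use stress s = 1 is simply C.
```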
Prediction of protein secondary structure using binary classification trees, naive Bayes classifiers and the logistic regression classifier
- Authors: Eldud Omer, Ahmed Abdelkarim
- Date: 2016
- Subjects: Bayesian statistical decision theory , Logistic regression analysis , Biostatistics , Proteins -- Structure
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5581 , http://hdl.handle.net/10962/d1019985
- Description: The secondary structure of proteins is predicted using various binary classifiers. The data are adopted from the RS126 database. The original data consist of protein primary and secondary structure sequences encoded using alphabetic letters. These data are encoded into unary vectors comprising ones and zeros only. Different binary classifiers, namely naive Bayes, logistic regression and classification trees, are trained on the encoded data using hold-out and 5-fold cross validation. For each of the classifiers three classification tasks are considered, namely helix against not helix (H/∼H), sheet against not sheet (S/∼S) and coil against not coil (C/∼C). The performance of these binary classifiers is compared using the overall accuracy in predicting the protein secondary structure for various window sizes. Our results indicate that hold-out cross validation achieved higher accuracy than 5-fold cross validation. The naive Bayes classifier, using 5-fold cross validation, achieved the lowest accuracy for predicting helix against not helix, while the classification tree classifiers, using 5-fold cross validation, achieved the lowest accuracies for both coil against not coil and sheet against not sheet. The accuracy of the logistic regression classifier depends on the window size; there is a positive relationship between accuracy and window size. The logistic regression classifier achieved the highest accuracy of the three classifiers for each classification task: 77.74 percent for helix against not helix, 81.22 percent for sheet against not sheet and 73.39 percent for coil against not coil. It is noted that it would be easier to compare classifiers if the entire classification process could be carried out in R; alternatively, assessing the logistic regression classifiers would be easier if SPSS had a function to determine their accuracy. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2016
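The encoding and hold-out assessment described above can be sketched as follows. The R example below uses synthetic residue windows and labels (not the RS126 data): it one-hot ("unary") encodes each window position, fits a logistic regression with glm for a single helix/not-helix task, and reports hold-out accuracy. The residue alphabet, window width and label-generating mechanism are all invented for illustration.

```r
## Minimal sketch with synthetic data: unary encoding of residue windows and a
## hold-out assessment of a logistic regression H/~H classifier.
set.seed(5)
amino <- LETTERS[1:20]                      # placeholder 20-letter residue alphabet
n <- 1000; w <- 5                           # number of windows and window width
windows <- matrix(sample(amino, n * w, replace = TRUE), n, w)

## One-hot encoding; drop the first level per position to avoid exact collinearity.
one_hot <- function(col) outer(col, amino[-1], "==") * 1L
X <- do.call(cbind, lapply(seq_len(w), function(j) one_hot(windows[, j])))

## Synthetic H/~H labels driven by the centre residue (purely for illustration).
centre_signal <- windows[, 3] %in% c("A", "E", "L")
y <- rbinom(n, 1, plogis(-1 + 2 * centre_signal))

dat <- data.frame(y = y, X)
train <- sample(n, 0.7 * n)                 # 70/30 hold-out split
fit <- glm(y ~ ., data = dat[train, ], family = binomial)
prob <- predict(fit, newdata = dat[-train, ], type = "response")
mean((prob > 0.5) == dat$y[-train])         # hold-out accuracy
```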
Eliciting and combining expert opinion : an overview and comparison of methods
- Authors: Chinyamakobvu, Mutsa Carole
- Date: 2015
- Subjects: Decision making -- Statistical methods , Expertise , Bayesian statistical decision theory , Statistical decision , Delphi method , Paired comparisons (Statistics)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5579 , http://hdl.handle.net/10962/d1017827
- Description: Decision makers have long relied on experts to inform their decision making. Expert judgment analysis is a way to elicit and combine the opinions of a group of experts to facilitate decision making. The use of expert judgment is most appropriate when there is a lack of data for obtaining reasonable statistical results. The experts are asked for advice by one or more decision makers who face a specific real decision problem. The decision makers are outside the group of experts and are jointly responsible and accountable for the decision and committed to finding solutions that everyone can live with. The emphasis is on the decision makers learning from the experts. The focus of this thesis is an overview and comparison of the various elicitation and combination methods available. These include the traditional committee method, the Delphi method, the paired comparisons method, the negative exponential model, Cooke’s classical model, the histogram technique, using the Dirichlet distribution in the case of a set of uncertain proportions which must sum to one, and the employment of overfitting. The supra-Bayesian approach, the determination of weights for the experts, and combining the opinions of experts where each opinion is associated with a confidence level that represents the expert’s conviction in his or her own judgment are also considered. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2015
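One of the simplest combination rules surveyed above is the linear opinion pool, in which the decision maker averages the experts' probability assessments with non-negative weights summing to one (for example, weights derived from calibration performance, as in Cooke's classical model). The R sketch below uses made-up numbers purely for illustration.

```r
## Minimal sketch: a linear opinion pool combining three experts' probability
## assessments for a discrete quantity with three possible outcomes.
experts <- rbind(                      # each row: one expert's probabilities
  e1 = c(0.60, 0.30, 0.10),
  e2 = c(0.40, 0.40, 0.20),
  e3 = c(0.20, 0.50, 0.30)
)
weights <- c(0.5, 0.3, 0.2)            # e.g. from calibration scores; sum to 1

pooled <- colSums(weights * experts)   # weighted average over experts
pooled                                  # combined distribution over the outcomes
sum(pooled)                             # check: still sums to one
```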
An analysis of the Libor and Swap market models for pricing interest-rate derivatives
- Authors: Mutengwa, Tafadzwa Isaac
- Date: 2012
- Subjects: LIBOR market model , Monte Carlo method , Interest rates -- Mathematical models , Derivative securities
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5573 , http://hdl.handle.net/10962/d1005535
- Description: This thesis focuses on the arbitrage-free (fair) pricing of interest rate derivatives, in particular caplets and swaptions, using the LIBOR market model (LMM) developed by Brace, Gatarek, and Musiela (1997) and the swap market model (SMM) developed by Jamshidian (1997), respectively. Today, in most financial markets, interest rate derivatives are priced using the renowned Black-Scholes formula developed by Black and Scholes (1973). We present new pricing models for caplets and swaptions, which can be used in financial markets as alternatives to the Black-Scholes model. We theoretically construct these "new market models" and then test their practical aspects. We show that the dynamics of the LMM imply a pricing formula for caplets that has the same structure as the Black-Scholes pricing formula for a caplet that is used by market practitioners. For the SMM we also theoretically construct an arbitrage-free interest rate model that implies a pricing formula for swaptions that has the same structure as the Black-Scholes pricing formula for swaptions. We empirically compare the pricing performance of the LMM against the Black-Scholes model for pricing caplets using Monte Carlo methods. (An illustrative sketch follows this record.)
- Full Text:
- Date Issued: 2012
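As a small illustration of the caplet pricing structure described above (not the thesis's implementation), the R sketch below evaluates Black's caplet formula and checks it by Monte Carlo, using the fact that under the LMM the forward LIBOR is a driftless lognormal process under its own forward measure. The forward rate, volatility, strike, tenor and discount bond price are illustrative assumptions.

```r
## Minimal sketch: Black's caplet formula versus a Monte Carlo price under the
## lognormal forward-rate dynamics implied by the LMM.
set.seed(9)
F0 <- 0.04; K <- 0.045; sigma <- 0.25; T <- 1; delta <- 0.5
P0 <- exp(-0.035 * (T + delta))          # assumed discount bond price P(0, T + delta)

## Black (closed-form) caplet price on notional 1
d1 <- (log(F0 / K) + 0.5 * sigma^2 * T) / (sigma * sqrt(T))
d2 <- d1 - sigma * sqrt(T)
black <- P0 * delta * (F0 * pnorm(d1) - K * pnorm(d2))

## Monte Carlo under the T+delta forward measure: F_T is driftless lognormal
Z  <- rnorm(1e6)
FT <- F0 * exp(-0.5 * sigma^2 * T + sigma * sqrt(T) * Z)
mc <- P0 * delta * mean(pmax(FT - K, 0))

c(black = black, monte_carlo = mc)       # the two prices should agree closely
```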