Bayesian inference for Cronbach's alpha
- Authors: Izally, Sharkay Ruwade
- Date: 2025-04-03
- Subjects: Bayesian inference, Bayesian statistical decision theory, Cronbach's alpha, Confidence distribution, Probability matching, Jeffreys prior, Random effects model
- Language: English
- Type: Academic theses, Doctoral theses, text
- Identifier: http://hdl.handle.net/10962/479919, vital:78380, DOI 10.21504/10962/479919
- Description: Cronbach’s alpha is used as a measure of reliability in fields such as education, psychology and sociology. Its popularity stems from its computational simplicity: only the sample size and the variance components are needed, and it can be computed for continuous as well as binary data (a short computational sketch and a simulation skeleton follow this record). Cronbach’s alpha has been studied extensively using maximum likelihood estimation. Because Cronbach’s alpha is a function of the variance components, maximum likelihood estimation often yields negative estimates of the variance components. In Bayesian statistics the parameters are random variables, which alleviates the problem of negative variance estimates that often arises under the frequentist approach. The Bayesian approach also incorporates loss functions that take the symmetry of the distribution of the parameters being estimated into account, adding flexibility in obtaining better estimates of the unknown parameters. The Bayesian approach often results in better coverage probabilities than the frequentist approach, especially for smaller sample sizes, and it is therefore important to consider a Bayesian analysis in the estimation of Cronbach’s alpha. The reference and probability-matching priors for Cronbach’s alpha will be derived using a one-way random effects model. The performance of these two priors will be compared to that of the well-known Jeffreys prior and a divergence prior. A simulation study will be considered to compare the performance of the priors, where the coverage rates, average interval lengths and standard deviations of the interval lengths will be computed. A second simulation study will be considered where the mean relative error will be compared for the various priors using the squared error, the absolute error and the linear in exponential (LINEX) loss functions. An illustrative example will also be considered. The combined Bayesian estimation of more than one Cronbach’s alpha will also be considered for m experiments with equal α but possibly different variance components. It will be shown that the reference and the probability-matching priors are the same. The Bayesian theory and results will be applied to two examples. The intervals for the combined model are, however, much shorter than those of the individual models, and the point estimates of the combined model are more accurate than those of the individual models. It is further concluded that the posterior distribution of α for the combined model becomes more important as the number of samples and models increases. The reference and probability-matching priors for Cronbach’s alpha will also be derived using a three-component hierarchical model. The performance of these two priors will again be compared to that of the well-known Jeffreys prior and a divergence prior. A simulation study will be considered to compare the performance of the priors, where the coverage rates, average interval lengths and standard deviations of the interval lengths will be computed. Two illustrative examples will also be considered. Statistical control limits will be obtained for Cronbach’s alpha in the case of a balanced one-way random effects model. This will be achieved by deriving the predictive distribution of a future Cronbach’s alpha. The unconditional posterior predictive distribution will be determined using Monte Carlo simulation and the Rao-Blackwell procedure.
The predictive distribution will be used to obtain control limits and to determine the run-length and average run-length. Cronbach’s alpha will also be estimated for a general covariance matrix using a Bayesian approach, and these results will be compared to the asymptotic frequentist interval that is valid under a general covariance matrix; most of the results used in the literature require the compound symmetry assumption for analyses of Cronbach’s alpha. Fiducial and posterior distributions will be derived for Cronbach’s alpha in the case of the bivariate normal distribution. Various objective priors will be considered for the variance components and the correlation coefficient; one of the priors considered corresponds to the fiducial distribution. The performance of these priors will be compared to an asymptotic frequentist interval often used in the literature. A simulation study will be considered to compare the performance of the priors and the asymptotic interval, where the coverage rates and average interval lengths will be computed. , Thesis (PhD) -- Faculty of Science, Statistics, 2025
- Full Text:
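As the abstract notes, Cronbach's alpha needs only the sample size and the variance components. Below is a minimal sketch of the computation, assuming the balanced one-way random effects model Y_ij = mu + a_i + e_ij named in the abstract; the function names and simulation settings are illustrative choices, not taken from the thesis.

```python
import numpy as np

def cronbach_alpha(X):
    """Sample Cronbach's alpha for an n-subjects-by-k-items score matrix."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    item_vars = X.var(axis=0, ddof=1)        # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def alpha_from_variance_components(k, sigma2_a, sigma2_e):
    """Population alpha under the balanced one-way random effects model
    Y_ij = mu + a_i + e_ij: the reliability of the k-item total score."""
    return k * sigma2_a / (k * sigma2_a + sigma2_e)

# Illustrative check: simulate from the model and compare the sample
# estimate with the population value (parameter values chosen arbitrarily).
rng = np.random.default_rng(0)
n, k, s2a, s2e = 500, 10, 1.0, 2.0
X = rng.normal(0.0, np.sqrt(s2a), (n, 1)) + rng.normal(0.0, np.sqrt(s2e), (n, k))
print(cronbach_alpha(X))                            # close to the value below
print(alpha_from_variance_components(k, s2a, s2e))  # 10/12 = 0.8333
```

The simulation studies described in the abstract report coverage rates, average interval lengths and standard deviations of the interval lengths across priors. The skeleton below performs that comparison for any interval method supplied by the caller; the credible intervals under the reference, probability-matching, Jeffreys and divergence priors are not reproduced here, so a nonparametric bootstrap percentile interval stands in as a concrete `interval_fn`.

```python
def bootstrap_interval(X, B=500, level=0.95, seed=2):
    """Percentile bootstrap interval for alpha; a stand-in for the
    Bayesian credible intervals compared in the thesis."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    stats = [cronbach_alpha(X[rng.integers(0, n, size=n)]) for _ in range(B)]
    lo, hi = np.quantile(stats, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

def coverage_study(interval_fn, k, sigma2_a, sigma2_e, n=30, reps=1000, seed=1):
    """Coverage rate, average interval length and SD of the lengths
    for one interval method under the one-way random effects model."""
    rng = np.random.default_rng(seed)
    true_alpha = alpha_from_variance_components(k, sigma2_a, sigma2_e)
    hits, lengths = 0, []
    for _ in range(reps):
        X = (rng.normal(0.0, np.sqrt(sigma2_a), (n, 1))
             + rng.normal(0.0, np.sqrt(sigma2_e), (n, k)))
        lo, hi = interval_fn(X)
        hits += (lo <= true_alpha <= hi)
        lengths.append(hi - lo)
    lengths = np.asarray(lengths)
    return hits / reps, lengths.mean(), lengths.std(ddof=1)

# e.g. coverage_study(bootstrap_interval, k=5, sigma2_a=1.0, sigma2_e=2.0)
```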
KalCal: a novel calibration framework for radio interferometry using the Kalman Filter and Smoother
- Authors: Welman, Brian Allister
- Date: 2024-10-11
- Subjects: Radio interferometers, Calibration, Kalman filtering, Bayesian inference, Signal processing, Radio astronomy, MeerKAT
- Language: English
- Type: Academic theses, Master's theses, text
- Identifier: http://hdl.handle.net/10962/467127, vital:76818
- Description: Calibration in radio interferometry is essential for correcting measurement errors. Traditional methods employ maximum likelihood techniques and non-linear least squares solvers, but face challenges due to the data volumes and increased noise sensitivity of contemporary instruments such as MeerKAT. A common approach for mitigating these issues is the use of “solution intervals”, which helps manage the data volume and reduces overfitting. However, inappropriate interval sizes can degrade calibration quality, and determining optimal sizes is challenging, often relying on brute-force searches. This study introduces Kalman Filtering and Smoothing in Calibration (KalCal), a new calibration framework that combines the Kalman Filter, the Kalman Smoother, and the energy function: the negative logarithm of the Bayesian evidence. KalCal offers Bayesian-optimal solutions as probability densities and models calibration effects with lower computational requirements than iterative approaches. Unlike traditional methods, which require all the data for a particular solution to be in memory simultaneously, KalCal’s recursive computations need only a single pass through the data given appropriate prior information, and the energy function provides the means for KalCal to determine this prior information (a minimal filter-and-smoother sketch follows this record). Theoretical contributions include additions to the complex optimisation literature and the “Kalman-Woodbury Identity”, which reformulates the traditional Kalman Filter. A Python implementation of the KalCal framework was benchmarked against solution intervals as implemented in the QuartiCal package. Simulations show KalCal matching solution intervals in high Signal-to-Noise Ratio (SNR) scenarios and surpassing them in low SNR conditions. Moreover, the energy function produced minima that coincide with KalCal’s Mean Square Error (MSE) on the true gain signal; this result is significant because the MSE is unavailable in real applications. Further research is needed to assess the computational feasibility and intricacies of KalCal. , Thesis (MSc) -- Faculty of Science, Physics and Electronics, 2024
- Full Text:
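The abstract describes KalCal's recursive single-pass structure but not its code. The following is a minimal sketch, assuming a toy scalar random-walk gain g_k observed through y_k = h_k * g_k + noise: a Kalman filter pass that accumulates the energy function (the negative log of the Bayesian evidence, via the prediction-error decomposition), followed by a Rauch-Tung-Striebel smoother backward pass. All names and the measurement model are illustrative, not KalCal's API.

```python
import numpy as np

def kalcal_style_filter_smoother(y, h, m0, P0, Q, R):
    """Scalar Kalman filter + RTS smoother with the energy function
    (negative log evidence) accumulated during the forward pass.

    Toy model:  g_k = g_{k-1} + q_k,   q_k ~ N(0, Q)   (random-walk gain)
                y_k = h_k * g_k + r_k, r_k ~ N(0, R)   (measurement)
    """
    n = len(y)
    m_pred = np.empty(n); P_pred = np.empty(n)
    m_filt = np.empty(n); P_filt = np.empty(n)
    energy, m, P = 0.0, m0, P0
    for k in range(n):
        # predict (identity transition for a random walk)
        m_pred[k], P_pred[k] = m, P + Q
        # innovation and its variance
        v = y[k] - h[k] * m_pred[k]
        S = h[k] ** 2 * P_pred[k] + R
        # energy function: -log evidence via prediction-error decomposition
        energy += 0.5 * (np.log(2 * np.pi * S) + v ** 2 / S)
        # update
        K = P_pred[k] * h[k] / S
        m = m_pred[k] + K * v
        P = (1 - K * h[k]) * P_pred[k]
        m_filt[k], P_filt[k] = m, P
    # Rauch-Tung-Striebel smoother (backward pass)
    m_smooth, P_smooth = m_filt.copy(), P_filt.copy()
    for k in range(n - 2, -1, -1):
        G = P_filt[k] / P_pred[k + 1]
        m_smooth[k] = m_filt[k] + G * (m_smooth[k + 1] - m_pred[k + 1])
        P_smooth[k] = P_filt[k] + G ** 2 * (P_smooth[k + 1] - P_pred[k + 1])
    return m_smooth, P_smooth, energy

# Toy usage: noisy observations of a slowly drifting unit gain.
rng = np.random.default_rng(0)
n = 200
g = 1.0 + np.cumsum(rng.normal(0, 0.02, n))   # true drifting gain
h = np.ones(n)                                # unit "model" signal
y = h * g + rng.normal(0, 0.1, n)
m_s, P_s, energy = kalcal_style_filter_smoother(
    y, h, m0=1.0, P0=1.0, Q=0.02**2, R=0.1**2)
```

Scanning the process-noise variance Q over a grid and keeping the minimum-energy value mimics, in miniature, the abstract's point that the energy function supplies the prior information for the single-pass recursion.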