Performance evaluation of baseline-dependent window functions with several weighting functions
- Authors: Vanqa, Kamvulethu
- Date: 2024-04-04
- Subjects: Uncatalogued
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/435850 , vital:73206
- Description: Radio interferometric data volumes are growing exponentially, with the potential to cause slow processing and data storage issues for observations recorded at high time and frequency resolution. This necessitates some form of data compression. The conventional method is averaging across time and frequency; however, this results in amplitude loss and source distortion at the edges of the field of view. To reduce amplitude loss and source distortion, baseline-dependent window functions (BDWFs) have been proposed in the literature. BDWFs are visibility data compression methods that use window functions to retain signals within a field of interest (FoI) and to suppress signals outside this FoI. However, BDWFs employ window functions as discussed in the signal processing literature, without any optimisation. This thesis evaluates the performance of BDWFs and then proposes using machine learning with gradient descent to optimise the window functions employed in BDWFs. Results show that convergence of the objective function is limited by the band-limited nature of the window functions in Fourier space. BDWF performance is also investigated and discussed using several weighting schemes. Results show that there exists an optimal (though not necessarily unique) parameter tuning, suggesting an optimal combination of BDWFs and density sampling. With this, ∼ 4 % smearing is observed within the FoI, and ∼ 80 % source suppression is achieved outside the FoI, using the MeerKAT telescope at 1.4 GHz, sampled at 1 s and 184.3 kHz and then averaged with BDWFs to achieve a compression factor of 4 in time and 3 in frequency. , Thesis (MA) -- Faculty of Science, Mathematics, 2024
- Full Text:
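The windowed averaging described in the abstract can be illustrated with a minimal sketch. This is not the thesis's actual BDWF implementation: the sinc taper, the `sinc_window` and `compress` helpers, and the block-averaging scheme are illustrative assumptions. The idea shown is that each block of consecutive visibility samples is multiplied by a window before averaging, which trades resolution for data volume while tapering the response to sources away from the field of interest.

```python
import math

def sinc(x: float) -> float:
    """Normalised sinc, sin(pi*x)/(pi*x), with the removable singularity handled."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def sinc_window(n: int) -> list[float]:
    """Symmetric sinc taper of length n (an illustrative window choice;
    the edge samples are strongly down-weighted)."""
    return [sinc(2 * k / (n - 1) - 1) for k in range(n)] if n > 1 else [1.0]

def compress(vis: list[complex], factor: int) -> list[complex]:
    """Window-weighted average of each block of `factor` visibility samples,
    i.e. compression by `factor` along one axis (time or frequency)."""
    w = sinc_window(factor)
    wsum = sum(w)
    return [sum(wk * v for wk, v in zip(w, vis[i:i + factor])) / wsum
            for i in range(0, len(vis) - factor + 1, factor)]
```

Normalising by the window sum keeps a constant (on-axis) signal's amplitude intact; the loss and distortion the abstract quantifies arise for off-axis sources, whose visibilities rotate in phase across the averaging block.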
Towards an artificial intelligence-based agent for characterising the organisation of primes
- Authors: Oyetunji, Nicole Armlade
- Date: 2024-04-04
- Subjects: Uncatalogued
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/435389 , vital:73153
- Description: Machine learning has experienced significant growth in recent decades, driven by advancements in computational power and data storage. One application of machine learning is in the field of number theory. Prime numbers hold significant importance in mathematics and its applications, for example in cryptography, owing to their distinct properties. It is therefore crucial to obtain the complete list of primes below a given threshold efficiently, at relatively low computational cost. This study extensively explores a deterministic scheme, proposed by Hawing and Okouma (2016), centred on Consecutive Composite Odd Numbers, showing the link between these numbers and prime numbers by examining their internal structure. The main objective of this dissertation is to develop two artificial intelligence agents capable of learning and recognising patterns within a list of consecutive composite odd numbers. To achieve this, the mathematical foundations of the deterministic scheme are used to generate a dataset of consecutive composite odd numbers. This dataset is further transformed into a dataset of differences to simplify the prediction problem. A literature review is conducted covering research from the domains of machine learning and deep learning. Two main machine learning algorithms, Long Short-Term Memory (LSTM) networks and Error Correction Neural Networks (ECNNs), are implemented along with their variations. These models are trained independently on two separate but related datasets: the dataset of consecutive composite odd numbers and the dataset of differences between those numbers. The models are evaluated with relevant metrics, including Root Mean Square Error (RMSE), Mean Absolute Percentage Error, the Theil U coefficient, and Directional Accuracy. Through a comparative analysis, the study identifies the three top-performing models, with particular emphasis on accuracy and computational efficiency.
The results indicate that the LSTM model, when trained on the difference data and coupled with exponential smoothing, is the most accurate model overall. It achieves an RMSE of 0.08, significantly outperforming the dataset's standard deviation of 0.42. This model exceeds the performance of basic estimator models, implying that a data-driven approach using machine learning techniques can provide valuable insights in the field of number theory. The second-best model, the ECNN trained on difference data combined with exponential smoothing, achieves an RMSE of 0.28; it is, however, the most computationally efficient, being 32 times faster than the LSTM model. , Thesis (MSc) -- Faculty of Science, Mathematics, 2024
- Full Text:
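The data pipeline outlined in the abstract — enumerate consecutive composite odd numbers, then model their successive differences — can be sketched as follows. This is a minimal illustration only: the `composite_odds` helper and its trial-division primality test are stand-ins, not the deterministic generation scheme of Hawing and Okouma (2016).

```python
import math

def is_prime(n: int) -> bool:
    """Trial-division primality test (adequate for small limits)."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

def composite_odds(limit: int) -> list[int]:
    """Consecutive composite odd numbers below `limit`: 9, 15, 21, 25, 27, ..."""
    return [n for n in range(9, limit, 2) if not is_prime(n)]

def differences(seq: list[int]) -> list[int]:
    """First differences of the sequence -- the simplified target series
    the abstract's models are trained on."""
    return [b - a for a, b in zip(seq, seq[1:])]
```

The difference series is far less dispersed than the raw sequence, which is why the abstract benchmarks model RMSE against the dataset's standard deviation: a predictor that cannot beat that spread adds nothing over a trivial estimator.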
Wildlife-vehicle collisions mitigation measures using road ecological data and deep learning
- Authors: Nandutu, Irene
- Date: 2023-10-13
- Subjects: Uncatalogued
- Language: English
- Type: Academic theses , Doctoral theses , text
- Identifier: http://hdl.handle.net/10962/431907 , vital:72814
- Description: Access restricted. Expected release in 2025. , Thesis (PhD) -- Faculty of Science, Mathematics, 2023
- Full Text: