Calibration and imaging with variable radio sources
- Authors: Mbou Sob, Ulrich Armel
- Date: 2017
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/37977 , vital:24721
- Description: Calibration of radio interferometric data is one of the most important steps required to produce high dynamic range radio maps with high fidelity. However, naive calibration (with inaccurate knowledge of the sky and instruments) leads to the formation of calibration artefacts: the generation of spurious sources and deformations in the structure of extended sources. A particular class of calibration artefacts, called ghost sources, which results from calibration with incomplete sky models, has been extensively studied by Grobler et al. (2014, 2016) and Wijnholds et al. (2016). They developed a framework which can be used to predict the fluxes and positions of ghost sources. This work uses the approach initiated by these authors to study the calibration artefacts and ghost sources that are produced when variable sources are not considered in sky models during calibration. It investigates both long-term and short-term variability, and uses the root mean square (rms) and power spectrum as metrics to evaluate the “quality” of the residual visibilities obtained through calibration. We show that overestimation and underestimation of source flux density during calibration produce similar but symmetrically opposite results. We show that calibration artefacts from sky model errors are not normally distributed, which prevents them from being removed by advanced techniques such as stacking. The power spectra measured from the residuals with a variable source were significantly higher than those from residuals without a variable source. This implies that advanced calibration techniques and sky model completeness will be required for studies such as probing the Epoch of Reionization, where we seek to detect faint signals below the thermal noise.
- Full Text:
- Date Issued: 2017
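The rms metric described in the abstract above can be sketched numerically. The following is a minimal illustration, not the thesis's actual pipeline: a single point source whose flux density is over- or underestimated in the sky model leaves residual visibilities of identical rms, consistent with the "similar but symmetrically opposite" behaviour noted above. The uv coverage, source position and flux are hypothetical, and no actual gain solution is performed.

```python
import numpy as np

# Hypothetical uv coverage, source direction and flux density.
rng = np.random.default_rng(42)
uv = rng.uniform(-1e3, 1e3, size=(500, 2))   # (u, v) samples in wavelengths
l, m = 0.01, 0.0                             # source direction cosines
S_true = 1.0                                 # true flux density (Jy)

def visibilities(flux):
    """Point-source visibilities V(u,v) = S * exp(-2*pi*i*(u*l + v*m))."""
    phase = -2j * np.pi * (uv[:, 0] * l + uv[:, 1] * m)
    return flux * np.exp(phase)

def residual_rms(S_model):
    """rms of the residuals left after subtracting the (wrong) model."""
    resid = visibilities(S_true) - visibilities(S_model)
    return np.sqrt(np.mean(np.abs(resid) ** 2))

# Over- and underestimating the flux by the same 10% leaves residuals
# of equal rms but opposite sign.
rms_over = residual_rms(1.1 * S_true)
rms_under = residual_rms(0.9 * S_true)
print(rms_over, rms_under)
```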
Data compression, field of interest shaping and fast algorithms for direction-dependent deconvolution in radio interferometry
- Authors: Atemkeng, Marcellin T
- Date: 2017
- Subjects: Radio astronomy , Solar radio emission , Radio interferometers , Signal processing -- Digital techniques , Algorithms , Data compression (Computer science)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/6324 , vital:21089
- Description: In radio interferometry, observed visibilities are intrinsically sampled at some interval in time and frequency. Modern interferometers are capable of producing data at very high time and frequency resolution; practical limits on storage and computation costs require that some form of data compression be imposed. The traditional form of compression is simple averaging of the visibilities over coarser time and frequency bins. This has an undesired side effect: the resulting averaged visibilities “decorrelate”, and do so differently depending on the baseline length and averaging interval. This translates into a non-trivial signature in the image domain known as “smearing”, which manifests itself as an attenuation in amplitude towards off-centre sources. With the increasing fields of view and/or longer baselines employed in modern and future instruments, the trade-off between data rate and smearing becomes increasingly unfavourable. Averaging also results in a baseline-length- and position-dependent point spread function (PSF). In this work, we investigate alternative approaches to low-loss data compression. We show that averaging of the visibility data can be understood as a form of convolution by a boxcar-like window function, and that by employing alternative baseline-dependent window functions a more favourable interferometer smearing response may be induced. Specifically, we can improve the amplitude response over a chosen field of interest and attenuate sources outside it. The main cost of this technique is a reduction in nominal sensitivity; we investigate the smearing vs. sensitivity trade-off and show that in certain regimes a favourable compromise can be achieved. We show the application of this technique to simulated data from the Jansky Very Large Array and the European Very Long Baseline Interferometry Network.
Furthermore, we show that the position-dependent PSF shape induced by averaging can be approximated using linear algebraic properties to effectively reduce the computational complexity for evaluating the PSF at each sky position. We conclude by implementing a position-dependent PSF deconvolution in an imaging and deconvolution framework. Using the Low-Frequency Array radio interferometer, we show that deconvolution with position-dependent PSFs results in higher image fidelity compared to a simple CLEAN algorithm and its derivatives.
- Full Text:
- Date Issued: 2017
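The equivalence between boxcar averaging and sinc-like attenuation mentioned in the abstract above can be verified in a few lines. This toy calculation (all numbers hypothetical) averages unit-amplitude visibility samples whose fringe phase drifts linearly across one averaging bin, and compares the result to the analytic sinc attenuation:

```python
import numpy as np

# Boxcar (plain) averaging of visibilities as convolution with a
# rectangular window: an off-centre source whose fringe phase drifts
# linearly across the averaging bin has its averaged amplitude
# attenuated by a sinc factor -- this is the "smearing".
n = 1001
phi = np.linspace(-np.pi / 2, np.pi / 2, n)   # total phase drift of pi rad
vis = np.exp(1j * phi)                        # unit-amplitude visibilities

avg_amp = np.abs(vis.mean())                  # amplitude after averaging
# A drift of pi rad is half a fringe, so the predicted attenuation is
# sinc(1/2) = sin(pi/2)/(pi/2) ~ 0.637 (NumPy's normalised sinc).
predicted = np.sinc(0.5)
print(avg_amp, predicted)
```

Longer baselines see faster phase drift for the same off-centre source, hence a deeper sinc attenuation, which is why the decorrelation is baseline-dependent.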
Ionospheric disturbances during magnetic storms at SANAE
- Authors: Hiyadutuje, Alicreance
- Date: 2017
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/54956 , vital:26639
- Description: Coronal mass ejections (CMEs) and solar flares associated with extreme solar activity may strike the Earth's magnetosphere and give rise to geomagnetic storms. During geomagnetic storms, polar plasma dynamics may influence the middle- and low-latitude ionosphere via travelling ionospheric disturbances (TIDs). These are wave-like electron density disturbances caused by atmospheric gravity waves propagating in the ionosphere. TIDs focus and defocus SuperDARN signals, producing a characteristic pattern of ground backscattered power (Samson et al., 1989). Geomagnetic storms may cause a decrease of total electron content (TEC), i.e. a negative storm effect, and/or an increase of TEC, i.e. a positive storm effect. The aim of this project was to investigate the ionospheric response to strong storms (Dst < -100 nT) between 2011 and 2015, using TEC and scintillation measurements derived from GPS receivers as well as SuperDARN power, Doppler velocity and convection maps. In this study, the ionosphere's response to a geomagnetic storm was found to depend on the magnitude and the time of occurrence of the storm. The ionospheric TEC results show that most of the storm effects observed were a combination of both negative and positive effects per storm per station (77.8%); only 8.9% and 13.3% of the effects on TEC were purely negative and positive respectively. The highest number of storm effects occurred in autumn (36.4%), while 31.6%, 28.4% and 3.6% occurred in winter, spring and summer respectively. Of the storms studied, 71.4% had phase scintillation in the range 0.7 - 1 radians, and only 14.3% had amplitude scintillations near 0.4. The storms studied at SANAE station generated TIDs with periods of less than an hour and amplitudes in the range 0.2 - 5 TECU. These TIDs were found to originate from high-velocity plasma flows, some of which are visible in SuperDARN convection maps.
Early studies concluded that likely sources of these disturbances correspond to ionospheric current surges (Bristow et al., 1994) in the dayside auroral zone (Huang et al., 1998).
- Full Text:
- Date Issued: 2017
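The spectral approach to extracting sub-hour TID periods from a GPS TEC time series, as used in studies like the one above, can be sketched as follows. The sampling rate, background TEC level and TID parameters here are hypothetical, and the detrending is deliberately crude (a constant background); real analyses use more careful detrending:

```python
import numpy as np

# Hypothetical TEC series: constant background plus one TID oscillation
# of 30-minute period and 2 TECU amplitude, sampled every 30 s for 6 h.
dt = 30.0                                    # sample interval (s)
t = np.arange(0, 6 * 3600, dt)
period_true = 30 * 60                        # TID period (s)
tec = 20.0 + 2.0 * np.sin(2 * np.pi * t / period_true)

detrended = tec - tec.mean()                 # crude background removal
power = np.abs(np.fft.rfft(detrended)) ** 2  # power spectrum
freqs = np.fft.rfftfreq(len(t), dt)
dominant_period = 1.0 / freqs[1:][np.argmax(power[1:])]  # skip DC bin
print(dominant_period)                       # recovers the TID period (s)
```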
MEQSILHOUETTE: a mm-VLBI observation and signal corruption simulator
- Authors: Blecher, Tariq
- Date: 2017
- Subjects: Large astronomical telescopes , Very long baseline interferometry , MEQSILHOUETTE (Software) , Event horizon telescope
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/40713 , vital:25019
- Description: The Event Horizon Telescope (EHT) aims to resolve the innermost emission of the nearby supermassive black holes Sgr A* and M87 on event horizon scales. This emission is predicted to be gravitationally lensed by the black hole, which should produce a shadow (or silhouette) feature, a precise measurement of which is a test of gravity in the strong-field regime. This emission is also an ideal probe of the innermost accretion and jet-launch physics, offering new insights into this data-limited observing regime. The EHT will use the technique of Very Long Baseline Interferometry (VLBI) at (sub)millimetre wavelengths, which has a diffraction-limited angular resolution of order ~10 µ-arcsec. However, this technique suffers from unique challenges, including scattering and attenuation in the troposphere and interstellar medium; variable source structure; and antenna pointing errors comparable to the size of the primary beam. In this thesis, we present the meqsilhouette software package, which focuses on simulating realistic EHT data. It has the capability to simulate a time-variable source, and includes realistic descriptions of the effects of the troposphere and the interstellar medium, as well as primary beams and associated antenna pointing errors. We demonstrate through several example simulations that these effects can limit the ability to measure the key science parameters. This simulator can be used to research calibration, parameter estimation and imaging strategies, as well as to gain insight into possible systematic uncertainties.
- Full Text:
- Date Issued: 2017
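To see why pointing errors "comparable to the size of the primary beam" are so damaging, consider a simple Gaussian beam model. This is an idealisation for illustration only (real antenna beams, and the models in meqsilhouette, are more complex):

```python
import numpy as np

# Gaussian primary-beam model: gain at a pointing offset expressed as a
# fraction of the beam FWHM. exp(-4 ln 2 x^2) is unity on boresight and
# 0.5 at half the FWHM by construction.
def beam_gain(offset_in_fwhm):
    """Relative gain of a Gaussian beam at a given pointing offset."""
    return np.exp(-4.0 * np.log(2.0) * offset_in_fwhm ** 2)

# A pointing error of half the FWHM already halves the apparent flux,
# a direct amplitude corruption of the measured visibilities.
print(beam_gain(0.0), beam_gain(0.5))
```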
Nonlinear optical responses of phthalocyanines in the presence of nanomaterials or when embedded in polymeric materials
- Authors: Bankole, Owolabi Mutolib
- Date: 2017
- Subjects: Phthalocyanines , Phthalocyanines -- Optical properties , Alkynes , Triazoles , Nonlinear optics , Photochemistry , Complex compounds , Amines , Mercaptopyridine
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/45794 , vital:25548
- Description: This work describes the synthesis, photophysical and nonlinear optical characterizations of alkynyl Pcs (1, 2, 3, 8 and 9), a 1,2,3-triazole ZnPc (4), mercaptopyridine Pcs (5, 6 and 7) and amino Pcs (10 and 11). Complexes 1, 2, 4, 7, 8, 9 and 11 were newly synthesized and characterized using techniques including ¹H-NMR, MALDI-TOF, UV-visible spectrophotometry, FTIR and elemental analysis. The results of the characterizations were in good agreement with their molecular structures and confirmed the purity of the new molecules. Complex 10 was covalently linked to pristine (GQDs), nitrogen-doped (NGQDs) and sulfur-nitrogen co-doped (SNGQDs) graphene quantum dots; gold nanoparticles (AuNPs); poly(acrylic acid) (PAA); and Fe3O4@Ag core-shell and Fe3O4-Ag hybrid nanoparticles. Complex 11 was linked to AgxAuy alloy nanoparticles via NH2-Au and/or Au-S bonding, while 2 and 3 were linked to gold nanoparticles (AuNPs) via click reactions. Evidence of successful conjugation of 2, 3, 10 and 11 to the nanomaterials was revealed in the UV-vis, EDS, TEM, XRD and XPS spectra. Optical limiting (OL) responses of the samples were evaluated using the open-aperture Z-scan technique at 532 nm with 10 ns radiation, in solution or when embedded in polymer mixtures. The Z-scan data for the studied samples fit a two-photon absorption (2PA) mechanism, but the Pcs and Pc-nanomaterial or polymer composites also exhibit multi-photon absorption, aided by the triplet-triplet population, giving rise to reverse saturable absorption (RSA). Phthalocyanines doped in polymer matrices showed larger nonlinear absorption coefficients (βeff), third-order susceptibilities (Im[χ(3)]) and second-order hyperpolarizabilities (γ), with an accompanying lower intensity threshold, than in solution. Aggregation in DMSO negatively affected the NLO behaviour of the Pcs (8 as a case study) at low laser power; the behaviour improved at relatively higher laser power.
The heavy-atom-substituted Pc (6) showed enhanced NLO and OL properties relative to Pcs with lighter atoms, such as 5 and 7. A direct relationship between enhanced photophysical properties and the nonlinear effects favoured by excited triplet absorption of 2, 3, 10 and 11 in the presence of nanomaterials was established. The major factors responsible for the enhanced nonlinearities of 10 in the presence of NGQDs and SNGQDs were fully described and attributed to the surface defects caused by the presence of heteroatoms such as nitrogen and sulfur. The studies showed that phthalocyanine-nanomaterial composites are useful in applications such as optical switching, pulse compression and laser pulse narrowing.
- Full Text:
- Date Issued: 2017
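The shape of an open-aperture Z-scan trace for two-photon absorption can be sketched with the standard weak-2PA approximation. The thesis may fit a more complete model; here q0 (the on-axis 2PA strength, β·I0·Leff) and the Rayleigh range z0 are hypothetical values chosen for illustration:

```python
import numpy as np

# Standard weak-2PA approximation for the open-aperture Z-scan trace:
# T(z) ~ 1 - q0 / (2*sqrt(2) * (1 + (z/z0)**2)), valid for q0 << 1.
def transmittance(z, z0, q0):
    """Normalised transmittance vs sample position z (focus at z = 0)."""
    return 1.0 - q0 / (2.0 * np.sqrt(2.0) * (1.0 + (z / z0) ** 2))

z = np.linspace(-20.0, 20.0, 401)   # sample position relative to focus (mm)
T = transmittance(z, z0=2.0, q0=0.4)
# The absorption dip is deepest at focus, where the intensity peaks,
# and the transmittance recovers symmetrically as |z| grows.
```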
Real-time audio spectrum analyser research, design, development and implementation using the 32-bit ARM® Cortex-M4 microcontroller
- Authors: Just, Stefan Antonio
- Date: 2017
- Subjects: Spectrum analyzers , Sound -- Recording and reproducing -- Digital techniques , Real-time data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/50536 , vital:25997
- Description: This thesis describes the design and testing of a low-cost hand-held real-time audio analyser (RTAA). This includes the design of an embedded system, the development of the firmware executed by the embedded system, and the implementation of real-time signal processing algorithms. One of the objectives of this project was to design a low-cost alternative to the current commercially available audio analysers. The device was tested with the standard audio test signal (pink noise) and compared to the expected flat-spectrum response corresponding to a balanced audio system. The design makes use of a 32-bit Reduced Instruction Set Computer (RISC) processor core (ARM Cortex-M4), namely the STM32F4 family of microcontrollers. Due to the pin compatibility of the microcontroller (designed and manufactured by STMicroelectronics), the new development board can also be upgraded with the newly released Cortex-M7 microcontroller, namely the STM32F7 family of microcontrollers. Moreover, the low-cost hardware design features 256 kB of Random Access Memory (RAM); an on-board Micro-Electro-Mechanical System (MEMS) microphone; on-chip 12-bit Analogue-to-Digital (A/D) and Digital-to-Analogue (D/A) Converters; and a 3.2" Thin-Film-Transistor Liquid-Crystal Display (TFT-LCD) with a resistive touch screen sensor and an SD-Card socket. Furthermore, two additional expansion modules were designed that extend the functionality of the real-time audio analyser. The first is an audio/video module featuring a professional 24-bit, 192 kHz sampling rate audio CODEC; a balanced microphone input; an unbalanced line output; three MEMS microphone inputs; a headphone output; and a Video Graphics Array (VGA) controller allowing the display of the analysed audio spectrum on either a projector or a monitor. The second expansion module features two external memories: 1 MB of Static Random Access Memory (SRAM) and 16 MB of Synchronous Dynamic Random Access Memory (SDRAM).
While the two additional expansion modules were not completely utilised by the firmware presented in this thesis, upgrades of the real-time audio analyser firmware in future revisions will provide higher-performing and more accurate analysis of the audio spectrum. The full research and design process for the real-time audio analyser is discussed; problems and pitfalls with the final implemented design are highlighted and possible resolutions investigated. The development costs (excluding labour) are given in the form of a bill of materials (BOM), with the total cost averaging around R1000. Moreover, the additional VGA controller could further decrease the overall cost by allowing removal of the TFT-LCD screen from the audio analyser, provided the external display is not included in the BOM.
- Full Text:
- Date Issued: 2017
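The reason pink noise is the standard test signal for a spectrum analyser of this kind is that its 1/f power spectral density puts (nearly) equal power into every octave band, so a balanced system shows a flat octave-band display. This can be checked with an idealised PSD; the sampling rate and FFT size below are hypothetical, not taken from the thesis firmware:

```python
import numpy as np

# Ideal pink-noise PSD, P(f) ~ 1/f, on an rFFT frequency grid.
fs = 48000                                 # sample rate (Hz), hypothetical
n = 1 << 15                                # FFT size, power of two
freqs = np.fft.rfftfreq(n, 1.0 / fs)
psd = np.zeros_like(freqs)
psd[1:] = 1.0 / freqs[1:]                  # skip the DC bin

def octave_power(f_lo):
    """Total power in the octave band [f_lo, 2*f_lo)."""
    band = (freqs >= f_lo) & (freqs < 2.0 * f_lo)
    return psd[band].sum()

# Power per octave is constant to within frequency-bin quantisation,
# which is what makes the RTA display of pink noise flat.
p_mid, p_high = octave_power(500.0), octave_power(2000.0)
print(p_mid, p_high)
```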
Thermoluminescence of synthetic quartz annealed beyond its second phase inversion temperature
- Authors: Mthwesi, Zuko
- Date: 2017
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/46077 , vital:25577
- Description: The thermoluminescence of synthetic quartz annealed at 1000 °C for 10 minutes has been studied. The aim was to study the mechanisms of thermoluminescence in annealed synthetic quartz and to discuss the results in terms of the physics of point defects. The sample was irradiated with a dose of 10 Gy of beta radiation and then heated at a linear heating rate of 1 °C s⁻¹ up to 500 °C. The thermoluminescence (TL) glow curve consists of three glow peaks: peak I at 74 °C (the main peak), which is more intense than the other two, and peak II at 144 °C, which is more intense than peak III at 180 °C. This study focused on the main peak at 74 °C and peak III at 180 °C. Kinetic analysis was carried out to determine the trap depth E, frequency factor s and order of kinetics b of both peaks using the initial rise, peak shape, variable heating rate, glow curve deconvolution and isothermal TL methods. The values of the kinetic parameters obtained were around 0.7 to 1.0 eV for the trap depth and in the interval 10⁸ to 10¹⁵ s⁻¹ for the frequency factor, for both peaks. The effect of heating rate, from 0.5 to 5 °C s⁻¹, on the TL peak intensity and peak temperature was observed, as was the effect of thermal quenching at high heating rates. Since the TL glow curve has overlapping peaks, the Tm-Tstop method (from 54 °C up to 64 °C) and E-Tstop methods were introduced, and a first-order single peak was observed. Phototransferred thermoluminescence (PTTL) was investigated and is characterized by three peaks: PTTL peak I at 72 °C, peak II at 134 °C and peak III at 176 °C. Analysis was carried out on peaks I and III for the effect of dose dependence from 20-200 Gy. Thermal fading of PTTL peaks I and III was observed after a storage time of 30 minutes.
- Full Text:
- Date Issued: 2017
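The initial rise method named in the abstract above can be sketched in a few lines: on the low-temperature tail of a glow peak, intensity follows I(T) ∝ exp(-E/kT), so a linear fit of ln(I) against 1/T yields the trap depth. The data here are synthetic and the assumed 0.95 eV depth is illustrative, not a result from the thesis:

```python
import numpy as np

# Initial-rise method: on the low-temperature tail of a TL glow peak,
# I(T) ~ C * exp(-E / (k_B * T)), so ln(I) vs 1/T is a line of slope -E/k_B.
k_B = 8.617e-5  # Boltzmann constant, eV/K

# Synthetic tail data for illustration: assume a trap depth of 0.95 eV.
E_true = 0.95
T = np.linspace(300, 330, 30)            # temperatures in kelvin
I = 1e12 * np.exp(-E_true / (k_B * T))   # idealised initial-rise intensity

# Linear fit of ln(I) against 1/T recovers the trap depth.
slope, _ = np.polyfit(1.0 / T, np.log(I), 1)
E_fit = -slope * k_B
print(f"fitted trap depth: {E_fit:.2f} eV")
```

In practice the fit is restricted to the region well below the peak maximum (typically below ~10% of peak intensity), where retrapping is negligible.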
Beta decay of 100/40 Zr produced in neutron-induced fission of natural uranium
- Authors: Kamoto, Thokozani
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3024 , vital:20353
- Description: Fission fragments, produced by neutron bombardment of natural uranium at the Physics Department, Jyväskylä, Finland, are studied in this work. The data had been sorted into 25 γ-γ coincidence matrices which were then analysed. In this work we aimed to identify the fission products using γ-γ coincidence analysis and then study the beta decay of some of the fission products. Sixteen fission products ranging from A = 94 to A = 136 were identified. Of these, the beta decay of the A = 100 (100/40 Zr – 100/41 Nb – 100/42 Mo) chain was studied in greater detail. We have also studied the variation with time of the relative intensities of the 159-, 528-, 600-, 768-, 928- and 1502-keV γ-ray lines in 100/42 Mo, and the profiles of the relative intensities have been modelled with the variation of the activity of 100/41 Nb against time. Configuration assignments of 100/40 Zr and 100/42 Mo are discussed.
- Full Text:
- Date Issued: 2016
Calibration and wide field imaging with PAPER: a catalogue of compact sources
- Authors: Philip, Liju
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/2397 , vital:20285
- Description: Observations of the redshifted 21 cm HI line promise to be a formidable tool for cosmology, allowing the investigation of the end of the so-called dark ages, when the first galaxies formed, and the subsequent Epoch of Reionization when the intergalactic medium transitioned from neutral to ionized. Such observations are plagued by foreground emission which is a few orders of magnitude brighter than the 21 cm line. In this thesis I analyzed data from the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER) in order to improve the characterization of the extragalactic foreground component. I derived a catalogue of unresolved radio sources down to a 5 Jy flux density limit at 150 MHz and derived their spectral index distribution using literature data at 408 MHz. I implemented advanced techniques to calibrate radio interferometric data that led to a few percent accuracy on the flux density scale of the derived catalogue. This work, therefore, represents a further step towards creating an accurate, global sky model that is crucial to improve calibration of Epoch of Reionization observations.
- Full Text:
- Date Issued: 2016
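The two-frequency spectral index derivation described above can be sketched as follows, assuming a simple power law S(ν) ∝ ν^α between 150 MHz and 408 MHz; the flux densities below are hypothetical, not values from the catalogue:

```python
import numpy as np

# Two-frequency spectral index sketch: radio sources are assumed to follow
# a power law S(nu) ~ nu**alpha, so alpha follows from flux densities
# measured at two frequencies.
def spectral_index(s1_jy: float, nu1_mhz: float,
                   s2_jy: float, nu2_mhz: float) -> float:
    """Spectral index alpha of S ~ nu**alpha from two measurements."""
    return np.log(s1_jy / s2_jy) / np.log(nu1_mhz / nu2_mhz)

# Example: 12 Jy at 150 MHz and 5.4 Jy at 408 MHz (hypothetical source).
alpha = spectral_index(12.0, 150.0, 5.4, 408.0)
print(f"alpha = {alpha:.2f}")
```

Typical synchrotron sources give α around -0.7 to -0.8 with this sign convention.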
Classical and quantum picture of the interior of two-dimensional black holes
- Authors: Shawa, Mark
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3629 , vital:20531
- Description: A quantum-mechanical description of black holes would represent the final step in our understanding of the nature of space-time. However, any progress towards that end is usually foiled by the persistent space-time singularities that exist at the center of black holes. From the four-dimensional point of view, black holes seem to resist quantization. Under highly symmetric conditions, all higher-dimensional black holes are effectively two-dimensional. Unlike their higher-dimensional counterparts, two-dimensional black holes may not resist quantization. A non-trivial description of gravity in two dimensions is not possible using Einstein’s theory of gravity alone. However, we may still arrive at a consistent description of gravity by introducing a scalar field known as the dilaton. In this thesis, we study both the classical and quantum aspects of the interior of two-dimensional black holes using a generalized dilaton-gravity theory. Classically, we find that the interior of most two-dimensional black holes is not much different from that of four-dimensional black holes. But by introducing quantized matter into the theory, the fluctuations in space-time give a different picture of the structure of the interior of black holes. Using a low-energy effective field theory, we show that it is indeed possible to identify quantum modes in the interior of black holes and to perform quantum-mechanical calculations near the singularity.
- Full Text:
- Date Issued: 2016
Single station TEC modelling during storm conditions
- Authors: Uwamahoro, Jean Claude
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/3812 , vital:20545
- Description: It has been shown in ionospheric research that modelling total electron content (TEC) during storm conditions is a big challenge. In this study, mathematical equations were developed to estimate TEC over Sutherland (32.38ºS, 20.81ºE) during storm conditions, using Empirical Orthogonal Function (EOF) analysis combined with regression analysis. TEC was derived from GPS observations, and a geomagnetic storm was defined for Dst ≤ -50 nT. The inputs for the model were chosen based on the factors that influence TEC variation, such as diurnal, seasonal, solar and geomagnetic activity variation, represented by the hour of the day, day number of the year, F10.7 and the A index respectively. The EOF model was developed using GPS TEC data from 1999 to 2013 and tested on different storms. For the model validation (interpolation), three storms were chosen in 2000 (solar maximum period) and three others in 2006 (solar minimum period), while for extrapolation six storms were chosen, three in 2014 and three in 2015. Before building the model, TEC values for the selected 2000 and 2006 storms were removed from the dataset used to construct the model, in order to keep the validation independent of the training data. A comparison of the observed and modelled TEC showed that the EOF model works well for storms with a non-significant ionospheric TEC response and for storms that occurred during periods of low solar activity. High correlation coefficients between the observed and modelled TEC were obtained, showing that the model captures most of the information contained in the observed TEC. Furthermore, it has been shown that an EOF model developed for a specific station may be used to estimate TEC over other locations within a latitudinal and longitudinal coverage of 8.7º and 10.6º respectively. This is an important result as it reduces the data dimensionality problem for computational purposes.
It may therefore not be necessary for regional storm-time TEC modelling to compute TEC data for all the closest GPS receiver stations since most of the needed information can be extracted from measurements at one location.
- Full Text:
- Date Issued: 2016
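The EOF construction summarised above can be illustrated with a minimal sketch. The TEC matrix here is synthetic (24 hourly values by 365 days, a diurnal shape modulated by a seasonal cycle), standing in for the GPS-derived data, and only the EOF decomposition step is shown; the thesis additionally regresses the EOF coefficients on drivers such as F10.7 and the A index:

```python
import numpy as np

# EOF analysis sketch: decompose a (hypothetical) TEC matrix of shape
# (24 hours x 365 days) into orthogonal modes via SVD, then reconstruct
# with the leading modes only.
rng = np.random.default_rng(0)
hours = np.arange(24)
days = np.arange(365)

# Synthetic TEC: a diurnal bump modulated by a seasonal cycle, plus noise.
diurnal = np.exp(-((hours - 13) / 4.0) ** 2)            # peak near 13:00 LT
seasonal = 1.0 + 0.3 * np.cos(2 * np.pi * days / 365.0)
tec = np.outer(diurnal, seasonal) * 30.0 + rng.normal(0, 0.5, (24, 365))

# EOF decomposition: columns of U are diurnal base functions (EOFs),
# rows of Vt scaled by s are their day-to-day coefficients.
U, s, Vt = np.linalg.svd(tec, full_matrices=False)

# Reconstruct with the first two modes and check the captured variance.
k = 2
tec_k = U[:, :k] * s[:k] @ Vt[:k, :]
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"variance captured by {k} modes: {explained:.3f}")
```

A storm-time model would then predict the coefficient rows from the geophysical inputs rather than carry the full data matrix, which is the dimensionality reduction the abstract refers to.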
The EPR paradox: back from the future
- Authors: Bryan, Kate Louise Halse
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/2881 , vital:20338
- Description: The Einstein-Podolsky-Rosen (EPR) thought experiment exposed a problem in the interpretation that quantum mechanics provides for entangled systems. Although the thought experiment was reformulated mathematically in Bell's theorem, the conclusion regarding entanglement correlations is still debated today. In an attempt to explain how entangled systems maintain their correlations, this thesis investigates the theory of post-state teleportation as a possible interpretation of how information moves between entangled systems without resorting to nonlocal action. Post-state teleportation describes a method of communicating to the past via a quantum information channel. The resulting picture of the EPR thought experiment relies on information propagating backward from a final boundary condition to ensure all correlations are maintained. Similarities were found between this resolution of the EPR paradox and the final state solution to the black hole information paradox and the closely related firewall problem. The latter refers to an apparent conflict between unitary evaporation of a black hole and the strong subadditivity condition. The use of observer complementarity allows this solution of the black hole problem to be shown to be the same as a seemingly different solution known as “ER=EPR”, where ‘ER’ refers to an Einstein-Rosen bridge or wormhole.
- Full Text:
- Date Issued: 2016
Thermoluminescence of annealed synthetic quartz
- Authors: Atang, Elizabeth Fende Midiki
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/420 , vital:19957
- Description: The kinetic and dosimetric features of the main thermoluminescent peak of synthetic quartz have been investigated in quartz ordinarily annealed at 500 ºC as well as in quartz annealed at 500 ºC for 10 minutes. The main peak is found at 78 ºC for the samples annealed at 500 ºC for 10 minutes, irradiated to 10 Gy and heated at 1.0 ºC/s. For the samples ordinarily annealed at 500 ºC, the main peak is found at 106 ºC after the sample has been irradiated to 30 Gy and heated at 5.0 ºC/s. In these samples, the intensity of the main peak is enhanced with repetitive measurement whereas its maximum temperature is unaffected. The peak position of the main peak is independent of the irradiation dose and this, together with its fading characteristics, is consistent with first-order kinetics. For doses between 5 and 25 Gy, the dose response of the main peak of the annealed sample is superlinear. The half-life of the main TL peak of the annealed sample is about 1 h. The activation energy E of the main peak is around 0.90 eV. For a heating rate of 0.4 ºC/s, its order of kinetics b derived from the whole-curve method of analysis is 1.0. Following irradiation, preheating and illumination with 470 nm blue light, the main peak in the annealed sample is regenerated during heating. The resulting phototransferred peak occurs at the same temperature as the original peak and has similar kinetic and dosimetric features, with a half-life of about 1 h. For a preheat temperature of 200 ºC, the intensity of the phototransferred peak increases with illumination time up to a maximum and decreases thereafter. At longer illumination times, no further decrease in the intensity of the phototransferred peak is observed. The traps associated with the 325 ºC peak are the main source of the electrons responsible for the regenerated peak.
- Full Text:
- Date Issued: 2016
A light-emitting-diode pulsing system for measurement of time-resolved luminescence
- Authors: Uriri, Solomon Akpore
- Date: 2015
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:20976 , http://hdl.handle.net/10962/5788
- Description: A new light-emitting-diode based pulsing system for measurement of time-resolved luminescence has been developed. The light-emitting-diodes are pulsed at various pulse-widths by a 555-timer operated as a monostable multivibrator. The light-emitting-diodes are mounted in a dural holder and connected in four parallel sets, each containing four diodes in series. The output pulse from the 555-timer is fed into a 2N7000 MOSFET to produce a pulse-current of 500 mA to drive the set of 16 light-emitting-diodes. This current is sufficient to drive each diode at a pulse-current of 90 mA, with a possible maximum of 110 mA per diode. A multichannel scaler is used to trigger the pulsing system and to record data at selectable dwell times. The system is capable of generating pulse-widths in the range of microseconds upwards.
- Full Text:
- Date Issued: 2015
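The pulse-width behaviour of a monostable 555 is set by an external resistor-capacitor pair via the standard relation T = 1.1RC. The component values below are illustrative, not taken from the thesis:

```python
# Monostable 555 pulse width: T = 1.1 * R * C (standard 555 relation).
def pulse_width(r_ohms: float, c_farads: float) -> float:
    """Output pulse width of a monostable 555, in seconds."""
    return 1.1 * r_ohms * c_farads

# Example: 10 kOhm with 1 nF gives an 11 us pulse, within the microsecond
# range the system is stated to generate.
t = pulse_width(10e3, 1e-9)
print(f"pulse width: {t * 1e6:.1f} us")
```

Swapping R or C scales the pulse width linearly, which is how such a system can cover microseconds upwards with a small set of components.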
Assignment of spin and parity to states in the nucleus ¹⁹⁶Tl
- Authors: Uwitonze, Pierre Celestin
- Date: 2015
- Subjects: Nuclear spin , Particles (Nuclear physics) -- Chirality
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5558 , http://hdl.handle.net/10962/d1017903
- Description: This work presents a study of high-spin states in the nucleus ¹⁹⁶Tl via γ-ray spectroscopy. ¹⁹⁶Tl was produced via the ¹⁹⁷Au(⁴He,5n)¹⁹⁶Tl reaction at a beam energy of 63 MeV. The γ-γ coincidence measurements were performed using the AFRODITE γ-spectrometer array at iThemba LABS. The previous level scheme of ¹⁹⁶Tl has been extended up to an excitation energy of 4071 keV, including 24 new γ-ray transitions. Spins and parities were assigned to levels from directional correlation of oriented nuclei (DCO) and linear polarization anisotropy ratios. An analysis of the B(M1)/B(E2) ratios was found to be consistent with the πh₉/₂⊗νi₁₃/₂ configuration for the ground state band. However, no chiral band was found in ¹⁹⁶Tl and ¹⁹⁸Tl.
- Full Text:
- Date Issued: 2015
Link between ghost artefacts, source suppression and incomplete calibration sky models
- Authors: Nunhokee, Chuneeta Devi
- Date: 2015
- Subjects: Interferometry , Calibration
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5556 , http://hdl.handle.net/10962/d1017900
- Description: Calibration is a fundamental step towards producing radio interferometric images. However, naive calibration produces calibration artefacts, in the guise of spurious emission, buried in the thermal noise. This work investigates these calibration artefacts, henceforth referred to as “ghosts”. A 21 cm observation with the Westerbork Synthesis Radio Telescope yielded similar ghost sources, and it was suspected that they were due to calibrating with incomplete sky models. An analytical ghost distribution for a two-source scenario is derived to substantiate this theory and to seek answers to its bewildering features (a regular ghost pattern, point spread function-like sidelobes, independence of the model flux). The theoretically predicted ghost distribution qualitatively matches the observed ones and shows a strong dependence on the array geometry. The theory leads to the conclusion that the ghost phenomenon and the suppression of unmodelled flux have the same root cause. In addition, the suppression of unmodelled flux is studied as a function of the unmodelled flux, the differential gain solution interval and the number of sources subjected to direction-dependent gains. These studies show that the suppression rate is constant irrespective of the degree of incompleteness of the calibration sky model. In the presence of a direction-dependent effect, the suppression drastically increases; however, this increase can be compensated for by using longer solution intervals.
- Full Text:
- Date Issued: 2015
Properties of traveling ionospheric disturbances (TIDs) over the Western Cape, South Africa
- Authors: Tyalimpi, Vumile Mike
- Date: 2015
- Subjects: Doppler radar , Geographic information systems , Traveling ionospheric disturbances -- South Africa , Ionospheric disturbances -- South Africa
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5557 , http://hdl.handle.net/10962/d1017901
- Description: Travelling Ionospheric Disturbances (TIDs) are believed to be produced by atmospheric gravity waves propagating through the neutral atmosphere. They are smaller in amplitude and shorter in period than most ionospheric disturbances and hence more difficult to measure. Very little is known about the properties of TIDs over the Southern Hemisphere, since studies have mostly been conducted over Northern Hemisphere regions. This study presents a framework, using a High Frequency (HF) Doppler radar, to investigate the physical properties and the possible driving mechanisms of TIDs. This research focuses on studying the characteristics of TIDs, such as period, velocity and temporal variations, using HF Doppler measurements taken in South Africa. The TIDs’ characteristics were determined using a wavelet analysis technique, and a statistical summary of the speed and direction of propagation of the observed TIDs was compiled. The winter medium scale travelling ionospheric disturbances (MSTIDs) observed are generally faster than the summer MSTIDs. For all seasons, the MSTIDs had a preferred south-southwest direction of propagation. Most of the large scale travelling ionospheric disturbances (LSTIDs) were observed during the night and, of these, the spring LSTIDs were fastest compared to the autumn and summer LSTIDs. The general direction of travel of the observed LSTIDs is south-southeast. Total Electron Content (TEC), derived from Global Positioning System (GPS) measurements, was used to validate some of the TID results obtained from the HF Doppler data. The Horizontal Wind Model (HWM07), the magnetic K index and solar terminators were used to determine the possible sources of the observed TIDs. Only 41% of the observed TIDs were successfully linked to their possible sources of excitation.
The information gathered from this study will be valuable in future radio communications and will serve as means to improve the existing ionospheric models over the South African region.
- Full Text:
- Date Issued: 2015
PyMORESANE: A Pythonic and CUDA-accelerated implementation of the MORESANE deconvolution algorithm
- Authors: Kenyon, Jonathan
- Date: 2015
- Subjects: Radio astronomy , Imaging systems in astronomy , MOdel REconstruction by Synthesis-ANalysis Estimators (MORESANE)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5563 , http://hdl.handle.net/10962/d1020098
- Description: The inadequacies of the current generation of deconvolution algorithms are rapidly becoming apparent as new, more sensitive radio interferometers are constructed. In light of these inadequacies, there is renewed interest in the field of deconvolution. Many new algorithms are being developed using the mathematical framework of compressed sensing. One such technique, MORESANE, has recently been shown to be a powerful tool for the recovery of faint diffuse emission from synthetic and simulated data. However, the original implementation is not well-suited to large problem sizes due to its computational complexity. Additionally, its use of proprietary software prevents it from being freely distributed and used. This has motivated the development of a freely available Python implementation, PyMORESANE. This thesis describes the implementation of PyMORESANE as well as its subsequent augmentation with MPU and GPGPU code. These additions accelerate the algorithm and thus make it competitive with its legacy counterparts. The acceleration of the algorithm is verified by means of benchmarking tests for varying image size and complexity. Additionally, PyMORESANE is shown to work not only on synthetic data, but on real observational data. This verification means that the MORESANE algorithm, and consequently the PyMORESANE implementation, can be added to the current arsenal of deconvolution tools.
- Full Text:
- Date Issued: 2015
Statistical analysis of the ionospheric response during storm conditions over South Africa using ionosonde and GPS data
- Authors: Matamba, Tshimangadzo Merline
- Date: 2015
- Subjects: Ionospheric storms -- South Africa -- Grahamstown , Ionospheric storms -- South Africa -- Madimbo , Magnetic storms -- South Africa -- Grahamstown , Magnetic storms -- South Africa -- Madimbo , Ionosondes , Global Positioning System
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5555 , http://hdl.handle.net/10962/d1017899
- Description: Ionospheric storms are an extreme form of space weather phenomena which affect space- and ground-based technological systems. Extreme solar activity may give rise to Coronal Mass Ejections (CMEs) and solar flares that may result in ionospheric storms. This thesis reports on a statistical analysis of the ionospheric response over the ionosonde stations Grahamstown (33.3°S, 26.5°E) and Madimbo (22.4°S, 30.9°E), South Africa, during geomagnetic storm conditions which occurred during the period 1996 - 2011. Total Electron Content (TEC), derived from Global Positioning System (GPS) data by a dual-frequency receiver and an ionosonde at Grahamstown, was analysed for the storms that occurred during the period 2006 - 2011. A comprehensive analysis of the critical frequency of the F2 layer (foF2) and TEC was done. To identify the geomagnetically disturbed conditions, the Disturbance storm time (Dst) index with a storm criterion of Dst ≤ −50 nT was used. The ionospheric disturbances were categorized into three responses, namely single disturbance, double disturbance and not significant (NS) ionospheric storms. Single disturbance ionospheric storms refer to positive (P) and negative (N) ionospheric storms observed separately, while double disturbance storms refer to negative and positive ionospheric storms observed during the same storm period. The statistics show the impact of geomagnetic storms on the ionosphere and indicate that negative ionospheric effects follow the solar cycle. In general, only a few ionospheric storms (0.11%) were observed during solar minimum. Positive ionospheric storms occurred most frequently (47.54%) during the declining phase of solar cycle 23. Seasonally, negative ionospheric storms occurred mostly during the summer (63.24%), while positive ionospheric storms occurred frequently during the winter (53.62%). An important finding is that only negative ionospheric storms were observed during great geomagnetic storm activity (Dst ≤ −350 nT).
For periods when both ionosonde and GPS data were available, the two data sets indicated similar ionospheric responses. Hence, GPS data can be used to effectively identify the ionospheric response in the absence of ionosonde data.
- Full Text:
- Date Issued: 2015
Structure of the nucleus ¹¹⁴Sn using gamma-ray coincidence data
- Authors: Oates, Sean Benjamin
- Date: 2015
- Subjects: High spin physics , Nuclear structure , Nuclear shell theory , Neutron counters , Decay schemes (Radioactivity) , Coincidence circuits , Collective excitations , Anisotropy
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5562 , http://hdl.handle.net/10962/d1019870
- Full Text:
- Date Issued: 2015