A pilot wide-field VLBI survey of the GOODS-North field
- Authors: Akoto-Danso, Alexander
- Date: 2019
- Subjects: Radio astronomy , Very long baseline interferometry , Radio interferometers , Imaging systems in astronomy , Hubble Space Telescope (Spacecraft) -- Observations
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/72296 , vital:30027
- Description: Very Long Baseline Interferometry (VLBI) has significant advantages in disentangling active galactic nuclei (AGN) from star formation, particularly at intermediate to high redshift, due to its high angular resolution and insensitivity to dust. Surveys using VLBI arrays are only now becoming practical over wide areas, thanks to numerous developments and innovations (such as multi-phase-centre techniques) in observation and data-analysis techniques. However, fully automated pipelines for VLBI data analysis are based on old software packages and are unable to incorporate new calibration and imaging algorithms. In this work, the researcher developed a pipeline for VLBI data analysis which integrates a recent wide-field imaging algorithm, RFI excision, and a purpose-built source-finding algorithm developed specifically for 64k×64k wide-field VLBI images. The researcher used this novel pipeline to process 6% (~9 arcmin² of the total 160 arcmin²) of the data from the CANDELS GOODS-North extragalactic field at 1.6 GHz. The milli-arcsecond-scale images have an average rms of ~10 µJy/beam. Forty-four (44) candidate sources were detected, most at sub-mJy flux densities, with brightness temperatures and luminosities of >5×10⁵ K and >6×10²¹ W Hz⁻¹ respectively. This work demonstrates that automated post-processing pipelines for wide-field, uniform-sensitivity VLBI surveys are feasible, and indeed made more efficient, with new software, wide-field imaging algorithms and purpose-built source-finders. This broadens the discovery space for future wide-field surveys with upcoming arrays such as the African VLBI Network (AVN), MeerKAT and the Square Kilometre Array (SKA).
- Full Text:
- Date Issued: 2019
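The abstract above quotes brightness temperatures of >5×10⁵ K, the kind of value used to attribute compact VLBI-detected emission to AGN rather than star formation. As a rough illustration of where such numbers come from (this uses the standard Rayleigh-Jeans relation for a Gaussian source, not code from the thesis, and the source and beam parameters below are hypothetical):

```python
def brightness_temperature(s_jy, nu_ghz, theta_maj_mas, theta_min_mas):
    """Rayleigh-Jeans brightness temperature (K) of a Gaussian source.

    Standard radio-astronomy approximation:
        T_b ~= 1.22e12 * S / (nu**2 * theta_maj * theta_min)
    with flux density S in Jy, frequency nu in GHz and beam axes in mas.
    """
    return 1.22e12 * s_jy / (nu_ghz**2 * theta_maj_mas * theta_min_mas)

# Hypothetical sub-mJy source (0.5 mJy) at 1.6 GHz in a 10 x 10 mas beam:
tb = brightness_temperature(0.5e-3, 1.6, 10.0, 10.0)
print(f"T_b = {tb:.2e} K")  # of order 10^6 K, above a ~10^5 K AGN threshold
```

The steep dependence on beam size is the point: the same flux density confined to a milli-arcsecond beam implies a brightness temperature far too high for star formation alone.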
Multi-instrument observations of ionospheric irregularities over South Africa
- Authors: Amabayo, Emirant Bertillas
- Date: 2012
- Subjects: Ionosphere -- Research , Sudden ionospheric disturbances , Ionospheric storms , Solar activity , Sunspots
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5476 , http://hdl.handle.net/10962/d1005261 , Ionosphere -- Research , Sudden ionospheric disturbances , Ionospheric storms , Solar activity , Sunspots
- Description: The occurrence of mid-latitude spread F (SF) over South Africa has not been extensively studied since the installation of the DPS-4 digisondes at Madimbo (22.38°S, 30.88°E), Grahamstown (33.32°S, 26.50°E) and Louisvale (28.51°S, 21.24°E). This study is intended to quantify the probability of the occurrence of F-region disturbances associated with ionospheric spread F (SF) and L-band scintillation over South Africa. It used available ionosonde data for 8 years (2000–2008) from the three South African stations. The SF events were identified manually on ionograms and grouped for further statistical analysis into frequency SF (FSF), range SF (RSF) and mixed SF (MSF). The results show that the diurnal pattern of SF occurrence peaks strongly between 23:00 and 00:00 UT. This pattern holds for all seasons and types of SF at Madimbo and Grahamstown during 2001 and 2005, except for RSF, which peaked during autumn and spring of 2001 at Madimbo. The probability of both MSF and FSF tends to increase with decreasing sunspot number (SSN), with a peak in 2005 (a moderate solar activity period). The seasonal peaks of MSF and FSF are more frequent during winter months at both Madimbo and Grahamstown. In this study SF was evident in ~0.03% and ~0.06% of the available ionograms at Madimbo and Grahamstown respectively during the eight-year period. The presence of ionospheric irregularities associated with SF and scintillation was investigated using data from selected Global Positioning System (GPS) receiver stations distributed across South Africa. The results, based on GPS total electron content (TEC) and ionosonde measurements, show that SF over this region can most likely be attributed to travelling ionospheric disturbances (TIDs) caused by gravity waves (GWs) and neutral wind composition changes. The GWs were mostly associated with geomagnetic storms and sub-storms that occurred during periods of high and moderate solar activity (2001–2005). SF occurrence during the low solar activity period (2006–2008) can probably be attributed to neutral wind composition changes.
- Full Text:
- Date Issued: 2012
Thermoluminescence of annealed synthetic quartz
- Authors: Atang, Elizabeth Fende Midiki
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/420 , vital:19957
- Description: The kinetic and dosimetric features of the main thermoluminescent peak of synthetic quartz have been investigated in quartz ordinarily annealed at 500 °C as well as quartz annealed at 500 °C for 10 minutes. The main peak is found at 78 °C for the samples annealed at 500 °C for 10 minutes, irradiated to 10 Gy and heated at 1.0 °C/s. For the samples ordinarily annealed at 500 °C the main peak is found at 106 °C after the sample has been irradiated to 30 Gy and heated at 5.0 °C/s. In these samples, the intensity of the main peak is enhanced by repetitive measurement whereas its maximum temperature is unaffected. The position of the main peak in the sample is independent of the irradiation dose, and this, together with its fading characteristics, is consistent with first-order kinetics. For doses between 5 and 25 Gy, the dose response of the main peak of the annealed sample is superlinear. The half-life of the main TL peak of the annealed sample is about 1 h. The activation energy E of the main peak is around 0.90 eV. For a heating rate of 0.4 °C/s, its order of kinetics b derived from the whole-curve method of analysis is 1.0. Following irradiation, preheating and illumination with 470 nm blue light, the main peak in the annealed sample is regenerated during heating. The resulting phototransferred peak occurs at the same temperature as the original peak and has similar kinetic and dosimetric features, with a half-life of about 1 h. For a preheat temperature of 200 °C, the intensity of the phototransferred peak in the sample increases with illumination time up to a maximum and decreases thereafter. At longer illumination times, no further decrease in the intensity of the phototransferred peak is observed. The traps associated with the 325 °C peak are the main source of the electrons responsible for the regenerated peak.
- Full Text:
- Date Issued: 2016
Data compression, field of interest shaping and fast algorithms for direction-dependent deconvolution in radio interferometry
- Authors: Atemkeng, Marcellin T
- Date: 2017
- Subjects: Radio astronomy , Solar radio emission , Radio interferometers , Signal processing -- Digital techniques , Algorithms , Data compression (Computer science)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/6324 , vital:21089
- Description: In radio interferometry, observed visibilities are intrinsically sampled at some interval in time and frequency. Modern interferometers are capable of producing data at very high time and frequency resolution; practical limits on storage and computation costs require that some form of data compression be imposed. The traditional form of compression is simple averaging of the visibilities over coarser time and frequency bins. This has an undesired side effect: the resulting averaged visibilities “decorrelate”, and do so differently depending on the baseline length and averaging interval. This translates into a non-trivial signature in the image domain known as “smearing”, which manifests itself as an attenuation in amplitude towards off-centre sources. With the increasing fields of view and/or longer baselines employed in modern and future instruments, the trade-off between data rate and smearing becomes increasingly unfavourable. Averaging also results in a baseline-length- and position-dependent point spread function (PSF). In this work, we investigate alternative approaches to low-loss data compression. We show that averaging of the visibility data can be understood as a form of convolution by a boxcar-like window function, and that by employing alternative baseline-dependent window functions a more optimal interferometer smearing response may be induced. Specifically, we can improve the amplitude response over a chosen field of interest and attenuate sources outside it. The main cost of this technique is a reduction in nominal sensitivity; we investigate the smearing-versus-sensitivity trade-off and show that in certain regimes a favourable compromise can be achieved. We show the application of this technique to simulated data from the Jansky Very Large Array and the European Very Long Baseline Interferometry Network.
Furthermore, we show that the position-dependent PSF shape induced by averaging can be approximated using linear algebraic properties to effectively reduce the computational complexity for evaluating the PSF at each sky position. We conclude by implementing a position-dependent PSF deconvolution in an imaging and deconvolution framework. Using the Low-Frequency Array radio interferometer, we show that deconvolution with position-dependent PSFs results in higher image fidelity compared to a simple CLEAN algorithm and its derivatives.
- Full Text:
- Date Issued: 2017
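The averaging-as-convolution picture in the abstract above can be sketched numerically: averaging a unit-amplitude fringe over a coarse time bin attenuates its amplitude by a boxcar (sinc) factor, which is the time-smearing effect described. A minimal illustration, with hypothetical fringe-rate and averaging numbers not taken from the thesis:

```python
import numpy as np

# For a source offset from the phase centre, the fringe phase rotates across
# the averaging interval, so the averaged visibility amplitude is attenuated.
fringe_rate_hz = 0.02   # fringe rotation rate of an off-centre source
dump_time_s = 1.0       # native integration time
avg_interval_s = 30.0   # coarser averaging bin

t = np.arange(0.0, avg_interval_s, dump_time_s)
visibility = np.exp(2j * np.pi * fringe_rate_hz * t)  # unit-amplitude fringe

# Simple averaging = convolution with a boxcar window, sampled at bin centres
smeared_amp = np.abs(visibility.mean())
print(f"averaged amplitude: {smeared_amp:.3f}")  # < 1: smearing loss

# The continuous boxcar window predicts a sinc attenuation factor
predicted = abs(np.sinc(fringe_rate_hz * avg_interval_s))
```

Replacing the implicit boxcar with a tapered, baseline-dependent window reshapes this attenuation curve, which is the degree of freedom the thesis exploits to protect a chosen field of interest.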
Using co-located radars and instruments to analyse ionospheric events over South Africa
- Authors: Athieno, Racheal
- Date: 2012
- Subjects: Ionosphere -- Research -- South Africa , Space environment -- Research -- South Africa , Meteorology -- Research -- South Africa , Ionosondes -- Research -- South Africa
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5493 , http://hdl.handle.net/10962/d1005279 , Ionosphere -- Research -- South Africa , Space environment -- Research -- South Africa , Meteorology -- Research -- South Africa , Ionosondes -- Research -- South Africa
- Description: Space weather and its effects on technological systems are important topics for scientific research. Developing an understanding of the behaviour, sources and effects of ionospheric events forms a basis for improving space weather prediction. This thesis attempts to use co-located radars and instruments for the analysis of ionospheric events over South Africa. The HF Doppler radar, ionosonde, Global Positioning System (GPS) and GPS ionospheric scintillation monitor (GISTM) receivers are co-located in Hermanus (34.4°S, 19.2°E), one of the observatories of the space science directorate of the South African National Space Agency (SANSA). Data were obtained from these radars and instruments and analysed for ionospheric events. Only the Hermanus station was selected for this analysis, because it is currently the only South African station that hosts all the mentioned radars and instruments. Ionospheric events identified include wave-like structures, Doppler spread, sudden frequency deviations and ionospheric oscillations associated with geomagnetic pulsations. For the purpose of this work, ionospheric events are defined as any unusual structures observed on the received signal and inferred from observations made by the HF Doppler radar. They were identified by visual inspection of the Doppler shift spectrograms. The magnitude and nature of the events vary depending on their source, and events were observed by all, some, or only one of the instruments. This study suggests that wider data coverage and more stations in South Africa merit consideration, especially since plans are underway to host a co-located radar network similar to that in Hermanus at at least three additional observatory sites in South Africa. This study lays a foundation for multi-station co-located radar and instrument observation and analysis of ionospheric events, which should enhance the accuracy of space weather and HF communication prediction.
- Full Text:
- Date Issued: 2012
An investigation of traveling ionospheric disturbances (TIDs) in the SANAE HF radar data
- Authors: Atilaw, Tsige Yared
- Date: 2022-04-07
- Subjects: Ionospheric storms Antarctica , Radar Antarctica , Range time-intensity (RTI) , South African National Antarctic Expedition (SANAE) , Super Dual Auroral Radar Network (SuperDARN)
- Language: English
- Type: Doctoral thesis , text
- Identifier: http://hdl.handle.net/10962/232377 , vital:49986 , DOI 10.21504/10962/232377
- Description: This thesis aims to study the characteristics of traveling ionospheric disturbances (TIDs) as identified in the radar data of the South African National Antarctic Expedition (SANAE) Super Dual Auroral Radar Network (SuperDARN) radar located in Antarctica. For this project, 22 TIDs were identified from visual inspection of range time-intensity (RTI) plots of backscattered power and Doppler velocity parameters of the SANAE radar between 2005 and 2015. These events were studied to determine their characteristics and driving mechanisms. Where good quality data were available, the SANAE HF radar data were supplemented by Halley radar data, which has a large overlapping field of view (FOV) with the SANAE radar, and also by GPS TEC data. This provided a multi-instrument data analysis of some TID events. Different spectral analysis methods, namely the multitaper method (MTM), the fast Fourier transform (FFT) and the Lomb-Scargle periodogram, were used to obtain spectral information on the observed waves. The advantage of using multiple windowing in MTM over the traditional windowing method was illustrated using one of the TID events. In addition, the analytic signal of the wave from the MTM method was used to estimate the instantaneous phase velocity and propagation azimuth of the wave, which was able to track changes in the characteristics of the medium-scale TID (MSTID) efficiently throughout the duration of the event. This is a clear advantage over other windowing techniques. The energy contribution by this MSTID through Joule heating was estimated over the region where spectral analysis of both SANAE and Halley data showed it to be present. The majority of the TIDs (65.4%) could be classified as MSTIDs, with periods of 20–60 minutes, velocities of 50–333 m s⁻¹ and wavelengths of 129–833 km. The TID occurrence rate was high around the March equinox, with 12 of the 16 event days occurring during March–May.
March had a particularly high number of occurrences of TIDs (46%). The majority of the TIDs observed during this month propagated northward or southeastward. In terms of prevailing geomagnetic conditions, 6 of the 16 event days were geomagnetically quiet, while 10 occurred during geomagnetic storms and substorms. During quiet conditions, TIDs could be linked to Es and polarised electric fields in 2 of these events. The other quiet-time events could not be related to Es instability and polarised electric fields, either because their exact propagation direction could not be determined or because the data quality of the Es-region scatter was too poor to perform spectral analysis. The storm-/substorm-related TIDs are possibly generated through Joule heating, the Lorentz force and energetic particle precipitation. , Thesis (PhD) -- Faculty of Science, Physics and Electronics, 2022
- Full Text:
- Date Issued: 2022-04-07
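Of the spectral estimators named in the abstract above, the Lomb-Scargle periodogram is the one suited to unevenly sampled series, which is why it appears alongside the FFT and MTM. A minimal sketch using SciPy on synthetic data (the sampling, period and noise level are illustrative assumptions, not radar data):

```python
import numpy as np
from scipy.signal import lombscargle

# Unevenly sampled synthetic series containing a wave with a 30-minute
# period, standing in for a TID signature in a Doppler-velocity series.
rng = np.random.default_rng(0)
t_min = np.sort(rng.uniform(0.0, 600.0, 200))  # irregular sample times (min)
period_min = 30.0
y = np.sin(2 * np.pi * t_min / period_min) + 0.3 * rng.standard_normal(200)

# Scan candidate periods from 10 to 120 minutes (lombscargle expects
# angular frequencies, hence the 2*pi/period conversion).
periods = np.linspace(10.0, 120.0, 500)
omega = 2 * np.pi / periods
power = lombscargle(t_min, y, omega, normalize=True)

best_period = periods[np.argmax(power)]
print(f"dominant period: {best_period:.1f} min")  # near the injected 30 min
```

Unlike the FFT, no interpolation onto a regular grid is needed, so gaps in radar scatter do not bias the recovered period.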
Some ionospheric effects observed at sunrise
- Authors: Baker, D C
- Date: 1964
- Subjects: Sun -- Rising and setting , Ionosphere -- Research
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5513 , http://hdl.handle.net/10962/d1009496 , Sun -- Rising and setting , Ionosphere -- Research
- Description: The study of the ionosphere over the sunrise period is necessary for an understanding of the variations in layer structure with time and has been a topic of research for many workers. On the whole these investigations have been restricted to a study of critical frequency variations with relatively short intervals of a few minutes between successive records, of N-h curves deduced from ionograms with long intervals (15 minutes or so) between successive N-h curves, or of continuously monitored single-frequency reflections. Not one of the three techniques is entirely satisfactory for a detailed study of ionospheric behaviour over sunrise. The first two do not give a sufficiently clear indication of what happens in the initial stages of layer development, while from the third incomplete data is obtained as to what is happening at a specific electron-density level. For this reason a preliminary investigation of the ionosphere over sunrise was made at Rhodes University during August 1959. The records were obtained at four-and-a-half-minute intervals and scaled by the method of KELSO (1952). Many of the results were inconclusive, but it appeared that records would have to be taken at approximately one-minute intervals and reduced to N-h curves by a scaling technique which made full allowance for low-level ionization if useful results were to be obtained. An attempt has been made in this thesis to investigate the behaviour of the ionosphere over sunrise more fully than can be done by the three techniques referred to. A number of observed phenomena are also examined. Part I deals with the theoretical background to ionosphere physics in general and describes the equipment, equipment modifications and experimental procedure. Part II presents the results obtained. The records for a large-scale travelling disturbance are analysed. Various observed phenomena are described and discussed. A simple method of obtaining production rates from experimental data is described. The implications of the observed variations of production rates with height and time are discussed. Suggestions for further research and improvement of the methods used are made in Chapter 9.
- Full Text:
- Date Issued: 1964
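The production-rate discussion in the abstract above rests on classical Chapman layer theory, in which the ionization production rate depends on reduced height and solar zenith angle. As a minimal sketch only (a textbook Chapman production function, not the thesis's own scaling method; the parameter values `q0`, `h0` and `H` below are illustrative assumptions):

```python
import math

def chapman_production(h, chi_deg, q0=1.0, h0=300.0, H=50.0):
    """Chapman production rate q(h, chi), arbitrary units.

    h: height (km); chi_deg: solar zenith angle (degrees);
    q0: peak production rate for an overhead Sun; h0: height of that
    peak (km); H: scale height (km). Uses the flat-Earth secant
    approximation, valid for chi well below 90 degrees.
    """
    z = (h - h0) / H                                # reduced height
    sec_chi = 1.0 / math.cos(math.radians(chi_deg))  # secant of zenith angle
    return q0 * math.exp(1.0 - z - sec_chi * math.exp(-z))
```

For an overhead Sun the production peaks at `h0` with value `q0`; as the zenith angle grows toward sunrise/sunset geometry, the peak weakens and moves upward, which is the height-and-time behaviour the abstract refers to.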
Nonlinear optical responses of phthalocyanines in the presence of nanomaterials or when embedded in polymeric materials
- Authors: Bankole, Owolabi Mutolib
- Date: 2017
- Subjects: Phthalocyanines , Phthalocyanines -- Optical properties , Alkynes , Triazoles , Nonlinear optics , Photochemistry , Complex compounds , Amines , Mercaptopyridine
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/45794 , vital:25548
- Description: This work describes the synthesis, photophysical and nonlinear optical characterizations of alkynyl Pcs (1, 2, 3, 8 and 9), 1,2,3-triazole ZnPc (4), mercaptopyridine Pcs (5, 6 and 7) and amino Pcs (10 and 11). Complexes 1, 2, 4, 7, 8, 9 and 11 were newly synthesized and characterized using techniques including 1H-NMR, MALDI-TOF, UV-visible spectrophotometry, FTIR and elemental analysis. The results of the characterizations were in good agreement with their molecular structures, and confirmed the purity of the new molecules. Complex 10 was covalently linked to pristine (GQDs), nitrogen-doped (NGQDs), and sulfur-nitrogen co-doped (SNGQDs) graphene quantum dots; gold nanoparticles (AuNPs); poly(acrylic acid) (PAA); and Fe3O4@Ag core-shell and Fe3O4-Ag hybrid nanoparticles. Complex 11 was linked to AgxAuy alloy nanoparticles via NH2-Au and/or Au-S bonding, while 2 and 3 were linked to gold nanoparticles (AuNPs) via click reactions. Evidence of successful conjugation of 2, 3, 10 and 11 to the nanomaterials was provided by UV-vis, EDS, TEM, XRD and XPS analyses. Optical limiting (OL) responses of the samples were evaluated using the open-aperture Z-scan technique at 532 nm with 10 ns pulses, in solution or when embedded in polymer mixtures. The Z-scan data for the studied samples fitted a two-photon absorption (2PA) mechanism, but the Pcs and the Pc-nanomaterial or polymer composites also exhibit multi-photon absorption, aided by triplet-triplet population, giving rise to reverse saturable absorption (RSA). Phthalocyanines doped in polymer matrices showed larger nonlinear absorption coefficients (βeff), third-order susceptibility (Im[χ(3)]) and second-order hyperpolarizability (γ), with an accompanying lower intensity threshold (Ilim), than in solution. Aggregation in DMSO negatively affected the NLO behaviour of Pcs (8 as a case study) at low laser power; the behaviour improved at relatively higher laser power.
The heavy-atom-substituted Pc (6) showed enhanced NLO and OL properties compared to the lighter-atom analogues 5 and 7. A direct relationship between enhanced photophysical properties and nonlinear effects, favoured by excited triplet absorption of 2, 3, 10 and 11 in the presence of nanomaterials, was established. The major factors responsible for the enhanced nonlinearities of 10 in the presence of NGQDs and SNGQDs were fully described and attributed to the surface defects caused by the presence of heteroatoms such as nitrogen and sulfur. The studies showed that phthalocyanine-nanomaterial composites are useful in applications such as optical switching, pulse compression and laser pulse narrowing.
- Full Text:
- Date Issued: 2017
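The open-aperture Z-scan analysis mentioned in the abstract above is conventionally modelled (Sheik-Bahae formalism) by a normalized transmittance dip around the beam focus; to lowest order in the on-axis nonlinear phase parameter q0 = βeff·I0·Leff, T(z) ≈ 1 − q0 / (2√2 (1 + z²/z0²)). A minimal sketch under that standard approximation (function names are hypothetical, not from the thesis; valid only for q0 < 1):

```python
import math

def transmittance_2pa(z, z0, q0):
    """Open-aperture Z-scan normalized transmittance for two-photon
    absorption, lowest-order approximation (valid for q0 < 1).
    z: sample position; z0: Rayleigh range; q0 = beta_eff * I0 * L_eff."""
    x = z / z0
    return 1.0 - q0 / (2.0 * math.sqrt(2.0) * (1.0 + x * x))

def beta_eff_from_dip(T_min, I0, L_eff):
    """Invert the on-focus transmittance minimum T(0) to recover the
    effective nonlinear absorption coefficient beta_eff."""
    q0 = 2.0 * math.sqrt(2.0) * (1.0 - T_min)
    return q0 / (I0 * L_eff)
```

The deeper the dip at z = 0, the larger the recovered βeff, which is why polymer-embedded Pcs with larger βeff show lower limiting thresholds.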
Comparison of A₄ neutrino mass models
- Authors: Barry, James Munnik Hamilton
- Date: 2010
- Subjects: Neutrinos -- Mass , Standard model (Nuclear physics) , Particles (Nuclear physics)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5554 , http://hdl.handle.net/10962/d1015271
- Description: The present neutrino oscillation data are compatible with tri-bimaximal mixing, to leading order. The addition of an A₄ family symmetry and extended Higgs sector to the Standard Model can generate this mixing pattern, assuming the correct vacuum expectation value alignment of the Higgs scalars. The effect of deviations from this alignment is studied, for different types of A₄ models, with a phenomenological emphasis: the effect of perturbations on the model predictions for the neutrino oscillation and neutrino mass observables. The standard theoretical description of neutrino oscillations is presented, along with a summary of the past, present and future experimental efforts aimed at measuring the neutrino mixing parameters. Additionally, the current constraints on the sum of absolute neutrino masses and the amplitude for neutrinoless double beta decay, which is yet to be observed, are discussed. These constraints provide a model-independent test of family symmetry models. The Standard Model is reviewed, and extensions to the Standard Model such as the seesaw mechanism(s) are discussed: these are designed to endow neutrinos with mass, and can be incorporated into A₄ symmetry models. Models with different A₄ particle assignments are analysed for deviations from tri-bimaximal mixing. There are nine models presented in Chapter 5, with lepton doublets transforming as 3 (underlined) and right-handed charged leptons transforming as 1, 1', 1" (all underlined); five of these include right-handed neutrinos transforming as 3 (underlined) and make use of the seesaw mechanism. Chapter 6 contains the analysis of six models that assign all leptons to the 3 (underlined) representation, with four of these utilising the seesaw mechanism. The models are tested for any degree of fine tuning of the parameters that define the mass matrices.
The effect of perturbations on the mixing angle observables, in particular sin²θ₁₃ and sin²θ₂₃, is studied, as well as the effect on the Jarlskog invariant, J_CP. Investigations of the ⟨Mee⟩-∑Mν parameter space allow for comparison with current data, and can lead to the possible exclusion of a particular model by constraints from future data.
- Full Text:
- Date Issued: 2010
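The tri-bimaximal (TBM) pattern against which the abstract's models are compared is a fixed unitary matrix with sin²θ₁₂ = 1/3, sin²θ₂₃ = 1/2 and θ₁₃ = 0. As an illustrative sketch (standard Harrison-Perkins-Scott form and PMNS parametrisation, not code from the thesis):

```python
import numpy as np

# Tri-bimaximal mixing matrix (Harrison-Perkins-Scott form).
U_TBM = np.array([
    [ np.sqrt(2.0 / 3.0), 1.0 / np.sqrt(3.0),  0.0],
    [-1.0 / np.sqrt(6.0), 1.0 / np.sqrt(3.0), -1.0 / np.sqrt(2.0)],
    [-1.0 / np.sqrt(6.0), 1.0 / np.sqrt(3.0),  1.0 / np.sqrt(2.0)],
])

def mixing_angles(U):
    """Extract sin^2(theta_12), sin^2(theta_23), sin^2(theta_13) from a
    PMNS-like matrix in the standard parametrisation."""
    s13_sq = abs(U[0, 2]) ** 2
    s12_sq = abs(U[0, 1]) ** 2 / (1.0 - s13_sq)
    s23_sq = abs(U[1, 2]) ** 2 / (1.0 - s13_sq)
    return s12_sq, s23_sq, s13_sq
```

Perturbing the vacuum alignment shifts the entries of U away from these values; running the extraction on the perturbed matrix gives the deviations in sin²θ₁₃ and sin²θ₂₃ that the thesis confronts with data.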
MEQSILHOUETTE: a mm-VLBI observation and signal corruption simulator
- Authors: Blecher, Tariq
- Date: 2017
- Subjects: Large astronomical telescopes , Very long baseline interferometry , MEQSILHOUETTE (Software) , Event horizon telescope
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/40713 , vital:25019
- Description: The Event Horizon Telescope (EHT) aims to resolve the innermost emission of the nearby supermassive black holes Sgr A* and M87 on event horizon scales. This emission is predicted to be gravitationally lensed by the black hole, which should produce a shadow (or silhouette) feature, a precise measurement of which is a test of gravity in the strong-field regime. This emission is also an ideal probe of the innermost accretion and jet-launch physics, offering new insights into this data-limited observing regime. The EHT will use the technique of Very Long Baseline Interferometry (VLBI) at (sub)millimetre wavelengths, which has a diffraction-limited angular resolution of order ~ 10 µ-arcsec. However, this technique suffers from unique challenges, including scattering and attenuation in the troposphere and interstellar medium; variable source structure; as well as antenna pointing errors comparable to the size of the primary beam. In this thesis, we present the meqsilhouette software package, which is focused on simulating realistic EHT data. It has the capability to simulate a time-variable source, and includes realistic descriptions of the effects of the troposphere, the interstellar medium, as well as primary beams and associated antenna pointing errors. We have demonstrated through several example simulations that these effects can limit the ability to measure the key science parameters. This simulator can be used to research calibration, parameter estimation and imaging strategies, as well as gain insight into possible systematic uncertainties.
- Full Text:
- Date Issued: 2017
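The ~10 µ-arcsec figure quoted in the abstract above follows from the diffraction limit θ ≈ λ/B. A minimal sketch of that arithmetic (the 1.3 mm wavelength and roughly Earth-diameter baseline below are illustrative assumptions, not values taken from the thesis):

```python
import math

def diffraction_limit_uas(wavelength_m, baseline_m):
    """Diffraction-limited angular resolution theta ~ lambda / B,
    converted from radians to micro-arcseconds."""
    theta_rad = wavelength_m / baseline_m
    return theta_rad * (180.0 / math.pi) * 3600.0 * 1e6

# EHT-like numbers: 1.3 mm observing wavelength, ~Earth-diameter baseline.
resolution = diffraction_limit_uas(1.3e-3, 1.27e7)
```

This gives a resolution of a few tens of µ-arcsec, i.e. of order ~10 µ-arcsec, comparable to the predicted shadow diameters of Sgr A* and M87.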
Neutral Atomic Hydrogen in Gravitationally Lensed Systems
- Authors: Blecher, Tariq Dylan
- Date: 2021-10-29
- Subjects: Uncatalogued
- Language: English
- Type: Doctoral theses , text
- Identifier: http://hdl.handle.net/10962/192776 , vital:45263
- Description: Thesis (PhD) -- Faculty of Law, Law, 2021
- Full Text:
- Date Issued: 2021-10-29
Statistical Analysis of the Radio-Interferometric Measurement Equation, a derived adaptive weighting scheme, and applications to LOFAR-VLBI observation of the Extended Groth Strip
- Authors: Bonnassieux, Etienne
- Date: 2019
- Subjects: Radio astronomy , Astrophysics , Astrophysics -- Instruments -- Calibration , Imaging systems in astronomy , Radio interferometers , Radio telescopes , Astronomy -- Observations
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/93789 , vital:30942
- Description: J.R.R. Tolkien wrote, in his Mythopoeia, that “He sees no stars who does not see them first, of living silver made that sudden burst, to flame like flowers beneath the ancient song”. In his defense of myth-making, he formulates the argument that the attribution of meaning is an act of creation - that “trees are not ‘trees’ until so named and seen” - and that this capacity for creation defines the human creature. The scientific endeavour, in this context, can be understood as a social expression of a fundamental feature of humanity, and from this endeavour flows much understanding. This thesis, one thread among many, focuses on the study of astronomical objects as seen by the radio waves they emit. What are radio waves? Electromagnetic waves were theorised by James Clerk Maxwell (Maxwell 1864) in his great theoretical contribution to modern physics, their speed matching the speed of light as measured by Ole Christensen Rømer and, later, James Bradley. It was not until Heinrich Rudolf Hertz’s 1887 experiment that these waves were measured in a laboratory, leading to the dawn of radio communications - and, later, radio astronomy. The link between radio waves and light was one of association: light is known to behave as a wave (Young double-slit experiment), with the same propagation speed as electromagnetic radiation. Light “proper” is also known to exist beyond the optical regime: Herschel’s experiment shows that when diffracted through a prism, sunlight warms even those parts of a desk which are not observed to be lit (first evidence of infrared light). The link between optical light and unseen electromagnetic radiation is then an easy step to make, and one confirmed through countless technological applications (e.g. optical fiber to name but one). And as soon as this link is established, a question immediately comes to the mind of the astronomer: what does the sky, our Universe, look like to the radio “eye”? 
Radio astronomy has a short but storied history: from Karl Jansky’s serendipitous observation of the centre of the Milky Way, which outshines our Sun in the radio regime, in 1933, to Grote Reber’s hand-built back-yard radio antenna in 1937, which successfully detected radio emission from the Milky Way itself, to such monumental projects as the Square Kilometre Array and its multiple pathfinders, it has led to countless discoveries and the opening of a truly new window on the Universe. The work presented in this thesis is a contribution to this discipline - the culmination of three years of study, which is a rather short time to get a firm grasp of radio interferometry both in theory and in practice. The need for robust, automated methods - which are improving daily, thanks to the tireless labour of the scientists in the field - is becoming ever stronger as the SKA approaches, looming large on the horizon; but even today, in the precursor era of LOFAR, MeerKAT and other pathfinders, it is keenly felt. When I started my doctorate, the sheer scale of the task at hand felt overwhelming - to actually be able to contribute to its resolution seemed daunting indeed! Thankfully, as the saying goes, no society sets for itself material goals which it cannot achieve. This thesis took place at an exciting time for radio interferometry: at the start of my doctorate, the LOFAR international stations were - to my knowledge - only beginning to be used, and even then, only tentatively; MeerKAT had not yet shown its first light; the techniques used throughout my work were still being developed. At the time of writing, great strides have been made. One of the greatest technical challenges of LOFAR - imaging using the international stations - is starting to become reality. This technical challenge is the key problem that this thesis set out to address. While we only achieved partial success so far, it is a testament to the difficulty of the task that it is not yet truly resolved. 
One of the major results of this thesis is a model of a bright resolved source near a famous extragalactic field: properly modeling this source not only allows the use of international LOFAR stations, but also grants deeper access to the extragalactic field itself, which is otherwise polluted by the 3C source’s sidelobes. This result was only achieved thanks to the other major result of this thesis: the development of a theoretical framework with which to better understand the effect of calibration errors on images made from interferometric data, and an algorithm to strongly mitigate them. The structure of this manuscript is as follows: we begin with an introduction to radio interferometry, LOFAR, and the emission mechanisms which dominate for our field of interest. These introductions are primarily intended to give a brief overview of the technical aspects of the data reduced in this thesis. We follow with an overview of the Measurement Equation formalism, which underpins our theoretical work. This is the keystone of this thesis. We then show the theoretical work that was developed as part of the research work done during the doctorate - which was published in Astronomy & Astrophysics. Its practical application - a quality-based weighting scheme - is used throughout our data reduction. This data reduction is the next topic of this thesis: we contextualise the scientific interest of the data we reduce, and explain both the methods and the results we achieve.
- Full Text:
- Date Issued: 2019
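The Measurement Equation formalism that the abstract above calls the keystone of the thesis expresses the visibility seen on baseline pq as V_pq = J_p B J_q^H, where J_p and J_q are 2x2 Jones matrices capturing all propagation and instrumental effects along each signal path and B is the source brightness matrix. A minimal one-source, one-baseline sketch (a generic illustration of the RIME, not the thesis's own code):

```python
import numpy as np

def rime_visibility(jones_p, jones_q, brightness):
    """Radio Interferometric Measurement Equation for a single source on
    baseline pq: V_pq = J_p B J_q^H (all 2x2 complex matrices)."""
    return jones_p @ brightness @ jones_q.conj().T

# Unpolarized 1 Jy source: brightness matrix proportional to the identity.
B = np.eye(2, dtype=complex)

# Scalar complex gains as the simplest Jones terms (illustrative values).
J_p = 2.0 * np.eye(2, dtype=complex)
J_q = 3.0 * np.eye(2, dtype=complex)

V_pq = rime_visibility(J_p, J_q, B)  # gain-corrupted visibility
```

Calibration errors are precisely mis-estimates of the J terms, which is why a statistical treatment of this equation leads directly to the quality-based visibility weighting scheme the thesis develops.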
Ionospheric total electron content variability and its influence in radio astronomy
- Authors: Botai, Ondego Joel
- Date: 2006
- Subjects: Electrons , Global Positioning System , Global Positioning System -- Data processing , Ionosphere , Ionospheric radio wave propagation
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5473 , http://hdl.handle.net/10962/d1005258 , Electrons , Global Positioning System , Global Positioning System -- Data processing , Ionosphere , Ionospheric radio wave propagation
- Description: Ionospheric phase delays of radio signals from Global Positioning System (GPS) satellites have been used to compute ionospheric Total Electron Content (TEC). An extended Chapman profile model is used to estimate the electron density profiles and TEC. The Chapman profile that can be used to predict TEC over the mid-latitudes only applies during day time. To model night-time TEC variability, a polynomial function is fitted to the night-time peak electron density profiles derived from the online International Reference Ionosphere (IRI) 2001. The observed and predicted TEC and its variability have been used to study ionospheric influence on radio astronomy in the South African region. Differential phase delays of the radio signals from radio astronomy sources have been simulated using TEC. Using the simulated phase delays, the azimuth and declination offsets of the radio sources have been estimated. Results indicate that pointing errors of the order of milliarcseconds (mas) are likely if the ionospheric phase delays are not corrected for. These delays are not uniform and vary over a broad spectrum of timescales. This implies that fast frequency (referencing) switching, closure phases and fringe fitting schemes for ionospheric correction in astrometry are not the best option, as they do not capture the real state of the ionosphere, especially if the switching time is greater than the timescale of ionospheric TEC variability. However, advantage can be taken of the GPS satellite data available at one-second intervals from the GPS receiver network in South Africa to derive parameters which could be used to correct for the ionospheric delays. Furthermore, GPS data can also be used to monitor the occurrence of scintillations (which might corrupt radio signals), especially for the proposed Square Kilometre Array (SKA) stations closer to the equatorial belt during magnetic storms and sub-storms. 
A 10-minute snapshot of GPS data recorded with the Hermanus [34.42° S, 19.22° E] dual-frequency receiver on 2003-04-11 did not show the occurrence of scintillations. This time scale is, however, too short to be representative; longer time scales (hours, days, seasons) are needed to monitor the occurrence of scintillations.
- Full Text:
- Date Issued: 2006
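The Chapman profile and its height integration into TEC, as described in the abstract above, can be sketched numerically. The peak density, peak height and scale height below are illustrative assumptions, not values taken from the thesis:

```python
import numpy as np

def chapman_density(h_km, nm=1e12, hm=300.0, H=60.0):
    """Chapman-alpha electron density profile (electrons/m^3).
    nm: peak density, hm: peak height (km), H: scale height (km) - all assumed."""
    z = (h_km - hm) / H
    return nm * np.exp(0.5 * (1.0 - z - np.exp(-z)))

# Integrate the profile over height to obtain vertical TEC
# (1 TEC unit = 1e16 electrons/m^2).
h = np.linspace(100.0, 1500.0, 5000)             # height grid, km
ne = chapman_density(h)
dh_m = np.diff(h) * 1e3                          # km -> m
vtec = np.sum(0.5 * (ne[1:] + ne[:-1]) * dh_m)   # trapezoidal rule
print(f"vertical TEC ≈ {vtec / 1e16:.1f} TECU")  # ~25 TECU for these assumed values
```

A night-time profile, as in the thesis, would instead take its peak parameters from an IRI-derived polynomial fit rather than the fixed values assumed here.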
Towards modelling the formation of ore bodies initial results dealing with the fluid mechanical aspects of magma chamber convection
- Authors: Botha, André Erasmus
- Date: 1999
- Subjects: Ore deposits , Fluid mechanics , Magmatism
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5492 , http://hdl.handle.net/10962/d1005278 , Ore deposits , Fluid mechanics , Magmatism
- Description: This thesis forms part of a larger effort which aims to establish the means of assessing the fluid mechanical behaviour of magma as it cools inside a magma chamber surrounded by porous country rock. The reason for doing so is to advance the understanding of some types of mineral deposits; for example, the Platinum Group Elements (PGEs). The magma is modelled with the governing equations for a single-phase incompressible Newtonian fluid with variable viscosity and density. In this thesis, thermal conductivity and specific heat are approximated as constants and the country rock is treated as a conducting solid so as to save on computational time in the initial phases of the project. A basic review of the relevant literature is presented as background material and three basic models of magma chambers are discussed: crystal settling, compositional convection and double diffusive convection. The results presented in this thesis are from finite element calculations by a commercial computer code: ANSYS 5.4. This code has been employed in industry for over 26 years and has a long and successful benchmark history. In this context, finite element methods that are applicable to the code are discussed in chapter 5. In chapter 6, results that were obtained in the course of this research are presented. The thesis concludes with an indication of the possible geological significance of the results and various refinements that should be made to future models.
- Full Text:
- Date Issued: 1999
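As a rough illustration of the convection question the thesis addresses, one can estimate the thermal Rayleigh number that controls whether a fluid layer convects. All magma-chamber property values below are assumed order-of-magnitude figures for illustration, not values from the thesis:

```python
# Thermal Rayleigh number for a fluid layer heated from below; convection
# sets in above the critical value Ra_c ≈ 1708 (rigid boundaries). All
# property values are illustrative assumptions.
g     = 9.81     # gravity, m/s^2
alpha = 5e-5     # thermal expansivity, 1/K
dT    = 100.0    # temperature contrast across the layer, K
d     = 1000.0   # layer (chamber) thickness, m
kappa = 1e-6     # thermal diffusivity, m^2/s
nu    = 10.0     # kinematic viscosity of the melt, m^2/s

Ra = g * alpha * dT * d**3 / (kappa * nu)
print(f"Ra ≈ {Ra:.1e}")  # vastly supercritical -> vigorous convection expected
```

Even with viscosity several orders of magnitude higher, Ra remains far above critical for kilometre-scale chambers, which is why convection models of the kind studied in the thesis are of interest.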
The selection and evaluation of grey-level thresholds applied to digital images
- Authors: Brink, Anton David
- Date: 1988
- Subjects: Image processing -- Digital techniques
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5443 , http://hdl.handle.net/10962/d1001996
- Description: Many applications of image processing require the initial segmentation of the image by means of grey-level thresholding. In this thesis, the problems of automatic threshold selection and evaluation are addressed in order to find a universally applicable thresholding method. Three previously proposed threshold selection techniques are investigated, and two new methods are introduced. The results of applying these methods to several different images are evaluated using two threshold evaluation techniques, one subjective and one quantitative. It is found that no threshold selection technique is universally acceptable, as different methods work best with different images and applications.
- Full Text:
- Date Issued: 1988
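As an illustration of automatic grey-level threshold selection, the sketch below implements Otsu's between-class-variance criterion, a classic technique of the kind the thesis evaluates (the specific methods the thesis investigates are not named in the abstract):

```python
import numpy as np

def otsu_threshold(image):
    """Return the grey level maximising between-class variance (Otsu's method)."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                 # grey-level probabilities
    omega = np.cumsum(p)                  # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))    # cumulative mean
    mu_t = mu[-1]                         # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)      # ignore empty-class levels
    return int(np.argmax(sigma_b))

# Bimodal test image: dark background plus a bright square.
rng = np.random.default_rng(0)
img = rng.normal(60, 10, (64, 64)).clip(0, 255).astype(np.uint8)
img[16:48, 16:48] = rng.normal(180, 10, (32, 32)).clip(0, 255).astype(np.uint8)
t = otsu_threshold(img)
print("selected threshold:", t)  # falls between the two modes (60 and 180)
```

On a clearly bimodal image like this one the selected threshold cleanly separates the two regions; the thesis's finding is precisely that no single criterion behaves this well across all images.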
Physics and applications of scintillation detectors
- Authors: Brooks, Francis Dey
- Date: 1996
- Subjects: Nuclear physics , Scintillation counters , Nuclear counters , Detectors
- Language: English
- Type: Thesis , Doctoral , DSc
- Identifier: vital:5482 , http://hdl.handle.net/10962/d1005268
- Description: The papers submitted in this volume present contributions and reviews on the physics of the scintillation process together with contributions to the development of scintillation detection techniques and the use of these techniques in nuclear physics research and in the applications of nuclear methods to other fields.
- Full Text:
- Date Issued: 1996
Analysing emergent time within an isolated Universe through the application of interactions in the conditional probability approach
- Authors: Bryan, Kate Louise Halse
- Date: 2020
- Subjects: Space and time , Quantum gravity , Quantum theory , Relativity (Physics)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/146676 , vital:38547
- Description: Time remains a frequently discussed issue in physics and philosophy. One interpretation of growing popularity is the ‘timeless’ view which states that our experience of time is only an illusion. The isolated Universe model, provided by the Wheeler-DeWitt equation, supports this interpretation by describing time using clocks in the conditional probability interpretation (CPI). However, the CPI customarily dismisses interaction effects as negligible, creating a blind spot that overlooks their potential influence. Accounting for interactions opens up a new avenue of analysis and a potential challenge to the interpretation of time. To aid our assessment of the impact interaction effects have on the CPI, we present rudimentary definitions of time and its associated concepts. Defined in a minimalist manner, time is argued to require a postulate of causality as a means of accounting for temporal ordering in physical theories. Several of these theories are discussed here in terms of their respective approaches to time and, despite their differences, there are indications that the accounts of time are unified in a more fundamental theory. An analytical study of the CPI, incorporating two different clock choices, and a qualitative analysis both confirm that interactions have a necessary role within the CPI. The consequence of removing interactions is a maximised uncertainty in any measurement of the clock and a restriction to a two-state system, as indicated by the results of the toy models and the qualitative argument respectively. The philosophical implication is that we are not restricted to the timeless view, since including interactions as agents of causal interventions between systems provides an account of time as a real phenomenon. This result highlights the reliance on a postulate of causality, which forms a pressing problem in explaining our experience of time.
- Full Text:
- Date Issued: 2020
The EPR paradox: back from the future
- Authors: Bryan, Kate Louise Halse
- Date: 2016
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/2881 , vital:20338
- Description: The Einstein-Podolsky-Rosen (EPR) thought experiment produced a problem regarding the interpretation of quantum mechanics provided for entangled systems. Although the thought experiment was reformulated mathematically in Bell's Theorem, the conclusion regarding entanglement correlations is still debated today. In an attempt to provide an explanation of how entangled systems maintain their correlations, this thesis investigates the theory of post-state teleportation as a possible interpretation of how information moves between entangled systems without resorting to nonlocal action. Post-state teleportation describes a method of communicating to the past via a quantum information channel. The resulting picture of the EPR thought experiment relied on information propagating backward from a final boundary condition to ensure all correlations were maintained. Similarities were found between this resolution of the EPR paradox and the final state solution to the black hole information paradox and the closely related firewall problem. The latter refers to an apparent conflict between unitary evaporation of a black hole and the strong subadditivity condition. The use of observer complementarity allows this solution of the black hole problem to be shown to be the same as a seemingly different solution known as “ER=EPR”, where ‘ER’ refers to an Einstein-Rosen bridge or wormhole.
- Full Text:
- Date Issued: 2016
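The entanglement correlations at issue in Bell's Theorem can be made concrete with a short sketch: for the singlet state the quantum correlation between measurement angles a and b is E(a, b) = -cos(a - b), and the standard CHSH setting angles push |S| to 2√2, beyond the local-hidden-variable bound of 2 (a textbook illustration, not material from the thesis):

```python
import numpy as np

# Quantum correlation for the singlet state: E(a, b) = -cos(a - b).
# A local hidden-variable theory must satisfy the CHSH bound |S| <= 2;
# quantum mechanics reaches 2*sqrt(2) (Tsirelson's bound).
def E(a, b):
    return -np.cos(a - b)

a, ap = 0.0, np.pi / 2            # Alice's two measurement settings
b, bp = np.pi / 4, 3 * np.pi / 4  # Bob's two measurement settings

S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(f"S = {S:.3f}")  # |S| = 2*sqrt(2) ≈ 2.828 > 2
```

It is this excess over the classical bound that any account of how entangled systems "maintain their correlations", such as the post-state teleportation picture above, must reproduce.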
The photo-fluorescence properties of some organic materials
- Authors: Cameron, Antony John Wesley
- Date: 1959
- Subjects: Hydrocarbons -- Spectra -- Fluorescence , Organic compounds , Energy transfer
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:5514 , http://hdl.handle.net/10962/d1010041 , Hydrocarbons -- Spectra -- Fluorescence , Organic compounds , Energy transfer
- Description: In this thesis I have given an account of the experimental work carried out by me at Rhodes University from the beginning of 1954 to the end of 1955, and the analysis of the results which was completed during the following two years, 1956 and 1957. The dissertation is divided into two sections; Part 1 deals with the photo-fluorescence spectra of a large group of organic compounds, and Part 2 describes an investigation of the photo-fluorescence properties of, and energy transfer in, liquid organic solutions.
- Full Text:
- Date Issued: 1959
Observations of cosmic re-ionisation with the Hydrogen Epoch of Reionization Array: simulations of closure phase spectra
- Authors: Charles, Ntsikelelo
- Date: 2021-04
- Subjects: Epoch of reionization , Space interferometry , Astronomy -- Observations , Closure phase spectra
- Language: English
- Type: thesis , text , Masters , MSc
- Identifier: http://hdl.handle.net/10962/174470 , vital:42480
- Description: The 21 cm transition from neutral Hydrogen promises to be the best observational probe of the Epoch of Reionisation. It has driven the construction of the new generation of low-frequency radio interferometric arrays, including the Hydrogen Epoch of Reionization Array (HERA). The main difficulty in measuring the 21 cm signal is the presence of bright foregrounds that require very accurate interferometric calibration. Thyagarajan et al. (2018) proposed the use of closure phase quantities as a means to detect the 21 cm signal, which has the advantage of being independent (to first order) from calibration errors and, therefore, bypasses the need for accurate calibration. Closure phases are, however, affected by so-called direction-dependent effects, e.g. the fact that the dishes - or antennas - of an interferometric array are not identical to each other and, therefore, yield different antenna primary beam responses. In this thesis, we investigate the impact of direction-dependent effects on closure quantities and simulate the impact that primary antenna beams affected by mutual coupling have on the foreground closure phase and its power spectrum, i.e. the power spectrum of the bispectrum phase (Thyagarajan et al., 2020). Our simulations show that primary beams affected by mutual coupling lead to an overall leakage of foreground power into the so-called EoR window (power from smooth-spectrum foregrounds is otherwise confined to low k modes). We quantified this effect and found that the leakage is up to ~8 orders of magnitude higher than in the case of an ideal beam at k∥ > 0.5 h Mpc-1. We also found that the foreground leakage is worse when edge antennas are included, as their primary beams differ more from those of antennas at the centre of the array. The leakage magnitude is worse when bright foregrounds appear in the antenna sidelobes, as expected.
Our simulations provide a useful framework to interpret observations and assess which power spectrum region is expected to be most contaminated by foreground power leakage.
- Full Text:
- Date Issued: 2021-04
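The calibration-independence of closure phase that motivates this approach can be demonstrated in a few lines: antenna-based phase errors enter each baseline as e_i - e_j and cancel identically around a closed triangle (a toy sketch, not the thesis's simulation code):

```python
import numpy as np

rng = np.random.default_rng(42)

# True visibility phases on the three baselines of an antenna triangle:
# (1,2), (2,3), (3,1).
phi_true = rng.uniform(-np.pi, np.pi, 3)

# Antenna-based calibration phase errors e_i corrupt baseline (i,j) as
# phi_ij -> phi_ij + e_i - e_j.
e = rng.uniform(-np.pi, np.pi, 3)
phi_meas = phi_true + np.array([e[0] - e[1], e[1] - e[2], e[2] - e[0]])

def wrap(x):
    """Wrap a phase to (-pi, pi]."""
    return float(np.angle(np.exp(1j * x)))

# Closure phase phi_12 + phi_23 + phi_31: the antenna terms cancel, so the
# measured closure phase equals the true one despite arbitrary per-antenna errors.
print(wrap(phi_true.sum()), wrap(phi_meas.sum()))  # identical up to rounding
```

Direction-dependent effects such as the mutual-coupling beam differences studied in the thesis break this picture precisely because they are not antenna-based phase terms of the form e_i - e_j, and so do not cancel in the closure sum.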