Studying the brightest radio sources in the southern sky
- Authors: Sejake, Precious Katlego
- Date: 2022-04-06
- Subjects: Galaxies -- Formation , Galaxies -- Evolution , Active galaxies , Radio galaxies , Radio sources (Astronomy) , Southern sky (Astronomy)
- Language: English
- Type: Academic theses , Master's theses , text
- Identifier: http://hdl.handle.net/10962/455350 , vital:75423
- Description: Active Galactic Nuclei (AGN) are among the most remarkable and powerful extragalactic radio sources in the Universe. Studying AGN enables us to better understand the critical mechanisms that launch radio jets, and the link between the jets and the central engine. Radio jets are thought to impact their host galaxy by promoting or suppressing star formation. By studying AGN, we can better understand their formation, evolution, and environment. Host-galaxy cross-identification is a crucial step in building a multi-wavelength analysis of powerful radio sources (AGN). The GaLactic and Extragalactic All-sky Murchison Widefield Array (GLEAM) 4Jy (G4Jy) Sample comprises 1,863 powerful radio sources in the southern sky. Of these, 140 sources were followed up with Open Time observations on MeerKAT. Of these 140 sources, 126 had an ambiguous host galaxy and 13 had an identified host galaxy for which there were nevertheless some discrepancies in the literature. Host-galaxy identification for these sources has been limited by the poor resolution (25" to 45") of existing radio data. This study aims to assess the radio morphology of these 140 sources and to identify their host galaxies using the ~7" resolution MeerKAT images in conjunction with datasets at other wavelengths. The analysis is carried out by visually inspecting overlays of radio contours at 150 MHz, 200 MHz, 843/1400 MHz and 1300 MHz on the mid-infrared (3.4 μm) image (a minimal overlay sketch follows this record). The MeerKAT images reveal sources with various radio morphologies. While most sources show the typical symmetric double-lobed morphology, 10 have head-tail morphology, 14 are wide-angle tail (WAT) sources, and 5 have X-, S- or Z-shaped morphology. Overall, we find host galaxies for 70% of the sources in the sample, with the remainder comprising sources with an ambiguous host galaxy (20.7%) and sources with a faint mid-infrared host galaxy (9.3%). These results highlight the importance of angular resolution and sensitivity for morphological classification and host-galaxy cross-identification. , Thesis (MSc) -- Faculty of Science, Physics and Electronics, 2022
- Full Text:
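The following is a minimal sketch, not taken from the thesis, of how such an overlay might be produced with astropy and matplotlib: radio contours from a MeerKAT image drawn over a WISE 3.4 μm cutout. The file names, the RMS value and the contour levels are all assumptions for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits
from astropy.wcs import WCS

# Hypothetical inputs: a WISE 3.4-micron cutout and a MeerKAT 1300 MHz cutout
ir_hdu = fits.open("wise_w1_cutout.fits")[0]
radio_hdu = fits.open("meerkat_1300MHz_cutout.fits")[0]

ir_wcs = WCS(ir_hdu.header)
radio_wcs = WCS(radio_hdu.header).celestial      # drop any frequency/Stokes axes

# Contour levels as multiples of an assumed local RMS noise
rms = 5e-5                                       # Jy/beam, placeholder value
levels = rms * np.array([3, 6, 12, 24, 48])

fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(projection=ir_wcs)
ax.imshow(ir_hdu.data, origin="lower", cmap="gray_r")        # mid-infrared greyscale
ax.contour(radio_hdu.data.squeeze(), levels=levels, colors="red",
           transform=ax.get_transform(radio_wcs))            # reproject contours onto the IR frame
ax.set_xlabel("Right Ascension (J2000)")
ax.set_ylabel("Declination (J2000)")
plt.savefig("overlay.png", dpi=150)
```

In practice one such overlay per frequency (150 MHz, 200 MHz, 843/1400 MHz, 1300 MHz) would be inspected side by side when deciding on the host galaxy.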
A Systematic Visualisation Framework for Radio-Imaging Pipelines
- Authors: Andati, Lexy Acherwa Livoyi
- Date: 2021-04
- Subjects: Radio interferometers , Radio astronomy -- Data processing , Radio astronomy -- Data processing -- Software , Jupyter
- Language: English
- Type: thesis , text , Masters , MSc
- Identifier: http://hdl.handle.net/10962/177338 , vital:42812
- Description: Pipelines for calibration and imaging of radio interferometric data produce many intermediate images and other data products (gain tables, etc.). These often contain valuable information about the quality of the data and the calibration, and can provide the user with useful insights, if only visualised in the right way. However, the deluge of data we are experiencing with modern instruments means that most of these products are never looked at, and only the final images and data products are examined. Furthermore, the variety of imaging algorithms currently available, and the range of their options, means that very different results can be produced from the same set of original data. Proper understanding of this requires a systematic comparison that can be carried out both by individual users locally and by the community globally. We address both problems by developing a systematic visualisation framework based around Jupyter notebooks, enriched with interactive plots built on the Bokeh and Datashader visualisation libraries (a minimal example of such a plot follows this record). , Thesis (MSc) -- Faculty of Science, Department of Physics and Electronics, 2021
- Full Text:
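As a hedged illustration of the kind of notebook-embedded interactive plot such a framework might produce, the sketch below plots synthetic gain amplitudes against time with Bokeh. The antenna names, data values and styling are assumptions; the thesis framework's actual API is not reproduced here.

```python
import numpy as np
from bokeh.io import output_notebook
from bokeh.models import HoverTool
from bokeh.plotting import figure, show

output_notebook()                                 # render inline in a Jupyter notebook

# Synthetic gain amplitudes for two (hypothetical) antennas
time = np.linspace(0.0, 3600.0, 500)              # seconds since observation start
gains = {
    "m000": 1.00 + 0.05 * np.sin(2 * np.pi * time / 900.0) + 0.01 * np.random.randn(time.size),
    "m001": 0.95 + 0.04 * np.cos(2 * np.pi * time / 1200.0) + 0.01 * np.random.randn(time.size),
}

p = figure(width=700, height=350, title="Gain amplitude vs time (synthetic)",
           x_axis_label="Time [s]", y_axis_label="|g|")
for colour, (ant, amp) in zip(["navy", "firebrick"], gains.items()):
    p.line(time, amp, legend_label=ant, color=colour, alpha=0.8)
p.add_tools(HoverTool(tooltips=[("time", "$x{0.0} s"), ("|g|", "$y{0.000}")]))
p.legend.click_policy = "hide"                    # click a legend entry to hide that antenna
show(p)
```

Datashader would take over where a plot like this has too many points to render directly, rasterising the points in Python and handing Bokeh an image instead.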
Modelling and investigating primary beam effects of reflector antenna arrays
- Authors: Iheanetu, Kelachukwu
- Date: 2020
- Subjects: Antennas, Reflector , Radio telescopes , Astronomical instruments -- Calibration , Holography , Polynomials , Very large array telescopes -- South Africa , Astronomy -- Data processing , Primary beam effects , Jacobi-Bessel pattern , Cassbeam software , MeerKAT telescope
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/147425 , vital:38635
- Description: Signals received by a radio telescope are always affected by propagation and instrumental effects. These effects need to be modelled and accounted for during the process of calibration. The primary beam (PB) of the antenna is one major instrumental effect that needs to be accounted for during calibration. Producing accurate models of the radio antenna PB is crucial, and many approaches (such as electromagnetic and optical simulations) have been used to model it. The cos³ function, Jacobi-Bessel patterns, characteristic basis function patterns (CBFPs) and the Cassbeam software (which uses optical ray-tracing with antenna parameters) have also been used to model it. These models capture the basic PB effects. Real-life PB patterns differ from these models due to various subtle effects such as mechanical deformation and effects introduced into the PB by standing waves that exist in reflector antennas. The actual patterns can be measured via a process called astro-holography (or holography), but this is subject to noise, radio frequency interference, and other measurement errors. In our approach, we use principal component analysis (PCA) and Zernike polynomials to model the PBs of the Very Large Array (VLA) and MeerKAT telescopes from their measured holography data. The models have reconstruction errors of less than 5% at a compression factor of approximately 98% for both arrays (a toy PCA-compression sketch follows this record). We also present steps that can be used to generate accurate beam models for any telescope (independent of its design) from measured holography data. Analysis of the measured VLA PBs revealed that the beam sizes (and centre offset positions) vary with frequency as a fast oscillation superimposed on a slow trend. We term this spectral behaviour the ripple, or characteristic, effect. Most existing PB models used in calibrating VLA data do not incorporate these direction-dependent effects (DDEs). We investigate, via simulations, the impact of using PB models that ignore these DDEs in continuum calibration and imaging. Our experiments show that, although these effects translate into less than 10% errors in source flux recovery, they do lead to a 30% reduction in dynamic range. Preparing data for H I and radio halo (faint emission) science requires foreground subtraction of bright (continuum) sources. We investigate the impact of using beam models that ignore these ripple effects during continuum subtraction, and find that PB models which completely ignore the ripple effects can translate into errors of more than 30% in the recovered H I spectral properties. This implies that science inferences drawn from such results for H I studies could have errors of the same magnitude.
- Full Text:
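The snippet below is a toy sketch, under stated assumptions, of the compression-and-reconstruction idea only: synthetic per-frequency beam images are compressed with scikit-learn's PCA and the relative reconstruction error is reported. It does not reproduce the thesis' actual pipeline, its Zernike modelling, or real holography data.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical holography-style data: n_freq beam images of size npix x npix
n_freq, npix = 64, 65
x = np.linspace(-1.0, 1.0, npix)
r = np.sqrt(x[None, :] ** 2 + x[:, None] ** 2)
freqs = np.linspace(1.0, 1.5, n_freq)                              # arbitrary frequency scaling
beams = np.stack([np.exp(-(r * f) ** 2 / 0.18) for f in freqs])    # smooth Gaussian-like "beams"
beams += 1e-3 * np.random.randn(*beams.shape)                      # measurement noise

# Flatten each beam image into a row vector and fit a low-rank PCA model
X = beams.reshape(n_freq, -1)
pca = PCA(n_components=3)                   # keep a handful of components => high compression
X_rec = pca.inverse_transform(pca.fit_transform(X))

# Per-frequency relative reconstruction error (cf. the <5% figure quoted above)
rel_err = np.linalg.norm(X - X_rec, axis=1) / np.linalg.norm(X, axis=1)
stored = pca.n_components_ * X.shape[1] + n_freq * pca.n_components_   # basis images + coefficients
compression = 1.0 - stored / X.size
print(f"max relative error: {rel_err.max():.3%}, compression: {compression:.1%}")
```

A real application would replace the synthetic Gaussians with measured holography cubes and add the Zernike (or other basis) fit on top of the PCA step.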
Automation of source-artefact classification
- Authors: Sebokolodi, Makhuduga Lerato Lydia
- Date: 2017
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/4920 , vital:20743
- Description: The high sensitivities of modern radio telescopes will enable the detection of very faint astrophysical sources in the distant Universe. However, these high sensitivities also imply that calibration artefacts, which were below the noise for less sensitive instruments, will emerge above the noise and may limit the dynamic-range capabilities of these instruments. Detecting faint emission will require detection thresholds close to the noise, and this may cause some of the artefacts to be incorrectly detected as real emission. The current approach is to manually remove the artefacts, or to set high detection thresholds in order to avoid them. The former will not be possible given the large quantities of data that these instruments will produce, and the latter results in very shallow and incomplete catalogues. This work uses the negative-detection method developed by Serra et al. (2012) to distinguish artefacts from astrophysical emission in radio images (a toy illustration of the idea follows this record). We also present a technique that automates the identification of sources subject to severe direction-dependent (DD) effects, allowing them to be flagged for DD calibration. The negative-detection approach is shown to provide high-reliability and high-completeness catalogues for simulated data, as well as for a JVLA observation of the 3C147 field (Mitra et al., 2015). We also show that our technique correctly identifies sources that require DD calibration for datasets from the KAT-7, LOFAR, JVLA and GMRT instruments.
- Full Text:
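The toy example below, written for this listing rather than taken from the thesis, illustrates the negative-detection idea in its simplest form: calibration artefacts around bright sources tend to be roughly symmetric in sign, so the number of detections in the negated image estimates how many positive detections are spurious. A real source finder is replaced here by simple pixel thresholding, and all amplitudes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.normal(0.0, 1.0, (2048, 2048))            # unit-RMS noise map
# Inject a few "real" positive sources and sign-symmetric "artefacts"
for _ in range(30):
    y, x = rng.integers(0, 2048, size=2)
    image[y, x] += 10.0                                # real emission (positive only)
for _ in range(200):
    y, x = rng.integers(0, 2048, size=2)
    image[y, x] += rng.choice([-4.0, 4.0])             # artefacts (both signs)

def count_detections(img, threshold_sigma):
    """Count pixels above threshold; a stand-in for a real source finder."""
    return int(np.sum(img > threshold_sigma * img.std()))

for thresh in (3.0, 4.0, 5.0, 6.0):
    n_pos = count_detections(image, thresh)            # candidate sources
    n_neg = count_detections(-image, thresh)           # artefact/noise estimate
    reliability = 1.0 - n_neg / max(n_pos, 1)
    print(f"{thresh:.0f} sigma: {n_pos} positive, {n_neg} negative, "
          f"estimated reliability {reliability:.2f}")
```

As the threshold rises, the negative counts fall relative to the positive counts, so the estimated reliability improves at the cost of completeness; near the noise the two counts are comparable and most detections are spurious.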