Accelerated implementations of the RIME for DDE calibration and source modelling
- Authors: Van Staden, Joshua
- Date: 2021
- Subjects: Radio astronomy , Radio interferometers , Radio interferometers -- Calibration , Radio astronomy -- Data processing , Radio interferometers -- Data processing , Radio interferometers -- Calibration -- Data processing
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/172422 , vital:42199
- Description: Second- and third-generation calibration methods filter out subtle effects in interferometer data, and therefore yield significantly higher dynamic ranges. The basis of these calibration techniques relies on building a model of the sky and corrupting it with models of the effects acting on the sources. The sensitivities of modern instruments call for more elaborate models to capture the level of detail required to achieve accurate calibration. This thesis implements two types of models for use in second- and third-generation calibration. The first model implemented is shapelets, which can be used to model radio source morphologies directly in uv space. The second model implemented is Zernike polynomials, which can be used to represent the primary beam of the antenna. We implement these models in the CODEX-AFRICANUS package and provide a set of unit tests for each model. Additionally, we compare our implementations against other methods of representing these objects and instrumental effects, namely the NIFTY-GRIDDER against shapelets and a FITS-interpolation method against the Zernike polynomials. We find that, to achieve sufficient accuracy, our implementation of the shapelet model has a higher runtime than that of the NIFTY-GRIDDER. However, the NIFTY-GRIDDER cannot simulate a component-based sky model while the shapelet model can. Additionally, the shapelet model is fully parametric, which allows for integration into a parameterised solver. We find that, while having a smaller memory footprint, our Zernike model has a greater computational complexity than the FITS-interpolated method. However, we find that the Zernike implementation achieves floating-point accuracy in its modelling, while the FITS-interpolated model loses some accuracy through the discretisation of the beam. (An illustrative Zernike-evaluation sketch follows this record.)
- Full Text:
- Date Issued: 2021
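The Zernike beam model described in this abstract lends itself to a compact illustration. Below is a minimal numpy sketch of evaluating a Zernike polynomial Z_n^m over a unit-disc grid, the kind of basis term used to represent a primary beam; it is not the CODEX-AFRICANUS implementation, and the grid size and mode choice are illustrative.

```python
# Minimal sketch (not the CODEX-AFRICANUS implementation): evaluating a
# Zernike polynomial Z_n^m on a unit-disc grid, as used to represent an
# antenna primary beam.
from math import factorial
import numpy as np

def zernike_radial(n, m, rho):
    """Radial polynomial R_n^m(rho) for m >= 0 and n - m even."""
    out = np.zeros_like(rho)
    for k in range((n - m) // 2 + 1):
        coef = ((-1) ** k * factorial(n - k)
                / (factorial(k)
                   * factorial((n + m) // 2 - k)
                   * factorial((n - m) // 2 - k)))
        out += coef * rho ** (n - 2 * k)
    return out

def zernike(n, m, rho, theta):
    """Zernike polynomial Z_n^m; cosine terms for m >= 0, sine for m < 0."""
    if n < abs(m) or (n - abs(m)) % 2 != 0:
        raise ValueError("require n >= |m| and n - |m| even")
    r = zernike_radial(n, abs(m), rho)
    ang = np.cos(m * theta) if m >= 0 else np.sin(-m * theta)
    return np.where(rho <= 1.0, r * ang, 0.0)  # zero outside the unit disc

# Evaluate a low-order mode on a small grid, e.g. as one beam basis term.
l, m_ = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
rho, theta = np.hypot(l, m_), np.arctan2(m_, l)
beam_term = zernike(4, 0, rho, theta)  # spherical-aberration-like mode
```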
Design patterns and software techniques for large-scale, open and reproducible data reduction
- Authors: Molenaar, Gijs Jan
- Date: 2021
- Subjects: Radio astronomy -- Data processing , Radio astronomy -- Data processing -- Software , Radio astronomy -- South Africa , ASTRODECONV2019 dataset , Radio telescopes -- South Africa , KERN (Computer software)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/172169 , vital:42172 , 10.21504/10962/172169
- Description: The preparation for the construction of the Square Kilometre Array, and the introduction of its operational precursors such as LOFAR and MeerKAT, mark the beginning of an exciting era for astronomy. Impressive new data containing valuable science just waiting for discovery is already being generated, and these instruments will produce far more data than has ever been collected before. However, with every new instrument, data rates grow to unprecedented quantities, requiring novel data-processing tools. In addition, creating science-grade data from the raw data still requires significant expert knowledge. The software used is often developed by a scientist who lacks formal training in software development, with the result that the software does not progress beyond a prototype stage in quality. In the first chapter, we explore various organisational and technical approaches to address these issues by providing a historical overview of the development of radio astronomy pipelines since the inception of the field in the 1940s, and investigate the steps required to create a radio image. We used the lessons learned to identify patterns in the challenges experienced, and in the solutions created to address them over the years. The second chapter describes the mathematical foundations that are essential for radio imaging. In the third chapter, we discuss the production of the KERN Linux distribution, a set of software packages containing most radio astronomy software currently in use. Considerable effort was put into making sure that the contained software installs correctly, with all packages coexisting on the same system. Where required and possible, bugs and portability issues were fixed and reported to the upstream maintainers. The KERN project also has a website and issue tracker, where users can report bugs and maintainers can coordinate the packaging effort and new releases. The software packages can be used inside Docker and Singularity containers, enabling their installation on a wide variety of platforms. In the fourth and fifth chapters, we discuss methods and frameworks for combining the available data reduction tools into recomposable pipelines and introduce the Kliko specification and software. This framework was created to enable end-user astronomers to chain and containerise operations of software in KERN packages. Next, we discuss the Common Workflow Language (CommonWL), a similar but more advanced and mature pipeline framework invented by bio-informatics scientists. CommonWL is already supported by a wide range of tools, among them schedulers, visualisers and editors. Consequently, when a pipeline is made with CommonWL, it can be deployed and manipulated with a wide range of tools. In the final chapter, we attempt something unconventional: applying a generative adversarial network based on deep learning techniques to the task of sky brightness reconstruction. Since deep learning methods often require a large number of training samples, we constructed a CommonWL simulation pipeline for creating dirty images and corresponding sky models. This simulated dataset has been made publicly available as the ASTRODECONV2019 dataset. We show that this method is useful for performing the restoration and matches the performance of a single clean cycle.
In addition, we incorporated domain knowledge by adding the point spread function to the network and by utilising a custom loss function during training. Although it was not possible to improve on the cleaning performance of commonly used existing tools, the computational time performance of the approach looks very promising. We suggest that a smaller scope should be the starting point for further studies, and that optimising the training of the neural network could produce the desired results. (An illustrative dirty-image simulation sketch follows this record.)
- Full Text:
- Date Issued: 2021
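As a toy illustration of the core relation behind such a training-set simulation (a dirty image is, to first order, the true sky convolved with the point spread function), the following sketch generates a (dirty, sky) pair. It is not the ASTRODECONV2019 pipeline; the Gaussian PSF is a stand-in for a real, sidelobed one, and all shapes are illustrative.

```python
# Hedged sketch: dirty image = sky model convolved with the PSF, the pair
# a deconvolution network would train on. Not the thesis pipeline.
import numpy as np

def dirty_image(sky, psf):
    """Convolve a sky model with a point spread function via FFTs."""
    sky_ft = np.fft.fft2(sky)
    psf_ft = np.fft.fft2(np.fft.ifftshift(psf))  # centre PSF at pixel (0, 0)
    return np.real(np.fft.ifft2(sky_ft * psf_ft))

rng = np.random.default_rng(42)
sky = np.zeros((256, 256))
ys, xs = rng.integers(0, 256, 20), rng.integers(0, 256, 20)
sky[ys, xs] = rng.uniform(0.1, 1.0, 20)       # random point sources

# Toy PSF: a narrow Gaussian main lobe (a real PSF also has sidelobes).
y, x = np.mgrid[-128:128, -128:128]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
dirty = dirty_image(sky, psf)                  # (dirty, sky) training pair
```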
Parametrised gains for direction-dependent calibration
- Authors: Russeeaeon, Cyndie
- Date: 2021
- Subjects: Radio astronomy , Radio interferometers , Radio interferometers -- Calibration
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/172400 , vital:42196
- Description: Calibration in radio interferometry describes the process of estimating and correcting for instrumental errors in the data. Direction-Dependent (DD) calibration entails correcting for corruptions which vary across the sky. For small field-of-view observations, DD corruptions can be ignored, but for wide-field observations it is crucial to account for them. Traditional maximum-likelihood calibration is not necessarily efficient in low signal-to-noise ratio (SNR) scenarios, and this can lead to overfitting. This can bias continuum subtraction and hence restrict spectral line studies. Since DD effects are expected to vary smoothly across the sky, the gains can be parametrised as a smooth function of the sky coordinates. Hence, we implement a solver where the atmosphere is modelled using a time-variant 2-dimensional phase screen with an arbitrary known frequency dependence. We assume arbitrary linear basis functions for the gains over the phase screen. The implemented solver is optimised using the diagonal approximation of the Hessian, as shown in previous studies. We present a few simulations to illustrate the performance of the solver. (An illustrative phase-screen parametrisation sketch follows this record.)
- Full Text:
- Date Issued: 2021
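To make the parametrisation concrete, here is a minimal sketch (not the thesis solver) of direction-dependent, phase-only gains built from a linear basis over a 2-D phase screen. The ionospheric-like frequency scaling (nu/nu0)**alpha stands in for the "arbitrary known frequency dependence", and the basis functions are illustrative assumptions.

```python
# Minimal sketch of parametrised phase-screen gains: g depends on a few
# basis coefficients rather than on one free gain per direction.
import numpy as np

def basis(l, m):
    """Toy linear basis over sky coordinates: constant, slopes, cross term."""
    return np.stack([np.ones_like(l), l, m, l * m])          # (K, ndir)

def screen_gains(coeffs, l, m, freqs, alpha=-1.0):
    """g(nu, dir) = exp(i * f(nu) * sum_k c_k B_k(l, m)).

    f(nu) = (nu / nu0)**alpha is assumed here purely for illustration;
    the thesis allows an arbitrary known frequency dependence.
    """
    phase = np.einsum("k,kd->d", coeffs, basis(l, m))        # (ndir,)
    scale = (freqs / freqs[0]) ** alpha                      # (nchan,)
    return np.exp(1j * scale[:, None] * phase[None, :])      # (nchan, ndir)

l = np.array([0.00, 0.01, -0.02])     # direction cosines of three sources
m = np.array([0.00, -0.01, 0.015])
freqs = np.linspace(1.0e9, 1.4e9, 4)  # Hz
g = screen_gains(np.array([0.1, 2.0, -1.5, 0.3]), l, m, freqs)
```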
A 150 MHz all sky survey with the Precision Array to Probe the Epoch of Reionization
- Authors: Chege, James Kariuki
- Date: 2020
- Subjects: Epoch of reionization -- Research , Astronomy -- Observations , Radio interferometers
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/117733 , vital:34556
- Description: The Precision Array to Probe the Epoch of Reionization (PAPER) was built to measure the redshifted 21 cm line of hydrogen from cosmic reionization. Such low frequency observations promise to be the best means of understanding the cosmic dawn, when the first galaxies in the universe formed, and the Epoch of Reionization, when the intergalactic medium changed from neutral to ionized. The major challenge to these observations is the presence of astrophysical foregrounds that are much brighter than the cosmological signal. Here, I present an all-sky survey at 150 MHz obtained from the analysis of 300 hours of PAPER observations. Particular focus is given to the calibration and imaging techniques that need to deal with the wide field of view of a non-tracking instrument. The survey covers ~7000 square degrees of the southern sky. From a sky area of 4400 square degrees out of the total survey area, I extract a catalogue of sources brighter than 4 Jy whose accuracy was tested against the published GLEAM catalogue, yielding a fractional difference rms better than 20%. The catalogue provides an accurate all-sky model of the extragalactic foreground to be used for the calibration of future Epoch of Reionization observations, and to be subtracted from the PAPER observations themselves in order to mitigate the foreground contamination. (An illustrative catalogue-comparison sketch follows this record.)
- Full Text:
- Date Issued: 2020
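The quoted 20% figure is the rms of the fractional flux-density difference over matched sources. A minimal sketch of that check follows, with placeholder flux densities rather than the actual PAPER/GLEAM matches.

```python
# Hedged sketch of a catalogue cross-check: rms of the fractional
# flux-density difference against a reference catalogue such as GLEAM.
import numpy as np

def fractional_difference_rms(s_survey, s_reference):
    """rms of (S_survey - S_ref) / S_ref over matched sources."""
    frac = (s_survey - s_reference) / s_reference
    return np.sqrt(np.mean(frac ** 2))

s_paper = np.array([5.2, 8.9, 12.4, 4.4, 21.0])     # Jy, illustrative
s_gleam = np.array([5.0, 9.5, 11.8, 4.1, 20.1])     # Jy, illustrative
print(fractional_difference_rms(s_paper, s_gleam))  # well below 0.2 here
```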
A Bayesian approach to tilted-ring modelling of galaxies
- Authors: Maina, Eric Kamau
- Date: 2020
- Subjects: Bayesian statistical decision theory , Galaxies , Radio astronomy , TiRiFiC (Tilted Ring Fitting Code) , Neutral hydrogen , Spectroscopic data cubes , Galaxy parametrisation
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/145783 , vital:38466
- Description: The orbits of neutral hydrogen (H I) gas found in most disk galaxies are circular and also exhibit long-lived warps at large radii, where the restoring gravitational forces of the inner disk become weak (Spekkens and Giovanelli 2006). These warps make the tilted-ring model an ideal choice for galaxy parametrisation. Analysis software utilizing the tilted-ring model can be grouped into two- and three-dimensional approaches. Józsa et al. (2007b) demonstrated that three-dimensional software is better suited for galaxy parametrisation, because beam smearing merely increases the uncertainty of its parameters, without the notorious systematic effects observed for two-dimensional fitting techniques. TiRiFiC, the Tilted Ring Fitting Code (Józsa et al. 2007b), is a software package that constructs parameterised models of high-resolution data cubes of rotating galaxies. It uses the tilted-ring model, and with that a combination of parameters such as surface brightness, position angle, rotation velocity and inclination, to describe galaxies. TiRiFiC works by directly fitting tilted-ring models to spectroscopic data cubes and hence is not affected by beam smearing or line-of-sight effects, e.g. strong warps. Because of that, the method is indispensable as an analysis method in future H I surveys. In the current implementation, though, there are several drawbacks. The implemented optimisers search for local solutions in parameter space only, do not quantify correlations between parameters, and cannot find errors of single parameters. In theory, these drawbacks can be overcome by using Bayesian statistics, as implemented in MultiNest (Feroz et al. 2008), which allows sampling a posterior distribution irrespective of its multimodal nature, yielding parameter samples that correspond to the maximum of the posterior distribution. These parameter samples can also be used to quantify correlations and find errors of single parameters. Since this method employs Bayesian statistics, it also allows the user to leverage any prior information they may have on parameter values. (An illustrative tilted-ring likelihood sketch follows this record.)
- Full Text:
- Date Issued: 2020
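As a minimal illustration of the kind of likelihood a nested sampler such as MultiNest would explore for a single tilted ring, the sketch below uses the standard ring line-of-sight velocity v_los = v_sys + v_rot sin(i) cos(theta). It is a toy, not TiRiFiC; the synthetic data and Gaussian noise model are assumptions.

```python
# Toy single-ring likelihood over (vsys, vrot, inclination), of the kind
# a nested sampler explores. Not the thesis implementation.
import numpy as np

def v_los(theta_ring, vsys, vrot, inc):
    """v_sys + v_rot * sin(i) * cos(theta) along one ring (angles in rad)."""
    return vsys + vrot * np.sin(inc) * np.cos(theta_ring)

def log_likelihood(params, theta_ring, v_obs, sigma):
    vsys, vrot, inc = params
    resid = v_obs - v_los(theta_ring, vsys, vrot, inc)
    return -0.5 * np.sum((resid / sigma) ** 2 + np.log(2 * np.pi * sigma**2))

# Synthetic ring: azimuthal samples with noise, then evaluate one model.
rng = np.random.default_rng(1)
theta = np.linspace(0, 2 * np.pi, 36, endpoint=False)
truth = (1000.0, 180.0, np.radians(60))          # vsys, vrot [km/s], inc
v_obs = v_los(theta, *truth) + rng.normal(0, 5.0, theta.size)
print(log_likelihood(truth, theta, v_obs, 5.0))
```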
Modelling and investigating primary beam effects of reflector antenna arrays
- Authors: Iheanetu, Kelachukwu
- Date: 2020
- Subjects: Antennas, Reflector , Radio telescopes , Astronomical instruments -- Calibration , Holography , Polynomials , Very large array telescopes -- South Africa , Astronomy -- Data processing , Primary beam effects , Jacobi-Bessel pattern , Cassbeam software , MeerKAT telescope
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/147425 , vital:38635
- Description: Signals received by a radio telescope are always affected by propagation and instrumental effects. These effects need to be modelled and accounted for during the process of calibration. The primary beam (PB) of the antenna is one major instrumental effect that needs to be accounted for during calibration. Producing accurate models of the radio antenna PB is crucial, and many approaches (such as electromagnetic and optical simulations) have been used to model it. The cos³ function, Jacobi-Bessel pattern, characteristic basis function patterns (CBFP) and the Cassbeam software (which uses optical ray-tracing with antenna parameters) have also been used to model it. These models capture the basic PB effects. Real-life PB patterns differ from these models due to various subtle effects, such as mechanical deformation and effects introduced into the PB by standing waves that exist in reflector antennas. The actual patterns can be measured via a process called astro-holography (or holography), but this is subject to noise, radio frequency interference and other measurement errors. In our approach, we use principal component analysis and Zernike polynomials to model the PBs of the Very Large Array (VLA) and MeerKAT telescopes from their holography-measured data. The models have reconstruction errors of less than 5% at a compression factor of approximately 98% for both arrays. We also present steps that can be used to generate accurate beam models for any telescope (independent of its design) based on holography-measured data. Analysis of the VLA measured PBs revealed that the beam sizes (and centre offset positions) show a fast oscillating trend, superimposed on a slow trend, with frequency. We termed this spectral behaviour ripple or characteristic effects. Most existing PB models that are used in calibrating VLA data do not incorporate these direction-dependent effects (DDEs). We investigate the impact of using PB models that ignore this DDE in continuum calibration and imaging via simulations. Our experiments show that, although these effects translate into less than 10% errors in source flux recovery, they do lead to a 30% reduction in the dynamic range. Preparing data for H I and radio halo (faint emission) science analysis requires carrying out foreground subtraction of bright (continuum) sources. We investigate the impact of using beam models that ignore these ripple effects during continuum subtraction. Our results show that using PB models which completely ignore the ripple effects in continuum subtraction could translate into errors of more than 30% in the recovered H I spectral properties. This implies that science inferences drawn from the results of H I studies could have errors of the same magnitude. (An illustrative PCA-compression sketch follows this record.)
- Full Text:
- Date Issued: 2020
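The PCA compression described above can be illustrated with an SVD over a stack of beam images. The sketch below uses random data as a stand-in for holography measurements; real beams are smooth and compress far better, so the error it prints is not representative of the <5% figure quoted.

```python
# Hedged sketch: compress a beam cube with a few principal components and
# measure the reconstruction error. Data and shapes are illustrative.
import numpy as np

def pca_compress(beams, n_comp):
    """beams: (n_freq, ny, nx) cube; reconstruct from n_comp components."""
    n_freq = beams.shape[0]
    flat = beams.reshape(n_freq, -1)
    mean = flat.mean(axis=0)
    u, s, vt = np.linalg.svd(flat - mean, full_matrices=False)
    recon = u[:, :n_comp] @ np.diag(s[:n_comp]) @ vt[:n_comp] + mean
    return recon.reshape(beams.shape)

rng = np.random.default_rng(0)
beams = rng.normal(size=(64, 32, 32))      # stand-in for measured beams
recon = pca_compress(beams, n_comp=2)
err = np.linalg.norm(recon - beams) / np.linalg.norm(beams)
print(f"relative reconstruction error: {err:.3f}")
```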
Observations of diffuse radio emission in the Abell 773 galaxy cluster
- Authors: Sichone, Gift L
- Date: 2020
- Subjects: Galaxies -- Clusters -- Observations , Radio astronomy -- Observations , Astrophysics -- South Africa , Westerbork Radio Telescope , A773 galaxy cluster , Astronomy -- Observations , Radio sources (Astronomy)
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/144945 , vital:38394
- Description: In this thesis, we present 18 and 21 cm observations of the A773 galaxy cluster observed with the Westerbork radio telescope. The final 18 and 21 cm images achieve noise levels of 0.018 mJy beam⁻¹ and 0.025 mJy beam⁻¹ respectively. After subtracting the compact sources, the low resolution images show evidence of a radio halo at 18 cm, whereas its presence is more uncertain in the low resolution 21 cm images due to the presence of residual sidelobes from bright sources. In the joint analysis of both frequencies, the radio halo has a 5.37 arcmin² area with a 6.76 mJy flux density. Further observations and analysis are, however, required to fully characterize its properties.
- Full Text:
- Date Issued: 2020
Observations of diffuse radio emission in the Perseus Galaxy Cluster
- Authors: Mungwariri, Clemence
- Date: 2020
- Subjects: Galaxies -- Clusters , Radio sources (Astronomy) , Radio interferometers , Perseus Galaxy Cluster , Diffuse radio emission
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/143325 , vital:38233
- Description: In this thesis we analysed Westerbork observations of the Perseus Galaxy Cluster at 1380 MHz. Observations consist of two different pointings, covering a total of ∼0.5 square degrees: one including the known mini halo and the source 3C 84, the other centred on the source 3C 83.1B. We obtained images with 83 μJy beam⁻¹ and 240 μJy beam⁻¹ noise rms for the two pointings respectively. We achieved a 60000:1 dynamic range in the image containing the bright 3C 84 source. We imaged the mini halo surrounding 3C 84 at high sensitivity, measuring its diameter to be ∼140 kpc and its power to be 4 × 10²⁴ W Hz⁻¹. Its morphology agrees quite well with that observed at 240 MHz (e.g. Gendron-Marsolais et al., 2017). We measured the flux density of 3C 84 to be 20.5 ± 0.4 Jy at the 2007 epoch, consistent with a factor of ∼2 increase since the 1960s. (An illustrative flux-to-power sketch follows this record.)
- Full Text:
- Date Issued: 2020
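The quoted radio power follows from the standard conversion P = 4π D² S_ν (K-correction neglected at low redshift). The sketch below reproduces its order of magnitude; the ~75 Mpc Perseus distance and ~6 Jy halo flux density are assumptions chosen for illustration.

```python
# Hedged sketch of flux density -> monochromatic radio power.
import numpy as np

MPC_IN_M = 3.0857e22     # metres per megaparsec
JY = 1e-26               # W m^-2 Hz^-1 per jansky

def radio_power(flux_jy, distance_mpc):
    """P = 4 * pi * D^2 * S_nu, in W Hz^-1 (K-correction neglected)."""
    d = distance_mpc * MPC_IN_M
    return 4 * np.pi * d**2 * flux_jy * JY

# A ~6 Jy mini halo at ~75 Mpc gives a power of order 10^24 W Hz^-1.
print(f"{radio_power(6.0, 75.0):.2e}")
```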
A pilot wide-field VLBI survey of the GOODS-North field
- Authors: Akoto-Danso, Alexander
- Date: 2019
- Subjects: Radio astronomy , Very long baseline interferometry , Radio interferometers , Imaging systems in astronomy , Hubble Space Telescope (Spacecraft) -- Observations
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/72296 , vital:30027
- Description: Very Long Baseline Interferometry (VLBI) has significant advantages in disentangling active galactic nuclei (AGN) from star formation, particularly at intermediate to high redshift, due to its high angular resolution and insensitivity to dust. Surveys using VLBI arrays are only just becoming practical over wide areas, with numerous developments and innovations (such as multi-phase-centre techniques) in observation and data analysis techniques. However, fully automated pipelines for VLBI data analysis are based on old software packages and are unable to incorporate new calibration and imaging algorithms. In this work, the researcher developed a pipeline for VLBI data analysis which integrates a recent wide-field imaging algorithm, RFI excision, and a purpose-built source-finding algorithm specifically developed for the 64k×64k wide-field VLBI images. The researcher used this novel pipeline to process 6% (~9 arcmin² of the total 160 arcmin²) of the data from the CANDELS GOODS-North extragalactic field at 1.6 GHz. The milliarcsecond-scale images have an average rms of ~10 μJy/beam. Forty-four (44) candidate sources were detected, most of which are at sub-mJy flux densities, having brightness temperatures and luminosities of >5×10⁵ K and >6×10²¹ W Hz⁻¹ respectively. This work demonstrates that automated post-processing pipelines for wide-field, uniform-sensitivity VLBI surveys are feasible, and indeed made more efficient with new software, wide-field imaging algorithms and purpose-built source-finders. This broadens the discovery space for future wide-field surveys with upcoming arrays such as the African VLBI Network (AVN), MeerKAT and the Square Kilometre Array (SKA). (An illustrative brightness-temperature sketch follows this record.)
- Full Text:
- Date Issued: 2019
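A hedged sketch of the brightness-temperature threshold quoted above: T_b = S c² / (2 k_B ν² Ω) for a source of flux density S filling solid angle Ω. The 100 μJy flux density and 10 mas Gaussian size below are illustrative values that land near the 5×10⁵ K figure.

```python
# Hedged sketch of a VLBI brightness-temperature estimate.
import numpy as np

C = 2.998e8                    # m/s
K_B = 1.381e-23                # J/K
JY = 1e-26                     # W m^-2 Hz^-1
MAS = np.radians(1 / 3.6e6)    # one milliarcsecond in radians

def brightness_temperature(flux_jy, nu_hz, theta_maj_mas, theta_min_mas):
    """T_b for a Gaussian source; Omega = pi * a * b / (4 ln 2)."""
    omega = (np.pi * (theta_maj_mas * MAS) * (theta_min_mas * MAS)
             / (4 * np.log(2)))
    return flux_jy * JY * C**2 / (2 * K_B * nu_hz**2 * omega)

# A 100 uJy source of ~10 mas size at 1.6 GHz: ~5e5 K.
print(f"{brightness_temperature(1e-4, 1.6e9, 10, 10):.2e} K")
```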
CubiCal: a fast radio interferometric calibration suite exploiting complex optimisation
- Authors: Kenyon, Jonathan
- Date: 2019
- Subjects: Interferometry , Radio astronomy , Python (Computer program language) , Square Kilometre Array (Project)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/92341 , vital:30711
- Description: The advent of the Square Kilometre Array and its precursors marks the start of an exciting era for radio interferometry. However, with new instruments producing unprecedented quantities of data, many existing calibration algorithms and implementations will be hard-pressed to keep up. Fortunately, it has recently been shown that the radio interferometric calibration problem can be expressed concisely using the ideas of complex optimisation. The resulting framework exposes properties of the calibration problem which can be exploited to accelerate traditional non-linear least squares algorithms. We extend the existing work on the topic by considering the more general problem of calibrating a Jones chain: the product of several unknown gain terms. We also derive specialised solvers for performing phase-only, delay and pointing error calibration. In doing so, we devise a method for determining update rules for arbitrary, real-valued parametrisations of a complex gain. The solvers are implemented in an optimised Python package called CubiCal. CubiCal makes use of Cython to generate fast C and C++ routines for performing computationally demanding tasks, whilst leveraging multiprocessing and shared memory to take advantage of modern, parallel hardware. The package is fully compatible with the Measurement Set, the most common format for interferometer data, and is well integrated with Montblanc, a third-party package which implements optimised model visibility prediction. CubiCal's calibration routines are applied successfully to both simulated and real data for the field surrounding the source 3C147. These tests include direction-independent and direction-dependent calibration, as well as tests of the specialised solvers. Finally, we conduct extensive performance benchmarks and verify that CubiCal convincingly outperforms its most comparable competitor. (An illustrative gain-update sketch follows this record.)
- Full Text:
- Date Issued: 2019
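The complex-optimisation view yields closed-form diagonal updates for direction-independent gains. Below is a minimal sketch of that classic update (in the spirit of StefCal-style solvers, not CubiCal's actual routines), where the data model is V ≈ g M gᴴ and the visibilities are synthetic.

```python
# Minimal sketch of the diagonal per-antenna gain update from complex
# least squares: g_p <- sum_q v_pq conj(m_pq) g_q / sum_q |m_pq g_q|^2.
import numpy as np

def gain_update(v, m, g):
    """One update of per-antenna gains, model v_pq ~ g_p m_pq conj(g_q)."""
    y = m * g.conj()[None, :]            # y_pq = m_pq * conj(g_q)
    num = (v * y.conj()).sum(axis=1)     # sum_q v_pq * conj(y_pq)
    den = (np.abs(y) ** 2).sum(axis=1)
    return num / den

rng = np.random.default_rng(3)
n_ant = 7
g_true = np.exp(1j * rng.uniform(-0.5, 0.5, n_ant))
m = rng.normal(size=(n_ant, n_ant)) + 1j * rng.normal(size=(n_ant, n_ant))
m = (m + m.conj().T) / 2                 # Hermitian model visibilities
np.fill_diagonal(m, 0)                   # no autocorrelations
v = g_true[:, None] * m * g_true[None, :].conj()

g = np.ones(n_ant, dtype=complex)
for _ in range(40):
    g = 0.5 * (g + gain_update(v, m, g))  # damped step aids convergence
print(np.abs(v - g[:, None] * m * g[None, :].conj()).max())  # ~0 when done
```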
Foreground simulations for observations of the global 21-cm signal
- Authors: Klutse, Diana
- Date: 2019
- Subjects: Cosmic background radiation , Astronomy -- Observations , Electromagnetic waves , Radiation, Background
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/76398 , vital:30557
- Description: The sky-averaged (global) spectrum of the redshifted 21-cm line promises to be a direct probe of the Dark Ages, the period before the first luminous sources formed, and of the Epoch of Reionization, during which these sources produced enough ionizing photons to ionize the neutral intergalactic medium. However, observations of this signal are contaminated both by astrophysical foregrounds, which are orders of magnitude brighter than the cosmological signal, and by non-astrophysical and non-ideal instrumental effects. It is therefore crucial to understand all these data components and their impacts on the cosmological signal for successful signal extraction. With this in view, we investigated the impact that small-scale spatial structures of the diffuse Galactic foreground have on the foreground spectrum as observed by a global 21-cm experiment. We simulated two different sets of observations, using a realistic dipole beam model, of two synchrotron foreground templates that differ from each other in their small-scale structure: the original 408 MHz all-sky map by Haslam et al. (1982), and a version where the calibration was improved to remove artifacts and point sources (Remazeilles et al., 2015). We generated simulated foreground spectra and modeled them using a polynomial expansion in frequency. We found that the different foreground templates have a modest impact on the simulated spectra, generating differences of up to 2% in the root mean square of the residual spectra after the log-polynomial best fit was subtracted out. (An illustrative log-polynomial fitting sketch follows this record.)
- Full Text:
- Date Issued: 2019
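A hedged sketch of the foreground-modelling step described above: fit a low-order polynomial in log(frequency)-log(temperature) space and report the rms of the residual spectrum. The power-law spectrum and noise level here are synthetic placeholders, not the simulated PAPER-era data.

```python
# Hedged sketch: log-polynomial foreground fit and residual rms.
import numpy as np

def logpoly_residual_rms(freqs_mhz, temps_k, order=5):
    """Fit log10(T) vs log10(nu) with a polynomial; return residual rms [K]."""
    lf, lt = np.log10(freqs_mhz), np.log10(temps_k)
    coeffs = np.polyfit(lf, lt, order)
    model = 10 ** np.polyval(coeffs, lf)
    return np.sqrt(np.mean((temps_k - model) ** 2))

freqs = np.linspace(50, 100, 256)            # MHz
temps = 300.0 * (freqs / 150.0) ** -2.5      # power-law foreground [K]
temps += np.random.default_rng(7).normal(0, 0.01, freqs.size)
print(f"residual rms: {logpoly_residual_rms(freqs, temps):.4f} K")
```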
Observing cosmic reionization with PAPER: polarized foreground simulations and all sky images
- Authors: Nunhokee, Chuneeta Devi
- Date: 2019
- Subjects: Cosmic background radiation , Astronomy -- Observations , Epoch of reionization -- Research , Hydrogen -- Spectra , Radio interferometers
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/68203 , vital:29218
- Description: The Donald C. Backer Precision Array to Probe the Epoch of Reionization (PAPER, Parsons et al., 2010) was built with the aim of detecting the redshifted 21 cm Hydrogen line, which is likely the best probe of the thermal evolution of the intergalactic medium and the reionization of neutral Hydrogen in our Universe. Observations of the 21 cm signal are challenged by bright astrophysical foregrounds and systematics that require precise modeling in order to extract the cosmological signal. In particular, the instrumental leakage of polarized foregrounds may contaminate the 21 cm power spectrum. In this work, we developed a formalism to describe the leakage due to instrumental wide-field effects in visibility-based power spectra and used it to predict contaminations in observations. We find the leakage due to a population of point sources to be higher than that of the diffuse Galactic emission, for which we can predict minimal contamination at k > 0.3 h Mpc⁻¹. We also analyzed data from the last observing season of PAPER via all-sky imaging, with a view to characterizing the foregrounds. We generated an all-sky catalogue of 88 sources down to a flux density of 5 Jy. Moreover, we measured both polarized point sources and the Galactic diffuse emission, and used these measurements to constrain our model of polarization leakage. We find the leakage due to a population of point sources to be 12% lower than the prediction from our polarized model.
- Full Text:
- Date Issued: 2019
Statistical Analysis of the Radio-Interferometric Measurement Equation, a derived adaptive weighting scheme, and applications to LOFAR-VLBI observation of the Extended Groth Strip
- Authors: Bonnassieux, Etienne
- Date: 2019
- Subjects: Radio astronomy , Astrophysics , Astrophysics -- Instruments -- Calibration , Imaging systems in astronomy , Radio interferometers , Radio telescopes , Astronomy -- Observations
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/93789 , vital:30942
- Description: J.R.R. Tolkien wrote, in his Mythopoeia, that “He sees no stars who does not see them first, of living silver made that sudden burst, to flame like flowers beneath the ancient song”. In his defense of myth-making, he formulates the argument that the attribution of meaning is an act of creation - that “trees are not ‘trees’ until so named and seen” - and that this capacity for creation defines the human creature. The scientific endeavour, in this context, can be understood as a social expression of a fundamental feature of humanity, and from this endeavour flows much understanding. This thesis, one thread among many, focuses on the study of astronomical objects as seen by the radio waves they emit. What are radio waves? Electromagnetic waves were theorised by James Clerk Maxwell (Maxwell 1864) in his great theoretical contribution to modern physics, their speed matching the speed of light as measured by Ole Christensen Rømer and, later, James Bradley. It was not until Heinrich Rudolf Hertz’s 1887 experiment that these waves were measured in a laboratory, leading to the dawn of radio communications - and, later, radio astronomy. The link between radio waves and light was one of association: light is known to behave as a wave (Young’s double-slit experiment), with the same propagation speed as electromagnetic radiation. Light “proper” is also known to exist beyond the optical regime: Herschel’s experiment shows that when dispersed through a prism, sunlight warms even those parts of a desk which are not observed to be lit (the first evidence of infrared light). The link between optical light and unseen electromagnetic radiation is then an easy step to make, and one confirmed through countless technological applications (optical fibre, to name but one). And as soon as this link is established, a question immediately comes to the mind of the astronomer: what does the sky, our Universe, look like to the radio “eye”? Radio astronomy has a short but storied history: from Karl Jansky’s serendipitous 1933 observation of the centre of the Milky Way, which outshines our Sun in the radio regime, to Grote Reber’s hand-built back-yard radio antenna in 1937, which successfully detected radio emission from the Milky Way itself, to such monumental projects as the Square Kilometre Array and its multiple pathfinders, it has led to countless discoveries and the opening of a truly new window on the Universe. The work presented in this thesis is a contribution to this discipline - the culmination of three years of study, which is a rather short time to get a firm grasp of radio interferometry both in theory and in practice. The need for robust, automated methods - which are improving daily, thanks to the tireless labour of the scientists in the field - is becoming ever stronger as the SKA approaches, looming large on the horizon; but even today, in the precursor era of LOFAR, MeerKAT and other pathfinders, it is keenly felt. When I started my doctorate, the sheer scale of the task at hand felt overwhelming - to actually be able to contribute to its resolution seemed daunting indeed! Thankfully, as the saying goes, no society sets for itself material goals which it cannot achieve.
This thesis took place at an exciting time for radio interferometry: at the start of my doctorate, the LOFAR international stations were - to my knowledge - only beginning to be used, and even then, only tentatively; MeerKAT had not yet shown its first light; the techniques used throughout my work were still being developed. At the time of writing, great strides have been made. One of the greatest technical challenges of LOFAR - imaging using the international stations - is starting to become reality. This technical challenge is the key problem that this thesis set out to address. While we only achieved partial success so far, it is a testament to the difficulty of the task that it is not yet truly resolved. One of the major results of this thesis is a model of a bright resolved source near a famous extragalactic field: properly modeling this source not only allows the use of international LOFAR stations, but also grants deeper access to the extragalactic field itself, which is otherwise polluted by the 3C source’s sidelobes. This result was only achieved thanks to the other major result of this thesis: the development of a theoretical framework with which to better understand the effect of calibration errors on images made from interferometric data, and an algorithm to strongly mitigate them. The structure of this manuscript is as follows: we begin with an introduction to radio interferometry, LOFAR, and the emission mechanisms which dominate for our field of interest. These introductions are primarily intended to give a brief overview of the technical aspects of the data reduced in this thesis. We follow with an overview of the Measurement Equation formalism, which underpins our theoretical work. This is the keystone of this thesis. We then show the theoretical work that was developed as part of the research work done during the doctorate - which was published in Astronomy & Astrophysics. Its practical application - a quality-based weighting scheme - is used throughout our data reduction. This data reduction is the next topic of this thesis: we contextualise the scientific interest of the data we reduce, and explain both the methods and the results we achieve.
- Full Text:
- Date Issued: 2019
The dispersion measure in broadband data from radio pulsars
- Authors: Rammala, Isabella
- Date: 2019
- Subjects: Pulsars , Radio astrophysics , Astrophysics , Broadband communication systems
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/67857 , vital:29157
- Description: Modern-day radio telescopes make use of wideband receivers to take advantage of the broadband nature of radio pulsar emission. We ask how the use of such broadband pulsar data affects the measured pulsar dispersion measure (DM). Previous works have shown that, although the exact pulsar radio emission processes are not well understood, observations reveal evidence of a possible frequency dependence of the emission altitudes in the pulsar magnetosphere, a phenomenon known as radius-to-frequency mapping (RFM). This frequency dependence due to RFM can be embedded in the dispersive delay of the pulse profiles, which is normally interpreted as an interstellar effect (the DM). We therefore interpret this intrinsic effect as an additional component δDM on top of the interstellar DM, and investigate how it can be statistically attributed to intrinsic profile evolution, as well as to profile scattering. We make use of Monte Carlo simulations of beam models to simulate realistic pulsar beams of various geometries, from which we generate intrinsic profiles in various frequency bands. The results show that the excess DM due to intrinsic profile evolution is more pronounced at high frequencies, whereas scattering dominates the excess DM at low frequencies. The implications of these results are presented in relation to broadband pulsar timing.
- Full Text:
- Date Issued: 2019
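As a quick illustration of the dispersive delay underlying this abstract (the standard cold-plasma dispersion relation, not code from the thesis), the delay relative to infinite frequency scales as DM/ν²:

```python
import numpy as np

# Cold-plasma dispersion: delay (s) relative to infinite frequency for a
# dispersion measure DM (pc cm^-3) at an observing frequency in MHz.
K_DM = 4.148808e3  # dispersion constant, MHz^2 pc^-1 cm^3 s

def dispersive_delay(dm, freq_mhz):
    """Dispersive delay in seconds at freq_mhz for dispersion measure dm."""
    return K_DM * dm / freq_mhz**2

dm = 50.0                              # pc cm^-3 (illustrative value)
band = np.linspace(900.0, 1670.0, 5)   # MHz, e.g. an L-band receiver
delays = dispersive_delay(dm, band)
print(delays - delays.min())           # differential delay across the band
```

A small DM error δDM shifts the profile by K_DM·δDM/ν², which is how the intrinsic δDM component described above can masquerade as an interstellar effect.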
TiRiFiG, a graphical 3D kinematic modelling tool
- Authors: Twum, Samuel Nyarko
- Date: 2019
- Subjects: Tilted Ring Fitting GUI , Astronomy -- Observations , Galaxies -- Observations , Galaxies -- Measurement , Galaxies -- Measurement -- Data processing , Kinematics
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/76409 , vital:30558
- Description: Galaxy kinematics is of crucial importance to understanding the structure, formation and evolution of galaxies. Studies of the mass distributions giving rise to the missing mass problem, first raised by Zwicky (1933), give us insight into dark matter distributions, which are tightly linked to cosmology. Neutral hydrogen (H i) has been widely used as a tracer in kinematic studies of galaxies. The Square Kilometre Array (SKA) and its precursors will produce large H i datasets which will require kinematic modelling tools to extract kinematic parameters such as rotation curves. TiRiFiC (Józsa et al., 2007) is an example of such a tool for 3D kinematic modelling of resolved spectroscopic observations of rotating disks in terms of the tilted-ring model, with varying complexities. TiRiFiC can model a large number (20+) of parameters, which are set in a configuration file (.def) for its execution. However, manually editing these parameters in a text editor is cumbersome. In this work, we present TiRiFiG, the Tilted Ring Fitting GUI, a graphical user interface that allows these parameter inputs to be modified interactively.
- Full Text:
- Date Issued: 2019
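A .def file of the kind described here is plain text; the sketch below shows the sort of programmatic edit that TiRiFiG's interface replaces. The "KEY = value" layout and the parameter names (NUR, VROT, INCL) are assumptions for illustration, not TiRiFiG's actual code:

```python
# Minimal sketch of editing a TiRiFiC-style .def file, assuming a plain
# "KEY = value(s)" text layout; hypothetical, not TiRiFiG's implementation.
example = """NUR = 4
VROT = 100.0 110.0 115.0 118.0   # rotation velocity per ring (km/s)
INCL = 60.0 60.0 60.0 60.0       # inclination per ring (deg)
"""
with open("model.def", "w") as f:
    f.write(example)

def read_def(path):
    """Parse KEY = value lines into a dict, ignoring comments."""
    params = {}
    with open(path) as f:
        for line in f:
            line = line.split("#")[0].strip()
            if "=" in line:
                key, value = line.split("=", 1)
                params[key.strip().upper()] = value.strip()
    return params

def write_def(path, params):
    """Write the parameter dict back out in KEY = value form."""
    with open(path, "w") as f:
        for key, value in params.items():
            f.write(f"{key} = {value}\n")

params = read_def("model.def")
# Hypothetical edit: flatten the rotation curve to 120 km/s at every ring.
params["VROT"] = " ".join(["120.0"] * int(params["NUR"]))
write_def("model.def", params)
print(params)
```

Even this toy example shows why hand-editing dozens of per-ring values in a text editor quickly becomes error-prone, and why an interactive GUI is attractive.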
Advanced radio interferometric simulation and data reduction techniques
- Authors: Makhathini, Sphesihle
- Date: 2018
- Subjects: Interferometry , Radio interferometers , Algorithms , Radio telescopes , Square Kilometre Array (Project) , Very Large Array (Observatory : N.M.) , Radio astronomy
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/57348 , vital:26875
- Description: This work shows how legacy and novel radio interferometry software packages and algorithms can be combined to produce high-quality reductions from modern telescopes, as well as end-to-end simulations for upcoming instruments such as the Square Kilometre Array (SKA) and its pathfinders. We first use a MeqTrees-based simulations framework to quantify how artefacts due to direction-dependent effects accumulate with time, and the consequences of this accumulation when observing the same field multiple times in order to reach the survey depth. Our simulations suggest that a survey like LADUMA (Looking At the Distant Universe with the MeerKAT Array), which aims to achieve a survey depth of 16 µJy/beam in a 72 kHz channel at 1.42 GHz by observing the same field for 1000 hours, will be able to reach its target depth in the presence of these artefacts. We also present stimela, a system-agnostic scripting framework for simulating, processing and imaging radio interferometric data. This framework is then used to write an end-to-end simulation pipeline in order to quantify the resolution and sensitivity of the SKA1-MID telescope (the first phase of the SKA mid-frequency telescope) as a function of frequency, as well as the scale-dependent sensitivity of the telescope. Finally, a stimela-based reduction pipeline is used to process data of the field around the source 3C147, taken by the Karl G. Jansky Very Large Array (VLA). The reconstructed image from this reduction has a typical 1σ noise level of 2.87 µJy/beam, and consequently a dynamic range of 8×10⁶:1, given the 22.58 Jy/beam flux density of the source 3C147.
- Full Text:
- Date Issued: 2018
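The quoted dynamic range follows directly from the two numbers given in the abstract; a one-line sanity check:

```python
# Dynamic range = peak flux density / image rms noise, using the abstract's numbers.
peak_jy = 22.58     # Jy/beam, flux density of 3C147
rms_jy = 2.87e-6    # Jy/beam, typical 1-sigma noise of the reconstruction
print(f"dynamic range ~ {peak_jy / rms_jy:.2e} : 1")   # ~7.9e6, i.e. ~8x10^6:1
```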
Automation of source-artefact classification
- Authors: Sebokolodi, Makhuduga Lerato Lydia
- Date: 2017
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/4920 , vital:20743
- Description: The high sensitivities of modern radio telescopes will enable the detection of very faint astrophysical sources in the distant Universe. However, these high sensitivities also imply that calibration artefacts, which were below the noise for less sensitive instruments, will emerge above the noise and may limit the dynamic range capabilities of these instruments. Detecting faint emission will require detection thresholds close to the noise and this may cause some of the artefacts to be incorrectly detected as real emission. The current approach is to manually remove the artefacts, or set high detection thresholds in order to avoid them. The former will not be possible given the large quantities of data that these instruments will produce, and the latter results in very shallow and incomplete catalogues. This work uses the negative detection method developed by Serra et al. (2012) to distinguish artefacts from astrophysical emission in radio images. We also present a technique that automates the identification of sources subject to severe direction-dependent (DD) effects and thus allows them to be flagged for DD calibration. The negative detection approach is shown to provide high reliability and high completeness catalogues for simulated data, as well as a JVLA observation of the 3C147 field (Mitra et al., 2015). We also show that our technique correctly identifies sources that require DD calibration for datasets from the KAT-7, LOFAR, JVLA and GMRT instruments.
- Full Text:
- Date Issued: 2017
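The negative-detection idea rests on the observation that noise and calibration artefacts are roughly symmetric about zero, while real emission is strictly positive: detections in the inverted image therefore estimate the false-positive rate at a given threshold. A toy sketch of that logic (my illustration, not the method's actual implementation):

```python
import numpy as np

def reliability(image, threshold):
    """Crude reliability estimate at a given detection threshold.

    Counts pixels above the threshold in the image and in its negation,
    as a per-pixel proxy for per-source detection counts.
    """
    n_pos = np.sum(image > threshold)    # candidate detections
    n_neg = np.sum(-image > threshold)   # estimated false positives
    if n_pos == 0:
        return 1.0
    return max(0.0, 1.0 - n_neg / n_pos)

# Toy image: unit-variance Gaussian noise plus two bright "sources".
rng = np.random.default_rng(42)
img = rng.normal(0.0, 1.0, (512, 512))
img[100, 100] = img[300, 200] = 25.0
for t in (3.0, 4.0, 5.0):
    print(f"threshold {t}: reliability ~ {reliability(img, t):.2f}")
```

At low thresholds the symmetric noise dominates both counts and reliability collapses; at high thresholds only the injected positive sources survive, which is exactly the trade-off between deep and reliable catalogues described above.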
Calibration and imaging with variable radio sources
- Authors: Mbou Sob, Ulrich Armel
- Date: 2017
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/37977 , vital:24721
- Description: Calibration of radio interferometric data is one of the most important steps required to produce high dynamic range radio maps with high fidelity. However, naive calibration - calibration with inaccurate knowledge of the sky and instruments - leads to the formation of calibration artefacts: the generation of spurious sources and deformations in the structure of extended sources. A particular class of calibration artefacts, called ghost sources, which results from calibration with incomplete sky models, has been extensively studied by Grobler et al. (2014, 2016) and Wijnholds et al. (2016). They developed a framework which can be used to predict the fluxes and positions of ghost sources. This work uses the approach initiated by these authors to study the calibration artefacts and ghost sources that are produced when variable sources are not included in sky models during calibration. This work investigates both long-term and short-term variability, and uses the root mean square (rms) and the power spectrum as metrics to evaluate the “quality” of the residual visibilities obtained through calibration. We show that overestimation and underestimation of source flux density during calibration produce similar but symmetrically opposite results. We show that calibration artefacts from sky model errors are not normally distributed, which prevents them from being removed by advanced techniques such as stacking. The power spectra measured from residuals with a variable source were significantly higher than those from residuals without one. This implies that advanced calibration techniques and complete sky models will be required for studies such as probing the Epoch of Reionization, where we seek to detect faint signals below the thermal noise.
- Full Text:
- Date Issued: 2017
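The two residual-quality metrics named above (rms and power spectrum) are straightforward to compute; a hedged sketch of how residual visibilities might be scored (illustrative only, not the thesis pipeline):

```python
import numpy as np

def residual_metrics(residuals):
    """rms and 1D power spectrum of complex residual visibilities.

    `residuals` is a 1D complex array (data minus calibrated model),
    ordered along an axis such as time; illustrative only.
    """
    rms = np.sqrt(np.mean(np.abs(residuals) ** 2))
    power = np.abs(np.fft.fft(residuals)) ** 2 / residuals.size
    return rms, power

# Toy comparison: pure noise vs noise plus an unmodelled variable source.
rng = np.random.default_rng(0)
noise = rng.normal(0, 1, 1024) + 1j * rng.normal(0, 1, 1024)
t = np.arange(1024)
variable = 0.5 * np.sin(2 * np.pi * t / 256) * np.exp(2j * np.pi * 0.01 * t)
for label, res in (("noise only", noise), ("with variable source", noise + variable)):
    rms, power = residual_metrics(res)
    print(f"{label}: rms = {rms:.3f}, peak power = {power.max():.1f}")
```

The unmodelled variable source concentrates power at specific spatial/temporal frequencies, which is why the power spectrum is a more sensitive diagnostic than the rms alone.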
Data compression, field of interest shaping and fast algorithms for direction-dependent deconvolution in radio interferometry
- Authors: Atemkeng, Marcellin T
- Date: 2017
- Subjects: Radio astronomy , Solar radio emission , Radio interferometers , Signal processing -- Digital techniques , Algorithms , Data compression (Computer science)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/6324 , vital:21089
- Description: In radio interferometry, observed visibilities are intrinsically sampled at some interval in time and frequency. Modern interferometers are capable of producing data at very high time and frequency resolution; practical limits on storage and computation costs require that some form of data compression be imposed. The traditional form of compression is simple averaging of the visibilities over coarser time and frequency bins. This has an undesired side effect: the resulting averaged visibilities “decorrelate”, and do so differently depending on the baseline length and averaging interval. This translates into a non-trivial signature in the image domain known as “smearing”, which manifests itself as an attenuation in amplitude towards off-centre sources. With the increasing fields of view and/or longer baselines employed in modern and future instruments, the trade-off between data rate and smearing becomes increasingly unfavourable. Averaging also results in a baseline-length- and position-dependent point spread function (PSF). In this work, we investigate alternative approaches to low-loss data compression. We show that averaging of the visibility data can be understood as a form of convolution by a boxcar-like window function, and that by employing alternative baseline-dependent window functions a more favourable interferometer smearing response may be induced. Specifically, we can improve the amplitude response over a chosen field of interest and attenuate sources outside it. The main cost of this technique is a reduction in nominal sensitivity; we investigate the smearing vs. sensitivity trade-off and show that in certain regimes a favourable compromise can be achieved. We show the application of this technique to simulated data from the Jansky Very Large Array and the European Very Long Baseline Interferometry Network. Furthermore, we show that the position-dependent PSF shape induced by averaging can be approximated using linear algebraic properties to effectively reduce the computational complexity of evaluating the PSF at each sky position. We conclude by implementing position-dependent PSF deconvolution in an imaging and deconvolution framework. Using the Low-Frequency Array radio interferometer, we show that deconvolution with position-dependent PSFs results in higher image fidelity compared to a simple CLEAN algorithm and its derivatives.
- Full Text:
- Date Issued: 2017
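The decorrelation described above has a compact closed form: plain averaging of a fringe exp(2πiνt) over a unit interval scales its amplitude by sinc(ν), and a baseline-dependent window simply replaces the boxcar weights. A minimal sketch under those assumptions (illustrative, not the thesis code; the Hanning taper stands in for whatever window one might choose per baseline):

```python
import numpy as np

def windowed_average(vis, weights):
    """Compress visibilities in a bin by a weighted (windowed) average."""
    w = np.asarray(weights) / np.sum(weights)
    return np.sum(vis * w)

# Fringe on a long baseline: the phase rotates within the averaging bin.
n = 64                                 # samples per averaging interval
phase_turns = 0.4                      # fringe turns across the interval
t = np.linspace(-0.5, 0.5, n)
vis = np.exp(2j * np.pi * phase_turns * t)

boxcar = np.ones(n)                    # plain averaging
taper = np.hanning(n)                  # an alternative window, for illustration
print("boxcar amplitude: ", abs(windowed_average(vis, boxcar)))  # ~0.76: decorrelation
print("tapered amplitude:", abs(windowed_average(vis, taper)))   # closer to 1
```

Changing the window reshapes the amplitude response as a function of fringe rate (and hence source position), which is precisely the field-of-interest shaping the abstract describes; the cost is the reduced effective number of samples, i.e. the sensitivity loss noted above.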
MEQSILHOUETTE: a mm-VLBI observation and signal corruption simulator
- Authors: Blecher, Tariq
- Date: 2017
- Subjects: Large astronomical telescopes , Very long baseline interferometry , MEQSILHOUETTE (Software) , Event horizon telescope
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/40713 , vital:25019
- Description: The Event Horizon Telescope (EHT) aims to resolve the innermost emission of the nearby supermassive black holes Sgr A* and M87 on event horizon scales. This emission is predicted to be gravitationally lensed by the black hole, which should produce a shadow (or silhouette) feature, a precise measurement of which is a test of gravity in the strong-field regime. This emission is also an ideal probe of the innermost accretion and jet-launch physics, offering new insights into this data-limited observing regime. The EHT will use the technique of Very Long Baseline Interferometry (VLBI) at (sub)millimetre wavelengths, which has a diffraction-limited angular resolution of order ~10 µarcsec. However, this technique suffers from unique challenges, including scattering and attenuation in the troposphere and interstellar medium; variable source structure; as well as antenna pointing errors comparable to the size of the primary beam. In this thesis, we present the meqsilhouette software package, which is focused on simulating realistic EHT data. It has the capability to simulate a time-variable source, and includes realistic descriptions of the effects of the troposphere and the interstellar medium, as well as primary beams and associated antenna pointing errors. We demonstrate through several example simulations that these effects can limit the ability to measure the key science parameters. This simulator can be used to research calibration, parameter estimation and imaging strategies, as well as to gain insight into possible systematic uncertainties.
- Full Text:
- Date Issued: 2017
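To see why pointing errors comparable to the primary beam are so damaging, consider a Gaussian beam model (an assumption for illustration; MEQSILHOUETTE's actual beam models may differ): a pointing offset scales the apparent source amplitude by the beam response at that offset.

```python
import numpy as np

def gaussian_beam(offset, fwhm):
    """Primary-beam power response at an angular offset from boresight."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * (offset / sigma) ** 2)

# At 230 GHz a ~10 m dish has a primary beam of roughly 1.2*lambda/D,
# i.e. a few tens of arcseconds; take FWHM = 30" as an illustrative number.
fwhm = 30.0                                    # arcsec
for pointing_error in (0.0, 5.0, 15.0, 30.0):  # arcsec
    gain = gaussian_beam(pointing_error, fwhm)
    print(f'offset {pointing_error:5.1f}": apparent gain {gain:.3f}')
```

An offset of half the beam FWHM already halves the apparent amplitude, and because the offset drifts with time and differs per antenna, it mimics the time-variable gain errors that calibration must disentangle from genuine source variability.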