Automation of source-artefact classification
- Authors: Sebokolodi, Makhuduga Lerato Lydia
- Date: 2017
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/4920 , vital:20743
- Description: The high sensitivities of modern radio telescopes will enable the detection of very faint astrophysical sources in the distant Universe. However, these high sensitivities also imply that calibration artefacts, which were below the noise for less sensitive instruments, will emerge above the noise and may limit the dynamic range capabilities of these instruments. Detecting faint emission will require detection thresholds close to the noise, and this may cause some of the artefacts to be incorrectly detected as real emission. The current approach is to manually remove the artefacts, or to set high detection thresholds in order to avoid them. The former will not be possible given the large quantities of data that these instruments will produce, and the latter results in very shallow and incomplete catalogues. This work uses the negative detection method developed by Serra et al. (2012) to distinguish artefacts from astrophysical emission in radio images. We also present a technique that automates the identification of sources subject to severe direction-dependent (DD) effects and thus allows them to be flagged for DD calibration. The negative detection approach is shown to provide high-reliability and high-completeness catalogues for simulated data, as well as for a JVLA observation of the 3C147 field (Mitra et al., 2015). We also show that our technique correctly identifies sources that require DD calibration for datasets from the KAT-7, LOFAR, JVLA and GMRT instruments. (A toy numerical sketch of the negative-detection idea follows this record.)
- Full Text:
- Date Issued: 2017
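The negative-detection reliability estimate described in the record above can be illustrated with a small sketch: if noise and symmetric calibration artefacts produce positive and negative peaks in equal numbers, then the count of negative detections above a threshold estimates the false-positive count. Everything below (the simulated S/N values, counts, and the `reliability` helper) is hypothetical toy data, not the thesis' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy S/N values: noise and symmetric artefacts yield positive and negative
# peaks in equal numbers; real emission only produces positive peaks.
noise_pos = np.abs(rng.normal(0.0, 1.0, 20000))   # positive noise/artefact peaks
noise_neg = np.abs(rng.normal(0.0, 1.0, 20000))   # peaks in the inverted image
real_snr = rng.uniform(3.0, 10.0, 500)            # "real" sources (toy values)

positive = np.concatenate([noise_pos, real_snr])  # detections in the image
negative = noise_neg                              # detections in the inverted image

def reliability(threshold):
    """Estimated fraction of positive detections above `threshold` that are
    real, using the negative-detection count as a proxy for the false-positive
    count (the Serra et al. 2012 argument)."""
    n_pos = np.sum(positive > threshold)
    n_neg = np.sum(negative > threshold)
    return 0.0 if n_pos == 0 else max(0.0, 1.0 - n_neg / n_pos)

for t in (2.0, 3.0, 4.0, 5.0):
    print(f"S/N > {t}: estimated reliability ~ {reliability(t):.2f}")
```

Raising the threshold trades completeness for reliability; the negative-detection count makes that trade-off measurable per catalogue.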
Calibration and imaging with variable radio sources
- Authors: Mbou Sob, Ulrich Armel
- Date: 2017
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/37977 , vital:24721
- Description: Calibration of radio interferometric data is one of the most important steps required to produce high dynamic range radio maps with high fidelity. However, naive calibration (based on inaccurate knowledge of the sky and instruments) leads to the formation of calibration artefacts: the generation of spurious sources and deformations in the structure of extended sources. A particular class of calibration artefacts, called ghost sources, which results from calibration with incomplete sky models, has been extensively studied by Grobler et al. (2014, 2016) and Wijnholds et al. (2016). They developed a framework which can be used to predict the fluxes and positions of ghost sources. This work uses the approach initiated by these authors to study the calibration artefacts and ghost sources that are produced when variable sources are not considered in sky models during calibration. This work investigates both long-term and short-term variability, and uses the root mean square (rms) and the power spectrum as metrics to evaluate the “quality” of the residual visibilities obtained through calibration. We show that overestimating and underestimating source flux density during calibration produces similar but symmetrically opposite results. We show that calibration artefacts from sky model errors are not normally distributed, which prevents them from being removed by advanced techniques such as stacking. The power spectra measured from the residuals with a variable source were significantly higher than those from residuals without a variable source. This implies that advanced calibration techniques and complete sky models will be required for studies such as probing the Epoch of Reionization, where we seek to detect faint signals below the thermal noise. (A toy sketch of the rms and power-spectrum metrics follows this record.)
- Full Text:
- Date Issued: 2017
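A minimal sketch of the two metrics named in the record above, rms and power spectrum, applied to a synthetic single-baseline residual time series; the variable-source term, its amplitude, period and fringe rate are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_times = 1024
t = np.arange(n_times)

# Residual visibilities on one baseline: thermal noise, plus the un-modelled
# contribution of a variable source (toy amplitude, period and fringe rate).
noise = (rng.normal(0, 1, n_times) + 1j * rng.normal(0, 1, n_times)) / np.sqrt(2)
variable = 0.5 * np.sin(2 * np.pi * t / 128) * np.exp(2j * np.pi * 0.05 * t)

def metrics(residuals):
    """rms and power spectrum of a complex residual-visibility time series."""
    rms = np.sqrt(np.mean(np.abs(residuals) ** 2))
    power = np.abs(np.fft.fft(residuals)) ** 2 / residuals.size
    return rms, power

rms0, p0 = metrics(noise)                 # sky model complete
rms1, p1 = metrics(noise + variable)      # variable source left out of the model
print(f"rms without / with variable source: {rms0:.3f} / {rms1:.3f}")
print(f"peak spectral power ratio: {p1.max() / p0.max():.1f}")
```

The un-modelled variable source barely moves the rms but stands out strongly in the power spectrum, which is why the power spectrum is the more sensitive of the two metrics here.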
Data compression, field of interest shaping and fast algorithms for direction-dependent deconvolution in radio interferometry
- Authors: Atemkeng, Marcellin T
- Date: 2017
- Subjects: Radio astronomy , Solar radio emission , Radio interferometers , Signal processing -- Digital techniques , Algorithms , Data compression (Computer science)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/6324 , vital:21089
- Description: In radio interferometry, observed visibilities are intrinsically sampled at some interval in time and frequency. Modern interferometers are capable of producing data at very high time and frequency resolution; practical limits on storage and computation costs require that some form of data compression be imposed. The traditional form of compression is simple averaging of the visibilities over coarser time and frequency bins. This has an undesired side effect: the resulting averaged visibilities “decorrelate”, and do so differently depending on the baseline length and averaging interval. This translates into a non-trivial signature in the image domain known as “smearing”, which manifests itself as an attenuation in amplitude towards off-centre sources. With the increasing fields of view and/or longer baselines employed in modern and future instruments, the trade-off between data rate and smearing becomes increasingly unfavourable. Averaging also results in a baseline-length- and position-dependent point spread function (PSF). In this work, we investigate alternative approaches to low-loss data compression. We show that averaging of the visibility data can be understood as a form of convolution by a boxcar-like window function, and that by employing alternative baseline-dependent window functions a more optimal interferometer smearing response may be induced. Specifically, we can improve amplitude response over a chosen field of interest and attenuate sources outside the field of interest. The main cost of this technique is a reduction in nominal sensitivity; we investigate the smearing vs. sensitivity trade-off and show that in certain regimes a favourable compromise can be achieved. We show the application of this technique to simulated data from the Jansky Very Large Array and the European Very Long Baseline Interferometry Network. Furthermore, we show that the position-dependent PSF shape induced by averaging can be approximated using linear algebraic properties to effectively reduce the computational complexity for evaluating the PSF at each sky position. We conclude by implementing a position-dependent PSF deconvolution in an imaging and deconvolution framework. Using the Low-Frequency Array radio interferometer, we show that deconvolution with position-dependent PSFs results in higher image fidelity compared to a simple CLEAN algorithm and its derivatives. (A toy sketch of boxcar versus tapered averaging follows this record.)
- Full Text:
- Date Issued: 2017
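A small sketch of the central observation in the record above, that averaging is convolution by a window: replacing the boxcar with a centre-peaked taper changes the smearing response. The fringe rates, window choice (a sinc/Lanczos taper) and bin size below are illustrative, not the thesis' optimal baseline-dependent windows:

```python
import numpy as np

# Toy model of time smearing on one baseline: an off-centre source appears as
# a phasor exp(2*pi*i*f*t) whose fringe rate f grows with distance from the
# phase centre; averaging one bin of n samples attenuates its amplitude.
n = 64
t = np.arange(n)

def response(window, fringe_rate):
    """Post-averaging amplitude of a unit-flux source at the given fringe rate."""
    w = window / window.sum()
    return np.abs(np.sum(w * np.exp(2j * np.pi * fringe_rate * t)))

boxcar = np.ones(n)
# Illustrative centre-peaked alternative (a sinc/Lanczos taper); the thesis'
# actual baseline-dependent windows are derived differently.
taper = np.sinc((t - n / 2) / (n / 2))

for name, f in [("in field of interest", 0.005), ("outside field       ", 0.040)]:
    print(f"{name}: boxcar {response(boxcar, f):.3f}, taper {response(taper, f):.3f}")
```

With these toy numbers the taper holds the in-field amplitude better and suppresses the out-of-field source more strongly than the boxcar; the price is sensitivity, since any non-uniform normalized window has a larger sum of squared weights than the boxcar.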
Ionospheric disturbances during magnetic storms at SANAE
- Authors: Hiyadutuje, Alicreance
- Date: 2017
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/54956 , vital:26639
- Description: Coronal mass ejections (CMEs) and solar flares associated with extreme solar activity may strike the Earth's magnetosphere and give rise to geomagnetic storms. During geomagnetic storms, the polar plasma dynamics may influence the middle- and low-latitude ionosphere via travelling ionospheric disturbances (TIDs). These are wave-like electron density disturbances caused by atmospheric gravity waves propagating in the ionosphere. TIDs focus and defocus SuperDARN signals, producing a characteristic pattern of ground backscattered power (Samson et al., 1989). Geomagnetic storms may cause a decrease of total electron content (TEC), i.e. a negative storm effect, and/or an increase of TEC, i.e. a positive storm effect. The aim of this project was to investigate the ionospheric response to strong storms (Dst < -100 nT) between 2011 and 2015, using TEC and scintillation measurements derived from GPS receivers, as well as SuperDARN power, Doppler velocity and convection maps. In this study, the ionosphere's response was found to depend on the magnitude and the time of occurrence of the geomagnetic storm. The ionospheric TEC results show that most of the storm effects observed were a combination of negative and positive effects per storm per station (77.8%), with only 8.9% and 13.3% of the effects on TEC purely negative and positive, respectively. The highest number of storm effects occurred in autumn (36.4%), while 31.6%, 28.4% and 3.6% occurred in winter, spring and summer, respectively. During the storms studied, 71.4% showed phase scintillation in the range 0.7 - 1 radians, and only 14.3% of the storms had amplitude scintillations near 0.4. The storms studied at SANAE station generated TIDs with periods of less than an hour and amplitudes in the range 0.2 - 5 TECU. These TIDs were found to originate from high-velocity plasma flows, some of which are visible in SuperDARN convection maps. Early studies concluded that likely sources of these disturbances correspond to ionospheric current surges (Bristow et al., 1994) in the dayside auroral zone (Huang et al., 1998). (A toy sketch of classifying positive and negative storm effects follows this record.)
- Full Text:
- Date Issued: 2017
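A toy sketch of labelling the positive and negative TEC storm effects described above by comparing a storm day against a quiet-day baseline; the diurnal shapes, storm phases and the ±20% threshold are all invented for illustration (the thesis derives effects from GPS-measured TEC):

```python
import numpy as np

rng = np.random.default_rng(2)
hours = np.arange(24)

# Toy diurnal TEC curves (TECU): a quiet-day baseline, and a storm day with a
# positive phase near 09 UT and a negative phase near 18 UT (invented shapes).
quiet = 10.0 + 8.0 * np.sin(np.pi * hours / 24.0) ** 2
storm = quiet * (1.0 + 0.4 * np.exp(-((hours - 9.0) / 3.0) ** 2)
                     - 0.5 * np.exp(-((hours - 18.0) / 3.0) ** 2))
storm += rng.normal(0.0, 0.3, hours.size)

# Percentage deviation from the quiet-day baseline; label storm phases with an
# illustrative +/-20% threshold (not a value taken from the thesis).
deviation = 100.0 * (storm - quiet) / quiet
print("positive storm effect at hours (UT):", hours[deviation > 20.0])
print("negative storm effect at hours (UT):", hours[deviation < -20.0])
```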
MEQSILHOUETTE: a mm-VLBI observation and signal corruption simulator
- Authors: Blecher, Tariq
- Date: 2017
- Subjects: Large astronomical telescopes , Very long baseline interferometry , MEQSILHOUETTE (Software) , Event horizon telescope
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/40713 , vital:25019
- Description: The Event Horizon Telescope (EHT) aims to resolve the innermost emission of nearby supermassive black holes, Sgr A* and M87, on event horizon scales. This emission is predicted to be gravitationally lensed by the black hole, which should produce a shadow (or silhouette) feature, a precise measurement of which is a test of gravity in the strong-field regime. This emission is also an ideal probe of the innermost accretion and jet-launch physics, offering new insights into this data-limited observing regime. The EHT will use the technique of Very Long Baseline Interferometry (VLBI) at (sub)millimetre wavelengths, which has a diffraction-limited angular resolution of order ~10 µ-arcsec. However, this technique suffers from unique challenges, including scattering and attenuation in the troposphere and interstellar medium; variable source structure; and antenna pointing errors comparable to the size of the primary beam. In this thesis, we present the MEQSILHOUETTE software package, which focuses on simulating realistic EHT data. It can simulate a time-variable source, and includes realistic descriptions of the effects of the troposphere and the interstellar medium, as well as primary beams and associated antenna pointing errors. We have demonstrated through several example simulations that these effects can limit the ability to measure the key science parameters. This simulator can be used to research calibration, parameter estimation and imaging strategies, as well as to gain insight into possible systematic uncertainties. (A toy sketch of pointing-error gain modulation follows this record.)
- Full Text:
- Date Issued: 2017
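A small sketch of one corruption the record above describes: antenna pointing errors comparable to the primary beam scale modulating the apparent gain of an off-centre source. The Gaussian beam model, FWHM, source offset and random-walk error below are illustrative assumptions, not MEQSILHOUETTE's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
fwhm = 10.0   # illustrative primary-beam FWHM in arcsec at (sub)mm wavelengths

def beam_gain(offset):
    """Gaussian power-beam gain at `offset` arcsec from the pointing centre."""
    return np.exp(-4.0 * np.log(2.0) * (offset / fwhm) ** 2)

# A source 2" off-centre observed while the pointing error wanders as a random
# walk whose excursions are a sizeable fraction of the beam width (the regime
# the abstract calls comparable to the primary beam size).
source_offset = 2.0
pointing_error = np.cumsum(rng.normal(0.0, 0.2, 200))        # arcsec per step
gain = beam_gain(source_offset + pointing_error)
print(f"apparent gain: mean {gain.mean():.3f}, rms variation {gain.std():.3f}")
```

The resulting gain drift is time-variable and antenna-specific, which is exactly why such errors are hard to separate from genuine source variability without a simulator.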
Nonlinear optical responses of phthalocyanines in the presence of nanomaterials or when embedded in polymeric materials
- Authors: Bankole, Owolabi Mutolib
- Date: 2017
- Subjects: Phthalocyanines , Phthalocyanines -- Optical properties , Alkynes , Triazoles , Nonlinear optics , Photochemistry , Complex compounds , Amines , Mercaptopyridine
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/45794 , vital:25548
- Description: This work describes the synthesis, photophysical and nonlinear optical characterization of alkynyl Pcs (1, 2, 3, 8 and 9), a 1,2,3-triazole ZnPc (4), mercaptopyridine Pcs (5, 6 and 7) and amino Pcs (10 and 11). Complexes 1, 2, 4, 7, 8, 9 and 11 were newly synthesized and characterized using techniques including 1H-NMR, MALDI-TOF, UV-visible spectrophotometry, FTIR and elemental analysis. The results of the characterizations were in good agreement with the molecular structures and confirmed the purity of the new molecules. Complex 10 was covalently linked to pristine (GQDs), nitrogen-doped (NGQDs) and sulfur-nitrogen co-doped (SNGQDs) graphene quantum dots; gold nanoparticles (AuNPs); poly(acrylic acid) (PAA); and Fe3O4@Ag core-shell and Fe3O4-Ag hybrid nanoparticles. Complex 11 was linked to AgxAuy alloy nanoparticles via NH2-Au and/or Au-S bonding, while 2 and 3 were linked to gold nanoparticles (AuNPs) via click reactions. Evidence of successful conjugation of 2, 3, 10 and 11 to the nanomaterials was provided by UV-vis, EDS, TEM, XRD and XPS analyses. Optical limiting (OL) responses of the samples were evaluated using the open-aperture Z-scan technique at 532 nm with 10 ns pulses, in solution or when embedded in polymer mixtures. The Z-scan data for the studied samples fit a two-photon absorption (2PA) mechanism, but the Pcs and the Pc-nanomaterial or polymer composites also exhibit multi-photon absorption mechanisms, aided by the triplet-state population, giving rise to reverse saturable absorption (RSA). Phthalocyanines doped in polymer matrices showed larger nonlinear absorption coefficients (βeff), third-order susceptibilities (Im[χ(3)]) and second-order hyperpolarizabilities (γ), with an accompanying lower intensity threshold (Ilim), than in solution. Aggregation in DMSO negatively affected the NLO behaviour of the Pcs (8 as a case study) at low laser power; the behaviour improved at relatively higher laser power. Heavy-atom-substituted Pcs (6) showed enhanced NLO and OL properties compared to lighter-atom analogues such as 5 and 7. A direct relationship between the enhanced photophysical properties and the nonlinear effects, favoured by excited triplet absorption of 2, 3, 10 and 11 in the presence of nanomaterials, was established. The major factor responsible for the enhanced nonlinearities of 10 in the presence of NGQDs and SNGQDs was attributed to surface defects caused by the presence of heteroatoms such as nitrogen and sulfur. The studies showed that phthalocyanine-nanomaterial composites are useful in applications such as optical switching, pulse compression and laser pulse narrowing. (A toy open-aperture Z-scan fitting sketch follows this record.)
- Full Text:
- Date Issued: 2017
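A minimal sketch of fitting open-aperture Z-scan data with the standard two-photon absorption transmittance, T(z) ≈ 1 − q0/(2√2·(1 + (z/z0)²)) with q0 = βeff·I0·Leff (valid for q0 < 1); all sample, beam and noise parameters below are synthetic, not values from the thesis:

```python
import numpy as np
from scipy.optimize import curve_fit

def transmittance(z, q0, z0):
    """Open-aperture Z-scan transmittance of a 2PA absorber (q0 < 1):
    T(z) = 1 - q0 / (2*sqrt(2) * (1 + (z/z0)^2)), q0 = beta_eff * I0 * L_eff."""
    return 1.0 - q0 / (2.0 * np.sqrt(2.0) * (1.0 + (z / z0) ** 2))

# Synthetic "measurement": sample positions in mm with small Gaussian noise.
rng = np.random.default_rng(4)
z = np.linspace(-30.0, 30.0, 121)
data = transmittance(z, q0=0.4, z0=5.0) + rng.normal(0.0, 0.004, z.size)

(q0_fit, z0_fit), _ = curve_fit(transmittance, z, data, p0=(0.1, 3.0))

# Recover beta_eff from q0 for assumed (synthetic) beam/sample parameters.
I0 = 5e12        # on-axis peak intensity, W/m^2
L_eff = 1e-3     # effective sample length, m
print(f"q0 = {q0_fit:.3f}, z0 = {abs(z0_fit):.2f} mm, "
      f"beta_eff ~ {q0_fit / (I0 * L_eff):.2e} m/W")
```

The depth of the transmittance dip at z = 0 fixes q0, and hence βeff once the beam intensity and effective sample length are known.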
Real-time audio spectrum analyser research, design, development and implementation using the 32-bit ARM® Cortex-M4 microcontroller
- Authors: Just, Stefan Antonio
- Date: 2017
- Subjects: Spectrum analyzers , Sound -- Recording and reproducing -- Digital techniques , Real-time data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/50536 , vital:25997
- Description: This thesis describes the design and testing of a low-cost hand-held real-time audio analyser (RTAA). This includes the design of an embedded system, the development of the firmware executed by the embedded system, and the implementation of real-time signal processing algorithms. One of the objectives of this project was to design a low-cost alternative to the currently available commercial audio analysers. The device was tested with the standard audio test signal (pink noise) and compared against the expected flat-spectrum response corresponding to a balanced audio system. The design makes use of a 32-bit Reduced Instruction Set Computer (RISC) processor core (ARM Cortex-M4), namely the STM32F4 family of microcontrollers. Due to the pin compatibility of the microcontroller (designed and manufactured by STMicroelectronics), the development board can also be upgraded with the newly released Cortex-M7 microcontroller, namely the STM32F7 family of microcontrollers. Moreover, the low-cost hardware design features 256 kB of Random Access Memory (RAM); an on-board Micro-Electro-Mechanical System (MEMS) microphone; on-chip 12-bit Analogue-to-Digital (A/D) and Digital-to-Analogue (D/A) converters; a 3.2" Thin-Film-Transistor Liquid-Crystal Display (TFT-LCD) with a resistive touch-screen sensor; and an SD-card socket. Furthermore, two additional expansion modules were designed to extend the functionality of the real-time audio analyser. The first is an audio/video module featuring a professional 24-bit, 192 kHz sampling rate audio CODEC; a balanced microphone input; an unbalanced line output; three MEMS microphone inputs; a headphone output; and a Video Graphics Array (VGA) controller allowing the analysed audio spectrum to be displayed on either a projector or a monitor. The second expansion module features two external memories: 1 MB of Static Random Access Memory (SRAM) and 16 MB of Synchronous Dynamic Random Access Memory (SDRAM). While the two expansion modules were not completely utilised by the firmware presented in this thesis, future firmware revisions will provide higher-performing and more accurate analysis of the audio spectrum. The full research and design process for the real-time audio analyser is discussed; problems and pitfalls with the final implemented design are highlighted and possible resolutions investigated. The development costs (excluding labour) are given in the form of a bill of materials (BOM), with total costs averaging around R1000. Moreover, the additional VGA controller could further decrease the overall cost by allowing the TFT-LCD screen to be removed from the audio analyser, provided the external display is not included in the BOM. (A toy sketch of the analyser's core windowed-FFT step follows this record.)
- Full Text:
- Date Issued: 2017
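A toy sketch of the analyser's core step, a windowed FFT, plus the pink-noise check described in the record above: pink noise carries equal energy per octave, so octave-band power sums should come out roughly flat. The pink-noise generator and band edges are illustrative; on the Cortex-M4 the FFT would come from CMSIS-DSP rather than numpy:

```python
import numpy as np

fs, n = 48000, 1 << 15
rng = np.random.default_rng(5)

# Make pink (1/f power) noise by shaping white noise in the frequency domain.
white = np.fft.rfft(rng.normal(0.0, 1.0, n))
f = np.fft.rfftfreq(n, 1.0 / fs)
white[1:] /= np.sqrt(f[1:])              # 1/f power => 1/sqrt(f) amplitude
pink = np.fft.irfft(white)

# Core analyser step: one frame -> window -> FFT -> power spectrum (the RTAA
# firmware would run the equivalent CMSIS-DSP FFT per frame on the MCU).
power = np.abs(np.fft.rfft(pink * np.hanning(n))) ** 2

# Pink noise has equal energy per octave, so octave-band sums should be
# roughly flat: the flat-spectrum response check described above.
edges = 62.5 * 2.0 ** np.arange(9)       # octave bands, 62.5 Hz to 16 kHz
for lo, hi in zip(edges[:-1], edges[1:]):
    band = power[(f >= lo) & (f < hi)].sum()
    print(f"{lo:7.1f}-{hi:7.1f} Hz: {10 * np.log10(band):6.1f} dB")
```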
Thermoluminescence of synthetic quartz annealed beyond its second phase inversion temperature
- Authors: Mthwesi, Zuko
- Date: 2017
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/46077 , vital:25577
- Description: The thermoluminescence of synthetic quartz annealed at 1000 ºC for 10 minutes has been studied. The aim was to study the mechanisms of thermoluminescence in annealed synthetic quartz and to discuss the results in terms of the physics of point defects. The sample was irradiated with a dose of 10 Gy of beta radiation and then heated at a linear heating rate of 1 ºC·s⁻¹ up to 500 ºC. The thermoluminescence (TL) glow curve consists of three glow peaks: peak I at 74 ºC (the main peak), which is much more intense than the other two, and peaks II and III at 144 ºC and 180 ºC, with peak II the more intense of the two. This study focused on the main peak (MP) at 74 ºC and on peak III at 180 ºC. Kinetic analysis was carried out to determine the trap depth E, the frequency factor s and the order of kinetics b of both peaks using the initial rise, peak shape, variable heating rate, glow curve deconvolution and isothermal TL methods. The kinetic parameters obtained were around 0.7 to 1.0 eV for the trap depth and in the interval 10⁸ to 10¹⁵ s⁻¹ for the frequency factor, for both peaks. The effect of heating rates from 0.5 to 5 ºC·s⁻¹ on the TL peak intensity and peak temperature was observed, and the effect of thermal quenching was evident at high heating rates. Since the TL glow curve has overlapping peaks, the Tm-Tstop method (with Tstop from 54 ºC up to 64 ºC) and the E-Tstop method were applied, and a first-order single peak was observed. Phototransferred thermoluminescence (PTTL) was investigated and is characterized by three peaks: PTTL peak I at 72 ºC, peak II at 134 ºC and peak III at 176 ºC. Analysis was carried out on peaks I and III for dose dependence from 20 to 200 Gy. Thermal fading of PTTL peaks I and III was observed after a storage time of 30 minutes. (A toy initial-rise fitting sketch follows this record.)
- Full Text:
- Date Issued: 2017
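A small sketch of the initial-rise method named in the record above: on the low-temperature rise of a glow peak, I(T) ∝ exp(−E/kT), so a straight-line fit of ln I against 1/T yields the trap depth E. The synthetic first-order (Randall-Wilkins) peak and its parameters below are illustrative, not the thesis' measurements:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.stats import linregress

k = 8.617e-5                                  # Boltzmann constant, eV/K
E_true, s, beta, n0 = 0.95, 1e12, 1.0, 1.0    # illustrative trap parameters

# Synthetic first-order (Randall-Wilkins) glow peak at heating rate beta:
# I(T) = n0*s*exp(-E/kT) * exp(-(s/beta) * integral of exp(-E/kT') dT').
T = np.linspace(300.0, 500.0, 2000)           # temperature, K
boltz = np.exp(-E_true / (k * T))
depletion = np.exp(-(s / beta) * cumulative_trapezoid(boltz, T, initial=0.0))
I = n0 * s * boltz * depletion

# Initial-rise method: on the low-temperature rise (here I < 10% of the peak)
# the depletion term is ~1, so I(T) ~ exp(-E/kT) and ln I vs 1/T is a line of
# slope -E/k, independent of the kinetic order.
rise = (I < 0.10 * I.max()) & (T < T[np.argmax(I)])
fit = linregress(1.0 / T[rise], np.log(I[rise]))
print(f"recovered trap depth E = {-fit.slope * k:.3f} eV (true {E_true} eV)")
```

The same slope-fit idea underlies the E-Tstop analysis: repeating the fit after successively higher Tstop preheats traces how E changes as overlapping peaks are stripped away.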