Optimal cycle dating of large financial time series
- Authors: Kapp, Konrad Phillip
- Date: 2017
- Subjects: Computer algorithms
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10948/17767 , vital:28452
- Description: The study of cycles in economic time series has been active for many decades, if not centuries; however, only in recent decades have more formal approaches for identifying cycles been developed. Litvine and Bismans (2015) proposed a new approach for dating cycles in financial time series for the purpose of optimising buy-sell strategies. In this approach, cycle dating is presented as an optimisation problem. They also introduced a method for optimising this problem, known as the hierarchical method using full evaluation 2 (HR-FE2). However, this method may be impractical for large data sets, as it may require unacceptably long computation time. In this study, new procedures that date cycles using the approach proposed by Litvine and Bismans (2015) were introduced, developed specifically to be feasible for large time series data sets. These procedures are the stochastic generation and adaptation (SGA), buy-sell adapted extrema importance identity sequence retrieval (BSA-EIISR) and buy-sell adapted bottom-up (BSA-BU) methods. An existing optimisation technique, particle swarm optimisation (PSO), was also employed. A statistical comparison was then made between these methods, including HR-FE2. This involved evaluating, on simulated data, the performance of the algorithms in terms of objective function value and computation time across different time series lengths, Hurst exponents, and numbers of buy-sell points. The S-Race methodology (T. Zhang, Georgiopoulos, and Anagnostopoulos 2013) was then applied to these results in order to determine the most efficient methods. Statistically, SGA, BSA-EIISR and BSA-BU were determined to be the most efficient methods. The number of buy-sell points was found to have the largest effect on the relative performance of these methods; in some cases, the Hurst exponent also had a small effect.
- Full Text:
- Date Issued: 2017
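The abstract frames cycle dating as an optimisation over buy-sell points but does not give the objective's form. As a hedged illustration only (not Litvine and Bismans' actual objective), the sketch below scores a candidate set of alternating buy-sell indices by cumulative log return and compares candidates with a naive random search; the thesis's SGA, BSA-EIISR, BSA-BU, PSO and HR-FE2 methods would take the place of that search.

```python
# Hypothetical sketch: cycle dating as an optimisation over buy-sell indices.
# This is NOT the objective of Litvine and Bismans (2015), only an illustration.
import numpy as np

def buy_sell_profit(prices, points):
    """Cumulative log return over alternating buy/sell index pairs."""
    points = sorted(points)
    profit = 0.0
    for buy, sell in zip(points[::2], points[1::2]):
        profit += np.log(prices[sell] / prices[buy])
    return profit

rng = np.random.default_rng(0)
prices = np.exp(np.cumsum(rng.normal(0, 0.01, 1000)))  # simulated price series

# Naive random-search baseline; the thesis's methods are far more
# sophisticated optimisers of this kind of objective.
best_points, best_val = None, -np.inf
for _ in range(2000):
    cand = rng.choice(len(prices), size=6, replace=False)  # 3 buy-sell pairs
    val = buy_sell_profit(prices, cand)
    if val > best_val:
        best_points, best_val = sorted(cand), val
print(best_points, round(best_val, 4))
```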
A hybridisation technique for game playing using the upper confidence for trees algorithm with artificial neural networks
- Authors: Burger, Clayton
- Date: 2014
- Subjects: Neural networks (Computer science) , Computer algorithms
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10948/3957 , vital:20495
- Description: In the domain of strategic game playing, the use of statistical techniques, such as the Upper Confidence for Trees (UCT) algorithm, has become the norm, as they offer many benefits over classical algorithms. These benefits include requiring no game-specific strategic knowledge and time-scalable performance. UCT does not incorporate any strategic information specific to the game considered, but instead uses repeated sampling to effectively brute-force search through the game tree or search space. The lack of game-specific knowledge in UCT is thus both a benefit and a strategic disadvantage. Pattern recognition techniques, specifically neural networks (NNs), were identified as a means of addressing this lack of game-specific knowledge. Through a novel hybridisation technique that combines UCT with trained NNs for pruning, the UCT-NN algorithm was derived. The NN component of UCT-NN was trained using a UCT self-play scheme to generate game-specific knowledge without the need to construct and manage game databases for training purposes. The UCT-NN algorithm is outlined for pruning in the game of Go-Moku as a candidate case study for this research. The UCT-NN algorithm contains three major parameters, which emerge from the UCT algorithm, the use of NNs, and the pruning schemes considered. Suitable methods for finding candidate values for these three parameters were outlined and applied to the game of Go-Moku on a 5 by 5 board. An empirical investigation of the playing performance of UCT-NN was conducted in comparison to UCT through three benchmarks: a common randomly moving opponent, a common UCTmax player given a large amount of playing time, and a pairwise tournament between UCT-NN and UCT. The results of the performance evaluation for 5 by 5 Go-Moku were promising, which prompted an evaluation on a larger 9 by 9 Go-Moku board. The results of both evaluations indicate that the time allocated to the UCT-NN algorithm directly affects its performance relative to UCT. UCT-NN generally performs better than UCT in games with very limited time constraints in all benchmarks considered, except when playing against a randomly moving player in 9 by 9 Go-Moku. In real-time and near-real-time Go-Moku games, UCT-NN provides statistically significant improvements over UCT. The findings of this research contribute to the application of game-specific knowledge to the UCT algorithm.
- Full Text:
- Date Issued: 2014
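For context, child selection in UCT is typically driven by the UCB1 rule. The sketch below is a minimal illustration of that rule with a pruning hook standing in for the trained network of UCT-NN; the node fields and the `keep_half` filter are hypothetical assumptions, not taken from the thesis.

```python
# Minimal sketch of UCB1 selection (the core of UCT) with a hypothetical
# pruning hook where UCT-NN's trained network would drop unpromising moves.
import math
import random

def ucb1(total_value, visits, parent_visits, c=math.sqrt(2)):
    """Standard UCB1 score balancing exploitation and exploration."""
    if visits == 0:
        return float("inf")  # always try unvisited children first
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children, parent_visits, prune_fn=None):
    """UCT-style selection; prune_fn stands in for the NN component of UCT-NN."""
    if prune_fn is not None:
        children = prune_fn(children)
    return max(children, key=lambda ch: ucb1(ch["value"], ch["visits"], parent_visits))

random.seed(0)
children = [{"move": m, "value": random.random() * 5, "visits": random.randint(1, 5)}
            for m in range(10)]
# Hypothetical pruning: keep the half of the moves the "network" scores highest.
keep_half = lambda cs: sorted(cs, key=lambda ch: ch["value"], reverse=True)[:len(cs) // 2]
print(select_child(children, parent_visits=25, prune_fn=keep_half))
```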
Classification of the difficulty in accelerating problems using GPUs
- Authors: Tristram, Uvedale Roy
- Date: 2014
- Subjects: Graphics processing units , Computer algorithms , Computer programming , Problem solving -- Data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4699 , http://hdl.handle.net/10962/d1012978
- Description: Scientists continually require additional processing power, as this enables them to compute larger problem sizes, use more complex models and algorithms, and solve problems previously thought computationally impractical. General-purpose computation on graphics processing units (GPGPU) can help in this regard, as there is great potential in using graphics processors to accelerate many scientific models and algorithms. However, some problems are considerably harder to accelerate than others, and it may be challenging for those new to GPGPU to ascertain the difficulty of accelerating a particular problem or to seek appropriate optimisation guidance. Drawing on what was learned in accelerating a hydrological uncertainty ensemble model, large numbers of k-difference string comparisons, and a radix sort, problem attributes were identified that can assist in evaluating the difficulty of accelerating a problem using GPUs. The identified attributes are inherent parallelism, branch divergence, problem size, required computational parallelism, memory access pattern regularity, data transfer overhead, and thread cooperation. Using these attributes as difficulty indicators, an initial problem difficulty classification framework was created that aids in evaluating GPU acceleration difficulty. The framework further provides directed guidance on suggested optimisations and required knowledge based on problem classification, which has been demonstrated for the aforementioned accelerated problems. It is anticipated that this framework, or a derivative thereof, will prove a useful resource for new or novice GPGPU developers when evaluating potential problems for GPU acceleration.
- Full Text:
- Date Issued: 2014
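As a rough illustration of how the seven attributes might serve as difficulty indicators, the sketch below rates a problem on each attribute and aggregates the ratings into a coarse label. The 1-to-5 scale, the equal weighting, and the example ratings for a radix sort are all assumptions; the thesis's actual framework may classify quite differently.

```python
# Hypothetical use of the seven identified attributes as difficulty indicators.
# Scale: 1 = favourable for GPU acceleration, 5 = unfavourable (assumed).
ATTRIBUTES = [
    "inherent_parallelism",
    "branch_divergence",
    "problem_size",
    "required_computational_parallelism",
    "memory_access_pattern_regularity",
    "data_transfer_overhead",
    "thread_cooperation",
]

def difficulty(ratings):
    """ratings: dict mapping each attribute to a 1..5 score; returns a coarse label."""
    score = sum(ratings[a] for a in ATTRIBUTES) / len(ATTRIBUTES)
    if score < 2.0:
        return "straightforward"
    if score < 3.5:
        return "moderate"
    return "challenging"

radix_sort = {  # illustrative ratings only, not the thesis's assessment
    "inherent_parallelism": 2, "branch_divergence": 2, "problem_size": 1,
    "required_computational_parallelism": 2, "memory_access_pattern_regularity": 3,
    "data_transfer_overhead": 3, "thread_cooperation": 4,
}
print(difficulty(radix_sort))  # -> "moderate"
```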
A framework proposal for algorithm animation systems
- Authors: Yeh, Chih Lung
- Date: 2006
- Subjects: Computer programming , Computer algorithms , Computer graphics
- Language: English
- Type: Thesis , Masters , MCom
- Identifier: vital:10488 , http://hdl.handle.net/10948/d1019680
- Description: The learning and analysis of algorithms and algorithm concepts are challenging to students due to the abstract and conceptual nature of algorithms. Algorithm animation is a technological support tool that encourages algorithm comprehension by visualising algorithms in execution, and it can potentially be used to support students while they learn algorithms. Despite widespread acknowledgement of the usefulness of algorithm animation in algorithm courses at tertiary institutions, no recognised framework exists upon which algorithm animation systems can be effectively modelled. This dissertation consequently focuses on the design of an extensible algorithm animation framework to support the generation of interactive algorithm animations. A review of the literature and of extant systems forms the basis for the framework design process; its result is a list of requirements for a pedagogically effective algorithm animation system. The proposed framework supports these pedagogic requirements by utilising an independent layer structure for the generation and display of algorithm animations. The effectiveness of the framework is evaluated through the implementation of a prototype algorithm animation system using sorting algorithms as a case study. This dissertation succeeds in proposing a framework to support the development of algorithm animations. The prototype developed will enable the integration of algorithm animations into the Nelson Mandela Metropolitan University’s teaching model, thereby permitting the university to conduct future research on the usefulness of algorithm animation in algorithm courses.
- Full Text:
- Date Issued: 2006
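The independent layer structure described above separates what an algorithm does from how it is drawn. A minimal sketch of that separation, under assumed event names, has a sorting algorithm (the thesis's case study) emit abstract animation events that any display layer can consume:

```python
# Sketch of layer separation for algorithm animation: the algorithm layer
# emits abstract events; an independent display layer renders them.
# Event names ("compare", "swap", "done") are assumptions for illustration.
def bubble_sort_events(data):
    """Bubble sort that yields animation events instead of drawing anything."""
    a = list(data)
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            yield ("compare", j, j + 1)
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                yield ("swap", j, j + 1)
    yield ("done", a)

def text_display_layer(events):
    # A trivial text "display" layer; a GUI layer could consume the same
    # event stream without any change to the algorithm layer.
    for event in events:
        print(event)

text_display_layer(bubble_sort_events([5, 2, 4, 1]))
```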
The mining and visualisation of application services data
- Authors: Knoetze, Ronald Morgan
- Date: 2005
- Subjects: Data mining -- South Africa , Computer algorithms , Network performance (Telecommunication) -- Research -- South Africa
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10480 , http://hdl.handle.net/10948/451
- Description: Many network monitoring tools do not provide sufficiently in-depth and useful reports on network usage, particularly in the domain of application services data. The optimisation of network performance is only possible if networks are monitored effectively, and techniques that identify patterns of network usage can assist in this. The main goal of this research was to propose a model for mining and visualising application services data in order to support effective network management. To demonstrate the effectiveness of the model, a prototype, called NetPatterns, was developed using data for the Integrated Tertiary Software (ITS) application service collected by a network monitoring tool on the NMMU South Campus network. Three data mining algorithms for application services data were identified for the proposed model: classification (decision tree), clustering (K-means) and association (correlation). Classifying application services data categorises combinations of network attributes to highlight areas of poor network performance. Clustering network attributes indicates sparse and dense regions within the application services data. Association reveals interesting relationships between different network attributes. Three visualisation techniques were selected to visualise the results of the data mining algorithms: the organisation chart, the bubble chart and scatterplots. Colour and a variety of other visual cues complement the selected visualisation techniques. The effectiveness and usefulness of NetPatterns was determined by means of user testing. The results of the evaluation clearly show that the participants were highly satisfied with the visualisation of network usage presented by NetPatterns. All participants successfully completed the prescribed tasks and indicated that NetPatterns is a useful tool for the analysis of network usage patterns.
- Full Text:
- Date Issued: 2005
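Of the three mining algorithms named above, clustering and association lend themselves to a compact illustration. The sketch below runs a small hand-rolled K-means and an attribute correlation matrix over synthetic records; the attribute names and data are assumptions, not the ITS data used by NetPatterns.

```python
# Illustrative sketch of two of the three mining steps on synthetic
# "application services" records (attribute names are assumed).
import numpy as np

rng = np.random.default_rng(1)
# columns: response_time_ms, bytes_transferred, concurrent_users
X = np.vstack([rng.normal([50, 2e4, 10], [5, 2e3, 2], (100, 3)),
               rng.normal([300, 8e4, 60], [30, 8e3, 8], (100, 3))])

def kmeans(X, k=2, iters=20):
    """Tiny K-means: alternate nearest-centre assignment and centre update."""
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres) ** 2).sum(-1), axis=1)
        centres = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels, centres

labels, centres = kmeans(X)
print("cluster centres (dense usage regions):\n", centres)
print("attribute correlations (associations):\n", np.corrcoef(X, rowvar=False))
```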
Buffering strategies and bandwidth renegotiation for MPEG video streams
- Authors: Schonken, Nico
- Date: 1999
- Subjects: Video compression , Computer algorithms , Digital video
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4651 , http://hdl.handle.net/10962/d1006620
- Description: This study confirms the existence of short-term and long-term variation in the bandwidth required for MPEG video streams. We show how a small amount of buffering and GOP grouping can significantly reduce the effect of the short-term variation. By introducing a number of bandwidth renegotiation techniques, which can be applied to MPEG video streams in general, we are able to reduce the effect of the long-term variation. These techniques include some that require a priori knowledge of frame sizes, as well as one that renegotiates dynamically. A costing algorithm is also introduced in order to compare the various proposals against each other.
- Full Text:
- Date Issued: 1999
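As a hedged sketch of the short-term smoothing idea, the code below averages frame sizes over each GOP and triggers a renegotiation whenever the smoothed demand drifts beyond a fixed tolerance from the reserved rate. The frame-size model, GOP length, and 15% threshold are assumptions; the thesis's costing algorithm is not reproduced here.

```python
# Assumed illustration: GOP averaging damps short-term variation in MPEG
# frame sizes; a drift threshold decides when to renegotiate bandwidth.
import random

random.seed(0)
frame_sizes = [random.gauss(15000, 4000) for _ in range(360)]  # bytes per frame

GOP = 12  # frames per group of pictures (assumed)
gop_rates = [sum(frame_sizes[i:i + GOP]) / GOP
             for i in range(0, len(frame_sizes), GOP)]

reserved, renegotiations = gop_rates[0], 0
for rate in gop_rates:
    if abs(rate - reserved) / reserved > 0.15:  # 15% drift triggers renegotiation
        reserved = rate
        renegotiations += 1
print(f"{renegotiations} renegotiations over {len(gop_rates)} GOPs")
```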