Clustering algorithms and their effect on edge preservation in image compression
- Authors: Ndebele, Nothando Elizabeth
- Date: 2009
- Subjects: Image compression , Vector analysis , Cluster analysis , Cluster analysis -- Data processing , Algorithms
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5576 , http://hdl.handle.net/10962/d1008210 , Image compression , Vector analysis , Cluster analysis , Cluster analysis -- Data processing , Algorithms
- Description: Image compression aims to reduce the amount of data that is stored or transmitted for images. One technique that may be used to this end is vector quantization, in which vectors are used to represent images. Vector quantization reduces the number of vectors required for an image by representing a cluster of similar vectors by one typical vector drawn from a set of vectors referred to as the codebook. For compression, only the index of the codebook vector closest to each image vector is stored or transmitted. For reconstruction, the image vectors are again replaced by the closest codebook vectors. Vector quantization is hence a lossy compression technique, and the quality of the reconstructed image depends strongly on the quality of the codebook. The design of the codebook is therefore an important part of the process. In this thesis we examine three clustering algorithms which can be used for codebook design in image compression: c-means (CM), fuzzy c-means (FCM) and learning vector quantization (LVQ). We give a description of these algorithms and their application to codebook design. Edges are an important part of the visual information contained in an image, so it is essential to use codebooks which allow an accurate representation of the edges; poor edge representation is one of the known shortcomings of vector quantization. We therefore carry out experiments using these algorithms to compare their edge-preserving qualities. We also investigate the combination of these algorithms with classified vector quantization (CVQ) and the replication method (RM), both of which have been suggested as methods for improving edge representation. We use a cross-validation approach to estimate the mean squared error in order to measure the performance of each of the algorithms and the edge-preserving methods. The results reflect that edges are less accurately represented than non-edge areas when using CM, FCM and LVQ. The advantage of using CVQ is that the time taken for codebook design is reduced, particularly for CM and FCM. RM is found to be effective where the codebook is trained on a set that has a larger proportion of edges than the test set. (A minimal code sketch of this encode/decode cycle follows this record.)
- Full Text:
- Date Issued: 2009
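The following is a minimal sketch of the encode/decode cycle described in the abstract, using c-means (k-means) clustering for codebook design on flattened image blocks. It is an illustration under assumed names and parameters (`design_codebook`, `encode`, `decode`, 16-dimensional block vectors, a 64-entry codebook), not the implementation used in the thesis:

```python
import numpy as np

def design_codebook(vectors, codebook_size, iterations=20, seed=0):
    """Codebook design with c-means: alternate nearest-codeword assignment
    and centroid update, as in standard k-means."""
    rng = np.random.default_rng(seed)
    # Initialise the codebook with randomly chosen training vectors.
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)]
    for _ in range(iterations):
        dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(codebook_size):
            members = vectors[labels == k]
            if len(members) > 0:
                codebook[k] = members.mean(axis=0)  # move codeword to cluster mean
    return codebook

def encode(vectors, codebook):
    """Compression: keep only the index of the closest codeword per vector."""
    dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

def decode(indices, codebook):
    """Reconstruction: replace each index by its codeword (lossy)."""
    return codebook[indices]

# Toy usage: 1000 random 16-dimensional "block" vectors, 64-entry codebook.
blocks = np.random.default_rng(1).random((1000, 16))
cb = design_codebook(blocks, 64)
recon = decode(encode(blocks, cb), cb)
mse = np.mean((blocks - recon) ** 2)  # the distortion that codebook quality governs
```

The mean squared error computed at the end corresponds to the distortion the thesis estimates by cross-validation; fuzzy c-means and LVQ differ chiefly in how the codebook update step is performed.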
Qualitative and quantitative properties of solutions of ordinary differential equations
- Authors: Ogundare, Babatunde Sunday
- Date: 2009
- Subjects: Differential equations , Lyapunov functions , Chebyshev polynomials , Algorithms
- Language: English
- Type: Thesis , Doctoral , PhD (Applied Mathematics)
- Identifier: vital:11588 , http://hdl.handle.net/10353/244 , Differential equations , Lyapunov functions , Chebyshev polynomials , Algorithms
- Description: This thesis is concerned with the qualitative and quantitative properties of solutions of certain classes of ordinary differential equations (ODEs), in particular linear boundary value problems of second-order ODEs and non-linear ODEs of order at most four. Lyapunov's second method, which relies on special functions called Lyapunov functions, is employed extensively in this thesis. We construct suitable complete Lyapunov functions to discuss the qualitative properties of solutions to the classes of non-linear ordinary differential equations considered. Although there is no unique way of constructing Lyapunov functions, we adopt Cartwright's method to construct the complete Lyapunov functions required in this thesis. Sufficient conditions are established to discuss qualitative properties such as boundedness, convergence, periodicity and stability for the classes of equations of our focus. Another aspect of this thesis concerns the quantitative properties of solutions. A new scheme based on interpolation and collocation is derived for solving initial value problems for ODEs; this scheme is derived from the general method of deriving spline functions. Also, by exploiting the trigonometric identity property of the Chebyshev polynomials, we develop a new scheme for approximating the solutions of two-point boundary value problems. These schemes are user-friendly, easy to implement as algorithms (computer programs) and to execute, and they compare favorably with known standard methods for the classes of problems they were derived for. (An illustrative statement of the Chebyshev identity and a textbook Lyapunov function follows this record.)
- Full Text:
- Date Issued: 2009
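Two ingredients named in the abstract can be stated compactly. The following is a hedged illustration only: the trigonometric identity of the Chebyshev polynomials that underlies such collocation schemes, and a textbook example of Lyapunov's second method for a damped linear oscillator (neither equation is quoted from the thesis itself):

```latex
% Trigonometric identity of the Chebyshev polynomials, which yields the
% standard collocation points x_j = cos(j*pi/N), j = 0, ..., N:
\[
  T_n(x) = \cos\bigl(n \arccos x\bigr), \qquad x \in [-1, 1].
\]
% Textbook instance of Lyapunov's second method: for the damped oscillator
% \ddot{x} + a\dot{x} + bx = 0 with a, b > 0, the energy-like function
\[
  V(x, \dot{x}) = \tfrac{1}{2}\dot{x}^2 + \tfrac{b}{2}x^2
\]
% has derivative \dot{V} = -a\dot{x}^2 \le 0 along solutions, which
% establishes stability of the zero solution.
```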
The impact of an in-depth code comprehension tool in an introductory programming module
- Authors: Leppan, Ronald George
- Date: 2008
- Subjects: Algorithms , Computer programming , Algorithms -- Study and teaching -- South Africa , Visual perception
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10471 , http://hdl.handle.net/10948/847 , Algorithms , Computer programming , Algorithms -- Study and teaching -- South Africa , Visual perception
- Description: Reading and understanding algorithms is not an easy task and is often neglected by educators in an introductory programming course. One proposed solution to this problem is the incorporation of a technological support tool to aid program comprehension in introductory programming. Many researchers advocate the identification of beacons and the use of chunking as support for code comprehension. Beacon recognition and chunking can also be used as support in the teaching model of introductory programming. Educators use a variety of different support tools to facilitate program comprehension in introductory programming. A review of a variety of support tools, however, failed to identify an existing tool that supports a teaching model incorporating chunking and the identification of beacons. The experimental support tool in this dissertation (BeReT) is primarily designed to encourage a student to correctly identify beacons within provided program extracts. BeReT can also be used to allow students to group together related statements and to learn about plans implemented in any semantically and syntactically correct algorithm uploaded by an instructor. While these requirements are evident in the design and implementation of BeReT, data is required to measure the effect BeReT has on the in-depth comprehension of introductory programming algorithms. A between-groups experiment is described which compares the program comprehension of students who used BeReT to study various introductory algorithms with that of students who relied solely on traditional lecturing materials. An eye tracker was incorporated into the empirical study to visualise the results of the controlled experiments. The results indicate that a technological support tool like BeReT can have a significantly positive effect on student comprehension of algorithms traditionally taught in introductory programming. This research provides educators with an alternative way to incorporate in-depth code comprehension skills in introductory programming.
- Full Text:
- Date Issued: 2008
Algorithms for the solution of the quadratic programming problem
- Authors: Vankova, Martina
- Date: 2004
- Subjects: Quadratic programming , Nonlinear programming , Algorithms
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:11086 , http://hdl.handle.net/10948/348 , Quadratic programming , Nonlinear programming , Algorithms
- Description: The purpose of this dissertation was to provide a review of the theory of optimization, in particular quadratic programming, and of the algorithms suitable for solving both convex and non-convex quadratic programming problems. Optimization problems arise in a wide variety of fields and many can be effectively modeled with linear equations. However, there are problems for which linear models are not sufficient, creating a need for non-linear models. This dissertation includes a literature study of the formal theory necessary for understanding optimization and an investigation of the algorithms available for solving a special class of the non-linear programming problem, namely the quadratic programming problem. It was not the intention of this dissertation to discuss all possible algorithms for solving the quadratic programming problem; therefore certain algorithms for convex and non-convex quadratic programming problems were selected for detailed discussion. Some of the algorithms were selected arbitrarily, because limited information was available comparing the efficiency of the various algorithms. Algorithms for solving general non-linear programming problems were also included and briefly discussed, as they can be used to solve quadratic programming problems. A number of algorithms were then selected for evaluation, depending on the frequency of their use in practice and the availability of software implementing them. The evaluation included a theoretical and quantitative comparison of the algorithms. The quantitative results were analyzed and discussed, and it was shown that they supported the theoretical comparison. It was also shown that it is difficult to conclude that one algorithm is better than another, as the efficiency of an algorithm depends greatly on the size of the problem, the complexity of the algorithm and many other implementation issues. Optimization problems arise continuously in a wide range of fields and thus create the need for effective methods of solving them. This dissertation provides the fundamental theory necessary for the understanding of optimization problems, with particular reference to quadratic programming problems and the algorithms that solve such problems. (The standard form of the quadratic programming problem is shown after this record.) Keywords: Quadratic Programming, Quadratic Programming Algorithms, Optimization, Non-linear Programming, Convex, Non-convex.
- Full Text:
- Date Issued: 2004
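For reference, the standard form of the quadratic programming problem that such a review is built around can be written as follows; this is the usual textbook formulation, not a formula quoted from the dissertation:

```latex
\[
  \min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\, x^{\top} Q x + c^{\top} x
  \quad \text{subject to} \quad A x \le b,
\]
% where Q is a symmetric n x n matrix, c is a cost vector, and Ax <= b
% collects the linear constraints. The problem is convex exactly when Q is
% positive semidefinite; for indefinite Q it is non-convex and may have many
% local minima, which is why the convex and non-convex cases call for
% different algorithms.
```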
Algorithmic skeletons as a method of parallel programming
- Authors: Watkins, Rees Collyer
- Date: 1993
- Subjects: Parallel programming (Computer science) , Algorithms
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4609 , http://hdl.handle.net/10962/d1004889 , Parallel programming (Computer science) , Algorithms
- Description: A new style of abstraction for program development, based on the concept of algorithmic skeletons, has been proposed in the literature. The programmer is offered a variety of independent algorithmic skeletons, each of which describes the structure of a particular style of algorithm. The appropriate skeleton is used by the system to mould the solution. Parallel programs are particularly appropriate for this technique because of their complexity. This thesis investigates algorithmic skeletons as a method of hiding the complexities of parallel programming from users and of guiding them towards efficient solutions. To explore this approach, the thesis describes the implementation and benchmarking of the divide and conquer and task queue paradigms as skeletons. All but one category of problem, as implemented in this thesis, scale well over eight processors. The rate of speedup tails off when there are significant communication requirements. The results show that, with some user knowledge, efficient parallel programs can be developed using this method. The evaluation explores methods for fine-tuning some skeleton programs to achieve increased efficiency. (A minimal sketch of a divide-and-conquer skeleton follows this record.)
- Full Text:
- Date Issued: 1993
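The core idea can be shown in a few lines: the control structure of divide and conquer is written once as a skeleton, and the programmer supplies only the problem-specific parts. The sequential Python sketch below is illustrative only (it is not the thesis's implementation, and a parallel version would evaluate the recursive calls concurrently rather than in a list comprehension):

```python
def divide_and_conquer(problem, is_trivial, solve, divide, combine):
    """Divide-and-conquer skeleton: fixed control structure, pluggable parts."""
    if is_trivial(problem):
        return solve(problem)               # base case: solve directly
    subresults = [divide_and_conquer(sub, is_trivial, solve, divide, combine)
                  for sub in divide(problem)]  # a parallel skeleton would fork here
    return combine(subresults)

# Instantiating the skeleton for summation: the algorithm is expressed
# entirely through the four plug-in functions.
total = divide_and_conquer(
    list(range(1000)),
    is_trivial=lambda p: len(p) <= 16,
    solve=sum,
    divide=lambda p: [p[:len(p) // 2], p[len(p) // 2:]],
    combine=sum,
)
print(total)  # 499500
```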