An investigation into XSets of primitive behaviours for emergent behaviour in stigmergic and message passing antlike agents
- Authors: Chibaya, Colin
- Date: 2014
- Subjects: Ants -- Behavior -- Computer programs , Insects -- Behavior -- Computer programs , Ant communities -- Behavior , Insect societies
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4698 , http://hdl.handle.net/10962/d1012965
- Description: Ants are fascinating creatures - not so much because they are intelligent on their own, but because as a group they display compelling emergent behaviour (the extent to which one observes features in a swarm which cannot be traced back to the actions of swarm members). What does each swarm member do which allows deliberate engineering of emergent behaviour? We investigate the development of a language for programming swarms of ant agents towards desired emergent behaviour. Five aspects of stigmergic ant agents (pheromone-sensitive computational devices in which a non-symbolic form of communication, indirectly mediated via the environment, arises) and message passing ant agents (computational devices which rely on implicit communication spaces in which direction vectors are shared one-on-one) are studied. First, we investigate the primitive behaviours which characterize ant agents' discrete actions at individual levels. Ten such primitive behaviours are identified as candidate building blocks of the ant agent language sought. We then study mechanisms by which primitive behaviours are put together into XSets (collections of primitive behaviours, parameter values, and meta-information which spells out how and when primitive behaviours are used). Various permutations of XSets are possible, which define the search space of best performer XSets for particular tasks. Genetic programming principles are proposed as a search strategy for best performer XSets that allow particular emergent behaviour to occur. XSets in the search space are evolved over various genetic generations and tested for their ability to allow path finding (as proof of concept). XSets are ranked according to the indices of merit (fitness measures which indicate how well XSets allow particular emergent behaviour to occur) they achieve. Best performer XSets for the path finding task are identified and reported. 
We validate the results yielded when best performer XSets are used with regard to normality, correlation, similarities in variation, and similarities between mean performances over time. Commonly, the simulation results pass most statistical tests. The last aspect we study is the application of best performer XSets to different problem tasks. Five experiments are administered in this regard. The first experiment assesses XSets' abilities to allow multiple-target location (ant agents' abilities to locate continuous regions of targets), and finds that best performer XSets are problem independent. However, both categories of XSets are sensitive to changes in agent density. We test the influences of individual primitive behaviours, and the effects of the sequences of primitive behaviours, on the indices of merit of XSets, and find that most primitive behaviours are indispensable, especially when specific sequences are prescribed. The effects of pheromone dissipation on the indices of merit of stigmergic XSets are also scrutinized. Precisely, dissipation is not causal; rather, it enhances convergence. Overall, this work successfully identifies the discrete primitive behaviours of stigmergic and message passing ant-like devices. It successfully puts these primitive behaviours together into XSets which characterize a language for programming ant-like devices towards desired emergent behaviour. This XSets approach is a new ant language representation with which a wider domain of emergent tasks can be resolved.
- Full Text:
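The genetic search over XSets described above can be sketched as follows. This is a minimal, illustrative sketch under assumed names: the primitive behaviours listed, the reduction of an XSet to a flat sequence, and the toy index of merit (agreement with a known-good sequence, rather than scoring emergent path finding in a simulation) are all assumptions for illustration, not the thesis's actual encoding.

```python
import random

# Illustrative primitive behaviours (hypothetical names; the thesis
# identifies ten specific ones).
PRIMITIVES = ["sense", "move_random", "follow_gradient", "drop_pheromone",
              "pick_up", "put_down", "turn", "rest"]

def random_xset(length, rng):
    """An XSet reduced to an ordered list of primitive behaviours."""
    return [rng.choice(PRIMITIVES) for _ in range(length)]

def index_of_merit(xset, target):
    """Toy fitness: fraction of positions agreeing with a known-good
    sequence. The thesis instead scores emergent path finding in simulation."""
    return sum(a == b for a, b in zip(xset, target)) / len(target)

def evolve(target, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [random_xset(len(target), rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda x: -index_of_merit(x, target))
        survivors = pop[:pop_size // 2]           # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(target))   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                # point mutation
                child[rng.randrange(len(target))] = rng.choice(PRIMITIVES)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda x: index_of_merit(x, target))

target = ["sense", "follow_gradient", "drop_pheromone", "move_random", "rest"]
best = evolve(target)
```

Elitism plus one-point crossover is one of the simplest genetic schemes that still converges reliably on a search space of behaviour permutations this small.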
A model for a context aware machine-based personal memory manager and its implementation using a visual programming environment
- Authors: Tsegaye, Melekam Asrat
- Date: 2007
- Subjects: Visual programming (Computer science) Memory management (Computer science) Memory -- Data processing
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4640 , http://hdl.handle.net/10962/d1006563
- Description: Memory is a part of cognition. It is essential for an individual to function normally in society. It encompasses an individual's lifetime experience, thus defining his identity. This thesis develops the concept of a machine-based personal memory manager which captures and manages an individual's day-to-day external memories. Rather than accumulating large amounts of data which has to be mined for useful memories, the machine-based memory manager automatically organizes memories as they are captured to enable their quick retrieval and use. The main functions of the machine-based memory manager envisioned in this thesis are the support and the augmentation of an individual's biological memory system. In the thesis, a model for a machine-based memory manager is developed. A visual programming environment, which can be used to build context aware applications as well as a proof-of-concept machine-based memory manager, is conceptualized and implemented. An experimental machine-based memory manager is implemented and evaluated. The model describes a machine-based memory manager which manages an individual's external memories by context. It addresses the management of external memories which accumulate over long periods of time by proposing a context aware file system which automatically organizes external memories by context. It describes how personal memory management can be facilitated by a machine using six entities (life streams, memory producers, memory consumers, a memory manager, memory fragments and context descriptors) and the processes in which these entities participate (memory capture, memory encoding, memory decoding and retrieval). The visual programming environment represents a development tool which contains facilities that support context aware application programming. For example, it provides facilities which enable the definition and use of virtual sensors. 
It enables rapid programming with a focus on component re-use and dynamic composition of applications through a visual interface. The experimental machine-based memory manager serves as an example implementation of the machine-based memory manager which is described by the model developed in this thesis. The hardware used in its implementation consists of widely available components such as a camera, microphone and sub-notebook computer which are assembled in the form of a wearable computer. The software is constructed using the visual programming environment developed in this thesis. It contains multiple sensor drivers, context interpreters, a context aware file system as well as memory retrieval and presentation interfaces. The evaluation of the machine-based memory manager shows that it is possible to create a machine which monitors the states of an individual and his environment, and manages his external memories, thus supporting and augmenting his biological memory.
- Full Text:
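The idea of organizing external memories by context at capture time, rather than mining an archive later, can be sketched as follows. This is a hypothetical, much-simplified rendering of two of the model's six entities (memory fragments and the memory manager); the class and method names are assumptions for illustration, not the thesis's implementation.

```python
from collections import defaultdict
from dataclasses import dataclass

# A memory fragment tagged with context descriptors, and a manager that
# files fragments by context as they are captured.
@dataclass
class MemoryFragment:
    data: str
    context: dict        # e.g. {"place": "office", "activity": "meeting"}

class MemoryManager:
    def __init__(self):
        self._fragments = []
        self._by_context = defaultdict(list)     # (key, value) -> fragments

    def capture(self, fragment):
        # Organize at capture time: index under every context descriptor.
        self._fragments.append(fragment)
        for kv in fragment.context.items():
            self._by_context[kv].append(fragment)

    def retrieve(self, **descriptors):
        """All fragments whose context matches every given descriptor."""
        return [f for f in self._fragments
                if all(f.context.get(k) == v for k, v in descriptors.items())]

mgr = MemoryManager()
mgr.capture(MemoryFragment("whiteboard photo",
                           {"place": "office", "activity": "meeting"}))
mgr.capture(MemoryFragment("voice note",
                           {"place": "car", "activity": "commute"}))
hits = mgr.retrieve(place="office")
```

Because indexing happens in `capture`, retrieval by context needs no after-the-fact mining, which is the property the abstract emphasises.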
Non-interactive modeling tools and support environment for procedural geometry generation
- Authors: Morkel, Chantelle
- Date: 2006
- Subjects: Computer graphics -- Mathematical models , Three-dimensional display systems , Computer simulation
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4644 , http://hdl.handle.net/10962/d1006589 , Computer graphics -- Mathematical models , Three-dimensional display systems , Computer simulation
- Description: This research examines procedural modeling in the field of computer graphics. Procedural modeling automates the generation of objects by representing models as procedures that provide a description of the process required to create the model. The problem we solve with this research is the creation of a procedural modeling environment that consists of a procedural modeling language and a set of non-interactive modeling tools. A goal of this research is to provide comparisons between 3D manual modeling and procedural modeling, which focus on the modeling strategies, tools and model representations used by each modeling paradigm. A procedural modeling language is presented that has the same facilities and features as existing procedural modeling languages. In addition, features such as caching and a pseudo-random number generator are included, demonstrating the advantages of a procedural modeling paradigm. The non-interactive tools created within the procedural modeling framework are selection, extrusion, subdivision, curve shaping and stitching. In order to demonstrate the usefulness of the procedural modeling framework, human and furniture models are created using this procedural modeling environment. Various techniques are presented to generate these objects, and may be used to create a variety of other models. A detailed discussion of each technique is provided. Six experiments are conducted to test the support of the procedural modeling benefits provided by this non-interactive modeling environment. The experiments test parameterisation, re-usability, base-shape independence, model complexity, the generation of reproducible random numbers and caching. We show that a number of distinct models can be generated from a single procedure through the use of parameterisation. Modeling procedures and sub-procedures are re-usable and can be applied to different models. Procedures can be base-shape independent. 
The level of complexity of a model can be increased by repeatedly applying geometry to the model. The pseudo-random number generator is capable of generating reproducible random numbers. The caching facility reduces the time required to generate a model that uses repetitive geometry.
- Full Text:
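Two of the benefits tested above, reproducible random numbers and caching, can be sketched in a few lines. The "procedure" here is a hypothetical toy (a jagged 2D profile), not a construct of the thesis's actual modeling language; it only illustrates how seeding and caching interact in a procedural setting.

```python
import random
from functools import lru_cache

def jagged_profile(n_points, seed):
    """Toy procedure: a jagged 2D profile of n points."""
    rng = random.Random(seed)                    # reproducible randomness
    return [(i, rng.uniform(-1.0, 1.0)) for i in range(n_points)]

@lru_cache(maxsize=None)
def cached_profile(n_points, seed):
    # Identical parameters -> geometry generated once, then reused.
    return tuple(jagged_profile(n_points, seed))

a = cached_profile(8, 42)
b = cached_profile(8, 42)      # cache hit: the very same geometry object
```

Seeding makes a procedure a pure function of its parameters, which is exactly what makes memoizing repeated sub-geometry safe.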
An investigation of hair modelling and rendering techniques with emphasis on African hairstyles
- Authors: Patrick, Deborah Michelle
- Date: 2005 , 2013-10-17
- Subjects: RenderMan , Hairstyles -- Africa , Hairstyles -- Computer simulation -- Africa , Hairdressing of Black people , Computer graphics
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4639 , http://hdl.handle.net/10962/d1006561 , RenderMan , Hairstyles -- Africa , Hairstyles -- Computer simulation -- Africa , Hairdressing of Black people , Computer graphics
- Description: Many computer graphics applications make use of virtual humans. Methods for modelling and rendering hair are needed so that hairstyles can be added to the virtual humans. Modelling and rendering hair is challenging due to the large number of hair strands and their geometric properties, the complex lighting effects that occur among the strands of hair, and the complexity and large variation of human hairstyles. While methods have been developed for generating hair, no methods exist for generating African hair, which differs from hair of other ethnic groups. This thesis presents methods for modelling and rendering African hair. Existing hair modelling and rendering techniques are investigated, and the knowledge gained from the investigation is used to develop or enhance hair modelling and rendering techniques to produce three different forms of hair commonly found in African hairstyles. The different forms of hair identified are natural curly hair, straightened hair, and braids or twists of hair. The hair modelling techniques developed are implemented as plug-ins for the graphics program LightWave 3D. The plug-ins developed not only model the three identified forms of hair, but also add the modelled hair to a model of a head, and can be used to create a variety of African hairstyles. The plug-ins significantly reduce the time spent on hair modelling. Tests performed show that increasing the number of polygons used to model hair increases the quality of the hair produced, but also increases the rendering time. However, there is usually an upper bound to the number of polygons needed to produce a reasonable hairstyle, making it feasible to add African hairstyles to virtual humans. The rendering aspects investigated include hair illumination, texturing, shadowing and antialiasing. An anisotropic illumination model is developed that considers the properties of African hair, including the colouring, opacity and narrow width of the hair strands. 
Texturing is used in several instances to create the effect of individual strands of hair. Results show that texturing is useful for representing many hair strands because the density of the hair in a texture map does not have an effect on the rendering time. The importance of including a shadowing technique and applying an anti-aliasing method when rendering hair is demonstrated. The rendering techniques are implemented using the RenderMan Interface and Shading Language. A number of complete African hairstyles are shown, demonstrating that the techniques can be used to model and render African hair successfully.
- Full Text:
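The anisotropic illumination idea mentioned above can be illustrated with the classic Kajiya-Kay strand shading model, in which lighting depends on the hair fibre's tangent rather than a surface normal. Note this is the standard published model, offered only as a sketch of the concept; the thesis develops its own illumination model tuned to African hair.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def kajiya_kay(tangent, light, view, shininess=32.0):
    """Anisotropic strand shading after Kajiya and Kay (1989): a fibre is
    brightest when the light is perpendicular to its tangent T.
    Returns (diffuse, specular) factors in [0, 1]."""
    t, l, v = normalize(tangent), normalize(light), normalize(view)
    tl, tv = dot(t, l), dot(t, v)
    sin_tl = math.sqrt(max(0.0, 1.0 - tl * tl))  # sin of angle(T, L)
    sin_tv = math.sqrt(max(0.0, 1.0 - tv * tv))  # sin of angle(T, V)
    diffuse = sin_tl
    specular = max(0.0, tl * tv + sin_tl * sin_tv) ** shininess
    return diffuse, specular
```

A model of this shape is typically evaluated per strand segment in a shader, with colour, opacity and strand width applied on top, which is where properties specific to African hair would enter.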
Design, evaluation and comparison of evolution and reinforcement learning models
- Authors: Mclean, Clinton Brett
- Date: 2002
- Subjects: Evolutionary computation Neural networks (Computer science) Reinforcement learning
- Language: English
- Type: Thesis , Masters , MEcon
- Identifier: vital:4625 , http://hdl.handle.net/10962/d1006493
- Description: This work presents the design, evaluation and comparison of evolution and reinforcement learning models, in isolation and combined in Darwinian and Lamarckian frameworks, with a particular emphasis being placed on their adaptive nature in response to environments that become increasingly unstable. Our ultimate objective is to determine whether hybrid models of evolution and learning can demonstrate adaptive qualities beyond those of such models when applied in isolation. This work demonstrates the limitations of evolution, reinforcement learning and Lamarckian models in dealing with increasingly unstable environments, while noting the effective adaptive nature of a Darwinian model to assimilate increasing levels of instability. This is shown to be a result of the Darwinian evolution model's ability to separate learning at two levels: the population's experience of the environment over the course of many generations, and the individual's experience of the environment over the course of its lifetime. Thus, knowledge relating to the general characteristics of the environment over many generations can be maintained in the population's genotypes, with phenotype (reinforcement) learning being utilized to adapt a particular agent to the particular characteristics of its environment. Lamarckian evolution, though, is shown to demonstrate adaptive characteristics that are highly effective in response to stable environments. Selection and reproduction combined with reinforcement learning create a model that has the ability to utilize useful knowledge produced by reinforcements, as opposed to random mutations, to accelerate the search process. As a result, the influence of individual learning on the population's evolution is shown to be more successful when applied in the more direct Lamarckian form. 
Based on our results demonstrating the success of Lamarckian strategies in stable environments and Darwinian strategies in unstable environments, hybrid Darwinian/Lamarckian models are created with a view towards combining the advantages of both forms of evolution to produce a superior adaptive capability. Our investigation demonstrates that such hybrid models can effectively combine the adaptive advantages of both Darwinian and Lamarckian evolution to provide a more effective capability of adapting to a range of conditions, from stable to unstable, appropriately adjusting the required degree of inheritance in response to the requirements of the environment.
- Full Text:
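The distinction between the two inheritance schemes can be reduced to a single line of code: whether the learned phenotype is written back into the genotype. The setup below is a deliberately tiny, assumed illustration (one-number genotypes, a toy learning rule), not the thesis's models.

```python
import random

def lifetime_learning(value, optimum, steps=10, rate=0.3):
    """Stand-in for reinforcement learning: nudge a copy of the genotype
    toward the environment's optimum over the agent's lifetime."""
    for _ in range(steps):
        value += rate * (optimum - value)        # simple error-reduction step
    return value

def evolve(optimum, lamarckian, pop_size=20, generations=15, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        learned = [lifetime_learning(g, optimum) for g in pop]
        # Both schemes select on the fitness of the *learned* phenotype.
        ranked = sorted(range(pop_size), key=lambda i: abs(learned[i] - optimum))
        parents = ranked[:pop_size // 2]
        next_pop = []
        for i in parents:
            # Lamarckian: the learned value is written back into the genotype.
            base = learned[i] if lamarckian else pop[i]
            next_pop += [base, base + rng.gauss(0.0, 0.5)]  # clone + mutant
        pop = next_pop
    return pop

darwin = evolve(optimum=3.0, lamarckian=False)
lamarck = evolve(optimum=3.0, lamarckian=True)
```

With `lamarckian=False` this is the Baldwin-effect configuration the abstract describes: learning shapes selection pressure, but only the unlearned genotype is inherited.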
Implementing non-photorealistic rendering enhancements with real-time performance
- Authors: Winnemöller, Holger
- Date: 2002 , 2013-05-09
- Subjects: Computer animation , Computer graphics , Real-time data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4580 , http://hdl.handle.net/10962/d1003135 , Computer animation , Computer graphics , Real-time data processing
- Description: We describe quality and performance enhancements, which work in real-time, to all well-known Non-photorealistic (NPR) rendering styles for use in an interactive context. These include Comic rendering, Sketch rendering, Hatching and Painterly rendering, but we also attempt and justify a widening of the established definition of what is considered NPR. In the individual chapters, we identify typical stylistic elements of the different NPR styles. We list problems that need to be solved in order to implement the various renderers. Standard solutions available in the literature are introduced and in all cases extended and optimised. In particular, we extend the lighting model of the comic renderer to include a specular component and introduce multiple inter-related but independent geometric approximations which greatly improve rendering performance. We implement two completely different solutions to random perturbation sketching, solve temporal coherence issues for coal sketching and find an unexpected use for 3D textures to implement hatch-shading. Textured brushes of painterly rendering are extended by properties such as stroke-direction and texture, motion, paint capacity, opacity and emission, making them more flexible and versatile. Brushes are also provided with a minimal amount of intelligence, so that they can help in maximising screen coverage of brushes. We furthermore devise a completely new NPR style, which we call super-realistic, and show how sample images can be tweened in real-time to produce an image-based six degree-of-freedom renderer performing at roughly 450 frames per second. Performance values for our other renderers all lie between 10 and over 400 frames per second on home PC hardware, justifying our real-time claim. A large number of sample screen-shots, illustrations and animations demonstrate the visual fidelity of our rendered images. 
In essence, we successfully achieve our attempted goals of increasing the creative, expressive and communicative potential of individual NPR styles, increasing performance of most of them, adding original and interesting visual qualities, and exploring new techniques or existing ones in novel ways. , KMBT_363 , Adobe Acrobat 9.54 Paper Capture Plug-in
- Full Text:
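The extended comic lighting model summarised in this abstract, a quantised diffuse term plus a hard-edged specular highlight, can be sketched as follows. The band count, shininess and threshold values here are illustrative assumptions, not the thesis's actual parameters.

```python
import math

def comic_shade(n_dot_l, n_dot_h, bands=3, shininess=32, threshold=0.5):
    """Toon-style lighting: banded diffuse plus a hard-edged specular
    highlight (the specular extension mentioned in the abstract).
    n_dot_l and n_dot_h are the usual Lambert and Blinn half-vector
    dot products; all parameter values are illustrative."""
    diffuse = max(n_dot_l, 0.0)
    banded = math.floor(diffuse * bands) / bands   # discrete shading bands
    spec = max(n_dot_h, 0.0) ** shininess          # Blinn-Phong highlight
    highlight = 1.0 if spec > threshold else 0.0   # hard comic-style edge
    return min(banded + highlight, 1.0)
```

Thresholding the specular term, rather than adding it smoothly, is what keeps the highlight a crisp, flat patch in keeping with the comic look.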
Designing and implementing a virtual reality interaction framework
- Authors: Rorke, Michael
- Date: 2000
- Subjects: Virtual reality , Computer simulation , Human-computer interaction , Computer graphics
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4623 , http://hdl.handle.net/10962/d1006491 , Virtual reality , Computer simulation , Human-computer interaction , Computer graphics
- Description: Virtual Reality offers the possibility for humans to interact in a more natural way with the computer and its applications. Currently, Virtual Reality is used mainly in the field of visualisation, where 3D graphics allow users to view complex sets of data or structures more easily. The field of interaction in Virtual Reality has been largely neglected, due mainly to problems with input devices and equipment costs. Recent research has aimed to overcome these interaction problems, thereby creating a usable interaction platform for Virtual Reality. This thesis presents a background to the field of interaction in Virtual Reality. It goes on to propose a generic framework for the implementation of common interaction techniques in a homogeneous application development environment. This framework adds a new layer to the standard Virtual Reality toolkit: the interaction abstraction layer, or interactor layer. This separation is in line with current HCI practices. The interactor layer is further divided into specific sections: input component, interaction component, system component, intermediaries, entities and widgets. Each of these performs a specific function, with clearly defined interfaces between the different components to promote easy object-oriented implementation of the framework. The validity of the framework is shown by comparison with accepted taxonomies in the area of Virtual Reality interaction, thus demonstrating that the framework covers all the relevant factors involved in the field. Furthermore, the thesis describes an implementation of this framework. The implementation was completed using the Rhodes University CoRgi Virtual Reality toolkit. Several postgraduate students in the Rhodes University Computer Science Department utilised the framework implementation to develop a set of case studies. 
These case studies demonstrate the practical use of the framework to create useful Virtual Reality applications, as well as demonstrating the generic nature of the framework and its extensibility to be able to handle new interaction techniques. Finally, the generic nature of the framework is further demonstrated by moving it from the standard CoRgi Virtual Reality toolkit to a distributed version of this toolkit. The distributed implementation of the framework utilises the Common Object Request Broker Architecture (CORBA) to implement the distribution of the objects in the system. Using this distributed implementation, we are able to ascertain that CORBA is useful in the field of distributed real-time Virtual Reality, even taking into account the extra overhead introduced by the additional abstraction layer. We conclude from this thesis that it is important to abstract the interaction layer from the other layers of a Virtual Reality toolkit in order to provide a consistent interface to developers. We have shown that our framework is implementable and useful in the field, making it easier for developers to include interaction in their Virtual Reality applications. Our framework is able to handle all the current aspects of interaction in Virtual Reality, as well as being general enough to implement future interaction techniques. The framework is also applicable to different Virtual Reality toolkits and development platforms, making it ideal for developing general, cross-platform interactive Virtual Reality applications.
- Full Text:
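The layered decomposition this abstract describes, with input and interaction components mediating between devices and entities, can be illustrated with a minimal sketch. All class and method names below are invented for illustration; they are not the CoRgi API.

```python
from abc import ABC, abstractmethod

class InputComponent(ABC):
    """Abstracts a physical input device (illustrative name, not CoRgi's)."""
    @abstractmethod
    def poll(self) -> dict: ...

class InteractionComponent(ABC):
    """Encapsulates one interaction technique, decoupled from any device."""
    @abstractmethod
    def apply(self, state: dict, entity: "Entity") -> None: ...

class Entity:
    """A manipulable object in the virtual world."""
    def __init__(self, name: str):
        self.name = name
        self.position = [0.0, 0.0, 0.0]

class Tracker(InputComponent):
    """A stand-in device that reports a fixed positional delta each poll."""
    def poll(self) -> dict:
        return {"delta": [0.1, 0.0, 0.0]}

class GrabTechnique(InteractionComponent):
    """Moves the grabbed entity by the device's reported delta."""
    def apply(self, state: dict, entity: Entity) -> None:
        entity.position = [p + d for p, d in zip(entity.position, state["delta"])]

class Interactor:
    """The abstraction layer: binds any input component to any technique."""
    def __init__(self, source: InputComponent, technique: InteractionComponent):
        self.source, self.technique = source, technique
    def update(self, entity: Entity) -> None:
        self.technique.apply(self.source.poll(), entity)
```

Swapping the tracker for a different device, or the grab technique for another technique, requires no change to the entity or to the interactor itself, which is the kind of consistent developer-facing interface the framework argues for.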
Development of the components of a low cost, distributed facial virtual conferencing system
- Authors: Panagou, Soterios
- Date: 2000 , 2011-11-10
- Subjects: Virtual computer systems , Virtual reality , Computer conferencing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4622 , http://hdl.handle.net/10962/d1006490 , Virtual computer systems , Virtual reality , Computer conferencing
- Description: This thesis investigates the development of a low-cost, component-based facial virtual conferencing system. The design is decomposed into an encoding phase and a decoding phase, which communicate with each other via a network connection. The encoding phase is composed of three components: model acquisition (which handles avatar generation), pose estimation and expression analysis. Audio is not considered part of the encoding and decoding process, and as such is not evaluated. The model acquisition component is implemented using a visual hull reconstruction algorithm that is able to reconstruct real-world objects using only sets of images of the object as input. The object to be reconstructed is assumed to lie in a bounding volume of voxels. The reconstruction process involves the following stages: space carving for basic shape extraction; isosurface extraction to remove voxels that are not part of the surface of the reconstruction; mesh connection to generate a closed, connected polyhedral mesh; texture generation, achieved by Gouraud shading the reconstruction with a vertex colour map; and mesh decimation to simplify the object. The original algorithm has complexity O(n), but suffers from an inability to reconstruct concave surfaces that do not form part of the visual hull of the object. A novel extension to this algorithm, based on Normalised Cross Correlation (NCC), is proposed to overcome this problem. A further extension to speed up traditional NCC evaluations is proposed, which reduces the NCC search space from a 2D search problem down to a single evaluation. Pose estimation and expression analysis are performed by tracking six fiducial points on the face of a subject. A tracking algorithm is developed that uses NCC to facilitate robust tracking that is invariant to changing lighting conditions, rotations and scaling. 
Pose estimation involves the recovery of the head position and orientation through tracking of the triangle formed by the subject's eyebrows and nose tip. A rule-based evaluation of points tracked around the subject's mouth forms the basis of the expression analysis. A user-assisted feedback loop and caching mechanism is used to overcome tracking errors due to fast motion or occlusions. The NCC tracker achieves a tracking performance of 10 fps when tracking the six fiducial points. The decoding phase is divided into three tasks: avatar movement, expression generation and expression management. Avatar movement is implemented using the base VR system. Expression generation is facilitated using a Vertex Interpolation Deformation method. A weighting system is proposed for expression management; its function is to transform gradually from one expression to the next. The vertex interpolation method allows real-time deformation of the avatar representation, achieving 16 fps when applied to a model consisting of 7500 vertices. An Expression Parameter Lookup Table (EPLT) facilitates an independent mapping between the two phases. It defines a list of generic expressions known to the system and associates an Expression ID with each one. For each generic expression, it relates the expression analysis rules for any subject to the expression generation parameters for any avatar model. The result is that facial expression replication between any subject and avatar combination can be performed by transferring only the Expression ID from the encoder application to the decoder application. The ideas developed in the thesis are demonstrated in an implementation using the CoRgi Virtual Reality system. It is shown that the virtual conferencing application based on this design requires a bandwidth of only 2 Kbps.
- Full Text:
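Normalised cross-correlation, the measure underlying both the concavity extension and the fiducial tracker described above, can be computed over two grey-level patches as below. This is the standard textbook formulation, not the thesis's optimised single-evaluation variant.

```python
import numpy as np

def ncc(patch: np.ndarray, template: np.ndarray) -> float:
    """Normalised cross-correlation of two equally sized grey-level patches.
    Subtracting each mean makes the score invariant to brightness offsets;
    dividing by the norms makes it invariant to contrast scaling, which is
    why NCC tracking is robust to changing lighting conditions."""
    a = patch.astype(float) - patch.mean()
    b = template.astype(float) - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0.0:  # a flat patch has no defined correlation
        return 0.0
    return float((a * b).sum() / denom)
```

A score of +1 indicates a perfect match; tracking a fiducial point then amounts to finding the offset near its previous position that maximises this score.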
Minimal motion capture with inverse kinematics for articulated human figure animation
- Authors: Casanueva, Luis
- Date: 2000
- Subjects: Virtual reality , Image processing -- Digital techniques
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4620 , http://hdl.handle.net/10962/d1006485 , Virtual reality , Image processing -- Digital techniques
- Description: Animating an articulated figure usually requires expensive hardware in terms of motion capture equipment, processing power and rendering power. This implies a high-cost system and thus rules out the use of personal computers to drive avatars in virtual environments. We propose a system to animate an articulated human upper body in real-time, using minimal motion capture trackers to provide position and orientation for the limbs. The system has to drive an avatar in a virtual environment on a low-end computer. The cost of the motion capture equipment must be relatively low (hence the use of minimal trackers). We discuss the various types of motion capture equipment and decide to use electromagnetic trackers, which are adequate for our requirements while being reasonably priced. We also discuss the use of inverse kinematics to solve for the articulated chains making up the topology of the articulated figure. Furthermore, we offer a method to describe articulated chains, as well as a process to specify the reach of chains of up to four links with various levels of redundancy for use in articulated figures. We then provide various types of constraints to reduce the redundancy of under-constrained articulated chains, specifically for chains found in an articulated human upper body. These methods include a way to resolve the redundancy in the orientation of the neck link, as well as three different methods to resolve the redundancy of the articulated human arm. The first method involves eliminating a degree of freedom from the chain, thus reducing its redundancy. The second method calculates the elevation angle of the elbow position from the elevation angle of the hand. The third method determines the actual position of the elbow from an average of previous positions of the elbow, selected according to the position and orientation of the hand. The previous positions of the elbow are captured during the calibration process. 
The redundancy of the neck is easily resolved due to the small amount of redundancy in the chain. When solving the arm, the first method, which should give a perfect result in theory, gives a poor result in practice due to the limitations of both the motion capture equipment and the design. The second method provides an adequate result for the position of the redundant elbow in most cases, although it fails in some. It nevertheless benefits from a simple approach and requires very little calibration. The third method is the most accurate of the three for the position of the redundant elbow, although it also fails in some cases, and it requires a long calibration session for each user. The last two methods allow the calibration data to be reused in later sessions, considerably reducing the calibration required. In combination with a virtual reality system, these processes allow the real-time animation of an articulated figure to drive avatars in virtual environments, or low-quality animation on a low-end computer.
- Full Text:
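Eliminating a degree of freedom from the arm, as in the first method above, reduces the solve to the planar two-link case, which has a closed-form inverse kinematics solution. The sketch below is the standard textbook solve under that assumption, not the thesis's implementation.

```python
import math

def two_link_ik(x: float, y: float, l1: float, l2: float):
    """Closed-form planar IK for a two-link chain (e.g. upper arm + forearm
    once the arm's redundant degree of freedom has been removed).
    Returns (shoulder, elbow) joint angles in radians placing the end
    effector at (x, y); raises ValueError when the target is out of reach."""
    d2 = x * x + y * y
    d = math.sqrt(d2)
    if d > l1 + l2 or d < abs(l1 - l2):
        raise ValueError("target out of reach")
    # law of cosines gives the elbow angle; clamp against rounding error
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))  # elbow-down solution
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```

The redundancy of the full 3D arm shows up here as the free choice of the plane in which this solve is performed; the three methods in the abstract are different ways of fixing that choice.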