An investigation into the use of intuitive control interfaces and distributed processing for enhanced three dimensional sound localization
- Authors: Hedges, Mitchell Lawrence
- Date: 2016
- Subjects: Human-computer interaction , Acoustic localization , Sound -- Equipment and supplies , Acoustical engineering , Surround-sound systems , Wireless sensor nodes
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4724 , http://hdl.handle.net/10962/d1020615
- Description: This thesis investigates the feasibility of using gestures as a means of control for localizing three-dimensional (3D) sound sources in a distributed immersive audio system. A prototype system was implemented and tested which uses state-of-the-art technology to achieve the stated goals. A Windows Kinect is used for gesture recognition, translating human gestures into control messages for the prototype system, which in turn performs actions based on the recognized gestures. The term distributed in the context of this system refers to the audio processing capacity. The prototype system partitions and allocates the processing load among a number of endpoints. The reallocated processing load consists of the mixing of audio samples according to a specification. The endpoints used in this research are XMOS AVB endpoints. The firmware on these endpoints was modified to include the audio mixing capability, which was controlled by a state-of-the-art audio distribution networking standard, Ethernet AVB. The hardware used for the implementation of the prototype system is relatively cost efficient in comparison to professional audio hardware, and is also commercially available to end users. The successful implementation and the results from user testing of the prototype system demonstrate that it is a feasible option for recording the localization of a sound source. The ability to partition the processing provides a modular approach to building immersive sound systems. This removes the constraint of a centralized mixing console with a predetermined speaker configuration.
- Full Text:
- Date Issued: 2016
User interface design guidelines for digital television virtual remote controls
- Authors: Wentzel, Alicia Veronica
- Date: 2016
- Subjects: Remote control , User interfaces (Computer systems) , Television broadcasting , Human-computer interaction
- Language: English
- Type: Thesis , Masters , MCom
- Identifier: vital:1158 , http://hdl.handle.net/10962/d1020617
- Description: The remote control is a pivotal component in households worldwide. It helps users enjoy leisurely television (TV) viewing. The remote control has various user interfaces that people interact with. For example, the physical user interface includes the shape of the remote and the physical buttons; the logical user interface refers to how the information is laid out; and the graphical user interface refers to the colours and aesthetic features of the remote control. All of the user interfaces, together with the context of use, cultural factors, social factors, and prior experiences of the user, influence the ways people interact with their remote control and ultimately have an effect on their user experiences. Advances in the broadcasting sector and transformations of the TV physical remote control have compounded the simple remote control into a multifaceted, indispensable device, overcrowded with buttons. The usability, and ultimately the user experience, of physical remote controls (PRCs) has been affected by the overloaded functionality and small button sizes. The usability issues with current PRCs, the evolution of mobile phones into touchscreen smartphones, and the trend of global companies moving towards virtual remote controls (VRCs) have prompted this research to discover what user interface design features will contribute towards an enhanced user experience for digital TV VRCs. This research used the design science research process model (DSRP), which comprised six steps, to investigate this topic area further. A review of the domain literature pertaining to mobile user experiences (MUX) and all the encompassing factors, mobile human-computer interaction (MHCI), and the physical, logical, graphical and natural user interfaces was completed, as well as a review of the literature regarding the usability issues of PRCs and VRCs.
A contextual task analysis (CTA) of a single South African digital TV PRC was used to identify how users utilise PRCs to perform tasks, and the usability issues they encountered during the tasks. Brainstorming focus groups were used to understand how to represent certain user interface elements and to source ideas from users about what potential functionality digital TV VRCs should contain. These findings, together with the results gathered in the previous chapters, were amalgamated into a set of user interface design guidelines for digital TV VRCs. The proposed user interface guidelines were used to instantiate a digital TV VRC prototype that underwent usability testing in order to validate the proposed user interface design guidelines. The results of the usability testing revealed that the user interface design guidelines for digital TV VRCs were successful, with the addition of one guideline that was discovered during the usability testing.
- Full Text:
- Date Issued: 2016
A natural user interface architecture using gestures to facilitate the detection of fundamental movement skills
- Authors: Amanzi, Richard
- Date: 2015
- Subjects: Human activity recognition , Human-computer interaction
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10948/6204 , vital:21055
- Description: Fundamental movement skills (FMSs) are considered to be one of the essential phases of motor skill development. The proper development of FMSs allows children to participate in more advanced forms of movement and sport. To be able to perform an FMS correctly, children need to learn the right way of performing it. By making use of technology, a system can be developed that can help facilitate the learning of FMSs. The objective of the research was to propose an effective natural user interface (NUI) architecture for detecting FMSs using the Kinect. In order to achieve the stated objective, an investigation into FMSs and the challenges faced when teaching them was presented. An investigation into NUIs was also presented, including the merits of the Kinect as the most appropriate device to be used to facilitate the detection of an FMS. An NUI architecture was proposed that uses the Kinect to facilitate the detection of an FMS. A framework was implemented from the design of the architecture. The successful implementation of the framework provides evidence that the design of the proposed architecture is feasible. An instance of the framework incorporating the jump FMS was used as a case study in the development of a prototype that detects the correct and incorrect performance of a jump. The evaluation of the prototype demonstrated that the developed prototype was effective in detecting the correct and incorrect performance of the jump FMS, and that the implemented framework was robust for the incorporation of an FMS. The successful implementation of the prototype shows that an effective NUI architecture using the Kinect can be used to facilitate the detection of FMSs. The proposed architecture provides a structured way of developing a system using the Kinect to facilitate the detection of FMSs. This allows developers to add future FMSs to the system.
This dissertation therefore makes the following contributions: an experimental design to evaluate the effectiveness of a prototype that detects FMSs; a robust framework that incorporates FMSs; and an effective NUI architecture to facilitate the detection of FMSs using the Kinect.
- Full Text:
- Date Issued: 2015
Culturally-relevant augmented user interfaces for illiterate and semi-literate users
- Authors: Gavaza, Takayedzwa
- Date: 2012 , 2012-06-14
- Subjects: User interfaces (Computer systems) -- Research , Computer software -- Research , Graphical user interfaces (Computer systems) -- Research , Human-computer interaction , Computers and literacy
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4665 , http://hdl.handle.net/10962/d1006679
- Description: This thesis discusses guidelines for developers of Augmented User Interfaces that can be used by illiterate and semi-literate users. To discover how illiterate and semi-literate users intuitively understand interaction with a computer, a series of Wizard of Oz experiments was conducted. In the first Wizard of Oz study, users were presented with a standard desktop computer, fitted with a number of input devices, to determine how they assume interaction should occur. This study found that the users preferred the use of speech and gestures, which mirrored findings from other researchers. The study also found that users struggled to understand the tab metaphor which is used frequently in applications. From these findings, a localised culturally-relevant tab interface was developed to determine the feasibility of localised Graphical User Interface components. A second study was undertaken to compare the localised tab interface with the traditional tabbed interface. This study collected both quantitative and qualitative data from the participants. It found that users could interact with a localised tabbed interface faster and more accurately than with the traditional counterpart. More importantly, users stated that they intuitively understood the localised interface component, whereas they did not understand the traditional tab metaphor. These user studies have shown that the use of self-explanatory animations, video feedback, localised tabbed interface metaphors and voice output has a positive impact on enabling illiterate and semi-literate users to access information.
- Full Text:
- Date Issued: 2012
A model for cultivating resistance to social engineering attacks
- Authors: Jansson, Kenny
- Date: 2011
- Subjects: Computer security , Data protection , Human-computer interaction
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9744 , http://hdl.handle.net/10948/1588
- Description: The human being is commonly considered to be the weakest link in information security. As information is one of the most critical assets in an organization today, it is essential that the human element is considered in deployments of information security countermeasures. However, the human element is often neglected in this regard. Consequently, many criminals now target the user directly to obtain sensitive information, instead of spending days or even months trying to hack through systems. Some criminals target users by utilizing various social engineering techniques to deceive the user into disclosing information. For this reason, users of the Internet and ICT-related technologies are nowadays very vulnerable to various social engineering attacks. As a contribution to increasing users' social engineering awareness, a model, called SERUM, was devised. SERUM aims to cultivate social engineering resistance within a community by exposing the users of the community to 'fake' social engineering attacks. Users who react incorrectly to these attacks are instantly notified and requested to participate in an online social engineering awareness program; thus, users are educated on demand. The model was implemented as a software system and was utilized to conduct a phishing exercise on all the students of the Nelson Mandela Metropolitan University. The aim of the phishing exercise was to determine whether SERUM is effective in cultivating social engineering resistant behaviour within a community. This phishing exercise proved to be successful and yielded positive results, indicating that a model like SERUM can indeed be used to educate users regarding phishing attacks.
- Full Text:
- Date Issued: 2011
Using multi-touch interaction techniques to support Collaborative Information Retrieval
- Authors: Sams, Ivan
- Date: 2011
- Subjects: Human-computer interaction , Teams in the workplace -- Data processing , Groupware (Computer software) , Interactive computer systems
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10491 , http://hdl.handle.net/10948/d1020156
- Description: Collaborative Information Retrieval (CIR) is a branch of Computer Supported Cooperative Work (CSCW). CIR is the process by which people search for and retrieve information, working together and using documents as data sources. Currently, computer support for CIR is limited to single-user systems; collaboration takes place either with users working at different times or in different locations. Multi-touch interaction has recently seen a rise in prominence owing to a reduction in the cost of the technology and increased frequency of use. Multi-touch surface computing allows multiple users to interact at once around a shared display. The aim of this research was to investigate how multi-touch interaction techniques could be used to support CIR effectively in a co-located environment. An application architecture for CIR systems that incorporates multi-touch interaction techniques was proposed. A prototype, called Co-IMBRA, was developed based on this architecture that used multi-touch interaction techniques to support CIR. This prototype allows multiple users to retrieve information, using the Internet as a shared information space. Documents are represented as visual objects that can be manipulated on the multi-touch surface, as well as rated, annotated and added to folders. A user study was undertaken to evaluate Co-IMBRA and determine whether the multi-touch interaction techniques effectively supported CIR. Fifteen teams of two users each participated in the user study. High task completion rates and low task times showed that the system was effective and efficient. High levels of user satisfaction were reported in the post-test questionnaires. Participants rated the system as highly useful, and several commented that it promoted collaboration and that they enjoyed the test. The successful implementation of Co-IMBRA provides evidence that multi-touch interaction techniques can effectively support CIR.
The results of the user evaluation also enabled recommendations for future research to be made.
- Full Text:
- Date Issued: 2011
An intelligent user interface model for contact centre operations
- Authors: Singh, Akash
- Date: 2007
- Subjects: User interfaces (Computer systems) , Human-computer interaction , Mobile computing , Customer services -- Management , Call centers -- Customer services
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10475 , http://hdl.handle.net/10948/d1011399
- Description: Contact Centres (CCs) are at the forefront of interaction between an organisation and its customers. Currently, 17 percent of all inbound calls are not resolved on the first call by the first agent attending to that call. This is due to the inability of the contact centre agents (CCAs) to diagnose customer queries and find adequate solutions in an effective and efficient manner. The aim of this research is to develop an intelligent user interface (IUI) model to support and improve CC operations. A literature review of existing IUI architectures, model-based design and existing CC software, together with a field study of CCs, has resulted in the design of an IUI model for CCs. The proposed IUI model is described in terms of its architecture, component-level design and interface design. An IUI prototype has been developed as a proof of concept of the proposed IUI model. The IUI prototype was evaluated in order to determine to what extent it supports problem identification and query resolution. User testing, incorporating the use of eye tracking and a post-test questionnaire, was used in order to determine the usability and usefulness of the prototype. The results of this evaluation show that the users were highly satisfied with the task support and query resolution assistance provided by the IUI prototype. This research resulted in the design of an IUI model for the domain of CCs. This model can be used to assist the development of CC applications incorporating IUIs. Use of the proposed IUI model is expected to support and enhance the effectiveness and efficiency of CC operations. Further research is needed to conduct a longitudinal study to determine the impact of IUIs in the CC domain.
- Full Text:
- Date Issued: 2007
Designing and implementing a virtual reality interaction framework
- Authors: Rorke, Michael
- Date: 2000
- Subjects: Virtual reality , Computer simulation , Human-computer interaction , Computer graphics
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4623 , http://hdl.handle.net/10962/d1006491
- Description: Virtual Reality offers the possibility for humans to interact in a more natural way with the computer and its applications. Currently, Virtual Reality is used mainly in the field of visualisation, where 3D graphics allow users to view complex sets of data or structures more easily. The field of interaction in Virtual Reality has been largely neglected, due mainly to problems with input devices and equipment costs. Recent research has aimed to overcome these interaction problems, thereby creating a usable interaction platform for Virtual Reality. This thesis presents a background to the field of interaction in Virtual Reality. It goes on to propose a generic framework for the implementation of common interaction techniques in a homogeneous application development environment. This framework adds a new layer to the standard Virtual Reality toolkit – the interaction abstraction layer, or interactor layer. This separation is in line with current HCI practices. The interactor layer is further divided into specific sections – input component, interaction component, system component, intermediaries, entities and widgets. Each of these performs a specific function, with clearly defined interfaces between the different components to promote easy object-oriented implementation of the framework. The validity of the framework is shown by comparison with accepted taxonomies in the area of Virtual Reality interaction, thus demonstrating that the framework covers all the relevant factors involved in the field. Furthermore, the thesis describes an implementation of this framework. The implementation was completed using the Rhodes University CoRgi Virtual Reality toolkit. Several postgraduate students in the Rhodes University Computer Science Department utilised the framework implementation to develop a set of case studies.
These case studies demonstrate the practical use of the framework to create useful Virtual Reality applications, as well as demonstrating the generic nature of the framework and its extensibility to handle new interaction techniques. Finally, the generic nature of the framework is further demonstrated by moving it from the standard CoRgi Virtual Reality toolkit to a distributed version of this toolkit. The distributed implementation of the framework utilises the Common Object Request Broker Architecture (CORBA) to implement the distribution of the objects in the system. Using this distributed implementation, we are able to ascertain that CORBA is useful in the field of distributed real-time Virtual Reality, even taking into account the extra overhead introduced by the additional abstraction layer. We conclude from this thesis that it is important to abstract the interaction layer from the other layers of a Virtual Reality toolkit in order to provide a consistent interface to developers. We have shown that our framework is implementable and useful in the field, making it easier for developers to include interaction in their Virtual Reality applications. Our framework is able to handle all the current aspects of interaction in Virtual Reality, as well as being general enough to implement future interaction techniques. The framework is also applicable to different Virtual Reality toolkits and development platforms, making it ideal for developing general, cross-platform interactive Virtual Reality applications.
- Full Text:
- Date Issued: 2000
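The interactor-layer separation described in the abstract above – abstracting input devices from interaction techniques behind clearly defined interfaces – can be illustrated with a minimal sketch. The class and function names below are illustrative assumptions only, not the actual API of the CoRgi toolkit or the thesis framework.

```python
# Minimal sketch of an interactor abstraction layer: an input component turns
# device state into generic events, and an interaction component maps those
# events onto a pluggable interaction technique. Names are hypothetical.

class InputComponent:
    """Abstracts a physical device (e.g. a tracker) into generic events."""

    def poll(self):
        # A real implementation would read hardware; here we emit a fixed event.
        return {"type": "move", "position": (0.0, 1.0, 2.0)}


class InteractionComponent:
    """Maps generic input events onto a chosen interaction technique."""

    def __init__(self, technique):
        self.technique = technique

    def handle(self, event):
        return self.technique(event)


def grab_technique(event):
    # A hypothetical 'grab' technique: acts only on move events.
    if event["type"] == "move":
        return f"grab object at {event['position']}"
    return "ignore"


# Wiring: because the layers only meet at the event interface, devices and
# techniques can be swapped independently of each other.
device = InputComponent()
interactor = InteractionComponent(grab_technique)
print(interactor.handle(device.poll()))
```

The point of the sketch is the decoupling: replacing `grab_technique` with another function, or `InputComponent` with a different device wrapper, requires no change to the other side of the interface, which is the extensibility property the abstract claims for the framework.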