A toolkit for successful workplace learning analytics at software vendors
- Authors: Whale, Alyssa Morgan
- Date: 2024-04
- Subjects: Computer-assisted instruction , Intelligent tutoring systems , Information visualisation
- Language: English
- Type: Doctoral theses , text
- Identifier: http://hdl.handle.net/10948/64448 , vital:73713
- Description: Software vendors commonly provide digital software training to their stakeholders and are therefore faced with an influx of data collected from these training/learning initiatives. Every second of every day, data is being collected based on online learning activities and learner behaviour. Online platforms are thus struggling to cope with the volumes of data that are collected, and companies are finding it difficult to analyse and manage this data in a way that can be beneficial to all stakeholders. The majority of studies investigating learning analytics have been conducted in educational settings. This research aimed to develop and evaluate a toolkit that can be used for successful Workplace Learning Analytics (WLA) at software vendors. The study followed the Design Science Research (DSR) methodology, which was applied in iterative cycles where various components of the toolkit were designed, developed, and evaluated by participants. The real-world context was a software vendor, ERPCo, which had been struggling to implement WLA successfully with their current Learning Experience Platform (LXP), as well as with their previous platform. Qualitative data was collected using document analysis of key company documents and Focus Group Discussions (FGDs) with employees from ERPCo to explore and confirm different topics and themes. These methods were used to iteratively analyse the As-Is and To-Be situations at ERPCo and to develop and evaluate the proposed WLA Toolkit. The data collected from the FGDs was analysed using the Qualitative Content Analysis (QCA) method. To develop the first component of the toolkit, the Organisation component, the organisational success factors that influence the success of WLA were identified using a Systematic Literature Review (SLR). These factors were discussed and validated in two exploratory FGDs held with employees from ERPCo, one with operational stakeholders and the other with strategic decision makers. The DeLone and McLean Information Systems (D&M IS) Success Model was used to undergird the research as a theory guiding the understanding of the factors influencing the success of WLA. Many of the factors identified in theory were found to be prevalent in the real-world context, with some additional ones being identified in the FGDs. The most frequent challenges highlighted by participants related to visibility; readily available high-quality data; flexibility of reporting; complexity of reporting; and effective decision making and insights obtained. Many of these concerned usability issues with both the system and the information, which correspond to System Quality and Information Quality in the D&M IS Success Model. The second and third components of the toolkit are the Technology and Applications component and the Information component, respectively. Architecture and data management challenges and requirements for these components were therefore analysed. An appropriate WLA architecture was selected and then further customised for use at ERPCo. A third FGD was conducted with employees in more technical roles at ERPCo to provide input on the architecture, technologies, and data management challenges and requirements. In the Technology and Applications component of the WLA Toolkit, factors influencing WLA success related to applications and visualisations were considered.
An instantiation of this component was demonstrated in the fourth FGD, where learning data from the LXP at ERPCo was collected and a dashboard incorporating recommended visualisation techniques was developed as a proof of concept (an illustrative dashboard sketch follows this record). In this FGD, participants gave feedback on both the dashboard and the toolkit. The artefact of this research is the WLA Toolkit, which can be used by practitioners to guide the planning and implementation of WLA in large organisations that use LXPs and WLA platforms. Researchers can use the WLA Toolkit to gain a deeper understanding of the components and factors required for successful WLA at software vendors. The research also contributes to the D&M IS Success Model theory in the information economy. In support of this PhD dissertation, the following paper has been published: Whale, A. & Scholtz, B. 2022. A Theoretical Classification of Organizational Success Factors for Workplace Learning Analytics. NEXTCOMP 2022. Mauritius. A draft manuscript for a journal paper was in progress at the time of submitting this thesis. , Thesis (PhD) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics , 2024
- Full Text:
- Date Issued: 2024-04
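The record above describes a proof-of-concept dashboard built from LXP learning data using recommended visualisation techniques. As a rough, hypothetical illustration of that idea only (the thesis does not publish its implementation), here is a minimal Python sketch; the file name lxp_export.csv and the column names learner_id, course, completion_pct, and assessment_score are invented placeholders, not ERPCo's actual LXP schema.

```python
# Minimal learning-analytics dashboard sketch built on hypothetical data.
# Column names (course, completion_pct, assessment_score) are assumptions
# for illustration, not ERPCo's actual LXP schema.
import pandas as pd
import matplotlib.pyplot as plt

def build_dashboard(csv_path: str) -> None:
    df = pd.read_csv(csv_path)

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

    # Panel 1: average completion per course, a common WLA summary view.
    (df.groupby("course")["completion_pct"]
       .mean()
       .sort_values()
       .plot.barh(ax=ax1, title="Mean completion per course (%)"))

    # Panel 2: distribution of assessment scores across all learners.
    df["assessment_score"].plot.hist(ax=ax2, bins=20,
                                     title="Assessment score distribution")

    fig.tight_layout()
    fig.savefig("wla_dashboard.png")

if __name__ == "__main__":
    build_dashboard("lxp_export.csv")
```

In practice such a dashboard would more likely be driven by the LXP's reporting API or a data warehouse rather than a flat CSV export.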
A mathematics rendering model to support chat-based tutoring
- Authors: Haskins, Bertram Peter
- Date: 2014
- Subjects: Intelligent tutoring systems , Educational innovations , Tutors and tutoring
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:9822 , http://hdl.handle.net/10948/d1020567
- Description: Dr Math is a math tutoring service implemented on the chat application Mxit. The service allows school learners to use their mobile phones to discuss mathematics-related topics with human tutors. Using the broad user-base provided by Mxit, the Dr Math service has grown to consist of tens of thousands of registered school learners. The tutors on the service are all volunteers and the learners far outnumber the available tutors at any given time. School learners on the service use a shorthand language form called microtext to phrase their queries. Microtext is an informal form of language consisting of a variety of misspellings and symbolic representations that emerge spontaneously from the idiosyncrasies of each learner. The specific form of microtext found on the Dr Math service contains mathematical questions and example equations pertaining to the tutoring process. Deciphering these queries to discover their embedded mathematical content slows down the tutoring process. This wastes time that could have been spent addressing more learner queries. The microtext language thus creates an unnecessary burden on the tutors. This study describes the development of an automated process for the translation of Dr Math microtext queries into mathematical equations. Using the design science research paradigm as a guide, three artefacts are developed. These artefacts take the form of a construct, a model and an instantiation. The construct represents the creation of new knowledge as it provides greater insight into the contents and structure of the language found on a mobile mathematics tutoring service. The construct serves as the basis for the creation of a model for the translation of microtext queries into mathematical equations, formatted for display in an electronic medium (a toy translation sketch follows this record). No such technique previously existed; the model therefore contributes new knowledge. To validate the model, an instantiation was created to serve as a proof-of-concept. The instantiation applies various concepts and techniques, such as those related to natural language processing, to the learner queries on the Dr Math service. These techniques are employed to translate an input microtext statement into a mathematical equation, structured using a mark-up language. The creation of the instantiation thus constitutes a knowledge contribution, as most of these techniques have never before been applied to the problem of translating microtext into mathematical equations. For the automated process to have utility, it should perform on a level comparable to that of a human performing a similar translation task. To determine how closely the results from the automated process match those of a human, three human participants were asked to perform coding and translation tasks. The results of the human participants were compared to the results of the automated process across a variety of metrics, including agreement, correlation, precision, and recall. The results from the human participants served as the baseline values for this comparison. Krippendorff's α was used to determine the level of agreement and Pearson's correlation coefficient to determine the level of correlation between the results. The agreement between the human participants and the automated process was calculated at a level deemed satisfactory for exploratory research, and the level of correlation was calculated as moderate.
These values correspond with those calculated for the human baseline. Furthermore, the automated process was able to meet or improve on all of the human baseline metrics (see the evaluation sketch after this record). These results serve to validate that the automated process is able to perform the translation at a level comparable to that of a human. The automated process is available for integration into any requesting application by means of a publicly accessible web service.
- Full Text:
- Date Issued: 2014
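The record above describes a model for translating microtext queries into mark-up-formatted equations using natural language processing techniques. The thesis's actual model is not reproduced here; the following is only a toy rule-based Python sketch of the general shape of such a translation, with a handful of invented substitution rules emitting LaTeX-style mark-up.

```python
# Toy sketch of microtext-to-equation normalisation. This is NOT the
# thesis's actual model (which applies fuller NLP techniques); the
# substitution rules below are invented for illustration.
import re

# Illustrative rules; real microtext is far more varied than this.
RULES = [
    (r"\bsqrt\s*(\w+)", r"\\sqrt{\1}"),        # "sqrt 16"  -> \sqrt{16}
    (r"(\w+)\s*\^\s*(\w+)", r"{\1}^{\2}"),     # "x^2"      -> {x}^{2}
    (r"\bover\b|\bdivided by\b", "/"),         # verbal division
    # "x" between numbers is ambiguous (variable vs multiply); this toy
    # rule treats it as multiplication only when a digit follows.
    (r"\btimes\b|\bx\b(?=\s*\d)", r"\\times"),
    (r"\bequals\b|\bis\b", "="),               # verbal equality
]

def microtext_to_latex(query: str) -> str:
    """Apply the substitution rules in order and tidy whitespace."""
    expr = query.lower().strip()
    for pattern, repl in RULES:
        expr = re.sub(pattern, repl, expr)
    return re.sub(r"\s+", " ", expr)

print(microtext_to_latex("2 x^2 equals 8"))  # -> "2 {x}^{2} = 8"
print(microtext_to_latex("3 x 4 is 12"))     # -> "3 \times 4 = 12"
```

A real translation model, as the abstract notes, must cope with far more varied and idiosyncratic spellings than fixed rules like these can.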
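The abstract also names the measures used to compare the automated process against the human baseline: Krippendorff's α for agreement and Pearson's correlation coefficient for correlation. Below is a minimal sketch of computing both with off-the-shelf libraries (scipy and the third-party krippendorff package); the codings are invented placeholders, not the study's data.

```python
# Sketch of the comparison metrics named in the abstract, computed with
# off-the-shelf libraries (pip install scipy krippendorff). The codings
# below are invented placeholders, not the study's actual data.
import numpy as np
import krippendorff
from scipy.stats import pearsonr

# Rows are coders (human vs automated process), columns are items;
# values are nominal codes assigned to each microtext query.
codings = np.array([
    [1, 2, 2, 3, 1, 2],   # human coder
    [1, 2, 3, 3, 1, 2],   # automated process
], dtype=float)

alpha = krippendorff.alpha(reliability_data=codings,
                           level_of_measurement="nominal")
r, p_value = pearsonr(codings[0], codings[1])

print(f"Krippendorff's alpha: {alpha:.3f}")
print(f"Pearson's r: {r:.3f} (p = {p_value:.3f})")
```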
C3TO : a scalable architecture for mobile chat based tutoring
- Authors: Butgereit, Laura Lee
- Date: 2010
- Subjects: Intelligent tutoring systems , Instructional systems -- Design
- Language: English
- Type: Thesis , Masters , MTech
- Identifier: vital:9746 , http://hdl.handle.net/10948/1511
- Description: C³TO (Chatter Call Centre/Tutoring Online) is a scalable architecture to support mobile online tutoring using chat protocols over cell phones. The scalability of this architecture is the primary focus of this dissertation. Much has been written lamenting the state of mathematics education in South Africa. It is not a pretty story. In order to help address this mathematical crisis, the “Dr Math” research project was started in January 2007. “Dr Math” strove to assist school pupils with their mathematics homework by providing access to tutors from a nearby university. The school pupils used MXit on their cell phones and the tutors used normal computer workstations. The original “Dr Math” research project expected no more than twenty to thirty school pupils to participate. Unexpectedly, thousands of school pupils started asking “Dr Math” to assist them with their mathematics homework. The original software could not scale to cater for the thousands of pupils needing help. The scalability problems in the original “Dr Math” project included hardware scalability issues, software scalability problems, a lack of physical office space for tutors, and tutor time being wasted by trivial questions. C³TO tackled these scalability concerns using an innovative three-level approach, implementing a technological feature level, a tactical feature level, and a strategic feature level in the C³TO architecture. The technological level included specific components, utilities, and platforms which promoted scalability, providing the basic building blocks with which to construct a scalable architecture. The tactical level arranged the basic building blocks of the technological level into a scalable architecture, providing short-term solutions to scalability concerns through easy configurability and decision making. The strategic level attempted to answer the pupils' questions before they actually arrived at the tutor, thereby reducing the load on the human tutors (a generic sketch of this idea follows this record). C³TO was extensively tested and evaluated. C³TO supported thousands of school pupils with their mathematics homework over a period of ten months. C³TO was also used to support a small conference, to encourage people to volunteer their time as part of Mandela Day, and to support “Winter School” during the winter school holiday. In all these cases, C³TO proved itself to be scalable.
- Full Text:
- Date Issued: 2010
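The record above says the strategic level attempted to answer pupils' questions before they reached a tutor. The abstract does not specify how C³TO implemented this, so the following Python sketch is only a generic illustration of the idea: match an incoming question against a stored FAQ and escalate to a human tutor when no close match exists. The FAQ contents and the token-overlap matching rule are assumptions.

```python
# Generic sketch of the strategic-level idea: answer a pupil's question
# from a stored FAQ before it reaches a human tutor. The matching rule
# (token-overlap Jaccard similarity) and the FAQ entries are assumptions;
# the abstract does not describe C3TO's actual mechanism.
FAQ = {
    "what is the quadratic formula":
        "x = (-b +/- sqrt(b^2 - 4ac)) / 2a",
    "how do i find the area of a circle":
        "Area = pi * r^2, where r is the radius",
}

def tokens(text: str) -> set[str]:
    return set(text.lower().split())

def answer_or_escalate(question: str, threshold: float = 0.6) -> str:
    """Return a stored answer if one matches closely, else escalate."""
    q_tokens = tokens(question)
    best_answer, best_score = None, 0.0
    for known_q, known_a in FAQ.items():
        k_tokens = tokens(known_q)
        score = len(q_tokens & k_tokens) / len(q_tokens | k_tokens)
        if score > best_score:
            best_answer, best_score = known_a, score
    if best_score >= threshold:
        return best_answer         # answered without a tutor
    return "ROUTE_TO_TUTOR"        # falls through to a human tutor

print(answer_or_escalate("what is the quadratic formula"))
```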