Trust on the semantic web
- Authors: Cloran, Russell Andrew
- Date: 2007 , 2006-08-07
- Subjects: Semantic Web , RDF (Document markup language) , XML (Document markup language) , Knowledge acquisition (Expert systems) , Data protection
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4649 , http://hdl.handle.net/10962/d1006616
- Description: The Semantic Web is a vision of a “web of knowledge”: an extension of the Web as we know it which will create an information space usable by machines in very rich ways. The technologies which make up the Semantic Web allow machines to reason across information gathered from the Web, presenting only relevant results and inferences to the user. Users of the Web in its current form assess the credibility of the information they gather in a number of different ways. If processing happens without the user being able to check the source and credibility of each piece of information used, the user must be able to trust that the machine has used trustworthy information at each step of the processing. The machine should therefore be able to automatically assess the credibility of each piece of information it gathers from the Web. A case study on advanced checks for website credibility is presented, and the site examined in the case study is found to be credible, despite failing many of the checks presented. A website with a backend based on RDF technologies is constructed. A better understanding of RDF technologies and good knowledge of the RAP and Redland RDF application frameworks is gained. The second aim of constructing the website was to gather information to be used for testing various trust metrics; the website did not gain widespread support, however, and therefore not enough data was gathered for this. Techniques for presenting RDF data to users were also developed during website development, and these are discussed. Experiences in gathering RDF data are presented next. A scutter was successfully developed, and the data smushed to create a database in which uniquely identifiable objects were linked, even where gathered from different sources. Finally, the use of digital signatures as a means of linking an author to content produced by that author is presented.
RDF/XML canonicalisation is discussed as a means of ideal cryptographic checking of RDF graphs, rather than simply checking at the document level. The notion of canonicalisation on the semantic, structural and syntactic levels is proposed. A combination of an existing canonicalisation algorithm and a restricted RDF/XML dialect is presented as a solution to the RDF/XML canonicalisation problem. We conclude that a trusted Semantic Web is possible, with buy-in from publishing and consuming parties.
- Full Text:
- Date Issued: 2007
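The canonicalise-then-sign idea from the abstract above can be illustrated with a minimal sketch: serialise a graph in a canonical order, hash that form, and sign the hash, so the signature binds an author to the graph's content rather than to one particular serialisation. This is not the thesis's algorithm; it is a toy that ignores blank nodes (the genuinely hard part of RDF canonicalisation), and all names and data are illustrative.

```python
import hashlib

def canonicalise(triples):
    """Canonical serialisation of a set of ground RDF triples:
    render each as an N-Triples-style line, then sort the lines.
    Blank nodes, which make real canonicalisation hard, are not handled."""
    lines = sorted(f"<{s}> <{p}> {o} ." for s, p, o in triples)
    return "\n".join(lines).encode("utf-8")

def digest(triples):
    """Hash the canonical form. A digital signature over this digest
    would verify for any serialisation order of the same triples."""
    return hashlib.sha256(canonicalise(triples)).hexdigest()

# Two graphs containing the same triples in different orders...
g1 = [("ex:alice", "ex:knows", "<ex:bob>"),
      ("ex:alice", "ex:name", '"Alice"')]
g2 = list(reversed(g1))

# ...produce identical digests, so one signature covers both.
assert digest(g1) == digest(g2)
```

Checking at this level, rather than over a byte-exact document, is what allows a graph to be re-serialised (e.g. by a scutter or a store) without invalidating the author's signature.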
File integrity checking
- Authors: Motara, Yusuf Moosa
- Date: 2006
- Subjects: Linux , Operating systems (Computers) , Database design , Computer security
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4682 , http://hdl.handle.net/10962/d1007701
- Description: This thesis looks at file execution as an attack vector that leads to the execution of unauthorized code. File integrity checking is examined as a means of removing this attack vector, and the design, implementation, and evaluation of a best-of-breed file integrity checker for the Linux operating system is undertaken. We conclude that the resultant file integrity checker does succeed in removing file execution as an attack vector, does so at a computational cost that is negligible, and displays innovative and useful features that are not currently found in any other Linux file integrity checker.
- Full Text:
- Date Issued: 2006
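The core mechanism the abstract above describes can be sketched in a few lines: record a trusted hash for each file, and before execution allow only files whose current hash matches the baseline. This is a generic illustration of file integrity checking, not the thesis's best-of-breed design; the function names and the demo file are invented for the example.

```python
import hashlib
import os
import tempfile

def file_hash(path):
    """SHA-256 of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    """Record a trusted hash for each known file."""
    return {p: file_hash(p) for p in paths}

def verify(path, baseline):
    """True only if the file is known and unmodified; an execution
    hook would refuse to run the file when this returns False."""
    return baseline.get(path) == file_hash(path)

# Demo: a file passes verification until it is tampered with.
with tempfile.TemporaryDirectory() as d:
    prog = os.path.join(d, "prog.sh")
    with open(prog, "w") as f:
        f.write("echo ok\n")
    baseline = build_baseline([prog])
    assert verify(prog, baseline)       # unmodified: execution allowed
    with open(prog, "a") as f:
        f.write("echo tampered\n")
    assert not verify(prog, baseline)   # modified: execution refused
```

A real checker of the kind the thesis evaluates would hook this test into the operating system's execution path and keep the baseline tamper-resistant, which is where the design and performance questions arise.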