An investigation into information security practices implemented by Research and Educational Network of Uganda (RENU) member institutions
- Authors: Kisakye, Alex
- Date: 2012 , 2012-11-06
- Subjects: Research and Educational Network of Uganda , Computer security -- Education (Higher) -- Uganda , Computer networks -- Security measures -- Education (Higher) -- Uganda , Management -- Computer network resources -- Education (Higher) -- Uganda , Computer hackers
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4586 , http://hdl.handle.net/10962/d1004748 , Research and Educational Network of Uganda , Computer security -- Education (Higher) -- Uganda , Computer networks -- Security measures -- Education (Higher) -- Uganda , Management -- Computer network resources -- Education (Higher) -- Uganda , Computer hackers
- Description: Educational institutions are known to be at the heart of complex computing systems in any region in which they exist, especially in Africa. The existence of high-end computing power, often connected to the Internet and to research network grids, makes educational institutions soft targets for attackers. Attackers of such networks are normally looking either to exploit the large computing resources available for use in secondary attacks or to steal Intellectual Property (IP) from the research networks to which the institutions belong. Universities also store a great deal of information about their current students and staff as well as alumni, ranging from personal to financial information. Unauthorized access to such information violates statutory requirements, could grossly tarnish the institution's name, and could cost the institution a great deal of money during post-incident activities. The purpose of this study was to investigate the information security practices that have been put in place by Research and Education Network of Uganda (RENU) member institutions to safeguard institutional data and systems from both internal and external security threats. The study was conducted on six member institutions in three phases, between the months of May and July 2011 in Uganda. Phase One involved the use of a customised quantitative questionnaire tool. The tool, originally developed by the information security governance task force of EDUCAUSE, was customised for use in Uganda. Phase Two involved the use of a qualitative interview guide in sessions between the investigator and respondents. Results show that institutions rely heavily on Information and Communication Technology (ICT) systems and services, and that all institutions had already acquired more than three information systems and had implemented some cutting-edge equipment and systems in their data centres.
Further results show that institutions have established ICT departments, although staff have not been trained in information security. All institutions interviewed have ICT policies, although only a few have carried out policy sensitisation and awareness campaigns for their staff and students.
- Full Text:
- Date Issued: 2012
GPF: a framework for general packet classification on GPU co-processors
- Authors: Nottingham, Alastair
- Date: 2012
- Subjects: Graphics processing units , Coprocessors , Computer network protocols , Computer networks -- Security measures , NVIDIA Corporation
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4661 , http://hdl.handle.net/10962/d1006662 , Graphics processing units , Coprocessors , Computer network protocols , Computer networks -- Security measures , NVIDIA Corporation
- Description: This thesis explores the design and experimental implementation of GPF, a novel protocol-independent, multi-match packet classification framework. This framework is targeted and optimised for flexible, efficient execution on NVIDIA GPU platforms through the CUDA API, but should not be difficult to port to other platforms, such as OpenCL, in the future. GPF was conceived and developed in order to accelerate classification of large packet capture files, such as those collected by network telescopes. It uses a multiphase SIMD classification process which exploits both the parallelism of packet sets and the redundancy in filter programs in order to classify packet captures against multiple filters at extremely high rates. The resultant framework, comprising classification, compilation and buffering components, efficiently leverages GPU resources to classify arbitrary protocols and return multiple filter results for each packet. The classification functions described were verified and evaluated by testing an experimental prototype implementation against several filter programs of varying complexity, on devices from three GPU platform generations. In addition to the significant speedup achieved in processing results, analysis indicates that the prototype classification functions perform predictably and scale linearly with respect to both packet count and filter complexity. Furthermore, classification throughput (packets/s) remained essentially constant regardless of the underlying packet data; thus the effective data rate achieved when classifying with a particular filter was heavily influenced by the average size of packets in the processed capture. For example, in the trivial case of classifying all IPv4 packets ranging in size from 70 bytes to 1 KB, the observed data rate achieved by the GPU classification kernels ranged from 60 Gbps to 900 Gbps on a GTX 275, and from 220 Gbps to 3.3 Tbps on a GTX 480.
In the less trivial case of identifying all ARP, TCP, UDP and ICMP packets for both IPv4 and IPv6 protocols, the effective data rates ranged from 15 Gbps to 220 Gbps (GTX 275) and from 50 Gbps to 740 Gbps (GTX 480) for 70 B and 1 KB packets respectively.
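The abstract's observation that a constant packet rate implies a size-dependent data rate can be illustrated with a short calculation. The packet rate below is a hypothetical figure, chosen only so that the resulting range roughly matches the GTX 275 numbers quoted above; it is not a value taken from the thesis:

```python
# Sketch (not from the thesis): when classification throughput is constant
# in packets/s, the effective data rate scales linearly with packet size.

def effective_data_rate_gbps(packets_per_second, packet_size_bytes):
    """Convert a fixed packet-classification rate into a data rate in Gbps."""
    return packets_per_second * packet_size_bytes * 8 / 1e9

# Hypothetical constant throughput of ~110 million packets/s:
rate = 110e6
for size in (70, 512, 1024):
    print(f"{size:>5} B packets -> {effective_data_rate_gbps(rate, size):6.1f} Gbps")
```

At 110 Mpps this yields roughly 61.6 Gbps for 70 B packets and about 901 Gbps for 1 KB packets, which is consistent with the quoted 60–900 Gbps spread for the GTX 275 kernels.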
- Full Text:
- Date Issued: 2012