Poacher detection and wildlife counting system
- Authors: Brown, Dane L , Schormann, Daniel
- Date: 2019
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/465733 , vital:76636 , xlink:href="https://www.researchgate.net/profile/Dane-Brown-2/publication/335378767_Poacher_Detection_and_Wildlife_Counting_System/links/5d6117c7a6fdccc32ccd2cac/Poacher-Detection-and-Wildlife-Counting-System.pdf"
- Description: The illegal hunting of wildlife is a serious problem, pushing many animal species towards, and in some cases into, extinction. Drones provide a viable option for constant surveillance, and several attempts have been made to use them for this purpose. However, existing methods predominantly rely on manual surveillance of camera feeds. This paper shows that using either visible or thermal cameras, together with modern image processing and machine learning techniques, enables a system to autonomously detect humans while tracking animals by an identity number (ID). The thermal characteristics of special but inexpensive cameras are used for object detection with centroid tracking, and convolutional neural networks are used to classify humans and wildlife. Classification also enables the counting of wildlife by ID, which can help game reserves keep track of wildlife. (An illustrative centroid-tracking sketch follows this record.)
- Full Text:
- Date Issued: 2019
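The abstract above combines a detector with centroid tracking to keep a persistent ID per animal. Below is a minimal, hedged sketch of such a centroid tracker in Python; the class name, parameters, and the (x, y, w, h) bounding-box format are assumptions for illustration, and the paper's actual detector and CNN classifier are not reproduced here.

```python
# Minimal centroid tracker: assign persistent IDs to detections by matching new
# centroids to previously seen ones via greedy nearest-distance matching.
import numpy as np

class CentroidTracker:
    def __init__(self, max_distance=50.0, max_missed=10):
        self.next_id = 0
        self.objects = {}   # id -> last known centroid, shape (2,)
        self.missed = {}    # id -> consecutive frames without a match
        self.max_distance = max_distance
        self.max_missed = max_missed

    def update(self, boxes):
        """boxes: list of (x, y, w, h) from an upstream detector (thermal or visible)."""
        centroids = [np.array([x + w / 2.0, y + h / 2.0]) for x, y, w, h in boxes]
        unmatched = set(range(len(centroids)))
        for obj_id, prev in list(self.objects.items()):
            if unmatched:
                best = min(unmatched, key=lambda i: np.linalg.norm(centroids[i] - prev))
                if np.linalg.norm(centroids[best] - prev) <= self.max_distance:
                    self.objects[obj_id] = centroids[best]
                    self.missed[obj_id] = 0
                    unmatched.discard(best)
                    continue
            # No acceptable match this frame; drop the ID after too many misses.
            self.missed[obj_id] += 1
            if self.missed[obj_id] > self.max_missed:
                del self.objects[obj_id], self.missed[obj_id]
        for i in unmatched:                      # register brand-new detections
            self.objects[self.next_id] = centroids[i]
            self.missed[self.next_id] = 0
            self.next_id += 1
        return dict(self.objects)                # next_id is a running count of distinct IDs seen
```

In the paper's setting, a CNN would then label each tracked ID as human or a wildlife class, so counting wildlife reduces to counting the distinct non-human IDs.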
Virtual Gym Instructor
- Authors: Brown, Dane L , Ndleve, Mixo
- Date: 2019
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/465744 , vital:76637 , xlink:href="https://www.researchgate.net/profile/Dane-Brown-2/publication/335378603_Virtual_Gym_Instructor/links/5d6118a892851c619d7268c1/Virtual-Gym-Instructor.pdf"
- Description: The fourth industrial revolution and the continuous development of new technologies have presented a golden platter for sedentary living. Noncommunicable diseases such as cancers, cardiovascular and respiratory deficiencies, and diabetes have consequently reached epidemic levels. A traditional gym instructor screens clients to prescribe exercise programs that can help them lower the risk of noncommunicable lifestyle diseases. However, gym instructors come at a cost and are not always affordable, available or accessible. This research investigated whether modern computing power can be utilized to develop a system in the form of a cost-effective alternative exercise program – the Virtual Gym Instructor. The system demonstrated perfect real-time object detection and tracking at up to four metres from the camera and still produced results at distances of up to eight metres.
- Full Text:
- Date Issued: 2019
Efficient Biometric Access Control for Larger Scale Populations
- Authors: Brown, Dane L , Bradshaw, Karen L
- Date: 2018
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/465667 , vital:76630 , xlink:href="https://www.researchgate.net/profile/Dane-Brown-2/publication/335378829_Efficient_Biometric_Access_Control_for_Larger_Scale_Populations/links/5d61159ea6fdccc32ccd2c8a/Efficient-Biometric-Access-Control-for-Larger-Scale-Populations.pdf"
- Description: Biometric applications and databases are growing at an alarming rate. Processing large or complex biometric data induces longer wait times that can limit usability during application. This paper focuses on increasing the processing speed of biometric data and calls for a parallel approach to data processing that is beyond the capability of a central processing unit (CPU). The graphics processing unit (GPU) is effectively utilized with the compute unified device architecture (CUDA), resulting in at least triple the processing speed when compared with a previously presented accurate and secure multimodal biometric system. When the CPU-only implementation is saturated with more individuals than the available thread count, the GPU-assisted implementation outperforms it exponentially. The GPU-assisted implementation is also validated to have the same accuracy as the original system, and thus shows promising advancements in both accuracy and processing speed in the challenging big data world. (An illustrative GPU-matching sketch follows this record.)
- Full Text:
- Date Issued: 2018
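As a rough illustration of the CPU-versus-GPU comparison described above (not the paper's CUDA implementation), the sketch below matches a probe template against a large gallery with a simple Euclidean distance, once with NumPy on the CPU and once with CuPy on the GPU. The template dimensions, array names, and random data are assumptions.

```python
# Illustrative CPU vs GPU biometric matching: one distance per enrolled template.
import numpy as np
import cupy as cp   # GPU arrays via CUDA; requires an NVIDIA GPU

def match_cpu(gallery, probe):
    # gallery: (n_individuals, feature_dim), probe: (feature_dim,)
    return np.linalg.norm(gallery - probe, axis=1)

def match_gpu(gallery, probe):
    g = cp.asarray(gallery)            # copy templates to device memory
    p = cp.asarray(probe)
    d = cp.linalg.norm(g - p, axis=1)  # all comparisons computed in parallel on the GPU
    return cp.asnumpy(d)               # copy scores back to the host

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery = rng.standard_normal((100_000, 512), dtype=np.float32)  # hypothetical enrolled templates
    probe = rng.standard_normal(512, dtype=np.float32)
    cpu_scores, gpu_scores = match_cpu(gallery, probe), match_gpu(gallery, probe)
    assert np.allclose(cpu_scores, gpu_scores, rtol=1e-4, atol=1e-4)
```

The GPU version evaluates all gallery distances in parallel, which is where the speed-up for large populations comes from; the paper's actual matching pipeline and accuracy validation are more involved.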
Investigating combinations of feature extraction and classification for improved image-based multimodal biometric systems at the feature level
- Authors: Brown, Dane L
- Date: 2018
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/63470 , vital:28414
- Description: Multimodal biometrics has become a popular means of overcoming the limitations of unimodal biometric systems. However, the rich information particular to the feature level is complex in nature, and leveraging its potential without overfitting a classifier is not well studied. This research investigates feature-classifier combinations on the fingerprint, face, palmprint, and iris modalities to effectively fuse their feature vectors for a complementary result. The effects of different feature-classifier combinations are thus isolated to identify novel or improved algorithms. A new face segmentation algorithm is shown to increase consistency in nominal and extreme scenarios. Moreover, two novel feature extraction techniques demonstrate better adaptation to dynamic lighting conditions while reducing feature dimensionality to the benefit of classifiers. A comprehensive set of unimodal experiments is carried out to evaluate both verification and identification performance on a variety of datasets using four classifiers, namely Eigen, Fisher, Local Binary Pattern Histogram and linear Support Vector Machine, on various feature extraction methods. The recognition performance of the proposed algorithms is shown to outperform the vast majority of related studies when using the same dataset under the same test conditions. In the unimodal comparisons presented, the proposed approaches outperform existing systems even when given a handicap such as fewer training samples or data with a greater number of classes. A separate comprehensive set of experiments on feature fusion shows that combining modality data provides a substantial increase in accuracy, with only a few exceptions that occur when the difference in image data quality between two modalities is substantial. However, when two poor-quality datasets are fused, noticeable gains in recognition performance are realized when using the novel feature extraction approach. Finally, feature-fusion guidelines are proposed to provide the necessary insight to leverage the rich information effectively when fusing multiple biometric modalities at the feature level. These guidelines serve as the foundation for better understanding and constructing biometric systems that are effective in a variety of applications. (An illustrative feature-level fusion sketch follows this record.)
- Full Text:
- Date Issued: 2018
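As a small, hedged sketch of feature-level fusion as studied in the thesis above: per-modality feature vectors are normalised and concatenated before a linear SVM is trained for identification. The array shapes, identity counts, and random features below are placeholders; the thesis's feature extraction, feature selection, and remaining classifiers are not shown.

```python
# Feature-level fusion sketch: normalise each modality, concatenate, train a linear SVM.
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.svm import LinearSVC

def fuse_features(face_feats, fingerprint_feats, palmprint_feats):
    # Each argument: (n_samples, modality_dim). L2-normalise each modality so that no
    # single modality dominates the concatenated vector, then join along the feature axis.
    parts = [normalize(f) for f in (face_feats, fingerprint_feats, palmprint_feats)]
    return np.hstack(parts)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = np.repeat(np.arange(10), 6)            # 10 hypothetical identities, 6 samples each
    n = labels.size
    face, finger, palm = (rng.standard_normal((n, d)) for d in (128, 64, 96))
    fused = fuse_features(face, finger, palm)
    clf = LinearSVC(C=1.0).fit(fused, labels)       # identification as multi-class classification
    print("training accuracy:", clf.score(fused, labels))
```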
Enhanced biometric access control for mobile devices
- Authors: Brown, Dane L , Bradshaw, Karen L
- Date: 2017
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/465678 , vital:76631
- Description: In the new Digital Economy, mobile devices are increasingly being used for tasks that involve sensitive and/or financial data. Hitherto, security on smartphones has not been a priority and, furthermore, users tend to ignore the security features in favour of more rapid access to the device. We propose an authentication system that can provide enhanced security by utilizing multi-modal biometrics from a single image, captured at arm’s length, containing unique face and iris data. The system is compared to state-of-the-art face and iris recognition systems in related studies using the CASIA-Iris-Distance dataset and the IITD iris dataset. The proposed system outperforms the related studies in all experiments and shows promising advancements to at-a-distance iris recognition on mobile devices. (An illustrative face-and-eye extraction sketch follows this record.)
- Full Text:
- Date Issued: 2017
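Purely as an illustration of obtaining both face and eye/iris regions from a single image, the sketch below uses OpenCV's bundled Haar cascades. This is not the paper's detection or recognition pipeline, and at-a-distance iris segmentation in practice requires a far more careful approach.

```python
# Extract a face region and candidate eye regions from one image using OpenCV cascades.
import cv2

# OpenCV ships these cascade files with the opencv-python package.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def extract_face_and_eyes(image_path):
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None, []
    x, y, w, h = faces[0]                              # first detected face
    face_roi = gray[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(face_roi)      # eye regions within the face
    eye_rois = [face_roi[ey:ey + eh, ex:ex + ew] for (ex, ey, ew, eh) in eyes]
    return face_roi, eye_rois                          # inputs to separate face and iris matchers
```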
Feature-fusion guidelines for image-based multi-modal biometric fusion
- Authors: Brown, Dane L , Bradshaw, Karen L
- Date: 2017
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/460063 , vital:75889 , xlink:href="https://doi.org/10.18489/sacj.v29i1.436"
- Description: The feature level, unlike the match score level, lacks multi-modal fusion guidelines. This work demonstrates a new approach for improved image-based biometric feature-fusion. The approach extracts and combines the face, fingerprint and palmprint at the feature level for improved human identification accuracy. Feature-fusion guidelines, proposed in our recent work, are extended by adding a new face segmentation method and the support vector machine classifier. The new face segmentation method improves the face identification equal error rate (EER) by 10%. The support vector machine classifier combined with the new feature selection approach, proposed in our recent work, outperforms other classifiers when using a single training sample. Feature-fusion guidelines take the form of strengths and weaknesses as observed in the applied feature processing modules during preliminary experiments. The guidelines are used to implement an effective biometric fusion system at the feature level, using a novel feature-fusion methodology, reducing the EER of two groups of three datasets namely: SDUMLA face, SDUMLA fingerprint and IITD palmprint; MUCT Face, MCYT Fingerprint and CASIA Palmprint. (An illustrative EER-computation sketch follows this record.)
- Full Text:
- Date Issued: 2017
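The equal error rate (EER) quoted above is the operating point at which the false acceptance and false rejection rates coincide. The sketch below shows one common way to estimate it from genuine and impostor score distributions; the score arrays are synthetic placeholders, not data from the paper.

```python
# Estimate the equal error rate (EER) from genuine and impostor match scores.
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(genuine_scores, impostor_scores):
    labels = np.concatenate([np.ones_like(genuine_scores), np.zeros_like(impostor_scores)])
    scores = np.concatenate([genuine_scores, impostor_scores])
    fpr, tpr, _ = roc_curve(labels, scores)     # higher score = more likely genuine
    fnr = 1.0 - tpr
    idx = np.argmin(np.abs(fpr - fnr))          # threshold where FAR ~= FRR
    return (fpr[idx] + fnr[idx]) / 2.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    genuine = rng.normal(2.0, 1.0, 1000)        # hypothetical match scores
    impostor = rng.normal(0.0, 1.0, 1000)
    print(f"EER: {equal_error_rate(genuine, impostor):.3f}")
```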
Feature-fusion guidelines for image-based multi-modal biometric fusion
- Authors: Brown, Dane L , Bradshaw, Karen L
- Date: 2017
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/465689 , vital:76632 , xlink:href="https://hdl.handle.net/10520/EJC-90afb1388"
- Description: The feature level, unlike the match score level, lacks multi-modal fusion guidelines. This work demonstrates a new approach for improved image-based biometric feature-fusion. The approach extracts and combines the face, fingerprint and palmprint at the feature level for improved human identification accuracy. Feature-fusion guidelines, proposed in our recent work, are extended by adding a new face segmentation method and the support vector machine classifier. The new face segmentation method improves the face identification equal error rate (EER) by 10%. The support vector machine classifier combined with the new feature selection approach, proposed in our recent work, outperforms other classifiers when using a single training sample. Feature-fusion guidelines take the form of strengths and weaknesses as observed in the applied feature processing modules during preliminary experiments. The guidelines are used to implement an effective biometric fusion system at the feature level, using a novel feature-fusion methodology, reducing the EER of two groups of three datasets namely: SDUMLA face, SDUMLA fingerprint and IITD palmprint; MUCT Face, MCYT Fingerprint and CASIA Palmprint.
- Full Text:
- Date Issued: 2017
Improved Automatic Face Segmentation and Recognition for Applications with Limited Training Data
- Authors: Bradshaw, Karen L , Brown, Dane L
- Date: 2017
- Subjects: To be catalogued
- Language: English
- Type: text , book
- Identifier: http://hdl.handle.net/10962/460085 , vital:75891 , ISBN 9783319582740 , https://doi.org/10.1007/978-3-319-58274-0_33
- Description: This paper introduces varied pose angle, a new approach to improving face identification given large pose angles and limited training data. Face landmarks are extracted and used to normalize and segment the face. Our approach does not require face frontalization and achieves consistent results. Results are compared using frontal and non-frontal training images for Eigen and Fisher classification of various face pose angles. Fisher scales better with more training samples, but only on a high-quality dataset. Our approach achieves promising results for three well-known face datasets. (An illustrative face-alignment sketch follows this record.)
- Full Text:
- Date Issued: 2017
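In the spirit of the landmark-based normalisation described above (and only as a sketch, not the paper's segmentation method), the code below rotates a face so that the eyes lie on a horizontal line and crops around them, without frontalisation. The eye coordinates are assumed to come from an external landmark detector, and the crop margin is an arbitrary choice.

```python
# Eye-based face alignment: rotate so the eye line is horizontal, then crop and resize.
import cv2
import numpy as np

def align_face(image, left_eye, right_eye, output_size=(128, 128)):
    # left_eye / right_eye: (x, y) pixel coordinates from a landmark detector.
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))   # in-plane tilt of the eye line
    centre = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    rot = cv2.getRotationMatrix2D(centre, angle, 1.0)  # rotation that levels the eyes
    aligned = cv2.warpAffine(image, rot, (image.shape[1], image.shape[0]))
    # Crop around the eye midpoint, extending further downwards to cover the face;
    # the 1.5x margin factor is an assumption, not a value from the paper.
    eye_dist = np.hypot(rx - lx, ry - ly)
    half = int(1.5 * eye_dist)
    cx, cy = int(centre[0]), int(centre[1])
    crop = aligned[max(cy - half, 0):cy + 2 * half, max(cx - half, 0):cx + half]
    return cv2.resize(crop, output_size)
```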
“Enhanced biometric access control for mobile devices,” in Proceedings of the 20th Southern Africa Telecommunication Networks and Applications Conference
- Authors: Bradshaw, Karen L , Brown, Dane L
- Date: 2017
- Subjects: To be catalogued
- Language: English
- Type: text , book
- Identifier: http://hdl.handle.net/10962/460025 , vital:75885 , ISBN 9780620767569
- Description: In the new Digital Economy, mobile devices are increasingly being used for tasks that involve sensitive and/or financial data. Hitherto, security on smartphones has not been a priority and furthermore, users tend to ignore the security features in favour of more rapid access to the device. We propose an authentication system that can provide enhanced security by utilizing multi-modal biometrics from a single image, captured at arm’s length, containing unique face and iris data. The system is compared to state-of-the-art face and iris recognition systems, in related studies using the CASIA-Iris-Distance dataset and the IITD iris dataset. The proposed system outperforms the related studies in all experiments and shows promising advancements to at-a-distance iris recognition on mobile devices.
- Full Text:
- Date Issued: 2017