Deep Palmprint Recognition with Alignment and Augmentation of Limited Training Samples
- Authors: Brown, Dane , Bradshaw, Karen L
- Date: 2022
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/440249 , vital:73760 , xlink:href="https://doi.org/10.1007/s42979-021-00859-3"
- Description: This paper builds upon a previously proposed automatic palmprint alignment and classification system. The proposed system was geared towards palmprints acquired from either contact or contactless sensors. It was robust to changes in finger location and fist shape, accurately extracting the palmprints even in images without fingers. An extension to this previous work includes comparisons of traditional and deep learning models, both with hyperparameter tuning. The proposed methods are compared with related verification systems, and a detailed evaluation of open-set identification is provided. The best results were achieved by a proposed convolutional neural network based on VGG-16, which outperformed the tuned VGG-16 and Xception architectures. All deep learning algorithms are provided with augmented data, included in the tuning process, enabling significant accuracy gains. Highlights include near-zero and zero equal error rate (EER) on IITD-Palmprint verification when using one training sample and a leave-one-out strategy, respectively. The proposed palmprint system is therefore practical, as it is effective on data containing both many and few training examples.
- Full Text:
- Date Issued: 2022
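The near-zero and zero EER figures in the record above refer to the operating point where the false accept rate (FAR) and false reject rate (FRR) coincide. As an illustrative sketch only (not the paper's implementation), the metric can be approximated from genuine and impostor match scores:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Approximate the equal error rate (EER): the threshold at which
    the false accept rate (FAR) equals the false reject rate (FRR)."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    idx = np.argmin(np.abs(far - frr))   # closest FAR/FRR crossing
    return (far[idx] + frr[idx]) / 2

# Perfectly separated score distributions yield a zero EER, as
# reported for the leave-one-out IITD-Palmprint experiment.
genuine = np.array([0.9, 0.85, 0.95, 0.8])   # hypothetical match scores
impostor = np.array([0.2, 0.1, 0.3, 0.15])   # hypothetical non-match scores
eer = equal_error_rate(genuine, impostor)
```

A finer threshold sweep (or interpolation of the ROC curve) gives a smoother estimate; this version simply evaluates at the observed scores.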
Improved palmprint segmentation for robust identification and verification
- Authors: Brown, Dane , Bradshaw, Karen L
- Date: 2019
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/460576 , vital:75966 , xlink:href="https://doi.org/10.1109/SITIS.2019.00013"
- Description: This paper introduces an improved approach to palmprint segmentation. The approach enables both contact and contactless palmprints to be segmented regardless of constraining finger positions or whether fingers are even depicted within the image. It is compared with related systems and evaluated in more comprehensive identification tests, which show consistent results across other datasets. Experiments include both contact and contactless palmprint images. The proposed system achieves highly accurate classification results and highlights the importance of effective image segmentation. The proposed system is practical, as it is effective with small or large amounts of training data.
- Full Text:
- Date Issued: 2019
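To make the role of segmentation concrete, the following is a minimal, hypothetical region-of-interest (ROI) crop in plain NumPy. It is not the paper's segmentation algorithm (which handles missing fingers and both sensor types); it only illustrates the general idea of isolating a square palm region from a thresholded hand mask:

```python
import numpy as np

def extract_palm_roi(image, threshold=50):
    """Crop a centred square ROI from the bright (hand) region of a
    grayscale image. Crude threshold-based sketch for illustration."""
    mask = image > threshold                     # naive hand/background split
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]          # bounding box of the mask
    c0, c1 = np.where(cols)[0][[0, -1]]
    side = min(r1 - r0, c1 - c0)                 # largest centred square
    cr, cc = (r0 + r1) // 2, (c0 + c1) // 2
    return image[cr - side // 2: cr + side // 2,
                 cc - side // 2: cc + side // 2]

# Synthetic 100x100 image with a bright 60x60 "hand" patch.
img = np.zeros((100, 100), dtype=np.uint8)
img[20:80, 30:90] = 200
roi = extract_palm_roi(img)
```

A real pipeline would replace the fixed threshold with skin/hand detection and align the ROI to stable landmarks before classification.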
Investigating combinations of feature extraction and classification for improved image-based multimodal biometric systems at the feature level
- Authors: Brown, Dane
- Date: 2018
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/63470 , vital:28414
- Description: Multimodal biometrics has become a popular means of overcoming the limitations of unimodal biometric systems. However, the rich information particular to the feature level is of a complex nature and leveraging its potential without overfitting a classifier is not well studied. This research investigates feature-classifier combinations on the fingerprint, face, palmprint, and iris modalities to effectively fuse their feature vectors for a complementary result. The effects of different feature-classifier combinations are thus isolated to identify novel or improved algorithms. A new face segmentation algorithm is shown to increase consistency in nominal and extreme scenarios. Moreover, two novel feature extraction techniques demonstrate better adaptation to dynamic lighting conditions, while reducing feature dimensionality to the benefit of classifiers. A comprehensive set of unimodal experiments is carried out to evaluate both verification and identification performance on a variety of datasets using four classifiers, namely Eigen, Fisher, Local Binary Pattern Histogram and linear Support Vector Machine, on various feature extraction methods. The recognition performance of the proposed algorithms is shown to outperform the vast majority of related studies when using the same dataset under the same test conditions. In the unimodal comparisons presented, the proposed approaches outperform existing systems even when given a handicap such as fewer training samples or data with a greater number of classes. A separate comprehensive set of experiments on feature fusion shows that combining modality data provides a substantial increase in accuracy, with only a few exceptions that occur when differences in the image data quality of two modalities are substantial. However, when two poor quality datasets are fused, noticeable gains in recognition performance are realized when using the novel feature extraction approach.
Finally, feature-fusion guidelines are proposed to provide the necessary insight to leverage the rich information effectively when fusing multiple biometric modalities at the feature level. These guidelines serve as the foundation to better understand and construct biometric systems that are effective in a variety of applications.
- Full Text:
- Date Issued: 2018
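The thesis fuses modalities at the feature level before classification. A minimal sketch of that idea, assuming per-modality z-score normalisation followed by concatenation (an illustrative choice, not necessarily the thesis's exact pipeline):

```python
import numpy as np

def fuse_features(face_vec, palm_vec):
    """Feature-level fusion sketch: z-score normalise each modality's
    feature vector so neither dominates by scale, then concatenate.
    The fused vector would then feed a classifier such as a linear
    SVM, as used in the thesis."""
    def zscore(v):
        s = v.std()
        return (v - v.mean()) / (s if s > 0 else 1.0)
    return np.concatenate([zscore(face_vec), zscore(palm_vec)])

face = np.array([10.0, 12.0, 9.0])      # hypothetical face features
palm = np.array([0.1, 0.4, 0.2, 0.3])   # hypothetical palmprint features
fused = fuse_features(face, palm)
```

Without the normalisation step, the face features (an order of magnitude larger here) would dominate any distance- or margin-based classifier, which is one of the feature-level pitfalls the guidelines address.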
Feature-fusion guidelines for image-based multi-modal biometric fusion
- Authors: Brown, Dane , Bradshaw, Karen L
- Date: 2017
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/460063 , vital:75889 , xlink:href="https://doi.org/10.18489/sacj.v29i1.436"
- Description: The feature level, unlike the match score level, lacks multi-modal fusion guidelines. This work demonstrates a new approach for improved image-based biometric feature-fusion. The approach extracts and combines the face, fingerprint and palmprint at the feature level for improved human identification accuracy. Feature-fusion guidelines, proposed in our recent work, are extended by adding a new face segmentation method and the support vector machine classifier. The new face segmentation method improves the face identification equal error rate (EER) by 10%. The support vector machine classifier combined with the new feature selection approach, proposed in our recent work, outperforms other classifiers when using a single training sample. Feature-fusion guidelines take the form of strengths and weaknesses as observed in the applied feature processing modules during preliminary experiments. The guidelines are used to implement an effective biometric fusion system at the feature level, using a novel feature-fusion methodology, reducing the EER of two groups of three datasets, namely: SDUMLA face, SDUMLA fingerprint and IITD palmprint; MUCT face, MCYT fingerprint and CASIA palmprint.
- Full Text:
- Date Issued: 2017
Improved Automatic Face Segmentation and Recognition for Applications with Limited Training Data
- Authors: Bradshaw, Karen L , Brown, Dane
- Date: 2017
- Subjects: To be catalogued
- Language: English
- Type: text , book
- Identifier: http://hdl.handle.net/10962/460085 , vital:75891 , ISBN 9783319582740 , https://doi.org/10.1007/978-3-319-58274-0_33
- Description: This paper introduces varied pose angle, a new approach to improve face identification given large pose angles and limited training data. Face landmarks are extracted and used to normalize and segment the face. Our approach does not require face frontalization and achieves consistent results. Results are compared using frontal and non-frontal training images for Eigen and Fisher classification across various face pose angles. Fisher scales better with more training samples, but only on a high-quality dataset. Our approach achieves promising results on three well-known face datasets.
- Full Text:
- Date Issued: 2017
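The record above normalises faces from extracted landmarks. As a hedged stand-in (not the paper's method), in-plane alignment from two eye landmarks can be sketched with an inverse-mapped rotation in plain NumPy:

```python
import numpy as np

def align_by_eyes(image, left_eye, right_eye):
    """Rotate a grayscale face image so the two eye landmarks lie on a
    horizontal line -- a minimal stand-in for landmark-based
    normalisation. Uses nearest-neighbour inverse mapping."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.arctan2(ry - ly, rx - lx)           # in-plane eye tilt
    cx, cy = (lx + rx) / 2.0, (ly + ry) / 2.0       # rotate about eye midpoint
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cos, sin = np.cos(angle), np.sin(angle)
    # Inverse rotation: for each output pixel, sample the source pixel.
    sx = cos * (xs - cx) - sin * (ys - cy) + cx
    sy = sin * (xs - cx) + cos * (ys - cy) + cy
    sx = np.clip(np.round(sx).astype(int), 0, w - 1)
    sy = np.clip(np.round(sy).astype(int), 0, h - 1)
    return image[sy, sx]

# With horizontal eyes the transform reduces to the identity.
img = np.arange(100, dtype=float).reshape(10, 10)
aligned = align_by_eyes(img, (2, 5), (8, 5))
```

A production aligner would use a full affine or similarity transform over several landmarks with proper interpolation; this sketch only shows the rotation step.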
“Enhanced biometric access control for mobile devices,” in Proceedings of the 20th Southern Africa Telecommunication Networks and Applications Conference
- Authors: Bradshaw, Karen L , Brown, Dane
- Date: 2017
- Subjects: To be catalogued
- Language: English
- Type: text , book
- Identifier: http://hdl.handle.net/10962/460025 , vital:75885 , ISBN 9780620767569
- Description: In the new Digital Economy, mobile devices are increasingly being used for tasks that involve sensitive and/or financial data. Hitherto, security on smartphones has not been a priority; furthermore, users tend to ignore the security features in favour of more rapid access to the device. We propose an authentication system that can provide enhanced security by utilizing multi-modal biometrics from a single image, captured at arm’s length, containing unique face and iris data. The system is compared to state-of-the-art face and iris recognition systems in related studies using the CASIA-Iris-Distance dataset and the IITD iris dataset. The proposed system outperforms the related studies in all experiments and shows promising advancements for at-a-distance iris recognition on mobile devices.
- Full Text:
- Date Issued: 2017