A Practical Use for AI-Generated Images
- Authors: Boby, Alden; Brown, Dane L; Connan, James
- Date: 2023
- Subjects: To be catalogued
- Language: English
- Type: text; article
- Identifier: http://hdl.handle.net/10962/463345; vital:76401; https://link.springer.com/chapter/10.1007/978-3-031-43838-7_12
- Description: Collecting data for research can be costly and time-consuming, and the methods available to speed up the process are limited. This paper compares real and AI-generated images for training an object detection model, assessing how the use of AI-generated images influences detection performance. A popular object detection model, YOLO, was trained both on a dataset of real car images and on a synthetic dataset generated with a state-of-the-art diffusion model. While the model trained on real data performed better on real-world images overall, the model trained on AI-generated images showed improved performance on certain images and was good enough to function as a licence plate detector on its own. The study highlights the potential of AI-generated images for data augmentation in object detection and sheds light on the trade-off between real and synthetic data in the training process. These findings can inform future object detection research and help practitioners make informed decisions when choosing between real and synthetic training data. (A minimal sketch of the generate-then-train pipeline follows this record.)
- Full Text:
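A minimal sketch of the generate-then-train pipeline described above. The paper names only "YOLO" and "a state-of-the-art diffusion model", so the specific choices here (Stable Diffusion v1.5 via the diffusers library, a YOLOv8 model via the ultralytics package) are illustrative assumptions, as are the prompt, image count, and file paths:

```python
# Sketch: generate synthetic car images with a diffusion model, then train
# a YOLO detector on them. Model and dataset choices are assumptions, not
# the paper's documented setup.
import torch
from diffusers import StableDiffusionPipeline
from ultralytics import YOLO

# 1) Generate a synthetic dataset of car images.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
for i in range(1000):
    image = pipe("a photo of a car with a visible licence plate").images[0]
    image.save(f"datasets/synthetic_cars/images/car_{i:04d}.png")

# 2) After annotating the generated images with bounding boxes in YOLO
#    format, train the detector on the synthetic dataset. The dataset YAML
#    and the yolov8n variant are hypothetical placeholders.
model = YOLO("yolov8n.pt")
model.train(data="synthetic_cars.yaml", epochs=100, imgsz=640)
```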
Enabling Vehicle Search Through Robust Licence Plate Detection
- Authors: Boby, Alden; Brown, Dane L; Connan, James; Marais, Marc; Kuhlane, Luxolo L
- Date: 2023
- Subjects: To be catalogued
- Language: English
- Type: text; article
- Identifier: http://hdl.handle.net/10962/463372; vital:76403; https://ieeexplore.ieee.org/abstract/document/10220508
- Description: Licence plate recognition has many practical applications in security and surveillance. This paper presents a robust licence plate detection system that uses string-matching algorithms to identify a vehicle in the data. Object detection models have had limited application in the character recognition domain; here, the YOLO object detection model performs character recognition, yielding more accurate character predictions. The system incorporates super-resolution techniques to enhance the quality of licence plate images and thereby increase character recognition accuracy. The proposed system accurately detects licence plates in diverse conditions and handles plates with varying fonts and backgrounds. Its effectiveness is demonstrated through experiments on the system's components, which show promising licence plate detection and character recognition accuracy. The complete system combines these components to track vehicles by matching a target string against the licence plates detected in a scene (a minimal sketch of this matching step follows this record). It has potential applications in law enforcement, traffic management, and parking systems, and can significantly advance surveillance and security through automation.
- Full Text:
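A minimal sketch of the vehicle-search step described above: comparing a target plate string against plates read from a scene and flagging close matches. The paper does not name its string-matching algorithm; difflib's similarity ratio is used here as a stand-in, and the function names and 0.8 threshold are illustrative:

```python
# Sketch: fuzzy string matching between a target licence plate and the
# plates produced by a character recogniser. A threshold below 1.0
# tolerates single-character OCR errors.
from difflib import SequenceMatcher

def plate_similarity(target: str, detected: str) -> float:
    """Similarity in [0, 1] between two plate strings."""
    return SequenceMatcher(None, target.upper(), detected.upper()).ratio()

def find_vehicle(target: str, detections: list[str], threshold: float = 0.8):
    """Return detected plates similar enough to the target to count as a match."""
    return [d for d in detections if plate_similarity(target, d) >= threshold]

# Example: a plate with one misread character still matches the target.
print(find_vehicle("CA123456", ["CA123456", "CA12B456", "GP998877"]))
# -> ['CA123456', 'CA12B456']
```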
Exploring the Incremental Improvements of YOLOv5 on Tracking and Identifying Great White Sharks in Cape Town
- Authors: Kuhlane, Luxolo L; Brown, Dane L; Boby, Alden
- Date: 2023
- Subjects: To be catalogued
- Language: English
- Type: text; article
- Identifier: http://hdl.handle.net/10962/464107; vital:76476; https://link.springer.com/chapter/10.1007/978-3-031-37963-5_98
- Description: Scientists use information about great white sharks to better understand these marine organisms and to reduce their risk of extinction. Sharks play an important role in the ocean, yet this role is under-appreciated by the general public, which results in negative attitudes towards them. Tracking and identifying sharks is currently done through manual labour, which is time-consuming and not very accurate. This paper uses a deep learning approach to identify and track great white sharks in Cape Town. YOLO, a popular object detection system, is implemented to identify the sharks, and ESRGAN is used to upscale low-quality images from the datasets into higher-quality images before they are passed to YOLO (a minimal sketch of this two-stage pipeline follows this record). The main focus of this paper is training the system to identify great white sharks in difficult conditions such as murky water or unclear deep-sea footage.
- Full Text:
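A minimal sketch of the two-stage pipeline described above: super-resolve a low-quality frame, then run YOLOv5 detection on it. YOLOv5 is loaded through its real torch.hub entry point; the ESRGAN step is represented by a bicubic-resize stand-in, since the record does not specify which ESRGAN implementation was used, and fine-tuning on shark imagery is assumed to have been done separately:

```python
# Sketch: enhance-then-detect pipeline for low-quality underwater footage.
import torch
from PIL import Image

# Pretrained YOLOv5 detector via torch.hub.
detector = torch.hub.load("ultralytics/yolov5", "yolov5s")

def upscale(image: Image.Image, scale: int = 4) -> Image.Image:
    # Stand-in for the ESRGAN step: plain bicubic upscaling. The paper's
    # pipeline would use an ESRGAN super-resolution model here instead.
    return image.resize(
        (image.width * scale, image.height * scale), Image.Resampling.BICUBIC
    )

def detect_sharks(frame: Image.Image):
    enhanced = upscale(frame)        # enhance murky or low-resolution footage
    results = detector(enhanced)     # run YOLOv5 on the enhanced frame
    return results.pandas().xyxy[0]  # detections as a pandas DataFrame
```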
Spatiotemporal Convolutions and Video Vision Transformers for Signer-Independent Sign Language Recognition
- Authors: Marais, Marc; Brown, Dane L; Connan, James; Boby, Alden
- Date: 2023
- Subjects: To be catalogued
- Language: English
- Type: text; article
- Identifier: http://hdl.handle.net/10962/463478; vital:76412; https://ieeexplore.ieee.org/abstract/document/10220534
- Description: Sign language is a vital means of communication for individuals who are deaf or hard of hearing. Sign language recognition (SLR) technology can help bridge the communication gap between deaf and hearing individuals. However, existing SLR systems are typically signer-dependent, requiring training data from the specific signer for accurate recognition. This presents a significant challenge for practical use, as collecting data from every possible signer is not feasible. This research develops a signer-independent isolated SLR system to address this challenge. Two model variants are implemented on the signer-independent datasets: an R(2+1)D spatiotemporal convolutional network and a Video Vision Transformer (ViViT). These models learn to extract features from raw sign language videos in the LSA64 dataset and classify signs without needing handcrafted features, explicit segmentation, or pose estimation. The R(2+1)D architecture significantly outperformed ViViT for signer-independent SLR on LSA64, achieving a near-perfect accuracy of 99.53% on the unseen test set against ViViT's 72.19%, demonstrating that spatiotemporal convolutions are effective for signer-independent SLR. (A minimal R(2+1)D classification sketch follows this record.)
- Full Text:
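A minimal sketch of an R(2+1)D classifier for isolated signs, assuming torchvision's r2plus1d_18 backbone. The record does not give the paper's exact head, input resolution, or training setup, so the 16-frame 112x112 clips shown here are illustrative:

```python
# Sketch: R(2+1)D video classifier adapted to LSA64's 64 sign classes.
import torch
import torch.nn as nn
from torchvision.models.video import r2plus1d_18

# R(2+1)D factorises each 3D convolution into a 2D spatial convolution
# followed by a 1D temporal convolution. Replace the classification head
# with one sized for LSA64's 64 sign classes.
model = r2plus1d_18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 64)

# A batch of video clips: (batch, channels, frames, height, width).
clips = torch.randn(2, 3, 16, 112, 112)
logits = model(clips)            # shape (2, 64): one score per sign class
predictions = logits.argmax(1)   # predicted sign index for each clip
```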