Journal: Information, Vol. 14, No. 10 (2023)
ARTICLE
TITLE

Transfer Learning-Based YOLOv3 Model for Road Dense Object Detection

Chunhua Zhu, Jiarui Liang and Fei Zhou

Abstract

Road dense object detection suffers from poor object identification performance and an inability to recognize edge objects, owing to object overlap and undertraining caused by few samples. To address this, a transfer learning-based YOLOv3 approach for identifying dense objects on the road is proposed. First, the Darknet-53 network structure is adopted to obtain a pre-trained YOLOv3 model. Then, transfer training is introduced as the output layer for a special dataset of 2000 images containing vehicles. In the proposed model, a random function is used to initialize and optimize the weights of the transfer-training model, which is designed separately from the pre-trained YOLOv3. The object detection classifier replaces the fully connected layer, which further improves the detection effect. The reduced size of the network model also shortens training and detection time, so the method can be better applied to actual scenarios. The experimental results demonstrate that the object detection accuracy of the presented approach is 87.75% on the Pascal VOC 2007 dataset, surpassing the traditional YOLOv3 and YOLOv5 by 4% and 0.59%, respectively. The method was also tested on UA-DETRAC, a public road vehicle detection dataset: detection accuracy on images reaches 79.23%, which is 4.13% higher than the traditional YOLOv3 and 1.36% higher than the relatively new object detection algorithm YOLOv5. Moreover, the detection speed of the proposed YOLOv3 reaches 31.2 FPS on images, 7.6 FPS faster than the traditional YOLOv3 and 1.5 FPS faster than the new object detection algorithm YOLOv7. In video detection, the proposed YOLOv3 performs 67.36 billion floating-point operations per second, which is clearly fewer than the traditional YOLOv3 and the newer object detection algorithm YOLOv5.
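The transfer-learning recipe summarized in the abstract (a pre-trained Darknet-53/YOLOv3 feature extractor, a separately designed output layer whose weights are randomly initialized, and training restricted to the transferred layers on the target vehicle dataset) can be sketched in PyTorch as follows. This is a minimal illustration of the freeze-and-reinitialize pattern, not the authors' implementation; the TinyDetector module, the checkpoint path, and the optimizer settings are assumptions made only for the example.

import torch
import torch.nn as nn

# Minimal stand-in for the detector: a feature-extracting "backbone"
# (Darknet-53 in the paper) followed by a YOLO-style prediction "head".
# Module names, checkpoint path, and hyperparameters are hypothetical.
class TinyDetector(nn.Module):
    def __init__(self, num_anchors=3, num_classes=20):
        super().__init__()
        self.backbone = nn.Sequential(   # pre-trained feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
        )
        # 1x1 prediction layer: (x, y, w, h, objectness) + class scores per anchor
        self.head = nn.Conv2d(64, num_anchors * (5 + num_classes), 1)

    def forward(self, x):
        return self.head(self.backbone(x))

model = TinyDetector()
# model.load_state_dict(torch.load("pretrained_yolov3.pt"))  # hypothetical checkpoint

# Freeze the pre-trained backbone so only the transferred layers are updated.
for p in model.backbone.parameters():
    p.requires_grad = False

# Randomly re-initialize the new output layer, mirroring the paper's use of a
# random function to initialize the weights of the transfer-training model.
nn.init.normal_(model.head.weight, mean=0.0, std=0.01)
nn.init.zeros_(model.head.bias)

# Optimize only the trainable (head) parameters on the target vehicle dataset.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3, momentum=0.9
)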

Similar articles

Shahbaz Sikandar, Rabbia Mahum and AbdulMalik Alsalman    
The multimedia content generated by devices and image processing techniques requires high computation costs to retrieve images similar to the user's query from the database. An annotation-based traditional system of image retrieval is not coherent becaus...
Journal: Applied Sciences

 
Sibel Kapan and Efnan Sora Gunal    
In phishing attack detection, machine learning-based approaches are more effective than simple blacklisting strategies, as they can adapt to new types of attacks and do not require manual updates. However, for these approaches, the choice of features and...
Journal: Applied Sciences

 
Abigail Copiaco, Leena El Neel, Tasnim Nazzal, Husameldin Mukhtar and Walid Obaid    
This study introduces an innovative all-in-one malware identification model that significantly enhances convenience and resource efficiency in classifying malware across diverse file types. Traditional malware identification methods involve the extractio...
Journal: Applied Sciences

 
Jiawen Li, Yun Yang, Xin Li, Jiahua Sun and Ronghui Li    
Vessel monitoring technology involves the application of remote sensing technologies to detect and identify vessels in various environments, which is critical for monitoring vessel traffic, identifying potential threats, and facilitating maritime safety ...

 
Ulises Manuel Ramirez-Alcocer, Edgar Tello-Leal, Gerardo Romero and Bárbara A. Macías-Hernández    
In this paper, we propose a deep learning-based approach to predict the next event in hospital organizational process models following the guidance of predictive process mining. This method provides value for the planning and allocation of resources sinc...
Journal: Information