Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-RG23129).
Abstract: Vision-based vehicle detection in adverse weather conditions such as fog, haze, and mist is a challenging research area in the fields of autonomous vehicles, collision avoidance, and Internet of Things (IoT)-enabled edge/fog computing traffic surveillance and monitoring systems. Efficient and cost-effective vehicle detection at high accuracy and speed in foggy weather is essential for avoiding road traffic collisions in real time. To evaluate vision-based vehicle detection performance in foggy weather conditions, the state-of-the-art Vehicle Detection in Adverse Weather Nature (DAWN) and Foggy Driving (FD) datasets are self-annotated using the YOLO LABEL tool and customized to four vehicle detection classes: cars, buses, motorcycles, and trucks. The state-of-the-art single-stage deep learning algorithms YOLO-V5 and YOLO-V8 are considered for the task of vehicle detection. Furthermore, YOLO-V5s is enhanced by introducing the Convolutional Block Attention Module (CBAM), the Normalization-based Attention Module (NAM), and the Simple Attention Module (SimAM) after the SPPF module, and YOLO-V5l is enhanced with BiFPN. The vehicle detection accuracy and running speed of these models are validated on cloud (Google Colab) and edge (local) systems. On the DAWN dataset, the mAP50 score of YOLO-V5n is 72.60%, YOLO-V5s is 75.20%, YOLO-V5m is 73.40%, and YOLO-V5l is 77.30%; YOLO-V8n is 60.20%, YOLO-V8s is 73.50%, YOLO-V8m is 73.80%, and YOLO-V8l is 72.60%. On the FD dataset, the mAP50 score of YOLO-V5n is 43.90%, YOLO-V5s is 40.10%, YOLO-V5m is 49.70%, and YOLO-V5l is 57.30%; YOLO-V8n is 41.60%, YOLO-V8s is 46.90%, YOLO-V8m is 42.90%, and YOLO-V8l is 44.80%. On the DAWN dataset, the vehicle detection speed of YOLO-V5n is 59 frames per second (FPS), YOLO-V5s is 47 FPS, YOLO-V5m is 38 FPS, and YOLO-V5l is 30 FPS; YOLO-V8n is 185 FPS, YOLO-V8s is 109 FPS, YOLO-V8m is 72 FPS, and YOLO-V8l is 63 FPS. On the FD dataset, the vehicle detection speed of YOLO-V5n is 26 FPS, YOLO-V5s is 24 FPS, YOLO-V5m is 22 FPS, and YOLO-V5l is 17 FPS; YOLO-V8n is 313 FPS, YOLO-V8s is 182 FPS, YOLO-V8m is 99 FPS, and YOLO-V8l is 60 FPS. YOLO-V5s, its attention-based variants, YOLO-V5l_BiFPN, and the YOLO-V8 algorithms are efficient and cost-effective solutions for real-time vision-based vehicle detection in foggy weather.
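The attention blocks named above are small drop-in modules. As one hedged illustration, the PyTorch sketch below implements a SimAM-style parameter-free attention layer of the kind that could be placed after an SPPF block; the lambda value, tensor shape, and placement are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free attention in the style of SimAM; illustrative sketch only."""
    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda  # regularizer in the energy function (assumed value)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        n = h * w - 1
        # Squared deviation of each activation from its per-channel spatial mean
        d = (x - x.mean(dim=[2, 3], keepdim=True)).pow(2)
        # Lower "energy" marks a more distinctive neuron, which receives a larger weight
        v = d.sum(dim=[2, 3], keepdim=True) / n
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5
        return x * torch.sigmoid(e_inv)

if __name__ == "__main__":
    feat = torch.randn(1, 512, 20, 20)   # e.g., a feature map coming out of SPPF (assumed shape)
    print(SimAM()(feat).shape)           # torch.Size([1, 512, 20, 20])
```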
Funding: National Natural Science Foundation of China (No. U1831123).
Abstract: The quality of the stator winding coil directly affects the performance of the motor. A dual-camera online machine vision method was designed to determine whether the coil leads and winding regions are qualified. A vision detection platform was built to capture individual winding images, and an image processing algorithm was used for image pre-processing, template matching, and positioning of the coil lead area to set up a coordinate system. After eliminating image noise by Blob analysis, an improved Canny algorithm was used to locate the paint-stripped region of the coil lead, reducing the processing time by about half compared with the standard Canny algorithm. The ShuffleNet V2-YOLOv5s model was trained on the coil winding region dataset, and the trained detection model was converted to the Open Neural Network Exchange (ONNX) format for detecting winding cross features, with an average accuracy of 99.0%. The software interface of the detection system was designed to perform qualified/unqualified discrimination tests on the workpieces, and the detection data were recorded and statistically analyzed. The results showed that the qualified-discrimination accuracy for the stator winding coil reached 96.2%, and the average detection time for a single workpiece was about 300 ms, of which the YOLOv5s detection took less than 30 ms.
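A rough sketch of the classical image-processing stage (template matching to locate the lead region, then edge detection) is shown below. It uses standard OpenCV calls rather than the paper's improved Canny or Blob analysis, and the file names and thresholds are placeholders.

```python
import cv2

# Placeholder file names; a real system would grab frames from the two cameras.
img = cv2.imread("winding.png", cv2.IMREAD_GRAYSCALE)
tmpl = cv2.imread("lead_template.png", cv2.IMREAD_GRAYSCALE)

# Template matching locates the coil lead area and anchors a local coordinate system.
res = cv2.matchTemplate(img, tmpl, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(res)
h, w = tmpl.shape
x, y = top_left
roi = img[y:y + h, x:x + w]

# Denoise, then run a standard Canny pass over the lead region
# (the paper's improved Canny is not reproduced here).
edges = cv2.Canny(cv2.GaussianBlur(roi, (5, 5), 0), 50, 150)
print(f"match score={score:.2f}, edge pixels={int((edges > 0).sum())}")
```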
Abstract: The famous F5 algorithm for computing Gröbner bases was presented by Faugère in 2002. The original version of F5 is given in programming code, so it is somewhat difficult to understand. In this paper, the F5 algorithm is simplified as F5B in a Buchberger style so that it is easy to understand and implement. In order to describe F5B, we introduce F5-reduction, which keeps the signature of a labeled polynomial unchanged after reduction. The equivalence between F5 and F5B is also shown. Finally, some versions of the F5 algorithm are illustrated.
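For context, here is a minimal sketch of the standard signature-based setup that this family of algorithms uses; the notation (lm for leading monomial, sig for signature) is conventional and may differ from the paper's own definitions.

```latex
% Fix f_1,\dots,f_m \in R = k[x_1,\dots,x_n] and a module order on R^m.
\[
  \text{A labeled polynomial is a pair } (\mathbf{u}, f) \text{ with } \mathbf{u}\in R^m
  \text{ and } f = u_1 f_1 + \cdots + u_m f_m, \qquad
  \operatorname{sig}(\mathbf{u}, f) := \operatorname{lm}(\mathbf{u}).
\]
% F5-reduction: a reducer may only be used when it cannot disturb the signature.
\[
  (\mathbf{u}, f) \text{ is F5-reducible by } (\mathbf{v}, g) \iff
  \operatorname{lm}(g)\mid\operatorname{lm}(f) \ \text{ and } \
  \operatorname{lm}(t\,\mathbf{v}) \prec \operatorname{lm}(\mathbf{u}),
  \quad t := \frac{\operatorname{lm}(f)}{\operatorname{lm}(g)}.
\]
% The reduction step (\mathbf{u} - c\,t\,\mathbf{v},\, f - c\,t\,g), with c the ratio of
% leading coefficients, therefore leaves the signature \operatorname{lm}(\mathbf{u}) unchanged.
```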
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 11071062 and 11271208, and the Scientific Research Fund of Hunan Province Education Department under Grant Nos. 10A033 and 12C0130.
Abstract: The GVW algorithm was given by Gao, Wang, and Volny for computing a Gröbner basis of an ideal in a polynomial ring, and it is much faster and simpler than F5. In this paper, the authors generalize the GVW algorithm and present an algorithm to compute a Gröbner basis for an ideal when the coefficient ring is a principal ideal domain.
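As background on the principal-ideal-domain setting (standard Buchberger-style theory over a PID, not the paper's signature-based construction), the critical pairs must also account for leading coefficients, roughly as sketched below.

```latex
% Over a PID R, take f, g with leading terms a\,X^{\alpha} and b\,X^{\beta},
% and set X^{\gamma} = \operatorname{lcm}(X^{\alpha}, X^{\beta}).
\[
  \operatorname{Spoly}(f,g) = \frac{c}{a}\,X^{\gamma-\alpha} f - \frac{c}{b}\,X^{\gamma-\beta} g,
  \qquad c = \operatorname{lcm}(a,b),
\]
\[
  \operatorname{Gpoly}(f,g) = r\,X^{\gamma-\alpha} f + s\,X^{\gamma-\beta} g,
  \qquad \gcd(a,b) = r a + s b \ \text{(B\'ezout)}.
\]
% A finite set is a (strong) Gr\"obner basis when all such S- and G-polynomials reduce
% to zero; a signature-based generalization would additionally track module
% representations (signatures) of these polynomials.
```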