Pneumonia is one of the leading causes of death in children. It is generally diagnosed from chest X-ray images. With the development of Deep Learning (DL), DL-based pneumonia diagnosis has received extensive attention. However, because the difference between pneumonia and normal images is small, the performance of DL methods still has room for improvement. This research proposes a new fine-grained Convolutional Neural Network (CNN) for children's pneumonia diagnosis (FG-CPD). Firstly, fine-grained CNN classification, which can handle slight differences between images, is investigated. To obtain the chest region from the real-world chest X-ray data, the YOLOv4 algorithm is trained to detect and localize the chest part in the raw images. Secondly, a novel attention network, named SGNet, is proposed; it integrates the spatial and channel information of the images to locate the discriminative parts of the chest image and thereby enlarge the difference between pneumonia and normal images. Thirdly, an automatic data augmentation method is adopted to increase the diversity of the images and avoid overfitting of FG-CPD. FG-CPD has been tested on the public Chest X-ray 2017 dataset, where it achieved strong results. FG-CPD was then tested on real chest X-ray images of children aged 3–12 years from Tongji Hospital. The results show that FG-CPD achieved up to 96.91% accuracy, which validates its potential.
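The combination of spatial and channel attention described for SGNet can be sketched as follows. This is a generic CBAM-style toy in plain Python, not the authors' SGNet: the sigmoid gating, the pooling choices, and the feature values are all illustrative assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def attend(feat):
    """Apply channel attention, then spatial attention, to a C x H x W
    feature map given as nested lists. Channel weights come from
    per-channel global average pooling; spatial weights come from the
    cross-channel mean at each pixel."""
    C, H, W = len(feat), len(feat[0]), len(feat[0][0])
    # one weight per channel (channel attention)
    ch_w = [sigmoid(sum(v for row in feat[c] for v in row) / (H * W))
            for c in range(C)]
    # one weight per pixel (spatial attention)
    sp_w = [[sigmoid(sum(feat[c][i][j] for c in range(C)) / C)
             for j in range(W)] for i in range(H)]
    return [[[feat[c][i][j] * ch_w[c] * sp_w[i][j]
              for j in range(W)] for i in range(H)] for c in range(C)]

feat = [[[1.0, 0.0], [0.0, 1.0]],   # channel 0
        [[0.5, 0.5], [0.5, 0.5]]]   # channel 1
out = attend(feat)
```

Multiplying the two attention maps elementwise suppresses uninformative channels and background pixels at the same time, which is the effect the abstract attributes to combining spatial and channel information.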
This paper contributes to leguminous seed detection and smart farming. There are hundreds of kinds of seeds, and it can be very difficult to distinguish between them, although botanists and those who study plants can identify the type of seed at a glance. To the best of our knowledge, this is the first work to consider images of leguminous seeds with different backgrounds, sizes, and levels of crowding. Machine learning is used to automatically classify and locate 11 different seed types. The 11 leguminous seed types chosen as the objects of this study differ in color, size, and shape, adding variety and complexity to the research. The image dataset of leguminous seeds was manually collected and annotated, then split randomly into three sub-datasets (train, validation, and test) with a ratio of 80%, 10%, and 10%, respectively. The images capture the variability between the different leguminous seed types. They were taken against five different backgrounds: white A4 paper, a black pad, a dark blue pad, a dark green pad, and a green pad. Different heights and shooting angles were considered, and the crowdedness of the seeds varied randomly between 1 and 50 seeds per image. Different combinations and arrangements of the 11 types were included. Two image-capturing devices were used: a SAMSUNG smartphone camera and a Canon digital camera. A total of 828 images were obtained, containing 9801 seed objects (labels). The dataset thus covers different backgrounds, heights, angles, crowdedness, arrangements, and combinations.
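The random 80/10/10 split described above can be sketched as follows; the file-name pattern and the random seed are hypothetical, not taken from the paper.

```python
import random

def split_dataset(items, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle a list of image paths and split it into
    train/validation/test with the 80/10/10 ratio used above."""
    items = list(items)
    random.Random(seed).shuffle(items)   # fixed seed for reproducibility
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

images = [f"img_{i:04d}.jpg" for i in range(828)]  # 828 images, as in the dataset
train, val, test = split_dataset(images)
```

With 828 images this yields 662 training, 82 validation, and 84 test images; giving the remainder to the test split keeps every image assigned exactly once.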
The TensorFlow framework was used to construct the Faster Region-based Convolutional Neural Network (Faster R-CNN) model, while CSPDarknet53, a DenseNet-inspired backbone designed to connect layers within the convolutional network, was used as the backbone for YOLOv4. Using transfer learning, we optimized the seed detection models. The performances of the currently dominant object detection methods, Faster R-CNN and YOLOv4, were compared experimentally. The mAP (mean average precision) of the Faster R-CNN and YOLOv4 models was 84.56% and 98.52%, respectively. YOLOv4 also had a significant advantage in detection speed over Faster R-CNN, which makes it suitable for real-time identification where high accuracy and low false-positive rates are needed. The results showed that YOLOv4 had better accuracy and detection ability, as well as faster detection speed, beating Faster R-CNN by a large margin. The model can be effectively applied under a variety of backgrounds, image sizes, seed sizes, shooting angles, and shooting heights, as well as different levels of seed crowding. It constitutes an effective and efficient method for detecting different leguminous seeds in complex scenarios, and this study provides a reference for further seed testing and enumeration applications.
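The mAP figures above rest on matching predicted boxes to ground-truth boxes by Intersection over Union (IoU). A minimal IoU computation, assuming boxes in (x1, y1, x2, y2) corner format, looks like this:

```python
def iou(box_a, box_b):
    """Intersection-over-Union between two axis-aligned boxes given as
    (x1, y1, x2, y2). A detection typically counts as a true positive
    when its IoU with a ground-truth box exceeds a threshold (e.g. 0.5),
    which is the matching rule underlying mAP."""
    # corners of the intersection rectangle
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

overlap = iou((0, 0, 2, 2), (1, 1, 3, 3))  # -> 1/7, partial overlap
```

Averaging precision over recall levels per class, then over classes, gives the mAP values compared in the experiments.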
Violence recognition is crucial because of its applications in security and law enforcement. Existing semi-automated systems rely on tedious manual surveillance, which causes human errors and makes these systems less effective. Several approaches have been proposed using trajectory-based, non-object-centric, and deep-learning-based methods. Previous studies have shown that deep learning techniques attain higher accuracy and lower error rates than other methods; however, their performance must still be improved. This study explores state-of-the-art deep learning architectures, convolutional neural networks (CNNs) and Inception V4, to detect and recognize violence in video data. In the proposed framework, a keyframe extraction technique eliminates duplicate consecutive frames. This keyframing phase reduces the training data size and hence decreases the computational cost by avoiding duplicate frames. For the feature selection and classification tasks, the applied sequential CNN uses a single kernel size, whereas the Inception V4 CNN uses multiple kernel sizes across the layers of the architecture. For the empirical analysis, four widely used standard datasets with diverse activities are used. The results confirm that the proposed approach attains 98% accuracy, reduces the computational cost, and outperforms existing violence detection and recognition techniques.
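A keyframing phase that drops duplicate consecutive frames can be sketched with a simple frame-difference rule. This is a toy stand-in for the paper's keyframe extraction, not its actual method; the threshold and the flat-list frame representation are assumptions.

```python
def extract_keyframes(frames, threshold=0.1):
    """Keep a frame only if its mean absolute difference from the
    previously kept frame exceeds `threshold`, so runs of
    near-duplicate consecutive frames collapse to one keyframe.
    Frames are flat lists of pixel intensities in [0, 1]."""
    if not frames:
        return []
    kept = [frames[0]]
    for f in frames[1:]:
        diff = sum(abs(a - b) for a, b in zip(f, kept[-1])) / len(f)
        if diff > threshold:
            kept.append(f)
    return kept

frames = [[0.0, 0.0], [0.01, 0.0], [0.9, 0.9], [0.9, 0.91]]
keys = extract_keyframes(frames)  # the two near-duplicates are dropped
```

Shrinking the frame set before training is where the computational-cost reduction claimed in the abstract comes from.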
Deep learning has been improving steadily in recent years, and a significant number of researchers have devoted themselves to defect detection algorithms, yet the detection and recognition of small and complex targets remains an open problem. This research presents an improved defect detection model for small and complex defect targets on steel surfaces. During steel strip production, mechanical forces and environmental factors cause surface defects on the strip, so detecting such defects is key to producing high-quality products; moreover, these surface defects cause great economic losses to the high-tech industry. So far, few studies have explored methods of identifying the defects, and most of the currently available algorithms are not sufficiently effective. Therefore, this study presents an improved real-time metallic surface defect detection model based on You Only Look Once (YOLOv5), specially designed for small networks. To capture the smaller features of the target, the conventional convolution is replaced with depthwise convolution and a channel shuffle mechanism. Then, assigning weights to the Feature Pyramid Network (FPN) output features and fusing them increases feature propagation and the network's representational ability. The experimental results reveal that the proposed model outperforms comparable models in both accuracy and detection time: its precision, measured by mAP@0.5, reaches 77.5% on the Northeastern University dataset (NEU-DET) and 70.18% on the GC10-DET dataset.
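The channel shuffle mechanism mentioned above (popularized by ShuffleNet) can be shown on a plain list of channel indices; the group count and toy input are illustrative, not the paper's configuration.

```python
def channel_shuffle(channels, groups):
    """ShuffleNet-style channel shuffle: view the channel list as a
    (groups, channels_per_group) grid, transpose it, and flatten, so
    that information mixes across the groups of a grouped or
    depthwise convolution."""
    n = len(channels)
    assert n % groups == 0, "channel count must be divisible by groups"
    per = n // groups
    # take the i-th channel of each group in turn (transpose + flatten)
    return [channels[g * per + i] for i in range(per) for g in range(groups)]

# 6 channels in 2 groups: [0, 1, 2] | [3, 4, 5] become interleaved
shuffled = channel_shuffle([0, 1, 2, 3, 4, 5], groups=2)  # -> [0, 3, 1, 4, 2, 5]
```

Without this interleaving, grouped convolutions would never exchange information between groups, which is why the shuffle is paired with depthwise convolution in lightweight detectors.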
As the COVID-19 epidemic spread across the globe, people around the world were advised or required to wear masks in public places to prevent it from spreading further; in some cases, not wearing a mask could result in a fine. To monitor mask wearing, and to help prevent the spread of future epidemics, this study proposes an image recognition system consisting of a camera, an infrared thermal array sensor, and a convolutional neural network trained for mask recognition. The infrared sensor monitors body temperature and displays the results in real time on a liquid crystal display. The proposed system reduces the inefficiency of traditional object detection by providing training data tailored to the specific needs of the user and by applying You Only Look Once Version 4 (YOLOv4) object detection, which experiments show trains more efficiently and achieves higher accuracy in object recognition. All datasets are uploaded to the cloud for storage via Google Colaboratory, saving human resources and achieving high efficiency at low cost.
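The decision logic that fuses the two sensing paths, mask detection from the camera and body temperature from the infrared array, could look like the sketch below. The function name, message strings, and the 37.5 °C fever threshold are all assumptions for illustration; the study does not specify them.

```python
def screen(person_id, mask_detected, temp_c, fever_c=37.5):
    """Combine the detector's mask result with the infrared
    temperature reading and return a display status string.
    `fever_c` is an assumed alert threshold, not one from the study."""
    if temp_c >= fever_c:
        return f"{person_id}: ALERT - elevated temperature {temp_c:.1f} C"
    if not mask_detected:
        return f"{person_id}: WARN - no mask detected"
    return f"{person_id}: OK"

print(screen("visitor-1", True, 36.6))   # visitor-1: OK
print(screen("visitor-2", False, 36.4))  # visitor-2: WARN - no mask detected
```

Checking temperature before mask status means a feverish person is flagged even when correctly masked, which matches the system's public-health purpose.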
For traffic object detection in foggy environments based on convolutional neural networks (CNNs), datasets collected in fog-free environments are generally used to train the network directly. As a result, the network cannot learn the characteristics of objects in fog from the training set, and detection performance suffers. To improve traffic object detection in foggy environments, we propose a method of generating foggy images from fog-free images, approaching the problem from the perspective of dataset construction. First, taking the KITTI object detection dataset as the source of fog-free images, we generate a depth image for each original image using an improved Monodepth unsupervised depth estimation method. Then, a geometric-prior depth template is constructed, and the image entropy, taken as a weight, is fused with the depth image. After that, a foggy image is synthesized from the depth image based on the atmospheric scattering model. Finally, we take two typical object detection frameworks, the two-stage Faster region-based convolutional neural network (Faster R-CNN) and the one-stage network YOLOv4, and train them on the original dataset, the foggy dataset, and the mixed dataset, respectively. According to test results on the RESIDE-RTTS dataset, which contains outdoor natural foggy scenes, the models trained on the mixed dataset perform best: the mean average precision (mAP) increases by 5.6% for the YOLOv4 model and by 5.0% for the Faster R-CNN network. This shows that the proposed method can effectively improve object identification in foggy environments.
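The atmospheric scattering model used above to synthesize fog is I = J·t + A·(1 − t) with transmission t = exp(−β·d), where J is the clear pixel, d its depth, A the atmospheric light, and β the scattering coefficient. A minimal per-pixel sketch, with illustrative β and A values rather than the paper's:

```python
import math

def add_fog(pixel, depth, beta=0.05, airlight=0.8):
    """Atmospheric scattering model for fog synthesis:
    I = J * t + A * (1 - t), with transmission t = exp(-beta * depth).
    `pixel` (J) and `airlight` (A) are intensities in [0, 1]; the
    default `beta` and `airlight` are assumed, not from the paper."""
    t = math.exp(-beta * depth)          # light surviving the haze
    return pixel * t + airlight * (1.0 - t)

near = add_fog(0.2, depth=1.0)    # nearby pixel: barely changed
far = add_fog(0.2, depth=100.0)   # distant pixel: washed out toward A
```

Because t decays with depth, distant pixels converge to the airlight while nearby ones keep their original intensity, which is exactly why the depth image estimated by Monodepth is needed to place the fog realistically.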
Funding: supported in part by the Natural Science Foundation of China (NSFC) under Grant No. 51805192 and the Major Special Science and Technology Project of Hubei Province under Grant No. 2020AEA009, and sponsored by the State Key Laboratory of Digital Manufacturing Equipment and Technology (DMET) of Huazhong University of Science and Technology (HUST) under Grant No. DMETKF2020029.
Funding: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2018R1D1A1B07042967), and by the Soonchunhyang University Research Fund.