Abstract: The facility-based production method is an important stage in the development of modern agriculture, lifting natural light and temperature restrictions and helping to improve agricultural production efficiency. To address the difficulty and low accuracy of detecting pests and diseases in the dense production environment of tomato facilities, an online diagnosis platform for tomato plant diseases based on deep learning and cluster fusion was developed from images of eight major prevalent pests and diseases collected during the tomato growing period in a facility environment. The diagnostic platform consists of three main parts: pest and disease information detection, clustering and decision-making on the detection results, and platform diagnostic display. First, based on the You Only Look Once (YOLO) algorithm, key disease information was extracted by adding an attention module (CBAM), multi-scale feature fusion was performed using a weighted bi-directional feature pyramid network (BiFPN), and the overall network was designed to be compressed and lightweight. Second, a k-means clustering algorithm is fused with the deep-learning results to output pest identification decision values, further improving the accuracy of identification applications. Finally, a detection platform, including the system front end, back end, and database, was designed and developed in Python to realize online diagnosis of and interaction with tomato plant pests and diseases. Experiments show that the algorithm detects tomato plant diseases and insect pests with an mAP (mean Average Precision) of 92.7%, model weights of 12.8 MB, and an inference time of 33.6 ms. Compared with current mainstream single-stage detection algorithms, the improved model achieves better performance. The platform's diagnostic accuracy for output pest and disease information is 91.2% for images and 95.2% for videos. This work is of great significance for tomato pest-control research and the development of smart agriculture.
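The clustering step above can be pictured as a tiny one-dimensional k-means over detection confidence values. This is an illustrative sketch only; the platform's actual fusion logic is not specified here, and the score values are invented.

```python
def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D k-means: cluster detection confidences into k groups."""
    # Seed centers by taking evenly spaced sorted values.
    centers = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            # Assign each value to its nearest center.
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[idx].append(v)
        # Recompute each center as the mean of its group.
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sorted(centers)

# Hypothetical confidence scores: a high-confidence and a low-confidence group.
scores = [0.91, 0.88, 0.95, 0.12, 0.08, 0.15]
result = kmeans_1d(scores)
print(result)  # two cluster centers, low then high
```

A decision rule could then, for example, keep only detections assigned to the high-confidence cluster.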
Funding: funded by the Liaoning Provincial Department of Education Project, Award number JYTMS20230418.
Abstract: Pest detection techniques are helpful in reducing the frequency and scale of pest outbreaks; however, their application in actual agricultural production is still challenging owing to interspecies similarity, multi-scale variation, and the background complexity of pests. To address these problems, this study proposes FD-YOLO, a pest target detection model. FD-YOLO uses a Fully Connected Feature Pyramid Network (FC-FPN) instead of a PANet in the neck, which can adaptively fuse multi-scale information so that the model retains small-scale target features in the deep layers, enhances large-scale target features in the shallow layers, and increases the reuse of effective features. A dual self-attention (DSA) module is then embedded in the C3 module of the neck; it captures dependencies in both the spatial and channel dimensions, effectively enhancing global features. We selected 16 types of pests that widely damage field crops from the IP102 pest dataset, which served as our dataset after data supplementation and augmentation. The experimental results show that FD-YOLO's mAP@0.5 improved by 6.8% compared to YOLOv5, reaching 82.6%, and is 1.9%–5% better than other state-of-the-art models. This method provides an effective new approach for detecting similar or multi-scale pests in field crops.
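As a rough illustration of the adaptive-fusion idea behind an FC-FPN-style neck (a hypothetical sketch, not the authors' implementation; the function name and toy feature values are invented), multi-scale features resized to a common shape can be blended with softmax-normalized learnable weights:

```python
import math

def adaptive_fuse(features, raw_weights):
    """Fuse equally-shaped feature maps with softmax-normalized weights.

    features: list of equal-length lists of floats (feature maps already
    resized to a common shape). raw_weights: one unconstrained scalar per
    input; the softmax keeps fusion weights positive and summing to 1,
    letting training decide how much each scale contributes.
    """
    exps = [math.exp(w) for w in raw_weights]
    total = sum(exps)
    alphas = [e / total for e in exps]
    fused = [
        sum(a * f[i] for a, f in zip(alphas, features))
        for i in range(len(features[0]))
    ]
    return alphas, fused

# Three scales, already resized to the same length; equal raw weights
# give a uniform blend, so fused is the element-wise mean.
alphas, fused = adaptive_fuse(
    [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
    [0.0, 0.0, 0.0],
)
print(alphas, fused)  # alphas sum to 1; fused is approximately [3.0, 4.0]
```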
Funding: supported by the Natural Science Foundation of Jiangsu Province, China (BK20240977); the China Scholarship Council (201606850024); the National High Technology Research and Development Program of China (2016YFD0701003); and the Postgraduate Research & Practice Innovation Program of Jiangsu Province, China (SJCX23_1488).
Abstract: Deep learning-based intelligent recognition algorithms are increasingly recognized for their potential to address the labor-intensive challenge of manual pest detection. However, their deployment on mobile devices has been constrained by high computational demands. Here, we developed GBiDC-PEST, a mobile application that incorporates an improved, lightweight detection algorithm based on the You Only Look Once (YOLO) series single-stage architecture, for real-time detection of four tiny pests (wheat mites, sugarcane aphids, wheat aphids, and rice planthoppers). GBiDC-PEST incorporates several innovative modules: GhostNet for lightweight feature extraction and architecture optimization by reconstructing the backbone, a bi-directional feature pyramid network (BiFPN) for enhanced multi-scale feature fusion, depthwise convolution (DWConv) layers to reduce computational load, and a convolutional block attention module (CBAM) to enable precise feature focus. GBiDC-PEST was trained and validated on a multi-target agricultural tiny-pest dataset (Tpest-3960) covering various field environments. At 2.8 MB, GBiDC-PEST reduces the model size to only 20% of the original, making it smaller than the YOLO series (v5–v10), more accurate than YOLOv10n and v10s, and faster than v8s, v9c, v10m, and v10b. In Android deployment experiments, GBiDC-PEST demonstrated enhanced performance in detecting pests against complex backgrounds, and accuracy for wheat mites and rice planthoppers improved by 4.5%–7.5% compared with the original model. The GBiDC-PEST optimization algorithm and its mobile deployment offer a robust technical framework for rapid, on-site identification and localization of tiny pests. This advancement provides valuable insights for effective pest monitoring, counting, and control in various agricultural settings.
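The parameter saving that makes DWConv layers attractive on mobile devices can be seen with simple counting. The channel and kernel sizes below are hypothetical, chosen only to show the roughly k²-fold reduction of depthwise-separable convolution over a standard convolution:

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: every output channel mixes all input channels.
    return k * k * c_in * c_out

def dw_separable_params(k, c_in, c_out):
    # Depthwise step (one k x k filter per input channel)
    # plus a 1 x 1 pointwise convolution to mix channels.
    return k * k * c_in + c_in * c_out

std = conv_params(3, 128, 128)          # 147456 weights
dws = dw_separable_params(3, 128, 128)  # 1152 + 16384 = 17536 weights
print(std, dws, round(std / dws, 1))    # roughly an 8x reduction here
```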
Abstract: This study systematically addresses the limitations of traditional pest detection methods and proposes an optimized version of the YOLOv8 object detection model. By integrating the GhostConv convolution module and the C3Ghost module and incorporating the Polarized Self-Attention (PSA) mechanism, the model's capacity for extracting pest features is enhanced. Experimental results demonstrate that the improved YOLOv8+Ghost+PSA model achieves outstanding performance on critical metrics such as precision, recall, and mean Average Precision (mAP), at a computational cost of only 5.3 GFLOPs, making it highly suitable for deployment in resource-constrained agricultural environments.
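The Ghost-module idea behind GhostConv is to produce only a fraction of the output channels with a full convolution and derive the rest from cheap depthwise operations. The counting sketch below is illustrative (sizes are hypothetical, and it follows the general GhostNet formulation rather than this paper's exact configuration):

```python
def ghost_conv_params(k, c_in, c_out, s=2, d=3):
    """Bias-free parameter count for a Ghost-style module.

    A fraction 1/s of the output channels comes from a normal k x k
    convolution ('intrinsic' maps); the remaining channels are 'ghost'
    maps made by cheap d x d depthwise ops on those intrinsic channels.
    """
    intrinsic = c_out // s
    primary = k * k * c_in * intrinsic        # full conv on few channels
    cheap = d * d * intrinsic * (s - 1)       # depthwise ghost generation
    return primary + cheap

plain = 3 * 3 * 64 * 64               # ordinary 3x3 conv: 36864 weights
ghost = ghost_conv_params(3, 64, 64)  # 18432 + 288 = 18720 weights
print(plain, ghost)                   # close to a 2x saving for s=2
```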
Abstract: Addressing plant diseases and pests is of utmost importance for enhancing crop production and preventing economic losses. Recent advancements in artificial intelligence, machine learning, and deep learning have revolutionised the precision and efficiency of this process, surpassing the limitations of manual identification. This study comprehensively reviews modern computer-based techniques, including recent advances in artificial intelligence, for detecting diseases and pests through images. The paper categorises methodologies into hyperspectral imaging, non-visualisation techniques, visualisation approaches, modified deep learning architectures, and transformer models, helping researchers gain detailed insight. An exhaustive survey of recent works and comparative studies in this domain guides researchers in selecting appropriate, state-of-the-art methods for plant disease and pest detection. Additionally, the paper highlights the consistent superiority of modern AI-based approaches, which often outperform older image-analysis methods in both speed and accuracy. Further, the survey examines the efficiency of vision transformers against well-known deep learning architectures such as MobileNetV3, showing that the Hierarchical Vision Transformer (HvT) can achieve accuracy upwards of 99.3% in plant disease detection. The study concludes by addressing the challenges of designing such systems, proposing potential solutions, and outlining directions for future research in this field.
Funding: supported by the MKE (The Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2010-C1090-1021-0010).
Abstract: A novel method for pest detection based on multi-fractal spectrum theory is proposed to extract pests on plant leaves. Both local and global information about image regularity is obtained by multi-fractal analysis. By applying the fractal dimension, spots on leaf images can be extracted without losing or introducing any information. The different parts of the images are then re-analyzed with morphological operations to determine candidate pest regions. The performance of multi-fractal analysis for whitefly detection was investigated through greenhouse experiments. The experimental results show that the proposed method is robust to lighting noise and insensitive to complex environments.
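A standard way to estimate a fractal dimension from an image, and a building block of multi-fractal analysis, is box counting. The sketch below is a generic illustration (not this paper's method; the image and box sizes are toy values): count the boxes at each scale that contain foreground pixels, then fit the slope of log N(s) against log(1/s).

```python
import math

def box_counting_dimension(img, sizes=(1, 2, 4, 8)):
    """Estimate the box-counting dimension of a square binary image.

    img: list-of-lists of 0/1. For each box size s, count the boxes that
    contain any foreground pixel; the dimension is the least-squares
    slope of log N(s) versus log(1/s).
    """
    n = len(img)
    xs, ys = [], []
    for s in sizes:
        count = 0
        for bi in range(0, n, s):
            for bj in range(0, n, s):
                if any(img[i][j] for i in range(bi, bi + s)
                                 for j in range(bj, bj + s)):
                    count += 1
        xs.append(math.log(1.0 / s))
        ys.append(math.log(count))
    # Least-squares slope of ys on xs.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# A fully filled 16 x 16 patch behaves like a 2-D object: dimension 2.
solid = [[1] * 16 for _ in range(16)]
print(round(box_counting_dimension(solid), 3))
```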
Abstract: Image processing, agricultural production, and field monitoring are essential research areas. Plant diseases affect agricultural production and quality, and detecting them at a preliminary phase reduces economic losses and improves crop quality. Manually identifying agricultural pests on plants takes considerable time and is expensive. A drone system has been developed to gather photographs over large regions such as farms and plantations; closely monitoring such an environment generates vast amounts of data, and evaluating this big data can increase agricultural production. This paper aims to identify pests in mango trees such as hoppers, mealybugs, inflorescence midges, fruit flies, and stem borers. Because of the massive volumes of large-scale, high-dimensional big data collected, it is necessary to reduce the dimensionality of the input for image classification. The existing system used a community-based cumulative algorithm to classify the pests. The proposed method uses the Entropy-ELM method with the Whale Optimization Algorithm (WOA) for feature selection, enhancing the accuracy of mango pest classification. Support Vector Machines (SVMs), which are especially effective when users are interested in particular classes, serve as classifiers to categorize any big-data dataset effectively. The proposed Entropy-ELM-WOA is more capable than the existing systems.
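Entropy-guided feature selection, one ingredient of the pipeline above, can be sketched generically: rank discretized feature columns by Shannon entropy and keep the most informative ones. This is an illustrative simplification, not the Entropy-ELM-WOA method itself, and the feature names and values are invented:

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (in bits) of a discrete feature column."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def select_top_k(columns, k):
    """Rank feature columns by entropy and keep the k most informative."""
    ranked = sorted(columns, key=lambda name: shannon_entropy(columns[name]),
                    reverse=True)
    return ranked[:k]

features = {
    "constant": [1, 1, 1, 1],   # 0 bits: carries no information
    "binary":   [0, 1, 0, 1],   # 1 bit
    "four_way": [0, 1, 2, 3],   # 2 bits: most informative
}
print(select_top_k(features, 2))  # ['four_way', 'binary']
```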
Funding: supported by the "Regional Innovation Strategy (RIS)" program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (MOE) (2021RIS-003).
Abstract: In complex agricultural environments, cucumber disease identification is confronted with challenges such as symptom diversity, environmental interference, and poor detection accuracy. This paper presents DM-YOLO, an enhanced version of the YOLOv8 framework designed to improve detection accuracy for cucumber diseases. Traditional detection models struggle to identify small-scale and overlapping symptoms, especially when critical features are obscured by lighting variations, occlusion, and background noise. DM-YOLO combines three innovative modules that jointly enhance detection performance. First, the MultiCat module employs a multi-scale feature-processing strategy with adaptive pooling, decomposing input features into large, medium, and small scales. This ensures that high-level features are extracted and fused effectively, improving the detection of small and complex patterns often missed by traditional methods. Second, the ADC2f module incorporates an attention mechanism and depthwise separable convolution, allowing the model to focus on the most relevant regions of the input features while reducing computational load. This combination improves the identification and localization of diseases such as downy mildew and powdery mildew under lighting changes and occlusion. Finally, the C2fe module introduces a Global Context Block that uses attention to emphasize essential regions while suppressing irrelevant ones, enabling the model to capture more contextual information and improving detection in complex backgrounds and small-object scenarios. A custom cucumber disease dataset and the PlantDoc dataset were used for thorough evaluation. Experimental results show that DM-YOLO improved mean Average Precision (mAP50) by 1.2 percentage points on the custom dataset and 3.2 percentage points on the PlantDoc dataset over the baseline YOLOv8. These results highlight the model's enhanced ability to detect small-scale and overlapping disease symptoms, demonstrating its effectiveness and robustness in diverse agricultural monitoring environments. The improved model shows significant progress over the original algorithm and is strongly competitive with other advanced object detection models.
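The multi-scale decomposition described for the MultiCat module can be illustrated with adaptive average pooling, shown here in a 1-D toy form (a hypothetical sketch, not the module's actual implementation; the feature values and scale sizes are invented):

```python
def avg_pool(feature, out_len):
    """Adaptive 1-D average pooling: shrink `feature` to `out_len` bins."""
    n = len(feature)
    pooled = []
    for i in range(out_len):
        # Integer bin boundaries cover the input exactly once.
        start = i * n // out_len
        end = (i + 1) * n // out_len
        chunk = feature[start:end]
        pooled.append(sum(chunk) / len(chunk))
    return pooled

feature = [1.0, 3.0, 5.0, 7.0, 2.0, 4.0, 6.0, 8.0]
# Decompose one feature into "large", "medium", and "small" scales.
scales = {size: avg_pool(feature, size) for size in (8, 4, 2)}
print(scales[2])  # [4.0, 5.0]
```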
Funding: supported by the Collaborative Innovation Center Project of the Guangdong Academy of Agricultural Sciences, China (XTXM202202).
Abstract: Cruciferous vegetables are important edible vegetable crops. However, they are susceptible to various pests during their growth, which calls for real-time, accurate monitoring of these pests for forecasting and scientific control. Hanging yellow sticky boards is a common way to monitor and trap pests that are attracted to the yellow color. To achieve real-time, low-cost, intelligent monitoring of these vegetable pests on the boards, we established an intelligent monitoring system consisting of a smart camera, a web platform, and a pest detection algorithm deployed on a server. After the operator sets the monitoring preset points and shooting times of the camera on the system platform, the camera in the field automatically collects images of multiple yellow sticky boards at fixed places and times every day. The pests trapped on the yellow sticky boards in vegetable fields, Plutella xylostella, Phyllotreta striolata, and flies, are very small and susceptible to deterioration and breakage, which increases the difficulty of model detection. To solve the problem of poor recognition caused by the small size and broken bodies of the pests, we propose an intelligent pest detection algorithm based on an improved Cascade R-CNN model for three important cruciferous crop pests. The algorithm uses an overlapping sliding window method, an improved Res2Net network as the backbone network, and a recursive feature pyramid network as the neck network. Field tests show that the algorithm achieves good detection results for the three target pests on the yellow sticky board images, with precision levels of 96.5, 92.2, and 75.0%, recall levels of 96.6, 93.1, and 74.7%, respectively, and an F1 value of 0.880. Compared with other algorithms, our algorithm has a significant advantage in detecting small target pests. To accurately obtain the data for the newly added pests each day, a two-stage pest matching algorithm was proposed. The algorithm performed well and achieved results highly consistent with manual counting, with a mean error of only 2.2%. This intelligent monitoring system realizes precise, well-visualized, and intelligent vegetable pest monitoring, which is of great significance as it provides farmers with an effective pest prevention and control option.
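The abstract names an overlapping sliding window method for handling very small pests on large sticky-board images, but gives no parameters. Below is a minimal, hypothetical sketch of how such tiling can work, so that a pest lying on a tile border appears whole in at least one overlapping tile; the window size and overlap values are illustrative assumptions, not details from the paper.

```python
def window_starts(length, win, stride):
    """Start offsets of windows of size `win` stepping by `stride`,
    with a final window clamped to the image edge so nothing is missed."""
    if length <= win:
        return [0]
    starts = list(range(0, length - win + 1, stride))
    if starts[-1] + win < length:
        starts.append(length - win)
    return starts

def sliding_windows(height, width, win=512, overlap=128):
    """Overlapping (x1, y1, x2, y2) crops covering the whole image."""
    stride = win - overlap
    return [(x, y, x + win, y + win)
            for y in window_starts(height, win, stride)
            for x in window_starts(width, win, stride)]
```

Detections from each crop would then be shifted back by the crop offset and deduplicated (e.g. by non-maximum suppression) before counting.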
Abstract: In this study, we propose Space-to-Depth and You Only Look Once Version 7 (SPD-YOLOv7), an accurate and efficient method for detecting pests in maize crops, addressing challenges such as small pest sizes, blurred images, low resolution, and significant species variation across growth stages. To improve the model's generalization ability and robustness, we incorporate target background analysis, data augmentation, and processing techniques such as Gaussian noise and brightness adjustment. In target detection, increasing the depth of the neural network can lead to the loss of small-target information. To overcome this, we introduce the Space-to-Depth Convolution (SPD-Conv) module into the SPD-YOLOv7 framework, replacing certain convolutional layers in the traditional backbone and head networks. This modification helps retain small-target features and location information. Additionally, the Efficient Layer Aggregation Network-Wide (ELAN-W) module is combined with the Convolutional Block Attention Module (CBAM) attention mechanism to extract features more efficiently. Experimental results show that the enhanced YOLOv7 model achieves an accuracy of 98.38% and an average accuracy of 99.4%, outperforming the original YOLOv7 model. These improvements represent increases of 2.46% in accuracy and 3.19% in average accuracy. The results indicate that the enhanced YOLOv7 model is more efficient and better suited to real-time use, offering valuable insights for maize pest control.
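SPD-Conv is built on a space-to-depth rearrangement that downsamples by trading spatial resolution for channels, keeping every pixel, unlike a strided convolution, which discards rows and columns. A minimal NumPy sketch of that rearrangement (the non-strided convolution that follows it in SPD-Conv is omitted here):

```python
import numpy as np

def space_to_depth(x, block=2):
    """Rearrange an (H, W, C) array into (H/block, W/block, C*block^2).
    Every pixel is preserved, so small-target detail survives downsampling."""
    h, w, c = x.shape
    assert h % block == 0 and w % block == 0
    x = x.reshape(h // block, block, w // block, block, c)
    x = x.transpose(0, 2, 1, 3, 4)  # gather each block's pixels together
    return x.reshape(h // block, w // block, c * block * block)
```

Each 2x2 spatial block of the input becomes one output position with four times the channels, which a following stride-1 convolution can then mix.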
Fund: supported by King Saud University, Riyadh, Saudi Arabia, through the Researchers Supporting Project under Grant RSPD2025R697.
Abstract: Preservation of crops depends on early and accurate detection of pests, as they cause several diseases that decrease crop production and quality. Several deep-learning techniques have been applied to the problem of pest detection on crops. We developed the YOLOCSP-PEST model for pest localization and classification. With a Cross Stage Partial Network (CSPNet) backbone, the proposed model is a modified version of You Only Look Once Version 7 (YOLOv7) intended primarily for pest localization and classification. The proposed model gives exceptionally good results under conditions that are very challenging for comparable models, especially where the luminance and orientation of the images are problematic. It helps farmers working on crops in remote areas to detect any infestation quickly and accurately, which supports both the quality and quantity of the production yield. The model was trained and tested on two datasets, namely the IP102 dataset and a local crop dataset, and showed exceptional results on both. It achieved a mean average precision (mAP) of 88.40%, with a precision of 85.55% and a recall of 84.25%, on the IP102 dataset, and a mAP of 97.18%, with a recall of 94.88% and a precision of 97.50%, on the local dataset. These findings demonstrate that the proposed model is very effective in real-life detection scenarios and can help improve both the quality and quantity of crop yields.
Fund: the National Key Research and Development Program of China Project (Grant No. 2021YFD2000700), the Foundation for University Youth Key Teacher of Henan Province (Grant No. 2019GGJS075), and the Natural Science Foundation of Henan Province (Grant No. 202300410124).
Abstract: Facility-based production is an important stage in the development of modern agriculture, lifting natural light and temperature restrictions and helping to improve agricultural production efficiency. To address the difficulty and low accuracy of detecting pests and diseases in the dense production environment of tomato facilities, an online diagnosis platform for tomato plant diseases based on deep learning and cluster fusion was proposed, built on images of eight major prevalent pests and diseases collected during the tomato growing period in a facility environment. The diagnostic platform consists of three main parts: pest and disease information detection, clustering and decision-making on detection results, and platform diagnostic display. First, based on the You Only Look Once (YOLO) algorithm, key disease information was extracted by adding an attention module (CBAM), multi-scale feature fusion was performed using a weighted bi-directional feature pyramid network (BiFPN), and the overall architecture was designed to be compressed and lightweight. Second, the k-means clustering algorithm is fused with the deep learning results to output pest identification decision values, further improving the accuracy of identification in application. Finally, a detection platform was designed and developed using Python, including the front end, back end, and database of the system, to realize online diagnosis of and interaction with tomato plant pests and diseases. Experiments show that the algorithm detects tomato plant diseases and insect pests with a mAP (mean average precision) of 92.7%, weights of 12.8 MB, and an inference time of 33.6 ms. Compared with current mainstream single-stage detection algorithms, the improved model achieves better performance. The accuracy of the pest and disease information output by the platform is 91.2% for images and 95.2% for videos. This work is of great significance to tomato pest control research and the development of smart agriculture.
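The abstract only names k-means fusion of detection results without giving its formulation. As a generic, hypothetical sketch of the idea, repeated detections of the same target (for example, across video frames) can be consolidated by clustering detection-box centers with plain k-means and then issuing one decision per cluster; the cluster count and data here are illustrative assumptions.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on 2-D points (e.g. detection-box centers)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each point to its nearest center
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers
```

Per-cluster decisions (e.g. majority vote over the class labels of the member detections) can then stand in for the raw, possibly redundant per-frame outputs.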