Accurately identifying crop pests and diseases ensures agricultural productivity and safety. Although current YOLO-based detection models offer real-time capabilities, their conventional convolutional layers involve high computational redundancy and a fixed receptive field, making it challenging to capture local details and global semantics simultaneously in complex scenarios. This leads to significant issues such as missed detections of small targets and heightened sensitivity to background interference. To address these challenges, this paper proposes a lightweight adaptive detection network, StarSpark-AdaptiveNet (SSANet), which optimizes features through a dual-module collaborative mechanism. Specifically, the StarNet module utilizes depthwise separable convolutions (DW-Conv) and dynamic star operations to establish multi-stage feature extraction pathways, enhancing local detail perception within a lightweight framework. Moreover, the Multi-scale Adaptive Spatial Attention Gate (MASAG) module integrates cross-layer feature fusion and dynamic weight allocation to capture multi-scale global contextual information, effectively suppressing background noise. Together, these modules form a "local enhancement-global calibration" bidirectional optimization mechanism, significantly improving the model's adaptability to complex disease patterns. Furthermore, the proposed Scale-based Dynamic Loss (SD Loss) dynamically adjusts the weights of the scale and localization losses, improving regression stability and localization accuracy, especially for small targets. Experiments on the eggplant fruit disease dataset demonstrate that SSANet achieves an mAP50 of 83.9% and a detection speed of 273.5 FPS with only 2.11 M parameters and 5.1 GFLOPs of computation, outperforming the baseline YOLO11 model by reducing parameters by 18.1%, increasing mAP50 by 1.3%, and improving inference speed by 9.1%. Ablation studies further confirm the effectiveness and complementarity of the modules. SSANet offers a high-accuracy, low-cost solution suitable for real-time pest and disease detection in crops, facilitating edge-device deployment and promoting precision agriculture.
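The lightweight design above rests on depthwise separable convolutions; the parameter savings they provide over a standard convolution can be illustrated with a quick count. This is a generic sketch of the depthwise + pointwise factorization, not SSANet's actual layer configuration:

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution (bias omitted)."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

# Illustrative layer: 64 -> 128 channels with a 3 x 3 kernel.
standard = conv_params(64, 128, 3)           # 73,728 parameters
separable = dw_separable_params(64, 128, 3)  # 576 + 8,192 = 8,768 parameters
print(standard, separable, round(standard / separable, 1))
```

For this layer the factorization is roughly an 8x reduction, which is the kind of saving that lets a detector stay around 2 M parameters.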
[Objective] Crop line extraction is critical for improving the efficiency of autonomous agricultural machines in the field. However, traditional detection methods struggle to maintain high accuracy and efficiency under challenging conditions such as strong light exposure and weed interference. The aim is to develop an effective crop line extraction method by combining YOLOv8-G, Affinity Propagation, and the Least Squares method to enhance detection accuracy and performance in complex field environments. [Methods] The proposed method employs machine vision techniques to address common field challenges. YOLOv8-G, an improved object detection algorithm that combines YOLOv8 and GhostNetV2 for lightweight, high-speed performance, was used to detect the central points of crops. These points were then clustered using the Affinity Propagation algorithm, followed by the application of the Least Squares method to extract the crop lines. Comparative tests were conducted to evaluate multiple backbone networks within the YOLOv8 framework, and ablation studies were performed to validate the enhancements made in YOLOv8-G. [Results and Discussions] The performance of the proposed method was compared with classical object detection and clustering algorithms. The YOLOv8-G algorithm achieved average precision (AP) values of 98.22%, 98.15%, and 97.32% for corn detection at 7, 14, and 21 days after emergence, respectively. Additionally, the crop line extraction accuracy across all stages was 96.52%. These results demonstrate the model's ability to maintain high detection accuracy despite challenging field conditions. [Conclusions] The proposed crop line extraction method effectively addresses field challenges such as lighting and weed interference, enabling rapid and accurate crop identification. This approach supports the automatic navigation of agricultural machinery, offering significant improvements in the precision and efficiency of field operations.
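The final step of the pipeline above, fitting a straight line through the clustered plant centers, is ordinary least squares; a minimal self-contained sketch (the center coordinates below are invented for illustration):

```python
def fit_line(points):
    """Least-squares fit of y = a*x + b through (x, y) points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical plant centers detected along one crop row (image pixels).
centers = [(10, 52), (20, 61), (30, 69), (40, 81), (50, 90)]
a, b = fit_line(centers)
print(round(a, 3), round(b, 3))  # slope and intercept of the crop line
```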
The agriculture sector has immense potential to meet food demand and supply healthy, nutritious food. Crop insect detection is a challenging task for farmers, as a significant portion of crops is damaged and quality is degraded by pest attacks. Traditional insect identification has the drawback of requiring well-trained taxonomists to identify insects accurately based on morphological features. Experiments were conducted on classification of nine and 24 insect classes from the Wang and Xie datasets using shape features and machine learning techniques such as artificial neural networks (ANN), support vector machines (SVM), k-nearest neighbors (KNN), naive Bayes (NB), and a convolutional neural network (CNN) model. This paper presents an insect pest detection algorithm consisting of foreground extraction and contour identification to detect insects against highly complex backgrounds in the Wang, Xie, Deng, and IP102 datasets. 9-fold cross-validation was applied to improve the performance of the classification models. The highest classification rates of 91.5% and 90% were achieved for the nine- and 24-class insects using the CNN model. Detection was accomplished with low computation time on the Wang, Xie, Deng, and IP102 datasets using the insect pest detection algorithm. Comparison with state-of-the-art classification algorithms showed considerable improvement in classification accuracy and computation time, with efficient application to insect recognition in field crops. The classification results can be used to recognize crop insects at early stages, reducing response time and enhancing crop yield and quality in agriculture.
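The 9-fold cross-validation mentioned above simply partitions the samples into nine folds, training on eight and validating on the held-out one; a minimal index-splitting sketch (the sample count is illustrative, not the size of any of the datasets):

```python
def k_fold_indices(n_samples, k):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation."""
    indices = list(range(n_samples))
    # Distribute any remainder across the first n_samples % k folds.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, val
        start += size

folds = list(k_fold_indices(90, 9))
print(len(folds), len(folds[0][1]), len(folds[0][0]))  # 9 folds: 10 val, 80 train
```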
Autonomous navigation in farmland is one of the key technologies for achieving autonomous management of maize fields. Among various navigation techniques, visual navigation using widely available RGB images is a cost-effective solution. However, current mainstream methods for maize crop row detection often rely on highly specialized, manually devised heuristic rules, limiting their scalability. To simplify the solution and enhance its universality, we propose an innovative crop row annotation strategy. By modeling the strip-like structure of the crop row's central area, this strategy effectively avoids interference from the lateral growth of crop leaves. Based on this, we developed a deep learning network with a dual-branch architecture, InstaCropNet, which achieves end-to-end segmentation of crop row instances. Subsequently, through the row anchor segmentation technique, we accurately locate the positions of different crop row instances and perform line fitting. Experimental results demonstrate that our method has an average angular deviation of no more than 2°, and the accuracy of crop row detection reaches 96.5%.
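The average angular deviation reported above can be computed from the slopes of a detected row line and its reference line; a small sketch with hypothetical slopes (not InstaCropNet's evaluation code):

```python
import math

def angular_deviation_deg(slope_a, slope_b):
    """Absolute angle in degrees between two lines given by their slopes."""
    return abs(math.degrees(math.atan(slope_a) - math.atan(slope_b)))

# Hypothetical detected vs. ground-truth row slopes.
print(round(angular_deviation_deg(1.00, 1.05), 2))
```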
Mapping hazelnut orchards can facilitate land planning and utilization policies, supporting the development of cooperative precision farming systems. The present work addresses the detection of hazelnut crops using optical and radar remote sensing data, presenting a comparative study of machine learning techniques. The proposed system utilizes multi-temporal data from the Sentinel-1 and Sentinel-2 datasets, extracted over several years and processed with cloud tools. We provide a dataset of 62,982 labeled samples, with 16,561 samples belonging to the 'hazelnut' class and 46,421 samples belonging to the 'other' class, collected in 8 heterogeneous geographical areas of the Viterbo province. Two comparative tests are conducted: first, we use a nested 5-fold cross-validation methodology to train, optimize, and compare different machine learning algorithms on a single area. In a second experiment, the algorithms were trained on one area and tested on the remaining seven geographical areas. The study demonstrates that AI analysis applied to Sentinel-1 and Sentinel-2 data is a valid technology for hazelnut mapping. The results show that Random Forest is the classifier with the highest generalizability, achieving the best performance in the second test with an accuracy of 96% and an F1 score of 91% for the 'hazelnut' class.
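The accuracy and per-class F1 figures above follow from the standard confusion-matrix definitions; a small sketch (the counts below are invented, not those of the hazelnut experiment):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard per-class metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for the 'hazelnut' class.
p, r, f1 = precision_recall_f1(tp=900, fp=100, fn=80)
print(round(p, 3), round(r, 3), round(f1, 3))
```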
The objective of this research was to develop an uncut crop edge detection system for a combine harvester. A laser rangefinder (LF) was selected as the primary sensor, combined with a pan-tilt unit (PTU) and an inertial measurement unit (IMU). Three-dimensional field information can be obtained as the PTU rotates the laser rangefinder in the vertical plane. A field profile was modeled by analyzing the range data. Otsu's method was used to detect the crop edge position on each scanning profile, and the least squares method was applied to fit the uncut crop edge. The fundamental performance of the system was first evaluated under laboratory conditions. Then, validation experiments were conducted under both static and dynamic conditions in a wheat field during the harvesting season. To verify the error of the detection system, the true position of the edge was measured by GPS for accuracy evaluation. The results showed an average lateral error of ±12 cm, with a root-mean-square error (RMSE) of 3.01 cm, for the static test, and an average lateral error of ±25 cm, with an RMSE of 10.15 cm, for the dynamic test. The proposed laser rangefinder-based uncut crop edge detection system exhibited satisfactory edge detection performance under different field conditions and can provide reliable information for further study.
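Otsu's method, used here to locate the crop edge on each scanned profile, selects the threshold that maximizes the between-class variance of the two resulting groups; a generic 1-D sketch on hypothetical range readings (not the authors' implementation):

```python
def otsu_threshold(values):
    """Return the split value maximizing between-class variance
    over a 1-D sample (exhaustive search over observed values)."""
    best_t, best_var = None, -1.0
    candidates = sorted(set(values))
    for t in candidates[:-1]:  # both classes must stay non-empty
        lo = [v for v in values if v <= t]
        hi = [v for v in values if v > t]
        w0, w1 = len(lo) / len(values), len(hi) / len(values)
        m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Hypothetical range readings: low values = uncut crop, high = cut stubble.
profile = [1.1, 1.2, 1.0, 1.3, 2.8, 3.0, 2.9, 3.1]
print(otsu_threshold(profile))  # splits the two clusters at 1.3
```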
Unmanned aerial vehicle (UAV) photography has become the main power system inspection method; however, automated fault detection remains a major challenge. Conventional algorithms encounter difficulty in processing all the detected objects in power transmission lines simultaneously. Object detection based on deep learning provides a new approach to fault detection. However, the traditional non-maximum suppression (NMS) algorithm fails to delete redundant annotations when dealing with objects carrying two labels, such as insulators and dampers. In this study, we propose an area-based non-maximum suppression (A-NMS) algorithm to solve the problem of one object having multiple labels. The A-NMS algorithm is used in the fusion stage of cropping detection to detect small objects. Experiments show that A-NMS and cropping detection achieve a mean average precision and recall of 88.58% and 91.23%, respectively, on the aerial image datasets, realizing multi-object fault detection in aerial images.
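Standard NMS, which A-NMS modifies, greedily keeps the highest-scoring box and suppresses any remaining box whose IoU with it exceeds a threshold; a generic sketch (the area-based criterion of A-NMS itself is not reproduced here):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Indices of boxes surviving standard non-maximum suppression."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep

# Two heavily overlapping detections of one object, plus a distant one.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # the lower-scoring overlapping box is suppressed
```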
Crop row detection in maize fields remains a challenging problem due to variation in illumination and weed interference under field conditions. This study proposed an algorithm for detecting crop rows based on adaptive multi-region of interest (multi-ROI). First, the image was segmented into crop and soil and divided into several horizontally labeled strips. Feature points were located in the first image strip and an initial ROI was determined. Then, the ROI window was shifted upward, and the operations for the previous strip were repeated for the next strip until multiple ROIs were obtained. Finally, the least squares method was used to extract navigation lines and detection lines in the multi-ROI. The detection accuracy of the method was 95.3%, and the average computation time was 240.8 ms. The results suggest that the proposed method has generally favorable performance and can meet the real-time and accuracy requirements for field navigation.
This study compares the spectral sensitivity of remotely sensed satellite images used for the detection of archaeological remains. The comparison was based on the relative spectral response (RSR) filters of each sensor. Spectral signature profiles were obtained using the GER-1500 field spectroradiometer under clear-sky conditions for eight different targets. These field spectral signature curves were simulated for ALOS, ASTER, IKONOS, Landsat 7 ETM+, Landsat 4 TM, Landsat 5 TM, and SPOT 5. Red and near-infrared (NIR) bandwidth reflectance were recalculated for each of these sensors using the appropriate RSR filters. Moreover, the normalised difference vegetation index (NDVI) and simple ratio (SR) vegetation profiles were analysed in order to evaluate their sensitivity to the sensors' spectral filters. The results showed that IKONOS RSR filters can better distinguish buried archaeological remains through the difference between healthy and stressed vegetation (approximately 18% difference in reflectance in the red and NIR bands and nearly 0.07 in the NDVI profile). In comparison, all the other sensors showed similar results and sensitivities. This difference for the IKONOS sensor may result from its spectral characteristics (bandwidths and RSR filters), which differ from those of the other sensors compared in this study.
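The two vegetation indices compared above are simple band combinations with standard definitions; the reflectance values in the example are hypothetical, not measurements from this study:

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def simple_ratio(nir, red):
    """Simple Ratio vegetation index."""
    return nir / red

# Hypothetical reflectance values typical of healthy vegetation.
print(round(ndvi(0.45, 0.08), 3), round(simple_ratio(0.45, 0.08), 3))
```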
Estimation of damage in plants is a key issue for crop protection. Currently, experts in the field manually assess the plots, a time-consuming task that can be automated thanks to the latest technology in computer vision (CV). Image-based systems, and more recently deep learning-based systems, have provided good results in several agricultural applications. These image-based applications outperform expert evaluation in controlled environments, and they are now being progressively adopted in non-controlled field applications. A novel solution based on deep learning techniques in combination with image processing methods is proposed to tackle the estimation of plant damage in the field. The proposed solution is a two-stage algorithm. In the first stage, the single plants in the plots are detected by a YOLO-based object detection model. Then a regression model is applied to estimate the damage of each individual plant. The solution has been developed and validated on oilseed rape plants to estimate the damage caused by flea beetle. The crop detection model achieves a mean average precision of 91%, with a mAP@0.50 of 0.99 and a mAP@0.95 of 0.91 for oilseed rape specifically. The regression model, estimating up to 60% damage degree in single plants, achieves an MAE of 7.11 and an R2 of 0.46 in comparison with manual plant-by-plant evaluations done by experts. The models are deployed in a Docker container and, via a REST API communication protocol, inference can be run directly on images acquired in the field from a mobile device.
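The MAE and R2 used to score the damage-regression model follow the standard definitions; a minimal sketch on made-up expert vs. predicted damage scores (not the paper's data):

```python
def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical expert vs. model damage scores (percent).
expert = [10, 25, 40, 55, 30]
model = [14, 22, 45, 50, 33]
print(round(mae(expert, model), 2), round(r2(expert, model), 3))
```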
Accurate extraction of crop rows is very important for the automation of agricultural production. Crop rows are required for accurate machine guidance in operations such as fertilization, plant protection, weeding, and harvesting. In this study, an efficient crop row detection algorithm called Crop-BiSeNet V2 was proposed, which combined BiSeNet V2 with a spatial convolutional neural network. The proposed Crop-BiSeNet V2 detects crop rows in color images without the use of thresholds or other prior information such as the number of rows. A dataset of 2,697 maize crop images was constructed under challenging field trial conditions such as variable light, shadows, presence of weeds, and irregular crop shapes. The proposed system was experimentally shown to overcome the interference of various complex scenes, and it can be applied to different numbers of crop rows, both straight and curved. Different analyses were performed to check the robustness of the algorithm. Compared with the Fully Convolutional Network (FCN) algorithm, it exhibited superior performance and saved 84.85 ms. The accuracy rate reached 0.9811, and the detection speed reached 65.54 ms/frame. The Crop-BiSeNet V2 algorithm proposed in this study shows strong generalization performance for seedling crop row recognition, providing highly reliable technical support for crop row detection research and assisting the study of intelligent field operation machinery navigation.
Funding (SSANet study): supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (NRF-2022R1A2C2012243).
Funding (InstaCropNet study): the Anhui Provincial University Research Program (2023AH040138) and the National Natural Science Foundation of China (32271998, 52075092).
Funding (uncut crop edge detection study): the China Scholarship Council, the Chinese Universities Scientific Fund (ZD2013015), and the Research Fund for the Doctoral Program of Higher Education of China (20130204110020).
Funding (A-NMS study): the National Grid Corporation Headquarters Science and Technology Project: Key Technology Research, Equipment Development, and Engineering Demonstration of Artificial-Intelligence-Driven Electric Vehicle Smart Travel Service (No. 52020118000G).
Funding (multi-ROI study): the National Key Research and Development Program of China (Grant No. 2017YFD0700902) and the University Synergy Innovation Program of Anhui Province (Grant No. GXXT-2020-011).
Funding (Crop-BiSeNet V2 study): the National Key R&D Program of China (Grant No. 2021YFB3901302) and Shandong Province, China (Grant No. 2021YFB3901300).