Funding: Partially funded by the JSPS KAKENHI Grant Number JP22K18004.
Abstract: Crop yield is a crucial metric in agriculture, essential for effective sector management and for improving the overall production process. This indicator is heavily influenced by numerous environmental factors, particularly those related to soil and climate, and predicting it is challenging due to the complex interactions involved. In this paper, we introduce a novel integrated neurosymbolic framework that combines knowledge-based approaches with sensor data for crop-yield prediction. The framework merges predictions from vectors generated by modeling environmental factors with a newly developed ontology focused on key elements, evaluates this ontology quantitatively using representation learning techniques, and combines the result with predictions derived from remote sensing imagery. We tested the proposed methodology on a public dataset centered on corn. Our model achieved promising results, with a root mean squared error (RMSE) of 1.72, outperforming the baseline models: the ontology-based approach alone achieved an RMSE of 1.73, while the remote sensing-based method yielded an RMSE of 1.77. This confirms the superior performance of the proposed approach over single-modality alternatives. The integrated neurosymbolic approach demonstrates that fusing statistical and symbolic artificial intelligence (AI) represents a significant advance for agricultural applications. It is particularly effective for crop-yield prediction at the field scale, facilitating more informed decision-making in advanced agricultural practices. Results might be further improved by incorporating more detailed ontological knowledge and by testing the model with higher-resolution imagery.
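The abstract compares RMSE values for the fused model against each single modality. As a minimal illustration of that evaluation setup, the sketch below fuses two modality-specific prediction lists with a simple weighted average and scores each against observed yields; the function names and all numeric values are hypothetical and are not taken from the paper's data or its fusion method.

```python
import math

def rmse(predictions, targets):
    """Root mean squared error between predicted and observed yields."""
    return math.sqrt(
        sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)
    )

def fuse(ontology_preds, remote_sensing_preds, weight=0.5):
    """Weighted late fusion of the two per-field prediction lists."""
    return [weight * o + (1 - weight) * r
            for o, r in zip(ontology_preds, remote_sensing_preds)]

# Illustrative (made-up) per-field yield values, not the paper's data.
observed       = [10.0, 12.5, 9.0, 11.0]
ontology_preds = [10.8, 11.9, 9.6, 11.5]
rs_preds       = [9.5, 13.2, 8.4, 10.4]

fused = fuse(ontology_preds, rs_preds)
print(round(rmse(ontology_preds, observed), 3))  # ontology-only RMSE
print(round(rmse(rs_preds, observed), 3))        # remote-sensing-only RMSE
print(round(rmse(fused, observed), 3))           # fused RMSE
```

With these toy numbers the two modalities' errors partly cancel, so the fused RMSE comes out lower than either single-modality RMSE, mirroring the qualitative pattern the abstract reports.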
Funding: Digitalized PV plant project financed by the OCP group [Reference: Digital and smart photovoltaic power plant].
Abstract: This paper presents a holistic multimodal approach for real-time anomaly detection and classification in large-scale photovoltaic plants. The approach encompasses segmentation, geolocation, and classification of individual photovoltaic modules. A fine-tuned YOLOv7 model was trained to segment individual modules in both modalities: RGB and IR images. The localization of individual solar panels relies on photogrammetric measurements to facilitate maintenance operations; the localization process also links extracted images of the same panel via their geographical coordinates and preprocesses them as input to the multimodal model. The study also optimizes pre-trained models using Bayesian search to fine-tune them on our dataset, which was collected from different systems and technologies within our research platform and curated into 1841 images across five anomaly classes. Grad-CAM, an explainable-AI tool, is used to compare multimodal against single-modality inputs. Finally, the model was converted to the ONNX format to optimize it further for real-time deployment. The improved ConvNeXt-Tiny model performed well in both modalities, with 99% precision, recall, and F1-score for binary classification and 85% for multi-class classification. In terms of latency, the segmentation models have inference times of 14 ms and 12 ms for RGB and IR images, respectively, and 24 ms for detection and classification. The proposed holistic approach includes a built-in feedback loop to ensure the model's robustness against domain shifts in the production environment.
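The binary classification results above are reported as precision, recall, and F1-score. As a minimal sketch of how those three metrics are defined for a normal/anomalous labeling task, the function below computes them from plain label lists; the example labels are invented for illustration and are not the paper's evaluation data.

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Precision, recall, and F1 for one class, from parallel label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative labels: 1 = anomalous module, 0 = healthy module (made up).
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 1, 1, 0, 0, 0]
p, r, f1 = precision_recall_f1(y_true, y_pred)
print(round(p, 2), round(r, 2), round(f1, 2))  # → 0.75 0.75 0.75
```

For the five-class anomaly setting, the same per-class computation would typically be averaged across classes (macro-averaging is one common choice), which is consistent with reporting a single 85% figure for multi-class classification.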