Atherosclerotic diseases is a diffuse process that involves the coronaries, carotids, renals and all other peripheral arteries owing to the systemic nature of atherosclerotic pathophysiology. This systemic precipitant...Atherosclerotic diseases is a diffuse process that involves the coronaries, carotids, renals and all other peripheral arteries owing to the systemic nature of atherosclerotic pathophysiology. This systemic precipitants that promote aggressive atherogenesis have been confirmed in multiple studies showing a relationship between atherosclerotic disease in one vascular bed with disease in another. However, the strength of this relationship varies from patient to patient. Thus, the practical utility of the diffuse nature of atheresclerosis is questionable. Ge and colleagues have proposed the use of left main (LM)coronary artery disease as a potential marker for left anterior descending (lAD) atherosclerotic disease. At first thought, this seems useless since the evaluation of the LM (by angiography or IVUS) can just as easily be performed in the LAD so why bother searching for such a surrogate? However, newer (non-invasive) imaging modalifies are making great gains and will be able to reliably image the LM sooner than the LAD (especially the distal LAD) so such a surrogate could have practical applications.展开更多
The Jain logic (Nay) addresses concerns elicited by sense experience of observable and measurable reality. Reality is what it is, and it exists independent of the observer. The last Jain Tirthankar Mahaveer suggeste...The Jain logic (Nay) addresses concerns elicited by sense experience of observable and measurable reality. Reality is what it is, and it exists independent of the observer. The last Jain Tirthankar Mahaveer suggested that organisms interact with such realities for survival needs and become concerned about the consequences. He suggested a code of conduct for reality-based behaviors to address concerns. Perceptions and impressions provide measures (praman) of information in sense experience, and with other evidence guide choices and decisions to act and bear consequences. Ethical behaviors rooted in reality have desirable consequences, and inconsistent and contradictory behaviors are undesirable consequences. Omniscience (God, Brahm) is discarded as a self-referential ad hoc construct inconsistent and contradictory to real world behaviors. This article is survey of assumptions and models to represent, interpret, and validate knowledge that begins with logical deduction for inference (anuman) based on evidence from sense experience (Jain 2011). Secular and atheistic thrust of thought and practice encourages reasoning and open-ended search with affirmed assertions and independent evidence. Individual identity (atm) emerges with consistent behaviors to overcome fallibility and unreliability by minimizing doubt (Syad-Saptbhangi Nay). The first Tirthankar Rishabh Nath (ca. 2700 BC) suggested that the content (sat) of real and abstract objects and concerns during a change is conserved as the net balance of the inputs and outputs (Tatia 1994). Identity and content of assertions and evidence is also conserved during logical manipulations for reasoning. Each assertion and its negation are to be affirmed with independent evidence, and lack of evidence for presence is not necessarily the evidence for either non-absence or non-existence.展开更多
Robustness against measurement uncertainties is crucial for gas turbine engine diagnosis.While current research focuses mainly on measurement noise,measurement bias remains challenging.This study proposes a novel perf...Robustness against measurement uncertainties is crucial for gas turbine engine diagnosis.While current research focuses mainly on measurement noise,measurement bias remains challenging.This study proposes a novel performance-based fault detection and identification(FDI)strategy for twin-shaft turbofan gas turbine engines and addresses these uncertainties through a first-order Takagi-Sugeno-Kang fuzzy inference system.To handle ambient condition changes,we use parameter correction to preprocess the raw measurement data,which reduces the FDI’s system complexity.Additionally,the power-level angle is set as a scheduling parameter to reduce the number of rules in the TSK-based FDI system.The data for designing,training,and testing the proposed FDI strategy are generated using a component-level turbofan engine model.The antecedent and consequent parameters of the TSK-based FDI system are optimized using the particle swarm optimization algorithm and ridge regression.A robust structure combining a specialized fuzzy inference system with the TSK-based FDI system is proposed to handle measurement biases.The performance of the first-order TSK-based FDI system and robust FDI structure are evaluated through comprehensive simulation studies.Comparative studies confirm the superior accuracy of the first-order TSK-based FDI system in fault detection,isolation,and identification.The robust structure demonstrates a 2%-8%improvement in the success rate index under relatively large measurement bias conditions,thereby indicating excellent robustness.Accuracy against significant bias values and computation time are also evaluated,suggesting that the proposed robust structure has desirable online performance.This study proposes a novel FDI strategy that effectively addresses measurement uncertainties.展开更多
Existing blockwise empirical likelihood(BEL)method blocks the observations or their analogues,which is proven useful under some dependent data settings.In this paper,we introduce a new BEL(NBEL)method by blocking the ...Existing blockwise empirical likelihood(BEL)method blocks the observations or their analogues,which is proven useful under some dependent data settings.In this paper,we introduce a new BEL(NBEL)method by blocking the scoring functions under high dimensional cases.We study the construction of confidence regions for the parameters in spatial autoregressive models with spatial autoregressive disturbances(SARAR models)with high dimension of parameters by using the NBEL method.It is shown that the NBEL ratio statistics are asymptoticallyχ^(2)-type distributed,which are used to obtain the NBEL based confidence regions for the parameters in SARAR models.A simulation study is conducted to compare the performances of the NBEL and the usual EL methods.展开更多
The primary objective of this study is to measure fluoride levels in groundwater samples using machine learning approaches alongside traditional and fuzzy logic models based health risk assessment in the hard rock Arj...The primary objective of this study is to measure fluoride levels in groundwater samples using machine learning approaches alongside traditional and fuzzy logic models based health risk assessment in the hard rock Arjunanadi River basin,South India.Fluoride levels in the study area vary between 0.1 and 3.10 mg/L,with 32 samples exceeding the World Health Organization(WHO)standard of 1.5 mg/L.Hydrogeochemical analyses(Durov and Gibbs)clearly show that the overall water chemistry is primarily influenced by simple dissolution,mixing,and rock-water interactions,indicating that geogenic sources are the predominant contributors to fluoride in the study area.Around 446.5 km^(2)is considered at risk.In predictive analysis,five Machine Learning(ML)models were used,with the AdaBoost model performing better than the other models,achieving 96%accuracy and 4%error rate.The Traditional Health Risk Assessment(THRA)results indicate that 65%of samples pose highly susceptible for dental fluorosis,while 12%of samples pose highly susceptible for skeletal fluorosis in young age groups.The Fuzzy Inference System(FIS)model effectively manages ambiguity and linguistic factors,which are crucial when addressing health risks linked to groundwater fluoride contamination.In this model,input variables include fluoride concentration,individual age,and ingestion rate,while output variables consist of dental caries risk,dental fluorosis,and skeletal fluorosis.The overall results indicate that increased ingestion rates and prolonged exposure to contaminated water make adults and the elderly people vulnerable to dental and skeletal fluorosis,along with very young and young age groups.This study is an essential resource for local authorities,healthcare officials,and communities,aiding in the mitigation of health risks associated with groundwater contamination and enhancing quality of life through improved water management and health risk assessment,aligning with Sustainable Development Goals(SDGs)3 and 6,thereby contributing to a cleaner and healthier society.展开更多
Edge Machine Learning(EdgeML)and Tiny Machine Learning(TinyML)are fast-growing fields that bring machine learning to resource-constrained devices,allowing real-time data processing and decision-making at the network’...Edge Machine Learning(EdgeML)and Tiny Machine Learning(TinyML)are fast-growing fields that bring machine learning to resource-constrained devices,allowing real-time data processing and decision-making at the network’s edge.However,the complexity of model conversion techniques,diverse inference mechanisms,and varied learning strategies make designing and deploying these models challenging.Additionally,deploying TinyML models on resource-constrained hardware with specific software frameworks has broadened EdgeML’s applications across various sectors.These factors underscore the necessity for a comprehensive literature review,as current reviews do not systematically encompass the most recent findings on these topics.Consequently,it provides a comprehensive overview of state-of-the-art techniques in model conversion,inference mechanisms,learning strategies within EdgeML,and deploying these models on resource-constrained edge devices using TinyML.It identifies 90 research articles published between 2018 and 2025,categorizing them into two main areas:(1)model conversion,inference,and learning strategies in EdgeML and(2)deploying TinyML models on resource-constrained hardware using specific software frameworks.In the first category,the synthesis of selected research articles compares and critically reviews various model conversion techniques,inference mechanisms,and learning strategies.In the second category,the synthesis identifies and elaborates on major development boards,software frameworks,sensors,and algorithms used in various applications across six major sectors.As a result,this article provides valuable insights for researchers,practitioners,and developers.It assists them in choosing suitable model conversion techniques,inference mechanisms,learning strategies,hardware development boards,software frameworks,sensors,and algorithms tailored to their specific needs and applications across various sectors.展开更多
In order to solve the problems of high experimental cost of ammunition,lack of field test data,and the difficulty in applying the ammunition hit probability estimation method in classical statistics,this paper assumes...In order to solve the problems of high experimental cost of ammunition,lack of field test data,and the difficulty in applying the ammunition hit probability estimation method in classical statistics,this paper assumes that the projectile dispersion of ammunition is a two-dimensional joint normal distribution,and proposes a new Bayesian inference method of ammunition hit probability based on normal-inverse Wishart distribution.Firstly,the conjugate joint prior distribution of the projectile dispersion characteristic parameters is determined to be a normal inverse Wishart distribution,and the hyperparameters in the prior distribution are estimated by simulation experimental data and historical measured data.Secondly,the field test data is integrated with the Bayesian formula to obtain the joint posterior distribution of the projectile dispersion characteristic parameters,and then the hit probability of the ammunition is estimated.Finally,compared with the binomial distribution method,the method in this paper can consider the dispersion information of ammunition projectiles,and the hit probability information is more fully utilized.The hit probability results are closer to the field shooting test samples.This method has strong applicability and is conducive to obtaining more accurate hit probability estimation results.展开更多
Offshore drilling costs are high,and the downhole environment is even more complex.Improving the rate of penetration(ROP)can effectively shorten offshore drilling cycles and improve economic benefits.It is difficult f...Offshore drilling costs are high,and the downhole environment is even more complex.Improving the rate of penetration(ROP)can effectively shorten offshore drilling cycles and improve economic benefits.It is difficult for the current ROP models to guarantee the prediction accuracy and the robustness of the models at the same time.To address the current issues,a new ROP prediction model was developed in this study,which considers ROP as a time series signal(ROP signal).The model is based on the time convolutional network(TCN)framework and integrates ensemble empirical modal decomposition(EEMD)and Bayesian network causal inference(BN),the model is named EEMD-BN-TCN.Within the proposed model,the EEMD decomposes the original ROP signal into multiple sets of sub-signals.The BN determines the causal relationship between the sub-signals and the key physical parameters(weight on bit and revolutions per minute)and carries out preliminary reconstruction of the sub-signals based on the causal relationship.The TCN predicts signals reconstructed by BN.When applying this model to an actual production well,the average absolute percentage error of the EEMD-BN-TCN prediction decreased from 18.4%with TCN to 9.2%.In addition,compared with other models,the EEMD-BN-TCN can improve the decomposition signal of ROP by regulating weight on bit and revolutions per minute,ultimately enhancing ROP.展开更多
Unmanned Aerial Vehicles(UAVs)coupled with deep learning such as Convolutional Neural Networks(CNNs)have been widely applied across numerous domains,including agriculture,smart city monitoring,and fire rescue operatio...Unmanned Aerial Vehicles(UAVs)coupled with deep learning such as Convolutional Neural Networks(CNNs)have been widely applied across numerous domains,including agriculture,smart city monitoring,and fire rescue operations,owing to their malleability and versatility.However,the computation-intensive and latency-sensitive natures of CNNs present a formidable obstacle to their deployment on resource-constrained UAVs.Some early studies have explored a hybrid approach that dynamically switches between lightweight and complex models to balance accuracy and latency.However,they often overlook scenarios involving multiple concurrent CNN streams,where competition for resources between streams can substantially impact latency and overall system performance.In this paper,we first investigate the deployment of both lightweight and complex models for multiple CNN streams in UAV swarm.Specifically,we formulate an optimization problem to minimize the total latency across multiple CNN streams,under the constraints on UAV memory and the accuracy requirement of each stream.To address this problem,we propose an algorithm called Adaptive Model Switching of collaborative inference for MultiCNN streams(AMSM)to identify the inference strategy with a low latency.Simulation results demonstrate that the proposed AMSM algorithm consistently achieves the lowest latency while meeting the accuracy requirements compared to benchmark algorithms.展开更多
This study investigated forest recovery in the Atlantic Rainforest and Rupestrian Grassland of Brazil using the diffusive-logistic growth(DLG)model.This model simulates vegetation growth in the two mountain biomes con...This study investigated forest recovery in the Atlantic Rainforest and Rupestrian Grassland of Brazil using the diffusive-logistic growth(DLG)model.This model simulates vegetation growth in the two mountain biomes considering spatial location,time,and two key parameters:diffusion rate and growth rate.A Bayesian framework is employed to analyze the model's parameters and assess prediction uncertainties.Satellite imagery from 1992 and 2022 was used for model calibration and validation.By solving the DLG model using the finite difference method,we predicted a 6.6%–51.1%increase in vegetation density for the Atlantic Rainforest and a 5.3%–99.9%increase for the Rupestrian Grassland over 30 years,with the latter showing slower recovery but achieving a better model fit(lower RMSE)compared to the Atlantic Rainforest.The Bayesian approach revealed well-defined parameter distributions and lower parameter values for the Rupestrian Grassland,supporting the slower recovery prediction.Importantly,the model achieved good agreement with observed vegetation patterns in unseen validation data for both biomes.While there were minor spatial variations in accuracy,the overall distributions of predicted and observed vegetation density were comparable.Furthermore,this study highlights the importance of considering uncertainty in model predictions.Bayesian inference allowed us to quantify this uncertainty,demonstrating that the model's performance can vary across locations.Our approach provides valuable insights into forest regeneration process uncertainties,enabling comparisons of modeled scenarios at different recovery stages for better decision-making in these critical mountain biomes.展开更多
Published proof test coverage(PTC)estimates for emergency shutdown valves(ESDVs)show only moderate agreement and are predominantly opinion-based.A Failure Modes,Effects,and Diagnostics Analysis(FMEDA)was undertaken us...Published proof test coverage(PTC)estimates for emergency shutdown valves(ESDVs)show only moderate agreement and are predominantly opinion-based.A Failure Modes,Effects,and Diagnostics Analysis(FMEDA)was undertaken using component failure rate data to predict PTC for a full stroke test and a partial stroke test.Given the subjective and uncertain aspects of the FMEDA approach,specifically the selection of component failure rates and the determination of the probability of detecting failure modes,a Fuzzy Inference System(FIS)was proposed to manage the data,addressing the inherent uncertainties.Fuzzy inference systems have been used previously for various FMEA type assessments,but this is the first time an FIS has been employed for use with FMEDA.ESDV PTC values were generated from both the standard FMEDA and the fuzzy-FMEDA approaches using data provided by FMEDA experts.This work demonstrates that fuzzy inference systems can address the subjectivity inherent in FMEDA data,enabling reliable estimates of ESDV proof test coverage for both full and partial stroke tests.This facilitates optimized maintenance planning while ensuring safety is not compromised.展开更多
Protocol Reverse Engineering(PRE)is of great practical importance in Internet security-related fields such as intrusion detection,vulnerability mining,and protocol fuzzing.For unknown binary protocols having fixed-len...Protocol Reverse Engineering(PRE)is of great practical importance in Internet security-related fields such as intrusion detection,vulnerability mining,and protocol fuzzing.For unknown binary protocols having fixed-length fields,and the accurate identification of field boundaries has a great impact on the subsequent analysis and final performance.Hence,this paper proposes a new protocol segmentation method based on Information-theoretic statistical analysis for binary protocols by formulating the field segmentation of unsupervised binary protocols as a probabilistic inference problem and modeling its uncertainty.Specifically,we design four related constructions between entropy changes and protocol field segmentation,introduce random variables,and construct joint probability distributions with traffic sample observations.Probabilistic inference is then performed to identify the possible protocol segmentation points.Extensive trials on nine common public and industrial control protocols show that the proposed method yields higher-quality protocol segmentation results.展开更多
Understanding the characteristics and driving factors behind changes in vegetation ecosystem resilience is crucial for mitigating both current and future impacts of climate change. Despite recent advances in resilienc...Understanding the characteristics and driving factors behind changes in vegetation ecosystem resilience is crucial for mitigating both current and future impacts of climate change. Despite recent advances in resilience research, significant knowledge gaps remain regarding the drivers of resilience changes. In this study, we investigated the dynamics of ecosystem resilience across China and identified potential driving factors using the kernel normalized difference vegetation index(kNDVI) from 2000 to 2020. Our results indicate that vegetation resilience in China has exhibited an increasing trend over the past two decades, with a notable breakpoint occurring around 2012. We found that precipitation was the dominant driver of changes in ecosystem resilience, accounting for 35.82% of the variation across China, followed by monthly average maximum temperature(Tmax) and vapor pressure deficit(VPD), which explained 28.95% and 28.31% of the variation, respectively. Furthermore, we revealed that daytime and nighttime warming has asymmetric impacts on vegetation resilience, with temperature factors such as Tmin and Tmax becoming more influential, while the importance of precipitation slightly decreases after the resilience change point. Overall, our study highlights the key roles of water availability and temperature in shaping vegetation resilience and underscores the asymmetric effects of daytime and nighttime warming on ecosystem resilience.展开更多
Federated Graph Neural Networks (FedGNNs) have achieved significant success in representation learning for graph data, enabling collaborative training among multiple parties without sharing their raw graph data and so...Federated Graph Neural Networks (FedGNNs) have achieved significant success in representation learning for graph data, enabling collaborative training among multiple parties without sharing their raw graph data and solving the data isolation problem faced by centralized GNNs in data-sensitive scenarios. Despite the plethora of prior work on inference attacks against centralized GNNs, the vulnerability of FedGNNs to inference attacks has not yet been widely explored. It is still unclear whether the privacy leakage risks of centralized GNNs will also be introduced in FedGNNs. To bridge this gap, we present PIAFGNN, the first property inference attack (PIA) against FedGNNs. Compared with prior works on centralized GNNs, in PIAFGNN, the attacker can only obtain the global embedding gradient distributed by the central server. The attacker converts the task of stealing the target user’s local embeddings into a regression problem, using a regression model to generate the target graph node embeddings. By training shadow models and property classifiers, the attacker can infer the basic property information within the target graph that is of interest. Experiments on three benchmark graph datasets demonstrate that PIAFGNN achieves attack accuracy of over 70% in most cases, even approaching the attack accuracy of inference attacks against centralized GNNs in some instances, which is much higher than the attack accuracy of the random guessing method. Furthermore, we observe that common defense mechanisms cannot mitigate our attack without affecting the model’s performance on mainly classification tasks.展开更多
Full waveform inversion methods evaluate the properties of subsurface media by minimizing the misfit between synthetic and observed data.However,these methods omit measurement errors and physical assumptions in modeli...Full waveform inversion methods evaluate the properties of subsurface media by minimizing the misfit between synthetic and observed data.However,these methods omit measurement errors and physical assumptions in modeling,resulting in several problems in practical applications.In particular,full waveform inversion methods are very sensitive to erroneous observations(outliers)that violate the Gauss–Markov theorem.Herein,we propose a method for addressing spurious observations or outliers.Specifically,we remove outliers by inverting the synthetic data using the local convexity of the Gaussian distribution.To achieve this,we apply a waveform-like noise model based on a specific covariance matrix definition.Finally,we build an inversion problem based on the updated data,which is consistent with the wavefield reconstruction inversion method.Overall,we report an alternative optimization inversion problem for data containing outliers.The proposed method is robust because it uses uncertainties.This method enables accurate inversion,even when based on noisy models or a wrong wavelet.展开更多
Congestion control is an inherent challenge of V2X(Vehicle to Everything)technologies.Due to the use of a broadcasting mechanism,channel congestion becomes severe with the increase in vehicle density.The researchers s...Congestion control is an inherent challenge of V2X(Vehicle to Everything)technologies.Due to the use of a broadcasting mechanism,channel congestion becomes severe with the increase in vehicle density.The researchers suggested reducing the frequency of packet dissemination to relieve congestion,which caused a rise in road driving risk.Obviously,high-risk vehicles should be able to send messages timely to alarm surrounding vehicles.Therefore,packet dissemination frequency should be set according to the corresponding vehicle’s risk level,which is hard to evaluate.In this paper,a two-stage fuzzy inference model is constructed to evaluate a vehicle’s risk level,while a congestion control algorithm DRG-DCC(Driving Risk Game-Distributed Congestion Control)is proposed.Moreover,HPSO is employed to find optimal solutions.The simulation results show that the proposed method adjusts the transmission frequency based on driving risk,effectively striking a balance between transmission delay and channel busy rate.展开更多
The unprecedented scale of large models,such as large language models(LLMs)and text-to-image diffusion models,has raised critical concerns about the unauthorized use of copyrighted data during model training.These con...The unprecedented scale of large models,such as large language models(LLMs)and text-to-image diffusion models,has raised critical concerns about the unauthorized use of copyrighted data during model training.These concerns have spurred a growing demand for dataset copyright auditing techniques,which aim to detect and verify potential infringements in the training data of commercial AI systems.This paper presents a survey of existing auditing solutions,categorizing them across key dimensions:data modality,model training stage,data overlap scenarios,and model access levels.We highlight major trends,including the prevalence of black-box auditing methods and the emphasis on fine-tuning rather than pre-training.Through an in-depth analysis of 12 representative works,we extract four key observations that reveal the limitations of current methods.Furthermore,we identify three open challenges and propose future directions for robust,multimodal,and scalable auditing solutions.Our findings underscore the urgent need to establish standardized benchmarks and develop auditing frameworks that are resilient to low watermark densities and applicable in diverse deployment settings.展开更多
Associations of per-and polyfluoroalkyl substances(PFAS)on lipid metabolism have been documented but research remains scarce regarding effect of PFAS on lipid variability.To deeply understand their relationship,a step...Associations of per-and polyfluoroalkyl substances(PFAS)on lipid metabolism have been documented but research remains scarce regarding effect of PFAS on lipid variability.To deeply understand their relationship,a step-forward in causal inference is expected.To address these,we conducted a longitudinal study with three repeated measurements involving 201 participants in Beijing,among which 100 eligible participants were included for the present study.Twenty-three PFAS and four lipid indicators were assessed at each visit.We used linear mixed models and quantile g-computation models to investigate associations between PFAS and blood lipid levels.A latent class growth model described PFAS serum exposure patterns,and a generalized linear model demonstrated associations between these patterns and lipid variability.Our study found that PFDA was associated with increased TC(β=0.083,95%CI:0.011,0.155)and HDL-C(β=0.106,95%CI:0.034,0.178).The PFAS mixture also showed a positive relationship with TC(β=0.06,95%CI:0.02,0.10),with PFDA contributing most positively.Compared to the low trajectory group,the middle trajectory group for PFDA was associated with VIM of TC(β=0.756,95%CI:0.153,1.359).Furthermore,PFDA showed biological gradientswith lipid metabolism.This is the first repeated-measures study to identify the impact of PFAS serum exposure pattern on the lipid metabolism and the first to estimate the association between PFAS and blood lipid levels in middle-aged and elderly Chinese and reinforce the evidence of their causal relationship through epidemiological studies.展开更多
Background The annotation of fashion images is a significantly important task in the fashion industry as well as social media and e-commerce.However,owing to the complexity and diversity of fashion images,this task en...Background The annotation of fashion images is a significantly important task in the fashion industry as well as social media and e-commerce.However,owing to the complexity and diversity of fashion images,this task entails multiple challenges,including the lack of fine-grained captions and confounders caused by dataset bias.Specifically,confounders often cause models to learn spurious correlations,thereby reducing their generalization capabilities.Method In this work,we propose the Deconfounded Fashion Image Captioning(DFIC)framework,which first uses multimodal retrieval to enrich the predicted captions of clothing,and then constructs a detailed causal graph using causal inference in the decoder to perform deconfounding.Multimodal retrieval is used to obtain semantic words related to image features,which are input into the decoder as prompt words to enrich sentence descriptions.In the decoder,causal inference is applied to disentangle visual and semantic features while concurrently eliminating visual and language confounding.Results Overall,our method can not only effectively enrich the captions of target images,but also greatly reduce confounders caused by the dataset.To verify the effectiveness of the proposed framework,the model was experimentally verified using the FACAD dataset.展开更多
In recent years,the world has seen an exponential increase in energy demand,prompting scientists to look for innovative ways to exploit the power sun’s power.Solar energy technologies use the sun’s energy and light ...In recent years,the world has seen an exponential increase in energy demand,prompting scientists to look for innovative ways to exploit the power sun’s power.Solar energy technologies use the sun’s energy and light to provide heating,lighting,hot water,electricity and even cooling for homes,businesses,and industries.Therefore,ground-level solar radiation data is important for these applications.Thus,our work aims to use a mathematical modeling tool to predict solar irradiation.For this purpose,we are interested in the application of the Adaptive Neuro Fuzzy Inference System.Through this type of artificial neural system,10 models were developed,based on meteorological data such as the Day number(Nj),Ambient temperature(T),Relative Humidity(Hr),Wind speed(WS),Wind direction(WD),Declination(δ),Irradiation outside the atmosphere(Goh),Maximum temperature(Tmax),Minimum temperature(Tmin).These models have been tested by different static indicators to choose the most suitable one for the estimation of the daily global solar radiation.This study led us to choose the M8 model,which takes Nj,T,Hr,δ,Ws,Wd,G0,and S0 as input variables because it presents the best performance either in the learning phase(R^(2)=0.981,RMSE=0.107 kW/m^(2),MAE=0.089 kW/m2)or in the validation phase(R^(2)=0.979,RMSE=0.117 kW/m^(2),MAE=0.101 kW/m^(2)).展开更多
文摘Atherosclerotic diseases is a diffuse process that involves the coronaries, carotids, renals and all other peripheral arteries owing to the systemic nature of atherosclerotic pathophysiology. This systemic precipitants that promote aggressive atherogenesis have been confirmed in multiple studies showing a relationship between atherosclerotic disease in one vascular bed with disease in another. However, the strength of this relationship varies from patient to patient. Thus, the practical utility of the diffuse nature of atheresclerosis is questionable. Ge and colleagues have proposed the use of left main (LM)coronary artery disease as a potential marker for left anterior descending (lAD) atherosclerotic disease. At first thought, this seems useless since the evaluation of the LM (by angiography or IVUS) can just as easily be performed in the LAD so why bother searching for such a surrogate? However, newer (non-invasive) imaging modalifies are making great gains and will be able to reliably image the LM sooner than the LAD (especially the distal LAD) so such a surrogate could have practical applications.
文摘The Jain logic (Nay) addresses concerns elicited by sense experience of observable and measurable reality. Reality is what it is, and it exists independent of the observer. The last Jain Tirthankar Mahaveer suggested that organisms interact with such realities for survival needs and become concerned about the consequences. He suggested a code of conduct for reality-based behaviors to address concerns. Perceptions and impressions provide measures (praman) of information in sense experience, and with other evidence guide choices and decisions to act and bear consequences. Ethical behaviors rooted in reality have desirable consequences, and inconsistent and contradictory behaviors are undesirable consequences. Omniscience (God, Brahm) is discarded as a self-referential ad hoc construct inconsistent and contradictory to real world behaviors. This article is survey of assumptions and models to represent, interpret, and validate knowledge that begins with logical deduction for inference (anuman) based on evidence from sense experience (Jain 2011). Secular and atheistic thrust of thought and practice encourages reasoning and open-ended search with affirmed assertions and independent evidence. Individual identity (atm) emerges with consistent behaviors to overcome fallibility and unreliability by minimizing doubt (Syad-Saptbhangi Nay). The first Tirthankar Rishabh Nath (ca. 2700 BC) suggested that the content (sat) of real and abstract objects and concerns during a change is conserved as the net balance of the inputs and outputs (Tatia 1994). Identity and content of assertions and evidence is also conserved during logical manipulations for reasoning. Each assertion and its negation are to be affirmed with independent evidence, and lack of evidence for presence is not necessarily the evidence for either non-absence or non-existence.
文摘Robustness against measurement uncertainties is crucial for gas turbine engine diagnosis.While current research focuses mainly on measurement noise,measurement bias remains challenging.This study proposes a novel performance-based fault detection and identification(FDI)strategy for twin-shaft turbofan gas turbine engines and addresses these uncertainties through a first-order Takagi-Sugeno-Kang fuzzy inference system.To handle ambient condition changes,we use parameter correction to preprocess the raw measurement data,which reduces the FDI’s system complexity.Additionally,the power-level angle is set as a scheduling parameter to reduce the number of rules in the TSK-based FDI system.The data for designing,training,and testing the proposed FDI strategy are generated using a component-level turbofan engine model.The antecedent and consequent parameters of the TSK-based FDI system are optimized using the particle swarm optimization algorithm and ridge regression.A robust structure combining a specialized fuzzy inference system with the TSK-based FDI system is proposed to handle measurement biases.The performance of the first-order TSK-based FDI system and robust FDI structure are evaluated through comprehensive simulation studies.Comparative studies confirm the superior accuracy of the first-order TSK-based FDI system in fault detection,isolation,and identification.The robust structure demonstrates a 2%-8%improvement in the success rate index under relatively large measurement bias conditions,thereby indicating excellent robustness.Accuracy against significant bias values and computation time are also evaluated,suggesting that the proposed robust structure has desirable online performance.This study proposes a novel FDI strategy that effectively addresses measurement uncertainties.
基金Supported by the National Natural Science Foundation of China(12061017,12361055)the Research Fund of Guangxi Key Lab of Multi-source Information Mining&Security(22-A-01-01)。
文摘Existing blockwise empirical likelihood(BEL)method blocks the observations or their analogues,which is proven useful under some dependent data settings.In this paper,we introduce a new BEL(NBEL)method by blocking the scoring functions under high dimensional cases.We study the construction of confidence regions for the parameters in spatial autoregressive models with spatial autoregressive disturbances(SARAR models)with high dimension of parameters by using the NBEL method.It is shown that the NBEL ratio statistics are asymptoticallyχ^(2)-type distributed,which are used to obtain the NBEL based confidence regions for the parameters in SARAR models.A simulation study is conducted to compare the performances of the NBEL and the usual EL methods.
基金the Anusandhan National Research Foundation(ANRF),New Delhi[Erstwhile,Science and Engineering Research Board(SERB)]Department of Science and Technology(DST)(Government of India)(File No.:CRG/2022/002618 Dated:22.08.2023)for providing the grant and support to carry out this work effectively.
文摘The primary objective of this study is to measure fluoride levels in groundwater samples using machine learning approaches alongside traditional and fuzzy logic models based health risk assessment in the hard rock Arjunanadi River basin,South India.Fluoride levels in the study area vary between 0.1 and 3.10 mg/L,with 32 samples exceeding the World Health Organization(WHO)standard of 1.5 mg/L.Hydrogeochemical analyses(Durov and Gibbs)clearly show that the overall water chemistry is primarily influenced by simple dissolution,mixing,and rock-water interactions,indicating that geogenic sources are the predominant contributors to fluoride in the study area.Around 446.5 km^(2)is considered at risk.In predictive analysis,five Machine Learning(ML)models were used,with the AdaBoost model performing better than the other models,achieving 96%accuracy and 4%error rate.The Traditional Health Risk Assessment(THRA)results indicate that 65%of samples pose highly susceptible for dental fluorosis,while 12%of samples pose highly susceptible for skeletal fluorosis in young age groups.The Fuzzy Inference System(FIS)model effectively manages ambiguity and linguistic factors,which are crucial when addressing health risks linked to groundwater fluoride contamination.In this model,input variables include fluoride concentration,individual age,and ingestion rate,while output variables consist of dental caries risk,dental fluorosis,and skeletal fluorosis.The overall results indicate that increased ingestion rates and prolonged exposure to contaminated water make adults and the elderly people vulnerable to dental and skeletal fluorosis,along with very young and young age groups.This study is an essential resource for local authorities,healthcare officials,and communities,aiding in the mitigation of health risks associated with groundwater contamination and enhancing quality of life through improved water management and health risk assessment,aligning with Sustainable Development Goals(SDGs)3 and 6,thereby contributing to a cleaner and healthier society.
文摘Edge Machine Learning(EdgeML)and Tiny Machine Learning(TinyML)are fast-growing fields that bring machine learning to resource-constrained devices,allowing real-time data processing and decision-making at the network’s edge.However,the complexity of model conversion techniques,diverse inference mechanisms,and varied learning strategies make designing and deploying these models challenging.Additionally,deploying TinyML models on resource-constrained hardware with specific software frameworks has broadened EdgeML’s applications across various sectors.These factors underscore the necessity for a comprehensive literature review,as current reviews do not systematically encompass the most recent findings on these topics.Consequently,it provides a comprehensive overview of state-of-the-art techniques in model conversion,inference mechanisms,learning strategies within EdgeML,and deploying these models on resource-constrained edge devices using TinyML.It identifies 90 research articles published between 2018 and 2025,categorizing them into two main areas:(1)model conversion,inference,and learning strategies in EdgeML and(2)deploying TinyML models on resource-constrained hardware using specific software frameworks.In the first category,the synthesis of selected research articles compares and critically reviews various model conversion techniques,inference mechanisms,and learning strategies.In the second category,the synthesis identifies and elaborates on major development boards,software frameworks,sensors,and algorithms used in various applications across six major sectors.As a result,this article provides valuable insights for researchers,practitioners,and developers.It assists them in choosing suitable model conversion techniques,inference mechanisms,learning strategies,hardware development boards,software frameworks,sensors,and algorithms tailored to their specific needs and applications across various sectors.
基金supported by the National Natural Science Foundation of China(No.71501183).
文摘In order to solve the problems of high experimental cost of ammunition,lack of field test data,and the difficulty in applying the ammunition hit probability estimation method in classical statistics,this paper assumes that the projectile dispersion of ammunition is a two-dimensional joint normal distribution,and proposes a new Bayesian inference method of ammunition hit probability based on normal-inverse Wishart distribution.Firstly,the conjugate joint prior distribution of the projectile dispersion characteristic parameters is determined to be a normal inverse Wishart distribution,and the hyperparameters in the prior distribution are estimated by simulation experimental data and historical measured data.Secondly,the field test data is integrated with the Bayesian formula to obtain the joint posterior distribution of the projectile dispersion characteristic parameters,and then the hit probability of the ammunition is estimated.Finally,compared with the binomial distribution method,the method in this paper can consider the dispersion information of ammunition projectiles,and the hit probability information is more fully utilized.The hit probability results are closer to the field shooting test samples.This method has strong applicability and is conducive to obtaining more accurate hit probability estimation results.
基金the financial support by the National Natural Science Foundation of China(Grant No.U24B2029)the Key Projects of the National Natural Science Foundation of China(Grant No.52334001)+1 种基金the Strategic Cooperation Technology Projects of CNPC and CUPB(Grand No.ZLZX2020-02)the China University of Petroleum,Beijing(Grand No.ZX20230042)。
文摘Offshore drilling costs are high,and the downhole environment is even more complex.Improving the rate of penetration(ROP)can effectively shorten offshore drilling cycles and improve economic benefits.It is difficult for the current ROP models to guarantee the prediction accuracy and the robustness of the models at the same time.To address the current issues,a new ROP prediction model was developed in this study,which considers ROP as a time series signal(ROP signal).The model is based on the time convolutional network(TCN)framework and integrates ensemble empirical modal decomposition(EEMD)and Bayesian network causal inference(BN),the model is named EEMD-BN-TCN.Within the proposed model,the EEMD decomposes the original ROP signal into multiple sets of sub-signals.The BN determines the causal relationship between the sub-signals and the key physical parameters(weight on bit and revolutions per minute)and carries out preliminary reconstruction of the sub-signals based on the causal relationship.The TCN predicts signals reconstructed by BN.When applying this model to an actual production well,the average absolute percentage error of the EEMD-BN-TCN prediction decreased from 18.4%with TCN to 9.2%.In addition,compared with other models,the EEMD-BN-TCN can improve the decomposition signal of ROP by regulating weight on bit and revolutions per minute,ultimately enhancing ROP.
基金supported by the National Natural Science Foundation of China(No.61931011)the Jiangsu Provincial Key Research and Development Program,China(No.BE2021013-4)the Fundamental Research Project in University Characteristic Disciplines,China(No.ILF240071A24)。
文摘Unmanned Aerial Vehicles(UAVs)coupled with deep learning such as Convolutional Neural Networks(CNNs)have been widely applied across numerous domains,including agriculture,smart city monitoring,and fire rescue operations,owing to their malleability and versatility.However,the computation-intensive and latency-sensitive natures of CNNs present a formidable obstacle to their deployment on resource-constrained UAVs.Some early studies have explored a hybrid approach that dynamically switches between lightweight and complex models to balance accuracy and latency.However,they often overlook scenarios involving multiple concurrent CNN streams,where competition for resources between streams can substantially impact latency and overall system performance.In this paper,we first investigate the deployment of both lightweight and complex models for multiple CNN streams in UAV swarm.Specifically,we formulate an optimization problem to minimize the total latency across multiple CNN streams,under the constraints on UAV memory and the accuracy requirement of each stream.To address this problem,we propose an algorithm called Adaptive Model Switching of collaborative inference for MultiCNN streams(AMSM)to identify the inference strategy with a low latency.Simulation results demonstrate that the proposed AMSM algorithm consistently achieves the lowest latency while meeting the accuracy requirements compared to benchmark algorithms.
基金financial support from the Brazilian National Council for Scientific and Technological Development(CNPq)and the Federal University of Ouro PretoFinancial support from the Minas Gerais Research Foundation(FAPEMIG)under grant number APQ-06559-24 is also gratefully acknowledged。
文摘This study investigated forest recovery in the Atlantic Rainforest and Rupestrian Grassland of Brazil using the diffusive-logistic growth(DLG)model.This model simulates vegetation growth in the two mountain biomes considering spatial location,time,and two key parameters:diffusion rate and growth rate.A Bayesian framework is employed to analyze the model's parameters and assess prediction uncertainties.Satellite imagery from 1992 and 2022 was used for model calibration and validation.By solving the DLG model using the finite difference method,we predicted a 6.6%–51.1%increase in vegetation density for the Atlantic Rainforest and a 5.3%–99.9%increase for the Rupestrian Grassland over 30 years,with the latter showing slower recovery but achieving a better model fit(lower RMSE)compared to the Atlantic Rainforest.The Bayesian approach revealed well-defined parameter distributions and lower parameter values for the Rupestrian Grassland,supporting the slower recovery prediction.Importantly,the model achieved good agreement with observed vegetation patterns in unseen validation data for both biomes.While there were minor spatial variations in accuracy,the overall distributions of predicted and observed vegetation density were comparable.Furthermore,this study highlights the importance of considering uncertainty in model predictions.Bayesian inference allowed us to quantify this uncertainty,demonstrating that the model's performance can vary across locations.Our approach provides valuable insights into forest regeneration process uncertainties,enabling comparisons of modeled scenarios at different recovery stages for better decision-making in these critical mountain biomes.
文摘Published proof test coverage(PTC)estimates for emergency shutdown valves(ESDVs)show only moderate agreement and are predominantly opinion-based.A Failure Modes,Effects,and Diagnostics Analysis(FMEDA)was undertaken using component failure rate data to predict PTC for a full stroke test and a partial stroke test.Given the subjective and uncertain aspects of the FMEDA approach,specifically the selection of component failure rates and the determination of the probability of detecting failure modes,a Fuzzy Inference System(FIS)was proposed to manage the data,addressing the inherent uncertainties.Fuzzy inference systems have been used previously for various FMEA type assessments,but this is the first time an FIS has been employed for use with FMEDA.ESDV PTC values were generated from both the standard FMEDA and the fuzzy-FMEDA approaches using data provided by FMEDA experts.This work demonstrates that fuzzy inference systems can address the subjectivity inherent in FMEDA data,enabling reliable estimates of ESDV proof test coverage for both full and partial stroke tests.This facilitates optimized maintenance planning while ensuring safety is not compromised.
文摘Protocol Reverse Engineering(PRE)is of great practical importance in Internet security-related fields such as intrusion detection,vulnerability mining,and protocol fuzzing.For unknown binary protocols having fixed-length fields,and the accurate identification of field boundaries has a great impact on the subsequent analysis and final performance.Hence,this paper proposes a new protocol segmentation method based on Information-theoretic statistical analysis for binary protocols by formulating the field segmentation of unsupervised binary protocols as a probabilistic inference problem and modeling its uncertainty.Specifically,we design four related constructions between entropy changes and protocol field segmentation,introduce random variables,and construct joint probability distributions with traffic sample observations.Probabilistic inference is then performed to identify the possible protocol segmentation points.Extensive trials on nine common public and industrial control protocols show that the proposed method yields higher-quality protocol segmentation results.
基金National Key Research and Development Program,No.2021xjkk0303。
文摘Understanding the characteristics and driving factors behind changes in vegetation ecosystem resilience is crucial for mitigating both current and future impacts of climate change. Despite recent advances in resilience research, significant knowledge gaps remain regarding the drivers of resilience changes. In this study, we investigated the dynamics of ecosystem resilience across China and identified potential driving factors using the kernel normalized difference vegetation index(kNDVI) from 2000 to 2020. Our results indicate that vegetation resilience in China has exhibited an increasing trend over the past two decades, with a notable breakpoint occurring around 2012. We found that precipitation was the dominant driver of changes in ecosystem resilience, accounting for 35.82% of the variation across China, followed by monthly average maximum temperature(Tmax) and vapor pressure deficit(VPD), which explained 28.95% and 28.31% of the variation, respectively. Furthermore, we revealed that daytime and nighttime warming has asymmetric impacts on vegetation resilience, with temperature factors such as Tmin and Tmax becoming more influential, while the importance of precipitation slightly decreases after the resilience change point. Overall, our study highlights the key roles of water availability and temperature in shaping vegetation resilience and underscores the asymmetric effects of daytime and nighttime warming on ecosystem resilience.
基金supported by the National Natural Science Foundation of China(Nos.62176122 and 62061146002).
文摘Federated Graph Neural Networks (FedGNNs) have achieved significant success in representation learning for graph data, enabling collaborative training among multiple parties without sharing their raw graph data and solving the data isolation problem faced by centralized GNNs in data-sensitive scenarios. Despite the plethora of prior work on inference attacks against centralized GNNs, the vulnerability of FedGNNs to inference attacks has not yet been widely explored. It is still unclear whether the privacy leakage risks of centralized GNNs will also be introduced in FedGNNs. To bridge this gap, we present PIAFGNN, the first property inference attack (PIA) against FedGNNs. Compared with prior works on centralized GNNs, in PIAFGNN, the attacker can only obtain the global embedding gradient distributed by the central server. The attacker converts the task of stealing the target user’s local embeddings into a regression problem, using a regression model to generate the target graph node embeddings. By training shadow models and property classifiers, the attacker can infer the basic property information within the target graph that is of interest. Experiments on three benchmark graph datasets demonstrate that PIAFGNN achieves attack accuracy of over 70% in most cases, even approaching the attack accuracy of inference attacks against centralized GNNs in some instances, which is much higher than the attack accuracy of the random guessing method. Furthermore, we observe that common defense mechanisms cannot mitigate our attack without affecting the model’s performance on mainly classification tasks.
基金National Natural Science Foundation of China under Grant 42276055National Key Research and Development Program under Grant 2022YFC2803503Fundamental Research Funds for the Central Universities under Grant 202262008.
文摘Full waveform inversion methods evaluate the properties of subsurface media by minimizing the misfit between synthetic and observed data.However,these methods omit measurement errors and physical assumptions in modeling,resulting in several problems in practical applications.In particular,full waveform inversion methods are very sensitive to erroneous observations(outliers)that violate the Gauss–Markov theorem.Herein,we propose a method for addressing spurious observations or outliers.Specifically,we remove outliers by inverting the synthetic data using the local convexity of the Gaussian distribution.To achieve this,we apply a waveform-like noise model based on a specific covariance matrix definition.Finally,we build an inversion problem based on the updated data,which is consistent with the wavefield reconstruction inversion method.Overall,we report an alternative optimization inversion problem for data containing outliers.The proposed method is robust because it uses uncertainties.This method enables accurate inversion,even when based on noisy models or a wrong wavelet.
基金supported by the special key project of Chongqing Technology Innovation and Application Development under Grant No.cstc2021jscx-gksbX0057the Special Major Project of Chongqing Technology Innovation and Application Development under Grant No.CSTB2022TIADSTX0003.
文摘Congestion control is an inherent challenge of V2X(Vehicle to Everything)technologies.Due to the use of a broadcasting mechanism,channel congestion becomes severe with the increase in vehicle density.The researchers suggested reducing the frequency of packet dissemination to relieve congestion,which caused a rise in road driving risk.Obviously,high-risk vehicles should be able to send messages timely to alarm surrounding vehicles.Therefore,packet dissemination frequency should be set according to the corresponding vehicle’s risk level,which is hard to evaluate.In this paper,a two-stage fuzzy inference model is constructed to evaluate a vehicle’s risk level,while a congestion control algorithm DRG-DCC(Driving Risk Game-Distributed Congestion Control)is proposed.Moreover,HPSO is employed to find optimal solutions.The simulation results show that the proposed method adjusts the transmission frequency based on driving risk,effectively striking a balance between transmission delay and channel busy rate.
基金supported in part by NSFC under Grant Nos.62402379,U22A2029 and U24A20237.
文摘The unprecedented scale of large models,such as large language models(LLMs)and text-to-image diffusion models,has raised critical concerns about the unauthorized use of copyrighted data during model training.These concerns have spurred a growing demand for dataset copyright auditing techniques,which aim to detect and verify potential infringements in the training data of commercial AI systems.This paper presents a survey of existing auditing solutions,categorizing them across key dimensions:data modality,model training stage,data overlap scenarios,and model access levels.We highlight major trends,including the prevalence of black-box auditing methods and the emphasis on fine-tuning rather than pre-training.Through an in-depth analysis of 12 representative works,we extract four key observations that reveal the limitations of current methods.Furthermore,we identify three open challenges and propose future directions for robust,multimodal,and scalable auditing solutions.Our findings underscore the urgent need to establish standardized benchmarks and develop auditing frameworks that are resilient to low watermark densities and applicable in diverse deployment settings.
基金supported by the National Natural Science Foundation of China(No.82404365)the Noncommunicable Chronic Diseases-National Science and Technology Major Project(No.2023ZD0513200)+7 种基金China Medical Board(No.15-230)China Postdoctoral Science Foundation(Nos.2023M730317and 2023T160066)the Fundamental Research Funds for the Central Universities(No.3332023042)the Open Project of Hebei Key Laboratory of Environment and Human Health(No.202301)the National Key Research and Development Program of China(No.2022YFC3703000)the Non-profit Central Research Institute Fund of Chinese Academy of Medical Sciences(No.2022-JKCS-11)the CAMS Innovation Fund for Medical Sciences(No.2022-I2M-JB-003)the Programs of the National Natural Science Foundation of China(No.21976050).
文摘Associations of per-and polyfluoroalkyl substances(PFAS)on lipid metabolism have been documented but research remains scarce regarding effect of PFAS on lipid variability.To deeply understand their relationship,a step-forward in causal inference is expected.To address these,we conducted a longitudinal study with three repeated measurements involving 201 participants in Beijing,among which 100 eligible participants were included for the present study.Twenty-three PFAS and four lipid indicators were assessed at each visit.We used linear mixed models and quantile g-computation models to investigate associations between PFAS and blood lipid levels.A latent class growth model described PFAS serum exposure patterns,and a generalized linear model demonstrated associations between these patterns and lipid variability.Our study found that PFDA was associated with increased TC(β=0.083,95%CI:0.011,0.155)and HDL-C(β=0.106,95%CI:0.034,0.178).The PFAS mixture also showed a positive relationship with TC(β=0.06,95%CI:0.02,0.10),with PFDA contributing most positively.Compared to the low trajectory group,the middle trajectory group for PFDA was associated with VIM of TC(β=0.756,95%CI:0.153,1.359).Furthermore,PFDA showed biological gradientswith lipid metabolism.This is the first repeated-measures study to identify the impact of PFAS serum exposure pattern on the lipid metabolism and the first to estimate the association between PFAS and blood lipid levels in middle-aged and elderly Chinese and reinforce the evidence of their causal relationship through epidemiological studies.
文摘Background The annotation of fashion images is a significantly important task in the fashion industry as well as social media and e-commerce.However,owing to the complexity and diversity of fashion images,this task entails multiple challenges,including the lack of fine-grained captions and confounders caused by dataset bias.Specifically,confounders often cause models to learn spurious correlations,thereby reducing their generalization capabilities.Method In this work,we propose the Deconfounded Fashion Image Captioning(DFIC)framework,which first uses multimodal retrieval to enrich the predicted captions of clothing,and then constructs a detailed causal graph using causal inference in the decoder to perform deconfounding.Multimodal retrieval is used to obtain semantic words related to image features,which are input into the decoder as prompt words to enrich sentence descriptions.In the decoder,causal inference is applied to disentangle visual and semantic features while concurrently eliminating visual and language confounding.Results Overall,our method can not only effectively enrich the captions of target images,but also greatly reduce confounders caused by the dataset.To verify the effectiveness of the proposed framework,the model was experimentally verified using the FACAD dataset.
文摘In recent years,the world has seen an exponential increase in energy demand,prompting scientists to look for innovative ways to exploit the power sun’s power.Solar energy technologies use the sun’s energy and light to provide heating,lighting,hot water,electricity and even cooling for homes,businesses,and industries.Therefore,ground-level solar radiation data is important for these applications.Thus,our work aims to use a mathematical modeling tool to predict solar irradiation.For this purpose,we are interested in the application of the Adaptive Neuro Fuzzy Inference System.Through this type of artificial neural system,10 models were developed,based on meteorological data such as the Day number(Nj),Ambient temperature(T),Relative Humidity(Hr),Wind speed(WS),Wind direction(WD),Declination(δ),Irradiation outside the atmosphere(Goh),Maximum temperature(Tmax),Minimum temperature(Tmin).These models have been tested by different static indicators to choose the most suitable one for the estimation of the daily global solar radiation.This study led us to choose the M8 model,which takes Nj,T,Hr,δ,Ws,Wd,G0,and S0 as input variables because it presents the best performance either in the learning phase(R^(2)=0.981,RMSE=0.107 kW/m^(2),MAE=0.089 kW/m2)or in the validation phase(R^(2)=0.979,RMSE=0.117 kW/m^(2),MAE=0.101 kW/m^(2)).