This paper investigates the reliability of marine internal combustion engines using an integrated approach that combines Fault Tree Analysis (FTA) and Bayesian Networks (BN). FTA provides a structured, top-down method for identifying critical failure modes and their root causes, while BN introduces flexibility in probabilistic reasoning, enabling dynamic updates based on new evidence. This dual methodology overcomes the limitations of static FTA models, offering a comprehensive framework for system reliability analysis. Critical failures, including External Leakage (ELU), Failure to Start (FTS), and Overheating (OHE), were identified as key risks. By incorporating redundancy into high-risk components such as pumps and batteries, the likelihood of these failures was significantly reduced. For instance, redundant pumps reduced the probability of ELU by 31.88%, while additional batteries decreased the occurrence of FTS by 36.45%. The results underscore the practical benefits of combining FTA and BN for enhancing system reliability, particularly in maritime applications where operational safety and efficiency are critical. This research provides valuable insights for maintenance planning and highlights the importance of redundancy in critical systems, especially as the industry transitions toward more autonomous vessels.
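The redundancy argument in the abstract above can be made concrete with elementary probability: independent parallel components cause a basic-event failure only if every one of them fails. The numbers below are illustrative assumptions, not the paper's actual failure rates.

```python
# Sketch: effect of redundancy on a basic event in a fault tree.
# p_pump is a hypothetical per-mission failure probability, not a value from the paper.

def parallel_failure(p: float, n: int) -> float:
    """Probability that all n redundant components fail (independent failures)."""
    return p ** n

p_pump = 0.05
single = parallel_failure(p_pump, 1)     # one pump: 0.05
redundant = parallel_failure(p_pump, 2)  # two pumps in parallel: 0.05**2 = 0.0025

# Relative reduction in the basic-event probability from adding one redundant pump.
reduction = 1 - redundant / single
```

With these made-up numbers, a single redundant pump cuts the basic-event probability by a factor of 1/p, which is why even modest redundancy moves top-event probabilities so strongly in FTA/BN models.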
Engineering tests can yield inaccurate data due to instrument errors, human factors, and environmental interference, introducing uncertainty in numerical model updating. This study employs the probability-box (p-box) method for representing observational uncertainty and develops a two-step approximate Bayesian computation (ABC) framework using time-series data. Within the ABC framework, Euclidean and Bhattacharyya distances are employed as uncertainty quantification metrics to delineate approximate likelihood functions in the initial and subsequent steps, respectively. A novel variational Bayesian Monte Carlo method is introduced to apply the ABC framework efficiently amid observational uncertainty, resulting in rapid convergence and accurate parameter estimation with minimal iterations. The efficacy of the proposed updating strategy is validated by its application to a shear frame model excited by a seismic wave and to an aviation pump force sensor for thermal output analysis. The results affirm the efficiency, robustness, and practical applicability of the proposed method.
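The two distance metrics named above are easy to state for discrete histograms. The sketch below computes both; the example distributions are made up for illustration and are not the paper's data.

```python
import math

def euclidean(p, q):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def bhattacharyya(p, q):
    """Bhattacharyya distance between two discrete distributions (each summing to 1):
    -ln of the Bhattacharyya coefficient sum_i sqrt(p_i * q_i)."""
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q))
    return -math.log(bc)

# Hypothetical normalized histograms of a time-series feature.
p = [0.2, 0.5, 0.3]
q = [0.3, 0.4, 0.3]
d_e = euclidean(p, q)
d_b = bhattacharyya(p, q)
```

The Bhattacharyya distance is zero for identical distributions and grows as their overlap shrinks, which is what makes it a natural likelihood surrogate in the second ABC step.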
Total nitrogen (TN) is a major factor contributing to eutrophication and a crucial parameter in assessing surface water quality, so accurate and rapid methods for determining the TN content in water are essential. Herein, a fast, highly sensitive, and pollution-free approach is proposed that combines ultraviolet (UV) absorption spectroscopy with a Bayesian-optimized least squares support vector machine (LSSVM) for detecting the TN content in water. Water samples collected from sampling points near the Yangtze River basin in Chongqing, China, were analyzed using national standard methods to measure TN content as reference values. The TN content was then predicted by integrating the UV absorption spectra of the water samples with the LSSVM. To enable the model to select the optimal parameters quickly and accurately, the Bayesian optimization (BO) algorithm was used to tune the LSSVM parameters. Results show that the model predicts TN concentration well, with a high coefficient of determination for prediction (R^2 = 0.9413) and a low root mean square error of prediction (RMSE = 0.0779 mg/L). Comparative analysis with previous studies indicates that the model achieves lower prediction errors and superior predictive performance.
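As a rough sketch of the LSSVM regression step (not the authors' implementation, and with a synthetic signal standing in for UV spectra), the LSSVM dual problem reduces to a single linear solve over kernel weights alpha and a bias b:

```python
import numpy as np

def rbf(X1, X2, gamma_k=10.0):
    """RBF kernel matrix; gamma_k is an assumed width, one of the BO-tunable parameters."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma_k * d2)

def lssvm_fit(X, y, C=1e4, gamma_k=10.0):
    """Solve the LSSVM linear system [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, gamma_k) + np.eye(n) / C
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, weights alpha

def lssvm_predict(Xq, X, alpha, b, gamma_k=10.0):
    return rbf(Xq, X, gamma_k) @ alpha + b

# Toy 1-D "spectrum-to-concentration" mapping in place of real UV data.
X = np.linspace(0, 1, 20)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, X, alpha, b)
```

In the paper's pipeline, BO would search over C and the kernel width; here they are fixed assumed values.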
Recently, machine learning has become a powerful tool for predicting the nuclear charge radius R_C, providing novel insights into complex physical phenomena. This study employs a continuous Bayesian probability (CBP) estimator and Bayesian model averaging (BMA) to optimize the predictions of R_C from sophisticated theoretical models. The CBP estimator treats the residual between the theoretical and experimental values of R_C as a continuous variable and derives its posterior probability density function (PDF) from Bayesian theory. The BMA method assigns weights to models based on their predictive performance for benchmark nuclei, thereby accounting for the unique strengths of each model. In global optimization, the CBP estimator improved the predictive accuracy of the three theoretical models by approximately 60%. The extrapolation analyses consistently achieved an improvement rate of approximately 45%, demonstrating the robustness of the CBP estimator. Furthermore, the combination of the CBP and BMA methods reduces the standard deviation to below 0.02 fm, effectively reproducing the pronounced shell effects on R_C of the Ca and Sr isotope chains. The studies in this paper propose an efficient method to accurately describe R_C of unknown nuclei, with potential applications in research on other nuclear properties.
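A minimal BMA weighting scheme along the lines described above — Gaussian likelihood of benchmark residuals, normalized into model weights — can be sketched as follows. The residuals, the noise scale sigma, and the model predictions are all hypothetical, not the paper's values.

```python
import math

def bma_weights(residuals_by_model, sigma=0.02):
    """Weights proportional to each model's Gaussian likelihood on benchmark nuclei.
    sigma (fm) is an assumed residual scale; the log-sum-exp shift avoids underflow."""
    logL = [sum(-0.5 * (r / sigma) ** 2 for r in res) for res in residuals_by_model]
    m = max(logL)
    w = [math.exp(l - m) for l in logL]
    s = sum(w)
    return [x / s for x in w]

# Hypothetical residuals (theory minus experiment, fm) for three models.
residuals = [
    [0.010, -0.020, 0.015],   # model A
    [0.030, 0.040, -0.035],   # model B
    [0.005, -0.010, 0.008],   # model C (best on the benchmark set)
]
w = bma_weights(residuals)

# Weighted R_C prediction for a new nucleus (model predictions are made up, fm).
blended = sum(wi * pred for wi, pred in zip(w, [3.48, 3.52, 3.50]))
```

The blended prediction automatically leans on whichever model tracks the benchmark nuclei best, which is the mechanism behind the improvement the abstract reports.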
Although quantum Bayesian networks provide a promising paradigm for multi-agent decision-making, their practical application faces two challenges in the noisy intermediate-scale quantum (NISQ) era. Limited qubit resources restrict direct application to large-scale inference tasks. Additionally, no quantum methods are currently available for multi-agent collaborative decision-making. To address these challenges, we propose a hybrid quantum–classical multi-agent decision-making framework based on hierarchical Bayesian networks, comprising two novel methods. The first is a hybrid quantum–classical inference method based on hierarchical Bayesian networks. It decomposes large-scale hierarchical Bayesian networks into modular subnetworks. The inference for each subnetwork can be performed on NISQ devices, and the intermediate results are converted into classical messages for cross-layer transmission. The second is a multi-agent decision-making method using the variational quantum eigensolver (VQE) in the influence diagram. This method models collaborative decision-making with the influence diagram, encodes the expected utility of diverse actions into a Hamiltonian, and subsequently determines the intra-group optimal action efficiently. Experimental validation on the IonQ quantum simulator demonstrates that the hierarchical method outperforms the non-hierarchical method at the functional inference level, and the VQE method obtains the optimal strategy exactly at the collaborative decision-making level. Our research not only extends the application of quantum computing to multi-agent decision-making but also provides a practical solution for the NISQ era.
The effective and timely diagnosis and treatment of ocular diseases are key to the rapid recovery of patients. Today, a widespread disease that needs attention in this context is cataract. Although deep learning has significantly advanced the analysis of ocular disease images, a probabilistic model is needed to generate distributions over potential outcomes and thus support decisions involving uncertainty quantification. Therefore, this study implements a Bayesian Convolutional Neural Network (BCNN) model for predicting cataracts by assigning probability values to the predictions. It prepares both convolutional neural network (CNN) and BCNN models. The proposed BCNN model is CNN-based, with reparameterization applied in the first and last layers of the CNN model. The models are then trained on a dataset of cataract images filtered from the ocular disease fundus images on Kaggle. The deep CNN model has an accuracy of 95%, while the BCNN model has an accuracy of 93.75% along with uncertainty estimates for cataract and normal eye conditions. When compared with other methods, the proposed work can be a promising solution for cataract prediction with uncertainty estimation.
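The uncertainty-quantification idea — averaging class probabilities over many stochastic forward passes — can be illustrated without any deep-learning framework. The toy "network" below just jitters fixed logits; in a real BCNN the variation would come from sampling the weight posterior.

```python
import math, random

random.seed(0)

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def mc_predict(logits_fn, n_samples=200):
    """Mean class probabilities over stochastic samples; the per-class variance
    of those samples serves as a simple predictive-uncertainty estimate."""
    draws = [softmax(logits_fn()) for _ in range(n_samples)]
    k = len(draws[0])
    mean = [sum(d[i] for d in draws) / n_samples for i in range(k)]
    var = [sum((d[i] - mean[i]) ** 2 for d in draws) / n_samples for i in range(k)]
    return mean, var

# Stand-in for sampled BCNN outputs on one image: classes (cataract, normal).
def noisy_logits():
    return [2.0 + random.gauss(0, 0.3), 0.5 + random.gauss(0, 0.3)]

mean_p, var_p = mc_predict(noisy_logits)
```

A confident prediction shows low variance across samples; an ambiguous image would show wide spread even if the mean probability favors one class.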
This study investigates photonuclear reaction (γ,n) cross-sections using Bayesian neural network (BNN) analysis. After determining the optimal network architecture, which features two hidden layers, each with 50 hidden nodes, training was conducted for 30,000 iterations to ensure comprehensive data capture. By analyzing the distribution of absolute errors, positively correlated with the cross-section for the isotope ^159Tb, as well as the relative errors, unrelated to the cross-section, we confirmed that the network effectively captured the data features without overfitting. Comparison with the TENDL-2021 database demonstrated the BNN's reliability in fitting photonuclear cross-sections with lower average errors. The predictions for nuclei with single and double giant dipole resonance peak cross-sections, the accurate determination of the photoneutron reaction threshold in the low-energy region, and the precise description of trends in the high-energy cross-sections further demonstrate the network's generalization ability on the validation set. This can be attributed to the consistency of the training data. By using consistent training sets from different laboratories, Bayesian neural networks can predict nearby unknown cross-sections based on existing laboratory data, thereby estimating the potential differences between other laboratories' existing data and their own measurement results. Experimental measurements of photonuclear reactions on the newly constructed SLEGS beamline will contribute to clarifying the differences in cross-sections within the existing data.
Bayesian-optimized lithology identification has important basic geological research significance and engineering application value, and this paper proposes such a method based on machine learning of rock visible and near-infrared spectral data. First, the rock spectral data are preprocessed using Savitzky-Golay (SG) smoothing to remove noise; then, the preprocessed spectral data are reduced in dimensionality using Principal Component Analysis (PCA) to reduce redundancy, retain the effective discriminative information, and obtain the rock spectral features; finally, a lithology identification model is established on these spectral features, with its hyperparameters tuned by the Bayesian optimization (BO) algorithm to keep the hyperparameter combination from falling into a local optimum, and the predicted rock type is output. In addition, this paper conducts comparative analyses of models based on Artificial Neural Networks (ANN) versus Random Forests (RF), dimensionality reduction versus the full band, and different optimization algorithms, using the confusion matrix, accuracy, precision (P), recall (R), and F1 score (F1) as evaluation indexes of model accuracy. The results indicate that, after dimensionality reduction, the BO-ANN model achieves an accuracy of up to 99.80%, with precision and recall of up to 99.79%. Compared with the BO-RF model, it has higher identification accuracy and better stability for each rock type. The experiments and reliability analysis show that the proposed method has good robustness and generalization performance, which is of great significance for fast and accurate lithology identification at tunnel sites.
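The PCA step of the pipeline above can be sketched with a plain SVD; the "spectra" here are synthetic data with two latent factors standing in for real visible/near-infrared measurements (the SG smoothing and BO-tuned classifier stages are omitted).

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_reduce(X, k):
    """Project samples onto the top-k principal components via SVD of centered data."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]  # scores (n, k), components (k, bands)

# Synthetic "spectra": 30 samples x 200 bands driven by 2 latent factors plus noise.
latent = rng.normal(size=(30, 2))
loadings = rng.normal(size=(2, 200))
X = latent @ loadings + 0.01 * rng.normal(size=(30, 200))

scores, components = pca_reduce(X, 2)
explained = np.var(scores, axis=0).sum() / np.var(X - X.mean(0), axis=0).sum()
```

Because the synthetic data really have rank-2 structure, two components capture nearly all the variance — the same rationale for applying PCA before the classifier in the paper's pipeline.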
In this paper, an advanced satellite navigation filter design, referred to as the Variational Bayesian Maximum Correntropy Extended Kalman Filter (VBMCEKF), is introduced to enhance robustness and adaptability in scenarios with non-Gaussian noise and heavy-tailed outliers. The proposed design modifies the extended Kalman filter (EKF) for the global navigation satellite system (GNSS), integrating the maximum correntropy criterion (MCC) and the variational Bayesian (VB) method. This adaptive algorithm effectively reduces non-line-of-sight (NLOS) reception contamination and improves estimation accuracy, particularly in time-varying GNSS measurements. Experimental results show that the proposed method significantly outperforms conventional approaches in estimation accuracy under heavy-tailed outliers and non-Gaussian noise. By combining the MCC with VB approximation for real-time noise covariance estimation using fixed-point iteration, the VBMCEKF achieves superior filtering performance in challenging GNSS conditions. The method's adaptability and precision make it well suited to improving satellite navigation performance in stochastic environments.
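The maximum correntropy idea can be shown in a scalar toy update: a Gaussian kernel on the innovation down-weights the Kalman gain for outliers, with the weight found by fixed-point iteration. This is a one-dimensional sketch under assumed parameters, not the paper's full VBMCEKF.

```python
import math

def mcc_update(x_prior, P, z, R, sigma=2.0, iters=10):
    """Scalar measurement update with a correntropy-kernel weight on the innovation.
    sigma is an assumed kernel bandwidth; w -> 1 recovers the standard KF update."""
    x = x_prior
    for _ in range(iters):
        e = (z - x) / math.sqrt(R)                 # normalized innovation
        w = math.exp(-e * e / (2 * sigma ** 2))    # small weight for large outliers
        K = P * w / (P * w + R)                    # reweighted gain
        x = x_prior + K * (z - x_prior)            # fixed-point iterate
    return x

x_prior, P, R = 0.0, 1.0, 1.0
x_clean = mcc_update(x_prior, P, 1.0, R)     # consistent measurement: pulled toward z
x_outlier = mcc_update(x_prior, P, 50.0, R)  # heavy-tailed outlier: largely rejected
```

A standard EKF with these numbers would move halfway toward the outlier (to 25.0); the correntropy weight keeps the estimate essentially at the prior instead.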
The Pamir Plateau, at the northwestern margin of the Tibetan Plateau, is a key region for investigating continental collision and plateau uplift. To probe its deep structure, we collected seismic data from 263 stations across 11 research projects. We applied cross-correlation to the noise data and extracted surface wave dispersion data from the cross-correlation functions. The extracted dispersion data were subsequently inverted using a 3-D transdimensional Bayesian inversion method (rj-3DMcMC). The inversion result reveals several crustal low-velocity zones (LVZs). Their formation is likely related to crustal thickening, the exposure of gneiss domes, and thicker sedimentary sequences compared with surrounding areas. In the lower crust and upper mantle, the LVZs in southern Pamir and southeastern Karakoram evolve into high-velocity zones, which expand northeastward with increasing depth. This suggests northward underthrusting of the Indian Plate. We also analyzed the Moho using both the standard deviation of S-wave velocity and the S-wave velocity structure. The results show that significant variations in velocity standard deviation reliably delineate the Moho interface.
Integrating Bayesian Optimization with Volume of Fluid (VOF) simulations, this work aims to optimize the operational conditions and geometric parameters of T-junction microchannels for target droplet sizes. Bayesian Optimization utilizes Gaussian Process (GP) as its core model and employs an adaptive search strategy to efficiently explore and identify optimal combinations of operational parameters within a limited parameter space, thereby enabling rapid optimization of the required parameters to achieve the target droplet size. Traditional methods typically rely on manually selecting a series of operational parameters and conducting multiple simulations to gradually approach the target droplet size. This process is time-consuming and prone to getting trapped in local optima. In contrast, Bayesian Optimization adaptively adjusts its search strategy, significantly reducing computational costs and effectively exploring global optima, thus greatly improving optimization efficiency. Additionally, the study investigates the impact of rectangular rib structures within the T-junction microchannel on droplet generation, revealing how the channel geometry influences droplet formation and size. After determining the target droplet size, we further applied Bayesian Optimization to refine the rib geometry. The integration of Bayesian Optimization with computational fluid dynamics (CFD) offers a promising tool and provides new insights into the optimal design of microfluidic devices.
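A minimal GP-based Bayesian optimization loop of the kind described above might look as follows; the quadratic objective is a cheap stand-in for an expensive VOF droplet-size simulation, and the UCB acquisition and kernel lengthscale are arbitrary assumed choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def kernel(a, b, ls=0.15):
    """RBF kernel on 1-D inputs; ls is an assumed lengthscale."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(Xo, yo, Xq, noise=1e-6):
    """Exact GP posterior mean and stddev at query points Xq given observations."""
    K = kernel(Xo, Xo) + noise * np.eye(len(Xo))
    Ks = kernel(Xo, Xq)
    mu = Ks.T @ np.linalg.solve(K, yo)
    cov = kernel(Xq, Xq) - Ks.T @ np.linalg.solve(K, Ks)
    return mu, np.sqrt(np.clip(np.diag(cov), 0.0, None))

def objective(x):            # stand-in for "negative deviation from target droplet size"
    return -(x - 0.63) ** 2  # hypothetical best operating point at x = 0.63

grid = np.linspace(0, 1, 101)
X = [float(v) for v in rng.uniform(0, 1, 3)]   # a few initial "simulations"
y = [objective(x) for x in X]
for _ in range(10):
    mu, sd = gp_posterior(np.array(X), np.array(y), grid)
    x_next = float(grid[np.argmax(mu + 2.0 * sd)])  # UCB: explore where mean + 2*std is high
    X.append(x_next)
    y.append(objective(x_next))

best_x = X[int(np.argmax(y))]
```

Each loop iteration replaces one manually chosen CFD run: the surrogate proposes the next operating point, balancing exploiting the current best region against exploring uncertain ones.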
A Bayesian network reconstruction method based on norm minimization is proposed to address the sparsity and iterative divergence issues in network reconstruction caused by noise and missing values. The method achieves precise adjustment of the network structure by constructing a preliminary random network model, introducing small-world network characteristics, and combining L1 norm minimization regularization to control model complexity and optimize the inference of variable dependencies. In the game network reconstruction experiment, when the L1 norm minimization model reaches a 100% success rate in reconstructing existing connections, the minimum data required is about 40%, versus about 45% for a sparse Bayesian learning network. In terms of operational efficiency, the running time of the L1 norm minimization remains at roughly 1.0 s, while that of the comparison method increases significantly with data volume, reaching a maximum of 13.2 s. Meanwhile, at a signal-to-noise ratio of 10 dB, the L1 model achieves a 100% success rate in reconstructing existing connections, while the sparse Bayesian network reaches at best a 90% success rate in reconstructing non-existent connections. In the analysis of actual cases, the maximum lift-and-drop trajectory of the research method is 0.08 m, and the mean square error is 5.74 cm^2. The results indicate that this norm minimization-based method performs well in data efficiency and model stability, effectively reducing the impact of outliers on the reconstruction results so as to more accurately reflect the actual situation.
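L1-regularized recovery of a sparse connection vector can be sketched with ISTA (iterative soft thresholding), one standard solver for this kind of norm-minimization problem; the network data here are synthetic, not the paper's game networks.

```python
import numpy as np

rng = np.random.default_rng(2)

def soft(v, t):
    """Soft-thresholding operator: the proximal map of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.05, iters=1000):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by iterative soft thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x - step * A.T @ (A @ x - b), step * lam)
    return x

# Sparse "connection" vector: 50 candidate links, only 4 actually present,
# observed through 30 noisy random measurements (fewer observations than unknowns).
n, m = 50, 30
x_true = np.zeros(n)
x_true[[3, 11, 27, 40]] = [1.0, -0.8, 0.6, 1.2]
A = rng.normal(size=(m, n)) / np.sqrt(m)
b = A @ x_true + 0.01 * rng.normal(size=m)

x_hat = ista(A, b)
recovered = set(np.flatnonzero(np.abs(x_hat) > 0.2))
```

The L1 penalty drives absent links exactly to zero, which is why such methods can recover the support from fewer observations than unknowns, mirroring the data-efficiency result in the abstract.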
Objective To investigate the spatiotemporal patterns and socioeconomic factors influencing the incidence of tuberculosis (TB) in Guangdong Province between 2010 and 2019. Method Spatial and temporal variations in TB incidence were mapped using heat maps and hierarchical clustering. Socioenvironmental influencing factors were evaluated using a Bayesian spatiotemporal conditional autoregressive (ST-CAR) model. Results The annual incidence of TB in Guangdong decreased from 91.85/100,000 in 2010 to 53.06/100,000 in 2019. Spatial hotspots were found in northeastern Guangdong, particularly in Heyuan, Shanwei, and Shantou, while Shenzhen, Dongguan, and Foshan had the lowest rates in the Pearl River Delta. The ST-CAR model showed that TB risk was lower with higher per capita Gross Domestic Product (GDP) [Relative Risk (RR), 0.91; 95% Confidence Interval (CI): 0.86–0.98], a higher ratio of licensed physicians (RR, 0.94; 95% CI: 0.90–0.98), and higher per capita public expenditure (RR, 0.94; 95% CI: 0.90–0.97), with a marginal effect of population density (RR, 0.86; 95% CI: 0.86–1.00). Conclusion The incidence of TB in Guangdong varies spatially and temporally. Areas with poor economic conditions and insufficient healthcare resources are at an increased risk of TB infection. Strategies focusing on equitable health resource distribution and economic development are key to TB control.
Objective: Esophageal cancer has made a great contribution to the cancer burden in Jiangsu Province, East China. This study was aimed at reporting the esophageal cancer incidence trend in 2009-2019 and its prediction to 2030. Methods: The burden of esophageal cancer in Jiangsu in 2019 was estimated using data from 54 cancer registries selected from the Jiangsu Cancer Registry. Incident cases from 16 cancer registries were applied for the temporal trend from 2009 to 2019. The burden of esophageal cancer by 2030 was projected using the Bayesian age-period-cohort (BAPC) model. Results: About 24,886 new cases of esophageal cancer (17,233 males and 7,653 females) occurred in Jiangsu in 2019. Rural regions of Jiangsu had the highest incidence rate. The age-standardized incidence rate (ASIR, per 100,000 population) of esophageal cancer in Jiangsu decreased from 27.72 per 100,000 in 2009 to 14.18 per 100,000 in 2019. The BAPC model showed that the ASIR would decline from 13.01 per 100,000 in 2020 to 4.88 per 100,000 in 2030. Conclusions: According to the data, esophageal cancer incidence rates were predicted to decline until 2030, yet the disease burden is still significant in Jiangsu. The existing approaches to prevention and control are effective and need to be maintained.
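For intuition only, a naive constant-rate (log-linear) extrapolation from the two endpoint ASIR values quoted above can be computed directly. This is not the BAPC model: the BAPC projection (4.88 per 100,000 by 2030) is steeper than this constant-rate benchmark, reflecting its age-period-cohort structure.

```python
import math

# Endpoints reported in the abstract (ASIR per 100,000).
asir_2009, asir_2019 = 27.72, 14.18

# Assumed constant annual log decline estimated from the two endpoints.
annual = math.log(asir_2019 / asir_2009) / 10

def project(year):
    """Constant-rate extrapolation anchored at 2019 (a benchmark, not BAPC)."""
    return asir_2019 * math.exp(annual * (year - 2019))

asir_2030 = project(2030)  # roughly 6.8 under the constant-rate assumption
```

Comparing such a simple benchmark against the BAPC output is a quick sanity check on how much of the projected decline comes from cohort effects rather than the overall trend.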
Fire can cause significant damage to the environment, economy, and human lives. If fire can be detected early, the damage can be minimized. Advances in technology, particularly in computer vision powered by deep learning, have enabled automated fire detection in images and videos. Several deep learning models have been developed for object detection, including applications in fire and smoke detection. This study focuses on optimizing the training hyperparameters of YOLOv8 and YOLOv10 models using Bayesian Tuning (BT). Experimental results on the large-scale D-Fire dataset demonstrate that this approach enhances detection performance. Specifically, the proposed approach improves the mean average precision at an Intersection over Union (IoU) threshold of 0.5 (mAP50) of the YOLOv8s, YOLOv10s, YOLOv8l, and YOLOv10l models by 0.26, 0.21, 0.84, and 0.63, respectively, compared to models trained with the default hyperparameters. The performance gains are more pronounced in the larger models, YOLOv8l and YOLOv10l, than in their smaller counterparts, YOLOv8s and YOLOv10s. Furthermore, YOLOv8 models consistently outperform YOLOv10, with mAP50 improvements of 0.26 for YOLOv8s over YOLOv10s and 0.65 for YOLOv8l over YOLOv10l when trained with BT. These results establish YOLOv8 as the preferred model for fire detection applications where detection performance is prioritized.
Disaster mitigation necessitates scientific and accurate aftershock forecasting during the critical 2 h after an earthquake. However, this action faces immense challenges due to the lack of early postearthquake data and the unreliability of forecasts. To obtain foundational data for sequence parameters of the land-sea adjacent zone and establish a reliable and operational aftershock forecasting framework, we combined the initial sequence parameters extracted from envelope functions and incorporated small-earthquake information into our model to construct a Bayesian algorithm for the early postearthquake stage. We performed parameter fitting, early postearthquake aftershock occurrence rate forecasting, and effectiveness evaluation for 36 earthquake sequences with M ≥ 4.0 in the Bohai Rim region since 2010. According to the results, during the early stage after the mainshock, earthquake sequence parameters exhibited relatively drastic fluctuations with significant errors. The integration of prior information can mitigate the intensity of these changes and reduce errors. The initial and stable sequence parameters generally display advantageous distribution characteristics, with each parameter's distribution being relatively concentrated and showing good symmetry and remarkable consistency. The sequence parameter p-values were relatively small, which indicates the comparatively slow attenuation of significant earthquake events in the Bohai Rim region. A certain positive correlation was observed between earthquake sequence parameters b and p. However, sequence parameters are unrelated to the mainshock magnitude, which implies that their statistical characteristics and trends are universal. The Bayesian algorithm revealed a good forecasting capability for aftershocks in the early postearthquake period (2 h) in the Bohai Rim region, with an overall forecasting efficacy rate of 76.39%. The proportion of "too low" failures exceeded that of "too high" failures, and the number of forecasting failures for the next three days was greater than that for the next day.
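Aftershock occurrence-rate forecasts of this kind are conventionally built on the modified Omori law; the sketch below integrates that rate over the critical first 2 h. The parameters K, c, and p are hypothetical illustrations, not values fitted for the Bohai Rim sequences.

```python
def omori_rate(t, K=50.0, c=0.05, p=1.1):
    """Modified Omori law: aftershock rate (events/day) t days after the mainshock."""
    return K / (t + c) ** p

def expected_count(t1, t2, K=50.0, c=0.05, p=1.1):
    """Expected number of aftershocks in [t1, t2] days: the closed-form integral
    of the rate (valid for p != 1)."""
    g = lambda t: (t + c) ** (1.0 - p) / (1.0 - p)
    return K * (g(t2) - g(t1))

# Forecast windows: the critical first 2 h (2/24 day) and the first full day.
n_first_2h = expected_count(0.0, 2.0 / 24.0)
n_next_day = expected_count(0.0, 1.0)
```

Because the rate decays as a power law, a large share of the first day's aftershocks falls inside the first 2 h, which is exactly why early-window forecasts are both valuable and sensitive to the initial parameter estimates.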
Edge computing (EC) combined with the Internet of Things (IoT) provides a scalable and efficient solution for smart homes. The rapid proliferation of IoT devices poses real-time data processing and security challenges. EC has become a transformative paradigm for addressing these challenges, particularly in intrusion detection and anomaly mitigation. The widespread connectivity of IoT edge networks has exposed them to various security threats, necessitating robust strategies to detect malicious activities. This research presents a privacy-preserving federated anomaly detection framework combined with Bayesian game theory (BGT) and double deep Q-learning (DDQL). The proposed framework integrates BGT to model attacker-defender interactions for dynamic adaptation to threat levels and resource availability. It also models a strategic layout between attackers and defenders that takes uncertainty into account. DDQL is incorporated to optimize decision-making and aids in learning optimal defense policies at the edge, thereby ensuring policy and decision optimization. Federated learning (FL) enables decentralized anomaly detection without sharing sensitive data between devices. Data were collected from various sensors in a real-time EC-IoT network to identify irregularities caused by different attacks. The results reveal that the proposed model achieves a detection accuracy of up to 98% while maintaining low resource consumption. This study demonstrates the synergy between game theory and FL in strengthening anomaly detection in EC-IoT networks.
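The "double" estimator underlying DDQL can be shown in tabular form: one Q-table selects the greedy action while the other evaluates it, reducing the max-operator overestimation bias of single-table Q-learning. The one-state, two-action task and reward means below are made up for illustration; a DDQL agent would replace the tables with neural networks.

```python
import random

random.seed(0)

ACTIONS = (0, 1)
mean_reward = {0: 0.2, 1: 1.0}  # hypothetical: action 1 (say, "raise defense") is better
gamma, alpha, eps = 0.9, 0.05, 0.1
qa = {a: 0.0 for a in ACTIONS}
qb = {a: 0.0 for a in ACTIONS}

for _ in range(5000):
    # epsilon-greedy on the combined estimate
    if random.random() < eps:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: qa[x] + qb[x])
    r = random.gauss(mean_reward[a], 0.5)  # noisy reward
    if random.random() < 0.5:
        a_star = max(ACTIONS, key=lambda x: qa[x])   # A selects ...
        qa[a] += alpha * (r + gamma * qb[a_star] - qa[a])  # ... B evaluates
    else:
        a_star = max(ACTIONS, key=lambda x: qb[x])   # B selects ...
        qb[a] += alpha * (r + gamma * qa[a_star] - qb[a])  # ... A evaluates

best_action = max(ACTIONS, key=lambda x: qa[x] + qb[x])
```

Splitting selection from evaluation keeps noisy reward spikes in one table from inflating the other's target, which is the property DDQL inherits when learning defense policies under uncertain attacker behavior.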
Funding: Supported by Istanbul Technical University (Project No. 45698) and through the "Young Researchers' Career Development Project - training of doctoral students" of the Croatian Science Foundation.
Funding: Supported by the National Natural Science Foundation of China (Grant No. U23B20105).
Funding: Supported by the National Natural Science Foundation of China (Nos. 32171627 and 62105252), the Science and Technology Research Program of Chongqing Municipal Education Commission (No. KJZD-M202200602), and the Hangzhou Science and Technology Development Project (No. 202204T04).
Funding: Supported by the National Natural Science Foundation of China (Nos. 12475135, 12035011, and 12475119), the Shandong Provincial Natural Science Foundation, China (No. ZR2020MA096), and the Fundamental Research Funds for the Central Universities (No. 22CX03017A).
Abstract: Recently, machine learning has become a powerful tool for predicting the nuclear charge radius R_C, providing novel insights into complex physical phenomena. This study employs a continuous Bayesian probability (CBP) estimator and Bayesian model averaging (BMA) to optimize predictions of R_C from sophisticated theoretical models. The CBP estimator treats the residual between the theoretical and experimental values of R_C as a continuous variable and derives its posterior probability density function (PDF) from Bayesian theory. The BMA method assigns weights to models based on their predictive performance on benchmark nuclei, thereby accounting for the unique strengths of each model. In global optimization, the CBP estimator improved the predictive accuracy of the three theoretical models by approximately 60%. The extrapolation analyses consistently achieved an improvement rate of approximately 45%, demonstrating the robustness of the CBP estimator. Furthermore, combining the CBP and BMA methods reduces the standard deviation to below 0.02 fm, effectively reproducing the pronounced shell effects on R_C of the Ca and Sr isotope chains. This work proposes an efficient method to accurately describe R_C of unknown nuclei, with potential applications to research on other nuclear properties.
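The BMA weighting scheme can be sketched in a few lines; the residual values and the Gaussian likelihood with σ = 0.02 fm below are assumptions for illustration, not the paper's exact setup:

```python
import math

def bma_weights(model_residuals, sigma=0.02):
    """Assign each model a weight proportional to its Gaussian likelihood
    on benchmark nuclei (residual = theory - experiment, in fm)."""
    logls = [sum(-0.5 * (r / sigma) ** 2 for r in res) for res in model_residuals]
    m = max(logls)
    ws = [math.exp(l - m) for l in logls]  # subtract max for numerical stability
    s = sum(ws)
    return [wi / s for wi in ws]

def bma_predict(weights, predictions):
    """Weighted average of the models' charge-radius predictions."""
    return sum(wi * p for wi, p in zip(weights, predictions))

# Hypothetical residuals (fm) for three models on two benchmark nuclei:
residuals = [[0.01, -0.02], [0.05, 0.04], [0.10, 0.08]]
w = bma_weights(residuals)
print(round(sum(w), 6))   # 1.0 -- weights are normalized
print(w[0] > w[1] > w[2])  # True -- better-fitting models get larger weights
```

With the best-fitting model dominating the weights, the BMA prediction leans toward the strongest model while still hedging across the others.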
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62473371 and 61673389).
Abstract: Although quantum Bayesian networks provide a promising paradigm for multi-agent decision-making, their practical application faces two challenges in the noisy intermediate-scale quantum (NISQ) era: limited qubit resources restrict direct application to large-scale inference tasks, and no quantum methods are currently available for multi-agent collaborative decision-making. To address these challenges, we propose a hybrid quantum-classical multi-agent decision-making framework based on hierarchical Bayesian networks, comprising two novel methods. The first is a hybrid quantum-classical inference method based on hierarchical Bayesian networks: it decomposes large-scale hierarchical Bayesian networks into modular subnetworks, performs inference for each subnetwork on NISQ devices, and converts the intermediate results into classical messages for cross-layer transmission. The second is a multi-agent decision-making method using the variational quantum eigensolver (VQE) in an influence diagram: it models collaborative decision-making with the influence diagram, encodes the expected utility of diverse actions into a Hamiltonian, and then efficiently determines the intra-group optimal action. Experimental validation on the IonQ quantum simulator demonstrates that the hierarchical method outperforms the non-hierarchical method at the functional inference level, and that the VQE method obtains the exact optimal strategy at the collaborative decision-making level. This research not only extends the application of quantum computing to multi-agent decision-making but also provides a practical solution for the NISQ era.
Funding: Saudi Arabia, for funding this work through the Small Research Group Project under Grant Number RGP.1/316/45.
Abstract: Effective and timely diagnosis and treatment of ocular diseases is key to patients' rapid recovery, and a mass-scale disease needing attention in this context is cataract. Although deep learning has significantly advanced the analysis of ocular disease images, a probabilistic model is needed to generate distributions over potential outcomes and thus support decisions that involve uncertainty quantification. This study therefore implements a Bayesian convolutional neural network (BCNN) model that predicts cataracts while assigning probability values to its predictions. Both a convolutional neural network (CNN) and a BCNN model are prepared; the proposed BCNN is CNN-based, with reparameterization applied in the first and last layers of the CNN. The models are trained on a dataset of cataract images filtered from the ocular disease fundus images on Kaggle. The deep CNN model achieves an accuracy of 95%, while the BCNN model achieves 93.75% together with uncertainty estimates for cataract and normal eye conditions. Compared with other methods, the proposed work is a promising solution for cataract prediction with uncertainty estimation.
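The uncertainty information a Bayesian CNN provides can be summarized by the predictive entropy of the averaged class probabilities over stochastic forward passes; a minimal sketch with hypothetical probability samples (not drawn from the paper's model):

```python
import math

def predictive_distribution(mc_probs):
    """Average class probabilities over stochastic forward passes,
    e.g. samples drawn from a Bayesian CNN's weight posterior."""
    n = len(mc_probs)
    k = len(mc_probs[0])
    return [sum(sample[c] for sample in mc_probs) / n for c in range(k)]

def entropy(p):
    """Predictive entropy in nats; higher means more uncertain."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Hypothetical posterior samples for classes [cataract, normal]:
confident = [[0.97, 0.03], [0.95, 0.05], [0.96, 0.04]]
uncertain = [[0.60, 0.40], [0.45, 0.55], [0.55, 0.45]]
print(entropy(predictive_distribution(confident)) <
      entropy(predictive_distribution(uncertain)))  # True
```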
Funding: Supported by the National Key Research and Development Program (No. 2022YFA1602404), the National Natural Science Foundation of China (Nos. 12388102, 12275338, and 12005280), and the Key Laboratory of Nuclear Data Foundation (No. JCKY2022201C152).
Abstract: This study investigates photonuclear reaction (γ,n) cross-sections using Bayesian neural network (BNN) analysis. After determining the optimal network architecture, which features two hidden layers of 50 nodes each, training was conducted for 30,000 iterations to ensure comprehensive data capture. By analyzing the distribution of the absolute errors, which correlate positively with the cross-section for the isotope 159Tb, and of the relative errors, which are unrelated to the cross-section, we confirmed that the network captured the data features without overfitting. Comparison with the TENDL-2021 database demonstrated the BNN's reliability in fitting photonuclear cross-sections with lower average errors. The predictions for nuclei with single and double giant dipole resonance peaks, the accurate determination of the photoneutron reaction threshold in the low-energy region, and the precise description of trends in the high-energy cross-sections further demonstrate the network's generalization ability on the validation set. This can be attributed to the consistency of the training data: by using consistent training sets from different laboratories, Bayesian neural networks can predict nearby unknown cross-sections from existing laboratory data, thereby estimating the potential differences between other laboratories' data and a laboratory's own measurements. Experimental measurements of photonuclear reactions on the newly constructed SLEGS beamline will help clarify the differences in cross-sections within the existing data.
Funding: Support from the National Natural Science Foundation of China (Grant Nos. 52379103 and 52279103) and the Natural Science Foundation of Shandong Province (Grant No. ZR2023YQ049).
Abstract: Lithology identification has important basic geological research significance and engineering application value, and this paper proposes a Bayesian-optimized lithology identification method based on machine learning of rock visible and near-infrared spectral data. First, the rock spectral data are preprocessed using Savitzky-Golay (SG) smoothing to remove noise. Then, the preprocessed spectra are reduced in dimension using principal component analysis (PCA) to remove redundancy, retain the effective discriminative information, and obtain the rock spectral features. Finally, an identification model is built on these features, with its hyperparameters tuned by the Bayesian optimization (BO) algorithm to keep the hyperparameter combination from falling into a local optimum, and the predicted rock type is output. In addition, this paper comparatively analyzes models based on artificial neural networks (ANN) versus random forests (RF), dimensionality reduction versus the full band, and different optimization algorithms, using the confusion matrix, accuracy, precision (P), recall (R), and F1 score (F1) as the evaluation indexes of model accuracy. The results indicate that, after dimensionality reduction, the BO-ANN model achieves an accuracy of up to 99.80% and precision and recall of up to 99.79%; compared with the BO-RF model, it has higher identification accuracy and better stability for each rock type. The experiments and reliability analysis show that the proposed method has good robustness and generalization performance, which is of great significance for fast and accurate lithology identification at tunnel sites.
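The SG preprocessing step can be sketched with the closed-form 5-point quadratic coefficients (-3, 12, 17, 12, -3)/35; the paper does not state its window length or polynomial order, so these are assumptions:

```python
def savgol_smooth(y, window=5):
    """Savitzky-Golay smoothing with a quadratic fit over a 5-point window.
    The closed-form convolution coefficients for this case are (-3, 12, 17, 12, -3)/35."""
    coeffs = [-3, 12, 17, 12, -3]
    half = window // 2
    out = list(y)  # endpoints are left unsmoothed in this sketch
    for i in range(half, len(y) - half):
        out[i] = sum(c * y[i + j - half] for j, c in enumerate(coeffs)) / 35.0
    return out

# A quadratic signal is reproduced exactly, since SG fits a quadratic locally:
y = [x * x for x in range(7)]
print(savgol_smooth(y))  # interior points equal the original quadratic
```

`scipy.signal.savgol_filter` provides the general version; the hard-coded coefficients above are valid only for this particular window and order.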
Funding: Supported by the National Science and Technology Council, Taiwan, under Grants NSTC 111-2221-E-019-047 and NSTC 112-2221-E-019-030.
Abstract: This paper introduces an advanced satellite navigation filter design, the variational Bayesian maximum correntropy extended Kalman filter (VBMCEKF), to enhance robustness and adaptability in scenarios with non-Gaussian noise and heavy-tailed outliers. The design modifies the extended Kalman filter (EKF) for the global navigation satellite system (GNSS), integrating the maximum correntropy criterion (MCC) and the variational Bayesian (VB) method. This adaptive algorithm effectively reduces non-line-of-sight (NLOS) reception contamination and improves estimation accuracy, particularly for time-varying GNSS measurements. Experimental results show that the proposed method significantly outperforms conventional approaches in estimation accuracy under heavy-tailed outliers and non-Gaussian noise. By combining the MCC with VB approximation for real-time noise covariance estimation using fixed-point iteration, the VBMCEKF achieves superior filtering performance in challenging GNSS conditions. Its adaptability and precision make it well suited to improving satellite navigation performance in stochastic environments.
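The core idea of the MCC fixed-point iteration, shown here on a scalar robust-mean toy problem rather than the full EKF update, is that each measurement is weighted by a Gaussian kernel of its residual, so heavy-tailed outliers are down-weighted automatically (the data and kernel bandwidth are illustrative assumptions):

```python
import math

def mcc_mean(samples, sigma=1.0, iters=20):
    """Fixed-point iteration maximizing correntropy: each sample is weighted
    by a Gaussian kernel of its residual from the current estimate."""
    mu = sum(samples) / len(samples)  # start from the ordinary mean
    for _ in range(iters):
        w = [math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) for x in samples]
        mu = sum(wi * xi for wi, xi in zip(w, samples)) / sum(w)
    return mu

# Measurements near 5.0 with one NLOS-like outlier at 50:
data = [4.9, 5.1, 5.0, 4.8, 5.2, 50.0]
print(sum(data) / len(data))  # ordinary mean = 12.5, pulled by the outlier
print(mcc_mean(data))         # close to 5.0, outlier nearly ignored
```

The same weighting, embedded in the EKF measurement update with VB-estimated noise covariances, is what gives the VBMCEKF its robustness.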
Funding: Supported by the National Natural Science Foundation of China (Grant No. 42174126), the Alliance of International Science Organizations (ANSO) Project (Grant No. ANSO-CR-PP2022-04), the Deep Earth Probe and Mineral Resources Exploration National Science and Technology Major Project (Grant Nos. 2024ZD1002206 and 2024ZD1002201), and the Key R&D Program of Xinjiang Uyghur Autonomous Region (Grant No. 2024B03013-2).
Abstract: The Pamir Plateau, at the northwestern margin of the Tibetan Plateau, is a key region for investigating continental collision and plateau uplift. To probe its deep structure, we collected seismic data from 263 stations across 11 research projects. We applied cross-correlation to the noise data and extracted surface wave dispersion data from the cross-correlation functions. The extracted dispersion data were then inverted using a 3-D transdimensional Bayesian inversion method (rj-3DMcMC). The inversion result reveals several crustal low-velocity zones (LVZs), whose formation is likely related to crustal thickening, the exposure of gneiss domes, and sedimentary sequences thicker than in surrounding areas. In the lower crust and upper mantle, the LVZs in southern Pamir and southeastern Karakoram give way to high-velocity zones, which expand northeastward with increasing depth, suggesting northward underthrusting of the Indian Plate. We also analyzed the Moho using both the standard deviation of the S-wave velocity and the S-wave velocity structure; the results show that significant variations in the velocity standard deviation reliably delineate the Moho interface.
Funding: Support from the National Key Research and Development Program of China (2023YFC3905400), the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA0490102), and the National Natural Science Foundation of China (22178354, 22421003, and 22408374).
文摘Integrating Bayesian Optimization with Volume of Fluid (VOF) simulations, this work aims to optimize the operational conditions and geometric parameters of T-junction microchannels for target droplet sizes. Bayesian Optimization utilizes Gaussian Process (GP) as its core model and employs an adaptive search strategy to efficiently explore and identify optimal combinations of operational parameters within a limited parameter space, thereby enabling rapid optimization of the required parameters to achieve the target droplet size. Traditional methods typically rely on manually selecting a series of operational parameters and conducting multiple simulations to gradually approach the target droplet size. This process is time-consuming and prone to getting trapped in local optima. In contrast, Bayesian Optimization adaptively adjusts its search strategy, significantly reducing computational costs and effectively exploring global optima, thus greatly improving optimization efficiency. Additionally, the study investigates the impact of rectangular rib structures within the T-junction microchannel on droplet generation, revealing how the channel geometry influences droplet formation and size. After determining the target droplet size, we further applied Bayesian Optimization to refine the rib geometry. The integration of Bayesian Optimization with computational fluid dynamics (CFD) offers a promising tool and provides new insights into the optimal design of microfluidic devices.
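The adaptive suggest-evaluate loop described above can be sketched with a tiny Gaussian-process surrogate and an upper-confidence-bound acquisition rule. The paper does not state its acquisition function, and the objective below is a toy stand-in for the droplet-size mismatch, so everything here is an illustrative assumption:

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_ucb_step(xs, ys, candidates, length=0.2, kappa=2.0, jitter=1e-8):
    """One suggest step: fit a zero-mean GP with an RBF kernel to (xs, ys)
    and return the candidate with the highest upper confidence bound."""
    k = lambda a, b: math.exp(-((a - b) ** 2) / (2 * length ** 2))
    n = len(xs)
    K = [[k(xs[i], xs[j]) + (jitter if i == j else 0.0) for j in range(n)] for i in range(n)]
    alpha = solve(K, ys)
    best, best_ucb = None, -float("inf")
    for c in candidates:
        kv = [k(c, xi) for xi in xs]
        mean = sum(kv[i] * alpha[i] for i in range(n))
        v = solve(K, kv)
        var = max(0.0, 1.0 - sum(kv[i] * v[i] for i in range(n)))
        ucb = mean + kappa * math.sqrt(var)
        if ucb > best_ucb:
            best, best_ucb = c, ucb
    return best

# Toy objective standing in for (negative) droplet-size error; maximum at x = 0.3:
f = lambda x: -(x - 0.3) ** 2
grid = [i / 20 for i in range(21)]
xs, ys = [0.0, 1.0], [f(0.0), f(1.0)]
for _ in range(8):
    x_next = gp_ucb_step(xs, ys, [c for c in grid if c not in xs])
    xs.append(x_next)
    ys.append(f(x_next))
best_x = xs[ys.index(max(ys))]
print(best_x)  # close to 0.3 after only 10 evaluations
```

Libraries such as scikit-optimize implement this loop robustly; the hand-rolled GP here is only to make the suggest-evaluate cycle explicit.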
Funding: Supported by the Scientific and Technological Developing Scheme of Jilin Province, China (No. 20240101371JC) and the National Natural Science Foundation of China (No. 62107008).
Abstract: A Bayesian network reconstruction method based on norm minimization is proposed to address the sparsity and iterative divergence issues caused by noise and missing values in network reconstruction. The method achieves precise adjustment of the network structure by constructing a preliminary random network model, introducing small-world network characteristics, and combining L1 norm minimization regularization to control model complexity and optimize the inference of variable dependencies. In the game network reconstruction experiment, the L1 norm minimization model needs only about 40% of the data to reach a 100% success rate in reconstructing existing connections, whereas a sparse Bayesian learning network needs about 45%. In terms of operational efficiency, the running time of L1 norm minimization stays at roughly 1.0 s, while that of the comparison method grows significantly with data volume, reaching a maximum of 13.2 s. At a signal-to-noise ratio of 10 dB, the L1 model achieves a 100% success rate in reconstructing existing connections, while the sparse Bayesian network reaches at most 90% in reconstructing non-existent connections. In the analysis of actual cases, the maximum lift-and-drop track of the research method is 0.08 m, and the mean square error is 5.74 cm^2. The results indicate that this norm minimization-based method performs well in data efficiency and model stability, effectively reducing the impact of outliers so that the reconstruction more accurately reflects the actual situation.
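L1-regularized reconstruction of this kind reduces, per node, to a lasso problem whose sparse solution encodes which candidate links exist. A minimal coordinate-descent sketch with soft thresholding (illustrative, not the paper's solver; the data are synthetic):

```python
def soft_threshold(z, t):
    """Proximal map of the L1 norm: shrink z toward zero by t."""
    return (z - t) if z > t else (z + t) if z < -t else 0.0

def lasso_cd(X, y, lam, iters=200):
    """Coordinate descent for min_w 0.5*||y - Xw||^2 + lam*||w||_1.
    Zero entries of w correspond to links inferred not to exist."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        for j in range(d):
            # residual with feature j held out
            r = [y[i] - sum(w[k] * X[i][k] for k in range(d) if k != j) for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho, lam) / z if z > 0 else 0.0
    return w

X = [[1, 0], [0, 1], [1, 1], [2, 0]]  # rows: observations, cols: candidate links
y = [2, 0, 2, 4]                      # generated by true weights [2, 0]
w = lasso_cd(X, y, lam=0.1)
print(w)  # first weight close to 2, second shrunk exactly to 0
```

The soft-thresholding step is what produces exact zeros, giving the sparse structure the abstract relies on.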
Funding: Supported by the Guangdong Provincial Clinical Research Center for Tuberculosis (No. 2020B1111170014).
Abstract: Objective: To investigate the spatiotemporal patterns and socioeconomic factors influencing the incidence of tuberculosis (TB) in Guangdong Province between 2010 and 2019. Method: Spatial and temporal variations in TB incidence were mapped using heat maps and hierarchical clustering. Socioenvironmental influencing factors were evaluated using a Bayesian spatiotemporal conditional autoregressive (ST-CAR) model. Results: The annual incidence of TB in Guangdong decreased from 91.85/100,000 in 2010 to 53.06/100,000 in 2019. Spatial hotspots were found in northeastern Guangdong, particularly in Heyuan, Shanwei, and Shantou, while Shenzhen, Dongguan, and Foshan in the Pearl River Delta had the lowest rates. The ST-CAR model showed that TB risk was lower with higher per capita gross domestic product (GDP) [relative risk (RR), 0.91; 95% confidence interval (CI): 0.86–0.98], a higher ratio of licensed physicians (RR, 0.94; 95% CI: 0.90–0.98), and higher per capita public expenditure (RR, 0.94; 95% CI: 0.90–0.97), with a marginal effect of population density (RR, 0.86; 95% CI: 0.86–1.00). Conclusion: The incidence of TB in Guangdong varies spatially and temporally. Areas with poor economic conditions and insufficient healthcare resources are at increased risk of TB. Strategies focusing on equitable health resource distribution and economic development are key to TB control.
Abstract: Objective: Esophageal cancer contributes greatly to the cancer burden in Jiangsu Province, East China. This study reports the esophageal cancer incidence trend in 2009-2019 and its projection to 2030. Methods: The burden of esophageal cancer in Jiangsu in 2019 was estimated using data from 54 cancer registries selected from the Jiangsu Cancer Registry. Incident cases from 16 cancer registries were used for the temporal trend from 2009 to 2019. The burden of esophageal cancer by 2030 was projected using the Bayesian age-period-cohort (BAPC) model. Results: About 24,886 new cases of esophageal cancer (17,233 males and 7,653 females) occurred in Jiangsu in 2019. Rural regions of Jiangsu had the highest incidence rate. The age-standardized incidence rate (ASIR) of esophageal cancer in Jiangsu decreased from 27.72 per 100,000 in 2009 to 14.18 per 100,000 in 2019. The BAPC model projected that the ASIR would decline from 13.01 per 100,000 in 2020 to 4.88 per 100,000 in 2030. Conclusions: Esophageal cancer incidence rates are predicted to decline until 2030, yet the disease burden remains significant in Jiangsu. The existing approaches to prevention and control are effective and need to be maintained.
Funding: Supported by the MSIT (Ministry of Science and ICT), Republic of Korea, under the ITRC (Information Technology Research Center) Support Program (IITP-2024-RS-2022-00156354) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation), and by the Technology Development Program (RS-2023-00264489) funded by the Ministry of SMEs and Startups (MSS, Republic of Korea).
Abstract: Fire can cause significant damage to the environment, economy, and human lives; if fire is detected early, the damage can be minimized. Advances in technology, particularly computer vision powered by deep learning, have enabled automated fire detection in images and videos, and several deep learning models for object detection have been applied to fire and smoke detection. This study focuses on optimizing the training hyperparameters of YOLOv8 and YOLOv10 models using Bayesian tuning (BT). Experimental results on the large-scale D-Fire dataset demonstrate that this approach enhances detection performance. Specifically, it improves the mean average precision at an intersection over union (IoU) threshold of 0.5 (mAP50) of the YOLOv8s, YOLOv10s, YOLOv8l, and YOLOv10l models by 0.26, 0.21, 0.84, and 0.63, respectively, compared to models trained with the default hyperparameters. The performance gains are more pronounced in the larger models, YOLOv8l and YOLOv10l, than in their smaller counterparts, YOLOv8s and YOLOv10s. Furthermore, YOLOv8 models consistently outperform YOLOv10, with mAP50 improvements of 0.26 for YOLOv8s over YOLOv10s and 0.65 for YOLOv8l over YOLOv10l when trained with BT. These results establish YOLOv8 as the preferred model for fire detection applications where detection performance is prioritized.
Funding: Supported by the Natural Science Foundation of Tianjin (No. 22JCQNJC01070), the National Natural Science Foundation of China (No. 42404079), and the Key Project of Tianjin Earthquake Agency (No. Zd202402).
Abstract: Disaster mitigation necessitates scientific and accurate aftershock forecasting during the critical 2 h after an earthquake. This task faces immense challenges, however, owing to the scarcity of early post-earthquake data and the unreliability of forecasts. To obtain foundational data on sequence parameters of the land-sea adjacent zone and establish a reliable, operational aftershock forecasting framework, we combined initial sequence parameters extracted from envelope functions with small-earthquake information to construct a Bayesian algorithm for the early post-earthquake stage. We performed parameter fitting, early post-earthquake aftershock occurrence rate forecasting, and effectiveness evaluation for 36 earthquake sequences with M ≥ 4.0 in the Bohai Rim region since 2010. The results show that, in the early stage after the mainshock, earthquake sequence parameters fluctuate relatively drastically with significant errors; integrating prior information mitigates these changes and reduces the errors. The initial and stable sequence parameters generally display advantageous distribution characteristics, with each parameter's distribution relatively concentrated, symmetric, and remarkably consistent. The sequence p-values were relatively small, indicating comparatively slow attenuation of significant earthquake events in the Bohai Rim region. A positive correlation was observed between the sequence parameters b and p, whereas the sequence parameters are unrelated to mainshock magnitude, implying that their statistical characteristics and trends are universal. The Bayesian algorithm showed good forecasting capability for aftershocks in the early post-earthquake period (2 h) in the Bohai Rim region, with an overall forecasting efficacy rate of 76.39%. The proportion of "too low" failures exceeded that of "too high" failures, and the number of forecasting failures for the next three days was greater than that for the next day.
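Aftershock occurrence-rate forecasts of this kind typically build on the modified Omori law, whose p-value is the decay parameter discussed above. A minimal sketch, with parameter values assumed purely for illustration:

```python
def omori_rate(t, K=50.0, c=0.05, p=1.1):
    """Modified Omori law: aftershock rate (events/day) t days after the mainshock.
    Smaller p means slower decay, as reported for the Bohai Rim region."""
    return K / (t + c) ** p

def expected_count(t1, t2, K=50.0, c=0.05, p=1.1, steps=10000):
    """Expected number of aftershocks in [t1, t2], by trapezoidal integration of the rate."""
    h = (t2 - t1) / steps
    total = 0.5 * (omori_rate(t1, K, c, p) + omori_rate(t2, K, c, p))
    total += sum(omori_rate(t1 + i * h, K, c, p) for i in range(1, steps))
    return total * h

# The rate decays with time, so the first day holds more events than the next two combined:
day1 = expected_count(0.0, 1.0)
day2_3 = expected_count(1.0, 3.0)
print(day1 > day2_3)  # True
```

In a Bayesian framework, prior distributions on K, c, and p from past regional sequences are what stabilize these estimates in the data-poor first 2 h.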
Funding: The authors extend their appreciation to the Deanship of Research and Graduate Studies at King Khalid University for funding this work through the Large Group Project under Grant Number RGP2/337/46. The research team also thanks the Deanship of Graduate Studies and Scientific Research at Najran University for supporting the research project through the Nama'a program (project code NU/GP/SERC/13/352-4).
Abstract: Edge computing (EC) combined with the Internet of Things (IoT) provides a scalable and efficient solution for smart homes, but the rapid proliferation of IoT devices poses real-time data processing and security challenges. EC has become a transformative paradigm for addressing these challenges, particularly in intrusion detection and anomaly mitigation. The widespread connectivity of IoT edge networks has exposed them to various security threats, necessitating robust strategies to detect malicious activities. This research presents a privacy-preserving federated anomaly detection framework that combines Bayesian game theory (BGT) with double deep Q-learning (DDQL). The framework uses BGT to model attacker-defender interactions for dynamic adaptation to threat levels and resource availability, capturing the strategic interplay between attackers and defenders under uncertainty. DDQL is incorporated to optimize decision-making and to learn optimal defense policies at the edge. Federated learning (FL) enables decentralized anomaly detection without sharing sensitive data between devices. Data were collected from various sensors in a real-time EC-IoT network to identify irregularities caused by different attacks. The results reveal that the proposed model achieves a detection accuracy of up to 98% while maintaining low resource consumption. This study demonstrates the synergy between game theory and FL for strengthening anomaly detection in EC-IoT networks.
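The key DDQL update decouples action selection (online network) from action evaluation (target network), which is what reduces the overestimation bias of plain deep Q-learning. A sketch with tabular Q-values standing in for the two networks; the state and actions are hypothetical:

```python
def double_q_target(r, s_next, q_online, q_target, gamma=0.9, done=False):
    """Double DQN target: the online net chooses the next action,
    but the target net evaluates it."""
    if done:
        return r
    actions = q_online[s_next]
    a_star = max(actions, key=actions.get)       # argmax under the online net
    return r + gamma * q_target[s_next][a_star]  # value under the target net

# Hypothetical edge-defense state with actions 'block' and 'monitor':
q_online = {"s1": {"block": 1.0, "monitor": 2.0}}
q_target = {"s1": {"block": 5.0, "monitor": 0.5}}
print(double_q_target(1.0, "s1", q_online, q_target))  # 1.0 + 0.9*0.5 = 1.45
```

A single-network target would have used the target net's own maximum (5.0, giving 5.5), illustrating the overestimation that the double estimator avoids.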