Rock fragmentation is an important indicator for assessing the quality of blasting operations. However, accurate prediction of rock fragmentation after blasting is challenging due to the complicated blasting parameters and rock properties. For this reason, four hybrid machine learning models optimized by the Bayesian optimization algorithm (BOA), including random forest, adaptive boosting, gradient boosting, and extremely randomized trees, were developed in this study. A total of 102 data sets with seven input parameters (spacing-to-burden ratio, hole depth-to-burden ratio, burden-to-hole diameter ratio, stemming length-to-burden ratio, powder factor, in situ block size, and elastic modulus) and one output parameter (rock fragment mean size, X_(50)) were adopted to train and validate the predictive models. The root mean square error (RMSE), the mean absolute error (MAE), and the coefficient of determination (R^(2)) were used as the evaluation metrics. The evaluation results demonstrated that the hybrid models showed performance superior to that of the standalone models. The hybrid model consisting of gradient boosting and BOA (GBoost-BOA) achieved the best prediction results among the hybrid models, with the highest R^(2) value of 0.96 and the smallest RMSE and MAE values of 0.03 and 0.02, respectively. Furthermore, a sensitivity analysis was carried out to study the effects of the input variables on rock fragmentation. In situ block size (XB), elastic modulus (E), and stemming length-to-burden ratio (T/B) were identified as the main influencing factors. The proposed hybrid model provided reliable prediction results and thus can be considered an alternative approach for rock fragment prediction in mining engineering.
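The core of the BOA loop described above — fit a surrogate, maximize an acquisition function, evaluate, repeat — can be sketched in a few lines. This is a generic illustration with a made-up one-dimensional objective standing in for a cross-validation score; it is not the authors' code, and the kernel, acquisition function, and evaluation budget are all assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical stand-in for a validation score as a function of one
# hyperparameter (e.g. learning rate); the real objective would train GBoost.
def objective(x):
    return -(x - 0.3) ** 2 + 0.96  # peak score 0.96 at x = 0.3

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 4).reshape(-1, 1)        # initial random evaluations
y = objective(X).ravel()
grid = np.linspace(0, 1, 200).reshape(-1, 1)   # candidate hyperparameter values

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                                  normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.max()
    sd = np.maximum(sd, 1e-9)
    z = (mu - best) / sd
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)]                        # most promising point
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

x_best = X[np.argmax(y), 0]
print(round(x_best, 2))  # converges near the optimum at 0.3
```

In the actual study the objective would train one of the four ensemble models and return its validation R^(2), and the search space would cover several hyperparameters at once.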
This study investigates photonuclear reaction (γ,n) cross-sections using Bayesian neural network (BNN) analysis. After determining the optimal network architecture, which features two hidden layers, each with 50 hidden nodes, training was conducted for 30,000 iterations to ensure comprehensive data capture. By analyzing the distribution of the absolute errors, which are positively correlated with the cross-section, and of the relative errors, which are unrelated to the cross-section, for the isotope ^(159)Tb, we confirmed that the network effectively captured the data features without overfitting. Comparison with the TENDL-2021 database demonstrated the BNN's reliability in fitting photonuclear cross-sections with lower average errors. The predictions for nuclei with single and double giant dipole resonance peaks in their cross-sections, the accurate determination of the photoneutron reaction threshold in the low-energy region, and the precise description of trends in the high-energy cross-sections further demonstrate the network's generalization ability on the validation set. This can be attributed to the consistency of the training data. By using consistent training sets from different laboratories, Bayesian neural networks can predict nearby unknown cross-sections based on existing laboratory data, thereby estimating the potential differences between other laboratories' existing data and their own measurement results. Experimental measurements of photonuclear reactions on the newly constructed SLEGS beamline will contribute to clarifying the differences in cross-sections within the existing data.
Lithology identification is of fundamental significance for geological research and has considerable engineering application value. This paper proposes a Bayesian-optimized lithology identification method based on machine learning of rock visible and near-infrared spectral data. First, the rock spectral data are preprocessed using Savitzky-Golay (SG) smoothing to remove noise; then, the preprocessed spectral data are reduced in dimension using Principal Component Analysis (PCA) to remove redundancy, retain the effective discriminative information, and obtain the rock spectral features; finally, a lithology identification model is established on these features, its hyperparameters are optimized with the Bayesian optimization (BO) algorithm to prevent the hyperparameter combination from falling into a local optimum, and the predicted rock type is output. In addition, this paper conducts a comparative analysis of models based on Artificial Neural Network (ANN)/Random Forest (RF), dimensionality reduction/full band, and optimization algorithms, using the confusion matrix, accuracy, Precision (P), Recall (R), and F_(1) value (F_(1)) as the evaluation indexes of model accuracy. The results indicate that the BO-ANN model after dimensionality reduction achieves an accuracy of up to 99.80%, with values of up to 99.79% on the other evaluation indexes. Compared with the BO-RF model, it has higher identification accuracy and better stability for each rock type. The experiments and reliability analysis show that the proposed method has good robustness and generalization performance, which is of great significance for fast and accurate lithology identification at tunnel sites.
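The SG-then-PCA preprocessing chain described above maps directly onto standard library calls. The sketch below runs it on a synthetic batch of noisy spectra; the wavelength range, window length, polynomial order, and component count are placeholder choices, not the values tuned in the paper.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

# Synthetic stand-in for 60 rock spectra over 300 bands (values are made up).
rng = np.random.default_rng(0)
bands = np.linspace(400, 2500, 300)              # hypothetical wavelengths (nm)
spectra = np.sin(bands / 300)[None, :] + rng.normal(0, 0.05, (60, 300))

# Step 1: Savitzky-Golay smoothing to denoise each spectrum.
smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)

# Step 2: PCA to compress the bands into a few discriminative features.
pca = PCA(n_components=5)
features = pca.fit_transform(smoothed)           # rock spectral features

print(features.shape)  # (60, 5)
```

The resulting feature matrix would then feed the BO-tuned ANN or RF classifier.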
To quantify the seismic resilience of buildings, a method for evaluating functional loss from the component level to the overall building is proposed, and the dual-parameter seismic resilience assessment method based on postearthquake loss and recovery time is improved. A three-level function tree model is established, which can consider the dynamic changes in the weight coefficients of different categories of components relative to their functional losses. Bayesian networks are utilized to quantify the impact of weather conditions, construction technology levels, and worker skill levels on component repair time. A method for determining the real-time functional recovery curve of buildings based on the component repair process is proposed. Taking a three-story teaching building as an example, the seismic resilience indices under basic earthquakes and rare earthquakes are calculated. The results show that the seismic resilience grade of the teaching building is comprehensively judged as Grade III, and its resilience grade is more significantly affected by postearthquake loss. The proposed method can be used to predict the seismic resilience of buildings prior to earthquakes, identify weak components within buildings, and provide guidance for taking measures to enhance seismic resilience.
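The role of the Bayesian network in the repair-time step can be illustrated with a minimal discrete example: weather and worker skill are parent nodes of a binary "long repair" outcome. The structure follows the factors named above, but every probability in the tables is hypothetical.

```python
import itertools

# Hypothetical CPTs: weather (W) and worker skill (S) are parents of the
# binary event T = "repair takes long". None of these numbers are from the paper.
P_W = {'good': 0.7, 'bad': 0.3}
P_S = {'high': 0.6, 'low': 0.4}
P_T_long = {('good', 'high'): 0.1, ('good', 'low'): 0.3,
            ('bad', 'high'): 0.4, ('bad', 'low'): 0.7}

# Marginal probability of a long repair: sum out both parents.
p_long = sum(P_W[w] * P_S[s] * P_T_long[(w, s)]
             for w, s in itertools.product(P_W, P_S))

# Diagnostic query by Bayes' rule: if the repair ran long, how likely was bad weather?
p_bad_and_long = sum(P_W['bad'] * P_S[s] * P_T_long[('bad', s)] for s in P_S)
p_bad_given_long = p_bad_and_long / p_long

print(round(p_long, 3), round(p_bad_given_long, 3))  # 0.282 0.553
```

In the full method each component category would have its own network, and the repair-time distributions would drive the recovery curve.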
In this paper, an advanced satellite navigation filter design, referred to as the Variational Bayesian Maximum Correntropy Extended Kalman Filter (VBMCEKF), is introduced to enhance robustness and adaptability in scenarios with non-Gaussian noise and heavy-tailed outliers. The proposed design modifies the extended Kalman filter (EKF) for the global navigation satellite system (GNSS), integrating the maximum correntropy criterion (MCC) and the variational Bayesian (VB) method. This adaptive algorithm effectively reduces non-line-of-sight (NLOS) reception contamination and improves estimation accuracy, particularly for time-varying GNSS measurements. Experimental results show that the proposed method significantly outperforms conventional approaches in estimation accuracy under heavy-tailed outliers and non-Gaussian noise. By combining the MCC with VB approximation for real-time noise covariance estimation using fixed-point iteration, the VBMCEKF achieves superior filtering performance in challenging GNSS conditions. The method's adaptability and precision make it well suited to improving satellite navigation performance in stochastic environments.
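A one-dimensional sketch of the MCC idea is below: the measurement residual is passed through a Gaussian kernel, and the resulting weight inflates the measurement covariance before the Kalman gain is computed, so heavy-tailed outliers are largely ignored. This isolates only the correntropy update with fixed-point iteration; the full VBMCEKF additionally estimates the noise covariance with variational Bayes, and all constants here are illustrative.

```python
import numpy as np

# One-dimensional maximum-correntropy measurement update with fixed-point
# iteration: unlikely residuals get a small kernel weight, which inflates R.
def mcc_update(x_prior, P, z, R, sigma=2.0, iters=20):
    x = x_prior
    for _ in range(iters):
        e = (z - x) / np.sqrt(R)               # normalized residual
        w = np.exp(-e ** 2 / (2 * sigma ** 2))  # correntropy weight in (0, 1]
        K = P / (P + R / max(w, 1e-12))         # gain with inflated covariance
        x = x_prior + K * (z - x_prior)
    return x, (1 - K) * P

x0, P, R = 0.0, 1.0, 1.0
x_nom, _ = mcc_update(x0, P, z=1.0, R=R)    # nominal measurement
x_out, _ = mcc_update(x0, P, z=50.0, R=R)   # heavy-tailed outlier

print(round(x_nom, 2), round(x_out, 2))  # the outlier barely moves the estimate
```

A standard Kalman update with these numbers would move the estimate to 25 for the outlier; the kernel weight keeps it near the prior instead.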
Cognitive bias, stemming from electronic measurement error and variability in human perception, exists in cognitive electronic warfare and affects the outcomes of conflicts. In this paper, the dynamic game approach is employed to develop a model for cognitive bias induced by incomplete information and measurement errors in cognitive radar countermeasures. The payoffs for both parties are calculated using the radar's anti-jamming strategy matrix A and the jammer's jamming strategy matrix B. Under perfect Bayesian equilibrium, a dynamic radar countermeasure model is established, and the impact of cognitive bias is analyzed. Drawing inspiration from the cognitive bias analysis methods used in stock market trading, a cognitive bias model for cognitive radar countermeasures is introduced, and its correctness is mathematically proved. A gaming scenario involving the AN/SPY-1 radar and a smart jammer is set up to analyze the influence of cognitive bias on game outcomes. Simulation results validate the effectiveness of the proposed method.
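The payoff bookkeeping with strategy matrices can be shown concretely. In the sketch below, A is a made-up zero-sum payoff matrix for the radar (so the jammer's payoff is -A, standing in for matrix B), and p and q are mixed strategies; the expected payoff and each side's best pure response fall out of matrix products. Nothing here reproduces the paper's actual matrices.

```python
import numpy as np

# Illustrative radar-vs-jammer matrix game. A[i, j] is the radar's payoff
# when anti-jamming strategy i meets jamming strategy j (values are made up).
A = np.array([[3.0, -1.0,  2.0],
              [1.0,  2.0, -2.0],
              [0.0,  1.0,  1.0]])

p = np.array([0.5, 0.3, 0.2])   # radar's mixed strategy (beliefs)
q = np.array([0.2, 0.5, 0.3])   # jammer's mixed strategy

radar_payoff = p @ A @ q             # expected payoff of the strategy pair
radar_best = int(np.argmax(A @ q))   # radar's best pure response to q
jammer_best = int(np.argmin(p @ A))  # jammer's best pure response to p (zero-sum)

print(round(radar_payoff, 3), radar_best, jammer_best)  # 0.69 2 1
```

Cognitive bias enters such a model by distorting the beliefs p and q away from the true strategies, shifting both the expected payoffs and the best responses.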
Engineering tests can yield inaccurate data due to instrument errors, human factors, and environmental interference, introducing uncertainty into numerical model updating. This study employs the probability-box (p-box) method for representing observational uncertainty and develops a two-step approximate Bayesian computation (ABC) framework using time-series data. Within the ABC framework, the Euclidean and Bhattacharyya distances are employed as uncertainty quantification metrics to define the approximate likelihood functions in the first and second steps, respectively. A novel variational Bayesian Monte Carlo method is introduced to apply the ABC framework efficiently amidst observational uncertainty, resulting in rapid convergence and accurate parameter estimation with minimal iterations. The efficacy of the proposed updating strategy is validated by its application to a shear frame model excited by seismic waves and to an aviation pump force sensor for thermal output analysis. The results affirm the efficiency, robustness, and practical applicability of the proposed method.
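The first, Euclidean-distance step of an ABC framework reduces to a rejection sampler, which the sketch below illustrates on a toy one-parameter model: draw from the prior, simulate a time series, and keep draws whose simulated response is close to the observation. The sinusoidal "model", prior bounds, and acceptance quantile are all invented; the paper's variational Bayesian Monte Carlo scheme replaces this brute-force rejection with a far more sample-efficient search.

```python
import numpy as np

# Minimal ABC rejection sampler for one stiffness-like parameter theta.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
def simulate(theta):
    return np.sin(2 * np.pi * theta * t)   # stand-in for the numerical model

theta_true = 1.5
observed = simulate(theta_true) + rng.normal(0, 0.05, t.size)  # noisy test data

draws = rng.uniform(0.5, 3.0, 20000)       # samples from a uniform prior
dists = np.array([np.linalg.norm(simulate(th) - observed) for th in draws])
eps = np.quantile(dists, 0.01)             # keep the closest 1% of draws
posterior = draws[dists <= eps]

print(round(float(posterior.mean()), 2))  # close to theta_true = 1.5
```

The second step would rerun the comparison with the Bhattacharyya distance on distributional features of the time series to sharpen the posterior under observational uncertainty.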
The effective and timely diagnosis and treatment of ocular diseases are key to the rapid recovery of patients. Today, the mass disease that needs attention in this context is cataract. Although deep learning has significantly advanced the analysis of ocular disease images, a probabilistic model is needed to generate the distributions of potential outcomes and thus support decisions that involve uncertainty quantification. Therefore, this study implements a Bayesian Convolutional Neural Network (BCNN) model for predicting cataracts by assigning probability values to the predictions. Both a conventional convolutional neural network (CNN) and the BCNN model are prepared; the proposed BCNN model is CNN-based, with reparameterization applied in the first and last layers of the CNN model. The models are then trained on a dataset of cataract images filtered from the ocular disease fundus images on Kaggle. The deep CNN model has an accuracy of 95%, while the BCNN model has an accuracy of 93.75% and additionally provides uncertainty estimates for cataract and normal eye conditions. Compared with other methods, the proposed work proves to be a promising solution for cataract prediction with uncertainty estimation.
Integrating Bayesian Optimization with Volume of Fluid (VOF) simulations, this work aims to optimize the operational conditions and geometric parameters of T-junction microchannels for target droplet sizes. Bayesian Optimization utilizes Gaussian Process (GP) as its core model and employs an adaptive search strategy to efficiently explore and identify optimal combinations of operational parameters within a limited parameter space, thereby enabling rapid optimization of the required parameters to achieve the target droplet size. Traditional methods typically rely on manually selecting a series of operational parameters and conducting multiple simulations to gradually approach the target droplet size. This process is time-consuming and prone to getting trapped in local optima. In contrast, Bayesian Optimization adaptively adjusts its search strategy, significantly reducing computational costs and effectively exploring global optima, thus greatly improving optimization efficiency. Additionally, the study investigates the impact of rectangular rib structures within the T-junction microchannel on droplet generation, revealing how the channel geometry influences droplet formation and size. After determining the target droplet size, we further applied Bayesian Optimization to refine the rib geometry. The integration of Bayesian Optimization with computational fluid dynamics (CFD) offers a promising tool and provides new insights into the optimal design of microfluidic devices.
A Bayesian network reconstruction method based on norm minimization is proposed to address the sparsity and iterative divergence issues in network reconstruction caused by noise and missing values. This method achieves precise adjustment of the network structure by constructing a preliminary random network model and introducing small-world network characteristics, and it combines L1 norm minimization regularization to control model complexity and optimize the inference of variable dependencies. In the game network reconstruction experiment, when the success rate of the L1 norm minimization model in reconstructing existing connections reaches 100%, the minimum data required is about 40%, whereas a sparse Bayesian learning network requires about 45%. In terms of operational efficiency, the running time of the L1 norm minimization model stays at roughly 1.0 s, while that of the sparse Bayesian network grows significantly with data volume, reaching a maximum of 13.2 s. Meanwhile, at a signal-to-noise ratio of 10 dB, the L1 model achieves a 100% success rate in reconstructing existing connections, while the sparse Bayesian network reaches at most a 90% success rate in reconstructing non-existent connections. In the analysis of an actual case, the maximum lift-and-drop trajectory of the research method is 0.08 m, and the mean square error is 5.74 cm^(2). The results indicate that this norm minimization-based method performs well in data efficiency and model stability, effectively reducing the impact of outliers on the reconstruction results so as to reflect the actual situation more accurately.
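The role of the L1 penalty can be demonstrated on a toy version of the reconstruction task: recovering which of a node's possible incoming links are real from noisy linear observations. scikit-learn's Lasso stands in for the paper's L1 norm minimization; the network size, noise level, and regularization strength are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy reconstruction of one node's sparse incoming connections: observed node
# states X and the target node's response y follow y = X @ a + noise, where
# a is sparse; L1-regularized regression recovers which links exist.
rng = np.random.default_rng(0)
n_samples, n_nodes = 80, 20
a_true = np.zeros(n_nodes)
a_true[[2, 7, 11]] = [1.0, -0.8, 0.6]          # three real connections

X = rng.normal(size=(n_samples, n_nodes))
y = X @ a_true + rng.normal(0, 0.05, n_samples)

a_hat = Lasso(alpha=0.02).fit(X, y).coef_       # L1 norm minimization
links = np.flatnonzero(np.abs(a_hat) > 0.1)     # threshold small coefficients

print(links)  # recovers the true link set {2, 7, 11}
```

Repeating this per node reconstructs the whole network; the L1 term is what keeps spurious, noise-driven links out of the result.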
Objective To investigate the spatiotemporal patterns and socioeconomic factors influencing the incidence of tuberculosis (TB) in Guangdong Province between 2010 and 2019. Methods Spatial and temporal variations in TB incidence were mapped using heat maps and hierarchical clustering. Socioenvironmental influencing factors were evaluated using a Bayesian spatiotemporal conditional autoregressive (ST-CAR) model. Results The annual incidence of TB in Guangdong decreased from 91.85/100,000 in 2010 to 53.06/100,000 in 2019. Spatial hotspots were found in northeastern Guangdong, particularly in Heyuan, Shanwei, and Shantou, while Shenzhen, Dongguan, and Foshan had the lowest rates in the Pearl River Delta. The ST-CAR model showed that TB risk was lower with higher per capita Gross Domestic Product (GDP) [Relative Risk (RR), 0.91; 95% Confidence Interval (CI): 0.86–0.98], a higher ratio of licensed physicians (RR, 0.94; 95% CI: 0.90–0.98), and higher per capita public expenditure (RR, 0.94; 95% CI: 0.90–0.97), with a marginal effect of population density (RR, 0.86; 95% CI: 0.86–1.00). Conclusion The incidence of TB in Guangdong varies spatially and temporally. Areas with poor economic conditions and insufficient healthcare resources are at an increased risk of TB infection. Strategies focusing on equitable health resource distribution and economic development are key to TB control.
Objective: Esophageal cancer contributes substantially to the cancer burden in Jiangsu Province, East China. This study aimed to report the esophageal cancer incidence trend in 2009-2019 and its projection to 2030. Methods: The burden of esophageal cancer in Jiangsu in 2019 was estimated using data from 54 cancer registries selected from the Jiangsu Cancer Registry. Incident cases from 16 cancer registries were used for the temporal trend from 2009 to 2019. The burden of esophageal cancer by 2030 was projected using the Bayesian age-period-cohort (BAPC) model. Results: About 24,886 new cases of esophageal cancer (17,233 males and 7,653 females) occurred in Jiangsu in 2019. Rural regions of Jiangsu had the highest incidence rate. The age-standardized incidence rate (ASIR) of esophageal cancer in Jiangsu decreased from 27.72 per 100,000 in 2009 to 14.18 per 100,000 in 2019. The BAPC model showed that the ASIR would decline from 13.01 per 100,000 in 2020 to 4.88 per 100,000 in 2030. Conclusions: According to the data, esophageal cancer incidence rates are predicted to decline until 2030, yet the disease burden remains significant in Jiangsu. The existing approaches to prevention and control are effective and need to be maintained.
Multi-agent systems often require good interoperability in the process of completing their assigned tasks. This paper first models the static structure and dynamic behavior of multi-agent systems based on a layered weighted scale-free community network and the susceptible-infected-recovered (SIR) model. To address the difficulty of describing changes in the structure and collaboration mode of the system under external factors, a two-dimensional Monte Carlo method and an improved dynamic Bayesian network are used to simulate the impact of external environmental factors on multi-agent systems. A collaborative information flow path optimization algorithm for agents under environmental factors is designed based on the Dijkstra algorithm. A method for evaluating system interoperability is designed based on simulation experiments, providing a reference for construction planning and for optimizing the organizational application of the system. Finally, the feasibility of the method is verified through case studies.
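The SIR component of the dynamic behavior model can be sketched as a discrete-time iteration. Here the infection and recovery rates are placeholder values, and the population is treated as well mixed rather than embedded in the scale-free community network used in the paper.

```python
import numpy as np

# Minimal discrete-time SIR iteration; beta and gamma are illustrative rates.
def sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, steps=200):
    s, i, r = s0, i0, 0.0
    history = []
    for _ in range(steps):
        new_inf = beta * s * i        # new "infections" (e.g. degraded agents)
        new_rec = gamma * i           # "recoveries" back to normal operation
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return np.array(history)

h = sir()
peak_t = int(np.argmax(h[:, 1]))      # step at which the disturbance peaks
print(peak_t, round(h[-1].sum(), 6))  # fractions always sum to 1.0
```

On the layered network the same update would run per node, with beta weighted by the link structure, which is where the community topology shapes how disturbances spread.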
Fire can cause significant damage to the environment, economy, and human lives. If fire can be detected early, the damage can be minimized. Advances in technology, particularly in computer vision powered by deep learning, have enabled automated fire detection in images and videos. Several deep learning models have been developed for object detection, including applications in fire and smoke detection. This study focuses on optimizing the training hyperparameters of YOLOv8 and YOLOv10 models using Bayesian Tuning (BT). Experimental results on the large-scale D-Fire dataset demonstrate that this approach enhances detection performance. Specifically, the proposed approach improves the mean average precision at an Intersection over Union (IoU) threshold of 0.5 (mAP50) of the YOLOv8s, YOLOv10s, YOLOv8l, and YOLOv10l models by 0.26, 0.21, 0.84, and 0.63, respectively, compared to models trained with the default hyperparameters. The performance gains are more pronounced in the larger models, YOLOv8l and YOLOv10l, than in their smaller counterparts, YOLOv8s and YOLOv10s. Furthermore, YOLOv8 models consistently outperform YOLOv10, with mAP50 improvements of 0.26 for YOLOv8s over YOLOv10s and 0.65 for YOLOv8l over YOLOv10l when trained with BT. These results establish YOLOv8 as the preferred model for fire detection applications where detection performance is prioritized.
Disaster mitigation necessitates scientific and accurate aftershock forecasting during the critical 2 h after an earthquake. However, this action faces immense challenges due to the lack of early postearthquake data and the unreliability of forecasts. To obtain foundational data for sequence parameters of the land-sea adjacent zone and establish a reliable and operational aftershock forecasting framework, we combined the initial sequence parameters extracted from envelope functions and incorporated small-earthquake information into our model to construct a Bayesian algorithm for the early postearthquake stage. We performed parameter fitting, early postearthquake aftershock occurrence rate forecasting, and effectiveness evaluation for 36 earthquake sequences with M ≥ 4.0 in the Bohai Rim region since 2010. According to the results, during the early stage after the mainshock, earthquake sequence parameters exhibited relatively drastic fluctuations with significant errors. The integration of prior information can mitigate the intensity of these changes and reduce errors. The initial and stable sequence parameters generally display advantageous distribution characteristics, with each parameter's distribution being relatively concentrated and showing good symmetry and remarkable consistency. The sequence parameter p-values were relatively small, which indicates the comparatively slow attenuation of significant earthquake events in the Bohai Rim region. A certain positive correlation was observed between the earthquake sequence parameters b and p. However, the sequence parameters are unrelated to the mainshock magnitude, which implies that their statistical characteristics and trends are universal. The Bayesian algorithm revealed a good forecasting capability for aftershocks in the early postearthquake period (2 h) in the Bohai Rim region, with an overall forecasting efficacy rate of 76.39%. The proportion of "too low" failures exceeded that of "too high" failures, and the number of forecasting failures for the next three days was greater than that for the next day.
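The aftershock occurrence rate that such a framework forecasts is conventionally described by the modified Omori law, n(t) = K/(t + c)^p, where a small p means slow decay, matching the observation above. The sketch below evaluates the rate and integrates it over the critical first 2 h; K, c, and p are illustrative values, not the fitted Bohai Rim parameters.

```python
import numpy as np

# Modified Omori law for the aftershock occurrence rate (t in days).
def omori_rate(t, K=10.0, c=0.05, p=1.1):
    return K / (t + c) ** p          # expected events per day after the mainshock

t = np.linspace(0.0, 3.0, 301)
rate = omori_rate(t)                 # decaying occurrence rate over three days

# Expected number of aftershocks in the first 2 h (2/24 day), midpoint rule.
edges = np.linspace(0.0, 2 / 24, 501)
mids = (edges[:-1] + edges[1:]) / 2
n_2h = float(np.sum(omori_rate(mids)) * (edges[1] - edges[0]))
print(round(n_2h, 1))  # about 12.6 expected events for these parameters
```

In the Bayesian scheme, K, c, and p start from prior (regional) values and are updated as early small-earthquake data arrive, which is what stabilizes the 2 h forecasts.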
Funding (rock fragmentation prediction study): National Natural Science Foundation of China, Grant/Award Number: 52374153.
Funding (photonuclear cross-section study): supported by the National Key Research and Development Program (No. 2022YFA1602404), the National Natural Science Foundation of China (Nos. 12388102, 12275338, 12005280), and the Key Laboratory of Nuclear Data Foundation (No. JCKY2022201C152).
Funding (lithology identification study): supported by the National Natural Science Foundation of China (Grant Nos. 52379103 and 52279103) and the Natural Science Foundation of Shandong Province (Grant No. ZR2023YQ049).
Funding (seismic resilience study): the National Key Research and Development Program of China (No. 2023YFC3805003).
Funding (satellite navigation filter study): supported by the National Science and Technology Council, Taiwan, under grants NSTC 111-2221-E-019-047 and NSTC 112-2221-E-019-030.
文摘In this paper,an advanced satellite navigation filter design,referred to as the Variational Bayesian Maximum Correntropy Extended Kalman Filter(VBMCEKF),is introduced to enhance robustness and adaptability in scenarios with non-Gaussian noise and heavy-tailed outliers.The proposed design modifies the extended Kalman filter(EKF)for the global navigation satellite system(GNSS),integrating the maximum correntropy criterion(MCC)and the variational Bayesian(VB)method.This adaptive algorithm effectively reduces non-line-of-sight(NLOS)reception contamination and improves estimation accuracy,particularly in time-varying GNSS measurements.Experimental results show that the proposed method significantly outperforms conventional approaches in estimation accuracy under heavy-tailed outliers and non-Gaussian noise.By combining MCC with VB approximation for real-time noise covariance estimation using fixed-point iteration,the VBMCEKF achieves superior filtering performance in challenging GNSS conditions.The method’s adaptability and precision make it ideal for improving satellite navigation performance in stochastic environments.
Abstract: Cognitive bias, stemming from electronic measurement error and variability in human perception, exists in cognitive electronic warfare and affects the outcomes of conflicts. In this paper, a dynamic game approach is employed to develop a model for the cognitive bias induced by incomplete information and measurement errors in cognitive radar countermeasures. The payoffs for both parties are calculated using the radar's anti-jamming strategy matrix A and the jammer's jamming strategy matrix B. Under perfect Bayesian equilibrium, a dynamic radar countermeasure model is established, and the impact of cognitive bias is analyzed. Drawing inspiration from the cognitive bias analysis methods used in stock market trading, a cognitive bias model for cognitive radar countermeasures is introduced, and its correctness is proven mathematically. A gaming scenario involving the AN/SPY-1 radar and a smart jammer is set up to analyze the influence of cognitive bias on game outcomes. Simulation results validate the effectiveness of the proposed method.
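With mixed strategies x (radar) and y (jammer), the radar's expected payoff under a strategy matrix A is x^T A y, and a biased perception of A shifts that payoff. A minimal sketch with an invented 2x2 matrix and an invented bias perturbation (the paper's actual matrices A and B are not reproduced here):

```python
import numpy as np

# invented 2x2 radar payoff matrix: rows = radar anti-jamming strategies,
# columns = jammer jamming strategies
A = np.array([[3.0, -1.0],
              [0.0,  2.0]])
x = np.array([0.6, 0.4])   # radar's mixed strategy (probabilities)
y = np.array([0.5, 0.5])   # jammer's mixed strategy

radar_payoff = x @ A @ y   # expected payoff x^T A y

# cognitive bias modeled as a perturbation of the perceived payoff matrix
A_biased = A + np.array([[0.5, 0.0],
                         [0.0, -0.5]])
bias_effect = x @ A_biased @ y - radar_payoff  # payoff shift due to bias
print(radar_payoff, bias_effect)
```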
Funding: Supported by the National Natural Science Foundation of China (Grant No. U23B20105).
Abstract: Engineering tests can yield inaccurate data due to instrument errors, human factors, and environmental interference, introducing uncertainty into numerical model updating. This study employs the probability-box (p-box) method to represent observational uncertainty and develops a two-step approximate Bayesian computation (ABC) framework using time-series data. Within the ABC framework, the Euclidean and Bhattacharyya distances are employed as uncertainty quantification metrics to define the approximate likelihood functions in the first and second steps, respectively. A novel variational Bayesian Monte Carlo method is introduced to apply the ABC framework efficiently amid observational uncertainty, yielding rapid convergence and accurate parameter estimation with few iterations. The efficacy of the proposed updating strategy is validated by application to a shear frame model excited by seismic waves and to an aviation pump force sensor for thermal output analysis. The results affirm the efficiency, robustness, and practical applicability of the proposed method.
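The core ABC idea can be sketched with plain rejection sampling: keep prior parameter samples whose simulated time series lies close to the observation under a Euclidean distance, as in the framework's first step. A toy one-parameter forward model stands in for the numerical model here; the paper's p-box representation and variational Bayesian Monte Carlo machinery are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
theta_true = 2.0
# "observed" time series: model output plus measurement noise
obs = np.sin(theta_true * np.pi * t) + 0.05 * rng.normal(size=t.size)

def simulate(theta):
    """Forward model standing in for the numerical model being updated."""
    return np.sin(theta * np.pi * t)

prior = rng.uniform(0.5, 4.0, 20000)                 # prior samples of theta
dists = np.array([np.linalg.norm(simulate(th) - obs) for th in prior])
accepted = prior[dists < np.quantile(dists, 0.01)]   # keep the closest 1%
print(round(float(accepted.mean()), 2))              # approximate posterior mean
```

The accepted samples concentrate around the true parameter; the two-step scheme in the paper replaces this crude acceptance rule with distance-based approximate likelihoods.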
Funding: Supported through the Small Research Group Project, Saudi Arabia, under Grant Number RGP.1/316/45.
Abstract: Effective and timely diagnosis and treatment of ocular diseases are key to the rapid recovery of patients. Today, a widespread condition that demands attention in this context is cataract. Although deep learning has significantly advanced the analysis of ocular disease images, there is a need for a probabilistic model that generates distributions over potential outcomes and thus supports decisions involving uncertainty quantification. This study therefore implements a Bayesian convolutional neural network (BCNN) model for predicting cataracts by assigning probability values to the predictions. Both a convolutional neural network (CNN) and a BCNN model are prepared. The proposed BCNN model is CNN-based, with reparameterization applied in the first and last layers of the CNN. The models are then trained on a dataset of cataract images filtered from the ocular disease fundus images on Kaggle. The deep CNN model achieves an accuracy of 95%, while the BCNN model achieves an accuracy of 93.75% together with uncertainty estimates for cataract and normal eye conditions. Compared with other methods, the proposed work shows promise for cataract prediction with uncertainty estimation.
Funding: Supported by the National Key Research and Development Program of China (2023YFC3905400), the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA0490102), and the National Natural Science Foundation of China (22178354, 22421003, 22408374).
Abstract: Integrating Bayesian Optimization with Volume of Fluid (VOF) simulations, this work aims to optimize the operational conditions and geometric parameters of T-junction microchannels for target droplet sizes. Bayesian Optimization uses a Gaussian Process (GP) as its core model and employs an adaptive search strategy to efficiently explore and identify optimal combinations of operational parameters within a limited parameter space, thereby enabling rapid optimization toward the target droplet size. Traditional methods typically rely on manually selecting a series of operational parameters and conducting multiple simulations to gradually approach the target droplet size; this process is time-consuming and prone to getting trapped in local optima. In contrast, Bayesian Optimization adaptively adjusts its search strategy, significantly reducing computational costs and effectively searching for global optima, thus greatly improving optimization efficiency. Additionally, the study investigates the impact of rectangular rib structures within the T-junction microchannel on droplet generation, revealing how the channel geometry influences droplet formation and size. After determining the target droplet size, Bayesian Optimization was further applied to refine the rib geometry. The integration of Bayesian Optimization with computational fluid dynamics (CFD) offers a promising tool and provides new insights into the optimal design of microfluidic devices.
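The GP-plus-acquisition loop described above can be sketched in a few dozen lines. A cheap analytic function stands in for the VOF/CFD droplet-size objective (one real evaluation would be one CFD run), the GP is a hand-rolled squared-exponential model with fixed hyperparameters, and the acquisition is expected improvement; all settings below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def objective(x):
    # stand-in for one VOF/CFD run: |simulated droplet size - target|^2
    return (x - 0.4) ** 2

def rbf(a, b, ls=0.1):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    """GP posterior mean and std at Xs given observations (X, y)."""
    K = rbf(X, X) + jitter * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(X, Xs)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)  # prior variance is 1 on the diagonal
    return mu, np.sqrt(np.maximum(var, 1e-12))

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, 3)           # initial design points
y = objective(X)
grid = np.linspace(0, 1, 201)      # candidate parameter values

for _ in range(12):                # Bayesian Optimization iterations
    mu, sd = gp_posterior(X, y, grid)
    best = y.min()
    z = (best - mu) / sd
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    x_next = grid[np.argmax(ei)]   # most promising next "simulation"
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

x_best = float(X[np.argmin(y)])
print(round(x_best, 2))            # should land close to the optimum 0.4
```

Each iteration trades one cheap acquisition maximization for one expensive simulation, which is what makes the approach attractive against parameter sweeps.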
Funding: Supported by the Scientific and Technological Developing Scheme of Jilin Province, China (No. 20240101371JC) and the National Natural Science Foundation of China (No. 62107008).
Abstract: A Bayesian network reconstruction method based on norm minimization is proposed to address the sparsity and iterative divergence issues in network reconstruction caused by noise and missing values. This method achieves precise adjustment of the network structure by constructing a preliminary random network model and introducing small-world network characteristics, and it combines L1 norm minimization regularization to control model complexity and optimize the inference of variable dependencies. In the game network reconstruction experiment, the L1 norm minimization model requires a minimum of about 40% of the data to reach a 100% success rate in reconstructing existing connections, whereas the sparse Bayesian learning network requires about 45%. In terms of operational efficiency, the running time of L1 norm minimization is basically maintained at 1.0 s while its connection reconstruction success rate increases significantly with data volume; the maximum running time observed is 13.2 s. At a signal-to-noise ratio of 10 dB, the L1 model achieves a 100% success rate in reconstructing existing connections, while the sparse Bayesian network achieves at most a 90% success rate in reconstructing non-existent connections. In the analysis of an actual case, the maximum lift and drop trajectory obtained by the proposed method is 0.08 m, and the mean square error is 5.74 cm^(2). The results indicate that this norm-minimization-based method performs well in data efficiency and model stability, effectively reducing the impact of outliers on the reconstruction results so that they more accurately reflect the actual situation.
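The L1-regularized reconstruction step can be illustrated with ISTA (iterative soft-thresholding), recovering one node's sparse connection weights a from noisy linear observations y = X a + e. The synthetic network, noise level, and regularization weight below are invented for illustration; the paper's exact regularization scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_nodes = 80, 20
a_true = np.zeros(n_nodes)
a_true[[2, 7, 11]] = [1.0, -0.8, 0.5]           # planted sparse connections
X = rng.normal(size=(n_obs, n_nodes))           # observed node states
y = X @ a_true + 0.05 * rng.normal(size=n_obs)  # noisy linear observations

lam = 0.1
step = 1.0 / np.linalg.norm(X, 2) ** 2          # 1 / Lipschitz constant
a = np.zeros(n_nodes)
for _ in range(500):                            # ISTA iterations
    grad = X.T @ (X @ a - y)                    # gradient of 0.5*||Xa - y||^2
    z = a - step * grad
    a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

support = sorted(np.flatnonzero(np.abs(a) > 0.1).tolist())
print(support)  # indices of the recovered connections
```

The soft-thresholding step is exactly where the L1 penalty enforces sparsity: small coefficients are driven to zero, which is what suppresses spurious links caused by noise.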
Funding: Supported by the Guangdong Provincial Clinical Research Center for Tuberculosis (No. 2020B1111170014).
Abstract: Objective To investigate the spatiotemporal patterns and socioeconomic factors influencing the incidence of tuberculosis (TB) in Guangdong Province between 2010 and 2019. Methods Spatial and temporal variations in TB incidence were mapped using heat maps and hierarchical clustering. Socioenvironmental influencing factors were evaluated using a Bayesian spatiotemporal conditional autoregressive (ST-CAR) model. Results The annual incidence of TB in Guangdong decreased from 91.85/100,000 in 2010 to 53.06/100,000 in 2019. Spatial hotspots were found in northeastern Guangdong, particularly in Heyuan, Shanwei, and Shantou, while Shenzhen, Dongguan, and Foshan had the lowest rates in the Pearl River Delta. The ST-CAR model showed that TB risk was lower with higher per capita gross domestic product (GDP) [relative risk (RR), 0.91; 95% confidence interval (CI): 0.86–0.98], a higher ratio of licensed physicians and assistant physicians (RR, 0.94; 95% CI: 0.90–0.98), and higher per capita public expenditure (RR, 0.94; 95% CI: 0.90–0.97), with a marginal effect of population density (RR, 0.86; 95% CI: 0.86–1.00). Conclusion The incidence of TB in Guangdong varies spatially and temporally. Areas with poor economic conditions and insufficient healthcare resources are at increased risk of TB infection. Strategies focusing on equitable health resource distribution and economic development are key to TB control.
Abstract: Objective: Esophageal cancer contributes substantially to the cancer burden in Jiangsu Province, East China. This study reports the esophageal cancer incidence trend in 2009–2019 and its projection to 2030. Methods: The burden of esophageal cancer in Jiangsu in 2019 was estimated using data from 54 cancer registries selected from the Jiangsu Cancer Registry. Incident cases from 16 cancer registries were used to estimate the temporal trend from 2009 to 2019. The burden of esophageal cancer by 2030 was projected using the Bayesian age-period-cohort (BAPC) model. Results: About 24,886 new cases of esophageal cancer (17,233 males and 7,653 females) occurred in Jiangsu in 2019. Rural regions of Jiangsu had the highest incidence rate. The age-standardized incidence rate (ASIR) of esophageal cancer in Jiangsu decreased from 27.72 per 100,000 in 2009 to 14.18 per 100,000 in 2019. The BAPC model projected that the ASIR would decline from 13.01 per 100,000 in 2020 to 4.88 per 100,000 in 2030. Conclusions: Esophageal cancer incidence rates are predicted to decline through 2030, yet the disease burden remains significant in Jiangsu. The existing approaches to prevention and control are effective and should be maintained.
Funding: Supported by the Key R&D Projects in Jiangsu Province (BE2021729) and the Key Primary Research Project of the Primary Strengthening Program (KYZYJKKCJC23001).
Abstract: Multi-agent systems often require good interoperability when completing their assigned tasks. This paper first models the static structure and dynamic behavior of multi-agent systems based on a layered weighted scale-free community network and the susceptible-infected-recovered (SIR) model. To address the difficulty of describing changes in the system's structure and collaboration mode under external factors, a two-dimensional Monte Carlo method and an improved dynamic Bayesian network are used to simulate the impact of external environmental factors on multi-agent systems. A collaborative information flow path optimization algorithm for agents under environmental factors is designed based on the Dijkstra algorithm. A method for evaluating system interoperability is designed based on simulation experiments, providing a reference for construction planning and for optimizing the organizational application of the system. Finally, the feasibility of the method is verified through case studies.
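The Dijkstra-based path step can be sketched directly: edge weights stand in for link costs degraded by environmental factors, and the algorithm returns the cheapest information-flow route between two agents. The five-agent graph and its costs below are illustrative placeholders.

```python
import heapq

def dijkstra(graph, src, dst):
    """Cheapest path by cumulative cost; graph maps node -> {neighbor: cost}."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst  # walk predecessors back to the source
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

# agents A..E; costs model communication degradation between agents
g = {"A": {"B": 1.0, "C": 4.0}, "B": {"C": 1.5, "D": 5.0},
     "C": {"D": 1.0}, "D": {"E": 0.5}, "E": {}}
path, cost = dijkstra(g, "A", "E")
print(path, cost)  # ['A', 'B', 'C', 'D', 'E'] 4.0
```

Re-running the search with weights resampled from the Monte Carlo environmental model would give the path distribution the paper's evaluation relies on.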
Funding: Supported by the MSIT (Ministry of Science and ICT), Republic of Korea, under the ITRC (Information Technology Research Center) Support Program (IITP-2024-RS-2022-00156354) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation), and by the Technology Development Program (RS-2023-00264489) funded by the Ministry of SMEs and Startups (MSS, Republic of Korea).
Abstract: Fire can cause significant damage to the environment, the economy, and human lives. If fire can be detected early, the damage can be minimized. Advances in technology, particularly in computer vision powered by deep learning, have enabled automated fire detection in images and videos. Several deep learning models have been developed for object detection, including applications in fire and smoke detection. This study focuses on optimizing the training hyperparameters of YOLOv8 and YOLOv10 models using Bayesian Tuning (BT). Experimental results on the large-scale D-Fire dataset demonstrate that this approach enhances detection performance. Specifically, the proposed approach improves the mean average precision at an Intersection over Union (IoU) threshold of 0.5 (mAP50) of the YOLOv8s, YOLOv10s, YOLOv8l, and YOLOv10l models by 0.26, 0.21, 0.84, and 0.63, respectively, compared to models trained with the default hyperparameters. The performance gains are more pronounced in the larger models, YOLOv8l and YOLOv10l, than in their smaller counterparts, YOLOv8s and YOLOv10s. Furthermore, YOLOv8 models consistently outperform YOLOv10, with mAP50 improvements of 0.26 for YOLOv8s over YOLOv10s and 0.65 for YOLOv8l over YOLOv10l when trained with BT. These results establish YOLOv8 as the preferred model for fire detection applications where detection performance is prioritized.
Funding: Supported by the Natural Science Foundation of Tianjin (No. 22JCQNJC01070), the National Natural Science Foundation of China (No. 42404079), and the Key Project of Tianjin Earthquake Agency (No. Zd202402).
Abstract: Disaster mitigation necessitates scientific and accurate aftershock forecasting during the critical 2 h after an earthquake. However, this task faces immense challenges due to the lack of early post-earthquake data and the unreliability of forecasts. To obtain foundational data for sequence parameters of the land-sea adjacent zone and establish a reliable and operational aftershock forecasting framework, we combined the initial sequence parameters extracted from envelope functions and incorporated small-earthquake information into our model to construct a Bayesian algorithm for the early post-earthquake stage. We performed parameter fitting, early post-earthquake aftershock occurrence rate forecasting, and effectiveness evaluation for 36 earthquake sequences with M ≥ 4.0 in the Bohai Rim region since 2010. According to the results, during the early stage after the mainshock, earthquake sequence parameters exhibited relatively drastic fluctuations with significant errors. The integration of prior information can mitigate the intensity of these changes and reduce the errors. The initial and stable sequence parameters generally display advantageous distribution characteristics, with each parameter's distribution being relatively concentrated and showing good symmetry and remarkable consistency. The sequence parameter p-values were relatively small, which indicates the comparatively slow attenuation of significant earthquake events in the Bohai Rim region. A certain positive correlation was observed between earthquake sequence parameters b and p. However, sequence parameters are unrelated to the mainshock magnitude, which implies that their statistical characteristics and trends are universal. The Bayesian algorithm showed good forecasting capability for aftershocks in the early post-earthquake period (2 h) in the Bohai Rim region, with an overall forecasting efficacy rate of 76.39%.
The proportion of “too low” failures exceeded that of “too high” failures, and the number of forecasting failures for the next three days was greater than that for the next day.
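Aftershock occurrence-rate forecasts of this kind are conventionally built on the modified Omori-Utsu law, lambda(t) = K / (t + c)^p, whose slow decay for small p matches the behavior reported above. A hedged sketch with invented parameter values (not the Bohai Rim estimates from the study):

```python
def omori_rate(t, K=50.0, c=0.05, p=1.1):
    """Modified Omori-Utsu aftershock rate lambda(t) = K / (t + c)^p."""
    return K / (t + c) ** p

def expected_count(t1, t2, K=50.0, c=0.05, p=1.1):
    """Closed-form integral of the Omori rate over [t1, t2] (valid for p != 1)."""
    antideriv = lambda t: (t + c) ** (1.0 - p) / (1.0 - p)
    return K * (antideriv(t2) - antideriv(t1))

# expected aftershock counts (t in days): first day vs. the next three days
day1 = expected_count(0.0, 1.0)
days2to4 = expected_count(1.0, 4.0)
print(round(day1, 1), round(days2to4, 1))
```

A Bayesian forecast would place priors on (K, c, p), update them as the sequence unfolds, and propagate the posterior through this integral to get the predicted counts being scored above.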