When assessing seismic liquefaction potential with data-driven models, addressing the uncertainties of establishing models, interpreting cone penetration test (CPT) data, and setting decision thresholds is crucial for avoiding biased data selection, ameliorating overconfident models, and remaining flexible to varying practical objectives, especially when the training and testing data are not identically distributed. A workflow leveraging Bayesian methodology was proposed to address these issues. Employing a Multi-Layer Perceptron (MLP) as the foundational model, this approach was benchmarked against empirical methods and advanced algorithms for its simplicity, accuracy, and resistance to overfitting. The analysis revealed that, while MLP models optimized via the maximum a posteriori algorithm suffice for straightforward scenarios, Bayesian neural networks showed great potential for preventing overfitting. Additionally, integrating decision thresholds through various evaluative principles offers insights for challenging decisions. Two case studies demonstrate the framework's capacity for nuanced interpretation of in situ data, employing a model committee for a detailed evaluation of liquefaction potential via Monte Carlo simulations and basic statistics. Overall, the proposed step-by-step workflow for analyzing seismic liquefaction incorporates multifold testing and real-world data validation, showing improved robustness against overfitting and greater versatility in addressing practical challenges. This research contributes to the seismic liquefaction assessment field by providing a structured, adaptable methodology for accurate and reliable analysis.
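The model-committee idea in the abstract above can be illustrated with a minimal sketch: draw many plausible network parameter sets (standing in for posterior samples of a Bayesian MLP), average their predicted liquefaction probabilities by Monte Carlo simulation, and report the spread as basic statistics. The features, toy network, committee size, and threshold below are hypothetical placeholders, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_committee_predict(x, committee):
    """Average liquefaction probabilities over a committee of models.

    Each committee member is a (weights, bias) pair of a toy one-layer
    network standing in for an MLP drawn from a Bayesian posterior.
    """
    probs = []
    for w, b in committee:
        h = np.tanh(x @ w + b)                # hidden activation
        p = 1.0 / (1.0 + np.exp(-h.sum()))    # sigmoid output probability
        probs.append(p)
    return np.mean(probs), np.std(probs)

# Hypothetical normalized CPT features (e.g., tip resistance, sleeve friction)
x = np.array([0.4, -1.2, 0.7])
# Committee of 200 parameter draws (Monte Carlo simulation)
committee = [(rng.normal(size=(3, 8)), rng.normal(size=8)) for _ in range(200)]
p_mean, p_std = mlp_committee_predict(x, committee)

# The decision threshold can be tuned to the practical objective at hand
threshold = 0.5
print(f"P(liquefaction) = {p_mean:.2f} +/- {p_std:.2f}",
      "-> liquefiable" if p_mean > threshold else "-> non-liquefiable")
```

The committee spread (here `p_std`) is exactly the quantity a fixed-threshold classifier discards, which is why threshold selection under different evaluative principles matters.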
Foraminifera are shell-bearing microorganisms that are commonly found in marine deposits on the seabed. They are important indicators in many analyses and are used in climate change research, marine environmental monitoring, and evolutionary studies, as well as in the oil and gas industry. Although some research has focused on automating the classification of foraminifera images, few studies have addressed the uncertainty in these classifications. Although foraminifera classification is not a safety-critical task, estimating uncertainty is crucial to avoid misclassifications that could overlook rare and ecologically significant species that are informative indicators of the environment in which they lived. Uncertainty estimation in deep learning has gained significant attention, and many methods have been developed. However, evaluating the performance of these methods in practical settings remains a challenge. To create a benchmark for uncertainty estimation in the classification of foraminifera, we administered a multiple-choice questionnaire containing classification tasks to four senior geologists. By analyzing their responses, we generated human-derived uncertainty estimates for a test set of 260 images of foraminifera and sediment grains. These uncertainty estimates served as a baseline for comparison when training neural networks for classification. We then trained multiple deep neural networks, using a range of uncertainty quantification methods, to classify images and state the uncertainty of the classifications. The results of the deep learning uncertainty quantification methods were then analyzed and compared with the human benchmark, to see how the methods performed individually and how they aligned with humans. Our results show that human-level performance can be achieved with deep learning and that test-time data augmentation and ensembling can help improve both uncertainty estimation and classification performance. Our results also show that human uncertainty estimates are helpful indicators for detecting classification errors and that deep learning-based uncertainty estimates can improve calibration and classification accuracy.
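The test-time augmentation and ensembling that the study found helpful can be sketched generically: average class probabilities over several models and several randomly perturbed copies of the input, then use predictive entropy as an uncertainty score. The random linear "networks" and Gaussian-noise "augmentation" below are stand-ins for illustration, not the study's architectures or augmentation pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def tta_ensemble_probs(logits_fn, image, models, n_aug=16):
    """Average class probabilities over models and augmented input copies."""
    probs = []
    for model in models:
        for _ in range(n_aug):
            aug = image + rng.normal(scale=0.05, size=image.shape)  # stand-in augmentation
            probs.append(softmax(logits_fn(model, aug)))
    return np.mean(probs, axis=0)

# Stand-in "networks": random linear maps from a flattened image to 4 classes
models = [rng.normal(size=(64, 4)) for _ in range(5)]
logits_fn = lambda W, img: img.ravel() @ W
image = rng.normal(size=(8, 8))

p = tta_ensemble_probs(logits_fn, image, models)
entropy = -np.sum(p * np.log(p + 1e-12))   # predictive entropy as uncertainty
print(p.round(3), f"entropy={entropy:.3f}")
```

High predictive entropy flags the ambiguous specimens a geologist would also hesitate over, which is what makes such scores comparable to the human-derived baseline.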
Measurement uncertainty plays an important role in laser tracking measurement analyses. In the present work, the Guide to the Expression of Uncertainty in Measurement (GUM) uncertainty framework (GUF) and its supplement, the Monte Carlo method, were used to estimate the uncertainty of task-specific laser tracker measurements. First, the sources of error in laser tracker measurement were analyzed in detail, including instruments, measuring network fusion, measurement strategies, measurement process factors (such as the operator), the measurement environment, and task-specific data processing. Second, the GUM and Monte Carlo methods and their application to laser tracker measurement were presented. Finally, a case study involving the uncertainty estimation of a cylindricity measurement process using the GUF and Monte Carlo methods was illustrated. The expanded uncertainty results (at the 95% confidence level) obtained with the Monte Carlo method are 0.069 mm (least-squares criterion) and 0.062 mm (minimum zone criterion), respectively, whereas the GUM uncertainty framework can provide only the least-squares result, which is 0.071 mm. Thus, the GUM uncertainty framework slightly underestimates the overall uncertainty by 10%. The results demonstrate that the two methods have different characteristics in task-specific uncertainty evaluations of laser tracker measurements, and indicate that the Monte Carlo method is a practical tool for applying the principle of propagation of distributions that does not depend on the assumptions and limitations required by the law of propagation of uncertainty (GUF). These features of the Monte Carlo method reduce the risk of unreliable uncertainty estimates, particularly for complicated measurement models, without the need to evaluate partial derivatives. In addition, the impact of the sampling strategy and evaluation method on the uncertainty of the measurement results can also be taken into account with the Monte Carlo method, which plays a guiding role in measurement planning.
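The two evaluation routes compared above can be contrasted on a toy measurand (not the cylindricity model of the paper): the GUF propagates standard uncertainties through first-order sensitivity coefficients, while the Monte Carlo method of GUM Supplement 1 propagates whole distributions and reads a 95% coverage interval directly off the output sample. For a near-linear model the two agree closely; they diverge as the model becomes nonlinear or the input distributions non-Gaussian.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical measurand: a distance from two coordinate readings,
# y = sqrt(x1^2 + x2^2), with standard uncertainties u1, u2 on the inputs.
x1, u1 = 30.0, 0.020   # mm
x2, u2 = 40.0, 0.025   # mm
f = lambda a, b: np.hypot(a, b)

# GUM framework: law of propagation of uncertainty (first-order partials)
y0 = f(x1, x2)
c1, c2 = x1 / y0, x2 / y0               # sensitivity coefficients dy/dx_i
u_gum = np.sqrt((c1 * u1) ** 2 + (c2 * u2) ** 2)
U_gum = 2.0 * u_gum                     # expanded uncertainty, k = 2 (~95 %)

# Monte Carlo method: propagate full distributions (GUM Supplement 1)
n = 200_000
y = f(rng.normal(x1, u1, n), rng.normal(x2, u2, n))
lo, hi = np.percentile(y, [2.5, 97.5])  # 95 % coverage interval
U_mc = (hi - lo) / 2.0

print(f"GUM: {y0:.3f} +/- {U_gum:.3f} mm, MC: [{lo:.3f}, {hi:.3f}] mm")
```

Note that the Monte Carlo route never forms the partial derivatives `c1`, `c2`; it only needs repeated evaluations of `f`, which is what makes it attractive for complicated task-specific models.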
Impedance eduction methods have been developed for decades to meet the increasing need for high-quality impedance data in the design and optimization of acoustic liners. To this end, it is important to fully investigate the uncertainty problem, to which only limited attention has been devoted so far. This paper considers the possibility of acoustically-induced structural vibration as a non-negligible uncertainty or error source in impedance eduction experiments. As the frequency moves away from the resonant frequency, with the increase in the value of cavity reactance, the acoustic particle velocity inside liner orifices may decrease to a level comparable to the vibration velocity of the liner facing sheet. Thus, the acoustically-induced vibration, although generally weak except at the inherent structural frequencies, may considerably affect the impedance eduction results near the anti-resonant frequency, where the liner has poor absorption. To demonstrate the effect of structural vibration, the vibration velocity of the liner facing sheet is estimated from the experimentally educed admittance of liner samples whose orifices are sealed with tape. Further, a three-dimensional numerical model is set up, in which a normal particle velocity is introduced over the solid portion of the liner facing sheet to imitate structural vibration, rather than directly solving the acoustic-structural coupling problem. As shown by the results, vibration of the liner facing sheet, at a velocity as small as that estimated in the experiment, can result in anomalous deviation of the educed impedance from the impedance model near the anti-resonant frequency. The trend of the anomalous deviation with frequency is numerically captured.
Modal parameters can accurately characterize the structural dynamic properties and assess the physical state of a structure. Therefore, it is particularly significant to identify structural modal parameters from the monitoring data in a structural health monitoring (SHM) system, so as to provide a scientific basis for structural damage identification and dynamic model updating. In view of this, this paper reviews methods for identifying structural modal parameters under environmental excitation and briefly describes how to identify structural damage based on the derived modal parameters. The paper primarily introduces data-driven modal parameter identification methods (e.g., time-domain, frequency-domain, and time-frequency-domain methods), briefly describes damage identification methods based on the variations of modal parameters (e.g., natural frequencies, mode shapes, and curvature mode shapes) and modal validation methods (e.g., the stability diagram and the modal assurance criterion). The current status of the application of artificial intelligence (AI) methods to modal parameter identification and damage identification is further discussed. Based on the preceding analysis, the main development trends of structural modal parameter identification and damage identification methods are given to provide scientific references for the optimized design and functional upgrading of SHM systems.
In this paper, we develop an entropy-conservative discontinuous Galerkin (DG) method for the shallow water (SW) equations with random inputs. One of the most popular methods for uncertainty quantification is the generalized polynomial chaos (gPC) approach, which we consider in the following manuscript. We apply the stochastic Galerkin (SG) method to the stochastic SW equations. Using the SG approach in the stochastic hyperbolic SW system yields a purely deterministic system that is not necessarily hyperbolic anymore. The lack of hyperbolicity leads to ill-posedness and stability issues in numerical simulations. By transforming the system using Roe variables, the hyperbolicity can be ensured, and an entropy-entropy flux pair is known from a recent investigation by Gerster and Herty (Commun. Comput. Phys. 27(3): 639-671, 2020). We use this pair and determine a corresponding entropy flux potential. Then, we construct entropy-conservative numerical two-point fluxes for this augmented system. By applying these new numerical fluxes in a nodal DG spectral element method (DGSEM) with a flux differencing ansatz, we obtain a provably entropy conservative (dissipative) scheme. In numerical experiments, we validate our theoretical findings.
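The gPC machinery underlying the stochastic Galerkin formulation can be shown on a scalar example: expand u(xi) = exp(xi) with xi ~ N(0,1) in probabilists' Hermite polynomials, computing the coefficients by Gauss-Hermite quadrature projection and recovering the mean and variance from them. This sketch illustrates only the chaos expansion itself, not the Roe-variable transform or the entropy-conservative fluxes of the paper.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi, e

# Nodes/weights for the weight exp(-x^2/2); renormalize to the N(0,1) density
x, w = He.hermegauss(40)
w = w / sqrt(2.0 * pi)

# Projection: c_k = E[u(xi) He_k(xi)] / k!  (He_k orthogonal with norm k!)
K = 8
coeffs = np.array([np.sum(w * np.exp(x) * He.hermeval(x, [0] * k + [1]))
                   / factorial(k) for k in range(K)])

mean = coeffs[0]                                              # E[u] = c_0
var = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, K)) # Var from modes k >= 1
print(mean, var)  # analytic: e^{1/2} ~ 1.6487 and e^2 - e ~ 4.6708
```

In a stochastic Galerkin scheme, each such coefficient becomes an unknown field in the deterministic system, which is where the loss of hyperbolicity mentioned above originates.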
The separation-of-variable (SOV) methods, such as the improved SOV method, the variational SOV method, and the extended SOV method, have been proposed by the present authors and coworkers to obtain closed-form analytical solutions for the free vibration and eigenbuckling of rectangular plates and circular cylindrical shells. Taking the free vibration of rectangular thin plates as an example, this work presents the theoretical framework of the SOV methods in an instructive way, together with bisection-based solution procedures for a group of nonlinear eigenvalue equations. Besides, the explicit equations of the nodal lines of the SOV methods are presented, and the relations between nodal line patterns and frequency orders are investigated. It is concluded that the highly accurate SOV methods have the same accuracy for all frequencies, that the mode shapes at repeated frequencies can also be precisely captured, and that the SOV methods do not have the problem of missing roots.
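The bisection-based root search used for such nonlinear eigenvalue equations can be illustrated on a classical one-dimensional transcendental frequency equation (the clamped-clamped beam characteristic equation, far simpler than the plate equations treated by the SOV methods): cos(x)cosh(x) = 1, whose first nonzero root is near 4.7300.

```python
import math

def bisect(f, a, b, tol=1e-10):
    """Bisection root search on a bracketed sign change of f."""
    fa = f(a)
    assert fa * f(b) < 0, "root must be bracketed"
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:      # sign change in [a, m]
            b = m
        else:                   # sign change in [m, b]
            a, fa = m, f(m)
    return 0.5 * (a + b)

# Illustrative transcendental frequency equation: cos(x)cosh(x) = 1
f = lambda x: math.cos(x) * math.cosh(x) - 1.0
root = bisect(f, 4.0, 5.0)
print(f"first nonzero root: {root:.4f}")
```

Bisection is attractive here because it cannot skip a bracketed root, which mirrors the "no missing roots" property claimed for the SOV procedures.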
Soil improvement is one of the most important issues in geotechnical engineering practice. The wide application of traditional improvement techniques (cement/chemical materials) is limited because they damage the ecological environment and intensify carbon emissions. In contrast, the use of microbially induced calcium carbonate precipitation (MICP) to obtain bio-cement is a novel technique with the potential to improve soil stability, providing a low-carbon, environment-friendly, and sustainable integrated solution for some geotechnical engineering problems. This paper presents a comprehensive review of the latest progress in soil improvement based on the MICP strategy. It systematically summarizes the mineralization mechanism, influencing factors, improvement methods, engineering characteristics, and current field application status of MICP. Additionally, it explores the limitations of the approach and proposes prospective applications of MICP for soil improvement. This review indicates that the utilization of different calcium-based environmental wastes in MICP, and the combination of other materials with MICP, are conducive to meeting engineering and market demand. Furthermore, we recommend and encourage global collaborative study and practice with a view to commercializing the MICP technique in the future. The current review purports to provide insights for engineers and interdisciplinary researchers, and guidance for future engineering applications.
With the increasing integration of large-scale distributed energy resources into the grid, traditional distribution network optimization and dispatch methods struggle to address the challenges posed by both generation and load. Accounting for these issues, this paper proposes a multi-timescale coordinated optimization dispatch method for distribution networks. First, probability box theory was employed to determine the uncertainty intervals of generation and load forecasts, based on which the flexibility dispatch requirements and capacity constraints of the grid were calculated and analyzed. Subsequently, a multi-timescale optimization framework was constructed, incorporating the generation and load forecast uncertainties. This framework included optimization models for day-ahead scheduling, intra-day optimization, and real-time adjustments, aiming to meet flexibility needs across different timescales and improve the economic efficiency of the grid. Furthermore, an improved soft actor-critic (SAC) algorithm was introduced to enhance the uncertainty exploration capability. Utilizing a centralized training and decentralized execution framework, a multi-agent SAC network model was developed to improve the decision-making efficiency of the agents. Finally, the effectiveness and superiority of the proposed method were validated using a modified IEEE 33-bus test system.
Response analysis of structures involving non-probabilistic uncertain parameters can be closely related to optimization. This paper provides a review of optimization-based methods for uncertainty analysis, focusing on the specific properties of the adopted numerical optimization approaches. We collect and discuss methods based on nonlinear programming, semidefinite programming, mixed-integer programming, mathematical programming with complementarity constraints, difference-of-convex programming, optimization methods using surrogate models and machine learning techniques, and metaheuristics. As a closely related topic, we also overview methods for assessing structural robustness using non-probabilistic uncertainty modeling. We conclude the paper by drawing several remarks from this review.
Cropland nitrate leaching is the major nitrogen (N) loss pathway, and it contributes significantly to water pollution. However, cropland nitrate leaching estimates show great uncertainty due to variations in input datasets and estimation methods. Here, we present a re-evaluation of Chinese cropland nitrate leaching, and identify and quantify the sources of uncertainty by integrating three cropland area datasets, three N input datasets, and three estimation methods. The results revealed that nitrate leaching from Chinese cropland averaged 6.7±0.6 Tg N yr^(-1) in 2010, ranging from 2.9 to 15.8 Tg N yr^(-1) across 27 different estimates. The primary contributor to the uncertainty was the estimation method, accounting for 45.1%, followed by the interaction of the N input dataset and estimation method at 24.4%. The results of this study emphasize the need to adopt a robust estimation method and to improve the compatibility between the estimation method and the N input dataset to effectively reduce uncertainty. This analysis provides valuable insights for accurately estimating cropland nitrate leaching and contributes to ongoing efforts to address water pollution concerns.
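The attribution of uncertainty to input datasets versus estimation methods can be sketched as a variance decomposition over a factorial design of estimates. The 3 x 3 x 3 array below is synthetic (the effect sizes are invented, with the "method" factor deliberately dominant); the real study's 27 estimates and percentages differ.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the 3 x 3 x 3 factorial of leaching estimates
# (area dataset x N-input dataset x estimation method), additive effects only
est = (rng.normal(0, 0.3, (3, 1, 1))     # area-dataset effect
       + rng.normal(0, 0.5, (1, 3, 1))   # N-input-dataset effect
       + rng.normal(0, 1.0, (1, 1, 3))   # estimation-method effect (largest)
       + 6.7)                            # overall mean, Tg N yr^-1

def main_effect_share(x, axis):
    """Share of total variance explained by one factor's main effect."""
    marginal = x.mean(axis=tuple(i for i in range(x.ndim) if i != axis))
    return marginal.var() / x.var()

shares = {name: main_effect_share(est, k)
          for k, name in enumerate(["area", "N input", "method"])}
print({k: round(v, 2) for k, v in shares.items()})
```

With purely additive effects the main-effect shares sum to one; any shortfall in real data is attributable to interactions, such as the dataset-method interaction the study quantifies.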
To solve variable-coefficient ordinary differential equations on a bounded domain, the Lagrange interpolation method is used to approximate the exact solution of the equation, and the error between the numerical solution and the exact solution is obtained. Comparing this error with that of the difference method leads to the conclusion that the Lagrange interpolation method is more effective for solving variable-coefficient ordinary differential equations.
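The approximation machinery can be illustrated concretely. Taking a hypothetical variable-coefficient ODE y' = -x y, y(0) = 1, whose exact solution y = exp(-x^2/2) is known, Lagrange interpolation through a modest number of well-chosen nodes reproduces the solution to very high accuracy on the interval, which is the kind of comparison made against the difference method.

```python
import numpy as np

def lagrange_eval(xn, yn, x):
    """Evaluate the Lagrange interpolating polynomial through (xn, yn) at x."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for i in range(len(xn)):
        li = np.ones_like(x)                  # i-th Lagrange basis polynomial
        for j in range(len(xn)):
            if j != i:
                li *= (x - xn[j]) / (xn[i] - xn[j])
        total += yn[i] * li
    return total

# Exact solution of the illustrative ODE y' = -x*y, y(0) = 1
exact = lambda x: np.exp(-0.5 * x ** 2)

# Chebyshev nodes mapped to [0, 2] avoid the Runge phenomenon
n = 12
k = np.arange(n + 1)
xn = 1.0 + np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))
xx = np.linspace(0.0, 2.0, 401)
err = np.max(np.abs(lagrange_eval(xn, exact(xn), xx) - exact(xx)))
print(f"max interpolation error with {n + 1} nodes: {err:.2e}")
```

A low-order difference method on a comparable number of grid points converges only algebraically, whereas the polynomial interpolant of a smooth solution converges much faster, which is the essence of the paper's conclusion.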
To study the uncertainty quantification of resonant states in open quantum systems, we developed a Bayesian framework by integrating a reduced basis method (RBM) emulator with the Gamow coupled-channel (GCC) approach. The RBM, constructed via eigenvector continuation and trained on both bound and resonant configurations, enables fast and accurate emulation of resonance properties across the parameter space. To identify the physical resonant states in the emulator's output, we introduce an overlap-based selection technique that effectively isolates true solutions from background artifacts. By applying this framework to the unbound nucleus ^(6)Be, we quantified the model uncertainty in the predicted complex energies. The results demonstrate relative errors of 17.48% in the real part and 8.24% in the imaginary part, while achieving a speedup of four orders of magnitude compared with the full GCC calculations. To further investigate the asymptotic behavior of the resonant-state wavefunctions within the RBM framework, we employed a Lippmann-Schwinger (L-S)-based correction scheme. This approach not only improves the consistency between eigenvalues and wavefunctions but also enables a seamless extension from real-space training data to the complex energy plane. By bridging the gap between the bound-state and continuum regimes, the L-S correction significantly enhances the emulator's capability to accurately capture continuum structures in open quantum systems.
Ocean energy has progressively gained considerable interest due to its abundant potential to meet the world's energy demand, and the blade is the core component in electricity generation from ocean currents. However, the widened hydraulic excitation frequency may trigger blade resonance due to the time variation in the velocity and angle of attack of the ocean current, possibly resulting in blade fatigue and destructively interfering with grid stability. A key parameter that determines the resonance amplitude of the blade is the hydrodynamic damping ratio (HDR). However, the HDR is difficult to obtain due to the complex fluid-structure interaction (FSI). Therefore, a literature review was conducted on the hydrodynamic damping characteristics of blade-like structures. The experimental and simulation methods used to identify and quantify the HDR are described, with emphasis on the experimental processes and simulation setups. Moreover, the accuracy and efficiency of different simulation methods are compared, and the modal work approach is recommended. The effects of key parameters, including flow velocity, angle of attack, gap, rotational speed, and cavitation, on the HDR are then summarized, and suggestions on operating conditions are presented from the perspective of increasing the HDR. Subsequently, considering multiple flow parameters, several theoretical derivations and semi-empirical prediction formulas for the HDR are introduced, and their accuracy and applicability are discussed. Based on the shortcomings of the existing research, directions for future research are finally identified. The current work offers a clear understanding of the HDR of blade-like structures, which could improve the evaluation accuracy of flow-induced vibration in the design stage.
For the uncertainty quantification of complex models with high-dimensional, nonlinear, multi-component coupling, such as digital twins, traditional statistical sampling methods, such as random sampling and Latin hypercube sampling, require a large number of samples, which entails huge computational costs. Therefore, how to construct a small sample space has become a topic of great interest for researchers. To this end, this paper proposes a sequential search-based Latin hypercube sampling scheme to generate efficient and accurate samples for uncertainty quantification. First, the sampling range is formed by characterizing the polymorphic uncertainty based on theoretical analysis. Then, the optimal Latin hypercube design is selected using the Latin hypercube sampling method combined with a "space-filling" criterion. Finally, a sample selection function is established, and the next most informative sample is optimally selected to obtain the sequential test sample. Compared with classical sampling methods, the generated samples retain more information while remaining sparse. A series of numerical experiments demonstrates the superiority of the proposed sequential search-based Latin hypercube sampling scheme, which provides reliable uncertainty quantification results with small sample sizes.
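A plain (non-sequential) Latin hypercube design, the starting point of such schemes, can be generated in a few lines; the sequential search and the "space-filling" criterion would then choose among, or extend, many such candidate designs. This sketch shows only the defining stratification property.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """One random Latin hypercube design: n points in [0,1]^d with
    exactly one point per 1/n stratum in every dimension."""
    u = rng.random((n, d))                               # jitter within strata
    strata = np.array([rng.permutation(n) for _ in range(d)]).T
    return (strata + u) / n

rng = np.random.default_rng(4)
X = latin_hypercube(8, 2, rng)

# Every 1/8 stratum of each dimension contains exactly one sample
for dim in range(2):
    assert sorted(np.floor(X[:, dim] * 8).astype(int)) == list(range(8))
print(X.round(3))
```

A maximin-distance criterion (one common "space-filling" choice) would score many such random designs and keep the one whose closest pair of points is farthest apart.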
Unmanned aerial vehicles (UAVs) have become crucial tools in moving target tracking due to their agility and ability to operate in complex, dynamic environments. UAVs must meet several requirements to achieve stable tracking, including maintaining continuous target visibility amidst occlusions, ensuring flight safety, and achieving smooth trajectory planning. This paper reviews the latest advancements in UAV-based target tracking, highlighting information prediction, tracking strategies, and swarm cooperation. To address challenges including target visibility and occlusion, real-time prediction and tracking in dynamic environments, flight safety and coordination, and resource management and energy efficiency, the paper identifies future research directions aimed at improving the performance, reliability, and scalability of UAV tracking systems.
Geomechanical properties of rocks vary across different measurement scales, primarily due to heterogeneity. Micro-scale geomechanical tests, including micro-scale "scratch tests" and nano-scale nanoindentation tests, are attractive at different scales. Each method requires minimal sample volume, is low cost, and offers a relatively rapid measurement turnaround time. However, recent micro-scale test results, including scratch test results and nanoindentation results, exhibit tangible variance and uncertainty, suggesting a need to correlate mineral composition mapping with elastic modulus mapping to isolate the relative impact of specific minerals. Different research labs often utilize different interpretation methods, and it is clear that future micro-mechanical tests may benefit from standardized testing and interpretation procedures. This study seeks options for such standardized procedures through two specific objectives: (1) quantify the chemical and physical controls on micro-mechanical properties, and (2) quantify the sources of uncertainty associated with nanoindentation measurements. To reach these goals, we conducted mechanical tests on three different scales: triaxial compression tests, scratch tests, and nanoindentation tests. We found that mineral phase weight percentage is highly correlated with the nanoindentation elastic modulus distribution. We also conclude that nanoindentation testing is a mineralogy- and microstructure-based method that generally yields significant uncertainty and overestimation. The uncertainty of the testing method is largely associated with not mapping pore space a priori. Lastly, the uncertainty can be reduced by combining phase mapping and modulus mapping with substantial and random data sampling.
The aim of this paper is to prove another variation of the Heisenberg uncertainty principle. We generalize the quantitative uncertainty relations to n different (time-frequency) domains, and we give an algorithm for signal recovery related to the canonical Fourier-Bessel transform.
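For reference, the classical Heisenberg inequality that such results generalize can be stated as follows (with the Fourier transform normalized as $\hat{f}(\xi)=\int_{\mathbb{R}} f(t)e^{-2\pi i t\xi}\,dt$; other conventions change the constant):

```latex
\left( \int_{\mathbb{R}} (t - a)^2 \, |f(t)|^2 \, dt \right)
\left( \int_{\mathbb{R}} (\xi - b)^2 \, |\hat{f}(\xi)|^2 \, d\xi \right)
\;\ge\; \frac{\|f\|_2^4}{16\pi^2},
\qquad a, b \in \mathbb{R},
```

with equality exactly for suitably translated and modulated Gaussians. The n-domain generalizations bound a product of n such dispersions in the same spirit.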
The element-free Galerkin (EFG) method, which constructs shape functions via moving least squares (MLS) approximation, represents a fundamental and widely studied meshless method in numerical computation. Although it achieves high computational accuracy, its shape functions are more complex than those of the conventional finite element method (FEM), resulting in great computational requirements. Therefore, improving the computational efficiency of the EFG method represents an important research direction. This paper systematically reviews significant contributions from domestic and international scholars in advancing the EFG method, including the improved element-free Galerkin (IEFG) method, various interpolating EFG methods, four distinct complex variable EFG methods, and a series of dimension splitting meshless methods. In the numerical examples, the effectiveness and efficiency of these methods are validated by analyzing the solutions of the IEFG method for 3D steady-state anisotropic heat conduction, 3D elastoplasticity, and large deformation problems, as well as the performance of two-dimensional splitting meshless methods in solving the 3D Helmholtz equation.
The practical predictability of hail precipitation rates is significantly influenced by initial meteorological perturbations stemming from various uncertainty sources. This study thoroughly assessed the predictability of hail precipitation rates in both climatologically and flow-dependent perturbed ensembles (CEns and FEns). These ensembles incorporated initial meteorological uncertainties derived separately from two operational ensembles. Leveraging the Weather Research and Forecasting model, we conducted cloud-resolving simulations of an idealized hailstorm. The practical predictability of hail responded comparably to both climatological and flow-dependent uncertainties across the entire ensemble of 50 members. However, a notable difference emerged when comparing the peak hail precipitation rates of the top 10 and bottom 10 members. From a thermodynamic perspective, the primary source of uncertainty in hail precipitation lay in the significant variations in temperature stratification, particularly at -20°C and -40°C. On the microphysical front, perturbations within CEns generated greater uncertainty in the process of rainwater collection by hail, contributing significantly to the microphysical growth mechanisms of hail. Furthermore, the findings reveal a stronger dependency of hail precipitation uncertainty on thermodynamic perturbations than on kinematic perturbations. These insights enhance the comprehension of the practical predictability of hail and contribute significantly to the understanding of ensemble forecasting for hail events.
Funding: Funded by the Norwegian Research Council (IKTPLUSS-IKT og digital innovasjon, project no. 332901).
Abstract: Foraminifera are shell-bearing microorganisms that are commonly found in marine deposits on the seabed. They are important indicators in many analyses, are used in climate change research, marine environment monitoring, and evolutionary studies, and are also frequently used in the oil and gas industry. Although some research has focused on automating the classification of foraminifera images, few studies have addressed the uncertainty in these classifications. Although foraminifera classification is not a safety-critical task, estimating uncertainty is crucial to avoid misclassifications that could overlook rare and ecologically significant species that are informative indicators of the environment in which they lived. Uncertainty estimation in deep learning has gained significant attention and many methods have been developed. However, evaluating the performance of these methods in practical settings remains a challenge. To create a benchmark for uncertainty estimation in the classification of foraminifera, we administered a multiple-choice questionnaire containing classification tasks to four senior geologists. By analyzing their responses, we generated human-derived uncertainty estimates for a test set of 260 images of foraminifera and sediment grains. These uncertainty estimates served as a baseline for comparison when training neural networks in classification. We then trained multiple deep neural networks using a range of uncertainty quantification methods to classify the images and state the uncertainty about the classifications. The results of the deep learning uncertainty quantification methods were then analyzed and compared with the human benchmark, to see how the methods performed individually and how they aligned with humans. Our results show that human-level performance can be achieved with deep learning and that test-time data augmentation and ensembling can help improve both uncertainty estimation and classification performance. Our results also show that human uncertainty estimates are helpful indicators for detecting classification errors and that deep learning-based uncertainty estimates can improve calibration and classification accuracy.
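One widely used member of this family of uncertainty quantification methods is the deep ensemble: member softmax outputs are averaged, and the entropy of the averaged distribution serves as an uncertainty score. The toy logits below are purely illustrative:

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_predict(logits_per_member):
    """Average member softmax outputs; report predictive entropy as uncertainty."""
    probs = softmax(np.asarray(logits_per_member), axis=-1)  # shape (members, classes)
    mean_p = probs.mean(axis=0)
    entropy = -np.sum(mean_p * np.log(mean_p + 1e-12))
    return mean_p, entropy

# Toy logits from three hypothetical ensemble members for a 4-class task.
logits = [[2.0, 0.1, -1.0, 0.0],
          [1.5, 0.3, -0.5, 0.2],
          [2.2, -0.2, -1.2, 0.1]]
mean_p, entropy = ensemble_predict(logits)
```

Test-time augmentation follows the same averaging recipe, with the members replaced by predictions on augmented copies of one input.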
Funding: Project (51318010402) supported by the General Armament Department Pre-Research Program of China.
Abstract: Measurement uncertainty plays an important role in laser tracking measurement analyses. In the present work, the Guide to the Expression of Uncertainty in Measurement (GUM) uncertainty framework (GUF) and its supplement, the Monte Carlo method, were used to estimate the uncertainty of task-specific laser tracker measurements. First, the sources of error in laser tracker measurement were analyzed in detail, including instruments, measuring network fusion, measurement strategies, measurement process factors (such as the operator), measurement environment, and task-specific data processing. Second, the GUM and Monte Carlo methods and their application to laser tracker measurement were presented. Finally, a case study involving the uncertainty estimation of a cylindricity measurement process using the GUF and Monte Carlo methods was illustrated. The expanded uncertainty results (at 95% confidence levels) obtained with the Monte Carlo method are 0.069 mm (least-squares criterion) and 0.062 mm (minimum zone criterion), respectively, while with the GUM uncertainty framework only the least-squares result, 0.071 mm, can be obtained; the GUM uncertainty framework thus deviates from the Monte Carlo estimate by roughly 10%. The results demonstrate that the two methods have different characteristics in task-specific uncertainty evaluations of laser tracker measurements. They indicate that the Monte Carlo method is a practical tool for applying the principle of propagation of distributions and does not depend on the assumptions and limitations required by the law of propagation of uncertainty (GUF). These features of the Monte Carlo method reduce the risk of unreliable uncertainty estimation, particularly in cases of complicated measurement models, without the need to evaluate partial derivatives. In addition, the impact of sampling strategy and evaluation method on the uncertainty of the measurement results can also be taken into account with the Monte Carlo method, which plays a guiding role in measurement planning.
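The contrast between the two approaches can be sketched on a toy measurement model (the model and input uncertainties below are assumptions, not the cylindricity model from the study): the GUF propagates first-order sensitivities, while the Monte Carlo method propagates the full distributions with no derivatives required:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed measurement model y = f(x1, x2); its mild nonlinearity is where
# first-order GUF propagation and Monte Carlo can start to disagree.
def f(x1, x2):
    return x1 * x2 + 0.1 * x1**2

x1_mean, x1_u = 10.0, 0.05   # assumed input estimates and standard uncertainties
x2_mean, x2_u = 2.0, 0.02

# GUF: first-order propagation via partial derivatives evaluated at the means.
d1 = x2_mean + 0.2 * x1_mean          # df/dx1
d2 = x1_mean                          # df/dx2
u_guf = np.sqrt((d1 * x1_u) ** 2 + (d2 * x2_u) ** 2)

# Monte Carlo: propagate the full input distributions, no derivatives needed.
n = 200_000
samples = f(rng.normal(x1_mean, x1_u, n), rng.normal(x2_mean, x2_u, n))
u_mc = samples.std()
lo, hi = np.percentile(samples, [2.5, 97.5])   # 95% coverage interval
```

For this nearly linear model the two standard uncertainties agree closely; strongly nonlinear or non-Gaussian models are where the Monte Carlo coverage interval becomes the more reliable statement.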
Funding: Funded by the National Science and Technology Major Project, China (No. 2017-II-0008-0022).
Abstract: Impedance eduction methods have been developed for decades to meet the increasing need for high-quality impedance data in the design and optimization of acoustic liners. To this end, it is important to fully investigate the uncertainty problem, to which only limited attention has been devoted so far. This paper considers the possibility of acoustically induced structural vibration as a non-negligible uncertainty or error source in impedance eduction experiments. As the frequency moves away from the resonant frequency and the cavity reactance grows, the acoustic particle velocity inside the liner orifices can decrease to an extent comparable to the vibration velocity of the liner facing sheet. Thus, acoustically induced vibration, although generally weak except at the inherent structural frequencies, may considerably affect the impedance eduction results near the anti-resonant frequency, where the liner has poor absorption. To demonstrate the effect of structural vibration, the vibration velocity of the liner facing sheet is estimated from the experimentally educed admittance of liner samples whose orifices are sealed with tape. Further, a three-dimensional numerical model is set up in which a normal particle velocity is introduced over the solid portion of the liner facing sheet to imitate structural vibration, rather than directly solving the acoustic-structural coupling problem. As shown by the results, vibration of the liner facing sheet, at a velocity as small as that estimated in the experiment, can result in anomalous deviation of the educed impedance from the impedance model near the anti-resonant frequency. The trend of the anomalous deviation with frequency is numerically captured.
Funding: Supported by the Innovation Foundation of the Provincial Education Department of Gansu (2024B-005), the Gansu Province National Science Foundation (22YF7GA182), and the Fundamental Research Funds for the Central Universities (No. lzujbky2022-kb01).
Abstract: Modal parameters can accurately characterize structural dynamic properties and assess the physical state of a structure. It is therefore particularly significant to identify structural modal parameters from the monitoring data in a structural health monitoring (SHM) system, so as to provide a scientific basis for structural damage identification and dynamic model updating. In view of this, this paper reviews methods for identifying structural modal parameters under environmental excitation and briefly describes how to identify structural damage based on the derived modal parameters. The paper primarily introduces data-driven modal parameter recognition methods (e.g., time-domain, frequency-domain, and time-frequency-domain methods), briefly describes damage identification methods based on variations of modal parameters (e.g., natural frequencies, mode shapes, and curvature mode shapes) and modal validation methods (e.g., the stabilization diagram and the Modal Assurance Criterion), and further discusses the current status of artificial intelligence (AI) methods applied to modal parameter recognition and damage identification. Based on the preceding analysis, the main development trends of structural modal parameter recognition and damage identification methods are given, to provide scientific references for the optimized design and functional upgrading of SHM systems.
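As a minimal illustration of one of the modal validation tools mentioned above, the Modal Assurance Criterion (MAC) compares two mode-shape vectors and returns 1 for identical shapes (up to scaling); the mode shapes below are assumed values for a hypothetical 4-DOF system:

```python
import numpy as np

def mac(phi_i, phi_j):
    """Modal Assurance Criterion between two real mode-shape vectors (1 = same shape)."""
    num = np.abs(phi_i @ phi_j) ** 2
    return num / ((phi_i @ phi_i) * (phi_j @ phi_j))

# Two assumed mode shapes of a hypothetical 4-DOF system.
phi_1 = np.array([0.3, 0.8, 1.0, 0.7])
phi_2 = np.array([-0.9, -0.4, 0.5, 1.0])

same = mac(phi_1, 2.5 * phi_1)   # scaling does not change the shape, so MAC is 1
cross = mac(phi_1, phi_2)        # distinct shapes give a MAC well below 1
```

In identification practice, a MAC matrix between identified and reference modes helps pair physical modes and reject spurious ones.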
Abstract: In this paper, we develop an entropy-conservative discontinuous Galerkin (DG) method for the shallow water (SW) equations with random inputs. One of the most popular methods for uncertainty quantification is the generalized polynomial chaos (gPC) approach, which we consider in the following manuscript. We apply the stochastic Galerkin (SG) method to the stochastic SW equations. Using the SG approach on the stochastic hyperbolic SW system yields a purely deterministic system that is not necessarily hyperbolic anymore. The lack of hyperbolicity leads to ill-posedness and stability issues in numerical simulations. By transforming the system using Roe variables, hyperbolicity can be ensured, and an entropy-entropy flux pair is known from a recent investigation by Gerster and Herty (Commun. Comput. Phys. 27(3): 639-671, 2020). We use this pair to determine a corresponding entropy flux potential, and then construct entropy-conservative numerical two-point fluxes for this augmented system. By applying these new numerical fluxes in a nodal DG spectral element method (DGSEM) with a flux-differencing ansatz, we obtain a provably entropy-conservative (dissipative) scheme. In numerical experiments, we validate our theoretical findings.
Funding: Supported by the National Natural Science Foundation of China (12172023).
Abstract: The separation-of-variables (SOV) methods, such as the improved SOV method, the variational SOV method, and the extended SOV method, have been proposed by the present authors and coworkers to obtain closed-form analytical solutions for the free vibration and eigenbuckling of rectangular plates and circular cylindrical shells. Taking the free vibration of rectangular thin plates as an example, this work presents the theoretical framework of the SOV methods in an instructive way, together with bisection-based solution procedures for a group of nonlinear eigenvalue equations. Besides, the explicit equations of the nodal lines of the SOV methods are presented, and the relations between nodal line patterns and frequency orders are investigated. It is concluded that the highly accurate SOV methods have the same accuracy for all frequencies, that the mode shapes associated with repeated frequencies can also be precisely captured, and that the SOV methods do not suffer from missing roots.
Funding: Funded by the National Natural Science Foundation of China (No. 41962016) and the Natural Science Foundation of Ningxia (Nos. 2023AAC02023, 2023A1218, and 2021AAC02006).
Abstract: Soil improvement is one of the most important issues in geotechnical engineering practice. The wide application of traditional improvement techniques (cement/chemical materials) is limited because they damage the ecological environment and intensify carbon emissions. The use of microbially induced calcium carbonate precipitation (MICP) to obtain bio-cement, however, is a novel technique with the potential to improve soil stability, providing a low-carbon, environment-friendly, and sustainable integrated solution for some geotechnical engineering problems. This paper presents a comprehensive review of the latest progress in soil improvement based on the MICP strategy. It systematically summarizes the mineralization mechanism, influencing factors, improvement methods, engineering characteristics, and current field application status of MICP. Additionally, it explores the limitations of the approach and proposes prospective applications of MICP for soil improvement. This review indicates that utilizing different environmental calcium-based wastes in MICP, and combining other materials with MICP, is conducive to meeting engineering and market demand. Furthermore, we recommend and encourage global collaborative study and practice with a view to commercializing the MICP technique in the future. The current review aims to provide insights for engineers and interdisciplinary researchers, and guidance for future engineering applications.
Funding: Funded by the Jilin Province Science and Technology Development Plan Project, grant number 20220203163SF.
Abstract: With the increasing integration of large-scale distributed energy resources into the grid, traditional distribution network optimization and dispatch methods struggle to address the challenges posed by both generation and load. Accounting for these issues, this paper proposes a multi-timescale coordinated optimization dispatch method for distribution networks. First, probability box theory was employed to determine the uncertainty intervals of generation and load forecasts, based on which the flexibility dispatch requirements and capacity constraints of the grid were calculated and analyzed. Subsequently, a multi-timescale optimization framework was constructed, incorporating the generation and load forecast uncertainties. This framework included optimization models for day-ahead scheduling, intra-day optimization, and real-time adjustments, aiming to meet flexibility needs across different timescales and improve the economic efficiency of the grid. Furthermore, an improved soft actor-critic (SAC) algorithm was introduced to enhance the uncertainty exploration capability. Utilizing a centralized-training, decentralized-execution framework, a multi-agent SAC network model was developed to improve the decision-making efficiency of the agents. Finally, the effectiveness and superiority of the proposed method were validated using a modified IEEE 33-bus test system.
Abstract: Response analysis of structures involving non-probabilistic uncertain parameters can be closely related to optimization. This paper provides a review of optimization-based methods for uncertainty analysis, focusing on specific properties of the adopted numerical optimization approaches. We collect and discuss methods based on nonlinear programming, semidefinite programming, mixed-integer programming, mathematical programming with complementarity constraints, difference-of-convex programming, optimization methods using surrogate models and machine learning techniques, and metaheuristics. As a closely related topic, we also review methods for assessing structural robustness using non-probabilistic uncertainty modeling. We conclude the paper with several remarks drawn from this review.
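As a minimal sketch of the anti-optimization idea underlying many of these methods, the worst-case response of a linearized model over a box (interval) uncertainty set is attained at a vertex selected by the sign of each sensitivity coefficient; the coefficients and bounds below are assumed values:

```python
import numpy as np

# Bound the linearized response r(p) = c . p over the box lo_i <= p_i <= hi_i.
# For a linear response, the extremes are attained at box vertices chosen
# coordinate-wise by the sign of each sensitivity coefficient.
c = np.array([2.0, -1.5, 0.5])     # assumed sensitivity coefficients
lo = np.array([0.9, 0.4, -0.2])    # assumed lower parameter bounds
hi = np.array([1.1, 0.6, 0.2])     # assumed upper parameter bounds

r_max = np.where(c >= 0, hi, lo) @ c   # worst case (maximum response)
r_min = np.where(c >= 0, lo, hi) @ c   # best case (minimum response)
```

For nonlinear responses this vertex rule no longer suffices, which is precisely where the nonlinear, semidefinite, and surrogate-based programs surveyed in the paper come in.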
Funding: Supported by the National Key Research and Development Program of China (2023YFD1902703) and the National Natural Science Foundation of China (Key Program) (U23A20158).
Abstract: Cropland nitrate leaching is the major nitrogen (N) loss pathway, and it contributes significantly to water pollution. However, cropland nitrate leaching estimates show great uncertainty due to variations in input datasets and estimation methods. Here, we presented a re-evaluation of Chinese cropland nitrate leaching, and identified and quantified the sources of uncertainty by integrating three cropland area datasets, three N input datasets, and three estimation methods. The results revealed that nitrate leaching from Chinese cropland averaged 6.7±0.6 Tg N yr^(-1) in 2010, ranging from 2.9 to 15.8 Tg N yr^(-1) across 27 different estimates. The primary contributor to the uncertainty was the estimation method, accounting for 45.1%, followed by the interaction of the N input dataset and the estimation method at 24.4%. The results of this study emphasize the need to adopt a robust estimation method and to improve the compatibility between the estimation method and the N input dataset to effectively reduce uncertainty. This analysis provides valuable insights for accurately estimating cropland nitrate leaching and contributes to ongoing efforts to address water pollution concerns.
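The source attribution described above can be sketched as a two-way sum-of-squares decomposition over a factorial grid of estimates; the numbers below are illustrative, not the study's data:

```python
import numpy as np

# Hypothetical grid of leaching estimates (Tg N / yr): rows = N-input datasets,
# columns = estimation methods. Values are illustrative only.
est = np.array([[4.1, 7.9, 6.2],
                [4.6, 8.8, 6.9],
                [3.8, 7.2, 5.7]])

grand = est.mean()
row_eff = est.mean(axis=1) - grand                           # dataset main effects
col_eff = est.mean(axis=0) - grand                           # method main effects
resid = est - grand - row_eff[:, None] - col_eff[None, :]    # interaction term

ss_total = ((est - grand) ** 2).sum()
share_dataset = (3 * (row_eff ** 2).sum()) / ss_total   # 3 = columns per row mean
share_method = (3 * (col_eff ** 2).sum()) / ss_total    # 3 = rows per column mean
share_interact = (resid ** 2).sum() / ss_total
```

The three shares sum to one by the orthogonality of the decomposition, giving the kind of percentage attribution quoted in the abstract.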
Abstract: To solve variable-coefficient ordinary differential equations on a bounded domain, the Lagrange interpolation method is used to approximate the exact solution of the equation, and the error between the numerical solution and the exact solution is obtained. Comparing this error with that of the difference method, it is concluded that the Lagrange interpolation method is more effective for solving variable-coefficient ordinary differential equations.
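A minimal sketch of the interpolation step, on an assumed test problem (u' + u = 0, u(0) = 1, whose exact solution is u(x) = e^(-x)), builds the Lagrange polynomial through samples of the exact solution at Chebyshev-spaced nodes and checks the pointwise error:

```python
import numpy as np

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)   # i-th Lagrange basis polynomial
        total += yi * basis
    return total

# Assumed test problem: u(x) = exp(-x) on [0, 1], sampled at 5 Chebyshev-spaced nodes.
nodes = 0.5 * (1.0 - np.cos(np.linspace(0.0, np.pi, 5)))
values = np.exp(-nodes)

approx = lagrange_eval(nodes, values, 0.37)
error = abs(approx - np.exp(-0.37))
```

With smooth solutions and well-placed nodes, the interpolation error decays much faster than the low-order truncation error of a basic difference scheme, which is the comparison the paper draws.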
Funding: Supported by the National Key Research and Development Program (MOST 2023YFA1606404 and MOST 2022YFA1602303), the National Natural Science Foundation of China (Nos. 12347106, 12147101, and 12447122), and the China Postdoctoral Science Foundation (No. 2024M760489).
Abstract: To study the uncertainty quantification of resonant states in open quantum systems, we developed a Bayesian framework by integrating a reduced basis method (RBM) emulator with the Gamow coupled-channel (GCC) approach. The RBM, constructed via eigenvector continuation and trained on both bound and resonant configurations, enables fast and accurate emulation of resonance properties across the parameter space. To identify the physical resonant states from the emulator's output, we introduce an overlap-based selection technique that effectively isolates true solutions from background artifacts. By applying this framework to the unbound nucleus ^(6)Be, we quantified the model uncertainty in the predicted complex energies. The results demonstrate relative errors of 17.48% in the real part and 8.24% in the imaginary part, while achieving a speedup of four orders of magnitude compared with the full GCC calculations. To further investigate the asymptotic behavior of the resonant-state wavefunctions within the RBM framework, we employed a Lippmann-Schwinger (L-S)-based correction scheme. This approach not only improves the consistency between eigenvalues and wavefunctions but also enables a seamless extension from real-space training data to the complex energy plane. By bridging the gap between bound-state and continuum regimes, the L-S correction significantly enhances the emulator's capability to accurately capture continuum structures in open quantum systems.
Funding: Supported by the National Natural Science Foundation of China (Nos. 52222904 and 52309117) and the China Postdoctoral Science Foundation (Nos. 2022TQ0168 and 2023M731895).
Abstract: Ocean energy has progressively gained considerable interest due to its sufficient potential to meet the world's energy demand, and the blade is the core component in electricity generation from ocean currents. However, the widened hydraulic excitation frequency may trigger blade resonance due to time variation in the velocity and angle of attack of the ocean current, potentially resulting in blade fatigue and destructively interfering with grid stability. A key parameter that determines the resonance amplitude of the blade is the hydrodynamic damping ratio (HDR). However, the HDR is difficult to obtain due to the complex fluid-structure interaction (FSI). Therefore, a literature review was conducted on the hydrodynamic damping characteristics of blade-like structures. The experimental and simulation methods used to identify and quantitatively obtain the HDR are described, placing emphasis on the experimental processes and simulation setups. Moreover, the accuracy and efficiency of different simulation methods are compared, and the modal work approach is recommended. The effects of key parameters, including flow velocity, angle of attack, gap, rotational speed, and cavitation, on the HDR are then summarized, and suggestions on operating conditions are presented from the perspective of increasing the HDR. Subsequently, considering multiple flow parameters, several theoretical derivations and semi-empirical prediction formulas for the HDR are introduced, and their accuracy and application are discussed. Based on the shortcomings of the existing research, directions for future research are finally identified. The current work offers a clear understanding of the HDR of blade-like structures, which could improve the evaluation accuracy of flow-induced vibration in the design stage.
Funding: Co-supported by the National Natural Science Foundation of China (Nos. 51875014, U2233212, and 51875015), the Natural Science Foundation of Beijing Municipality, China (No. L221008), the Science and Technology Innovation 2025 Major Project of Ningbo, China (No. 2022Z005), and the Tianmushan Laboratory Project, China (No. TK2023-B-001).
Abstract: For uncertainty quantification of complex models with high-dimensional, nonlinear, multi-component coupling, such as digital twins, traditional statistical sampling methods such as random sampling and Latin hypercube sampling require a large number of samples, which entails huge computational costs. Therefore, how to construct a small sample space has been a topic of great interest for researchers. To this end, this paper proposes a sequential search-based Latin hypercube sampling scheme to generate efficient and accurate samples for uncertainty quantification. First, the sampling range is formed by characterizing the polymorphic uncertainty based on theoretical analysis. Then, the optimal Latin hypercube design is selected using the Latin hypercube sampling method combined with the "space filling" criterion. Finally, a sample selection function is established, and the next most informative sample is optimally selected to obtain the sequential test sample. Compared with classical sampling methods, the generated samples retain more information while remaining sparse. A series of numerical experiments demonstrates the superiority of the proposed sequential search-based Latin hypercube sampling scheme, which provides reliable uncertainty quantification results with small sample sizes.
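For reference, the basic (non-sequential) Latin hypercube step that the scheme builds on places exactly one sample in each equal-probability stratum per dimension; this sketch shows only that baseline, not the sequential search:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Basic Latin hypercube on [0, 1)^d: one sample per equal-width stratum per dimension."""
    u = rng.random((n_samples, n_dims))      # jitter within each stratum
    samples = np.empty_like(u)
    for d in range(n_dims):
        perm = rng.permutation(n_samples)    # shuffle the stratum order per dimension
        samples[:, d] = (perm + u[:, d]) / n_samples
    return samples

rng = np.random.default_rng(7)
X = latin_hypercube(10, 3, rng)   # 10 samples in 3 dimensions
```

The sequential variant in the paper then scores candidate designs (e.g., by a space-filling criterion) and appends the most informative point, rather than fixing the design size up front.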
Funding: Financial support provided by the Natural Science Foundation of Hunan Province of China (Grant No. 2021JJ10045), the Open Research Subject of the State Key Laboratory of Intelligent Game (Grant No. ZBKF-24-01), the Postdoctoral Fellowship Program of CPSF (Grant No. GZB20240989), and the China Postdoctoral Science Foundation (Grant No. 2024M754304).
Abstract: Unmanned aerial vehicles (UAVs) have become crucial tools in moving-target tracking due to their agility and ability to operate in complex, dynamic environments. UAVs must meet several requirements to achieve stable tracking, including maintaining continuous target visibility amidst occlusions, ensuring flight safety, and achieving smooth trajectory planning. This paper reviews the latest advancements in UAV-based target tracking, highlighting information prediction, tracking strategies, and swarm cooperation. To address challenges including target visibility and occlusion, real-time prediction and tracking in dynamic environments, flight safety and coordination, and resource management and energy efficiency, the paper identifies future research directions aimed at improving the performance, reliability, and scalability of UAV tracking systems.
Funding: Support of this project was provided through the Southwest Regional Partnership on Carbon Sequestration (Grant No. DE-FC26-05NT42591) and Improving Production in the Emerging Paradox Oil Play (Grant No. DE-FE0031775).
Abstract: The geomechanical properties of rocks vary across measurement scales, primarily due to heterogeneity. Micro-scale geomechanical tests, including micro-scale scratch tests and nano-scale nanoindentation tests, are attractive at their respective scales. Each method requires minimal sample volume, is low cost, and offers a relatively rapid measurement turnaround time. However, recent micro-scale test results, including scratch test and nanoindentation results, exhibit tangible variance and uncertainty, suggesting a need to correlate mineral composition mapping with elastic modulus mapping to isolate the relative impact of specific minerals. Different research labs often utilize different interpretation methods, and it is clear that future micro-mechanical tests may benefit from standardized testing and interpretation procedures. The objectives of this study are to seek options for such standardized procedures through two specific aims: (1) quantify the chemical and physical controls on micro-mechanical properties, and (2) quantify the sources of uncertainty associated with nanoindentation measurements. To reach these goals, we conducted mechanical tests at three different scales: triaxial compression tests, scratch tests, and nanoindentation tests. We found that mineral phase weight percentage is highly correlated with the nanoindentation elastic modulus distribution. We conclude that nanoindentation is a mineralogy- and microstructure-based method that generally yields significant uncertainty and overestimation. The uncertainty of the testing method is largely associated with not mapping pore space a priori. Lastly, the uncertainty can be reduced by combining phase mapping and modulus mapping with substantial, random data sampling.
Abstract: The aim of this paper is to prove another variation of the Heisenberg uncertainty principle. We generalize the quantitative uncertainty relations to n different (time-frequency) domains and give an algorithm for signal recovery related to the canonical Fourier-Bessel transform.
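For orientation, the classical Heisenberg inequality for the ordinary Fourier transform (with the convention $\hat{f}(\xi)=\int_{\mathbb{R}} f(x)\,e^{-2\pi i x\xi}\,dx$), of which the paper proves a Fourier-Bessel variant, reads:

```latex
\left( \int_{\mathbb{R}} x^{2}\,\lvert f(x) \rvert^{2}\,dx \right)
\left( \int_{\mathbb{R}} \xi^{2}\,\lvert \hat{f}(\xi) \rvert^{2}\,d\xi \right)
\;\ge\; \frac{\lVert f \rVert_{2}^{4}}{16\pi^{2}},
```

with equality attained by Gaussian functions. The paper's contribution replaces the Fourier transform with the canonical Fourier-Bessel transform and extends the two-domain bound to n domains.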
Funding: Supported by the National Natural Science Foundation of China (Grant No. 12271341).
Abstract: The element-free Galerkin (EFG) method, which constructs shape functions via moving least squares (MLS) approximation, is a fundamental and widely studied meshless method in numerical computation. Although it achieves high computational accuracy, its shape functions are more complex than those in the conventional finite element method (FEM), resulting in greater computational requirements. Therefore, improving the computational efficiency of the EFG method is an important research direction. This paper systematically reviews significant contributions from domestic and international scholars in advancing the EFG method, including the improved element-free Galerkin (IEFG) method, various interpolating EFG methods, four distinct complex-variable EFG methods, and a series of dimension-splitting meshless methods. In the numerical examples, the effectiveness and efficiency of these methods are validated by analyzing the solutions of the IEFG method for 3D steady-state anisotropic heat conduction, 3D elastoplasticity, and large deformation problems, as well as the performance of two-dimensional splitting meshless methods in solving the 3D Helmholtz equation.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42005005 and 42030607), the Science and Technology Department of Shaanxi Province (Grant No. 2024JC-YBQN-0248), the Education Department of Shaanxi Province (Grant No. 23JK0686), a Xi'an Science and Technology Project (Grant No. 22GXFW0131), and the Young Talent Fund of the University Association for Science and Technology in Shaanxi (Grant No. 20210706).
Abstract: The practical predictability of hail precipitation rates is significantly influenced by initial meteorological perturbations stemming from various uncertainty sources. This study thoroughly assessed the predictability of hail precipitation rates in both climatologically perturbed and flow-dependent perturbed ensembles (CEns and FEns). These ensembles incorporated initial meteorological uncertainties derived separately from two operational ensembles. Leveraging the Weather Research and Forecasting model, we conducted cloud-resolving simulations of an idealized hailstorm. Across the entire ensemble of 50 members, the practical predictability of hail responded comparably to both climatological and flow-dependent uncertainties. However, a notable difference emerged when comparing the peak hail precipitation rates among the top 10 and bottom 10 members. From a thermodynamic perspective, the primary source of uncertainty in hail precipitation lay in the significant variations in temperature stratification, particularly at the -20℃ and -40℃ levels. On the microphysical front, perturbations within CEns generated greater uncertainty in the process of rainwater collection by hail, contributing significantly to the microphysical growth mechanisms of hail. Furthermore, the findings reveal a stronger dependency of hail precipitation uncertainty on thermodynamic perturbations than on kinematic perturbations. These insights enhance the comprehension of the practical predictability of hail and contribute significantly to the understanding of ensemble forecasting for hail events.
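The top-versus-bottom member comparison used above can be sketched with basic ensemble statistics; the gamma-distributed peak rates below are synthetic stand-ins for model output, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical peak hail precipitation rates (mm/h) for two 50-member ensembles.
cens = rng.gamma(shape=4.0, scale=2.5, size=50)   # climatological perturbations (assumed)
fens = rng.gamma(shape=4.0, scale=2.0, size=50)   # flow-dependent perturbations (assumed)

def top_bottom_gap(members, k=10):
    """Difference between the means of the top-k and bottom-k peak rates."""
    s = np.sort(members)
    return s[-k:].mean() - s[:k].mean()

spread_cens, spread_fens = cens.std(), fens.std()          # full-ensemble spread
gap_cens, gap_fens = top_bottom_gap(cens), top_bottom_gap(fens)
```

Comparable full-ensemble spreads can coexist with different top-versus-bottom gaps, which is the kind of distinction the study draws between CEns and FEns.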