Generative Artificial Intelligence (GenAI) systems have achieved remarkable capabilities across text, code, and image generation; however, their outputs remain prone to errors, hallucinations, and biases. Users often overtrust these outputs due to limited transparency, which can lead to misuse and decision errors. This study addresses the challenge of calibrating trust in GenAI through a human-centered testing framework enhanced with adaptive explainability. We introduce a methodology that adjusts explanations dynamically according to user expertise, model output confidence, and contextual risk factors, providing guidance that is informative but not overwhelming. The framework was evaluated using outputs from OpenAI's Generative Pretrained Transformer 4 (GPT-4) for text and code generation and Stable Diffusion, a deep generative image model, for image synthesis, covering text, code, and visual modalities. A dataset of 5000 GenAI outputs was created and reviewed by a diverse participant group of 360 individuals categorized by expertise level. Results show that adaptive explanations improve error detection rates, reduce the mean squared trust calibration error, and maintain efficient decision making compared with both static and no-explanation conditions. The framework increased error detection by up to 16% across expertise levels, a gain that can provide practical benefits in high-stakes fields. For example, in healthcare it may help identify diagnostic errors earlier, and in law it may prevent reliance on flawed evidence in judicial work. These improvements highlight the framework's potential to make Artificial Intelligence (AI) deployment safer and more accountable. Visual analyses, including trust accuracy plots, reliability diagrams, and misconception maps, show that the adaptive approach reduces overtrust and reveals patterns of misunderstanding across modalities. Statistical results confirm the robustness of these findings across novice, intermediate, and expert users. The study offers insights for designing explanations that balance completeness and simplicity to improve trust calibration and cognitive load. The approach has implications for safe and transparent GenAI deployment and can inform both AI interface design and policy development for responsible AI use.
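A trust calibration error of the kind measured above can be understood as the mean squared gap between a user's stated trust in an output and that output's actual correctness. The abstract does not give the exact formulation, so the definition below is an illustrative assumption, not the study's metric:

```python
def trust_calibration_error(trust, correct):
    """Mean squared gap between stated trust (in [0, 1]) and correctness (0 or 1).

    Illustrative definition only; the study's exact formulation may differ.
    """
    if len(trust) != len(correct):
        raise ValueError("trust and correct must be the same length")
    return sum((t - c) ** 2 for t, c in zip(trust, correct)) / len(trust)

# Overtrust (high trust in wrong outputs) produces a large calibration error,
# while trust that tracks correctness produces a small one.
overtrusting = trust_calibration_error([0.9, 0.8, 0.9], [0, 0, 1])
calibrated = trust_calibration_error([0.1, 0.2, 0.9], [0, 0, 1])
```

Under this definition, lowering the score means users' confidence tracks the model's actual reliability more closely, which is what the adaptive-explanation condition is reported to achieve.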
The determination of the local cooling rate has great significance for optimizing the parameters of electroslag remelting (ESR) and improving the quality of the ingots. An innovative method was proposed for calibrating the local cooling rate of M42 high-speed steel (HSS) in the ESR process. After resolidification at different cooling rates under high-temperature laser confocal microscopy, the carbide network spacing of the specimens was observed using a scanning electron microscope, and a functional relationship between the cooling rate and the average carbide network spacing was established. The average local cooling rate of the solidification process of the M42 HSS ingot was then calibrated. The results show that the higher the cooling rate, the smaller the network spacing of the carbides. For the steel ingot with a diameter of 360 mm, the average local cooling rate was 0.562 °C/s at the surface, 0.057 °C/s at the position of 0.25D (where D is the diameter of the ingot), and 0.046 °C/s at the center of the ingot.
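The abstract establishes a functional relationship between cooling rate and carbide network spacing without stating its form. A power law fitted by least squares in log-log space is one common choice for such microstructure-cooling relations; the sketch below uses that assumed form with made-up calibration data:

```python
import math

def fit_power_law(rates, spacings):
    """Fit spacing = a * rate**b by least squares in log-log coordinates.

    The power-law form and the data below are assumptions for illustration;
    the paper only states that a functional relationship was established.
    """
    xs = [math.log(r) for r in rates]
    ys = [math.log(s) for s in spacings]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical data: faster cooling -> finer carbide network, as reported.
rates = [0.05, 0.1, 0.2, 0.5]        # degC/s
spacings = [80.0, 60.0, 45.0, 30.0]  # micrometres
a, b = fit_power_law(rates, spacings)
```

Once fitted, the relation can be inverted: measuring the carbide spacing at a position in the ingot yields the local cooling rate, which is how the 0.562, 0.057, and 0.046 °C/s values are obtained in principle.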
A calibrating device for the Rogowski coil is developed, which can be used to calibrate Rogowski coils having a partial response time within tens of nanoseconds. Its key component is a step current generator, which can generate an output with a rise time of less than 2 ns and a duration of longer than 300 ns. The step current generator is composed of a pulse forming line (PFL) and a pulse transmission line (PTL). A TEM (transverse electromagnetic mode) coaxial measurement unit is used as the PTL, and the coil to be calibrated and the reference standard Rogowski coil can be fixed in the unit. The effect of the dimensions of the TEM unit is discussed theoretically as well as experimentally.
A novel and efficient method for decomposing a signal into a set of intrinsic mode functions (IMFs) and a trend is proposed. Unlike the original empirical mode decomposition (EMD), which uses spline fits to extract variations from the signal by separating the local mean from the fluctuations during decomposition, the proposed method takes advantage of the theory of variable finite impulse response (FIR) filtering, in which filter coefficients and breakpoint frequencies can be adjusted to track any peak-to-peak time scale changes. The IMFs are the results of multiple variable-frequency-response FIR filterings as the signal passes through the filters. Numerical examples validate that, in contrast with the original EMD, the proposed method can fine-tune the frequency resolution and suppress aliasing effectively.
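As a minimal illustration of FIR-based separation (a fixed-coefficient toy, not the paper's variable-coefficient filter bank), a short low-pass FIR applied by direct convolution splits a signal into a slow trend and a fast residual that plays the role of an IMF-like component:

```python
def fir_filter(signal, coeffs):
    """Direct-form FIR filtering: y[n] = sum_k coeffs[k] * x[n - k]."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * signal[n - k]
        out.append(acc)
    return out

# A 4-tap moving average as a crude low-pass; the residual carries the
# fast alternation that the filter removes.
x = [float(i % 2) for i in range(16)]  # fast 0/1 alternation
taps = [0.25] * 4
trend = fir_filter(x, taps)
fluctuation = [xi - ti for xi, ti in zip(x, trend)]
```

The proposed method replaces the fixed taps above with coefficients and breakpoint frequencies that adapt to the local peak-to-peak time scale, which is what lets it tune frequency resolution where spline-based EMD cannot.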
The measurement of the confocal volume of a confocal three-dimensional micro-x-ray fluorescence (3D-XRF) setup is a key step in the field of confocal 3D-XRF analysis. With the development of x-ray facilities and optical devices, 3D-XRF analysis with a micro confocal volume will create great potential for 2D and 3D microstructural analysis and accurate quantitative analysis. However, the classic measurement method of scanning metal foils of a certain thickness leads to inaccuracy. A method for calibrating the confocal volume is proposed in this paper. The new method builds on textbook fundamentals, and the theoretical results and feasibility are given in detail for both the mono-chromatic and the poly-chromatic x-ray conditions of 3D-XRF. We obtained experimental confirmation using a poly-chromatic x-ray tube in the laboratory. It is shown that the sensitivity factor of the 3D-XRF setup can be directly and accurately obtained in a real calibration process.
Doped elements in alloys significantly impact their performance. Conventional methods usually sputter the surface material of the sample, or their performance is limited to the surface of alloys owing to their poor penetration ability. The X-ray K-edge subtraction (KES) method exhibits great potential for the nondestructive in situ detection of element contents in alloys. However, the signal of doped elements usually deteriorates because of the strong absorption of the principal component and scattering by crystal grains, which in turn prevents the extensive application of X-ray KES imaging to alloys. In this study, methods were developed to calibrate the linearity between the grayscale of the KES image and the element content, aimed at the sensitive analysis of elements in alloys. Furthermore, experiments with phantoms and alloys demonstrated that, after elaborate calibration, X-ray KES imaging is capable of nondestructive and sensitive analysis of doped elements in alloys.
Bridges are one of the most vulnerable components of a highway transportation network subjected to earthquake ground motions. Predicting the resilience and sustainability of bridge performance in a probabilistic manner provides valuable information for pre-event system upgrading and post-event functional recovery of the network. The current study integrates bridge seismic damageability information obtained through empirical, analytical, and experimental procedures and quantifies threshold limits of bridge damage states consistent with the physical damage descriptions given in HAZUS. Experimental data from a large-scale shaking table test are utilized for this purpose; this experiment was conducted at the University of Nevada, Reno, with the participation of a research team from the University of California, Irvine. Observed experimental damage data are processed to identify and quantify bridge damage states in terms of rotational ductility at bridge column ends. In parallel, a mechanistic model for fragility curves is developed such that it can be calibrated against empirical fragility curves constructed from damage data obtained during the 1994 Northridge earthquake. This calibration quantifies threshold values of bridge damage states and makes the analytical study consistent with damage data observed in past earthquakes. The mechanistic model is transportable and applicable to most types and sizes of bridges. Finally, the calibrated damage state definitions are compared with those obtained using the experimental findings. The comparison shows excellent consistency among results from analytical, empirical, and experimental observations.
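Fragility curves of the kind calibrated here are commonly parameterized as a lognormal CDF in the ground-motion intensity. The parameterization and the median/dispersion values below are illustrative assumptions, not the study's calibrated numbers:

```python
import math

def fragility(pga, median, beta):
    """P(damage state reached or exceeded | PGA).

    Lognormal CDF with median capacity `median` (g) and log-standard
    deviation `beta` -- a common, assumed parameterization for bridge
    fragility, not the paper's specific model.
    """
    z = (math.log(pga) - math.log(median)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative damage state: median capacity 0.4 g, dispersion 0.6.
p_low = fragility(0.1, 0.4, 0.6)       # weak shaking: low probability
p_at_median = fragility(0.4, 0.4, 0.6) # at the median: 50%
p_high = fragility(1.0, 0.4, 0.6)      # strong shaking: high probability
```

Calibrating such a curve against Northridge damage data, as the study does, amounts to choosing the median and dispersion so the predicted exceedance probabilities match the observed damage frequencies.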
In this article, the extension to three dimensions (3D) of the blending technique that has been widely used in two dimensions (2D) to calibrate ocean chlorophyll is presented. The results obtained revealed a very high degree of efficiency in predicting observed values of ocean chlorophyll. The mean squared difference between the predicted and observed values of ocean chlorophyll when the 3D technique was used fell far below the tolerance level, which was set to the difference between satellite and observed in-situ values. The resulting blended field not only provided better predictions of the in-situ observations in areas where bottle samples cannot be obtained but also provided a smooth variation of the distribution of ocean chlorophyll throughout the year. An added advantage is its computational efficiency, since data that would previously have been treated at least four times are treated only once. With these results, it is believed that the modelling of the ocean life cycle will become more realistic.
It is well known that the accuracy of camera calibration is constrained by the size of the reference plate, and it is difficult to fabricate large reference plates with high precision; therefore, it is non-trivial to calibrate a camera with a large field of view (FOV). In this paper, a method is proposed to construct a virtual large reference plate with high precision. Firstly, a high-precision datum plane is constructed with a laser interferometer and a one-dimensional air guideway, and then the reference plate is positioned at different locations and orientations in the FOV of the camera. The feature points of the reference plate are projected onto the datum plane to obtain a virtual large reference plate with high precision. The camera is moved to several positions to obtain different virtual reference plates, and the camera is calibrated with these virtual reference plates. The experimental results show that the mean re-projection error of the camera calibrated with the proposed method is 0.062 pixels. The length of a scale bar with a standard length of 959.778 mm was measured with a vision system composed of two calibrated cameras, and the length measurement error was 0.389 mm.
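The mean re-projection error quoted above is the average image-plane distance between detected feature points and the same points projected through the calibrated camera model. With made-up point sets, the computation is:

```python
import math

def mean_reprojection_error(detected, projected):
    """Average Euclidean distance, in pixels, between matched 2D points."""
    dists = [math.hypot(dx - px, dy - py)
             for (dx, dy), (px, py) in zip(detected, projected)]
    return sum(dists) / len(dists)

# Hypothetical detected corners vs model re-projections (pixels).
detected = [(100.0, 200.0), (150.0, 240.0), (210.5, 300.0)]
projected = [(100.05, 200.02), (149.95, 240.08), (210.44, 300.03)]
err = mean_reprojection_error(detected, projected)
```

A sub-0.1-pixel mean, like the 0.062 pixels reported, indicates that the virtual large reference plate constrains the camera model about as tightly as a physical plate of the same extent would.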
We introduce a corrected sinusoidal-wave drag force method (SDFM) into optical tweezers to calibrate the trapping stiffness of the optical trap and the conversion factor (CF) of photodetectors. First, theoretical analysis and experimental results demonstrate that the correction of the SDFM is necessary; in particular, the error without correction is up to 11.25% for a bead of 5 μm in diameter. Second, simulation results demonstrate that the SDFM performs better in the calibration of optical tweezers than the triangular-wave drag force method (TDFM) and the power spectrum density method (PSDM) at the same signal-to-noise ratio or trapping stiffness. Third, in experiments, the standard deviations of the calibration of trapping stiffness and CF with the SDFM are less than about 50% of those of the TDFM and PSDM, especially at low laser power. Finally, DNA-stretching experiments verify that in situ calibration with the SDFM improves measurement stability and accuracy.
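In a drag-force calibration, trap stiffness follows from balancing the Stokes drag on the bead against the trap's restoring force. For a sinusoidally driven stage, the uncorrected estimate is k = γAω/d with Stokes drag γ = 6πηr, stage amplitude A, drive frequency ω, and measured bead amplitude d. The sketch below uses this textbook relation with illustrative numbers; the paper's corrected SDFM refines exactly this estimate:

```python
import math

def trap_stiffness(eta, radius, stage_amp, omega, bead_amp):
    """Uncorrected sinusoidal drag-force estimate of trap stiffness (N/m).

    k = gamma * A * omega / d, gamma = 6*pi*eta*r (Stokes drag).
    All numbers passed below are assumed, illustrative values.
    """
    gamma = 6.0 * math.pi * eta * radius
    return gamma * stage_amp * omega / bead_amp

# Water viscosity, 2.5 um bead radius, 1 um stage amplitude at 10 Hz,
# 50 nm measured bead amplitude.
k = trap_stiffness(1.0e-3, 2.5e-6, 1.0e-6, 2 * math.pi * 10, 50e-9)
```

The correction the paper introduces matters because the bead's motion lags the drive and the uncorrected quotient misattributes that lag, which is the source of the up-to-11.25% error quoted for a 5 μm bead.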
A multi-objective genetic algorithm (MOGA) is proposed to calibrate the non-linear camera model of a space manipulator to improve its locational accuracy. The algorithm optimizes the camera model by dynamically balancing its model weight and multi-parametric distributions against the required accuracy. A novel measuring instrument for the space manipulator is designed for orbital motion simulation and locational accuracy testing. The camera system of the space manipulator, calibrated by the MOGA, is subjected to a locational accuracy test on this instrument. The experimental results show that the absolute errors are [0.07, 1.75] mm for the MOGA-calibrated model, [2.88, 5.95] mm for the MN method, and [1.19, 4.83] mm for the LM method. Moreover, the composite errors of both the LM and MN methods are approximately seven times that of the MOGA-calibrated model. This suggests that the MOGA-calibrated model is superior to both the LM and MN methods.
Sensor location uncertainty in an array severely degrades the performance of eigenstructure-based direction finding systems. A new calibration method for sensor locations is presented that uses three far-field sources whose directions are not known accurately. A signal-subspace-based iteration algorithm for sensor location calibration is developed, and its convergence to the global optimal point is shown. A guideline for selecting the directions of the calibrating sources is given. Simulation results illustrate that the new method is successful and practicable.
The conductance catheter technique allows real-time measurement of ventricular volume based on changes in the electrical conductance of blood within the ventricular cavity. Conductance volume measurements are corrected with a calibration coefficient, α, in order to improve accuracy. However, conductance volume measurements are also affected by parallel conductance, which may confound calibration coefficient estimation. This study was undertaken to examine the variation in α using a physical model of the left ventricle without parallel conductance. Calibration coefficients were calculated as the conductance-volume quotient (αV(t)) or the stroke conductance-stroke volume quotient (αSV). Both calibration coefficients varied as a non-linear function of the ventricular volume. Conductance volume measurements calibrated with αV(t) estimated ventricular volume to within 2.0 ± 6.9%. By contrast, calibration with αSV substantially overestimated the ventricular volume in a volume-dependent manner, increasing from 26 ± 20% at 100 ml to 106 ± 36% at 500 ml. The accuracy of conductance volume measurements is therefore affected by the choice of calibration coefficient: using a fixed or constant calibration coefficient will result in volume measurement errors, and the conductance-stroke volume quotient in particular is associated with significant, volume-dependent errors. For this reason, conductance volume measurements should ideally be calibrated with an alternative measurement of ventricular volume.
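The two calibration coefficients compared above are simple quotients of conductance and volume. With made-up traces over one cardiac cycle (the function names and numbers are hypothetical, chosen only to show the two definitions side by side):

```python
def alpha_instantaneous(conductance, volume):
    """Conductance-volume quotient alpha_V(t), averaged over the cycle."""
    return sum(g / v for g, v in zip(conductance, volume)) / len(volume)

def alpha_stroke(conductance, volume):
    """Stroke-conductance / stroke-volume quotient alpha_SV."""
    return (max(conductance) - min(conductance)) / (max(volume) - min(volume))

# Hypothetical traces over one cycle: conductance (arbitrary units), volume (ml).
g = [10.0, 12.0, 14.0, 12.0]
v = [100.0, 120.0, 140.0, 120.0]
a_v = alpha_instantaneous(g, v)
a_sv = alpha_stroke(g, v)
```

In this idealized example the two quotients agree because conductance is exactly proportional to volume; the study's point is that in a real ventricle the relationship is non-linear, so αSV drifts with operating volume while αV(t) tracks it more closely.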
The fidelity of financial market simulation is restricted by the so-called "non-identifiability" difficulty when calibrating against high-frequency data. This paper first analyzes the inherent loss of data information underlying this difficulty and proposes using the Kolmogorov-Smirnov (K-S) test statistic as the objective function for high-frequency calibration. Empirical studies verify that K-S offers better identifiability when calibrating high-frequency data, but it also leads to a much harder multi-modal landscape in the calibration space. To this end, we propose an adaptive stochastic ranking based negatively correlated search algorithm to improve the balance between exploration and exploitation. Experimental results on both simulated data and real market data demonstrate that the proposed method can obtain up to a 36.0% improvement on high-frequency data calibration problems over the compared methods.
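The K-S objective proposed above is the maximum gap between the empirical CDFs of the simulated and real samples. A minimal two-sample statistic (a brute-force version of what `scipy.stats.ks_2samp` computes) looks like:

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: sup_x |F_a(x) - F_b(x)|."""
    a, b = sorted(sample_a), sorted(sample_b)
    points = sorted(set(a + b))
    d = 0.0
    for x in points:
        fa = sum(1 for v in a if v <= x) / len(a)
        fb = sum(1 for v in b if v <= x) / len(b)
        d = max(d, abs(fa - fb))
    return d

identical = ks_statistic([1, 2, 3, 4], [1, 2, 3, 4])   # distributions match
shifted = ks_statistic([1, 2, 3, 4], [11, 12, 13, 14]) # fully separated
```

Minimizing this statistic over simulator parameters compares whole distributions rather than a few summary moments, which is the source of the improved identifiability the paper reports, and also of the rougher, multi-modal objective landscape.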
Monitoring multiplexed biochemical markers is beneficial for the comprehensive evaluation of diabetes-associated complications. Techniques for multiplexed analyses in interstitial fluids have often been restricted by the difficulty electrode materials have in accurately detecting chemicals in complex subcutaneous spaces. In particular, the signal stability of enzyme-based sensing electrodes inevitably decreases due to enzyme degradation or interference in vivo. In this study, we developed a self-calibrating multiplexed microneedle (MN) electrode array (SC-MMNEA) capable of continuous, real-time monitoring of multiple types of bioanalytes (glucose, cholesterol, uric acid, lactate, reactive oxygen species [ROS], Na+, K+, Ca2+, and pH) in the subcutaneous space. Each type of analyte was detected by a discrete MN electrode assembled in an integrated array with single-MN resolution. Moreover, the device utilizes an MN-delivery-mediated self-calibration technique to address the inherent problem of decreased accuracy of implantable electrodes caused by long-term tissue variation and enzyme degradation, which may increase the reliability of the MN sensors. Our results indicate that the SC-MMNEA can provide real-time monitoring of multiplexed analyte concentrations in a rat model with good accuracy, especially after self-calibration. The SC-MMNEA offers in situ, minimally invasive monitoring of physiological states and has the potential to advance wearable devices for long-term monitoring of chemical species in vivo.
LiDAR and cameras are two of the most common sensors in robot perception, autonomous driving, augmented reality, and virtual reality, where they are widely used to perform tasks such as odometry estimation and 3D reconstruction. Fusing the information from these two sensors can significantly increase the robustness and accuracy of such perception tasks, and extrinsic calibration between cameras and LiDAR is a fundamental prerequisite for multimodal systems. Recently, extensive studies have been conducted on the calibration of extrinsic parameters; however, although several calibration methods facilitate sensor fusion, a comprehensive summary for researchers and, especially, non-expert users is lacking. Thus, we present an overview of extrinsic calibration and discuss diverse calibration methods from the perspective of calibration system design. Based on the calibration information sources, this study classifies the methods as target-based or targetless. For each type, further classification is performed according to the types of features or constraints used in the calibration process, and detailed implementations and key characteristics are introduced. Thereafter, calibration-accuracy evaluation methods are presented. Finally, we comprehensively compare the advantages and disadvantages of each calibration method and suggest directions for practical applications and future research.
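The extrinsic parameters discussed above are simply a rigid transform (rotation R and translation t) taking LiDAR-frame points into the camera frame. Applying it, with an illustrative 90-degree yaw and a small lateral offset, looks like:

```python
import math

def apply_extrinsic(point, rotation, translation):
    """Map a LiDAR-frame 3D point into the camera frame: p_c = R @ p_l + t."""
    return tuple(
        sum(rotation[i][j] * point[j] for j in range(3)) + translation[i]
        for i in range(3)
    )

# Illustrative extrinsics: 90-degree rotation about z, 0.1 m offset along x.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
R = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
t = [0.1, 0.0, 0.0]
p_cam = apply_extrinsic((1.0, 0.0, 0.0), R, t)
```

Every method surveyed, target-based or targetless, ultimately estimates this R and t; they differ only in which features or constraints (checkerboard corners, edges, mutual information, motion) supply the equations.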
Eucalyptus (Eucalyptus camaldulensis Dehnh.) is an important exotic species in northern Nigeria commonly used for poles and timber. Sustainable management of this resource requires quantifying its volume, and stem taper equations are among the most efficient methods for estimating stem volume to any merchantable limit. There is currently no taper equation for Eucalyptus species in Nigeria; therefore, this study developed taper equations for E. camaldulensis in northern Nigeria. Data were obtained from a private plantation in Jalingo Local Government Area, Taraba State, Nigeria. Sixty-eight trees were felled and sectioned into 1-m bolts along the stem to a merchantable limit of 5 cm, forming the fitting dataset, and an additional 22 trees were felled to validate the taper equations for stem volume estimation. Seven taper equations were initially fitted to the dataset using nonlinear least squares. The best taper equation was then refitted using a nonlinear mixed-effects approach and calibrated using the diameters of one to five sections from the butt end. The taper equations were numerically integrated to obtain the stem volume, which was compared with empirical volume equations. The results show that the Kozak (Can J For Res 27(5):619-629, 1997) equation, which includes eight parameters, provided the best fit for predicting section diameters both under and over bark. The mixed-effects taper equation (NLME-TE) explained most stem diameter variation in the fitting dataset (pseudo-R2: 0.986-0.987; RMSE: 0.547-0.578 cm) without substantial residual trends. The validation showed that the prediction accuracy of the integrated NLME-TE improved as the number of sectional diameter measurements increased, with at least a 35% reduction in volume estimate error. For practical implementation, two calibration sectional diameter measurements taken from the butt end per tree are recommended. This approach would reduce measurement effort and cost while improving model performance.
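Numerically integrating a taper equation to obtain stem volume, as done above, can be sketched with a simple assumed taper form and the trapezoidal rule (the eight-parameter Kozak 1997 equation itself is not reproduced here):

```python
import math

def stem_volume(taper_diameter_cm, total_height_m, steps=200):
    """Integrate cross-sectional area along the stem: V = int pi/4 * d(h)^2 dh.

    `taper_diameter_cm` gives diameter (cm) at height h (m); trapezoidal rule.
    """
    dh = total_height_m / steps
    vol = 0.0
    for i in range(steps):
        h0, h1 = i * dh, (i + 1) * dh
        a0 = math.pi / 4 * (taper_diameter_cm(h0) / 100.0) ** 2  # m^2
        a1 = math.pi / 4 * (taper_diameter_cm(h1) / 100.0) ** 2
        vol += 0.5 * (a0 + a1) * dh
    return vol  # m^3

# Illustrative cone-shaped taper: 30 cm at the butt, tapering to 0 at 20 m.
cone = lambda h: 30.0 * (1.0 - h / 20.0)
v = stem_volume(cone, 20.0)
```

Replacing the toy cone with the fitted (and, per the recommendation above, butt-calibrated) NLME-TE diameter function yields the merchantable volume estimates the study validates against empirical volume equations.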
In data communication, limited communication resources often lead to measurement bias, which adversely affects subsequent system estimation if not effectively handled. This paper proposes a novel bias calibration algorithm under communication constraints to obtain accurate estimates of the states of the system of interest. An output-based event-triggered scheme is first employed to alleviate the transmission burden. Accounting for the measurement bias induced by limited communication, a bias calibration algorithm along the Kalman filtering line is developed to restrain the effect of the measurement bias on system estimation, thereby achieving accurate state estimates. Subsequently, a Field Programmable Gate Array (FPGA) implementation of the proposed algorithm is realized with the aim of providing fast bias calibration in practical scenarios. Simulations of a numerical example and a practical example (gyroscope angular velocity bias calibration) in MATLAB demonstrate the feasibility and effectiveness of the proposed algorithm.
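A bias calibration along the Kalman filtering line can be illustrated with the simplest possible case: a scalar filter whose state is a constant sensor bias, valid when the true signal is known to be zero, as for a gyroscope at rest. This is a toy stand-in for the paper's event-triggered algorithm, and all numbers are synthetic:

```python
def calibrate_bias(measurements, meas_var=0.01, prior_var=1.0):
    """Estimate a constant sensor bias from stationary measurements.

    Scalar Kalman filter with state = bias and model z = bias + noise.
    Toy illustration only; the paper's algorithm additionally handles
    event-triggered, communication-constrained transmission.
    """
    b, p = 0.0, prior_var
    for z in measurements:
        k = p / (p + meas_var)  # Kalman gain
        b = b + k * (z - b)     # measurement update
        p = (1.0 - k) * p       # covariance update
    return b

# Stationary gyroscope readings with a 0.05 rad/s bias (synthetic noise).
readings = [0.05 + d for d in (-0.01, 0.02, 0.0, -0.005, 0.01, -0.015)]
bias_hat = calibrate_bias(readings)
```

Once estimated, the bias is subtracted from subsequent measurements before they enter the state estimator, which is the sense in which calibration "restrains the effect of the measurement bias" on downstream estimation.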
We present the preparation and measurement of the radioactive isotope ^(37)Ar, produced using thermal neutrons from a reactor, as a calibration source for liquid xenon time projection chambers. ^(37)Ar is a low-energy calibration source with a half-life of 35.01 days, making it suitable for calibration in the low-energy region of liquid xenon dark-matter experiments. The isotope was produced by irradiating ^(36)Ar with thermal neutrons and was subsequently measured in a gaseous xenon time projection chamber (GXe TPC) to validate its radioactivity. Our results demonstrate that ^(37)Ar is an effective and viable calibration source that offers precise calibration capabilities in the low-energy domain of xenon-based detectors.
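With the 35.01-day half-life quoted above, the remaining activity of a ^(37)Ar calibration source follows the standard exponential decay law, which sets how often such a source must be re-prepared:

```python
import math

HALF_LIFE_DAYS = 35.01  # 37Ar half-life quoted in the text

def remaining_fraction(days):
    """Fraction of the initial 37Ar activity left after `days` days."""
    return math.exp(-math.log(2.0) * days / HALF_LIFE_DAYS)

after_one_half_life = remaining_fraction(35.01)  # ~0.5
after_ten_weeks = remaining_fraction(70.02)      # two half-lives, ~0.25
```

A half-life of about five weeks is long enough for a full calibration campaign yet short enough that residual activity decays away, one reason the isotope suits low-background xenon detectors.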
Microelectromechanical systems (MEMS) technology has gained significant attention over the past decade for measuring inertial angular velocity. However, due to their inherent complexity, MEMS gyroscopes typically feature up to ten times more parameters than traditional sensors, making selection a challenging task even for experts. This study addresses that challenge, focusing on defensive guidance, navigation, and control (GNC) systems, where precise and reliable angular velocity measurement is critical to overall performance. A comprehensive mathematical model is introduced to encapsulate all key MEMS parameters, accompanied by discussions of calibration and Allan variance interpretation. For six leading MEMS gyroscope applications, namely inertial navigation, integrated navigation, autopilot systems, rotating projectiles, homing guidance, and north finding, the most critical parameters are identified, distinguishing suitable from unsuitable sensor choices. Special emphasis is placed on inertial navigation systems, where practical rules of thumb for error evaluation are derived using six-degrees-of-freedom motion equations. Rigorous simulations demonstrate the influence of various sensor parameters through real-world case studies, including static navigation, multi-rotor attitude estimation, gimbal stabilization, and north finding with a turntable. This work aims to be a beacon for practitioners across diverse fields, empowering them to make more informed design decisions.
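The Allan variance interpretation mentioned above rests on a simple computation: for an averaging time of m samples, it is half the mean squared difference of successive cluster averages. A minimal non-overlapping version (the full analysis sweeps m and reads noise terms off the resulting log-log curve) is:

```python
def allan_variance(samples, m):
    """Non-overlapping Allan variance for cluster size m.

    sigma^2(tau) = 0.5 * mean((ybar[k+1] - ybar[k])^2) over cluster
    averages ybar. Minimal textbook form, not a full Allan deviation tool.
    """
    n_clusters = len(samples) // m
    means = [sum(samples[k * m:(k + 1) * m]) / m for k in range(n_clusters)]
    diffs = [(means[k + 1] - means[k]) ** 2 for k in range(n_clusters - 1)]
    return 0.5 * sum(diffs) / len(diffs)

# A constant-rate record has (near-)zero Allan variance at any cluster size;
# alternating noise shows up strongly at cluster size 1.
flat = allan_variance([0.2] * 64, 4)
noisy = allan_variance([0.2 + (-1) ** i * 0.01 for i in range(64)], 1)
```

Sweeping the cluster size and plotting the square root against averaging time is what lets practitioners read angle random walk and bias instability, the parameters the study flags as decisive for inertial navigation, directly from stationary gyroscope data.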
Funding: supported by the National Natural Science Foundation of China (60472021).
Abstract: A novel and efficient method for decomposing a signal into a set of intrinsic mode functions (IMFs) and a trend is proposed. Unlike the original empirical mode decomposition (EMD), which uses spline fits to separate the local mean from the fluctuations during decomposition, the proposed method exploits the theory of variable finite impulse response (FIR) filtering, in which filter coefficients and breakpoint frequencies can be adjusted to track any peak-to-peak time-scale changes. The IMFs are the results of passing the signal through multiple variable-frequency-response FIR filters. Numerical examples validate that, in contrast with the original EMD, the proposed method can fine-tune the frequency resolution and suppress aliasing effectively.
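A single decomposition step of this kind, splitting a signal into a fast component and a slow component with a FIR filter, can be sketched as below. This is a fixed windowed-sinc filter, not the paper's adaptive variable-FIR design; the signal and cutoff are hypothetical.

```python
import numpy as np

def lowpass_fir(cutoff, fs, numtaps=101):
    """Windowed-sinc low-pass FIR (Hamming window); cutoff in Hz."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = np.sinc(2 * cutoff / fs * n)
    h *= np.hamming(numtaps)
    return h / h.sum()   # unity DC gain

def split_by_filter(signal, cutoff, fs):
    """One decomposition step: slow (low-pass) part and fast remainder."""
    slow = np.convolve(signal, lowpass_fir(cutoff, fs), mode="same")
    return signal - slow, slow

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 50 * t)
fast, slow = split_by_filter(x, cutoff=20.0, fs=fs)
```

In the adaptive scheme the cutoff (breakpoint frequency) would be retuned per component to track time-scale changes; here it is fixed for clarity.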
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 11675019 and 11875087).
Abstract: Measuring the confocal volume of a confocal three-dimensional micro-x-ray fluorescence (3D-XRF) setup is a key step in confocal 3D-XRF analysis. With the development of x-ray facilities and optical devices, 3D-XRF analysis with a micro confocal volume holds great potential for 2D and 3D microstructural analysis and accurate quantitative analysis. However, the classic measurement method of scanning metal foils of a certain thickness leads to inaccuracy. A method for calibrating the confocal volume is proposed in this paper. The new method builds on textbook fundamentals, and the theoretical results and feasibility are given in detail for both the monochromatic and polychromatic x-ray conditions in 3D-XRF. We obtained experimental confirmation using a polychromatic x-ray tube in the laboratory. It is proved that the sensitivity factor of 3D-XRF can be directly and accurately obtained in a real calibration process.
Funding: supported by the National Key Research and Development Program of China (Nos. 2017YFA0403801, 2017YFA0206004, and 2018YFC1200204) and the National Natural Science Foundation of China (NSFC) (Nos. 81430087, 11775297, and U1932205).
Abstract: Doped elements in alloys significantly impact their performance. Conventional methods usually sputter the surface material of the sample, or their performance is limited to the alloy surface owing to poor penetration ability. The X-ray K-edge subtraction (KES) method exhibits great potential for the nondestructive in situ detection of element contents in alloys. However, the signal of doped elements usually deteriorates because of the strong absorption of the principal component and scattering from crystal grains, which prevents the extensive application of X-ray KES imaging to alloys. In this study, methods were developed to calibrate the linearity between the grayscale of the KES image and the element content, aiming at sensitive analysis of elements in alloys. Furthermore, experiments with phantoms and alloys demonstrated that, after elaborate calibration, X-ray KES imaging is capable of nondestructive and sensitive analysis of doped elements in alloys.
Funding: Supported by the Multidisciplinary Center for Earthquake Engineering Research, Contract No. R271883.
Abstract: Bridges are among the most vulnerable components of a highway transportation network subjected to earthquake ground motions. Predicting the resilience and sustainability of bridge performance in a probabilistic manner provides valuable information for pre-event system upgrading and post-event functional recovery of the network. The current study integrates bridge seismic damageability information obtained through empirical, analytical, and experimental procedures and quantifies threshold limits of bridge damage states consistent with the physical damage descriptions given in HAZUS. Experimental data from a large-scale shaking table test, conducted at the University of Nevada, Reno with the participation of a research team from the University of California, Irvine, are utilized for this purpose. Observed experimental damage data are processed to identify and quantify bridge damage states in terms of rotational ductility at bridge column ends. In parallel, a mechanistic model for fragility curves is developed in such a way that it can be calibrated against empirical fragility curves constructed from damage data obtained during the 1994 Northridge earthquake. This calibration quantifies threshold values of bridge damage states and makes the analytical study consistent with damage observed in past earthquakes. The mechanistic model is transportable and applicable to most types and sizes of bridges. Finally, the calibrated damage state definitions are compared with those obtained from the experimental findings; the comparison shows excellent consistency among analytical, empirical, and experimental observations.
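A fragility curve of the kind calibrated above is commonly expressed as a two-parameter lognormal CDF giving the probability of reaching a damage state at a given ground-motion intensity. The sketch below uses that common HAZUS-style form with hypothetical parameters; it is not the paper's specific mechanistic model.

```python
import math

def fragility(pga, median, beta):
    """Lognormal fragility: P(damage state reached | PGA).
    median: PGA (g) at 50% exceedance probability; beta: lognormal dispersion."""
    # Standard normal CDF evaluated via the error function
    z = math.log(pga / median) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical parameters for a "moderate" damage state
p_at_median = fragility(0.4, median=0.4, beta=0.6)   # 0.5 by construction
p_low = fragility(0.1, median=0.4, beta=0.6)
p_high = fragility(1.0, median=0.4, beta=0.6)
```

Calibration in this setting amounts to choosing the median and dispersion per damage state so the curve matches observed (empirical or experimental) exceedance frequencies.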
Abstract: In this article, the extension to three dimensions (3D) of the blending technique that has been widely used in two dimensions (2D) to calibrate ocean chlorophyll is presented. The results revealed a very high degree of efficiency in predicting observed values of ocean chlorophyll: the mean squared difference between the predicted and observed values fell far below the tolerance level, which was set to the difference between satellite and observed in situ values. The resulting blended field not only provided better predictions of the in situ observations in areas where bottle samples cannot be obtained but also provided a smooth variation of the distribution of ocean chlorophyll throughout the year. An added advantage is its computational efficiency, since data that would previously have been treated at least four times are treated only once. With these results, it is believed that modelling of the ocean life cycle will become more realistic.
Abstract: It is well known that the accuracy of camera calibration is constrained by the size of the reference plate, and large reference plates are difficult to fabricate with high precision; calibrating a camera with a large field of view (FOV) is therefore non-trivial. In this paper, a method is proposed to construct a high-precision virtual large reference plate. First, a high-precision datum plane is constructed with a laser interferometer and a one-dimensional air guideway; the reference plate is then positioned at different locations and orientations in the camera's FOV. The feature points of the reference plate are projected onto the datum plane to obtain a high-precision virtual large reference plate. The camera is moved to several positions to obtain different virtual reference plates, with which the camera is calibrated. The experimental results show that the mean re-projection error of the camera calibrated with the proposed method is 0.062 pixels. The length of a scale bar with a standard length of 959.778 mm was measured with a vision system composed of two calibrated cameras, and the length measurement error was 0.389 mm.
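The mean re-projection error quoted above is the average pixel distance between where the calibrated model projects each feature point and where it was detected. A minimal sketch, with hypothetical point coordinates:

```python
import numpy as np

def mean_reprojection_error(projected, detected):
    """Mean Euclidean distance (pixels) between model-projected feature
    points and their detected image locations."""
    projected = np.asarray(projected, dtype=float)
    detected = np.asarray(detected, dtype=float)
    return float(np.mean(np.linalg.norm(projected - detected, axis=1)))

# Hypothetical data: detections offset from projections by a constant
# (0.03, 0.04) px vector, i.e. 0.05 px per point
proj = np.array([[10.0, 10.0], [20.0, 20.0], [30.0, 30.0]])
det = proj + np.array([0.03, 0.04])
err = mean_reprojection_error(proj, det)
```

In a full pipeline `proj` would come from applying the estimated intrinsics and extrinsics to the known 3D feature points of the (virtual) reference plate.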
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 11302220, 11374292, and 31100555) and the National Basic Research Program of China (Grant No. 2011CB910402).
Abstract: We introduce a corrected sinusoidal-wave drag force method (SDFM) into optical tweezers to calibrate the trapping stiffness of the optical trap and the conversion factor (CF) of photodetectors. First, theoretical analysis and experimental results demonstrate that the correction of the SDFM is necessary; without it, the error reaches 11.25% for a bead 5 μm in diameter. Second, simulation results demonstrate that the SDFM outperforms the triangular-wave drag force method (TDFM) and the power spectral density method (PSDM) in calibrating optical tweezers at the same signal-to-noise ratio or trapping stiffness. Third, in experiments, the standard deviations of the calibrated trapping stiffness and CF obtained with the SDFM are less than about 50% of those of the TDFM and PSDM, especially at low laser power. Finally, DNA-stretching experiments verify that in situ calibration with the SDFM improves measurement stability and accuracy.
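The uncorrected core of a drag-force calibration can be sketched as follows: a sinusoidally driven stage exerts a Stokes drag on the bead, and the ratio of drive to bead response amplitudes yields the stiffness. This is the simplified textbook relation (assuming stiffness dominates drag, and omitting the paper's correction and the CF calibration); all numbers are hypothetical.

```python
import math

def stokes_drag(viscosity, radius):
    """Stokes drag coefficient gamma = 6*pi*eta*r for a sphere in fluid."""
    return 6.0 * math.pi * viscosity * radius

def stiffness_from_sine_drive(viscosity, radius, freq_hz, stage_amp, bead_amp):
    """Trap stiffness k from a sinusoidal drag force, assuming k >> gamma*omega,
    so the bead response amplitude is A_b ~= gamma * omega * A_s / k."""
    gamma = stokes_drag(viscosity, radius)
    omega = 2.0 * math.pi * freq_hz
    return gamma * omega * stage_amp / bead_amp

# Hypothetical setup: water (1e-3 Pa s), 1 um bead radius, 10 Hz drive,
# 5 um stage amplitude, 0.1 um measured bead amplitude
gamma = stokes_drag(1e-3, 1e-6)
k = stiffness_from_sine_drive(1e-3, 1e-6, 10.0, 5e-6, 1e-7)   # N/m
```

The cited correction matters precisely where this small-response approximation and the ideal Stokes drag break down, e.g. for larger beads.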
Funding: Project J132012C001 supported by the Technological Foundation of China; Project 2011YQ04013606 supported by the National Major Scientific Instrument and Equipment Development Projects, China.
Abstract: A multi-objective genetic algorithm (MOGA) is proposed to calibrate the non-linear camera model of a space manipulator and improve its locational accuracy. The algorithm optimizes the camera model by dynamically balancing its model weight and multi-parametric distributions to the required accuracy. A novel measuring instrument for the space manipulator is designed for simulated orbital motion and locational accuracy testing, in which the MOGA-calibrated camera system is evaluated. The experimental results show absolute errors of [0.07, 1.75] mm for the MOGA-calibrated model, [2.88, 5.95] mm for the MN method, and [1.19, 4.83] mm for the LM method. Moreover, the composite errors of both the LM and MN methods are approximately seven times that of the MOGA-calibrated model, suggesting that the MOGA-calibrated model is superior to both.
Abstract: Sensor location uncertainty severely degrades the performance of eigenstructure-based direction-finding systems. A new calibration method for sensor locations is presented that uses three far-field sources whose directions are not known accurately. A signal-subspace-based iteration algorithm for sensor location calibration is developed, and its convergence to the global optimum is shown. A guideline for selecting the directions of the calibrating sources is given. Simulation results illustrate that the new method is successful and practicable.
Abstract: The conductance catheter technique allows real-time measurement of ventricular volume based on changes in the electrical conductance of blood within the ventricular cavity. Conductance volume measurements are corrected with a calibration coefficient, α, to improve accuracy. However, they are also affected by parallel conductance, which may confound calibration coefficient estimation. This study was undertaken to examine the variation in α using a physical model of the left ventricle without parallel conductance. Calibration coefficients were calculated as the conductance-volume quotient (αV(t)) or the stroke conductance-stroke volume quotient (αSV). Both calibration coefficients varied as a non-linear function of the ventricular volume. Conductance volume measurements calibrated with αV(t) estimated ventricular volume to within 2.0 ± 6.9%. By contrast, calibration with αSV substantially over-estimated the ventricular volume in a volume-dependent manner, increasing from 26 ± 20% at 100 ml to 106 ± 36% at 500 ml. The accuracy of conductance volume measurements is therefore affected by the choice of calibration coefficient: a fixed or constant coefficient will produce volume measurement errors, and the conductance-stroke volume quotient is associated with particularly significant, volume-dependent errors. For this reason, conductance volume measurements should ideally be calibrated against an alternative measurement of ventricular volume.
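The two calibration coefficients compared above can be sketched directly from their definitions. The conductance-volume relation below is synthetic (a mild quadratic, chosen only so the two quotients visibly disagree); the paper's physical-model data would differ.

```python
import numpy as np

def alpha_vt(conductance, volume):
    """Instantaneous coefficient: conductance-volume quotient at each sample."""
    return np.asarray(conductance) / np.asarray(volume)

def alpha_sv(conductance, volume):
    """Stroke-based coefficient: stroke conductance / stroke volume."""
    g = np.asarray(conductance)
    v = np.asarray(volume)
    return (g.max() - g.min()) / (v.max() - v.min())

# Hypothetical samples: volume in ml, conductance in mS, slightly non-linear
volume = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
conductance = 0.01 * volume + 1e-6 * volume ** 2
a_vt = alpha_vt(conductance, volume)
a_sv = alpha_sv(conductance, volume)
```

Because the conductance-volume relation is non-linear, `a_vt` varies with volume while `a_sv` collapses the whole excursion into one number, which is exactly the source of the volume-dependent error the study reports.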
Funding: supported by the National Natural Science Foundation of China (Nos. 62272210, 62250710682, and 62331014).
Abstract: The fidelity of financial market simulation is restricted by the so-called "non-identifiability" difficulty when calibrating against high-frequency data. This paper first analyzes the inherent loss of data information underlying this difficulty and proposes the Kolmogorov-Smirnov (K-S) test as the objective function for high-frequency calibration. Empirical studies verify that the K-S objective provides better identifiability when calibrating high-frequency data, but it also leads to a much harder multi-modal landscape in the calibration space. To this end, we propose an adaptive stochastic-ranking-based negatively correlated search algorithm that improves the balance between exploration and exploitation. Experimental results on both simulated and real market data demonstrate that the proposed method achieves up to 36.0% improvement on high-frequency calibration problems over the compared methods.
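Using the K-S statistic as a calibration objective can be sketched as below (the search algorithm itself is omitted). The "market" and "simulator" samples are synthetic normal draws standing in for return distributions; they are illustrative only.

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between the
    empirical CDFs of samples a and b. As a calibration objective, lower
    means the simulated distribution better matches the observed one."""
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(0)
market = rng.normal(0.0, 1.0, 2000)     # stand-in for observed returns
good = rng.normal(0.0, 1.0, 2000)       # well-calibrated simulator output
bad = rng.normal(0.5, 2.0, 2000)        # mis-calibrated simulator output
```

A calibration loop would adjust simulator parameters to minimize `ks_statistic(simulated, market)`; the multi-modality of that landscape is what motivates the paper's search algorithm.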
Funding: support from the National Natural Science Foundation of China (grant nos. T2225010, 32171399, and 32171456); Guangdong Basic and Applied Basic Research Foundation (grant no. 2023A1515011267); Science and Technology Program of Guangzhou, China (grant nos. 2024B03J0121 and 2024B03J1284); the Independent Fund of the State Key Laboratory of Optoelectronic Materials and Technologies (Sun Yat-sen University) under grant no.
Abstract: Monitoring multiplexed biochemical markers is beneficial for the comprehensive evaluation of diabetes-associated complications. Techniques for multiplexed analyses in interstitial fluids have often been restricted by the difficulty of accurately detecting chemicals in complex subcutaneous spaces with available electrode materials. In particular, the signal stability of enzyme-based sensing electrodes inevitably decreases due to enzyme degradation or interference in vivo. In this study, we developed a self-calibrating multiplexed microneedle (MN) electrode array (SC-MMNEA) capable of continuous, real-time monitoring of multiple types of bioanalytes (glucose, cholesterol, uric acid, lactate, reactive oxygen species [ROS], Na+, K+, Ca2+, and pH) in the subcutaneous space. Each analyte was detected by a discrete MN electrode assembled in an integrated array with single-MN resolution. Moreover, the device uses an MN-delivery-mediated self-calibration technique to address the decreased accuracy of implantable electrodes caused by long-term tissue variation and enzyme degradation, which may increase the reliability of the MN sensors. Our results indicate that the SC-MMNEA can provide real-time monitoring of multiplexed analyte concentrations in a rat model with good accuracy, especially after self-calibration. The SC-MMNEA offers in situ, minimally invasive monitoring of physiological states and has the potential to advance wearable devices for long-term monitoring of chemical species in vivo.
Funding: Supported by the Beijing Natural Science Foundation (Grant No. L241012) and the National Natural Science Foundation of China (Grant No. 62572468).
Abstract: LiDAR and cameras are two of the most common sensors in robot perception, autonomous driving, augmented reality, and virtual reality, where they are widely used for tasks such as odometry estimation and 3D reconstruction. Fusing the information from these two sensors can significantly increase the robustness and accuracy of such perception tasks, and extrinsic calibration between cameras and LiDAR is a fundamental prerequisite for multimodal systems. Although extensive studies have been conducted on the calibration of extrinsic parameters and several methods facilitate sensor fusion, a comprehensive summary for researchers and, especially, non-expert users is lacking. We therefore present an overview of extrinsic calibration and discuss diverse calibration methods from the perspective of calibration system design. Based on the calibration information source, these methods are classified as target-based or targetless; each type is further classified according to the features or constraints used in the calibration process, with detailed implementations and key characteristics introduced. Calibration-accuracy evaluation methods are then presented. Finally, we comprehensively compare the advantages and disadvantages of each calibration method and suggest directions for practical applications and future research.
Abstract: Eucalyptus (Eucalyptus camaldulensis Dehnh.) is an important exotic species in northern Nigeria, commonly used for poles and timber. Sustainable management of this resource requires quantifying its volume, and stem taper equations are among the most efficient methods for estimating stem volume to any merchantable limit. As no taper equation currently exists for Eucalyptus species in Nigeria, this study developed taper equations for E. camaldulensis in northern Nigeria. Data were obtained from a private plantation in Jalingo Local Government Area, Taraba State, Nigeria: 68 trees were felled and sectioned into 1-m bolts along the stem to a merchantable limit of 5 cm, forming the fitting dataset, and an additional 22 trees were felled to validate the taper equations for stem volume estimation. Seven taper equations were initially fitted to the dataset using nonlinear least squares. The best equation was then refitted using a nonlinear mixed-effects approach and calibrated using diameters of one to five sections from the butt end. The taper equations were numerically integrated to obtain stem volume, which was compared with empirical volume equations. The results show that the Kozak equation (Can J For Res 27(5):619-629, 1997; doi:10.1139/x97-011), with eight parameters, provided the best fit for predicting section diameters under and over bark. The mixed-effects taper equation (NLME-TE) explained most stem diameter variation in the fitting dataset (pseudo-R²: 0.986-0.987; RMSE: 0.547-0.578 cm) without substantial residual trends. Validation showed that the prediction accuracy of the integrated NLME-TE improved as the number of sectional diameter measurements increased, with at least a 35% reduction in volume estimation error. For practical implementation, two calibration sectional diameter measurements taken from the butt end of each tree are recommended; this approach reduces measurement effort and cost while improving model performance.
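The "numerically integrated to obtain stem volume" step can be sketched as integrating the cross-sectional area implied by a taper function along the stem. For brevity this uses a hypothetical linear taper rather than the eight-parameter Kozak equation; the integration machinery is the same.

```python
import math

def stem_volume(d_at, total_height, n=1000):
    """Stem volume (m^3) by trapezoidal integration of cross-sectional
    area pi*d(h)^2/4 along the stem; d_at(h) returns diameter (m) at
    height h (m)."""
    hs = [total_height * i / n for i in range(n + 1)]
    areas = [math.pi * d_at(h) ** 2 / 4.0 for h in hs]
    step = total_height / n
    return step * (sum(areas) - 0.5 * (areas[0] + areas[-1]))

# Hypothetical linear taper: 0.30 m diameter at the butt, 0 at 20 m height
taper = lambda h: 0.30 * (1.0 - h / 20.0)
v = stem_volume(taper, 20.0)   # a cone: exact volume is pi * 0.15 m^3
```

With a fitted taper equation, `d_at` would be the calibrated NLME-TE prediction, and the upper integration limit the height of the chosen merchantable diameter.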
Funding: support from the National Natural Science Foundation of China (Grant Nos. U2330206, U2230206, and 62173068) and the Sichuan Science and Technology Program (Grant Nos. 2024NSFSC1483, 2024ZYD0156, 2023NSFC1962, and DQ202412).
Abstract: In data communication, limited communication resources often lead to measurement bias, which adversely affects subsequent system estimation if not handled effectively. This paper proposes a novel bias calibration algorithm under communication constraints to obtain accurate state estimates of the system of interest. An output-based event-triggered scheme is first employed to alleviate the transmission burden. Accounting for the measurement bias induced by limited communication, a bias calibration algorithm along Kalman filtering lines is developed to restrain the effect of the bias on system estimation, thereby achieving accurate state estimates. A Field Programmable Gate Array (FPGA) implementation of the proposed algorithm is also realized, with the aim of providing fast bias calibration in practical scenarios. MATLAB simulations of a numerical example and a practical example (gyroscope angular velocity bias calibration) demonstrate the feasibility and effectiveness of the proposed algorithm.
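An output-based event trigger of the kind described, transmitting a measurement only when it deviates sufficiently from the last transmitted value, can be sketched as follows. The send-on-delta rule and threshold are illustrative assumptions (the paper's trigger condition may differ), and the Kalman-style bias calibration is omitted.

```python
def event_triggered_stream(measurements, threshold):
    """Output-based event trigger: transmit a sample only when it differs
    from the last transmitted value by more than `threshold`.
    Returns the transmitted (index, value) pairs."""
    sent = []
    last = None
    for k, y in enumerate(measurements):
        if last is None or abs(y - last) > threshold:
            sent.append((k, y))
            last = y
    return sent

# Hypothetical sensor outputs; only significant changes are transmitted
ys = [0.0, 0.05, 0.4, 0.45, 1.0, 1.02]
sent = event_triggered_stream(ys, threshold=0.3)
```

Here only three of six samples are transmitted, which is the communication saving; the estimator on the receiving side must then cope with the bias such sparse, quantized reporting introduces.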
Funding: supported by National Key R&D grants from the Ministry of Science and Technology of China (Nos. 2021YFA1601600 and 2023YFA1606200), the National Science Foundation of China (Nos. 12090062 and 12105008), and the Major State Basic Research Development Program of China.
Abstract: We present the preparation and measurement of the radioactive isotope ^(37)Ar as a calibration source for liquid xenon time projection chambers. ^(37)Ar is a low-energy calibration source with a half-life of 35.01 days, making it suitable for calibrating the low-energy region of liquid xenon dark-matter experiments. The isotope was produced by irradiating ^(36)Ar with thermal neutrons from a reactor and was subsequently measured in a gaseous xenon time projection chamber (GXe TPC) to validate its radioactivity. Our results demonstrate that ^(37)Ar is an effective and viable calibration source offering precise calibration capabilities in the low-energy domain of xenon-based detectors.
Abstract: Microelectromechanical systems (MEMS) technology has gained significant attention over the past decade for measuring inertial angular velocity. However, owing to their inherent complexity, MEMS gyroscopes typically feature up to ten times more parameters than traditional sensors, making selection a challenging task even for experts. This study addresses that challenge, focusing on defensive guidance, navigation, and control (GNC) systems, where precise and reliable angular velocity measurement is critical to overall performance. A comprehensive mathematical model is introduced to encapsulate all key MEMS parameters, accompanied by discussions on calibration and the interpretation of Allan variance. For six leading MEMS gyroscope applications, namely inertial navigation, integrated navigation, autopilot systems, rotating projectiles, homing guidance, and north finding, the most critical parameters are identified, distinguishing suitable from unsuitable sensor choices. Special emphasis is placed on inertial navigation systems, where practical rules of thumb for error evaluation are derived using six-degrees-of-freedom motion equations. Rigorous simulations demonstrate the influence of various sensor parameters through real-world case studies, including static navigation, multi-rotor attitude estimation, gimbal stabilization, and north finding via a turntable. This work aims to guide practitioners across diverse fields toward more informed design decisions.
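Allan variance, the standard tool mentioned above for characterizing gyroscope noise, can be sketched in its basic non-overlapping form. The simulated signal is pure white rate noise with hypothetical parameters; real gyro data would also show bias instability, rate random walk, and other terms at different cluster times.

```python
import numpy as np

def allan_variance(rate, m):
    """Non-overlapping Allan variance of a rate signal for cluster size m
    (cluster time tau = m * sample_interval): half the mean squared
    difference between successive cluster averages."""
    n_clusters = rate.size // m
    means = rate[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

# Hypothetical 100 Hz gyro record dominated by white noise (angle random walk)
rng = np.random.default_rng(1)
white = rng.normal(0.0, 0.1, 100_000)   # deg/s samples
av_small = allan_variance(white, 10)    # tau = 0.1 s
av_large = allan_variance(white, 1000)  # tau = 10 s
```

For white noise the Allan variance falls as 1/tau, so `av_large` sits roughly two decades below `av_small`; plotting it over many cluster sizes gives the log-log slope (-1/2 in deviation) used to read off the angle random walk coefficient.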