Aerodynamic performances of axial compressors are significantly affected by variation of Reynolds number in aero-engines. In the design and analysis of compressors, previous correction methods for cascades and stages have difficulties in comprehensively predicting Reynolds number effects on airfoils, matching, and characteristic curves. This study proposes Re-correction models for loss, deviation angle, and endwall blockage based on classical theories and cascade tests; the loss and deviation models show good agreement with test data of NACA65 and C4 cascades. A throughflow method considering Reynolds number effects is developed by integrating the correction models into a verified Streamline Curvature (SLC) tool. A three-stage axial compressor is investigated through SLC and CFD methods from the design Reynolds number (Re_d = 2×10^6) down to a low Re = 4×10^4, and the numerical methods are validated with test data of characteristic curves and spanwise distributions at Re_d. With Re reduction, the SLC method with correction models predicts the variation in overall performance well compared with CFD calculations and Wassell's model. Streamwise and spanwise matching, such as total pressure and loss distributions, in SLC predictions is basically consistent with CFD results at near-stall points under design and low Reynolds numbers. SLC and CFD methods share similar detections of stall risk in the third stage (Stg3), while their analyses of the diffusion process deviate to some extent due to different predictions of separated endwall flow. The correction models can be adopted to account for Reynolds number effects in throughflow design and analysis of axial compressors.
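The kind of Reynolds-number loss scaling this abstract benchmarks against can be illustrated with a Wassell-type correlation, in which the stage inefficiency scales as a power of the Reynolds-number ratio. The exponent n = 0.2 and the design values below are assumptions for illustration only, not the paper's fitted correction models.

```python
# Hypothetical Wassell-type Reynolds-number scaling sketch:
# (1 - eta) / (1 - eta_d) = (Re_d / Re)**n, with exponent n assumed.
def loss_ratio(re, re_d=2e6, n=0.2):
    """Ratio of off-design loss to design-point loss."""
    return (re_d / re) ** n

def corrected_efficiency(eta_d, re, re_d=2e6, n=0.2):
    """Scale the design-point inefficiency (1 - eta_d) by the loss ratio."""
    return 1.0 - (1.0 - eta_d) * loss_ratio(re, re_d, n)

# Example: design efficiency 0.90 at Re_d = 2e6, evaluated at Re = 4e4
eta_low = corrected_efficiency(0.90, 4e4)
```

Under this assumed exponent, dropping from Re_d = 2×10^6 to Re = 4×10^4 roughly doubles the profile loss, which is the qualitative behavior the correction models quantify.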
The rapid melting of Arctic sea ice poses significant risks to the safety of shipping routes. Accurate remote sensing data on sea ice concentration (SIC) are crucial for effective route planning of ships and for ensuring navigational safety. Despite the availability of numerous SIC products in China, these datasets still lag behind mainstream international products in terms of data accuracy, spatiotemporal resolution, and time span. To enhance the accuracy of China's domestic SIC remote sensing data, this study used the SIC data derived from the passive microwave remote sensing dataset provided by the University of Bremen (BRM-SIC) as a reference to conduct a comprehensive evaluation and analysis of two additional SIC datasets: the dataset derived from the microwave radiation imager (MWRI) aboard the FY-3D satellite, provided by the National Satellite Meteorological Center (FY-SIC), and the dataset obtained through the DT-ASI algorithm from the microwave imager of the FY-3D satellite, provided by Ocean University of China (OUC-SIC). Based on the evaluation results, a TransUnet fusion correction model was developed. The performance of this model was then compared against Ordinary Least Squares (OLS), Random Forest (RF), and UNet correction models through spatial and temporal analyses. Results indicate that, compared to FY-SIC data, the RMSE of the OUC-SIC data against the standard data is reduced by 24.245%, while R is increased by 12.516%. Overall, the accuracy of OUC-SIC data is superior to that of FY-SIC data. During the research period (2020–2022), the standard deviation (SD) and coefficient of variation (CV) of OUC-SIC were 3.877% and 10.582%, respectively, while those for FY-SIC were 7.836% and 7.982%, respectively. In the study area, compared with OUC-SIC data, FY-SIC data exhibited a larger standard deviation of deviation and a smaller coefficient of variation of deviation across most sea areas. These results indicate that the OUC-SIC data exhibit better temporal and spatial stability, whereas the FY-SIC data show stronger relative dimensionless stability. All four correction models showed improvements over the original, unfused data. The fusion corrections using the OLS, RF, UNet, and TransUnet models reduced RMSE by 5.563%, 14.601%, 42.927%, and 48.316%, respectively. Correspondingly, R increased by 0.463%, 1.176%, 3.951%, and 4.342%, respectively. Among these models, TransUnet performed the best, effectively integrating the advantages of FY-SIC and OUC-SIC data and notably improving the overall accuracy and spatiotemporal stability of SIC data.
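The two headline metrics used throughout this evaluation, RMSE against a reference product and the correlation coefficient R, are straightforward to compute. The arrays below are illustrative stand-ins, not real SIC retrievals.

```python
import numpy as np

# RMSE and Pearson R between a SIC product and a reference product.
def rmse(pred, ref):
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def pearson_r(pred, ref):
    return float(np.corrcoef(pred, ref)[0, 1])

ref = np.array([80.0, 85.0, 90.0, 70.0, 60.0])   # reference SIC (%)
fy  = np.array([72.0, 90.0, 84.0, 75.0, 50.0])   # product under evaluation (%)
print(rmse(fy, ref), pearson_r(fy, ref))
```

A fusion correction model is then judged by how much it lowers `rmse` and raises `pearson_r` relative to the uncorrected product, which is how the percentage improvements in the abstract are derived.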
Marine forecasting is critical for navigation safety and disaster prevention. However, traditional ocean numerical forecasting models are often limited by substantial errors and inadequate capture of temporal-spatial features. To address these limitations, this paper proposes a TimeXer-based numerical forecast correction model optimized by an exogenous-variable attention mechanism. The model treats target forecast values as internal variables, and incorporates historical temporal-spatial data and seven-day numerical forecast results from traditional models as external variables based on the embedding strategy of TimeXer. Using a self-attention structure, the model captures correlations between exogenous variables and target sequences, explores intrinsic multi-dimensional relationships, and subsequently corrects endogenous variables with the mined exogenous features. The model's performance is evaluated using metrics including MSE (Mean Squared Error), MAE (Mean Absolute Error), RMSE (Root Mean Square Error), MAPE (Mean Absolute Percentage Error), MSPE (Mean Square Percentage Error), and computational time, with TimeXer and PatchTST models serving as benchmarks. Experimental results show that the proposed model achieves lower errors and higher correction accuracy for both one-day and seven-day forecasts.
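The five error metrics listed in this abstract can be computed directly from forecast/observation pairs. The numbers below are illustrative, not the paper's data; MAPE and MSPE assume nonzero observations.

```python
import numpy as np

# MSE, MAE, RMSE, MAPE, and MSPE for a forecast against observations.
def metrics(forecast, obs):
    f, o = np.asarray(forecast, float), np.asarray(obs, float)
    err = f - o
    rel = err / o                      # assumes observations are nonzero
    return {
        "MSE":  float(np.mean(err ** 2)),
        "MAE":  float(np.mean(np.abs(err))),
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "MAPE": float(np.mean(np.abs(rel)) * 100),
        "MSPE": float(np.mean(rel ** 2) * 100),
    }

m = metrics([2.1, 1.9, 2.4], [2.0, 2.0, 2.0])
```

Reporting both absolute (MSE/MAE/RMSE) and percentage (MAPE/MSPE) errors is what lets the correction be compared fairly across variables with different magnitudes.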
Errors inevitably exist in numerical weather prediction (NWP) due to imperfect numerics and physical parameterizations. To eliminate these errors, by considering NWP as an inverse problem, an unknown term in the prediction equations can be estimated inversely by using past data, which are presumed to represent the imperfection of the NWP model (model error, denoted as ME). In this first paper of a two-part series, an iteration method for obtaining the MEs in past intervals is presented, and the results from testing its convergence in idealized experiments are reported. Moreover, two batches of iteration tests were applied in the global forecast system of the Global and Regional Assimilation and Prediction System (GRAPES-GFS) for July-August 2009 and January-February 2010. The datasets associated with the initial conditions and sea surface temperature (SST) were both based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results showed that the 6-h forecast errors were reduced to 10% of their original value after a 20-step iteration. Then, offline forecast error corrections were estimated linearly based on the 2-month mean MEs and compared with the forecast errors. The estimated error corrections agreed well with the forecast errors, but the linear growth rate of the estimation was steeper than that of the forecast errors. The advantage of this iteration method is that the MEs can provide the foundation for online correction. A larger proportion of the forecast errors can be expected to be canceled out by properly introducing the model error correction into GRAPES-GFS.
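The core idea, estimating an unknown tendency term from past forecast-minus-analysis residuals, can be illustrated with a toy scalar model. Everything below (the tendencies, the relaxation factor of 0.5) is invented for the sketch and is not the GRAPES-GFS formulation.

```python
# Toy iterative model-error (ME) estimation: forecast with the current ME
# estimate, compare against the analysis, fold part of the residual back in.
def step(x0, tendency, me, dt=1.0):
    """One forecast step with an added model-error tendency term."""
    return x0 + dt * (tendency + me)

true_tendency = 1.0     # tendency of the "real" atmosphere
model_tendency = 0.7    # imperfect model misses 0.3 per unit time
x0, dt = 0.0, 1.0
analysis = step(x0, true_tendency, 0.0, dt)   # treated as truth

me = 0.0
for _ in range(20):                           # 20-step iteration, as in the abstract
    forecast = step(x0, model_tendency, me, dt)
    me += 0.5 * (analysis - forecast) / dt    # relaxed residual update

# me has converged toward the missing tendency of 0.3
```

In the toy the residual halves every iteration, mirroring the reported behavior where 20 iterations reduce the 6-h forecast error to a small fraction of its original value.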
Using an error correction model, this paper conducts a co-integration analysis of the relationship between the per capita real consumption and per capita real disposable income of urban residents in Hunan Province from 1978 to 2009. The results show that there is a co-integration relationship between the per capita real consumption and the per capita real disposable income of urban residents, and on this basis the corresponding error correction model is established. Finally, corresponding countermeasures and suggestions are put forward: broaden the income channels of urban residents, create a favorable consumption environment, and improve the social security system.
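The standard way to build such a model is the Engle–Granger two-step procedure: a co-integrating regression in levels, then a regression of the differenced series on the differenced regressor and the lagged residual. The synthetic series below stand in for the Hunan income/consumption data, which are not reproduced here.

```python
import numpy as np

# Engle-Granger two-step error correction model (ECM) on synthetic data.
rng = np.random.default_rng(0)
n = 200
income = np.cumsum(rng.normal(1.0, 0.5, n))       # I(1) income series
cons = 0.8 * income + rng.normal(0.0, 0.3, n)     # co-integrated consumption

# Step 1: levels regression  c_t = a + b*y_t + u_t
X = np.column_stack([np.ones(n), income])
a, b = np.linalg.lstsq(X, cons, rcond=None)[0]
resid = cons - (a + b * income)

# Step 2: ECM  dc_t = c0 + c1*dy_t + gamma*u_{t-1} + e_t
dc, dy, u_lag = np.diff(cons), np.diff(income), resid[:-1]
Z = np.column_stack([np.ones(n - 1), dy, u_lag])
c0, c1, gamma = np.linalg.lstsq(Z, dc, rcond=None)[0]
```

A negative `gamma` is the error-correction effect: deviations from the long-run income-consumption relationship are pulled back toward equilibrium in the next period.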
As the core component of inertial navigation systems, fiber optic gyroscopes (FOGs), with technical advantages such as low power consumption, long lifespan, fast startup speed, and flexible structural design, are widely used in aerospace, unmanned driving, and other fields. However, due to the temperature sensitivity of optical devices, environmental temperature induces errors in FOGs, thereby greatly limiting their output accuracy. This work investigates machine-learning-based temperature error compensation techniques for FOGs, focusing on compensating for the bias errors generated in the fiber ring due to the Shupe effect. A composite model based on k-means clustering, support vector regression, and particle swarm optimization algorithms is proposed, and redundancy within the samples is significantly reduced by adopting interval-sequence sampling. Moreover, metrics such as root mean square error (RMSE), mean absolute error (MAE), bias stability, and Allan variance are selected to evaluate the model's performance and compensation effectiveness. This work effectively enhances the consistency between data and models across different temperature ranges and temperature gradients, improving the bias stability of the FOG from 0.022 °/h to 0.006 °/h. Compared to existing methods utilizing a single machine learning model, the proposed method raises the improvement in bias stability of the compensated FOG from 57.11% to 71.98%, and enhances the suppression of the rate ramp noise coefficient from 2.29% to 14.83%. This work improves the post-compensation accuracy of FOGs, providing theoretical guidance and technical references for sensor error compensation in other fields.
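Allan variance, one of the evaluation metrics named here, measures how the average of a rate signal fluctuates as the averaging window grows. The sketch below is a minimal non-overlapping version on synthetic white noise (real FOG analysis typically uses the overlapping estimator and physical units).

```python
import numpy as np

# Non-overlapping Allan variance of a rate series at a given window size.
def allan_variance(rate, tau_samples):
    """Allan variance at an averaging window of tau_samples points."""
    n = len(rate) // tau_samples
    clusters = np.asarray(rate[: n * tau_samples], float).reshape(n, tau_samples)
    means = clusters.mean(axis=1)                 # per-cluster averages
    return float(0.5 * np.mean(np.diff(means) ** 2))

rng = np.random.default_rng(1)
white = rng.normal(0.0, 1.0, 100_000)             # white rate noise, unit variance
# For white noise the Allan variance falls as 1/tau:
av1, av10 = allan_variance(white, 1), allan_variance(white, 10)
```

Plotting `allan_variance` against window length on log-log axes is what separates noise terms such as angle random walk, bias instability, and rate ramp, the last of which the abstract reports suppressing.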
The hot deformation behavior of as-extruded Ti-6554 alloy was investigated through isothermal compression at 700–950°C and 0.001–1 s^(−1). The temperature rise under different deformation conditions was calculated, and the flow curves were corrected accordingly. A strain-compensated constitutive model of as-extruded Ti-6554 alloy based on temperature rise correction was established. The microstructure evolution under different conditions was analyzed, and the dynamic recrystallization (DRX) mechanism was revealed. The results show that the flow stress decreases with decreasing strain rate and increasing deformation temperature. The deformation temperature rise gradually increases with increasing strain rate and decreasing deformation temperature; at 700°C/1 s^(−1), the temperature rise reaches 100°C. The corrected curve values are higher than the measured values, and the strain-compensated constitutive model has high prediction accuracy. Precipitation of the α phase occurs during deformation in the two-phase region, which promotes the DRX process of the β phase. At low strain rates, the volume fraction of dynamic recrystallization increases with increasing deformation temperature. The DRX mechanism includes continuous DRX and discontinuous DRX.
Automatically correcting students' code errors using deep learning is an effective way to reduce the burden on teachers and to enhance students' learning. However, code errors vary greatly, and the adaptability of fixing techniques may vary for different types of code errors. How to choose appropriate methods to fix different types of errors is still an unsolved problem. To this end, this paper first classifies code errors made by novice Java programmers based on Delphi analysis, and compares the effectiveness of different deep learning models (CuBERT, GraphCodeBERT, and GGNN) in fixing different types of errors. The results indicate that the three models differ significantly in their accuracy on different error classes, while the error correction model based on the BERT structure shows better code correction potential for beginners' code.
Following the publication of Zeng et al. (2023), an inadvertent error was recently identified in Figure 1B and Supplementary Figure S3. To ensure the accuracy and integrity of our published work, we formally request a correction to address this issue and apologize for any confusion this error may have caused. For details, please refer to the modified Supplementary Materials.
Systematic bias is a type of model error that can affect the accuracy of data assimilation and forecasting, and it must be addressed. An online bias correction scheme, called the sequential bias correction scheme (SBCS), was developed that uses the 6-h average bias to correct the systematic bias during model integration. The primary purpose of this study is to investigate the impact of the SBCS in the high-resolution China Meteorological Administration Mesoscale (CMA-MESO) numerical weather prediction (NWP) model, reducing the systematic bias and improving the data assimilation and forecast results. The SBCS is improved upon and applied to the CMA-MESO 3-km model in this study. Four-week sequential data assimilation and forecast experiments, driven by rapid update cycling (RUC), were conducted for the period from 2 to 29 May 2022. In terms of the characteristics of systematic bias, both the background and analysis show diurnal bias, and these large biases are affected by complex underlying surfaces (e.g., oceans, coasts, and mountains). After the application of the SBCS, the data assimilation results show that the SBCS can reduce the systematic bias of the background and yield a neutral to slightly positive result for the analysis fields. In addition, the SBCS can reduce forecast errors and improve forecast results, especially for surface variables. These results indicate that the scheme has good prospects for high-resolution regional NWP models.
Quantum error correction is essential for realizing fault-tolerant quantum computing, where both the efficiency and accuracy of the decoding algorithms play critical roles. In this work, we introduce the implementation of the PLANAR algorithm, a software framework designed for fast and exact decoding of quantum codes with a planar structure. The algorithm first converts the optimal decoding of quantum codes into a partition-function computation problem of an Ising spin glass model, and then utilizes the exact Kac–Ward formula to solve it. In this way, PLANAR offers exact maximum likelihood decoding in polynomial complexity for quantum codes with a planar structure, including the surface code with independent code-capacity noise and the quantum repetition code with circuit-level noise. Unlike traditional minimum-weight decoders such as minimum-weight perfect matching (MWPM), PLANAR achieves theoretically optimal performance while maintaining polynomial-time efficiency. In addition, to demonstrate its capabilities, we exemplify the implementation using the rotated surface code, a commonly used quantum error correction code with a planar structure, and show that PLANAR achieves a threshold of approximately p_uc ≈ 0.109 under the depolarizing error model, with a time complexity scaling of O(N^0.69), where N is the number of spins in the Ising model.
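The distinction between minimum-weight and maximum-likelihood (ML) decoding is easiest to see in the smallest planar-structure code: the 3-bit repetition code under independent bit flips, where exact ML decoding is tractable by enumeration and reduces to majority vote. This toy only illustrates what "exact ML decoding" means; PLANAR's Kac–Ward partition-function machinery is what makes it tractable for full planar codes.

```python
# Exact maximum-likelihood decoding of the 3-bit repetition code under an
# independent bit-flip channel with flip probability p < 0.5.
def ml_decode(received, p=0.1):
    """Return the logical bit whose codeword is most likely given `received`."""
    best_bit, best_likelihood = 0, -1.0
    for bit in (0, 1):
        codeword = [bit] * 3
        flips = sum(r != c for r, c in zip(received, codeword))
        likelihood = (p ** flips) * ((1 - p) ** (3 - flips))
        if likelihood > best_likelihood:
            best_bit, best_likelihood = bit, likelihood
    return best_bit

print(ml_decode([0, 1, 0]))  # a single flip is corrected back to logical 0
```

For this code ML decoding coincides with majority vote; for surface codes the two diverge, because ML must sum likelihoods over all error configurations with the same syndrome, which is exactly the partition-function computation PLANAR performs.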
Industrial robots are integral to modern manufacturing systems, enabling high precision, high throughput, and flexibility. However, errors in accuracy and repeatability, which arise from a variety of sources such as mechanical wear, calibration issues, and environmental factors, can significantly impact the performance of industrial robots. This paper aims to explore the theoretical modeling of errors in industrial robot systems and propose compensation strategies to enhance their accuracy and repeatability. Key factors contributing to errors, such as kinematic, dynamic, and environmental influences, are discussed in detail. Additionally, the paper explores various compensation techniques, including geometric error compensation, dynamic compensation, and adaptive control approaches. Through the integration of error modeling and compensation methods, industrial robots can achieve improved performance, ensuring higher operational efficiency and product quality. The paper concludes by highlighting the challenges and future research directions for improving the accuracy and repeatability of industrial robots in practical applications.
An online systematic error correction is presented and examined as a technique to improve the accuracy of real-time numerical weather prediction, based on a dataset of model errors (MEs) in past intervals. Given the analyses, the ME in each interval (6 h) between two analyses can be obtained iteratively by introducing an unknown tendency term into the prediction equation, as shown in Part I of this two-part series. In this part, after analyzing the 5-year (2001-2005) GRAPES-GFS (Global Forecast System of the Global and Regional Assimilation and Prediction System) error patterns and evolution, a systematic model error correction is given based on the least-squares approach, first using the past MEs. To test the correction, we applied the approach in GRAPES-GFS for July 2009 and January 2010. The datasets associated with the initial condition and SST used in this study were based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results indicated that the systematically underestimated equator-to-pole geopotential gradient and westerly winds of GRAPES-GFS in the Northern Hemisphere were largely enhanced, and the biases of temperature and wind in the tropics were strongly reduced. Therefore, the correction results in a more skillful forecast with lower mean bias and root-mean-square error and a higher anomaly correlation coefficient.
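An offline least-squares correction of the kind described here can be sketched as fitting the mean model error as a function of forecast lead time from past cases, then subtracting the fitted error from a new forecast. The lead times, error growth rate, and noise level below are synthetic illustrations, not GRAPES-GFS statistics.

```python
import numpy as np

# Least-squares fit of mean model error vs. forecast lead time (synthetic).
rng = np.random.default_rng(3)
lead = np.arange(6, 73, 6, dtype=float)            # lead times (h)
# Past-mean model error grows roughly linearly with lead time:
past_me = 0.05 * lead + rng.normal(0.0, 0.02, lead.size)

slope, intercept = np.polyfit(lead, past_me, 1)    # least-squares line

def correct(forecast, lead_h):
    """Subtract the fitted systematic error at the given lead time."""
    return forecast - (slope * lead_h + intercept)
```

The caveat noted in the abstract applies to this linear form too: if the fitted growth rate is steeper than the true error growth, the correction over-adjusts at long lead times.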
Quantum computing has the potential to solve complex problems that are handled inefficiently by classical computation. However, the high sensitivity of qubits to environmental interference and the high error rates in current quantum devices exceed the error correction thresholds required for effective algorithm execution. Therefore, quantum error correction technology is crucial to achieving reliable quantum computing. In this work, we study a topological surface code with a two-dimensional lattice structure that protects quantum information by introducing redundancy across multiple qubits and using syndrome qubits to detect and correct errors. However, errors can occur not only in data qubits but also in syndrome qubits, and different types of errors may generate the same syndromes, complicating the decoding task and creating a need for more efficient decoding methods. To address this challenge, we used a transformer decoder based on an attention mechanism. By mapping the surface code lattice, the decoder performs a self-attention process on all input syndromes, thereby obtaining a global receptive field. The performance of the decoder was evaluated under a phenomenological error model. Numerical results demonstrate that the decoder achieved a decoding accuracy of 93.8%. Additionally, we obtained decoding thresholds of 5% and 6.05% at maximum code distances of 7 and 9, respectively. These results indicate that the decoder demonstrates a certain capability in correcting noise errors in surface codes.
Thermal errors in CNC machine tools, particularly those involving the spindle, significantly affect machining accuracy and performance. These errors, caused by temperature fluctuations in the spindle and surrounding components, result in dimensional deviations that can lead to poor part quality and reduced precision in high-speed manufacturing processes. This paper explores thermal error modeling and compensation methods for the spindle of five-axis CNC machine tools. A detailed analysis of the heat generation, transfer mechanisms, and finite element analysis (FEA) is presented to develop accurate thermal error models. Compensation techniques, such as model-based methods, sensor-based methods, real-time compensation algorithms, and hybrid approaches, are critically reviewed. This study also discusses the challenges in real-time compensation and the integration of thermal error compensation with machine tool control systems. The objective is to provide a comprehensive understanding of thermal error phenomena and their compensation strategies, ultimately contributing to the enhancement of machining accuracy in advanced manufacturing applications.
In the variance component estimation (VCE) of geodetic data, the problem of negative variance components is likely to occur. In the ordinary additive error model, related studies have addressed the problem of negative variance components. However, no such research exists for the mixed additive and multiplicative random error model (MAMREM). This paper applies the nonnegative least squares variance component estimation (NNLS-VCE) algorithm to the MAMREM. The corresponding formulas and iterative algorithm of NNLS-VCE for the MAMREM are derived, solving the problem of negative variance components in VCE for the MAMREM. A digital simulation example and a Digital Terrain Model (DTM) are used to verify the proposed algorithm's validity. The experimental results demonstrate that the proposed algorithm can effectively correct the VCE in the MAMREM when a negative variance component occurs.
The correction of Light Detection and Ranging (LiDAR) intensity data is of great significance for enhancing its application value. However, traditional intensity correction methods based on Terrestrial Laser Scanning (TLS) technology rely on manual site setup to collect intensity training data at different distances and incidence angles, which is noisy and limited in sample quantity, restricting the improvement of model accuracy. To overcome this limitation, this study proposes a fine-grained intensity correction modeling method based on Mobile Laser Scanning (MLS) technology. The method utilizes the continuous scanning characteristics of MLS technology to obtain dense point cloud intensity data at various distances and incidence angles. Then, a fine-grained screening strategy is employed to accurately select distance-intensity and incidence angle-intensity modeling samples. Finally, based on these samples, a high-precision intensity correction model is established through polynomial fitting functions. To verify the effectiveness of the proposed method, comparative experiments were designed, and the MLS modeling method was validated against the traditional TLS modeling method on the same test set. The results show that on Test Set 1, where the distance values vary widely (i.e., 0.1–3 m), the intensity consistency after correction using the MLS modeling method reached 7.692 times that of the original intensity, while the traditional TLS modeling method only increased it to 4.630 times. On Test Set 2, where the incidence angle values vary widely (i.e., 0°–80°), the MLS modeling method, although with a relatively smaller advantage, still improved the intensity consistency to 3.937 times the original intensity, slightly better than the TLS modeling method's 3.413 times. These results demonstrate the significant advantage of the proposed modeling method in enhancing the accuracy of intensity correction models.
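The polynomial-fitting idea behind such distance-intensity correction can be sketched as follows: fit intensity as a polynomial of range, then normalize each observation to a common reference range. The 1/(1+d)^2 falloff, noise level, and polynomial degree below are assumptions for the sketch, not the paper's fitted model.

```python
import numpy as np

# Polynomial distance-intensity correction on synthetic data.
rng = np.random.default_rng(2)
r = rng.uniform(0.1, 3.0, 500)                     # ranges (m), as in Test Set 1

def true_falloff(d):
    """Assumed range falloff of recorded intensity (illustrative only)."""
    return 100.0 / (1.0 + d) ** 2

intensity = true_falloff(r) * (1 + rng.normal(0.0, 0.01, r.size))

coef = np.polyfit(r, intensity, 6)                 # polynomial intensity-range model
ref_range = 1.0                                    # normalize to a reference range
corrected = intensity * np.polyval(coef, ref_range) / np.polyval(coef, r)

# Intensity consistency: relative spread shrinks after correction.
spread_before = intensity.std() / intensity.mean()
spread_after = corrected.std() / corrected.mean()
```

The ratio `spread_before / spread_after` is one way to express the "times the original intensity consistency" figures reported in the abstract; denser MLS samples simply make the fitted polynomial more reliable.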
Monte Carlo (MC) simulations have been performed to refine the estimation of the correction-to-scaling exponent ω in the 2D φ^4 model, which belongs to one of the most fundamental universality classes. If corrections have the form ∝ L^(-ω), then we find ω = 1.546(30) and ω = 1.509(14) as the best estimates. These are obtained from the finite-size scaling of the susceptibility data in the range of linear lattice sizes L ∈ [128, 2048] at the critical value of the Binder cumulant, and from the scaling of the corresponding pseudocritical couplings within L ∈ [64, 2048]. These values agree with several other MC estimates under the assumption of power-law corrections and are comparable with the known results of the ε-expansion. In addition, we have tested the consistency with scaling corrections of the form ∝ L^(-4/3), ∝ L^(-4/3) ln L, and ∝ L^(-4/3)/ln L, which might be expected from some considerations of the renormalization group and the Coulomb gas model. The latter option is consistent with our MC data. Our MC results served as a basis for a critical reconsideration of some earlier theoretical conjectures and scaling assumptions. In particular, we have corrected and refined our previous analysis based on grouping Feynman diagrams. The renewed analysis gives ω ≈ 4 - d - 2η as an approximation for spatial dimensions d < 4, or ω ≈ 1.5 in two dimensions.
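The fitting form behind such estimates, y(L) = A(1 + c L^(-ω)), becomes log-linear once the leading amplitude is divided out, so ω is a slope on log-log axes. The data below are synthetic with ω = 1.5 built in; they are not the paper's MC results, and in a real analysis A itself must be fitted rather than assumed known.

```python
import numpy as np

# Toy finite-size-scaling fit of a correction-to-scaling exponent omega.
L = np.array([64, 128, 256, 512, 1024, 2048], float)
A, c, omega_true = 2.0, 0.4, 1.5
y = A * (1 + c * L ** -omega_true)

# With the leading amplitude A known (exactly, for this toy), the
# correction term c * L**-omega is log-linear in log L:
corr = y / A - 1.0
slope, intercept = np.polyfit(np.log(L), np.log(corr), 1)
omega_est = -slope          # recovers the built-in 1.5
```

Distinguishing a pure power law from the ∝ L^(-4/3)/ln L alternative tested in the abstract requires exactly this kind of fit repeated over wide ranges of L, since the logarithmic factor bends the line only slightly.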
Funding: supported by the National Science and Technology Major Project of China (Nos. 2017-II-0007-0021 and J2019-II-0017-0038).
Funding: supported by the National Natural Science Foundation of China (No. 41971339) and the SDUST Research Fund (No. 2019TDJH103).
Abstract: The rapid melting of Arctic sea ice poses significant risks to the safety of shipping routes. Accurate remote sensing data on sea ice concentration (SIC) are crucial for effective route planning of ships and for ensuring navigational safety. Despite the availability of numerous SIC products in China, these datasets still lag behind mainstream international products in terms of data accuracy, spatiotemporal resolution, and time span. To enhance the accuracy of China's domestic SIC remote sensing data, this study used the SIC data derived from the passive microwave remote sensing dataset provided by the University of Bremen (BRM-SIC) as a reference to conduct a comprehensive evaluation and analysis of two additional SIC datasets: the dataset derived from the microwave radiation imager (MWRI) aboard the FY-3D satellite, provided by the National Satellite Meteorological Center (FY-SIC), and the dataset obtained through the DT-ASI algorithm from the microwave imager of the FY-3D satellite, provided by Ocean University of China (OUC-SIC). Based on the evaluation results, a TransUnet fusion correction model was developed. The performance of this model was then compared against Ordinary Least Squares (OLS), Random Forest (RF), and UNet correction models through spatial and temporal analyses. Results indicate that, compared to the FY-SIC data, the RMSE of the OUC-SIC data against the standard data is reduced by 24.245%, while the correlation coefficient R is increased by 12.516%. Overall, the accuracy of the OUC-SIC data is superior to that of the FY-SIC data. During the research period (2020–2022), the standard deviation (SD) and coefficient of variation (CV) of OUC-SIC were 3.877% and 10.582%, respectively, while those for FY-SIC were 7.836% and 7.982%, respectively. In the study area, compared with the OUC-SIC data, the FY-SIC data exhibited a larger standard deviation of the deviation and a smaller coefficient of variation of the deviation across most sea areas. These results indicate that the OUC-SIC data exhibit better temporal and spatial stability, whereas the FY-SIC data show stronger relative dimensionless stability. All four correction models showed improvements over the original, unfused data. The fusion corrections using the OLS, RF, UNet, and TransUnet models reduced the RMSE by 5.563%, 14.601%, 42.927%, and 48.316%, respectively; correspondingly, R increased by 0.463%, 1.176%, 3.951%, and 4.342%. Among these models, TransUnet performed the best, effectively integrating the advantages of the FY-SIC and OUC-SIC data and notably improving the overall accuracy and spatiotemporal stability of the SIC data.
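The dataset comparison above relies on RMSE, the correlation coefficient R, and the standard deviation and coefficient of variation of the deviation field. As a minimal illustration of how such metrics can be computed for a pair of SIC grids (the definitions below are common choices, not necessarily the exact ones used in the study):

```python
import numpy as np

def sic_metrics(candidate, reference):
    """Compare a candidate SIC field against a reference SIC field.

    Both inputs are arrays of sea ice concentration in percent (0-100).
    Returns RMSE, Pearson correlation R, and the standard deviation and
    coefficient of variation of the candidate-minus-reference deviation
    (CV here is SD over the absolute mean deviation; assumes nonzero mean).
    """
    dev = candidate - reference
    rmse = np.sqrt(np.mean(dev ** 2))
    r = np.corrcoef(candidate.ravel(), reference.ravel())[0, 1]
    sd = np.std(dev)
    cv = sd / abs(np.mean(dev))
    return rmse, r, sd, cv
```

Given two gridded products on the same lattice, this yields the accuracy (RMSE, R) and stability (SD, CV) numbers that the comparison tables are built from.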
Funding: supported by the National Key Research and Development Program Project (2023YFC3107804), the Planning Fund Project of Humanities and Social Sciences Research of the Ministry of Education (24YJA880097), and the Graduate Education Reform Project at North China University of Technology (217051360025XN095-17).
Abstract: Marine forecasting is critical for navigation safety and disaster prevention. However, traditional ocean numerical forecasting models are often limited by substantial errors and inadequate capture of temporal-spatial features. To address these limitations, this paper proposes a TimeXer-based numerical forecast correction model optimized by an exogenous-variable attention mechanism. The model treats target forecast values as internal variables and incorporates historical temporal-spatial data and seven-day numerical forecast results from traditional models as external variables, following the embedding strategy of TimeXer. Using a self-attention structure, the model captures correlations between exogenous variables and target sequences, explores intrinsic multi-dimensional relationships, and subsequently corrects the endogenous variables with the mined exogenous features. The model's performance is evaluated using metrics including MSE (Mean Squared Error), MAE (Mean Absolute Error), RMSE (Root Mean Square Error), MAPE (Mean Absolute Percentage Error), MSPE (Mean Square Percentage Error), and computational time, with the TimeXer and PatchTST models serving as benchmarks. Experimental results show that the proposed model achieves lower errors and higher correction accuracy for both one-day and seven-day forecasts.
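The evaluation metrics listed above (MSE, MAE, RMSE, MAPE, MSPE) have standard definitions; a minimal sketch, with the percentage errors taken relative to the observations:

```python
import numpy as np

def forecast_errors(pred, obs):
    """Standard point-forecast error metrics for a corrected forecast.

    pred, obs: 1-D arrays of predicted and observed values; obs must be
    nonzero wherever percentage errors are computed.
    """
    e = pred - obs
    pe = e / obs  # relative error per point
    return {
        "MSE": np.mean(e ** 2),
        "MAE": np.mean(np.abs(e)),
        "RMSE": np.sqrt(np.mean(e ** 2)),
        "MAPE": np.mean(np.abs(pe)) * 100.0,
        "MSPE": np.mean(pe ** 2) * 100.0,
    }
```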
Funding: funded by the National Natural Science Foundation Science Fund for Youth (Grant No. 41405095), the Key Projects in the National Science and Technology Pillar Program during the Twelfth Five-year Plan Period (Grant No. 2012BAC22B02), and the National Natural Science Foundation Science Fund for Creative Research Groups (Grant No. 41221064).
Abstract: Errors inevitably exist in numerical weather prediction (NWP) due to imperfect numerics and physical parameterizations. To eliminate these errors, by considering NWP as an inverse problem, an unknown term in the prediction equations can be estimated inversely using past data, which is presumed to represent the imperfection of the NWP model (model error, denoted as ME). In this first paper of a two-part series, an iteration method for obtaining the MEs in past intervals is presented, and the results of testing its convergence in idealized experiments are reported. Moreover, two batches of iteration tests were applied in the global forecast system of the Global and Regional Assimilation and Prediction System (GRAPES-GFS) for July–August 2009 and January–February 2010. The datasets associated with the initial conditions and sea surface temperature (SST) were both based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results showed that the 6-h forecast errors were reduced to 10% of their original value after a 20-step iteration. Then, off-line forecast error corrections were estimated linearly based on the 2-month mean MEs and compared with the forecast errors. The estimated error corrections agreed well with the forecast errors, but the linear growth rate of the estimation was steeper than that of the forecast errors. The advantage of this iteration method is that the MEs can provide the foundation for online correction. A larger proportion of the forecast errors can be expected to be canceled out by properly introducing the model error correction into GRAPES-GFS.
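As a toy illustration of the idea of estimating an unknown tendency term by iterating on the forecast misfit (a one-variable stand-in, not the GRAPES-GFS implementation; all constants here are hypothetical):

```python
import numpy as np

# Toy setup: the "truth" evolves as dx/dt = -x + m_true, the imperfect
# model omits the constant tendency m_true, and the ME is recovered by
# iterating on the misfit over one 6-h interval.
DT, STEPS = 0.05, 120            # Euler step and steps per interval (6 time units)
M_TRUE, X0 = 0.8, 1.0            # true model error and initial condition

def integrate(x, m, steps=STEPS, dt=DT):
    """Forward-Euler integration of dx/dt = -x + m over one interval."""
    for _ in range(steps):
        x += dt * (-x + m)
    return x

analysis = integrate(X0, M_TRUE)   # stand-in for the verifying analysis
m_est = 0.0
for _ in range(20):                # 20-step iteration, as in the paper
    forecast = integrate(X0, m_est)
    # nudge the estimated ME by the mean misfit rate over the interval
    m_est += (analysis - forecast) / (STEPS * DT)
```

After 20 iterations the estimated tendency term closes most of the gap to the true forcing, mirroring the reported reduction of the interval forecast error.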
Funding: supported by the Scientific Research Subject of the Department of Education of Hunan Province (10C0556).
Abstract: Using an error correction model, I conduct a co-integration analysis of the relationship between the per capita real consumption and per capita real disposable income of urban residents in Hunan Province from 1978 to 2009. The results show that there is a co-integration relationship between the per capita real consumption and the per capita real disposable income of urban residents, and on this basis the corresponding error correction model is established. Finally, corresponding countermeasures and suggestions are put forward: broaden the income channels of urban residents, create a good consumption environment, and perfect the social security system.
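A minimal sketch of the two-step (Engle-Granger style) co-integration and error correction procedure described above, using plain least squares; the variable pairing (consumption as y, income as x) follows the abstract, while the implementation details are illustrative assumptions:

```python
import numpy as np

def engle_granger_ecm(y, x):
    """Two-step error correction model (illustrative sketch).

    y, x: 1-D arrays, e.g. per capita consumption and disposable income.
    Step 1 regresses y on x to get the long-run (co-integrating) relation;
    step 2 regresses the first difference of y on that of x and the lagged
    residual (the error correction term).
    """
    # Step 1: long-run regression y_t = a + b*x_t + u_t
    X = np.column_stack([np.ones_like(x), x])
    a, b = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - (a + b * x)                       # equilibrium error
    # Step 2: short-run dynamics dy_t = c + d*dx_t + g*u_{t-1} + e_t
    dy, dx, u_lag = np.diff(y), np.diff(x), u[:-1]
    Z = np.column_stack([np.ones_like(dx), dx, u_lag])
    c, d, g = np.linalg.lstsq(Z, dy, rcond=None)[0]
    return b, d, g   # long-run slope, short-run slope, adjustment speed
```

A negative adjustment coefficient g indicates that deviations from the long-run equilibrium are gradually corrected, which is the defining feature of an ECM.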
Funding: supported by the National Natural Science Foundation of China (62375013).
Abstract: As a core component of inertial navigation systems, the fiber optic gyroscope (FOG), with technical advantages such as low power consumption, long lifespan, fast startup, and flexible structural design, is widely used in aerospace, unmanned driving, and other fields. However, due to the temperature sensitivity of optical devices, environmental temperature variations cause errors in the FOG, greatly limiting its output accuracy. This work investigates machine-learning-based temperature error compensation techniques for the FOG, focusing on compensating the bias errors generated in the fiber ring due to the Shupe effect. It proposes a composite model based on k-means clustering, support vector regression, and particle swarm optimization algorithms, and significantly reduces redundancy within the samples by adopting interval sequence sampling. Metrics such as root mean square error (RMSE), mean absolute error (MAE), bias stability, and Allan variance are selected to evaluate the model's performance and compensation effectiveness. This work effectively enhances the consistency between data and models across different temperature ranges and temperature gradients, improving the bias stability of the FOG from 0.022 °/h to 0.006 °/h. Compared to existing methods utilizing a single machine learning model, the proposed method increases the bias stability improvement of the compensated FOG from 57.11% to 71.98% and enhances the suppression of the rate ramp noise coefficient from 2.29% to 14.83%. This work improves the accuracy of the FOG after compensation, providing theoretical guidance and technical references for sensor error compensation work in other fields.
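Bias stability is typically read off the flat bottom of an Allan deviation curve; a minimal sketch of the standard overlapping Allan deviation estimator (generic, not code from the paper):

```python
import numpy as np

def allan_deviation(rate, fs, taus):
    """Overlapping Allan deviation of a gyro rate series (sketch).

    rate: rate samples (e.g. deg/h), fs: sample rate (Hz),
    taus: averaging times in seconds.
    """
    out = []
    for tau in taus:
        m = int(tau * fs)                        # samples per cluster
        # sliding-window cluster averages (overlapping estimator)
        c = np.convolve(rate, np.ones(m) / m, mode="valid")
        d = c[m:] - c[:-m]                       # adjacent-cluster differences
        out.append(np.sqrt(0.5 * np.mean(d ** 2)))
    return np.array(out)
```

For white (angle random walk) noise the curve falls as 1/sqrt(tau); a flattening at longer tau marks the bias instability floor quoted in °/h.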
Funding: National Key R&D Program of China (2022YFB3706901), National Natural Science Foundation of China (52274382), and Key Research and Development Program of Hubei Province (2022BAA024).
Abstract: The hot deformation behavior of as-extruded Ti-6554 alloy was investigated through isothermal compression at 700–950 °C and 0.001–1 s^(−1). The temperature rise under different deformation conditions was calculated, and the flow curves were corrected accordingly. A strain-compensated constitutive model of as-extruded Ti-6554 alloy based on temperature rise correction was established. The microstructure evolution under different conditions was analyzed, and the dynamic recrystallization (DRX) mechanism was revealed. The results show that the flow stress increases with increasing strain rate and decreasing deformation temperature. The deformation temperature rise gradually increases with increasing strain rate and decreasing deformation temperature; at 700 °C/1 s^(−1), the temperature rise reaches 100 °C. The corrected curve lies above the measured one, and the strain-compensated constitutive model has high prediction accuracy. Precipitation of the α phase occurs during deformation in the two-phase region, which promotes the DRX process of the β phase. At low strain rates, the volume fraction of dynamic recrystallization increases with increasing deformation temperature. The DRX mechanism includes both continuous DRX and discontinuous DRX.
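Temperature rise correction of flow curves commonly rests on the adiabatic heating relation ΔT = η/(ρ·cp)·∫σ dε; a hedged sketch of that step (η = 0.9 and the titanium-like material constants in the test are illustrative assumptions, not values from the paper):

```python
import numpy as np

def adiabatic_temp_rise(strain, stress_mpa, rho, cp, eta=0.9):
    """Deformation temperature rise from a measured flow curve (sketch).

    Uses dT = eta/(rho*cp) * integral(sigma d_eps), with eta the fraction
    of plastic work converted to heat (0.9 assumed here; in practice the
    adiabatic fraction also depends on strain rate).
    stress in MPa, rho in kg/m^3, cp in J/(kg*K); returns dT in K.
    """
    # trapezoidal plastic work per unit volume, J/m^3
    work = np.sum(0.5 * (stress_mpa[1:] + stress_mpa[:-1]) * np.diff(strain)) * 1e6
    return eta * work / (rho * cp)
```

The isothermal (corrected) stress is then obtained by adding back the softening attributable to ΔT, which is why the corrected curve sits above the measured one.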
Funding: supported in part by the Education Department of Sichuan Province (Grant No. [2022]114).
Abstract: Automatically correcting students' code errors using deep learning is an effective way to reduce the burden on teachers and to enhance students' learning outcomes. However, code errors vary greatly, and the adaptability of fixing techniques may differ across error types; how to choose appropriate methods to fix different types of errors remains an unsolved problem. To this end, this paper first classifies code errors made by Java novice programmers based on a Delphi analysis, and then compares the effectiveness of different deep learning models (CuBERT, GraphCodeBERT, and GGNN) in fixing different types of errors. The results indicate that the three models differ significantly in their correction accuracy on different error types, while the error correction model based on the BERT architecture shows better code correction potential for beginners' code.
Abstract: Following the publication of Zeng et al. (2023), an inadvertent error was recently identified in Figure 1B and Supplementary Figure S3. To ensure the accuracy and integrity of our published work, we formally request a correction to address this issue and apologize for any confusion this error may have caused. For details, please refer to the modified Supplementary Materials.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. U2242213, U2142213, 42305167, and 42175105).
Abstract: Systematic bias is a type of model error that affects the accuracy of data assimilation and forecasting and must be addressed. An online bias correction scheme, the sequential bias correction scheme (SBCS), was developed that uses the 6-h average bias to correct the systematic bias during model integration. The primary purpose of this study is to investigate the impact of the SBCS in the high-resolution China Meteorological Administration Mesoscale (CMA-MESO) numerical weather prediction (NWP) model, reducing the systematic bias and thereby improving the data assimilation and forecast results. The SBCS is improved upon and applied to the CMA-MESO 3-km model in this study. Four-week sequential data assimilation and forecast experiments, driven by rapid update cycling (RUC), were conducted for the period 2–29 May 2022. In terms of the characteristics of the systematic bias, both the background and the analysis show diurnal bias, and these large biases are affected by complex underlying surfaces (e.g., oceans, coasts, and mountains). After application of the SBCS, the data assimilation results show that the SBCS can reduce the systematic bias of the background and yield a neutral to slightly positive result for the analysis fields. In addition, the SBCS can reduce forecast errors and improve forecast results, especially for surface variables. These results indicate that the scheme has good prospects for high-resolution regional NWP models.
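A minimal sketch of the sequential idea: subtract the running mean of recent forecast-minus-analysis biases from each new forecast. The window length and update details below are simplified assumptions, not the CMA-MESO implementation:

```python
import numpy as np

def sequential_bias_correct(forecasts, analyses, window=4):
    """Sequential bias correction sketch over successive 6-h cycles.

    forecasts, analyses: sequences of 2-D fields, one per cycle.
    Each forecast is corrected by the mean of the last `window` raw
    forecast-minus-analysis biases; the history is then updated.
    """
    biases, corrected = [], []
    for f, a in zip(forecasts, analyses):
        mean_bias = np.mean(biases, axis=0) if biases else np.zeros_like(f)
        corrected.append(f - mean_bias)   # online correction for this cycle
        biases.append(f - a)              # update history with the raw bias
        biases = biases[-window:]
    return corrected
```

With a stationary bias, the correction converges after the first cycle; diurnally varying biases would call for a time-of-day-dependent history, which the paper's scheme addresses in the full model.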
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 12325501, 12047503, and 12247104) and the Chinese Academy of Sciences (Grant No. ZDRW-XX-2022-3-02). P. Z. is partially supported by the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0301900).
Abstract: Quantum error correction is essential for realizing fault-tolerant quantum computing, where both the efficiency and accuracy of the decoding algorithms play critical roles. In this work, we introduce the implementation of the PLANAR algorithm, a software framework designed for fast and exact decoding of quantum codes with a planar structure. The algorithm first converts the optimal decoding of quantum codes into the computation of the partition function of an Ising spin glass model, and then utilizes the exact Kac–Ward formula to solve it. In this way, PLANAR offers exact maximum likelihood decoding in polynomial complexity for quantum codes with a planar structure, including the surface code with independent code-capacity noise and the quantum repetition code with circuit-level noise. Unlike traditional minimum-weight decoders such as minimum-weight perfect matching (MWPM), PLANAR achieves theoretically optimal performance while maintaining polynomial-time efficiency. To demonstrate its capabilities, we exemplify the implementation using the rotated surface code, a commonly used quantum error correction code with a planar structure, and show that PLANAR achieves a threshold of approximately p_(uc) ≈ 0.109 under the depolarizing error model, with a time complexity scaling of O(N^(0.69)), where N is the number of spins in the Ising model.
Abstract: Industrial robots are integral to modern manufacturing systems, enabling high precision, high throughput, and flexibility. However, errors in accuracy and repeatability, which arise from a variety of sources such as mechanical wear, calibration issues, and environmental factors, can significantly impact the performance of industrial robots. This paper explores the theoretical modeling of errors in industrial robot systems and proposes compensation strategies to enhance their accuracy and repeatability. Key factors contributing to errors, such as kinematic, dynamic, and environmental influences, are discussed in detail. Additionally, the paper explores various compensation techniques, including geometric error compensation, dynamic compensation, and adaptive control approaches. Through the integration of error modeling and compensation methods, industrial robots can achieve improved performance, ensuring higher operational efficiency and product quality. The paper concludes by highlighting the challenges and future research directions for improving the accuracy and repeatability of industrial robots in practical applications.
Funding: funded by the National Natural Science Foundation Science Fund for Youth (Grant No. 41405095), the Key Projects in the National Science and Technology Pillar Program during the Twelfth Five-year Plan Period (Grant No. 2012BAC22B02), and the National Natural Science Foundation Science Fund for Creative Research Groups (Grant No. 41221064).
Abstract: An online systematic error correction is presented and examined as a technique to improve the accuracy of real-time numerical weather prediction, based on the dataset of model errors (MEs) in past intervals. Given the analyses, the ME in each interval (6 h) between two analyses can be obtained iteratively by introducing an unknown tendency term into the prediction equation, as shown in Part I of this two-part series. In this part, after analyzing the 5-year (2001–2005) GRAPES-GFS (Global Forecast System of the Global and Regional Assimilation and Prediction System) error patterns and evolution, a systematic model error correction is derived from the past MEs using a least-squares approach. To test the correction, we applied the approach in GRAPES-GFS for July 2009 and January 2010. The datasets associated with the initial condition and SST used in this study were based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results indicated that the systematically underestimated Northern Hemisphere equator-to-pole geopotential gradient and westerly winds of GRAPES-GFS were largely enhanced, and the biases of temperature and wind in the tropics were strongly reduced. Therefore, the correction results in a more skillful forecast with lower mean bias and root-mean-square error and a higher anomaly correlation coefficient.
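A minimal sketch of estimating a systematic correction from time-mean MEs by least squares; the per-gridpoint, linear-in-lead-time form below is a simplifying assumption for illustration, not the GRAPES-GFS code:

```python
import numpy as np

def fit_systematic_correction(lead_hours, mean_errors):
    """Least-squares fit err(t) ~ a + b*t per grid point.

    lead_hours: shape (k,) array of forecast lead times in hours.
    mean_errors: shape (k, npoints) time-mean forecast errors.
    Returns (a, b) arrays; the offline correction is then -(a + b*t).
    """
    X = np.column_stack([np.ones_like(lead_hours), lead_hours])
    coef, *_ = np.linalg.lstsq(X, mean_errors, rcond=None)
    return coef[0], coef[1]
```

Subtracting a + b*t from a new forecast removes the mean drift; as the abstract notes, a purely linear growth model can overshoot at long leads, since real forecast error growth saturates.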
Funding: Project supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR2021MF049), the Joint Fund of the Natural Science Foundation of Shandong Province (Grant Nos. ZR2022LLZ012 and ZR2021LLZ001), and the Key R&D Program of Shandong Province, China (Grant No. 2023CXGC010901).
Abstract: Quantum computing has the potential to solve complex problems that are handled inefficiently by classical computation. However, the high sensitivity of qubits to environmental interference and the high error rates of current quantum devices exceed the error correction thresholds required for effective algorithm execution. Therefore, quantum error correction technology is crucial to achieving reliable quantum computing. In this work, we study a topological surface code with a two-dimensional lattice structure that protects quantum information by introducing redundancy across multiple qubits and using syndrome qubits to detect and correct errors. However, errors can occur not only in data qubits but also in syndrome qubits, and different types of errors may generate the same syndromes, complicating the decoding task and creating a need for more efficient decoding methods. To address this challenge, we used a transformer decoder based on an attention mechanism. By mapping the surface code lattice, the decoder performs a self-attention process on all input syndromes, thereby obtaining a global receptive field. The performance of the decoder was evaluated under a phenomenological error model. Numerical results demonstrate that the decoder achieved a decoding accuracy of 93.8%. Additionally, we obtained decoding thresholds of 5% and 6.05% at maximum code distances of 7 and 9, respectively. These results indicate that the decoder demonstrates a certain capability for correcting noise errors in surface codes.
Abstract: Thermal errors in CNC machine tools, particularly those involving the spindle, significantly affect machining accuracy and performance. These errors, caused by temperature fluctuations in the spindle and surrounding components, result in dimensional deviations that can lead to poor part quality and reduced precision in high-speed manufacturing processes. This paper explores thermal error modeling and compensation methods for the spindle of five-axis CNC machine tools. A detailed analysis of heat generation, heat transfer mechanisms, and finite element analysis (FEA) is presented to develop accurate thermal error models. Compensation techniques, such as model-based methods, sensor-based methods, real-time compensation algorithms, and hybrid approaches, are critically reviewed. This study also discusses the challenges in real-time compensation and the integration of thermal error compensation with machine tool control systems. The objective is to provide a comprehensive understanding of thermal error phenomena and their compensation strategies, ultimately contributing to the enhancement of machining accuracy in advanced manufacturing applications.
Funding: supported by the National Natural Science Foundation of China (No. 42174011).
Abstract: In the variance component estimation (VCE) of geodetic data, negative variance component estimates are likely to occur. For the ordinary additive error model, related studies have addressed the problem of negative variance components; however, no such research exists for the mixed additive and multiplicative random error model (MAMREM). This paper applies the nonnegative least squares variance component estimation (NNLS-VCE) algorithm to the MAMREM. The relevant formulas and an iterative NNLS-VCE algorithm for the MAMREM are derived, solving the problem of negative variance components in VCE for the MAMREM. A digital simulation example and a Digital Terrain Model (DTM) are used to demonstrate the validity of the proposed algorithm. The experimental results show that the proposed algorithm can effectively correct the VCE in the MAMREM when a negative variance component occurs.
Funding: supported in part by the National Natural Science Foundation of China under grant number 31901239, and funded by Researchers Supporting Project Number (RSPD2025R947), King Saud University, Riyadh, Saudi Arabia.
Abstract: The correction of Light Detection and Ranging (LiDAR) intensity data is of great significance for enhancing its application value. However, traditional intensity correction methods based on Terrestrial Laser Scanning (TLS) technology rely on manual site setup to collect intensity training data at different distances and incidence angles; such data are noisy and limited in sample quantity, restricting the achievable model accuracy. To overcome this limitation, this study proposes a fine-grained intensity correction modeling method based on Mobile Laser Scanning (MLS) technology. The method utilizes the continuous scanning characteristics of MLS technology to obtain dense point cloud intensity data at various distances and incidence angles. A fine-grained screening strategy is then employed to accurately select distance–intensity and incidence angle–intensity modeling samples. Finally, based on these samples, a high-precision intensity correction model is established through polynomial fitting. To verify the effectiveness of the proposed method, comparative experiments were designed, and the MLS modeling method was validated against the traditional TLS modeling method on the same test sets. The results show that on Test Set 1, where the distance values vary widely (0.1–3 m), the intensity consistency after correction using the MLS modeling method reached 7.692 times that of the original intensity, while the traditional TLS modeling method only reached 4.630 times. On Test Set 2, where the incidence angle values vary widely (0°–80°), the MLS modeling method, although with a smaller advantage, still improved the intensity consistency to 3.937 times that of the original intensity, slightly better than the TLS modeling method's 3.413 times. These results demonstrate the significant advantage of the proposed modeling method in enhancing the accuracy of intensity correction models.
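A minimal sketch of the polynomial distance–intensity correction step; the reference distance, polynomial degree, and ratio-style correction below are illustrative assumptions, not values from the study:

```python
import numpy as np

def fit_intensity_correction(dist, inten, ref_dist=10.0, deg=3):
    """Distance-intensity correction by polynomial fitting (sketch).

    Fits I(d) with a degree-`deg` polynomial on screened samples, then
    corrects raw intensities to a reference distance via the ratio
    I_corr = I * f(ref_dist) / f(d). An analogous fit can be made over
    the incidence angle.
    """
    p = np.polyfit(dist, inten, deg)   # least-squares polynomial fit
    f = np.poly1d(p)

    def correct(d, i):
        return i * f(ref_dist) / f(d)

    return correct
```

After correction, returns from identical surfaces at different ranges should collapse onto a common intensity level, which is the "intensity consistency" the abstract reports.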
Abstract: Monte Carlo (MC) simulations have been performed to refine the estimation of the correction-to-scaling exponent ω in the 2D φ^(4) model, which belongs to one of the most fundamental universality classes. If the corrections have the form ∝ L^(-ω), then we find ω = 1.546(30) and ω = 1.509(14) as the best estimates. These are obtained from the finite-size scaling of the susceptibility data in the range of linear lattice sizes L ∈ [128, 2048] at the critical value of the Binder cumulant, and from the scaling of the corresponding pseudocritical couplings within L ∈ [64, 2048]. These values agree with several other MC estimates under the assumption of power-law corrections and are comparable with the known results of the ε-expansion. In addition, we have tested the consistency with scaling corrections of the form ∝ L^(-4/3), ∝ L^(-4/3) ln L, and ∝ L^(-4/3)/ln L, which might be expected from some considerations of the renormalization group and the Coulomb gas model. The latter option is consistent with our MC data. Our MC results served as a basis for a critical reconsideration of some earlier theoretical conjectures and scaling assumptions. In particular, we have corrected and refined our previous analysis by grouping Feynman diagrams. The renewed analysis gives ω ≈ 4 - d - 2η as an approximation for spatial dimensions d < 4, i.e., ω ≈ 1.5 in two dimensions.
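If the corrections are a pure power law, Q(L) = Q_inf + a·L^(-ω), the exponent can be read off from a triple of system sizes because the unknown constants cancel in successive differences; a minimal sketch of this standard estimator (not the paper's full finite-size scaling fit):

```python
import numpy as np

def omega_from_size_triple(q_L, q_2L, q_4L):
    """Correction-to-scaling exponent from a size triple (sketch).

    For Q(L) = Q_inf + a*L^(-omega), successive differences satisfy
    (Q(L) - Q(2L)) / (Q(2L) - Q(4L)) = 2**omega, so omega is the
    base-2 log of that ratio. Assumes a single power-law correction.
    """
    return np.log2((q_L - q_2L) / (q_2L - q_4L))
```

Logarithmic factors such as L^(-4/3) ln L would make this ratio drift slowly with L instead of staying constant, which is one way such forms can be distinguished from a pure power law.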