The hot deformation behavior of as-extruded Ti-6554 alloy was investigated through isothermal compression at 700–950°C and 0.001–1 s^(−1). The temperature rise under different deformation conditions was calculated, and the flow curves were corrected accordingly. A strain-compensated constitutive model of the as-extruded Ti-6554 alloy based on the temperature-rise correction was established. The microstructure evolution under different conditions was analyzed, and the dynamic recrystallization (DRX) mechanism was revealed. The results show that the flow stress decreases with decreasing strain rate and increasing deformation temperature. The deformation temperature rise gradually increases with increasing strain rate and decreasing deformation temperature; at 700°C/1 s^(−1), it reaches 100°C. The corrected curves lie above the measured ones, and the strain-compensated constitutive model shows high prediction accuracy. Precipitation of the α phase occurs during deformation in the two-phase region, which promotes DRX of the β phase. At low strain rates, the volume fraction of dynamic recrystallization increases with increasing deformation temperature. The DRX mechanisms include continuous DRX and discontinuous DRX.
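Constitutive models of this kind for titanium alloys are typically Arrhenius-type hyperbolic-sine models built on the Zener–Hollomon parameter; strain compensation makes the material constants polynomials in strain. A minimal sketch of the model form (the activation energy Q, structure factor A, stress multiplier α, and exponent n below are illustrative placeholders, not the fitted Ti-6554 values):

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def flow_stress(strain_rate, T_celsius, Q=2.2e5, A=1.0e9, alpha=0.01, n=4.0):
    """Arrhenius-type sinh model: strain_rate = A*[sinh(alpha*sigma)]^n * exp(-Q/(R*T)),
    inverted for stress via the Zener-Hollomon parameter Z."""
    T = T_celsius + 273.15
    Z = strain_rate * math.exp(Q / (R * T))        # Zener-Hollomon parameter
    return (1.0 / alpha) * math.asinh((Z / A) ** (1.0 / n))

# Flow stress rises with strain rate and falls with temperature, matching
# the trend reported for the corrected curves.
sigma_hot_slow = flow_stress(0.001, 950)
sigma_cold_fast = flow_stress(1.0, 700)
```

In a strain-compensated version, Q, A, α, and n would each be evaluated from a polynomial in strain fitted at every strain level, and the input temperature would include the calculated adiabatic temperature rise.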
Following the publication of Zeng et al. (2023), an inadvertent error was recently identified in Figure 1B and Supplementary Figure S3. To ensure the accuracy and integrity of our published work, we formally request a correction to address this issue and apologize for any confusion this error may have caused. For details, please refer to the modified Supplementary Materials.
Systematic bias is a type of model error that degrades the accuracy of data assimilation and forecasting and must be addressed. An online bias correction scheme, the sequential bias correction scheme (SBCS), was developed that uses the 6-h average bias to correct the systematic bias during model integration. The primary purpose of this study is to investigate the impact of the SBCS in the high-resolution China Meteorological Administration Meso-scale (CMA-MESO) numerical weather prediction (NWP) model, i.e., to reduce the systematic bias and thereby improve the data assimilation and forecast results. The SBCS is improved upon and applied to the 3-km CMA-MESO model in this study. Four-week sequential data assimilation and forecast experiments, driven by rapid update cycling (RUC), were conducted for 2–29 May 2022. In terms of the characteristics of the systematic bias, both the background and the analysis show diurnal bias, and the largest biases are associated with complex underlying surfaces (e.g., oceans, coasts, and mountains). After application of the SBCS, the data assimilation results show that the scheme reduces the systematic bias of the background and yields a neutral to slightly positive effect on the analysis fields. In addition, the SBCS reduces forecast errors and improves forecast results, especially for surface variables. These results indicate that the scheme has good prospects for high-resolution regional NWP models.
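The core idea of such an online correction can be sketched with a running-window average of recent background-minus-observation differences; the scalar state and the window length below are illustrative simplifications of the SBCS's 6-h average bias applied during model integration:

```python
from collections import deque

class SequentialBiasCorrector:
    """Running-window bias corrector: subtracts the mean of recent
    background-minus-observation differences from new forecasts (a toy
    scalar stand-in for the SBCS's 6-h average bias)."""
    def __init__(self, window=6):
        self.diffs = deque(maxlen=window)

    def update(self, background, observation):
        self.diffs.append(background - observation)

    def correct(self, forecast):
        if not self.diffs:
            return forecast
        return forecast - sum(self.diffs) / len(self.diffs)

corrector = SequentialBiasCorrector(window=6)
for obs in [10.0, 11.0, 10.5, 10.2, 10.8, 10.4]:
    corrector.update(obs + 2.0, obs)    # background consistently 2 units too high
corrected = corrector.correct(12.3)     # the systematic +2 bias is removed
```

Because the window keeps only recent differences, a diurnally varying bias like the one described above is tracked rather than averaged away over the full record.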
Quantum error correction is essential for realizing fault-tolerant quantum computing, where both the efficiency and accuracy of the decoding algorithms play critical roles. In this work, we introduce the implementation of the PLANAR algorithm, a software framework designed for fast and exact decoding of quantum codes with a planar structure. The algorithm first converts the optimal decoding of quantum codes into a partition-function computation for an Ising spin glass model, and then utilizes the exact Kac–Ward formula to solve it. In this way, PLANAR offers exact maximum-likelihood decoding in polynomial complexity for quantum codes with a planar structure, including the surface code with independent code-capacity noise and the quantum repetition code with circuit-level noise. Unlike traditional minimum-weight decoders such as minimum-weight perfect matching (MWPM), PLANAR achieves theoretically optimal performance while maintaining polynomial-time efficiency. In addition, to demonstrate its capabilities, we exemplify the implementation using the rotated surface code, a commonly used quantum error correction code with a planar structure, and show that PLANAR achieves a threshold of approximately p_(uc)≈0.109 under the depolarizing error model, with a time complexity scaling of O(N^(0.69)), where N is the number of spins in the Ising model.
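What "exact maximum-likelihood decoding" means can be seen on the smallest planar example, the 3-qubit repetition code under i.i.d. bit-flip noise: sum the probability of every error pattern in each logical coset consistent with the syndrome and pick the heavier coset. PLANAR computes exactly these coset probabilities, but in polynomial time via the Kac–Ward partition-function formula rather than by the brute-force enumeration sketched here:

```python
from itertools import product

def ml_decode_rep3(syndrome, p):
    """Exact ML decoding of the 3-qubit repetition code with bit-flip rate p.
    syndrome = (e0^e1, e1^e2); returns 1 if a logical correction is needed."""
    coset_prob = [0.0, 0.0]
    for e in product((0, 1), repeat=3):
        if (e[0] ^ e[1], e[1] ^ e[2]) != syndrome:
            continue
        w = sum(e)
        # within one syndrome the two compatible patterns have weights w and
        # 3-w; the heavier one differs from the lighter by the logical XXX
        coset_prob[1 if w >= 2 else 0] += p**w * (1 - p)**(3 - w)
    return 0 if coset_prob[0] >= coset_prob[1] else 1
```

For this tiny code and p < 1/2, ML always agrees with minimum-weight decoding; for larger planar codes the two can disagree on which coset is more probable, which is where the exact method gains its threshold advantage.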
Quantum computing has the potential to solve complex problems that are handled inefficiently by classical computation. However, the high sensitivity of qubits to environmental interference and the high error rates in current quantum devices exceed the error correction thresholds required for effective algorithm execution. Therefore, quantum error correction technology is crucial to achieving reliable quantum computing. In this work, we study a topological surface code with a two-dimensional lattice structure that protects quantum information by introducing redundancy across multiple qubits and using syndrome qubits to detect and correct errors. However, errors can occur not only in data qubits but also in syndrome qubits, and different types of errors may generate the same syndromes, complicating the decoding task and creating a need for more efficient decoding methods. To address this challenge, we used a transformer decoder based on an attention mechanism. By mapping the surface code lattice, the decoder performs a self-attention process on all input syndromes, thereby obtaining a global receptive field. The performance of the decoder was evaluated under a phenomenological error model. Numerical results demonstrate that the decoder achieved a decoding accuracy of 93.8%. Additionally, we obtained decoding thresholds of 5% and 6.05% at maximum code distances of 7 and 9, respectively. These results indicate that the decoder demonstrates a certain capability in correcting noise errors in surface codes.
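The "global receptive field" claim comes from the structure of self-attention itself: every embedded syndrome position attends to every other in a single layer. A minimal forward pass (random weights and dimensions are illustrative; a real decoder stacks many such layers with learned parameters and a classification head):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """One self-attention head: every syndrome position attends to every
    other, so a single layer already has a global receptive field."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A = A / A.sum(axis=1, keepdims=True)        # softmax attention weights
    return A @ V

rng = np.random.default_rng(0)
n_syndromes, d_model = 8, 4                     # 8 syndrome bits, embedding dim 4
X = rng.normal(size=(n_syndromes, d_model))     # embedded syndrome sequence
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
features = self_attention(X, Wq, Wk, Wv)        # one feature row per syndrome
```

Each output row is a weighted mixture of all value vectors, so correlations between distant syndromes on the lattice can be captured without the locality constraints of matching-based decoders.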
In the variance component estimation (VCE) of geodetic data, negative variance component estimates are likely to occur. For the ordinary additive error model, studies already exist that address the problem of negative variance components; however, no such research exists for the mixed additive and multiplicative random error model (MAMREM). This paper applies the nonnegative least squares variance component estimation (NNLS-VCE) algorithm to the MAMREM: the corresponding formulas and iterative algorithm of NNLS-VCE for the MAMREM are derived, solving the problem of negative variance components in VCE for the MAMREM. A digital simulation example and a Digital Terrain Model (DTM) dataset are used to prove the proposed algorithm's validity. The experimental results demonstrate that the proposed algorithm can effectively correct the VCE for the MAMREM when a negative variance component occurs.
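The nonnegativity constraint at the heart of NNLS-VCE can be illustrated with a tiny projected-gradient NNLS solver; this is a generic stand-in, not the paper's algorithm, which applies the constraint to the Helmert-type normal equations of the variance components:

```python
import numpy as np

def nnls_pg(A, b, iters=5000):
    """Projected gradient for min ||A t - b||^2 subject to t >= 0: take a
    gradient step on the normal equations, then clamp negatives to zero."""
    AtA, Atb = A.T @ A, A.T @ b
    lr = 1.0 / np.linalg.norm(AtA, 2)       # step size from the largest eigenvalue
    t = np.zeros(A.shape[1])
    for _ in range(iters):
        t = np.maximum(0.0, t - lr * (AtA @ t - Atb))
    return t

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, -0.5, 0.5])
t_unconstrained = np.linalg.lstsq(A, b, rcond=None)[0]   # second component is negative
t_nnls = nnls_pg(A, b)                                   # clamped to the feasible set
```

The unconstrained least-squares solution here has a negative second component, exactly the situation a negative variance estimate creates; the constrained solver pins it to zero and re-balances the remaining component.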
The correction of Light Detection and Ranging (LiDAR) intensity data is of great significance for enhancing its application value. However, traditional intensity correction methods based on Terrestrial Laser Scanning (TLS) technology rely on manual site setup to collect intensity training data at different distances and incidence angles; these data are noisy and limited in sample quantity, restricting the improvement of model accuracy. To overcome this limitation, this study proposes a fine-grained intensity correction modeling method based on Mobile Laser Scanning (MLS) technology. The method utilizes the continuous scanning characteristics of MLS technology to obtain dense point cloud intensity data at various distances and incidence angles. Then, a fine-grained screening strategy is employed to accurately select distance–intensity and incidence angle–intensity modeling samples. Finally, based on these samples, a high-precision intensity correction model is established through polynomial fitting functions. To verify the effectiveness of the proposed method, comparative experiments were designed, and the MLS modeling method was validated against the traditional TLS modeling method on the same test set. The results show that on Test Set 1, where the distance values vary widely (i.e., 0.1–3 m), the intensity consistency after correction using the MLS modeling method reached 7.692 times the original intensity, while the traditional TLS modeling method only increased it to 4.630 times. On Test Set 2, where the incidence angle values vary widely (i.e., 0°–80°), the MLS modeling method, although with a relatively smaller advantage, still improved the intensity consistency to 3.937 times the original intensity, slightly better than the TLS modeling method's 3.413 times. These results demonstrate the significant advantage of the proposed modeling method in enhancing the accuracy of intensity correction models.
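The final modeling step, fitting a polynomial to distance–intensity samples and normalizing every return to a reference distance, can be sketched as follows. The samples and the quadratic trend are synthetic stand-ins for the screened MLS data; the paper fits separate distance and incidence-angle models:

```python
import numpy as np

# Synthetic distance-intensity samples standing in for the screened MLS data
# (the quadratic trend is illustrative, not the sensor's true response).
d = np.linspace(0.1, 3.0, 200)
intensity = 300.0 - 150.0 * d + 20.0 * d**2

coeffs = np.polyfit(d, intensity, deg=2)     # distance-intensity model
f = np.poly1d(coeffs)

d_ref = 1.5                                  # reference distance, metres
corrected = intensity * f(d_ref) / f(d)      # normalize all returns to d_ref
```

After correction, returns from the same surface should have near-identical intensity regardless of range, which is exactly the "intensity consistency" gain the comparison above quantifies.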
As the core component of inertial navigation systems, the fiber optic gyroscope (FOG), with technical advantages such as low power consumption, long lifespan, fast startup, and flexible structural design, is widely used in aerospace, unmanned driving, and other fields. However, because optical devices are temperature sensitive, environmental temperature introduces errors into the FOG, greatly limiting its output accuracy. This work investigates machine-learning-based temperature error compensation techniques for FOGs, focusing on compensating the bias errors generated in the fiber ring by the Shupe effect. It proposes a composite model based on k-means clustering, support vector regression, and particle swarm optimization, and significantly reduces redundancy within the samples by adopting interval-sequence sampling. Metrics such as root mean square error (RMSE), mean absolute error (MAE), bias stability, and Allan variance are selected to evaluate the model's performance and compensation effectiveness. The approach improves the consistency between data and models across different temperature ranges and temperature gradients, improving the bias stability of the FOG from 0.022 °/h to 0.006 °/h. Compared with existing methods that use a single machine learning model, the proposed method improves the bias stability of the compensated FOG by 57.11% to 71.98% and enhances the suppression of the rate ramp noise coefficient by 2.29% to 14.83%. This work improves the post-compensation accuracy of the FOG and provides theoretical guidance and technical references for sensor error compensation in other fields.
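The cluster-then-regress structure of such a composite model can be sketched with synthetic data: a 1-D k-means pass splits the samples into temperature regimes, then a per-cluster model maps temperature to bias for subtraction. A plain linear fit stands in for the SVR stage, and PSO hyper-parameter tuning is omitted; the piecewise-linear bias law below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
T = np.concatenate([rng.uniform(-20, 0, 200), rng.uniform(20, 60, 200)])
bias = np.where(T < 10, 0.01 * T, 0.004 * T + 0.05) + rng.normal(0, 1e-3, T.size)

# Stage 1: 1-D k-means (k=2) groups samples into temperature regimes.
centers = np.array([T.min(), T.max()])
for _ in range(20):
    labels = np.argmin(np.abs(T[:, None] - centers[None, :]), axis=1)
    centers = np.array([T[labels == k].mean() for k in (0, 1)])

# Stage 2: per-cluster fit of bias vs temperature, subtracted to compensate.
compensated = bias.copy()
for k in (0, 1):
    m = labels == k
    slope, intercept = np.polyfit(T[m], bias[m], 1)
    compensated[m] -= slope * T[m] + intercept
```

Fitting each temperature regime separately is what lets a simple regressor follow a bias law whose slope changes between regimes, mirroring the paper's motivation for clustering before regression.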
Monte Carlo (MC) simulations have been performed to refine the estimation of the correction-to-scaling exponent ω in the 2D φ^(4) model, which belongs to one of the most fundamental universality classes. If corrections have the form ∝ L^(-ω), then we find ω=1.546(30) and ω=1.509(14) as the best estimates. These are obtained from the finite-size scaling of the susceptibility data in the range of linear lattice sizes L ∈ [128,2048] at the critical value of the Binder cumulant, and from the scaling of the corresponding pseudocritical couplings within L ∈ [64,2048]. These values agree with several other MC estimates under the assumption of power-law corrections and are comparable with the known results of the ε-expansion. In addition, we have tested the consistency with scaling corrections of the form ∝ L^(-4/3), ∝ L^(-4/3) ln L, and ∝ L^(-4/3)/ln L, which might be expected from some considerations of the renormalization group and the Coulomb gas model. The latter option is consistent with our MC data. Our MC results served as a basis for a critical reconsideration of some earlier theoretical conjectures and scaling assumptions. In particular, we have corrected and refined our previous analysis based on grouping Feynman diagrams. The renewed analysis gives ω≈4-d-2η as an approximation for spatial dimensions d<4, i.e., ω≈1.5 in two dimensions.
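The kind of fit behind such estimates can be sketched as a grid search over ω, solving linearly for the remaining amplitudes at each candidate value and keeping the ω with the smallest residual. Synthetic data with a known ω = 1.5 and the exact 2D Ising ratio γ/ν = 7/4 are used for illustration, not the paper's data:

```python
import numpy as np

# Synthetic susceptibility data with a known correction-to-scaling term:
# chi(L) = c * L^(gamma/nu) * (1 + a * L^(-omega)), gamma/nu = 7/4 in 2D.
Ls = np.array([64, 128, 256, 512, 1024, 2048], dtype=float)
c_true, a_true, omega_true = 0.9, 0.5, 1.5
chi = c_true * Ls**1.75 * (1 + a_true * Ls**(-omega_true))

def residual_at(omega):
    """Fix omega, solve linearly for the two amplitudes, return the residual."""
    y = chi / Ls**1.75
    X = np.column_stack([np.ones_like(Ls), Ls**(-omega)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ coef
    return float(r @ r)

grid = np.arange(1.0, 2.001, 0.01)
omega_best = float(grid[np.argmin([residual_at(w) for w in grid])])
```

With real MC data the residual is weighted by the statistical errors of χ(L), and the quoted uncertainty on ω comes from the curvature of this residual around its minimum.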
Geometric error, mainly due to imperfect geometry and dimensions of machine components, is one of the major error sources of machine tools. Because geometric error has significant effects on the machining quality of manufactured parts, it has been a popular topic of academic and industrial research for many years, and a great deal of work has been carried out since the 1970s to solve the problem and improve machining accuracy. Researchers have studied how to measure, detect, model, identify, reduce, and compensate for geometric errors. This paper presents a thorough review of the latest research activities and gives an overview of the state of the art in understanding changes in machine tool performance due to geometric errors. Recent advances in measuring the geometric errors of machine tools are summarized, and different error identification methods for translational axes and rotary axes are illustrated. In addition, volumetric geometric error modeling, tracing, and compensation techniques for five-axis machine tools are introduced in detail. Finally, research challenges in improving the volumetric accuracy of machine tools are highlighted.
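Volumetric error modeling of the kind reviewed here is commonly built on homogeneous transformation matrices: each axis contributes a small-error transform, and composing them propagates the individual errors to the tool point. A minimal small-angle sketch with two axes and invented error values:

```python
import numpy as np

def htm(dx, dy, dz, ex, ey, ez):
    """Homogeneous transform for small geometric errors: three translational
    errors (dx, dy, dz) and three small-angle rotational errors (ex, ey, ez)."""
    return np.array([[1.0, -ez,  ey, dx],
                     [ez,  1.0, -ex, dy],
                     [-ey, ex,  1.0, dz],
                     [0.0, 0.0, 0.0, 1.0]])

# Compose per-axis error transforms to get the volumetric error at the tool
# point (illustrative error magnitudes, small-angle approximation).
E_x = htm(5e-6, 2e-6, 1e-6, 1e-6, 3e-6, 2e-6)
E_y = htm(1e-6, 4e-6, 2e-6, 2e-6, 1e-6, 1e-6)
tool = np.array([0.0, 0.0, 100.0, 1.0])      # nominal tool point (mm)
actual = E_x @ E_y @ tool
volumetric_error = actual[:3] - tool[:3]
```

Note how a micro-radian angular error is amplified by the 100 mm lever arm into a much larger positional error, which is why angular errors dominate volumetric accuracy on large machines.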
Predicting wind speed accurately is essential to ensure the stability of the wind power system and improve the utilization rate of wind energy. However, owing to the stochastic and intermittent nature of wind speed, accurate prediction is difficult. A new hybrid deep learning model based on empirical wavelet transform, recurrent neural networks, and error correction is proposed in this paper for short-term wind speed prediction. The empirical wavelet transform is applied to decompose the original wind speed series. A long short-term memory (LSTM) network and an Elman neural network are adopted to predict the low-frequency and high-frequency wind speed sub-layers, respectively, balancing calculation efficiency and prediction accuracy. An error correction strategy based on a deep LSTM network is developed to modify the prediction errors. Four actual wind speed series are utilized to verify the effectiveness of the proposed model. The empirical results indicate that the proposed method achieves satisfactory performance in wind speed prediction.
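The error-correction idea, forecast with a base model, then model the base model's own errors and add the predicted error back, can be shown on synthetic data. A deliberately crude persistence forecaster stands in for the EWT + LSTM/Elman stage, and an AR(1) fit on past errors stands in for the deep-LSTM error-correction network:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(300)
wind = 8 + 2 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 0.3, t.size)

# Stage 1: persistence base forecast, x_hat[t+1] = x[t].
base_pred = wind[:-1]
errors = wind[1:] - base_pred            # base-model prediction errors

# Stage 2: error correction. Fit AR(1) to the error series and add the
# predicted error back onto the base forecast.
phi = (errors[:-1] @ errors[1:]) / (errors[:-1] @ errors[:-1])
corrected = base_pred[1:] + phi * errors[:-1]
truth = wind[2:]

def rmse(pred, target):
    return float(np.sqrt(np.mean((pred - target) ** 2)))
```

Because the base model's errors are serially correlated (the sinusoidal trend leaks into them), even this one-parameter correction lowers the RMSE, which is the same mechanism the deep error-correction network exploits.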
Errors inevitably exist in numerical weather prediction (NWP) due to imperfect numerics and physical parameterizations. To eliminate these errors, NWP can be treated as an inverse problem: an unknown term in the prediction equations is estimated inversely from past data, which are presumed to represent the imperfection of the NWP model (the model error, denoted ME). In this first paper of a two-part series, an iteration method for obtaining the MEs in past intervals is presented, together with results from testing its convergence in idealized experiments. Moreover, two batches of iteration tests were applied to the global forecast system of the Global and Regional Assimilation and Prediction System (GRAPES-GFS) for July–August 2009 and January–February 2010. The datasets for the initial conditions and sea surface temperature (SST) were both based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results showed that the 6-h forecast errors were reduced to 10% of their original value after a 20-step iteration. Off-line forecast error corrections were then estimated linearly from the 2-month mean MEs and compared with the forecast errors. The estimated corrections agreed well with the forecast errors, but the linear growth rate of the estimation was steeper than that of the forecast error. The advantage of this iteration method is that the MEs provide the foundation for online correction: a larger proportion of the forecast errors can be expected to be canceled out by properly introducing the model error correction into GRAPES-GFS.
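The iteration idea can be demonstrated on a toy analogue: the "truth" obeys x' = -x + f, the model omits the forcing f, and a constant ME estimate is refined by repeatedly nudging it with the mean forecast misfit over a past interval. The dynamics, step sizes, and relaxation factor are all invented for illustration:

```python
# Toy analogue of iterative model-error (ME) estimation.
dt, n_steps, f_true = 0.1, 60, 0.8

def integrate(x0, f):
    x, traj = x0, [x0]
    for _ in range(n_steps):
        x = x + dt * (-x + f)            # forward-Euler model step + ME term
        traj.append(x)
    return traj

obs = integrate(1.0, f_true)             # "past data" from the true system

f_est, relax = 0.0, 0.5
for _ in range(20):
    pred = integrate(1.0, f_est)
    misfit = sum(o - p for o, p in zip(obs, pred)) / len(obs)
    f_est += relax * misfit              # nudge the ME estimate by the misfit
```

Each pass shrinks the misfit by a roughly constant factor, which is the geometric convergence the idealized experiments above verify before the scheme is applied to GRAPES-GFS.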
Using an error correction model, this paper conducts a co-integration analysis of the relationship between the per capita real consumption and per capita real disposable income of urban residents in Hunan Province from 1978 to 2009. The results show that a co-integration relationship exists between the per capita real consumption and the per capita real disposable income of urban residents, and on this basis the corresponding error correction model is established. Finally, corresponding countermeasures and suggestions are put forward: broaden the income channels of urban residents, create a good consumption environment, and improve the social security system.
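The standard two-step construction of such an error correction model (Engle–Granger) can be sketched on synthetic consumption/income series; the data-generating coefficients are invented, not the Hunan estimates:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
income = np.cumsum(rng.normal(0.5, 1.0, n))          # nonstationary I(1) income
consumption = 0.8 * income + rng.normal(0, 0.5, n)   # cointegrated with income

# Step 1: long-run (cointegration) regression C_t = a + b*Y_t + u_t.
X = np.column_stack([np.ones(n), income])
a, b = np.linalg.lstsq(X, consumption, rcond=None)[0]
u = consumption - (a + b * income)                   # equilibrium error

# Step 2: ECM on first differences with the lagged equilibrium error;
# a negative coefficient on u[t-1] means deviations from the long-run
# relationship are corrected over time.
dC, dY = np.diff(consumption), np.diff(income)
Xe = np.column_stack([np.ones(n - 1), dY, u[:-1]])
const, short_run, ecm_coef = np.linalg.lstsq(Xe, dC, rcond=None)[0]
```

In applied work, step 1 is preceded by unit-root tests on each series and followed by a stationarity test on the residual u before the ECM is interpreted.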
Raman spectroscopy has found extensive use in monitoring and controlling cell culture processes. In this context, the prediction accuracy of Raman-based models is of paramount importance. However, models established with data from manually fed-batch cultures often exhibit poor performance in Raman-controlled cultures, so effective methods are needed to rectify these models. The objective of this paper is to investigate the efficacy of the Kalman filter (KF) algorithm in correcting Raman-based models during cell culture. Initially, partial least squares (PLS) models for different components were constructed using data from manually fed-batch cultures, and the predictive performance of these models was compared. Subsequently, various correction methods, including the PLS-KF-KF method proposed in this study, were employed to refine the PLS models. Finally, a case study involving the automatic control of glucose concentration demonstrated the application of the optimal model correction method. The results indicated that the original PLS models performed differently in manually fed-batch cultures and Raman-controlled cultures: for glucose, the root mean square error of prediction (RMSEP) was 0.23 g·L^(-1) in the manually fed-batch culture and 0.40 g·L^(-1) in the Raman-controlled culture. With the implementation of model correction methods, there was a significant improvement in model performance within Raman-controlled cultures: the RMSEP for glucose from updating-PLS, KF-PLS, and PLS-KF-KF was 0.38, 0.36, and 0.17 g·L^(-1), respectively. Notably, the proposed PLS-KF-KF model correction method was found to be more effective and stable, playing a vital role in the automated nutrient feeding of cell cultures.
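The Kalman-filter building block used by such correction schemes can be sketched in scalar form: propagate the state with the model's predicted increment, then blend in a reference measurement when one becomes available. The noise variances Q and R, and the toy values, are illustrative, not the paper's tuning:

```python
def kf_step(x, P, model_increment, z=None, Q=0.01, R=0.04):
    """One scalar Kalman-filter step: predict with the model's increment,
    then (optionally) correct with a reference measurement z."""
    x, P = x + model_increment, P + Q        # predict
    if z is not None:                        # update only when a sample exists
        K = P / (P + R)                      # Kalman gain
        x, P = x + K * (z - x), (1 - K) * P
    return x, P

# A stream of zero model increments repeatedly corrected toward a reference
# value of 1.0: the state converges to the reference as the gain settles.
x, P = 0.0, 1.0
for _ in range(15):
    x, P = kf_step(x, P, model_increment=0.0, z=1.0)
```

The gain K automatically weights the model against the reference by their respective uncertainties, which is what makes the KF attractive for reconciling drifting spectroscopic predictions with sparse offline assays.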
This study explores the application of single photon detection (SPD) technology in underwater wireless optical communication (UWOC) and analyzes the influence of different modulation modes and error correction coding types on communication performance. The study investigates the impact of on-off keying (OOK) and 2-pulse-position modulation (2-PPM) on the bit error rate (BER) in single-channel intensity and polarization multiplexing, and compares the error correction performance of low-density parity check (LDPC) and Reed–Solomon (RS) codes. The effects of the unscattered photon ratio and the depolarization ratio on the BER are also verified. Finally, a UWOC system based on SPD is constructed, achieving 14.58 Mbps with polarization-multiplexed OOK modulation and 4.37 Mbps with polarization-multiplexed 2-PPM modulation using LDPC error correction.
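For a photon-counting receiver, the 2-PPM decision rule is simply "which slot collected more photons", and its BER can be estimated by Monte Carlo with Poisson statistics. The signal and background photon numbers below are illustrative, not the experiment's link budget:

```python
import math, random

def ppm2_ber(mean_signal, mean_bg, n_bits=20000, seed=4):
    """Monte-Carlo BER of 2-PPM with Poisson photon counting: the pulse is
    sent in one of two slots and the receiver picks the fuller slot."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplication method; adequate for the small means used here.
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    errors = 0
    for _ in range(n_bits):
        bit = rng.getrandbits(1)
        slots = [poisson(mean_bg), poisson(mean_bg)]   # background counts
        slots[bit] += poisson(mean_signal)             # signal photons in the pulsed slot
        if slots[0] == slots[1]:
            guess = rng.getrandbits(1)                 # break ties randomly
        else:
            guess = 1 if slots[1] > slots[0] else 0
        errors += (guess != bit)
    return errors / n_bits
```

The same harness extends to OOK by comparing one slot's count to a threshold, which is how the modulation comparison above would be reproduced in simulation.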
To solve the problem that the external systematic errors of an optical camera cannot be fully estimated with limited computing resources, a unified dimensionality-reduction representation of the camera's external systematic errors is proposed, and autonomous relative optical navigation is realized. The camera translational and misalignment errors are converted into a three-dimensional rotation error, whose differential model can be established through specific attitude control and appropriate assumptions. The rotation error and the relative motion state are then jointly estimated in an augmented Kalman filter framework. Compared with the traditional method that estimates the camera translational and misalignment errors directly, the proposed method reduces the computational complexity because the estimated state dimension is reduced. Furthermore, as demonstrated by numerical simulation, the estimation accuracy is improved significantly.
At present, one of the methods used to determine the height of points on the Earth's surface is Global Navigation Satellite System (GNSS) leveling. Orthometric or normal heights can be determined by this method only if a geoid or quasi-geoid height model is available. This paper proposes a methodology for the local correction of the heights of high-order global geoid models such as EGM08, EIGEN-6C4, GECO, and XGM2019e_2159. The methodology was tested in different areas of the research field, covering various relief forms. The dependence of the corrected height accuracy on the input data was analyzed, and the correction was also performed for model heights in three tidal systems: "tide free", "mean tide", and "zero tide". The results show that the heights of the EIGEN-6C4 model can be corrected with an accuracy of up to 1 cm for flat and foothill terrains over areas of 1°×1°, 2°×2°, and 3°×3°; the EGM08 model gives an almost identical result. The EIGEN-6C4 model is best suited for mountainous relief and provides an accuracy of 1.5 cm over a 1°×1° area. The height correction accuracy of the GECO and XGM2019e_2159 models is somewhat poorer, showing noticeable numerical fluctuation.
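A common way to realize such a local correction is to fit a low-order corrector surface to the differences between GNSS/leveling heights and model geoid heights at control points, then apply it to new points. A planar corrector and synthetic control points are assumed below; the paper's corrector may use a different surface order:

```python
import numpy as np

rng = np.random.default_rng(5)
n_pts = 30
lon, lat = rng.uniform(0, 1, n_pts), rng.uniform(0, 1, n_pts)
true_corr = 0.05 + 0.02 * lon - 0.03 * lat             # metres, illustrative
diff = true_corr + rng.normal(0, 0.005, n_pts)         # GNSS/leveling minus model

# Least-squares planar corrector fitted to the control-point differences.
A = np.column_stack([np.ones(n_pts), lon, lat])
coef = np.linalg.lstsq(A, diff, rcond=None)[0]

def corrected_height(model_height, lo, la):
    """Model geoid height plus the locally fitted correction."""
    return model_height + coef[0] + coef[1] * lo + coef[2] * la
```

Before fitting, all model heights must be converted to a single tidal system ("tide free", "mean tide", or "zero tide"), otherwise the centimetre-level tidal differences contaminate the corrector.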
An externally generated resonant magnetic perturbation can induce complex non-ideal MHD responses at its resonant surfaces. We studied the plasma responses using Fitzpatrick's improved two-fluid model and the program LAYER, and calculated the error field penetration threshold for J-TEXT. In addition, we find that the island width increases slightly as the error field amplitude increases while the amplitude is below the critical penetration value; once penetration occurs, however, the island width suddenly jumps to a large value because the shielding effect of the plasma against the error field disappears. By scanning the natural mode frequency, we find that the plasma's shielding effect decreases as the natural mode frequency decreases. Finally, we obtain the scaling of the m/n=2/1 penetration threshold with density and temperature.
Funding (Ti-6554 hot deformation study): National Key R&D Program of China (2022YFB3706901); National Natural Science Foundation of China (52274382); Key Research and Development Program of Hubei Province (2022BAA024).
Funding (SBCS/CMA-MESO study): supported by the National Natural Science Foundation of China (Grant Nos. U2242213, U2142213, 42305167, 42175105).
Funding (PLANAR decoder study): supported by the National Natural Science Foundation of China (Grant Nos. 12325501, 12047503, and 12247104) and the Chinese Academy of Sciences (Grant No. ZDRW-XX-2022-3-02); P.Z. is partially supported by the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0301900).
Funding (transformer surface-code decoder study): supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR2021MF049), the Joint Fund of the Natural Science Foundation of Shandong Province (Grant Nos. ZR2022LLZ012 and ZR2021LLZ001), and the Key R&D Program of Shandong Province, China (Grant No. 2023CXGC010901).
Funding: Supported by the National Natural Science Foundation of China (No. 42174011).
Abstract: In the variance component estimation (VCE) of geodetic data, negative variance components are likely to occur. In the ordinary additive error model, related studies have addressed the problem of negative variance components; however, no such research exists for the mixed additive and multiplicative random error model (MAMREM). This paper applies the nonnegative least squares variance component estimation (NNLS-VCE) algorithm to MAMREM. The relevant formulas and the iterative algorithm of NNLS-VCE for MAMREM are derived, solving the problem of negative variance components in VCE for MAMREM. A digital simulation example and a Digital Terrain Model (DTM) are used to verify the validity of the proposed algorithm. The experimental results demonstrate that the proposed algorithm can effectively correct the VCE in MAMREM when negative variance components arise.
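The core constraint in NNLS-VCE — variance components must stay nonnegative — can be sketched with a generic toy. The snippet below is an assumed illustration (projected gradient descent on an ordinary least-squares problem, not the paper's MAMREM derivation): the unconstrained fit would make the second component negative, and the projection onto x ≥ 0 pins it at zero instead.

```python
# Assumed toy: nonnegative least squares via projected gradient descent,
# min ||A x - b||^2 subject to x >= 0 (stands in for the nonnegativity
# handling in NNLS-VCE; the actual algorithm and formulas differ).
def nnls_pg(A, b, iters=20000, lr=1e-3):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i]
             for i in range(len(A))]
        # gradient g = 2 A^T r
        g = [2 * sum(A[i][j] * r[i] for i in range(len(A)))
             for j in range(n)]
        # gradient step, then projection onto the nonnegative orthant
        x = [max(0.0, x[j] - lr * g[j]) for j in range(n)]
    return x

# The unconstrained fit of b = x0 + x1*t over t = 1, 2, 3 would give a
# negative slope here; the projection pins it at zero instead.
A = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
b = [2.0, 1.5, 1.0]
print([round(v, 3) for v in nnls_pg(A, b)])  # → [1.5, 0.0]
```

With the slope clamped at zero, the intercept settles at the mean of b, which is exactly the constrained (KKT) optimum for this toy problem.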
Funding: Supported in part by the National Natural Science Foundation of China under grant number 31901239, and funded by the Researchers Supporting Project (No. RSPD2025R947), King Saud University, Riyadh, Saudi Arabia.
Abstract: The correction of Light Detection and Ranging (LiDAR) intensity data is of great significance for enhancing its application value. However, traditional intensity correction methods based on Terrestrial Laser Scanning (TLS) technology rely on manual site setup to collect intensity training data at different distances and incidence angles; such data are noisy and limited in sample quantity, restricting the improvement of model accuracy. To overcome this limitation, this study proposes a fine-grained intensity correction modeling method based on Mobile Laser Scanning (MLS) technology. The method exploits the continuous scanning characteristics of MLS technology to obtain dense point cloud intensity data at various distances and incidence angles. A fine-grained screening strategy is then employed to accurately select distance-intensity and incidence angle-intensity modeling samples. Finally, based on these samples, a high-precision intensity correction model is established through polynomial fitting. To verify the effectiveness of the proposed method, comparative experiments were designed, and the MLS modeling method was validated against the traditional TLS modeling method on the same test sets. The results show that on Test Set 1, where the distance values vary widely (0.1–3 m), the intensity consistency after correction using the MLS modeling method reached 7.692 times that of the original intensity, while the traditional TLS modeling method only reached 4.630 times. On Test Set 2, where the incidence angle values vary widely (0°–80°), the MLS modeling method, although with a smaller advantage, still improved the intensity consistency to 3.937 times that of the original intensity, slightly better than the TLS modeling method's 3.413 times. These results demonstrate the significant advantage of the proposed modeling method in enhancing the accuracy of intensity correction models.
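The distance branch of a polynomial intensity correction can be sketched as follows. This is a hypothetical illustration with synthetic samples (the paper's actual fitting functions, sample screening, and data are not reproduced here): fit a quadratic to intensity versus distance, then normalise every return to a reference distance so intensities become comparable.

```python
# Hypothetical sketch of polynomial distance-intensity correction:
# fit I(d) with a quadratic and rescale each return to d_ref.
def polyfit2(xs, ys):
    # Solve the 3x3 normal equations for y = c0 + c1 x + c2 x^2
    # by Gaussian elimination with partial pivoting.
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * 3
    for i in (2, 1, 0):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef

def correct(intensity, d, coeffs, d_ref=1.0):
    # Rescale the raw return to what it would read at d_ref.
    f = lambda x: coeffs[0] + coeffs[1] * x + coeffs[2] * x * x
    return intensity * f(d_ref) / f(d)

# Synthetic dense samples of one target at several distances (assumed).
ds = [0.5, 1.0, 1.5, 2.0, 2.5]
Is = [180.0, 150.0, 128.0, 112.0, 102.0]
c = polyfit2(ds, Is)
# After correction, all samples collapse towards the reference value.
print([round(correct(i, d, c), 1) for i, d in zip(Is, ds)])
# → [150.7, 150.0, 150.7, 150.9, 150.3]
```

The spread of the corrected values relative to the raw ones is the kind of "intensity consistency" ratio the abstract reports; the incidence-angle branch would be handled with an analogous fit.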
Funding: Supported by the National Natural Science Foundation of China (No. 62375013).
Abstract: As core components of inertial navigation systems, fiber optic gyroscopes (FOGs), with technical advantages such as low power consumption, long lifespan, fast startup, and flexible structural design, are widely used in aerospace, unmanned driving, and other fields. However, because optical devices are temperature sensitive, environmental temperature introduces errors into the FOG output and greatly limits its accuracy. This work investigates machine-learning based temperature error compensation techniques for FOGs, focusing on compensating the bias errors generated in the fiber ring by the Shupe effect. A composite model based on k-means clustering, support vector regression, and particle swarm optimization is proposed, and redundancy within the samples is significantly reduced by adopting interval sequence samples. Metrics such as root mean square error (RMSE), mean absolute error (MAE), bias stability, and Allan variance are selected to evaluate the model's performance and compensation effectiveness. The proposed approach effectively enhances the consistency between data and models across different temperature ranges and temperature gradients, improving the bias stability of the FOG from 0.022 °/h to 0.006 °/h. Compared with existing methods based on a single machine learning model, the proposed method increases the improvement in bias stability of the compensated FOG from 57.11% to 71.98% and enhances the suppression of the rate ramp noise coefficient from 2.29% to 14.83%. This work improves the accuracy of the compensated FOG and provides theoretical guidance and technical references for sensor error compensation in other fields.
Abstract: Monte Carlo (MC) simulations have been performed to refine the estimation of the correction-to-scaling exponent ω in the 2D φ^4 model, which belongs to one of the most fundamental universality classes. If corrections have the form ∝ L^(−ω), then we find ω = 1.546(30) and ω = 1.509(14) as the best estimates. These are obtained from the finite-size scaling of the susceptibility data in the range of linear lattice sizes L ∈ [128, 2048] at the critical value of the Binder cumulant, and from the scaling of the corresponding pseudocritical couplings within L ∈ [64, 2048]. These values agree with several other MC estimates under the assumption of power-law corrections and are comparable with the known results of the ε-expansion. In addition, we have tested the consistency with scaling corrections of the form ∝ L^(−4/3), ∝ L^(−4/3) ln L, and ∝ L^(−4/3)/ln L, which might be expected from some considerations of the renormalization group and the Coulomb gas model. The latter option is consistent with our MC data. Our MC results served as a basis for a critical reconsideration of some earlier theoretical conjectures and scaling assumptions. In particular, we have corrected and refined our previous analysis based on grouping Feynman diagrams. The renewed analysis gives ω ≈ 4 − d − 2η as an approximation for spatial dimensions d < 4, i.e., ω ≈ 1.5 in two dimensions.
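The finite-size-scaling logic can be checked on synthetic data. Assuming the correction form χ(L) = C·L^(γ/ν)·(1 + a·L^(−ω)) with the 2D Ising value γ/ν = 7/4, successive differences of y(L) = χ·L^(−7/4) at sizes L, 2L, 4L have a ratio of exactly 2^ω, so ω can be read off directly. This is a toy consistency check on noiseless synthetic data, not the paper's fitting procedure:

```python
# Toy extraction of a correction-to-scaling exponent from synthetic
# susceptibility data chi(L) = C * L^(7/4) * (1 + a * L^(-omega)).
# With y(L) = chi * L^(-7/4), successive differences obey
# (y(L) - y(2L)) / (y(2L) - y(4L)) = 2^omega.
import math

C, a, omega_true = 1.3, 0.8, 1.5    # assumed synthetic parameters
chi = {L: C * L ** 1.75 * (1 + a * L ** -omega_true)
       for L in (128, 256, 512)}
y = {L: chi[L] / L ** 1.75 for L in chi}
ratio = (y[128] - y[256]) / (y[256] - y[512])
omega_est = math.log2(ratio)
print(round(omega_est, 6))  # → 1.5
```

With real MC data the differences are noisy and the leading amplitude must be fitted too, which is why the paper quotes ω with error bars from fits over a wide range of L rather than from a single size triple.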
Funding: Supported by the National Natural Science Foundation of China (Nos. 52005413 and 52022082), the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2021JM-054), and the Fundamental Research Funds for the Central Universities (No. D5000220135).
Abstract: Geometric error, arising mainly from imperfect geometry and dimensions of machine components, is one of the major error sources of machine tools. Because geometric error has significant effects on the machining quality of manufactured parts, it has been a popular topic in academic and industrial research for many years. A great deal of work has been carried out since the 1970s to address the problem and improve machining accuracy. Researchers have studied how to measure, detect, model, identify, reduce, and compensate geometric errors. This paper presents a thorough review of the latest research activities and gives an overview of the state of the art in understanding changes in machine tool performance due to geometric errors. Recent advances in measuring the geometric errors of machine tools are summarized, and different kinds of error identification methods for translational axes and rotary axes are illustrated respectively. In addition, volumetric geometric error modeling, tracing, and compensation techniques for five-axis machine tools are introduced in detail. Finally, research challenges for improving the volumetric accuracy of machine tools are highlighted.
Funding: Supported by the Gansu Province Soft Scientific Research Projects (No. 2015GS06516) and the Funds for Distinguished Young Scientists of Lanzhou University of Technology, China (No. J201304).
Abstract: Predicting wind speed accurately is essential to ensure the stability of the wind power system and improve the utilization rate of wind energy. However, owing to the stochastic and intermittent nature of wind speed, accurate prediction is difficult. A new hybrid deep learning model based on empirical wavelet transform, recurrent neural networks, and error correction is proposed in this paper for short-term wind speed prediction. The empirical wavelet transform is applied to decompose the original wind speed series. A long short-term memory network and an Elman neural network are adopted to predict the low-frequency and high-frequency wind speed sub-layers respectively, balancing computational efficiency and prediction accuracy. An error correction strategy based on a deep long short-term memory network is developed to modify the prediction errors. Four actual wind speed series are used to verify the effectiveness of the proposed model. The empirical results indicate that the proposed method delivers satisfactory performance in wind speed prediction.
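The error correction strategy — predict with a base model, model the base model's errors, then subtract the predicted error — can be shown in miniature. The sketch below is an assumed toy (a persistence forecast plus an AR(1) error model on a synthetic sine-like series), standing in for the paper's LSTM-based components:

```python
# Toy error-correction pipeline on a synthetic "wind speed" series:
# base forecast -> fit a model of its errors -> corrected forecast.
import math

series = [8 + 2 * math.sin(0.3 * t) for t in range(60)]

# Base model: persistence forecast, y_hat(t) = y(t-1).
base = [series[t - 1] for t in range(1, 60)]
errors = [series[t] - base[t - 1] for t in range(1, 60)]

# Error model: AR(1) fitted on the first 40 errors, e(t) ~ phi * e(t-1).
train = errors[:40]
phi = sum(train[i] * train[i - 1] for i in range(1, 40)) / sum(
    e * e for e in train[:-1])

def rmse(pred, truth):
    return math.sqrt(sum((p - y) ** 2 for p, y in zip(pred, truth))
                     / len(truth))

# Hold-out evaluation: add the predicted error to the base forecast.
truth = series[41:]
raw = base[40:]
corrected = [raw[i] + phi * errors[40 + i - 1] for i in range(len(raw))]
print(rmse(corrected, truth) < rmse(raw, truth))  # → True
```

The hybrid model in the abstract follows the same pattern, except that the signal is first decomposed by the empirical wavelet transform and both the sub-layer predictors and the error model are recurrent networks.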
Funding: Funded by the National Natural Science Foundation Science Fund for Youth (Grant No. 41405095), the Key Projects in the National Science and Technology Pillar Program during the Twelfth Five-Year Plan Period (Grant No. 2012BAC22B02), and the National Natural Science Foundation Science Fund for Creative Research Groups (Grant No. 41221064).
Abstract: Errors inevitably exist in numerical weather prediction (NWP) due to imperfect numerics and physical parameterizations. To eliminate these errors, NWP can be treated as an inverse problem: an unknown term in the prediction equations is estimated inversely from past data, which are presumed to represent the imperfection of the NWP model (the model error, denoted ME). In this first paper of a two-part series, an iteration method for obtaining the MEs in past intervals is presented, and results from testing its convergence in idealized experiments are reported. Moreover, two batches of iteration tests were applied in the global forecast system of the Global and Regional Assimilation and Prediction System (GRAPES-GFS) for July–August 2009 and January–February 2010. The datasets for the initial conditions and sea surface temperature (SST) were both based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results showed that the 6-h forecast errors were reduced to 10% of their original value after a 20-step iteration. Off-line forecast error corrections were then estimated linearly based on the 2-month mean MEs and compared with the forecast errors. The estimated corrections agreed well with the forecast errors, although the linear growth rate of the estimation was steeper than that of the forecast error. The advantage of this iteration method is that the MEs can provide the foundation for online correction. A larger proportion of the forecast errors can be expected to be canceled out by properly introducing the model error correction into GRAPES-GFS.
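The iteration idea can be reduced to a scalar caricature. In the assumed toy below (not the GRAPES-GFS equations), the truth evolves as x ← a·x + s with a hidden forcing s, the model omits s, and a constant ME term m is repeatedly nudged by the mean per-step forecast error over a past window until the window forecast error vanishes:

```python
# Idealised scalar sketch of iterative model-error (ME) estimation:
# update a constant ME term from the forecast error over past data.
def step(x, a, forcing):
    return a * x + forcing

a, s, n = 0.9, 0.5, 6          # dynamics, hidden forcing, window length
x0 = 1.0

# "Past data": the true trajectory over the window.
truth = x0
for _ in range(n):
    truth = step(truth, a, s)

m = 0.0                         # initial ME guess
for _ in range(20):             # a 20-step iteration, as in the abstract
    fcst = x0
    for _ in range(n):
        fcst = step(fcst, a, m)
    m += (truth - fcst) / n     # nudge ME by the mean per-step error

print(round(m, 4))  # → 0.5 (the hidden forcing is recovered)
```

Each iteration shrinks m − s by a fixed factor, so convergence is geometric; the recovered ME can then be fed back into the model as an online correction, which is the strategy the two-part series pursues.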
Funding: Supported by the Scientific Research Subject of the Department of Education of Hunan Province (10C0556).
Abstract: Using an error correction model, I conduct a co-integration analysis of the relationship between the per capita real consumption and per capita real disposable income of urban residents in Hunan Province from 1978 to 2009. The results show that a co-integration relationship exists between the per capita real consumption and the per capita real disposable income of urban residents, and on this basis the corresponding error correction model is established. Finally, the following countermeasures and suggestions are put forward: broaden the income channels of urban residents, create a sound consumption environment, and improve the social security system.
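The standard route from a co-integration test to an error correction model is the Engle–Granger two-step procedure. The abstract does not spell out its estimation steps, so the following is a generic sketch on synthetic data (not the Hunan series): consumption c_t co-integrated with income y_t, a long-run regression whose residuals form the error-correction term, and a second regression showing the mean-reverting (negative) adjustment coefficient.

```python
# Engle-Granger two-step sketch on synthetic co-integrated data
# (assumed parameters; illustrates the ECM mechanics only).
import random

random.seed(1)
y, c = [10.0], [7.0]
for t in range(1, 200):
    y.append(y[-1] + 0.1 + random.gauss(0, 0.05))   # trending income
    c.append(0.7 * y[-1] + random.gauss(0, 0.05))   # long-run relation

def ols(xs, ys):
    # Simple-regression intercept and slope.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - v_m) * (v - my) for x, v, v_m in
            ((x, v, mx) for x, v in zip(xs, ys))) / sum(
        (x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Step 1: long-run (co-integrating) regression; keep the residuals.
a0, a1 = ols(y, c)
ect = [ci - a0 - a1 * yi for yi, ci in zip(y, c)]

# Step 2: regress the change in consumption on the lagged residual;
# a negative coefficient is the error-correction term pulling the
# system back to the long-run relationship.
dc = [c[t] - c[t - 1] for t in range(1, 200)]
_, gamma = ols(ect[:-1], dc)
print(round(a1, 2), gamma < 0)  # → 0.7 True
```

In the paper's setting, a1 would be the long-run marginal propensity to consume and gamma the speed at which consumption corrects deviations from its income-determined equilibrium.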
Funding: Supported by the Key Research and Development Program of Zhejiang Province, China (2023C03116).
Abstract: Raman spectroscopy has found extensive use in monitoring and controlling cell culture processes. In this context, the prediction accuracy of Raman-based models is of paramount importance. However, models established with data from manually fed-batch cultures often perform poorly in Raman-controlled cultures, so effective methods are needed to correct these models. The objective of this paper is to investigate the efficacy of the Kalman filter (KF) algorithm in correcting Raman-based models during cell culture. Initially, partial least squares (PLS) models for different components were constructed using data from manually fed-batch cultures, and the predictive performance of these models was compared. Subsequently, various correction methods, including the PLS-KF-KF method proposed in this study, were employed to refine the PLS models. Finally, a case study involving the automatic control of glucose concentration demonstrated the application of the optimal model correction method. The results indicated that the original PLS models performed differently in manually fed-batch and Raman-controlled cultures. For glucose, the root mean square error of prediction (RMSEP) was 0.23 g·L^(-1) for the manually fed-batch culture and 0.40 g·L^(-1) for the Raman-controlled culture. With the implementation of model correction methods, model performance within Raman-controlled cultures improved significantly: the RMSEP for glucose from updating-PLS, KF-PLS, and PLS-KF-KF was 0.38, 0.36, and 0.17 g·L^(-1), respectively. Notably, the proposed PLS-KF-KF correction method was found to be more effective and stable, playing a vital role in the automated nutrient feeding of cell cultures.
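The basic mechanism of Kalman-filter correction — blending a noisy model prediction with a process model of how the state should evolve — can be shown with a scalar filter. This is a minimal assumed sketch (a constant-consumption glucose model with synthetic noise), not the paper's PLS-KF-KF scheme:

```python
# Minimal scalar Kalman filter smoothing noisy "Raman/PLS" glucose
# predictions with an assumed constant-consumption process model.
import random

random.seed(7)
true_g, rate = 6.0, -0.05        # g/L, consumed each step (assumed)
q, r = 0.001, 0.16               # process / measurement noise variances

x, p = 6.0, 1.0                  # filter state and its variance
sq_raw, sq_kf, n = 0.0, 0.0, 200
for _ in range(n):
    true_g += rate
    meas = true_g + random.gauss(0, 0.4)   # raw model prediction
    # Predict step: propagate state and variance through the model.
    x, p = x + rate, p + q
    # Update step: blend prediction and measurement by the Kalman gain.
    k = p / (p + r)
    x, p = x + k * (meas - x), (1 - k) * p
    sq_raw += (meas - true_g) ** 2
    sq_kf += (x - true_g) ** 2

# The filtered estimate tracks the truth far better than the raw output.
print(sq_kf < sq_raw)  # → True
```

The PLS-KF-KF method of the paper applies this filtering idea twice on top of the PLS predictions, which is what drives the RMSEP for glucose down from 0.40 to 0.17 g·L^(-1) in the Raman-controlled culture.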
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 62071441 and 61701464) and in part by the Fundamental Research Funds for the Central Universities (No. 202151006).
Abstract: This study explores the application of single photon detection (SPD) technology in underwater wireless optical communication (UWOC) and analyzes the influence of different modulation modes and error correction coding types on communication performance. The study investigates the impact of on-off keying (OOK) and 2-pulse-position modulation (2-PPM) on the bit error rate (BER) in single-channel intensity and polarization multiplexing. Furthermore, it compares the error correction performance of low-density parity check (LDPC) and Reed-Solomon (RS) codes. The effects of the unscattered photon ratio and the depolarization ratio on the BER are also verified. Finally, a UWOC system based on SPD is constructed, achieving 14.58 Mbps with polarization-multiplexed OOK modulation and 4.37 Mbps with polarization-multiplexed 2-PPM modulation using LDPC error correction.
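How error correction coding buys BER performance can be demonstrated with the simplest possible code. In the assumed toy below, a 3-bit repetition code with majority-vote decoding stands in for the LDPC/RS codes of the paper, on a memoryless binary symmetric channel rather than a modelled underwater channel:

```python
# Toy Monte Carlo: coding gain of a 3-bit repetition code with
# majority voting on a binary symmetric channel (stand-in for the
# LDPC/RS coding and underwater channel studied in the paper).
import random

random.seed(42)

def bsc(bit, p):
    # Flip the bit with probability p (binary symmetric channel).
    return bit ^ (random.random() < p)

p, n = 0.05, 20000
raw_err = cod_err = 0
for _ in range(n):
    bit = random.getrandbits(1)
    # Uncoded transmission.
    raw_err += bsc(bit, p) != bit
    # Repetition-coded transmission with majority-vote decoding.
    votes = sum(bsc(bit, p) for _ in range(3))
    cod_err += (votes >= 2) != bit
print(raw_err / n, cod_err / n)
```

Analytically the coded BER is 3p²(1−p) + p³ ≈ 0.0073 versus the raw 0.05, at the cost of a 3× rate reduction; LDPC and RS codes achieve far better trade-offs, which is why the paper compares them rather than repetition coding.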
Funding: Supported by the National Natural Science Foundation of China (Nos. U20B2055 and 61525301) and the Graduate Research Innovation Projects of Hunan Province, China (No. CX20210013).
Abstract: To solve the problem that the external systematic errors of an optical camera cannot be fully estimated with limited computing resources, a unified dimensionality-reduction representation of the camera's external systematic errors is proposed, and autonomous relative optical navigation is realized. The camera translational and misalignment errors are converted into a three-dimensional rotation error, whose differential model can be established through specific attitude control and appropriate assumptions. The rotation error and the relative motion state are then jointly estimated in an augmented Kalman filter framework. Compared with the traditional method that estimates the camera translational and misalignment errors separately, the proposed method reduces the computational complexity because the dimension of the estimated state is reduced. Furthermore, as demonstrated by numerical simulation, the estimation accuracy is improved significantly.
Funding: The authors thank the International Center for Global Earth Models (ICGEM) for the height anomaly and gravity anomaly data, and the Bureau Gravimetrique International (BGI) for the free-air gravity anomaly data from the World Gravity Map project (WGM2012). The authors are grateful to the Główny Urząd Geodezji i Kartografii of Poland for the height anomaly data of the quasi-geoid PL-geoid2021.
Abstract: At present, one of the methods used to determine the height of points on the Earth's surface is Global Navigation Satellite System (GNSS) leveling. The orthometric or normal height can be determined by this method only if a geoid or quasi-geoid height model is available. This paper proposes a methodology for the local correction of the heights of high-order global geoid models such as EGM08, EIGEN-6C4, GECO, and XGM2019e_2159. The methodology was tested in different areas of the research field, covering various relief forms. The dependence of the corrected height accuracy on the input data was analyzed, and the correction was also performed for model heights in three tidal systems: "tide free", "mean tide", and "zero tide". The results show that the heights of the EIGEN-6C4 model can be corrected with an accuracy of up to 1 cm for flat and foothill terrains over areas of 1°×1°, 2°×2°, and 3°×3°; the EGM08 model gives an almost identical result. The EIGEN-6C4 model is best suited for mountainous relief and provides an accuracy of 1.5 cm over a 1°×1° area. The height correction accuracy of the GECO and XGM2019e_2159 models is somewhat poorer, with larger numerical fluctuations.
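The simplest form of such a local correction can be sketched in a few lines. The example below is a hypothetical one-parameter version (a constant bias fitted at control points, with invented synthetic values rather than the paper's correction surface or data): estimate the mean offset between the global model's height anomaly and GNSS/levelling anomalies, then apply it to new points in the same area.

```python
# Hypothetical one-parameter local correction of a global geoid model:
# remove the mean bias estimated at control points, then check the
# RMS of the remaining differences at independent test points.
import math

# (model height anomaly, GNSS/levelling height anomaly) in metres;
# all values below are synthetic, for illustration only.
control = [(31.426, 31.392), (31.418, 31.385), (31.431, 31.399),
           (31.440, 31.404), (31.422, 31.391)]
bias = sum(m - g for m, g in control) / len(control)

test = [(31.428, 31.395), (31.435, 31.400)]

def rms(pairs, corr=0.0):
    return math.sqrt(sum((m - corr - g) ** 2 for m, g in pairs)
                     / len(pairs))

# The corrected model heights agree with GNSS/levelling far better.
print(round(rms(test), 4), round(rms(test, bias), 4))
```

The paper's methodology generalises this idea to a correction fitted over 1°×1° to 3°×3° areas and evaluates it per model and per tidal system; the cm-level accuracies it reports are RMS figures of exactly this kind.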
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 51821005).
Abstract: An externally generated resonant magnetic perturbation can induce complex non-ideal MHD responses at its resonant surfaces. We have studied the plasma responses using Fitzpatrick's improved two-fluid model and the program LAYER, and calculated the error field penetration threshold for J-TEXT. In addition, we find that the island width increases only slightly with the error field amplitude while the amplitude remains below the critical penetration value; once penetration occurs, the shielding effect of the plasma against the error field disappears and the island width jumps suddenly to a large value. By scanning the natural mode frequency, we find that the shielding effect of the plasma decreases as the natural mode frequency decreases. Finally, we obtain the scaling of the m/n = 2/1 penetration threshold with density and temperature.