Abstract: Due to their complex structure, 2-D models are challenging to work with; additionally, simulation, analysis, design, and control become increasingly difficult as the order of the model grows. Moreover, over particular time intervals, Gawronski and Juang's time-limited model reduction schemes produce an unstable reduced-order model for both 2-D and 1-D models. Researchers have proposed stability-preservation solutions to address this key flaw, which ensure the stability of 1-D reduced-order systems; nevertheless, these strategies result in large approximation errors. To the best of the authors' knowledge, however, no literature is available on a stability-preserving, time-limited-interval Gramian-based model reduction framework for 2-D discrete-time systems. In this article, 2-D models are decomposed into two separate sub-models (i.e., two cascaded 1-D models) using the condition of minimal rank decomposition. Model reduction is then performed on the two resulting 1-D sub-models using time-limited Gramians. The suggested methodology works for both 2-D and 1-D models. Moreover, it guarantees the stability of the reduced-order model and provides a priori error-bound expressions for the 2-D and 1-D cases. Numerical results and comparisons between existing and suggested methodologies demonstrate the effectiveness of the suggested methodology.
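The time-limited Gramians underlying such schemes are easy to sketch for a single 1-D discrete-time sub-model. The following is a minimal square-root balanced-truncation sketch over the window [0, T); the function names, the Cholesky jitter, and the plain truncation step are illustrative assumptions, not the article's actual algorithm (which adds the 2-D decomposition and stability-preservation steps).

```python
import numpy as np

def time_limited_gramians(A, B, C, T):
    """Time-limited Gramians of x[k+1] = A x[k] + B u[k], y[k] = C x[k]
    over the finite window k = 0, ..., T-1."""
    n = A.shape[0]
    P = np.zeros((n, n))
    Q = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(T):
        P += Ak @ B @ B.T @ Ak.T      # controllability Gramian term
        Q += Ak.T @ C.T @ C @ Ak      # observability Gramian term
        Ak = A @ Ak
    return P, Q

def balanced_truncation(A, B, C, T, r):
    """Square-root balanced truncation to order r using the
    time-limited Gramians above."""
    n = A.shape[0]
    P, Q = time_limited_gramians(A, B, C, T)
    L = np.linalg.cholesky(P + 1e-12 * np.eye(n))   # P ~= L L^T
    M = np.linalg.cholesky(Q + 1e-12 * np.eye(n))   # Q ~= M M^T
    U, s, Vt = np.linalg.svd(M.T @ L)
    S = np.diag(s[:r] ** -0.5)
    W = S @ U[:, :r].T @ M.T          # left projection (W @ V = I_r)
    V = L @ Vt[:r, :].T @ S           # right projection
    return W @ A @ V, W @ B, C @ V
```

As the abstract notes, nothing in this plain time-limited truncation guarantees that the reduced A-matrix stays stable; that is exactly the flaw the article's framework addresses.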
Funding: provided jointly by the 973 Program (Grant No. 2010CB950400) and the National Natural Science Foundation of China (Grant Nos. 40805022 and 40821092).
Abstract: In this study, the relationship between the limit of predictability and the initial error was investigated using two simple chaotic systems: the Lorenz model, which possesses a single characteristic time scale, and the coupled Lorenz model, which possesses two different characteristic time scales. The limit of predictability is defined here as the time at which the error reaches 95% of its saturation level; nonlinear behaviors of the error growth are therefore involved in the definition of the limit of predictability. Our results show that a logarithmic function describes well the relationship between the limit of predictability and the initial error in both models, although the coefficients of the logarithmic function were not constant across the examined range of initial errors. Compared with the Lorenz model, the coupled Lorenz model, in which the slow dynamics and the fast dynamics interact with each other, exhibits a more complex relationship between the limit of predictability and the initial error. The limit of predictability of the Lorenz model is unbounded as the initial error becomes infinitesimally small; therefore, the limit of predictability of the Lorenz model may be extended by reducing the amplitude of the initial error. In contrast, if there exists a fixed initial error in the fast dynamics of the coupled Lorenz model, the slow dynamics has an intrinsic finite limit of predictability that cannot be extended by reducing the amplitude of the initial error in the slow dynamics, and vice versa. These findings reveal the possible existence of an intrinsic finite limit of predictability in a coupled system that possesses many scales of time or motion.
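The 95%-of-saturation definition can be illustrated with a twin-run experiment on the single-scale Lorenz-63 model. This is a minimal sketch under stated assumptions: standard parameters, RK4 with dt = 0.01, and an assumed error-saturation level of 20 (roughly the typical separation of two independent states on the attractor); it is not the study's coupled model or its fitting procedure.

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, s, dt):
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def predictability_limit(e0, dt=0.01, nsteps=6000, saturation=20.0):
    """Time for the separation between a reference and a perturbed
    trajectory to first reach 95% of the assumed saturation level."""
    s = np.array([1.0, 1.0, 1.0])
    for _ in range(2000):             # spin-up onto the attractor
        s = rk4_step(lorenz_rhs, s, dt)
    ref, pert = s.copy(), s + np.array([e0, 0.0, 0.0])
    for k in range(1, nsteps + 1):
        ref = rk4_step(lorenz_rhs, ref, dt)
        pert = rk4_step(lorenz_rhs, pert, dt)
        if np.linalg.norm(ref - pert) >= 0.95 * saturation:
            return k * dt
    return nsteps * dt
```

Shrinking the initial error lengthens the limit of predictability here, consistent with the single-scale behavior described in the abstract.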
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 61522109, 61631020, 61671253 and 91738201) and the Natural Science Foundation of Jiangsu Province (Grant Nos. BK20150040, BK20171446 and BRA2018043).
Abstract: In this paper, we consider a multi-relay cooperative communication network that consists of a source node transmitting to its destination with the help of multiple decode-and-forward (DF) relays. Specifically, the DF relays that succeed in decoding the source signal are allowed to re-transmit their decoded results simultaneously to the destination in a cooperative beamforming manner. To carry out the cooperative beamforming, the destination needs to send quantized channel state information (CSI) to the relays through a limited feedback channel in the face of channel quantization errors (CQE). We propose a CQE-oriented multi-relay beamforming (MRB) scheme, denoted CQE-MRB for short, for the sake of improving the throughput of relay-destination transmissions. An effective throughput, defined as the difference between the transmission rate and the feedback rate, is used to evaluate the outage probability of the source-destination transmission. Simulation results demonstrate that the outage performance of the proposed CQE-MRB scheme improves substantially with an increasing number of relays. Moreover, it is shown that the number of channel quantization bits can be further optimized to minimize the outage probability of the proposed CQE-MRB scheme.
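The relay-count and quantization-bit effects can be illustrated with a toy Monte Carlo. This sketch assumes Rayleigh fading and simple phase-only CSI quantization at each relay; the model, parameters, and function name are assumptions for illustration, not the paper's CQE-MRB design or its effective-throughput optimization.

```python
import numpy as np

def outage_prob(n_relays, bits, rate, snr_db=10.0, trials=20_000, seed=0):
    """Monte Carlo outage probability of distributed beamforming with
    B-bit phase quantization of the relay-destination CSI."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    h = (rng.standard_normal((trials, n_relays))
         + 1j * rng.standard_normal((trials, n_relays))) / np.sqrt(2.0)
    step = 2.0 * np.pi / 2 ** bits
    # residual phase error left after quantizing each channel phase
    resid = np.angle(h) - np.round(np.angle(h) / step) * step
    # coherent combining, degraded by the residual quantization error
    gain = np.abs(np.sum(np.abs(h) * np.exp(1j * resid), axis=1)) ** 2
    return np.mean(np.log2(1.0 + snr * gain) < rate)
```

More relays (and finer quantization) lower the outage probability, mirroring the trend the simulations in the paper report.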
Abstract: Close-range photogrammetry aims to determine the shape and size of an object rather than its absolute position. Therefore, at first, any translation and rotation of the photogrammetric model of the object caused by the whole geodesic, photographic and photogrammetric procedure in close-range photogrammetry need not be considered. However, it is necessary to analyze all the causes of deformation of the shape and size and to present the corresponding theories and equations. This situation is, of course, very different from conventional topophotogrammetry. In this paper, some specific characteristics of limit errors in close-range photogrammetry are presented in detail, including limit errors for the calibration of interior elements of close-range cameras, limit errors of relative and absolute orientation in close-range photogrammetric procedures, and limit errors of control works in close-range photogrammetry. A theoretical equation of calibration accuracy for close-range cameras is given. For the three examples in this paper, the theoretical accuracy requirements for the interior elements of the camera vary within ±(0.005–0.350) mm. This allows the accuracy requirement in calibration to be relaxed for an object with small relief, even when the camera platform is located in a violently vibrating environment. A theoretical equation for the relative RMS of base lines (m_S/S) and an equation for the RMS of the start direction are also presented. It is proved that m_S/S can be taken equal to the relative RMS m_ΔX/ΔX, and that the permissible RMS of the start direction is much larger than the traditionally used value. Some useful equations of limit errors in close-range photogrammetry are presented as well. The suggestions above are potentially beneficial for increasing efficiency and reducing production cost.
Abstract: Objective: To explore the common factors that influence errors in the microbial limit testing of drugs and to put forward corresponding preventive suggestions. Methods: A total of 100 batches of drug samples tested in a laboratory from January 2020 to June 2020 were selected as the analysis objects. The test-related data were analyzed to summarize the common and typical factors affecting errors in the microbial limit test, and targeted intervention measures were sought. Results: Among the included samples, 29 batches showed errors, an error rate of 29.0%. The main factors influencing errors in the microbial limit test data included the culture medium and colony count, the drug preparation process, drug properties, and testing equipment; more than 60% of the errors were affected by multiple factors. Conclusion: In the process of drug testing, many factors may affect the accuracy of microbial limit testing. Testing should strictly follow aseptic operation principles under a standard system that guides the operation process, so as to reduce the risk of error as far as possible.
Funding: the National Natural Science Foundation of China (No. 51790512), the 111 Project (No. B17037), the National Key Laboratory Foundation, the Industry-Academia-Research Collaboration Project of Aero Engine Corporation of China (No. HFZL2018CXY011-1), and MIIT.
Abstract: To investigate the influence of real leading-edge manufacturing error on the aerodynamic performance of high-subsonic compressor blades, a family of leading-edge manufacturing error data was obtained from measured compressor cascades. Considering the limited samples, the leading-edge angle and leading-edge radius distribution forms were evaluated by the Shapiro-Wilk test and quantile-quantile plots; the statistical characteristics provided can inform later related research. The B-spline and Bezier parameterization design methods are adopted to create geometry models with manufacturing error based on leading-edge angle and leading-edge radius. The influence of real manufacturing error is quantified and analyzed by self-developed non-intrusive polynomial chaos and Sobol' indices, and the mechanism by which leading-edge manufacturing error affects aerodynamic performance is discussed. The results show that the total pressure loss coefficient is sensitive to leading-edge manufacturing error compared with the static pressure ratio, especially at high incidence. Specifically, manufacturing error of the leading edge influences the local flow acceleration and subsequently causes fluctuation of the downstream flow. The aerodynamic performance is sensitive to the manufacturing error of the leading-edge radius at the design and negative incidences, while it is sensitive to the manufacturing error of the leading-edge angle under operating conditions with high incidences.
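The Sobol' indices used to attribute performance scatter to the two error sources can be illustrated with a generic pick-freeze Monte Carlo estimator. This is only a sketch of what a first-order index measures; the paper itself computes the indices from a self-developed non-intrusive polynomial chaos expansion, and the test function below is an assumed toy model.

```python
import numpy as np

def first_order_sobol(f, dim, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol' indices
    of f over the unit hypercube [0, 1]^dim."""
    rng = np.random.default_rng(seed)
    a = rng.random((n, dim))
    b = rng.random((n, dim))
    ya = f(a)
    var = ya.var()
    s = np.empty(dim)
    for i in range(dim):
        ab = b.copy()
        ab[:, i] = a[:, i]            # freeze coordinate i, resample the rest
        yi = f(ab)
        # Cov(Y, Y_i) / Var(Y): variance share explained by input i alone
        s[i] = (np.mean(ya * yi) - ya.mean() * yi.mean()) / var
    return s
```

For a response dominated by its first input, the estimator assigns nearly all of the variance to that input, which is the kind of attribution the paper makes between leading-edge radius and leading-edge angle.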
Funding: supported by the National Natural Science Foundation of China (Grant No. 51375201), the Jilin Province Science and Technology Development Plan (Grant No. 20130101048JC), and the Open Research Fund of the Shanghai Key Laboratory of Digital Manufacture for Thin-walled Structures (Grant No. 2013001).
Abstract: Forming limit curves (FLCs) are commonly used for evaluating the formability of sheet metals. However, it is difficult to obtain FLCs with the desired accuracy by experiment because friction effects are non-negligible under warm/hot stamping conditions. To investigate the experimental errors, experiments for obtaining the FLCs of AA5754 are conducted at 250 °C. FE models are then created and validated on the basis of the experimental results, and a number of FE simulations are carried out for FLC test-pieces and punches with different geometry configurations and varying friction coefficients between the test-piece and the punch. The errors for all test conditions are predicted and analyzed, with particular attention paid to two special cases, namely the biaxial FLC test and the uniaxial FLC test. The failure location and the variation of the error with respect to the friction coefficient are studied as well. The results of the FLC tests and the above analyses show that, for the biaxial tension state, the friction coefficient should be controlled within 0.15 to avoid significant shifting of the necking location away from the center of the punch; for the uniaxial tension state, the friction coefficient should be controlled within 0.1 to guarantee the validity of the data collected from FLC tests. These conclusions are beneficial for obtaining accurate FLCs under warm/hot stamping conditions.
Funding: project supported by the Natural Science Foundation of China (Grant No. 10371009) and the Beijing Educational Committee (No. 2002KJ112).
Abstract: The truncation error associated with a given sampling representation is defined as the difference between the signal and an approximating sum utilizing a finite number of terms. In this paper we give a uniform bound for the truncation error of band-limited functions in the n-dimensional Lebesgue space Lp(Rn) associated with the multidimensional Shannon sampling representation.
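The truncation error being bounded can be seen numerically in one dimension. This sketch reconstructs an assumed band-limited test signal (a shifted sinc, bandwidth pi, unit sampling rate) from the finitely many samples |k| <= N and measures how the truncation error shrinks as N grows; it only illustrates the object being bounded, not the paper's multidimensional Lp bound.

```python
import numpy as np

def truncated_shannon(t, f, N):
    """Truncated cardinal (Shannon) series using samples f(k), |k| <= N.
    np.sinc is the normalized sinc, sin(pi x) / (pi x)."""
    return sum(f(k) * np.sinc(t - k) for k in range(-N, N + 1))

# assumed band-limited test signal: a shifted sinc is itself bandwidth-pi
f = lambda t: np.sinc(t - 0.5)
t0 = 0.25
errors = [abs(f(t0) - truncated_shannon(t0, f, N)) for N in (5, 20, 80)]
```

The truncation error at a fixed point decays as more terms of the cardinal series are kept, as the uniform bound requires.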
Funding: project supported by the National Natural Science Foundation of China (Grant No. 61873251).
Abstract: Quantum metrology provides a fundamental limit on the precision of multi-parameter estimation, called the Heisenberg limit, which has been achieved in noiseless quantum systems. For systems subject to noise, however, this limit is hard to achieve because noise tends to destroy quantum coherence and entanglement. In this paper, a combined control scheme with feedback and quantum error correction (QEC) is proposed to achieve the Heisenberg limit in the presence of spontaneous emission, where the feedback control is used to protect a stabilizer code space containing an optimal probe state and an additional control is applied to eliminate the measurement incompatibility among the three parameters. Although an ancilla system is necessary for the preparation of the optimal probe state, our scheme does not require the ancilla system to be noiseless. In addition, the control scheme in this paper has a low-dimensional code space. For the three components of a magnetic field, it can achieve the highest estimation precision with only a 2-dimensional code space, while at least a 4-dimensional code space is required in common optimal error correction protocols.
Abstract: The quality of the radiation dose depends upon the gamma count rate of the radionuclide used. Any reduction in the error in the count rate is reflected in a reduction in the error in the activity, and consequently in the quality of the dose. Efforts so far have been directed only at minimizing the random errors in the count rate by repetition. In the absence of a probability distribution for the systematic errors, we propose to minimize these errors by estimating their upper and lower limits using the technique of determinant inequalities developed by us. Using the algorithm we have developed, based on the technique of determinant inequalities and the concept of maximization of mutual information (MI), we show how to process the covariance matrix element by element to minimize the correlated systematic errors in the count rate of 113mIn. This element-wise processing of the covariance matrix is unique to our technique and gives experimentalists enough maneuverability to mitigate the different factors causing systematic errors in the count rate, and consequently in the activity, of 113mIn.
Abstract: Error correction has long been suggested as a way to extend the sensitivity of quantum sensors to the Heisenberg limit. However, operations on logical qubits are performed only through universal gate sets consisting of finite-sized gates, such as Clifford + T. Although these logical gate sets allow for universal quantum computation, the finite gate sizes present a problem for quantum sensing, since in sensing protocols such as the Ramsey measurement protocol the signal must act continuously. The difficulty in constructing a continuous logical operator comes from the Eastin-Knill theorem, which prevents a continuous signal from being both fault-tolerant to local errors and transversal. Since error correction is needed to approach the Heisenberg limit in a noisy environment, it is important to explore how to construct fault-tolerant continuous operators. In this paper, a protocol to design continuous logical z-rotations is proposed and applied to the Steane code. The fault tolerance of the designed operator is investigated using the Knill-Laflamme conditions, which indicate that the constructed diagonal unitary operator cannot be fault-tolerant, solely because of the possibility of X errors on the middle qubit. The approach demonstrated throughout this paper may, however, find success in codes with more qubits, such as the Shor code, the distance-3 surface code, the [15, 1, 3] code, or codes with a larger distance such as the [11, 1, 5] code.
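The Knill-Laflamme conditions the paper applies can be checked numerically on a small example. The sketch below deliberately uses the 3-qubit bit-flip code instead of the 7-qubit Steane code, to keep the matrices small; it verifies P Ei† Ej P = c_ij P for single-X errors and shows the check failing for a Z error, which the bit-flip code cannot correct.

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def on_qubit(op, i, n=3):
    """Single-qubit operator op acting on qubit i of an n-qubit register."""
    return kron_all([op if j == i else I2 for j in range(n)])

# Projector onto the 3-qubit bit-flip code space span{|000>, |111>}
zero_L = np.zeros(8); zero_L[0] = 1.0
one_L = np.zeros(8); one_L[7] = 1.0
P = np.outer(zero_L, zero_L) + np.outer(one_L, one_L)

def knill_laflamme(errors, P):
    """Check P Ei^dag Ej P = c_ij P for every pair of errors."""
    for Ei, Ej in product(errors, repeat=2):
        M = P @ Ei.conj().T @ Ej @ P
        c = np.trace(M) / np.trace(P)
        if not np.allclose(M, c * P):
            return False
    return True
```

Running the check with the identity plus the three single-qubit X errors succeeds, while including a Z error makes it fail, mirroring how the paper uses the conditions to locate the uncorrectable error that breaks fault tolerance.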
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 51079015 and 50979011).
Abstract: Flood control forecast operation is one of the main ways of determining the upper bound for dynamic control of the flood-limited water level during the flood season. The floodwater utilization rate can be effectively increased by using flood forecast information together with a flood control forecast operation mode. In this paper, Dahuofang Reservoir is selected as a case study. First, the distribution pattern and bounds of the forecast error, which is a key source of risk, are analyzed. Then, based on the definition of flood risk, the risk of dynamic control of the reservoir flood-limited water level is studied within different flood forecast error bounds. The results show that dynamic control of the reservoir flood-limited water level using flood forecast information can effectively increase the floodwater utilization rate without increasing flood control risk, and that it is feasible in practice.
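The role of the forecast-error bound in this kind of risk analysis can be illustrated with a toy Monte Carlo. The linear storage routing, the Gaussian relative forecast error clipped to a bound, and all numerical values below are assumptions for illustration; they are not Dahuofang Reservoir's actual storage curves or operating rules.

```python
import numpy as np

def overtopping_risk(z0, z_limit, vol_forecast, area, err_std, err_bound,
                     trials=200_000, seed=0):
    """Monte Carlo estimate of the risk that the peak reservoir level
    exceeds z_limit when the flood season starts from level z0 and the
    relative forecast error of the flood volume is bounded."""
    rng = np.random.default_rng(seed)
    eps = np.clip(rng.normal(0.0, err_std, trials), -err_bound, err_bound)
    peak = z0 + vol_forecast * (1.0 + eps) / area   # toy linear storage routing
    return float(np.mean(peak > z_limit))
```

Raising the flood-limited water level increases the risk, while a tighter forecast-error bound reduces it; that trade-off is what allows dynamic control without extra flood control risk.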