In this paper, some issues concerning the Chinese remaindering representation are discussed. A new conversion method is described. An efficient refinement of the division algorithm of Chiu, Davida and Litow is given.
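The paper's specific conversion and division refinements are not reproduced here, but the underlying residue representation is textbook Chinese remaindering. A minimal sketch of converting an integer to and from its residue representation:

```python
from math import prod

def to_residues(x, moduli):
    """Represent x by its residues modulo pairwise-coprime moduli."""
    return [x % m for m in moduli]

def from_residues(residues, moduli):
    """Reconstruct x (mod prod(moduli)) by the Chinese remainder theorem."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi modulo m (Python 3.8+)
        x += r * Mi * pow(Mi, -1, m)
    return x % M

moduli = [3, 5, 7]
r = to_residues(52, moduli)
assert r == [1, 2, 3]
assert from_residues(r, moduli) == 52
```

In this representation, addition and multiplication act componentwise on the residues; division is the hard operation, which is why refinements such as that of Chiu, Davida and Litow matter.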
The main purpose of this paper is to define primes and introduce non-commutative arithmetics based on Thompson's group F. Defining primes in a non-abelian monoid is highly non-trivial and relies on a concept called "castling"; three types of castlings are essential to grasp the arithmetics. The divisor function τ on Thompson's monoid S satisfies τ(uv) ≤ τ(u)τ(v) for any u, v ∈ S, so the limit τ_0(u) = lim_{n→∞} (τ(u^n))^{1/n} exists. The quantity C(S) = sup_{1≠u∈S} τ_0(u)/τ(u) describes the complexity of castlings in S. We show that C(S) = 1. Moreover, the Möbius function on S is calculated, and the Liouville function on S is studied.
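The abstract states that the limit τ_0(u) exists because τ is submultiplicative. The standard argument (not spelled out in the abstract itself) is Fekete's subadditive lemma applied to log τ:

```latex
% Since \tau(uv) \le \tau(u)\,\tau(v), the sequence a_n = \log \tau(u^n) is subadditive:
\log \tau(u^{m+n}) \;\le\; \log \tau(u^m) + \log \tau(u^n).
% By Fekete's subadditive lemma, a_n / n converges to its infimum, hence
\tau_0(u) \;=\; \lim_{n\to\infty} \tau(u^n)^{1/n} \;=\; \inf_{n\ge 1}\, \tau(u^n)^{1/n}
% exists and is finite, bounded above by \tau(u).
```

This also explains why τ_0(u)/τ(u) ≤ 1 termwise, so the paper's result C(S) = 1 says the supremum is actually attained in the limit.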
In this paper, we introduce a new concept, namely ε-arithmetics, for real vectors of any fixed dimension. The basic idea is to use vectors of rational values (called rational vectors) to approximate vectors of real values of the same dimension within an ε range. Rational vectors of a fixed dimension m form a field that is an mth-order extension Q(α) of the rational field Q, where α has a minimal polynomial of degree m over Q. The arithmetics of real vectors, such as addition, subtraction, multiplication, and division, can then be defined through those of their approximating rational vectors within the ε range. We also define the complex conjugate of a real vector, and then inner products and convolutions of two real vectors and of two real vector sequences (signals) of finite length. With these newly defined concepts, linear processing, such as linear filtering, ARMA modeling, and least squares fitting, can be applied to real vector-valued signals with real vector-valued coefficients, which broadens existing linear processing of scalar-valued signals.
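A minimal sketch of the first step of this idea, approximating each real coordinate by a rational within ε and then operating exactly on the rational surrogates. This shows componentwise approximation only, not the field-extension Q(α) arithmetic the paper constructs:

```python
import math
from fractions import Fraction

def rationalize(v, eps):
    """Approximate each real coordinate by a rational within eps.
    limit_denominator(N) returns the closest fraction with denominator <= N,
    whose error is at most 1/(2N) < eps for N = int(1/eps) + 1."""
    out = []
    for x in v:
        q = Fraction(x).limit_denominator(int(1 / eps) + 1)
        assert abs(q - Fraction(x)) <= eps
        out.append(q)
    return out

v = [math.pi, math.sqrt(2)]
rv = rationalize(v, 1e-6)
# exact componentwise arithmetic on the rational surrogates
s = [a + b for a, b in zip(rv, rv)]
assert all(abs(float(si) - 2 * vi) <= 2e-6 for si, vi in zip(s, v))
```

The point of the surrogate is that all subsequent arithmetic is exact rational arithmetic; ε only enters once, at the approximation step.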
Accurate short-term wind power forecasting plays a crucial role in maintaining the safety and economic efficiency of smart grids. Although numerous studies have employed various methods to forecast wind power, there remains a research gap in leveraging swarm intelligence algorithms to optimize the hyperparameters of the Transformer model for wind power prediction. To improve the accuracy of short-term wind power forecasts, this paper proposes a hybrid approach named STL-IAOA-iTransformer, which is based on seasonal and trend decomposition using LOESS (STL) and an iTransformer model optimized by an improved arithmetic optimization algorithm (IAOA). First, to fully extract the power data features, STL is used to decompose the original data into components with less redundant information. The extracted components, as well as the weather data, are then input into iTransformer for short-term wind power forecasting, and the final predicted wind power curve is obtained by combining the predicted components. To improve model accuracy, IAOA is employed to optimize the hyperparameters of iTransformer. The proposed approach is validated using real generation data from different seasons and different power stations in Northwest China, and ablation experiments have been conducted. Furthermore, to validate the superiority of the proposed approach under different wind characteristics, real power generation data from Southwest China are utilized for experiments. Comparative results against six state-of-the-art prediction models show that the proposed model fits the true generation series well and achieves high prediction accuracy.
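The decompose-forecast-recombine pattern can be illustrated with a deliberately naive additive decomposition. This is a simplified stand-in for STL (real STL uses iterated LOESS smoothing, not a moving average), shown only to make the "components add back to the series" idea concrete:

```python
def decompose(x, period):
    """Naive additive trend/seasonal/residual split (a simplified stand-in
    for STL; real STL uses iterated LOESS smoothing)."""
    n = len(x)
    half = period // 2
    # centered moving-average trend (shorter window at the edges)
    trend = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        trend.append(sum(x[lo:hi]) / (hi - lo))
    detrended = [xi - ti for xi, ti in zip(x, trend)]
    # seasonal component: average of the detrended series at each phase
    season = []
    for p in range(period):
        vals = detrended[p::period]
        season.append(sum(vals) / len(vals))
    seasonal = [season[i % period] for i in range(n)]
    resid = [xi - ti - si for xi, ti, si in zip(x, trend, seasonal)]
    return trend, seasonal, resid

x = [10 + (i % 4) + 0.1 * i for i in range(24)]   # synthetic power-like series
t, s, r = decompose(x, 4)
# the three components add back to the original series
assert all(abs(xi - (ti + si + ri)) < 1e-9 for xi, ti, si, ri in zip(x, t, s, r))
```

In the paper's pipeline, each component (plus weather data) would be forecast separately by the iTransformer and the component forecasts summed to give the final wind power curve.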
Indoor Radon Concentrations in Severe Cold Area and Cold Area and Impact of Energy-saving Design on Indoor Radon in China. Yunyun Wu1, Yanchao Song1, Changsong Hou1, Hongxing Cui1, Bing Shang1, Haoran Sun1 (1. Key Laboratory of Radiological Protection and Nuclear Emergency, China CDC & National Institute for Radiological Protection, Chinese Center for Disease Control and Prevention, Beijing 100088, China). Abstract: This study investigated indoor radon concentrations in modern residential buildings in the Cold Area and Severe Cold Area of China. A total of 19 cities covering 16 provinces were selected, with 1,610 dwellings measured for indoor radon concentration. The arithmetic mean and geometric mean of indoor radon concentration were 68 Bq·m⁻³ and 57 Bq·m⁻³, respectively. Indoor radon concentrations were found to be much higher in the Severe Cold Area than in the Cold Area, and showed an increasing trend for newly constructed buildings.
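The study reports both an arithmetic mean (68 Bq·m⁻³) and a smaller geometric mean (57 Bq·m⁻³), the usual pattern for right-skewed environmental data. A tiny sketch with hypothetical readings (not the study's data) showing the two statistics:

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # exp of the mean log; requires strictly positive values
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

radon = [40, 55, 70, 90, 120]   # hypothetical readings in Bq/m^3
am = arithmetic_mean(radon)
gm = geometric_mean(radon)
# AM-GM inequality: the geometric mean never exceeds the arithmetic mean
assert gm < am
```

For lognormally distributed concentrations, the geometric mean tracks the typical dwelling better, while the arithmetic mean is pulled up by the high tail.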
Let a_1, a_2, a_3 be nonzero integers with gcd(a_1, a_2, a_3) = 1, and let k be any positive integer, K = max{3, |a_1|, |a_2|, |a_3|, k}. Suppose that l_1, l_2, l_3 are integers each coprime to k, and suppose further that b is any integer satisfying some necessary congruence conditions. The solvability of the linear equation a_1 p_1 + a_2 p_2 + a_3 p_3 = b (p_j ≡ l_j (mod k), 1 ≤ j ≤ 3) in prime variables p_1, p_2, p_3 is investigated. It is proved that if a_1, a_2, a_3 are all positive, then the above equation is solvable whenever b ≥ K^25; if a_1, a_2, a_3 are not all of the same sign, then the above equation has a solution p_1, p_2, p_3 satisfying max(p_1, p_2, p_3) ≤ 3|b| + K^25.
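The statement can be explored numerically for small cases by brute force: sieve primes, keep those in the required residue classes, and search for a solution. This is only an illustration of the equation's shape, not of the paper's analytic method (which proves solvability for all sufficiently large b):

```python
def primes_upto(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [p for p in range(2, n + 1) if sieve[p]]

def solve(a, l, k, b, limit=200):
    """Search for primes p_j = l_j (mod k) with a1*p1 + a2*p2 + a3*p3 = b."""
    ps = primes_upto(limit)
    cands = [[p for p in ps if p % k == lj % k] for lj in l]
    for p1 in cands[0]:
        for p2 in cands[1]:
            for p3 in cands[2]:
                if a[0]*p1 + a[1]*p2 + a[2]*p3 == b:
                    return (p1, p2, p3)
    return None

# e.g. p1 + p2 + p3 = 39 with all p_j = 1 (mod 4); note b = 3 (mod 4) is the
# necessary congruence condition here
sol = solve([1, 1, 1], [1, 1, 1], 4, 39)
assert sol is not None and sum(sol) == 39
assert all(p % 4 == 1 for p in sol)
```

The search succeeds with, for example, 5 + 5 + 29 = 39.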
The search for mechanical properties of materials reached a highly acclaimed level when indentations could be analysed on the basis of elastic theory for hardness and elastic modulus. The mathematical formulas proved to be very complicated, and various trials were published between the 1900s and 2000s. The development of indentation instruments, and the wish to make the application in numerous steps easier, led in 1992 to trials with iterations using relative values instead of absolute ones. Excessive computer iterations with 3 + 8 free parameters of the loading and unloading curves became possible and were implemented into the instruments and worldwide standards. The physical formula for hardness was defined as force over area. For the conical, pyramidal, and spherical indenters, one simply calculated the projected area from the indentation depth, adjusted it later by iterations against fused quartz or aluminium as standard materials, and called it "contact height". Continuously measured indentation loading curves were formulated as loading force over depth squared. The unloading curves after release of the indenter used the initial steepness of the pressure relief to calculate what was (and is) incorrectly called "Young's modulus", although it is not unidirectional. And for the spherical indentations' loading curve, they defined the indentation force over depth raised to the power 3/2 (but without R/h correction). To this day (2025), they violate the energy law, because they attribute all applied force to the indenter depth and ignore the obvious sideways force upon indentation (cf. e.g. wood cleaving).
The various refinements led to more and more complicated formulas that could no longer reasonably be calculated. It was decided to use 3 + 8 free-parameter iterations for fitting to the (poor) standards of fused quartz or aluminium, whose mechanical values were considered "true". This is, until now, the worldwide standard DIN-ISO-ASTM 14577, which avoids the overcomplicated formulas; some of these are shown in the Introduction Section. By doing so, one avoided understanding indentation results on a physical basis. However, we open a simple way to obtain absolute values (though still on the black-box instrument's unsuitable force calibration). We do not iterate but calculate algebraically on the basis of the correct, physically deduced exponent of the loading force parabolas, with h^(3/2) instead of the false "h^2" (for spherical indentation there is a calotte-radius over depth correction), and we reveal the physical errors embedded in the official worldwide "14577 Standard". Importantly, we reveal the hitherto fully overlooked phase transitions under load that are not detectable with the false exponent. Phase-transition twinning is even present and falsifies the iteration standards. Instead of elasticity theory, we use the well-defined geometry of these indentations. By doing so, we reach simple, algebraically calculable formulas and find the physical indentation hardness of materials with their onset depth, onset force and energy, as well as their phase-transition energy (temperature-dependent, also its activation energy). The most important phase transitions are our absolute, algebraically calculated results. The now easily obtained phase transitions under load are very dangerous because they produce polymorph interfaces between the changed and the unchanged material.
It was found and published by high-magnification microscopy (5000-fold) that these trouble spots are the sites for the development of stable, 1 to 2 µm long micro-cracks (stable for months). If, however, a force higher than the one of their formation acts on them, these grow into a catastrophic crash. That works equally with turbulences at the pickle fork of airliners. After the publication of these facts, and after three fatal crashes had occurred in short sequence, the FAA (Federal Aviation Administration) reacted by rechecking all airplanes for such micro-cracks. These were now found in a new fleet of airliners from which the three crashed ones came; they had previously been overlooked. The FAA became aware of the risk and grounded 290 (certainly all) of them, because their material did not have a higher phase-transition onset and energy than that of other airplanes with better material. They did so despite the 14577 Standard, which does not find (and thus formally forbids) phase transitions under indenter load, owing to the false exponent on the indentation parabola. However, despite the present author's well-founded petition, this "Standard" will not be corrected for the next 5 years.
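The central quantitative claim of this abstract is that loading curves follow F = k·h^(3/2) rather than F = k·h^2. A minimal way to test an exponent on (noise-free, synthetic) data is a log-log slope fit; real indentation data would of course carry noise and the calotte R/h correction the author mentions:

```python
import math

def loglog_slope(hs, fs):
    """Least-squares slope of log F against log h; recovers the exponent m
    of a power law F = k * h^m."""
    xs = [math.log(h) for h in hs]
    ys = [math.log(f) for f in fs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

hs = [0.1 * i for i in range(1, 20)]       # depths (arbitrary units)
fs = [2.5 * h ** 1.5 for h in hs]          # synthetic F = k*h^(3/2) data
m = loglog_slope(hs, fs)
assert abs(m - 1.5) < 1e-9
```

On such a plot, an h^(3/2) law and an h^2 law differ by a clearly distinguishable slope (1.5 versus 2.0), which is the author's point about the phase-transition kinks being invisible under the wrong exponent.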
In order to decrease the computational complexity of the connectivity reliability of road networks, an improved recursive decomposition algorithm is proposed. First, the basic theory of recursive decomposition is reviewed. Then the characteristics of road networks, which differ from general networks, are analyzed. On this basis, an improved recursive decomposition algorithm is put forward which fits road networks better. Furthermore, detailed calculation steps are presented which are convenient for the computer, and the advantage of the approximate algorithm is analyzed on the basis of this improvement. The improved recursive decomposition algorithm directly produces disjoint minimal paths and avoids the problem of non-polynomial growth, and because the characteristics of road networks are considered, the algorithm is greatly simplified. Finally, an example is given to prove its validity.
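For context, the exact two-terminal connectivity reliability that such decompositions accelerate can be written as a recursion that conditions on each edge working or failing. The sketch below is the plain exponential-time recursion, not the paper's improved decomposition; it shows the blow-up that disjoint minimal paths are meant to prune:

```python
def st_reliability(edges, probs, s, t):
    """Two-terminal reliability by conditioning on each edge in turn
    (exponential in |E|; improved decompositions prune this blow-up)."""
    def connected(up):
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for a, b in up:
                for x, y in ((a, b), (b, a)):
                    if x == u and y not in seen:
                        seen.add(y)
                        stack.append(y)
        return t in seen

    def rec(i, up):
        if i == len(edges):
            return 1.0 if connected(up) else 0.0
        e, p = edges[i], probs[i]
        # condition: edge works (prob p) or fails (prob 1-p)
        return p * rec(i + 1, up + [e]) + (1 - p) * rec(i + 1, up)

    return rec(0, [])

# two parallel s-t links, each working with probability 0.9:
# R = 1 - (1 - 0.9)^2 = 0.99
r = st_reliability([("s", "t"), ("s", "t")], [0.9, 0.9], "s", "t")
assert abs(r - 0.99) < 1e-12
```

Producing disjoint minimal paths directly, as the paper does, lets the reliability be summed path by path without enumerating all 2^|E| edge states.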
A register transfer level mapping (RTLM) algorithm for technology mapping at the RT level is presented, which supports current design methodologies using high-level design and design reuse. The mapping rules implement a source ALU using target ALUs. The source ALUs and the target ALUs are all represented by general ALUs, and the mapping rules, described in a table fashion, are applied in the algorithm. The graph clustering algorithm is a branch-and-bound algorithm based on the graph formulation of the mapping problem. The mapping algorithm is well suited to mapping regularly structured data paths. Comparisons between the experimental results generated by a greedy algorithm and by the graph clustering algorithm show the feasibility of the presented algorithm.
Counting has always been one of the most important operations for human beings. Naturally, it is inherent in economics and business. We count with the unique arithmetic which humans have used for millennia. However, over time, the most inquisitive thinkers have questioned the validity of standard arithmetic in certain settings. It started in ancient Greece with the famous philosopher Zeno of Elea, who elaborated a number of paradoxes questioning popular knowledge. Millennia later, the famous German researcher Hermann Helmholtz (1821-1894) [1] expressed reservations about the applicability of conventional arithmetic to physical phenomena. In the 20th and 21st centuries, mathematicians such as Yesenin-Volpin (1960) [2], Van Bendegem (1994) [3], Rosinger (2008) [4] and others articulated similar concerns. Indeed, in the 20th century, expressions such as 1 + 1 = 3 or 1 + 1 = 1 emerged to reflect important characteristics of economic, business, and social processes. We call these expressions synergy arithmetic. It is a common notion that synergy arithmetic has no mathematical meaning. In this paper, however, we mathematically ground and explicate synergy arithmetic.
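One standard way to make expressions like 1 + 1 = 3 rigorous (in the spirit of non-Diophantine, or projective, arithmetics) is to define addition through a strictly monotone function f: a ⊕ b = f⁻¹(f(a) + f(b)). A minimal sketch with one illustrative choice of f:

```python
import math

def synergy_add(f, f_inv):
    """Non-Diophantine addition a (+) b = f_inv(f(a) + f(b)); ordinary
    addition is the special case f(x) = x."""
    def add(a, b):
        return f_inv(f(a) + f(b))
    return add

# with f(x) = log2(x + 1) and f_inv(y) = 2**y - 1:
# f(1) = 1, so 1 (+) 1 = f_inv(2) = 3 -- "the whole exceeds the parts"
add = synergy_add(lambda x: math.log2(x + 1), lambda y: 2 ** y - 1)
assert abs(add(1, 1) - 3) < 1e-9
assert abs(add(0, 5) - 5) < 1e-9      # 0 is still the neutral element
```

Different choices of f yield sub-additive variants too (modeling 1 + 1 = 1), which is how such "synergy" expressions can be made mathematically consistent rather than paradoxical.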
This work aims to show that various problems from different fields can be modeled more efficiently using multiplicative calculus in place of Newtonian calculus. Since multiplicative calculus is still in its infancy, some effort is made to explain its basic principles, such as exponential arithmetic, multiplicative calculus, and multiplicative differential equations. Examples from finance, actuarial science, economics, and the social sciences are presented with solutions using multiplicative calculus concepts. Based on the encouraging results obtained, it is recommended that further research be invested in this field to exploit the applicability of multiplicative calculus in different areas, as well as in the development of multiplicative calculus concepts.
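The basic object of multiplicative calculus is the multiplicative derivative f*(x) = lim_{h→0} (f(x+h)/f(x))^(1/h), which for positive f equals exp(f'(x)/f(x)). A small numerical check of this identity:

```python
import math

def mult_derivative(f, x, h=1e-6):
    """Numerical multiplicative derivative f*(x) = lim (f(x+h)/f(x))^(1/h),
    which equals exp(f'(x)/f(x)) for positive f."""
    return (f(x + h) / f(x)) ** (1.0 / h)

f = lambda x: math.exp(x * x)       # f(x) = e^(x^2), so f'/f = 2x
# analytically f*(x) = exp(2x), hence f*(1) = e^2
assert abs(mult_derivative(f, 1.0) - math.exp(2.0)) < 1e-3
```

For growth processes (compound interest, populations), f*(x) is the instantaneous growth *factor*, which is why such problems often state more naturally in multiplicative than in Newtonian terms.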
The output-signal models and impulse response shaping (IRS) functions of semiconductor detectors are important for establishing high-precision measurement systems. In this paper, an output-signal model for semiconductor detector systems is proposed. According to the proposed model, a multistage cascade deconvolution IRS algorithm was developed using the C-R inverse system, R-C inverse system, and differentiator system. Silicon drift detector signals acquired from the analog-to-digital converter were tested. The experimental results indicated that the shaped pulses obtained using the proposed model had no undershoot, and the average peak base width of the output shaped pulses was reduced by 36% compared with that for a simpler model proposed in previous work [1]. Offline processing results indicated that, compared with the traditional IRS algorithm, the average peak base width of the output shaped pulses obtained using the proposed algorithm was reduced by 11%, and the total elapsed time required for pulse shaping was reduced by 26%. The proposed algorithm avoids recursive calculation, and if the sampling frequency of the digital system reaches 100 MHz, it can be simplified to integer arithmetic. The proposed IRS algorithm can be applied to high-resolution energy spectrum analysis, high-count-rate energy spectrum correction, and coincidence and anti-coincidence measurements.
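The idea of an inverse-system deconvolver can be illustrated with the simplest single-pole case. Assuming (as a toy model, not the paper's full cascade) that the front-end turns each charge impulse into an exponential tail y[n] = d·y[n−1] + x[n], the inverse system x[n] = y[n] − d·y[n−1] restores the impulses using only two samples per output, i.e. without recursion:

```python
def rc_tail(x, d):
    """Toy single-pole detector response: an impulse becomes an
    exponential tail y[n] = d*y[n-1] + x[n]."""
    y, acc = [], 0.0
    for xi in x:
        acc = d * acc + xi
        y.append(acc)
    return y

def inverse_rc(y, d):
    """Non-recursive inverse system: x[n] = y[n] - d*y[n-1] restores the
    impulses (each output sample depends on only two input samples)."""
    return [y[0]] + [y[n] - d * y[n - 1] for n in range(1, len(y))]

x = [0, 0, 5.0, 0, 0, 2.0, 0, 0]       # two charge impulses
y = rc_tail(x, 0.9)                    # overlapping exponential tails
restored = inverse_rc(y, 0.9)
assert all(abs(a - b) < 1e-9 for a, b in zip(restored, x))
```

Because each output sample is a fixed linear combination of a few inputs, such inverse filters can indeed be reduced to integer arithmetic once the coefficients are scaled to fixed point, which matches the paper's 100 MHz remark.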
For protecting the copyright of a text and recovering its original content harmlessly, this paper proposes a novel reversible natural language watermarking method that combines arithmetic coding and synonym substitution operations. By analyzing the relative frequencies of synonymous words, the synonyms employed for carrying the payload are quantized into an unbalanced and redundant binary sequence. The quantized binary sequence is compressed losslessly by adaptive binary arithmetic coding to create spare capacity for accommodating additional data. Then, the compressed data appended with the watermark are embedded into the cover text via synonym substitutions in an invertible manner. On the receiver side, the watermark and compressed data can be extracted by decoding the values of the synonyms in the watermarked text, after which the original content can be perfectly recovered by decompressing the extracted compressed data and substituting the replaced synonyms with their original synonyms. Experimental results demonstrate that the proposed method can extract the watermark successfully, achieve a lossless recovery of the original text, and attain a high embedding capacity.
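The core reversibility argument can be sketched in a few lines: synonym choices quantize to a biased bit sequence, the bias makes the sequence compressible (zlib stands in below for the paper's adaptive binary arithmetic coder), and the compressed record suffices to restore every original word. The synonym pairs and text here are hypothetical toys:

```python
import zlib

# toy synonym pairs: index 0/1 records which variant appears in the text
PAIRS = [("big", "large"), ("quick", "fast"), ("smart", "clever")]
LOOKUP = {w: (i, j) for i, p in enumerate(PAIRS) for j, w in enumerate(p)}

def extract_bits(words):
    """Quantize the text's synonym choices into a binary sequence."""
    return [LOOKUP[w][1] for w in words if w in LOOKUP]

def restore(words, bits):
    """Rewrite every synonym slot according to a bit sequence."""
    it = iter(bits)
    return [PAIRS[LOOKUP[w][0]][next(it)] if w in LOOKUP else w for w in words]

text = "the quick dog made a large leap , a smart move".split() * 200
bits = extract_bits(text)
# biased choices compress well, leaving spare capacity for a watermark payload
record = zlib.compress(bytes(bits))
assert len(record) * 8 < len(bits)
# watermarked stand-in: all slots forced to variant 0 (payload embedding omitted)
marked = restore(text, [0] * len(bits))
# receiver: decompress the record and restore the original wording exactly
restored = restore(marked, list(zlib.decompress(record)))
assert restored == text
```

The gap between `len(bits)` and the compressed record is exactly the embedding capacity the paper measures; the actual watermark would be carried in that slack.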
A way to use the least-mean-square (LMS) algorithm to cancel the direct wave for a passive radar system is introduced. The model of the direct wave is deduced. By using an LMS adaptive FIR filter, a software solution for the FM passive radar system is developed, replacing the hardware of the existing experimental passive radar system. Furthermore, some simulation results are given, which indicate that using the LMS algorithm to cancel the direct wave is effective.
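A minimal LMS adaptive FIR canceller, with the direct wave modeled as a plain sinusoid for illustration (a real FM direct wave would be broadband, and the filter order and step size would need tuning):

```python
import math

def lms_cancel(d, x, mu=0.01, taps=4):
    """LMS adaptive FIR canceller: x is the reference (direct wave),
    d is the surveillance signal; the output e = d - w.x shrinks the
    direct-wave component as the weights w adapt."""
    w = [0.0] * taps
    e = []
    for n in range(taps, len(d)):
        xs = x[n - taps:n]
        y = sum(wi * xi for wi, xi in zip(w, xs))   # filter output
        err = d[n] - y                              # cancellation residual
        w = [wi + 2 * mu * err * xi for wi, xi in zip(w, xs)]
        e.append(err)
    return e

direct = [math.sin(0.3 * n) for n in range(2000)]
e = lms_cancel(direct, direct)          # surveillance = pure direct wave
early = sum(v * v for v in e[:100])
late = sum(v * v for v in e[-100:])
assert late < early * 0.05              # direct wave largely cancelled
```

In the passive radar setting, what remains in `e` after the direct wave is removed is the weak target echo, which is then cross-correlated with the reference.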
It is a great challenge to find effective atomizing technology for reducing industrial pollution; the twin-fluid atomizing nozzle has drawn great attention in this field recently. Current studies on twin-fluid nozzles mainly focus on droplet breakup and single-droplet characteristics; research on the influence of structural parameters on droplet diameter characteristics in the flow field is scarce. In this paper, the influence of a self-excited vibrating cavity structure on droplet diameter characteristics was investigated. Twin-fluid atomizing tests were performed on a self-built open atomizing test bench based on a phase Doppler particle analyzer (PDPA). The atomizing flow field of the twin-fluid nozzle with and without a self-excited vibrating cavity was tested and analyzed, and then the flow field with different self-excited vibrating cavity structures was investigated. The experimental results show that the structural parameters of the self-excited vibrating cavity had a great effect on the breakup of large droplets. The Sauter mean diameter (SMD) increased with increasing orifice diameter or orifice depth; a smaller orifice diameter or orifice depth was beneficial for enhancing the turbulence around the nozzle outlet and decreasing the SMD. The atomizing performance was better when the orifice diameter was 2.0 mm or the orifice depth was 1.5 mm. Furthermore, the SMD first increased and then decreased with increasing distance between the nozzle outlet and the self-excited vibrating cavity, and the SMD over more than half the atomizing flow field was under 35 μm when the distance was 5.0 mm.
In addition, with increasing axial and radial distance from the nozzle outlet, the SMD and arithmetic mean diameter (AMD) tend to increase. The results provide design parameters for the twin-fluid nozzle, and the experimental data could serve as a beneficial supplement to twin-fluid nozzle studies.
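The two diameter statistics used in this abstract have standard definitions: the Sauter mean diameter D32 = Σd³/Σd² (total volume over total surface) and the arithmetic mean diameter D10. A tiny computation with hypothetical droplet sizes:

```python
def sauter_mean_diameter(ds):
    """SMD (D32): total droplet volume over total surface, sum(d^3)/sum(d^2)."""
    return sum(d ** 3 for d in ds) / sum(d ** 2 for d in ds)

def arithmetic_mean_diameter(ds):
    """AMD (D10): plain average diameter."""
    return sum(ds) / len(ds)

drops = [10.0, 20.0, 40.0]      # hypothetical diameters in micrometres
smd = sauter_mean_diameter(drops)
amd = arithmetic_mean_diameter(drops)
assert smd > amd                # SMD weights large droplets more heavily
assert abs(smd - 73000.0 / 2100.0) < 1e-9
```

Because the SMD is volume-weighted, a few large droplets dominate it, which is why it is the preferred metric when the goal is breaking up large droplets, as in this study.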
Coalescence and missed detection are two key challenges in Multi-Target Tracking (MTT). When balancing tracking accuracy against real-time performance, existing Random Finite Set (RFS) based filters, such as the Track-Oriented marginal Multi-Bernoulli/Poisson (TOMB/P) and Measurement-Oriented marginal Multi-Bernoulli/Poisson (MOMB/P) filters, generally find it difficult to handle both problems simultaneously. Based on the Arithmetic Average (AA) fusion rule, this paper proposes a novel fusion framework for the Poisson Multi-Bernoulli (PMB) filter, which integrates the advantages of the TOMB/P filter in dealing with missed detection and those of the MOMB/P filter in dealing with coalescence. To fuse the different PMB distributions, the Bernoulli components in the different Multi-Bernoulli (MB) distributions are associated with each other by Kullback-Leibler Divergence (KLD) minimization. Moreover, an adaptive AA fusion rule is designed on the basis of exponential fusion weights, which utilizes the TOMB/P and MOMB/P updates to overcome these difficulties in MTT. Finally, by comparison with the TOMB/P and MOMB/P filters, the performance of the proposed filter in terms of accuracy and efficiency is demonstrated in three challenging scenarios.
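The association-then-average pattern can be sketched in a drastically simplified form. The paper associates full Bernoulli densities and adapts the fusion weights; the toy below matches only scalar existence probabilities by Bernoulli KLD and then takes a fixed-weight arithmetic average:

```python
import math

def bern_kld(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def aa_fuse(mb_a, mb_b, w=0.5):
    """Toy AA fusion: match each existence probability in mb_a with its
    KLD-closest unused partner in mb_b, then arithmetically average."""
    fused, used = [], set()
    for ra in mb_a:
        j = min((j for j in range(len(mb_b)) if j not in used),
                key=lambda j: bern_kld(ra, mb_b[j]))
        used.add(j)
        fused.append(w * ra + (1 - w) * mb_b[j])
    return fused

# the two filters disagree on component order; KLD matching aligns them
f = aa_fuse([0.9, 0.2], [0.25, 0.85])
assert all(abs(x - y) < 1e-9 for x, y in zip(f, [0.875, 0.225]))
```

The AA rule is conservative: fused existence probabilities stay between the two inputs, which is what makes it robust when one of the filters has missed a target.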
A cost-based selective maintenance decision-making method is presented. The purpose of this method is to find an optimal choice of maintenance actions to be performed on a selected group of machines in manufacturing systems. The arithmetic reduction of intensity model is introduced to describe the influence of different maintenance actions (preventive maintenance, minimal repair, and overhaul) on machine failure intensity. In addition, a solution algorithm combining greedy heuristic rules with a genetic algorithm is provided. Finally, a case study of the maintenance decision-making problem of an automobile workshop demonstrates the practicability of the method.
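The greedy side of such a hybrid can be sketched as a budgeted pick-by-ratio rule. This is only the heuristic half (the paper couples it with a genetic algorithm), and the action names, costs, and gains are hypothetical:

```python
def greedy_select(actions, budget):
    """Greedy selective-maintenance sketch: repeatedly pick the affordable
    action with the best reliability gain per unit cost."""
    chosen, spent = [], 0.0
    pool = sorted(actions, key=lambda a: a["gain"] / a["cost"], reverse=True)
    for a in pool:
        if spent + a["cost"] <= budget:
            chosen.append(a["name"])
            spent += a["cost"]
    return chosen, spent

actions = [
    {"name": "overhaul-M1", "cost": 5.0, "gain": 0.30},   # ratio 0.060
    {"name": "pm-M2",       "cost": 2.0, "gain": 0.15},   # ratio 0.075
    {"name": "repair-M3",   "cost": 1.0, "gain": 0.05},   # ratio 0.050
]
chosen, spent = greedy_select(actions, budget=3.0)
assert chosen == ["pm-M2", "repair-M3"] and spent == 3.0
```

Greedy ratios give a fast, feasible starting point; the genetic algorithm then searches combinations the ratio ordering misses (here, a larger budget would make the overhaul competitive even at a worse ratio).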
Dynamic fault tree analysis is widely used for the reliability analysis of complex systems with dynamic failure characteristics. In many circumstances, the exact value of system reliability is difficult to obtain due to absent or insufficient data on the failure probabilities or failure rates of components. The traditional fuzzy operation arithmetic based on the extension principle or interval theory may lead to fuzzy accumulation. Moreover, the existing fuzzy dynamic fault tree analysis methods are restricted to the case in which all system components follow exponential time-to-failure distributions. To overcome these problems, a new fuzzy dynamic fault tree analysis approach based on the weakest n-dimensional t-norm arithmetic and a developed sequential binary decision diagram method is proposed to evaluate system fuzzy reliability. Compared with the existing approach, the proposed method can effectively reduce fuzzy accumulation and is applicable to any time-to-failure distribution type for system components. Finally, a case study is presented to illustrate the application and advantages of the proposed approach.
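The accumulation issue can be made concrete with triangular fuzzy numbers written as (center, left spread, right spread). Under the sup-min extension principle the spreads add with every operation, while under the weakest t-norm T_w the spreads of such L-R fuzzy numbers combine by maximum, which is what keeps repeated operations from blowing up. A minimal sketch of the two addition rules:

```python
def add_extension(a, b):
    """Triangular fuzzy addition under the sup-min extension principle:
    spreads add, so uncertainty accumulates with each operation."""
    (m1, l1, r1), (m2, l2, r2) = a, b
    return (m1 + m2, l1 + l2, r1 + r2)

def add_weakest_tnorm(a, b):
    """Addition under the weakest t-norm T_w: spreads combine by max."""
    (m1, l1, r1), (m2, l2, r2) = a, b
    return (m1 + m2, max(l1, l2), max(r1, r2))

a = (10.0, 2.0, 3.0)
b = (20.0, 1.0, 4.0)
assert add_extension(a, b) == (30.0, 3.0, 7.0)
assert add_weakest_tnorm(a, b) == (30.0, 2.0, 4.0)
```

Chaining many such additions, the extension-principle spreads grow linearly in the number of operations while the T_w spreads stay bounded by the largest single spread, which mirrors the paper's claim of reduced fuzzy accumulation.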
A comparison of the arithmetic operations of two dynamic process optimization approaches, the quasi-sequential approach and the reduced Sequential Quadratic Programming (rSQP) simultaneous approach, is presented with respect to equality-constrained optimization problems. Through the detailed comparison of arithmetic operations, it is concluded that the average iteration number within the differential algebraic equation (DAE) integration of the quasi-sequential approach can be regarded as a criterion, and a formula is given to calculate its threshold value. If the average iteration number is less than the threshold value, the quasi-sequential approach has the advantage over the rSQP simultaneous approach; otherwise, the rSQP simultaneous approach is more suitable. Two optimal control problems are given to demonstrate the usage of the threshold value. For optimal control problems whose objective is to stay near a desired operating point, the iteration number is usually small; therefore, the quasi-sequential approach seems more suitable for such problems.
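The decision rule the abstract describes reduces to a one-line comparison once the threshold has been computed. The paper derives the threshold from operation counts; in this sketch it is simply an input:

```python
def choose_approach(avg_dae_iters, threshold):
    """Decision rule from the comparison: below the threshold the
    quasi-sequential approach wins; at or above it, rSQP simultaneous
    does. (The threshold itself comes from the paper's operation-count
    formula, which is not reproduced here.)"""
    if avg_dae_iters < threshold:
        return "quasi-sequential"
    return "rSQP simultaneous"

# e.g. a setpoint-tracking problem with few DAE iterations per step
assert choose_approach(3.2, 5.0) == "quasi-sequential"
assert choose_approach(7.8, 5.0) == "rSQP simultaneous"
```

This matches the abstract's closing observation: problems that stay near a desired operating point need few DAE iterations, so they fall on the quasi-sequential side of the threshold.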
Funding: Supported by the National Natural Science Foundation of China (Grant No. 11701549).
Funding: Supported by the Yunnan Provincial Basic Research Project (202401AT070344, 202301AT070443), the National Natural Science Foundation of China (62263014, 52207105), the Yunnan Lancang-Mekong International Electric Power Technology Joint Laboratory (202203AP140001), and Major Science and Technology Projects in Yunnan Province (202402AG050006).
文摘Accurate short-term wind power forecast technique plays a crucial role in maintaining the safety and economic efficiency of smart grids.Although numerous studies have employed various methods to forecast wind power,there remains a research gap in leveraging swarm intelligence algorithms to optimize the hyperparameters of the Transformer model for wind power prediction.To improve the accuracy of short-term wind power forecast,this paper proposes a hybrid short-term wind power forecast approach named STL-IAOA-iTransformer,which is based on seasonal and trend decomposition using LOESS(STL)and iTransformer model optimized by improved arithmetic optimization algorithm(IAOA).First,to fully extract the power data features,STL is used to decompose the original data into components with less redundant information.The extracted components as well as the weather data are then input into iTransformer for short-term wind power forecast.The final predicted short-term wind power curve is obtained by combining the predicted components.To improve the model accuracy,IAOA is employed to optimize the hyperparameters of iTransformer.The proposed approach is validated using real-generation data from different seasons and different power stations inNorthwest China,and ablation experiments have been conducted.Furthermore,to validate the superiority of the proposed approach under different wind characteristics,real power generation data fromsouthwestChina are utilized for experiments.Thecomparative results with the other six state-of-the-art prediction models in experiments show that the proposed model well fits the true value of generation series and achieves high prediction accuracy.
文摘Indoor Radon Concentrations in Severe Cold Area and Cold Area and Impact of Energy-saving Design on Indoor Radon in China Yunyun Wu1, Yanchao Song1, Changsong Hou1, Hongxing Cui1, Bing Shang1, Haoran Sun1(1. Key Laboratory of Radiological Protection and Nuclear Emergency, China CDC&National Institute for Radiological Protection,Chinese Center for Disease Control and Prevention, Beijing 100088, China)Abstract:This study investigated indoor radon concentrations in modern residential buildings in the Cold Area and Severe Cold Area in China. A total of 19 cities covering 16 provinces were selected with 1, 610 dwellings measured for indoor radon concentration. The arithmetic mean and geometric mean of indoor radon concentration were 68 Bq m-3 and 57 Bq m-3,respectively. It was found that indoor radon concentrations were much higher in the Severe Cold Area than those in the Cold Area.The indoor radon concentrations showed an increasing trend for newly constructed buildings.
文摘Let a_(1),a_(2),a_(3)be nonzero integers with gcd(a_(1),a_(2),a_(3))=1,and let k be any positive integer,K=max[3,|a_(1)|,|a_(2)|,|a_(3)|,k].Suppose that l_(1),l_(2),l_(3)are integers each coprime to k.Suppose further that b is any integer satisfying some necessary congruent conditions.The solvability of linear equation a_(1)p_(1)+a_(2)p_(2)+a_(3)p_(3)=b(p_(j)=l_(j)(mod k),1≤j≤3)with prime variables pi,p_(2),ps is investigated.It is proved that if ai,a_(2),a_(3)are all positive,then the above equation is solvable whenever b≥K^(25);if a,a_(2),a_(3)are not all of the same sign,then the above equation has a solution p_(1),p_(2),p_(3)satisfying max(p_(1),p_(2),p_(3))≤3|b|+K^(25).
Abstract: The search for mechanical properties of materials reached a highly acclaimed level when indentations could be analysed on the basis of elastic theory for hardness and elastic modulus. The mathematical formulas proved very complicated, and various treatments were published between the 1900s and 2000s. The development of indentation instruments, and the wish to simplify their multi-step application, led in 1992 to iterative procedures using relative values instead of absolute ones. Extensive computer iterations with 3 + 8 free parameters for the loading and unloading curves became possible and were implemented into the instruments and worldwide standards. The physical formula for hardness was defined as force over area. For conical, pyramidal, and spherical indenters, the projected area was simply taken for the calculation from the indentation depth, adjusted later by iteration against fused quartz or aluminium as standard materials, and called the "contact height". Continuously measured indentation loading curves were formulated as loading force over depth squared. The unloading curves after release of the indenter used the initial steepness of the pressure relief to calculate what was (and is) incorrectly called "Young's modulus", although the deformation is not unidirectional. For the spherical indentation loading curve, the indentation force was defined over depth raised to the power 3/2 (but without R/h correction). To this day (2025), these treatments violate the energy law, because they attribute all applied force to the indenter depth and ignore the obvious sideways force upon indentation (cf., e.g., wood cleaving). The various refinements led to more and more complicated formulas with which one could not reasonably calculate. It was therefore decided to use 3 + 8 free-parameter iterations fitted to the (poor) standards of fused quartz or aluminium, whose mechanical values were considered "true".
This remains the worldwide standard DIN-ISO-ASTM 14577, which avoids the overcomplicated formulas and their complexity; some of these are shown in the Introduction section. In doing so, it forgoes an understanding of indentation results on a physical basis. We, however, open a simple way to obtain absolute values (though still on the black-box instrument's unsuitable force calibration). We do not iterate but calculate algebraically on the basis of the correct, physically deduced exponent of the loading-force parabolas, h^(3/2) instead of the false h^2 (for spherical indentation there is a calotte-radius over depth correction), and we reveal the physical errors adopted in the official worldwide "14577 standard". Importantly, we reveal the hitherto fully overlooked phase transitions under load, which are not detectable with the false exponent. Phase-transition twinning is even present and falsifies the iteration standards. Instead of elasticity theory, we use the well-defined geometry of these indentations. In doing so, we reach simple, algebraically calculable formulas and find the physical indentation hardness of materials with their onset depth, onset force, and energy, as well as their phase-transition energy (and, temperature dependent, its activation energy). The most important phase transitions are our absolute, algebraically calculated results. The now easily obtained phase transitions under load are very dangerous because they produce polymorph interfaces between the changed and the unchanged material. It was found and published by high-magnification microscopy (5000-fold) that these trouble spots are the sites where stable, 1 to 2 µm long micro-cracks develop (stable for months). If, however, a force higher than that of their formation acts on them, they grow to a catastrophic crash. The same applies to turbulence loads at the pickle fork of airliners.
After the publication of these facts, and after three fatal crashes had occurred in short sequence, the FAA (Federal Aviation Administration) reacted by rechecking all airplanes for such micro-cracks. These were now found in the new fleet of airliners from which the three crashed ones came; they had previously been overlooked. The FAA became aware of the risk and grounded 290 (certainly all) of them, because their material did not have a higher phase-transition onset and energy than that of other airplanes with better material. They did so despite the 14577 standard, which, with its false exponent on the indentation parabola, does not find (and thus formally forbids) phase transitions under indenter load. However, this "standard" will, despite the present author's well-founded petition, not be corrected for the next five years.
Funding: The National Key Technology R&D Program of China during the 11th Five-Year Plan Period (No. 2006BAJ18B03).
Abstract: In order to reduce the computational complexity of connectivity-reliability calculations for road networks, an improved recursive decomposition algorithm is proposed. First, the basic theory of the recursive decomposition algorithm is reviewed. Then the characteristics of road networks, which differ from those of general networks, are analyzed. On this basis, an improved recursive decomposition algorithm that fits road networks better is put forward. Detailed calculation steps convenient for computer implementation are presented, and the advantage of the approximate algorithm based on this improvement is analyzed. The improved recursive decomposition algorithm directly produces disjoint minimal paths and avoids non-polynomial growth problems. Because the characteristics of road networks are taken into account, the algorithm is greatly simplified. Finally, an example is given to prove its validity.
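The paper's improved road-network variant is not reproduced here, but the general decomposition idea can be sketched with the classic edge-factoring recursion for two-terminal connectivity reliability (edge probabilities and the toy graph are made up):

```python
# Classic edge-factoring recursion for two-terminal reliability:
#   R(G) = p_e * R(G with e contracted) + (1 - p_e) * R(G without e).
# A generic sketch of recursive decomposition, not the paper's improved
# road-network algorithm.
def reliability(edges, s, t):
    # edges: list of (u, v, p) with working probability p.
    if s == t:
        return 1.0
    if not edges:
        return 0.0
    (u, v, p), rest = edges[0], edges[1:]
    # Contract the edge: merge node v into node u everywhere.
    contracted = [(u if a == v else a, u if b == v else b, q)
                  for a, b, q in rest]
    contracted = [(a, b, q) for a, b, q in contracted if a != b]  # drop loops
    new_s = u if s == v else s
    new_t = u if t == v else t
    works = reliability(contracted, new_s, new_t)
    fails = reliability(rest, s, t)
    return p * works + (1 - p) * fails

# Two parallel s-t links, each working with probability 0.9:
# R = 1 - (1 - 0.9)^2 = 0.99.
edges = [("s", "t", 0.9), ("s", "t", 0.9)]
print(round(reliability(edges, "s", "t"), 6))  # 0.99
```

The naive recursion branches per edge; the abstract's point is that exploiting road-network structure and producing disjoint minimal paths directly avoids this non-polynomial blow-up.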
Abstract: A register transfer level mapping (RTLM) algorithm for technology mapping at the RT level is presented, which supports current design methodologies using high-level design and design reuse. The mapping rules implement a source ALU using target ALUs. The source and target ALUs are all represented by general ALUs, and the mapping rules, described in table form, are applied in the algorithm. The graph clustering algorithm is a branch-and-bound algorithm based on the graph formulation of the mapping problem. The mapping algorithm is well suited to mapping regularly structured data paths. Comparisons between the experimental results generated by a greedy algorithm and by the graph clustering algorithm show the feasibility of the presented algorithm.
Abstract: Counting has always been one of the most important operations for human beings. Naturally, it is inherent in economics and business. We count with the unique arithmetic that humans have used for millennia. However, over time, the most inquisitive thinkers have questioned the validity of standard arithmetic in certain settings. It started in ancient Greece with the famous philosopher Zeno of Elea, who elaborated a number of paradoxes questioning popular knowledge. Millennia later, the famous German researcher Hermann von Helmholtz (1821-1894) [1] expressed reservations about the applicability of conventional arithmetic to physical phenomena. In the 20th and 21st centuries, mathematicians such as Yesenin-Volpin (1960) [2], Van Bendegem (1994) [3], Rosinger (2008) [4], and others articulated similar concerns. Indeed, in the 20th century, expressions such as 1 + 1 = 3 or 1 + 1 = 1 appeared to reflect important characteristics of economic, business, and social processes. We call such expressions synergy arithmetic. It is a common notion that synergy arithmetic has no mathematical meaning. In this paper, however, we mathematically ground and explicate synergy arithmetic.
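One standard way to make such expressions rigorous (not necessarily the paper's construction) is a functional, non-Diophantine addition a ⊕ b = f⁻¹(f(a) + f(b)); the choice of f below is purely illustrative:

```python
# Functional (non-Diophantine) addition: a (+) b = f_inv(f(a) + f(b)).
# The power-law f(x) = x**g is an illustrative choice, not the paper's
# specific grounding of synergy arithmetic.
def synergy_add(a, b, g):
    f = lambda x: x ** g
    f_inv = lambda y: y ** (1.0 / g)
    return f_inv(f(a) + f(b))

# With g < 1, combining two unit inputs yields more than 2 -- a
# "1 + 1 = 3"-style synergy; with g > 1 it yields less (interference).
print(synergy_add(1, 1, 0.6309) > 2.9)  # True, result is close to 3
print(synergy_add(1, 1, 2.0) < 2.0)     # True, result is sqrt(2)
```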
Abstract: This work aims to show that various problems from different fields can be modeled more efficiently using multiplicative calculus in place of Newtonian calculus. Since multiplicative calculus is still in its infancy, some effort is devoted to explaining its basic principles, such as exponential arithmetic, multiplicative calculus, and multiplicative differential equations. Examples from finance, actuarial science, economics, and the social sciences are presented with solutions using multiplicative calculus concepts. Based on the encouraging results obtained, it is recommended that further research be invested in exploiting the applicability of multiplicative calculus in different fields, as well as in developing multiplicative calculus concepts.
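The central object can be sketched numerically: the multiplicative derivative f*(x) = lim_{h→0} (f(x+h)/f(x))^{1/h} = exp(d/dx ln f(x)). The finance-style example below is illustrative:

```python
# Numerical multiplicative derivative:
#   f*(x) = lim (f(x+h)/f(x))**(1/h) = exp(d/dx ln f(x)).
def mult_derivative(f, x, h=1e-6):
    return (f(x + h) / f(x)) ** (1.0 / h)

# Compound growth f(t) = c * a**t has CONSTANT multiplicative derivative
# a -- the per-period growth factor -- which is why multiplicative
# calculus is natural for finance-style modeling (example is made up).
f = lambda t: 100.0 * 1.05 ** t
print(round(mult_derivative(f, 0.0), 4))   # 1.05
print(round(mult_derivative(f, 10.0), 4))  # 1.05
```

By contrast, the ordinary (Newtonian) derivative of the same f grows with t, obscuring the constant growth rate.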
Funding: Supported by the National Natural Science Foundation of China (Nos. 11975060, 12005026, and 12075038), the Major Science and Technology Project in Sichuan Province (No. 19ZDZD0137), and the Sichuan Science and Technology Program (No. 2020YFG0019).
Abstract: Output-signal models and impulse response shaping (IRS) functions of semiconductor detectors are important for establishing high-precision measurement systems. In this paper, an output-signal model for semiconductor detector systems is proposed. Based on the proposed model, a multistage cascade deconvolution IRS algorithm was developed using the C-R inverse system, R-C inverse system, and differentiator system. Silicon drift detector signals acquired from the analog-to-digital converter were tested. The experimental results indicated that the shaped pulses obtained using the proposed model had no undershoot, and the average peak base width of the output shaped pulses was reduced by 36% compared with that for a simpler model proposed in previous work [1]. Offline processing results indicated that, compared with the traditional IRS algorithm, the average peak base width of the output shaped pulses obtained using the proposed algorithm was reduced by 11%, and the total elapsed time required for pulse shaping was reduced by 26%. The proposed algorithm avoids recursive calculation. If the sampling frequency of the digital system reaches 100 MHz, the proposed algorithm can be simplified to integer arithmetic. The proposed IRS algorithm can be applied to high-resolution energy spectrum analysis, high-counting-rate energy spectrum correction, and coincidence and anti-coincidence measurements.
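The building block behind such cascaded inverse systems can be sketched with a one-pole R-C stage and its exact digital inverse; the filter coefficient and test pulse below are illustrative, not the paper's detector model:

```python
# One-pole R-C low-pass  y[n] = a*y[n-1] + (1-a)*x[n]  and its exact
# deconvolution inverse  x[n] = (y[n] - a*y[n-1]) / (1-a).
# Illustrative of the inverse-system idea only; coefficients are made up.
def rc_shape(x, a):
    y, prev = [], 0.0
    for v in x:
        prev = a * prev + (1.0 - a) * v
        y.append(prev)
    return y

def rc_inverse(y, a):
    x, prev = [], 0.0
    for v in y:
        x.append((v - a * prev) / (1.0 - a))
        prev = v
    return x

pulse = [0.0, 1.0, 0.5, 0.25, 0.0, 0.0]
shaped = rc_shape(pulse, a=0.8)
recovered = rc_inverse(shaped, a=0.8)
print(all(abs(p - r) < 1e-12 for p, r in zip(pulse, recovered)))  # True
```

Cascading several such inverse stages (C-R, R-C, differentiator) is the structure the abstract describes; note the inverse is non-recursive in the sense that each output sample needs only the current and previous input samples.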
Funding: This project is supported by the National Natural Science Foundation of China (No. 61202439), and partly by the Scientific Research Foundation of the Hunan Provincial Education Department of China (No. 16A008) and the Hunan Key Laboratory of Smart Roadway and Cooperative Vehicle-Infrastructure Systems (No. 2017TP1016).
Abstract: To protect the copyright of a text and recover its original content harmlessly, this paper proposes a novel reversible natural language watermarking method that combines arithmetic coding with synonym substitution operations. By analyzing the relative frequencies of synonymous words, the synonyms employed for carrying the payload are quantized into an unbalanced and redundant binary sequence. This quantized binary sequence is compressed losslessly by adaptive binary arithmetic coding to provide spare capacity for accommodating additional data. Then, the compressed data appended with the watermark are embedded into the cover text via synonym substitutions in an invertible manner. On the receiver side, the watermark and compressed data can be extracted by decoding the values of the synonyms in the watermarked text, whereupon the original text can be perfectly recovered by decompressing the extracted compressed data and restoring the replaced synonyms to their originals. Experimental results demonstrate that the proposed method extracts the watermark successfully, achieves lossless recovery of the original text, and additionally attains a high embedding capacity.
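The substitution layer can be sketched in isolation. Below, each synonym set is frequency-ranked so that the more frequent word encodes bit 0; the adaptive arithmetic-coding compression stage that makes the scheme reversible is omitted, and the word lists are made-up examples:

```python
# Toy synonym-substitution embedding: within each synonym set, rank 0 is
# the more frequent word (bit 0) and rank 1 the less frequent (bit 1).
# The paper's arithmetic-coding stage is omitted; word lists are made up.
SYNSETS = [("big", "large"), ("fast", "quick"), ("smart", "clever")]
WORD2SET = {w: (i, b) for i, pair in enumerate(SYNSETS)
            for b, w in enumerate(pair)}

def embed(words, bits):
    out, i = [], 0
    for w in words:
        if w in WORD2SET and i < len(bits):
            out.append(SYNSETS[WORD2SET[w][0]][bits[i]])  # carry one bit
            i += 1
        else:
            out.append(w)
    return out

def extract(words):
    # Each carrier word's rank within its synonym set is the hidden bit.
    return [WORD2SET[w][1] for w in words if w in WORD2SET]

cover = ["a", "big", "and", "fast", "smart", "dog"]
marked = embed(cover, [1, 0, 1])
print(marked)           # ['a', 'large', 'and', 'fast', 'clever', 'dog']
print(extract(marked))  # [1, 0, 1]
```

In the full method, the extracted bit stream contains both the watermark and the compressed record of the original synonym choices, which is what allows perfect recovery of the cover text.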
Abstract: The use of the least-mean-square (LMS) algorithm to cancel the direct wave in a passive radar system is introduced. A model of the direct wave is derived. By using an LMS adaptive FIR filter, a software solution for an FM passive radar system is developed, replacing the hardware of the existing experimental passive radar system. Furthermore, simulation results are given, which indicate that using the LMS algorithm to cancel the direct wave is effective.
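The core idea can be sketched directly: an adaptive FIR filter driven by the known reference (direct-wave) signal learns to subtract it from the surveillance channel. The signal model, tap count, and step size below are illustrative:

```python
import math

# LMS adaptive canceller: the reference input is the known FM direct
# wave; the filter learns to subtract it from the primary channel.
# Signal model, tap count, and step size are illustrative.
def lms_cancel(primary, reference, taps=4, mu=0.05):
    w = [0.0] * taps          # adaptive FIR weights
    buf = [0.0] * taps        # delay line for the reference
    residual = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]
        y = sum(wi * bi for wi, bi in zip(w, buf))   # direct-wave estimate
        e = d - y                                     # cancellation residual
        w = [wi + mu * e * bi for wi, bi in zip(w, buf)]  # LMS update
        residual.append(e)
    return residual

n = 2000
direct = [math.sin(0.2 * k) for k in range(n)]   # reference channel
primary = [2.5 * d for d in direct]              # direct wave leaking in
residual = lms_cancel(primary, direct)
# After convergence the direct-wave power in the residual is negligible.
tail = residual[-200:]
print(sum(e * e for e in tail) / len(tail) < 1e-3)  # True
```

In a real passive radar the primary channel would also contain weak target echoes, which survive in the residual once the dominant direct wave is cancelled.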
Funding: Supported by the National Natural Science Foundation of China (Grant No. 51705445), the Hebei Provincial Natural Science Foundation of China (Grant No. E2016203324), and the Open Foundation of the State Key Laboratory of Fluid Power and Mechatronic Systems of China (Grant No. GZKF-201714).
Abstract: Finding effective atomizing technology for reducing industrial pollution is a great challenge; the twin-fluid atomizing nozzle has recently drawn great attention in this field. Current studies on twin-fluid nozzles mainly focus on droplet breakup and single-droplet characteristics; research on the influence of structural parameters on droplet diameter characteristics in the flow field is scarcely available. In this paper, the influence of a self-excited vibrating cavity structure on droplet diameter characteristics was investigated. Twin-fluid atomizing tests were performed on a self-built open atomizing test bench based on a phase Doppler particle analyzer (PDPA). The atomizing flow field of the twin-fluid nozzle with and without a self-excited vibrating cavity was tested and analyzed, and the flow field was then investigated for different self-excited vibrating cavity structures. The experimental results show that the structural parameters of the self-excited vibrating cavity had a great effect on the breakup of large droplets. The Sauter mean diameter (SMD) increased with increasing orifice diameter or orifice depth; a smaller orifice diameter or orifice depth was beneficial for enhancing the turbulence around the nozzle outlet and decreasing the SMD. The atomizing performance was better when the orifice diameter was 2.0 mm or the orifice depth was 1.5 mm. Furthermore, the SMD first increased and then decreased with increasing distance between the nozzle outlet and the self-excited vibrating cavity, and the SMD over more than half the atomizing flow field was under 35 μm when the distance was 5.0 mm. In addition, with increasing axial and radial distance from the nozzle outlet, both the SMD and the arithmetic mean diameter (AMD) tend to increase.
The research results provide design parameters for the twin-fluid nozzle, and the experimental results could serve as a beneficial supplement to twin-fluid nozzle studies.
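The two diameter statistics compared above are computed differently, which is why they diverge in the same flow field. A tiny example with made-up droplet sizes:

```python
# Sauter mean diameter  D32 = sum(d^3) / sum(d^2)  vs. arithmetic mean
# diameter  D10 = mean(d), for an illustrative droplet sample (microns).
diameters = [10.0, 20.0, 30.0, 40.0, 50.0]
smd = sum(d ** 3 for d in diameters) / sum(d ** 2 for d in diameters)
amd = sum(diameters) / len(diameters)
# SMD weights large droplets (volume over surface area) more heavily,
# so SMD >= AMD for any non-uniform sample.
print(round(smd, 2), round(amd, 2))  # 40.91 30.0
```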
Funding: Supported by the National Natural Science Foundation of China (No. 61871301).
Abstract: Coalescence and missed detection are two key challenges in Multi-Target Tracking (MTT). When balancing tracking accuracy and real-time performance, existing Random Finite Set (RFS) based filters generally find it difficult to handle both problems simultaneously; examples include the Track-Oriented marginal Multi-Bernoulli/Poisson (TOMB/P) and Measurement-Oriented marginal Multi-Bernoulli/Poisson (MOMB/P) filters. Based on the Arithmetic Average (AA) fusion rule, this paper proposes a novel fusion framework for the Poisson Multi-Bernoulli (PMB) filter, which integrates the advantages of the TOMB/P filter in dealing with missed detection and of the MOMB/P filter in dealing with coalescence. To fuse the different PMB distributions, the Bernoulli components in the different Multi-Bernoulli (MB) distributions are associated with each other by Kullback-Leibler Divergence (KLD) minimization. Moreover, an adaptive AA fusion rule is designed on the basis of exponential fusion weights, which exploits the TOMB/P and MOMB/P updates to resolve these difficulties in MTT. Finally, by comparison with the TOMB/P and MOMB/P filters, the performance of the proposed filter in terms of accuracy and efficiency is demonstrated in three challenging scenarios.
Funding: Projects 51105141 and 51275191 supported by the National Natural Science Foundation of China; Project 2009AA043301 supported by the National High Technology Research and Development Program of China; Project 2012TS073 supported by the Fundamental Research Funds for the Central Universities, HUST, China.
Abstract: A cost-based selective maintenance decision-making method is presented. The purpose of this method is to find an optimal choice of maintenance actions to be performed on a selected group of machines in manufacturing systems. The arithmetic reduction of intensity model is introduced to describe the influence of different maintenance actions (preventive maintenance, minimal repair, and overhaul) on machine failure intensity. In addition, a resolution algorithm combining greedy heuristic rules with a genetic algorithm is provided. Finally, a case study of the maintenance decision-making problem of an automobile workshop is given, which demonstrates the practicability of the method.
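The arithmetic reduction of intensity idea can be sketched as follows: a maintenance action at time T subtracts a fraction ρ of the current baseline intensity, with ρ near 0 for minimal repair and larger ρ for deeper actions. The baseline intensity, ρ values, and times below are illustrative, not the paper's model parameters:

```python
# Arithmetic-reduction-of-intensity (ARI_1-style) sketch: a maintenance
# action at time T subtracts rho * lam0(T) from the failure intensity for
# t > T.  rho ~ 0 is minimal repair; larger rho models deeper actions
# such as overhaul.  Baseline and numbers are illustrative.
def intensity(t, maintenance_times, rho, lam0=lambda t: 0.02 * t):
    past = [T for T in maintenance_times if T <= t]
    reduction = rho * lam0(max(past)) if past else 0.0
    return lam0(t) - reduction

# Overhaul (large rho) reduces the post-maintenance intensity more than
# preventive maintenance (small rho), mirroring the action hierarchy.
after_pm = intensity(60.0, [50.0], rho=0.3)
after_overhaul = intensity(60.0, [50.0], rho=0.9)
print(round(after_pm, 3), round(after_overhaul, 3))  # 0.9 0.3
```

The selective-maintenance optimization then trades the cost of each action against how much intensity reduction it buys on each machine.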
Funding: Supported by the National Defense Basic Scientific Research Program of China (No. 61325102).
Abstract: Dynamic fault tree analysis is widely used for the reliability analysis of complex systems with dynamic failure characteristics. In many circumstances, the exact value of system reliability is difficult to obtain owing to absent or insufficient data on the failure probabilities or failure rates of components. The traditional fuzzy operation arithmetic, based on the extension principle or interval theory, may lead to accumulation of fuzziness. Moreover, existing fuzzy dynamic fault tree analysis methods are restricted to the case in which all system components follow exponential time-to-failure distributions. To overcome these problems, a new fuzzy dynamic fault tree analysis approach based on the weakest n-dimensional t-norm arithmetic and a developed sequential binary decision diagram method is proposed to evaluate fuzzy system reliability. Compared with existing approaches, the proposed method effectively reduces the accumulation of fuzziness and is applicable to any time-to-failure distribution type for system components. Finally, a case study is presented to illustrate the application and advantages of the proposed approach.
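The fuzziness-accumulation contrast can be sketched for the simplest case, addition of triangular fuzzy numbers: under the standard (min t-norm) extension principle the spreads add, whereas under the weakest t-norm they take the maximum. The numeric values are illustrative:

```python
# Triangular fuzzy number as (mode m, left spread alpha, right spread beta).
# Min t-norm (extension principle) addition ADDS the spreads, so fuzziness
# accumulates with every operation.  Weakest t-norm addition takes the MAX
# of the spreads instead, which is what keeps fuzziness from blowing up.
# Values are illustrative.
def add_min_tnorm(A, B):
    return (A[0] + B[0], A[1] + B[1], A[2] + B[2])

def add_weakest_tnorm(A, B):
    return (A[0] + B[0], max(A[1], B[1]), max(A[2], B[2]))

A = (5.0, 0.5, 0.5)
B = (3.0, 0.25, 0.75)
print(add_min_tnorm(A, B))      # (8.0, 0.75, 1.25)
print(add_weakest_tnorm(A, B))  # (8.0, 0.5, 0.75)
```

Over the many gate operations of a dynamic fault tree, the max rule keeps the result's support usable where the additive rule would widen it at every step.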
Funding: Supported by the National Natural Science Foundation of China (20676117) and the National Creative Research Groups Science Foundation of China (60421002).
Abstract: A comparison of the arithmetic operations of two dynamic process optimization approaches, the quasi-sequential approach and the reduced sequential quadratic programming (rSQP) simultaneous approach, is presented for equality-constrained optimization problems. From the detailed comparison of arithmetic operations, it is concluded that the average iteration number within the differential-algebraic equation (DAE) integration of the quasi-sequential approach can be regarded as a criterion, and a formula is given to calculate its threshold value. If the average iteration number is less than the threshold value, the quasi-sequential approach has the advantage over the rSQP simultaneous approach; otherwise, the latter is more suitable. Two optimal control problems are given to demonstrate the use of the threshold value. For optimal control problems whose objective is to stay near a desired operating point, the iteration number is usually small; therefore, the quasi-sequential approach seems more suitable for such problems.