Based on the sampler decomposition method and the modified Z-transform, this paper proposes a pulse transfer function matrix description of multivariable multirate sampling systems. The resulting model has a simple structure and can serve as a basis for the analysis and synthesis of multirate sampling systems.
A novel carbon trap sampling system for gas-phase mercury measurement in flue gas is developed, including highly efficient sorbents made of modified biomass cokes and high-precision sorbent traps for measuring particle-bound and total vapor-phase mercury in flue gas. A dedusting device is installed to collect fine fly ash and reduce measurement errors. A thorough comparison test of mercury concentration in flue gas is conducted between the novel sampling system and the Ontario Hydro method (OHM) in a 6 kW circulating fluidized bed combustor. Mercury mass balance rates of the OHM range from 95.47% to 104.72%. The mercury breakthrough rates for the second section of the sorbent trap are all below 2%. The relative deviations in the two test cases are in the range of 15.96% to 17.56% under different conditions. The verified data suggest that this novel carbon trap sampling system meets the quality assurance and quality control standards required by EPA Method 30B and can be applied to coal-fired flue gas mercury sampling.
This letter explores a distributed multisensor dynamic system with a uniform sampling rate but asynchronous sampling instants across sensors, and puts forward a new gradation fusion algorithm for such systems. Treating the total forecasted increment between two adjacent moments as the forecasted estimate of the corresponding state increment in the fusion center, the new algorithm models the state and its forecasted estimate at every moment. A Kalman filter and all measurements arriving sequentially within the fusion period are employed to update the target state estimate step by step, given that the system has obtained a target state estimate based on the overall information of the previous fusion period. Accordingly, in the present period, the fused estimate of the target state at each sampling point, based on the overall information, can be obtained. This letter elaborates the form of the new algorithm. Computer simulation demonstrates that it estimates the target state with greater precision than the existing time-calibrated asynchronous fusion algorithm.
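The step-by-step measurement update described above can be sketched with a standard textbook Kalman measurement update applied to measurements in arrival order. This is a generic sketch, not the authors' exact gradation fusion algorithm; all numbers are illustrative.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update (textbook form)."""
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)           # state update
    P = (np.eye(len(x)) - K @ H) @ P  # covariance update
    return x, P

# Fuse asynchronous scalar position measurements from two sensors,
# processed in time order within one fusion period.
x = np.array([0.0, 1.0])             # [position, velocity]
P = np.eye(2)
H = np.array([[1.0, 0.0]])
for z, r in [(0.2, 0.5), (0.15, 0.3)]:   # (measurement, noise variance)
    x, P = kalman_update(x, P, np.array([z]), H, np.array([[r]]))
```

Each sequential update shrinks the position variance, which is the mechanism that lets all in-period measurements refine the fused state estimate.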
One of the detection objectives of the Chinese Asteroid Exploration mission is to investigate the space environment near the Main-belt Comet (MBC, Active Asteroid) 311P/PANSTARRS. This paper outlines the scientific objectives, measurement targets, and measurement requirements for the proposed Gas and Ion Analyzer (GIA). The GIA is designed for in-situ mass spectrometry of neutral gases and low-energy ions, such as hydrogen, carbon, and oxygen, in the vicinity of 311P. Ion sampling techniques are essential for the GIA's Time-of-Flight (TOF) mass analysis capabilities. In this paper, we present an enhanced ion sampling technique through the development of an ion attraction model and an ion source model. The ion attraction model demonstrates that adjusting the attraction grid voltage can enhance the detection efficiency of low-energy ions and mitigate the repulsion of ions during sampling caused by positive charging of the satellite's surface. The ion source model simulates the processes of gas ionization and ion multiplication. Simulation results indicate that the GIA can achieve a lower pressure limit below 10^(−13) Pa and a dynamic range exceeding 10^9. These performances ensure the generation of ions with a stable and consistent current, which is crucial for high-resolution, broad-dynamic-range mass spectrometry. Preliminary testing experiments have verified the GIA's capability to detect gas compositions such as H2O and N2. In-situ measurements near 311P using the GIA are expected to contribute significantly to our understanding of asteroid activity mechanisms, the evolution of the atmospheric and ionized environments of main-belt comets, their interactions with the solar wind, and the origin of Earth's water.
The Electro-Optic Sampling (EOS) detection technique has been widely used in terahertz science and technology, and it can also measure the field-time waveform of few-cycle laser pulses. Its frequency response and band limitation are determined directly by the electro-optic crystal and the duration of the probe laser pulse. Here, we investigate the performance of EOS with a thin GaSe crystal in the measurement of mid-infrared few-cycle laser pulses. The shift of the central frequency and the change of the bandwidth induced by EOS detection are calculated, and the pulse distortions introduced in this detection process are discussed. It is found that this technique produces a red-shift of the central frequency and a narrowing of the bandwidth. These changes decrease as the laser wavelength increases from 2 μm to 10 μm. This work can help to estimate the performance of the EOS detection technique in the mid-infrared band and offers a reference for related experiments.
Critical Height Sampling (CHS) estimates stand volume free from any model or tree-form assumptions. Despite its introduction more than four decades ago, CHS has not been widely applied in the field due to perceived measurement challenges. The objectives of this study were to compare estimated stand volume between CHS and sampling methods that use volume or taper models, to test the equivalence of the sampling methods, and to assess their relative efficiency. We established 65 field plots in planted forests of two coniferous tree species and estimated stand volume for a range of Basal Area Factors (BAFs). Results showed that CHS produced the most similar mean stand volume across BAFs and tree species, with maximum differences between BAFs of 5–18 m^(3)·ha^(−1). Horizontal Point Sampling (HPS) using volume models produced very large variability in mean stand volume across BAFs, with differences of up to 126 m^(3)·ha^(−1). However, CHS was less precise and less efficient than HPS. Furthermore, none of the sampling methods were statistically interchangeable with CHS at an allowable tolerance of ≤55 m^(3)·ha^(−1). About 72% of critical height measurements were below crown base, indicating that critical height was more accessible to measurement than expected. Our study suggests that the consistency of the CHS mean estimates is a major advantage when planning a forest inventory. When checked against CHS, the results hint that HPS estimates might contain model bias. These strengths of CHS could outweigh its lower precision. The choice of sampling method also carries serious financial implications. Lastly, CHS could benefit forest management as an alternative for estimating stand volume when volume or taper models are lacking or unreliable.
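For context, the two per-point estimators being compared can be sketched using the forms commonly given in the angle-count (Bitterlich) sampling literature. This is a hedged sketch, not the study's code; the BAF and tree values below are illustrative only.

```python
# Per-point volume estimators from angle-count sampling, as commonly
# defined in the literature (illustrative sketch, synthetic numbers).
def chs_volume_per_ha(critical_heights_m, baf):
    """Critical Height Sampling: V/ha = BAF * sum of critical heights
    of the tallied trees -- no volume or taper model needed."""
    return baf * sum(critical_heights_m)

def hps_volume_per_ha(tree_volumes_m3, basal_areas_m2, baf):
    """Horizontal Point Sampling with a volume model:
    V/ha = BAF * sum(v_i / g_i) over tallied trees, where v_i comes
    from a fitted volume model and g_i is the tree's basal area."""
    return baf * sum(v / g for v, g in zip(tree_volumes_m3, basal_areas_m2))

v_chs = chs_volume_per_ha([18.0, 22.5, 15.2], baf=4.0)          # m^3/ha
v_hps = hps_volume_per_ha([1.2, 1.8], [0.07, 0.11], baf=4.0)    # m^3/ha
```

The contrast is visible directly in the formulas: CHS needs only measured critical heights, while HPS inherits any bias in the volume model supplying v_i.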
Optical solitons, as self-sustaining waveforms in a nonlinear medium where dispersion and nonlinear effects are balanced, have key applications in ultrafast laser systems and optical communications. Physics-informed neural networks (PINNs) provide a new way to solve the nonlinear Schrödinger equation describing soliton evolution by fusing data-driven and physical constraints. However, the grid-point sampling strategy of traditional PINNs suffers from high computational complexity and unstable gradient flow, which makes it difficult to capture physical details efficiently. In this paper, we propose a residual-based adaptive multi-distribution (RAMD) sampling method to optimize the PINN training process by dynamically constructing a multi-modal loss distribution. With a 50% reduction in the number of grid points, RAMD significantly reduces the relative error of the PINN and, in particular, reduces the solution error of the (2+1)-dimensional Ginzburg–Landau equation from 4.55% to 1.98%. Through its combination of multi-modal distribution modeling and autonomous sampling control, RAMD provides a high-precision numerical simulation tool for the design of all-optical communication devices, the optimization of nonlinear laser devices, and related studies.
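The residual-based sampling idea can be illustrated with a minimal sketch: draw training points with probability proportional to the residual magnitude, so high-error regions are sampled more densely. This is a generic residual-adaptive scheme, not the full RAMD method; the residual function here is a hypothetical stand-in for the PDE residual a real PINN would compute via automatic differentiation.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual(x):
    # Hypothetical stand-in for the PDE residual |N[u](x)|; in a real
    # PINN this is evaluated from the network via autodiff.
    return np.abs(np.sin(5 * x)) + 1e-3

# Residual-based adaptive sampling: select training points with
# probability proportional to residual magnitude.
candidates = rng.uniform(0.0, 1.0, size=10_000)
r = residual(candidates)
probs = r / r.sum()
idx = rng.choice(len(candidates), size=500, replace=False, p=probs)
points = candidates[idx]

# High-residual regions should now be sampled more densely than
# under uniform sampling.
mean_r_selected = residual(points).mean()
mean_r_uniform = r.mean()
```

The selected points concentrate where the residual is large, which is the mechanism by which adaptive schemes reduce error with fewer total grid points.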
Launched at 1:31 am GMT+8 on May 29 by a Long March 3B carrier rocket from the Xichang Satellite Launch Center in Sichuan province, China, Tianwen-2, the second mission of China's Planetary Exploration Program, entered the transfer trajectory toward the asteroid 2016 HO3 after flying for 18 minutes. Its solar wings unfolded properly, signaling a successful start and priming the spacecraft for the next stage of its mission.
Neutron time-of-flight (ToF) measurement is a highly accurate method for obtaining the kinetic energy of a neutron by measuring its velocity, but it requires precise acquisition of the neutron signal arrival time. The high hardware costs and data burden associated with acquiring neutron ToF signals pose significant challenges: higher sampling rates increase the data volume as well as the data processing and storage hardware costs. Compressed sampling can address these challenges, but it faces issues of suboptimal sampling efficiency and reconstructed signal quality. This paper proposes a deep learning-based compressed sampling (DL-CS) algorithm for reconstructing neutron ToF signals that outperforms traditional compressed sampling methods. The approach comprises four modules: random projection, dimension raising, initial reconstruction, and final reconstruction. First, the technique adaptively compresses neutron ToF signals using three sequential convolutional layers, replacing the random measurement matrices of traditional compressed sampling theory. The signals are then reconstructed using a modified inception module, long short-term memory, and self-attention. Performance was quantified using the percentage root-mean-square difference, correlation coefficient, and reconstruction time. For sampling rates below 10%, applied to neutron ToF signals generated by an electron-beam-driven photoneutron source, the proposed DL-CS approach achieved a percentage root-mean-square difference of 5%, a correlation coefficient of 0.9988, and a reconstruction time of 0.0108 s, significantly improving signal quality over other compressed sampling methods in both reconstruction accuracy and speed.
When dealing with imbalanced datasets, the traditional support vector machine (SVM) tends to produce a classification hyperplane biased towards the majority class, which exhibits poor robustness. This paper proposes a high-performance classification algorithm specifically designed for imbalanced datasets. The proposed method first uses a biased second-order cone programming support vector machine (B-SOCP-SVM) to identify the support vectors (SVs) and non-support vectors (NSVs) in the imbalanced data. It then applies the synthetic minority over-sampling technique (SV-SMOTE) to oversample the support vectors of the minority class and uses the random under-sampling technique (NSV-RUS) multiple times to undersample the non-support vectors of the majority class. Combining the resulting minority class dataset with each of the multiple majority class datasets yields multiple new balanced datasets. Finally, SOCP-SVM is used to classify each dataset, and the final result is obtained through an ensemble algorithm. Experimental results demonstrate that the proposed method performs excellently on imbalanced datasets.
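The core SMOTE interpolation step referenced above can be sketched in a few lines. This is a minimal generic SMOTE, not the paper's SV-SMOTE variant, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)

def smote(minority, n_new, k=3):
    """Minimal SMOTE sketch: each synthetic point is interpolated between
    a minority sample and one of its k nearest minority neighbours."""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        d = np.linalg.norm(minority - minority[i], axis=1)
        nn = np.argsort(d)[1:k + 1]      # k nearest, skipping the point itself
        j = rng.choice(nn)
        t = rng.uniform()                # interpolation factor in [0, 1)
        out.append(minority[i] + t * (minority[j] - minority[i]))
    return np.array(out)

minority = rng.normal(loc=5.0, scale=0.5, size=(10, 2))
synthetic = smote(minority, n_new=20)
```

Because every synthetic point lies on a segment between two real minority points, the oversampled class stays inside the minority region rather than scattering into majority territory.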
To realize dynamic statistical publishing and protection of location-based data privacy, this paper proposes a differential privacy publishing algorithm based on adaptive sampling and grid clustering and adjustment. A PID control strategy is combined with the difference in data variation to dynamically adjust the data publishing intervals. The spatial-temporal correlations of adjacent snapshots are utilized to design the grid clustering and adjustment algorithm, which reduces the execution time of the publishing process. The budget distribution and budget absorption strategies are improved to form a sliding-window-based differential privacy statistical publishing algorithm, which realizes continuous statistical publishing with privacy protection and improves the accuracy of the published data. Experiments and analysis on large datasets of actual locations show that the proposed privacy protection algorithm is superior to existing algorithms in the accuracy of adaptive sampling times, the availability of published data, and the execution efficiency of the publishing method.
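The basic differential privacy building block used in such statistical publishing schemes, the Laplace mechanism, can be sketched as follows. This is a generic sketch with illustrative grid-cell counts, not the paper's full sliding-window budget algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

def laplace_mechanism(true_count, epsilon, sensitivity=1.0):
    """Release a count query under epsilon-differential privacy by adding
    Laplace noise with scale sensitivity/epsilon."""
    return true_count + rng.laplace(scale=sensitivity / epsilon)

# Publish location grid-cell counts under a per-release budget epsilon.
counts = np.array([120.0, 45.0, 8.0, 300.0])   # true cell counts
epsilon = 0.5
published = np.array([laplace_mechanism(c, epsilon) for c in counts])

# Expected absolute error per cell is sensitivity/epsilon (here 2.0),
# so smaller budgets buy stronger privacy at the cost of accuracy.
noise_scale = 1.0 / epsilon
```

Budget distribution strategies like those improved in the paper decide how much of a total epsilon each snapshot's release may consume.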
The small punch test technique facilitates convenient acquisition of the mechanical properties of in-service equipment materials and assessment of their remaining service life through sampling. However, the weldability of thin-walled components after small punch sampling, such as ethylene cracking furnace tubes, requires further investigation. Therefore, the weldability of in-service ethylene cracking furnace tubes following small punch sampling was investigated through nondestructive testing, microstructural characterization, and mechanical testing. Additionally, the impact of small punch sampling size and residual stress on the creep performance of the specimens was studied using an improved ductility exhaustion model. The results indicate that both the surface and interior of the weld repair areas on new and service-exposed furnace tubes after small punch sampling are defect-free, exhibiting good weld quality. The strength of the specimens after weld repair was higher than before sampling, whereas toughness decreased. Weld repair following small punch sampling of furnace tubes is both feasible and necessary. Furthermore, a linear relationship was observed between specimen thickness, diameter, and creep fracture time. The residual stress of welding affects the creep performance of the specimen under different stresses.
Large dynamic range and ultra-wideband receiving abilities are significant for many receivers. With these abilities, receivers can obtain signals of different power across an ultra-wideband frequency space without information loss. However, a conventional receiving scheme can hardly achieve large dynamic range and ultra-wideband receiving simultaneously because of the dynamic range and sample rate limitations of the analog-to-digital converter (ADC). In this paper, based on modulated sampling and unlimited sampling, a novel receiving scheme is proposed to achieve both. Focusing on single-carrier signals, the proposed scheme uses only a single self-reset ADC (SR-ADC) with a low sample rate, yet achieves large dynamic range and ultra-wideband receiving simultaneously. Two receiving scenarios are considered: cooperative strong-signal receiving and non-cooperative strong/weak-signal receiving. In the cooperative scenario, an improved fast recovery method is proposed to obtain the modulated sampling output. In the non-cooperative scenario, strong and weak signals with different carrier frequencies are considered, and the signal processing method can recover and estimate each signal. Simulation results show that the proposed scheme realizes large dynamic range and ultra-wideband receiving simultaneously when the input signal-to-noise ratio (SNR) is high.
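The amplitude folding performed by a self-reset ADC, the starting point of unlimited sampling, can be sketched as a centered modulo operation. Recovery of the unfolded signal, which the paper's methods address, is deliberately not shown; this only illustrates how a strong signal is folded into the converter's range.

```python
import numpy as np

def fold(x, lam):
    """Centered modulo: fold amplitudes into [-lam, lam), as a
    self-reset ADC does before quantisation."""
    return np.mod(x + lam, 2 * lam) - lam

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
strong = 10.0 * np.sin(2 * np.pi * 5 * t)   # far exceeds the ADC range
lam = 1.0                                    # ADC full scale +/- 1
folded = fold(strong, lam)                   # what the SR-ADC actually sees
```

Every folded sample fits the converter's range regardless of input amplitude, which is why a single low-range SR-ADC can accept signals whose dynamic range would saturate a conventional ADC; the recovery problem is then to undo the folding.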
Dear Editor, This letter is concerned with stability analysis and stabilization design for sampled-data load frequency control (LFC) systems via a data-driven method. By describing the dynamic behavior of LFC systems with a data-based representation, a stability criterion is derived to obtain the admissible maximum sampling interval (MSI) for a given controller, and a design condition for the PI-type controller is further developed to meet the required MSI. Finally, the effectiveness of the proposed methods is verified by a case study.
With the widespread use of blockchain technology for smart contracts and decentralized applications on the Ethereum platform, the blockchain has become a cornerstone of trust in the modern financial system. However, its anonymity has provided new ways for Ponzi schemes to commit fraud, posing significant risks to investors. Current research still has limitations: for example, Ponzi schemes are difficult to detect in the early stages of smart contract deployment, data imbalance is often not considered, and there is room for improving detection accuracy. To address these issues, this paper proposes LT-SPSD (LSTM-Transformer smart Ponzi scheme detection), a Ponzi scheme detection method that combines Long Short-Term Memory (LSTM) and Transformer models to consider both the time-series transaction information of smart contracts and global information. Based on verified smart contract addresses, account features and code features are extracted to construct a feature dataset, and the SMOTE-Tomek algorithm is used to deal with the imbalanced classification problem. Compared with four typical detection methods in the experiments, LT-SPSD shows significant improvement in precision, recall, and F1-score. The experimental results confirm the efficacy of the model, which has practical value for detecting Ponzi scheme smart contracts on Ethereum.
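The three reported metrics are computed from confusion-matrix counts in the usual way; the counts below are purely illustrative, not the paper's results.

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical Ponzi-detection confusion counts (illustrative only):
# 90 schemes caught, 10 false alarms, 30 schemes missed.
p, r, f1 = prf1(tp=90, fp=10, fn=30)
```

On imbalanced fraud data these metrics matter more than raw accuracy, since a classifier that labels everything "benign" can still score high accuracy while catching no schemes.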
The selection of negative samples significantly influences landslide susceptibility assessment, especially when establishing the relationship between landslides and environmental factors in regions with complex geological conditions. Traditional sampling strategies commonly used in landslide susceptibility models can misrepresent the distribution of negative samples, causing a deviation from actual geological conditions. This, in turn, degrades the discriminative ability and generalization performance of the models. To address this issue, we propose a novel approach for selecting negative samples to enhance the quality of machine learning models. We chose the Liangshan Yi Autonomous Prefecture, located in southwestern Sichuan, China, as the case study. This area, characterized by complex terrain, frequent tectonic activity, and steep slope erosion, experiences recurrent landslides, making it an ideal setting for validating the proposed method. We calculate the contribution values of environmental factors using the Relief algorithm to construct the feature space, apply the Target Space Exteriorization Sampling (TSES) method to select negative samples, calculate landslide probability values by Random Forest (RF) modeling, and then create regional landslide susceptibility maps. We evaluate the performance of the RF model optimized by the Environmental Factor Selection-based TSES (EFSTSES) method using standard performance metrics. The model achieved an accuracy (ACC) of 0.962, a precision (PRE) of 0.961, and an area under the curve (AUC) of 0.962. These findings demonstrate that the EFSTSES-based model effectively mitigates the negative-sample imbalance issue, enhances the differentiation between landslide and non-landslide samples, and reduces misclassification, particularly in geologically complex areas. These improvements offer valuable insights for disaster prevention, land-use planning, and risk mitigation strategies.
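The general idea of constraining negative samples away from known landslides can be sketched with a simple buffer rule: keep only candidate cells whose distance to every landslide exceeds a threshold, then sample a balanced negative set. This is a generic illustration on synthetic coordinates, not the TSES method itself, which operates in an environmental-factor feature space.

```python
import numpy as np

rng = np.random.default_rng(3)

landslides = rng.uniform(0, 10, size=(50, 2))    # known positive sites
candidates = rng.uniform(0, 10, size=(2000, 2))  # random candidate cells

# Keep candidates farther than `buffer` from every landslide, then draw
# as many negatives as there are positives (a balanced training set).
buffer = 0.5
d_min = np.min(
    np.linalg.norm(candidates[:, None, :] - landslides[None, :, :], axis=2),
    axis=1,
)
eligible = candidates[d_min > buffer]
negatives = eligible[rng.choice(len(eligible), size=len(landslides),
                                replace=False)]
```

Pushing negatives outside a buffer (or, as in TSES, outside the positives' region of feature space) reduces the chance of labeling an unmapped landslide-prone cell as a "safe" example.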
A comprehensive fishery-independent survey generally incorporates various specialized surveys and integrates different survey objectives to maximize benefits while accounting for cost limitations. It is important to evaluate the adaptability of a comprehensive survey for different taxa to obtain the optimal design. However, little is known about the validity and adaptability of ichthyoplankton sampling incorporated into a comprehensive fishery-independent survey program for estimating the abundance of ichthyoplankton species. This study included ichthyoplankton sampling in an integrated survey and assessed the appropriateness of the survey design. Kriging interpolation based on Gaussian models was used to estimate values at unsurveyed locations from the original ichthyoplankton survey data in Haizhou Bay, taken as the "true" values. The performance of the ongoing stratified random sampling (StRS), simple random sampling (SRS), cluster sampling (CS), hexagonal systematic sampling (SYS_h), and regular systematic sampling (SYS_r) with different sample sizes in estimating ichthyoplankton abundance was compared in terms of relative estimation error (REE), relative bias (RB), and coefficient of variation (CV) by computer simulation. The ongoing StRS performed better than CS and SRS, but not as well as the two systematic sampling methods, and the current sample size in the StRS design was insufficient to estimate ichthyoplankton abundance. The average REE values were significantly smaller for the two systematic sampling designs than for the other three, and the systematic designs maintained good inter-annual stability of sampling performance. These results suggest that incorporating an ichthyoplankton survey directly into stratified random fishery-independent surveys cannot achieve the desired accuracy for the survey objectives, but accuracy can be improved by adding stations. The assessment framework presented in this study serves as a reference for evaluating the adaptability of integrated surveys to different objectives in other waters.
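The three comparison metrics can be written down as follows, using one common set of definitions (conventions vary slightly across studies, so treat these as an assumption); the repeated survey estimates below are synthetic.

```python
import numpy as np

def ree(estimates, true_value):
    """Relative estimation error (%): RMSE of the estimates over truth."""
    return 100 * np.sqrt(np.mean((estimates - true_value) ** 2)) / true_value

def rb(estimates, true_value):
    """Relative bias (%): mean deviation of the estimates over truth."""
    return 100 * (np.mean(estimates) - true_value) / true_value

def cv(estimates):
    """Coefficient of variation (%) of the estimates."""
    return 100 * np.std(estimates) / np.mean(estimates)

# Simulated abundance estimates from 1000 repeated survey realisations
# of one hypothetical design (true abundance and noise are illustrative).
rng = np.random.default_rng(11)
true_abundance = 100.0
est = true_abundance + rng.normal(0.0, 5.0, size=1000)
```

In a design-comparison simulation like the study's, each candidate design produces its own `est` vector against the Kriged "true" surface, and the design with the smallest REE at acceptable RB and CV wins.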
The joint roughness coefficient (JRC) is one of the key parameters for evaluating the shear strength of rock joints. Because of the scale effect in the JRC, reliable JRC values are of great importance for most rock engineering projects. During the collection of JRC samples, redundant or insufficient representative rock joint surface topography (RJST) information in serial-length JRC samples is the essential factor affecting the reliability of scale-effect results. Therefore, this paper proposes an adaptive sampling method in which the entropy consistency measure Q(a) is used to evaluate the consistency of the joint morphology information contained in adjacent JRC samples. The sampling interval is then automatically adjusted according to a threshold Q(a_t) of the entropy consistency measure, so that the degree of change of RJST information between JRC samples remains the same, ultimately making the representative RJST information in the collected JRC samples more balanced. Application to actual cases shows that the proposed method can obtain the scale effect in the JRC efficiently and reliably.
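The entropy-consistency idea can be illustrated by comparing the Shannon entropy of height increments in adjacent profile segments: segments whose entropies agree within a threshold carry near-duplicate morphology information, so the sampling interval can grow. This is a hedged sketch on a synthetic profile; the exact form of the paper's measure Q(a) and threshold Q(a_t) is not reproduced.

```python
import numpy as np

def shannon_entropy(values, bins=16):
    """Shannon entropy (bits) of a histogram of the given values."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0*log 0 := 0)
    return -np.sum(p * np.log2(p))

# Synthetic rough joint profile (random walk as a stand-in for measured
# surface heights along the joint).
rng = np.random.default_rng(5)
profile = np.cumsum(rng.normal(0.0, 1.0, 2000))

# Entropy of height increments in two adjacent segments; if they agree
# within an assumed threshold, the segments add little new information.
h1 = shannon_entropy(np.diff(profile[:1000]))
h2 = shannon_entropy(np.diff(profile[1000:]))
consistent = abs(h1 - h2) < 0.2       # assumed threshold, illustrative
```

An adaptive sampler would widen the interval while `consistent` holds and shrink it when entropy jumps, keeping the information change between successive JRC samples roughly uniform.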
This study introduces the type-I heavy-tailed Burr XII (TIHTBXII) distribution, a highly flexible and robust statistical model designed to address the limitations of conventional distributions in analyzing data characterized by skewness, heavy tails, and diverse hazard behaviors. We develop the TIHTBXII's mathematical foundations, including its probability density function (PDF), cumulative distribution function (CDF), and essential statistical properties, which are crucial for theoretical understanding and practical application. A comprehensive Monte Carlo simulation evaluates four parameter estimation methods: maximum likelihood (MLE), maximum product spacing (MPS), least squares (LS), and weighted least squares (WLS). The simulation results consistently show that as sample sizes increase, the bias and RMSE of all estimators decrease, with WLS and LS often demonstrating superior and more stable performance. Beyond the theoretical development, we present a practical application of the TIHTBXII distribution in constructing a group acceptance sampling plan (GASP) for truncated life tests. This application shows how the TIHTBXII model can optimize quality control decisions by minimizing the average sample number (ASN) while managing consumer and producer risks. Empirical validation using real-world datasets, including "Active Repair Duration," "Groundwater Contaminant Measurements," and "Dominica COVID-19 Mortality," further demonstrates the TIHTBXII's superior fit compared with existing models. Our findings confirm the TIHTBXII distribution as a powerful and reliable alternative for accurately modeling complex data in fields such as reliability engineering and quality assessment, supporting more informed and robust decision-making.
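For reference, the baseline Burr XII distribution underlying the proposed model has a simple closed-form PDF and CDF; the sketch below states them and checks them numerically. The TIHTBXII heavy-tailed extension itself is not reproduced here, since its exact form is specific to the paper.

```python
import numpy as np

def burr12_cdf(x, c, k):
    """Baseline Burr XII CDF: F(x) = 1 - (1 + x^c)^(-k), for x > 0."""
    x = np.asarray(x, dtype=float)
    return 1.0 - (1.0 + x ** c) ** (-k)

def burr12_pdf(x, c, k):
    """Baseline Burr XII PDF: f(x) = c*k*x^(c-1) * (1 + x^c)^(-k-1)."""
    x = np.asarray(x, dtype=float)
    return c * k * x ** (c - 1) * (1.0 + x ** c) ** (-k - 1)

# Numeric sanity check: the PDF integrates to ~1 on a truncated grid
# (the tail mass beyond x = 200 is negligible for c = 2, k = 3).
xs = np.linspace(1e-6, 200.0, 200_001)
area = np.trapz(burr12_pdf(xs, c=2.0, k=3.0), xs)
```

The polynomial tail (1 + x^c)^(-k) is what gives the family its heavy-tailed behavior, and the heavy-tailed transformation in the paper thickens it further.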
Bilingual lexicon induction focuses on learning word translation pairs, also known as bitexts, from monolingual corpora by establishing a mapping between the source and target embedding spaces. Despite recent advancements, bilingual lexicon induction is limited to inducing bitexts consisting of individual words, lacking the ability to handle semantics-rich phrases. To bridge this gap and support downstream cross-lingual tasks, it is practical to develop a method for bilingual phrase induction that extracts bilingual phrase pairs from monolingual corpora without relying on cross-lingual knowledge. In this paper, we propose a novel phrase embedding training method based on the skip-gram structure. Specifically, we introduce a local hard negative sampling strategy that uses negative samples of central tokens in sliding windows to enhance phrase embedding learning. The proposed method achieves competitive or superior performance compared with baseline approaches, with exceptional results for distant languages. Additionally, we develop a phrase representation learning method that leverages multilingual pre-trained language models (mPLMs). These mPLM-based representations can be combined with the static phrase embeddings above to further improve the accuracy of the bilingual phrase induction task. We manually construct a dataset of bilingual phrase pairs and integrate it with MUSE to facilitate the bilingual phrase induction task.
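The skip-gram negative-sampling objective underlying the proposed training method can be sketched for a single (center, context) pair. This is the standard formulation, not the paper's local hard negative variant; all vectors are synthetic.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_sampling_loss(center, positive, negatives):
    """Skip-gram negative-sampling loss for one (center, context) pair:
    -log s(u . v+) - sum over negatives of log s(-u . v-)."""
    loss = -np.log(sigmoid(center @ positive))
    for v in negatives:
        loss -= np.log(sigmoid(-center @ v))
    return loss

rng = np.random.default_rng(9)
d = 8
center = rng.normal(size=d)
positive = center + 0.1 * rng.normal(size=d)     # a similar (true) context
negatives = [rng.normal(size=d) for _ in range(5)]

loss_good = neg_sampling_loss(center, positive, negatives)   # aligned pair
loss_bad = neg_sampling_loss(center, -positive, negatives)   # opposed pair
```

The loss is small when the center and true context embeddings align and large otherwise; "hard" negative strategies, like the local one proposed here, choose negatives close to the center so that the sum over negatives stays informative.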
文摘Based on the sampler decomposition method and modified Z transform, this paper proposes a pulse transfer function matrix description of the multivariable multirate sampling systems. This multirate sampling system model has a simple structure, and can be used as a basis for the analysis and synthesis of the multirate sampling systems.
基金The National Natural Science Foundation of China(No.51376046,51076030)the National Science and Technology Support Program of China(No.2012BAA02B01)+1 种基金the Fundamental Research Funds for the Central Universitiesthe Scientific Innovation Research of College Graduates in Jiangsu Province(No.CXZZ13_0093,KYLX_0115,KYLX_018)
文摘A novel carbon trap sampling system for gas-phase mercury measurement in flue gas is developed, including the high efficient sorbents made of modified biomass cokes and high precision sorbent traps for measuring particle-bound and total vapor-phase mercury in flue gas. A dedusting device is installed to collect fine fly ash for reducing the measurement errors. The thorough comparison test of mercury concentration in flue gas is conducted between the novel sampling system and the Ontario hydro method (OHM) in a 6 kW circulating fluidized bed combustor. Mercury mass balance rates of the OHM range from 95.47% to 104.72%. The mercury breakthrough rates for the second section of the sorbent trap are all below 2%. The relative deviations in the two test cases are in the range of 15. 96% to 17. 56% under different conditions. The verified data suggest that this novel carbon trap sampling system can meet the standards of quality assurance and quality control required by EPA Method 30B and can be applied to the coal-fired flue gas mercury sampling system.
Funding: Supported by the National Natural Science Foundation of China (No. 60434020, 60374020); the International Cooperation Item of Henan (No. 0446650006); the Henan Outstanding Youth Science Fund (No. 0312001900).
Abstract: This letter explores a distributed multisensor dynamic system in which the sensors share a uniform sampling rate but deliver asynchronous sampling data, and puts forward a new gradation fusion algorithm for such systems. Since the total forecasted increment between two adjacent moments is the forecasted estimate of the corresponding state increment in the fusion center, the new algorithm models the state and its forecasted estimate at every moment. Given the target state estimate based on the overall information from the previous fusion period, a Kalman filter processes the measurements arriving sequentially within the current fusion period to update the target state estimate step by step. Accordingly, the fused estimate of the target state at each sampling point, based on the overall information, can be obtained in the present period. This letter elaborates the form of the new algorithm. Computer simulation demonstrates that it estimates the target state more precisely than the existing time-calibrated asynchronous fusion algorithm.
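The step-by-step update described above can be sketched with a scalar Kalman filter that folds asynchronous measurements into the estimate one at a time within a fusion period. This is a minimal illustration, not the gradation fusion algorithm itself: the random-walk state model, the variances, and all numeric values below are assumptions for the sketch.

```python
import random

def kalman_sequential_fusion(x0, p0, q, measurements):
    """Fuse asynchronous measurements one at a time (sequential update).

    x0, p0       : prior state estimate and variance at the start of the period
    q            : process-noise variance added per prediction step
    measurements : list of (z, r) pairs sorted by arrival time, where z is a
                   measured value and r its noise variance
    Returns the fused state estimate and variance at the end of the period.
    """
    x, p = x0, p0
    for z, r in measurements:
        # Predict: random-walk state model (illustrative assumption)
        p = p + q
        # Update: standard Kalman gain and correction
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
    return x, p

# Toy example: three sensors observe the same constant state 10.0 with noise
random.seed(0)
truth = 10.0
meas = [(truth + random.gauss(0, 0.5), 0.25) for _ in range(3)]
x, p = kalman_sequential_fusion(x0=8.0, p0=4.0, q=0.01, measurements=meas)
```

Each new measurement shrinks the posterior variance, so the fused estimate at the end of the period reflects all observations received so far.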
Funding: Supported by the National Natural Science Foundation of China (42474239, 41204128); the China National Space Administration (Pre-research Project on Civil Aerospace Technologies No. D010301); the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA17010303).
Abstract: One of the detection objectives of the Chinese Asteroid Exploration mission is to investigate the space environment near the Main-Belt Comet (MBC, or active asteroid) 311P/PANSTARRS. This paper outlines the scientific objectives, measurement targets, and measurement requirements for the proposed Gas and Ion Analyzer (GIA). The GIA is designed for in-situ mass spectrometry of neutral gases and low-energy ions, such as hydrogen, carbon, and oxygen, in the vicinity of 311P. Ion sampling techniques are essential for the GIA's time-of-flight (TOF) mass analysis capabilities. In this paper, we present an enhanced ion sampling technique through the development of an ion attraction model and an ion source model. The ion attraction model demonstrates that adjusting the attraction grid voltage can enhance the detection efficiency of low-energy ions and mitigate the repulsion of ions during sampling caused by positive charging of the satellite's surface. The ion source model simulates the processes of gas ionization and ion multiplication. Simulation results indicate that the GIA can achieve a lower pressure limit below 10^(−13) Pa and a dynamic range exceeding 10^9. These performances ensure the generation of ions with a stable and consistent current, which is crucial for high-resolution, broad-dynamic-range mass spectrometry. Preliminary testing experiments have verified the GIA's capability to detect gas compositions such as H2O and N2. In-situ measurements near 311P using the GIA are expected to contribute significantly to our understanding of asteroid activity mechanisms, the evolution of the atmospheric and ionized environments of main-belt comets, their interactions with the solar wind, and the origin of Earth's water.
Funding: Supported by the National Natural Science Foundation of China (12064028) and the Jiangxi Provincial Natural Science Foundation (20232BAB201045).
Abstract: The electro-optic sampling (EOS) detection technique has been widely used in terahertz science and technology, and it can also measure the field waveform of a few-cycle laser pulse in the time domain. Its frequency response and bandwidth limitation are determined directly by the electro-optic crystal and the duration of the probe laser pulse. Here, we investigate the performance of EOS with a thin GaSe crystal in measuring mid-infrared few-cycle laser pulses. The shift of the central frequency and the change of the bandwidth induced by EOS detection are calculated, and the pulse distortions introduced in this detection process are discussed. It is found that the technique produces a red-shift of the central frequency and a narrowing of the bandwidth, and that both effects decrease as the laser wavelength increases from 2 μm to 10 μm. This work helps to estimate the performance of the EOS detection technique in the mid-infrared band and offers a reference for related experiments.
Abstract: Critical Height Sampling (CHS) estimates stand volume free from any model or tree-form assumptions. Despite its introduction more than four decades ago, CHS has not been widely applied in the field owing to perceived challenges in measurement. The objectives of this study were to compare estimated stand volume between CHS and sampling methods that used volume or taper models, to test the equivalence of the sampling methods, and to assess their relative efficiency. We established 65 field plots in planted forests of two coniferous tree species and estimated stand volume for a range of Basal Area Factors (BAFs). Results showed that CHS produced the most similar mean stand volume across BAFs and tree species, with maximum differences between BAFs of 5–18 m³·ha⁻¹. Horizontal Point Sampling (HPS) using volume models produced very large variability in mean stand volume across BAFs, with differences up to 126 m³·ha⁻¹. However, CHS was less precise and less efficient than HPS. Furthermore, none of the sampling methods were statistically interchangeable with CHS at an allowable tolerance of ≤ 55 m³·ha⁻¹. About 72% of critical height measurements were below crown base, indicating that critical height was more accessible to measurement than expected. Our study suggests that the consistency of the CHS mean estimates is a major advantage when planning a forest inventory. When checked against CHS, the results hint that HPS estimates might contain model bias; these strengths of CHS could outweigh its lower precision. The choice of sampling method also has serious financial implications. Lastly, CHS could benefit forest management as an alternative means of estimating stand volume when volume or taper models are lacking or unreliable.
Funding: Supported by the National Key R&D Program of China (Grant No. 2022YFA1604200); the National Natural Science Foundation of China (Grant No. 12261131495); the Beijing Municipal Science and Technology Commission, Administrative Commission of Zhongguancun Science Park (Grant No. Z231100006623006); and the Institute of Systems Science, Beijing Wuzi University (Grant No. BWUISS21).
Abstract: Optical solitons, self-sustaining waveforms in a nonlinear medium where dispersion and nonlinear effects are balanced, have key applications in ultrafast laser systems and optical communications. Physics-informed neural networks (PINNs) provide a new way to solve the nonlinear Schrödinger equation describing soliton evolution by fusing data-driven learning with physical constraints. However, the grid-point sampling strategy of a traditional PINN suffers from high computational complexity and unstable gradient flow, which makes it difficult to capture physical details efficiently. In this paper, we propose a residual-based adaptive multi-distribution (RAMD) sampling method that optimizes the PINN training process by dynamically constructing a multi-modal loss distribution. With a 50% reduction in the number of grid points, RAMD significantly reduces the relative error of the PINN; in particular, it reduces the solution error of the (2+1)-dimensional Ginzburg–Landau equation from 4.55% to 1.98%. Through its combination of multi-modal distribution modeling and autonomous sampling control, RAMD provides a high-precision numerical simulation tool for the design of all-optical communication devices, the optimization of nonlinear laser devices, and related studies.
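The core idea of residual-driven sampling can be illustrated independently of any network: draw new collocation points with probability proportional to a power of the local residual, so that training concentrates where the physics is violated most. This sketch is a heavily simplified stand-in for RAMD (the multi-distribution modeling is omitted), and the residual function and all parameters below are invented for illustration.

```python
import math
import random

def residual_adaptive_sample(residual, candidates, n, power=2.0, rng=None):
    """Draw n collocation points from `candidates`, favouring high-residual
    regions: each candidate is picked with probability proportional to
    |residual(x)|**power."""
    rng = rng or random.Random(0)
    weights = [abs(residual(x)) ** power for x in candidates]
    if sum(weights) == 0.0:
        # Degenerate case: zero residual everywhere, fall back to uniform
        return [rng.choice(candidates) for _ in range(n)]
    return rng.choices(candidates, weights=weights, k=n)

# Toy residual sharply peaked near x = 0: sampling should concentrate there
residual = lambda x: math.exp(-50.0 * x * x)
candidates = [i / 100.0 for i in range(-100, 101)]
points = residual_adaptive_sample(residual, candidates, n=200)
near_peak = sum(1 for x in points if abs(x) < 0.2) / len(points)
```

In a PINN training loop, `residual` would be the PDE residual of the current network, re-evaluated each time the collocation set is refreshed.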
Abstract: Launched at 1:31 am GMT+8 on May 29 by a Long March-3B carrier rocket from the Xichang Satellite Launch Center in Sichuan Province, China, Tianwen-2, the second mission of China's Planetary Exploration Program, entered the transfer trajectory toward the asteroid 2016 HO3 after flying for 18 minutes. Its solar wings unfolded properly, signaling a successful start and priming the spacecraft for the next stage of its mission.
Funding: Supported by the National Defense Technology Foundation Program of China (No. JSJT2022209A001-3); the Sichuan Science and Technology Program (No. 2021JDRC0011); the Nuclear Energy Development Research Program of China (Research on High Energy X-ray Imaging of Nuclear Fuel); and the Scientific Research and Innovation Team Program of Sichuan University of Science and Engineering (No. SUSE652A001).
Abstract: Neutron time-of-flight (ToF) measurement obtains the kinetic energy of a neutron with high accuracy by measuring its velocity, but it requires precise acquisition of the neutron signal arrival time. The high hardware costs and data burden associated with acquiring neutron ToF signals pose significant challenges: higher sampling rates increase the data volume and the costs of data processing and storage hardware. Compressed sampling can address these challenges, but it faces issues of suboptimal sampling efficiency and reconstruction quality. This paper proposes a deep learning-based compressed sampling (DL-CS) algorithm for reconstructing neutron ToF signals that outperforms traditional compressed sampling methods. The approach comprises four modules: random projection, dimension raising, initial reconstruction, and final reconstruction. The technique first compresses neutron ToF signals adaptively using three convolutional layers, replacing the random measurement matrices of traditional compressed sampling theory. The signals are then reconstructed using a modified inception module, long short-term memory, and self-attention. Performance was quantified using the percentage root-mean-square difference, the correlation coefficient, and the reconstruction time. For sampling rates below 10% on neutron ToF signals generated by an electron-beam-driven photoneutron source, the proposed DL-CS approach achieved a percentage root-mean-square difference of 5%, a correlation coefficient of 0.9988, and a reconstruction time of 0.0108 s, significantly improving signal quality over other compressed sampling methods in both reconstruction accuracy and speed.
Funding: Supported by the Natural Science Basic Research Program of Shaanxi (Program No. 2024JC-YBMS-026).
Abstract: When dealing with imbalanced datasets, the traditional support vector machine (SVM) tends to produce a classification hyperplane biased towards the majority class and exhibits poor robustness. This paper proposes a high-performance classification algorithm designed specifically for imbalanced datasets. The proposed method first uses a biased second-order cone programming support vector machine (B-SOCP-SVM) to identify the support vectors (SVs) and non-support vectors (NSVs) in the imbalanced data. It then applies the synthetic minority over-sampling technique to the support vectors of the minority class (SV-SMOTE) and applies random under-sampling to the non-support vectors of the majority class (NSV-RUS) multiple times. Combining the oversampled minority class with each of the undersampled majority class sets yields multiple new balanced datasets. Finally, an SOCP-SVM classifies each dataset, and the final result is obtained through an ensemble algorithm. Experimental results demonstrate that the proposed method performs excellently on imbalanced datasets.
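The resampling pipeline can be sketched as follows. For brevity, this toy version applies SMOTE to the whole minority class and random under-sampling to the whole majority class, rather than to the support / non-support vectors that B-SOCP-SVM would identify, and the data are invented; the SOCP-SVM classifiers and the ensemble vote are omitted.

```python
import random

def smote_oversample(minority, n_new, k=3, rng=None):
    """Synthesise minority samples by interpolating each seed point towards
    one of its k nearest neighbours (the classic SMOTE step; the paper
    applies it only to the minority-class support vectors)."""
    rng = rng or random.Random(42)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        neighbours = sorted((p for p in minority if p != a),
                            key=lambda p: sum((pi - ai) ** 2
                                              for pi, ai in zip(p, a)))[:k]
        b = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(ai + t * (bi - ai) for ai, bi in zip(a, b)))
    return synthetic

def random_undersample(majority, size, rng=None):
    """Draw a random subset of the majority class (the paper restricts this
    step to its non-support vectors)."""
    rng = rng or random.Random(42)
    return rng.sample(majority, size)

# Toy imbalanced 2-D data: 4 minority vs 20 majority points
minority = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (0.15, 0.15)]
majority = [(1.0 + 0.05 * i, 1.0 - 0.05 * i) for i in range(20)]

# Build several balanced datasets by repeating the under-sampling step
balanced_sets = []
for seed in range(3):
    rng = random.Random(seed)
    aug_minority = minority + smote_oversample(minority, n_new=8, rng=rng)
    sub_majority = random_undersample(majority, len(aug_minority), rng=rng)
    balanced_sets.append((aug_minority, sub_majority))
```

Each balanced set would then be classified separately, with the ensemble combining the individual decisions.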
Funding: Supported by the National Natural Science Foundation of China (No. 62361036) and the Natural Science Foundation of Gansu Province (No. 22JR5RA279).
Abstract: To realize dynamic statistical publishing and protection of location-based data privacy, this paper proposes a differential privacy publishing algorithm based on adaptive sampling and grid clustering and adjustment. A PID control strategy is combined with the difference in data variation to adjust the data publishing intervals dynamically. The spatial-temporal correlations of adjacent snapshots are exploited to design the grid clustering and adjustment algorithm, which saves execution time in the publishing process. The budget distribution and budget absorption strategies are improved to form a sliding-window-based differential privacy statistical publishing algorithm, which realizes continuous statistical publishing with privacy protection and improves the accuracy of the published data. Experiments and analysis on large datasets of actual locations show that the proposed privacy protection algorithm is superior to existing algorithms in the accuracy of adaptive sampling times, the availability of published data, and the execution efficiency of the publishing method.
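A minimal sketch of the sliding-window budgeting idea under w-event differential privacy: each release spends at most half of the budget still unused inside the current window, and a release is skipped (the previous value is republished) when the data have changed little, a crude stand-in for the paper's PID-driven adaptive sampling. All constants and the threshold rule are illustrative assumptions, not the published algorithm.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def sliding_window_publish(counts, eps, w, threshold, sensitivity=1.0, seed=0):
    """Publish a count stream under w-event differential privacy.

    At each step, half of the budget still unused inside the current window
    of w steps is allocated; if the change since the last release is below
    `threshold`, the previous release is republished and the budget is saved.
    """
    rng = random.Random(seed)
    spent, releases = [], []     # per-step budget consumed; noisy outputs
    last = 0.0
    for t, c in enumerate(counts):
        window_spent = sum(spent[max(0, t - w + 1):t])
        eps_t = max(0.0, eps - window_spent) / 2.0
        if eps_t > 0 and abs(c - last) >= threshold:
            last = c + laplace_noise(sensitivity / eps_t, rng)
            spent.append(eps_t)
        else:
            spent.append(0.0)    # approximate: republish the last value
        releases.append(last)
    return releases, spent

counts = [10, 10, 11, 30, 30, 31, 60, 60]
releases, spent = sliding_window_publish(counts, eps=1.0, w=4, threshold=5.0)
```

Because each allocation is half of what remains in the window, the total budget spent inside any window of w consecutive steps never exceeds eps.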
Funding: Supported by the National Natural Science Foundation of China (No. 52372330).
Abstract: The small punch test technique facilitates convenient acquisition of the mechanical properties of in-service equipment materials and assessment of their remaining service life through sampling. However, the weldability of thin-walled components after small punch sampling, such as ethylene cracking furnace tubes, requires further investigation. Therefore, the weldability of in-service ethylene cracking furnace tubes following small punch sampling was investigated through nondestructive testing, microstructural characterization, and mechanical testing. Additionally, the impact of small punch sampling size and residual stress on the creep performance of the specimens was studied using an improved ductility exhaustion model. The results indicate that both the surface and interior of the weld repair areas on new and service-exposed furnace tubes after small punch sampling are defect-free, exhibiting good weld quality. The strength of the specimens after weld repair was higher than before sampling, whereas toughness decreased. Weld repair following small punch sampling of furnace tubes is therefore both feasible and necessary. Furthermore, a linear relationship was observed between specimen thickness, diameter, and creep fracture time, and the residual stress of welding affects the creep performance of the specimen under different stresses.
Abstract: Large dynamic range and ultra-wideband receiving abilities are important for many receivers: with them, a receiver can capture signals of different power levels across an ultra-wideband frequency space without information loss. However, a conventional receiving scheme can hardly achieve large dynamic range and ultra-wideband receiving simultaneously because of the dynamic range and sample rate limitations of the analog-to-digital converter (ADC). In this paper, based on modulated sampling and unlimited sampling, a novel receiving scheme is proposed to achieve both. Focusing on single-carrier signals, the proposed scheme uses only a single self-reset ADC (SR-ADC) with a low sample rate, yet achieves large dynamic range and ultra-wideband receiving simultaneously. Two receiving scenarios are considered: cooperative strong-signal receiving and non-cooperative strong/weak-signal receiving. In the cooperative scenario, an improved fast recovery method is proposed to obtain the modulated sampling output. In the non-cooperative scenario, strong and weak signals with different carrier frequencies are considered, and the signal processing method can recover and estimate each signal. Simulation results show that the proposed scheme realizes large dynamic range and ultra-wideband receiving simultaneously when the input signal-to-noise ratio (SNR) is high.
Funding: Supported in part by the National Natural Science Foundation of China (62373337, 62373333); the 111 Project (B17040); and the State Key Laboratory of Advanced Electromagnetic Technology (2024KF002).
Abstract: Dear Editor, this letter is concerned with stability analysis and stabilization design for sampled-data-based load frequency control (LFC) systems via a data-driven method. By describing the dynamic behavior of LFC systems through a data-based representation, a stability criterion is derived to obtain the admissible maximum sampling interval (MSI) for a given controller, and a design condition for the PI-type controller is further developed to meet the required MSI. Finally, the effectiveness of the proposed methods is verified by a case study.
Funding: This work was supported by the Qin Xin Talents Cultivation Program (No. QXTCP C202115), Beijing Information Science and Technology University; the Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing Fund (No. GJJ-23); and the National Social Science Foundation of China (No. 21BTQ079).
Abstract: With the widespread use of blockchain technology for smart contracts and decentralized applications on the Ethereum platform, the blockchain has become a cornerstone of trust in the modern financial system. However, its anonymity has provided new ways for Ponzi schemes to commit fraud, posing significant risks to investors. Current research still has limitations: Ponzi schemes are difficult to detect in the early stages of smart contract deployment, data imbalance is often not considered, and there is room for improving detection accuracy. To address these issues, this paper proposes LT-SPSD (LSTM-Transformer smart Ponzi scheme detection), a detection method that combines long short-term memory (LSTM) and Transformer networks to capture both the time-series transaction information of smart contracts and global information. Based on verified smart contract addresses, account features and code features are extracted to construct a feature dataset, and the SMOTE-Tomek algorithm is used to handle the imbalanced classification problem. Compared with four typical detection methods in the experiments, LT-SPSD shows significant performance improvements in precision, recall, and F1-score. The experimental results confirm the efficacy of the model, which has practical value for detecting Ponzi scheme smart contracts on Ethereum.
Funding: Supported by the Natural Science Research Project of Anhui Educational Committee (2023AH030041); the National Natural Science Foundation of China (42277136); and the Anhui Province Young and Middle-aged Teacher Training Action Project (DTR2023018).
Abstract: The selection of negative samples significantly influences landslide susceptibility assessment, especially when establishing the relationship between landslides and environmental factors in regions with complex geological conditions. Traditional sampling strategies commonly used in landslide susceptibility models can misrepresent the distribution of negative samples, causing a deviation from actual geological conditions that degrades the discriminative ability and generalization performance of the models. To address this issue, we propose a novel approach for selecting negative samples to enhance the quality of machine learning models. We chose the Liangshan Yi Autonomous Prefecture in southwestern Sichuan, China, as the case study: characterized by complex terrain, frequent tectonic activity, and steep slope erosion, the area experiences recurrent landslides, making it an ideal setting for validating the proposed method. We calculated the contribution values of environmental factors using the Relief algorithm to construct the feature space, applied the Target Space Exteriorization Sampling (TSES) method to select negative samples, calculated landslide probability values with Random Forest (RF) modeling, and then created regional landslide susceptibility maps. We evaluated the performance of the RF model optimized by the Environmental Factor Selection-based TSES (EFSTSES) method using standard performance metrics. The model achieved an accuracy (ACC) of 0.962, a precision (PRE) of 0.961, and an area under the curve (AUC) of 0.962. These findings demonstrate that the EFSTSES-based model effectively mitigates the negative sample imbalance issue, enhances the differentiation between landslide and non-landslide samples, and reduces misclassification, particularly in geologically complex areas. These improvements offer valuable insights for disaster prevention, land use planning, and risk mitigation strategies.
Funding: Supported by the National Key R&D Program of China (No. 2022YFD2401301) and the Special Financial Fund of Spawning Ground Survey in the Bohai Sea and the Yellow Sea from the Ministry of Agriculture and Rural Affairs, China (No. 125C0505).
Abstract: A comprehensive fishery-independent survey generally incorporates various specialized surveys and integrates different survey objectives to maximize benefits under cost limitations. It is important to evaluate the adaptability of such a comprehensive survey for different taxa to reach an optimal design. However, little is known about the validity and adaptability of the ichthyoplankton sampling incorporated in a comprehensive fishery-independent survey program for estimating the abundance of ichthyoplankton species. This study included ichthyoplankton sampling in an integrated survey and assessed the appropriateness of the survey design. Kriging interpolation based on Gaussian models was used to estimate values at unsurveyed locations from the original ichthyoplankton survey data in Haizhou Bay, and these interpolated values were treated as the "true" values. The performances of the ongoing stratified random sampling (StRS), simple random sampling (SRS), cluster sampling (CS), hexagonal systematic sampling (SYS-h), and regular systematic sampling (SYS-r) designs with different sample sizes in estimating ichthyoplankton abundance were compared by computer simulation in terms of relative estimation error (REE), relative bias (RB), and coefficient of variation (CV). The ongoing StRS performed better than CS and SRS but not as well as the two systematic sampling methods, and the current sample size in the StRS design was insufficient to estimate ichthyoplankton abundance. The average REE values were significantly smaller for the two systematic sampling designs than for the other three, and the systematic designs maintained good inter-annual stability of sampling performance. The results suggest that incorporating an ichthyoplankton survey directly into a stratified random fishery-independent survey cannot achieve the desired accuracy for the survey objectives, but the accuracy can be improved by adding stations. The assessment framework presented in this study serves as a reference for evaluating the adaptability of integrated surveys to different objectives in other waters.
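The simulation logic (repeatedly draw synthetic samples from the "true" field and score each design by its relative estimation error) can be sketched with a one-dimensional stand-in for the kriged surface. The gradient field, the sample sizes, and the two-stratum design below are invented for illustration, not the Haizhou Bay configuration.

```python
import random
import statistics

def simulate_ree(population, design, n_reps=500, seed=1):
    """Average relative estimation error (REE, %) of a sampling design:
    mean over replicates of |estimate - true mean| / true mean * 100."""
    rng = random.Random(seed)
    true_mean = statistics.fmean(v for _, v in population)
    errors = []
    for _ in range(n_reps):
        sample = design(population, rng)
        est = statistics.fmean(v for _, v in sample)
        errors.append(abs(est - true_mean) / true_mean * 100.0)
    return statistics.fmean(errors)

def srs(population, rng, n=20):
    """Simple random sampling of n stations."""
    return rng.sample(population, n)

def stratified(population, rng, n_per_stratum=10):
    """Stratified random sampling with two strata split by position."""
    low = [p for p in population if p[0] < 0.5]
    high = [p for p in population if p[0] >= 0.5]
    return rng.sample(low, n_per_stratum) + rng.sample(high, n_per_stratum)

# Synthetic 'abundance' surface with a strong spatial gradient plus noise
rng0 = random.Random(0)
population = [(x / 200.0, 5.0 + 95.0 * (x / 200.0) + rng0.gauss(0, 5))
              for x in range(200)]

ree_srs = simulate_ree(population, srs)
ree_str = simulate_ree(population, stratified)
```

With a strong gradient, most of the variance lies between strata, so the stratified design should yield a smaller average REE than SRS at the same total sample size, illustrating why design choice matters in such comparisons.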
Funding: Supported by the National Natural Science Foundation of China (No. 42207175).
Abstract: The joint roughness coefficient (JRC) is one of the key parameters for evaluating the shear strength of rock joints. Because of the scale effect in the JRC, reliable JRC values are of great importance for most rock engineering projects. During the collection of JRC samples, redundancy or insufficiency of representative rock joint surface topography (RJST) information in serial-length JRC samples is the essential factor affecting the reliability of scale-effect results. Therefore, this paper proposes an adaptive sampling method in which the entropy consistency measure Q(a) is used to evaluate the consistency of the joint morphology information contained in adjacent JRC samples. The sampling interval is then adjusted automatically according to a threshold Q(a_t) of the entropy consistency measure, ensuring that the degree of change in RJST information between JRC samples is uniform and ultimately making the representative RJST information in the collected samples more balanced. Application to actual cases shows that the proposed method can characterize the scale effect in the JRC efficiently and reliably.
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (Grant Number IMSIU-DDRSP2501).
Abstract: This study introduces the type-I heavy-tailed Burr XII (TIHTBXII) distribution, a highly flexible and robust statistical model designed to address the limitations of conventional distributions in analyzing data characterized by skewness, heavy tails, and diverse hazard behaviors. We develop the TIHTBXII distribution's mathematical foundations, including its probability density function (PDF), cumulative distribution function (CDF), and essential statistical properties, which are crucial for theoretical understanding and practical application. A comprehensive Monte Carlo simulation evaluates four parameter estimation methods: maximum likelihood (MLE), maximum product spacing (MPS), least squares (LS), and weighted least squares (WLS). The simulation results consistently show that as sample sizes increase, the bias and RMSE of all estimators decrease, with WLS and LS often demonstrating superior and more stable performance. Beyond the theoretical development, we present a practical application of the TIHTBXII distribution in constructing a group acceptance sampling plan (GASP) for truncated life tests, highlighting how the model can optimize quality control decisions by minimizing the average sample number (ASN) while managing consumer and producer risks. Empirical validation on real-world datasets, including "Active Repair Duration", "Groundwater Contaminant Measurements", and "Dominica COVID-19 Mortality", further demonstrates the TIHTBXII distribution's superior fit compared with existing models. Our findings confirm the TIHTBXII distribution as a powerful and reliable alternative for accurately modeling complex data in fields such as reliability engineering and quality assessment, supporting more informed and robust decision-making.
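The Monte Carlo evaluation loop behind such bias/RMSE tables is simple to sketch. Here the TIHTBXII distribution is replaced by the exponential (whose MLE of the scale is just the sample mean) so the sketch stays self-contained; the qualitative pattern, bias and RMSE shrinking as the sample size grows, is the same one the study reports.

```python
import math
import random
import statistics

def mc_bias_rmse(estimator, true_theta, n, n_reps=2000, seed=7):
    """Monte Carlo bias and RMSE of an estimator of the exponential scale
    theta (a simplified stand-in for the paper's simulation design)."""
    rng = random.Random(seed)
    ests = [estimator([rng.expovariate(1.0 / true_theta) for _ in range(n)])
            for _ in range(n_reps)]
    bias = statistics.fmean(ests) - true_theta
    rmse = math.sqrt(statistics.fmean((e - true_theta) ** 2 for e in ests))
    return bias, rmse

theta = 2.0
mle = statistics.fmean   # for the exponential, the MLE of theta is the mean

bias_small, rmse_small = mc_bias_rmse(mle, theta, n=20)
bias_large, rmse_large = mc_bias_rmse(mle, theta, n=200)
```

Swapping in MPS, LS, or WLS estimators of a richer distribution only changes the `estimator` argument; the evaluation harness stays the same.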
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2023YFC3305003) and the National Natural Science Foundation of China (Grant No. 62376076).
Abstract: Bilingual lexicon induction focuses on learning word translation pairs, also known as bitexts, from monolingual corpora by establishing a mapping between the source and target embedding spaces. Despite recent advancements, bilingual lexicon induction is limited to inducing bitexts consisting of individual words and cannot handle semantics-rich phrases. To bridge this gap and support downstream cross-lingual tasks, it is practical to develop a method for bilingual phrase induction that extracts bilingual phrase pairs from monolingual corpora without relying on cross-lingual knowledge. In this paper, the authors propose a novel phrase embedding training method based on the skip-gram structure. Specifically, a local hard negative sampling strategy that uses negative samples of central tokens in sliding windows to enhance phrase embedding learning is introduced. The proposed method achieves competitive or superior performance compared with baseline approaches, with exceptional results recorded for distant languages. Additionally, we develop a phrase representation learning method that leverages multilingual pre-trained language models (mPLMs). These mPLM-based representations can be combined with the above-mentioned static phrase embeddings to further improve the accuracy of the bilingual phrase induction task. We manually construct a dataset of bilingual phrase pairs and integrate it with MUSE to facilitate the bilingual phrase induction task.