There may be several internal defects in railway track work that have different shapes and distribution rules, and these defects affect the safety of high-speed trains. Establishing reliable detection models and methods for these internal defects remains a challenging task. To address this challenge, in this study, an intelligent detection method based on a generalization feature cluster is proposed for internal defects of railway tracks. First, the defects are classified and counted according to their shape and location features. Then, generalized features of the internal defects are extracted and formulated based on the maximum difference between different types of defects and the maximum tolerance among defects of the same type. Finally, the extracted generalized features are expressed by function constraints and formulated as generalization feature clusters to classify and identify internal defects in the railway track. Furthermore, to improve the detection reliability and speed, a reduced-dimension method for the generalization feature clusters is presented. Based on the reduced-dimension features and the strongly constrained generalized features, a K-means clustering algorithm is developed for defect clustering, and good clustering results are achieved. For defects in the rail head region, the clustering accuracy is over 95% and the Davies-Bouldin index (DBI) is negligible, which validates the proposed generalization features with strong constraints. Experimental results show that the accuracy of the proposed method based on generalization feature clusters is up to 97.55% and the average detection time is 0.12 s/frame, indicating good adaptability, high accuracy, and detection speed under complex working environments. The proposed algorithm can effectively detect internal defects in railway tracks using an established generalization feature cluster model.
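The defect-clustering step described above relies on K-means over the extracted feature vectors. As a hedged illustration only, a minimal pure-Python K-means might look like the sketch below; the 2-D "defect features", the deterministic initial centroids, and the toy data are invented for the example and are not taken from the paper:

```python
import math

def kmeans(points, centroids, iters=20):
    """Plain K-means: alternate nearest-centroid assignment and centroid update.

    points/centroids are lists of equal-length feature tuples; initial
    centroids are supplied explicitly so the run is deterministic.
    """
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            best = min(range(len(centroids)),
                       key=lambda i: math.dist(p, centroids[i]))
            clusters[best].append(p)
        centroids = [
            tuple(sum(c[d] for c in cl) / len(cl) for d in range(len(points[0])))
            if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Toy 2-D "defect features": two well-separated groups.
pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10), (10, 11), (11, 10), (11, 11)]
cents, groups = kmeans(pts, [(0, 0), (11, 11)])
print(cents)  # centroids near (0.5, 0.5) and (10.5, 10.5)
```

In the paper's setting the feature vectors would come from the dimension-reduced generalization features rather than raw coordinates, but the assignment/update loop is the same.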
Face liveness detection is essential for securing biometric authentication systems against spoofing attacks, including printed photos, replay videos, and 3D masks. This study systematically evaluates pre-trained CNN models—DenseNet201, VGG16, InceptionV3, ResNet50, VGG19, MobileNetV2, Xception, and InceptionResNetV2—leveraging transfer learning and fine-tuning to enhance liveness detection performance. The models were trained and tested on the NUAA and Replay-Attack datasets, with cross-dataset generalization validated on SiW-MV2 to assess real-world adaptability. Performance was evaluated using accuracy, precision, recall, FAR, FRR, HTER, and specialized spoof detection metrics (APCER, NPCER, ACER). Fine-tuning significantly improved detection accuracy, with DenseNet201 achieving the highest performance (98.5% on NUAA, 97.71% on Replay-Attack), while MobileNetV2 proved the most efficient model for real-time applications (latency: 15 ms, memory usage: 45 MB, energy consumption: 30 mJ). A statistical significance analysis (paired t-tests, confidence intervals) validated these improvements. Cross-dataset experiments identified DenseNet201 and MobileNetV2 as the most generalizable architectures, with DenseNet201 achieving 86.4% accuracy on Replay-Attack when trained on NUAA, demonstrating robust feature extraction and adaptability. In contrast, ResNet50 showed lower generalization capability, struggling with dataset variability and complex spoofing attacks. These findings suggest that MobileNetV2 is well suited for low-power applications, while DenseNet201 is ideal for high-security environments requiring superior accuracy. This research provides a framework for improving real-time face liveness detection, enhancing biometric security, and guiding future advancements in AI-driven anti-spoofing techniques.
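The spoof-detection metrics named above have standard definitions: APCER is the fraction of attack presentations wrongly accepted as bona fide, NPCER (elsewhere called BPCER) is the fraction of bona fide presentations wrongly rejected, and ACER is their average, analogous to HTER being the mean of FAR and FRR. A small sketch with toy counts (not the paper's data):

```python
def spoof_metrics(attack_total, attack_accepted, bona_total, bona_rejected):
    """Standard presentation-attack metrics from raw error counts."""
    apcer = attack_accepted / attack_total   # attacks wrongly accepted as live
    npcer = bona_rejected / bona_total       # genuine faces wrongly rejected
    acer = (apcer + npcer) / 2               # average classification error rate
    return apcer, npcer, acer

# Toy example: 10 attacks with 2 accepted, 20 genuine faces with 1 rejected.
apcer, npcer, acer = spoof_metrics(10, 2, 20, 1)
print(apcer, npcer, acer)  # 0.2 0.05 0.125
```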
The least squares method is one of the most fundamental methods in Statistics to estimate correlations among various data. On the other hand, Deep Learning is the heart of Artificial Intelligence and it is a learning method based on the least squares method, in which a parameter called the learning rate plays an important role. It is in general very hard to determine its value. In this paper we generalize the preceding paper [K. Fujii: Least squares method from the view point of Deep Learning. Advances in Pure Mathematics, 8, 485-493, 2018] and give an admissible value of the learning rate, which is easily obtained.
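For the simplest scalar least squares problem, the role of the learning rate is easy to see: gradient descent on E(w) = Σᵢ(w·xᵢ − yᵢ)² converges whenever the step size η satisfies η < 1/Σᵢxᵢ², since the update is then a contraction toward the least squares solution. The sketch below illustrates this textbook contraction argument with invented data; it is not the admissible value derived in the paper:

```python
# Scalar least squares fit y ≈ w*x by gradient descent.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]           # exact solution w* = 2

S = sum(x * x for x in xs)      # curvature of the loss; here S = 14
eta = 0.4 / S                   # any eta < 1/S makes the update a contraction

w = 0.0
for _ in range(60):
    grad = 2 * sum(x * (w * x - y) for x, y in zip(xs, ys))
    w -= eta * grad
print(w)  # converges to the least squares solution w* = 2
```

Here each step multiplies the error by (1 − 2ηS) = 0.2, so sixty iterations reduce it to machine precision; taking η ≥ 1/S instead would make the iteration oscillate or diverge.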
Stair matrices and their generalizations are introduced. The definitions and some properties of these matrices were first given by Lu Hao. This class of matrices provides bases of matrix splittings for iterative methods. The remarkable feature of iterative methods based on the new class of matrices is that they are easily implemented for parallel computation. In particular, a generalization of the accelerated overrelaxation method (GAOR) is introduced. Some theories of the AOR method are extended to the generalized method to include a wide class of matrices. The convergence of the new method is derived for Hermitian positive definite matrices. Finally, some examples are given in order to show the superiority of the new method.
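For reference, the classical AOR iteration that GAOR generalizes uses the splitting A = D − L − U (diagonal, strictly lower, and strictly upper parts) and updates x by solving (D − rL)x⁽ᵏ⁺¹⁾ = [(1−ω)D + (ω−r)L + ωU]x⁽ᵏ⁾ + ωb. Below is a minimal sketch on an invented 3×3 diagonally dominant system; this is plain AOR, not the stair-matrix-based GAOR of the paper:

```python
def aor_solve(A, b, omega, r, iters=100):
    """Accelerated overrelaxation (AOR) for A x = b.

    Splitting A = D - L - U; each sweep solves the lower-triangular system
    (D - r L) x_new = ((1-omega) D + (omega-r) L + omega U) x + omega b
    by forward substitution.
    """
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        x_new = [0.0] * n
        for i in range(n):
            # (1-omega)*D and omega*U act on old x; (omega-r)*L on old x,
            # r*L on the freshly computed entries (forward substitution).
            rhs = (1 - omega) * A[i][i] * x[i] + omega * b[i]
            for j in range(i):            # L part: L[i][j] = -A[i][j], j < i
                rhs += (omega - r) * (-A[i][j]) * x[j]
                rhs += r * (-A[i][j]) * x_new[j]
            for j in range(i + 1, n):     # U part: U[i][j] = -A[i][j], j > i
                rhs += omega * (-A[i][j]) * x[j]
            x_new[i] = rhs / A[i][i]
        x = x_new
    return x

A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
sol = aor_solve(A, b, omega=1.0, r=0.5)
print(sol)  # approaches the exact solution [1, 2, 3]
```

Choosing r = ω recovers SOR, and r = 0 with ω = 1 recovers the Jacobi method; the parallel-friendly variants in the paper replace the triangular factor with a stair matrix.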
A new generalized F-expansion method is introduced and applied to the study of the (2+1)-dimensional Boussinesq equation. The further extension of the method is discussed at the end of this paper.
The phenomenon of fear memory generalization can be defined as the expansion of an individual's originally specific fear responses to a similar yet genuinely harmless stimulus or situation subsequent to the occurrence of a traumatic event [1]. Fear generalization within the normal range represents an adaptive evolutionary mechanism to facilitate prompt reactions to potential threats and to enhance the likelihood of survival.
The challenge of enhancing the generalization capacity of reinforcement learning (RL) agents remains a formidable obstacle. Existing RL methods, despite achieving superhuman performance on certain benchmarks, often struggle with this aspect. A potential reason is that the benchmarks used for training and evaluation may not adequately offer a diverse set of transferable tasks. Although recent studies have developed benchmarking environments to address this shortcoming, they typically fall short in providing tasks that both ensure a solid foundation for generalization and exhibit significant variability. To overcome these limitations, this work introduces the concept that 'objects are composed of more fundamental components' in environment design, as implemented in the proposed environment called summon the magic (StM). This environment generates tasks where objects are derived from extensible and shareable basic components, facilitating strategy reuse and enhancing generalization. Furthermore, two new metrics, adaptation sensitivity range (ASR) and parameter correlation coefficient (PCC), are proposed to better capture and evaluate the generalization process of RL agents. Experimental results show that increasing the number of basic components of the object reduces the proximal policy optimization (PPO) agent's training-testing gap by 60.9% (in episode reward), significantly alleviating overfitting. Additionally, linear variations in other environmental factors, such as the training monster set proportion and the total number of basic components, uniformly decrease the gap by at least 32.1%. These results highlight StM's effectiveness in benchmarking and probing the generalization capabilities of RL algorithms.
On the basis of assuming that the narrow state X(3872) is a molecule state consisting of D0 and D*0, we apply the Mandelstam generalization of the Gell-Mann-Low method to calculate the matrix element of the quark current between the heavy meson states described by the Bethe-Salpeter wave function. In calculating the matrix element of the quark current, the operator product expansion is used in order to include the nonperturbative contribution of the vacuum condensates. In this scheme we calculate the mass of X(3872). We believe that this scheme is closer to QCD than the previous work.
Automatically recognizing radar emitters from complex electromagnetic environments is important but non-trivial. Moreover, the changing electromagnetic environment results in inconsistent signal distribution in the real world, which makes existing approaches perform poorly on recognition tasks in different scenes. In this paper, a domain generalization framework is proposed to improve the adaptability of radar emitter signal recognition in changing environments. Specifically, we propose an end-to-end denoising-based domain-invariant radar emitter recognition network (DDIRNet) consisting of a denoising model and a domain-invariant representation learning model (IRLM), which mutually benefit from each other. For the signal denoising model, a loss function is proposed to match the features of the radar signals and guarantee the effectiveness of the model. For the domain-invariant representation learning model, contrastive learning is introduced to learn cross-domain features by aligning the source and unseen domain distributions. Moreover, we design a data augmentation method that improves the diversity of signal data for training. Extensive experiments on classification have shown that DDIRNet achieves up to 6.4% improvement compared with state-of-the-art radar emitter recognition methods. The proposed method provides a promising direction for solving the radar emitter signal recognition problem.
In actual industrial scenarios, the variation of operating conditions, the existence of data noise, and failure of measurement equipment will inevitably affect the distribution of perceptive data. Deep learning-based fault diagnosis algorithms strongly rely on the assumption that source and target data are independent and identically distributed, and the learned diagnosis knowledge is difficult to generalize to out-of-distribution data. Domain generalization (DG) aims to achieve generalization to arbitrary target domain data by using only limited source domain data for diagnosis model training. Research on DG for fault diagnosis has made remarkable progress in recent years, and many achievements have been obtained. In this article, a comprehensive literature review on DG for fault diagnosis from a learning mechanism-oriented perspective is provided for the first time to summarize the development in recent years. Specifically, we first conduct a comprehensive review of existing methods based on the similarity of their basic principles and design motivations. Then, the recent trend of DG for fault diagnosis is analyzed. Finally, the existing problems and future prospects are discussed.
In the realm of medical image segmentation, particularly in cardiac magnetic resonance imaging (MRI), achieving robust performance with limited annotated data is a significant challenge. Performance often degrades when faced with testing scenarios from unknown domains. To address this problem, this paper proposes a novel semi-supervised approach for cardiac magnetic resonance image segmentation, aiming to enhance predictive capability and domain generalization (DG). This paper establishes an MT-like model utilizing pseudo-labeling and consistency regularization from semi-supervised learning, and integrates uncertainty estimation to improve the accuracy of pseudo-labels. Additionally, to tackle the challenge of domain generalization, a data manipulation strategy is introduced, extracting spatial and content-related information from images across different domains and enriching the dataset with a multi-domain perspective. This paper's method is meticulously evaluated on the publicly available cardiac magnetic resonance imaging dataset M&Ms, validating its effectiveness. Comparative analyses against various methods highlight the outstanding performance of this paper's approach, demonstrating its capability to segment cardiac magnetic resonance images in previously unseen domains even with limited annotated data.
This paper analyzes the generalization of minimax regret optimization (MRO) under distribution shift. A new learning framework is proposed by injecting the measure of conditional value at risk (CVaR) into MRO, and its generalization error bound is established through the lens of uniform convergence analysis. The CVaR-based MRO can achieve a polynomial decay rate on the excess risk, which extends the generalization analysis associated with the expected risk to the risk-averse case.
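Empirically, CVaR at level α is the mean of the worst (1 − α) fraction of losses, i.e. the expected loss beyond the value-at-risk. A small illustration with toy losses (not from the paper):

```python
import math

def cvar(losses, alpha):
    """Empirical conditional value at risk: average of the worst
    ceil((1 - alpha) * n) losses."""
    k = max(1, math.ceil((1 - alpha) * len(losses)))
    worst = sorted(losses, reverse=True)[:k]
    return sum(worst) / k

losses = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(cvar(losses, alpha=0.8))  # mean of the worst 20% = mean of [10, 9] = 9.5
```

As α → 1, CVaR focuses on ever rarer worst cases, which is what makes the CVaR-injected objective risk-averse compared with the plain expected risk.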
In order to simplify three-dimensional building group models, this paper proposes a clustering generalization method based on visual cognitive theory. The method uses road elements to roughly divide scenes, and then uses spatial cognitive elements such as direction, area, height and their topological constraints to classify them precisely, so as to make them conform to urban morphological characteristics. A Delaunay triangulation network and a boundary tracking synthesis algorithm are used to merge and summarize the models, and the models are stored hierarchically. The proposed algorithm is verified experimentally with a typical urban complex model. The experimental results show that the efficiency of the method used in this paper is at least 20% higher than that of the previous one, and the efficiency advantage grows with the size of the test data. The classification results conform to human cognitive habits, and the generalization levels of different models can be relatively unified by adaptive control of each threshold in the clustering generalization process.
Data hiding methods involve embedding secret messages into cover objects to enable covert communication in a way that is difficult to detect. In data hiding methods based on image interpolation, the image size is reduced and then enlarged through interpolation, followed by the embedding of secret data into the newly generated pixels. A general improving approach for embedding secret messages is proposed. The approach may be regarded as a general model for enhancing the data embedding capacity of various existing image interpolation-based data hiding methods. This enhancement is achieved by expanding the range of pixel values available for embedding secret messages, removing the limitation of many existing methods, where the range is restricted to powers of two to facilitate the direct embedding of bit-based messages. This improvement is accomplished through the application of multiple-based number conversion to the secret message data. The method converts the message bits into a multiple-based number and uses an algorithm to embed each digit of this number into an individual pixel, thereby enhancing the message embedding efficiency, as proved by a theorem derived in this study. The proposed improvement method has been tested through experiments on three well-known image interpolation-based data hiding methods. The results show that the proposed method can enhance the three data embedding rates by approximately 14%, 13%, and 10%, respectively, create stego-images with good quality, and resist RS steganalysis attacks. These experimental results indicate that the use of the multiple-based number conversion technique to improve the three interpolation-based methods for embedding secret messages increases the number of message bits embedded in the images. For many image interpolation-based data hiding methods that use power-of-two pixel-value ranges for message embedding, other than the three tested ones, the proposed improvement method is also expected to be effective in enhancing their data embedding capabilities.
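The core of a multiple-based (mixed-radix) conversion is to reinterpret the message bits as one integer and peel off one digit per pixel, where pixel i can hold a digit in base bᵢ determined by its embeddable value range; decoding reverses the process. A hedged sketch follows; the base assignment and message are invented for illustration and this is not the paper's exact embedding algorithm:

```python
def bits_to_digits(bits, bases):
    """Convert a bit string to one mixed-radix digit per pixel,
    least significant digit first; digit i lies in [0, bases[i])."""
    value = int(bits, 2)
    digits = []
    for b in bases:
        digits.append(value % b)
        value //= b
    if value:
        raise ValueError("message too long for the given pixel capacities")
    return digits

def digits_to_bits(digits, bases, nbits):
    """Inverse conversion: rebuild the integer and re-emit the bit string."""
    value = 0
    for d, b in zip(reversed(digits), reversed(bases)):
        value = value * b + d
    return format(value, f"0{nbits}b")

bases = [5, 7, 6, 9, 4]   # per-pixel digit capacities (illustrative, not powers of two)
msg = "10110101110"        # 11 message bits; capacity 5*7*6*9*4 = 7560 > 2**11
digits = bits_to_digits(msg, bases)
print(digits, digits_to_bits(digits, bases, len(msg)) == msg)
```

Because each base bᵢ can be any integer ≥ 2, the carrier is no longer restricted to power-of-two ranges, which is exactly where the extra capacity comes from.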
The traditional detailed model of the dual active bridge (DAB) power electronic transformer is characterized by the high dimensionality of its nodal admittance matrix and the need for a small simulation step size, which limits the speed of electromagnetic transient (EMT) simulations. To overcome these limitations, a novel EMT equivalent model based on a generalized branch-cutting method is proposed to improve the simulation efficiency of the DAB model. The DAB topology is first decomposed into two subnetworks through branch-cutting and node-tearing methods without introducing a one-time-step delay. Subsequently, the internal nodes of each subnetwork are eliminated through network simplification, and the equivalent circuit for the port cascade module is derived. The model is then validated through simulations across various operating conditions. The results demonstrate that the model avoids the loss of accuracy associated with a one-time-step delay, the relative error across different conditions remains below 1%, and the simulation acceleration ratio improves as the number of modules increases.
Ancient stellar observations are a valuable cultural heritage, profoundly influencing both cultural domains and modern astronomical research. Shi's Star Catalog (石氏星经), the oldest extant star catalog in China, faces controversy regarding its observational epoch. Determining this epoch via precession assumes accurate ancient coordinates and correspondence with contemporary stars, posing significant challenges. This study introduces a novel method using the Generalized Hough Transform to ascertain the catalog's observational epoch. This approach statistically accommodates errors in ancient coordinates and discrepancies between ancient and modern stars, addressing limitations of prior methods. Our findings date Shi's Star Catalog to the 4th century BCE, with 2nd-century CE adjustments. In comparison, the Western tradition's oldest known catalog, the Ptolemaic Star Catalog (2nd century CE), likely derives from the Hipparchus Star Catalog (2nd century BCE). Thus, Shi's Star Catalog is identified as the world's oldest known star catalog. Beyond establishing its observation period, this study aims to consolidate and digitize these cultural artifacts.
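The idea of epoch determination by voting can be sketched in one dimension: precession shifts ecliptic longitudes by roughly 50.3 arcseconds per year, so each candidate epoch predicts where today's stars would have stood, and the epoch whose predictions best match the ancient longitudes collects the most votes. The toy below uses synthetic data with a known epoch of 300 BCE; it is our invention and a drastically simplified stand-in for the paper's Generalized Hough Transform:

```python
import random

ARCSEC_PER_YEAR = 50.29  # approximate general precession in ecliptic longitude

def circ_diff(a, b):
    """Smallest angular separation in degrees on a 360-degree circle."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def vote(ancient, modern, epoch, tol=0.5):
    """Count ancient stars lying within tol degrees of some precessed modern star."""
    shift = ARCSEC_PER_YEAR * (2000 - epoch) / 3600.0
    predicted = [(lon - shift) % 360.0 for lon in modern]
    return sum(1 for a in ancient if min(circ_diff(a, p) for p in predicted) <= tol)

rng = random.Random(7)
modern = [rng.uniform(0.0, 360.0) for _ in range(20)]        # J2000 longitudes
true_shift = ARCSEC_PER_YEAR * (2000 - (-300)) / 3600.0       # ~32.1 degrees
ancient = [(lon - true_shift + rng.gauss(0.0, 0.2)) % 360.0 for lon in modern]

candidates = range(-1000, 1001, 50)
best = max(candidates, key=lambda e: vote(ancient, modern, e))
print(best)  # the voting recovers an epoch near -300 (300 BCE)
```

Because each star votes independently, a few misidentified or badly recorded stars merely lose their votes instead of corrupting a least-squares fit, which is the robustness the abstract attributes to the statistical approach.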
In this paper, we consider the maximal positive definite solution of a nonlinear matrix equation. By using the idea of Algorithm 2.1 in ZHANG (2013), a new inversion-free method with a stepsize parameter is proposed to obtain the maximal positive definite solution of the nonlinear matrix equation X+A^(*)X^(-α)A=Q for the case 0<α≤1. Based on this method, a new iterative algorithm is developed, and its convergence proof is given. Finally, two numerical examples are provided to show the effectiveness of the proposed method.
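In the scalar case with α = 1, the equation reads x + a²/x = q, whose maximal root is x⁺ = (q + √(q² − 4a²))/2. An inversion-free scheme can track y ≈ x⁻¹ with a Newton-Schulz update instead of dividing at each step, as the sketch below shows; this is a generic inversion-free iteration for illustration, not the paper's algorithm with its stepsize parameter:

```python
# Solve x + a**2 / x = q while avoiding division by the iterate x:
# maintain y ≈ 1/x via the Newton-Schulz update y <- y * (2 - x * y).
# (Only the fixed starting value y0 = 1/q requires a division.)
a, q = 1.0, 3.0
x, y = q, 1.0 / q            # start from the upper bound x0 = q
for _ in range(200):
    x = q - a * a * y        # x_{k+1} = q - a^2 * y_k   (no inversion of x)
    y = y * (2.0 - x * y)    # refine the inverse estimate
print(x)  # converges to the maximal root (q + sqrt(q^2 - 4 a^2)) / 2 ≈ 2.618
```

In the matrix setting the same structure applies with X, Y and A* in place of x, y and a, and avoiding the explicit inverse X⁻¹ at every step is what makes such methods attractive.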
In this paper, a novel method for investigating the particle-crushing behavior of breeding particles in a fusion blanket is proposed. Fractal theory and the Weibull distribution are combined to establish a theoretical model, and its validity is verified using a simple impact test. A crushable discrete element method (DEM) framework is built based on the established theoretical model. A tensile strength, which considers fractal theory, the size effect, and Weibull variation, is assigned to each generated particle. The assigned strength is then used for crush detection by comparing it with the particle's maximum tensile stress. Mass conservation is ensured by inserting a series of sub-particles whose total mass equals the mass loss. Based on the crushable DEM framework, a numerical simulation of the crushing behavior of a pebble bed with hollow cylindrical geometry under a uniaxial compression test was performed. The results of this investigation showed that the particles withstand the external load by contact and sliding at the beginning of the compression process, and confirmed that crushing can be considered an important mechanism for resisting the increasing external load. A relatively regular particle arrangement aids in resisting the load and reduces the occurrence of particle crushing. However, there is a limit to this resistance. When the strain increases beyond this limit, the distribution of crushing positions tends to be isotropic over the entire pebble bed. The theoretical model and crushable DEM framework provide a new method for exploring the pebble bed in a fusion reactor while considering particle crushing.
Effective partitioning is crucial for enabling parallel restoration of power systems after blackouts. This paper proposes a novel partitioning method based on deep reinforcement learning. First, the partitioning decision process is formulated as a Markov decision process (MDP) model to maximize modularity, and the corresponding key partitioning constraints on parallel restoration are considered. Second, based on the partitioning objective and constraints, the reward function of the partitioning MDP model is set by adopting a relative deviation normalization scheme to reduce mutual interference between the reward and penalty terms in the reward function. A soft bonus scaling mechanism is introduced to mitigate overestimation caused by abrupt jumps in the reward. Then, the deep Q-network method is applied to solve the partitioning MDP model and generate partitioning schemes. Two experience replay buffers are employed to speed up the training process. Finally, case studies on the IEEE 39-bus test system demonstrate that the proposed method can generate a high-modularity partitioning result that meets all key partitioning constraints, thereby improving the parallelism and reliability of the restoration process. Moreover, simulation results demonstrate that an appropriate discount factor is crucial for ensuring both the convergence speed and the stability of the partitioning training.
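Modularity, the partitioning objective above, compares the fraction of intra-partition edges with the fraction expected under random wiring: Q = Σ_c (e_c/m − (d_c/2m)²), where e_c is the number of edges inside community c, d_c the total degree of c, and m the total edge count. A small sketch on an invented toy graph (not the IEEE 39-bus system):

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman modularity Q = sum_c (e_c / m - (d_c / (2 m))**2)."""
    m = len(edges)
    intra = defaultdict(int)   # e_c: edges with both endpoints in community c
    degree = defaultdict(int)  # d_c: total degree of community c
    for u, v in edges:
        degree[community[u]] += 1
        degree[community[v]] += 1
        if community[u] == community[v]:
            intra[community[u]] += 1
    return sum(intra[c] / m - (degree[c] / (2 * m)) ** 2 for c in degree)

# Two triangles joined by a single bridge edge, split into their natural halves.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
community = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(modularity(edges, community))  # 2 * (3/7 - (7/14)**2) = 5/14 ≈ 0.357
```

An RL partitioning agent can use the change in Q (plus constraint penalties) as its per-step reward while it assigns buses to restoration subsystems.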
The application of nitrogen fertilizers in agricultural fields can lead to the release of nitrogen-containing gases (NCGs), such as NO_(x), NH_(3) and N_(2)O, which can significantly impact the regional atmospheric environment and contribute to global climate change. However, there remain considerable research gaps in the accurate measurement of NCG emissions from agricultural fields, hindering the development of effective emission reduction strategies. We improved an open-top dynamic chambers (OTDCs) system and evaluated its performance by comparing the measured and given fluxes of the NCGs. The results showed that the measured fluxes of NO, N_(2)O and NH_(3) were 1%, 2% and 7% lower than the given fluxes, respectively. For the determination of NH_(3) concentration, we employed a stripping coil-ion chromatograph (SC-IC) analytical technique, which demonstrated an absorption efficiency for atmospheric NH_(3) exceeding 96.1% across sampling durations of 6 to 60 min. In the summer maize season, we utilized the OTDCs system to measure the exchange fluxes of NO, NH_(3), and N_(2)O from the soil in the North China Plain. Substantial emissions of NO, NH_(3) and N_(2)O were recorded following fertilization, with peaks of 107, 309, and 1239 ng N/(m^(2)·s), respectively. Notably, significant NCG emissions were observed following sustained heavy rainfall one month after fertilization, with the NH_(3) peak in particular being 4.5 times higher than that observed immediately after fertilization. Our results demonstrate that the OTDCs system accurately reflects the emission characteristics of soil NCGs and meets the requirements for long-term and continuous flux observation.
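In a dynamic (flow-through) chamber, the surface flux follows from a steady-state mass balance: F = q·(C_out − C_in)/A, with q the purge flow rate, C_in and C_out the inlet and outlet concentrations, and A the covered soil area. A sketch with invented numbers, since the real system's geometry and flow settings are not given in the abstract:

```python
def chamber_flux(q_m3_s, c_in, c_out, area_m2):
    """Steady-state dynamic-chamber flux: F = q * (C_out - C_in) / A.

    q in m^3/s, concentrations in ng N/m^3, area in m^2
    -> flux in ng N/(m^2 * s); positive means emission from the soil.
    """
    return q_m3_s * (c_out - c_in) / area_m2

# Invented example: 1 L/s purge flow, 10 ng N/m^3 enrichment, 0.2 m^2 footprint.
flux = chamber_flux(0.001, 5.0, 15.0, 0.2)
print(flux)  # 0.05 ng N/(m^2 s)
```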
Funding for the railway track internal defect detection study: National Natural Science Foundation of China (Grant No. 61573233); Guangdong Provincial Natural Science Foundation of China (Grant No. 2018A0303130188); Guangdong Provincial Science and Technology Special Funds Project of China (Grant No. 190805145540361); Special Projects in Key Fields of Colleges and Universities in Guangdong Province of China (Grant No. 2020ZDZX2005).
Funding for the face liveness detection study: Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and IT, University of Technology Sydney; Ongoing Research Funding Program (ORF-2025-14), King Saud University, Riyadh, Saudi Arabia, under Project ORF-2025-
Funding: Project supported by the Natural Science Foundation of Liaoning Province of China (No. 20022021).
Abstract: Stair matrices and their generalizations are introduced. The definitions and some properties of these matrices were first given by Lu Hao. This class of matrices provides bases of matrix splittings for iterative methods. The remarkable feature of iterative methods based on the new class of matrices is that they are easily implemented for parallel computation. In particular, a generalization of the accelerated overrelaxation method (GAOR) is introduced. Some theoretical results for the AOR method are extended to the generalized method to include a wider class of matrices. Convergence of the new method is established for Hermitian positive definite matrices. Finally, some examples are given to show the superiority of the new method.
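For reference, the classical AOR iteration that GAOR generalizes can be sketched as follows. This is the standard two-parameter splitting (Hadjidimos); the stair-matrix generalization from the abstract is not reproduced here.

```python
import numpy as np

# Classical AOR iteration for A x = b with the splitting A = D - L - U:
#   (D - r*L) x_{k+1} = [(1-w)*D + (w-r)*L + w*U] x_k + w*b
# r = w recovers SOR; r = w = 1 recovers Gauss-Seidel. GAOR replaces this
# splitting with one built from stair matrices (not shown here).
def aor(A, b, r=1.0, w=1.0, tol=1e-12, max_iter=1000):
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    M = D - r * L
    N = (1 - w) * D + (w - r) * L + w * U
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = np.linalg.solve(M, N @ x + w * b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, -1.0], [-1.0, 4.0]])  # Hermitian positive definite
b = np.array([3.0, 3.0])
print(aor(A, b))  # converges to the exact solution [1, 1]
```

Note that M - N = w*A, so any fixed point of the iteration solves the original system; the parameters r and w only shape the convergence behavior.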
Funding: The project supported by the Major Project of the National Natural Science Foundation of China under Grant No. 49894190 and the Knowledge Innovation Project of CAS under Grant No. KZCXl-sw-18.
Abstract: A new generalized F-expansion method is introduced and applied to the study of the (2+1)-dimensional Boussinesq equation. A further extension of the method is discussed at the end of this paper.
Funding: Supported by the Shandong Provincial Natural Science Foundation (ZR2022QH144).
Abstract: The phenomenon of fear memory generalization can be defined as the expansion of an individual's originally specific fear responses to a similar yet genuinely harmless stimulus or situation after a traumatic event [1]. Fear generalization within the normal range represents an adaptive evolutionary mechanism that facilitates prompt reactions to potential threats and enhances the likelihood of survival.
Funding: Supported by the National Key R&D Program of China (No. 2023YFB4502200), the National Natural Science Foundation of China (No. U22A2028, 61925208, 62222214, 62341411, 62102398, 62102399, U20A20227, 62302478, 62302482, 62302483, 62302480, 62302481), the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDB0660300, XDB0660301, XDB0660302), the Chinese Academy of Sciences Project for Young Scientists in Basic Research (No. YSBR-029), the Youth Innovation Promotion Association of the Chinese Academy of Sciences, and the Xplore Prize.
Abstract: Enhancing the generalization capacity of reinforcement learning (RL) agents remains a formidable challenge. Existing RL methods, despite achieving superhuman performance on certain benchmarks, often struggle in this respect. A potential reason is that the benchmarks used for training and evaluation may not offer a sufficiently diverse set of transferable tasks. Although recent studies have developed benchmarking environments to address this shortcoming, they typically fall short of providing tasks that both ensure a solid foundation for generalization and exhibit significant variability. To overcome these limitations, this work introduces the concept that "objects are composed of more fundamental components" into environment design, implemented in the proposed environment summon the magic (StM). This environment generates tasks whose objects are derived from extensible and shareable basic components, facilitating strategy reuse and enhancing generalization. Furthermore, two new metrics, the adaptation sensitivity range (ASR) and the parameter correlation coefficient (PCC), are proposed to better capture and evaluate the generalization process of RL agents. Experimental results show that increasing the number of basic components of an object reduces the proximal policy optimization (PPO) agent's training-testing gap by 60.9% (in episode reward), significantly alleviating overfitting. Additionally, linear variation of other environmental factors, such as the training monster set proportion and the total number of basic components, uniformly decreases the gap by at least 32.1%. These results highlight StM's effectiveness in benchmarking and probing the generalization capabilities of RL algorithms.
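The kind of analysis behind these numbers can be sketched generically: measure the training-testing gap while sweeping an environment parameter, then quantify the linear relationship with a correlation coefficient. The function names and data below are purely illustrative assumptions; the paper's exact ASR and PCC definitions are not reproduced.

```python
import numpy as np

# Hypothetical sketch: training-testing gap versus an environment parameter.
# Names and numbers are illustrative, not the paper's metric definitions.
def train_test_gap(train_rewards, test_rewards):
    return float(np.mean(train_rewards) - np.mean(test_rewards))

def pearson_corr(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

# e.g. the gap measured while sweeping the number of basic components
n_components = [2, 4, 6, 8, 10]
gaps = [50.0, 38.0, 29.0, 22.0, 18.0]  # made-up episode-reward gaps
r = pearson_corr(n_components, gaps)
# strongly negative correlation: more components, smaller generalization gap
```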
Funding: Supported in part by the National Natural Science Foundation of China under Grant No. 10335012 and the National Key Basic Research Program and Cross Science of China under Grant No. 90503011.
Abstract: On the basis of the assumption that the narrow state X(3872) is a molecular state consisting of D^0 and D^(*0), we apply the Mandelstam generalization of the Gell-Mann-Low method to calculate the matrix element of the quark current between heavy meson states described by Bethe-Salpeter wave functions. In the calculation of this matrix element, the operator product expansion is used in order to include the nonperturbative contribution of the vacuum condensates. In this scheme we calculate the mass of X(3872). We believe that this scheme is closer to QCD than previous work.
Funding: Supported by the National Natural Science Foundation of China (62101575), the Research Project of NUDT (ZK22-57), and the Self-directed Project of the State Key Laboratory of High Performance Computing (202101-16).
Abstract: Automatically recognizing radar emitters in complex electromagnetic environments is important but non-trivial. Moreover, the changing electromagnetic environment results in inconsistent signal distributions in the real world, which makes existing approaches perform poorly on recognition tasks in different scenes. In this paper, a domain generalization framework is proposed to improve the adaptability of radar emitter signal recognition in changing environments. Specifically, we propose an end-to-end denoising-based domain-invariant radar emitter recognition network (DDIRNet) consisting of a denoising model and a domain-invariant representation learning model (IRLM), which mutually benefit from each other. For the signal denoising model, a loss function is proposed to match the features of the radar signals and guarantee the effectiveness of the model. For the domain-invariant representation learning model, contrastive learning is introduced to learn cross-domain features by aligning the source and unseen domain distributions. Moreover, we design a data augmentation method that improves the diversity of signal data for training. Extensive classification experiments show that DDIRNet achieves up to 6.4% improvement over state-of-the-art radar emitter recognition methods. The proposed method provides a promising direction for solving the radar emitter signal recognition problem.
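Contrastive alignment of representations is typically built on an InfoNCE-style loss; a generic numpy sketch follows. This is the standard formulation for illustration only, not DDIRNet's exact loss.

```python
import numpy as np

# Generic InfoNCE-style contrastive loss over L2-normalized embeddings:
# each anchor i is pulled toward its positive view and pushed away from the
# other samples in the batch. Standard formulation, not DDIRNet's exact loss.
def info_nce(anchors, positives, temperature=0.1):
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))   # matched pairs on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
views = z + 0.01 * rng.normal(size=z.shape)        # near-identical positive views
aligned = info_nce(z, views)                       # matched pairs -> low loss
mismatched = info_nce(z, np.roll(z, 1, axis=0))    # wrong pairs -> high loss
# the aligned loss is much smaller than the mismatched loss
```

Minimizing such a loss across domains pulls same-emitter embeddings together regardless of the acquisition environment, which is the alignment effect the abstract describes.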
Funding: Supported by the National Natural Science Foundation of China (62322315, 61873237), the Zhejiang Provincial Natural Science Foundation of China (LR22F030003), the Research Grants Council of Hong Kong (11201023, 11202224), and the Hong Kong Innovation and Technology Commission (InnoHK Project CIMDA).
Abstract: In actual industrial scenarios, variations in operating conditions, data noise, and measurement equipment failures inevitably affect the distribution of perceptive data. Deep learning-based fault diagnosis algorithms rely strongly on the assumption that source and target data are independent and identically distributed, and the learned diagnostic knowledge is difficult to generalize to out-of-distribution data. Domain generalization (DG) aims to achieve generalization to arbitrary target-domain data by using only limited source-domain data for diagnosis model training. Research on DG for fault diagnosis has made remarkable progress in recent years, and many achievements have been obtained. In this article, a comprehensive literature review of DG for fault diagnosis is provided for the first time from a learning mechanism-oriented perspective, summarizing developments in recent years. Specifically, we first conduct a comprehensive review of existing methods based on the similarity of their basic principles and design motivations. Then, recent trends in DG for fault diagnosis are analyzed. Finally, existing problems and future prospects are discussed.
Funding: Supported by the National Natural Science Foundation of China (No. 62001313) and the Key Project of the Liaoning Provincial Department of Science and Technology (No. 2021JH2/10300134, 2022JH1/10500004).
Abstract: In the realm of medical image segmentation, particularly cardiac magnetic resonance imaging (MRI), achieving robust performance with limited annotated data is a significant challenge, and performance often degrades when faced with testing scenarios from unknown domains. To address this problem, this paper proposes a novel semi-supervised approach for cardiac magnetic resonance image segmentation, aiming to enhance predictive capability and domain generalization (DG). The paper establishes an MT-like model utilizing pseudo-labeling and consistency regularization from semi-supervised learning, and integrates uncertainty estimation to improve the accuracy of pseudo-labels. Additionally, to tackle the challenge of domain generalization, a data manipulation strategy is introduced that extracts spatial and content-related information from images across different domains, enriching the dataset with a multi-domain perspective. The method is meticulously evaluated on the publicly available cardiac MRI dataset M&Ms, validating its effectiveness. Comparative analyses against various methods highlight the outstanding performance of this paper's approach, demonstrating its capability to segment cardiac magnetic resonance images in previously unseen domains even with limited annotated data.
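A common way to combine pseudo-labeling with uncertainty estimation, as the abstract describes, is to keep a teacher model's pseudo-labels only where its predictive entropy is low. The sketch below is a generic illustration under that assumption, not the paper's exact scheme; the threshold value is hypothetical.

```python
import numpy as np

# Illustrative sketch (not the paper's exact scheme): keep pseudo-labels only
# where the teacher's predictive entropy is below a (hypothetical) threshold.
def filtered_pseudo_labels(probs, max_entropy=0.5):
    """probs: (n_pixels, n_classes) softmax outputs from a teacher model."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    labels = probs.argmax(axis=1)
    mask = entropy < max_entropy      # True where the teacher is confident
    return labels, mask

probs = np.array([
    [0.97, 0.02, 0.01],   # confident prediction -> kept
    [0.40, 0.35, 0.25],   # uncertain prediction -> masked out
    [0.05, 0.90, 0.05],   # confident prediction -> kept
])
labels, mask = filtered_pseudo_labels(probs)
# mask selects only the confident pixels: [True, False, True]
```

The student's consistency loss is then computed only on the masked-in pixels, so noisy pseudo-labels do not propagate.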
Funding: Supported by the Education Science Planning Project of Hubei Province (2020GB198) and the Natural Science Foundation of Hubei Province (2023AFB523).
Abstract: This paper analyzes the generalization of minimax regret optimization (MRO) under distribution shift. A new learning framework is proposed by injecting the measure of conditional value at risk (CVaR) into MRO, and its generalization error bound is established through the lens of uniform convergence analysis. The CVaR-based MRO achieves a polynomial decay rate on the excess risk, which extends the generalization analysis associated with the expected risk to the risk-averse case.
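For context, the risk measure being injected is usually written in the standard Rockafellar-Uryasev form; the paper's exact notation is not shown in the abstract, so the following is the common definition for a loss variable $L$ at confidence level $\alpha$:

```latex
% Standard Rockafellar--Uryasev formulation of CVaR (common definition,
% not necessarily the paper's exact notation):
\mathrm{CVaR}_{\alpha}(L)
  \;=\; \inf_{t \in \mathbb{R}}
  \left\{\, t + \frac{1}{1-\alpha}\,\mathbb{E}\big[(L - t)_{+}\big] \right\},
\qquad (x)_{+} := \max\{x, 0\}.
```

As $\alpha \to 0$ this recovers the expected loss, which is why a CVaR-based analysis can extend expected-risk generalization bounds to the risk-averse case.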
Abstract: To simplify three-dimensional building group models, this paper proposes a clustering generalization method based on visual cognitive theory. The method uses road elements to roughly divide scenes, and then uses spatial cognitive elements such as direction, area, and height, together with their topological constraints, to classify buildings precisely so that the result conforms to urban morphological characteristics. A Delaunay triangulation network and a boundary tracking synthesis algorithm are used to merge and summarize the models, which are stored hierarchically. The proposed algorithm was verified experimentally on a typical urban complex model. The experimental results show that the efficiency of the method is at least 20% higher than that of the previous one, and the improvement grows as the test data grow. The classification results conform to human cognitive habits, and the generalization levels of different models can be relatively unified by adaptive control of each threshold in the clustering generalization process.
Abstract: Data hiding methods embed secret messages into cover objects to enable covert communication that is difficult to detect. In data hiding methods based on image interpolation, the image is reduced in size and then enlarged through interpolation, after which secret data are embedded into the newly generated pixels. A general approach for improving message embedding is proposed. The approach may be regarded as a general model for enhancing the data embedding capacity of various existing image interpolation-based data hiding methods. The enhancement is achieved by expanding the range of pixel values available for embedding secret messages, removing the limitation of many existing methods in which the range is restricted to powers of two to facilitate the direct embedding of bit-based messages. The improvement is accomplished through the application of multiple-based number conversion to the secret message data: the method converts the message bits into a multiple-based number and uses an algorithm to embed each digit of this number into an individual pixel, thereby enhancing the message embedding efficiency, as proved by a theorem derived in this study. The proposed improvement has been tested through experiments on three well-known image interpolation-based data hiding methods. The results show that it can enhance the three data embedding rates by approximately 14%, 13%, and 10%, respectively, create stego-images of good quality, and resist RS steganalysis attacks. These experimental results indicate that using the multiple-based number conversion technique to improve the three interpolation-based methods increases the number of message bits embedded in the images. For many other image interpolation-based data hiding methods that use power-of-two pixel-value ranges for message embedding, the proposed improvement is also expected to be effective in enhancing their data embedding capabilities.
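The core idea of multiple-based (mixed-radix) number conversion can be sketched directly: a message integer is decomposed into digits under per-pixel bases, so each pixel can carry a digit in an arbitrary range rather than a power of two. The bases below are hypothetical per-pixel capacities; the paper's actual embedding algorithm is not reproduced.

```python
# Mixed-radix (multiple-based) number conversion: digit i lies in range(bases[i]),
# so each pixel can carry a digit whose range need not be a power of two.
# The bases here are hypothetical; the paper's embedding step is not shown.
def to_mixed_radix(value, bases):
    digits = []
    for b in bases:
        digits.append(value % b)
        value //= b
    assert value == 0, "message too large for the given bases"
    return digits

def from_mixed_radix(digits, bases):
    value = 0
    for d, b in zip(reversed(digits), reversed(bases)):
        value = value * b + d
    return value

bases = [5, 3, 7, 6]        # hypothetical per-pixel embedding capacities
msg = 0b100110              # 38: message bits interpreted as an integer
digits = to_mixed_radix(msg, bases)
assert from_mixed_radix(digits, bases) == msg    # lossless round trip
# capacity: 5*3*7*6 = 630 distinct values, i.e. log2(630) ~ 9.3 bits in 4 pixels
```

This shows why the method gains capacity: four power-of-two pixels restricted to, say, 4, 2, 4, and 4 levels would carry only 7 bits, while the mixed-radix digits use the full per-pixel ranges.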
Funding: The Technology Project of State Grid Corporation of China Headquarters (No. 5400-202318547A-3-2-ZN).
Abstract: The traditional detailed model of the dual active bridge (DAB) power electronic transformer is characterized by the high dimensionality of its nodal admittance matrix and the need for a small simulation step size, which limits the speed of electromagnetic transient (EMT) simulations. To overcome these limitations, a novel EMT equivalent model based on a generalized branch-cutting method is proposed to improve the simulation efficiency of the DAB model. The DAB topology is first decomposed into two subnetworks through branch-cutting and node-tearing methods without introducing a one-time-step delay. Subsequently, the internal nodes of each subnetwork are eliminated through network simplification, and the equivalent circuit for the port cascade module is derived. The model is then validated through simulations across various operating conditions. The results demonstrate that the model avoids the loss of accuracy associated with a one-time-step delay, that the relative error across different conditions remains below 1%, and that the simulation acceleration ratio improves as the number of modules increases.
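Eliminating internal nodes of a nodal admittance matrix while preserving port behavior is commonly done via Kron reduction (a Schur complement); the sketch below shows that generic technique, which the network-simplification step resembles, not the paper's exact derivation.

```python
import numpy as np

# Generic Kron reduction: eliminate internal nodes of a nodal admittance
# matrix Y, keeping only the port (boundary) nodes. Standard technique;
# the paper's exact derivation may differ.
def kron_reduce(Y, keep):
    keep = np.asarray(keep)
    elim = np.setdiff1d(np.arange(Y.shape[0]), keep)
    Ykk = Y[np.ix_(keep, keep)]
    Yke = Y[np.ix_(keep, elim)]
    Yek = Y[np.ix_(elim, keep)]
    Yee = Y[np.ix_(elim, elim)]
    # Schur complement: Y_red = Ykk - Yke @ Yee^{-1} @ Yek
    return Ykk - Yke @ np.linalg.solve(Yee, Yek)

# 3-node example: nodes 0 and 2 are ports, node 1 is internal,
# with series admittances of 1 S and 2 S meeting at node 1.
Y = np.array([[1.0, -1.0,  0.0],
              [-1.0, 3.0, -2.0],
              [0.0, -2.0,  2.0]])
Yr = kron_reduce(Y, keep=[0, 2])
# equivalent series admittance between the ports: 1*2/(1+2) = 2/3 S
```

The reduced matrix reproduces the port currents of the full network exactly at the given frequency, which is why internal-node elimination does not introduce the accuracy loss of a one-time-step delay.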
Funding: Supported by the China National Astronomical Data Center (NADC), the CAS Astronomical Data Center, and the Chinese Virtual Observatory (China-VO); also supported by the Astronomical Big Data Joint Research Center, co-founded by the National Astronomical Observatories, Chinese Academy of Sciences, and Alibaba Cloud.
Abstract: Ancient stellar observations are a valuable cultural heritage, profoundly influencing both cultural domains and modern astronomical research. Shi's Star Catalog (石氏星经), the oldest extant star catalog in China, faces controversy regarding its observational epoch. Determining this epoch via precession assumes accurate ancient coordinates and correspondence with contemporary stars, posing significant challenges. This study introduces a novel method using the Generalized Hough Transform to ascertain the catalog's observational epoch. This approach statistically accommodates errors in the ancient coordinates and discrepancies between ancient and modern stars, addressing limitations of prior methods. Our findings date Shi's Star Catalog to the 4th century BCE, with 2nd-century CE adjustments. In comparison, the Western tradition's oldest known catalog, the Ptolemaic Star Catalog (2nd century CE), likely derives from the Hipparchus Star Catalog (2nd century BCE). Thus, Shi's Star Catalog is identified as the world's oldest known star catalog. Beyond establishing its observation period, this study aims to consolidate and digitize these cultural artifacts.
Funding: Supported in part by the Natural Science Foundation of Guangxi (2023GXNSFAA026246), in part by the Central Government's Guide to Local Science and Technology Development Fund (GuikeZY23055044), and in part by the National Natural Science Foundation of China (62363003).
Abstract: In this paper, we consider the maximal positive definite solution of a nonlinear matrix equation. By using the idea of Algorithm 2.1 in ZHANG (2013), a new inversion-free method with a stepsize parameter is proposed to obtain the maximal positive definite solution of the nonlinear matrix equation X + A^(*)X^(-α)A = Q in the case 0 < α ≤ 1. Based on this method, a new iterative algorithm is developed, and its convergence proof is given. Finally, two numerical examples are provided to show the effectiveness of the proposed method.
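For intuition about the equation being solved, the plain fixed-point iteration X_{k+1} = Q - A^(*) X_k^(-α) A started from X_0 = Q converges to the maximal solution in simple cases. Note that this sketch computes the matrix power directly (via eigendecomposition), so it is NOT the paper's inversion-free algorithm; it only illustrates the equation.

```python
import numpy as np

# Plain fixed-point iteration X_{k+1} = Q - A* X_k^{-alpha} A for
# X + A* X^{-alpha} A = Q, started from X_0 = Q. This uses an explicit
# matrix power, so it is NOT the paper's inversion-free method; it is
# only an illustration of the equation and its maximal solution.
def matrix_power_hermitian(X, p):
    w, V = np.linalg.eigh(X)          # X Hermitian positive definite
    return (V * w**p) @ V.conj().T

def solve_fixed_point(A, Q, alpha=1.0, n_iter=200):
    X = Q.copy()
    for _ in range(n_iter):
        X = Q - A.conj().T @ matrix_power_hermitian(X, -alpha) @ A
    return X

A = 0.5 * np.eye(2)
Q = 2.0 * np.eye(2)
X = solve_fixed_point(A, Q, alpha=1.0)
residual = X + A.conj().T @ matrix_power_hermitian(X, -1.0) @ A - Q
# residual ~ 0; here the maximal solution is (1 + sqrt(3)/2) * I, since the
# scalar equation x + 0.25/x = 2 has roots 1 +/- sqrt(3)/2
```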
Funding: Supported by the Anhui Provincial Natural Science Foundation (2408085QA030), the Natural Science Research Project of the Anhui Educational Committee, China (2022AH050825), the Medical Special Cultivation Project of Anhui University of Science and Technology (YZ2023H2C008), the Excellent Research and Innovation Team of Anhui Province, China (2022AH010052), the Scientific Research Foundation for High-level Talents of Anhui University of Science and Technology, China (2021yjrc51), and the Collaborative Innovation Program of Hefei Science Center, CAS, China (2019HSC-CIP006).
Abstract: In this paper, a novel method for investigating the particle-crushing behavior of breeding particles in a fusion blanket is proposed. Fractal theory and the Weibull distribution are combined to establish a theoretical model, whose validity was verified using a simple impact test. A crushable discrete element method (DEM) framework is built on the theoretical model: a tensile strength that accounts for fractal theory, the size effect, and Weibull variation is assigned to each generated particle, and the assigned strength is used for crush detection by comparison with the particle's maximum tensile stress. Mass conservation is ensured by inserting a series of sub-particles whose total mass equals the mass lost. Based on the crushable DEM framework, a numerical simulation of the crushing behavior of a pebble bed with hollow cylindrical geometry under a uniaxial compression test was performed. The results show that a particle withstands the external load through contact and sliding at the beginning of the compression process, and they confirm that crushing can be considered an important mechanism for resisting the increasing external load. A relatively regular particle arrangement aids in resisting the load and reduces the occurrence of particle crushing; however, there is a limit to this effect. When the strain increases beyond this limit, the distribution of crushing positions tends to become isotropic over the entire pebble bed. The theoretical model and crushable DEM framework provide a new method for exploring pebble beds in fusion reactors while accounting for particle crushing.
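Assigning per-particle strengths with a Weibull size effect is a standard ingredient of such frameworks; a generic sampling sketch follows. The characteristic strength, reference size, and Weibull modulus below are hypothetical values, not the paper's, and the fractal-theory refinement is not reproduced.

```python
import numpy as np

# Illustrative Weibull strength sampling with a size effect (sigma_0, d_0, m
# are hypothetical, not the paper's values). From the survival probability
# P_s = exp[-(d/d_0)^3 * (sigma/sigma_0)^m], inverting a uniform draw U gives
#   sigma = sigma_0 * (d/d_0)^(-3/m) * (-ln U)^(1/m).
def sample_strengths(diameters, sigma_0=100.0, d_0=1.0, m=3.0, rng=None):
    rng = rng or np.random.default_rng(0)
    U = rng.uniform(size=len(diameters))
    return sigma_0 * (diameters / d_0) ** (-3.0 / m) * (-np.log(U)) ** (1.0 / m)

d_small = np.full(10000, 0.5)   # hypothetical pebble diameters (mm)
d_large = np.full(10000, 2.0)
s_small = sample_strengths(d_small)
s_large = sample_strengths(d_large)
# size effect: smaller particles are statistically stronger
print(s_small.mean() > s_large.mean())  # True
```

Each generated particle then keeps its sampled strength, and crush detection reduces to comparing the maximum tensile stress against it.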
Funding: Funded by the Beijing Engineering Research Center of Electric Rail Transportation.
Abstract: Effective partitioning is crucial for enabling parallel restoration of power systems after blackouts. This paper proposes a novel partitioning method based on deep reinforcement learning. First, the partitioning decision process is formulated as a Markov decision process (MDP) model that maximizes modularity, with the corresponding key partitioning constraints on parallel restoration taken into account. Second, based on the partitioning objective and constraints, the reward function of the partitioning MDP model is set by adopting a relative-deviation normalization scheme to reduce mutual interference between the reward and the penalty in the reward function, and a soft bonus scaling mechanism is introduced to mitigate the overestimation caused by abrupt jumps in the reward. Then, the deep Q-network method is applied to solve the partitioning MDP model and generate partitioning schemes, with two experience replay buffers employed to speed up training. Finally, case studies on the IEEE 39-bus test system demonstrate that the proposed method can generate a high-modularity partitioning result that meets all key partitioning constraints, thereby improving the parallelism and reliability of the restoration process. Moreover, simulation results demonstrate that an appropriate discount factor is crucial for ensuring both the convergence speed and the stability of partitioning training.
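The objective being maximized is Newman modularity, which can be computed directly from the adjacency matrix. The sketch below uses a small hypothetical graph, not the IEEE 39-bus system.

```python
import numpy as np

# Newman modularity of a partition, the objective the partitioning MDP maximizes:
#   Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)
# The graph below is a small hypothetical example, not the IEEE 39-bus system.
def modularity(A, communities):
    k = A.sum(axis=1)                  # node degrees
    two_m = A.sum()                    # 2m = total degree
    c = np.asarray(communities)
    same = (c[:, None] == c[None, :])  # delta(c_i, c_j)
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

# two triangles joined by a single bridge edge
A = np.zeros((6, 6))
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
good = modularity(A, [0, 0, 0, 1, 1, 1])   # split at the bridge
bad = modularity(A, [0, 1, 0, 1, 0, 1])    # arbitrary split
# the bridge split scores much higher: good = 5/14, bad = -3/14
```

In the RL formulation, each partitioning action changes the community assignment, and the modularity increment (after constraint penalties) drives the reward.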
Funding: Supported by the National Key Research and Development Program (No. 2022YFC3701103) and the National Natural Science Foundation of China (Nos. 42130714 and 41931287).
Abstract: The application of nitrogen fertilizers to agricultural fields can lead to the release of nitrogen-containing gases (NCGs) such as NO_(x), NH_(3), and N_(2)O, which can significantly impact the regional atmospheric environment and contribute to global climate change. However, considerable research gaps remain in the accurate measurement of NCG emissions from agricultural fields, hindering the development of effective emission reduction strategies. We improved an open-top dynamic chambers (OTDCs) system and evaluated its performance by comparing the measured and given fluxes of the NCGs. The results showed that the measured fluxes of NO, N_(2)O, and NH_(3) were 1%, 2%, and 7% lower than the given fluxes, respectively. For the determination of NH_(3) concentration, we employed a stripping coil-ion chromatograph (SC-IC) analytical technique, which demonstrated an absorption efficiency for atmospheric NH_(3) exceeding 96.1% across sampling durations of 6 to 60 min. In the summer maize season, we used the OTDCs system to measure the exchange fluxes of NO, NH_(3), and N_(2)O from soil in the North China Plain. Substantial emissions of NO, NH_(3), and N_(2)O were recorded following fertilization, with peaks of 107, 309, and 1239 ng N/(m^(2)·s), respectively. Notably, significant NCG emissions were observed following sustained heavy rainfall one month after fertilization, with the NH_(3) peak in particular being 4.5 times higher than that observed immediately after fertilization. Our results demonstrate that the OTDCs system accurately reflects the emission characteristics of soil NCGs and meets the requirements for long-term, continuous flux observation.
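Dynamic (flow-through) chamber fluxes are conventionally derived from the inlet-outlet concentration difference; a generic calculation is sketched below. The numbers are hypothetical, and the OTDCs system's specific corrections and calibration are not reproduced.

```python
# Standard dynamic (flow-through) chamber flux calculation, for illustration:
#   F = q * (C_out - C_in) / A, with consistent units.
# Hypothetical numbers below; the paper's OTDCs calibration is not shown.
def chamber_flux(c_out, c_in, flow_rate, area):
    """
    c_out, c_in : gas concentration at chamber outlet / inlet (ng N per m^3)
    flow_rate   : purge air flow through the chamber (m^3 per s)
    area        : soil surface area covered by the chamber (m^2)
    returns the flux in ng N m^-2 s^-1
    """
    return flow_rate * (c_out - c_in) / area

# e.g. 20 L/min purge flow, 0.1 m^2 chamber footprint
f = chamber_flux(c_out=5000.0, c_in=2000.0, flow_rate=20e-3 / 60, area=0.1)
# f = 0.000333... * 3000 / 0.1 = 10 ng N m^-2 s^-1
```

Comparing such measured fluxes against known "given" fluxes released inside the chamber is how recovery rates like the 1%, 2%, and 7% underestimates above are quantified.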