Kernel-based slow feature analysis (SFA) methods have been successfully applied in the field of industrial process fault detection. However, kernel-based SFA methods have high computational complexity when dealing with nonlinearity, leading to delays in detecting time-varying data features. Additionally, the uncertain kernel function and kernel parameters limit the ability of the extracted features to express process characteristics, resulting in poor fault detection performance. To alleviate these problems, a novel randomized auto-regressive dynamic slow feature analysis (RRDSFA) method is proposed to simultaneously monitor operating-point deviations and process dynamic faults, enabling real-time monitoring of data features in industrial processes. Firstly, the proposed random Fourier mapping-based method achieves a more effective nonlinear transformation, in contrast to the current kernel-based RDSFA algorithm, which may incur significant computational complexity. Secondly, a randomized RDSFA model is developed to extract nonlinear dynamic slow features. Furthermore, a Bayesian inference-based overall fault monitoring model that combines all RRDSFA sub-models is developed to overcome the randomness of random Fourier mapping. Finally, the superiority and effectiveness of the proposed monitoring method are demonstrated through a numerical case and a simulation of a continuous stirred tank reactor.
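The abstract does not specify the form of the random Fourier mapping, but the standard construction (Rahimi-Recht random features approximating an RBF kernel) conveys the idea. The Python sketch below is a minimal, hypothetical illustration of that mapping step only; the dimensions, gamma value, and variable names are assumptions, and the subsequent auto-regressive dynamic SFA stage is not shown.

import numpy as np

def random_fourier_features(X, n_features=200, gamma=1.0, seed=0):
    """Approximate an RBF kernel feature map z(x) such that
    z(x).dot(z(y)) ~= exp(-gamma * ||x - y||^2) (Rahimi & Recht, 2007)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the Fourier transform of the RBF kernel
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Toy usage: map a batch of process measurements before a linear SFA/PCA step
X = np.random.randn(500, 8)       # 500 samples, 8 process variables (synthetic)
Z = random_fourier_features(X)    # 500 x 200 nonlinear features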
Data-driven process monitoring is an effective approach to assure safe operation of modern manufacturing and energy systems, such as the thermal power plants studied in this work. Industrial processes are inherently dynamic and need to be monitored using dynamic algorithms. Mainstream dynamic algorithms rely on concatenating the current measurement with past data. This work proposes a new, alternative dynamic process monitoring algorithm using dot product feature analysis (DPFA). DPFA computes the dot product of consecutive samples, thus naturally capturing the process dynamics through temporal correlation. At the same time, DPFA's online computational complexity is lower than not only existing dynamic algorithms but also classical static algorithms (e.g., principal component analysis and slow feature analysis). The detectability of the new algorithm is analyzed for three types of faults typically seen in process systems: sensor bias, process fault, and gain change fault. Through experiments with a numerical example and real data from a thermal power plant, the DPFA algorithm is shown to be superior to state-of-the-art methods in terms of better monitoring performance (fault detection rate and false alarm rate) and lower computational complexity.
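The abstract describes DPFA as taking dot products of consecutive samples; the actual monitoring statistic and control limit are not given, so the sketch below is only a hedged, simplified illustration of that idea (standardize, take x_t·x_{t-1}, and compare against an empirical limit learned from normal-operation data). All data and thresholds here are synthetic assumptions.

import numpy as np

def dot_product_features(X):
    """Dot products of consecutive standardized samples: a scalar sequence
    that reflects short-term temporal correlation in the process data."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    return np.einsum('ij,ij->i', Xs[1:], Xs[:-1])   # x_t . x_{t-1}

# Training phase: empirical control limit from normal-operation data
X_train = np.random.randn(2000, 10)
d_train = dot_product_features(X_train)
limit = np.quantile(np.abs(d_train - d_train.mean()), 0.99)

# Online phase: flag samples whose dot-product feature deviates from normal
X_test = np.random.randn(200, 10)
d_test = dot_product_features(X_test)
alarms = np.abs(d_test - d_train.mean()) > limit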
The authors regret that the original publication of this paper did not include Jawad Fayaz as a co-author. After further discussions and a thorough review of the research contributions, it was agreed that his significant contributions to the foundational aspects of the research warranted recognition, and he has now been added as a co-author.
Objective To determine the correlation between traditional Chinese medicine (TCM) inspection-of-spirit classification and the severity grade of depression based on facial features, offering insights for intelligent integrated TCM and Western medicine diagnosis of depression. Methods Using the Audio-Visual Emotion Challenge and Workshop (AVEC 2014) public dataset on depression, which includes 150 interview videos, the samples were classified according to the TCM inspection-of-spirit classification: Deshen (得神, presence of spirit), Shaoshen (少神, insufficiency of spirit), and Shenluan (神乱, confusion of spirit). Meanwhile, based on the Beck Depression Inventory-II (BDI-II) score for the severity grade of depression, the samples were divided into minimal (0-13, Q1), mild (14-19, Q2), moderate (20-28, Q3), and severe (29-63, Q4). Sixty-eight facial landmarks were extracted with a ResNet-50 network, and the feature extraction mode was standardized. Random forest and support vector machine (SVM) classifiers were used to predict the TCM inspection-of-spirit class and the severity grade of depression, respectively. A Chi-square test and Apriori association rule mining were then applied to quantify and explore the relationships. Results The analysis revealed a statistically significant and moderately strong association between TCM spirit classification and the severity grade of depression, as confirmed by a Chi-square test (χ^(2)=14.04, P=0.029) with a Cramer's V effect size of 0.243. Further exploration using association rule mining identified the most compelling rule: "moderate depression (Q3) → Shenluan". This rule demonstrated a support level of 5%, indicating that this specific co-occurrence was present in 5% of the cohort. Crucially, it achieved a high confidence of 86%, meaning that among patients diagnosed with Q3, 86% exhibited the Shenluan pattern according to TCM assessment. The substantial lift of 2.37 signifies that the observed likelihood of Shenluan manifesting in Q3 patients is 2.37 times higher than would be expected by chance if these states were independent, compelling evidence of a highly non-random association. Consequently, Shenluan emerges as a distinct and core TCM diagnostic manifestation strongly linked to Q3, forming a clinically significant phenotype within this patient subgroup.
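For readers unfamiliar with the statistics quoted above, the Chi-square test and Cramer's V can be reproduced on any spirit-class-by-severity-grade contingency table as sketched below; the counts here are purely hypothetical placeholders, not the study's data.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = TCM spirit classes
# (Deshen, Shaoshen, Shenluan), columns = depression grades Q1..Q4.
table = np.array([[20, 12,  5,  3],
                  [10, 15, 10,  8],
                  [ 5,  8, 30, 24]])

chi2, p, dof, _ = chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, Cramer's V = {cramers_v:.3f}")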
Multistage multi-cluster hydraulic fracturing has enabled the economic exploitation of shale reservoirs, but the interpretation of hydraulic fracture parameters is challenging. The pressure signals after pump shutdown are influenced by hydraulic fractures and can reflect the geometric features of the fractures. The shutdown pressure can therefore be used to interpret hydraulic fracture parameters in a real-time and cost-effective manner. In this paper, a mathematical model for shutdown pressure evolution is developed considering the effects of wellbore friction, perforation friction, and fluid loss in fractures. An efficient numerical simulation method is established using the method of characteristics. Based on this method, the impacts of fracture half-length, fracture height, the number of opened clusters and perforations, and the filtration coefficient on the evolution of shutdown pressure are analyzed. The results indicate that a larger fracture half-length may hasten the decay of shutdown pressure, while a larger fracture height can slow down the decay of shutdown pressure. A smaller number of opened clusters and perforations can significantly increase the perforation friction and decrease the overall level of shutdown pressure. A larger filtration coefficient may accelerate fluid filtration in the fracture and hasten the drop of the shutdown pressure. The simulation method for shutdown pressure, as well as the analysis results, has important implications for the interpretation of hydraulic fracture parameters.
The accuracy and reliability of non-destructive testing (NDT) approaches in detecting interior corrosion problems are critical, yet research in this field is limited. This work describes a novel way to monitor the structural integrity of steel gas pipelines that uses advanced numerical modeling techniques to anticipate fracture development and corrosion effects. The objective is to increase pipeline dependability and safety through more precise, real-time health evaluations. Compared to previous approaches, our solution provides higher accuracy in fault detection and quantification, making it suitable for pipeline integrity monitoring in real-world applications. To this end, statistical analysis was conducted on the size and directional distribution features of about 380,000 sets of internal corrosion faults, along with simulations of erosion and wear patterns on bent pipes. Using real defect morphologies, we developed a modeling framework for typical interior corrosion flaws. We evaluated and validated the applicability and effectiveness of in-service inspection processes, and conducted on-site comparison tests. The results show that (1) the length and width of corrosion defects follow a log-normal distribution, the clock orientation follows a normal distribution, and the peak depth follows a Freundlich EX function distribution pattern; (2) pipeline corrosion defect data can be classified into three classes using the K-means clustering algorithm, allowing rapid and convenient acquisition of typical size and orientation characteristics of internal corrosion defects; (3) the applicability range and boundary conditions of various NDT techniques were verified, establishing comprehensive selection principles for internal corrosion defect detection technology; and (4) on-site inspection results showed a 31% ... The simulation and validation platform for typical interior corrosion issues greatly enhances the accuracy and reliability of detection data.
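Finding (1), the log-normal fit of defect lengths, and finding (2), grouping defects into three classes with K-means, follow a standard workflow that can be sketched as below; the synthetic defect records here merely stand in for the paper's roughly 380,000 real measurements, and the feature columns are illustrative assumptions.

import numpy as np
from scipy.stats import lognorm
from sklearn.cluster import KMeans

# Illustrative defect records: [length, width, clock orientation, peak depth]
rng = np.random.default_rng(1)
defects = np.column_stack([
    rng.lognormal(mean=2.0, sigma=0.5, size=1000),   # length (mm)
    rng.lognormal(mean=1.2, sigma=0.4, size=1000),   # width (mm)
    rng.normal(loc=6.0, scale=2.0, size=1000),       # clock position (h)
    rng.gamma(shape=2.0, scale=0.3, size=1000),      # peak depth (mm)
])

# (1) Check the log-normal hypothesis for defect length
shape, loc, scale = lognorm.fit(defects[:, 0], floc=0.0)

# (2) Group defects into three classes, as in the K-means step of the paper
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(defects)
for k in range(3):
    print(k, defects[labels == k].mean(axis=0))   # typical size/orientation per class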
Fault degradation prognostics, which estimates the time before a failure occurs and the process breaks down, has been recognized as a key component of maintenance strategies nowadays. Fault degradation processes are, in general, slowly varying and can be modeled by autoregressive models. However, industrial processes always show a typical nonstationary nature, which brings two challenges: how to capture fault degradation information and how to model nonstationary processes. To address these critical issues, a novel fault degradation modeling and online fault prognostic strategy is developed in this paper. First, a fault degradation-oriented slow feature analysis (FDSFA) algorithm is proposed to extract fault degradation directions, along which candidate fault degradation features are extracted. A trendability assessment is then applied to select the major fault degradation features. Second, a key fault degradation factor (KFDF) is calculated to characterize the fault degradation tendency by combining the major fault degradation features and their stability weighting factors. After that, a time-varying regression model with temporal smoothness regularization is established considering nonstationary characteristics. On the basis of an updating strategy, an online fault prognostic model is further developed by analyzing and modeling the prediction errors. The performance of the proposed method is illustrated with a real industrial process.
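FDSFA itself is not specified in the abstract, but the slow-feature extraction step it builds on is standard linear SFA: find projections whose outputs vary as slowly as possible over time. A compact sketch, assuming plain linear SFA rather than the paper's fault-degradation-oriented variant, with a synthetic toy signal:

import numpy as np
from scipy.linalg import eigh

def slow_feature_analysis(X, n_slow=3):
    """Linear SFA: find projections w minimizing the variance of the
    temporal difference of w'x, subject to unit variance of w'x."""
    Xc = X - X.mean(axis=0)
    dX = np.diff(Xc, axis=0)
    A = dX.T @ dX / (len(dX) - 1)    # covariance of first differences
    B = Xc.T @ Xc / (len(Xc) - 1)    # covariance of the (centered) data
    # Generalized symmetric eigenproblem A w = lambda B w; eigh returns
    # eigenvalues in ascending order, so the first n_slow columns are slowest.
    _, W = eigh(A, B)
    return Xc @ W[:, :n_slow]

# Toy usage: a slowly drifting component hidden among faster ones
t = np.linspace(0, 10, 2000)
X = np.column_stack([np.sin(0.2 * t), np.sin(5 * t), np.sin(9 * t)])
X = X @ np.random.randn(3, 3) + 0.01 * np.random.randn(2000, 3)
S = slow_feature_analysis(X, n_slow=1)   # recovers the slow sin(0.2 t) trend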
As multimedia data sharing increases, data security in mobile devices and its mechanisms can be seen as critical. Biometrics combines the physiological and behavioral qualities of an individual to validate their identity in real time. Humans possess physiological attributes such as fingerprint, face, iris, palm print, finger knuckle print, and Deoxyribonucleic Acid (DNA), and behavioral qualities such as gait, voice, signature, or keystroke. The main goal of this paper is to design a robust framework for automatic face recognition. Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) are employed for face recognition. Also, we propose a modified Gabor Wavelet Transform for SIFT/SURF (GWT-SIFT/GWT-SURF) to increase the recognition accuracy of human faces. The proposed scheme is composed of three steps. First, the entropy of the image is removed using the Discrete Wavelet Transform (DWT). Second, the computational complexity of SIFT/SURF is reduced. Third, the authentication accuracy is increased by the proposed GWT-SIFT/GWT-SURF algorithm. A comparative analysis of the proposed scheme is performed on the real-time Olivetti Research Laboratory (ORL) and Poznan University of Technology (PUT) databases. Compared to the traditional SIFT/SURF methods, GWT-SIFT achieves the better accuracy of 99.32%, while GWT-SURF is the faster approach, with a run time of 3.4 seconds for 100 images compared to 4.9 seconds for GWT-SIFT.
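The SIFT baseline the paper modifies can be run with OpenCV as below; the image file names are placeholders, and the Gabor-wavelet and DWT preprocessing stages described in the abstract are not reproduced here.

import cv2

# Minimal SIFT keypoint/descriptor extraction and brute-force matching,
# illustrating only the SIFT step the paper builds on.
img1 = cv2.imread("face_a.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread("face_b.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio-test matching (Lowe) to score how well the two faces correspond
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} good matches out of {len(matches)}")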
[Objective] The research aimed to analyze the characteristics of atmospheric particulate pollutants (PM10 and PM2.5) in Wenzhou City. [Method] We analyzed the interannual variation of dust haze in Wenzhou during 1978-2008. Moreover, we set up monitoring points in an urban district, an industrial park, and a scenic area of Wenzhou in the summer and winter of 2010. The element, ion, and polycyclic aromatic hydrocarbon compositions and the morphology of the particulate matter were analyzed. [Result] Dust haze in Wenzhou City mainly appeared in winter and spring, which was related to local meteorological conditions. In both summer and winter, PM10 and PM2.5 concentrations followed the order industrial park > commercial area > scenic area. Chain-like particle aggregates and ultrafine particles were the main components of the atmospheric particulate matter in Wenzhou. The contribution of spherical particles was smaller than in large metropolises, which was related to local industry and traffic. Fe was the most abundant element in the particulate matter. The mass concentration was dominated by six elements, Na, Si, S, K, Ca, and Fe, whose total concentration accounted for 70%-80% of the 16 elements measured. SO_4^(2-) and NH_4^(+) in the particulate matter were relatively high and mainly originated from human activity. The main polycyclic aromatic hydrocarbons were naphthalene, anthracene, benzo(b)fluoranthene, indeno(1,2,3-cd)pyrene, and benzo(g,h,i)perylene, which was related to the abrupt increase in motor vehicles. [Conclusion] The research provides a scientific basis and technical support for the government and related departments to control atmospheric particulate matter pollution in Wenzhou City.
Analysis of customers' satisfaction provides a guarantee for improving the service quality in call centers. In this paper, a novel satisfaction recognition framework is introduced to analyze customers' satisfaction. In natural conversations, the interaction between a customer and an agent takes place more than once. One of the difficulties in satisfaction analysis at call centers is that not all conversation turns exhibit customer satisfaction or dissatisfaction. To solve this problem, an intelligent system is proposed that utilizes acoustic features to recognize customers' emotion and utilizes global emotion and duration features to analyze satisfaction. Experiments on real-call data show that the proposed system offers a significantly higher accuracy in analyzing satisfaction than the baseline system. The average F value is improved from 0.664 to 0.701.
Dear Sir, I am Dr. Kavitha S, from the Department of Electronics and Communication Engineering, Nandha Engineering College, Erode, Tamil Nadu, India. I write to present the detection of glaucoma using an extreme learning machine (ELM) and fractal feature analysis. Glaucoma is the second most frequent cause of permanent blindness in industrial...
The theoretical positioning accuracy of multilateration (MLAT) with the time difference of arrival (TDOA) algorithm is very high. However, there are some problems in practical applications. Here we analyze the localization performance of the time sum of arrival (TSOA) algorithm in terms of the root mean square error (RMSE) and geometric dilution of precision (GDOP) in an additive white Gaussian noise (AWGN) environment. The TSOA localization model is constructed and used to present the distribution of the location ambiguity region with 4 base stations. The location performance analysis then starts from the 4-base-station case by calculating the RMSE and GDOP variation. Subsequently, when the location parameters are changed, such as the number of base stations and the base station layout, the performance patterns of the TSOA location algorithm are shown. In this way, the TSOA location characteristics and performance are revealed. From the changing trends of the RMSE and GDOP, the anti-noise performance and robustness of the TSOA localization algorithm are demonstrated. The TSOA anti-noise performance can be used to reduce the blind zone and the false location rate of MLAT systems.
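As a rough illustration of how GDOP is evaluated for a given station layout, the sketch below forms the measurement Jacobian from unit vectors toward the target (range sums for TSOA, range differences for TDOA, with station 0 as the reference) and assumes unit-variance, uncorrelated measurement errors; the square 4-station layout and target location are arbitrary examples, not the paper's configuration.

import numpy as np

def gdop(target, stations, mode="tsoa"):
    """Geometric dilution of precision for a 2-D target and base stations.
    Each Jacobian row is the gradient of a range-sum (TSOA) or
    range-difference (TDOA) measurement with respect to target position."""
    stations = np.asarray(stations, dtype=float)
    u = target - stations
    u /= np.linalg.norm(u, axis=1, keepdims=True)     # unit vectors to target
    ref = u[0]                                        # station 0 as reference
    sign = 1.0 if mode == "tsoa" else -1.0
    H = u[1:] + sign * ref                            # one row per station pair
    return np.sqrt(np.trace(np.linalg.inv(H.T @ H)))

stations = [(0, 0), (10, 0), (0, 10), (10, 10)]       # 4-base-station layout (km)
print(gdop(np.array([3.0, 4.0]), stations, mode="tsoa"))
print(gdop(np.array([3.0, 4.0]), stations, mode="tdoa"))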
The tool for analyzing and evaluating system characteristics based on the AADL model can perform real-time, reliability, safety, and schedulability analysis and evaluation for software-intensive systems. It provides a complete solution for quality analysis of real-time behavior, reliability, safety, and schedulability in the design and demonstration stages of software-intensive systems. By using this AADL-model-based multi-characteristic (real-time capability, reliability, safety, schedulability) analysis and evaluation tool, the software non-functional requirements stipulated by existing model development standards and specifications can be met. This effectively improves the efficiency of demonstrating the compliance of a system's non-functional quality attributes in the design of our unit's software-intensive systems. It can also improve the performance of our unit's software-intensive systems in engineering inspections and requirement reviews conducted by various organizations. The improvement in the quality of software-intensive systems can enhance the market competitiveness of our unit's electronic products.
To develop precision or personalized medicine, identifying new quantitative imaging markers and building machine learning models to predict cancer risk and prognosis has attracted broad research interest recently. Most of these research approaches use concepts similar to conventional computer-aided detection schemes for medical images, which include detecting and segmenting suspicious regions or tumors, followed by training machine learning models based on the fusion of multiple image features computed from the segmented regions or tumors. However, due to the heterogeneity and boundary fuzziness of the suspicious regions or tumors, segmenting subtle regions is often difficult and unreliable. Additionally, ignoring global and/or background parenchymal tissue characteristics may also be a limitation of the conventional approaches. In our recent studies, we investigated the feasibility of developing new computer-aided schemes implemented with machine learning models that are trained by global image features to predict cancer risk and prognosis. We trained and tested several models using images obtained from full-field digital mammography, magnetic resonance imaging, and computed tomography of breast, lung, and ovarian cancers. Study results showed that many of these new models yielded higher performance than other approaches used in current clinical practice. Furthermore, the computed global image features also contain information complementary to the features computed from the segmented regions or tumors in predicting cancer prognosis. Therefore, global image features can be used alone to develop new case-based prediction models or can be added to current tumor-based models to increase their discriminatory power.
Background: Diabetes is one of the fastest-rising chronic illnesses worldwide, and early detection is crucial for reducing complications. Traditional machine learning models often struggle with imbalanced data and achieve only moderate accuracy. To overcome these limitations, we propose a SMOTE-based ensemble boosting strategy (SMOTEBEnDi) for more accurate diabetes classification. Methods: The framework uses the Pima Indians diabetes dataset (PIDD), consisting of eight clinical features. Preprocessing steps included normalization, feature relevance analysis, and handling of missing values. The class imbalance was corrected using the synthetic minority oversampling technique (SMOTE), and multiple classifiers, such as K-nearest neighbor (KNN), decision tree (DT), random forest (RF), and support vector machine (SVM), were ensembled in a boosting architecture. Hyperparameter tuning with k-fold cross validation was applied to ensure robust performance. Results: Experimental analysis showed that the proposed SMOTEBEnDi model achieved 99.5% accuracy, 99.39% sensitivity, and 99.59% specificity, outperforming baseline classifiers and demonstrating near-perfect detection. The improvements in performance metrics such as area under the curve (AUC), precision, and specificity confirm the effectiveness of addressing class imbalance. Conclusion: The study shows that combining SMOTE with ensemble boosting greatly enhances early diabetes detection. This reduces diagnostic errors, supports clinicians in timely intervention, and can serve as a strong base for computer-aided diagnostic tools. Future work should extend this framework to real-time prediction systems, integrate it with IoT health devices, and adapt it across diverse clinical datasets to improve generalization and trust in real healthcare settings.
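The exact SMOTEBEnDi architecture is not detailed in the abstract; one plausible reading (SMOTE applied inside each cross-validation fold, followed by a boosted ensemble) can be sketched with imbalanced-learn and scikit-learn as below. The data is a synthetic stand-in for PIDD, and the single boosted tree ensemble is a simplification of the KNN/DT/RF/SVM ensemble the authors describe.

import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import make_pipeline
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Hypothetical stand-in for the Pima Indians data: 8 clinical features,
# imbalanced binary labels.  Replace with the real PIDD arrays.
rng = np.random.default_rng(0)
X = rng.normal(size=(768, 8))
y = (rng.random(768) < 0.35).astype(int)

# SMOTE resampling happens inside each CV fold (avoids leakage),
# followed by boosting over shallow decision trees.
model = make_pipeline(
    SMOTE(random_state=0),
    AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=3),
                       n_estimators=200, random_state=0),
)
cv = StratifiedKFold(5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(scores.mean())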
The rock mass failure induced by deep mining exhibits pronounced spatial heterogeneity and diverse mechanisms, with its microseismic responses serving as effective indicators of regional failure evolution and instability mechanisms. Focusing on the Level VI stope sublayers in the Jinchuan #2 mining area, this study constructs a 24-parameter index system encompassing time-domain features, frequency-domain features, and multifractal characteristics. Through manifold learning, clustering analysis, and hybrid feature selection, 15 key indicators were extracted to construct a classification framework for failure responses. Integrated with focal mechanism inversion and numerical simulation, the failure patterns and corresponding instability mechanisms across different structural zones were further identified. The results reveal that multiscale microseismic characteristics exhibit clear regional similarities. Based on the morphological features of radar plots derived from the 15 indicators, acoustic responses were classified into four typical types, each reflecting distinct local failure mechanisms, stress conditions, and plastic zone evolution. Moreover, considering dominant instability factors and rupture modes, four representative rock mass instability models were proposed for typical failure zones within the stope. These findings provide theoretical guidance and methodological support for hazard prediction, structural optimization, and disturbance control in deep metal mining areas.
PM_(1.0), particulate matter with an aerodynamic diameter smaller than 1.0 μm, can adversely affect human health. However, fewer stations are capable of measuring PM_(1.0) concentrations than PM_(2.5) and PM_(10) concentrations in real time (i.e., only 9 locations for PM_(1.0) vs. 623 locations for PM_(2.5) or PM_(10)) in South Korea, making it impossible to conduct a nationwide health risk analysis of PM_(1.0). Thus, this study aimed to develop a PM_(1.0) prediction model using a random forest algorithm based on PM_(1.0) data from the nine measurement stations and various environmental input factors. Cross validation, in which the model was trained on eight stations and tested on the remaining station, achieved an average R^(2) of 0.913. The high R^(2) value achieved under mutually exclusive training and test locations in the cross validation can be ascribed to the fact that all the locations had similar relationships between PM_(1.0) and the input factors, which were captured by our model. Moreover, the results of feature importance analysis showed that PM_(2.5) and PM_(10) concentrations were the two most important input features in predicting PM_(1.0) concentration. Finally, the model was used to estimate the PM_(1.0) concentrations at 623 locations where input factors such as PM_(2.5) and PM_(10) can be obtained. Based on the augmented profile, we identified Seoul and Ansan as PM_(1.0) concentration hotspots. These regions are large cities or centers of anthropogenic and industrial activities. The proposed model and the augmented PM_(1.0) profiles can be used in large epidemiological studies to understand the health impacts of PM_(1.0).
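The "train on eight stations, test on the ninth" protocol corresponds to leave-one-station-out cross-validation, which can be sketched as below; the feature matrix, station labels, and forest size are illustrative assumptions rather than the study's actual data or settings.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

# Synthetic stand-in: X holds environmental inputs (e.g. PM2.5, PM10,
# meteorology), y is PM1.0, and station is an integer id per sample.
rng = np.random.default_rng(0)
n = 9 * 500
station = np.repeat(np.arange(9), 500)
X = rng.normal(size=(n, 6))
y = X[:, 0] * 0.7 + X[:, 1] * 0.2 + rng.normal(scale=0.3, size=n)

r2s = []
for s in np.unique(station):
    train, test = station != s, station == s
    model = RandomForestRegressor(n_estimators=300, random_state=0, n_jobs=-1)
    model.fit(X[train], y[train])
    r2s.append(r2_score(y[test], model.predict(X[test])))
print("mean R2 over held-out stations:", np.mean(r2s))
print(model.feature_importances_)   # which inputs drive the prediction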
Converting CO_(2) with green hydrogen to methanol as a carbon-neutral liquid fuel is a promising route for the long-term storage and distribution of intermittent renewable energy. Nevertheless, attaining highly efficient methanol synthesis catalysts from the vast composition space remains a significant challenge. Here we present a machine learning framework for accelerating the development of high space-time yield (STY) methanol synthesis catalysts. A database of methanol synthesis catalysts has been compiled, consisting of catalyst composition, preparation parameters, structural characteristics, reaction conditions, and the corresponding catalytic performance. A methodology for constructing catalyst features based on the intrinsic physicochemical properties of the catalyst components has been developed, which significantly reduces the data dimensionality and enhances the efficiency of machine learning operations. Two high-precision machine learning prediction models for the activities and product selectivity of catalysts were trained. Using this machine learning framework, an efficient search was achieved within the catalyst composition space, leading to the successful identification of high-STY multielement oxide methanol synthesis catalysts. Notably, the CuZnAlTi catalyst achieved high STYs of 0.49 and 0.65 g_(MeOH)/(g_(catalyst) h) for CO_(2) and CO hydrogenation to methanol at 250 ℃, respectively, and the STY was further increased to 2.63 g_(MeOH)/(g_(catalyst) h) in CO and CO_(2) co-hydrogenation.
Gully feature mapping is an indispensable prerequisite for the monitoring and control of gully erosion, which is a widespread natural hazard. The increasing availability of high-resolution Digital Elevation Models (DEMs) and remote sensing imagery, combined with well-developed object-based methods, enables automatic gully feature mapping. However, few studies have specifically focused on gully feature mapping at different scales. In this study, an object-based approach to two-level gully feature mapping, including gully-affected areas and bank gullies, was developed and tested on a 1-m DEM and Worldview-3 imagery of a catchment in the Chinese Loess Plateau. The methodology includes a sequence of data preparation, image segmentation, metric calculation, and random forest based classification. The results of the two-level mapping were based on a random forest model after investigating the effects of feature selection and the class-imbalance problem. Results show that the segmentation strategy adopted in this paper, which considers topographic information and an optimal parameter combination, can improve the segmentation results. The distribution of the gully-affected area is closely related to topographic information, whereas the spectral features are more dominant for bank gully mapping. The highest overall accuracy of the gully-affected area mapping was 93.06% with four topographic features. The highest overall accuracy of bank gully mapping was 78.5% when all features were adopted. The proposed approach is a creditable option for hierarchical mapping of gully feature information and is suitable for application in the hilly Loess Plateau region.
The main purpose of nonlinear time series analysis, which is based on the theory of phase space reconstruction, is to study how to transform a response signal into the reconstructed phase space in order to extract dynamic feature information, and to provide an effective approach for nonlinear signal analysis and fault diagnosis of nonlinear dynamic systems. It has already become an important branch of nonlinear science. However, the traditional method cannot extract chaos features automatically and requires manual participation throughout the process. A new method is put forward that can automatically extract chaos features from nonlinear time series. Firstly, the time delay τ is determined by the autocorrelation method; secondly, the embedding dimension m and the correlation dimension D are computed; thirdly, the maximum Lyapunov exponent λmax is computed; finally, the chaos degree Dch of the Poincaré map and the non-circle degree Dnc and non-order degree Dno of the quasi-phase orbit are calculated. Chaos feature extraction is important for fault diagnosis of nonlinear systems based on nonlinear chaos features. Examples show the validity of the proposed method.
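The first two steps, choosing the delay by the autocorrelation method and reconstructing the phase space, can be sketched as below; the 1/e autocorrelation criterion and the toy signal are common conventions used purely for illustration, and the embedding dimension, correlation dimension, and λmax estimators are not included.

import numpy as np

def delay_by_autocorrelation(x, threshold=1.0 / np.e):
    """Pick the embedding delay tau as the first lag at which the
    autocorrelation drops below 1/e (one common convention)."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]
    below = np.where(acf < threshold)[0]
    return int(below[0]) if below.size else 1

def delay_embed(x, dim, tau):
    """Phase-space reconstruction: rows are [x(t), x(t+tau), ..., x(t+(dim-1)tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

# Toy usage on a noisy sine; for real fault signals, dim would be chosen
# e.g. by false nearest neighbours and lambda_max by Rosenstein's method.
t = np.linspace(0, 50, 5000)
x = np.sin(t) + 0.05 * np.random.randn(t.size)
tau = delay_by_autocorrelation(x)
Y = delay_embed(x, dim=3, tau=tau)
print(tau, Y.shape)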