The traveling salesman problem (TSP) is a classic NP-hard optimization problem. Based on the characteristics of the self-organizing map (SOM) network, this paper proposes an improved SOM network from the perspectives of network update strategy, initialization method, and parameter selection. The performance of the proposed algorithms is compared with that of existing SOM network algorithms on the TSP, and also with several heuristic algorithms. Simulations show that, compared with existing SOM networks, the improved SOM network improves both the convergence rate and the algorithm accuracy. Compared with iterated local search and heuristic algorithms, the improved SOM network algorithms have the advantage of fast computation on medium-scale TSP instances.
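As a rough illustration of the general SOM-for-TSP idea (not the paper's specific update strategy, initialization, or parameter schedule), a minimal elastic-ring sketch might look like the following; the neuron count, learning rate, and decay factors are arbitrary assumptions.

```python
# Minimal SOM-for-TSP sketch: a ring of neurons is repeatedly pulled toward
# randomly chosen cities; the tour is read off from the order in which cities
# map onto the trained ring. Parameters are illustrative assumptions only.
import numpy as np

def som_tsp(cities, n_iters=5000, lr=0.8, seed=0):
    rng = np.random.default_rng(seed)
    n = 8 * len(cities)                          # over-sampled ring of neurons
    ring = rng.random((n, 2))                    # random initialization (assumed)
    radius = n / 10.0
    idx = np.arange(n)
    for _ in range(n_iters):
        city = cities[rng.integers(len(cities))]
        winner = int(np.argmin(np.linalg.norm(ring - city, axis=1)))
        d = np.minimum(np.abs(idx - winner), n - np.abs(idx - winner))   # ring distance
        h = np.exp(-d ** 2 / (2.0 * max(radius, 1.0) ** 2))              # Gaussian neighborhood
        ring += lr * h[:, None] * (city - ring)
        lr *= 0.9997                             # decay schedules (assumed)
        radius *= 0.999
    # order cities by the position of their closest neuron on the ring
    return np.argsort([int(np.argmin(np.linalg.norm(ring - c, axis=1))) for c in cities])

cities = np.random.default_rng(1).random((100, 2))
tour = som_tsp(cities)                           # indices of cities in visiting order
```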
Due to the widespread use of the Internet, customer information is vulnerable to attacks on computer systems, which creates an urgent need for intrusion detection technology. Network intrusion detection has recently become one of the most important technologies in network security. Although detection accuracy has improved, existing methods, including the popular SOM neural network method, remain inefficient. This paper proposes an efficient and fast network intrusion detection method. First, the fundamentals of the two underlying methods are introduced. Then, a self-organizing feature map neural network based on K-means clustering (KSOM) is presented to improve the efficiency of network intrusion detection. Finally, the NSL-KDD network intrusion data set is used to demonstrate that the KSOM method significantly reduces the number of clustering iterations compared with the SOM method, without substantially affecting the clustering results, and achieves much higher accuracy than the K-means method. The experimental results show that the method improves the accuracy of network intrusion detection and significantly reduces the number of clustering iterations.
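The abstract does not give implementation details; one plausible reading of a "SOM based on K-means clustering" is to seed the SOM codebook with K-means centroids so that far fewer clustering iterations are needed. A hedged sketch under that assumption (grid size and schedules are invented) follows.

```python
# Hedged sketch of a K-means-seeded SOM: the codebook starts at the K-means
# centroids instead of random vectors, which typically reduces the number of
# iterations needed. Grid size and schedules are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_seeded_som(X, grid=(5, 5), n_iters=2000, lr=0.5, sigma=1.5, seed=0):
    rng = np.random.default_rng(seed)
    rows, cols = grid
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    km = KMeans(n_clusters=rows * cols, n_init=10, random_state=seed).fit(X)
    weights = km.cluster_centers_.copy()          # K-means centroids as initial codebook
    for t in range(n_iters):
        x = X[rng.integers(len(X))]
        bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
        h = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=1) / (2.0 * sigma ** 2))
        weights += lr * (1.0 - t / n_iters) * h[:, None] * (x - weights)
    return weights

X = np.random.default_rng(1).normal(size=(1000, 10))   # stand-in for NSL-KDD features
codebook = kmeans_seeded_som(X)
```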
The feature space extracted from vibration signals with various faults is often nonlinear and high-dimensional. Nonlinear dimensionality reduction methods such as manifold learning are available for extracting low-dimensional embeddings. However, these methods rely on manual intervention and have shortcomings in stability and in suppressing disturbance noise. To extract features automatically, a manifold learning method with self-organizing mapping is introduced for the first time. Under the non-uniform sample distribution reconstructed in the phase space, the expectation-maximization (EM) iteration algorithm is used to divide local neighborhoods adaptively without manual intervention. The local tangent space alignment (LTSA) algorithm is then adopted to compress the high-dimensional phase space into a more truthful low-dimensional representation. Finally, the signal is reconstructed by kernel regression. Several typical cases, including the Lorenz system, an engine fault with a piston pin defect, and a bearing fault with an outer-race defect, are analyzed. Compared with LTSA and the continuous wavelet transform, the results show that the background noise can be fully restrained and the periodic repetition of impact components is well separated and identified. A new way to automatically and precisely extract impulsive components from mechanical signals is thus proposed.
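The phase-space reconstruction step mentioned above is typically a time-delay (Takens) embedding; a minimal sketch is shown below. The delay and dimension values are assumptions, and the paper's EM-based neighborhood division and LTSA compression are not reproduced.

```python
# Minimal time-delay (Takens) embedding: each row of the returned matrix is a
# point of the reconstructed phase space. Delay and dimension are assumptions.
import numpy as np

def delay_embed(signal, dim=5, tau=3):
    n = len(signal) - (dim - 1) * tau
    return np.column_stack([signal[i * tau : i * tau + n] for i in range(dim)])

t = np.linspace(0, 20 * np.pi, 4000)
x = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)   # noisy test signal
phase_space = delay_embed(x, dim=5, tau=3)       # shape: (n_points, 5)
```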
A detailed and accurate inventory map of landslides is crucial for quantitative hazard assessment and land planning. Traditional methods relying on change detection and object-oriented approaches have been criticized for their dependence on expert knowledge and subjective factors. Recent advancements in high-resolution satellite imagery, coupled with the rapid development of artificial intelligence, particularly data-driven deep learning (DL) algorithms such as convolutional neural networks (CNN), have provided rich feature indicators for landslide mapping, overcoming previous limitations. In this review paper, 77 representative DL-based landslide detection methods applied in various environments over the past seven years were examined. This study analyzed the structures of different DL networks, discussed five main application scenarios, and assessed both the advancements and the limitations of DL in geological hazard analysis. The results indicate that the increasing number of articles per year reflects growing interest in landslide mapping by artificial intelligence, with U-Net-based structures gaining prominence due to their flexibility in feature extraction and generalization. Finally, the remaining hindrances to DL in landslide hazard research are explored. Challenges such as black-box operation and sample dependence persist, warranting further theoretical research and future applications of DL in landslide detection.
Due to rapid urbanization, waterlogging induced by torrential rainfall has become a global concern and a potential risk to the safety of urban inhabitants. Widespread waterlogging disasters have occurred almost annually in the urban area of Beijing, the capital of China. Based on a self-organizing map (SOM) artificial neural network (ANN), a graded waterlogging risk assessment was conducted on 56 low-lying points in Beijing, China. Social risk factors, such as gross domestic product (GDP), population density, and traffic congestion, were utilized as input datasets in this study. The results indicate that SOM-ANN is suitable for automatically and quantitatively assessing risks associated with waterlogging. The greatest advantage of SOM-ANN in the assessment of waterlogging risk is that a priori knowledge about classification categories and assessment indicator weights is not needed. As a result, SOM-ANN can effectively overcome interference from subjective factors, producing classification results that are more objective and accurate. In this paper, the risk level of waterlogging in Beijing was divided into five grades. The points assigned risk grades of IV or V were located mainly in the districts of Chaoyang, Haidian, Xicheng, and Dongcheng.
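As a generic illustration of SOM-based risk clustering without predefined class labels or indicator weights (not the authors' implementation), the open-source MiniSom package can be used on normalized risk factors; the grid size, factor values, and mapping of nodes to grades below are invented assumptions.

```python
# Generic illustration of SOM-based risk clustering with the open-source
# MiniSom package (not the authors' code). Input rows stand in for normalized
# social risk factors (e.g., GDP, population density, congestion) per point.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
factors = rng.random((56, 3))                                    # invented 56 points x 3 factors
factors = (factors - factors.min(0)) / (factors.max(0) - factors.min(0) + 1e-9)  # min-max scaling

som = MiniSom(5, 1, 3, sigma=1.0, learning_rate=0.5, random_seed=0)   # 5x1 map ~ five grades (assumed)
som.train_random(factors, 5000)
grades = np.array([som.winner(f)[0] for f in factors])           # node index used as risk grade 0..4
```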
Artificial neural networks (ANNs), among other soft computing methodologies, are widely used to meet the challenges posed by the main objectives of data mining classification techniques, owing to their robust, powerful, distributed, fault-tolerant computing and their capability to learn in a data-rich environment. ANNs have been used in several fields, showing high performance as classifiers. One major obstacle that prevents their use with various data sets and in several domains is the problem of dealing with non-numerical data. Another problem is their complex structure and how hard they are to interpret. The Self-Organizing Map (SOM) is a type of neural network that can be easily interpreted, but it still cannot be used with non-numerical data directly. This paper presents an enhanced SOM structure to cope with non-numerical data, using DNA sequences as the training dataset. Results show very good performance compared with other classifiers. For a more thorough evaluation, both the micro-array structure and its sequential representation as proteins were targeted as datasets, and accuracy is measured accordingly.
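The paper's exact encoding of DNA sequences is not reproduced here; one common way to make sequences digestible by a SOM is a fixed-length k-mer frequency vector, sketched below as a hedged stand-in (k and the example sequences are made up).

```python
# Hedged sketch: turn DNA strings into fixed-length numeric vectors via k-mer
# frequencies so a standard SOM can consume them. This is a generic encoding,
# not necessarily the enhanced SOM structure described in the paper.
from itertools import product
import numpy as np

def kmer_vector(seq, k=3):
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]       # 64 possible 3-mers
    index = {km: i for i, km in enumerate(kmers)}
    v = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        sub = seq[i:i + k]
        if sub in index:                                           # skip ambiguous bases
            v[index[sub]] += 1.0
    return v / max(v.sum(), 1.0)                                   # normalize to frequencies

seqs = ["ATGCGTACGTTAGC", "ATGCGTACGTTAGG", "TTTTACGCGCGATA"]      # invented examples
X = np.vstack([kmer_vector(s) for s in seqs])                      # ready for SOM training
```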
Considering that the growing hierarchical self-organizing map (GHSOM) ignores the influence of individual components in sample vector analysis, and that its accuracy in detecting unknown network attacks is relatively low, an improved GHSOM method combined with mutual information is proposed. After theoretical analysis, experiments are conducted to illustrate the effectiveness of the proposed method by accurately clustering the input data. Based on the resulting clusters, the complex relationships within the data can be revealed effectively.
We consider qualitatively robust predictive mappings of stochastic environmental models, where protection against outlier data is incorporated. We utilize digital representations of the models and deploy stochastic binary neural networks that are pre-trained to produce such mappings. The pre-training is implemented by a back-propagating supervised learning algorithm which converges almost surely to the probabilities induced by the environment, under general ergodicity conditions.
A new sub-pixel mapping method based on a BP neural network is proposed in order to determine the spatial distribution of class components in each mixed pixel. The network was used to train a model that describes the relationship between the spatial distribution of target components in a mixed pixel and its neighboring information. The sub-pixel-scale target could then be predicted by the trained model. In order to improve the performance of the BP network, a BP learning algorithm with momentum was employed. The experiments were conducted both on synthetic images and on hyperspectral imagery (HSI). The results show that this method is capable of estimating land covers fairly accurately and has a clear advantage over some other sub-pixel mapping methods in terms of computational complexity.
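The momentum term referred to above augments the plain gradient step with a fraction of the previous update. A minimal single-layer stand-in is sketched below; the learning rate, momentum value, and data are assumptions, not the paper's network.

```python
# Minimal sketch of a gradient step with momentum, as used to speed up BP
# training: delta_w(t) = -lr * grad + mu * delta_w(t-1). A single linear layer
# with squared error stands in for the full sub-pixel mapping network.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 9))                 # e.g., neighborhood information of a mixed pixel
y = X @ rng.normal(size=(9, 4))               # invented targets (sub-pixel class fractions)

W = np.zeros((9, 4))
velocity = np.zeros_like(W)
lr, mu = 0.01, 0.9                            # assumed learning rate and momentum

for epoch in range(200):
    grad = X.T @ (X @ W - y) / len(X)         # gradient of mean squared error
    velocity = mu * velocity - lr * grad      # momentum update
    W += velocity
```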
An extended self-organizing map for supervised classification is proposed in this paper. Unlike traditional SOMs, the model has an input layer, a Kohonen layer, and an output layer. The number of neurons in the input layer depends on the dimensionality of the input patterns. The number of neurons in the output layer equals the number of desired classes. The number of neurons in the Kohonen layer may range from a few to several thousand, depending on the complexity of the classification problem and the required classification precision. Each training sample is expressed by a pair of vectors: an input vector and a class codebook vector. When a training sample is input into the model, Kohonen's competitive learning rule is applied to select the winning neuron from the Kohonen layer; the weight coefficients connecting all the neurons in the input layer with both the winning neuron and its neighbors in the Kohonen layer are modified to be closer to the input vector, and those connecting all the neurons around the winning neuron within a certain diameter in the Kohonen layer with all the neurons in the output layer are adjusted to be closer to the class codebook vector. If the number of training samples is sufficiently large and the learning epochs iterate enough times, the model is able to serve as a supervised classifier. The model has been tentatively applied to the supervised classification of multispectral remotely sensed data. The author compared the performance of the extended SOM and a BPN in remotely sensed data classification. The investigation shows that the extended SOM is feasible for supervised classification.
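A compact sketch of the update rule described above is given below: one weight set is moved toward the input vector and the other toward the class codebook vector around the winning neuron. The grid size, rates, and radius schedule are assumptions.

```python
# Sketch of the extended SOM described above: W_in (input -> Kohonen) is pulled
# toward the input vector around the winner, and W_out (Kohonen -> output) is
# pulled toward the one-hot class codebook vector. Schedules are assumptions.
import numpy as np

def train_extended_som(X, y, n_classes, grid=(10, 10), n_iters=10000, lr=0.3, sigma=2.0, seed=0):
    rng = np.random.default_rng(seed)
    rows, cols = grid
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    W_in = rng.random((rows * cols, X.shape[1]))
    W_out = rng.random((rows * cols, n_classes))
    for t in range(n_iters):
        i = rng.integers(len(X))
        x, code = X[i], np.eye(n_classes)[y[i]]              # class codebook vector (one-hot)
        win = int(np.argmin(np.linalg.norm(W_in - x, axis=1)))
        h = np.exp(-np.sum((coords - coords[win]) ** 2, axis=1) / (2.0 * sigma ** 2))
        decay = 1.0 - t / n_iters
        W_in += lr * decay * h[:, None] * (x - W_in)         # toward the input vector
        W_out += lr * decay * h[:, None] * (code - W_out)    # toward the class codebook vector
    return W_in, W_out

def classify(W_in, W_out, x):
    win = int(np.argmin(np.linalg.norm(W_in - x, axis=1)))
    return int(np.argmax(W_out[win]))                        # class read from the output weights

X = np.random.default_rng(1).random((300, 6))                # invented multispectral samples
y = np.random.default_rng(2).integers(0, 3, 300)
W_in, W_out = train_extended_som(X, y, n_classes=3)
predicted = classify(W_in, W_out, X[0])
```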
Memristive neural networks have attracted tremendous attention because a memristor array can perform parallel multiply-accumulate (MAC) operations and in-memory computation, in contrast to digital CMOS hardware systems. However, owing to the variability of the memristor, implementing high-precision neural networks in memristive computation units is still difficult. Existing learning algorithms for memristive artificial neural networks (ANNs) are unable to achieve performance comparable to that of high-precision CMOS-based systems. Here, we propose an algorithm based on off-chip learning for memristive ANNs at low precision. The ANN is trained at high precision on digital CPUs, the network weights are then quantized to low precision, and the quantized weights are mapped to the memristor arrays, based on the VTEAM model, using a pulse-coding weight-mapping rule. In this work, we execute the inference of a trained five-layer convolutional neural network on the memristor arrays and achieve an accuracy close to that of high-precision (64-bit) inference. Compared with other off-chip learning algorithms, the proposed algorithm makes the mapping process easy to implement and is less affected by device variability. Our result provides an effective approach to implementing ANNs on memristive hardware platforms.
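A simplified sketch of the off-chip flow described above is given below: trained floating-point weights are quantized to a few levels and mapped linearly onto a memristor conductance window. The VTEAM device model and pulse-coded programming are not reproduced; the level count and conductance bounds are assumptions.

```python
# Simplified off-chip mapping sketch: uniform quantization of trained weights
# to 2^bits levels, then a linear map onto an assumed conductance window
# [g_min, g_max]. The VTEAM device model and pulse-coding rule are omitted.
import numpy as np

def quantize_and_map(weights, bits=4, g_min=1e-6, g_max=1e-4):
    levels = 2 ** bits - 1
    w_max = np.max(np.abs(weights))
    q = np.round(weights / w_max * (levels / 2)) / (levels / 2) * w_max          # uniform quantization
    g = g_min + (q - q.min()) / (q.max() - q.min() + 1e-12) * (g_max - g_min)    # map to conductance
    return q, g

trained = np.random.default_rng(0).normal(scale=0.5, size=(128, 64))   # stand-in trained layer
q_weights, conductances = quantize_and_map(trained, bits=4)
```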
Haze-fog, an atmospheric aerosol caused by natural or man-made factors, seriously affects the physical and mental health of human beings. PM2.5 (particulate matter whose diameter is smaller than or equal to 2.5 microns) is the chief culprit behind such aerosol. To forecast PM2.5 conditions, this paper uses related meteorological data and air pollutant data to predict the concentration of PM2.5. Since meteorological data and air pollutant data are typical time series data, it is reasonable to adopt a machine learning method with memory capability, the Single Hidden-Layer Long Short-Term Memory Neural Network (SSHL-LSTMNN), to implement the prediction. However, the number of neurons in the hidden layer is difficult to decide without manual testing. In order to determine the best structure of the neural network and improve the prediction accuracy, this paper employs a self-organizing algorithm, which uses Information Processing Capability (IPC) to adjust the number of hidden neurons automatically during the learning phase. In short, this paper proposes the SSHL-LSTMNN to predict PM2.5 concentration accurately. In the experiments, both hourly precise prediction and daily longer-term prediction are taken into account. The experimental results show that SSHL-LSTMNN performs the best.
The typical characteristic of the topology of Bayesian networks (BNs) is the interdependence among different nodes (variables), which makes it impossible to optimize one variable independently of the others; moreover, learning BN structures with general genetic algorithms is liable to converge to a local extremum. To resolve this problem efficiently, a self-organizing genetic algorithm (SGA) based method for constructing BNs from databases is presented. This method uses a self-organizing mechanism to develop a genetic algorithm that extends the crossover operator from one to two, providing mutual competition between them and even adjusting the numbers of parents in the recombination (crossover/recomposition) schemes. Together with the K2 algorithm, this method also optimizes the genetic operators and makes adequate use of domain knowledge. As a result, the method is able to find a global optimum of the BN topology while avoiding premature convergence to a local extremum. Experimental results demonstrate the effectiveness of the method, and the convergence of the SGA is discussed.
Recently, machine learning (ML) has been considered a powerful technological element in different areas of society. To turn the computer into a decision maker, several sophisticated methods and algorithms are constantly created and analyzed. In geophysics, both supervised and unsupervised ML methods have dramatically contributed to the development of seismic and well-log data interpretation. In well logging, ML algorithms are well suited to lithologic reconstruction problems, since there are no analytical expressions for computing the well-log data produced by a particular rock unit. Additionally, supervised ML methods depend strongly on an accurately labeled training data set, which is not simple to achieve owing to data absence or corruption. Once adequate supervision is performed, the classification outputs tend to be more accurate than those of unsupervised methods. This work presents a supervised version of a Self-Organizing Map, named SSOM, to solve a lithologic reconstruction problem from well-log data. First, we address a more controlled problem and simulate well-log data directly from an interpreted geologic cross-section. We then define two specific training data sets composed of density (RHOB), sonic (DT), spontaneous potential (SP), and gamma-ray (GR) logs, all simulated through a Gaussian distribution function per lithology. Once the training data set is created, we simulate a particular pseudo-well, referred to as the classification well, for defining controlled tests. The first test comprises a training data set with no labeled log data from the simulated fault zone. In the second test, we intentionally improve the training data set with the fault. To assess the results obtained for each test, we analyze confusion matrices, log plots, accuracy, and precision. Apart from very thin layer misclassifications, the SSOM provides reasonable lithologic reconstructions, especially when the improved training data set is used for supervision. The set of numerical experiments shows that our SSOM is well suited to supervised lithologic reconstruction, especially for recovering lithotypes that are weakly sampled in the training log data. On the other hand, some misclassifications are also observed when the cortex could not group the slightly different lithologies.
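The SSOM's internal details are specific to the paper; a common way to obtain a supervised SOM, shown below only as a hedged stand-in, is to label each trained neuron by the majority class of the training logs mapped to it and to classify new samples by their best-matching neuron's label.

```python
# Hedged stand-in for a supervised SOM: after ordinary SOM training, each
# neuron is labeled with the majority lithology of the training samples it
# wins, and a new log sample takes the label of its best-matching neuron.
import numpy as np
from collections import Counter

def label_neurons(codebook, X_train, y_train, default=-1):
    bmus = [int(np.argmin(np.linalg.norm(codebook - x, axis=1))) for x in X_train]
    labels = np.full(len(codebook), default)
    for k in range(len(codebook)):
        hits = [y for b, y in zip(bmus, y_train) if b == k]
        if hits:
            labels[k] = Counter(hits).most_common(1)[0][0]   # majority vote per neuron
    return labels

def classify(codebook, labels, x):
    return labels[int(np.argmin(np.linalg.norm(codebook - x, axis=1)))]

# The codebook could come from any standard SOM trained on RHOB/DT/SP/GR logs;
# random values stand in here.
codebook = np.random.default_rng(0).random((25, 4))
X_train = np.random.default_rng(1).random((500, 4))
y_train = np.random.default_rng(2).integers(0, 4, 500)       # invented lithology codes
neuron_labels = label_neurons(codebook, X_train, y_train)
predicted_lithology = classify(codebook, neuron_labels, X_train[0])
```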
Landslides are considered one of the most severe threats to human life and property in the hilly areas of the world. The number of landslides and the level of damage across the globe have been increasing over time. Therefore, landslide management is essential to maintaining the natural and socio-economic dynamics of hilly regions. The Rorachu river basin, one of the most landslide-prone areas of Sikkim, was selected for the present study. The prime goal of the study is to prepare landslide susceptibility maps (LSMs) using computer-based advanced machine learning techniques and to compare the performance of the models. To properly understand the existing spatial relation with landslides, twenty factors, including triggering and causative factors, were selected. A deep learning algorithm, the convolutional neural network (CNN) model, and three popular machine learning techniques, i.e., the random forest (RF), artificial neural network (ANN), and bagging models, were employed to prepare the LSMs. Two separate datasets, for training and validation, were designed from randomly selected landslide and non-landslide points, with a 70:30 ratio used for the selection of training and validation points. Multicollinearity was assessed by tolerance and variance inflation factor, and the role of individual conditioning factors was estimated using the information gain ratio. The results reveal no severe multicollinearity among the landslide conditioning factors, and the triggering factor rainfall appeared as the leading cause of landslides. Based on the final prediction values of each model, the LSMs were constructed and partitioned into five distinct classes: very low, low, moderate, high, and very high susceptibility. The susceptibility class-wise distribution of landslides shows that more than 90% of the landslide area falls under the higher landslide susceptibility grades. The precision of the models was examined using the area under the curve (AUC) of the receiver operating characteristic (ROC) curve and statistical measures such as root mean square error (RMSE) and mean absolute error (MAE). On both datasets (training and validation), the CNN model achieved the maximum AUC values of 0.903 and 0.939, respectively. The lowest values of RMSE and MAE also reveal the better performance of the CNN model. It can therefore be concluded that all the models performed well, but the CNN model outperformed the others in terms of precision.
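The evaluation metrics named above are standard; a short sketch of how they might be computed for a susceptibility model's predicted probabilities follows. The data below are invented, not the study's results.

```python
# Sketch of the evaluation metrics mentioned above (AUC of the ROC curve,
# RMSE, MAE) applied to invented susceptibility probabilities; not the
# authors' data or models.
import numpy as np
from sklearn.metrics import roc_auc_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 300)                                 # 1 = landslide, 0 = non-landslide
y_prob = np.clip(y_true * 0.7 + rng.random(300) * 0.5, 0, 1)     # invented model output

auc = roc_auc_score(y_true, y_prob)
rmse = mean_squared_error(y_true, y_prob) ** 0.5
mae = mean_absolute_error(y_true, y_prob)
print(f"AUC={auc:.3f}  RMSE={rmse:.3f}  MAE={mae:.3f}")
```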
Deep learning has become popular and mainstream in much research related to learning, and it has shown its impact on photogrammetry. According to the definition of photogrammetry, that is, a subject that studies the shapes, locations, sizes, characteristics, and inter-relationships of real objects from optical images, photogrammetry considers two aspects: geometry and semantics. From these two aspects, we review the history of deep learning, discuss its current applications in photogrammetry, and forecast the future development of photogrammetry. In geometry, the deep convolutional neural network (CNN) has been widely applied in stereo matching, SLAM, and 3D reconstruction, and it has produced some results but needs further improvement. In semantics, conventional methods, which rely on empirical and handcrafted features, have failed to extract semantic information accurately and to produce the kinds of "semantic thematic map" that would correspond to the 4D products (DEM, DOM, DLG, DRG) of photogrammetry. This has caused the semantic part of photogrammetry to be neglected for a long time. The powerful generalization capacity of deep learning, its ability to fit arbitrary functions, and its stability under various conditions are making the automatic production of thematic maps possible. We review the achievements obtained in road network extraction, building detection, and crop classification, among others, and forecast that producing high-accuracy semantic thematic maps directly from optical images will become reality, and that these maps will become a standard product of photogrammetry. Finally, we introduce two of our current studies, related to geometry and semantics respectively: stereo matching of aerial images based on deep learning and transfer learning, and precise crop classification from satellite spatio-temporal images based on 3D CNN.
A health monitoring scheme is developed in this work by using hybrid machine learning strategies to identify the fault severity and assess the health status of an aircraft gas turbine engine that is subject to component degradations caused by fouling and erosion. The proposed hybrid framework integrates supervised recurrent neural networks and unsupervised self-organizing maps, where the former is developed to extract effective features that can be associated with the engine health condition and the latter is constructed for fault severity modeling and tracking of each considered degradation mode. An advantage of our proposed methodology is that it accomplishes the fault identification and health monitoring objectives by discovering only the inherent health information that is available in the system I/O data at each operating point. The effectiveness of our approach is validated and justified with engine data under various degradation modes in compressors and turbines.
Multi-layer connected self-organizing feature maps (SOFMs) and the associated learning procedure were proposed to achieve efficient recognition and clustering of messily grown nanowire morphologies. The network is made up of several parallel 2-D SOFMs with inter-layer connections. By means of Monte Carlo simulations, virtual morphologies were generated as training samples. With unsupervised intra-layer and inter-layer learning, the neural network can cluster different morphologies of messily grown nanowires and build connections between the morphological microstructure and the geometrical features of the nanowires within. The as-proposed networks were then applied to recognition and quantitative estimation of experimental morphologies. Results show that the as-trained SOFMs are able to cluster the morphologies and recognize the average length and quantity of the messily grown nanowires within. The inter-layer connections between winning neurons on each competitive layer have a significant influence on the relations between the microstructure of the morphology and the physical parameters of the nanowires within.
Inter-purchase time is a critical factor for predicting customer churn. Improving its prediction accuracy can reveal consumer preferences and allow businesses to learn about product or pricing plan weak points, operational issues, and customer expectations, so that reasons for churn can be proactively reduced. Although remarkable progress has been made, classic statistical models struggle to capture behavioral characteristics in transaction data, because transactions are dependent and short-, medium-, and long-term data are likely to interfere with each other sequentially. Different from the existing literature, this study proposes a hybrid inter-purchase time prediction model for customers of online retailers, with particular emphasis on analyzing differences in customer purchase behavior. An integrated self-organizing map and recurrent neural network technique is proposed to both address the problem of purchase behavior and improve the prediction accuracy of inter-purchase time. The permutation importance method was used to identify crucial variables in the prediction model and to interpret customer purchase behavior. The performance of the proposed method is evaluated by comparing its predictions with the results of three competing approaches on transaction data provided by a leading e-retailer in Taiwan. This study provides a valuable reference for marketing professionals to better understand customers and develop strategies to attract them and shorten their inter-purchase times.
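The permutation importance step mentioned above can be illustrated with scikit-learn's generic implementation on an invented stand-in model and features; this is not the authors' hybrid SOM and recurrent network pipeline, and the feature names are hypothetical.

```python
# Generic permutation-importance sketch (scikit-learn) on invented features,
# standing in for the paper's crucial-variable analysis; the actual model in
# the paper is a hybrid SOM + recurrent network, not the regressor used here.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.random((400, 4))                                   # e.g., recency, frequency, monetary, tenure
y = 30 * X[:, 0] - 10 * X[:, 2] + rng.normal(0, 1, 400)    # invented inter-purchase times (days)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, imp in zip(["recency", "frequency", "monetary", "tenure"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```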