Due to the demand of data processing for polar ice radar in our laboratory, a Curvelet Thresholding Neural Network (TNN) noise-reduction method is proposed, and a new threshold function with an infinite-order continuous derivative is constructed. The method is based on the TNN model. In the learning process of the TNN, the gradient descent method is adopted to solve for the adaptive optimal thresholds of different scales and directions in the Curvelet domain and to achieve optimal mean-square-error performance. In this paper, the specific implementation steps are presented, and the superiority of the method is verified by simulation. Finally, the proposed method is used to process the ice-radar data obtained during the 28th Chinese National Antarctic Research Expedition in the region of Zhongshan Station, Antarctica. Experimental results show that the proposed method reduces noise effectively while preserving the edges of the ice layers.
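The abstract does not give the paper's threshold function, only that it is infinitely differentiable and that the threshold is learned by gradient descent on the mean squared error. As an illustrative sketch under those assumptions, the following uses a tanh-based shrinkage (smooth everywhere, approaching soft thresholding for large coefficients) and fits the threshold `lam` by gradient descent; all names and the training signal are hypothetical, not the paper's.

```python
import numpy as np

def smooth_threshold(x, lam):
    # tanh-based shrinkage: infinitely differentiable, ~0 for |x| << lam,
    # ~soft thresholding (x - lam*sign(x)) for |x| >> lam
    return x - lam * np.tanh(x / lam)

def learn_threshold(noisy, clean, lam=1.0, lr=0.05, steps=200):
    # gradient descent on the MSE between shrunk coefficients and a clean reference,
    # mirroring the TNN learning step described in the abstract
    for _ in range(steps):
        t = np.tanh(noisy / lam)
        resid = smooth_threshold(noisy, lam) - clean
        # d/dlam [x - lam*tanh(x/lam)] = -tanh(x/lam) + (x/lam) * sech^2(x/lam)
        dlam = -t + (noisy / lam) * (1.0 - t ** 2)
        lam -= lr * 2.0 * np.mean(resid * dlam)
    return lam
```

In the paper one such threshold would be learned per Curvelet scale and direction; the sketch learns a single one on a sparse test signal.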
In order to increase drilling speed in deep, complicated formations in the Kela-2 gas field, Tarim Basin, Xinjiang, west China, it is important to predict the formation lithology for drill-bit optimization. Based on the conventional back-propagation (BP) model, an improved BP model was proposed, the main modifications being the back propagation of error, a self-adapting algorithm, and the activation function; a prediction program was also developed. The improved BP model was successfully applied to predicting the lithology of formations to be drilled in the Kela-2 gas field.
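The abstract names a "self-adapting algorithm" without detail. One common form such a modification takes is the bold-driver rule: grow the learning rate while the error falls and shrink it when the error rises. The sketch below shows this on a toy XOR task rather than lithology data; the network size, rates, and task are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets (toy stand-in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
lr, prev_err = 0.5, np.inf
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = float(np.mean((out - y) ** 2))
    # self-adapting step: grow lr while error falls, shrink it when error rises
    lr = min(lr * 1.05, 5.0) if err < prev_err else lr * 0.7
    prev_err = err
    # back propagation of error through the sigmoid layers
    d2 = (out - y) * out * (1 - out) / len(X)
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(0)
```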
The analysis of Android malware shows that this threat is constantly increasing and poses a real danger to mobile devices, since traditional approaches such as signature-based detection are no longer effective against the continuously advancing level of sophistication. To resolve this problem, efficient and flexible malware detection tools are needed. This work examines the possibility of employing deep CNNs to detect Android malware by transforming network traffic into image representations. The dataset used in this study is CIC-AndMal2017, which contains 20,000 instances of network traffic across five distinct malware categories: Trojan, Adware, Ransomware, Spyware, and Worm. These network-traffic features are converted to image formats for deep learning, which is applied in a CNN framework including the VGG16 pre-trained model. Our approach yielded high performance: an accuracy of 99.1%, a precision of 98.2%, a recall of 99.5%, and an F1 score of 98.7%. Subsequent improvements to the classification model through changes within the VGG19 framework raised the classification rate to 99.25%. These results make clear that CNNs are a very effective way to classify Android malware, providing greater accuracy than conventional techniques. The success of this approach also shows the applicability of deep learning to mobile security, and points toward future work on real-time detection systems and deeper learning techniques to counter the growing number of emerging threats.
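The abstract does not specify the exact traffic-to-image mapping. A typical sketch pads or truncates the raw traffic bytes into a fixed square grayscale array, which can then be fed to a VGG16-style CNN; the function name and the 32 x 32 size are assumptions.

```python
import numpy as np

def traffic_to_image(payload: bytes, side: int = 32) -> np.ndarray:
    # pad or truncate the raw bytes, then reshape into a square grayscale image
    buf = np.frombuffer(payload, dtype=np.uint8)[: side * side]
    img = np.zeros(side * side, dtype=np.uint8)
    img[: buf.size] = buf
    return img.reshape(side, side)
```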
Seismic inversion and its basic theory are briefly presented, and the main idea of the method is introduced. Both a non-linear wave-equation inversion technique and Complete Utilization of Samples Information (CUSI) neural network analysis are used in lithological interpretation in the Jibei coal field. The prediction results indicate that this method can provide reliable data for thin-coal exploitation and promising-area evaluation.
The extent of the peril associated with cancer can be perceived from the lack of treatment, ineffective early-diagnosis techniques, and most importantly its fatality rate. Globally, cancer is the second leading cause of death, and among over a hundred types of cancer, lung cancer is the second most common type as well as the leading cause of cancer-related deaths. However, an accurate and timely lung cancer diagnosis can elevate the likelihood of survival by a noticeable margin, and medical imaging is a prevalent means of cancer diagnosis since it is easily accessible to people around the globe. Nonetheless, it is not eminently efficacious, considering that human inspection of medical images can yield a high false-positive rate. Ineffective and inefficient diagnosis is a crucial reason for such a high mortality rate for this malady. However, the conspicuous advancements in deep learning and artificial intelligence have stimulated the development of exceedingly precise diagnosis systems. The development and performance of these systems rely prominently on the data used to train them. A standard problem witnessed in publicly available medical-image datasets is the severe imbalance of data between classes. This grave imbalance can make a deep learning model biased towards the dominant class and unable to generalize. This study presents an end-to-end convolutional neural network that can accurately differentiate lung nodules from non-nodules and reduce the false-positive rate to a bare minimum. To tackle the problem of data imbalance, we oversampled the data by transforming available images in the minority class. The average false-positive rate in the proposed method is a mere 1.5 percent; however, the average false-negative rate is 31.76 percent. The proposed neural network has 68.66 percent sensitivity and 98.42 percent specificity.
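The abstract says the minority class was oversampled "by transforming available images" but does not name the transforms. A minimal sketch, assuming simple flips and rotations as the label-preserving transforms:

```python
import numpy as np

def oversample_minority(images, target_count, rng=None):
    # grow the minority class with flipped/rotated copies until it reaches target_count
    if rng is None:
        rng = np.random.default_rng(0)
    out = list(images)
    transforms = [np.fliplr, np.flipud,
                  lambda a: np.rot90(a, 1), lambda a: np.rot90(a, 3)]
    while len(out) < target_count:
        base = images[rng.integers(len(images))]       # pick an original image
        transform = transforms[rng.integers(len(transforms))]
        out.append(transform(base))                    # append a transformed copy
    return out
```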
In the data-retrieval process of a data recommendation system, matching prediction and similarity identification play a major role in the ontology. Several methods exist to improve the retrieval process with better accuracy and reduced search time. In a data recommendation system, this type of search becomes complex when looking for the best match for given query data, and it can fail in the accuracy of the query recommendation process. To improve the performance of data validation, this paper proposes a novel model of data-similarity estimation and clustering to retrieve the most relevant data with the best match in big-data processing. An advanced model of the Logarithmic Directionality Texture Pattern (LDTP) method with a Metaheuristic Pattern Searching (MPS) system is used to estimate the similarity between the query data and the entire database. The overall work is implemented for the data recommendation application. All records are indexed and grouped into clusters to form a paged database structure, which reduces computation time during searching. Also, with the help of a neural network, the relevance of feature attributes in the database is predicted, and the matching index is sorted to provide the recommended data for a given query. This is achieved using a Distributional Recurrent Neural Network (DRNN), an enhanced neural-network model that finds relevance based on the correlation factor of the feature set. The training of the DRNN classifier is carried out by estimating the correlation factor of the attributes of the dataset; these are formed into clusters and paged with proper indexing based on the MPS similarity metric. The overall performance of the proposed work is evaluated by varying the size of the training database by 60%, 70%, and 80%. The parameters considered for performance analysis are precision, recall, F1-score, the accuracy of data retrieval, the query recommendation output, and comparison with other state-of-the-art methods.
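The LDTP/MPS pipeline itself is not reproducible from the abstract alone, but its core retrieval step, ranking database entries by similarity to a query feature vector and returning the best matches, can be sketched as follows. Cosine similarity stands in here for the paper's similarity metric, and the function name is our own.

```python
import numpy as np

def top_k_matches(query, database, k=3):
    # rank database rows by cosine similarity to the query feature vector
    db = np.asarray(database, float)
    q = np.asarray(query, float)
    sims = db @ q / (np.linalg.norm(db, axis=1) * np.linalg.norm(q) + 1e-12)
    order = np.argsort(-sims)[:k]      # indices of the k most similar rows
    return order, sims[order]
```

In the paper's design, the search scope would first be narrowed to one cluster page before ranking, so this scan would run over a small candidate set rather than the whole database.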
Haze-fog, an atmospheric aerosol caused by natural or man-made factors, seriously affects the physical and mental health of human beings. PM2.5 (particulate matter whose diameter is smaller than or equal to 2.5 microns) is the chief culprit behind this aerosol. To forecast PM2.5 conditions, this paper adopts related meteorological data and air-pollutant data to predict the concentration of PM2.5. Since meteorological and air-pollutant data are typical time-series data, it is reasonable to adopt a machine-learning method with memory capability, the Single Hidden-Layer Long Short-Term Memory Neural Network (SSHL-LSTMNN), to implement the prediction. However, the number of neurons in the hidden layer is difficult to decide without manual testing. To determine the best network structure and improve prediction accuracy, this paper employs a self-organizing algorithm that uses Information Processing Capability (IPC) to adjust the number of hidden neurons automatically during the learning phase. In the experiment, not only hourly precise prediction but also longer-term daily prediction is taken into account. The experimental results show that SSHL-LSTMNN performs the best.
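For reference, one time step of the single-hidden-layer LSTM the abstract builds on can be sketched as below. The IPC-based neuron-count adaptation is omitted, and the stacking order of the gates inside `z` is our convention, not necessarily the paper's.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    # one LSTM time step; gates stacked in z as [input, forget, output, candidate]
    n = h.size
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2 * n]), sigmoid(z[2 * n:3 * n])
    g = np.tanh(z[3 * n:])
    c_new = f * c + i * g          # update the memory cell
    h_new = o * np.tanh(c_new)     # expose a gated view of the cell
    return h_new, c_new
```

A PM2.5 forecaster would unroll this step over a window of past hourly measurements and regress the next concentration from the final hidden state.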
Purpose - The purpose of this paper is to address the shortcomings of existing methods for the prediction of network security situations (NSS). Because conventional methods for NSS prediction, such as support vector machines and particle swarm optimization, lack accuracy, robustness, and efficiency, the authors propose a new NSS prediction method based on a recurrent neural network (RNN) with gated recurrent units. Design/methodology/approach - The method first extracts internal and external information features from the original time-series network data. The extracted features are then applied to the deep RNN model for training and validation. After iteration and optimization, accurate predictions of NSS are obtained from the well-trained model, and the model is robust to unstable network data. Findings - Experiments on a benchmark dataset show that the proposed method obtains more accurate and robust prediction results than conventional models. Although deep RNN models require more training time, they guarantee accuracy and robustness of prediction in return. Originality/value - In the prediction of NSS time-series data, the proposed internal and external information features describe the original data well, and the deep RNN model outperforms state-of-the-art models.
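The building block of the proposed predictor is the gated recurrent unit. A single GRU step, written out for concreteness (parameter shapes and names are generic, not the paper's):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    z = sigmoid(Wz @ x + Uz @ h)                # update gate
    r = sigmoid(Wr @ x + Ur @ h)                # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))     # candidate state
    return (1 - z) * h + z * h_cand             # blend old state and candidate
```

Because the new state is a per-element convex combination of the old state and a tanh candidate, gradients flow through long sequences more stably than in a plain RNN, which is why GRUs suit noisy time-series network data.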
Nowadays, it is very urgent to find remaining oil given the worldwide oil shortage. However, most simple reservoirs have already been discovered, and the undiscovered ones are mostly complex structural, stratigraphic, and lithologic reservoirs. Summarized in this paper is an integrated seismic processing/interpretation technique established on the basis of pre-stack AVO processing and interpretation. Information feedback between the pre-stack and post-stack processes improves the accuracy of data utilization and avoids pitfalls in seismic attributes. Through the integration of seismic data with geologic data, the parameters most essential to describing hydrocarbon characteristics were determined and comprehensively appraised, and the regularities of reservoir generation and distribution were described, so as to accurately appraise reservoirs, delineate favorable traps, and pinpoint wells.
The estimation of the type and parameters of a flow field is important for robotic fish. Recent estimation methods cannot meet the requirements of robotic fish due to the lack of prior knowledge or under-fitting of the model. A processing pipeline, including data preprocessing, feature extraction, feature selection, flow-type classification, and flow-field parameter estimation, is proposed based on data from the pressure sensors in an artificial lateral line. A Probabilistic Neural Network (PNN) is used to classify the flow-field type, and the Generalized Regression Neural Network (GRNN) proves the best choice for estimating the flow-field parameters. In addition, several filtering methods for data preprocessing, three methods for feature selection, and nine parameter-estimation methods are analyzed to choose the best options. The proposed method is verified by experiments with both simulated and real data.
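A minimal Probabilistic Neural Network classifier of the kind used here for flow-type classification: each class's score is an average of Gaussian kernels centred on its training points, and the class with the largest estimated density wins. The kernel width `sigma` is a hypothetical choice.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    # PNN: per class, average Gaussian kernels centred on the training points,
    # then pick the class with the largest estimated density
    x = np.asarray(x, float)
    train_X = np.asarray(train_X, float)
    train_y = np.asarray(train_y)
    scores = {}
    for cls in np.unique(train_y):
        pts = train_X[train_y == cls]
        d2 = np.sum((pts - x) ** 2, axis=1)          # squared distances to class points
        scores[cls] = float(np.mean(np.exp(-d2 / (2 * sigma ** 2))))
    return max(scores, key=scores.get)
```

In the paper's pipeline, `x` would be the selected pressure-sensor features and the classes the candidate flow-field types.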
Increasing the resolution of seismic data has long been a major topic in seismic exploration. Due to the effect of high-frequency noise, traditional methods can improve the resolution only to a limited extent. To this end, after summarizing the state of research on high resolution, this paper proposes a high-resolution seismic-data processing method based on well-seismic combination. A synthetic record and a seismogram are similar in their effective signals but dissimilar in their noise: their effective signals are regular, their noise is irregular, and they are similar in adjacent frequencies. Based on these "three-regularity" characteristics, the relationship between synthetic record and seismogram was established using a neural-network algorithm. A corresponding extrapolation algorithm was then proposed based on the self-adaptive geological and geophysical variation of a multi-layer network structure. A model was established with this method and a theoretical simulation was carried out; it was tested in terms of frequency-component and amplitude-energy recovery, phase correction, regularity elimination, and stochastic noise. The following results were obtained. First, the new method can extract as much high-frequency information as possible and retain effective middle- and low-frequency information while eliminating noise. Second, the method completely changes the traditional idea of denoising first and then expanding the frequency band, breaking the limitation of traditional methods: it establishes the idea of expanding frequency and denoising simultaneously, increasing the resolution to the utmost. Third, the new method has been applied to a variety of reservoir descriptions, and the high-resolution processing results have improved significantly in precision and accuracy.
Big-data analytics in business intelligence often lacks effective data-retrieval methods and job scheduling, which causes execution inefficiency and low system throughput. This paper aims to enhance data retrieval and job scheduling to speed up big-data analytics and overcome these inefficiency and low-throughput problems. First, integrating a stacked sparse autoencoder with Elasticsearch indexing enables fast data searching and distributed indexing, which reduces the search scope of the database and dramatically speeds up data searching. Next, a deep neural network predicts the approximate execution time of a job, enabling prioritized job scheduling based on shortest-job-first, which reduces the average waiting time of job execution. As a result, the proposed data-retrieval approach outperforms the previous method using a deep autoencoder and Solr indexing, improving the speed of data retrieval by up to 53% and increasing system throughput by 53%. The proposed job-scheduling algorithm also beats both the first-in-first-out and the memory-sensitive heterogeneous-earliest-finish-time scheduling algorithms, shortening the average waiting time by up to 5% and the average weighted turnaround time by 19%, respectively.
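The shortest-job-first policy driven by predicted execution times can be sketched as follows; the job tuples are hypothetical stand-ins for jobs with DNN-predicted durations.

```python
def sjf_schedule(jobs):
    # jobs: (name, predicted_seconds) pairs; run the shortest predicted job first
    order = sorted(jobs, key=lambda job: job[1])
    waits, clock = {}, 0.0
    for name, duration in order:
        waits[name] = clock   # time the job spent queued before starting
        clock += duration
    return [name for name, _ in order], waits
```

With jobs a = 3 s, b = 1 s, c = 2 s, this runs b, c, a and cuts the average wait from (0 + 3 + 4) / 3 ≈ 2.33 s under FIFO to (0 + 1 + 3) / 3 ≈ 1.33 s, which is the mechanism behind the reduced average waiting time reported above.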
In this paper, a variety of classical convolutional neural networks are trained on two different datasets using transfer learning. We demonstrate that the training dataset has a significant impact on training results, in addition to the optimization achieved through the model structure. However, the lack of open-source agricultural data, combined with the absence of a comprehensive open-source data-sharing platform, remains a substantial obstacle. This issue is closely related to the difficulty and high cost of obtaining high-quality agricultural data, the low education level of most employees, underdeveloped distributed training systems, and insufficient data security. To address these challenges, this paper proposes the idea of constructing an agricultural data-sharing platform based on a federated learning (FL) framework, aiming to overcome the shortage of high-quality training data in the agricultural field.
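The FL platform the paper envisions would aggregate locally trained models rather than raw farm data. The canonical aggregation step, FedAvg, averages each parameter tensor across clients weighted by local dataset size; the weight lists and client sizes below are illustrative, not from the paper.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    # FedAvg: average each parameter tensor across clients,
    # weighted by the size of each client's local dataset
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_layers)
    ]
```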
Big Data (BD), a collection of huge amounts of data, has been utilized extensively in fields such as finance, industry, business, and medicine. However, processing a massive amount of data is highly complicated and time-consuming. Thus, to design a distribution-preserving framework for BD, a novel methodology is proposed that combines Manhattan Distance-centered Partition Around Medoids (MD-PAM) with a Conjugate Gradient Artificial Neural Network (CG-ANN), reducing the complications of BD through several steps. First, the data are pre-processed: data repetition is mitigated using the map-reduce function, and missing data are handled by substituting or ignoring the missing values. The data are then transformed into normalized form. Next, to enhance classification performance, the data's dimensionality is reduced with Gaussian Kernel Fisher Discriminant Analysis (GK-FDA). Afterwards, the processed data, transformed into a structured format, enter the partitioning phase, where MD-PAM partitions and groups them into clusters. Lastly, CG-ANN classifies the data so that the needed records can be retrieved effortlessly by the user. To compare the outcomes of CG-ANN with prevailing methodologies, the openly accessible NSL-KDD datasets are used. The experimental outcomes show that the proposed CG-ANN produces efficient results at reduced computation cost, outperforming existing systems in terms of accuracy, sensitivity, and specificity.
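The MD-PAM partitioning step can be sketched as a k-medoids loop under Manhattan (L1) distance. The farthest-point initialization below is our assumption, since the abstract does not state one.

```python
import numpy as np

def md_pam(X, k=2, iters=20):
    # Partition Around Medoids with Manhattan (L1) distance
    X = np.asarray(X, float)
    medoids = [0]
    while len(medoids) < k:  # farthest-point initialization
        d = np.abs(X[:, None, :] - X[medoids][None]).sum(-1).min(1)
        medoids.append(int(d.argmax()))
    for _ in range(iters):
        D = np.abs(X[:, None, :] - X[medoids][None]).sum(-1)  # point-to-medoid L1
        labels = D.argmin(1)
        new_medoids = []
        for c in range(k):
            members = np.where(labels == c)[0]
            # the medoid is the member minimising total L1 distance to its cluster
            costs = [np.abs(X[members] - X[m]).sum() for m in members]
            new_medoids.append(int(members[int(np.argmin(costs))]))
        if new_medoids == medoids:
            break
        medoids = new_medoids
    return labels, medoids
```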
Cognitive computing and artificial intelligence (AI) have changed how organizations analyze and use data for decision-making. Cognitive computing solutions can translate enormous amounts of data into valuable insights by harnessing cutting-edge algorithms and machine learning, empowering enterprises to make sound decisions quickly and efficiently. This article explores the role of cognitive computing and AI in decision-making, emphasizing their function in converting raw data into valuable knowledge. It details the advantages of these technologies, such as greater productivity, accuracy, and efficiency. Businesses can use cognitive computing and AI to gain a competitive edge in today's data-driven world by understanding their capabilities and possibilities [1].
Funding (Curvelet TNN ice-radar study): Supported by the National High Technology Research and Development Program of China (No. 2011AA040202) and the National Natural Science Foundation of China (No. 40976114).
Funding (Android malware detection study): Funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Research Funding Program, Grant No. FRP-1443-15.
Funding (lung nodule detection study): Supported by the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (2019M3F2A1073387), and by the Institute for Information & Communications Technology Promotion (IITP) (No. 2022-0-00980, Cooperative Intelligence Framework of Scene Perception for Autonomous IoT Device).
Funding (network security situation prediction study): Supported by the Ningde Normal University Youth Teacher Research Program (2015Q15) and the Education Science Project for Junior Teachers of the Fujian Provincial Education Department (JAT160532).
文摘Purpose-The purpose of this paper is to solve the shortage of the existing methods for the prediction of network security situations(NSS).Because the conventional methods for the prediction of NSS,such as support vector machine,particle swarm optimization,etc.,lack accuracy,robustness and efficiency,in this study,the authors propose a new method for the prediction of NSS based on recurrent neural network(RNN)with gated recurrent unit.Design/methodology/approach-This method extracts internal and external information features from the original time-series network data for the first time.Then,the extracted features are applied to the deep RNN model for training and validation.After iteration and optimization,the accuracy of predictions of NSS will be obtained by the well-trained model,and the model is robust for the unstable network data.Findings-Experiments on bench marked data set show that the proposed method obtains more accurate and robust prediction results than conventional models.Although the deep RNN models need more time consumption for training,they guarantee the accuracy and robustness of prediction in return for validation.Originality/value-In the prediction of NSS time-series data,the proposed internal and external information features are well described the original data,and the employment of deep RNN model will outperform the state-of-the-arts models.
Abstract: Nowadays, finding remaining oil has become very urgent given the worldwide oil shortage. However, most simple reservoirs have already been discovered, and the undiscovered ones are mostly complex structural, stratigraphic and lithologic reservoirs. This paper summarizes an integrated seismic processing/interpretation technique established on the basis of pre-stack AVO processing and interpretation. Information feedback between the pre-stack and post-stack processes improves the accuracy of data utilization and avoids pitfalls in seismic attributes. Through the integration of seismic data with geologic data, the parameters most essential to describing hydrocarbon characteristics were determined and comprehensively appraised, and the regularities of reservoir generation and distribution were described, so as to accurately appraise reservoirs, delineate favorable traps and pinpoint well locations.
Funding: National Natural Science Foundation of China (NSFC) under Grant 62073017.
Abstract: Estimating the type and parameters of a flow field is important for robotic fish. Recent estimation methods cannot meet the requirements of robotic fish due to a lack of prior knowledge or under-fitting of the model. A processing pipeline comprising data preprocessing, feature extraction, feature selection, flow-type classification and flow-field parameter estimation is proposed, based on data from the pressure sensors of an artificial lateral line. A Probabilistic Neural Network (PNN) is used to classify the flow-field type, and the Generalized Regression Neural Network (GRNN) proves the best choice for estimating the flow-field parameters. In addition, several filtering methods for data preprocessing, three feature-selection methods and nine parameter-estimation methods are analyzed to choose the best options. The proposed method is verified by experiments with both simulated and real data.
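A GRNN of the kind this abstract selects for parameter estimation is essentially a kernel-weighted average of the training targets. A minimal sketch (the sensor features, targets and bandwidth below are toy assumptions, not the paper's lateral-line data):

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """Generalized Regression Neural Network: a Nadaraya-Watson style
    kernel-weighted average of the training targets."""
    d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances to the query
    w = np.exp(-d2 / (2 * sigma ** 2))        # Gaussian pattern-layer weights
    return np.sum(w * y_train) / np.sum(w)    # normalized weighted average

# Toy flow-parameter regression: the target is the mean of two "sensor" inputs.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 2))         # stand-in for pressure features
y = X.mean(axis=1)
pred = grnn_predict(X, y, np.array([0.2, 0.4]), sigma=0.2)
print(pred)
```

Because the GRNN has no iterative training, only the bandwidth `sigma` to tune, it fits the paper's setting of limited prior knowledge.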
Abstract: Increasing the resolution of seismic data has long been a major topic in seismic exploration. Due to the effect of high-frequency noise, traditional methods can improve the resolution only to a limited extent. To this end, after summarizing the state of research on high resolution, this paper proposes a new high-resolution seismic data processing method based on well-seismic combination. A synthetic record and a seismogram are similar in their effective signals but dissimilar in their noise: the effective signals are regular while the noise is irregular, and the two are similar in adjacent frequency bands. Based on these "three-regularity" characteristics, the relationship between the synthetic record and the seismogram was established using a neural network algorithm. A corresponding extrapolation algorithm was then proposed based on the self-adaptive geological and geophysical variation of a multi-layer network structure. A model was built with this method and theoretical simulations were carried out, testing frequency-component and amplitude-energy recovery, phase correction, regularity elimination and stochastic noise. The following results were obtained. First, the new method extracts as much high-frequency information as possible and retains the effective middle- and low-frequency information while eliminating the noise. Second, the method completely changes the traditional idea of denoising first and then extending the frequency band, breaking the limitation of traditional methods: it extends the frequency band and denoises simultaneously, increasing the resolution to the utmost. Third, the new method has been applied to a variety of reservoir descriptions, and the precision and accuracy of the high-resolution processing results have improved significantly.
Funding: supported and granted by the Ministry of Science and Technology, Taiwan (MOST110-2622-E-390-001 and MOST109-2622-E-390-002-CC3).
Abstract: Big data analytics in business intelligence lacks effective data retrieval methods and job scheduling, which causes execution inefficiency and low system throughput. This paper aims to enhance data retrieval and job scheduling to speed up big data analytics and overcome these inefficiency and low-throughput problems. First, integrating a stacked sparse autoencoder with Elasticsearch indexing enables fast data searching and distributed indexing, which reduces the search scope of the database and dramatically speeds up data searching. Next, a deep neural network predicts the approximate execution time of each job, enabling prioritized shortest-job-first scheduling, which reduces the average waiting time of job execution. As a result, the proposed data retrieval approach outperforms the previous method using a deep autoencoder and Solr indexing, improving the speed of data retrieval by up to 53% and increasing system throughput by 53%. The proposed job scheduling algorithm also beats both the first-in-first-out and the memory-sensitive heterogeneous earliest-finish-time scheduling algorithms, shortening the average waiting time by up to 5% and the average weighted turnaround time by 19%, respectively.
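The scheduling half of this approach reduces to ordering jobs by their predicted execution time. A minimal shortest-job-first sketch (the job names and runtimes are hypothetical; in the paper the `predicted_runtime` would come from the deep neural network, not be given):

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    predicted_runtime: float  # stand-in for the DNN's execution-time estimate

def sjf_schedule(jobs):
    """Order jobs by predicted runtime (shortest job first) and report
    each job's waiting time under that order."""
    order = sorted(jobs, key=lambda j: j.predicted_runtime)
    waits, elapsed = {}, 0.0
    for job in order:
        waits[job.name] = elapsed          # time spent queued before starting
        elapsed += job.predicted_runtime
    return [j.name for j in order], waits

jobs = [Job("etl", 30.0), Job("report", 5.0), Job("train", 120.0)]
order, waits = sjf_schedule(jobs)
print(order)       # ['report', 'etl', 'train']
avg_wait = sum(waits.values()) / len(waits)
print(avg_wait)    # (0 + 5 + 35) / 3
```

Running short jobs first is what minimizes the average waiting time; the accuracy of the runtime predictor is therefore what the scheduling gains hinge on.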
Funding: National Key Research and Development Program of China (2021ZD0113704).
Abstract: In this paper, a variety of classical convolutional neural networks are trained on two different datasets using transfer learning. We demonstrate that the training dataset has a significant impact on the training results, in addition to the optimization achieved through the model structure. However, the lack of open-source agricultural data, combined with the absence of a comprehensive open-source data-sharing platform, remains a substantial obstacle. This issue stems from the difficulty and high cost of obtaining high-quality agricultural data, the low education level of most employees, underdeveloped distributed training systems and unsecured data. To address these challenges, this paper proposes the novel idea of constructing an agricultural data-sharing platform based on a federated learning (FL) framework, aiming to overcome the shortage of high-quality training data in the agricultural field.
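The federated learning framework proposed here lets clients contribute model updates without sharing raw data. A minimal federated-averaging (FedAvg-style) round can be sketched on a linear model (this is a generic illustration with synthetic data, not the paper's platform; the client counts, model and learning rate are toy assumptions):

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A client's local gradient-descent steps on a linear model (MSE loss)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(global_w, client_data):
    """One federated round: clients train locally on private data, and only
    the resulting weights (never the data) are averaged on the server."""
    local_ws = [local_update(global_w, X, y) for X, y in client_data]
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    return np.average(local_ws, axis=0, weights=sizes)  # size-weighted average

rng = np.random.default_rng(3)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):                 # e.g. four farms, each keeping its data local
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))
w = np.zeros(2)
for _ in range(30):                # communication rounds
    w = fed_avg(w, clients)
print(np.round(w, 2))
```

The server only ever sees weight vectors, which is the property that makes FL attractive for a data-sharing platform where providers will not release raw agricultural data.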
Abstract: Big Data (BD), a collection of huge amounts of data, is used extensively in fields such as financial dealing, industry, business and medicine. However, processing such massive amounts of data is highly complicated and time-consuming. Thus, to build a distribution-preserving framework for BD, a novel methodology is proposed using Manhattan Distance-centered Partition Around Medoids (MD-PAM) together with a Conjugate Gradient Artificial Neural Network (CG-ANN), which reduces the complexity of BD through several steps. First, in the pre-processing phase, data repetition is mitigated using a map-reduce function; missing data are then handled by substitution or by ignoring the missing values, and the data are transformed into a normalized form. Next, to enhance classification performance, the data's dimensionality is reduced using Gaussian Kernel Fisher Discriminant Analysis (GK-FDA). The processed data are then converted into a structured format and submitted to the partitioning phase, where MD-PAM partitions and groups the data into clusters. Finally, CG-ANN classifies the data so that the required data can be retrieved effortlessly by the user. To compare CG-ANN with prevailing methodologies, the openly accessible NSL-KDD datasets are used. The experimental outcomes show that the proposed CG-ANN delivers efficient results at reduced computation cost, outperforming existing systems in terms of accuracy, sensitivity and specificity.
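The partitioning step combines PAM (k-medoids) clustering with the Manhattan distance. A greedy sketch of that combination (a generic PAM-style loop on synthetic blobs, not the paper's exact MD-PAM procedure or NSL-KDD data):

```python
import numpy as np

def md_pam(X, k, iters=10, seed=0):
    """PAM-style k-medoids under Manhattan distance: assign points to the
    nearest medoid, then replace each medoid with the cluster member that
    minimizes the total in-cluster Manhattan distance."""
    rng = np.random.default_rng(seed)
    medoids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: Manhattan distance of every point to every medoid
        d = np.abs(X[:, None, :] - medoids[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        # update step: pick the best medoid within each cluster
        for j in range(k):
            members = X[labels == j]
            if len(members) == 0:
                continue
            costs = np.abs(members[:, None, :] - members[None, :, :]).sum(axis=(2, 1))
            medoids[j] = members[costs.argmin()]
    return labels, medoids

# Two well-separated blobs should each end up in their own cluster.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(5, 0.3, (30, 2))])
labels, medoids = md_pam(X, k=2)
print(len(set(labels[:30].tolist())), len(set(labels[30:].tolist())))
```

Medoids, unlike k-means centroids, are always actual data points, which is why PAM is the usual choice when cluster representatives must be retrievable records.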
Abstract: Cognitive computing and artificial intelligence (AI) have changed how organizations analyze and use data for decision-making. By harnessing cutting-edge algorithms and machine learning, cognitive computing solutions can translate enormous amounts of data into valuable insights, empowering enterprises to make sound decisions quickly and efficiently. This article explores cognitive computing and AI in decision-making, emphasizing their role in converting raw data into valuable knowledge, and details the advantages of these technologies, such as greater productivity, accuracy and efficiency. By understanding their capabilities and possibilities, businesses can use cognitive computing and AI to gain a competitive edge in today's data-driven world [1].