Journal Articles
107 articles found
1. Time Predictable Modeling Method for GPU Architecture with SIMT and Cache Miss Awareness
Authors: Shaojie Zhang. Journal of Electronic Research and Application, 2024, Issue 2, pp. 109-115.
Abstract: Graphics Processing Units (GPUs) are used to accelerate computing-intensive tasks such as neural networks, data analysis, and high-performance computing. Over the past decade or so, researchers have studied GPU architecture extensively and proposed a variety of theories and methods for characterizing the microarchitectures of various GPUs. In this study, the GPU serves as a co-processor that works with the CPU in an embedded real-time system to handle computationally intensive tasks. Building on earlier work, the study models the GPU architecture in greater detail, with the SIMT mechanism and cache-miss behavior providing a finer-grained analysis. To verify the proposed architecture model, experiments were performed with 10 GPU kernel tasks on an Nvidia GPU device. The results show that the error between the kernel execution time predicted by the model and the measured execution time ranged from a minimum of 3.80% to a maximum of 8.30%.
Keywords: heterogeneous computing; GPU; architecture modeling; time predictability
2. A Correntropy-Based Echo State Network With Application to Time Series Prediction
Authors: Xiufang Chen, Zhenming Su, Long Jin, Shuai Li. IEEE/CAA Journal of Automatica Sinica, 2025, Issue 2, pp. 425-435.
Abstract: As a category of recurrent neural networks, echo state networks (ESNs) have been the subject of in-depth investigation and extensive application across a diverse array of fields, with notable successes. Nevertheless, the traditional ESN and the majority of its variants are devised in light of second-order statistics of the data (e.g., variance and covariance), while further information is neglected. In the context of information-theoretic learning, correntropy can capture more information from data. Therefore, under the maximum correntropy criterion, this paper proposes a correntropy-based echo state network (CESN) in which first-order and higher-order information is captured, promoting robustness to noise. Furthermore, an incremental learning algorithm for the CESN is presented, which updates the CESN when new data arrive, eliminating the need to retrain the network from scratch. Finally, experiments on benchmark problems and comparisons with existing works verify the effectiveness and superiority of the proposed CESN.
Keywords: correntropy; echo state network (ESN); noise; time series prediction
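The robustness claim in this abstract hinges on how correntropy scores errors. As a rough, paper-agnostic sketch (the CESN itself is not reproduced here; the kernel width `sigma` and the toy data are arbitrary choices), the Gaussian-kernel correntropy saturates on outliers where MSE explodes:

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Empirical correntropy: mean Gaussian-kernel similarity between x and y.
    Unlike MSE, a gross outlier error saturates the kernel instead of
    dominating the criterion."""
    e = np.asarray(x, float) - np.asarray(y, float)
    return np.mean(np.exp(-e**2 / (2.0 * sigma**2)))

# Clean predictions vs. predictions with one large outlier
target = np.zeros(100)
clean = np.full(100, 0.1)
noisy = clean.copy()
noisy[0] = 50.0          # single gross outlier

mse_clean = np.mean((clean - target) ** 2)
mse_noisy = np.mean((noisy - target) ** 2)
c_clean = correntropy(clean, target)
c_noisy = correntropy(noisy, target)

print(f"MSE: {mse_clean:.3f} -> {mse_noisy:.3f} (blown up by one outlier)")
print(f"Correntropy: {c_clean:.3f} -> {c_noisy:.3f} (barely changed)")
```

Maximizing correntropy of the residuals (rather than minimizing their squared sum) is what gives the criterion its noise robustness.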
3. Fusion of Time-Frequency Features in Contrastive Learning for Shipboard Wind Speed Correction
Authors: SONG Jian, HUANG Meng, LI Xiang, ZHANG Zhenqiang, WANG Chunxiao, ZHAO Zhigang. Journal of Ocean University of China, 2025, Issue 2, pp. 377-386.
Abstract: Accurate wind speed measurements on maritime vessels are crucial for weather forecasting, sea state prediction, and safe navigation. However, vessel motion and challenging environmental conditions often degrade measurement precision. To address this issue, this study proposes a framework for correcting and predicting shipborne wind speed. By integrating a main network with a momentum-updating network, the framework extracts features from both the time and frequency domains, allowing precise adjustment and prediction of shipborne wind speed data. Validation on real sensor data collected at the Qingdao Oceanographic Institute demonstrates that the method outperforms existing approaches in both single-step and multi-step prediction, achieving higher accuracy in wind speed forecasting. The approach offers a promising direction for future validation in more realistic onboard maritime scenarios.
Keywords: time series prediction; wind speed correction; contrastive learning; shipborne sensor
4. Chaotic phenomenon and the maximum predictable time scale of observation series of urban hourly water consumption (Cited 2 times)
Authors: 柳景青, 张士乔, 俞申凯. Journal of Zhejiang University Science, 2004, Issue 9, pp. 1053-1059.
Abstract: The chaotic characteristics and maximum predictable time scale of the observed series of hourly water consumption in Hangzhou were investigated using an advanced algorithm based on the conventional Wolf's algorithm for the largest Lyapunov exponent. For comparison, the largest Lyapunov exponents of water consumption series with one-hour and 24-hour intervals were calculated respectively. The results indicate that chaotic characteristics clearly exist in the hourly water consumption system, and that the series with 24-hour intervals has a longer maximum predictable scale than the hourly series. These findings could have significant practical application for better prediction of urban hourly water consumption.
Keywords: hourly water consumption series; Lyapunov exponent; chaos; maximum predictable time scale
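For readers unfamiliar with the quantity being estimated here: the largest Lyapunov exponent measures the average exponential rate at which nearby trajectories diverge, and a positive value signals chaos (hence a finite maximum predictable time scale). A minimal Wolf-style estimate on the logistic map, whose true exponent is ln 2, can be sketched as follows (a toy illustration, not the paper's advanced algorithm or its water-consumption data):

```python
import numpy as np

def logistic(x):                     # chaotic test map, f(x) = 4x(1-x)
    return 4.0 * x * (1.0 - x)

def largest_lyapunov(f, x0, n_steps=20000, d0=1e-8):
    """Wolf-style estimate: evolve a reference point and a close neighbour,
    accumulate the log of the separation growth, renormalise every step."""
    x, y = x0, x0 + d0
    total, count = 0.0, 0
    for _ in range(n_steps):
        x, y = f(x), f(y)
        d = abs(y - x)
        if d == 0.0:                 # neighbour collapsed onto the orbit; re-seed
            y = x + d0
            continue
        total += np.log(d / d0)
        count += 1
        y = x + d0 * (1.0 if y > x else -1.0)   # renormalise to distance d0
    return total / count

lam = largest_lyapunov(logistic, 0.3)
print(f"Estimated largest Lyapunov exponent: {lam:.3f} (ln 2 = {np.log(2):.3f})")
```

A positive exponent λ also bounds the predictable horizon: forecast error grows roughly like exp(λt), which is why the 24-hour series with its smaller exponent is predictable further ahead.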
5. Time series prediction of reservoir bank landslide failure probability considering the spatial variability of soil properties (Cited 2 times)
Authors: Luqi Wang, Lin Wang, Wengang Zhang, Xuanyu Meng, Songlin Liu, Chun Zhu. Journal of Rock Mechanics and Geotechnical Engineering, 2024, Issue 10, pp. 3951-3960.
Abstract: Historically, landslides have been the primary type of geological disaster worldwide. The stability of reservoir banks is affected mainly by rainfall and reservoir water level fluctuations, and it changes with the long-term dynamics of these external disaster-causing factors. Assessing the time-varying reliability of reservoir landslides therefore remains a challenge. This paper proposes a machine learning (ML) based approach to analyze the long-term reliability of reservoir bank landslides in spatially variable soils through time series prediction. The study systematically investigates the prediction performance of three ML algorithms: multilayer perceptron (MLP), convolutional neural network (CNN), and long short-term memory (LSTM). The effects of data quantity and data ratio on the predictive power of the deep learning models are also considered. The results show that all three models can accurately depict the changes in the time-varying failure probability of reservoir landslides, with the CNN model outperforming both the MLP and LSTM models. Furthermore, selecting the right data ratio can improve the prediction accuracy of the failure probability obtained by the ML models.
Keywords: machine learning (ML); reservoir bank landslide; spatial variability; time series prediction; failure probability
6. Deep Learning for Financial Time Series Prediction: A State-of-the-Art Review of Standalone and Hybrid Models
Authors: Weisi Chen, Walayat Hussain, Francesco Cauteruccio, Xu Zhang. Computer Modeling in Engineering & Sciences, 2024, Issue 4, pp. 187-224.
Abstract: Financial time series prediction, whether for classification or regression, has been a heated research topic over the last decade. While traditional machine learning algorithms have yielded mediocre results, deep learning has largely driven the improvement in prediction performance. An up-to-date review of advanced machine learning techniques for financial time series prediction is still lacking, making it difficult for finance domain experts and practitioners to determine which models perform better, what techniques and components are involved, and how such models can be designed and implemented. This review provides an overview of techniques, components, and frameworks for financial time series prediction, with an emphasis on state-of-the-art deep learning models in the literature from 2015 to 2023. These include standalone models, such as convolutional neural networks (CNN), which extract spatial dependencies within data, and long short-term memory (LSTM), which handles temporal dependencies, as well as hybrid models integrating CNN, LSTM, attention mechanisms (AM), and other techniques. For illustration and comparison, models proposed in recent studies are mapped to the elements of a generalized framework comprising input, output, feature extraction, prediction, and related processes. Among state-of-the-art models, hybrids such as CNN-LSTM and CNN-LSTM-AM have generally been reported to outperform standalone models such as CNN-only models. Remaining challenges are discussed, including unfriendliness to finance domain experts, delayed prediction, neglect of domain knowledge, lack of standards, and the inability to make real-time and high-frequency predictions. The principal contributions of this paper are to provide a one-stop guide for both academia and industry to review, compare, and summarize technologies and recent advances in this area, to facilitate smooth and informed implementation, and to highlight future research directions.
Keywords: financial time series prediction; convolutional neural network; long short-term memory; deep learning; attention mechanism; finance
7. AFSTGCN: Prediction for multivariate time series using an adaptive fused spatial-temporal graph convolutional network
Authors: Yuteng Xiao, Kaijian Xia, Hongsheng Yin, Yu-Dong Zhang, Zhenjiang Qian, Zhaoyang Liu, Yuehan Liang, Xiaodan Li. Digital Communications and Networks, 2024, Issue 2, pp. 292-303.
Abstract: Prediction for multivariate time series (MTS) explores the interrelationships among variables at historical moments and extracts their relevant characteristics; it is widely used in finance, weather, complex industries, and other fields, and is also important for constructing digital twin systems. However, existing methods do not take full advantage of the potential properties of variables, which results in poor prediction accuracy. This paper proposes the Adaptive Fused Spatial-Temporal Graph Convolutional Network (AFSTGCN). First, to address the problem of an unknown spatial-temporal structure, the Adaptive Fused Spatial-Temporal Graph (AFSTG) layer is constructed: the spatial-temporal graph is fused based on the interrelationships of the spatial graphs, and its adaptive adjacency matrix is constructed using node embedding methods. Second, to overcome the insufficient extraction of disordered correlation features, the Adaptive Fused Spatial-Temporal Graph Convolutional (AFSTGC) module is constructed, which forces the reordering of disordered temporal, spatial, and spatial-temporal dependencies into rule-like data. AFSTGCN dynamically and synchronously acquires potential temporal, spatial, and spatial-temporal correlations, thereby fully extracting rich hierarchical feature information to enhance prediction accuracy. Experiments on different types of MTS datasets demonstrate that the model achieves state-of-the-art single-step and multi-step performance compared with eight other deep learning models.
Keywords: adaptive adjacency matrix; digital twin; graph convolutional network; multivariate time series prediction; spatial-temporal graph
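The adaptive adjacency idea in this abstract, building the graph from learnable node embeddings rather than a fixed topology, can be sketched in a few lines. The embedding sizes, the ReLU-softmax normalisation, and the random stand-in values below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, emb_dim = 8, 4

# Learnable source/target node embeddings (random stand-ins for trained values)
E1 = rng.normal(size=(n_nodes, emb_dim))
E2 = rng.normal(size=(n_nodes, emb_dim))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Adaptive adjacency: ReLU of embedding similarity, row-normalised
A = softmax(np.maximum(E1 @ E2.T, 0.0), axis=1)

# One graph-convolution step: aggregate node features over the learned graph
X = rng.normal(size=(n_nodes, 3))       # per-node feature vectors
H = A @ X                               # each row: weighted mix of neighbours
print(A.round(2))
```

Because `E1` and `E2` would be trained jointly with the rest of the network, the graph structure itself adapts to whatever inter-variable dependencies the data exhibits.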
8. Application of Random Search Methods in the Determination of Learning Rate for Training Container Dwell Time Data Using Artificial Neural Networks
Authors: Justice Awosonviri Akodia, Clement K. Dzidonu, David King Boison, Philip Kisembe. Intelligent Control and Automation, 2024, Issue 4, pp. 109-124.
Abstract: Purpose: This study aimed to enhance the prediction of container dwell time, a crucial factor for optimizing port operations, resource allocation, and supply chain efficiency. Determining an optimal learning rate for training artificial neural networks (ANNs) has remained challenging due to the diverse sizes, complexity, and types of data involved. Design/Method/Approach: The research applied the RandomizedSearchCV algorithm, a random search approach, to container dwell time data from the TOS system of the Port of Tema, comprising 307,594 container records from 2014 to 2022. Findings: The RandomizedSearchCV method outperformed standard training methods in both reducing training time and improving prediction accuracy, highlighting the significant role of the constant learning rate as a hyperparameter. Research Limitations and Implications: Although the study provides promising outcomes, the results are limited to data from the Port of Tema and may differ in other contexts; further research is needed to generalize these findings across various port systems. Originality/Value: The research underscores the potential of RandomizedSearchCV as a valuable tool for optimizing ANN training in container dwell time prediction and accentuates the significance of automated learning rate selection, with implications for improving port efficiency and supply chain operations.
Keywords: container dwell time prediction; artificial neural networks (ANNs); learning rate optimization; RandomizedSearchCV algorithm; port operations efficiency
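The study uses scikit-learn's RandomizedSearchCV; the core idea it relies on, sample learning rates at random (log-uniformly) and keep the best-performing one, can be shown with a self-contained stand-in. The quadratic "training run" below is a toy surrogate for an ANN fit, not the paper's model or data:

```python
import numpy as np

rng = np.random.default_rng(42)

def train_loss(lr, n_steps=50):
    """Loss after gradient descent on a toy quadratic f(w) = ||w||^2
    (a stand-in for one ANN training run at a given learning rate)."""
    w = np.ones(5)
    for _ in range(n_steps):
        w -= lr * 2.0 * w            # gradient of ||w||^2 is 2w
    return float(w @ w)

# Random search: sample learning rates log-uniformly, keep the best
candidates = 10 ** rng.uniform(-4, 0.3, size=30)    # ~1e-4 .. ~2
results = [(train_loss(lr), lr) for lr in candidates]
best_loss, best_lr = min(results)
print(f"best lr = {best_lr:.4g}, final loss = {best_loss:.3g}")
```

Too-small rates barely move the weights and too-large rates diverge; random search finds the usable middle band without an exhaustive grid, which is exactly the efficiency argument for RandomizedSearchCV.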
9. A Comparative Study of Optimized-LSTM Models Using Tree-Structured Parzen Estimator for Traffic Flow Forecasting in Intelligent Transportation (Cited 1 time)
Authors: Hamza Murad Khan, Anwar Khan, Santos Gracia Villar, Luis Alonso DzulLopez, Abdulaziz Almaleh, Abdullah M. Al-Qahtani. Computers, Materials & Continua, 2025, Issue 5, pp. 3369-3388.
Abstract: High-precision traffic forecasting aids Intelligent Transport Systems (ITS) in formulating and optimizing traffic management strategies. The algorithms used for tuning the hyperparameters of deep learning models often achieve accurate results at the expense of high computational complexity. To address this problem, this paper uses the Tree-structured Parzen Estimator (TPE) to tune the hyperparameters of a Long Short-Term Memory (LSTM) deep learning framework. The TPE uses a probabilistic approach with an adaptive search mechanism that classifies objective function values into good and bad samples. This ensures fast convergence in tuning the hyperparameter values while maintaining accuracy, overcomes the problem of converging to local optima, and avoids time-consuming random search, thereby reducing computational complexity. The proposed scheme first performs data smoothing and normalization on the input data, which is then fed to the TPE for hyperparameter tuning. The traffic data is then input to the LSTM model with the tuned parameters to perform traffic prediction. Three optimizers, Adaptive Moment Estimation (Adam), Root Mean Square Propagation (RMSProp), and Stochastic Gradient Descent with Momentum (SGDM), are also evaluated, and the best optimizer is chosen for the final traffic prediction in the TPE-LSTM model. Simulation results verify the effectiveness of the proposed model in prediction accuracy over the benchmark schemes.
Keywords: short-term traffic prediction; sequential time series prediction; TPE; tree-structured Parzen estimator; LSTM; hyperparameter tuning; hybrid prediction model
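The good/bad-sample mechanism the abstract attributes to TPE can be sketched in one dimension. This is a deliberately simplified stand-in (a quantile split plus a kernel-density ratio over a toy loss surface), not the full Bergstra-style TPE or the paper's LSTM objective:

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(lr):                     # pretend validation loss vs. learning rate
    return (np.log10(lr) + 2.0) ** 2   # minimum at lr = 1e-2

def kde(samples, x, bw=0.3):           # simple Gaussian kernel density
    return np.mean(np.exp(-0.5 * ((x - samples[:, None]) / bw) ** 2),
                   axis=0) + 1e-12

# Start with random observations, then refine TPE-style
lrs = 10 ** rng.uniform(-5, 0, size=10)
losses = np.array([objective(x) for x in lrs])

for _ in range(30):
    z = np.log10(lrs)
    cut = np.quantile(losses, 0.25)             # split into good / bad samples
    good, bad = z[losses <= cut], z[losses > cut]
    cand = rng.uniform(-5, 0, size=64)          # candidates in log10 space
    score = kde(good, cand) / kde(bad, cand)    # density-ratio acquisition
    pick = 10 ** cand[np.argmax(score)]
    lrs = np.append(lrs, pick)
    losses = np.append(losses, objective(pick))

best = lrs[np.argmin(losses)]
print(f"TPE-style best learning rate ~ {best:.4g}")
```

Each iteration proposes the candidate where "good" configurations are densest relative to "bad" ones, which is the adaptive-search behaviour the abstract contrasts with blind random search.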
10. SDVformer: A Resource Prediction Method for Cloud Computing Systems
Authors: Shui Liu, Ke Xiong, Yeshen Li, Zhifei Zhang, Yu Zhang, Pingyi Fan. Computers, Materials & Continua, 2025, Issue 9, pp. 5077-5093.
Abstract: Accurate prediction of cloud resource utilization is critical: it helps improve service quality while avoiding resource waste and shortages. However, resource usage time series in cloud computing systems often exhibit multidimensionality, nonlinearity, and high volatility, making high-precision prediction a complex and challenging task. Current approaches include traditional statistical models, hybrid approaches combining machine learning with classical models, and deep learning techniques. Traditional statistical methods struggle with nonlinear prediction, hybrid methods face challenges in feature extraction and long-term dependencies, and deep learning methods incur high computational costs, so none achieves sufficiently high-precision resource prediction. Therefore, a new time series prediction model, SDVformer, is proposed. It builds on the Informer model by integrating Savitzky-Golay (SG) filters, a novel Discrete-Variation Self-Attention (DVSA) mechanism, and a type-aware mixture of experts (T-MOE) framework. The SG filter reduces noise and enhances the feature representation of the input data; the DVSA mechanism optimizes the selection of critical features to reduce computational complexity; and the T-MOE framework adjusts the model structure according to different resource characteristics, improving prediction accuracy and adaptability. Experimental results show that SDVformer significantly outperforms baseline models, including the recurrent neural network (RNN), long short-term memory (LSTM), and Informer, on both the Alibaba public dataset and a dataset collected by Beijing Jiaotong University (BJTU). Compared with the Informer model in particular, the average mean squared error (MSE) of SDVformer decreases by about 80%, fully demonstrating its advantages for complex time series prediction tasks in cloud computing systems.
Keywords: cloud computing; time series prediction; DVSA; SG filter; T-MOE
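Of SDVformer's three components, the Savitzky-Golay preprocessing step is the easiest to illustrate standalone. The sketch below implements SG smoothing as a sliding least-squares polynomial fit (window length, order, and the synthetic utilization series are arbitrary choices here; in practice one would reach for `scipy.signal.savgol_filter`):

```python
import numpy as np

def savgol_smooth(y, window=11, order=3):
    """Savitzky-Golay smoothing: fit a degree-`order` polynomial to each
    sliding window by least squares and evaluate it at the window centre."""
    half = window // 2
    ypad = np.pad(y, half, mode="edge")
    t = np.arange(-half, half + 1)
    V = np.vander(t, order + 1, increasing=True)   # Vandermonde basis in t
    row = np.linalg.pinv(V)[0]                     # picks the t^0 coefficient
    return np.array([row @ ypad[i:i + window] for i in range(len(y))])

# Noisy CPU-utilisation-like series (synthetic)
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
clean = 50 + 20 * np.sin(t)
noisy = clean + rng.normal(0, 5, size=t.size)
smooth = savgol_smooth(noisy)

print("noise RMS before:", np.sqrt(np.mean((noisy - clean) ** 2)).round(2))
print("noise RMS after: ", np.sqrt(np.mean((smooth - clean) ** 2)).round(2))
```

Because the filter fits a polynomial rather than averaging, it suppresses noise while preserving peaks and slopes, which is why it is a common front-end for volatile utilization traces.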
11. Heuristic Feature Engineering for Enhancing Neural Network Performance in Spatiotemporal Traffic Prediction
Authors: Bin Sun, Yinuo Wang, Tao Shen, Lu Zhang, Renkang Geng. Computers, Materials & Continua, 2025, Issue 3, pp. 4219-4236.
Abstract: Traffic datasets exhibit complex spatiotemporal characteristics, including significant fluctuations in traffic volume and intricate periodic patterns, which pose substantial challenges for accurate forecasting and effective traffic management. Traditional forecasting models often fail to adequately capture these complexities, leading to suboptimal predictive performance. While neural networks excel at modeling intricate and nonlinear data structures, they are also highly susceptible to overfitting, resulting in inefficient use of computational resources and reduced generalization. This paper introduces a heuristic feature extraction method that combines the strengths of non-neural algorithms with neural networks to enhance the identification and representation of relevant features in traffic data. The significance of various temporal characteristics is first evaluated using three distinct assessment strategies grounded in non-neural methodologies. The evaluated features are then aggregated through a weighted fusion mechanism to create heuristic features, which are integrated into neural network models for more accurate and robust traffic prediction. Experimental results on four real-world datasets collected from diverse urban environments show that the proposed method significantly improves the accuracy of long-term traffic forecasting without compromising performance, and helps streamline neural network architectures, leading to a considerable reduction in computational overhead. By addressing both prediction accuracy and computational efficiency, the study presents an effective method for traffic condition forecasting and offers insights for the future development of data-driven traffic management systems and transportation strategies.
Keywords: machine learning; deep learning; traffic; time series prediction; forecasting; aggregation
12. Application of uncertainty reasoning based on cloud model in time series prediction (Cited 11 times)
Authors: 张锦春, 胡谷雨. Journal of Zhejiang University Science, 2003, Issue 5, pp. 578-583.
Abstract: Time series prediction has been successfully used in several application areas, such as meteorological forecasting, market prediction, and network traffic forecasting, and a number of techniques have been developed for modeling and predicting time series. In the traditional exponential smoothing method, a fixed weight is assigned to the data history, and trend changes in the time series are ignored. In this paper, an uncertainty reasoning method based on the cloud model is employed in time series prediction: a cloud logic controller dynamically adjusts the smoothing coefficient of the simple exponential smoothing method to fit the current trend of the series. The validity of this solution was demonstrated by experiments on various data sets.
Keywords: time series prediction; cloud model; simple exponential smoothing method
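The fixed-weight limitation the abstract describes, and the benefit of adapting the smoothing coefficient, can be sketched without the cloud model itself. Below, the paper's cloud logic controller is replaced by a crude sign-of-error rule (an assumption for illustration only): consistent one-sided errors indicate a trend the forecast is lagging, so alpha is raised; alternating errors indicate noise, so alpha is lowered.

```python
def adaptive_ses(series, alpha=0.3, step=0.05, lo=0.05, hi=0.95):
    """Simple exponential smoothing with a trend-adaptive coefficient.
    Returns the one-step-ahead forecasts for series[1:]."""
    s = series[0]
    prev_err = 0.0
    preds = []
    for x in series[1:]:
        preds.append(s)                  # forecast before seeing x
        err = x - s
        if err * prev_err > 0:           # consistent trend -> react faster
            alpha = min(hi, alpha + step)
        else:                            # alternating errors -> smooth harder
            alpha = max(lo, alpha - step)
        s += alpha * err                 # standard SES update
        prev_err = err
    return preds

series = [10, 11, 13, 16, 20, 25, 31, 38, 46, 55]   # accelerating demand
print(adaptive_ses(series))
```

With a fixed small alpha the forecasts would lag this accelerating series badly; letting alpha grow during the sustained trend is the behaviour the cloud logic controller automates with fuzzier rules.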
13. Travel time prediction model of freeway based on gradient boosting decision tree (Cited 9 times)
Authors: Cheng Juan, Chen Xianhua. Journal of Southeast University (English Edition), 2019, Issue 3, pp. 393-398.
Abstract: To investigate travel time prediction for freeways, a model based on the gradient boosting decision tree (GBDT) is proposed. Eleven variables (namely, travel time in the current period T_i, traffic flow in the current period Q_i, speed V_i, density K_i, number of vehicles N_i, occupancy R_i, traffic state parameter X_i, travel time in the previous period T_{i-1}, etc.) are selected to predict the travel time 10 minutes ahead. Data obtained from VISSIM simulation are used to train and test the model. The results demonstrate that the prediction error of the GBDT model is smaller than those of the back propagation (BP) neural network model and the support vector machine (SVM) model. Travel time in the current period T_i is the most important variable in the GBDT model. The GBDT model produces more accurate predictions and mines the hidden nonlinear relationships between the variables and the predicted travel time.
Keywords: gradient boosting decision tree (GBDT); travel time prediction; freeway; traffic state parameter
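The boosting mechanism behind GBDT, repeatedly fitting a small tree to the current residuals and adding it with a shrinkage factor, can be sketched from scratch with one-feature regression stumps. The toy data and single predictor below stand in for the paper's eleven VISSIM-derived variables; real use would call a library such as scikit-learn or XGBoost:

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-split regression stump on one feature."""
    best = (np.inf, None, 0.0, 0.0)
    for thr in np.unique(x)[:-1]:
        left, right = residual[x <= thr], residual[x > thr]
        sse = (((left - left.mean()) ** 2).sum()
               + ((right - right.mean()) ** 2).sum())
        if sse < best[0]:
            best = (sse, thr, left.mean(), right.mean())
    return best[1:]

def gbdt_fit(x, y, n_trees=100, lr=0.1):
    pred = np.full(y.shape, y.mean())
    trees = []
    for _ in range(n_trees):
        thr, lval, rval = fit_stump(x, y - pred)    # fit stump to residuals
        pred = pred + lr * np.where(x <= thr, lval, rval)
        trees.append((thr, lval, rval))
    return (y.mean(), lr, trees)

def gbdt_predict(model, x):
    base, lr, trees = model
    out = np.full(x.shape, base)
    for thr, lval, rval in trees:
        out = out + lr * np.where(x <= thr, lval, rval)
    return out

# Toy data: travel time grows nonlinearly with one current-period variable
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 5.0 + 0.5 * x + np.sin(x) + rng.normal(0, 0.1, 200)

model = gbdt_fit(x, y)
rmse = np.sqrt(np.mean((gbdt_predict(model, x) - y) ** 2))
print(f"training RMSE: {rmse:.3f}")
```

Variable importance in a full GBDT comes from summing each feature's loss reductions across splits, which is how the paper identifies T_i as the dominant predictor.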
14. Time series online prediction algorithm based on least squares support vector machine (Cited 8 times)
Authors: 吴琼, 刘文颖, 杨以涵. Journal of Central South University of Technology, 2007, Issue 3, pp. 442-446.
Abstract: Deficiencies in applying the traditional least squares support vector machine (LS-SVM) to time series online prediction were identified. Based on the properties of the kernel function matrix and the recursive calculation of a block matrix, a new time series online prediction algorithm based on an improved LS-SVM was proposed. Historical training results are fully reused and the computing speed of the LS-SVM is enhanced. The improved algorithm was then applied to time series online prediction. Using operational data provided by the Northwest Power Grid of China, the method was used for transient stability prediction of the electric power system. The results show that, compared with the calculation time of the traditional LS-SVM (75-1 600 ms), that of the proposed method in different time windows is 40-60 ms, and its prediction accuracy (normalized root mean squared error) is above 0.8. The improved method is therefore better than the traditional LS-SVM and more suitable for time series online prediction.
Keywords: time series prediction; machine learning; support vector machine; statistical learning theory
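The motivation here, update the model incrementally as samples arrive instead of re-solving from scratch, is the same one behind classic recursive least squares. As a self-contained analogue (recursive least squares on an AR model, not the paper's block-matrix LS-SVM update), an online one-step-ahead predictor looks like this:

```python
import numpy as np

class RLSPredictor:
    """Online one-step-ahead AR predictor updated by recursive least squares,
    so each new sample refines the model without retraining from scratch."""
    def __init__(self, order=3, lam=0.99):
        self.order, self.lam = order, lam
        self.w = np.zeros(order)
        self.P = np.eye(order) * 1e3            # inverse correlation matrix

    def update(self, window, target):
        x = np.asarray(window, float)
        k = self.P @ x / (self.lam + x @ self.P @ x)    # gain vector
        err = target - self.w @ x
        self.w += k * err
        self.P = (self.P - np.outer(k, x @ self.P)) / self.lam
        return err

    def predict(self, window):
        return float(self.w @ np.asarray(window, float))

# Track a noisy oscillatory signal online
rng = np.random.default_rng(3)
t = np.arange(400)
series = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)

model, errs = RLSPredictor(), []
for i in range(3, len(series)):
    errs.append(model.update(series[i-3:i], series[i]))

print("mean |error| over the last 50 steps:",
      np.mean(np.abs(errs[-50:])).round(3))
```

Each update costs O(order^2) regardless of how much history has been seen, which is the kind of constant-time-per-sample behaviour the improved LS-SVM targets with its block-matrix recursion.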
15. Bus Arrival Time Prediction Based on Mixed Model (Cited 4 times)
Authors: Jinglin Li, Jie Gao, Yu Yang, Heran Wei. China Communications, 2017, Issue 5, pp. 38-47.
Abstract: Predicting bus arrival time accurately is a crucial problem in the Internet of Vehicles, and existing methods cannot solve it effectively because they ignore traffic delay jitter. In this paper, a three-stage mixed model is proposed for bus arrival time prediction. The first stage is pattern training, in which traffic delay jitter patterns (TDJP) are mined by K-nearest-neighbor and K-means clustering in historical traffic time data. The second stage is single-step prediction, based on a real-time adjusted Kalman filter modified by the historical TDJP. In the third stage, since the influence of historical patterns grows with prediction distance, the single-step prediction is dynamically combined with a Markov historical transfer model to conduct multi-step prediction. The experimental results show that the proposed single-step prediction model performs better in accuracy and efficiency than short-term traffic flow prediction and a dynamic Kalman filter, and that the multi-step prediction provides higher veracity and reliability in travel time forecasting than short-term traffic flow and historical traffic pattern prediction models.
Keywords: bus arrival time prediction; traffic delay jitter pattern; Internet of Vehicles
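The single-step stage rests on a scalar Kalman filter. A minimal version for segment travel time, with a random-walk state model (the noise variances and synthetic data below are illustrative assumptions, and the paper's TDJP-based adjustment of the filter is not reproduced), can be sketched as:

```python
import numpy as np

def kalman_1d(measurements, q=0.01, r=1.0):
    """Scalar Kalman filter, random-walk state model.
    state = underlying segment travel time; measurement = noisy observation.
    q: process noise variance, r: measurement noise variance."""
    x, p = measurements[0], 1.0          # initial state estimate / variance
    estimates = []
    for z in measurements[1:]:
        p = p + q                        # predict: x persists, uncertainty grows
        k = p / (p + r)                  # Kalman gain
        x = x + k * (z - x)              # correct with the new measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Noisy per-trip travel times around a slowly drifting true value (seconds)
rng = np.random.default_rng(7)
true = 300 + np.cumsum(rng.normal(0, 0.5, 200))
obs = true + rng.normal(0, 10, 200)
est = kalman_1d(obs, q=0.25, r=100.0)

print("raw obs RMSE :", np.sqrt(np.mean((obs[1:] - true[1:]) ** 2)).round(2))
print("filtered RMSE:", np.sqrt(np.mean((est - true[1:]) ** 2)).round(2))
```

The mixed model's insight is that `q` and `r` should not be constants: delay jitter patterns mined offline adjust the filter in real time, which is what "real-time adjusted Kalman filter" refers to.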
16. Continuous-Time Prediction of Industrial Paste Thickener System With Differential ODE-Net (Cited 3 times)
Authors: Zhaolin Yuan, Xiaorui Li, Di Wu, Xiaojuan Ban, Nai-Qi Wu, Hong-Ning Dai, Hao Wang. IEEE/CAA Journal of Automatica Sinica, 2022, Issue 4, pp. 686-698.
Abstract: Predicting the outputs of a thickening system, including the underflow concentration (UC) and mud pressure, is crucial for optimal control of the process. The proliferation of industrial sensors and the availability of thickening-system data make this possible. However, the unique properties of thickening systems, such as nonlinearities, long time delays, partially observed data, and continuous-time evolution, pose challenges for building data-driven predictive models. To address these challenges, an integrated deep-learning, continuous-time network structure is established, consisting of a sequential encoder, a state decoder, and a derivative module, to learn a deterministic state-space model of thickening systems. In a case study, the methods are examined on a tailing thickener manufactured by FLSmidth and instrumented with extensive sensors, yielding comprehensive experimental results. The results demonstrate that the proposed continuous-time model with the sequential encoder achieves better prediction performance than existing discrete-time models and reduces the negative effects of long time delays by extracting features from historical system trajectories. The method also performs well on both short-term and long-term prediction tasks with the two proposed derivative types.
Keywords: industrial paste thickener; ordinary differential equation (ODE)-net; recurrent neural network; time series prediction
17. Numerical Prediction Methods for Clock Deviation Based on Two-Way Satellite Time and Frequency Transfer Data (Cited 3 times)
Authors: GUO Hairong, YANG Yuanxi, HE Haibo. Geo-Spatial Information Science, 2008, Issue 2, pp. 143-147.
Abstract: Three functional models, polynomial, spectral analysis, and a modified AR model, are studied and compared for fitting and predicting clock deviation based on data sequences derived from two-way satellite time and frequency transfer. A robust equivalent weight is applied, which controls the significant influence of outlying observations. The conclusions show that the prediction precision of robust estimation is better than that of least squares, and that the prediction precision calculated from smoothed observations is higher than that calculated from sampled observations. On account of the obvious periodic variations in the clock deviation sequence, the predicted values of the polynomial model are implausible. The prediction precision of the spectral analysis model is very low, but the principal periods can be determined. The prediction RMS for a 6-hour extrapolation interval is 1 ns or so when the modified AR model is used.
Keywords: time prediction; time transfer; two-way satellite time and frequency transfer
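The polynomial model's weakness noted in the abstract, periodic variations that a polynomial cannot extrapolate, is easy to reproduce on synthetic data (the drift, diurnal term, and noise levels below are invented for illustration, not the paper's clock data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic clock deviation: quadratic drift + diurnal period + noise (ns)
t = np.arange(0, 48, 0.5)                        # hours, two days of data
clock = (5.0 + 2.0 * t + 0.01 * t**2
         + 3.0 * np.sin(2 * np.pi * t / 24)
         + rng.normal(0, 0.3, t.size))

# Fit a quadratic polynomial on the first day, extrapolate 6 h ahead
train = t < 24
coef = np.polyfit(t[train], clock[train], deg=2)
horizon = (t >= 24) & (t < 30)
pred = np.polyval(coef, t[horizon])
rms = np.sqrt(np.mean((pred - clock[horizon]) ** 2))
print(f"6-hour extrapolation RMS error: {rms:.2f} ns")
```

The residual here is dominated by the unmodeled sine term, which mirrors the paper's finding: a pure polynomial misses the periodic component, spectral analysis recovers the periods, and the modified AR model captures both well enough to reach roughly 1 ns RMS at 6 hours.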
18. Ship motion extreme short time prediction of ship pitch based on diagonal recurrent neural network (Cited by: 3)
Authors: SHEN Yan, XIE Mei-ping. Journal of Marine Science and Application, 2005(2): 56-60.
A DRNN (diagonal recurrent neural network) and its RPE (recurrent prediction error) learning algorithm are proposed in this paper. The simple structure of the DRNN reduces the amount of computation. The principle of the RPE learning algorithm is to adjust the weights along the Gauss-Newton direction; it requires neither the second local derivative nor matrix inversion, and its unbiasedness is proved. Applied to extremely short-time prediction of large-ship pitch, it yields satisfactory results. The prediction performance of this algorithm is compared with that of auto-regression and the periodogram method, and the comparison shows that the proposed algorithm is feasible.
Keywords: extreme short time prediction, diagonal recurrent neural network, recurrent prediction error learning algorithm, unbiasedness
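The defining feature of a DRNN, that each hidden unit recurs only to itself rather than through a full recurrent matrix, can be sketched as a forward pass. All weights below are random, untrained stand-ins, and the RPE training step is not shown:

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hidden = 1, 8

# Hypothetical (untrained) weights; the paper trains them with the RPE algorithm.
W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))
w_rec = rng.uniform(-0.5, 0.5, size=n_hidden)   # diagonal recurrence: one self-loop per unit
W_out = rng.normal(scale=0.5, size=(1, n_hidden))

def drnn_forward(sequence):
    """One-step-ahead predictions for a scalar pitch time series."""
    h = np.zeros(n_hidden)
    preds = []
    for u in sequence:
        # Each hidden unit feeds back only to itself: w_rec * h is elementwise,
        # not a full matrix product, which is what keeps the computation cheap.
        h = np.tanh(W_in @ np.atleast_1d(u) + w_rec * h)
        preds.append(float(W_out @ h))
    return np.array(preds)

pitch = np.sin(0.3 * np.arange(50))   # toy pitch signal
preds = drnn_forward(pitch)
print(preds.shape)                    # (50,)
```

The elementwise self-recurrence is the source of the reduced computation the abstract mentions: the recurrent parameters grow linearly, not quadratically, with the number of hidden units.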
19. STUDY ON THE PREDICTION METHOD OF LOW-DIMENSION TIME SERIES THAT ARISE FROM THE INTRINSIC NONLINEAR DYNAMICS (Cited by: 2)
Authors: MA Junhai (马军海), CHEN Yushu (陈予恕). Applied Mathematics and Mechanics (English Edition), 2001(5): 501-509. Indexed in SCIE, EI.
This paper mainly discusses prediction methods, and their applications, for nonlinear dynamic systems determined from low-dimensional chaotic time series. Building on prior work, the chaotic time series are reconstructed in phase space using a nonlinear chaotic model. The model parameters are first estimated by an improved least-squares method; once the required precision is reached, an optimization method refines these estimates. Finally, the fitted chaotic model is used to predict future values of the chaotic time series in phase space. Representative experimental examples are analyzed to test the models and algorithms developed in this paper. The results show that with these algorithms, the parameters of the corresponding chaotic model are calculated easily and accurately. Prediction of chaotic series in phase space changes the traditional methods from extrapolation to interpolation, and if the optimal model rank is chosen, the prediction precision increases notably. Claims of superior long-term predictability for nonlinear chaotic models are shown to be unfounded.
Keywords: nonlinear chaotic model, parameter identification, time series prediction
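Phase-space reconstruction by delay embedding, the starting point of the approach described above, can be sketched with a simple nearest-neighbor predictor on the logistic map. The embedding dimension, delay, and test map are illustrative choices, not the authors' model or parameter-estimation procedure:

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Reconstruct the phase space by delay embedding (Takens-style)."""
    n = len(series) - (dim - 1) * tau
    return np.array([series[i : i + dim * tau : tau] for i in range(n)])

def predict_next(series, dim=3, tau=1):
    """Predict the next value from the nearest neighbor's successor in phase space."""
    X = delay_embed(series, dim, tau)
    query = X[-1]
    # Compare the current state against earlier states (exclude the last state,
    # whose successor is unknown).
    dists = np.linalg.norm(X[:-1] - query, axis=1)
    nn = int(np.argmin(dists))
    successor_index = nn + (dim - 1) * tau + 1   # index in the original series
    return series[successor_index]

# Logistic map: a standard low-dimensional chaotic series.
x = np.empty(1000)
x[0] = 0.4
for i in range(999):
    x[i + 1] = 3.9 * x[i] * (1.0 - x[i])

pred = predict_next(x[:-1])   # predict the held-out final value
print(pred)
```

This nearest-neighbor step is what the abstract means by turning prediction into interpolation: the future state is read off from previously visited neighborhoods of the attractor rather than extrapolated.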
20. A PSO-SVM Model for Short-Term Travel Time Prediction Based on Bluetooth Technology (Cited by: 3)
Authors: Qun Wang, Zhuyun Liu, Zhongren Peng. Journal of Harbin Institute of Technology (New Series), 2015(3): 7-14. Indexed in EI, CAS.
Accurate prediction of travel time along a roadway provides valuable traffic information for travelers and traffic managers. Aiming at short-term travel time forecasting on urban arterials, a prediction model (PSO-SVM) combining a support vector machine (SVM) and particle swarm optimization (PSO) is developed. Travel time data collected with Bluetooth devices are used to calibrate the proposed model. Field experiments show that the PSO-SVM model's error indicators are lower than those of the single SVM model and the BP neural network (BPNN) model. In particular, the mean absolute percentage error (MAPE) of PSO-SVM is only 9.4534%, which is less than that of the single SVM model (12.2302%) and the BPNN model (15.3147%). The results indicate that the proposed PSO-SVM model is feasible and more effective than the other models for short-term travel time prediction on urban arterials.
Keywords: urban arterials, travel time prediction, Bluetooth detection, support vector machine (SVM), particle swarm optimization (PSO)
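A minimal PSO loop of the kind used to tune SVM hyperparameters might look as follows. The quadratic stand-in objective replaces the actual SVM cross-validation on Bluetooth travel-time data, and the swarm parameters are generic defaults, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(3)

def validation_error(params):
    """Stand-in for SVM cross-validation error over (log C, log gamma).
    A real implementation would train an SVM on travel-time data here;
    this toy bowl pretends the optimum sits at log C = 1, log gamma = -2."""
    c, g = params
    return (c - 1.0) ** 2 + (g + 2.0) ** 2

def pso(objective, n_particles=20, n_iter=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization over a 2-D search space."""
    pos = rng.uniform(-5, 5, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        # Velocity: inertia + pull toward personal best + pull toward global best.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, objective(gbest)

best, err = pso(validation_error)
print(best, err)
```

Swapping the stand-in objective for a real k-fold SVM evaluation turns this into the PSO-SVM calibration scheme the abstract describes: the swarm searches hyperparameter space, and each particle's fitness is the resulting validation error.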