In the past few years, deep learning has developed rapidly, and many researchers have tried to combine it with their own fields. Algorithms based on the Recurrent Neural Network (RNN) have been successfully applied in fields such as weather forecasting, stock forecasting, and action recognition because of their excellent performance in processing spatio-temporal sequence data. Among them, algorithms based on LSTM and GRU have developed most rapidly because of their well-designed architectures. This paper reviews RNN-based spatio-temporal sequence prediction algorithms, introduces the development history of RNN and the common application directions of spatio-temporal sequence prediction, including precipitation nowcasting and traffic flow forecasting algorithms, and compares the advantages, disadvantages, and innovations of each algorithm. The purpose of this article is to give readers a clear understanding of solutions to such problems. Finally, it discusses the future development of RNN in spatio-temporal sequence prediction.
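The GRU update that this review discusses can be written in a few lines. Below is a minimal single-unit sketch with scalar weights; the weight tuple and the omitted biases are illustrative assumptions, not any specific model from the surveyed papers.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, W):
    """One step of a single-unit GRU.

    W holds scalar weights (w_z, u_z, w_r, u_r, w_h, u_h); biases are
    omitted for brevity. Returns the new hidden state.
    """
    w_z, u_z, w_r, u_r, w_h, u_h = W
    z = sigmoid(w_z * x + u_z * h)                 # update gate
    r = sigmoid(w_r * x + u_r * h)                 # reset gate
    h_tilde = math.tanh(w_h * x + u_h * (r * h))   # candidate state
    return (1.0 - z) * h + z * h_tilde             # interpolate old/new state
```

With all weights zero, both gates sit at 0.5 and the candidate state is 0, so the hidden state simply decays by half each step; nonzero weights let the gates decide how much history to keep.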
Spectrum sensing is one of the key issues in cognitive radio networks. Most previous work concentrates on sensing the spectrum in a single spectrum band. In this paper, we propose a spectrum sensing sequence prediction scheme for cognitive radio networks with multiple spectrum bands to decrease the spectrum sensing time and increase the throughput of secondary users. The scheme is based on recent advances in computational learning theory, which have shown that prediction is synonymous with data compression. A Ziv-Lempel data compression algorithm is used to design our spectrum sensing sequence prediction scheme, with the spectrum band usage history used for prediction. Simulation results show that the proposed scheme can reduce the average sensing time and improve the system throughput significantly.
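The prediction-as-compression idea can be illustrated with a minimal LZ78-style predictor: the usage history is parsed into LZ78 phrases stored in a trie with visit counts, and the next symbol is the most frequent continuation of the current partial phrase. This is a sketch of the general Ziv-Lempel prediction technique, not the paper's exact scheme.

```python
class LZ78Predictor:
    """Minimal LZ78-style next-symbol predictor (illustrative sketch)."""

    def __init__(self):
        self.root = {}         # trie node: symbol -> [revisit_count, children]
        self.node = self.root  # current position in the ongoing phrase

    def update(self, symbol):
        """Feed one observed symbol, e.g. '1' = band busy, '0' = idle."""
        if symbol not in self.node:
            self.node[symbol] = [0, {}]
            self.node = self.root          # new phrase complete: restart
        else:
            entry = self.node[symbol]
            entry[0] += 1                  # seen this continuation again
            self.node = entry[1]           # descend into the phrase

    def predict(self):
        """Most frequent next symbol given the current phrase context."""
        candidates = self.node if self.node else self.root
        if not candidates:
            return None
        return max(candidates, key=lambda s: candidates[s][0])
```

Feeding an alternating busy/idle history, the predictor learns that '1' is followed by '0', so a secondary user would sense the band it expects to be idle first.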
In recent years, deep learning methods have gradually been applied to prediction tasks related to Arctic sea ice concentration, but relatively little research has addressed larger spatial and temporal scales, mainly due to the limited time coverage of observations and reanalysis data. Meanwhile, deep learning predictions of sea ice thickness (SIT) have yet to receive ample attention. In this study, two data-driven deep learning (DL) models are built on the ConvLSTM and fully convolutional U-net (FC-Unet) algorithms, trained on CMIP6 historical simulations for transfer learning, and fine-tuned on reanalysis/observations. These models enable monthly predictions of Arctic SIT without considering the complex physical processes involved. Comprehensive assessments of prediction skill by season and region suggest that using a broader set of CMIP6 data for transfer learning, as well as incorporating multiple climate variables as predictors, contributes to better prediction results, although both DL models can effectively predict the spatiotemporal features of SIT anomalies. For the SIT anomalies predicted by the FC-Unet model, the spatial correlations with reanalysis reach an average level of 89% over all months, while the temporal anomaly correlation coefficients are close to unity in most cases. The models also demonstrate robust performance in predicting SIT and sea ice extent (SIE) during extreme events. The effectiveness and reliability of the proposed deep transfer learning models in predicting Arctic SIT can facilitate more accurate pan-Arctic predictions, aiding climate change research and real-time business applications.
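The anomaly correlation coefficient used to score the predictions is a standard verification metric; a minimal sketch, assuming anomalies are taken relative to a supplied climatology:

```python
import math

def anomaly_correlation(pred, obs, clim):
    """Anomaly correlation coefficient between predicted and observed
    fields, with anomalies computed relative to a climatology `clim`.
    All three arguments are flat sequences of the same length."""
    pa = [p - c for p, c in zip(pred, clim)]   # predicted anomalies
    oa = [o - c for o, c in zip(obs, clim)]    # observed anomalies
    num = sum(p * o for p, o in zip(pa, oa))
    den = math.sqrt(sum(p * p for p in pa) * sum(o * o for o in oa))
    return num / den
```

A value close to unity, as the abstract reports for the FC-Unet temporal scores, means the predicted anomaly pattern varies in lockstep with the reanalysis.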
Carbon dioxide Enhanced Oil Recovery (CO2-EOR) technology guarantees substantial underground CO2 sequestration while simultaneously boosting the production of subsurface hydrocarbons (oil and gas). However, unreasonable CO2-EOR strategies, encompassing well placement and well control parameters, lead to premature gas channeling in production wells, allowing large amounts of CO2 to escape without any beneficial effect. Because prediction and optimization tools that integrate complex geological and engineering information are lacking for this widely used technology, thorough process simulation and optimization evaluation of CO2-EOR is imperative. In this paper, a novel optimization workflow that couples an AST-GraphTrans-based proxy model (Attention-based Spatio-temporal Graph Transformer) with the multi-objective optimization algorithm MOPSO (Multi-objective Particle Swarm Optimization) is established to optimize CO2-EOR strategies. The workflow consists of two main components: the AST-GraphTrans-based proxy model forecasts the dynamics of CO2 flooding and sequestration, including cumulative oil production, CO2 sequestration volume, and the CO2 plume front, while the MOPSO algorithm maximizes oil production and sequestration volume by coordinating well placement and well control parameters under the constraint of gas channeling containment. Through the collaboration of these two components, the AST-GraphTrans proxy-assisted optimization workflow overcomes the limitation of existing rapid-optimization approaches to CO2-EOR, which cannot consider high-dimensional spatio-temporal information. The effectiveness of the proposed workflow is validated on a 2D synthetic model and a 3D field-scale reservoir model. The optimized strategies increase cumulative oil production by 87% and 49% and CO2 sequestration volume by 78% and 50% across the two reservoirs. These findings underscore the stability and generalization capability of the AST-GraphTrans proxy-assisted framework. The contribution of this study is a more efficient prediction and optimization tool that maximizes CO2 sequestration and oil recovery while mitigating CO2 gas channeling, thereby ensuring cleaner oil production.
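At the heart of any multi-objective scheme such as MOPSO is the Pareto-dominance test over the two objectives (oil production, sequestration volume). A minimal sketch of that test and a brute-force non-dominated filter, not the paper's particle-swarm machinery:

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b, with both objectives maximized.
    a and b are (oil_production, co2_sequestration) tuples."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of candidate objective pairs."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]
```

In MOPSO this front would be maintained in an external archive and used to guide the swarm; here it simply filters out strategies that are worse on both objectives.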
Sea-surface wind is a vital meteorological element in marine activities and climate research. This study proposes the spectral attention enhanced multidimensional feature fusion convolutional long short-term memory (LSTM) network (SAMFF-ConvLSTM), a novel approach for sea-surface wind-speed prediction that emphasizes the temporal characteristics of data samples. The model incorporates the Fourier transform to extract time- and frequency-domain features from wave and wind variables. For the 12 h prediction, the SAMFF-ConvLSTM achieved a correlation coefficient of 0.960 and a root mean square error (RMSE) of 1.350 m/s, indicating high prediction accuracy. For the 24 h prediction, the RMSE of the SAMFF-ConvLSTM was reduced by 38.11%, 14.26%, and 13.36% compared with those of the convolutional neural network, gated recurrent units, and convolutional LSTM (ConvLSTM), respectively. These results confirm the superior reliability and accuracy of the SAMFF-ConvLSTM over traditional models in theoretical and practical applications.
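The reported comparisons reduce to two small formulas: the RMSE of a forecast and the percentage reduction of one model's RMSE relative to a baseline. A minimal sketch:

```python
import math

def rmse(pred, obs):
    """Root mean square error between predicted and observed values
    (e.g. wind speeds in m/s)."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def relative_reduction(new, baseline):
    """Percentage by which `new` (a model's RMSE) improves on `baseline`,
    as used in statements like 'RMSE reduced by 38.11%'."""
    return 100.0 * (baseline - new) / baseline
```

For example, a model with RMSE 5.0 m/s against a baseline of 10.0 m/s shows a 50% reduction under this definition.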
The environment shear stress of the Tangshan main earthquake and 38 large aftershocks has been calculated from the acceleration data of the Tangshan earthquake sequence. The environment shear stress of 52 smaller aftershocks from July 1982 to July 1984 has also been calculated using the digital data recorded by the Sino-American cooperative instrument arrays in Tangshan. The results show that the environment shear stress τ0 depends only weakly on the seismic moment; only small and moderate earthquakes can occur in regions with smaller τ0 values, whereas large earthquakes occur only in regions with greater τ0 values. Peak acceleration, velocity, and displacement are larger for earthquakes occurring in regions with greater τ0 values. Therefore, measuring the environment shear stress τ0 in regions of interest can play an important role in earthquake prediction and earthquake-resistant engineering. The environment shear stress values of the large aftershocks that occurred at the two ends of the main fault are often higher than that of the main shock, which may reflect stress concentration at the two ends of the fault and provides a reference for where large aftershocks will occur.
Sensing signals of many real-world network systems, such as traffic networks or microgrids, can be sparse and irregular in both the spatial and temporal domains for reasons such as cost reduction, noise corruption, or device malfunction. It is a fundamental but challenging problem to model the continuous dynamics of a system from sporadic observations on a network of nodes, which is generally represented as a graph. In this paper, we propose a deep learning model called the Evolved Differential Model (EDM) to model continuous-time stochastic processes from partial observations on a graph. Our model incorporates a diffusion convolutional network to parameterize continuous-time system dynamics through a graph Ordinary Differential Equation (ODE) and a graph Stochastic Differential Equation (SDE). The graph ODE accurately captures spatial-temporal relations and extracts hidden features from the data, while the graph SDE efficiently captures the underlying uncertainty of the network system. With the recurrent ODE-SDE scheme, EDM can serve as an accurate online predictive model effective for monitoring or analyzing real-world networked objects. Through extensive experiments on several datasets, we demonstrate that EDM outperforms existing methods in online prediction tasks.
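The graph-ODE idea can be illustrated with a drastically simplified linear surrogate: node states evolve toward the average of their neighbors, dh/dt = (A − I)h with a row-normalized adjacency A, integrated with explicit Euler steps. This stands in for the learned diffusion-convolution dynamics, which are of course nonlinear and parameterized.

```python
def graph_ode_euler(h, adj, dt, steps):
    """Integrate dh/dt = (A - I) h with explicit Euler steps.

    h:   list of node states.
    adj: row-normalized adjacency matrix (list of lists), so that
         (A h)[i] is the neighborhood average of node i.
    """
    n = len(h)
    for _ in range(steps):
        # Drift toward the neighborhood average (graph diffusion).
        dh = [sum(adj[i][j] * h[j] for j in range(n)) - h[i] for i in range(n)]
        h = [h[i] + dt * dh[i] for i in range(n)]
    return h
```

On a two-node graph, an initial imbalance diffuses toward the shared mean; a graph SDE would add a stochastic noise term to each Euler step to model uncertainty.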
With the rapid growth of the electric vehicle (EV) market, accurately predicting user charging behavior has become particularly important. It not only guides power distribution and charging station planning but is also crucial for improving user satisfaction and operational efficiency. This study aims to predict the charging behavior of EV users using Large Language Models (LLMs). Unlike traditional methods such as Long Short-Term Memory (LSTM) and XGBoost, or single-task prediction models, our proposed model, EVCharging-GPT, is the first to integrate the text generation capabilities of LLMs with a multi-task learning framework for EV user behavior prediction. We construct an EV user charging data processing flow and create a dataset of real scenarios for fine-tuning and testing the model. By carefully designing prompt templates, we transform the charging behavior prediction task into a text-to-text format, allowing the model to leverage its rich pre-trained knowledge to make effective predictions. Additionally, we integrate temporal and static categorical features through natural language prompts and employ LoRA (Low-Rank Adaptation) to achieve efficient domain adaptation. To verify the effectiveness of EVCharging-GPT, we conduct extensive comparative experiments with various LLMs and traditional models. The results demonstrate the potential of the LLM-based approach for predicting EV user behavior and provide a solid foundation for future research and applications.
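Turning a tabular charging record into a text-to-text prompt might look like the sketch below. The field names and template wording are hypothetical illustrations; the paper's actual prompt design is not reproduced here.

```python
def build_prompt(record):
    """Format one charging record as a text-to-text prompt.

    `record` is a dict with hypothetical keys (user_id, vehicle_type,
    station, last_charge, duration_h, recent_kwh) mixing static
    categorical and temporal features, as the abstract describes.
    """
    return (
        f"User {record['user_id']} ({record['vehicle_type']}) last charged at "
        f"station {record['station']} on {record['last_charge']} for "
        f"{record['duration_h']} h. Recent daily energy: {record['recent_kwh']} kWh. "
        "Predict the next charging start time and energy demand."
    )
```

The fine-tuned LLM would then emit its answer as free text, which a post-processing step parses back into structured predictions.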
Antifreeze proteins (AFP) can inhibit the growth of ice crystals to protect organisms from freezing damage, and demonstrate broad application prospects in the food industry. Antifreeze peptides (AFPP) are specific peptides carrying the functional domains that confer antifreeze activity in AFP. Bioinformatics-based molecular simulation technology can explain the properties and mechanisms of biological macromolecules more accurately. Therefore, the binding stability of antifreeze peptides and antifreeze proteins (AFP(P)) to ice and the molecular-scale growth kinetics of ice have been analyzed by molecular simulation, which can make up for the limitations of experimental techniques. This review summarizes molecular simulation-based research on the inhibition of ice growth by AFP(P), including sequence prediction, structure construction, molecular docking, and molecular dynamics (MD) studies of AFP(P) applied to ice growth inhibition. Finally, the review discusses future directions for designing new antifreeze biomimetic materials through molecular simulation and machine learning. The information presented in this paper will help enrich our understanding of AFPP.
Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 42075007), the Open Project of the Provincial Key Laboratory for Computer Information Processing Technology under Grant KJS1935, Soochow University, and the Priority Academic Program Development of Jiangsu Higher Education Institutions.
Funding: Supported by the National Natural Science Foundation of China (No. 60832009), the Natural Science Foundation of Beijing (No. 4102044), and the National Natural Science Foundation for Young Scholars of China (No. 61001115).
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 41976193 and 42176243).
Funding: Supported by the National Natural Science Foundation of China (Nos. 52374064 and 52274056) and the China Scholarship Council (No. 202406450086).
Funding: Supported by the National Natural Science Foundation of China (No. 42176020), the Open Research Fund of the State Key Laboratory of Target Vulnerability Assessment (No. YSX2024KFYS001), the National Key Research and Development Program (No. 2022YFC3105002), and the Project from the Key Laboratory of Marine Environmental Information Technology (No. 2023GFW-1047).
Funding: Supported by the National Natural Science Foundation of China (No. 62276026).
Funding: This work was supported by the Natural Science Foundation of China (U1905202), the Fujian Major Project of the Provincial Science & Technology Hall of China (2020NZ010008), and the Xiamen Ocean and Fishery Development Special Fund Project (21CZP006HJ04).