Journal Articles
823 articles found
1. The Supplementary Motor Area as a Flexible Hub Mediating Behavioral and Neuroplastic Changes in Motor Sequence Learning: A TMS and TMS-EEG Study (Cited by 1)
Authors: Jing Chen, Yanzi Fan, Xize Jia, Fengmei Fan, Jinhui Wang, Qihong Zou, Bing Chen, Xianwei Che, Yating Lv. Neuroscience Bulletin, 2025, No. 5, pp. 837-852 (16 pages).
Attempts have been made to modulate motor sequence learning (MSL) through repetitive transcranial magnetic stimulation, targeting different sites within the sensorimotor network. However, the target with the optimum modulatory effect on neural plasticity associated with MSL remains unclarified. This study was therefore designed to compare the roles of the left primary motor cortex and the left supplementary motor area proper (SMAp) in modulating MSL across different complexity levels and for both hands, as well as the associated neuroplasticity, by applying intermittent theta burst stimulation together with electroencephalography and concurrent transcranial magnetic stimulation. Our data demonstrated the role of SMAp stimulation in modulating neural communication to support MSL, which is achieved by facilitating regional activation and orchestrating neural coupling across distributed brain regions, particularly in interhemispheric connections. These findings may have important clinical implications, particularly for motor rehabilitation in populations such as post-stroke patients.
Keywords: Motor sequence learning; intermittent theta burst stimulation; concurrent transcranial magnetic stimulation and electroencephalogram; neuroplasticity; functional connectivity
2. Machine Learning-based Analysis of the 2015 M5.8 Alxa Left Banner Earthquake Sequence
Authors: Zhang Fan, Han Xiao-Ming, Pei Dong-Yang, Cui Feng-Zhi, Bai Yi-Hang, Yang Xiao-Zhong. Applied Geophysics, 2025, No. 3, pp. 711-728, 894 (19 pages).
Machine learning (ML) efficiently and accurately processes dense seismic array data, improving earthquake catalog creation, which is crucial for understanding earthquake sequences and fault systems; analyzing its reliability is also essential. An M5.8 earthquake struck Alxa Left Banner, Inner Mongolia, China on April 15, 2015, a region with limited CENC monitoring capabilities, making analysis challenging. However, abundant data from ChinArray provided valuable observations for assessing the event. This study leveraged ChinArray data from the 2015 Alxa Left Banner earthquake sequence, employing machine learning (specifically PhaseNet, a deep learning method, and GaMMA, a Bayesian approach) for automated seismic phase picking, association, and location analysis. Our generated catalog, comprising 10,432 phases from 708 events, is roughly ten times larger than the CENC catalog, encompassing all CENC events with strong consistency. A slight magnitude overestimation is observed only at lower magnitudes. Furthermore, the catalog adheres to the Gutenberg-Richter and Omori laws spatially, temporally, and in magnitude distribution, demonstrating its high reliability. Double-difference tomography refined locations for 366 events, yielding a more compact spatial distribution with horizontal errors within 100 m, vertical errors within 300 m, and travel-time residuals within 0.05 s. Depths predominantly range from 10-30 km. Aftershocks align primarily NEE, with the mainshock east of the aftershock zone. The near-vertical main fault plane dips northwestward, exhibiting a Y-shaped branching structure, converging at depth and expanding towards the surface. FOCMEC analysis, using first motions and amplitude ratios, yielded focal mechanism solutions for 10 events, including the mainshock. These solutions consistently indicate a strike-slip mechanism with a minor extensional component. Integrating the earthquake sequence's spatial distribution and focal mechanisms suggests the seismogenic structure is a negative flower structure, consistent with the Dengkou-Benjing fault. Comparing the CENC and ML-generated catalogs using the maximum curvature (MAXC) method reveals a 0.6 decrease in completeness magnitude (M_C). However, magnitude-frequency distribution discrepancies above the MAXC-estimated M_C suggest MAXC may underestimate both M_C and the b-value. This study analyzes the 2015 Alxa Left Banner M5.8 earthquake using a reliable, ML-generated earthquake catalog, revealing detailed information about the sequence, faulting structure, aftershock distribution, and stress characteristics.
Keywords: Deep learning; M5.8 Alxa Left Banner earthquake; seismogenic structure; earthquake sequence; focal mechanism
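The catalog-quality checks above rest on the Gutenberg-Richter relation; below is a minimal sketch (not the authors' code) of the maximum-curvature (MAXC) estimate of the completeness magnitude M_C and the Aki maximum-likelihood b-value, assuming only a NumPy array of catalog magnitudes.

```python
import numpy as np

def maxc_completeness(mags, bin_width=0.1):
    """Estimate completeness magnitude M_C as the most populated magnitude bin (MAXC)."""
    bins = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, edges = np.histogram(mags, bins=bins)
    return edges[np.argmax(counts)]

def aki_b_value(mags, mc, bin_width=0.1):
    """Aki maximum-likelihood b-value for events at or above M_C (with binning correction)."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - (mc - bin_width / 2.0))

# Toy usage with synthetic Gutenberg-Richter magnitudes (b close to 1.0).
rng = np.random.default_rng(0)
mags = np.round(rng.exponential(scale=1.0 / np.log(10), size=5000) + 1.0, 1)
mc = maxc_completeness(mags)
print(f"MAXC M_C = {mc:.1f}, b-value = {aki_b_value(mags, mc):.2f}")
```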
3. Deep learning on chromatin profiles reveals the cis-regulatory sequence code of the rice genome
Authors: Xinkai Zhou, Zhonghao Ruan, Chenlu Zhang, Kerstin Kaufmann, Dijun Chen. Journal of Genetics and Genomics, 2025, No. 6, pp. 848-851 (4 pages).
Rice (Oryza sativa) is a staple food for more than half of the world's population and a critical crop for global agriculture. Understanding the regulatory mechanisms that control gene expression in the rice genome is fundamental for advancing agricultural productivity and food security. Mechanistically, cis-regulatory elements (including promoters, enhancers, silencers, and insulators) are key DNA sequences whose activities determine the spatial and temporal expression patterns of nearby genes (Yocca and Edger, 2022; Schmitz et al., 2022).
Keywords: cis-regulatory elements; deep learning; chromatin profiles; agricultural productivity; rice genome; cis-regulatory sequence code; gene expression; food security
4. Predicting Diabetic Retinopathy Using a Machine Learning Approach Informed by Whole-Exome Sequencing Studies
Authors: Chongyang She, Wenying Fan, Yunyun Li, Yong Tao, Zufei Li. Biomedical and Environmental Sciences, 2025, No. 1, pp. 67-78 (12 pages).
Objective: To establish and validate a novel diabetic retinopathy (DR) risk-prediction model using a whole-exome sequencing (WES)-based machine learning (ML) method. Methods: WES was performed to identify potential single nucleotide polymorphism (SNP) or mutation sites in a DR pedigree comprising 10 members. A prediction model was established and validated in a cohort of 420 type 2 diabetic patients based on both genetic and demographic features. The contribution of each feature was assessed using Shapley additive explanation (SHAP) analysis. The efficacies of the models with and without SNPs were compared. Results: WES revealed that seven SNPs/mutations (rs116911833 in TRIM7, 1997T>C in LRBA, 1643T>C in PRMT10, rs117858678 in C9orf152, rs201922794 in CLDN25, rs146694895 in SH3GLB2, and rs201407189 in FANCC) were associated with DR. Notably, the model including rs146694895 and rs201407189 achieved better performance in predicting DR (accuracy: 80.2%; sensitivity: 83.3%; specificity: 76.7%; area under the receiver operating characteristic curve [AUC]: 80.0%) than the model without these SNPs (accuracy: 79.4%; sensitivity: 80.3%; specificity: 78.3%; AUC: 79.3%). Conclusion: Novel SNP sites associated with DR were identified in the DR pedigree. Inclusion of rs146694895 and rs201407189 significantly enhanced the performance of the ML-based DR prediction model.
Keywords: Machine learning; diabetic retinopathy; whole exome sequencing; type 2 diabetes mellitus
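A minimal sketch of the with/without-SNP comparison described above, using scikit-learn on synthetic stand-in data; the feature names, classifier choice, and 0.5 threshold are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.model_selection import train_test_split

# Hypothetical design matrix: demographic columns plus two binary SNP carrier-status columns.
rng = np.random.default_rng(1)
n = 420
demo = rng.normal(size=(n, 5))                 # stand-ins for age, HbA1c, duration, BP, BMI
snps = rng.integers(0, 2, size=(n, 2))         # stand-ins for rs146694895, rs201407189
y = (demo[:, 1] + 0.8 * snps.sum(axis=1) + rng.normal(size=n) > 1.0).astype(int)

def evaluate(X, y):
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)
    prob = clf.predict_proba(Xte)[:, 1]
    tn, fp, fn, tp = confusion_matrix(yte, (prob > 0.5).astype(int)).ravel()
    return {"sens": tp / (tp + fn), "spec": tn / (tn + fp), "auc": roc_auc_score(yte, prob)}

print("with SNPs   :", evaluate(np.hstack([demo, snps]), y))
print("without SNPs:", evaluate(demo, y))
```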
5. CADGen: Computer-aided design sequence construction with a guided codebook learning
Authors: Shengdi Zhou, Xiaoqiang Zan, Zhuqing Li, Bin Zhou. Digital Twins and Applications, 2024, No. 1, pp. 75-87 (13 pages).
Computer-aided design (CAD) software continues to be a crucial tool in digital twin applications and manufacturing, facilitating the design of various products. We present a novel CAD generation method, an agent that constructs CAD sequences containing sketch-and-extrude modelling operations efficiently and with high quality. Starting from the sketch and extrusion operation sequences, we utilise a transformer encoder to encode them into different disentangled codebooks that represent their distribution properties while considering their correlations. Then, a combination of auto-regressive and non-autoregressive samplers is trained to sample the code for CAD sequence construction. Extensive experiments demonstrate that our model generates diverse and high-quality CAD models. We also show some cases of real digital twin applications and indicate that our generated models can be used as the data source for a digital twin platform, exhibiting designers' potential.
Keywords: CAD sequence construction; code sample; computer-aided design; digital twins; hierarchical code learning
6. Bayesian network structure learning by dynamic programming algorithm based on node block sequence constraints
Authors: Chuchao He, Ruohai Di, Bo Li, Evgeny Neretin. CAAI Transactions on Intelligence Technology, 2024, No. 6, pp. 1605-1622 (18 pages).
The use of dynamic programming (DP) algorithms to learn Bayesian network structures is limited by their high space complexity and difficulty in learning the structure of large-scale networks. Therefore, this study proposes a DP algorithm based on node block sequence constraints. The proposed algorithm constrains the traversal of the parent graph by using the M-sequence matrix to considerably reduce the time consumption, and reduces the space complexity by pruning the traversal of the order graph using the node block sequence. Experimental results show that, compared with existing DP algorithms, the proposed algorithm obtains learning results more efficiently with less than 1% loss of accuracy, and can be used for learning larger-scale networks.
Keywords: Bayesian network (BN); dynamic programming (DP); node block sequence; strongly connected component (SCC); structure learning
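For orientation, a compact sketch of the classic dynamic-programming-over-subsets search for an optimal Bayesian network, i.e., the order-graph/parent-graph traversal that the paper prunes with its M-sequence matrix and node-block constraints (the pruning itself is not reproduced here); the toy score is illustrative only.

```python
from itertools import combinations

def best_parents(v, candidates, local_score, max_parents=2):
    """Best parent set for node v chosen from `candidates`, under a decomposable local score."""
    best = (local_score(v, frozenset()), frozenset())
    for k in range(1, max_parents + 1):
        for ps in combinations(candidates, k):
            s = local_score(v, frozenset(ps))
            if s > best[0]:
                best = (s, frozenset(ps))
    return best

def dp_structure_learning(nodes, local_score, max_parents=2):
    """DP over node subsets: best[S] = (optimal score of a network over S, last node, its parents)."""
    nodes = tuple(nodes)
    best = {frozenset(): (0.0, None, None)}
    for size in range(1, len(nodes) + 1):
        for subset in map(frozenset, combinations(nodes, size)):
            choices = []
            for v in subset:                        # v is the sink (last node) of an ordering of subset
                prev = subset - {v}
                s_v, ps = best_parents(v, prev, local_score, max_parents)
                choices.append((best[prev][0] + s_v, v, ps))
            best[subset] = max(choices)
    structure, subset = {}, frozenset(nodes)        # backtrack the sink ordering
    while subset:
        _, v, ps = best[subset]
        structure[v] = ps
        subset = subset - {v}
    return structure

# Toy decomposable score favouring the chain A -> B -> C (illustrative only).
preferred = {"A": frozenset(), "B": frozenset({"A"}), "C": frozenset({"B"})}
score = lambda v, ps: 1.0 if preferred[v] == ps else -0.1 * len(ps)
print(dp_structure_learning(["A", "B", "C"], score))
```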
7. A critical evaluation of deep-learning based phylogenetic inference programs using simulated datasets
Authors: Yixiao Zhu, Yonglin Li, Chuhao Li, Xing-Xing Shen, Xiaofan Zhou. Journal of Genetics and Genomics, 2025, No. 5, pp. 714-717 (4 pages).
Inferring phylogenetic trees from molecular sequences is a cornerstone of evolutionary biology. Many standard phylogenetic methods (such as maximum likelihood [ML]) rely on explicit models of sequence evolution and thus often suffer from model misspecification or inadequacy. Emerging deep learning (DL) techniques offer a powerful alternative. Deep learning employs multi-layered artificial neural networks to progressively transform input data into more abstract and complex representations. DL methods can autonomously uncover meaningful patterns from data, thereby bypassing potential biases introduced by predefined features (Franklin, 2005; Murphy, 2012). Recent efforts have aimed to apply deep neural networks (DNNs) to phylogenetics, with a growing number of applications in tree reconstruction (Suvorov et al., 2020; Zou et al., 2020; Nesterenko et al., 2022; Smith and Hahn, 2023; Wang et al., 2023), substitution model selection (Abadi et al., 2020; Burgstaller-Muehlbacher et al., 2023), and diversification rate inference (Voznica et al., 2022; Lajaaiti et al., 2023; Lambert et al., 2023). In phylogenetic tree reconstruction, PhyDL (Zou et al., 2020) and Tree_learning (Suvorov et al., 2020) are two notable DNN-based programs designed to infer unrooted quartet trees directly from alignments of four amino acid (AA) and DNA sequences, respectively.
Keywords: phylogenetic inference; explicit models; sequence evolution; deep learning (DL) techniques; molecular sequences; simulated datasets; phylogenetic methods; evolutionary biology
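A minimal PyTorch sketch of the quartet-classification framing used by programs such as PhyDL and Tree_learning: a small CNN maps a one-hot-encoded four-taxon alignment to one of the three unrooted quartet topologies. The layer sizes are placeholders, not the published architectures.

```python
import torch
import torch.nn as nn

class QuartetNet(nn.Module):
    """Classify a four-sequence DNA alignment into one of the three unrooted
    quartet topologies: 12|34, 13|24, 14|23."""
    def __init__(self):
        super().__init__()
        # Input channels = 4 taxa x 4 one-hot bases = 16, convolved along alignment sites.
        self.features = nn.Sequential(
            nn.Conv1d(16, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, 3)

    def forward(self, x):                  # x: (batch, 16, n_sites)
        return self.classifier(self.features(x).squeeze(-1))

# Toy forward/backward pass on random stand-ins for one-hot encoded alignments.
model = QuartetNet()
x = torch.randn(8, 16, 200)
labels = torch.randint(0, 3, (8,))
loss = nn.CrossEntropyLoss()(model(x), labels)
loss.backward()
print(loss.item())
```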
8. A Comparative Study of Data Representation Techniques for Deep Learning-Based Classification of Promoter and Histone-Associated DNA Regions
Authors: Sarab Almuhaideb, Najwa Altwaijry, Isra Al-Turaiki, Ahmad Raza Khan, Hamza Ali Rizvi. Computers, Materials & Continua, 2025, No. 11, pp. 3095-3128 (34 pages).
Many bioinformatics applications require determining the class of a newly sequenced deoxyribonucleic acid (DNA) sequence, making DNA sequence classification an integral step in bioinformatics analysis, where large biomedical datasets are transformed into valuable knowledge. Existing methods rely on a feature extraction step and suffer from high computational time requirements. In contrast, newer approaches leveraging deep learning have shown significant promise in enhancing accuracy and efficiency. In this paper, we investigate the performance of various deep learning architectures for DNA sequence classification: convolutional neural network (CNN), CNN-long short-term memory (CNN-LSTM), CNN-bidirectional long short-term memory (CNN-BiLSTM), residual network (ResNet), and InceptionV3. Various numerical and visual data representation techniques are utilized to represent the input datasets, including label encoding, k-mer sentence encoding, k-mer one-hot vectors, frequency chaos game representation (FCGR), and the 5-color map (ColorSquare). Three datasets are used for training the models: H3, H4, and the DNA Sequence Dataset (yeast, human, Arabidopsis thaliana). Experiments are performed to determine which combination of DNA representation and deep learning architecture yields improved performance for the classification task. Our results indicate that a hybrid CNN-LSTM neural network trained on DNA sequences represented as one-hot encoded k-mer sequences yields the best performance, achieving an accuracy of 92.1%.
Keywords: DNA sequence classification; deep learning; data visualization
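A minimal PyTorch sketch of the best-performing combination reported above, a CNN-LSTM over one-hot encoded k-mer sequences; the value of k, the layer sizes, and the two-class head are illustrative assumptions, not the paper's exact configuration.

```python
import itertools
import torch
import torch.nn as nn

K = 3
KMERS = {"".join(p): i for i, p in enumerate(itertools.product("ACGT", repeat=K))}

def one_hot_kmers(seq, k=K):
    """Slide a window of size k over the sequence and one-hot encode each k-mer (4^k channels)."""
    idx = torch.tensor([KMERS[seq[i:i + k]] for i in range(len(seq) - k + 1)])
    out = torch.zeros(len(idx), 4 ** k)
    out[torch.arange(len(idx)), idx] = 1.0
    return out                                   # shape: (positions, 4^k)

class CNNLSTM(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(4 ** K, 128, kernel_size=7, padding=3),
                                  nn.ReLU(), nn.MaxPool1d(2))
        self.lstm = nn.LSTM(128, 64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                        # x: (batch, positions, 4^k)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)   # convolve along sequence positions
        _, (hn, _) = self.lstm(h)
        return self.fc(hn[-1])

x = torch.stack([one_hot_kmers("ACGTAGCTAGGCTTACGATCGATCGTACGATC") for _ in range(4)])
print(CNNLSTM()(x).shape)                        # torch.Size([4, 2])
```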
9. Breed identification using breed-informative SNPs and machine learning based on whole genome sequence data and SNP chip data (Cited by 4)
Authors: Changheng Zhao, Dan Wang, Jun Teng, Cheng Yang, Xinyi Zhang, Xianming Wei, Qin Zhang. Journal of Animal Science and Biotechnology (SCIE, CAS, CSCD), 2023, No. 5, pp. 1941-1953 (13 pages).
Background: Breed identification is useful in a variety of biological contexts. It usually involves two stages, i.e., detection of breed-informative SNPs and breed assignment, and several methods have been proposed for both stages. However, the optimal combination of these methods remains unclear. In this study, using the whole genome sequence data available for 13 cattle breeds from Run 8 of the 1,000 Bull Genomes Project, we compared the combinations of three methods (Delta, FST, and In) for breed-informative SNP detection and five machine learning methods (KNN, SVM, RF, NB, and ANN) for breed assignment with respect to different reference population sizes and different numbers of most breed-informative SNPs. In addition, we evaluated the accuracy of breed identification using SNP chip data of different densities. Results: We found that all combinations performed quite well, with identification accuracies over 95% in all scenarios. However, no single combination performed best and remained robust across all scenarios. We therefore propose to integrate the three breed-informative detection methods, named DFI, and to integrate the three machine learning methods KNN, SVM, and RF, named KSR. We found that the combination of these two integrated methods outperformed the other combinations, with accuracies over 99% in most cases, and was very robust in all scenarios. The accuracies obtained from SNP chip data were only slightly lower than those from sequence data in most cases. Conclusions: The current study showed that the combination of DFI and KSR was the optimal strategy. Using sequence data resulted in higher accuracies than using chip data in most cases; however, the differences were generally small. In view of the cost of genotyping, using chip data is also a good option for breed identification.
Keywords: Breed identification; breed-informative SNPs; genomic breed composition; machine learning; whole genome sequence data
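A simplified sketch of the two-stage idea: rank SNPs by a per-SNP FST (one of the three informativeness measures the paper integrates into DFI) and feed the top-ranked SNPs to a soft-voting KNN/SVM/RF ensemble (their KSR). The data, FST formula, and SNP count are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def fst_per_snp(genotypes, breeds):
    """Simplified Wright's FST per SNP: (H_T - mean H_S) / H_T, from 0/1/2 genotype codes."""
    p_total = genotypes.mean(axis=0) / 2.0
    h_t = 2 * p_total * (1 - p_total)
    h_s = np.zeros_like(h_t)
    for b in np.unique(breeds):
        p_b = genotypes[breeds == b].mean(axis=0) / 2.0
        h_s += 2 * p_b * (1 - p_b) * np.mean(breeds == b)
    return np.where(h_t > 0, (h_t - h_s) / h_t, 0.0)

# Toy data: 300 animals, 3 breeds, 1000 SNPs coded 0/1/2.
rng = np.random.default_rng(2)
breeds = rng.integers(0, 3, 300)
freqs = rng.uniform(0.05, 0.95, (3, 1000))
genotypes = rng.binomial(2, freqs[breeds])

top = np.argsort(fst_per_snp(genotypes, breeds))[::-1][:100]   # keep 100 most informative SNPs
ksr = VotingClassifier([("knn", KNeighborsClassifier()),
                        ("svm", SVC(probability=True)),
                        ("rf", RandomForestClassifier(n_estimators=200))], voting="soft")
ksr.fit(genotypes[:200, top], breeds[:200])
print("holdout accuracy:", ksr.score(genotypes[200:, top], breeds[200:]))
```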
10. Length matters: Scalable fast encrypted internet traffic service classification based on multiple protocol data unit length sequence with composite deep learning (Cited by 3)
Authors: Zihan Chen, Guang Cheng, Ziheng Xu, Shuyi Guo, Yuyang Zhou, Yuyu Zhao. Digital Communications and Networks (SCIE, CSCD), 2022, No. 3, pp. 289-302 (14 pages).
As an essential function of encrypted Internet traffic analysis, encrypted traffic service classification can support both coarse-grained network service traffic management and security supervision. However, the traditional plaintext-based deep packet inspection (DPI) method cannot be applied to such a classification. Moreover, existing machine learning-based methods encounter two problems during feature selection: complex, costly feature processing and transport layer security (TLS) version discrepancy. In this paper, we consider differences between encryption network protocol stacks and propose a composite deep learning-based method for multiprotocol environments that uses a sliding multiple protocol data unit (multiPDU) length sequence as features, fully utilizing the Markov property in a multiPDU length sequence and maintaining suitability with a TLS 1.3 environment. Control experiments show that both a length-sensitive (LS) composite deep learning model using a capsule neural network and an LS-long short time memory model achieve satisfactory effectiveness in F1-score and performance. Owing to faster feature extraction, our method is suitable for actual network environments and superior to state-of-the-art methods.
Keywords: Encrypted internet traffic; encrypted traffic service classification; multiPDU length sequence; length-sensitive composite deep learning; TLS 1.3
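A minimal PyTorch sketch of classifying a flow from its sliding PDU-length sequence with an LSTM; the paper's length-sensitive composite models (including the capsule-network variant) and its feature pipeline are more involved, and the service count and normalization here are assumptions.

```python
import torch
import torch.nn as nn

class LengthSensitiveLSTM(nn.Module):
    """Classify an encrypted flow from a sliding sequence of PDU lengths (illustrative only)."""
    def __init__(self, n_services=10, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_services)

    def forward(self, lengths):                  # lengths: (batch, seq_len) in bytes
        x = (lengths.float() / 1500.0).unsqueeze(-1)   # normalize by a typical MTU
        _, (hn, _) = self.lstm(x)
        return self.head(hn[-1])

flows = torch.randint(40, 1500, (16, 30))        # 16 flows, 30 PDU lengths each
logits = LengthSensitiveLSTM()(flows)
print(logits.shape)                              # torch.Size([16, 10])
```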
11. Non-Supervised Learning for Spread Spectrum Signal Pseudo-Noise Sequence Acquisition
Authors: Hao Cheng, Na Yu, Tai-Jun Wang. Journal of Electronic Science and Technology (CAS, CSCD), 2015, No. 1, pp. 83-86 (4 pages).
An idea for estimating the direct sequence spread spectrum (DSSS) signal pseudo-noise (PN) sequence is presented. Without a priori knowledge about the DSSS signal under non-cooperative conditions, we propose a self-organizing feature map (SOFM) neural network algorithm to detect and identify the PN sequence. A non-supervised learning algorithm is proposed according to the Kohonen rule in SOFM. The blind algorithm can also estimate the PN sequence at a low signal-to-noise ratio (SNR), and computer simulation demonstrates that the algorithm is effective. Compared with the traditional correlation algorithm based on slip correlation, the proposed algorithm's bit error rate (BER) and complexity are lower.
Keywords: Blind estimation; direct sequence spread spectrum signal; non-supervised learning; pseudo-noise sequence
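A heavily simplified NumPy sketch of the Kohonen/SOFM idea: unit weight vectors trained on noisy chip-rate segments drift toward the underlying PN pattern. Real PN acquisition must also handle code phase, chip timing, and sign ambiguity, none of which is modeled here.

```python
import numpy as np

def train_sofm(segments, n_units=8, epochs=50, lr0=0.5, seed=0):
    """Kohonen self-organizing feature map: unit weights are pulled toward recurring
    chip-rate segments of the received DSSS baseband signal."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=(n_units, segments.shape[1]))
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                  # decaying learning rate
        for x in rng.permutation(segments):
            winner = np.argmin(np.linalg.norm(w - x, axis=1))
            for j in range(n_units):                       # 1-D neighborhood kernel
                w[j] += lr * np.exp(-abs(j - winner)) * (x - w[j])
    return w

# Toy usage: noisy repetitions of a +/-1 PN segment of length 31 at low SNR.
rng = np.random.default_rng(1)
pn = rng.choice([-1.0, 1.0], size=31)
observations = pn + 0.8 * rng.normal(size=(200, 31))
units = train_sofm(observations)
best = units[np.argmin(np.linalg.norm(units - observations.mean(axis=0), axis=1))]
print("chip agreement with true PN:", np.mean(np.sign(best) == pn))
```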
12. Deep-Learning-Based Production Decline Curve Analysis in the Gas Reservoir through Sequence Learning Models
Authors: Shaohua Gu, Jiabao Wang, Liang Xue, Bin Tu, Mingjin Yang, Yuetian Liu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2022, No. 6, pp. 1579-1599 (21 pages).
Production performance prediction of tight gas reservoirs is crucial to the estimation of ultimate recovery, which has an important impact on gas field development planning and economic evaluation. Owing to its simplicity, the decline curve analysis method has been widely used to predict production performance. The advancement of deep learning methods provides an intelligent way of analyzing production performance in tight gas reservoirs. In this paper, a sequence learning method to improve the accuracy and efficiency of tight gas production forecasting is proposed. The sequence learning methods used in the production performance analysis herein include the recurrent neural network (RNN), long short-term memory (LSTM) neural network, and gated recurrent unit (GRU) neural network, and their performance in tight gas reservoir production prediction is investigated and compared. To further improve the performance of the sequence learning method, the hyperparameters of the sequence learning methods are optimized through a particle swarm optimization algorithm, which greatly simplifies the optimization process of the neural network model in an automated manner. Results show that the optimized GRU and RNN models have more compact neural network structures than the LSTM model and that the GRU is more efficiently trained. The predictive performance of LSTM and GRU is similar, and both are better than the RNN and the decline curve analysis model; they can thus be used to predict tight gas production.
Keywords: Tight gas; production forecasting; deep learning; sequence learning models
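A minimal PyTorch sketch of one-step-ahead rate forecasting with a GRU on a synthetic decline curve; the hidden size, window length, and training schedule are placeholders for the hyperparameters the paper tunes with particle swarm optimization.

```python
import torch
import torch.nn as nn

class GRUForecaster(nn.Module):
    """One-step-ahead production-rate forecaster (layer sizes are illustrative)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, window, 1) past normalized rates
        h, _ = self.gru(x)
        return self.out(h[:, -1])               # predicted next-step rate

# Toy data: a simplified hyperbolic decline curve q(t) = q_i / (1 + b*D_i*t).
t = torch.arange(0.0, 200.0)
rate = 1.0 / (1.0 + 0.05 * t)
window = 12
X = torch.stack([rate[i:i + window] for i in range(len(rate) - window)]).unsqueeze(-1)
y = rate[window:].unsqueeze(-1)

model = GRUForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("training MSE:", loss.item())
```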
13. Aortic Dissection Diagnosis Based on Sequence Information and Deep Learning
Authors: Haikuo Peng, Yun Tan, Hao Tang, Ling Tan, Xuyu Xiang, Yongjun Wang, Neal N. Xiong. Computers, Materials & Continua (SCIE, EI), 2022, No. 11, pp. 2757-2771 (15 pages).
Aortic dissection (AD) is one of the most serious diseases, with high mortality, and its diagnosis mainly depends on computed tomography (CT) results. Most existing automatic diagnosis methods for AD are only suitable for AD recognition, usually require preselection of CT images, and cannot further classify the dissection into different types. In this work, we constructed a dataset of 105 cases with a total of 49,021 slices, including 31,043 slices with expert-level annotation, and propose a two-stage AD diagnosis structure based on sequence information and deep learning. The proposed region of interest (RoI) extraction algorithm based on sequence information (RESI) achieves high precision for RoI identification in the first stage. DenseNet-121 is then applied for further diagnosis. Notably, the proposed method can judge the type of AD without preselection of CT images. The experimental results show that the accuracy of Stanford-type classification of AD is 89.19%, and the accuracy at the slice level reaches 97.41%, outperforming state-of-the-art methods. The method can provide important decision-making information for determining further surgical treatment plans for patients.
Keywords: Aortic dissection; deep learning; sequence information; RoI
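A minimal torchvision sketch of the second stage described above: DenseNet-121 adapted to single-channel CT slices with a three-class head. The class count and single-channel handling are assumptions, and the RESI RoI-extraction stage is not shown.

```python
import torch
import torch.nn as nn
from torchvision import models

# DenseNet-121 classifying candidate RoI slices into, e.g., {no dissection, Stanford A, Stanford B};
# weights=None keeps the example self-contained (no pretrained download).
model = models.densenet121(weights=None)
model.features.conv0 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # 1-channel CT
model.classifier = nn.Linear(model.classifier.in_features, 3)

slices = torch.randn(4, 1, 224, 224)       # a mini-batch of preprocessed CT slices
print(model(slices).shape)                 # torch.Size([4, 3])
```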
14. A Service Composition Approach Based on Sequence Mining for Migrating E-learning Legacy System to SOA (Cited by 1)
Authors: Zhuo Zhang, Dong-Dai Zhou, Hong-Ji Yang, Shao-Chun Zhong. International Journal of Automation and Computing (EI), 2010, No. 4, pp. 584-595 (12 pages).
With the fast development of business logic and information technology, today's best solutions are tomorrow's legacy systems. In China, the situation in the education domain follows the same path. Currently, there exist a number of e-learning legacy assets with accumulated practical business experience, such as program resources, usage behaviour data resources, and so on. In order to use these legacy assets adequately and efficiently, we should not only utilize the explicit assets but also discover the hidden assets. The usage behaviour data resource is the set of practical operation sequences requested by all users. The hidden patterns in this data resource capture users' practical experience, which can benefit service composition in service-oriented architecture (SOA) migration. Namely, these discovered patterns become the candidate composite services (coarse-grained) in SOA systems. Although data mining techniques have been used for software engineering tasks, little is known about how they can be used for service composition when migrating an e-learning legacy system (MELS) to SOA. In this paper, we propose a service composition approach based on sequence mining techniques for MELS. Composite services found by this approach complement the business logic analysis results of MELS. The core of this approach is to develop an appropriate sequence mining algorithm for mining related data collected from an e-learning legacy system. According to the features of the execution trace data on usage behaviour from this e-learning legacy system and the needs of further pattern analysis, we propose a sequential mining algorithm to mine this kind of data of the legacy system. For validation, this approach has been applied to the corresponding real data collected from the e-learning legacy system; meanwhile, investigation questionnaires were used to collect satisfaction data. The investigation result is 90% consistent with the result obtained through our approach.
Keywords: Service composition; e-learning; sequence mining algorithm; service-oriented architecture (SOA); legacy system
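A small pure-Python sketch of the underlying idea: mine contiguous operation subsequences that recur across user traces as candidates for coarse-grained composite services. The trace data and support threshold are made up; the paper's algorithm is tailored to its execution-trace format.

```python
from collections import Counter

def frequent_operation_sequences(traces, min_support=2, min_len=2, max_len=5):
    """Count contiguous operation subsequences across user traces and keep those
    occurring in at least `min_support` traces."""
    support = Counter()
    for trace in traces:
        seen = set()
        for n in range(min_len, max_len + 1):
            for i in range(len(trace) - n + 1):
                seen.add(tuple(trace[i:i + n]))
        support.update(seen)                      # count each pattern once per trace
    return {p: c for p, c in support.items() if c >= min_support}

# Hypothetical e-learning usage traces (operation names are made up).
traces = [
    ["login", "browse_course", "open_lesson", "submit_quiz"],
    ["login", "browse_course", "open_lesson", "post_forum"],
    ["login", "search", "browse_course", "open_lesson", "submit_quiz"],
]
for pattern, count in sorted(frequent_operation_sequences(traces).items(), key=lambda kv: -kv[1]):
    print(count, "x", " -> ".join(pattern))
```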
15. Multiple Action Sequence Learning and Automatic Generation for a Humanoid Robot Using RNNPB and Reinforcement Learning
Authors: Takashi Kuremoto, Koichi Hashiguchi, Keita Morisaki, Shun Watanabe, Kunikazu Kobayashi, Shingo Mabu, Masanao Obayashi. Journal of Software Engineering and Applications, 2012, No. 12, pp. 128-133 (6 pages).
This paper proposes how to learn and generate multiple action sequences of a humanoid robot. First, all the basic action sequences, also called primitive behaviors, are learned by a recurrent neural network with parametric bias (RNNPB), and the values of the internal parametric bias (PB) nodes that determine which primitive behavior is output are obtained. The training of the RNN uses the back propagation through time (BPTT) method. After that, to generate the learned behaviors, or a more complex behavior that is a combination of the primitive behaviors, a reinforcement learning algorithm, Q-learning (QL), is adopted to determine which PB value is appropriate for the generation. Finally, the effectiveness of the proposed method was confirmed by experimental results on a real humanoid robot.
Keywords: RNNPB; humanoid robot; BPTT; reinforcement learning; multiple action sequences
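A minimal tabular Q-learning sketch of the selection step described above: the agent learns which stored PB vector to hand to the RNNPB at each step of a task. The reward signal here is a placeholder for scoring the robot's actual motion.

```python
import numpy as np

# Tabular Q-learning that picks which learned PB index to use at each task step.
rng = np.random.default_rng(0)
n_pb, n_steps = 4, 3                      # 4 learned primitive behaviors, 3-step task
Q = np.zeros((n_steps, n_pb))
target = [2, 0, 3]                        # hidden "correct" primitive at each step (toy reward)

alpha, gamma, eps = 0.2, 0.9, 0.2
for episode in range(1000):
    for step in range(n_steps):
        a = rng.integers(n_pb) if rng.random() < eps else int(np.argmax(Q[step]))
        reward = 1.0 if a == target[step] else 0.0
        next_best = np.max(Q[step + 1]) if step + 1 < n_steps else 0.0
        Q[step, a] += alpha * (reward + gamma * next_best - Q[step, a])

print("selected PB indices:", [int(np.argmax(Q[s])) for s in range(n_steps)])  # expected: [2, 0, 3]
```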
16. Single Machine Scheduling with Time-Dependent Learning Effect and Non-Linear Past-Sequence-Dependent Setup Times (Cited by 1)
Authors: Yuling Yeh, Chinyao Low, Wen-Yi Lin. Journal of Applied Mathematics and Physics, 2015, No. 1, pp. 10-15 (6 pages).
This paper studies a single machine scheduling problem with time-dependent learning and setup times. Time-dependent learning means that the actual processing time of a job is a function of the sum of the normal processing times of the jobs already scheduled. The setup time of a job is proportional to the length of the already processed jobs, that is, a past-sequence-dependent (psd) setup time. We show that the addressed problem remains polynomially solvable for the objectives of minimizing the total completion time and minimizing the total weighted completion time. We also show that the smallest processing time (SPT) rule provides the optimal sequence for the addressed problem.
Keywords: Scheduling; time-dependent learning; setup time; past-sequence-dependent; total completion time
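A small sketch that evaluates a job order under an assumed time-dependent learning effect and a non-linear past-sequence-dependent setup time, and compares the SPT order against the reverse order. The specific functional forms and exponents are illustrative assumptions, not necessarily the paper's exact model.

```python
def total_completion_time(jobs, a=-0.3, b=0.1, c=1.5):
    """Total completion time for a given job order, assuming the learning model
    p_actual = p * (1 + sum of previously scheduled normal times)**a and a non-linear
    psd setup s = b * (sum of previously processed actual times)**c (a, b, c illustrative)."""
    clock = done_normal = done_actual = total = 0.0
    for p in jobs:
        setup = b * done_actual ** c
        actual = p * (1.0 + done_normal) ** a       # learning effect shortens later jobs
        clock += setup + actual
        total += clock
        done_normal += p
        done_actual += actual
    return total

jobs = [5.0, 2.0, 9.0, 4.0, 1.0]
print("SPT order:", total_completion_time(sorted(jobs)))
print("LPT order:", total_completion_time(sorted(jobs, reverse=True)))
```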
17. Sequence-To-Sequence Learning for Online Imputation of Sensory Data
Authors: Kaitai Tong, Teng Li. Instrumentation, 2019, No. 2, pp. 63-70 (8 pages).
Online sensing can provide useful information in monitoring applications, for example, machine health monitoring, structural condition monitoring, environmental monitoring, and many more. Missing data is generally a significant issue in the sensory data collected online by sensing systems, and it may affect the goals of monitoring programs. In this paper, a sequence-to-sequence learning model based on a recurrent neural network (RNN) architecture is presented. In the proposed method, multivariate time series of the monitored parameters are embedded into the neural network through layer-by-layer encoders, where the hidden features of the inputs are adaptively extracted. Afterwards, predictions of the missing data are generated by network decoders, which produce one-step-ahead predictive data sequences of the monitored parameters. The prediction performance of the proposed model is validated on a real-world sensory dataset. The experimental results demonstrate the performance of the proposed RNN encoder-decoder model and its capability in sequence-to-sequence learning for online imputation of sensory data.
Keywords: Data imputation; recurrent neural network; sequence-to-sequence learning; sequence prediction
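A minimal PyTorch sketch of the encoder-decoder idea: a GRU encoder summarizes the observed multivariate window, and a GRU decoder rolls out one-step-ahead predictions that can stand in for missing readings. Layer sizes, sensor count, and horizon are illustrative.

```python
import torch
import torch.nn as nn

class Seq2SeqImputer(nn.Module):
    """Encode a multivariate sensor window, then decode a short one-step-ahead sequence."""
    def __init__(self, n_sensors=6, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(n_sensors, hidden, num_layers=2, batch_first=True)
        self.decoder = nn.GRU(n_sensors, hidden, num_layers=2, batch_first=True)
        self.proj = nn.Linear(hidden, n_sensors)

    def forward(self, history, horizon=5):
        _, h = self.encoder(history)                     # summarize the observed window
        step = history[:, -1:, :]                        # seed decoding with the last observation
        preds = []
        for _ in range(horizon):                         # recursive one-step-ahead decoding
            out, h = self.decoder(step, h)
            step = self.proj(out)
            preds.append(step)
        return torch.cat(preds, dim=1)                   # (batch, horizon, n_sensors)

window = torch.randn(8, 24, 6)                           # 8 windows, 24 time steps, 6 sensors
print(Seq2SeqImputer()(window).shape)                    # torch.Size([8, 5, 6])
```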
18. A spatiotemporal deep learning method for excavation-induced wall deflections (Cited by 3)
Authors: Yuanqin Tao, Shaoxiang Zeng, Honglei Sun, Yuanqiang Cai, Jinzhang Zhang, Xiaodong Pan. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2024, No. 8, pp. 3327-3338 (12 pages).
Data-driven approaches such as neural networks are increasingly used for deep excavations due to the growing amount of monitoring data available in practical projects. However, most neural network models only use the data from a single monitoring point and neglect the spatial relationships between multiple monitoring points. Besides, most models lack flexibility in providing predictions for multiple days after monitoring activity. This study proposes a sequence-to-sequence (seq2seq) two-dimensional (2D) convolutional long short-term memory neural network (S2SCL2D) for predicting the spatiotemporal wall deflections induced by deep excavations. The model utilizes the data from all monitoring points on the entire wall and extracts spatiotemporal features from the data by combining 2D convolutional layers and long short-term memory (LSTM) layers. The S2SCL2D model achieves long-term prediction of wall deflections through a recursive seq2seq structure. The excavation depth, which has a significant impact on wall deflections, is also considered using a feature fusion method. An excavation project in Hangzhou, China, is used to illustrate the proposed model. The results demonstrate that the S2SCL2D model has superior prediction accuracy and robustness compared with the LSTM and S2SCL1D (one-dimensional) models. The prediction model also demonstrates strong generalizability when applied to an adjacent excavation. Based on the long-term prediction results, practitioners can plan and allocate resources in advance to address potential engineering issues.
Keywords: Braced excavation; wall deflections; deep learning; convolutional layer; long short-term memory (LSTM); sequence to sequence (seq2seq)
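A minimal Keras sketch of mapping a history of wall-deflection "frames" (depth x monitoring-point grids) to the next frame with ConvLSTM2D layers; the grid shape is invented, and the paper's recursive seq2seq decoding and excavation-depth fusion are not reproduced here.

```python
import numpy as np
from tensorflow.keras import layers, models

# Each frame is the deflection field on a depth x monitoring-point grid; the model maps a
# 10-step history to the next step's field (shapes are illustrative).
n_steps, n_depth, n_points = 10, 24, 8

model = models.Sequential([
    layers.Input(shape=(n_steps, n_depth, n_points, 1)),
    layers.ConvLSTM2D(16, kernel_size=(3, 3), padding="same", return_sequences=True),
    layers.ConvLSTM2D(16, kernel_size=(3, 3), padding="same", return_sequences=False),
    layers.Conv2D(1, kernel_size=(3, 3), padding="same"),   # next-step deflection field
])
model.compile(optimizer="adam", loss="mse")

history = np.random.rand(32, n_steps, n_depth, n_points, 1).astype("float32")
next_frame = np.random.rand(32, n_depth, n_points, 1).astype("float32")
model.fit(history, next_frame, epochs=1, batch_size=8, verbose=0)
print(model.predict(history[:1], verbose=0).shape)          # (1, 24, 8, 1)
```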
19. Assessments of Data-Driven Deep Learning Models on One-Month Predictions of Pan-Arctic Sea Ice Thickness (Cited by 1)
Authors: Chentao Song, Jiang Zhu, Xichen Li. Advances in Atmospheric Sciences (SCIE, CAS, CSCD), 2024, No. 7, pp. 1379-1390 (12 pages).
In recent years, deep learning methods have gradually been applied to prediction tasks related to Arctic sea ice concentration, but relatively little research has been conducted for larger spatial and temporal scales, mainly due to the limited time coverage of observations and reanalysis data. Meanwhile, deep learning predictions of sea ice thickness (SIT) have yet to receive ample attention. In this study, two data-driven deep learning (DL) models are built based on the ConvLSTM and fully convolutional U-net (FC-Unet) algorithms, trained on CMIP6 historical simulations for transfer learning, and fine-tuned using reanalysis/observations. These models enable monthly predictions of Arctic SIT without considering the complex physical processes involved. Through comprehensive assessments of prediction skill by season and region, the results suggest that using a broader set of CMIP6 data for transfer learning, as well as incorporating multiple climate variables as predictors, contributes to better prediction results, although both DL models can effectively predict the spatiotemporal features of SIT anomalies. Regarding the predicted SIT anomalies of the FC-Unet model, the spatial correlations with reanalysis reach an average level of 89% over all months, while the temporal anomaly correlation coefficients are close to unity in most cases. The models also demonstrate robust performance in predicting SIT and SIE during extreme events. The effectiveness and reliability of the proposed deep transfer learning models in predicting Arctic SIT can facilitate more accurate pan-Arctic predictions, aiding climate change research and real-time business applications.
Keywords: Arctic sea ice thickness; deep learning; spatiotemporal sequence prediction; transfer learning
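A miniature PyTorch illustration of the transfer-learning recipe: pretrain on plentiful simulation-style data, then fine-tune the same weights on scarcer reanalysis-style data at a lower learning rate. The model is a generic stand-in, not the paper's ConvLSTM or FC-Unet.

```python
import torch
import torch.nn as nn

# Multi-variable gridded predictors in, single SIT field out (channel counts are assumptions).
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 1, 3, padding=1))
loss_fn = nn.MSELoss()

def run(inputs, targets, lr, epochs):
    """Train the shared model on one data source and return the final loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        opt.step()
    return loss.item()

cmip_x, cmip_y = torch.randn(64, 3, 32, 32), torch.randn(64, 1, 32, 32)   # simulation stand-ins
rean_x, rean_y = torch.randn(16, 3, 32, 32), torch.randn(16, 1, 32, 32)   # reanalysis stand-ins
print("pretrain loss :", run(cmip_x, cmip_y, lr=1e-3, epochs=20))
print("fine-tune loss:", run(rean_x, rean_y, lr=1e-4, epochs=10))
```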