Journal Articles
18 articles found
1. Parallel Learning: a Perspective and a Framework (Cited by 39)
Authors: Li Li, Yilun Lin, Nanning Zheng, Fei-Yue Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2017, No. 3, pp. 389-395.
The development of machine learning for complex systems is currently hindered by two problems. The first is the inefficiency of exploration in the state and action spaces, which makes some state-of-the-art data-driven algorithms data-hungry. The second is the lack of a general theory that can be used to analyze and implement a complex learning system. In this paper we propose a general method that addresses both issues. We combine the concepts of descriptive learning, predictive learning, and prescriptive learning into a unified framework, so as to build a parallel system that allows the learning system to improve itself through self-boosting. Formulating a new perspective on data, knowledge and action, we provide a new methodology called parallel learning for designing machine learning systems for real-world problems.
Keywords: descriptive learning, machine learning, parallel learning, parallel systems, predictive learning, prescriptive learning
2. Parallel Reinforcement Learning: A Framework and Case Study (Cited by 10)
Authors: Teng Liu, Bin Tian, Yunfeng Ai, Li Li, Dongpu Cao, Fei-Yue Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2018, No. 4, pp. 827-835.
In this paper, a new machine learning framework, called parallel reinforcement learning, is developed for complex system control. To overcome the data deficiency of current data-driven algorithms, a parallel system is built to improve the complex learning system by self-guidance. Based on Markov chain (MC) theory, we combine transfer learning, predictive learning, deep learning and reinforcement learning to handle the data and action processes and to express the knowledge. The parallel reinforcement learning framework is formulated, and several case studies for real-world problems are finally introduced.
Keywords: deep learning, machine learning, parallel reinforcement learning, parallel system, predictive learning, transfer learning
3. Computational Machine Learning Representation for the Flexoelectricity Effect in Truncated Pyramid Structures (Cited by 10)
Authors: Khader M. Hamdia, Hamid Ghasemi, Xiaoying Zhuang, Naif Alajlan, Timon Rabczuk. Computers, Materials & Continua (SCIE, EI), 2019, No. 4, pp. 79-87.
In this study, a machine learning representation is introduced to evaluate the flexoelectricity effect in a truncated pyramid nanostructure under compression. A Non-Uniform Rational B-spline (NURBS) based IGA formulation is employed to model the flexoelectricity. We investigate a 2D system with an isotropic linear elastic material under plane strain conditions, discretized by a 45×30 grid of B-spline elements. Six input parameters are selected to construct a deep neural network (DNN) model: the Young's modulus, two dielectric permittivity constants, the longitudinal and transversal flexoelectric coefficients, and the order of the shape function. The outputs of interest are the strain in the stress direction and the electric potential due to flexoelectricity. The dataset is generated from forward analyses of the flexoelectric model. 80% of the dataset is used for training, while the remainder is used for validation by checking the mean squared error. In addition to the input and output layers, the developed DNN model is composed of four hidden layers. The results show the high prediction capability of the proposed method at much lower computational cost than the numerical model.
Keywords: flexoelectricity, isogeometric analysis, machine learning prediction, deep neural networks
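As a rough illustration of the surrogate-modelling workflow described above (six inputs, four hidden layers, an 80/20 train/validation split scored by mean squared error), the following Python sketch uses scikit-learn's MLPRegressor. It is not the authors' code: the layer widths, the activation and the synthetic data standing in for the IGA forward analyses are assumptions.

```python
# Minimal sketch (not the authors' code): a feed-forward DNN with six inputs,
# four hidden layers and two outputs, trained on an 80/20 split and scored by MSE.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 6))     # six inputs: E, two permittivities, two flexo coefficients, shape-function order
Y = np.column_stack([X[:, 0] * X[:, 3], X[:, 1] + X[:, 4]])   # placeholder targets: strain and electric potential

X_tr, X_va, Y_tr, Y_va = train_test_split(X, Y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64, 32, 16),      # four hidden layers (assumed widths)
                     activation="relu", max_iter=2000, random_state=0)
model.fit(X_tr, Y_tr)
print("validation MSE:", mean_squared_error(Y_va, model.predict(X_va)))
```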
4. Multimodality Prediction of Chaotic Time Series with Sparse Hard-Cut EM Learning of the Gaussian Process Mixture Model (Cited by 1)
Authors: 周亚同, 樊煜, 陈子一, 孙建成. Chinese Physics Letters (SCIE, CAS, CSCD), 2017, No. 5, pp. 22-26.
The contribution of this work is twofold: (1) a multimodality prediction method for chaotic time series based on the Gaussian process mixture (GPM) model is proposed, which employs a divide-and-conquer strategy. It automatically divides the chaotic time series into multiple modalities with different extrinsic patterns and intrinsic characteristics, and thus can fit the chaotic time series more precisely. (2) An effective sparse hard-cut expectation maximization (SHC-EM) learning algorithm for the GPM model is proposed to improve the prediction performance. SHC-EM replaces a large learning sample set with fewer pseudo inputs, accelerating model learning based on these pseudo inputs. Experiments on the Lorenz and Chua time series demonstrate that the proposed method yields not only accurate multimodality prediction but also the prediction confidence interval. SHC-EM outperforms traditional variational learning in terms of both prediction accuracy and speed. In addition, SHC-EM is more robust and less susceptible to noise than variational learning.
Keywords: Gaussian process mixture (GPM), multimodality prediction, chaotic time series, sparse hard-cut EM (SHC-EM)
5. A Novel Hidden Danger Prediction Method in Cloud-Based Intelligent Industrial Production Management Using Timeliness Managing Extreme Learning Machine
Authors: Xiong Luo, Xiaona Yang, Weiping Wang, Xiaohui Chang, Xinyan Wang, Zhigang Zhao. China Communications (SCIE, CSCD), 2016, No. 7, pp. 74-82.
To prevent possible accidents, the study of data-driven analytics to predict hidden dangers in cloud service-based intelligent industrial production management has recently been the subject of increasing interest. A machine learning algorithm that uses a timeliness managing extreme learning machine is utilized in this article to achieve this prediction. Compared with traditional learning algorithms, the extreme learning machine (ELM) exhibits high performance because of its unique combination of high generalization capability and fast learning speed. Timeliness managing ELM is proposed by incorporating a timeliness management scheme into ELM. When using the timeliness managing ELM scheme to predict hidden dangers, newly incremental data can be added with priority over the historical data to maximize the contribution of the newly incremental training data, because the incremental data may contribute reasonable weights to represent the current production situation, according to practical analysis of accidents in some industrial production settings. Experimental results from a coal mine show that the use of timeliness managing ELM can improve the prediction accuracy of hidden dangers with better stability compared with other similar machine learning methods.
Keywords: prediction, incremental learning, extreme learning machine, cloud service
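The following Python sketch illustrates the basic extreme learning machine idea behind the method above: random, untrained hidden weights and closed-form least-squares output weights. The exponential sample weighting that emphasizes newly arrived data is only an assumed stand-in for the paper's timeliness management scheme, and the data are synthetic.

```python
# Minimal ELM sketch: random hidden layer, least-squares output weights,
# optional per-sample weights to emphasize recent (incremental) data.
import numpy as np

def elm_fit(X, y, n_hidden=100, sample_weight=None, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))      # random input weights (never trained)
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    if sample_weight is not None:                    # weight rows to emphasize newer data
        sw = np.sqrt(sample_weight)[:, None]
        H, y = H * sw, y * sw.ravel()
    beta = np.linalg.pinv(H) @ y                     # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
w = np.exp(np.linspace(-2.0, 0.0, len(y)))           # assumed: newer samples weighted more
W, b, beta = elm_fit(X, y, sample_weight=w)
print(elm_predict(X[-5:], W, b, beta))
```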
6. A Hybrid Deep Learning Approach for Green Energy Forecasting in Asian Countries
Authors: Tao Yan, Javed Rashid, Muhammad Shoaib Saleem, Sajjad Ahmad, Muhammad Faheem. Computers, Materials & Continua (SCIE, EI), 2024, No. 11, pp. 2685-2708.
Electricity is essential for keeping power networks balanced between supply and demand, especially since it is costly to store. This article examines deep learning methods for forecasting how much green energy different Asian countries will produce. The main goal is to make reliable and accurate predictions that can support the planning of new power plants to meet rising demand. A new deep learning model, the Green-electrical Production Ensemble (GP-Ensemble), is proposed. It combines three types of neural networks, convolutional neural networks (CNNs), gated recurrent units (GRUs), and feedforward neural networks (FNNs), with the aim of improving prediction accuracy. The 1965–2023 dataset covers green energy generation statistics from ten Asian countries. Given the rising energy supply-demand mismatch, the primary goal is to develop the best model for predicting future power production. The GP-Ensemble model outperforms the individual models (GRU, FNN, and CNN) and alternative approaches such as fully convolutional networks (FCN) and other ensemble models on the mean squared error (MSE), mean absolute error (MAE) and root mean squared error (RMSE) metrics, achieving an MSE of 0.0631, an MAE of 0.1754, and an RMSE of 0.2383. The results may inform policy and enhance energy management.
Keywords: green energy, advanced predictive techniques, convolutional neural networks (CNNs), gated recurrent units (GRUs), deep learning for electricity prediction, green-electrical production ensemble technique
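The sketch below illustrates, in Keras, the general shape of a three-branch CNN/GRU/FNN forecaster of the kind the abstract describes. It is not the published GP-Ensemble: the look-back window, the layer sizes, the merge-by-concatenation and the toy data are all assumptions.

```python
# Minimal sketch (assumed architecture) of a three-branch CNN/GRU/FNN model
# producing a one-step forecast from a sliding window of past annual values.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

window = 12                                   # assumed look-back window (years)
inp = layers.Input(shape=(window, 1))
cnn = layers.GlobalAveragePooling1D()(layers.Conv1D(16, 3, activation="relu")(inp))
gru = layers.GRU(16)(inp)
fnn = layers.Dense(16, activation="relu")(layers.Flatten()(inp))
merged = layers.Concatenate()([cnn, gru, fnn])
out = layers.Dense(1)(merged)                 # next-year production
model = Model(inp, out)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# toy data standing in for the 1965-2023 per-country series
X = np.random.rand(256, window, 1).astype("float32")
y = X.mean(axis=1)
model.fit(X, y, epochs=2, verbose=0)
print(model.evaluate(X, y, verbose=0))        # [MSE, MAE]
```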
7. Prediction of Abidjan Groundwater Quality Using Machine Learning Approaches: An Exploratory Study
Authors: Dion Gueu, Edith Kressy. Intelligent Control and Automation, 2024, No. 4, pp. 215-248.
Continuous groundwater quality monitoring poses significant challenges affecting the environment and public health. Groundwater in Abidjan, specifically from the Continental Terminal (CT), is the primary supply source. Therefore, ensuring safe drinking water and environmental protection requires a thorough evaluation and surveillance of this resource. Our research evaluates the quality of the CT groundwater in Abidjan using the water quality index (WQI) based on the analytical hierarchy process (AHP). This study also explores the application of machine learning predictions as a time-efficient and cost-effective approach for groundwater resource management. To this end, three machine learning regression algorithms (Ridge, Lasso, and Gradient Boosting (GB)) were implemented and compared. The AHP-based WQI results classified 98.98% of samples as "good" water quality, while 0.68% and 0.34% of samples were categorized as "excellent" and "poor" water, respectively. The prediction performance evaluation showed that GB outperformed the other models with the highest accuracy and consistency (MSE = 0.097, RMSE = 0.300, r = 0.766, rs = 0.757, and τ = 0.804). In contrast, the Lasso model recorded the lowest prediction accuracy, with an MSE of 148.921, an RMSE of 6.828, and consistency parameters of r = 0.397, rs = 0.079, and τ = 0.082. Gradient Boosting regression effectively learns nonlinear effects and interactions by iteratively fitting new models to the errors of previous models, enabling a more realistic groundwater quality prediction. This study provides a novel perspective for improving groundwater quality management in Abidjan, promoting real-time tracking and risk mitigation.
Keywords: groundwater, AHP weight-based WQI, machine learning prediction, regression models
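A minimal scikit-learn sketch of the model comparison described above (Ridge vs. Lasso vs. Gradient Boosting scored by MSE/RMSE); the features and the synthetic WQI target are placeholders for the Abidjan hydro-chemical data.

```python
# Minimal sketch comparing Ridge, Lasso and Gradient Boosting for WQI prediction.
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))                                    # placeholder physico-chemical parameters
wqi = 50 + 5 * X[:, 0] - 3 * X[:, 1] ** 2 + rng.normal(scale=2, size=300)   # nonlinear toy WQI

X_tr, X_te, y_tr, y_te = train_test_split(X, wqi, test_size=0.25, random_state=0)
for name, reg in [("Ridge", Ridge()), ("Lasso", Lasso()),
                  ("GB", GradientBoostingRegressor(random_state=0))]:
    reg.fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, reg.predict(X_te))
    print(f"{name}: MSE={mse:.3f} RMSE={np.sqrt(mse):.3f}")
```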
8. A Distributed Framework for Large-scale Protein-protein Interaction Data Analysis and Prediction Using MapReduce (Cited by 3)
Authors: Lun Hu, Shicheng Yang, Xin Luo, Huaqiang Yuan, Khaled Sedraoui, MengChu Zhou. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, No. 1, pp. 160-172.
Protein-protein interactions are of great significance for understanding the functional mechanisms of proteins. With the rapid development of high-throughput genomic technologies, massive protein-protein interaction (PPI) data have been generated, making it very difficult to analyze them efficiently. To address this problem, this paper presents a distributed framework by reimplementing one of the state-of-the-art algorithms, CoFex, using MapReduce. To do so, an in-depth analysis of its limitations is conducted from the perspectives of efficiency and memory consumption when applying it to large-scale PPI data analysis and prediction. Respective solutions are then devised to overcome these limitations. In particular, we adopt a novel tree-based data structure to reduce the heavy memory consumption caused by the huge sequence information of proteins. After that, its procedure is modified to follow the MapReduce framework so that the prediction task is carried out distributively. A series of extensive experiments has been conducted to evaluate the performance of our framework in terms of both efficiency and accuracy. The experimental results demonstrate that the proposed framework can improve computational efficiency by more than two orders of magnitude while retaining the same high accuracy.
Keywords: distributed computing, large-scale prediction, machine learning, MapReduce, protein-protein interaction (PPI)
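The sketch below shows the plain MapReduce pattern the framework relies on, applied to a toy feature-counting step over protein pairs. It is not the CoFex reimplementation: the 3-mer feature extraction, the records and the single-process shuffle are illustrative assumptions.

```python
# Minimal MapReduce pattern: mappers emit (feature, count) pairs from protein-pair
# records, a shuffle groups by key, and reducers sum the counts.
from collections import defaultdict
from itertools import chain

def mapper(record):
    """Emit (k-mer, 1) for each 3-mer shared by an interacting protein pair."""
    seq_a, seq_b, label = record
    kmers_a = {seq_a[i:i + 3] for i in range(len(seq_a) - 2)}
    kmers_b = {seq_b[i:i + 3] for i in range(len(seq_b) - 2)}
    return [(kmer, 1) for kmer in kmers_a & kmers_b if label == 1]

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    return key, sum(values)

records = [("MKVLAAG", "KVLAGGT", 1), ("MKVLAAG", "PPQRSTV", 0)]
grouped = shuffle(chain.from_iterable(mapper(r) for r in records))
print(dict(reducer(k, v) for k, v in grouped.items()))
```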
9. Ship motion extreme short time prediction of ship pitch based on diagonal recurrent neural network (Cited by 3)
Authors: SHEN Yan, XIE Mei-ping. Journal of Marine Science and Application, 2005, No. 2, pp. 56-60.
A DRNN (diagonal recurrent neural network) and its RPE (recurrent prediction error) learning algorithm are proposed in this paper. The simple structure of the DRNN reduces the amount of computation. The principle of the RPE learning algorithm is to adjust the weights along the Gauss-Newton direction; at the same time, it is unnecessary to calculate second-order derivatives or matrix inverses, and the unbiasedness of the algorithm is proved. Applied to the extremely short-time prediction of large-ship pitch, satisfactory results are obtained. The prediction performance of this algorithm is compared with that of auto-regression and the periodogram method, and the comparison shows that the proposed algorithm is feasible.
Keywords: extreme short time prediction, diagonal recurrent neural network, recurrent prediction error learning algorithm, unbiasedness
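A minimal NumPy sketch of the diagonal recurrent forward pass is given below: each hidden unit feeds back only to itself, which is what keeps the computation light. The RPE (Gauss-Newton) weight-update rule from the paper is not reproduced; the layer sizes and the toy pitch series are assumptions.

```python
# Minimal DRNN forward pass for one-step-ahead pitch prediction:
# the recurrent weight matrix is diagonal (one self-feedback weight per unit).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 8
W_in = rng.normal(scale=0.3, size=(n_hidden, n_in))   # input weights
w_d = rng.uniform(0.1, 0.9, size=n_hidden)            # diagonal self-recurrent weights
w_out = rng.normal(scale=0.3, size=n_hidden)          # output weights

def drnn_predict(x_seq):
    """x_seq: (T, n_in) windows of past pitch samples; returns one-step predictions."""
    s = np.zeros(n_hidden)
    preds = []
    for x in x_seq:
        s = np.tanh(W_in @ x + w_d * s)                # diagonal recurrence: elementwise
        preds.append(w_out @ s)
    return np.array(preds)

t = np.arange(200) * 0.1
pitch = np.sin(0.8 * t) + 0.05 * rng.normal(size=t.size)            # toy pitch series
windows = np.stack([pitch[i:i + n_in] for i in range(len(pitch) - n_in)])
print(drnn_predict(windows[:5]))
```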
10. Prediction of maximum upward displacement of shield tunnel linings during construction using particle swarm optimization-random forest algorithm (Cited by 2)
Authors: Xiaowei YE, Xiaolong ZHANG, Yanbo CHEN, Yujun WEI, Yang DING. Journal of Zhejiang University-Science A (Applied Physics & Engineering) (SCIE, EI, CAS, CSCD), 2024, No. 1, pp. 1-17.
During construction, the shield linings of tunnels often face the problem of local or overall upward movement after leaving the shield tail, in soft soil areas or in some large-diameter shield projects. Differential floating increases the initial stress on the segments and bolts, which is harmful to the service performance of the tunnel. In this study we used a random forest (RF) algorithm combined with particle swarm optimization (PSO) and 5-fold cross-validation (5-fold CV) to predict the maximum upward displacement of tunnel linings induced by shield tunnel excavation. The mechanisms and factors causing upward movement of the tunnel lining are comprehensively summarized. Twelve input variables were selected according to the results of an analysis of the influencing factors. The prediction performance of two models, PSO-RF and RF (default), was compared. The Gini value was obtained to represent the relative importance of the influencing factors for the upward displacement of linings. The PSO-RF model successfully predicted the maximum upward displacement of the tunnel linings with low error (mean absolute error (MAE) = 4.04 mm, root mean square error (RMSE) = 5.67 mm) and high correlation (R² = 0.915). The thrust and depth of the tunnel were the most important factors in the prediction model influencing the upward displacement of the tunnel linings.
Keywords: random forest (RF), particle swarm optimization (PSO), upward displacement of lining, machine learning prediction, shield tunneling construction
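The following scikit-learn sketch mirrors the overall setup (a random forest evaluated by 5-fold cross-validation over candidate hyperparameters), but replaces the particle swarm optimizer with a plain random search for brevity; the twelve features and the toy uplift target are assumptions.

```python
# Minimal sketch: random forest + 5-fold CV; a random search stands in for PSO.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))                                        # 12 assumed input variables
y = 5 + 2 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=400)  # toy uplift (mm)

best = None
for _ in range(10):                                                   # stand-in for the PSO search loop
    params = {"n_estimators": int(rng.integers(100, 500)),
              "max_depth": int(rng.integers(3, 15))}
    rf = RandomForestRegressor(random_state=0, **params)
    score = cross_val_score(rf, X, y, cv=5, scoring="neg_mean_absolute_error").mean()
    if best is None or score > best[0]:
        best = (score, params)
print("best MAE:", -best[0], "params:", best[1])
```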
11. Microstructural image based convolutional neural networks for efficient prediction of full-field stress maps in short fiber polymer composites (Cited by 1)
Authors: S. Gupta, T. Mukhopadhyay, V. Kushvaha. Defence Technology (SCIE, EI, CAS, CSCD), 2023, No. 6, pp. 58-82.
The increased demand for superior materials has highlighted the need to investigate the mechanical properties of composites in order to achieve enhanced constitutive relationships. Fiber-reinforced polymer composites have emerged as an integral part of materials development with tailored mechanical properties. However, the complexity and heterogeneity of such composites make it considerably more challenging to quantify their properties precisely and to attain an optimal design of structures through experimental and computational approaches. To avoid complex, cumbersome, and labor-intensive experimental and numerical modeling approaches, a machine learning (ML) model is proposed here that takes a microstructural image as input, with a range of Young's moduli for the carbon fibers and the neat epoxy, and produces as output a visualization of the stress component S11 (principal stress in the x-direction). To obtain the training data for the ML model, a short-carbon-fiber-filled specimen under quasi-static tension is modeled on a 2D Representative Area Element (RAE) using finite element analysis. The composite contains short carbon fibers with an aspect ratio of 7.5 that are infilled in the epoxy system at various random orientations and positions generated using the Simple Sequential Inhibition (SSI) process. The study reveals that the pix2pix deep learning Convolutional Neural Network (CNN) model is robust enough to predict the stress fields in the composite, for a given arrangement of short fibers in epoxy over the specified range of Young's modulus, with high accuracy. The CNN model achieves a correlation score of about 0.999 and an L2 norm of less than 0.005 for the majority of the samples in the design spectrum, indicating excellent prediction capability. In this paper, we focus on the stage-wise chronological development of the CNN model with optimized performance for predicting the full-field stress maps of the fiber-reinforced composite specimens. The development of such a robust and efficient algorithm would significantly reduce the time and cost required to study and design new composite materials by replacing numerical inputs with direct microstructural images.
Keywords: micromechanics of fiber-reinforced composites, machine learning assisted stress prediction, microstructural image-based machine learning, CNN-based stress analysis
12. Acknowledgments
The Journal of Biomedical Research (CAS, CSCD), 2018, No. 1, p. I0007.
Epilepsy is the most common neurological disorder of the brain, affecting people worldwide at any age from newborn to adult. It is characterized by recurrent seizures, which are brief episodes of signs or symptoms due to abnormal excessive or synchronous neuronal activity in the brain. The electroencephalogram, or EEG, is a physiological method to measure and record the electrical …
Keywords: EEG. (The Journal of Biomedical Research plans to publish a special issue on Advances in EEG Signal Processing and Machine Learning for Epileptic Seizure Detection and Prediction.)
13. FDNet: A Deep Learning Approach with Two Parallel Cross Encoding Pathways for Precipitation Nowcasting (Cited by 2)
Authors: 闫碧莹, 杨超, 陈峰, Kohei Takeda, Changjun Wang. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2023, No. 5, pp. 1002-1020.
With the goal of predicting future rainfall intensity in a local region over a relatively short period of time, precipitation nowcasting has been a long-standing scientific challenge with great social and economic impact. Radar echo extrapolation approaches for precipitation nowcasting take radar echo images as input, aiming to generate future radar echo images by learning from the historical images. To effectively handle the complex and highly non-stationary evolution of radar echoes, we propose to decompose the movement into optical flow field motion and morphologic deformation. Following this idea, we introduce the Flow-Deformation Network (FDNet), a neural network that models flow and deformation in two parallel cross pathways. The flow encoder captures the optical flow field motion between consecutive images, and the deformation encoder distinguishes the change of shape from the translational motion of radar echoes. We evaluate the proposed network architecture on two real-world radar echo datasets. Our model achieves state-of-the-art prediction results compared with recent approaches. To the best of our knowledge, this is the first network architecture with flow and deformation separation for modeling the evolution of radar echoes for precipitation nowcasting. We believe that the general idea of this work could not only inspire more effective approaches but also be applied to other similar spatio-temporal prediction tasks.
Keywords: spatio-temporal predictive learning, precipitation nowcasting, neural network
14. Federated learning-outcome prediction with multi-layer privacy protection (Cited by 1)
Authors: Yupei ZHANG, Yuxin LI, Yifei WANG, Shuangshuang WEI, Yunan XU, Xuequn SHANG. Frontiers of Computer Science (SCIE, EI, CSCD), 2024, No. 6, pp. 205-214.
Learning-outcome prediction (LOP) is a long-standing and critical problem in educational routes. Many studies have contributed to developing effective models while often suffering from data shortage and low generalization across institutions due to privacy-protection issues. To this end, this study proposes a distributed grade prediction model, dubbed FecMap, that exploits the federated learning (FL) framework, which preserves the private data of local clients and communicates with others through a global generalized model. FecMap considers local subspace learning (LSL), which explicitly learns the local features against the global features, and multi-layer privacy protection (MPP), which hierarchically protects the private features, including model-shareable features and not-allowably-shared features, to achieve client-specific classifiers of high performance on LOP for each institution. FecMap is then trained iteratively, with all datasets distributed on the clients, by training a local neural network composed of a global part, a local part, and a classification head on each client and averaging the global parts from the clients on the server. To evaluate the FecMap model, we collected three higher-education datasets of student academic records from engineering majors. The experimental results show that FecMap benefits from the proposed LSL and MPP and achieves steady performance on the LOP task compared with state-of-the-art models. This study makes a fresh attempt at the use of federated learning in learning-analytics tasks, potentially paving the way to facilitating personalized education with privacy protection.
Keywords: federated learning, local subspace learning, hierarchical privacy protection, learning outcome prediction, privacy-protected representation learning
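The sketch below illustrates only the federated-averaging step described above: each client updates a shared "global part" and a private "local part", and the server averages the global parts. It is not FecMap; the linear model, the half/half update split and the synthetic client data are assumptions.

```python
# Minimal federated-averaging sketch: only the global part of each client's
# parameters is shared and averaged; the local part never leaves the client.
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_part, local_part, X, y, lr=0.01):
    """One toy gradient step on a linear model y ~ X @ (global_part + local_part)."""
    w = global_part + local_part
    grad = X.T @ (X @ w - y) / len(y)
    # assumed split: shared global part and private local part each absorb half the update
    return global_part - 0.5 * lr * grad, local_part - 0.5 * lr * grad

clients = [(rng.normal(size=(50, 5)), rng.normal(size=50)) for _ in range(3)]
global_part = np.zeros(5)
local_parts = [np.zeros(5) for _ in clients]

for rnd in range(5):                                    # communication rounds
    updates = []
    for i, (X, y) in enumerate(clients):
        g_new, local_parts[i] = local_update(global_part, local_parts[i], X, y)
        updates.append(g_new)
    global_part = np.mean(updates, axis=0)              # server averages global parts only
print(global_part)
```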
15. Transparent open-box learning network and artificial neural network predictions of bubble-point pressure compared (Cited by 1)
Authors: David A. Wood, Abouzar Choubineh. Petroleum (CSCD), 2020, No. 4, pp. 375-384.
The transparent open box (TOB) learning network algorithm offers an alternative to the lack of transparency of most machine-learning algorithms. It provides the exact calculations and relationships among the underlying input variables of the datasets to which it is applied. It also has the capability to achieve credible and auditable levels of prediction accuracy on complex, non-linear datasets typical of those encountered in the oil and gas sector, while highlighting the potential for underfitting and overfitting. The algorithm is applied here to predict bubble-point pressure from a published PVT dataset of 166 data records involving four easy-to-measure variables (reservoir temperature, gas-oil ratio, oil gravity, gas density relative to air) with uneven and, in parts, sparse data coverage. The TOB network demonstrates high prediction accuracy for this complex system, although its predictions on the full dataset are outperformed by an artificial neural network (ANN). The performance of the TOB algorithm reveals the risk of overfitting in the sparse areas of the dataset, and it achieves a prediction performance that matches the ANN algorithm where the underlying data population is adequate. The high level of transparency and the resistance to overfitting enable the TOB learning network to provide complementary information about the underlying dataset to that provided by traditional machine learning algorithms. This makes it suitable for application in parallel with neural-network algorithms, to overcome their black-box tendencies, and for benchmarking the prediction performance of other machine learning algorithms.
Keywords: learning network transparency, learning network performance comparison, prediction of oil bubble-point pressure, overfitting datasets for prediction, auditing machine learning predictions, TOB complements ANN
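As a sketch of the ANN side of the comparison, the following scikit-learn code fits a small multilayer perceptron to bubble-point pressure from the four easy-to-measure variables. The network size and the toy correlation used to generate the data are assumptions; the TOB network itself is not reproduced.

```python
# Minimal ANN sketch: MLP regression of bubble-point pressure from four PVT inputs.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 166                                                   # same size as the published dataset
T = rng.uniform(40, 120, n)                               # reservoir temperature
gor = rng.uniform(20, 1500, n)                            # gas-oil ratio
api = rng.uniform(20, 45, n)                              # oil gravity (API)
gas_sg = rng.uniform(0.6, 1.2, n)                         # gas density relative to air
X = np.column_stack([T, gor, api, gas_sg])
pb = 30 * (gor / gas_sg) ** 0.7 * 10 ** (0.001 * T - 0.01 * api)   # toy correlation, not a real PVT law

X_tr, X_te, y_tr, y_te = train_test_split(X, pb, test_size=0.2, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0))
ann.fit(X_tr, y_tr)
print("test MAE:", mean_absolute_error(y_te, ann.predict(X_te)))
```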
16. Instance-Specific Algorithm Selection via Multi-Output Learning (Cited by 1)
Authors: Kai Chen, Yong Dou, Qi Lv, Zhengfa Liang. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2017, No. 2, pp. 210-217.
Instance-specific algorithm selection technologies have been successfully used in many research fields, such as constraint satisfaction and planning. Researchers have increasingly been trying to model the potential relations between different candidate algorithms for algorithm selection. In this study, we propose an instance-specific algorithm selection method based on multi-output learning, which can manage these relations more directly. Three kinds of multi-output learning methods are used to predict the performances of the candidate algorithms: (1) multi-output regressor stacking; (2) multi-output extremely randomized trees; and (3) hybrid single-output and multi-output trees. The experimental results obtained using 11 SAT datasets and 5 MaxSAT datasets indicate that our proposed methods obtain better performance than the state-of-the-art algorithm selection methods.
Keywords: algorithm selection, multi-output learning, extremely randomized trees, performance prediction, constraint satisfaction
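A minimal sketch of the multi-output selection idea: extremely randomized trees predict the performance of every candidate solver from instance features, and the solver with the best predicted performance is chosen per instance. The feature dimensions and synthetic runtimes are assumptions, and the regressor-stacking and hybrid-tree variants are not shown.

```python
# Minimal sketch: multi-output extremely randomized trees for algorithm selection.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_instances, n_features, n_solvers = 500, 20, 4
X = rng.normal(size=(n_instances, n_features))            # instance features (e.g., SAT statistics)
runtimes = np.column_stack([                               # toy per-solver runtimes
    np.exp(0.5 * X[:, s] + 0.1 * rng.normal(size=n_instances)) for s in range(n_solvers)
])

X_tr, X_te, R_tr, R_te = train_test_split(X, runtimes, test_size=0.2, random_state=0)
model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X_tr, R_tr)
chosen = model.predict(X_te).argmin(axis=1)                # pick the solver predicted fastest
oracle = R_te.argmin(axis=1)
print("agreement with oracle:", (chosen == oracle).mean())
```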
17. Online Learning Behavior Analysis and Prediction Based on Spiking Neural Networks
Authors: Yanjing Li, Xiaowei Wang, Fukun Chen, Bingxu Zhao, Qiang Fu. Journal of Social Computing (EI), 2024, No. 2, pp. 180-193.
The vast amount of data generated by large-scale open online course platforms provides a solid foundation for the analysis of learning behavior in the field of education. This study utilizes the historical and final learning behavior data of over 300,000 learners from 17 courses offered on the edX platform by Harvard University and the Massachusetts Institute of Technology during the 2012-2013 academic year. We develop a spiking neural network to predict learning outcomes and analyze the correlation between learning behavior and outcomes, aiming to identify key learning behaviors that significantly impact these outcomes. Our goal is to monitor learning progress, provide targeted references for evaluating and improving learning effectiveness, and implement intervention measures promptly. Experimental results demonstrate that the prediction model based on online learning behavior using a spiking neural network achieves an impressive accuracy of 99.80%. The learning behaviors that predominantly affect learning effectiveness are found to be students' academic performance and level of participation.
Keywords: online learning, learning outcome prediction, learning behavior analysis, spiking neural network
18. AI for organic and polymer synthesis (Cited by 2)
Authors: Xin Hong, Qi Yang, Kuangbiao Liao, Jianfeng Pei, Mao Chen, Fanyang Mo, Hua Lu, Wen-Bin Zhang, Haisen Zhou, Jiaxiao Chen, Lebin Su, Shuo-Qing Zhang, Siyuan Liu, Xu Huang, Yi-Zhou Sun, Yuxiang Wang, Zexi Zhang, Zhunzhun Yu, Sanzhong Luo, Xue-Feng Fu, Shu-Li You. Science China Chemistry (SCIE, EI, CAS, CSCD), 2024, No. 8, pp. 2461-2496.
Recent years have witnessed the transformative impact of integrating artificial intelligence with organic and polymer synthesis. This synergy offers innovative and intelligent solutions to a range of classic problems in synthetic chemistry. These advances include the prediction of molecular properties, multi-step retrosynthetic pathway planning, elucidation of the structure-performance relationship of single-step transformations, establishment of quantitative links between polymer structures and their functions, design and optimization of polymerization processes, prediction of the structure and sequence of biological macromolecules, and automated and intelligent synthesis platforms. Chemists can now explore synthetic chemistry with unprecedented precision and efficiency, creating novel reactions, catalysts, and polymer materials under the data-driven paradigm. Despite these developments, the field of artificial intelligence (AI) for synthetic chemistry is still in its infancy, facing challenges and limitations in terms of data openness, model interpretability, and software and hardware support. This review provides an overview of the current progress, key challenges, and suggestions for future development in the interdisciplinary field between AI and synthetic chemistry. It is hoped that this overview will offer readers a comprehensive understanding of this emerging field, inspiring and promoting further scientific research and development.
Keywords: organic synthesis, polymer synthesis, machine learning prediction, chemical database, automated synthesis