Journal Articles
7,346 articles found
A systematic data-driven modelling framework for nonlinear distillation processes incorporating data intervals clustering and new integrated learning algorithm
1
Authors: Zhe Wang, Renchu He, Jian Long. Chinese Journal of Chemical Engineering, 2025, Issue 5, pp. 182-199 (18 pages)
The distillation process is an important chemical process, and the application of data-driven modelling approaches has the potential to reduce model complexity compared to mechanistic modelling, thus improving the efficiency of process optimization or monitoring studies. However, the distillation process is highly nonlinear and has multiple uncertainty perturbation intervals, which brings challenges to accurate data-driven modelling of distillation processes. This paper proposes a systematic data-driven modelling framework to solve these problems. Firstly, data segment variance was introduced into the K-means algorithm to form K-means data interval (KMDI) clustering, in order to cluster the data into perturbed and steady-state intervals for steady-state data extraction. Secondly, the maximal information coefficient (MIC) was employed to calculate the nonlinear correlation between variables for removing redundant features. Finally, extreme gradient boosting (XGBoost) was integrated as the base learner into adaptive boosting (AdaBoost), with an error threshold (ET) set to improve the weight-update strategy, constructing the new integrated learning algorithm XGBoost-AdaBoost-ET. The superiority of the proposed framework is verified by applying it to a real industrial propylene distillation process.
Keywords: Integrated learning algorithm; Data intervals clustering; Feature selection; Application of artificial intelligence in the distillation industry; Data-driven modelling
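The abstract above describes clustering windowed data-segment variance with K-means (KMDI) to separate steady-state from perturbed intervals. A minimal stdlib-Python sketch of that idea only, not the authors' implementation; the window length and the two-cluster setup are assumptions:

```python
import statistics

def window_variances(series, w):
    """Variance of each non-overlapping window of length w."""
    return [statistics.pvariance(series[i:i + w])
            for i in range(0, len(series) - w + 1, w)]

def kmeans_1d(values, iters=50):
    """Two-cluster 1-D k-means; returns (centers, labels)."""
    centers = [min(values), max(values)]
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
                  for v in values]
        for j in (0, 1):
            member = [v for v, lab in zip(values, labels) if lab == j]
            if member:  # keep old center if a cluster empties out
                centers[j] = statistics.mean(member)
    return centers, labels

def steady_windows(series, w):
    """Indices of windows assigned to the low-variance (steady) cluster."""
    var = window_variances(series, w)
    centers, labels = kmeans_1d(var)
    steady = 0 if centers[0] <= centers[1] else 1
    return [i for i, lab in enumerate(labels) if lab == steady]
```

On a synthetic series whose first half oscillates slightly around 1.0 and whose second half swings between 0 and 2, the low-variance windows of the first half come back as the steady-state interval.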
NJmat 2.0: User Instructions of Data-Driven Machine Learning Interface for Materials Science
2
Authors: Lei Zhang, Hangyuan Deng. Computers, Materials & Continua, 2025, Issue 4, pp. 1-11 (11 pages)
NJmat is a user-friendly, data-driven machine learning interface designed for materials design and analysis. The platform integrates advanced computational techniques, including natural language processing (NLP), large language models (LLM), machine learning potentials (MLP), and graph neural networks (GNN), to facilitate materials discovery. The platform has been applied in diverse materials research areas, including perovskite surface design, catalyst discovery, battery materials screening, structural alloy design, and molecular informatics. By automating feature selection, predictive modeling, and result interpretation, NJmat accelerates the development of high-performance materials across energy storage, conversion, and structural applications. Additionally, NJmat serves as an educational tool, allowing students and researchers to apply machine learning techniques in materials science with minimal coding expertise. Through automated feature extraction, genetic algorithms, and interpretable machine learning models, NJmat simplifies the workflow for materials informatics, bridging the gap between AI and experimental materials research. The latest version (available at https://figshare.com/articles/software/NJmatML/24607893, accessed on 01 January 2025) enhances its functionality by incorporating NJmatNLP, a module leveraging language models such as MatBERT and those based on Word2Vec to support materials prediction tasks. By utilizing clustering and cosine similarity analysis with UMAP visualization, NJmat enables intuitive exploration of materials datasets. While NJmat primarily focuses on structure-property relationships and the discovery of novel chemistries, it can also assist in optimizing processing conditions when relevant parameters are included in the training data. By providing an accessible, integrated environment for machine learning-driven materials discovery, NJmat aligns with the objectives of the Materials Genome Initiative and promotes broader adoption of AI techniques in materials science.
Keywords: Data-driven machine learning; Natural language processing; Machine learning potential; Large language model
Deep learning aided underwater acoustic OFDM receivers: Model-driven or data-driven?
3
Authors: Hao Zhao, Miaowen Wen, Fei Ji, Yaokun Liang, Hua Yu, Cui Yang. Digital Communications and Networks, 2025, Issue 3, pp. 866-877 (12 pages)
The Underwater Acoustic (UWA) channel is bandwidth-constrained and experiences doubly selective fading. It is challenging to acquire perfect channel knowledge for Orthogonal Frequency Division Multiplexing (OFDM) communications using a finite number of pilots. On the other hand, Deep Learning (DL) approaches have been very successful in wireless OFDM communications; however, whether they will work underwater is still an open question. For the first time, this paper compares two categories of DL-based UWA OFDM receivers: the Data-Driven (DD) method, which performs as an end-to-end black box, and the Model-Driven (MD) method, also known as the model-based data-driven method, which combines DL with expert OFDM receiver knowledge. An encoder-decoder framework and a Convolutional Neural Network (CNN) structure are employed to build the DD receiver, while an unfolding-based Minimum Mean Square Error (MMSE) structure is adopted for the MD receiver. We analyze the characteristics of the different receivers through Monte Carlo simulations under diverse communication conditions and propose a strategy for selecting a proper receiver for different communication scenarios. Field trials in a pool and at sea were also conducted to verify the feasibility and advantages of the DL receivers. It is observed that the DL receivers perform better than conventional receivers in terms of bit error rate.
Keywords: Deep learning; Doubly-selective channels; Data-driven; Model-driven; Underwater acoustic communication; OFDM
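The model-driven receiver above unfolds a classical MMSE structure. As a hedged sketch of the per-subcarrier MMSE estimate that such an unfolding starts from (the channel and symbol values below are made up for illustration; this is not the paper's network):

```python
def mmse_equalize(y, h, noise_var):
    """Per-subcarrier MMSE estimate: x_hat[k] = conj(h[k]) * y[k] / (|h[k]|^2 + sigma^2)."""
    return [hk.conjugate() * yk / (abs(hk) ** 2 + noise_var)
            for yk, hk in zip(y, h)]

def bpsk_detect(x_hat):
    """Hard decision for BPSK symbols (+1 / -1) from the equalized estimates."""
    return [1 if z.real >= 0 else -1 for z in x_hat]
```

With a noise-free toy channel, the MMSE estimate recovers the transmitted symbols almost exactly; the learned MD receiver replaces the fixed statistics in this formula with trainable parameters.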
Reaction process optimization based on interpretable machine learning and metaheuristic optimization algorithms
4
Authors: Dian Zhang, Bo Ouyang, Zheng-Hong Luo. Chinese Journal of Chemical Engineering, 2025, Issue 8, pp. 77-85 (9 pages)
The optimization of reaction processes is crucial for the green, efficient, and sustainable development of the chemical industry. However, how to address the problems posed by multiple variables, nonlinearities, and uncertainties during optimization remains a formidable challenge. In this study, a strategy combining interpretable machine learning with metaheuristic optimization algorithms is employed to optimize the reaction process. First, experimental data from a biodiesel production process are collected to establish a database. These data are then used to construct a predictive model based on artificial neural network (ANN) models. Subsequently, interpretable machine learning techniques are applied for quantitative analysis and verification of the model. Finally, four metaheuristic optimization algorithms are coupled with the ANN model to achieve the desired optimization. The research results show that the methanol:palm fatty acid distillate (PFAD) molar ratio contributes the most to the reaction outcome, accounting for 41%. The ANN-simulated annealing (SA) hybrid method is more suitable for this optimization, and the optimal process parameters are a catalyst concentration of 3.00% (mass), a methanol:PFAD molar ratio of 8.67, and a reaction time of 30 min. This study provides deeper insights into reaction process optimization, which will facilitate future applications in various reaction optimization processes.
Keywords: Reaction process optimization; Interpretable machine learning; Metaheuristic optimization algorithm; Biodiesel
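The ANN-SA hybrid above couples a trained surrogate model with simulated annealing. A minimal sketch of the simulated-annealing half, run here on a toy quadratic standing in for the trained ANN (the surrogate, bounds, step size, and cooling schedule are illustrative assumptions, not the paper's settings):

```python
import math
import random

def simulated_annealing(f, x0, bounds, steps=5000, t0=1.0, seed=0):
    """Minimize f over a box by simulated annealing with Metropolis acceptance."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9          # linear cooling, never exactly zero
        # Gaussian proposal, clipped to the box
        cand = [min(hi, max(lo, xi + rng.gauss(0, 0.1 * (hi - lo))))
                for xi, (lo, hi) in zip(x, bounds)]
        fc = f(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest
```

Here the toy surrogate's minimum is placed at the optimum reported in the abstract (molar ratio 8.67, catalyst 3.00%) purely to make the example concrete.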
Signal processing and machine learning techniques in DC microgrids: a review
5
Authors: Kanche Anjaiah, Jonnalagadda Divya, Eluri N.V.D.V. Prasad, Renu Sharma. Global Energy Interconnection, 2025, Issue 4, pp. 598-624 (27 pages)
Low-voltage direct current (DC) microgrids have recently emerged as a promising and viable alternative to traditional alternating current (AC) microgrids, offering numerous advantages. Consequently, researchers are exploring the potential of DC microgrids across various configurations. However, despite the sustainability and accuracy offered by DC microgrids, they pose various challenges when integrated into modern power distribution systems. Among these challenges, fault diagnosis holds significant importance. Rapid fault detection in DC microgrids is essential to maintain stability and ensure an uninterrupted power supply to critical loads. A primary challenge is the lack of standards and guidelines for the protection and safety of DC microgrids, including fault detection, location, and clearing procedures for both grid-connected and islanded modes. In response, this study presents a brief overview of various approaches for protecting DC microgrids.
Keywords: DC microgrids; Mathematical approach; Signal processing technique; Machine learning technique; Hybrid model; Detection
Deep Learning in Biomedical Image and Signal Processing: A Survey
6
Authors: Batyrkhan Omarov. Computers, Materials & Continua, 2025, Issue 11, pp. 2195-2253 (59 pages)
Deep learning now underpins many state-of-the-art systems for biomedical image and signal processing, enabling automated lesion detection, physiological monitoring, and therapy planning with accuracy that rivals expert performance. This survey reviews the principal model families (convolutional, recurrent, generative, reinforcement, autoencoder, and transfer-learning approaches), emphasising how their architectural choices map to tasks such as segmentation, classification, reconstruction, and anomaly detection. A dedicated treatment of multimodal fusion networks shows how imaging features can be integrated with genomic profiles and clinical records to yield more robust, context-aware predictions. To support clinical adoption, we outline post-hoc explainability techniques (Grad-CAM, SHAP, LIME) and describe emerging intrinsically interpretable designs that expose decision logic to end users. Regulatory guidance from the U.S. FDA, the European Medicines Agency, and the EU AI Act is summarised, linking transparency and lifecycle-monitoring requirements to concrete development practices. Remaining challenges, such as data imbalance, computational cost, privacy constraints, and cross-domain generalization, are discussed alongside promising solutions such as federated learning, uncertainty quantification, and lightweight 3-D architectures. The article therefore offers researchers, clinicians, and policymakers a concise, practice-oriented roadmap for deploying trustworthy deep-learning systems in healthcare.
Keywords: Deep learning; Biomedical imaging; Signal processing; Neural networks; Image segmentation; Disease classification; Drug discovery; Patient monitoring; Robotic surgery; Artificial intelligence in healthcare
Optimization of Extrusion-based Silicone Additive Manufacturing Process Parameters Based on Improved Kernel Extreme Learning Machine
7
Authors: Zi-Ning Li, Xiao-Qing Tian, Dingyifei Ma, Shahid Hussain, Lian Xia, Jiang Han. Chinese Journal of Polymer Science, 2025, Issue 5, pp. 848-862 (15 pages)
Silicone material extrusion (MEX) is widely used for processing liquids and pastes. Owing to the uneven linewidth and elastic extrusion deformation caused by material accumulation, products may exhibit geometric errors and performance defects, leading to a decline in product quality and affecting service life. This study proposes a process parameter optimization method that considers both the mechanical properties of printed specimens and production costs. To improve the quality of silicone printing samples and reduce production costs, three machine learning models, kernel extreme learning machine (KELM), support vector regression (SVR), and random forest (RF), were developed as predictors. Training data were obtained through a full factorial experiment. A new dataset was obtained using a Euclidean-distance method that assigns an elimination factor; the model was tuned with a Bayesian optimization algorithm for parameter optimization, and the new dataset was input into the improved double-Gaussian extreme learning machine, finally yielding the improved KELM model. The results showed improved prediction accuracy over SVR and RF. Furthermore, a multi-objective optimization framework was proposed by combining a genetic algorithm with the improved KELM model. The effectiveness and reasonableness of the model and algorithm were verified by comparing the optimized results with the experimental results.
Keywords: Silicone material extrusion; Process parameter optimization; Double Gaussian kernel extreme learning machine; Euclidean distance assigned to the elimination factor; Multi-objective optimization framework
A green mobile edge computing task offloading strategy based on MDP and Q-learning
8
Authors: 赵宏伟, 吕盛凱, 庞芷茜, 马子涵, 李雨. Journal of Henan Polytechnic University (Natural Science), PKU Core, 2025, Issue 5, pp. 9-16 (8 pages)
Objective: To achieve carbon neutrality in manufacturing-oriented industrial-internet enterprises such as automobile and air-conditioner makers, edge-computing task offloading is used to handle the offloading of production-equipment tasks, reducing the central load on servers and cutting the energy consumption and carbon emissions of data centers. Methods: A green edge-computing task offloading strategy based on a Markov decision process (MDP) and Q-learning is proposed. The strategy accounts for constraints such as computing frequency, transmission power, and carbon emissions. Based on a cloud-edge-device collaborative computing model, the carbon-emission optimization problem is transformed into a mixed-integer linear programming model, which is solved via MDP and Q-learning; the convergence performance, carbon emissions, and total latency are compared against a random allocation algorithm, the Q-learning algorithm, and the SARSA (state-action-reward-state-action) algorithm. Results: Compared with existing offloading strategies, the task-scheduling convergence of the new strategy improves on the SARSA and Q-learning algorithms by 5% and 2%, respectively, giving better convergence; the system's carbon-emission cost is 8% and 22% lower than the Q-learning and SARSA algorithms, respectively; in terms of the number of terminals, the new strategy reduces terminal counts by 6% and 7% relative to the Q-learning and SARSA algorithms; and the total computation latency of the system is clearly lower than that of the other algorithms, 27%, 14%, and 22% below the random allocation, Q-learning, and SARSA algorithms, respectively. Conclusion: The strategy can reasonably optimize the offloading of computing tasks and resource allocation, balance latency against energy consumption, and reduce the system's carbon emissions.
Keywords: Carbon emissions; Edge computing; Reinforcement learning; Markov decision process; Task offloading
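The strategy above solves the offloading MDP with tabular Q-learning. A minimal sketch of the Q-learning update on a made-up two-state offloading environment (the states, costs, and transitions are illustrative, not the paper's model):

```python
import random

def q_learning(trans, n_states, n_actions, steps=5000, alpha=0.1,
               gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning; trans(s, a, rng) -> (reward, next_state)."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    s = 0
    for _ in range(steps):
        if rng.random() < eps:                       # epsilon-greedy exploration
            a = rng.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: q[s][i])
        r, s2 = trans(s, a, rng)
        # standard temporal-difference update
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2
    return q

def offload_env(s, a, rng):
    """Toy MDP: state 0 = good channel, 1 = bad channel. Action 0 = compute
    locally (energy cost 2); action 1 = offload (cost 1 if the channel is
    good, 3 if bad). The channel state then changes at random."""
    cost = 2.0 if a == 0 else (1.0 if s == 0 else 3.0)
    return -cost, rng.randrange(2)
```

After training, the greedy policy offloads on a good channel and computes locally on a bad one, which is the energy-optimal behaviour of this toy environment.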
Multi-model ensemble learning for battery state-of-health estimation: Recent advances and perspectives (Cited 1)
9
Authors: Chuanping Lin, Jun Xu, Delong Jiang, Jiayang Hou, Ying Liang, Zhongyue Zou, Xuesong Mei. Journal of Energy Chemistry, 2025, Issue 1, pp. 739-759 (21 pages)
The burgeoning market for lithium-ion batteries has stimulated a growing need for more reliable battery performance monitoring. Accurate state-of-health (SOH) estimation is critical for ensuring battery operational performance. Despite numerous data-driven methods reported in existing research for battery SOH estimation, these methods often exhibit inconsistent performance across different application scenarios. To address this issue and overcome the performance limitations of individual data-driven models, integrating multiple models for SOH estimation has received considerable attention. Ensemble learning (EL) typically leverages the strengths of multiple base models to achieve more robust and accurate outputs. However, the lack of a clear review of current research hinders the further development of ensemble methods in SOH estimation. Therefore, this paper comprehensively reviews multi-model ensemble learning methods for battery SOH estimation. First, existing ensemble methods are systematically categorized into six classes based on their combination strategies. Different realizations and underlying connections are meticulously analyzed for each category of EL methods, highlighting distinctions, innovations, and typical applications. Subsequently, these ensemble methods are comprehensively compared in terms of base models, combination strategies, and publication trends. Evaluations across six dimensions underscore the outstanding performance of stacking-based ensemble methods. Following this, these ensemble methods are further inspected from the perspectives of weighted ensembles and diversity, aiming to inspire potential approaches for enhancing ensemble performance. Moreover, challenges such as base-model selection, measuring model robustness and uncertainty, and the interpretability of ensemble models in practical applications are emphasized. Finally, future research prospects are outlined, specifically noting that deep learning ensembles are poised to advance ensemble methods for battery SOH estimation. The convergence of advanced machine learning with ensemble learning is anticipated to yield valuable avenues for research. Accelerated research in ensemble learning holds promising prospects for achieving more accurate and reliable battery SOH estimation under real-world conditions.
Keywords: Lithium-ion battery; State-of-health estimation; Data-driven; Machine learning; Ensemble learning; Ensemble diversity
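Among the combination strategies surveyed above, the simplest weighting-type ensembles average base-model SOH predictions with validation-error-based weights. A generic sketch of that idea, not any specific method from the review:

```python
def mse(pred, y):
    """Mean squared error between a prediction list and the targets."""
    return sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)

def inverse_error_weights(preds, y_val):
    """Normalized weights proportional to 1 / (validation MSE) per base model."""
    inv = [1.0 / (mse(p, y_val) + 1e-12) for p in preds]
    total = sum(inv)
    return [v / total for v in inv]

def ensemble_predict(preds, weights):
    """Weighted average of the base-model predictions, point by point."""
    return [sum(w * p[i] for w, p in zip(weights, preds))
            for i in range(len(preds[0]))]
```

Because the combined prediction is a pointwise convex combination, its error at every point is bounded by the worst base model's error there, which is one intuition behind averaging-type ensembles.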
Advancements in Liver Tumor Detection: A Comprehensive Review of Various Deep Learning Models
10
Authors: Shanmugasundaram Hariharan, D. Anandan, Murugaperumal Krishnamoorthy, Vinay Kukreja, Nitin Goyal, Shih-Yu Chen. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, Issue 1, pp. 91-122 (32 pages)
Liver cancer remains a leading cause of mortality worldwide, and precise diagnostic tools are essential for effective treatment planning. Liver Tumors (LTs) vary significantly in size, shape, and location, and can present with tissues of similar intensities, making automatically segmenting and classifying LTs from abdominal tomography images crucial and challenging. This review examines recent advancements in Liver Segmentation (LS) and Tumor Segmentation (TS) algorithms, highlighting their strengths and limitations regarding precision, automation, and resilience. Performance metrics are utilized to assess key detection algorithms and analytical methods, emphasizing their effectiveness and relevance in clinical contexts. The review also addresses ongoing challenges in liver tumor segmentation and identification, such as managing high variability in patient data and ensuring robustness across different imaging conditions. It suggests directions for future research, with insights into technological advancements that can enhance surgical planning and diagnostic accuracy by comparing popular methods. This paper contributes to a comprehensive understanding of current liver tumor detection techniques, provides a roadmap for future innovations, and improves diagnostic and therapeutic outcomes for liver cancer by integrating recent progress with remaining challenges.
Keywords: Liver tumor detection; Liver tumor segmentation; Image processing; Liver tumor diagnosis; Feature extraction; Tumor classification; Deep learning; Machine learning
Machine learning-based performance predictions for steels considering manufacturing process parameters: a review (Cited 2)
11
Authors: Wei Fang, Jia-xin Huang, Tie-xu Peng, Yang Long, Fu-xing Yin. Journal of Iron and Steel Research International (SCIE, EI, CAS, CSCD), 2024, Issue 7, pp. 1555-1581 (27 pages)
Steels are widely used as structural materials, making them essential for supporting our lives and industries. However, further improving the comprehensive properties of steel through traditional trial-and-error methods becomes challenging due to the continuous development and numerous processing parameters involved in steel production. To address this challenge, the application of machine learning methods becomes crucial in establishing complex relationships between manufacturing processes and steel performance. This review begins with a general overview of machine learning methods and subsequently introduces various performance predictions in steel materials. The classification of performance prediction was used to assess the current application of machine learning model-assisted design. Several important issues, such as data sources and characteristics, intermediate features, algorithm optimization, key feature analysis, and the role of environmental factors, were summarized and analyzed. These insights will be beneficial and enlightening to future research endeavors in this field.
Keywords: Steel; Manufacturing process; Machine learning; Performance prediction; Algorithm
Machine Learning Techniques in Predicting Hot Deformation Behavior of Metallic Materials
12
Authors: Petr Opela, Josef Walek, Jaromír Kopecek. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, Issue 1, pp. 713-732 (20 pages)
In engineering practice, it is often necessary to determine functional relationships between dependent and independent variables. These relationships can be highly nonlinear, and classical regression approaches cannot always provide sufficiently reliable solutions. Nevertheless, Machine Learning (ML) techniques, which offer advanced regression tools to address complicated engineering issues, have been developed and widely explored. This study investigates selected ML techniques to evaluate their suitability for application to the hot deformation behavior of metallic materials. The ML-based regression methods of Artificial Neural Networks (ANNs), Support Vector Machine (SVM), Decision Tree Regression (DTR), and Gaussian Process Regression (GPR) are applied to mathematically describe hot flow stress curve datasets acquired experimentally for a medium-carbon steel. Although the GPR method has not been used for such a regression task before, the results showed that its performance is the most favorable and practically unrivaled; neither the ANN method nor the other studied ML techniques provide such precise results for the solved regression analysis.
Keywords: Machine learning; Gaussian process regression; Artificial neural networks; Support vector machine; Hot deformation behavior
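For the GPR method highlighted above, the posterior mean at a test input is k_*^T (K + sigma^2 I)^{-1} y. A self-contained stdlib sketch of that formula on toy 1-D data (the RBF kernel, length-scale, and data are illustrative, not the study's flow-stress setup):

```python
import math

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) kernel for scalar inputs."""
    return math.exp(-(a - b) ** 2 / (2 * length ** 2))

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (tiny dense systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gpr_mean(x_train, y_train, x_test, noise=1e-6, length=1.0):
    """GP posterior mean: k_*^T (K + noise*I)^{-1} y."""
    K = [[rbf(a, b, length) + (noise if i == j else 0.0)
          for j, b in enumerate(x_train)] for i, a in enumerate(x_train)]
    alpha = solve(K, y_train)
    return [sum(ai * rbf(xs, xi, length) for ai, xi in zip(alpha, x_train))
            for xs in x_test]
```

With a small noise term the posterior mean reproduces the training targets almost exactly and interpolates smoothly between them, which is the behaviour that makes GPR attractive for smooth flow-stress surfaces.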
Data-Driven Learning Control Algorithms for Unachievable Tracking Problems (Cited 1)
13
Authors: Zeyi Zhang, Hao Jiang, Dong Shen, Samer S. Saab. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 1, pp. 205-218 (14 pages)
For unachievable tracking problems, where the system output cannot precisely track a given reference, achieving the best possible approximation of the reference trajectory becomes the objective. This study aims to investigate solutions using the P-type learning control scheme. Initially, we demonstrate the necessity of gradient information for achieving the best approximation. Subsequently, we propose an input-output-driven learning gain design to handle the imprecise gradients of a class of uncertain systems. However, it is discovered that the desired performance may not be attainable when faced with incomplete information. To address this issue, an extended iterative learning control scheme is introduced. In this scheme, the tracking errors are modified through output data sampling, which incorporates low-memory footprints and offers flexibility in learning gain design. The input sequence is shown to converge towards the desired input, resulting in an output that is closest to the given reference in the least-squares sense. Numerical simulations are provided to validate the theoretical findings.
Keywords: Data-driven algorithms; Incomplete information; Iterative learning control; Gradient information; Unachievable problems
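A P-type learning control scheme, as named above, updates the input trajectory iteration by iteration using the tracking error. A toy sketch on an achievable first-order plant, just to show the update law (the plant, gain, and horizon are assumptions, not the paper's system):

```python
def run_plant(u, a=0.5):
    """First-order plant: x(t+1) = a*x(t) + u(t), output y(t) = x(t+1), x(0) = 0."""
    x, y = 0.0, []
    for ut in u:
        x = a * x + ut
        y.append(x)
    return y

def p_type_ilc(ref, iters=30, gamma=0.8):
    """P-type ILC: u_{k+1}(t) = u_k(t) + gamma * e_k(t), with e_k = ref - y_k."""
    u = [0.0] * len(ref)
    max_errors = []                      # sup-norm tracking error per iteration
    for _ in range(iters):
        y = run_plant(u)
        e = [r - yi for r, yi in zip(ref, y)]
        max_errors.append(max(abs(ei) for ei in e))
        u = [ui + gamma * ei for ui, ei in zip(u, e)]
    return u, max_errors
```

Because this toy reference is achievable, the tracking error contracts towards zero over iterations; in the unachievable setting studied by the paper, the same style of update is analysed for its best least-squares approximation instead.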
Advancements in machine learning for material design and process optimization in the field of additive manufacturing (Cited 1)
14
Authors: Hao-ran Zhou, Hao Yang, Huai-qian Li, Ying-chun Ma, Sen Yu, Jian Shi, Jing-chang Cheng, Peng Gao, Bo Yu, Zhi-quan Miao, Yan-peng Wei. China Foundry (SCIE, EI, CAS, CSCD), 2024, Issue 2, pp. 101-115 (15 pages)
Additive manufacturing technology is highly regarded due to its advantages, such as high precision and the ability to address complex geometric challenges. However, the development of additive manufacturing processes is constrained by issues like unclear fundamental principles, complex experimental cycles, and high costs. Machine learning, as a novel artificial intelligence technology, has the potential to deeply engage in the development of additive manufacturing processes, assisting engineers in learning and developing new techniques. This paper provides a comprehensive overview of the research and applications of machine learning in the field of additive manufacturing, particularly in model design and process development. Firstly, it introduces the background and significance of machine learning-assisted design in additive manufacturing processes. It then delves into the application of machine learning in additive manufacturing, focusing on model design and process guidance. Finally, it concludes by summarizing and forecasting the development trends of machine learning technology in the field of additive manufacturing.
Keywords: Additive manufacturing; Machine learning; Material design; Process optimization; Intersection of disciplines; Embedded machine learning
Reliable calculations of nuclear binding energies by the Gaussian process of machine learning (Cited 1)
15
Authors: Zi-Yi Yuan, Dong Bai, Zhen Wang, Zhong-Zhou Ren. Nuclear Science and Techniques (SCIE, EI, CAS, CSCD), 2024, Issue 6, pp. 130-144 (15 pages)
Reliable calculations of nuclear binding energies are crucial for advancing research in nuclear physics. Machine learning provides an innovative approach to exploring complex physical problems. In this study, the nuclear binding energies are modeled directly using a machine-learning method called the Gaussian process. First, the binding energies for 2238 nuclei with Z > 20 and N > 20 are calculated using the Gaussian process in a physically motivated feature space, yielding an average deviation of 0.046 MeV and a standard deviation of 0.066 MeV. The results show the good learning ability of the Gaussian process in studies of binding energies. Then, the predictive power of the Gaussian process is studied by calculating the binding energies for 108 nuclei newly included in AME2020. The theoretical results are in good agreement with the experimental data, reflecting the good predictive power of the Gaussian process. Moreover, the α-decay energies for 1169 nuclei with 50 ≤ Z ≤ 110 are derived from the theoretical binding energies calculated using the Gaussian process. The average deviation and the standard deviation are, respectively, 0.047 MeV and 0.070 MeV. Noticeably, the calculated α-decay energies for the two new isotopes ^204Ac (Huang et al., Phys. Lett. B 834, 137484 (2022)) and ^207Th (Yang et al., Phys. Rev. C 105, L051302 (2022)) agree well with the latest experimental data. These results demonstrate that the Gaussian process is reliable for the calculation of nuclear binding energies. Finally, the α-decay properties of some unknown actinide nuclei are predicted using the Gaussian process. The predicted results can be useful guides for future research on binding energies and α-decay properties.
Keywords: Nuclear binding energies; Decay; Machine learning; Gaussian process
Predicting grain size-dependent superplastic properties in friction stir processed ZK30 magnesium alloy with machine learning methods (Cited 1)
16
Authors: Farid Bahari-Sambran, Fernando Carreno, C.M. Cepeda-Jiménez, Alberto Orozco-Caballero. Journal of Magnesium and Alloys (SCIE, EI, CAS, CSCD), 2024, Issue 5, pp. 1931-1943 (13 pages)
The aim of this work is to predict, for the first time, the high-temperature flow stress dependency on grain size and the underlying deformation mechanism using two machine learning models, random forest (RF) and artificial neural network (ANN). With that purpose, a ZK30 magnesium alloy was friction stir processed (FSP) using three different severe conditions to obtain fine-grained microstructures (with average grain sizes between 2 and 3 μm) prone to extensive superplastic response. The three friction stir processed samples clearly deformed by the grain boundary sliding (GBS) deformation mechanism at high temperatures. The maximum elongations to failure, well over 400% at a high strain rate of 10^-2 s^-1, were reached at 400 ℃ in the material with the coarsest grain size of 2.8 μm, and at 300 ℃ for the finest grain size of 2 μm. Nevertheless, the superplastic response decreased at 350 ℃ and 400 ℃ due to thermal instabilities and grain coarsening, which makes it difficult to assess the operative deformation mechanism at such temperatures. This work highlights that the machine learning models considered, especially the ANN model with its higher accuracy in predicting flow stress values, allow adequately determining the superplastic creep behavior, including other possible grain size scenarios.
Keywords: Machine learning; Artificial intelligence; Magnesium alloys; Superplasticity; Friction stir processing; Grain coarsening
Prediction of corrosion rate for friction stir processed WE43 alloy by combining PSO-based virtual sample generation and machine learning (Cited 1)
17
Authors: Annayath Maqbool, Abdul Khalad, Noor Zaman Khan. Journal of Magnesium and Alloys (SCIE, EI, CAS, CSCD), 2024, Issue 4, pp. 1518-1528 (11 pages)
The corrosion rate is a crucial factor that impacts the longevity of materials in different applications. After undergoing friction stir processing (FSP), the refined grain structure leads to a notable decrease in corrosion rate. However, a better understanding of the correlation between the FSP process parameters and the corrosion rate is still lacking. The current study used machine learning to establish the relationship between the corrosion rate and the FSP process parameters (rotational speed, traverse speed, and shoulder diameter) for WE43 alloy. The Taguchi L27 design of experiments was used for the experimental analysis. In addition, synthetic data were generated using particle swarm optimization for virtual sample generation (VSG). The application of VSG has led to an increase in the prediction accuracy of the machine learning models. A sensitivity analysis was performed using Shapley Additive Explanations to determine the key factors affecting the corrosion rate. The shoulder diameter had a significant impact in comparison to the traverse speed. A graphical user interface (GUI) has been created to predict the corrosion rate using the identified factors. This study focuses on the WE43 alloy, but its findings can also be used to predict the corrosion rate of other magnesium alloys.
Keywords: Corrosion rate; Friction stir processing; Virtual sample generation; Particle swarm optimization; Machine learning; Graphical user interface
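The paper does not spell out its PSO formulation, but the optimizer underlying virtual sample generation can be sketched as a minimal global-best particle swarm. The objective below is a deliberately toy VSG criterion, a hypothetical assumption for illustration: find a virtual (rotational speed, traverse speed, shoulder diameter) sample close to the centroid of a few made-up real process settings, within box bounds.

```python
import random

def pso(objective, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # per-particle best positions
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]       # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp each coordinate to its allowed range
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Hypothetical real FSP settings: (rpm, traverse mm/min, shoulder mm)
real = [[900, 50, 15], [1100, 70, 18], [1300, 90, 21]]
centroid = [sum(col) / len(real) for col in zip(*real)]
obj = lambda x: sum((a - b) ** 2 for a, b in zip(x, centroid))
virtual, err = pso(obj, bounds=[(800, 1400), (40, 100), (12, 24)])
```

A real VSG objective would instead score candidate samples against the trained model's error or the data distribution; the swarm mechanics stay the same.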
AI-Powered Threat Detection in Online Communities: A Multi-Modal Deep Learning Approach
18
Authors: Ravi Teja Potla. Journal of Computer and Communications, 2025, No. 2, pp. 155-171 (17 pages)
The rapid growth of online communities has brought about an increase in cyber threats, including cyberbullying, hate speech, misinformation, and online harassment, making content moderation a pressing necessity. Traditional single-modal AI-based detection systems, which analyze text, images, or videos in isolation, have proven ineffective at capturing multi-modal threats, in which malicious actors spread harmful content across multiple formats. To cope with these challenges, we propose a multi-modal deep learning framework that integrates Natural Language Processing (NLP), Convolutional Neural Networks (CNNs), and Long Short-Term Memory (LSTM) networks to identify and mitigate online threats effectively. Our proposed model combines BERT for text classification, ResNet50 for image processing, and a hybrid LSTM-3D CNN network for video content analysis. We constructed a large-scale dataset comprising 500,000 textual posts, 200,000 offensive images, and 50,000 annotated videos from multiple platforms, including Twitter, Reddit, YouTube, and online gaming forums. The system was rigorously evaluated using standard machine learning metrics, including accuracy, precision, recall, F1-score, and ROC-AUC curves. Experimental results demonstrate that our multi-modal approach significantly outperforms single-modal AI classifiers, achieving an accuracy of 92.3%, precision of 91.2%, recall of 90.1%, and an AUC score of 0.95. The findings validate the necessity of integrating multi-modal AI for real-time, high-accuracy online threat detection and moderation. Future work will focus on improving adversarial robustness, enhancing scalability for real-world deployment, and addressing ethical concerns associated with AI-driven content moderation.
Keywords: Multi-modal AI; Deep learning; Natural Language Processing (NLP); Explainable AI (XAI); Federated learning; Cyber threat detection; LSTM; CNNs
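The abstract does not specify how the three per-modality classifiers (BERT, ResNet50, LSTM-3D CNN) are combined. One plausible sketch, assuming a simple late-fusion design with made-up weights and threshold, is a weighted average of the per-modality threat probabilities:

```python
def fuse_scores(text_p, image_p, video_p,
                weights=(0.4, 0.3, 0.3), threshold=0.5):
    """Late fusion: weighted average of per-modality threat probabilities.
    A missing modality (None) is dropped and the remaining weights are
    renormalized. The weights and threshold here are illustrative
    assumptions, not values from the paper.
    """
    pairs = [(p, w) for p, w in zip((text_p, image_p, video_p), weights)
             if p is not None]
    total = sum(w for _, w in pairs)
    score = sum(p * w for p, w in pairs) / total
    return score, score >= threshold

# A post with threatening text and a borderline image, but no video:
score, flagged = fuse_scores(text_p=0.9, image_p=0.7, video_p=None)
```

Late fusion keeps each modality's model independently trainable; an alternative (also consistent with the abstract) would be early fusion of learned embeddings before a joint classifier.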
An Efficient Deep Learning Framework for Revealing the Evolution of Characterization Methods in Nanoscience
19
Authors: Hui-Cong Duan, Long-Xing Lin, Ji-Chun Wang, Tong-Ruo Diao, Sheng-Jie Qiu, Bi-Jun Geng, Jia Shi, Shu Hu, Yang Yang. Nano-Micro Letters, 2025, No. 11, pp. 755-768 (14 pages)
Text mining has emerged as a powerful strategy for extracting domain knowledge structure from large amounts of text data. To date, most text mining methods are restricted to specific literature information, resulting in incomplete knowledge graphs. Here, we report a method that combines citation analysis with topic modeling to describe the hidden development patterns in the history of science. Leveraging this method, we construct a knowledge graph in the field of Raman spectroscopy. The traditional Latent Dirichlet Allocation model is chosen as the baseline for comparison to validate the performance of our model. Our method improves topic coherence by a minimum growth rate of 100% compared to the traditional text mining method. It also outperforms the traditional method on topic diversity, with a growth rate ranging from 0 to 126%. The results show the effectiveness of the rule-based tokenizer we designed in solving the word tokenization problem caused by entity naming conventions in the field of chemistry. The method is versatile in revealing the distribution of topics, establishing similarity and inheritance relationships, and identifying important moments in the history of Raman spectroscopy. Our work provides a comprehensive tool for science-of-science research and promises to offer new insights into the historical survey and development forecast of a research field.
Keywords: Nanostructure; Deep learning; Data-driven; Raman; Nanoscience
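The topic diversity metric the abstract reports gains against can be sketched with one standard definition (the fraction of unique words among the top-k words of all topics); the abstract does not say which variant the authors use, so treat this as an assumed illustration with made-up Raman-themed topics:

```python
def topic_diversity(topics, k=5):
    """Fraction of unique words among the top-k words of all topics.
    1.0 means no word is shared between topics; values near 0 mean
    the topics are highly redundant.
    """
    top_words = [w for topic in topics for w in topic[:k]]
    return len(set(top_words)) / len(top_words)

# Hypothetical top words for three Raman-spectroscopy topics:
topics = [
    ["raman", "sers", "substrate", "enhancement", "plasmon"],
    ["graphene", "spectrum", "peak", "shift", "strain"],
    ["imaging", "cell", "probe", "vivo", "spectrum"],  # "spectrum" repeats
]
print(topic_diversity(topics))  # 14 unique words / 15 total
```

Coherence, the other metric cited, instead scores how often a topic's top words co-occur in documents and needs the corpus itself, so it is not reproduced here.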
Few-Shot Recognition of Fiber Optic Vibration Sensing Signals Based on Triplet Loss Learning
20
Authors: WANG Qiao, REN Yanhui, LI Ziqiang, QIAN Cheng, DU Defei, HU Xing, LIU Dequan. Wuhan University Journal of Natural Sciences, 2025, No. 4, pp. 334-342 (9 pages)
The distributed fiber optic sensing system, known for its high sensitivity and wide-ranging measurement capabilities, has been widely used in monitoring underground gas pipelines. It primarily serves to perceive vibration signals induced by external events and to provide effective early warnings of potential intrusion activities. Owing to the complexity and diversity of external intrusion events, traditional deep learning methods can achieve event recognition with an average accuracy exceeding 90%. However, these methods rely on large-scale datasets, leading to significant time and labor costs during data collection. Additionally, traditional methods perform poorly when low-frequency event samples are scarce, making such rare occurrences challenging to address. To address this issue, this paper proposes a small-sample learning model based on triplet learning for intrusion event recognition. The model employs a 6-way 20-shot support set configuration and uses the KNN algorithm to assess performance. Experimental results indicate that the model achieves an average accuracy of 91.6%, further validating the superior performance of the triplet learning model in classifying external intrusion events. Compared to traditional methods, this approach not only effectively reduces dependence on large-scale datasets but also better handles the classification of low-frequency event samples, demonstrating significant application potential.
Keywords: Distributed fiber optic sensing system; Deep learning; Signal processing; Small-sample learning; Triplet learning
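The triplet objective behind this few-shot approach can be sketched directly: embeddings of the same event class (anchor, positive) are pulled together while a different class (negative) is pushed at least a margin further away. The margin value and toy 2-D embeddings below are assumptions for illustration:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss:
        loss = max(0, d(a, p) - d(a, n) + margin)
    Zero when the negative is already margin-farther than the positive;
    positive (and worth training on) otherwise. margin=1.0 is an
    illustrative choice, not the paper's setting.
    """
    return max(0.0, euclidean(anchor, positive)
                    - euclidean(anchor, negative) + margin)

# An "easy" triplet (negative already far away) contributes zero loss:
print(triplet_loss([0, 0], [0.1, 0], [5, 5]))            # 0.0
# A violating triplet (negative closer than positive) is penalized:
print(triplet_loss([0, 0], [2, 0], [0.5, 0]) > 0)        # True
```

At inference the trained encoder embeds the 6-way 20-shot support set, and a query signal is labeled by nearest neighbors (KNN) among those support embeddings.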