Journal Articles
7,758 articles found
1. Data-Driven Design of Scalable Perovskite Film Fabrication via Machine Learning–Guided Processing
Authors: Hong Liu, Kangyan Liu, Biao Zhang, Ziang Chen, Yi Yang, Qiang Sun, Tao Ye, Bed Poudel, Kai Wang, Congcong Wu. Carbon Energy, 2026, Issue 3, pp. 129-139 (11 pages)
The key challenge in the preparation of perovskite solar cells (PSCs) is to enhance the reproducibility of PSC manufacturing, particularly by better controlling multiple high-dimensional process parameters. This study proposes a machine learning (ML) approach to efficiently predict and analyze perovskite film fabrication processes. By evaluating five classic ML algorithms on 130 experimental data sets of blade-coating parameters, the Random Forest (RF) model was identified as the most effective, enabling rapid prediction of over 100,000 parameter sets in just 10 min, equivalent to 3 years of manual experimentation. The RF model demonstrated strong predictive accuracy, with an R^2 close to 0.8. This approach led to the identification of optimal process parameter combinations, significantly improving the reproducibility of PSCs and reducing performance variance by approximately threefold, thereby advancing the development of scalable manufacturing processes.
Keywords: data-driven design of scalable perovskite film fabrication via machine learning-guided processing
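The screening loop this abstract describes (train a surrogate on a modest batch of experiments, then score a large grid of candidate process parameters) can be sketched with scikit-learn. Everything below is illustrative: the parameter names, ranges, and synthetic response are invented stand-ins, not the paper's data or model settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for ~130 blade-coating experiments; columns are
# hypothetical parameters: [coating speed, substrate temperature, ink ratio]
lo, hi = np.array([5.0, 20.0, 0.5]), np.array([50.0, 150.0, 2.0])
X = rng.uniform(lo, hi, size=(130, 3))
# Toy device-efficiency response with an interior optimum plus noise
y = (-(X[:, 0] - 25) ** 2 / 100 - (X[:, 1] - 100) ** 2 / 500
     + X[:, 2] + rng.normal(0, 0.2, size=130))

surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Score a large batch of candidate parameter sets in one shot
candidates = rng.uniform(lo, hi, size=(20_000, 3))
scores = surrogate.predict(candidates)
best = candidates[np.argmax(scores)]
print(best)  # predicted-best process window
```

A real run would replace the synthetic response with measured device metrics and validate the surrogate (the paper reports an R^2 near 0.8) before trusting the ranking.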
2. Microseismic signal processing and rockburst disaster identification: A multi-task deep learning and machine learning approach
Authors: Chunchi Ma, Weihao Xu, Xuefeng Ran, Tianbin Li, Hang Zhang, Dongwei Xing. Journal of Rock Mechanics and Geotechnical Engineering, 2026, Issue 1, pp. 441-456 (16 pages)
Underground engineering projects such as deep tunnel excavation often encounter rockburst disasters accompanied by numerous microseismic events. Rapid interpretation of microseismic signals is crucial for the timely identification of rockbursts. However, conventional processing encompasses multi-step workflows, including classification, denoising, picking, locating, and computational analysis, coupled with manual intervention, which collectively compromise the reliability of early warnings. To address these challenges, this study proposes the "microseismic stethoscope," a multi-task machine learning and deep learning model designed for the automated processing of massive microseismic signals. The model efficiently extracts three key parameters necessary for recognizing rockburst disasters: rupture location, microseismic energy, and moment magnitude. Specifically, raw waveform features are processed by three dedicated sub-networks: a classifier for source zone classification and two regressors for microseismic energy and moment magnitude estimation. The model is more efficient than both traditional and semi-automated processing, reducing per-event processing time from 0.71 s and 0.49 s, respectively, to merely 0.036 s. It concurrently achieves 98% accuracy in source zone classification, with microseismic energy and moment magnitude estimation errors of 0.13 and 0.05, respectively. The model has been applied and validated in the Daxiagu Tunnel case in Sichuan, China. The application results indicate that the model is as accurate as traditional methods in determining source parameters and can thus be used to identify potential geomechanical processes of rockburst disasters. By enhancing the signal processing reliability of microseismic events, the proposed model presents a significant advancement in the identification of rockburst disasters.
Keywords: underground engineering, microseismic signal processing, deep learning, multi-task, rockburst identification
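The shared-encoder, three-head layout described above can be sketched in PyTorch. The architecture below (layer sizes, a four-zone classifier, 1024-sample waveform windows) is a hypothetical stand-in for illustration, not the paper's network:

```python
import torch
import torch.nn as nn

class MicroseismicMultiTask(nn.Module):
    """Illustrative multi-task net: one shared waveform encoder feeding a
    source-zone classifier and two regressors (energy, moment magnitude)."""
    def __init__(self, n_zones=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.zone_head = nn.Linear(32, n_zones)   # classification head
        self.energy_head = nn.Linear(32, 1)       # regression head
        self.magnitude_head = nn.Linear(32, 1)    # regression head

    def forward(self, x):
        h = self.encoder(x)
        return self.zone_head(h), self.energy_head(h), self.magnitude_head(h)

net = MicroseismicMultiTask()
waveforms = torch.randn(8, 1, 1024)   # batch of 8 single-channel waveforms
zone_logits, energy, magnitude = net(waveforms)
print(zone_logits.shape, energy.shape, magnitude.shape)
```

Training such a model would sum a cross-entropy loss on the zone head with regression losses on the two others, which is the usual multi-task recipe.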
3. Processing map for oxide dispersion strengthening Cu alloys based on experimental results and machine learning modelling
Authors: Le Zong, Lingxin Li, Lantian Zhang, Xuecheng Jin, Yong Zhang, Wenfeng Yang, Pengfei Liu, Bin Gan, Liujie Xu, Yuanshen Qi, Wenwen Sun. International Journal of Minerals, Metallurgy and Materials, 2026, Issue 1, pp. 292-305 (14 pages)
Oxide dispersion strengthened (ODS) alloys are extensively used owing to the high thermostability and creep strength contributed by uniformly dispersed fine oxide particles. However, these strengthening particles also deteriorate processability, so it is of great importance to establish accurate processing maps to guide thermomechanical processes and enhance formability. In this study, we employed a particle swarm optimization-based back propagation artificial neural network model to predict the high-temperature flow behavior of 0.25 wt% Al2O3 particle-reinforced Cu alloys, and compared its accuracy with that of an Arrhenius-type constitutive model and a plain back propagation artificial neural network model. To train these models, we obtained the raw data by fabricating ODS Cu alloys using the internal oxidation and reduction method and conducting systematic hot compression tests between 400 and 800 °C with strain rates of 10^(-2)-10 s^(-1). Finally, processing maps for ODS Cu alloys were proposed by combining processing parameters, mechanical behavior, and microstructure characterization, and the modeling results achieved a coefficient of determination higher than 99%.
Keywords: oxide dispersion strengthened Cu alloys, constitutive model, machine learning, hot deformation, processing maps
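The Arrhenius-type constitutive model used here as a baseline conventionally takes the hyperbolic-sine form, with flow stress recovered from the Zener-Hollomon parameter Z = eps_dot * exp(Q/(R*T)). The constants below (A, alpha, n, Q) are invented for illustration, not the fitted values for these ODS Cu alloys:

```python
import numpy as np

# Hyperbolic-sine Arrhenius-type constitutive model (generic form):
#   eps_dot = A * sinh(alpha * sigma)**n * exp(-Q / (R * T))
# Inverting via the Zener-Hollomon parameter gives the flow stress.
R = 8.314                                  # gas constant, J/(mol*K)
A, alpha, n, Q = 1e10, 0.01, 5.0, 300e3    # illustrative material constants

def flow_stress(strain_rate, T_kelvin):
    Z = strain_rate * np.exp(Q / (R * T_kelvin))   # Zener-Hollomon parameter
    return np.arcsinh((Z / A) ** (1.0 / n)) / alpha

# Flow stress rises with strain rate and falls with temperature
print(flow_stress(0.01, 673), flow_stress(10.0, 673), flow_stress(10.0, 1073))
```

Fitting A, alpha, n, and Q to hot-compression data is what makes the model predictive; the neural-network models in the paper learn the same mapping without assuming this functional form.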
4. A systematic data-driven modelling framework for nonlinear distillation processes incorporating data intervals clustering and new integrated learning algorithm
Authors: Zhe Wang, Renchu He, Jian Long. Chinese Journal of Chemical Engineering, 2025, Issue 5, pp. 182-199 (18 pages)
The distillation process is an important chemical process, and the application of a data-driven modelling approach has the potential to reduce model complexity compared to mechanistic modelling, thus improving the efficiency of process optimization or monitoring studies. However, the distillation process is highly nonlinear and has multiple uncertainty perturbation intervals, which brings challenges to accurate data-driven modelling of distillation processes. This paper proposes a systematic data-driven modelling framework to solve these problems. Firstly, data segment variance was introduced into the K-means algorithm to form K-means data interval (KMDI) clustering, in order to cluster the data into perturbed and steady-state intervals for steady-state data extraction. Secondly, the maximal information coefficient (MIC) was employed to calculate the nonlinear correlation between variables for removing redundant features. Finally, extreme gradient boosting (XGBoost) was integrated as the base learner into adaptive boosting (AdaBoost), with an error threshold (ET) set to improve the weight-update strategy, to construct a new integrated learning algorithm, XGBoost-AdaBoost-ET. The superiority of the proposed framework is verified by applying it to a real industrial propylene distillation process.
Keywords: integrated learning algorithm, data intervals clustering, feature selection, application of artificial intelligence in the distillation industry, data-driven modelling
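The integrated learner embeds XGBoost base models in AdaBoost's weight-update loop. As a rough sketch of that boosting skeleton, the following uses a shallow scikit-learn tree in place of XGBoost and plain AdaBoost.R2-style weight updates; the paper's error-threshold (ET) modification to the update is not reproduced here:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, size=200)

w = np.full(len(X), 1.0 / len(X))       # sample weights
learners, betas = [], []
for _ in range(10):
    tree = DecisionTreeRegressor(max_depth=3).fit(X, y, sample_weight=w)
    rel = np.abs(tree.predict(X) - y)
    rel /= rel.max()                    # relative (linear) loss per sample
    loss = min(np.average(rel, weights=w), 0.49)  # guard: keep beta < 1
    beta = loss / (1.0 - loss)
    w *= beta ** (1.0 - rel)            # well-fit samples are down-weighted
    w /= w.sum()
    learners.append(tree)
    betas.append(beta)

# Combine with log(1/beta) weights (simplified mean instead of weighted median)
coef = np.log(1.0 / np.array(betas))
coef /= coef.sum()
pred = sum(c * t.predict(X) for c, t in zip(coef, learners))
print(np.mean((pred - y) ** 2))
```

Swapping the tree for an `XGBRegressor` and gating the weight update with an error threshold would move this sketch toward the XGBoost-AdaBoost-ET algorithm the abstract describes.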
5. MELODI: An explainable machine learning method for mechanistic disentanglement of battery calendar aging
Authors: Wenkai Ye, Xiaoru Chen, Xu Hao, Yilin Xie, Fuda Gong, Liangxi He, Xuebing Han, Hewu Wang, Minggao Ouyang. Journal of Energy Chemistry, 2026, Issue 1, pp. 804-813, I0018 (11 pages)
Lithium-ion batteries (LIBs) are widely deployed, from grid-scale storage to electric vehicles. LIBs remain stationary for most of their service life, where calendar aging degrades capacity. Understanding the mechanisms of LIB calendar aging is crucial for extending battery lifespan. However, LIB calendar aging is influenced by multiple factors, including battery material, battery state, and storage environment. Calendar aging experiments are also time-consuming, costly, and lack standardized testing conditions. This study employs a data-driven approach to establish a cross-scale database linking materials, side-reaction mechanisms, and calendar aging of LIBs. MELODI (Mechanism-informed, Explainable, Learning-based Optimization for Degradation Identification) is proposed to identify calendar aging mechanisms and quantify the effects of multi-scale factors. Results reveal that cathode material loss drives up to 91.42% of calendar aging degradation in high-nickel (Ni) batteries, while solid electrolyte interphase growth dominates in lithium iron phosphate (LFP) and low-Ni batteries, contributing up to 82.43% of degradation in LFP batteries and 99.10% of decay in low-Ni batteries, respectively. This study systematically quantifies calendar aging in commercial LIBs under varying materials, states of charge, and temperatures. These findings offer quantitative guidance for experimental design and battery use, with implications for emerging applications such as aerial robotics, vehicle-to-grid, and embodied intelligence systems.
Keywords: data-driven model, degradation mechanism, lithium-ion battery, machine learning
6. Human Activity Recognition Using Weighted Average Ensemble by Selected Deep Learning Models
Authors: Waseem Akhtar, Mahwish Ilyas, Romana Aziz, Ghadah Aldehim, Tassawar Iqbal, Muhammad Ramzan. Computer Modeling in Engineering & Sciences, 2026, Issue 2, pp. 971-989 (19 pages)
Human Activity Recognition (HAR) is a novel area of computer vision. It has a great impact on healthcare, smart environments, and surveillance, as it can automatically detect human behavior. It plays a vital role in many applications, such as smart homes, healthcare, human-computer interaction, sports analysis, and especially intelligent surveillance. In this paper, we propose a robust and efficient HAR system by leveraging deep learning paradigms, including pre-trained models, CNN architectures, and their weighted-average fusion. Due to the diversity of human actions, various environmental influences, and a lack of data and resources, high recognition accuracy remains elusive. In this work, a weighted average ensemble technique is employed to fuse three deep learning models: EfficientNet, ResNet50, and a custom CNN. The results of this study indicate that a weighted average ensemble strategy is a promising approach for developing more effective HAR models for the detection and classification of human activities. Experiments on the benchmark dataset showed that the proposed weighted ensemble approach outperformed existing approaches in terms of accuracy and other key performance measures. The combined weighted-average ensemble of pre-trained and CNN models obtained an accuracy of 98%, compared to 97%, 96%, and 95% for the customized CNN, EfficientNet, and ResNet50 models, respectively.
Keywords: artificial intelligence, computer vision, deep learning, recognition, human activity classification, image processing
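The fusion step itself is simple: each model's class-probability outputs are averaged with per-model weights before taking the argmax. A minimal sketch (model names and weights are placeholders, not the paper's tuned values):

```python
import numpy as np

def weighted_ensemble(prob_maps, weights):
    """Weighted average of per-model class-probability arrays."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize so weights sum to 1
    stacked = np.stack(prob_maps)        # (n_models, n_samples, n_classes)
    return np.tensordot(w, stacked, axes=1)

# Three models' probabilities for 2 samples over 3 activity classes
p_effnet = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
p_resnet = np.array([[0.6, 0.3, 0.1], [0.2, 0.6, 0.2]])
p_custom = np.array([[0.8, 0.1, 0.1], [0.1, 0.7, 0.2]])

fused = weighted_ensemble([p_effnet, p_resnet, p_custom],
                          weights=[0.4, 0.25, 0.35])
print(fused.argmax(axis=1))   # fused class decisions per sample
```

In practice the per-model weights are chosen on a validation set, e.g., proportional to each model's standalone accuracy or by a small grid search.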
7. Interpretable machine learning predictive model for mechanical properties of AZ31 magnesium alloy rolled sheets
Authors: Bi-wu ZHU, Hao JIANG, Qiu-ping YI, Xiao LIU, Jian-zhao WU, Wen-hui LIU, Cong-chang XU, Luo-xing LI, Ke HU. Transactions of Nonferrous Metals Society of China, 2026, Issue 3, pp. 740-753 (14 pages)
To investigate the complex relationship between rolling process parameters and mechanical properties of AZ31 magnesium alloy rolled sheets, Leave-One-Out Cross-Validation (LOOCV) and parameter tuning were applied to optimize the hyperparameters of four machine learning models (BPNN, SVR, RF, and KNN). An interpretable prediction model based on machine learning and SHapley Additive exPlanations (SHAP), as well as an analytical method combining the SHAP model and the Pearson Correlation Coefficient (PCC), were proposed. The results showed that, among the four models, the SVR model was able to simultaneously and accurately predict the ultimate tensile strength (UTS) and elongation (EL). According to the combined analysis of PCC and the magnesium alloy rolling forming mechanism, strain rate and reduction displayed a negative and positive correlation with UTS, respectively, while rolling temperature and reduction showed a positive and negative correlation with EL, respectively. Through the SHAP method, which interprets the outputs of the SVR model, it was deduced that reduction and strain rate played the most important roles in the SVR model's UTS and EL outputs, respectively. Combining SHAP with PCC, strain rate and reduction were found to have a greater influence on UTS than rolling temperature, whereas strain rate and rolling temperature had more influence on EL than reduction.
Keywords: AZ31 magnesium alloy, rolling process, mechanical properties, machine learning, SHapley Additive exPlanations
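The PCC half of the analysis is straightforward to reproduce in spirit: compute Pearson correlations between each process parameter and each property. The sketch below uses synthetic data whose trends merely mimic the reported signs, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
# Synthetic rolling-process data (illustrative, not the paper's data set)
temperature = rng.uniform(250, 450, n)     # rolling temperature, deg C
reduction   = rng.uniform(10, 50, n)       # thickness reduction, %
strain_rate = rng.uniform(0.1, 10, n)      # 1/s
# Toy responses mimicking the reported trends: UTS up with reduction,
# down with strain rate; EL up with temperature, down with reduction
uts = 200 + 1.5 * reduction - 4.0 * strain_rate + rng.normal(0, 3, n)
el  = 5 + 0.04 * temperature - 0.1 * reduction + rng.normal(0, 0.5, n)

def pcc(a, b):
    """Pearson correlation coefficient between two 1-D arrays."""
    return np.corrcoef(a, b)[0, 1]

print(pcc(reduction, uts), pcc(strain_rate, uts), pcc(temperature, el))
```

PCC only captures linear, pairwise trends; the paper pairs it with SHAP values from the trained SVR precisely to recover the nonlinear, model-based feature importances that PCC misses.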
8. Deep learning-based number of sources estimation under colored noise and imperfect array
Authors: Linqiang JIANG, Tao TANG, Zhidong WU, Ding WANG, Paihang ZHAO. Chinese Journal of Aeronautics, 2026, Issue 2, pp. 414-428 (15 pages)
The estimation of the Number of Sources (NoS) is a significant challenge in signal processing, particularly due to the impact of colored noise on NoS estimation performance. This paper proposes a Multidimensional Feature Network (MFNet) designed for NoS estimation by extracting features from the sampled received signals and the Sampled Covariance Matrix (SCM). MFNet treats the raw signal and the SCM as two different types of data, and is able to achieve NoS estimation under colored noise and an imperfect array. MFNet employs a Gated Recurrent Unit (GRU) to capture sequential information from the original signal data and to construct a Pseudo Covariance Matrix (PCM). Subsequently, features of various dimensions, including eigenvalues and Gerschgorin disk radii, are extracted from both the PCM and the SCM and jointly fed into the subsequent network. An overall accuracy of 82% is achieved after network training. Ablation results demonstrate the effectiveness of the multiple inputs, and simulation results demonstrate that the proposed MFNet achieves higher estimation accuracy than existing algorithms and exhibits greater robustness against colored noise.
Keywords: number of sources estimation, deep learning, colored noise, imperfect array, array signal processing
9. A Hybrid Deep Learning Approach Using Vision Transformer and U-Net for Flood Segmentation
Authors: Cyreneo Dofitas Jr, Yong-Woon Kim, Yung-Cheol Byun. Computers, Materials & Continua, 2026, Issue 2, pp. 1209-1227 (19 pages)
Recent advances in deep learning have significantly improved flood detection and segmentation from aerial and satellite imagery. However, conventional convolutional neural networks (CNNs) often struggle in complex flood scenarios involving reflections, occlusions, or indistinct boundaries due to limited contextual modeling. To address these challenges, we propose a hybrid flood segmentation framework that integrates a Vision Transformer (ViT) encoder with a U-Net decoder, enhanced by a novel Flood-Aware Refinement Block (FARB). The FARB module improves boundary delineation and suppresses noise by combining residual smoothing with spatial-channel attention mechanisms. We evaluate our model on a UAV-acquired flood imagery dataset, demonstrating that the proposed ViTUNet+FARB architecture outperforms existing CNN and Transformer-based models in terms of accuracy and mean Intersection over Union (mIoU). Detailed ablation studies further validate the contribution of each component, confirming that the FARB design significantly enhances segmentation quality. Owing to its strong performance and computational efficiency, the proposed framework is well suited for flood monitoring and disaster response applications, particularly in resource-constrained environments.
Keywords: flood detection, vision transformer (ViT), U-Net, segmentation, image processing, deep learning, artificial intelligence
10. Peer-to-Peer Energy Trading for Multi-microgrids via Stackelberg Game and Multi-agent Deep Reinforcement Learning
Authors: Pengjie Zhao, Junyong Wu, Fashun Shi, Lusu Li, Baoqing Li, Yi Wang. CSEE Journal of Power and Energy Systems, 2026, Issue 1, pp. 187-199 (13 pages)
This paper proposes a novel framework based on the Stackelberg game and deep reinforcement learning for multi-microgrids (MGs) to achieve peer-to-peer (P2P) energy trading. A multi-leader, multi-follower Stackelberg game is used to model the P2P energy trading process, and the Stackelberg equilibrium (SE) is regarded as the optimal P2P trading strategy. A two-stage privacy-preserving solution technique combining data-driven and model-driven approaches is developed to obtain the SE. Specifically, the energy storage scheduling problem in MGs is formulated as a Markov decision process with discrete periods, and a multi-action single-observation deep deterministic policy gradient (MASO-DDPG) algorithm is proposed to solve the optimal scheduling of energy storage in the first stage. Given the optimal energy storage schedule, a closed-form model-driven expression for the SE is derived, and a distributed SE solution technique (DSET) is developed to obtain the SE in the second stage. Case studies on a 4-microgrid system demonstrate that the P2P electricity price obtained by the two-stage method, as a novel pricing mechanism, can reasonably regulate microgrid operation modes and improve the income of microgrids participating in the P2P market, which verifies the effectiveness and superiority of the proposed P2P energy trading model and two-stage solution method.
Keywords: deep reinforcement learning, Markov decision process, microgrid, peer-to-peer (P2P), Stackelberg equilibrium
11. Detection of Maliciously Disseminated Hate Speech in Spanish Using Fine-Tuning and In-Context Learning Techniques with Large Language Models
Authors: Tomás Bernal-Beltrán, Ronghao Pan, José Antonio García-Díaz, María del Pilar Salas-Zárate, Mario Andrés Paredes-Valverde, Rafael Valencia-García. Computers, Materials & Continua, 2026, Issue 4, pp. 353-390 (38 pages)
The malicious dissemination of hate speech via compromised accounts, automated bot networks, and malware-driven social media campaigns has become a growing cybersecurity concern. Automatically detecting such content in Spanish is challenging due to linguistic complexity and the scarcity of annotated resources. In this paper, we compare two predominant AI-based approaches for the forensic detection of malicious hate speech: (1) fine-tuning encoder-only models trained in Spanish and (2) In-Context Learning techniques (Zero- and Few-Shot Learning) with large-scale language models. Our approach goes beyond binary classification, proposing a comprehensive, multidimensional evaluation that labels each text by: (1) type of speech, (2) recipient, (3) level of intensity (ordinal), and (4) targeted group (multi-label). Performance is evaluated on an annotated Spanish corpus using standard metrics such as precision, recall, and F1-score, together with stability-oriented metrics (Zero-to-Few-Shot Retention and Zero-to-Few-Shot Gain) that assess the stability of the transition from zero-shot to few-shot prompting. The results indicate that fine-tuned encoder-only models (notably MarIA and BETO variants) consistently deliver the strongest and most reliable performance: in our experiments their macro F1-scores lie roughly in the range of 46%-66%, depending on the task. Zero-shot approaches are much less stable and typically yield substantially lower performance (observed F1-scores range approximately 0%-39%), often producing invalid outputs in practice. Few-shot prompting (e.g., Qwen 38B, Mistral 7B) generally improves stability and recall relative to pure zero-shot, bringing F1-scores into a moderate range of approximately 20%-51%, but still falls short of fully fine-tuned models. These findings highlight the importance of supervised adaptation and the potential of both paradigms as components in AI-powered cybersecurity and malware forensics systems designed to identify and mitigate coordinated online hate campaigns.
Keywords: hate speech detection, malicious communication campaigns, AI-driven cybersecurity, social media analytics, large language models, prompt-tuning, fine-tuning, in-context learning, natural language processing
12. AquaTree: Deep Reinforcement Learning-Driven Monte Carlo Tree Search for Underwater Image Enhancement
Authors: Chao Li, Jianing Wang, Caichang Ding, Zhiwei Ye. Computers, Materials & Continua, 2026, Issue 3, pp. 1444-1464 (21 pages)
Underwater images frequently suffer from chromatic distortion, blurred details, and low contrast, posing significant challenges for enhancement. This paper introduces AquaTree, a novel underwater image enhancement (UIE) method that reformulates the task as a Markov Decision Process (MDP) through the integration of Monte Carlo Tree Search (MCTS) and deep reinforcement learning (DRL). The framework employs an action space of 25 enhancement operators, strategically grouped for basic attribute adjustment, color component balance, correction, and deblurring. Exploration within MCTS is guided by a dual-branch convolutional network, enabling intelligent sequential operator selection. Our core contributions include: (1) a multimodal state representation combining CIELab color histograms with deep perceptual features, (2) a dual-objective reward mechanism optimizing chromatic fidelity and perceptual consistency, and (3) an alternating training strategy co-optimizing enhancement sequences and network parameters. We further propose two inference schemes: an MCTS-based approach prioritizing accuracy at higher computational cost, and an efficient network policy enabling real-time processing with minimal quality loss. Comprehensive evaluations on the UIEB dataset, together with color correction and haze removal comparisons on the U45 dataset, demonstrate AquaTree's superiority, significantly outperforming nine state-of-the-art methods across five established underwater image quality metrics.
Keywords: underwater image enhancement (UIE), Monte Carlo tree search (MCTS), deep reinforcement learning (DRL), Markov decision process (MDP)
13. Control-Communication Co-Optimization for Wireless Cloud Robotic System via Multi-Agent Transfer Reinforcement Learning
Authors: Chi Xu, Junyuan Zhang, Haibin Yu. IEEE/CAA Journal of Automatica Sinica, 2026, Issue 2, pp. 311-326 (16 pages)
The wireless cloud robotic system (WCRS), which fully integrates sensing, communication, computing, and control capabilities as an intelligent agent, is a promising way to achieve intelligent manufacturing due to easy deployment and flexible expansion. However, high-precision control of the WCRS requires deterministic wireless communication, which is always challenging in the complex and dynamic radio space. This paper employs the reconfigurable intelligent surface (RIS) to establish a novel RIS-assisted WCRS architecture, where the radio channel is controlled to achieve ultra-reliable, low-delay, and low-jitter communication for high-precision closed-loop motion control. However, control and communication are strongly coupled and should be co-optimized. Fully considering the constraints of control input threshold, control delay deadline, beam phase, antenna power, and information distortion, we establish a stability maximization problem to jointly optimize control input compensation, RIS phase shift, and beamforming. A new jitter-oriented system stability objective with respect to control error and communication jitter is defined, and the closed-form expression of the control delay deadline is derived based on Jensen's inequality and a Lyapunov-Krasovskii functional. Due to the time-varying nature and partial observability of the channel and robot states, we model the problem as a partially observable Markov decision process (POMDP). To solve this complex problem, we propose a multi-agent transfer reinforcement learning algorithm named LSTM-PPO-MATRL, where LSTM-enhanced proximal policy optimization (PPO) is designed to approximate an optimal solution and option-guided policy transfer learning is proposed to facilitate the learning process. With centralized training and decentralized execution, LSTM-PPO-MATRL is validated by extensive experiments on MuJoCo tasks for both low-mobility and high-mobility robotic control scenarios. The results demonstrate that LSTM-PPO-MATRL not only realizes high learning efficiency but also supports low-delay, low-jitter communication for low-error control, achieving a 71.9% control accuracy improvement and a 68.7% delay jitter reduction compared to the PPO-MADRL baseline.
Keywords: multi-agent transfer reinforcement learning (MATRL), partially observable Markov decision process (POMDP), reconfigurable intelligent surface (RIS), system stability, wireless cloud robotic system (WCRS)
14. NJmat 2.0: User Instructions of Data-Driven Machine Learning Interface for Materials Science
Authors: Lei Zhang, Hangyuan Deng. Computers, Materials & Continua, 2025, Issue 4, pp. 1-11 (11 pages)
NJmat is a user-friendly, data-driven machine learning interface designed for materials design and analysis. The platform integrates advanced computational techniques, including natural language processing (NLP), large language models (LLM), machine learning potentials (MLP), and graph neural networks (GNN), to facilitate materials discovery. The platform has been applied in diverse materials research areas, including perovskite surface design, catalyst discovery, battery materials screening, structural alloy design, and molecular informatics. By automating feature selection, predictive modeling, and result interpretation, NJmat accelerates the development of high-performance materials across energy storage, conversion, and structural applications. Additionally, NJmat serves as an educational tool, allowing students and researchers to apply machine learning techniques in materials science with minimal coding expertise. Through automated feature extraction, genetic algorithms, and interpretable machine learning models, NJmat simplifies the workflow for materials informatics, bridging the gap between AI and experimental materials research. The latest version (available at https://figshare.com/articles/software/NJmatML/24607893, accessed on 01 January 2025) enhances its functionality by incorporating NJmatNLP, a module leveraging language models such as MatBERT and those based on Word2Vec to support materials prediction tasks. By utilizing clustering and cosine similarity analysis with UMAP visualization, NJmat enables intuitive exploration of materials datasets. While NJmat primarily focuses on structure-property relationships and the discovery of novel chemistries, it can also assist in optimizing processing conditions when relevant parameters are included in the training data. By providing an accessible, integrated environment for machine learning-driven materials discovery, NJmat aligns with the objectives of the Materials Genome Initiative and promotes broader adoption of AI techniques in materials science.
Keywords: data-driven, machine learning, natural language processing, machine learning potential, large language model
15. Deep learning aided underwater acoustic OFDM receivers: Model-driven or data-driven?
Authors: Hao Zhao, Miaowen Wen, Fei Ji, Yaokun Liang, Hua Yu, Cui Yang. Digital Communications and Networks, 2025, Issue 3, pp. 866-877 (12 pages)
The Underwater Acoustic (UWA) channel is bandwidth-constrained and experiences doubly selective fading. It is challenging to acquire perfect channel knowledge for Orthogonal Frequency Division Multiplexing (OFDM) communications using a finite number of pilots. On the other hand, Deep Learning (DL) approaches have been very successful in wireless OFDM communications; whether they will work underwater, however, remains an open question. For the first time, this paper compares two categories of DL-based UWA OFDM receivers: the Data-Driven (DD) method, which performs as an end-to-end black box, and the Model-Driven (MD) method, also known as the model-based data-driven method, which combines DL with expert OFDM receiver knowledge. An encoder-decoder framework with a Convolutional Neural Network (CNN) structure is employed to build the DD receiver, while an unfolding-based Minimum Mean Square Error (MMSE) structure is adopted for the MD receiver. We analyze the characteristics of the different receivers through Monte Carlo simulations under diverse communication conditions and propose a strategy for selecting a proper receiver under different communication scenarios. Field trials in a pool and at sea are also conducted to verify the feasibility and advantages of the DL receivers. The DL receivers are observed to outperform conventional receivers in terms of bit error rate.
Keywords: deep learning, doubly-selective channels, data-driven, model-driven, underwater acoustic communication, OFDM
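The MD receiver unfolds the classical per-subcarrier MMSE equalizer into trainable layers. The closed form it starts from, x_hat = conj(H) * y / (|H|^2 + sigma^2), can be shown directly; the sketch below uses synthetic QPSK over a random flat per-subcarrier channel, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
K = 64                                    # number of OFDM subcarriers
# Rayleigh-fading per-subcarrier channel coefficients
H = (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)
# Unit-energy QPSK symbols
x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=K) / np.sqrt(2)
sigma2 = 0.01                             # noise variance (20 dB SNR)
noise = np.sqrt(sigma2 / 2) * (rng.normal(size=K) + 1j * rng.normal(size=K))
y = H * x + noise

# Per-subcarrier MMSE equalizer: regularizes the inversion on deep fades,
# unlike zero-forcing (y / H)
x_hat = np.conj(H) * y / (np.abs(H) ** 2 + sigma2)

# Hard QPSK decisions from the equalized symbols
decisions = (np.sign(x_hat.real) + 1j * np.sign(x_hat.imag)) / np.sqrt(2)
print(np.mean(decisions != x))            # symbol error rate
```

An unfolded MD receiver would replace fixed quantities such as sigma2 with learned, layer-wise parameters while keeping this algebraic structure.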
16. Data-Driven Human-in-the-Loop Iterative Learning Fault Estimation Method
Authors: Fei Wang, Jie Sun, Junwei Zhu, Ruofeng Wei. Chinese Journal of Mechanical Engineering, 2025, Issue 6, pp. 180-188 (9 pages)
For control systems with unknown model parameters, this paper proposes a data-driven iterative learning method for fault estimation. First, input and output data from the system under fault-free conditions are collected. By applying orthogonal triangular decomposition and singular value decomposition, a data-driven realization of the system's kernel representation is derived; based on this representation, a residual generator is constructed. Then, the actuator fault signal is estimated online by analyzing the system's dynamic residual, and an iterative learning algorithm is introduced to continuously optimize the residual-based performance function, thereby enhancing estimation accuracy. The proposed method achieves actuator fault estimation without requiring knowledge of model parameters, eliminating the time-consuming system modeling process and allowing operators to focus on system optimization and decision-making. Compared with existing fault estimation methods, the proposed method demonstrates superior transient performance, steady-state performance, and real-time capability, while reducing the need for manual intervention and lowering operational complexity. Finally, experimental results on a mobile robot verify the effectiveness and advantages of the method.
Keywords: Data-driven; Residual generator; Fault estimation; Iterative learning; Mobile robot
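The kernel-representation step described above can be sketched on a toy example: the left null space of a stacked fault-free I/O data matrix annihilates nominal data, so projecting a new data window onto it yields a residual that departs from zero when a fault appears. This is a simplified sketch (no Hankel-matrix bookkeeping and no QR step), with illustrative names.

```python
import numpy as np

def residual_generator(io_data, tol=1e-10):
    """Data-driven kernel representation: rows of K span the left null
    space of the stacked fault-free I/O matrix, so K @ io_data ≈ 0."""
    U, S, _ = np.linalg.svd(io_data, full_matrices=True)
    rank = int(np.sum(S > tol))      # numerical rank of the data matrix
    return U[:, rank:].T             # kernel matrix K

def residual(K, window):
    """Residual norm for a new stacked I/O window; near zero when the
    window is consistent with fault-free behavior."""
    return float(np.linalg.norm(K @ window))
```

A fault estimator would then feed this residual into the iterative learning update rather than thresholding it directly.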
17. Data-Driven Adaptive P-Type Iterative Learning Control for Linear Discrete Time Singular Systems
Authors: Ijaz Hussain, Xiaoe Ruan, Chuyang Liu, Bingqiang Li. IEEE/CAA Journal of Automatica Sinica, 2025, No. 10, pp. 2067-2081 (15 pages).
For a class of repetitive linear discrete-time singular systems whose pulse response sequence is unavailable, this paper explores a data-driven adaptive iterative learning control (DDAILC) strategy that interacts with pulse response iterative correction (PRIC). The mechanism is to formulate the correction performance index as a linear summation of the quadratic correction error of the pulse response and the quadratic tracking error. The pulse response correction algorithm is then derived, and the correction error decreases monotonically. The conditional relationship between the declining rate of the correction error and the correction ratio is also discussed. A DDAILC algorithm is designed by substituting the exact pulse response of the gain-optimized iterative learning control (GOILC) with its approximation, updated by the correction algorithm. Monotonic convergence of both the tracking error and the correction error is established. Finally, numerical simulation verifies the validity and effectiveness of the approach.
Keywords: Data-driven; Iterative learning control (ILC); Gain adaptation; Monotonic; Pulse response correction
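The backbone of any P-type scheme like the one above is the trial-to-trial update u_{k+1} = u_k + γ·e_k. A minimal sketch on a toy static plant follows; this is not the paper's singular-system setting, and the plant, gain, and names are illustrative assumptions chosen so the contraction is easy to see.

```python
import numpy as np

def p_type_ilc(plant, y_ref, gamma, trials):
    """P-type ILC: after each trial, correct the whole input trajectory
    with the tracking error, u_{k+1} = u_k + gamma * e_k."""
    u = np.zeros_like(y_ref)
    err_norms = []
    for _ in range(trials):
        e = y_ref - plant(u)             # tracking error of this trial
        err_norms.append(np.linalg.norm(e))
        u = u + gamma * e                # P-type learning update
    return u, err_norms
```

For the toy plant y = 0.8·u with γ = 0.5, the error contracts by |1 − 0.8·0.5| = 0.6 per trial, which is the scalar analogue of the monotonic convergence condition.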
18. Robot Impedance Iterative Learning With Sparse Online Gaussian Process
Authors: Yongping Pan, Tian Shi, Wei Li, Bin Xu, Choon Ki Ahn. IEEE/CAA Journal of Automatica Sinica, 2025, No. 11, pp. 2218-2227 (10 pages).
Robot interaction control with variable impedance parameters can conform to task requirements during continuous interaction with dynamic environments. Iterative learning (IL) is effective for learning desired impedance parameters for robots in unknown environments, and the Gaussian process (GP) is a nonparametric Bayesian approach that models complicated functions with provable confidence using limited data. In this paper, we propose an impedance IL method enhanced by a sparse online Gaussian process (SOGP) to speed up learning convergence and improve generalization. While impedance parameters are learned over multiple iterations, the SOGP for variable impedance modeling is updated within each iteration by removing data points similar to those from previous iterations. The proposed IL-SOGP method is verified by high-fidelity simulations of a collaborative robot with 7 degrees of freedom based on the admittance control framework. It is shown that the proposed method accelerates iterative convergence and improves generalization compared with the classical IL-based impedance learning method.
Keywords: Gaussian process (GP); Impedance variation; Iterative learning (IL); Physical robot interaction; Robot learning
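The sparsification idea, keeping the GP basis set bounded by discarding redundant points as new data streams in, can be caricatured with a budgeted online GP regressor. This is a toy sketch, not the Csató-Opper SOGP used in such work: here the pruning rule simply drops the stored point most similar (by kernel value) to the newcomer, and all class and parameter names are assumptions.

```python
import numpy as np

class SparseOnlineGP:
    """Toy budgeted online GP: when the basis set exceeds the budget,
    drop the stored point most kernel-similar to the new one, mimicking
    SOGP's pruning of redundant data across learning iterations."""

    def __init__(self, budget=20, length_scale=1.0, noise=1e-2):
        self.budget, self.ls, self.noise = budget, length_scale, noise
        self.X, self.y = [], []

    def _k(self, a, b):
        # Squared-exponential (RBF) kernel
        d2 = np.sum((np.asarray(a) - np.asarray(b)) ** 2)
        return np.exp(-d2 / (2 * self.ls ** 2))

    def update(self, x, y):
        if len(self.X) >= self.budget:
            sims = [self._k(x, xi) for xi in self.X]
            idx = int(np.argmax(sims))       # most redundant stored point
            self.X.pop(idx)
            self.y.pop(idx)
        self.X.append(x)
        self.y.append(y)

    def predict(self, x):
        # Standard GP posterior mean on the retained basis set
        K = np.array([[self._k(a, b) for b in self.X] for a in self.X])
        k = np.array([self._k(x, b) for b in self.X])
        alpha = np.linalg.solve(K + self.noise * np.eye(len(self.X)), self.y)
        return float(k @ alpha)
```

In the impedance-learning setting, x would be a task-phase/state feature and y a stiffness or damping parameter refined across iterations.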
19. Innovative Concrete Cube Failure Mode Detection Using Image Processing and Machine Learning for Sustainable Construction Practices
Authors: Meenakshi S. Patil, Rajesh B. Ghongade, Hemant B. Dhonde. Journal on Artificial Intelligence, 2025, No. 1, pp. 289-300 (12 pages).
This study seeks to establish a novel, semi-automatic system that applies Industry 4.0 principles to classify concrete cubes as acceptable or rejectable according to their failure modes, significantly contributing to the dependability of concrete quality evaluations. The study utilizes image processing and machine learning (ML) methods, namely object detection models such as YOLOv8 and Convolutional Neural Networks (CNNs), to evaluate images of concrete cubes. These models are trained and validated on an extensive database of annotated images from real-world and laboratory conditions. Preliminary results indicate good performance in the classification of concrete cube failure modes. The proposed system accurately identifies cracks and determines the severity of structural damage, indicating its potential to minimize the human errors and discrepancies that can occur with current techniques for detecting the failure mode of concrete cubes. The developed system could significantly improve the reliability of concrete cube assessments, reduce resource wastage, and contribute to more sustainable construction practices. By minimizing material costs and errors, this innovation supports the construction industry's move towards sustainability.
Keywords: Concrete cube failure; Image processing; Machine learning; YOLOv8; CNNs
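A sketch of the decision step that could follow detection: mapping a cube's per-crack detections (label, confidence, severity) to an accept/reject outcome. The label set and thresholds below are hypothetical, loosely modeled on the satisfactory/unsatisfactory failure shapes used in cube-testing standards; none of them are taken from the paper.

```python
def classify_cube(detections, conf_threshold=0.5, severity_threshold=0.7):
    """Map detector output to a cube-level accept/reject decision.
    detections: list of (failure_mode_label, confidence, severity in [0, 1]).
    The 'satisfactory' label set is a hypothetical stand-in for the
    acceptable failure shapes defined by the relevant testing standard."""
    satisfactory = {"cone", "cone_and_split", "cone_and_shear"}
    for label, conf, severity in detections:
        if conf < conf_threshold:
            continue                      # ignore low-confidence detections
        if label not in satisfactory or severity > severity_threshold:
            return "reject"
    return "accept"
```

In a deployed pipeline these tuples would come from the YOLOv8 bounding-box output after per-class non-maximum suppression.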
20. Deep learning retrieval of 3D casting models combined with professional knowledge for process reuse
Authors: Xiao-long Pei, Hua Hou, Li-wen Chen, Zhi-qiang Duan, Yu-hong Zhao. China Foundry, 2025, No. 6, pp. 710-722 (13 pages).
Accurate retrieval of casting 3D models is crucial for process reuse. Current methods primarily focus on shape similarity and neglect process design features, which compromises reusability. In this study, a novel deep learning retrieval method for process reuse is proposed that integrates process design features into the retrieval of casting 3D models. The method leverages the contrastive language-image pretraining (CLIP) model to extract shape features from the three views and sectional views of the casting model, and combines them with process design features such as modulus, main wall thickness, symmetry, and length-to-height ratio to enhance process reusability. A database of 230 production casting models was established for validation. Results indicate that incorporating process design features improves model accuracy by 6.09%, reaching 97.82%, and increases process similarity by 30.25%. The reusability of the retrieved process was further verified using the casting simulation software EasyCast: the process retrieved after integrating process design features produces the least shrinkage in the target model, demonstrating the method's superior ability for process reuse. The approach does not require a large dataset for training and optimization, making it highly applicable to casting process design and related manufacturing processes.
Keywords: Casting; 3D model retrieval; Process reuse; Deep learning
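The fusion of shape embeddings with process design features can be sketched as a weighted retrieval score: cosine similarity on the embedding (as a CLIP image encoder would produce) blended with closeness in a small process-feature vector (modulus, wall thickness, symmetry, length-to-height ratio). The weighting scheme and names below are illustrative assumptions, not the paper's scoring function.

```python
import numpy as np

def retrieve(query_shape, query_proc, db_shapes, db_procs, w=0.5):
    """Rank database castings by a weighted blend of shape-embedding
    cosine similarity and process-design feature closeness; returns the
    index of the best match."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = [
        w * cos(query_shape, s)
        + (1 - w) / (1.0 + np.linalg.norm(query_proc - p))  # distance -> similarity
        for s, p in zip(db_shapes, db_procs)
    ]
    return int(np.argmax(scores))
```

Setting w = 1 recovers pure shape retrieval; lowering w increases the influence of process design features, which is the knob the study's comparison effectively turns.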