Underground engineering projects such as deep tunnel excavation often encounter rockburst disasters accompanied by numerous microseismic events. Rapid interpretation of microseismic signals is crucial for the timely identification of rockbursts. However, conventional processing encompasses multi-step workflows, including classification, denoising, picking, locating, and computational analysis, coupled with manual intervention, which collectively compromise the reliability of early warnings. To address these challenges, this study proposes the "microseismic stethoscope", a multi-task machine learning and deep learning model designed for the automated processing of massive microseismic signals. The model efficiently extracts three key parameters necessary for recognizing rockburst disasters: rupture location, microseismic energy, and moment magnitude. Specifically, the model extracts raw waveform features through three dedicated sub-networks: a classifier for source zone classification and two regressors for microseismic energy and moment magnitude estimation. The model demonstrates superior efficiency compared with traditional and semi-automated processing, reducing per-event processing time from 0.71 s and 0.49 s, respectively, to merely 0.036 s. It concurrently achieves 98% accuracy in source zone classification, with microseismic energy and moment magnitude estimation errors of 0.13 and 0.05, respectively. The model has been applied and validated in the Daxiagu Tunnel case in Sichuan, China. The application results indicate that the model is as accurate as traditional methods in determining source parameters and can therefore be used to identify potential geomechanical processes of rockburst disasters. By enhancing the signal-processing reliability of microseismic events, the proposed model presents a significant advancement in the identification of rockburst disasters.
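The three-head architecture described in the abstract (shared waveform features feeding a source-zone classifier and two regressors) can be sketched as a single forward pass. The sketch below is a minimal NumPy illustration under assumed, placeholder dimensions and random weights; it is not the authors' network:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Placeholder dimensions: a 512-sample waveform window, 32 shared features,
# and 4 hypothetical source zones. The real model's layer sizes are not given.
W_shared = rng.normal(scale=0.1, size=(512, 32))
W_cls = rng.normal(scale=0.1, size=(32, 4))     # source-zone classifier head
W_energy = rng.normal(scale=0.1, size=(32, 1))  # microseismic-energy regressor head
W_mag = rng.normal(scale=0.1, size=(32, 1))     # moment-magnitude regressor head

def microseismic_stethoscope(waveform):
    """One forward pass: shared waveform features feed three task-specific heads."""
    h = relu(waveform @ W_shared)            # shared feature extraction
    zone_probs = softmax(h @ W_cls)          # task 1: source-zone probabilities
    log_energy = (h @ W_energy).item()       # task 2: microseismic energy (scalar)
    magnitude = (h @ W_mag).item()           # task 3: moment magnitude (scalar)
    return zone_probs, log_energy, magnitude

probs, e, m = microseismic_stethoscope(rng.normal(size=512))
```

One forward pass replacing a multi-step manual workflow is what yields the reported per-event speedup; the three outputs here are computed jointly from one shared representation.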
Oxide dispersion strengthened (ODS) alloys are extensively used owing to the high thermostability and creep strength contributed by uniformly dispersed fine oxide particles. However, these strengthening particles also deteriorate processability, so establishing accurate processing maps to guide thermomechanical processing and enhance formability is of great importance. In this study, we developed a particle swarm optimization-based back-propagation artificial neural network model to predict the high-temperature flow behavior of 0.25 wt% Al2O3 particle-reinforced Cu alloys, and compared its accuracy with that of an Arrhenius-type constitutive model and a plain back-propagation artificial neural network model. To train these models, we obtained raw data by fabricating ODS Cu alloys using the internal oxidation and reduction method and conducting systematic hot compression tests between 400 and 800 °C at strain rates of 10^(-2)-10 s^(-1). Finally, processing maps for ODS Cu alloys were proposed by combining processing parameters, mechanical behavior, and microstructure characterization; the modeling results achieved a coefficient of determination higher than 99%.
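The Arrhenius-type constitutive model used above as a baseline is conventionally written in hyperbolic-sine form via the Zener-Hollomon parameter, Z = ε̇·exp(Q/RT) with σ = (1/α)·asinh((Z/A)^(1/n)). A minimal sketch with placeholder constants (not the fitted values for this ODS Cu alloy):

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def flow_stress(strain_rate, T_kelvin, Q, A, alpha, n):
    """Hyperbolic-sine Arrhenius model: sigma = (1/alpha)*asinh((Z/A)**(1/n)),
    with Zener-Hollomon parameter Z = strain_rate * exp(Q/(R*T))."""
    Z = strain_rate * math.exp(Q / (R * T_kelvin))
    return (1.0 / alpha) * math.asinh((Z / A) ** (1.0 / n))

# Placeholder material constants (NOT the paper's fitted values):
Q, A, alpha, n = 300e3, 1e12, 0.012, 5.0

sigma_low = flow_stress(0.01, 1073.15, Q, A, alpha, n)   # 800 degC, 10^-2 s^-1
sigma_high = flow_stress(10.0, 673.15, Q, A, alpha, n)   # 400 degC, 10 s^-1
```

The sketch reproduces the qualitative trend the hot-compression tests probe: lower temperature and higher strain rate raise Z and therefore the predicted flow stress.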
The distillation process is an important chemical process, and data-driven modelling has the potential to reduce model complexity compared with mechanistic modelling, thus improving the efficiency of process optimization and monitoring studies. However, the distillation process is highly nonlinear and has multiple uncertainty perturbation intervals, which challenges accurate data-driven modelling. This paper proposes a systematic data-driven modelling framework to solve these problems. First, data segment variance is introduced into the K-means algorithm to form K-means data interval (KMDI) clustering, which separates the data into perturbed and steady-state intervals for steady-state data extraction. Second, the maximal information coefficient (MIC) is employed to calculate the nonlinear correlation between variables and remove redundant features. Finally, extreme gradient boosting (XGBoost) is integrated as the base learner into adaptive boosting (AdaBoost) with an error threshold (ET) that improves the weight-update strategy, yielding a new ensemble learning algorithm, XGBoost-AdaBoost-ET. The superiority of the proposed framework is verified by applying it to a real industrial propylene distillation process.
Lithium-ion batteries (LIBs) are widely deployed, from grid-scale storage to electric vehicles. LIBs remain stationary for most of their service life, during which calendar aging degrades capacity. Understanding the mechanisms of LIB calendar aging is crucial for extending battery lifespan. However, LIB calendar aging is influenced by multiple factors, including battery material, battery state, and storage environment, and calendar aging experiments are time-consuming, costly, and lack standardized testing conditions. This study employs a data-driven approach to establish a cross-scale database linking materials, side-reaction mechanisms, and calendar aging of LIBs. MELODI (Mechanism-informed, Explainable, Learning-based Optimization for Degradation Identification) is proposed to identify calendar aging mechanisms and quantify the effects of multi-scale factors. Results reveal that cathode material loss drives up to 91.42% of calendar aging degradation in high-nickel (Ni) batteries, whereas solid electrolyte interphase growth dominates in lithium iron phosphate (LFP) and low-Ni batteries, contributing up to 82.43% of degradation in LFP batteries and 99.10% in low-Ni batteries, respectively. This study systematically quantifies calendar aging in commercial LIBs under varying materials, states of charge, and temperatures. These findings offer quantitative guidance for experimental design and battery use, with implications for emerging applications such as aerial robotics, vehicle-to-grid, and embodied intelligence systems.
Human Activity Recognition (HAR) is an active area of computer vision with great impact on healthcare, smart environments, and surveillance, as it can automatically detect human behavior. It plays a vital role in many applications, such as smart homes, healthcare, human-computer interaction, sports analysis, and, especially, intelligent surveillance. However, because of the diversity of human actions, varied environmental influences, and limited data and resources, high recognition accuracy remains elusive. In this paper, we propose a robust and efficient HAR system that leverages deep learning paradigms, including pre-trained models, CNN architectures, and their weighted-average fusion. Specifically, a weighted-average ensemble technique is employed to fuse three deep learning models: EfficientNet, ResNet50, and a custom CNN. Experiments on a benchmark dataset show that the proposed weighted ensemble outperforms existing approaches in terms of accuracy and other key performance measures. The combined weighted-average ensemble of pre-trained and CNN models obtained an accuracy of 98%, compared with 97%, 96%, and 95% for the custom CNN, EfficientNet, and ResNet50 models, respectively. These results indicate that a weighted-average ensemble strategy is a promising approach for detecting and classifying human activities.
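Weighted-average fusion of the three models' softmax outputs can be written in a few lines. The probabilities and weights below are hypothetical, chosen only to illustrate the mechanics:

```python
def weighted_ensemble(prob_lists, weights):
    """Fuse per-model class-probability vectors by a weighted average, then argmax."""
    assert len(prob_lists) == len(weights)
    total = sum(weights)
    n_classes = len(prob_lists[0])
    fused = [
        sum(w * probs[c] for probs, w in zip(prob_lists, weights)) / total
        for c in range(n_classes)
    ]
    return fused, max(range(n_classes), key=fused.__getitem__)

# Hypothetical softmax outputs for one clip from the three models over
# 3 activity classes; the weights are illustrative, not the paper's tuned values.
p_efficientnet = [0.6, 0.3, 0.1]
p_resnet50 = [0.2, 0.5, 0.3]
p_custom_cnn = [0.5, 0.4, 0.1]
fused, label = weighted_ensemble(
    [p_efficientnet, p_resnet50, p_custom_cnn], weights=[1.0, 1.0, 2.0]
)
```

Because the inputs are probability vectors and the weights are normalized, the fused vector is again a valid probability distribution; the predicted class is its argmax.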
Heterogeneous catalysis is a complex, multiscale phenomenon in which reactions occur at dynamically evolving surfaces. A longstanding goal is to probe these processes to distill design rules for novel catalytic materials, a capability that is essential to the transition toward a sustainable future [1-3].
The malicious dissemination of hate speech via compromised accounts, automated bot networks, and malware-driven social media campaigns has become a growing cybersecurity concern. Automatically detecting such content in Spanish is challenging due to linguistic complexity and the scarcity of annotated resources. In this paper, we compare two predominant AI-based approaches for the forensic detection of malicious hate speech: (1) fine-tuning encoder-only models trained in Spanish and (2) in-context learning techniques (zero- and few-shot learning) with large-scale language models. Our approach goes beyond binary classification, proposing a comprehensive, multidimensional evaluation that labels each text by: (1) type of speech, (2) recipient, (3) level of intensity (ordinal), and (4) targeted group (multi-label). Performance is evaluated on an annotated Spanish corpus using standard metrics such as precision, recall, and F1-score, together with stability-oriented metrics that assess the transition from zero-shot to few-shot prompting (Zero-to-Few Shot Retention and Zero-to-Few Shot Gain). The results indicate that fine-tuned encoder-only models (notably MarIA and BETO variants) consistently deliver the strongest and most reliable performance: in our experiments their macro F1-scores lie roughly in the range of 46%-66%, depending on the task. Zero-shot approaches are much less stable and typically yield substantially lower performance (observed F1-scores range from approximately 0% to 39%), often producing invalid outputs in practice. Few-shot prompting (e.g., Qwen 38B, Mistral 7B) generally improves stability and recall relative to pure zero-shot, bringing F1-scores into a moderate range of approximately 20%-51%, but still falls short of fully fine-tuned models. These findings highlight the importance of supervised adaptation and point to the potential of both paradigms as components in AI-powered cybersecurity and malware forensics systems designed to identify and mitigate coordinated online hate campaigns.
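Macro F1, the headline metric above, is the unweighted mean of per-class F1 scores, so small or rare categories carry as much weight as frequent ones. A self-contained sketch with toy labels (not the corpus):

```python
def macro_f1(y_true, y_pred, labels):
    """Macro F1: unweighted mean of per-class F1 scores; a class with no true
    or predicted positives contributes 0."""
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / len(f1s)

# Toy binary example in the paper's style (hypothetical labels):
y_true = ["hate", "hate", "none", "none", "none", "hate"]
y_pred = ["hate", "none", "none", "none", "hate", "hate"]
score = macro_f1(y_true, y_pred, labels=["hate", "none"])
```

The same function extends directly to the ordinal and multi-class subtasks by passing the full label set; for the multi-label targeted-group task, F1 is usually computed per label and then averaged in the same unweighted fashion.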
Underwater images frequently suffer from chromatic distortion, blurred details, and low contrast, posing significant challenges for enhancement. This paper introduces AquaTree, a novel underwater image enhancement (UIE) method that reformulates the task as a Markov Decision Process (MDP) through the integration of Monte Carlo Tree Search (MCTS) and deep reinforcement learning (DRL). The framework employs an action space of 25 enhancement operators, strategically grouped for basic attribute adjustment, color component balance, correction, and deblurring. Exploration within MCTS is guided by a dual-branch convolutional network, enabling intelligent sequential operator selection. Our core contributions include: (1) a multimodal state representation combining CIELab color histograms with deep perceptual features, (2) a dual-objective reward mechanism optimizing chromatic fidelity and perceptual consistency, and (3) an alternating training strategy that co-optimizes enhancement sequences and network parameters. We further propose two inference schemes: an MCTS-based approach that prioritizes accuracy at higher computational cost, and an efficient network policy that enables real-time processing with minimal quality loss. Comprehensive evaluations on the UIEB dataset, together with color correction and haze removal comparisons on the U45 dataset, demonstrate AquaTree's superiority, significantly outperforming nine state-of-the-art methods across five established underwater image quality metrics.
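Operator selection inside MCTS is commonly driven by an upper-confidence (UCT) score; the abstract does not specify AquaTree's exact tree policy, so the following is a generic sketch with made-up operator names, visit statistics, and rewards:

```python
import math

def uct_select(children, total_visits, c=1.4):
    """Pick the child operator maximizing the UCT score
    mean_reward + c * sqrt(ln(parent_visits) / child_visits);
    unvisited operators are expanded first."""
    best, best_score = None, -math.inf
    for name, (visits, total_reward) in children.items():
        if visits == 0:
            return name  # always try an unexplored operator first
        score = total_reward / visits + c * math.sqrt(math.log(total_visits) / visits)
        if score > best_score:
            best, best_score = name, score
    return best

# Hypothetical statistics for three enhancement operators at one tree node
# (visit counts and cumulative rewards are made up for illustration):
children = {
    "white_balance": (10, 7.0),  # well explored, decent mean reward
    "gamma_correct": (2, 1.9),   # barely explored, high mean reward
    "sharpen": (8, 2.0),         # well explored, poor mean reward
}
choice = uct_select(children, total_visits=20)
```

The exploration term favors the under-visited `gamma_correct` despite `white_balance` having more total reward, which is exactly the explore/exploit trade-off MCTS uses to build enhancement sequences.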
NJmat is a user-friendly, data-driven machine learning interface designed for materials design and analysis. The platform integrates advanced computational techniques, including natural language processing (NLP), large language models (LLMs), machine learning potentials (MLPs), and graph neural networks (GNNs), to facilitate materials discovery. It has been applied in diverse materials research areas, including perovskite surface design, catalyst discovery, battery materials screening, structural alloy design, and molecular informatics. By automating feature selection, predictive modeling, and result interpretation, NJmat accelerates the development of high-performance materials across energy storage, conversion, and structural applications. Additionally, NJmat serves as an educational tool, allowing students and researchers to apply machine learning techniques in materials science with minimal coding expertise. Through automated feature extraction, genetic algorithms, and interpretable machine learning models, NJmat simplifies the workflow for materials informatics, bridging the gap between AI and experimental materials research. The latest version (available at https://figshare.com/articles/software/NJmatML/24607893, accessed on 01 January 2025) adds NJmatNLP, a module leveraging language models such as MatBERT and Word2Vec-based models to support materials prediction tasks. By combining clustering and cosine similarity analysis with UMAP visualization, NJmat enables intuitive exploration of materials datasets. While NJmat primarily focuses on structure-property relationships and the discovery of novel chemistries, it can also assist in optimizing processing conditions when relevant parameters are included in the training data. By providing an accessible, integrated environment for machine learning-driven materials discovery, NJmat aligns with the objectives of the Materials Genome Initiative and promotes broader adoption of AI techniques in materials science.
The Underwater Acoustic (UWA) channel is bandwidth-constrained and experiences doubly selective fading, making it challenging to acquire perfect channel knowledge for Orthogonal Frequency Division Multiplexing (OFDM) communications using a finite number of pilots. Deep Learning (DL) approaches have been very successful in wireless OFDM communications, but whether they work underwater has remained an open question. For the first time, this paper compares two categories of DL-based UWA OFDM receivers: the Data-Driven (DD) method, which performs as an end-to-end black box, and the Model-Driven (MD) method, also known as the model-based data-driven method, which combines DL with expert OFDM receiver knowledge. An encoder-decoder framework with a Convolutional Neural Network (CNN) structure is employed to establish the DD receiver, while an unfolding-based Minimum Mean Square Error (MMSE) structure is adopted for the MD receiver. We analyze the characteristics of the receivers through Monte Carlo simulations under diverse communication conditions and propose a strategy for selecting the proper receiver for different communication scenarios. Field trials in a pool and at sea verify the feasibility and advantages of the DL receivers, which outperform conventional receivers in terms of bit error rate.
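The MD receiver unfolds an MMSE-style structure into network layers. As background, the closed-form linear MMSE estimate it derives from can be sketched as follows; the toy diagonal channel, coefficients, and noise level below are made up, and the actual receiver learns unfolded iterations rather than applying this formula directly:

```python
import numpy as np

def mmse_equalize(H, y, noise_var):
    """Linear MMSE estimate x_hat = (H^H H + sigma^2 I)^-1 H^H y for y = H x + n."""
    n = H.shape[1]
    G = np.conj(H.T) @ H + noise_var * np.eye(n)
    return np.linalg.solve(G, np.conj(H.T) @ y)

rng = np.random.default_rng(0)
# Toy frequency-domain channel over 8 subcarriers (diagonal H, invented gains)
# with QPSK symbols and light additive noise.
h = np.array([1.0 + 0.5j, 0.8 - 0.6j, 1.2 + 0.1j, 0.9 + 0.9j,
              1.1 - 0.4j, 0.7 + 0.7j, 1.3 - 0.2j, 0.6 - 0.8j])
H = np.diag(h)
x = (2 * rng.integers(0, 2, 8) - 1) + 1j * (2 * rng.integers(0, 2, 8) - 1)
noise_var = 0.01
noise = np.sqrt(noise_var / 2) * (rng.normal(size=8) + 1j * rng.normal(size=8))
y = H @ x + noise
x_hat = mmse_equalize(H, y, noise_var)
```

The regularizing `noise_var * I` term is what distinguishes MMSE from zero-forcing: it avoids amplifying noise on weak subcarriers, which matters on fading UWA channels.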
For control systems with unknown model parameters, this paper proposes a data-driven iterative learning method for fault estimation. First, input and output data are collected from the system under fault-free conditions. By applying orthogonal-triangular (QR) decomposition and singular value decomposition, a data-driven realization of the system's kernel representation is derived, and a residual generator is constructed from this representation. The actuator fault signal is then estimated online by analyzing the system's dynamic residual, and an iterative learning algorithm continuously optimizes the residual-based performance function to enhance estimation accuracy. The proposed method achieves actuator fault estimation without requiring knowledge of model parameters, eliminating the time-consuming system modeling process and allowing operators to focus on system optimization and decision-making. Compared with existing fault estimation methods, it demonstrates superior transient performance, steady-state performance, and real-time capability, while reducing the need for manual intervention and lowering operational complexity. Finally, experimental results on a mobile robot verify the effectiveness and advantages of the method.
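The kernel-representation idea can be illustrated on a toy system: stack fault-free I/O windows into a data matrix, take its left null space via SVD, and use the resulting parity vectors as a residual generator that stays near zero until a fault appears. This is a deliberately simplified sketch (short window, no QR step, a contrived system), not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(3)

# Fault-free I/O data from an "unknown" system (here secretly y_k = 2*u_k + 0.5*u_{k-1}).
u = rng.normal(size=200)
y = 2.0 * u
y[1:] += 0.5 * u[:-1]

# Stack windows [u_k, u_{k-1}, y_k]; directions with near-zero singular values
# span the left null space, i.e. the data-driven kernel representation.
Z = np.vstack([u[1:], u[:-1], y[1:]])            # 3 x 199 data matrix
U_svd, s, _ = np.linalg.svd(Z, full_matrices=False)
K = U_svd[:, s < 1e-8 * s[0]].T                  # parity (kernel) vectors

def residual(u_k, u_prev, y_k):
    """Residual generator: near zero when the I/O triple is consistent with the data."""
    return float(np.abs(K @ np.array([u_k, u_prev, y_k])).max())

r_ok = residual(1.0, 0.5, 2.0 * 1.0 + 0.5 * 0.5)            # consistent sample
r_fault = residual(1.0, 0.5, 2.0 * 1.0 + 0.5 * 0.5 + 1.0)   # offset faulty output
```

No model parameters were used: the parity vector is recovered purely from fault-free data, and any I/O triple violating the learned relation produces a visibly nonzero residual that the iterative learning loop can then exploit for fault estimation.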
For repetitive linear discrete-time singular systems whose pulse response sequence is unavailable, this paper explores a data-driven adaptive iterative learning control (DDAILC) strategy that interacts with pulse response iterative correction (PRIC). The mechanism formulates the correction performance index as a linear combination of the quadratic correction error of the pulse response and the quadratic tracking error. The resulting pulse response correction algorithm drives the correction error down monotonically, and the conditional relationship between the decline rate of the correction error and the correction ratio is discussed. A DDAILC algorithm is then designed by substituting the exact pulse response of gain-optimized iterative learning control (GOILC) with the approximation updated by the correction algorithm. Monotonic convergence of both the tracking error and the correction error is established. Finally, numerical simulation verifies the validity and effectiveness of the approach.
Robot interaction control with variable impedance parameters can adapt to task requirements during continuous interaction with dynamic environments. Iterative learning (IL) is effective for learning desired impedance parameters under unknown environments, and the Gaussian process (GP) is a nonparametric Bayesian approach that models complicated functions with provable confidence using limited data. In this paper, we propose an impedance IL method enhanced by a sparse online Gaussian process (SOGP) to speed up learning convergence and improve generalization. While impedance parameters are learned over multiple iterations, the SOGP used for variable impedance modeling is updated within each iteration by removing data points similar to those from previous iterations. The proposed IL-SOGP method is verified through high-fidelity simulations of a 7-degree-of-freedom collaborative robot under an admittance control framework. The results show that the proposed method accelerates iterative convergence and improves generalization compared with the classical IL-based impedance learning method.
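As background, the GP regression that SOGP sparsifies computes a standard posterior mean and variance from kernel evaluations; the online pruning of near-duplicate points is omitted in this minimal batch sketch, and the stiffness profile below is a made-up stand-in for learned impedance values:

```python
import numpy as np

def rbf(a, b, ell=0.2):
    """Squared-exponential kernel between 1-D input arrays a and b."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ell ** 2))

def gp_predict(x_train, y_train, x_test, noise=1e-4):
    """Standard GP posterior mean and variance (unit prior variance).
    An SOGP would additionally score and prune redundant points online."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf(x_test, x_train)
    mean = K_s @ np.linalg.solve(K, y_train)
    var = 1.0 - np.einsum("ij,ji->i", K_s, np.linalg.solve(K, K_s.T))
    return mean, var

# Toy target: desired "stiffness" as a function of normalized task phase.
x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2 * np.pi * x_train)
mean, var = gp_predict(x_train, y_train, np.array([0.25, 0.75]))
```

The posterior variance is what gives GP-based impedance models their "provable confidence": it shrinks near observed task phases and grows away from them, which the sparsification step exploits when deciding which points are redundant.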
This study establishes a novel, semi-automatic system that applies Industry 4.0 principles to distinguish acceptable from rejectable concrete cubes based on their failure modes, significantly contributing to the dependability of concrete quality evaluations. The study utilizes image processing and machine learning (ML) methods, namely object detection models such as YOLOv8 and Convolutional Neural Networks (CNNs), to evaluate images of concrete cubes. These models are trained and validated on an extensive database of annotated images from real-world and laboratory conditions. Preliminary results indicate good performance in classifying concrete cube failure modes. The proposed system accurately identifies cracks and determines the severity of damage, indicating its potential to minimize the human errors and discrepancies that can occur with current techniques for detecting the failure mode of concrete cubes. The developed system could significantly improve the reliability of concrete cube assessments, reduce resource wastage, and contribute to more sustainable construction practices. By minimizing material costs and errors, this innovation supports the construction industry's move towards sustainability.
Accurate retrieval of casting 3D models is crucial for process reuse. Current methods focus primarily on shape similarity and neglect process design features, which compromises reusability. In this study, a novel deep learning retrieval method for process reuse is proposed that integrates process design features into the retrieval of casting 3D models. The method leverages the contrastive language-image pretraining (CLIP) model to extract shape features from the three views and sectional views of a casting model, and combines them with process design features such as modulus, main wall thickness, symmetry, and length-to-height ratio to enhance process reusability. A database of 230 production casting models was established for validation. The results indicate that incorporating process design features improves retrieval accuracy by 6.09%, to 97.82%, and increases process similarity by 30.25%. Process reusability was further verified using the casting simulation software EasyCast: the process retrieved after integrating process design features produces the least shrinkage in the target model, demonstrating the method's superior ability for process reuse. The approach does not require a large dataset for training and optimization, making it highly applicable to casting process design and related manufacturing processes.
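Retrieval that blends shape embeddings with process design features can be reduced to a weighted similarity ranking. All entry names, feature values, and weights below are hypothetical; the paper's feature extraction and weighting are more elaborate:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query, database, w_shape=0.7, w_process=0.3):
    """Rank casting models by a weighted sum of shape-embedding similarity and
    process-feature similarity. The weights are illustrative, not the paper's."""
    scored = []
    for name, (shape_vec, proc_vec) in database.items():
        score = (w_shape * cosine(query[0], shape_vec)
                 + w_process * cosine(query[1], proc_vec))
        scored.append((score, name))
    return [name for _, name in sorted(scored, reverse=True)]

# Hypothetical entries: (shape embedding, [modulus, wall thickness, symmetry,
# length-to-height ratio]) -- all values invented for illustration.
db = {
    "bracket_A": ([0.9, 0.1, 0.2], [1.2, 8.0, 1.0, 0.5]),
    "housing_B": ([0.1, 0.9, 0.3], [2.5, 20.0, 0.0, 1.8]),
    "bracket_C": ([0.8, 0.2, 0.1], [1.1, 9.0, 1.0, 0.6]),
}
query = ([0.85, 0.15, 0.15], [1.15, 8.5, 1.0, 0.55])
ranking = retrieve(query, db)
```

Shape-only retrieval would already separate the brackets from the housing here; the process-feature term is what reorders near-identical shapes by how reusable their casting processes are.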
The optimization of reaction processes is crucial for the green, efficient, and sustainable development of the chemical industry. However, addressing the problems posed by multiple variables, nonlinearities, and uncertainties during optimization remains a formidable challenge. In this study, a strategy combining interpretable machine learning with metaheuristic optimization algorithms is employed to optimize the reaction process. First, experimental data from a biodiesel production process are collected to establish a database. These data are then used to construct a predictive model based on artificial neural network (ANN) models. Subsequently, interpretable machine learning techniques are applied for quantitative analysis and verification of the model. Finally, four metaheuristic optimization algorithms are coupled with the ANN model to achieve the desired optimization. The research results show that the methanol:palm fatty acid distillate (PFAD) molar ratio contributes the most to the reaction outcome, accounting for 41%. The ANN-simulated annealing (SA) hybrid method is most suitable for this optimization, and the optimal process parameters are a catalyst concentration of 3.00% (mass), a methanol:PFAD molar ratio of 8.67, and a reaction time of 30 min. This study provides deeper insights into reaction process optimization, which will facilitate future applications in various reaction optimization processes.
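Simulated annealing, the metaheuristic that performed best here when coupled with the ANN, accepts worse candidates with probability exp(-Δ/T) under a cooling schedule. The sketch below substitutes a toy quadratic stand-in for the ANN surrogate, with the optimum planted at the paper's reported parameters purely for illustration:

```python
import math
import random

def simulated_annealing(f, x0, bounds, t0=1.0, cooling=0.95, steps=500, seed=0):
    """Minimal SA minimizer: random bounded perturbations, always accept
    improvements, accept worse moves with probability exp(-delta/T),
    geometric cooling."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(steps):
        cand = [
            min(max(xi + rng.uniform(-0.1, 0.1) * (hi - lo), lo), hi)
            for xi, (lo, hi) in zip(x, bounds)
        ]
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# Toy stand-in objective over (catalyst %, methanol:PFAD ratio, time/100 min);
# in the paper the objective is the trained ANN surrogate, not this quadratic.
def neg_yield(p):
    cat, ratio, time = p
    return (cat - 3.0) ** 2 + 0.1 * (ratio - 8.67) ** 2 + (time - 0.3) ** 2

bounds = [(1.0, 5.0), (3.0, 12.0), (0.1, 1.0)]
best, fbest = simulated_annealing(neg_yield, [1.0, 3.0, 1.0], bounds)
```

Early on, the high temperature lets the search escape local basins; as T decays geometrically, the acceptance rule becomes effectively greedy, which is why SA pairs well with cheap surrogate models such as the ANN used in this study.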
Low-voltage direct current (DC) microgrids have recently emerged as a promising and viable alternative to traditional alternating current (AC) microgrids, offering numerous advantages. Consequently, researchers are exploring the potential of DC microgrids across various configurations. However, despite the sustainability and accuracy offered by DC microgrids, they pose various challenges when integrated into modern power distribution systems. Among these challenges, fault diagnosis holds significant importance. Rapid fault detection in DC microgrids is essential to maintain stability and ensure an uninterrupted power supply to critical loads. A primary challenge is the lack of standards and guidelines for the protection and safety of DC microgrids, including fault detection, location, and clearing procedures for both grid-connected and islanded modes. In response, this study presents a brief overview of various approaches for protecting DC microgrids.
In the context of intelligent manufacturing, the modern hot strip mill process (HSMP) exhibits characteristics such as product diversification, multi-specification batch production, and demand-oriented customization. These characteristics pose significant challenges to ensuring process stability and consistency of product performance. Exploring the relationship between product performance and the production process, and developing a comprehensive performance evaluation method adapted to the modern HSMP, have therefore become urgent issues. A comprehensive performance evaluation method for the HSMP that integrates multi-task learning and a stacked performance-related autoencoder is proposed to address problems such as incomplete performance indicator (PI) data, insufficient real-time acquisition, and coupling among multiple PIs. First, in accordance with existing Chinese standards, a comprehensive performance evaluation grading strategy for strip steel is designed, and a random forest model is established to predict and complete the PI data that cannot be obtained in real time. Second, a stacked performance-related autoencoder (SPAE) model is proposed to extract deep features closely related to product performance. Then, considering the correlation between PIs, a multi-task learning framework is introduced to output the subitem ratings and the comprehensive product performance rating of the strip steel online in real time, where each task represents a subitem of comprehensive performance. Finally, the effectiveness of the method is verified on a real HSMP dataset; the accuracy of the proposed method reaches 94.8%, which is superior to the comparative methods.
The growing demand for carbon neutrality has heightened the focus on CO2 hydrogenation as a viable strategy for transforming carbon dioxide into valuable chemicals and fuels. Advanced machine learning (ML) approaches integrate materials science with artificial intelligence, enabling scientists to identify hidden patterns in datasets, make informed decisions, and reduce the need for labor-intensive, repetitive experimentation. This review provides a comprehensive overview of ML applications in the thermocatalytic hydrogenation of CO2. Following an introduction to ML tools and workflows, the ML algorithms employed in CO2 hydrogenation are systematically categorized and reviewed. Next, the application of ML in catalyst discovery is discussed, highlighting its role in identifying optimal compositions and structures. ML-driven strategies for process optimization, particularly for enhancing CO2 conversion and product selectivity, are then examined, along with studies that model descriptors spanning catalyst properties and reaction conditions to predict catalytic performance. ML-based mechanistic studies are also reviewed for their role in elucidating reaction pathways, identifying key intermediates, and optimizing catalyst performance. Finally, key challenges and future perspectives in leveraging ML for CO2 hydrogenation research are presented.
Deep learning now underpins many state-of-the-art systems for biomedical image and signal processing, enabling automated lesion detection, physiological monitoring, and therapy planning with accuracy that rivals expert performance. This survey reviews the principal model families (convolutional, recurrent, generative, reinforcement, autoencoder, and transfer-learning approaches), emphasising how their architectural choices map to tasks such as segmentation, classification, reconstruction, and anomaly detection. A dedicated treatment of multimodal fusion networks shows how imaging features can be integrated with genomic profiles and clinical records to yield more robust, context-aware predictions. To support clinical adoption, we outline post-hoc explainability techniques (Grad-CAM, SHAP, LIME) and describe emerging intrinsically interpretable designs that expose decision logic to end users. Regulatory guidance from the U.S. FDA, the European Medicines Agency, and the EU AI Act is summarised, linking transparency and lifecycle-monitoring requirements to concrete development practices. Remaining challenges, such as data imbalance, computational cost, privacy constraints, and cross-domain generalization, are discussed alongside promising solutions such as federated learning, uncertainty quantification, and lightweight 3-D architectures. The article therefore offers researchers, clinicians, and policymakers a concise, practice-oriented roadmap for deploying trustworthy deep-learning systems in healthcare.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 42130719 and 42177173) and the Doctoral Direct Train Project of Chongqing Natural Science Foundation (Grant No. CSTB2023NSCQ-BSX0029).
Abstract: Underground engineering projects such as deep tunnel excavation often encounter rockburst disasters accompanied by numerous microseismic events. Rapid interpretation of microseismic signals is crucial for the timely identification of rockbursts. However, conventional processing encompasses multi-step workflows, including classification, denoising, picking, locating, and computational analysis, coupled with manual intervention, which collectively compromise the reliability of early warnings. To address these challenges, this study proposes the "microseismic stethoscope", a multi-task machine learning and deep learning model designed for the automated processing of massive volumes of microseismic signals. The model efficiently extracts three key parameters necessary for recognizing rockburst disasters: rupture location, microseismic energy, and moment magnitude. Specifically, it extracts raw waveform features through three dedicated sub-networks: a classifier for source zone classification and two regressors for microseismic energy and moment magnitude estimation. The model is markedly more efficient than traditional and semi-automated processing, reducing per-event processing time from 0.71 s and 0.49 s, respectively, to merely 0.036 s. It concurrently achieves 98% accuracy in source zone classification, with microseismic energy and moment magnitude estimation errors of 0.13 and 0.05, respectively. The model has been applied and validated in the Daxiagu Tunnel case in Sichuan, China. The application results indicate that the model matches traditional methods in the accuracy of source parameter determination and can therefore be used to identify potential geomechanical processes underlying rockburst disasters. By enhancing the reliability of microseismic signal processing, the proposed model presents a significant advancement in the identification of rockburst disasters.
Funding: financial support of the National Natural Science Foundation of China (No. 52371103), the Fundamental Research Funds for the Central Universities, China (No. 2242023K40028), the Open Research Fund of Jiangsu Key Laboratory for Advanced Metallic Materials, China (No. AMM2023B01), the Research Fund of Shihezi Key Laboratory of Aluminum-Based Advanced Materials, China (No. 2023PT02), and the Guangdong Province Science and Technology Major Project, China (No. 2021B0301030005).
Abstract: Oxide dispersion strengthened (ODS) alloys are extensively used owing to the high thermostability and creep strength contributed by uniformly dispersed fine oxide particles. However, these strengthening particles also deteriorate processability, so establishing accurate processing maps to guide thermomechanical processing and enhance formability is of great importance. In this study, we applied a particle swarm optimization-based back propagation artificial neural network model to predict the high-temperature flow behavior of 0.25 wt% Al2O3 particle-reinforced Cu alloys, and compared its accuracy with that of an Arrhenius-type constitutive model and a plain back propagation artificial neural network model. To train these models, we obtained the raw data by fabricating ODS Cu alloys using the internal oxidation and reduction method and conducting systematic hot compression tests between 400 and 800 °C at strain rates of 10^(-2)-10 s^(-1). Finally, processing maps for ODS Cu alloys were proposed by combining processing parameters, mechanical behavior, and microstructure characterization; the modeling results achieved a coefficient of determination higher than 99%.
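The idea of letting particle swarm optimization tune the weights of a small back-propagation-style network, as this abstract describes, can be sketched in a few lines of numpy. The synthetic data, the tiny 2-6-1 tanh network, and the textbook PSO hyperparameters below are illustrative assumptions, not the paper's model or data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for (temperature, strain-rate) -> flow-stress data
X = rng.uniform(0, 1, (80, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]

H = 6                                   # hidden neurons of a 2-H-1 network
DIM = 2 * H + H + H + 1                 # total number of weights + biases

def mse(p):
    """Mean squared error of the tiny tanh network encoded by vector p."""
    W1 = p[:2 * H].reshape(2, H)
    b1 = p[2 * H:3 * H]
    W2 = p[3 * H:4 * H]
    b2 = p[4 * H]
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(np.mean((pred - y) ** 2))

# Plain global-best PSO over the flattened network parameters
n_particles = 30
pos = rng.uniform(-1, 1, (n_particles, DIM))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
init_err = pbest_f.min()

for _ in range(200):
    r1, r2 = rng.uniform(size=(2, n_particles, 1))
    # Inertia + cognitive + social velocity update, then position update
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

final_err = mse(gbest)
```

Unlike gradient back-propagation, the swarm needs only function evaluations, which is why such hybrids are popular for fitting constitutive models with awkward loss surfaces.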
Funding: supported by the National Key Research and Development Program of China (2023YFB3307801), the National Natural Science Foundation of China (62394343, 62373155, 62073142), the Major Science and Technology Project of Xinjiang (No. 2022A01006-4), the Programme of Introducing Talents of Discipline to Universities (the 111 Project) under Grant B17017, the Fundamental Research Funds for the Central Universities, Science Foundation of China University of Petroleum, Beijing (No. 2462024YJRC011), and the Open Research Project of the State Key Laboratory of Industrial Control Technology, China (Grant No. ICT2024B70).
Abstract: The distillation process is an important chemical process, and data-driven modelling has the potential to reduce model complexity compared to mechanistic modelling, thus improving the efficiency of process optimization and monitoring studies. However, the distillation process is highly nonlinear and has multiple uncertainty perturbation intervals, which makes accurate data-driven modelling challenging. This paper proposes a systematic data-driven modelling framework to solve these problems. First, data-segment variance was introduced into the K-means algorithm to form K-means data interval (KMDI) clustering, which separates the data into perturbed and steady-state intervals for steady-state data extraction. Second, the maximal information coefficient (MIC) was employed to calculate the nonlinear correlation between variables and remove redundant features. Finally, extreme gradient boosting (XGBoost) was integrated as the base learner into adaptive boosting (AdaBoost), with an error threshold (ET) set to improve the weight-update strategy, yielding a new ensemble learning algorithm, XGBoost-AdaBoost-ET. The superiority of the proposed framework is verified by applying it to a real industrial propylene distillation process.
Funding: supported by the National Key Research and Development Program of China (2024YFE0213000), the Postdoctoral Innovative Talents Support Program (BX20240232), the Natural Science Foundation of China for Young Scholars (72304031), and the Fundamental Research Funds for the Central Universities (FRF-TP-22-024A1).
Abstract: Lithium-ion batteries (LIBs) are widely deployed, from grid-scale storage to electric vehicles. LIBs remain stationary for most of their service life, during which calendar aging degrades capacity. Understanding the mechanisms of LIB calendar aging is therefore crucial for extending battery lifespan. However, calendar aging is influenced by multiple factors, including battery material, battery state, and storage environment, and calendar aging experiments are time-consuming, costly, and lack standardized testing conditions. This study employs a data-driven approach to establish a cross-scale database linking materials, side-reaction mechanisms, and calendar aging of LIBs. MELODI (Mechanism-informed, Explainable, Learning-based Optimization for Degradation Identification) is proposed to identify calendar aging mechanisms and quantify the effects of multi-scale factors. Results reveal that cathode material loss drives up to 91.42% of calendar aging degradation in high-nickel (Ni) batteries, while solid electrolyte interphase growth dominates in lithium iron phosphate (LFP) and low-Ni batteries, contributing up to 82.43% of degradation in LFP batteries and 99.10% in low-Ni batteries, respectively. This study systematically quantifies calendar aging in commercial LIBs under varying materials, states of charge, and temperatures. The findings offer quantitative guidance for experimental design and battery use, with implications for emerging applications such as aerial robotics, vehicle-to-grid, and embodied intelligence systems.
Funding: supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2026R765), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Human Activity Recognition (HAR) is an active area of computer vision with great impact on healthcare, smart environments, and surveillance, as it can automatically detect human behavior. It plays a vital role in many applications, such as smart homes, healthcare, human-computer interaction, sports analysis, and especially intelligent surveillance. However, due to the diversity of human actions, various environmental influences, and a lack of data and resources, high recognition accuracy remains elusive. In this paper, we propose a robust and efficient HAR system by leveraging deep learning paradigms, including pre-trained models, CNN architectures, and their weighted-average fusion. A weighted average ensemble technique is employed to fuse three deep learning models: EfficientNet, ResNet50, and a custom CNN. The results indicate that a weighted average ensemble strategy is a promising way to build more effective HAR models for the detection and classification of human activities. Experiments on a benchmark dataset show that the proposed weighted ensemble approach outperforms existing approaches in accuracy and other key performance measures: the combined weighted-average ensemble of pre-trained and custom CNN models obtained an accuracy of 98%, compared to 97%, 96%, and 95% for the custom CNN, EfficientNet, and ResNet50 models, respectively.
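The weighted-average fusion step itself is simple and can be sketched in a few lines of numpy; the probability matrices and weights below are toy values, not the paper's model outputs:

```python
import numpy as np

def weighted_ensemble(prob_list, weights):
    """Weighted average of per-model class-probability matrices."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalise so fused rows still sum to 1
    stacked = np.stack(prob_list)         # shape: (n_models, n_samples, n_classes)
    return np.tensordot(w, stacked, axes=1)

# Three hypothetical models scoring two samples over three activity classes
p_cnn = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
p_eff = np.array([[0.6, 0.3, 0.1], [0.2, 0.6, 0.2]])
p_res = np.array([[0.5, 0.3, 0.2], [0.3, 0.5, 0.2]])

fused = weighted_ensemble([p_cnn, p_eff, p_res], weights=[0.4, 0.35, 0.25])
labels = fused.argmax(axis=1)             # fused class decisions
```

In practice the weights would be chosen on a validation set, e.g. proportional to each model's standalone accuracy.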
Abstract: Heterogeneous catalysis is a complex, multiscale phenomenon in which reactions occur at dynamically evolving surfaces. A longstanding goal is to probe these processes to distill design rules for novel catalytic materials, a capability that is essential to the transition toward a sustainable future [1-3].
Funding: supported by the research project LaTe4PoliticES (PID2022-138099OB-I00), funded by MCIN/AEI/10.13039/501100011033 and the European Fund for Regional Development (ERDF) - a way to make Europe. Tomás Bernal-Beltrán is supported by the University of Murcia through its predoctoral programme.
Abstract: The malicious dissemination of hate speech via compromised accounts, automated bot networks, and malware-driven social media campaigns has become a growing cybersecurity concern. Automatically detecting such content in Spanish is challenging due to linguistic complexity and the scarcity of annotated resources. In this paper, we compare two predominant AI-based approaches for the forensic detection of malicious hate speech: (1) fine-tuning encoder-only models trained in Spanish and (2) In-Context Learning techniques (Zero- and Few-Shot Learning) with large-scale language models. Our approach goes beyond binary classification, proposing a comprehensive, multidimensional evaluation that labels each text by: (1) type of speech, (2) recipient, (3) level of intensity (ordinal), and (4) targeted group (multi-label). Performance is evaluated on an annotated Spanish corpus using standard metrics such as precision, recall, and F1-score, together with stability-oriented metrics that assess the transition from zero-shot to few-shot prompting (Zero-to-Few Shot Retention and Zero-to-Few Shot Gain). The results indicate that fine-tuned encoder-only models (notably MarIA and BETO variants) consistently deliver the strongest and most reliable performance: in our experiments their macro F1-scores lie roughly in the range of 46%-66%, depending on the task. Zero-shot approaches are much less stable and typically yield substantially lower performance (observed F1-scores range approximately 0%-39%), often producing invalid outputs in practice. Few-shot prompting (e.g., Qwen3 8B, Mistral 7B) generally improves stability and recall relative to pure zero-shot, bringing F1-scores into a moderate range of approximately 20%-51%, but still falls short of fully fine-tuned models. These findings highlight the importance of supervised adaptation and the potential of both paradigms as components in AI-powered cybersecurity and malware forensics systems designed to identify and mitigate coordinated online hate campaigns.
Funding: supported by the Hubei Provincial Technology Innovation Special Project and the Natural Science Foundation of Hubei Province under Grants 2023BEB024 and 2024AFC066, respectively.
Abstract: Underwater images frequently suffer from chromatic distortion, blurred details, and low contrast, posing significant challenges for enhancement. This paper introduces AquaTree, a novel underwater image enhancement (UIE) method that reformulates the task as a Markov Decision Process (MDP) through the integration of Monte Carlo Tree Search (MCTS) and deep reinforcement learning (DRL). The framework employs an action space of 25 enhancement operators, strategically grouped for basic attribute adjustment, color component balance, correction, and deblurring. Exploration within MCTS is guided by a dual-branch convolutional network, enabling intelligent sequential operator selection. Our core contributions include: (1) a multimodal state representation combining CIELab color histograms with deep perceptual features, (2) a dual-objective reward mechanism optimizing chromatic fidelity and perceptual consistency, and (3) an alternating training strategy co-optimizing enhancement sequences and network parameters. We further propose two inference schemes: an MCTS-based approach prioritizing accuracy at higher computational cost, and an efficient network policy enabling real-time processing with minimal quality loss. Comprehensive evaluations on the UIEB dataset, along with color correction and haze removal comparisons on the U45 dataset, demonstrate AquaTree's superiority, significantly outperforming nine state-of-the-art methods across five established underwater image quality metrics.
Funding: supported by the Jiangsu Provincial Science and Technology Project Basic Research Program (Natural Science Foundation of Jiangsu Province) (No. BK20211283).
Abstract: NJmat is a user-friendly, data-driven machine learning interface designed for materials design and analysis. The platform integrates advanced computational techniques, including natural language processing (NLP), large language models (LLMs), machine learning potentials (MLPs), and graph neural networks (GNNs), to facilitate materials discovery. It has been applied in diverse materials research areas, including perovskite surface design, catalyst discovery, battery materials screening, structural alloy design, and molecular informatics. By automating feature selection, predictive modeling, and result interpretation, NJmat accelerates the development of high-performance materials across energy storage, conversion, and structural applications. Additionally, NJmat serves as an educational tool, allowing students and researchers to apply machine learning techniques in materials science with minimal coding expertise. Through automated feature extraction, genetic algorithms, and interpretable machine learning models, NJmat simplifies the workflow for materials informatics, bridging the gap between AI and experimental materials research. The latest version (available at https://figshare.com/articles/software/NJmatML/24607893 (accessed on 01 January 2025)) adds NJmatNLP, a module leveraging language models such as MatBERT and Word2Vec-based models to support materials prediction tasks. By combining clustering and cosine similarity analysis with UMAP visualization, NJmat enables intuitive exploration of materials datasets. While NJmat primarily focuses on structure-property relationships and the discovery of novel chemistries, it can also assist in optimizing processing conditions when relevant parameters are included in the training data. By providing an accessible, integrated environment for machine learning-driven materials discovery, NJmat aligns with the objectives of the Materials Genome Initiative and promotes broader adoption of AI techniques in materials science.
Funding: funded in part by the National Natural Science Foundation of China under Grants 62401167 and 62192712, in part by the Key Laboratory of Marine Environmental Survey Technology and Application, Ministry of Natural Resources, P.R. China under Grant MESTA-2023-B001, and in part by the Stable Supporting Fund of the National Key Laboratory of Underwater Acoustic Technology under Grant JCKYS2022604SSJS007.
Abstract: The Underwater Acoustic (UWA) channel is bandwidth-constrained and experiences doubly selective fading, making it challenging to acquire perfect channel knowledge for Orthogonal Frequency Division Multiplexing (OFDM) communications from a finite number of pilots. Deep Learning (DL) approaches have been very successful in wireless OFDM communications, but whether they work underwater has remained an open question. For the first time, this paper compares two categories of DL-based UWA OFDM receivers: the Data-Driven (DD) method, which operates as an end-to-end black box, and the Model-Driven (MD) method, also known as the model-based data-driven method, which combines DL with expert OFDM receiver knowledge. An encoder-decoder framework with a Convolutional Neural Network (CNN) structure is employed for the DD receiver, while an unfolding-based Minimum Mean Square Error (MMSE) structure is adopted for the MD receiver. We analyze the characteristics of the different receivers through Monte Carlo simulations under diverse communication conditions and propose a strategy for selecting a suitable receiver in different communication scenarios. Field trials in a pool and at sea verify the feasibility and advantages of the DL receivers, which outperform conventional receivers in terms of bit error rate.
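The classical per-subcarrier MMSE equalisation that the unfolded MD receiver builds on can be sketched as follows; the channel statistics, SNR, and QPSK mapping are illustrative assumptions, not the paper's simulation setup:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64                                    # number of OFDM subcarriers
snr_db = 20
sigma2 = 10 ** (-snr_db / 10)             # noise variance (unit signal power)

# QPSK symbols on each subcarrier
bits = rng.integers(0, 2, (N, 2))
X = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Random frequency-domain channel and received signal Y = H*X + noise
H = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
Y = H * X + np.sqrt(sigma2 / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Per-subcarrier MMSE equaliser; an unfolded MD receiver learns weights
# playing the role of these regularised inverses
X_mmse = np.conj(H) * Y / (np.abs(H) ** 2 + sigma2)

# Hard QPSK decisions and resulting bit error rate
bits_hat = np.stack([(X_mmse.real > 0), (X_mmse.imag > 0)], axis=1).astype(int)
ber = float(np.mean(bits_hat != bits))
```

Model-driven unfolding keeps this interpretable structure but replaces fixed quantities such as `sigma2` with trainable parameters learned from data.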
Funding: supported by the Shandong Provincial Taishan Scholar Program (Grant No. tsqn202312133), the Shandong Provincial Natural Science Foundation (Grant Nos. ZR2022YQ61 and ZR2023ZD32), and the National Natural Science Foundation of China (Grant Nos. 61772551 and 62111530052).
Abstract: For control systems with unknown model parameters, this paper proposes a data-driven iterative learning method for fault estimation. First, input and output data are collected from the system under fault-free conditions. By applying orthogonal triangular (QR) decomposition and singular value decomposition, a data-driven realization of the system's kernel representation is derived, and a residual generator is constructed from this representation. The actuator fault signal is then estimated online by analyzing the system's dynamic residual, and an iterative learning algorithm is introduced to continuously optimize the residual-based performance function, thereby enhancing estimation accuracy. The proposed method achieves actuator fault estimation without knowledge of the model parameters, eliminating the time-consuming system modeling process and allowing operators to focus on system optimization and decision-making. Compared with existing fault estimation methods, it demonstrates superior transient performance, steady-state performance, and real-time capability, while reducing the need for manual intervention and lowering operational complexity. Finally, experimental results on a mobile robot verify the effectiveness and advantages of the method.
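The kernel-representation idea, deriving a residual generator from the left null space of fault-free input-output data, can be sketched on a toy first-order plant. The plant, window length, and bias fault below are illustrative assumptions and not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(u, fault=0.0):
    """First-order plant y[k] = 0.8*y[k-1] + u[k] + fault (actuator bias)."""
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = 0.8 * y[k - 1] + u[k] + fault
    return y

def hankel(x, L):
    """Sliding windows of length L, stacked as columns."""
    return np.stack([x[i:i + L] for i in range(len(x) - L + 1)], axis=1)

L = 4
u0 = rng.normal(size=400)
D = np.vstack([hankel(u0, L), hankel(simulate(u0), L)])   # stacked I/O windows

# Left null space of the fault-free data matrix = data-driven kernel representation
U, s, _ = np.linalg.svd(D)
rank = int(np.sum(s > 1e-8 * s[0]))
N = U[:, rank:].T              # these rows annihilate every fault-free window

def residual(u, y):
    """Average residual norm over all windows of a test trajectory."""
    z = np.vstack([hankel(u, L), hankel(y, L)])
    return float(np.linalg.norm(N @ z, axis=0).mean())

u1 = rng.normal(size=200)
r_healthy = residual(u1, simulate(u1))             # essentially zero
r_faulty = residual(u1, simulate(u1, fault=0.5))   # clearly nonzero
```

The residual stays at numerical zero for fault-free data and jumps once the actuator bias appears, which is the signal the iterative learning loop then refines into a fault estimate.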
Funding: supported by the National Natural Science Foundation of China (619733380).
Abstract: For a class of repetitive linear discrete-time singular systems whose pulse response sequence is unavailable, this paper explores a data-driven adaptive iterative learning control (DDAILC) strategy that interacts with pulse response iterative correction (PRIC). The mechanism formulates the correction performance index as a linear combination of the quadratic correction error of the pulse response and the quadratic tracking error. A correction algorithm for the pulse response is derived, under which the correction error decreases monotonically; the conditional relationship between the decline rate of the correction error and the correction ratio is also discussed. A DDAILC algorithm is then designed by substituting the exact pulse response of the gain-optimized iterative learning control (GOILC) with the approximation updated by the correction algorithm. Monotonic convergence of both the tracking error and the correction error is established. Finally, numerical simulation verifies the validity and effectiveness of the approach.
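The basic iterative-learning loop underlying such schemes can be illustrated with a minimal P-type update on a known pulse-response model. The paper's actual contribution, correcting an unavailable pulse response online, is omitted here; the plant, learning gain, and reference trajectory are illustrative assumptions:

```python
import numpy as np

T = 30
g = 0.3 ** np.arange(T)                     # assumed decaying pulse response
# Lower-triangular convolution (Markov-parameter) matrix: y = G @ u
G = np.array([[g[i - j] if i >= j else 0.0 for j in range(T)]
              for i in range(T)])

y_ref = np.sin(np.linspace(0, np.pi, T))    # desired output trajectory
u = np.zeros(T)                             # initial control input
errs = []
for _ in range(30):                         # P-type ILC: u <- u + gamma * e
    e = y_ref - G @ u
    errs.append(float(np.linalg.norm(e)))
    u = u + 0.5 * e
```

With this gain the iteration map on the error is a contraction, so the tracking error norm shrinks monotonically across trials, which is the qualitative behaviour the abstract's convergence results formalise for the data-driven setting.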
Funding: supported in part by the National Research Foundation of Korea (NRF) Grant funded by the Korea Government (MSIT) (RS-2025-00555064). Recommended by Associate Editor Zengguang Hou.
Abstract: Robot interaction control with variable impedance parameters can conform to task requirements during continuous interaction with dynamic environments. Iterative learning (IL) is effective for learning desired impedance parameters for robots in unknown environments, and the Gaussian process (GP) is a nonparametric Bayesian approach that models complicated functions with provable confidence using limited data. In this paper, we propose an impedance IL method enhanced by a sparse online Gaussian process (SOGP) to speed up learning convergence and improve generalization. While impedance parameters are learned over multiple iterations, the SOGP used for variable impedance modeling is updated within each iteration by removing data points similar to those from previous iterations. The proposed IL-SOGP method is verified by high-fidelity simulations of a collaborative robot with 7 degrees of freedom under an admittance control framework. The results show that the proposed method accelerates iterative convergence and improves generalization compared to the classical IL-based impedance learning method.
Abstract: This study seeks to establish a novel, semi-automatic system that applies Industry 4.0 principles to distinguish acceptable from rejectable concrete cubes based on their failure modes, significantly contributing to the dependability of concrete quality evaluations. The study utilizes image processing and machine learning (ML) methods, namely object detection models such as YOLOv8 and Convolutional Neural Networks (CNNs), to evaluate images of concrete cubes. These models are trained and validated on an extensive database of annotated images from real-world and laboratory conditions. Preliminary results indicate good performance in the classification of concrete cube failure modes. The proposed system accurately identifies cracks and determines the severity of structural damage, indicating its potential to minimize the human errors and discrepancies that can occur with current techniques for detecting the failure mode of concrete cubes. The developed system could significantly improve the reliability of concrete cube assessments, reduce resource wastage, and contribute to more sustainable construction practices. By minimizing material costs and errors, this innovation supports the construction industry's move towards sustainability.
Funding: supported by the National Natural Science Foundation of China (Nos. 52074246, 52275390, and 52375394), the National Defense Basic Scientific Research Program of China (No. JCKY2020408B002), and the Key R&D Program of Shanxi Province (No. 202102050201011).
Abstract: Accurate retrieval of casting 3D models is crucial for process reuse. Current methods primarily focus on shape similarity and neglect process design features, which compromises reusability. In this study, a novel deep learning retrieval method for process reuse was proposed that integrates process design features into the retrieval of casting 3D models. The method leverages the contrastive language-image pretraining (CLIP) model to extract shape features from the three views and sectional views of the casting model, and combines them with process design features such as modulus, main wall thickness, symmetry, and length-to-height ratio to enhance process reusability. A database of 230 production casting models was established for validation. Results indicate that incorporating process design features improves model accuracy by 6.09%, reaching 97.82%, and increases process similarity by 30.25%. The reusability of the retrieved process was further verified using the casting simulation software EasyCast: the process retrieved after integrating process design features produces the least shrinkage in the target model, demonstrating the method's superior ability for process reuse. This approach does not require a large dataset for training and optimization, making it highly applicable to casting process design and related manufacturing processes.
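The retrieval step, fusing shape embeddings with process design features under one similarity score, can be sketched as a weighted cosine ranking. The embedding dimensions, the fixed weighting, and the toy database below are illustrative assumptions, not the paper's feature pipeline:

```python
import numpy as np

def retrieve(query_shape, query_proc, shapes, procs, alpha=0.7):
    """Rank database models by a weighted cosine similarity of shape
    embeddings (e.g. CLIP view features) and process-design feature vectors."""
    def cos(a, B):
        return (B @ a) / (np.linalg.norm(B, axis=1) * np.linalg.norm(a) + 1e-12)
    score = alpha * cos(query_shape, shapes) + (1 - alpha) * cos(query_proc, procs)
    return np.argsort(-score)             # best match first

rng = np.random.default_rng(4)
shapes = rng.normal(size=(5, 8))   # toy shape embeddings for 5 casting models
procs = rng.normal(size=(5, 4))    # toy process features (modulus, wall thickness, ...)
order = retrieve(shapes[2], procs[2], shapes, procs)
```

Querying with the features of database entry 2 returns that entry first, and `alpha` controls how much shape similarity outweighs process-design similarity.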
Funding: supported by the National Natural Science Foundation of China (22408227, 22238005) and the Postdoctoral Research Foundation of China (GZC20231576).
Abstract: The optimization of reaction processes is crucial for the green, efficient, and sustainable development of the chemical industry. However, handling the multiple variables, nonlinearities, and uncertainties involved in optimization remains a formidable challenge. In this study, a strategy combining interpretable machine learning with metaheuristic optimization algorithms is employed to optimize the reaction process. First, experimental data from a biodiesel production process are collected to establish a database. These data are then used to construct a predictive artificial neural network (ANN) model. Subsequently, interpretable machine learning techniques are applied for quantitative analysis and verification of the model. Finally, four metaheuristic optimization algorithms are coupled with the ANN model to achieve the desired optimization. The results show that the methanol:palm fatty acid distillate (PFAD) molar ratio contributes the most to the reaction outcome, accounting for 41%. The ANN-simulated annealing (SA) hybrid method is the most suitable for this optimization, and the optimal process parameters are a catalyst concentration of 3.00% (mass), a methanol:PFAD molar ratio of 8.67, and a reaction time of 30 min. This study provides deeper insights into reaction process optimization, which will facilitate future applications to various reaction optimization processes.
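The ANN-SA coupling can be illustrated by running simulated annealing over a stand-in surrogate. Here a simple quadratic replaces the trained ANN, and the bounds, cooling schedule, and step sizes are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(5)

def surrogate(x):
    """Stand-in for the trained ANN: a scaled quadratic loss over
    (catalyst conc., methanol:PFAD ratio, time) with its minimum at
    roughly the optimum reported in the abstract (3.0, 8.7, 30)."""
    target = np.array([3.0, 8.7, 30.0])
    scale = np.array([1.0, 3.0, 10.0])
    return float(np.sum(((x - target) / scale) ** 2))

lo = np.array([1.0, 3.0, 10.0])           # assumed variable bounds
hi = np.array([5.0, 15.0, 60.0])

x = rng.uniform(lo, hi)                   # random initial operating point
fx = surrogate(x)
T = 1.0
for _ in range(3000):                     # simulated annealing loop
    cand = np.clip(x + rng.normal(0, 0.1, 3) * (hi - lo), lo, hi)
    fc = surrogate(cand)
    # Accept improvements always; accept worse moves with Boltzmann probability
    if fc < fx or rng.uniform() < np.exp(-(fc - fx) / T):
        x, fx = cand, fc
    T *= 0.998                            # geometric cooling
```

In the actual workflow the `surrogate` call would be a forward pass through the trained ANN, so each candidate operating point is scored without running a new experiment.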
Abstract: Low-voltage direct current (DC) microgrids have recently emerged as a promising and viable alternative to traditional alternating current (AC) microgrids, offering numerous advantages. Consequently, researchers are exploring the potential of DC microgrids across various configurations. However, despite the sustainability and accuracy offered by DC microgrids, they pose various challenges when integrated into modern power distribution systems. Among these challenges, fault diagnosis holds particular importance: rapid fault detection in DC microgrids is essential to maintain stability and ensure an uninterrupted power supply to critical loads. A primary challenge is the lack of standards and guidelines for the protection and safety of DC microgrids, including fault detection, location, and clearing procedures for both grid-connected and islanded modes. In response, this study presents a brief overview of various approaches for protecting DC microgrids.
Funding: supported by the National Natural Science Foundation of China (NSFC) under Grants Nos. U21A20483, 62373040, and 62273031.
Abstract: In the context of intelligent manufacturing, the modern hot strip mill process (HSMP) is characterized by diversified products, multi-specification batch production, and demand-oriented customization. These characteristics pose significant challenges to ensuring process stability and consistency of product performance. Exploring the relationship between product performance and the production process, and developing a comprehensive performance evaluation method adapted to the modern HSMP, have therefore become urgent issues. A comprehensive performance evaluation method for the HSMP, integrating multi-task learning and a stacked performance-related autoencoder, is proposed to address problems such as incomplete performance indicator (PI) data, insufficient real-time acquisition, and coupling among multiple PIs. First, in accordance with existing Chinese standards, a comprehensive performance evaluation grading strategy for strip steel is designed, and a random forest model is established to predict and complete the PI data that cannot be obtained in real time. Second, a stacked performance-related autoencoder (SPAE) model is proposed to extract deep features closely related to product performance. Then, considering the correlation between PIs, a multi-task learning framework is introduced to output subitem ratings and the comprehensive product performance rating of the strip steel online in real time, where each task represents one subitem of the comprehensive performance. Finally, the effectiveness of the method is verified on a real HSMP dataset; the accuracy of the proposed method reaches 94.8%, which is superior to the other comparative methods.
Abstract: The growing demand for carbon neutrality has heightened the focus on CO2 hydrogenation as a viable strategy for transforming carbon dioxide into valuable chemicals and fuels. Advanced machine learning (ML) approaches integrate materials science with artificial intelligence, enabling scientists to identify hidden patterns in datasets, make informed decisions, and reduce the need for labor-intensive, repetitive experimentation. This review provides a comprehensive overview of ML applications in the thermocatalytic hydrogenation of CO2. Following an introduction to ML tools and workflows, the various ML algorithms employed in CO2 hydrogenation are systematically categorized and reviewed. Next, the application of ML in catalyst discovery is discussed, highlighting its role in identifying optimal compositions and structures. Then, ML-driven strategies for process optimization, particularly for enhancing CO2 conversion and product selectivity, are examined. Studies modeling descriptors, spanning catalyst properties and reaction conditions, to predict catalytic performance are analyzed. Subsequently, ML-based mechanistic studies are reviewed to elucidate reaction pathways, identify key intermediates, and optimize catalyst performance. Finally, key challenges and future perspectives in leveraging ML for advancing CO2 hydrogenation research are presented.
Funding: supported by the Science Committee of the Ministry of Higher Education and Science of the Republic of Kazakhstan within the framework of grant AP23489899, "Applying Deep Learning and Neuroimaging Methods for Brain Stroke Diagnosis".
Abstract: Deep learning now underpins many state-of-the-art systems for biomedical image and signal processing, enabling automated lesion detection, physiological monitoring, and therapy planning with accuracy that rivals expert performance. This survey reviews the principal model families (convolutional, recurrent, generative, reinforcement, autoencoder, and transfer-learning approaches), emphasising how their architectural choices map to tasks such as segmentation, classification, reconstruction, and anomaly detection. A dedicated treatment of multimodal fusion networks shows how imaging features can be integrated with genomic profiles and clinical records to yield more robust, context-aware predictions. To support clinical adoption, we outline post-hoc explainability techniques (Grad-CAM, SHAP, LIME) and describe emerging intrinsically interpretable designs that expose decision logic to end users. Regulatory guidance from the U.S. FDA, the European Medicines Agency, and the EU AI Act is summarised, linking transparency and lifecycle-monitoring requirements to concrete development practices. Remaining challenges, such as data imbalance, computational cost, privacy constraints, and cross-domain generalization, are discussed alongside promising solutions such as federated learning, uncertainty quantification, and lightweight 3-D architectures. The article therefore offers researchers, clinicians, and policymakers a concise, practice-oriented roadmap for deploying trustworthy deep-learning systems in healthcare.