With the availability of high-performance computing technology and the development of advanced numerical simulation methods, Computational Fluid Dynamics (CFD) is becoming increasingly practical and efficient in engineering. As a representative high-precision algorithm, the high-order Discontinuous Galerkin Method (DGM) has attracted widespread attention in the CFD research community and has undergone strong development. However, when DGM is extended to high-speed aerodynamic flow field calculations, non-physical Gibbs oscillations near shock waves often significantly degrade numerical accuracy and can even cause the calculation to fail. Data-driven approaches based on machine learning can learn the characteristics of Gibbs noise, which motivates their use in high-speed DG applications. Achieving this normally requires generating labeled data to train the machine learning models. This paper proposes a new method for denoising the Gibbs phenomenon using a zero-shot learning strategy, eliminating the need to acquire large amounts of CFD data. The model adopts a graph convolutional network combined with a graph attention mechanism to learn a denoising paradigm from synthetic Gibbs noise data and generalize to DGM numerical simulation data. Numerical results show that the proposed Gibbs denoising model can suppress the numerical oscillations near shock waves in the high-order DGM. Our work automates the extension of DGM to high-speed aerodynamic flow field calculations with higher generalization and lower cost.
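The synthetic Gibbs noise that such a model trains on can be illustrated with a minimal sketch: the truncated Fourier series of a step function exhibits the characteristic overshoot near the discontinuity. The function names, term counts, and clean target below are illustrative assumptions, not the paper's actual data pipeline.

```python
import math

def square_wave_partial_sum(x, n_terms):
    """Fourier partial sum of a unit square wave; exhibits Gibbs overshoot."""
    return (4.0 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms)
    )

def synthesize_gibbs_sample(n_terms=50, n_points=2000):
    """Return (x, clean, noisy) on (0, pi), where the clean target is 1.0."""
    xs = [math.pi * i / n_points for i in range(1, n_points)]
    clean = [1.0] * len(xs)
    noisy = [square_wave_partial_sum(x, n_terms) for x in xs]
    return xs, clean, noisy

xs, clean, noisy = synthesize_gibbs_sample()
overshoot = max(noisy) - 1.0   # Gibbs overshoot, about 18% of the half-amplitude
```

Pairs of `(noisy, clean)` signals like this give a denoiser supervision without any CFD runs, which is the essence of the zero-shot strategy described above.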
Zero-shot learning enables the recognition of new class samples by transferring models learned from semantic features and existing sample features to classes that have never been seen before. Consistency between different types of features and domain shift are two of the critical issues in zero-shot learning. To address both, this paper proposes a new model structure. The traditional approach maps semantic features and visual features into the same feature space; building on this, the proposed model uses a dual discriminator approach, which further enhances the consistency between semantic and visual features. At the same time, this approach aligns unseen-class semantic features with training set samples, providing partial information about the unseen classes. In addition, a new feature fusion method is proposed. This method is equivalent to adding a perturbation to the seen-class features, which reduces the degree to which the model's classification results are biased toward the seen classes. The feature fusion method also provides partial information about the unseen classes, improving classification accuracy in generalized zero-shot learning and reducing domain bias. The proposed method is validated and compared with other methods on four datasets, and the experimental results show that it achieves promising results.
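A perturbation-style fusion of the kind described can be sketched as a convex combination of a seen-class visual feature with a semantic embedding. The function name and mixing weight are illustrative assumptions, not the paper's exact formulation:

```python
def fuse_features(visual, semantic, alpha=0.2):
    """Convex mix: perturbs a seen-class feature toward a semantic vector."""
    assert len(visual) == len(semantic)
    return [(1.0 - alpha) * v + alpha * s for v, s in zip(visual, semantic)]

fused = fuse_features([1.0, 1.0], [0.0, 0.5])
# fused is approximately [0.8, 0.9]
```

Because the fused vector carries a fraction of unseen-class semantic information, a classifier trained on it is less biased toward the seen classes, matching the intuition in the abstract.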
Dear Editor, This letter provides a simple framework for generalized zero-shot learning for fault diagnosis. For industrial process monitoring, supervised learning and zero-shot learning (ZSL) can only deal with seen and unseen faults, respectively. However, in the online monitoring stage of an actual industrial process, both seen and unseen faults may occur, which makes purely supervised learning and zero-shot learning impractical for industrial process monitoring.
Predicting the behavior of renewable energy systems requires models capable of generating accurate forecasts from limited historical data, a challenge that becomes especially pronounced when commissioning new facilities where operational records are scarce. This review aims to synthesize recent progress in data-efficient deep learning approaches for addressing such "cold-start" forecasting problems. It primarily covers three interrelated domains (solar photovoltaic (PV), wind power, and electrical load forecasting) where data scarcity and operational variability are most critical, while also including representative studies on hydropower and carbon emission prediction to provide a broader systems perspective. To this end, we examined trends from over 150 predominantly peer-reviewed studies published between 2019 and mid-2025, highlighting advances in zero-shot and few-shot meta-learning frameworks that enable rapid model adaptation with minimal labeled data. Moreover, transfer learning approaches combined with spatiotemporal graph neural networks have been employed to transfer knowledge from existing energy assets to new, data-sparse environments, effectively capturing hidden dependencies among geographic features, meteorological dynamics, and grid structures. Synthetic data generation has further proven valuable for expanding training samples and mitigating overfitting in cold-start scenarios. In addition, large language models and explainable artificial intelligence (XAI), notably conversational XAI systems, have been used to interpret and communicate complex model behaviors in accessible terms, fostering operator trust from the earliest deployment stages. By consolidating methodological advances, unresolved challenges, and open-source resources, this review provides a coherent overview of deep learning strategies that can shorten the data-sparse ramp-up period of new energy infrastructures and accelerate the transition toward resilient, low-carbon electricity grids.
In recent years, multi-label zero-shot learning (ML-ZSL) has garnered increasing attention because of its wide range of potential applications, such as image annotation, text classification, and bioinformatics. The central challenge in ML-ZSL lies in predicting multiple labels for unseen classes without requiring any labeled training data, which contrasts with conventional supervised learning paradigms. However, existing methods face several significant challenges, including the substantial semantic gap between different modalities, which impedes effective knowledge transfer, and the intricate relationships among multiple labels, which are difficult to model in a meaningful and accurate manner. To overcome these challenges, we propose a graph-augmented multimodal chain-of-thought (GMCoT) reasoning approach. The proposed method combines the strengths of multimodal large language models with graph-based structures, significantly enhancing the reasoning process involved in multi-label prediction. First, a novel multimodal chain-of-thought reasoning framework is presented, which imitates human-like step-by-step reasoning to produce multi-label predictions. Second, a technique is presented for integrating label graphs into the reasoning process, enabling the capture of complex semantic relationships among labels and thereby improving the accuracy and consistency of multi-label generation. Comprehensive experiments on benchmark datasets demonstrate that the proposed GMCoT approach outperforms state-of-the-art methods in ML-ZSL.
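The label graphs that GMCoT integrates must come from somewhere; a common construction, sketched here as an assumption rather than the paper's exact method, is a co-occurrence graph over training annotations:

```python
from collections import defaultdict
from itertools import combinations

def build_label_graph(samples):
    """Edge weight = number of samples in which the two labels co-occur."""
    graph = defaultdict(int)
    for labels in samples:
        for a, b in combinations(sorted(labels), 2):
            graph[(a, b)] += 1
    return dict(graph)

g = build_label_graph([{"person", "dog"}, {"person", "dog", "car"}])
# g[("dog", "person")] == 2: "dog" and "person" co-occur in both samples
```

Injecting such edge weights into the reasoning prompt lets the model exploit relationships like "dog usually implies person" when generating multi-label predictions.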
Zero-shot learning (ZSL) is an important and rapidly growing area of machine learning that aims to recognize new classes without prior training data. Despite its significance, ZSL has faced challenges with overfitting in embedding-based methods and limitations in traditional one-directional attention (ODA) based approaches. To bridge these gaps, this paper proposes the use of bi-directional attention (BDA) to integrate insights from both embedding- and attention-based approaches. The proposed BDA system consists of a bi-directional attention network (BDAN) and a synthesized visual embedding network (SVEN) that facilitate visual-semantic interaction for ZSL classification. More specifically, the BDAN employs region self-attention (RSA), semantic synthesis attention (SSA), and visual synthesis attention (VSA) to overcome the overfitting issue in embedding methods and enhance transferability, to associate visual features with semantic property information, and to learn locally improved visual features. Extensive testing on the CUB, SUN, and AWA2 datasets confirms the superiority of the proposed method over traditional approaches.
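The core mechanic of bi-directional attention is running cross-attention in both directions between visual regions and semantic attributes. A library-free sketch under that assumption (the vectors, dimensions, and function names are illustrative, not the BDAN architecture):

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def cross_attention(queries, keys, values):
    """Scaled dot-product attention: each query attends over keys/values."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        outputs.append([
            sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))
        ])
    return outputs

def bidirectional_attention(visual_regions, semantic_attrs):
    """Visual->semantic and semantic->visual attention, BDA-style."""
    v2s = cross_attention(visual_regions, semantic_attrs, semantic_attrs)
    s2v = cross_attention(semantic_attrs, visual_regions, visual_regions)
    return v2s, s2v

v2s, s2v = bidirectional_attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]])
```

The forward pass grounds visual regions in attribute space while the reverse pass grounds attributes in region space, which is the interaction the one-directional baselines lack.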
The uplift resistance of the soil overlying shield tunnels significantly impacts their anti-floating stability. However, research on the uplift resistance of special-shaped shield tunnels is limited. This study combines numerical simulation with machine learning techniques to explore this issue. It presents a summary of special-shaped tunnel geometries and introduces a shape coefficient. Using the finite element software Plaxis3D, the study simulates six key parameters that affect uplift resistance across different conditions: shape coefficient, burial depth ratio, the tunnel's longest horizontal length, internal friction angle, cohesion, and soil submerged bulk density. Employing XGBoost and ANN methods, the feature importance of each parameter was analyzed based on the numerical simulation results. The findings demonstrate that a tunnel shape more closely resembling a circle leads to reduced uplift resistance in the overlying soil, whereas the other parameters have the opposite effect. Furthermore, the study reveals a diminishing trend in feature importance, in order, for burial depth ratio, internal friction angle, tunnel longest horizontal length, cohesion, soil submerged bulk density, and shape coefficient.
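The study ranks features via XGBoost/ANN internals; as a library-free illustration of the same idea, permutation importance shuffles one input column and measures how much the error grows. The toy model and data below are assumptions for demonstration, not the paper's simulation results:

```python
import random

def permutation_importance(model, X, y, rng):
    """Increase in MSE when each feature column is shuffled independently."""
    def mse(preds):
        return sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)

    baseline = mse([model(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        rng.shuffle(col)
        X_perm = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
        importances.append(mse([model(row) for row in X_perm]) - baseline)
    return importances

rng = random.Random(0)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [3.0 * row[0] for row in X]            # only feature 0 drives the target
model = lambda row: 3.0 * row[0]
imp = permutation_importance(model, X, y, rng)
# imp[0] is large; imp[1] is 0 because feature 1 is never used
```

A parameter whose shuffling barely changes the error (here, feature 1) is unimportant, mirroring the ranking the abstract reports for the six geotechnical parameters.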
Countries around the world have been making efforts to reduce pollutant emissions. However, the response of global black carbon (BC) aging to emission changes remains unclear. Using the Community Atmosphere Model version 6 with a machine-learning-integrated four-mode version of the Modal Aerosol Module, we quantify global BC aging responses to emission reductions for 2011–2018 and for 2050 and 2100 under carbon neutrality. During 2011–2018, global trends in BC aging degree (mass ratio of coatings to BC, R_(BC)) exhibited marked regional disparities, with a significant increase in China (5.4% yr^(-1)) that contrasts with minimal changes in the USA, Europe, and India. The divergence is attributed to opposing trends in secondary organic aerosol (SOA) and sulfate coatings, driven by regional changes in the emission ratios of the corresponding coating precursors to BC (volatile organic compounds, VOCs/BC, and SO_(2)/BC). Projections under carbon neutrality reveal that R_(BC) will increase globally by 47% (118%) in 2050 (2100), with strong convergent increases expected across major source regions. The R_(BC) increase, primarily driven by enhanced SOA coatings due to sharper BC reductions relative to VOCs, will enhance the global BC mass absorption cross-section (MAC) by 11% (17%) in 2050 (2100). Consequently, although the global BC burden will decline sharply by 60% (76%), the enhanced MAC partially offsets the decline in the BC direct radiative effect (DRE), moderating the global BC DRE decreases to 88% (92%) of the BC burden reductions in 2050 (2100). This study highlights the globally enhanced BC aging and light absorption capacity under carbon neutrality, which partly offsets the impact of BC emission reductions on future changes in BC radiative effects globally.
Underwater pipeline inspection plays a vital role in the proactive maintenance and management of critical marine infrastructure and subaquatic systems. However, the inspection of underwater pipelines is challenging due to factors such as light scattering, absorption, restricted visibility, and ambient noise. The advancement of deep learning has introduced powerful techniques for processing the large amounts of unstructured and imperfect data collected from underwater environments. This study evaluated the efficacy of the You Only Look Once (YOLO) algorithm, a real-time object detection and localization model based on convolutional neural networks, in identifying and classifying various types of pipeline defects in underwater settings. YOLOv8, the latest evolution in the YOLO family, integrates advanced capabilities such as anchor-free detection, a cross-stage partial network backbone for efficient feature extraction, and a feature pyramid network + path aggregation network neck for robust multi-scale object detection, which make it particularly well-suited for complex underwater environments. Due to the lack of suitable open-access datasets for underwater pipeline defects, a custom dataset was captured using a remotely operated vehicle in a controlled environment. Extensive experimentation demonstrated that YOLOv8 X-Large consistently outperformed the other models in pipe defect detection and classification, achieving a strong balance between precision and recall in identifying pipeline cracks, rust, corners, defective welds, flanges, tapes, and holes. This research establishes the baseline performance of YOLOv8 for underwater defect detection and showcases its potential to enhance the reliability and efficiency of pipeline inspection tasks in challenging underwater environments.
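The precision/recall figures reported for detectors like YOLOv8 rest on intersection over union (IoU): a prediction counts as correct when its IoU with a ground-truth box exceeds a threshold. The standard computation, with boxes as (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

score = iou((0, 0, 2, 2), (1, 1, 3, 3))   # intersection = 1, union = 7
```

A typical evaluation threshold is IoU >= 0.5; the two overlapping boxes above score only 1/7, so they would not be matched.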
Cyclohexene is an important raw material in the production of nylon, and selective hydrogenation of benzene is a key method for preparing it. However, the Ru catalysts used in current industrial processes still face challenges, including high metal usage, high process costs, and low cyclohexene yield. This study uses existing literature data combined with machine learning methods to analyze the factors influencing benzene conversion, cyclohexene selectivity, and yield in the hydrogenation of benzene to cyclohexene, constructing predictive models based on the XGBoost and Random Forest algorithms. The analysis found that reaction time, Ru content, and space velocity are the key factors influencing cyclohexene yield, selectivity, and benzene conversion. Shapley Additive Explanations (SHAP) analysis and feature importance analysis further revealed the contribution of each variable to the reaction outcomes. Additionally, we randomly generated one million variable combinations using the Dirichlet distribution to predict high-yield catalyst formulations. This paper provides new insights into the application of machine learning in heterogeneous catalysis and offers a reference for further research.
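The Dirichlet distribution is a natural choice for generating candidate formulations because each draw is a set of non-negative fractions summing to one. A standard stdlib-only construction normalizes independent gamma variates (the concentration parameters below are illustrative):

```python
import random

def dirichlet_sample(alphas, rng):
    """One draw from Dirichlet(alphas) via normalized gamma variates."""
    draws = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]

rng = random.Random(42)
# e.g. composition fractions of three formulation variables
candidates = [dirichlet_sample([1.0, 1.0, 1.0], rng) for _ in range(1000)]
```

Scaling this loop to one million draws and scoring each candidate with the trained XGBoost/Random Forest model is the screening strategy the abstract describes.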
The electrocatalytic reduction of nitric oxide for ammonia synthesis (NORR) is a key green energy conversion technology. Its efficiency relies on high-performance electrocatalysts to enhance both ammonia yield (Y_(NH3)) and Faradaic efficiency (F_(NH3)). However, conventional experimental methods for screening high-activity NORR catalysts often entail high resource consumption and time costs. Machine learning combined with SHAP feature analysis was employed to establish a stacked ensemble model that integrates multiple algorithms, allowing a systematic investigation of the key descriptors governing NORR performance based on an experimental dataset. Evaluation of eight model algorithms revealed that the Stacked-SVR model achieved an R^(2) of 0.9223 and an RMSE of 0.0608 on the test set, whereas the Stacked-RF model achieved an R^(2) of 0.9042 and an RMSE of 0.0900. The stacked ensemble model integrates the strengths of the individual algorithms and demonstrates strong NORR prediction performance while avoiding overfitting. SHAP feature analysis revealed that the Cu content in the catalyst composition has the most significant impact on catalytic performance. Moreover, the combination of wet chemical reduction synthesis, a carbon fiber (CF) conductive substrate, and an HCl electrolyte is more favorable for enhancing catalytic activity. Additionally, moderately lowering the working potential, controlling the electrolyte volume at low to medium levels, reducing catalyst loading, and increasing electrolyte concentration were found to synergistically enhance both Y_(NH3) and F_(NH3).
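Stacking means a meta-learner combines the held-out predictions of several base models. As a minimal sketch (not the paper's Stacked-SVR/Stacked-RF setup), the meta-learner below is just a convex blend of two base models, fit by grid search on held-out predictions:

```python
def fit_meta_weights(base_preds, targets):
    """Grid-search the convex blend weight of two base models' predictions."""
    best_w, best_err = 0.0, float("inf")
    for i in range(101):
        w = i / 100.0
        err = sum(
            (w * p0 + (1.0 - w) * p1 - t) ** 2
            for (p0, p1), t in zip(base_preds, targets)
        )
        if err < best_err:
            best_w, best_err = w, err
    return best_w

def stack_predict(w, p0, p1):
    """Final stacked prediction from the two base-model outputs."""
    return w * p0 + (1.0 - w) * p1

targets = [1.0, 2.0, 3.0]
base_preds = [(t, t + 1.0) for t in targets]   # base 0 exact, base 1 biased
best_w = fit_meta_weights(base_preds, targets)
# best_w == 1.0: all weight goes to the unbiased base model
```

Real stacking replaces the grid search with a trained meta-model (e.g. SVR or RF, as in the abstract), but the principle of learning how much to trust each base model is the same.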
Federated Learning (FL) has become a leading decentralized solution that enables multiple clients to train a model collaboratively without directly sharing raw data, making it suitable for privacy-sensitive applications such as healthcare, finance, and smart systems. As the field continues to evolve, the research landscape has become more complex and scattered, covering different system designs, training methods, and privacy techniques. This survey is organized around three core challenges: how data is distributed, how models are synchronized, and how to defend against attacks. It provides a structured and up-to-date review of FL research from 2023 to 2025, offering a unified taxonomy that categorizes works by data distribution (Horizontal FL, Vertical FL, Federated Transfer Learning, and Personalized FL), training synchronization (synchronous and asynchronous FL), optimization strategies, and threat models (data leakage and poisoning attacks). In particular, we summarize the latest contributions in Vertical FL frameworks for secure multi-party learning, communication-efficient Horizontal FL, and domain-adaptive Federated Transfer Learning. Furthermore, we examine synchronization techniques addressing system heterogeneity, including straggler mitigation in synchronous FL and staleness management in asynchronous FL. The survey covers security threats in FL, such as gradient inversion, membership inference, and poisoning attacks, as well as defense strategies including privacy-preserving aggregation and anomaly detection. The paper concludes by outlining unresolved issues and highlighting challenges in handling personalized models, scalability, and real-world adoption.
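The aggregation step that most of the surveyed horizontal-FL work builds on is FedAvg: the server replaces the global model with a size-weighted average of client parameters. A minimal sketch over flat parameter lists (real systems average full network weight tensors):

```python
def fedavg(client_params, client_sizes):
    """Size-weighted average of per-client parameter vectors (FedAvg)."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(params[i] * size for params, size in zip(client_params, client_sizes)) / total
        for i in range(dim)
    ]

# client 1 holds 1 sample, client 2 holds 3, so client 2 dominates the average
global_params = fedavg([[1.0, 0.0], [3.0, 2.0]], [1, 3])
# global_params == [2.5, 1.5]
```

Synchronous FL runs this after every round over all (or all non-straggling) clients; asynchronous variants apply staleness-discounted updates instead, which is the distinction the survey's synchronization taxonomy draws.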
Split Learning (SL) has been promoted as a promising collaborative machine learning technique designed to address data privacy and resource efficiency. Specifically, neural networks are divided into client and server subnetworks in order to mitigate the exposure of sensitive data and reduce the overhead on client devices, making SL particularly suitable for resource-constrained devices. Although SL prevents the direct transmission of raw data, it does not entirely eliminate the risk of privacy breaches. In fact, the intermediate data transmitted to the server sub-model may include patterns or information that could reveal sensitive attributes. Moreover, achieving a balance between model utility and data privacy has emerged as a challenging problem. In this article, we propose a novel defense approach that combines (i) adversarial learning and (ii) network channel pruning. In particular, the proposed adversarial learning approach is specifically designed to reduce the risk of private data exposure while maintaining high performance on the utility task. The suggested channel pruning, in turn, enables the model to adaptively adjust and reactivate pruned channels during adversarial training. The integration of these two techniques reduces the informativeness of the intermediate data transmitted by the client sub-model, thereby enhancing its robustness against attribute inference attacks without adding significant computational overhead and making it well-suited for IoT devices, mobile platforms, and Internet of Vehicles (IoV) scenarios. The proposed defense approach was evaluated using EfficientNet-B0, a widely adopted compact model, along with three benchmark datasets. The results showcased its superior defense capability against attribute inference attacks compared to existing state-of-the-art methods, demonstrating the effectiveness of the proposed channel pruning-based adversarial training approach in achieving the intended compromise between utility and privacy within SL frameworks. In fact, the classification accuracy attained by the attackers decreased drastically, by 70%.
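Channel pruning in this context amounts to masking out low-importance channels of the client sub-model's intermediate output. A common importance proxy, used here as an illustrative assumption rather than the paper's adaptive criterion, is the channel norm:

```python
def prune_channels(channel_norms, keep_ratio):
    """Binary mask keeping the top fraction of channels by norm.

    Ties at the threshold are kept, so slightly more channels may survive.
    """
    k = max(1, int(len(channel_norms) * keep_ratio))
    threshold = sorted(channel_norms, reverse=True)[k - 1]
    return [1 if n >= threshold else 0 for n in channel_norms]

mask = prune_channels([0.1, 0.9, 0.5, 0.2], 0.5)
# mask == [0, 1, 1, 0]: the two strongest channels survive
```

Multiplying the intermediate activations by such a mask during adversarial training shrinks the information an attribute-inference attacker can extract, while the reactivation step described above lets wrongly pruned channels recover.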
The Internet of Vehicles (IoV) is expected to lessen pollution, ease traffic, and increase road safety. The interconnectedness of IoV entities, however, raises the possibility of cyberattacks, which can have detrimental effects. IoV systems typically send massive volumes of raw data to central servers, which may raise privacy issues. Additionally, model training on IoV devices with limited resources normally leads to slower training times and reduced service quality. We discuss a privacy-preserving Federated Split Learning with Tiny Machine Learning (TinyML) approach, which operates on IoV edge devices without sharing sensitive raw data. Specifically, we focus on integrating split learning (SL) with federated learning (FL) and TinyML models. FL is a decentralised machine learning (ML) technique that enables numerous edge devices to collectively train a common model while retaining data locally. This article thoroughly discusses the architecture and challenges associated with the increasing prevalence of SL in the IoV domain, coupled with FL and TinyML. We start with the IoV learning framework, which includes edge computing, FL, SL, and TinyML, and then discuss how these technologies might be integrated. We elucidate the operational principles of federated and split learning by examining and addressing many challenges, and subsequently examine the integration of SL with FL and various applications of TinyML. Finally, we explore the potential integration of FL and SL with TinyML in the IoV domain, referred to as FSL-TM. It is a superior method for preserving privacy, as it conducts model training on individual devices or edge nodes, obviating the necessity for centralised data aggregation, which presents considerable privacy threats. The insights provided aim to help both researchers and practitioners understand the complicated terrain of FL and SL, thereby facilitating advancement in this swiftly progressing domain.
Knowledge distillation has become a standard technique for compressing large language models into efficient student models, but existing methods often struggle to balance prediction accuracy with explanation quality. Recent approaches such as Distilling Step-by-Step (DSbS) introduce explanation supervision, yet they apply it in a uniform manner that may not fully exploit the different learning dynamics of prediction and explanation. In this work, we propose a task-structured curriculum learning (TSCL) framework that structures training into three sequential phases: (i) prediction-only, to establish stable feature representations; (ii) joint prediction-explanation, to align task outputs with rationale generation; and (iii) explanation-only, to refine the quality of rationales. This design provides a simple but effective modification to DSbS, requiring no architectural changes and adding negligible training cost. We justify the phase scheduling with ablation studies and convergence analysis, showing that an initial prediction-heavy stage followed by a balanced joint phase improves both stability and explanation alignment. Extensive experiments on five datasets (e-SNLI, ANLI, CommonsenseQA, SVAMP, and MedNLI) demonstrate that TSCL consistently outperforms strong baselines, achieving gains of +1.7-2.6 points in accuracy and 0.8-1.2 in ROUGE-L, corresponding to relative error reductions of up to 21%. Beyond lexical metrics, human evaluation and ERASER-style faithfulness diagnostics confirm that TSCL produces more faithful and informative explanations. Comparative training curves further reveal faster convergence and lower variance across seeds. Efficiency analysis shows less than 3% overhead in wall-clock training time and no additional inference cost, making the approach practical for real-world deployment. This study demonstrates that a simple task-structured curriculum can significantly improve the effectiveness of knowledge distillation. By separating and sequencing objectives, TSCL achieves a better balance between accuracy, stability, and explanation quality. The framework generalizes across domains, including medical NLI, and offers a principled recipe for future applications in multimodal reasoning and reinforcement learning.
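The three-phase schedule can be expressed as a function from training progress to loss weights. The phase boundaries (30%/80%) and the 50/50 joint split below are illustrative assumptions; the abstract does not state the paper's exact schedule:

```python
def tscl_weights(step, total_steps, pred_end=0.3, joint_end=0.8):
    """(prediction weight, explanation weight) for the three TSCL phases."""
    frac = step / total_steps
    if frac < pred_end:
        return 1.0, 0.0      # phase (i): prediction-only
    if frac < joint_end:
        return 0.5, 0.5      # phase (ii): joint prediction-explanation
    return 0.0, 1.0          # phase (iii): explanation-only
```

The training loop then computes `w_pred * loss_pred + w_expl * loss_expl` each step, so the curriculum needs no architectural change, matching the claim above.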
Federated learning is a distributed framework that trains a centralised model using data from multiple clients without transferring that data to a central server. Despite rapid progress, federated learning still faces several unsolved challenges. Specifically, communication costs and system heterogeneity, such as non-identical data distributions, hinder its progress. Several approaches have recently emerged for federated learning involving heterogeneous clients with varying computational capabilities (namely, heterogeneous federated learning). However, heterogeneous federated learning faces two key challenges: optimising model size and determining client selection ratios. Moreover, efficiently aggregating local models from clients with diverse capabilities is crucial for addressing system heterogeneity and communication efficiency. This paper proposes an evolutionary multiobjective optimisation framework for heterogeneous federated learning (MOHFL) to address these issues. Our approach formulates and solves a bi-objective optimisation problem that minimises communication cost and model error rate. The decision variables comprise the model sizes and client selection ratios for each of the Q client clusters, yielding a total of 2×Q optimisation parameters to be tuned. We develop a partition-based strategy for MOHFL that segregates clients into clusters based on their communication and computation capabilities. Additionally, we implement an adaptive model sizing mechanism that dynamically assigns appropriate subnetwork architectures to clients based on their computational constraints. We also propose a unified aggregation framework to effectively combine models of varying sizes from heterogeneous clients. Extensive experiments on multiple datasets demonstrate the effectiveness and superiority of our proposed method compared to existing approaches.
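The output of any bi-objective minimisation like MOHFL's is a Pareto front: configurations for which no other configuration is better in both communication cost and error rate. The dominance filter is simple to state (the sample points are illustrative):

```python
def dominates(p, q):
    """p dominates q if p is no worse in both objectives and not equal."""
    return p[0] <= q[0] and p[1] <= q[1] and p != q

def pareto_front(points):
    """Non-dominated (communication cost, error rate) pairs; both minimised."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = pareto_front([(1, 5), (2, 3), (3, 4), (4, 1)])
# front == [(1, 5), (2, 3), (4, 1)]; (3, 4) is dominated by (2, 3)
```

An evolutionary multiobjective optimiser maintains and refines such a front over generations, leaving the operator a menu of cost/accuracy trade-offs rather than one forced compromise.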
Recent digital advancements have intensified the need for adaptive, data-driven, and socially centered learning ecosystems. This paper presents the formulation of a cross-platform, innovative, gamified, and personalized Learning Ecosystem, which integrates 3D/VR environments, machine learning algorithms, and business intelligence frameworks to enhance learner-centered education and informed decision-making. This Learning System makes use of immersive, analytically assessed virtual learning spaces, facilitating real-time monitoring not just of learning performance, but also of overall engagement and behavioral patterns, via a comprehensive set of sustainability-oriented, ESG-aligned Key Performance Indicators (KPIs). Machine learning models support predictive analysis, personalized feedback, and hybrid recommendation mechanisms, whilst dedicated dashboards translate complex educational data into actionable insights for all use cases of the System (Educational Institutions, Educators, and Learners). Additionally, the presented Learning System introduces a structured Mentoring and Consulting Subsystem, reinforcing human-centered guidance alongside automated intelligence. The Platform's modular architecture and simulation-centered evaluation approach actively support personalized and continuously optimized learning pathways. It thus exemplifies a mature, adaptive Learning Ecosystem supporting immersive technologies, analytics, and pedagogical support, contributing to contemporary digital learning innovation and sociotechnical transformation in education.
Funding: co-supported by the Aeronautical Science Foundation of China (Nos. 2018ZA52002, 2019ZA052011).
Abstract: Zero-shot learning enables the recognition of new class samples by transferring models learned from semantic features and existing sample features to classes that have never been seen before. Consistency between different types of features and domain shift are two of the critical issues in zero-shot learning. To address both of these issues, this paper proposes a new model structure. The traditional approach maps semantic features and visual features into the same feature space; building on this, the proposed model uses a dual-discriminator approach, which further enhances the consistency between semantic and visual features. At the same time, this approach can align unseen-class semantic features with training-set samples, providing partial information about the unseen classes. In addition, a new feature fusion method is proposed in the model. This method is equivalent to adding a perturbation to the seen-class features, which reduces the degree to which the model's classification results are biased towards the seen classes. The fusion also supplies partial information about the unseen classes, improving classification accuracy in generalized zero-shot learning and reducing domain bias. The proposed method is validated and compared with other methods on four datasets, and the experimental results show that it achieves promising results.
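The perturbation view of feature fusion can be pictured as a convex blend of a seen-class visual feature with an unseen-class semantic embedding plus small Gaussian noise. This is a hypothetical rendering for illustration only; the function name, coefficients, and noise model are assumptions, not the paper's formulation:

```python
import random

def fuse_features(visual_feat, unseen_semantic, alpha=0.8, noise_scale=0.05,
                  rng=None):
    """Blend a seen-class visual feature with an unseen-class semantic
    embedding (same dimensionality assumed), adding a small perturbation.
    The semantic term leaks unseen-class information into training,
    while the noise reduces bias toward seen classes."""
    rng = rng or random.Random(0)
    return [alpha * v + (1 - alpha) * s + rng.gauss(0, noise_scale)
            for v, s in zip(visual_feat, unseen_semantic)]
```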
Funding: Supported by the National Natural Science Foundation of China (61703158), the Zhejiang Provincial Public Welfare Technology Application Research Project (LGG22F030023, LTGS23F030003), and the Huzhou Municipal Natural Science Foundation of China (2021YZ03).
Abstract: Dear Editor, this letter provides a simple framework for generalized zero-shot learning in fault diagnosis. For industrial process monitoring, supervised learning and zero-shot learning (ZSL) can only deal with seen and unseen faults, respectively. However, in the online monitoring stage of an actual industrial process, both seen and unseen faults may occur, which makes either approach on its own impractical for industrial process monitoring.
Abstract: Predicting the behavior of renewable energy systems requires models capable of generating accurate forecasts from limited historical data, a challenge that becomes especially pronounced when commissioning new facilities where operational records are scarce. This review synthesizes recent progress in data-efficient deep learning approaches for such "cold-start" forecasting problems. It primarily covers three interrelated domains in which data scarcity and operational variability are most critical: solar photovoltaic (PV), wind power, and electrical load forecasting. Representative studies on hydropower and carbon emission prediction are also included to provide a broader systems perspective. To this end, we examined trends from over 150 predominantly peer-reviewed studies published between 2019 and mid-2025, highlighting advances in zero-shot and few-shot meta-learning frameworks that enable rapid model adaptation with minimal labeled data. Transfer learning combined with spatiotemporal graph neural networks has been employed to transfer knowledge from existing energy assets to new, data-sparse environments, effectively capturing hidden dependencies among geographic features, meteorological dynamics, and grid structures. Synthetic data generation has further proven valuable for expanding training samples and mitigating overfitting in cold-start scenarios. In addition, large language models and explainable artificial intelligence (XAI), notably conversational XAI systems, have been used to interpret and communicate complex model behaviors in accessible terms, fostering operator trust from the earliest deployment stages. By consolidating methodological advances, unresolved challenges, and open-source resources, this review provides a coherent overview of deep learning strategies that can shorten the data-sparse ramp-up period of new energy infrastructures and accelerate the transition toward resilient, low-carbon electricity grids.
Funding: Supported by the Key R&D Program of Zhejiang Province (No. 2024C01021), the National Regional Innovation and Development Joint Fund of China (No. U24A20254), and the Leading Talents of Technological Innovation Program of Zhejiang Province (No. 2023R5214).
Abstract: In recent years, multi-label zero-shot learning (ML-ZSL) has garnered increasing attention because of its wide range of potential applications, such as image annotation, text classification, and bioinformatics. The central challenge in ML-ZSL lies in predicting multiple labels for unseen classes without requiring any labeled training data, in contrast with conventional supervised learning paradigms. However, existing methods face several significant challenges: the substantial semantic gap between modalities, which impedes effective knowledge transfer, and the intricate and typically complex relationships among multiple labels, which are difficult to model in a meaningful and accurate manner. To overcome these challenges, we propose a graph-augmented multimodal chain-of-thought (GMCoT) reasoning approach. The proposed method combines the strengths of multimodal large language models with graph-based structures, significantly enhancing the reasoning process involved in multi-label prediction. First, a novel multimodal chain-of-thought reasoning framework is presented that imitates human-like step-by-step reasoning to produce multi-label predictions. Second, a technique is presented for integrating label graphs into the reasoning process, enabling the capture of complex semantic relationships among labels and thereby improving the accuracy and consistency of multi-label generation. Comprehensive experiments on benchmark datasets demonstrate that the proposed GMCoT approach outperforms state-of-the-art methods in ML-ZSL.
Funding: Supported by the MSIT (Ministry of Science and ICT), Republic of Korea, under the ITRC (Information Technology Research Center) support program (IITP-2024-2020-0-01789) and the Artificial Intelligence Convergence Innovation Human Resources Development program (IITP-2024-RS-2023-00254592), supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation); and by the National Research Foundation, Singapore, and the Ministry of National Development, Singapore, under its Cities of Tomorrow R&D Programme (CoT Award NRF-CoTV4-2020-9).
Abstract: Zero-shot learning (ZSL) is an important and rapidly growing area of machine learning that aims to recognize new classes without prior training data. Despite its significance, ZSL has faced challenges with overfitting in embedding-based methods and limitations in traditional one-directional attention (ODA) based approaches. To bridge these gaps, this paper proposes bi-directional attention (BDA) to integrate insights from both embedding- and attention-based approaches. The proposed BDA system consists of a bi-directional attention network (BDAN) and a synthesized visual embedding network (SVEN) that facilitate visual-semantic interaction for ZSL classification. More specifically, the BDAN employs region self-attention (RSA), semantic synthesis attention (SSA), and visual synthesis attention (VSA) to overcome the overfitting issue of embedding methods and enhance transferability, to associate visual features with semantic property information, and to learn locally improved visual features. Extensive testing on the CUB, SUN, and AWA2 datasets confirms the superiority of the proposed method over traditional approaches.
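The core operation in both directions of such a network is standard scaled dot-product attention; BDA applies it region-to-attribute and attribute-to-region. A pure-Python sketch of the single-direction primitive (the names are illustrative, and the real BDAN adds learned projections that this sketch omits):

```python
import math

def attend(queries, keys, values):
    """Scaled dot-product attention over lists of vectors: each query
    yields a softmax-weighted average of the value vectors."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        peak = max(scores)                      # stabilise the softmax
        exps = [math.exp(s - peak) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs
```

Running it with region features as queries and attribute embeddings as keys/values gives the region-to-attribute direction; swapping the roles gives the reverse direction.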
Funding: Guangzhou Metro Scientific Research Project (No. JT204-100111-23001), Chongqing Municipal Special Project for Technological Innovation and Application Development (No. CSTB2022TIAD-KPX0101), and the Science and Technology Research and Development Program of China State Railway Group Co., Ltd. (No. N2023G045).
Abstract: The uplift resistance of the soil overlying shield tunnels significantly impacts their anti-floating stability, yet research on uplift resistance for special-shaped shield tunnels is limited. This study combines numerical simulation with machine learning to explore this issue. It summarizes special-shaped tunnel geometries and introduces a shape coefficient. Using the finite element software Plaxis3D, the study simulates, across different conditions, six key parameters that affect uplift resistance: shape coefficient, burial depth ratio, the tunnel's longest horizontal length, internal friction angle, cohesion, and soil submerged bulk density. Employing XGBoost and ANN methods, the feature importance of each parameter was analyzed based on the numerical simulation results. The findings show that a tunnel shape more closely resembling a circle leads to reduced uplift resistance in the overlying soil, whereas the other parameters exhibit the contrary effect. Furthermore, the study reveals a diminishing trend in the feature importance of burial depth ratio, internal friction angle, tunnel longest horizontal length, cohesion, soil submerged bulk density, and shape coefficient in influencing uplift resistance.
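Feature importance of the kind reported above can be obtained model-agnostically by permutation: shuffle one input column and measure how much the prediction error grows. A small stand-in sketch (this is not the XGBoost/ANN importance computation used in the study, and the helper names are invented):

```python
import random

def permutation_importance(model, X, y, n_repeats=5, rng=None):
    """Mean increase in MSE when each feature column is shuffled;
    a larger increase means a more important feature."""
    rng = rng or random.Random(0)

    def mse(pred):
        return sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)

    base = mse([model(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        rises = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the column's link to the targets
            X_perm = [row[:j] + [c] + row[j + 1:] for row, c in zip(X, col)]
            rises.append(mse([model(row) for row in X_perm]) - base)
        importances.append(sum(rises) / n_repeats)
    return importances
```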
Funding: Supported by the National Natural Science Foundation of China (42505149, 41925023, U2342223, 42105069, and 91744208), the China Postdoctoral Science Foundation (2025M770303), the Fundamental Research Funds for the Central Universities (14380230), the Jiangsu Funding Program for Excellent Postdoctoral Talent, and the Jiangsu Collaborative Innovation Center of Climate Change.
Abstract: Countries around the world have been making efforts to reduce pollutant emissions. However, the response of global black carbon (BC) aging to emission changes remains unclear. Using the Community Atmosphere Model version 6 with a machine-learning-integrated four-mode version of the Modal Aerosol Module, we quantify global BC aging responses to emission reductions for 2011-2018 and for 2050 and 2100 under carbon neutrality. During 2011-2018, global trends in BC aging degree (mass ratio of coatings to BC, R_(BC)) exhibited marked regional disparities, with a significant increase in China (5.4% yr^(-1)) that contrasts with minimal changes in the USA, Europe, and India. The divergence is attributed to opposing trends in secondary organic aerosol (SOA) and sulfate coatings, driven by regional changes in the emission ratios of the corresponding coating precursors to BC (VOC/BC and SO_(2)/BC, where VOC denotes volatile organic compounds). Projections under carbon neutrality reveal that R_(BC) will increase globally by 47% (118%) in 2050 (2100), with strong convergent increases expected across major source regions. The R_(BC) increase, primarily driven by enhanced SOA coatings due to sharper BC reductions relative to VOCs, will enhance the global BC mass absorption cross-section (MAC) by 11% (17%) in 2050 (2100). Consequently, although the global BC burden will decline sharply by 60% (76%), the enhanced MAC partially offsets the decline in the BC direct radiative effect (DRE), moderating the global BC DRE decreases to 88% (92%) of the BC burden reductions in 2050 (2100). This study highlights globally enhanced BC aging and light absorption capacity under carbon neutrality, which partly offsets the impact of BC emission reductions on future changes in BC radiative effects.
Abstract: Underwater pipeline inspection plays a vital role in the proactive maintenance and management of critical marine infrastructure and subaquatic systems. However, inspecting underwater pipelines is challenging due to light scattering, absorption, restricted visibility, and ambient noise. The advancement of deep learning has introduced powerful techniques for processing the large amounts of unstructured and imperfect data collected from underwater environments. This study evaluated the efficacy of the You Only Look Once (YOLO) algorithm, a real-time object detection and localization model based on convolutional neural networks, in identifying and classifying various types of pipeline defects in underwater settings. YOLOv8, the latest evolution in the YOLO family, integrates advanced capabilities such as anchor-free detection, a cross-stage partial network backbone for efficient feature extraction, and a feature pyramid network + path aggregation network neck for robust multi-scale object detection, which make it particularly well suited to complex underwater environments. Due to the lack of suitable open-access datasets for underwater pipeline defects, a custom dataset was captured using a remotely operated vehicle in a controlled environment. Extensive experimentation demonstrated that YOLOv8 X-Large consistently outperformed the other models in defect detection and classification, achieving a strong balance between precision and recall in identifying pipeline cracks, rust, corners, defective welds, flanges, tapes, and holes. This research establishes a baseline performance of YOLOv8 for underwater defect detection and showcases its potential to enhance the reliability and efficiency of pipeline inspection in challenging underwater environments.
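Detection quality in this setting is scored by matching predicted and ground-truth boxes via intersection-over-union (IoU), the standard criterion behind YOLO-style precision/recall. A minimal sketch of the metric (the (x1, y1, x2, y2) box format is an assumption):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0
```

A prediction is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.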
Funding: Supported by the CAS Basic and Interdisciplinary Frontier Scientific Research Pilot Project (XDB1190300, XDB1190302), the Youth Innovation Promotion Association CAS (Y2021056), the Joint Fund of Yulin University and the Dalian National Laboratory for Clean Energy (YLU-DNL Fund 2022007), and the Special Fund for Science and Technology Innovation Teams of Shanxi Province (202304051001007).
Abstract: Cyclohexene is an important raw material in the production of nylon, and selective hydrogenation of benzene is a key method for preparing it. However, the Ru catalysts used in current industrial processes still face challenges, including high metal usage, high process costs, and low cyclohexene yield. This study uses existing literature data combined with machine learning to analyze the factors influencing benzene conversion, cyclohexene selectivity, and yield in the hydrogenation of benzene to cyclohexene, and constructs predictive models based on the XGBoost and Random Forest algorithms. The analysis found that reaction time, Ru content, and space velocity are the key factors influencing cyclohexene yield, selectivity, and benzene conversion. Shapley Additive Explanations (SHAP) analysis and feature importance analysis further revealed the contribution of each variable to the reaction outcomes. Additionally, we randomly generated one million variable combinations using the Dirichlet distribution to predict high-yield catalyst formulations. This paper provides new insights into the application of machine learning in heterogeneous catalysis and offers a reference for further research.
Abstract: The electrocatalytic reduction of nitric oxide for ammonia synthesis (NORR) is a key green energy conversion technology. Its efficiency relies on high-performance electrocatalysts to enhance both ammonia yield (Y_(NH3)) and Faradaic efficiency (F_(NH3)). However, conventional experimental methods for screening high-activity NORR catalysts often entail high resource consumption and time costs. Machine learning combined with SHAP feature analysis was employed to establish a stacked ensemble model that integrates multiple algorithms, allowing a systematic investigation of the key descriptors governing NORR performance based on an experimental dataset. Evaluation of eight model algorithms revealed that the Stacked-SVR model achieved an R^(2) of 0.9223 and an RMSE of 0.0608 on the test set, whereas the Stacked-RF model achieved an R^(2) of 0.9042 and an RMSE of 0.0900. The stacked ensemble integrates the strengths of the individual algorithms and demonstrates strong NORR prediction performance while avoiding overfitting. SHAP feature analysis revealed that the Cu content of the catalyst composition has the most significant impact on catalytic performance. Moreover, the combination of wet chemical reduction synthesis, a carbon fiber (CF) conductive substrate, and an HCl electrolyte is more favorable for enhancing catalytic activity. Additionally, moderately lowering the working potential, controlling the electrolyte volume at low to medium levels, reducing catalyst loading, and increasing electrolyte concentration were found to synergistically enhance both Y_(NH3) and F_(NH3).
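Stacking of the kind described combines base-model predictions through a meta-learner fitted on their outputs. The sketch below uses ordinary least squares over two base predictors as the meta-learner; this is a generic illustration of the stacking step, not the paper's Stacked-SVR/Stacked-RF pipeline:

```python
def fit_meta_weights(preds_a, preds_b, targets):
    """Solve the 2x2 normal equations for weights (w_a, w_b) so that
    w_a*a + w_b*b approximates the targets in the least-squares sense."""
    saa = sum(a * a for a in preds_a)
    sbb = sum(b * b for b in preds_b)
    sab = sum(a * b for a, b in zip(preds_a, preds_b))
    say = sum(a * t for a, t in zip(preds_a, targets))
    sby = sum(b * t for b, t in zip(preds_b, targets))
    det = saa * sbb - sab * sab          # assumes the predictors are not collinear
    return ((say * sbb - sby * sab) / det,
            (sby * saa - say * sab) / det)
```

In practice the meta-learner is fitted on held-out (out-of-fold) base predictions to avoid the overfitting the abstract mentions.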
Abstract: Federated Learning (FL) has become a leading decentralized solution that enables multiple clients to train a model collaboratively without directly sharing raw data, making it suitable for privacy-sensitive applications such as healthcare, finance, and smart systems. As the field has evolved, the research landscape has become more complex and scattered, covering different system designs, training methods, and privacy techniques. This survey is organized around three core challenges: how data is distributed, how models are synchronized, and how to defend against attacks. It provides a structured and up-to-date review of FL research from 2023 to 2025, offering a unified taxonomy that categorizes works by data distribution (Horizontal FL, Vertical FL, Federated Transfer Learning, and Personalized FL), training synchronization (synchronous and asynchronous FL), optimization strategies, and threat models (data leakage and poisoning attacks). In particular, we summarize the latest contributions in Vertical FL frameworks for secure multi-party learning, communication-efficient Horizontal FL, and domain-adaptive Federated Transfer Learning. Furthermore, we examine synchronization techniques that address system heterogeneity, including straggler mitigation in synchronous FL and staleness management in asynchronous FL. The survey covers security threats in FL, such as gradient inversion, membership inference, and poisoning attacks, as well as defense strategies including privacy-preserving aggregation and anomaly detection. The paper concludes by outlining unresolved issues and highlighting challenges in personalized models, scalability, and real-world adoption.
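The baseline aggregation step that most of the surveyed Horizontal FL work builds on is FedAvg: a dataset-size-weighted average of client parameter vectors. A minimal sketch:

```python
def fed_avg(client_params, client_sizes):
    """Weighted average of client parameter vectors (FedAvg): clients with
    more local data contribute proportionally more to the global model."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
            for i in range(dim)]
```

One training round then consists of broadcasting the global parameters, local client training, and this aggregation of the returned updates.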
Funding: Supported by a grant (No. CRPG-25-2054) under the Cybersecurity Research and Innovation Pioneers Initiative, provided by the National Cybersecurity Authority (NCA) in the Kingdom of Saudi Arabia.
Abstract: Split Learning (SL) has been promoted as a promising collaborative machine learning technique designed to address data privacy and resource efficiency. Specifically, neural networks are divided into client and server subnetworks to mitigate the exposure of sensitive data and reduce the overhead on client devices, making SL particularly suitable for resource-constrained devices. Although SL prevents the direct transmission of raw data, it does not entirely eliminate the risk of privacy breaches: the intermediate data transmitted to the server sub-model may contain patterns or information that reveal sensitive attributes. Moreover, balancing model utility and data privacy remains a challenging problem. In this article, we propose a novel defense approach that combines (i) adversarial learning and (ii) network channel pruning. The adversarial learning component is specifically designed to reduce the risk of private data exposure while maintaining high performance on the utility task, while the channel pruning enables the model to adaptively adjust and reactivate pruned channels during adversarial training. The integration of these two techniques reduces the informativeness of the intermediate data transmitted by the client sub-model, thereby enhancing its robustness against attribute inference attacks without significant computational overhead and making it well suited to IoT devices, mobile platforms, and Internet of Vehicles (IoV) scenarios. The proposed defense was evaluated using EfficientNet-B0, a widely adopted compact model, on three benchmark datasets. The results showcase its superior defense capability against attribute inference attacks compared with existing state-of-the-art methods: the classification accuracy attained by the attackers dropped by 70%. These findings demonstrate the effectiveness of the proposed channel-pruning-based adversarial training approach in achieving the intended compromise between utility and privacy within SL frameworks.
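The pruning half of the defense can be pictured with the common magnitude heuristic: rank channels by L1 norm and zero the weakest. The paper's scheme is adaptive and can reactivate channels during adversarial training; the static sketch below shows only the ranking step, with illustrative names:

```python
def prune_channels(channel_weights, keep_ratio=0.5):
    """Zero out the channels with the smallest L1 norms, keeping the
    top keep_ratio fraction of channels."""
    norms = [sum(abs(w) for w in ch) for ch in channel_weights]
    k = max(1, int(len(channel_weights) * keep_ratio))
    keep = set(sorted(range(len(norms)), key=lambda i: -norms[i])[:k])
    return [ch[:] if i in keep else [0.0] * len(ch)
            for i, ch in enumerate(channel_weights)]
```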
Abstract: The Internet of Vehicles (IoV) is expected to lessen pollution, ease traffic, and increase road safety. The interconnectedness of IoV entities, however, raises the possibility of cyberattacks, which can have detrimental effects. IoV systems typically send massive volumes of raw data to central servers, which may raise privacy issues, and model training on resource-limited IoV devices normally leads to slower training times and reduced service quality. We discuss a privacy-preserving Federated Split Learning with Tiny Machine Learning (TinyML) approach that operates on IoV edge devices without sharing sensitive raw data. Specifically, we focus on integrating split learning (SL) with federated learning (FL) and TinyML models. FL is a decentralised machine learning (ML) technique that enables numerous edge devices to collectively train a common model while retaining data locally. This article thoroughly discusses the architecture of, and challenges associated with, the increasing prevalence of SL in the IoV domain, coupled with FL and TinyML. The discussion starts with the IoV learning framework, which includes edge computing, FL, SL, and TinyML, and then proceeds to how these technologies might be integrated. We elucidate the operational principles of federated and split learning by examining and addressing many challenges, and subsequently examine the integration of SL with FL and various applications of TinyML. Finally, the potential integration of FL and SL with TinyML in the IoV domain, referred to as FSL-TM, is explored. It is a strong method for preserving privacy because it conducts model training on individual devices or edge nodes, obviating the need for centralised data aggregation, which presents considerable privacy threats. The insights provided aim to help researchers and practitioners understand the complicated terrain of FL and SL, facilitating advancement in this swiftly progressing domain.
Abstract: Knowledge distillation has become a standard technique for compressing large language models into efficient student models, but existing methods often struggle to balance prediction accuracy with explanation quality. Recent approaches such as Distilling Step-by-Step (DSbS) introduce explanation supervision, yet they apply it uniformly, which may not fully exploit the different learning dynamics of prediction and explanation. In this work, we propose a task-structured curriculum learning (TSCL) framework that structures training into three sequential phases: (i) prediction-only, to establish stable feature representations; (ii) joint prediction-explanation, to align task outputs with rationale generation; and (iii) explanation-only, to refine the quality of rationales. This design is a simple but effective modification to DSbS, requiring no architectural changes and adding negligible training cost. We justify the phase scheduling with ablation studies and convergence analysis, showing that an initial prediction-heavy stage followed by a balanced joint phase improves both stability and explanation alignment. Extensive experiments on five datasets (e-SNLI, ANLI, CommonsenseQA, SVAMP, and MedNLI) demonstrate that TSCL consistently outperforms strong baselines, achieving gains of +1.7-2.6 points in accuracy and 0.8-1.2 in ROUGE-L, corresponding to relative error reductions of up to 21%. Beyond lexical metrics, human evaluation and ERASER-style faithfulness diagnostics confirm that TSCL produces more faithful and informative explanations. Comparative training curves further reveal faster convergence and lower variance across seeds. Efficiency analysis shows less than 3% overhead in wall-clock training time and no additional inference cost, making the approach practical for real-world deployment. This study demonstrates that a simple task-structured curriculum can significantly improve the effectiveness of knowledge distillation: by separating and sequencing objectives, TSCL achieves a better balance between accuracy, stability, and explanation quality. The framework generalizes across domains, including medical NLI, and offers a principled recipe for future applications in multimodal reasoning and reinforcement learning.
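The three-phase schedule can be reduced to a weight function over training steps that the loss combines with the prediction and explanation terms. The phase fractions and joint weights below are illustrative placeholders, not the paper's tuned values:

```python
def tscl_weights(step, total_steps, fracs=(0.3, 0.5, 0.2)):
    """Return (prediction_weight, explanation_weight) for the current step
    under the three-phase curriculum:
    (i) prediction-only -> (ii) joint -> (iii) explanation-only."""
    end_phase1 = fracs[0] * total_steps
    end_phase2 = (fracs[0] + fracs[1]) * total_steps
    if step < end_phase1:
        return 1.0, 0.0          # phase (i): prediction-only
    if step < end_phase2:
        return 0.5, 0.5          # phase (ii): joint prediction-explanation
    return 0.0, 1.0              # phase (iii): explanation-only
```

At each step the training loss would then be w_pred * L_prediction + w_expl * L_explanation, with the pair returned by this scheduler.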
Funding: Supported by the National Research Foundation of Korea grant funded by the Korea government (RS-2023-00217116).
Abstract: Federated learning is a distributed framework that trains a centralised model using data from multiple clients without transferring that data to a central server. Despite rapid progress, federated learning still faces several unsolved challenges; in particular, communication costs and system heterogeneity, such as non-identical data distributions, hinder its progress. Several approaches have recently emerged for federated learning involving heterogeneous clients with varying computational capabilities (heterogeneous federated learning). However, heterogeneous federated learning faces two key challenges: optimising model size and determining client selection ratios. Moreover, efficiently aggregating local models from clients with diverse capabilities is crucial for addressing system heterogeneity and communication efficiency. This paper proposes an evolutionary multiobjective optimisation framework for heterogeneous federated learning (MOHFL) to address these issues. Our approach formulates and solves a biobjective optimisation problem that minimises communication cost and model error rate. The decision variables comprise model sizes and client selection ratios for each of Q client clusters, yielding a total of 2×Q optimisation parameters to be tuned. We develop a partition-based strategy for MOHFL that segregates clients into clusters based on their communication and computation capabilities. Additionally, we implement an adaptive model sizing mechanism that dynamically assigns appropriate subnetwork architectures to clients based on their computational constraints, and we propose a unified aggregation framework to effectively combine models of varying sizes from heterogeneous clients. Extensive experiments on multiple datasets demonstrate the effectiveness and superiority of the proposed method compared with existing approaches.
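The biobjective selection step in such an evolutionary framework reduces to non-dominated filtering over (communication cost, error rate) pairs. A minimal sketch of the Pareto filter only; the evolutionary search around it is omitted:

```python
def pareto_front(points):
    """Keep the non-dominated points of a biobjective minimisation:
    a point survives unless some other point is <= in both objectives.
    (Assumes no duplicate points.)"""
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                       for q in points)]
```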
Abstract: Recent digital advancements have intensified the need for adaptive, data-driven, and socially centered learning ecosystems. This paper presents the formulation of a cross-platform, innovative, gamified, and personalized Learning Ecosystem that integrates 3D/VR environments, machine learning algorithms, and business intelligence frameworks to enhance learner-centered education and inference-driven decision-making. The Learning System makes use of immersive, analytically assessed virtual learning spaces, facilitating real-time monitoring not just of learning performance but also of overall engagement and behavioral patterns, via a comprehensive set of sustainability-oriented, ESG-aligned Key Performance Indicators (KPIs). Machine learning models support predictive analysis, personalized feedback, and hybrid recommendation mechanisms, whilst dedicated dashboards translate complex educational data into actionable insights for all use cases of the System (educational institutions, educators, and learners). Additionally, the presented Learning System introduces a structured Mentoring and Consulting Subsystem, reinforcing human-centered guidance alongside automated intelligence. The Platform's modular architecture and simulation-centered evaluation approach actively support personalized and continuously optimized learning pathways. It thus exemplifies a mature, adaptive Learning Ecosystem combining immersive technologies, analytics, and pedagogical support, contributing to contemporary digital learning innovation and sociotechnical transformation in education.