The uplift resistance of the soil overlying shield tunnels significantly impacts their anti-floating stability. However, research on uplift resistance concerning special-shaped shield tunnels is limited. This study combines numerical simulation with machine learning techniques to explore this issue. It presents a summary of special-shaped tunnel geometries and introduces a shape coefficient. Using the finite element software Plaxis3D, the study simulates six key parameters (shape coefficient, burial depth ratio, the tunnel's longest horizontal length, internal friction angle, cohesion, and soil submerged bulk density) that impact uplift resistance across different conditions. Employing XGBoost and ANN methods, the feature importance of each parameter was analyzed based on the numerical simulation results. The findings demonstrate that a tunnel shape more closely resembling a circle leads to reduced uplift resistance in the overlying soil, whereas the other parameters exhibit the opposite effect. Furthermore, the study reveals a diminishing trend in the feature importance of burial depth ratio, internal friction angle, tunnel longest horizontal length, cohesion, soil submerged bulk density, and shape coefficient in influencing uplift resistance.
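The feature-importance ranking described above can be illustrated with a permutation-importance sketch: shuffle one input column at a time and measure how much the prediction error grows. This is a stand-in for the paper's XGBoost/ANN importance scores, and the surrogate model and toy data below are assumptions for illustration only.

```python
# Permutation importance sketch (illustrative stand-in for XGBoost/ANN scores).
import random

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Mean increase in squared error when one feature column is shuffled."""
    rng = random.Random(seed)
    def mse(Xs):
        return sum((predict(row) - t) ** 2 for row, t in zip(Xs, y)) / len(y)
    base = mse(X)
    importances = []
    for j in range(len(X[0])):
        deltas = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            X_perm = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            deltas.append(mse(X_perm) - base)
        importances.append(sum(deltas) / n_repeats)
    return importances

# Toy surrogate: uplift resistance dominated by burial depth ratio (x0),
# weakly affected by shape coefficient (x1) -- made-up coefficients.
predict = lambda row: 5.0 * row[0] + 0.2 * row[1]
X = [[i * 0.1, (i * 7 % 10) * 0.1] for i in range(30)]
y = [predict(row) for row in X]
imp = permutation_importance(predict, X, y)
print(imp[0] > imp[1])  # burial depth ratio ranks above shape coefficient
```

The same shuffle-and-rescore loop works for any black-box predictor, which is why it is a convenient cross-check on model-specific importance measures.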
Countries around the world have been making efforts to reduce pollutant emissions. However, the response of global black carbon (BC) aging to emission changes remains unclear. Using the Community Atmosphere Model version 6 with a machine-learning-integrated four-mode version of the Modal Aerosol Module, we quantify global BC aging responses to emission reductions for 2011–2018 and for 2050 and 2100 under carbon neutrality. During 2011–2018, global trends in BC aging degree (mass ratio of coatings to BC, R_(BC)) exhibited marked regional disparities, with a significant increase in China (5.4% yr^(-1)), which contrasts with minimal changes in the USA, Europe, and India. The divergence is attributed to opposing trends in secondary organic aerosol (SOA) and sulfate coatings, driven by regional changes in the emission ratios of the corresponding coating precursors to BC (volatile organic compounds, VOCs/BC, and SO_(2)/BC). Projections under carbon neutrality reveal that R_(BC) will increase globally by 47% (118%) in 2050 (2100), with strong convergent increases expected across major source regions. The R_(BC) increase, primarily driven by enhanced SOA coatings due to sharper BC reductions relative to VOCs, will enhance the global BC mass absorption cross-section (MAC) by 11% (17%) in 2050 (2100). Consequently, although the global BC burden will decline sharply by 60% (76%), the enhanced MAC partially offsets the magnitude of the decline in the BC direct radiative effect (DRE), resulting in the moderation of global BC DRE decreases to 88% (92%) of the BC burden reductions in 2050 (2100).
This study highlights the globally enhanced BC aging and light absorption capacity under carbon neutrality, which partly offsets the impact of BC direct emission reductions on future changes in BC radiative effects globally.
Split Learning (SL) has been promoted as a promising collaborative machine learning technique designed to address data privacy and resource efficiency. Specifically, neural networks are divided into client and server subnetworks in order to mitigate the exposure of sensitive data and reduce the overhead on client devices, thereby making SL particularly suitable for resource-constrained devices. Although SL prevents the direct transmission of raw data, it does not entirely eliminate the risk of privacy breaches. In fact, the intermediate data transmitted to the server sub-model may include patterns or information that could reveal sensitive data. Moreover, achieving a balance between model utility and data privacy has emerged as a challenging problem. In this article, we propose a novel defense approach that combines: (i) adversarial learning, and (ii) network channel pruning. In particular, the proposed adversarial learning approach is specifically designed to reduce the risk of private data exposure while maintaining high performance on the utility task. On the other hand, the suggested channel pruning enables the model to adaptively adjust and reactivate pruned channels while conducting adversarial training. The integration of these two techniques reduces the informativeness of the intermediate data transmitted by the client sub-model, thereby enhancing its robustness against attribute inference attacks without adding significant computational overhead, making it well-suited for IoT devices, mobile platforms, and Internet of Vehicles (IoV) scenarios. The proposed defense approach was evaluated using EfficientNet-B0, a widely adopted compact model, along with three benchmark datasets. The obtained results showcased its superior defense capability against attribute inference attacks compared to existing state-of-the-art methods. The findings of this research demonstrate the effectiveness of the proposed channel pruning-based adversarial training approach in achieving the intended compromise between utility and privacy within SL frameworks. In fact, the classification accuracy attained by the attackers decreased drastically, by 70%.
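The adaptive prune-and-reactivate behavior described above can be sketched as a mask update over channel importance scores: weak active channels are masked out, and a masked channel whose importance recovers during adversarial training is switched back on. The thresholds and the importance measure here are illustrative assumptions, not the paper's tuned values.

```python
# Adaptive channel-mask sketch: prune weak channels, reactivate recovered ones.
# Thresholds (prune_th, revive_th) and the importance scores are assumptions.

def update_channel_mask(importance, mask, prune_th=0.1, revive_th=0.3):
    """Return a new 0/1 mask given per-channel importance scores."""
    new_mask = []
    for imp, m in zip(importance, mask):
        if m == 1 and imp < prune_th:
            new_mask.append(0)          # prune a weak active channel
        elif m == 0 and imp > revive_th:
            new_mask.append(1)          # reactivate a recovered channel
        else:
            new_mask.append(m)
    return new_mask

mask = [1, 1, 1, 1]
mask = update_channel_mask([0.05, 0.5, 0.2, 0.8], mask)   # -> [0, 1, 1, 1]
mask = update_channel_mask([0.45, 0.5, 0.02, 0.8], mask)  # channel 0 revives
print(mask)  # [1, 1, 0, 1]
```

In a real SL pipeline the mask would multiply the client sub-model's channel activations each round, so pruning directly reduces how much information the intermediate data carries.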
Knowledge distillation has become a standard technique for compressing large language models into efficient student models, but existing methods often struggle to balance prediction accuracy with explanation quality. Recent approaches such as Distilling Step-by-Step (DSbS) introduce explanation supervision, yet they apply it in a uniform manner that may not fully exploit the different learning dynamics of prediction and explanation. In this work, we propose a task-structured curriculum learning (TSCL) framework that structures training into three sequential phases: (i) prediction-only, to establish stable feature representations; (ii) joint prediction-explanation, to align task outputs with rationale generation; and (iii) explanation-only, to refine the quality of rationales. This design provides a simple but effective modification to DSbS, requiring no architectural changes and adding negligible training cost. We justify the phase scheduling with ablation studies and convergence analysis, showing that an initial prediction-heavy stage followed by a balanced joint phase improves both stability and explanation alignment. Extensive experiments on five datasets (e-SNLI, ANLI, CommonsenseQA, SVAMP, and MedNLI) demonstrate that TSCL consistently outperforms strong baselines, achieving gains of +1.7–2.6 points in accuracy and 0.8–1.2 in ROUGE-L, corresponding to relative error reductions of up to 21%. Beyond lexical metrics, human evaluation and ERASER-style faithfulness diagnostics confirm that TSCL produces more faithful and informative explanations. Comparative training curves further reveal faster convergence and lower variance across seeds. Efficiency analysis shows less than 3% overhead in wall-clock training time and no additional inference cost, making the approach practical for real-world deployment. This study demonstrates that a simple task-structured curriculum can significantly improve the effectiveness of knowledge distillation. By separating and sequencing objectives, TSCL achieves a better balance between accuracy, stability, and explanation quality. The framework generalizes across domains, including medical NLI, and offers a principled recipe for future applications in multimodal reasoning and reinforcement learning.
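The three-phase schedule above amounts to a step-dependent pair of loss weights. A minimal sketch, with phase boundaries (30% / 80% of training) chosen purely for illustration rather than taken from the paper:

```python
# TSCL-style phase schedule sketch: loss weights (w_pred, w_expl) per step.
# Phase boundaries b1/b2 are illustrative assumptions, not the tuned values.

def tscl_weights(step, total_steps, b1=0.3, b2=0.8):
    """Return (prediction weight, explanation weight) for the current step."""
    p = step / total_steps
    if p < b1:        # phase (i): prediction-only
        return 1.0, 0.0
    elif p < b2:      # phase (ii): joint prediction-explanation
        return 0.5, 0.5
    else:             # phase (iii): explanation-only refinement
        return 0.0, 1.0

schedule = [tscl_weights(s, 100) for s in (0, 50, 90)]
print(schedule)  # [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]
```

The total loss at each step would then be `w_pred * L_pred + w_expl * L_expl`, which is why the method needs no architectural change.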
Recent digital advancements have intensified the need for adaptive, data-driven, and socially centered learning ecosystems. This paper presents the formulation of a cross-platform, innovative, gamified, and personalized Learning Ecosystem, which integrates 3D/VR environments, machine learning algorithms, and business intelligence frameworks to enhance learner-centered education and inference-driven decision-making. This Learning System makes use of immersive, analytically assessed virtual learning spaces, thereby facilitating real-time monitoring not just of learning performance, but also of overall engagement and behavioral patterns, via a comprehensive set of sustainability-oriented, ESG-aligned Key Performance Indicators (KPIs). Machine learning models support predictive analysis, personalized feedback, and hybrid recommendation mechanisms, while dedicated dashboards translate complex educational data into actionable insights for all use cases of the System (educational institutions, educators, and learners). Additionally, the presented Learning System introduces a structured Mentoring and Consulting Subsystem, reinforcing human-centered guidance alongside automated intelligence. The Platform's modular architecture and simulation-centered evaluation approach actively support personalized and continuously optimized learning pathways. It thus exemplifies a mature, adaptive Learning Ecosystem combining immersive technologies, analytics, and pedagogical support, contributing to contemporary digital learning innovation and sociotechnical transformation in education.
BACKGROUND The accurate prediction of lymph node metastasis (LNM) is crucial for managing locally advanced (T3/T4) colorectal cancer (CRC). However, both traditional histopathology and standard slide-level deep learning often fail to capture the sparse and diagnostically critical features of metastatic potential. AIM To develop and validate a case-level multiple-instance learning (MIL) framework mimicking a pathologist's comprehensive review and improve T3/T4 CRC LNM prediction. METHODS The whole-slide images of 130 patients with T3/T4 CRC were retrospectively collected. A case-level MIL framework utilising the CONCH v1.5 and UNI2-h deep learning models was trained on features from all haematoxylin and eosin-stained primary tumour slides for each patient. These pathological features were subsequently integrated with clinical data, and model performance was evaluated using the area under the curve (AUC). RESULTS The case-level framework demonstrated superior LNM prediction over slide-level training, with the CONCH v1.5 model achieving a mean AUC (±SD) of 0.899±0.033 vs 0.814±0.083, respectively. Integrating pathology features with clinical data further enhanced performance, yielding a top model with a mean AUC of 0.904±0.047, in sharp contrast to a clinical-only model (mean AUC 0.584±0.084). Crucially, a pathologist's review confirmed that the model-identified high-attention regions correspond to known high-risk histopathological features. CONCLUSION A case-level MIL framework provides a superior approach for predicting LNM in advanced CRC. This method shows promise for risk stratification and therapy decisions but requires further validation.
Accurate prediction of rockburst intensity levels is crucial for ensuring the safety of deep hard rock engineering construction. This paper introduces an expert system for rockburst intensity level prediction that employs machine learning algorithms as the basis for its inference rules. The system comprises four modules: a database, a repository, an inference engine, and an interpreter. A database containing 1114 rockburst cases was used to construct 357 datasets that serve as the repository for the expert system. Additionally, 19 types of machine learning algorithms were used to establish 6783 micro-models to construct cognitive rules within the inference engine. By integrating probability theory and marginal analysis, a fuzzy scoring method based on the SoftMax function was developed and applied in the interpreter for rockburst intensity level prediction, effectively restoring the continuity of rockburst characteristics. The research results indicate that ensemble algorithms based on decision trees are more effective in capturing the characteristics of rockburst. Key factors for accurate prediction of rockburst intensity include uniaxial compressive strength, elastic energy index, maximum principal stress, tangential stress, and their composite indicators. The accuracy of the proposed rockburst intensity level prediction expert system was verified using 20 engineering rockburst cases, with predictions aligning closely with the actual rockburst intensity levels.
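The SoftMax-based fuzzy scoring idea above can be sketched as follows: per-level scores (here standing in for aggregated micro-model outputs, an assumption) are passed through a SoftMax, and the probability-weighted mean of the level indices restores a continuous intensity value between the discrete classes.

```python
# SoftMax fuzzy-scoring sketch: discrete level scores -> continuous intensity.
# The score vector below is made up for illustration.
import math

def fuzzy_intensity(scores, levels=(1, 2, 3, 4)):
    """SoftMax over per-level scores, then the expected (continuous) level."""
    exps = [math.exp(s - max(scores)) for s in scores]  # numerically stable
    total = sum(exps)
    probs = [e / total for e in exps]
    return sum(p * l for p, l in zip(probs, levels)), probs

score, probs = fuzzy_intensity([0.1, 2.0, 1.5, 0.2])  # votes for levels I-IV
print(round(score, 2))  # 2.41, a continuous level between II and III
```

Reporting 2.41 rather than a hard class II is what "restoring the continuity of rockburst characteristics" looks like in practice.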
Frequency hopping (FH) communication has good anti-fading, anti-jamming, and anti-eavesdropping capabilities, so it is one of the main ways to combat electronic jamming. In order to further improve the anti-jamming capability of FH communication, parameters such as the fixed frequency interval, hopping rate, and hopping frequency in conventional FH can be assigned time-varying characteristics. In order to set appropriate hopping parameters and improve the performance of the system in an electromagnetic environment with various types of jamming, a heuristically accelerated Q-learning (HAQL) method is proposed in this paper. Firstly, a theoretical model for the parameter decision-making of the FH system is built, and the key parameters affecting the energy efficiency of the system are analyzed. Secondly, a Q-learning model for complex electromagnetic environments is proposed, which includes the setting of states, actions, and rewards, and a HAQL-based decision-making algorithm is put forward. Lastly, simulations are carried out under different jamming environments, and the results show that the average energy efficiency of the HAQL algorithm is higher than that of the SARSA algorithm, the ε-greedy QL algorithm, and the HQL-OSGM algorithm.
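The core mechanism of heuristically accelerated Q-learning is a standard Q-learning update combined with a heuristic bonus H(s, a) added during action selection to steer early exploration. The toy two-state environment, rewards, and heuristic below are illustrative assumptions, not the paper's FH parameter-decision model.

```python
# Heuristically accelerated Q-learning (HAQL) sketch on a toy environment.
# Environment, rewards, and heuristic H are assumptions for illustration.
import random

def haql(H, episodes=300, alpha=0.5, gamma=0.5, xi=1.0, eps=0.1, seed=1):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    reward = lambda s, a: 1.0 if a == s else 0.0  # best action matches state
    for _ in range(episodes):
        s = rng.choice((0, 1))
        if rng.random() < eps:          # occasional exploration
            a = rng.choice((0, 1))
        else:                           # greedy in Q plus heuristic bonus
            a = max((0, 1), key=lambda x: Q[(s, x)] + xi * H.get((s, x), 0.0))
        s2 = rng.choice((0, 1))
        Q[(s, a)] += alpha * (reward(s, a)
                              + gamma * max(Q[(s2, 0)], Q[(s2, 1)])
                              - Q[(s, a)])
    return Q

H = {(0, 0): 0.5, (1, 1): 0.5}  # heuristic: prefer the state-matching action
Q = haql(H)
print(Q[(0, 0)] > Q[(0, 1)], Q[(1, 1)] > Q[(1, 0)])
```

The heuristic only biases which action is tried; the learned Q-values themselves still come from the usual temporal-difference update, so a misleading heuristic is eventually overridden.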
This study aimed to integrate Monte Carlo (MC) simulation with deep learning (DL)-based denoising techniques to achieve fast and accurate prediction of high-quality electronic portal imaging device (EPID) transmission dose (TD) for patient-specific quality assurance (PSQA). A total of 100 lung cases were used to obtain the noisy EPID TD by the ARCHER MC code under four particle numbers (1×10^(6), 1×10^(7), 1×10^(8), and 1×10^(9)), and the original EPID TD was denoised by the SUNet neural network. The denoised EPID TD was assessed both qualitatively and quantitatively using the structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and gamma passing rate (GPR), with 1×10^(9) as the reference. The computation times for both the MC simulation and DL-based denoising were recorded. As the number of particles increased, both the quality of the noisy EPID TD and the computation time increased significantly (1×10^(6): 1.12 s, 1×10^(7): 1.72 s, 1×10^(8): 8.62 s, and 1×10^(9): 73.89 s). In contrast, the DL-based denoising time remained at 0.13-0.16 s. The denoised EPID TD shows a smoother visual appearance and profile curves, but differences between 1×10^(6) and 1×10^(9) still remain. SSIM improves from 0.61 to 0.95 for 1×10^(6), 0.70 to 0.96 for 1×10^(7), and 0.90 to 0.97 for 1×10^(8). PSNR increases by >20% for 1×10^(6) and 1×10^(7), and >10% for 1×10^(8). GPR improves from 48.47% to 89.10% for 1×10^(6), 61.04% to 94.35% for 1×10^(7), and 91.88% to 99.55% for 1×10^(8). The method combining MC simulation with DL-based denoising for EPID TD generation can accelerate TD prediction while maintaining high accuracy, offering a promising solution for efficient PSQA.
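Two of the metrics quoted above can be sketched simply. PSNR follows its standard definition (here using the reference maximum as the peak value); the "passing rate" below checks only the dose-difference criterion, whereas a real gamma analysis also includes distance-to-agreement, so it is a deliberate simplification. The toy dose arrays are made up.

```python
# PSNR and a simplified dose-difference passing rate (not full gamma analysis).
import math

def psnr(ref, test):
    """PSNR in dB, using the reference maximum as the peak value."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    return float("inf") if mse == 0 else 20 * math.log10(max(ref) / math.sqrt(mse))

def dose_diff_pass_rate(ref, test, tol=0.03):
    """Fraction of points within tol (e.g., 3%) of the reference maximum."""
    limit = tol * max(ref)
    return sum(abs(r - t) <= limit for r, t in zip(ref, test)) / len(ref)

ref = [1.0, 2.0, 3.0, 4.0]
test = [1.02, 1.95, 3.5, 4.01]
print(round(psnr(ref, test), 2), dose_diff_pass_rate(ref, test))
```

Denoising raises both numbers by shrinking the per-point residuals against the 1×10^(9) reference, which is exactly what the SSIM/PSNR/GPR gains in the abstract report.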
The solar cycle (SC), a phenomenon caused by quasi-periodic regular activities in the Sun, occurs approximately every 11 years. Intense solar activity can disrupt the Earth's ionosphere, affecting communication and navigation systems. Consequently, accurately predicting the intensity of the SC holds great significance, but doing so involves a long-term time series, and many existing time series forecasting methods fall short in terms of accuracy and efficiency. The Time-series Dense Encoder model is a deep learning solution tailored for long time series prediction. Based on a multi-layer perceptron structure, it outperforms the best previously existing models in accuracy, while being efficiently trainable on general datasets. We propose a method based on this model for SC forecasting. Using a trained model, we predict the test set from SC 19 to SC 25 with an average mean absolute percentage error of 32.02, a root mean square error of 30.3, a mean absolute error of 23.32, and an R^(2) (coefficient of determination) of 0.76, outperforming other deep learning models in terms of accuracy and training efficiency on sunspot number datasets. Subsequently, we use it to predict the peaks of SC 25 and SC 26. For SC 25, the peak has already passed; for SC 26, a stronger peak of 199.3 (within a range of 170.8-221.9) is predicted, projected to occur around April 2034.
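The accuracy figures quoted above use standard forecasting error metrics; their definitions can be sketched directly (the three sunspot-number values below are toy data, not the paper's):

```python
# Standard forecast-error metrics: MAE, RMSE, MAPE, and R^2.
import math

def metrics(y_true, y_pred):
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    mape = 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / n
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot  # coefficient of determination
    return mae, rmse, mape, r2

mae, rmse, mape, r2 = metrics([100.0, 150.0, 200.0], [110.0, 140.0, 205.0])
print(round(mae, 2), round(rmse, 2), round(mape, 2), round(r2, 3))
```

Note that MAPE is scale-free (a percentage) while MAE and RMSE share the units of the sunspot number, which is why all four are usually reported together.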
Distribution transformers play a vital role in power distribution systems, and their reliable operation is crucial for grid stability. This study presents a simulation-based framework for active fault diagnosis and early warning of distribution transformers, integrating Sample Ensemble Learning (SEL) with a Self-Optimizing Support Vector Machine (SO-SVM). The SEL technique enhances data diversity and mitigates class imbalance, while the SO-SVM adaptively tunes its hyperparameters to improve classification accuracy. A comprehensive transformer model was developed in MATLAB/Simulink to simulate diverse fault scenarios, including inter-turn winding faults, core saturation, and thermal aging. Feature vectors were extracted from voltage, current, and temperature measurements to train and validate the proposed hybrid model. Quantitative analysis shows that the SEL–SO-SVM framework achieves a classification accuracy of 97.8%, a precision of 96.5%, and an F1-score of 97.2%. Beyond classification, the model effectively identified incipient faults, providing an early-warning lead time of up to 2.5 s before significant deviations in operational parameters. This predictive capability underscores its potential for preventing catastrophic transformer failures and enabling timely maintenance actions. The proposed approach demonstrates strong applicability for enhancing the reliability and operational safety of distribution transformers in simulated environments, offering a promising foundation for future real-time and field-level implementations.
The rapid advancement of machine-learning-based tight-binding Hamiltonian (MLTB) methods has opened new avenues for efficient and accurate electronic structure simulations, particularly in large-scale systems and long-time scenarios. This review begins with a concise overview of traditional tight-binding (TB) models, including both (semi-)empirical and first-principles approaches, establishing the foundation for understanding MLTB developments. We then present a systematic classification of existing MLTB methodologies, grouped into two major categories: direct prediction of TB Hamiltonian elements and inference of empirical parameters. A comparative analysis with other ML-based electronic structure models is also provided, highlighting the advancement of MLTB approaches. Finally, we explore the emerging MLTB application ecosystem, highlighting how the integration of MLTB models with a diverse suite of post-processing tools, from linear-scaling solvers to quantum transport frameworks and molecular dynamics interfaces, is essential for tackling complex scientific problems across different domains. The continued advancement of this integrated paradigm promises to accelerate materials discovery and open new frontiers in the predictive simulation of complex quantum phenomena.
Wearable sensors integrated with deep learning techniques have the potential to revolutionize seamless human-machine interfaces for real-time health monitoring, clinical diagnosis, and robotic applications. Nevertheless, it remains a critical challenge to simultaneously achieve desirable mechanical and electrical performance along with biocompatibility, adhesion, self-healing, and environmental robustness with excellent sensing metrics. Herein, we report a multifunctional, anti-freezing, self-adhesive, and self-healable organogel pressure sensor composed of cobalt nanoparticle encapsulated nitrogen-doped carbon nanotubes (CoN CNT) embedded in a polyvinyl alcohol-gelatin (PVA/GLE) matrix. Fabricated using a binary solvent system of water and ethylene glycol (EG), the CoN CNT/PVA/GLE organogel exhibits excellent flexibility, biocompatibility, and temperature tolerance with remarkable environmental stability. Electrochemical impedance spectroscopy confirms near-stable performance across a broad humidity range (40%-95% RH). Freeze-tolerant conductivity under sub-zero conditions (-20℃) is attributed to the synergistic role of CoN CNT and EG, preserving mobility and network integrity. The CoN CNT/PVA/GLE organogel sensor exhibits a high sensitivity of 5.75 kPa^(-1) in the detection range from 0 to 20 kPa, ideal for subtle biomechanical motion detection. A smart human-machine interface for English letter recognition using deep learning achieved 98% accuracy. The organogel sensor's utility was extended to detect human gestures such as finger bending, wrist motion, and throat vibration during speech.
Lithium-ion batteries (LIBs) are widely deployed, from grid-scale storage to electric vehicles. LIBs remain stationary for most of their service life, during which calendar aging degrades capacity. Understanding the mechanisms of LIB calendar aging is crucial for extending battery lifespan. However, LIB calendar aging is influenced by multiple factors, including the battery material, its state, and the storage environment. Calendar aging experiments are also time-consuming, costly, and lack standardized testing conditions. This study employs a data-driven approach to establish a cross-scale database linking materials, side-reaction mechanisms, and calendar aging of LIBs. MELODI (Mechanism-informed, Explainable, Learning-based Optimization for Degradation Identification) is proposed to identify calendar aging mechanisms and quantify the effects of multi-scale factors. Results reveal that cathode material loss drives up to 91.42% of calendar aging degradation in high-nickel (Ni) batteries, while solid electrolyte interphase growth dominates in lithium iron phosphate (LFP) and low-Ni batteries, contributing up to 82.43% of degradation in LFP batteries and 99.10% of decay in low-Ni batteries, respectively. This study systematically quantifies calendar aging in commercial LIBs under varying materials, states of charge, and temperatures. These findings offer quantitative guidance for experimental design and battery use, and carry implications for emerging applications such as aerial robotics, vehicle-to-grid, and embodied intelligence systems.
This study addresses the risk of privacy leakage during the transmission and sharing of multimodal data in smart grid substations by proposing a three-tier privacy-preserving architecture based on asynchronous federated learning. The framework integrates blockchain technology, the InterPlanetary File System (IPFS) for distributed storage, and a dynamic differential privacy mechanism to achieve collaborative security across the storage, service, and federated coordination layers. It accommodates both multimodal data classification and object detection tasks, enabling the identification and localization of key targets and abnormal behaviors in substation scenarios while ensuring privacy protection. This effectively mitigates the single-point failures and model leakage issues inherent in centralized architectures. A dynamically adjustable differential privacy mechanism is introduced to allocate privacy budgets according to client contribution levels and upload frequencies, achieving a personalized balance between model performance and privacy protection. Multi-dimensional experimental evaluations, including classification accuracy, F1-score, encryption latency, and aggregation latency, verify the security and efficiency of the proposed architecture. The improved CNN model achieves 72.34% accuracy and an F1-score of 0.72 in object detection and classification tasks on infrared surveillance imagery, effectively identifying typical risk events such as not wearing safety helmets and unauthorized intrusion, while maintaining an aggregation latency of only 1.58 s and a query latency of 80.79 ms. Compared with traditional static differential privacy and centralized approaches, the proposed method demonstrates significant advantages in accuracy, latency, and security, providing a new technical paradigm for efficient, secure data sharing, object detection, and privacy preservation in smart grid substations.
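The contribution-weighted budget idea above can be sketched as follows: each client's share of the total privacy budget grows with its contribution score and shrinks with its upload frequency, and the Laplace noise scale then follows from sensitivity / ε. The specific weighting rule (contribution divided by uploads) is an illustrative assumption, not the paper's exact mechanism.

```python
# Per-client privacy-budget allocation sketch with Laplace noise.
# The weighting rule and the toy contribution/upload numbers are assumptions.
import math
import random

def allocate_budgets(contrib, uploads, eps_total=1.0):
    """Split eps_total across clients proportional to contribution / uploads."""
    raw = [c / max(u, 1) for c, u in zip(contrib, uploads)]
    s = sum(raw)
    return [eps_total * r / s for r in raw]

def laplace(rng, scale):
    """Sample Laplace(0, scale) via inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

eps = allocate_budgets(contrib=[0.6, 0.3, 0.1], uploads=[2, 1, 1])
scales = [1.0 / e for e in eps]  # noise scale = sensitivity (taken as 1) / eps
rng = random.Random(0)
noisy_update = 0.42 + laplace(rng, scales[0])  # one perturbed model update
print([round(e, 3) for e in eps])  # [0.429, 0.429, 0.143]
```

A client with a larger ε gets a smaller noise scale, i.e., better model utility at weaker privacy, which is the personalized trade-off the architecture describes.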
With the increasing complexity of vehicular networks and the proliferation of connected vehicles, Federated Learning (FL) has emerged as a critical framework for decentralized model training while preserving data privacy. However, efficient client selection and adaptive weight allocation in heterogeneous and non-IID environments remain challenging. To address these issues, we propose Federated Learning with Client Selection and Adaptive Weighting (FedCW), a novel algorithm that leverages adaptive client selection and dynamic weight allocation to optimize model convergence in real-time vehicular networks. FedCW selects clients based on their Euclidean distance from the global model and dynamically adjusts aggregation weights to optimize both data diversity and model convergence. Experimental results show that FedCW significantly outperforms existing FL algorithms such as FedAvg, FedProx, and SCAFFOLD, particularly in non-IID settings, achieving faster convergence, higher accuracy, and reduced communication overhead. These findings demonstrate that FedCW provides an effective solution for enhancing the performance of FL in heterogeneous, edge-based computing environments.
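The distance-based selection and weighting described above can be sketched in a few lines: rank clients by Euclidean distance of their model from the global model, keep the k nearest, and weight the kept clients with a distance-decaying rule. The specific decay 1/(1+d) is an illustrative assumption, not FedCW's published formula.

```python
# Distance-based client selection and weighted aggregation sketch.
# The 1/(1+d) weighting and the toy model vectors are assumptions.
import math

def select_and_weight(global_model, client_models, k=2):
    dist = [math.dist(global_model, m) for m in client_models]
    chosen = sorted(range(len(client_models)), key=lambda i: dist[i])[:k]
    raw = [1.0 / (1.0 + dist[i]) for i in chosen]
    s = sum(raw)
    weights = [r / s for r in raw]
    # Weighted aggregation of the selected client models.
    agg = [sum(w * client_models[i][j] for w, i in zip(weights, chosen))
           for j in range(len(global_model))]
    return chosen, weights, agg

chosen, w, agg = select_and_weight(
    [0.0, 0.0],                               # current global model
    [[0.1, 0.0], [2.0, 2.0], [0.0, 0.2]],     # per-client updated models
)
print(chosen)  # [0, 2] -- the two clients nearest the global model
```

Excluding the far-away client (a proxy for a heavily non-IID or stale update) is what damps divergence, while the normalized weights keep the aggregate a proper convex combination.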
The rational design of high-performance electrochemical energy storage devices critically depends on a fundamental understanding of ion-electrode interactions at the molecular scale. Herein, we employ interpretable machine learning (ML) to reveal electrolyte hydration energy as a universal descriptor governing ion-specific capacitance in two-dimensional (2D) materials. Through explainable ML, we elucidate how ion hydration shell stability and size critically influence charge transport and storage at the electrode-electrolyte interface. Our analysis identifies hydration energy, not ionic size, as the primary factor dictating capacitance, challenging prevailing assumptions and providing quantifiable design rules for electrolyte selection. These insights offer a data-driven pathway to optimize 2D materials for supercapacitors and beyond, including batteries and electrocatalytic systems. This work demonstrates the power of explainable artificial intelligence in uncovering molecular-level mechanisms that accelerate the discovery and development of next-generation energy storage technologies.
Nondestructive phenotype measurement technology can provide substantial phenotypic data support for applications such as seedling breeding, management, and quality testing. Current methods of measuring seedling phenotypes mainly rely on manual measurement, which is inefficient, subjective, and destroys samples. Therefore, this paper proposes a nondestructive measurement method for the canopy phenotype of watermelon plug seedlings based on deep learning. An Azure Kinect was used to capture canopy color images, depth images, and RGB-D images of the watermelon plug seedlings. The Mask-RCNN network was used to classify, segment, and count the canopy leaves of the watermelon plug seedlings. To reduce the error in leaf area measurement caused by mutual occlusion of leaves, the leaves were repaired by CycleGAN, and the depth images were restored by image processing. Then, Delaunay triangulation was adopted to measure the leaf area in the leaf point cloud. The YOLOX target detection network was used to identify the growing-point position of each seedling on the plug tray. The depth differences between the growing point and the upper surface of the plug tray were then calculated to obtain plant height. The experimental results show that the nondestructive measurement algorithm proposed in this paper achieves good measurement performance for watermelon plug seedlings from the one-true-leaf to three-true-leaf stages. The average relative error of measurement is 2.33% for the number of true leaves, 4.59% for the number of cotyledons, 8.37% for the leaf area, and 3.27% for the plant height. These results demonstrate that the proposed algorithm provides an effective solution for the nondestructive measurement of the canopy phenotype of plug seedlings.
Iterative Learning Control (ILC) provides an effective framework for optimizing repetitive tasks, making it particularly suitable for high-precision applications in both precision manufacturing and intelligent transportation systems (ITS). This paper presents a systematic review of ILC's developmental progress, current methodologies, and practical implementations across these two critical domains. The review first analyzes the key technical challenges encountered when integrating ILC into precision manufacturing workflows. Through case studies, it evaluates demonstrated improvements in positioning accuracy, surface finish quality, and production throughput. Furthermore, the study examines ILC's applications in ITS, with particular focus on vehicular motion control applications including autonomous vehicle trajectory tracking, platoon coordination, and traffic signal timing optimization, where its data-driven characteristics enhance adaptability to dynamic environments. Finally, the paper proposes targeted future research directions that are essential for fully realizing ILC's potential in advancing these interconnected yet distinct fields.
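The core idea of ILC can be illustrated with a first-order "P-type" update law, u_{k+1}(t) = u_k(t) + L·e_k(t): the input for the next trial is corrected by the tracking error of the current one. The toy plant, gain, and trial count below are illustrative assumptions, not a system from the review:

```python
# P-type ILC sketch: u_{k+1}(t) = u_k(t) + GAIN * e_k(t).
# The first-order plant and all numbers are illustrative placeholders.
GAIN = 0.8              # learning gain
T = 20                  # samples per trial
ref = [1.0] * T         # repetitive reference trajectory

def run_trial(u):
    """Toy plant: y[t] = 0.5*y[t-1] + 0.5*u[t], reset to zero each trial."""
    y, out = 0.0, []
    for t in range(T):
        y = 0.5 * y + 0.5 * u[t]
        out.append(y)
    return out

u = [0.0] * T
for _ in range(200):    # repeated trials
    e = [r - y for r, y in zip(ref, run_trial(u))]
    u = [ui + GAIN * ei for ui, ei in zip(u, e)]

final_err = max(abs(r - y) for r, y in zip(ref, run_trial(u)))
print(final_err < 1e-3)  # tracking error vanishes over the iterations
```

Because the same reference repeats every trial, the error contracts from one iteration to the next, which is the property exploited in both manufacturing and ITS applications.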
High-dimensional data causes difficulties in machine learning due to high time consumption and large memory requirements. In particular, in a multi-label environment, the complexity grows with the number of labels. Moreover, an optimization problem that fully considers all dependencies between features and labels is difficult to solve. In this study, we propose a novel regression-based multi-label feature selection method that integrates mutual information to better exploit the underlying data structure. By incorporating mutual information into the regression formulation, the model captures not only linear relationships but also complex non-linear dependencies. The proposed objective function simultaneously considers three types of relationships: (1) feature redundancy, (2) feature-label relevance, and (3) inter-label dependency. These three quantities are computed using mutual information, allowing the proposed formulation to capture nonlinear dependencies among variables. These three types of relationships are key factors in multi-label feature selection, and our method expresses them within a unified formulation, enabling efficient optimization while simultaneously accounting for all of them. To efficiently solve the proposed optimization problem under non-negativity constraints, we develop a gradient-based optimization algorithm with fast convergence. The experimental results on seven multi-label datasets show that the proposed method outperforms existing multi-label feature selection techniques.
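The three mutual-information quantities named above can be computed directly for discrete toy data. The snippet below is a generic illustration of those three terms, not the paper's objective function; the features and labels are hypothetical:

```python
from collections import Counter
from math import log2

def mutual_info(a, b):
    """Discrete mutual information between two sequences, in bits."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum((c / n) * log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())

# Toy multi-label data (hypothetical, for illustration only)
f1 = [0, 1, 0, 1, 0, 1, 0, 1]   # informative feature
f2 = f1[:]                      # redundant duplicate of f1
y1 = [0, 1, 0, 1, 0, 1, 0, 1]   # label 1: fully determined by f1
y2 = [0, 1, 0, 1, 0, 1, 1, 1]   # label 2: correlated with label 1

relevance  = mutual_info(f1, y1)   # (2) feature-label relevance: 1 bit
redundancy = mutual_info(f1, f2)   # (1) feature redundancy: 1 bit
label_dep  = mutual_info(y1, y2)   # (3) inter-label dependency: partial

print(relevance, redundancy, round(label_dep, 3))
```

A selection objective of the kind described would reward high relevance while penalizing high redundancy, with the label-dependency term coupling the per-label scores.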
Funding: Guangzhou Metro Scientific Research Project (No. JT204-100111-23001); Chongqing Municipal Special Project for Technological Innovation and Application Development (No. CSTB2022TIAD-KPX0101); Science and Technology Research and Development Program of China State Railway Group Co., Ltd. (No. N2023G045).
Abstract: The uplift resistance of the soil overlying shield tunnels significantly impacts their anti-floating stability. However, research on the uplift resistance of special-shaped shield tunnels is limited. This study combines numerical simulation with machine learning techniques to explore this issue. It summarizes special-shaped tunnel geometries and introduces a shape coefficient. Using the finite element software Plaxis 3D, the study simulates the effects of six key parameters on uplift resistance across different conditions: shape coefficient, burial depth ratio, the tunnel's longest horizontal length, internal friction angle, cohesion, and soil submerged bulk density. Employing XGBoost and ANN methods, the feature importance of each parameter was analyzed based on the numerical simulation results. The findings demonstrate that a tunnel shape more closely resembling a circle leads to reduced uplift resistance in the overlying soil, whereas increases in the other parameters have the opposite effect. Furthermore, the feature importance in influencing uplift resistance decreases in the order of burial depth ratio, internal friction angle, tunnel longest horizontal length, cohesion, soil submerged bulk density, and shape coefficient.
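Feature-importance rankings of this kind can be sketched with permutation importance: shuffle one input column and measure how much the model's error grows. Everything below (the linear surrogate "model", its weights, the random data) is synthetic and merely mimics the reported importance ordering; it is not the paper's FE dataset or its XGBoost/ANN pipeline:

```python
import random

random.seed(0)

FEATURES = ["depth_ratio", "friction_angle", "length", "cohesion",
            "density", "shape_coeff"]
# Synthetic surrogate: uplift resistance as a fixed weighted sum whose weights
# mimic the reported importance ordering (NOT fitted to the paper's data);
# the negative shape-coefficient weight encodes its opposite-direction effect.
WEIGHTS = [5.0, 3.0, 2.0, 1.5, 1.0, -0.5]

def model(x):
    return sum(w * v for w, v in zip(WEIGHTS, x))

X = [[random.random() for _ in FEATURES] for _ in range(200)]
y = [model(x) for x in X]

def mse(pred, true):
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)

def permutation_importance(j):
    """Error increase when column j is shuffled, breaking its link to y."""
    col = [x[j] for x in X]
    random.shuffle(col)
    Xp = [x[:j] + [col[i]] + x[j + 1:] for i, x in enumerate(X)]
    return mse([model(xp) for xp in Xp], y)  # baseline error is 0 by construction

imp = {f: permutation_importance(j) for j, f in enumerate(FEATURES)}
ranking = sorted(imp, key=imp.get, reverse=True)
print(ranking[0])  # the burial-depth-ratio surrogate dominates
```

Shuffling the column with the largest weight destroys the most predictive signal, so it receives the highest importance, echoing the ordering reported in the abstract.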
Funding: Supported by the National Natural Science Foundation of China (42505149, 41925023, U2342223, 42105069, and 91744208); the China Postdoctoral Science Foundation (2025M770303); the Fundamental Research Funds for the Central Universities (14380230); the Jiangsu Funding Program for Excellent Postdoctoral Talent; and the Jiangsu Collaborative Innovation Center of Climate Change.
Abstract: Countries around the world have been making efforts to reduce pollutant emissions. However, the response of global black carbon (BC) aging to emission changes remains unclear. Using the Community Atmosphere Model version 6 with a machine-learning-integrated four-mode version of the Modal Aerosol Module, we quantify global BC aging responses to emission reductions for 2011–2018 and for 2050 and 2100 under carbon neutrality. During 2011–2018, global trends in BC aging degree (mass ratio of coatings to BC, R_(BC)) exhibited marked regional disparities, with a significant increase in China (5.4% yr^(-1)), which contrasts with minimal changes in the USA, Europe, and India. The divergence is attributed to opposing trends in secondary organic aerosol (SOA) and sulfate coatings, driven by regional changes in the emission ratios of the corresponding coating precursors to BC (volatile organic compounds, VOCs/BC, and SO_(2)/BC). Projections under carbon neutrality reveal that R_(BC) will increase globally by 47% (118%) in 2050 (2100), with strong convergent increases expected across major source regions. The R_(BC) increase, primarily driven by enhanced SOA coatings due to sharper BC reductions relative to VOCs, will enhance the global BC mass absorption cross-section (MAC) by 11% (17%) in 2050 (2100). Consequently, although the global BC burden will decline sharply by 60% (76%), the enhanced MAC partially offsets the decline in the BC direct radiative effect (DRE), moderating the global BC DRE decreases to 88% (92%) of the BC burden reductions in 2050 (2100). This study highlights the globally enhanced BC aging and light absorption capacity under carbon neutrality, which partly offsets the impact of BC direct emission reductions on future changes in BC radiative effects globally.
Funding: Supported by grant No. CRPG-25-2054 under the Cybersecurity Research and Innovation Pioneers Initiative, provided by the National Cybersecurity Authority (NCA) of the Kingdom of Saudi Arabia.
Abstract: Split Learning (SL) has been promoted as a promising collaborative machine learning technique designed to address data privacy and resource efficiency. Specifically, neural networks are divided into client and server sub-networks in order to mitigate the exposure of sensitive data and reduce the overhead on client devices, making SL particularly suitable for resource-constrained devices. Although SL prevents the direct transmission of raw data, it does not entirely eliminate the risk of privacy breaches. In fact, the intermediate data transmitted to the server sub-model may include patterns or information that could reveal sensitive data. Moreover, achieving a balance between model utility and data privacy has emerged as a challenging problem. In this article, we propose a novel defense approach that combines: (i) adversarial learning, and (ii) network channel pruning. In particular, the proposed adversarial learning approach is specifically designed to reduce the risk of private data exposure while maintaining high performance on the utility task. The suggested channel pruning, in turn, enables the model to adaptively adjust and reactivate pruned channels while conducting adversarial training. The integration of these two techniques reduces the informativeness of the intermediate data transmitted by the client sub-model, thereby enhancing its robustness against attribute inference attacks without adding significant computational overhead, making it well-suited for IoT devices, mobile platforms, and Internet of Vehicles (IoV) scenarios. The proposed defense approach was evaluated using EfficientNet-B0, a widely adopted compact model, along with three benchmark datasets. The results showed superior defense capability against attribute inference attacks compared to existing state-of-the-art methods, demonstrating the effectiveness of the proposed channel-pruning-based adversarial training approach in achieving the intended compromise between utility and privacy within SL frameworks. In fact, the classification accuracy attained by the attackers witnessed a drastic decrease of 70%.
Abstract: Knowledge distillation has become a standard technique for compressing large language models into efficient student models, but existing methods often struggle to balance prediction accuracy with explanation quality. Recent approaches such as Distilling Step-by-Step (DSbS) introduce explanation supervision, yet they apply it in a uniform manner that may not fully exploit the different learning dynamics of prediction and explanation. In this work, we propose a task-structured curriculum learning (TSCL) framework that structures training into three sequential phases: (i) prediction-only, to establish stable feature representations; (ii) joint prediction-explanation, to align task outputs with rationale generation; and (iii) explanation-only, to refine the quality of rationales. This design provides a simple but effective modification to DSbS, requiring no architectural changes and adding negligible training cost. We justify the phase scheduling with ablation studies and convergence analysis, showing that an initial prediction-heavy stage followed by a balanced joint phase improves both stability and explanation alignment. Extensive experiments on five datasets (e-SNLI, ANLI, CommonsenseQA, SVAMP, and MedNLI) demonstrate that TSCL consistently outperforms strong baselines, achieving gains of +1.7-2.6 points in accuracy and 0.8-1.2 in ROUGE-L, corresponding to relative error reductions of up to 21%. Beyond lexical metrics, human evaluation and ERASER-style faithfulness diagnostics confirm that TSCL produces more faithful and informative explanations. Comparative training curves further reveal faster convergence and lower variance across seeds. Efficiency analysis shows less than 3% overhead in wall-clock training time and no additional inference cost, making the approach practical for real-world deployment. This study demonstrates that a simple task-structured curriculum can significantly improve the effectiveness of knowledge distillation. By separating and sequencing objectives, TSCL achieves a better balance between accuracy, stability, and explanation quality. The framework generalizes across domains, including medical NLI, and offers a principled recipe for future applications in multimodal reasoning and reinforcement learning.
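The three-phase schedule can be expressed as a simple loss-weighting function over training progress; the phase boundaries and the 50/50 joint split below are illustrative placeholders, not the tuned values from the paper:

```python
def tscl_weights(step, total, p1=0.3, p2=0.8):
    """Loss weights (prediction, explanation) for a three-phase curriculum.
    Phase boundaries p1/p2 and the joint 0.5/0.5 split are assumed values."""
    frac = step / total
    if frac < p1:            # phase (i): prediction-only
        return 1.0, 0.0
    if frac < p2:            # phase (ii): joint prediction-explanation
        return 0.5, 0.5
    return 0.0, 1.0          # phase (iii): explanation-only

# At any step, total loss = w_pred * L_pred + w_expl * L_expl
print([tscl_weights(s, 10) for s in (0, 5, 9)])
```

The training loop stays unchanged; only the two scalar weights vary, which is why the authors can describe the modification as requiring no architectural changes.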
Abstract: The latest digital advancements have intensified the need for adaptive, data-driven, and socially centered learning ecosystems. This paper presents the formulation of a cross-platform, innovative, gamified, and personalized Learning Ecosystem that integrates 3D/VR environments, machine learning algorithms, and business intelligence frameworks to enhance learner-centered education and informed decision-making. The Learning System makes use of immersive, analytically assessed virtual learning spaces, facilitating real-time monitoring not just of learning performance but also of overall engagement and behavioral patterns, via a comprehensive set of sustainability-oriented, ESG-aligned Key Performance Indicators (KPIs). Machine learning models support predictive analysis, personalized feedback, and hybrid recommendation mechanisms, while dedicated dashboards translate complex educational data into actionable insights for all use cases of the System (educational institutions, educators, and learners). Additionally, the Learning System introduces a structured Mentoring and Consulting Subsystem, reinforcing human-centered guidance alongside automated intelligence. The Platform's modular architecture and simulation-centered evaluation approach actively support personalized and continuously optimized learning pathways. It thus exemplifies a mature, adaptive Learning Ecosystem combining immersive technologies, analytics, and pedagogical support, contributing to contemporary digital learning innovation and sociotechnical transformation in education.
Funding: Supported by Chongqing Medical Scientific Research Project (Joint Project of Chongqing Health Commission and Science and Technology Bureau), No. 2023MSXM060.
Abstract: BACKGROUND: The accurate prediction of lymph node metastasis (LNM) is crucial for managing locally advanced (T3/T4) colorectal cancer (CRC). However, both traditional histopathology and standard slide-level deep learning often fail to capture the sparse and diagnostically critical features of metastatic potential. AIM: To develop and validate a case-level multiple-instance learning (MIL) framework that mimics a pathologist's comprehensive review and improves T3/T4 CRC LNM prediction. METHODS: Whole-slide images of 130 patients with T3/T4 CRC were retrospectively collected. A case-level MIL framework utilising the CONCH v1.5 and UNI2-h deep learning models was trained on features from all haematoxylin and eosin-stained primary tumour slides for each patient. These pathological features were subsequently integrated with clinical data, and model performance was evaluated using the area under the curve (AUC). RESULTS: The case-level framework demonstrated superior LNM prediction over slide-level training, with the CONCH v1.5 model achieving a mean AUC (±SD) of 0.899±0.033 vs 0.814±0.083, respectively. Integrating pathology features with clinical data further enhanced performance, yielding a top model with a mean AUC of 0.904±0.047, in sharp contrast to a clinical-only model (mean AUC 0.584±0.084). Crucially, a pathologist's review confirmed that the model-identified high-attention regions correspond to known high-risk histopathological features. CONCLUSION: A case-level MIL framework provides a superior approach for predicting LNM in advanced CRC. This method shows promise for risk stratification and therapy decisions, pending further validation.
Funding: Project 42077244 supported by the National Natural Science Foundation of China; Project 2020-05 supported by the Open Research Fund of the Guangdong Provincial Key Laboratory of Deep Earth Sciences and Geothermal Energy Exploitation and Utilization, China.
Abstract: Accurate prediction of rockburst intensity levels is crucial for ensuring the safety of deep hard rock engineering construction. This paper introduces an expert system for rockburst intensity level prediction that employs machine learning algorithms as the basis for its inference rules. The system comprises four modules: a database, a repository, an inference engine, and an interpreter. A database containing 1114 rockburst cases was used to construct 357 datasets that serve as the repository for the expert system. Additionally, 19 types of machine learning algorithms were used to establish 6783 micro-models to construct cognitive rules within the inference engine. By integrating probability theory and marginal analysis, a fuzzy scoring method based on the SoftMax function was developed and applied in the interpreter for rockburst intensity level prediction, effectively restoring the continuity of rockburst characteristics. The research results indicate that ensemble algorithms based on decision trees are more effective in capturing the characteristics of rockburst. Key factors for accurate prediction of rockburst intensity include uniaxial compressive strength, elastic energy index, the maximum principal stress, tangential stress, and their composite indicators. The accuracy of the proposed expert system was verified using 20 engineering rockburst cases, with predictions aligning closely with the actual rockburst intensity levels.
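The idea of restoring a continuous intensity scale from discrete class outputs can be sketched as a SoftMax over ensemble votes followed by a probability-weighted expectation. The vote counts are hypothetical and this is not the paper's exact scoring rule:

```python
from math import exp

def softmax(z):
    m = max(z)                      # shift for numerical stability
    e = [exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Hypothetical ensemble votes: counts of micro-models predicting each of the
# four rockburst intensity levels (I..IV) for one case.
votes = [2, 9, 7, 1]
probs = softmax(votes)

# Fuzzy score: probability-weighted expectation over levels 1..4 recovers a
# continuous intensity value between the discrete levels.
score = sum(p * (i + 1) for i, p in enumerate(probs))
print(round(score, 2))  # a value between level II and level III
```

A score of, say, 2.1 conveys "level II, leaning slightly toward III", which is the kind of continuity the interpreter module is described as restoring.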
Funding: State Key Program of the National Natural Science Foundation of China under grant no. U19B2016.
Abstract: Frequency hopping (FH) communication has good anti-fading, anti-jamming, and anti-eavesdropping capabilities, so it is one of the main ways to combat electronic jamming. To further improve the anti-jamming capability of FH communication, parameters such as the fixed frequency interval, hopping rate, and hopping frequency in conventional FH can be given time-varying characteristics. To set appropriate hopping parameters and improve system performance in an electromagnetic environment with various types of jamming, a heuristically accelerated Q-learning (HAQL) method is proposed in this paper. Firstly, a theoretical model for the parameter decision-making of the FH system is established, and the key parameters affecting the energy efficiency of the system are analyzed. Secondly, a Q-learning model for complex electromagnetic environments is proposed, which includes the definition of states, actions, and rewards, and a HAQL-based decision-making algorithm is put forward. Lastly, simulations are carried out under different jamming environments, and the results show that the average energy efficiency of the HAQL algorithm is higher than that of the SARSA algorithm, the ε-greedy QL algorithm, and the HQL-OSGM algorithm.
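Heuristic acceleration augments the greedy action choice with a prior-knowledge term H(a) while the Q-update itself is unchanged. The single-state toy below (a stationary jammer and four candidate parameter sets) is an illustrative reduction, not the paper's FH decision model:

```python
import random
random.seed(1)

# Heuristically accelerated Q-learning sketch (toy jamming model; the FH
# parameter sets, reward, and heuristic values are all assumptions).
N_ACTIONS = 4              # candidate hopping-parameter sets
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
ETA = 1.0                  # weight of the heuristic in action selection

Q = [0.0] * N_ACTIONS
H = [0.0, 0.0, 1.0, 0.0]   # heuristic: prior knowledge favouring action 2

def reward(a):
    return 1.0 if a == 2 else 0.0   # action 2 evades the jammer

def choose():
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    # the heuristic term steers early exploration toward the favoured action
    return max(range(N_ACTIONS), key=lambda a: Q[a] + ETA * H[a])

for _ in range(100):
    a = choose()
    Q[a] += ALPHA * (reward(a) + GAMMA * max(Q) - Q[a])

print(max(range(N_ACTIONS), key=Q.__getitem__))  # converges to action 2
```

The heuristic only biases exploration, so a wrong prior slows but does not prevent convergence, which is why HAQL can still learn under changing jamming.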
Funding: Supported by the National Key R&D Program of China (No. 2022YFC2404604); the Chongqing Research Institution Performance Incentive Guidance Special Project (No. CSTB2023JXJL-YFX0080); and the Chongqing Medical Scientific Research Project (Joint Project of Chongqing Health Commission and Science and Technology Bureau) (No. 2022DBXM005).
Abstract: This study aimed to integrate Monte Carlo (MC) simulation with deep learning (DL)-based denoising techniques to achieve fast and accurate prediction of high-quality electronic portal imaging device (EPID) transmission dose (TD) for patient-specific quality assurance (PSQA). A total of 100 lung cases were used to obtain noisy EPID TD with the ARCHER MC code under four particle numbers (1×10^(6), 1×10^(7), 1×10^(8), and 1×10^(9)), and the original EPID TD was denoised by the SUNet neural network. The denoised EPID TD was assessed both qualitatively and quantitatively using the structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and gamma passing rate (GPR), with the 1×10^(9) result as the reference. The computation times for both the MC simulation and the DL-based denoising were recorded. As the number of particles increased, both the quality of the noisy EPID TD and the computation time increased significantly (1×10^(6): 1.12 s, 1×10^(7): 1.72 s, 1×10^(8): 8.62 s, and 1×10^(9): 73.89 s). In contrast, the DL-based denoising time remained at 0.13-0.16 s. The denoised EPID TD shows a smoother visual appearance and profile curves, although differences between 1×10^(6) and 1×10^(9) still remain. SSIM improves from 0.61 to 0.95 for 1×10^(6), 0.70 to 0.96 for 1×10^(7), and 0.90 to 0.97 for 1×10^(8). PSNR increases by >20% for 1×10^(6) and 1×10^(7), and >10% for 1×10^(8). GPR improves from 48.47% to 89.10% for 1×10^(6), 61.04% to 94.35% for 1×10^(7), and 91.88% to 99.55% for 1×10^(8). This method, combining MC simulation with DL-based denoising for EPID TD generation, accelerates TD prediction while maintaining high accuracy, offering a promising solution for efficient PSQA.
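Of the three metrics, PSNR is the simplest to state: 10·log10(peak²/MSE) between a candidate dose map and the reference. The 1-D profiles below are hypothetical stand-ins for EPID dose maps, illustrating only that lower statistical noise (more MC particles) yields a higher PSNR:

```python
from math import log10

def psnr(ref, test):
    """Peak signal-to-noise ratio between two dose maps (flattened lists)."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    peak = max(ref)
    return 10 * log10(peak ** 2 / mse)

# Hypothetical 1-D dose profiles: a reference and two noisy versions; the
# second mimics a run with fewer MC particles (larger statistical noise).
ref        = [float(v) for v in range(1, 11)]
low_noise  = [v + 0.05 * ((-1) ** i) for i, v in enumerate(ref)]
high_noise = [v + 0.50 * ((-1) ** i) for i, v in enumerate(ref)]

print(psnr(ref, low_noise) > psnr(ref, high_noise))  # True
```

The DL denoiser's job, in these terms, is to push the PSNR of a cheap low-particle map toward that of the expensive 1×10^(9) reference.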
Funding: Supported by the Academic Research Projects of Beijing Union University (ZK20202204); the National Natural Science Foundation of China (12250005, 12073040, 12273059, 11973056, 12003051, 11573037, 12073041, 11427901, 11572005, 11611530679, and 12473052); the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB0560000, XDA15052200, XDB09040200, XDA15010700, XDB0560301, and XDA15320102); and the Chinese Meridian Project (CMP).
Abstract: The solar cycle (SC), a phenomenon caused by quasi-periodic regular activity in the Sun, occurs approximately every 11 years. Intense solar activity can disrupt the Earth's ionosphere, affecting communication and navigation systems. Consequently, accurately predicting the intensity of the SC holds great significance, but doing so involves a long-term time series, and many existing time-series forecasting methods fall short in accuracy and efficiency. The Time-series Dense Encoder model is a deep learning solution tailored for long time-series prediction. Based on a multi-layer perceptron structure, it outperforms the best previously existing models in accuracy while being efficiently trainable on general datasets. We propose a method based on this model for SC forecasting. Using a trained model, we predict the test set from SC 19 to SC 25 with an average mean absolute percentage error of 32.02, a root mean square error of 30.3, a mean absolute error of 23.32, and an R^(2) (coefficient of determination) of 0.76, outperforming other deep learning models in accuracy and training efficiency on sunspot number datasets. Subsequently, we use it to predict the peaks of SC 25 and SC 26. For SC 25, the peak period has ended, but a stronger peak of 199.3, within a range of 170.8-221.9, is predicted for SC 26, projected to occur during April 2034.
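The four reported error metrics are standard and easy to reproduce. The observed/predicted sunspot numbers below are hypothetical, not values from the paper's test set:

```python
from math import sqrt

def metrics(y_true, y_pred):
    """MAE, RMSE, MAPE (%), and R^2 for a forecast against observations."""
    n = len(y_true)
    err = [p - t for p, t in zip(y_pred, y_true)]
    mae  = sum(abs(e) for e in err) / n
    rmse = sqrt(sum(e * e for e in err) / n)
    mape = 100 * sum(abs(e) / abs(t) for e, t in zip(err, y_true)) / n
    mean = sum(y_true) / n
    r2 = 1 - sum(e * e for e in err) / sum((t - mean) ** 2 for t in y_true)
    return mae, rmse, mape, r2

# Toy monthly sunspot numbers (hypothetical, not the SILSO record)
obs  = [50.0, 80.0, 120.0, 100.0, 60.0]
pred = [55.0, 75.0, 110.0, 105.0, 65.0]
mae, rmse, mape, r2 = metrics(obs, pred)
print(round(mae, 2), round(rmse, 2), round(mape, 2), round(r2, 3))
```

Note that MAPE is scale-free while MAE/RMSE carry the units of the sunspot number, which is why the paper reports all four together.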
Abstract: Distribution transformers play a vital role in power distribution systems, and their reliable operation is crucial for grid stability. This study presents a simulation-based framework for active fault diagnosis and early warning of distribution transformers, integrating Sample Ensemble Learning (SEL) with a Self-Optimizing Support Vector Machine (SO-SVM). The SEL technique enhances data diversity and mitigates class imbalance, while SO-SVM adaptively tunes its hyperparameters to improve classification accuracy. A comprehensive transformer model was developed in MATLAB/Simulink to simulate diverse fault scenarios, including inter-turn winding faults, core saturation, and thermal aging. Feature vectors were extracted from voltage, current, and temperature measurements to train and validate the proposed hybrid model. Quantitative analysis shows that the SEL-SO-SVM framework achieves a classification accuracy of 97.8%, a precision of 96.5%, and an F1-score of 97.2%. Beyond classification, the model effectively identified incipient faults, providing an early-warning lead time of up to 2.5 s before significant deviations in operational parameters. This predictive capability underscores its potential for preventing catastrophic transformer failures and enabling timely maintenance actions. The proposed approach demonstrates strong applicability for enhancing the reliability and operational safety of distribution transformers in simulated environments, offering a promising foundation for future real-time and field-level implementations.
Funding: Supported by the Advanced Materials-National Science and Technology Major Project (Grant No. 2025ZD0618401); the National Natural Science Foundation of China (Grant No. 12504285); the Natural Science Foundation of Jiangsu Province (Grant No. BK20250472); and an NFSG grant from BITS-Pilani, Dubai campus.
Abstract: The rapid advancement of machine-learning-based tight-binding Hamiltonian (MLTB) methods has opened new avenues for efficient and accurate electronic structure simulations, particularly for large-scale systems and long-time scenarios. This review begins with a concise overview of traditional tight-binding (TB) models, including both (semi-)empirical and first-principles approaches, establishing the foundation for understanding MLTB developments. We then present a systematic classification of existing MLTB methodologies, grouped into two major categories: direct prediction of TB Hamiltonian elements and inference of empirical parameters. A comparative analysis with other ML-based electronic structure models is also provided, highlighting the advancement of MLTB approaches. Finally, we explore the emerging MLTB application ecosystem, highlighting how the integration of MLTB models with a diverse suite of post-processing tools, from linear-scaling solvers to quantum transport frameworks and molecular dynamics interfaces, is essential for tackling complex scientific problems across different domains. The continued advancement of this integrated paradigm promises to accelerate materials discovery and open new frontiers in the predictive simulation of complex quantum phenomena.
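As background for the TB models being parameterized, the simplest nearest-neighbour chain gives the textbook dispersion E(k) = ε + 2t·cos(ka); the onsite energy, hopping amplitude, and chain length below are generic placeholders, not parameters from any model in the review:

```python
from math import cos, pi

# Nearest-neighbour TB chain with periodic boundary conditions; Bloch's
# theorem gives band energies E(k) = eps + 2*t*cos(k*a).
eps, t, a, N = 0.0, -1.0, 1.0, 8   # onsite energy, hopping, spacing, sites

def band_energy(k):
    return eps + 2 * t * cos(k * a)

ks = [2 * pi * n / (N * a) for n in range(N)]   # allowed wavevectors
energies = sorted(band_energy(k) for k in ks)
print(round(min(energies), 6), round(max(energies), 6))  # band spans [-2|t|, 2|t|]
```

An MLTB method of the "parameter inference" category would, in effect, learn environment-dependent values of eps and t so that such bands reproduce first-principles references.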
Funding: Supported by the Basic Science Research Program (2023R1A2C3004336, RS-202300243807) and the Regional Leading Research Center (RS-202400405278) through the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT).
Abstract: Wearable sensors integrated with deep learning techniques have the potential to revolutionize seamless human-machine interfaces for real-time health monitoring, clinical diagnosis, and robotic applications. Nevertheless, it remains a critical challenge to simultaneously achieve desirable mechanical and electrical performance along with biocompatibility, adhesion, self-healing, and environmental robustness while maintaining excellent sensing metrics. Herein, we report a multifunctional, anti-freezing, self-adhesive, and self-healable organogel pressure sensor composed of cobalt-nanoparticle-encapsulated nitrogen-doped carbon nanotubes (CoN CNT) embedded in a polyvinyl alcohol-gelatin (PVA/GLE) matrix. Fabricated using a binary solvent system of water and ethylene glycol (EG), the CoN CNT/PVA/GLE organogel exhibits excellent flexibility, biocompatibility, and temperature tolerance with remarkable environmental stability. Electrochemical impedance spectroscopy confirms near-stable performance across a broad humidity range (40%-95% RH). Freeze-tolerant conductivity under sub-zero conditions (-20℃) is attributed to the synergistic role of CoN CNT and EG, preserving mobility and network integrity. The CoN CNT/PVA/GLE organogel sensor exhibits a high sensitivity of 5.75 kPa^(-1) in the detection range from 0 to 20 kPa, ideal for subtle biomechanical motion detection. A smart human-machine interface for English letter recognition using deep learning achieved 98% accuracy. The sensor's utility was further extended to detecting human gestures such as finger bending, wrist motion, and throat vibration during speech.
Funding: Supported by the National Key Research and Development Program of China (2024YFE0213000); the Postdoctoral Innovative Talents Support Program (BX20240232); the Natural Science Foundation of China for Young Scholars (72304031); and the Fundamental Research Funds for the Central Universities (FRF-TP-22-024A1).
Abstract: Lithium-ion batteries (LIBs) are widely deployed, from grid-scale storage to electric vehicles. LIBs remain stationary for most of their service life, during which calendar aging degrades capacity. Understanding the mechanisms of LIB calendar aging is crucial for extending battery lifespan. However, LIB calendar aging is influenced by multiple factors, including battery material, battery state, and storage environment. Calendar aging experiments are also time-consuming, costly, and lack standardized testing conditions. This study employs a data-driven approach to establish a cross-scale database linking materials, side-reaction mechanisms, and calendar aging of LIBs. MELODI (Mechanism-informed, Explainable, Learning-based Optimization for Degradation Identification) is proposed to identify calendar aging mechanisms and quantify the effects of multi-scale factors. Results reveal that cathode material loss drives up to 91.42% of calendar aging degradation in high-nickel (Ni) batteries, while solid electrolyte interphase growth dominates in lithium iron phosphate (LFP) and low-Ni batteries, contributing up to 82.43% of degradation in LFP batteries and 99.10% of decay in low-Ni batteries, respectively. This study systematically quantifies calendar aging in commercial LIBs under varying materials, states of charge, and temperatures. These findings offer quantitative guidance for experimental design and battery use, with implications for emerging applications such as aerial robotics, vehicle-to-grid, and embodied intelligence systems.
Funding: Funded by the National Natural Science Foundation of China, grant number 61605004; the Fundamental Research Funds for the Central Universities, grant number FRF-TP-19-016A2; and Guizhou Power Grid Co., Ltd., 2024 first batch of services (2024-2026 technology R&D services for science and technology projects, in addition to national and SGCC key projects), grant number 060100KC23100012.
Abstract: This study addresses the risk of privacy leakage during the transmission and sharing of multimodal data in smart grid substations by proposing a three-tier privacy-preserving architecture based on asynchronous federated learning. The framework integrates blockchain technology, the InterPlanetary File System (IPFS) for distributed storage, and a dynamic differential privacy mechanism to achieve collaborative security across the storage, service, and federated coordination layers. It accommodates both multimodal data classification and object detection tasks, enabling the identification and localization of key targets and abnormal behaviors in substation scenarios while ensuring privacy protection. This effectively mitigates the single-point failures and model leakage issues inherent in centralized architectures. A dynamically adjustable differential privacy mechanism is introduced to allocate privacy budgets according to client contribution levels and upload frequencies, achieving a personalized balance between model performance and privacy protection. Multi-dimensional experimental evaluations, including classification accuracy, F1-score, encryption latency, and aggregation latency, verify the security and efficiency of the proposed architecture. The improved CNN model achieves 72.34% accuracy and an F1-score of 0.72 in object detection and classification tasks on infrared surveillance imagery, effectively identifying typical risk events such as not wearing safety helmets and unauthorized intrusion, while maintaining an aggregation latency of only 1.58 s and a query latency of 80.79 ms. Compared with traditional static differential privacy and centralized approaches, the proposed method demonstrates significant advantages in accuracy, latency, and security, providing a new technical paradigm for efficient and secure data sharing, object detection, and privacy preservation in smart grid substations.
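A dynamic privacy-budget allocation of the kind described can be sketched as follows. The rule (contribution divided by upload frequency, normalised to a fixed total) and all numbers are assumptions for illustration, not the paper's mechanism:

```python
# Illustrative per-client privacy-budget allocation: budget grows with
# contribution and shrinks with upload frequency, since frequent uploaders
# accumulate privacy loss faster. All values are hypothetical.
TOTAL_EPS = 4.0   # overall epsilon budget distributed per round (assumed)
clients = {"camA": (0.9, 10),   # (contribution score, uploads per round)
           "camB": (0.5, 5),
           "camC": (0.2, 1)}

score = {c: contrib / freq for c, (contrib, freq) in clients.items()}
total = sum(score.values())
eps = {c: TOTAL_EPS * s / total for c, s in score.items()}
# With sensitivity 1, each client would then add Laplace noise of scale 1/eps[c].

print({c: round(e, 2) for c, e in eps.items()})
```

Normalising against the total keeps the aggregate budget fixed while shifting more of it to clients whose updates matter most per upload.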
Abstract: With the increasing complexity of vehicular networks and the proliferation of connected vehicles, Federated Learning (FL) has emerged as a critical framework for decentralized model training while preserving data privacy. However, efficient client selection and adaptive weight allocation in heterogeneous and non-IID environments remain challenging. To address these issues, we propose Federated Learning with Client Selection and Adaptive Weighting (FedCW), a novel algorithm that leverages adaptive client selection and dynamic weight allocation for optimizing model convergence in real-time vehicular networks. FedCW selects clients based on their Euclidean distance from the global model and dynamically adjusts aggregation weights to optimize both data diversity and model convergence. Experimental results show that FedCW significantly outperforms existing FL algorithms such as FedAvg, FedProx, and SCAFFOLD, particularly in non-IID settings, achieving faster convergence, higher accuracy, and reduced communication overhead. These findings demonstrate that FedCW provides an effective solution for enhancing the performance of FL in heterogeneous, edge-based computing environments.
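Distance-based selection is straightforward to sketch. The snippet below keeps the K clients whose updates lie closest to the global model and aggregates them with inverse-distance weights; the weighting rule and all vectors are illustrative assumptions, since the abstract does not specify FedCW's exact formula:

```python
from math import dist

# Hypothetical flattened parameter vectors for three vehicles and the
# current global model (illustration only, not FedCW's real update scheme).
global_model = [0.0, 0.0]
clients = {
    "veh_a": [0.1, -0.1],
    "veh_b": [0.2, 0.1],
    "veh_c": [3.0, 2.5],   # stale or divergent vehicle, excluded by selection
}

K = 2  # clients selected per round
selected = sorted(clients, key=lambda c: dist(clients[c], global_model))[:K]

# Adaptive weights: closer clients contribute more (inverse distance, normalised)
inv = {c: 1.0 / (dist(clients[c], global_model) + 1e-9) for c in selected}
total = sum(inv.values())
weights = {c: inv[c] / total for c in selected}

new_global = [sum(weights[c] * clients[c][i] for c in selected) for i in range(2)]
print(selected, [round(v, 3) for v in new_global])
```

Excluding the outlying vehicle keeps the aggregated model near the consensus of well-aligned clients, which is the intuition behind distance-based selection in non-IID settings.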
Funding: Supported by the Iran National Science Foundation (INSF) under project No. 4022382. Facilities were provided by the Condensed Matter National Laboratory at the Institute for Research in Fundamental Sciences (IPM) in Tehran, Iran. Additionally, financial support for equipment purchase was granted by the INSF under project number 4022382.
Abstract: The rational design of high-performance electrochemical energy storage devices critically depends on a fundamental understanding of ion-electrode interactions at the molecular scale. Herein, we employ interpretable machine learning (ML) to reveal electrolyte hydration energy as a universal descriptor governing ion-specific capacitance in two-dimensional (2D) materials. Through explainable ML, we elucidate how ion hydration shell stability and size critically influence charge transport and storage at the electrode-electrolyte interface. Our analysis identifies hydration energy, not ionic size, as the primary factor dictating capacitance, challenging prevailing assumptions and providing quantifiable design rules for electrolyte selection. These insights offer a data-driven pathway to optimize 2D materials for supercapacitors and beyond, including batteries and electrocatalytic systems. This work demonstrates the power of explainable artificial intelligence in uncovering molecular-level mechanisms that accelerate the discovery and development of next-generation energy storage technologies.
Funding: Funded by the National Key Research and Development Program of China (Grant No. 2019YFD1001900) and the HZAU-AGIS Cooperation Fund (Grant No. SZYJY2022006).
Abstract: Nondestructive phenotype measurement technology can provide substantial phenotypic data support for applications such as seedling breeding, management, and quality testing. Current methods of measuring seedling phenotypes rely mainly on manual measurement, which is inefficient, subjective, and destroys samples. Therefore, this paper proposes a nondestructive measurement method for the canopy phenotype of watermelon plug seedlings based on deep learning. An Azure Kinect was used to capture canopy color images, depth images, and RGB-D images of the watermelon plug seedlings. The Mask-RCNN network was used to classify, segment, and count the canopy leaves. To reduce the leaf area measurement error caused by mutual occlusion of leaves, the leaves were repaired by CycleGAN, and the depth images were restored by image processing. The Delaunay triangulation was then adopted to measure the leaf area in the leaf point cloud. The YOLOX target detection network was used to identify the growing point position of each seedling on the plug tray, and the depth differences between the growing point and the upper surface of the plug tray were calculated to obtain plant height. The experimental results show that the proposed nondestructive measurement algorithm achieves good performance for watermelon plug seedlings from the one-true-leaf to three-true-leaf stages. The average relative error of measurement is 2.33% for the number of true leaves, 4.59% for the number of cotyledons, 8.37% for the leaf area, and 3.27% for the plant height. These results demonstrate that the proposed algorithm provides an effective solution for the nondestructive measurement of the canopy phenotype of plug seedlings.
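The plant-height step described above, depth difference between the tray's upper surface and the detected growing point, can be sketched as below. The median-based tray estimate, the millimetre depth scale, and the function signature are assumptions for illustration:

```python
def plant_height(depth_map, growing_point, tray_pixels, depth_scale=0.001):
    """Estimate plant height (m) as the depth difference between the plug
    tray's upper surface and the detected growing point.
    depth_map      -- 2-D list of raw depth readings (e.g. millimetres)
    growing_point  -- (row, col) pixel returned by the detector
    tray_pixels    -- pixels sampled on the tray's upper surface
    The tray depth is taken as the median sample to resist outliers."""
    samples = sorted(depth_map[r][c] for r, c in tray_pixels)
    tray_depth = samples[len(samples) // 2]
    r, c = growing_point
    # The growing point sits closer to the camera than the tray surface,
    # so its raw depth reading is smaller.
    return (tray_depth - depth_map[r][c]) * depth_scale
```

With a top-mounted depth camera, a tray surface at 800 mm and a growing point at 750 mm would yield a plant height of 0.05 m.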
Funding: Funded by the Wuxi Young Scientific and Technological Talent Support Initiative (Project No. TJXD-2024-203) and the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (Grant No. 24KJB470027).
Abstract: Iterative Learning Control (ILC) provides an effective framework for optimizing repetitive tasks, making it particularly suitable for high-precision applications in both precision manufacturing and intelligent transportation systems (ITS). This paper presents a systematic review of ILC's developmental progress, current methodologies, and practical implementations across these two critical domains. The review first analyzes the key technical challenges encountered when integrating ILC into precision manufacturing workflows. Through case studies, it evaluates demonstrated improvements in positioning accuracy, surface finish quality, and production throughput. Furthermore, the study examines ILC's applications in ITS, with particular focus on vehicular motion control applications including autonomous vehicle trajectory tracking, platoon coordination, and traffic signal timing optimization, where its data-driven characteristics enhance adaptability to dynamic environments. Finally, the paper proposes targeted future research directions that are essential for fully realizing ILC's potential in advancing these interconnected yet distinct fields.
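The core ILC idea the review surveys, repeating a task and correcting the input profile trial-by-trial from the previous tracking error, can be illustrated with the classic P-type update u_{k+1}[t] = u_k[t] + L·e_k[t] on a toy first-order plant. The plant parameters and gain below are arbitrary choices for the sketch:

```python
def simulate(u, a=0.3, b=1.0):
    """First-order plant y[t] = a*y[t-1] + b*u[t] with zero initial state."""
    y, prev = [], 0.0
    for ut in u:
        prev = a * prev + b * ut
        y.append(prev)
    return y

def ilc_trial(u, y_ref, learning_gain=1.0):
    """One P-type ILC iteration: run the trial, then correct the input
    profile with the learning gain times the tracking error."""
    y = simulate(u)
    e = [r - yi for r, yi in zip(y_ref, y)]
    u_next = [ui + learning_gain * ei for ui, ei in zip(u, e)]
    return u_next, max(abs(ei) for ei in e)
```

With learning gain 1/b, the lifted trial-to-trial error map is nilpotent, so the tracking error over an N-sample task vanishes within N trials, a convergence property that makes ILC attractive for the repetitive motions of precision manufacturing.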
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (RS-2020-NR049579).
Abstract: High-dimensional data causes difficulties in machine learning due to high time consumption and large memory requirements. In a multi-label environment in particular, complexity grows in proportion to the number of labels. Moreover, an optimization problem that fully considers all dependencies between features and labels is difficult to solve. In this study, we propose a novel regression-based multi-label feature selection method that integrates mutual information to better exploit the underlying data structure. By incorporating mutual information into the regression formulation, the model captures not only linear relationships but also complex non-linear dependencies. The proposed objective function simultaneously considers three types of relationships: (1) feature redundancy, (2) feature-label relevance, and (3) inter-label dependency. These three quantities are computed using mutual information, allowing the proposed formulation to capture nonlinear dependencies among variables. These three types of relationships are key factors in multi-label feature selection, and our method expresses them within a unified formulation, enabling efficient optimization while simultaneously accounting for all of them. To efficiently solve the proposed optimization problem under non-negativity constraints, we develop a gradient-based optimization algorithm with fast convergence. The experimental results on seven multi-label datasets show that the proposed method outperforms existing multi-label feature selection techniques.
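The relevance/redundancy trade-off at the heart of such methods can be illustrated with a simple mRMR-style greedy scheme over discrete features. Note this greedy sketch is only an illustration of the mutual-information terms; the paper itself solves a joint regression objective (also covering inter-label dependency), and all names below are assumptions:

```python
import math
from collections import Counter

def mutual_info(x, y):
    """Mutual information (in nats) between two discrete sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def greedy_select(features, labels, k):
    """Greedily pick k features maximizing summed feature-label relevance
    minus average redundancy with the already-selected features."""
    selected, remaining = [], list(range(len(features)))
    while len(selected) < k and remaining:
        def score(j):
            rel = sum(mutual_info(features[j], y) for y in labels)
            red = (sum(mutual_info(features[j], features[s]) for s in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

Because mutual information is estimated from joint frequencies rather than correlations, the score rewards features that are non-linearly predictive of the labels, which is the motivation for using it in place of a purely linear regression criterion.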