Lead (Pb) is a typical low-melting-point ductile metal and serves as an important model material in the study of dynamic responses. Under shock-wave loading, its dynamic mechanical behavior comprises two key phenomena: plastic deformation and shock-induced phase transitions. The underlying mechanisms of these processes are still poorly understood. Revealing these mechanisms remains challenging for experimental approaches. Non-equilibrium molecular dynamics (NEMD) simulations are an alternative theoretical tool for studying dynamic responses, as they capture atomic-scale mechanisms such as defect evolution and deformation pathways. However, due to the limited accuracy of empirical interatomic potentials, the reliability of previous NEMD studies has been questioned. Using our newly developed machine learning potential for Pb-Sn alloys, we revisited the microstructural evolution in response to shock loading under various shock orientations. The results reveal that shock loading along the [001] orientation of Pb exhibits a fast, reversible, and massive phase transition and stacking-fault evolution. The behavior of Pb differs from previous studies by the absence of twinning during plastic deformation. Loading along the [011] orientation leads to slow, irreversible plastic deformation and a localized FCC-BCC phase transition in the Pitsch orientation relationship. This study provides crucial theoretical insights into the dynamic mechanical response of Pb, offering theoretical input for understanding the microstructure-performance relationship under extreme conditions.
With the recent increase in data volume and diversity, traditional text representation techniques are struggling to capture context, particularly in environments with sparse data. To address these challenges, this study proposes a new model, the Masked Joint Representation Model (MJRM). MJRM approximates the original hypothesis by leveraging multiple elements in a limited context. It dynamically adapts to changes in characteristics based on data distribution through three main components. First, masking-based representation learning, termed selective dynamic masking, integrates topic modeling and sentiment clustering to generate and train multiple instances across different data subsets, whose predictions are then aggregated with optimized weights. This design alleviates sparsity, suppresses noise, and preserves contextual structures. Second, regularization-based improvements are applied. Third, techniques for addressing sparse data are used to perform final inference. As a result, MJRM improves performance by up to 4% compared to existing AI techniques. In our experiments, we analyzed the contribution of each factor, demonstrating that masking, dynamic learning, and aggregating multiple instances complement each other to improve performance. This demonstrates that a masking-based multi-learning strategy is effective for context-aware sparse text classification and can be useful even in challenging situations such as data shortage or data distribution variations. We expect that the approach can be extended to diverse fields such as sentiment analysis, spam filtering, and domain-specific document classification.
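The aggregation step described above, multiple instances trained on different data subsets whose predictions are combined with optimized weights, can be sketched as a weighted ensemble. The class labels, probabilities, and weights below are hypothetical stand-ins for illustration, not MJRM's actual components:

```python
def aggregate_predictions(instance_probs, weights):
    """Combine per-instance class probabilities with normalized weights.

    instance_probs: one dict (class label -> probability) per trained instance;
    weights: one non-negative weight per instance (e.g., a validation score).
    Returns the winning label and the combined distribution.
    """
    total = sum(weights)
    norm = [w / total for w in weights]
    combined = {}
    for probs, w in zip(instance_probs, norm):
        for label, p in probs.items():
            combined[label] = combined.get(label, 0.0) + w * p
    return max(combined, key=combined.get), combined

# Three hypothetical instances trained on different data subsets.
probs = [
    {"pos": 0.7, "neg": 0.3},
    {"pos": 0.4, "neg": 0.6},
    {"pos": 0.8, "neg": 0.2},
]
weights = [0.9, 0.5, 0.8]  # e.g., per-instance validation accuracies
label, combined = aggregate_predictions(probs, weights)
```

Because the weights are normalized, the combined scores remain a valid probability distribution, so the instances trained on sparse subsets contribute in proportion to their estimated reliability.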
As the types of traffic requests increase, the elastic optical network (EON) is considered as a promising architecture to carry multiple types of traffic requests simultaneously, including immediate reservation (IR) and advance reservation (AR). Various resource allocation schemes for IR/AR requests have been designed in EON to reduce bandwidth blocking probability (BBP). However, these schemes do not consider different transmission requirements of IR requests and cannot maintain a low BBP for high-priority requests. In this paper, multi-priority is considered in the hybrid IR/AR request scenario. We modify the asynchronous advantage actor critic (A3C) model and propose an A3C-assisted priority resource allocation (APRA) algorithm. The APRA integrates priority and transmission quality of IR requests to design the A3C reward function, then dynamically allocates dedicated resources for different IR requests according to the time-varying requirements. By maximizing the reward, the transmission quality of IR requests can be matched with the priority, and lower BBP for high-priority IR requests can be ensured. Simulation results show that the APRA reduces the BBP of high-priority IR requests from 0.0341 to 0.0138, and the overall network operation gain is improved by 883 compared to the scheme without considering the priority.
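The paper's exact A3C reward function is not reproduced here, so the following is only an illustrative sketch of a reward that couples request priority with transmission quality and penalizes blocking; the coefficients `alpha` and `beta` and the 1-3 priority scale are assumptions:

```python
def priority_reward(priority, qot_ok, blocked, alpha=1.0, beta=2.0):
    """Illustrative per-request reward: favor serving high-priority IR
    requests whose quality-of-transmission (QoT) requirement is met, and
    penalize blocking in proportion to priority.

    priority: 1 (low) .. 3 (high); qot_ok: QoT requirement satisfied;
    blocked: request rejected.
    """
    if blocked:
        # Blocking a high-priority request is penalized the most.
        return -beta * priority
    # Serving with degraded QoT earns only half credit.
    return alpha * priority * (1.0 if qot_ok else 0.5)

r_served = priority_reward(priority=3, qot_ok=True, blocked=False)
r_blocked = priority_reward(priority=3, qot_ok=False, blocked=True)
```

Maximizing the accumulated reward under a shape like this pushes the agent to reserve resources so that high-priority IR requests are rarely blocked, which is the qualitative behavior the APRA results report.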
In dynamic and uncertain reconnaissance missions, effective task assignment and path planning for multiple unmanned aerial vehicles (UAVs) present significant challenges. A stochastic multi-UAV reconnaissance scheduling problem is formulated as a combinatorial optimization task with nonlinear objectives and coupled constraints. To solve the non-deterministic polynomial (NP)-hard problem efficiently, a novel learning-enhanced pigeon-inspired optimization (L-PIO) algorithm is proposed. The algorithm integrates a Q-learning mechanism to dynamically regulate control parameters, enabling adaptive exploration–exploitation trade-offs across different optimization phases. Additionally, geometric abstraction techniques are employed to approximate complex reconnaissance regions using maximum inscribed rectangles and spiral path models, allowing for precise cost modeling of UAV paths. The formal objective function is developed to minimize global flight distance and completion time while maximizing reconnaissance priority and task coverage. A series of simulation experiments are conducted under three scenarios: static task allocation, dynamic task emergence, and UAV failure recovery. Comparative analysis with several updated algorithms demonstrates that L-PIO exhibits superior robustness, adaptability, and computational efficiency. The results verify the algorithm's effectiveness in addressing dynamic reconnaissance task planning in real-time multi-UAV applications.
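The Q-learning mechanism that regulates L-PIO's control parameters can be illustrated with a plain tabular update. The states ("early"/"late" optimization phases) and actions ("explore"/"exploit" parameter settings) below are hypothetical placeholders, not the paper's actual design:

```python
def q_update(Q, state, action, reward, next_state, lr=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q[s][a] += lr * (r + gamma * max_a' Q[s'][a'] - Q[s][a])."""
    best_next = max(Q[next_state].values())
    Q[state][action] += lr * (reward + gamma * best_next - Q[state][action])

# States: coarse optimization phases; actions: candidate parameter regimes
# (e.g., a larger map factor for exploration vs. a smaller one for exploitation).
Q = {"early": {"explore": 0.0, "exploit": 0.0},
     "late":  {"explore": 0.0, "exploit": 0.0}}

# Suppose exploring early improved the best-so-far fitness (reward = 1.0).
q_update(Q, "early", "explore", reward=1.0, next_state="late")
```

After enough such updates, selecting the argmax action per phase yields the adaptive exploration–exploitation schedule the abstract describes.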
Wearable wristband systems leverage deep learning to revolutionize hand gesture recognition in daily activities. Unlike existing approaches that often focus on static gestures and require extensive labeled data, the proposed wearable wristband with self-supervised contrastive learning excels at dynamic motion tracking and adapts rapidly across multiple scenarios. It features a four-channel sensing array composed of an ionic hydrogel with hierarchical microcone structures and ultrathin flexible electrodes, resulting in high-sensitivity capacitance output. Through wireless transmission from a Wi-Fi module, the proposed algorithm learns latent features from the unlabeled signals of random wrist movements. Remarkably, only few-shot labeled data are sufficient for fine-tuning the model, enabling rapid adaptation to various tasks. The system achieves a high accuracy of 94.9% in different scenarios, including the prediction of eight-direction commands, and air-writing of all numbers and letters. The proposed method facilitates smooth transitions between multiple tasks without the need for modifying the structure or undergoing extensive task-specific training. Its utility has been further extended to enhance human–machine interaction over digital platforms, such as game controls, calculators, and three-language login systems, offering users a natural and intuitive way of communication.
The proliferation of deep learning (DL) has amplified the demand for processing large and complex datasets for tasks such as modeling, classification, and identification. However, traditional DL methods compromise client privacy by collecting sensitive data, underscoring the necessity for privacy-preserving solutions like Federated Learning (FL). FL effectively addresses escalating privacy concerns by facilitating collaborative model training without necessitating the sharing of raw data. Given that FL clients autonomously manage training data, encouraging client engagement is pivotal for successful model training. To overcome challenges like unreliable communication and budget constraints, we present ENTIRE, a contract-based dynamic participation incentive mechanism for FL. ENTIRE ensures impartial model training by tailoring participation levels and payments to accommodate diverse client preferences. Our approach involves several key steps. Initially, we examine how random client participation impacts FL convergence in non-convex scenarios, establishing the correlation between client participation levels and model performance. Subsequently, we reframe model performance optimization as an optimal contract design challenge to guide the distribution of rewards among clients with varying participation costs. By balancing budget considerations with model effectiveness, we craft optimal contracts for different budgetary constraints, prompting clients to disclose their participation preferences and select suitable contracts for contributing to model training. Finally, we conduct a comprehensive experimental evaluation of ENTIRE using three real datasets. The results demonstrate a significant 12.9% enhancement in model performance, validating its adherence to anticipated economic properties.
Breast cancer's heterogeneous progression demands innovative tools for accurate prediction. We present a hybrid framework that integrates machine learning (ML) and fractional-order dynamics to predict tumor growth across diagnostic and temporal scales. On the Wisconsin Diagnostic Breast Cancer dataset, seven ML algorithms were evaluated, with deep neural networks (DNNs) achieving the highest accuracy (97.72%). Key morphological features (area, radius, texture, and concavity) were identified as top malignancy predictors, aligning with clinical intuition. Beyond static classification, we developed a fractional-order dynamical model using Caputo derivatives to capture memory-driven tumor progression. The model revealed clinically interpretable patterns: lower fractional orders correlated with prolonged aggressive growth, while higher orders indicated rapid stabilization, mimicking indolent subtypes. Theoretical analyses were rigorously proven, and numerical simulations closely fit clinical data. The framework's clinical utility is demonstrated through an interactive graphical user interface (GUI) that integrates real-time risk assessment with growth trajectory simulations.
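A minimal numerical sketch of a Caputo-type fractional growth model can be built with an explicit Grünwald–Letnikov scheme for D^α y = r·y·(1 − y/K), which at α = 1 reduces to forward Euler. The parameter values below are illustrative assumptions, not values fitted to the paper's clinical data:

```python
def fractional_logistic(alpha, r, K, y0, h, n_steps):
    """Explicit Grunwald-Letnikov scheme for the fractional logistic model
    D^alpha y = r * y * (1 - y / K), applied to (y - y0) so the initial
    condition is honored; the memory sum encodes fractional-order history."""
    # GL binomial coefficients c_j = (-1)^j * C(alpha, j), via recurrence.
    c = [1.0]
    for j in range(1, n_steps + 1):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
    y = [y0]
    for n in range(1, n_steps + 1):
        f = r * y[-1] * (1.0 - y[-1] / K)
        memory = sum(c[j] * (y[n - j] - y0) for j in range(1, n + 1))
        y.append(y0 + h**alpha * f - memory)
    return y

# Illustrative parameters: fractional memory (alpha < 1) vs. classical (alpha = 1).
slow = fractional_logistic(alpha=0.7, r=0.4, K=1.0, y0=0.05, h=0.1, n_steps=200)
fast = fractional_logistic(alpha=1.0, r=0.4, K=1.0, y0=0.05, h=0.1, n_steps=200)
```

With α = 1 the coefficients collapse to c₁ = −1 and cⱼ = 0 for j ≥ 2, recovering the classical Euler step; α < 1 mixes the whole trajectory history into each step, which is the memory effect the abstract attributes to fractional orders.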
This paper investigates the capabilities of large language models (LLMs) to leverage, learn, and create knowledge in solving computational fluid dynamics (CFD) problems through three categories of baseline problems. These categories include (1) conventional CFD problems that can be solved using existing numerical methods in LLMs, such as lid-driven cavity flow and the Sod shock tube problem; (2) problems that require new numerical methods beyond those available in LLMs, such as the recently developed Chien-physics-informed neural networks for singularly perturbed convection-diffusion equations; and (3) problems that cannot be solved using existing numerical methods in LLMs, such as the ill-conditioned Hilbert linear algebraic systems. The evaluations indicate that reasoning LLMs overall outperform non-reasoning models in four test cases. Reasoning LLMs show excellent performance for CFD problems according to the tailored prompts, but their current capability in autonomous knowledge exploration and creation needs to be enhanced.
The kinetic properties of Mg alloy melts are crucial for determining the forming quality of castings, as they directly affect crystal nucleation and dendritic growth. However, accurately assessing the kinetic properties of molten Mg alloys remains challenging due to the difficulties in experimentally characterizing the high-temperature melts. Herein, we propose that molecular dynamics (MD) simulations driven by deep learning based interatomic potentials (DPs), referred to as DPMD, are a promising strategy to tackle this challenge. We develop MgAl-DP, MgSi-DP, MgCa-DP, and MgZn-DP to assess the kinetic properties of Mg-Al, Mg-Si, Mg-Ca, and Mg-Zn alloy melts. The reliability of our DPs is rigorously evaluated by comparing the DPMD results with those from ab initio MD (AIMD) simulations, as well as available experimental results. Our theoretically evaluated viscosity of Mg-Al melts shows excellent agreement with experimental results over a wide temperature range. Additionally, we found that the solute elements Ca and Zn exhibit sluggish kinetics in the studied melts, which supports the promising glass-forming ability of the Mg-Zn-Ca alloy system. The computational efficiency of DPMD simulations is several orders of magnitude higher than that of AIMD simulations, while maintaining ab initio-level accuracy. This makes DPMD a highly feasible protocol for building a comprehensive and reliable database of kinetic properties of Mg alloy melts.
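Viscosity from equilibrium MD trajectories is commonly extracted with the Green-Kubo relation, η = V/(k_B T) ∫₀^∞ ⟨P_xy(0)P_xy(t)⟩ dt; whether DPMD uses exactly this route is not stated here, so the sketch below applies the formula to a synthetic pressure-tensor series with a known correlation time:

```python
import math
import random

def green_kubo_viscosity(pxy, dt, volume, temperature, max_lag,
                         kb=1.380649e-23):
    """Green-Kubo shear viscosity from an off-diagonal pressure-tensor
    series (SI units): time-origin-averaged autocorrelation, then a
    trapezoidal integral truncated at max_lag steps."""
    n = len(pxy)
    acf = [sum(pxy[i] * pxy[i + lag] for i in range(n - lag)) / (n - lag)
           for lag in range(max_lag)]
    integral = sum(0.5 * (acf[k] + acf[k + 1]) * dt for k in range(max_lag - 1))
    return volume / (kb * temperature) * integral

# Synthetic stand-in for MD output: an AR(1) series whose autocorrelation
# decays as exp(-t/tau), so the exact integral is amp^2 * tau.
rng = random.Random(1)
dt, tau, amp = 1e-15, 20e-15, 1e6  # timestep [s], correlation time [s], [Pa]
a = math.exp(-dt / tau)
x, series = 0.0, []
for _ in range(20000):
    x = a * x + math.sqrt(1.0 - a * a) * rng.gauss(0.0, 1.0)
    series.append(amp * x)
eta = green_kubo_viscosity(series, dt, volume=1e-26, temperature=1000.0,
                           max_lag=400)
```

For this synthetic signal the exact answer is V·amp²·τ/(k_B T) ≈ 1.45 × 10⁻⁸ Pa·s, and the estimate should land within the usual Green-Kubo statistical scatter of that value.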
Three-dimensional (3D) visualization of dynamic biological processes in deep tissue remains challenging due to the trade-off between temporal resolution and imaging depth. Here, we present a novel near-infrared-II (NIR-II, 900–1880 nm) fluorescence volumetric microscopic imaging method that combines an electrically tunable lens (ETL) with deep learning approaches for rapid 3D imaging. The technology achieves volumetric imaging at 4.2 frames per second (fps) across a 200 μm depth range in live mouse brain vasculature. Two specialized neural networks are utilized: a scale-recurrent network (SRN) for image enhancement and a cerebral vessel interpolation (CVI) network that enables 16-fold axial upsampling. The SRN, trained on two-photon fluorescence microscopic data, improves both lateral and axial resolution of NIR-II fluorescence wide-field microscopic images. The CVI network, adapted from video interpolation techniques, generates intermediate frames between acquired axial planes, resulting in smooth and continuous 3D vessel reconstructions. Using this integrated system, we visualize and quantify blood flow dynamics in individual vessels and are capable of measuring blood velocity at different depths. This approach maintains high lateral resolution while achieving rapid volumetric imaging, and is particularly suitable for studying dynamic vascular processes in deep tissue. Our method demonstrates the potential of combining optical engineering with artificial intelligence to advance biological imaging capabilities.
The design and optimization of nonlinear fiber laser sources, such as soliton self-frequency shift (SSFS) tunable sources and supercontinuum (SC) sources, have traditionally relied on manual tuning and simulations, posing challenges for real-time applications. Machine learning has shown promise in fiber nonlinear propagation characterization, but the optimization and design of nonlinear systems remain relatively unexplored, especially under multitarget optimization conditions. In this paper, we propose a method that combines deep reinforcement learning (DRL) and a deep neural network (DNN) to achieve fast synchronization optimization of ultrafast pulse nonlinear propagation in optical fibers under multitarget optimization tasks, with applications demonstrated in complex SSFS and SC generation systems in the mid-infrared band. The results indicate that a set of optimization parameters can be obtained in a few seconds, enabling rapid, automated tuning of pulse parameters in pursuit of diverse optimization objectives. This integration of DRL and DNN models holds transformative potential for the real-time optimization of not only fiber lasers but also a wide variety of complex photonic systems, paving the way for intelligent, adaptive optical system design and operation.
This study introduces a fund recommendation system based on the ε-greedy algorithm and an incremental learning framework. The model simulates the interaction process when customers browse the web pages of fund products. Customers click on their preferred fund products when visiting a fund recommendation web page. The system collects customer click sequences to continually estimate and update their utility function. The system generates product lists using the ε-greedy algorithm: each product on the list is selected with probability 1−ε under the exploitation strategy and with probability ε under the exploration strategy. We perform a series of numerical tests to evaluate the estimation performance with different values of ε.
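The interaction loop described above can be sketched directly. The fund names, true click rates, and the incremental-mean update for the utility estimate below are illustrative assumptions, not the study's actual data or estimator:

```python
import random

def recommend(estimated_utility, epsilon, rng):
    """Pick one product: with probability 1 - epsilon exploit the current
    utility estimates; otherwise explore a uniformly random product."""
    products = list(estimated_utility)
    if rng.random() < epsilon:
        return rng.choice(products)                      # exploration
    return max(products, key=estimated_utility.get)      # exploitation

def update_utility(estimated_utility, counts, product, clicked):
    """Incrementally refresh the click-rate estimate from one interaction."""
    counts[product] += 1
    estimated_utility[product] += (clicked - estimated_utility[product]) / counts[product]

rng = random.Random(42)
utility = {"fund_a": 0.0, "fund_b": 0.0, "fund_c": 0.0}
counts = {p: 0 for p in utility}
true_click_rate = {"fund_a": 0.6, "fund_b": 0.3, "fund_c": 0.1}  # hypothetical
for _ in range(5000):
    p = recommend(utility, epsilon=0.1, rng=rng)
    clicked = 1.0 if rng.random() < true_click_rate[p] else 0.0
    update_utility(utility, counts, p, clicked)
best = max(utility, key=utility.get)
```

Smaller ε concentrates clicks on the current best estimate, while larger ε keeps refining the estimates of the other products, which is the trade-off the numerical tests on ε probe.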
This study discusses a machine-learning-driven methodology for optimizing the aerodynamic performance of both conventional aircraft configurations, like the common research model (CRM), and non-conventional ones, like the Bionica box-wing. The approach leverages advanced parameterization techniques, such as class and shape transformation (CST) and Bezier curves, to reduce design complexity while preserving flexibility. Computational fluid dynamics (CFD) simulations are performed to generate a comprehensive dataset, which is used to train an extreme gradient boosting (XGBoost) model for predicting aerodynamic performance. The optimization process, using the non-dominated sorting genetic algorithm (NSGA-II), results in a 12.3% reduction in drag for the CRM wing and an 18% improvement in the lift-to-drag ratio for the Bionica box-wing. These findings validate the efficacy of the machine-learning-based method in aerodynamic optimization, demonstrating significant efficiency gains across both configurations.
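The CST parameterization mentioned above has a standard form, y(ψ) = ψ^N1 (1−ψ)^N2 · Σᵢ Aᵢ Bᵢ(ψ), a class function times a Bernstein-polynomial shape function. A minimal sketch with hypothetical shape weights (not the study's actual design variables) is:

```python
import math

def cst_thickness(psi, shape_coeffs, n1=0.5, n2=1.0):
    """Class-shape transformation: y(psi) = C(psi) * S(psi).

    C = psi^n1 * (1 - psi)^n2 gives a round nose / sharp trailing edge
    (n1 = 0.5, n2 = 1.0); S is a Bernstein polynomial whose coefficients
    are the design variables."""
    n = len(shape_coeffs) - 1
    c = psi**n1 * (1.0 - psi)**n2
    s = sum(a * math.comb(n, i) * psi**i * (1.0 - psi)**(n - i)
            for i, a in enumerate(shape_coeffs))
    return c * s

coeffs = [0.17, 0.16, 0.14, 0.15]  # hypothetical design variables
profile = [cst_thickness(x / 50.0, coeffs) for x in range(51)]
```

A handful of Bernstein coefficients like these would be the genes NSGA-II mutates, which is how CST keeps the design space small while still spanning smooth airfoil-like shapes.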
The global rapid transition towards sustainable energy systems has heightened the demand for high-performance lithium metal batteries (LMBs), where understanding interfacial phenomena is paramount. In this contribution, we present an on-the-fly machine learning molecular dynamics (OTF-MLMD) approach to probe the complex side reactions at lithium metal anode–electrolyte interfaces with exceptional accuracy and computational efficiency. The machine learning force field (MLFF) was first validated in a bulk-phase system comprising twenty 1,2-dimethoxyethane (DME) molecules, demonstrating energy fluctuations and structural parameters in close agreement with ab initio molecular dynamics (AIMD) benchmarks. Subsequent simulations of lithium–DME and lithium–electrolyte interfaces revealed minimal discrepancies in energy, bond lengths, and net charge variations (notably in FSI⁻ species), underscoring the DFT-level precision of the approach. A further small-scale interfacial model enabled on-the-fly training over a mere 340 fs, which was then successfully transferred to a large-scale simulation encompassing nearly 300,000 atoms, representing the largest interfacial model in LMB research to date. The hierarchical validation strategy not only establishes the robustness of the MLFF in capturing both interfacial and bulk-phase chemistry but also paves the way for statistically meaningful simulations of battery interfaces. These findings highlight the transformative potential of OTF-MLMD in bridging the gap between atomistic accuracy and macroscopic modeling, affording a universal approach to understanding interfacial reactions in LMBs.
This paper presents a novel approach to dynamic pricing and distributed energy management in virtual power plant (VPP) networks using multi-agent reinforcement learning (MARL). As the energy landscape evolves towards greater decentralization and renewable integration, traditional optimization methods struggle to address the inherent complexities and uncertainties. Our proposed MARL framework enables adaptive, decentralized decision-making for both the distribution system operator and individual VPPs, optimizing economic efficiency while maintaining grid stability. We formulate the problem as a Markov decision process and develop a custom MARL algorithm that leverages actor-critic architectures and experience replay. Extensive simulations across diverse scenarios demonstrate that our approach consistently outperforms baseline methods, including Stackelberg game models and model predictive control, achieving an 18.73% reduction in costs and a 22.46% increase in VPP profits. The MARL framework shows particular strength in scenarios with high renewable energy penetration, where it improves system performance by 11.95% compared with traditional methods. Furthermore, our approach demonstrates superior adaptability to unexpected events and mis-predictions, highlighting its potential for real-world implementation.
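The experience-replay component of the actor-critic setup can be sketched as a fixed-capacity buffer with uniform sampling. The transition contents below are placeholders, not the paper's actual state or action spaces:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity experience replay: store (state, action, reward,
    next_state) transitions and sample uniform mini-batches, which
    decorrelates consecutive experiences during actor-critic updates."""

    def __init__(self, capacity, seed=0):
        self.buffer = deque(maxlen=capacity)  # oldest entries evicted first
        self.rng = random.Random(seed)

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        return self.rng.sample(list(self.buffer), batch_size)

buf = ReplayBuffer(capacity=100)
for t in range(250):  # more pushes than capacity: only the last 100 survive
    buf.push(("price_state", t), "bid", 0.01 * t, ("price_state", t + 1))
batch = buf.sample(32)
```

Bounding the buffer also keeps the agent training on recent market conditions, which matters in the non-stationary pricing environment the paper targets.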
Deep reinforcement learning (DRL) is broadly employed in the optimization of wireless video transmissions. Nevertheless, the instability of the deep reinforcement learning algorithm limits further improvement of the video transmission quality. The federated learning method, based on distributed data sets, was used to reduce network costs and increase the learning efficiency of the deep learning network model; it reduced excessive data transfer costs and broke down data silos. Intra-clustered dynamic federated deep reinforcement learning (IcD-FDRL) was constructed in clustered mobile edge-computing (CMEC) networks to promote video transmission quality through improved stability and efficiency of the DRL algorithm. The IcD-FDRL algorithm was then employed at the edge of CMEC networks for intelligent-edge video transmissions, which could satisfy the diversified needs of different users. The simulation analysis proved the effectiveness of IcD-FDRL in improving QoE, cache hit ratio, and training efficiency.
Conventional oncology faces challenges such as suboptimal drug delivery, tumor heterogeneity, and therapeutic resistance, indicating a need for more personalized, mechanistically grounded, and predictive treatment strategies. This review explores the convergence of Computational Fluid Dynamics (CFD) and Machine Learning (ML) as an integrated framework to address these issues in modern cancer therapy. The paper discusses recent advancements where CFD models simulate complex tumor microenvironmental conditions, like interstitial fluid pressure (IFP) and drug perfusion, and ML enhances simulation workflows, automates image-based segmentation, and improves predictive accuracy. The synergy between CFD and ML improves scalability and enables patient-specific treatment planning. Methodologically, it covers multi-scale modeling approaches, nanotherapeutic simulations, imaging integration, and emerging AI-driven frameworks. The paper identifies gaps in current applications, including the need for robust clinical validation, real-time model adaptability, and ethical data integration. Future directions suggest that CFD–ML hybrids could serve as digital twins for tumor evolution, offering insights for adaptive therapies. The review advocates for a computationally augmented oncology ecosystem that combines biological complexity with engineering precision for next-generation cancer care.
Background Cardiovascular disease (CVD) remains a major health challenge globally, particularly in aging populations. Using data from the China Health and Retirement Longitudinal Study (CHARLS), this study examines the triglyceride-glucose (TyG) index dynamics, a marker for insulin resistance, and its relationship with CVD in Chinese adults aged 45 and older. Methods This reanalysis utilized five waves of CHARLS data with multistage sampling. From 17,705 participants, 5,625 with TyG index and subsequent CVD data were included, excluding those lacking 2011 and 2015 TyG data. The TyG index was derived from glucose and triglyceride levels, and CVD outcomes were ascertained via self-reports and records. Participants were divided into four groups based on TyG changes (2011–2015): low-low, low-high, high-low, and high-high TyG groups. Results Adjusting for covariates, the stable-high group showed a significantly higher risk of incident CVD compared to the stable-low group, with an HR of 1.18 (95% CI: 1.03–1.36). Similarly, for stroke risk, the stable-high group had an HR of 1.45 (95% CI: 1.11–1.89). Survival curves indicated that individuals with stable high TyG levels had a significantly increased CVD risk compared to controls. The dynamic TyG change showed a greater risk for CVD than abnormal glucose metabolism, notably for stroke. However, there was no statistical difference in the single incidence risk of heart disease between the stable-low and stable-high groups. Subgroup analyses underscored demographic disparities, with the stable-high group consistently showing elevated risks, particularly among individuals under 65 years, females, and those with higher education, lower BMI, or higher depression scores. Machine learning models, including random forest, XGBoost, CoxBoost, DeepSurv, and GBM, underscored the predictive superiority of dynamic TyG over abnormal glucose metabolism for CVD. Conclusions Dynamic TyG changes correlate with CVD risks. Monitoring these changes could help predict and manage cardiovascular health in middle-aged and older adults. Targeted interventions based on TyG index trends are crucial for reducing CVD risks in this population.
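The TyG index itself has a standard definition, ln(fasting triglycerides [mg/dL] × fasting glucose [mg/dL] / 2). The cutoff used below to form the four trajectory groups is an assumed illustrative threshold, not the study's actual one:

```python
import math

def tyg_index(triglycerides_mg_dl, glucose_mg_dl):
    """TyG = ln(fasting triglycerides [mg/dL] * fasting glucose [mg/dL] / 2)."""
    return math.log(triglycerides_mg_dl * glucose_mg_dl / 2.0)

def tyg_group(tyg_2011, tyg_2015, cutoff):
    """Assign one of the four trajectory groups (low-low, low-high,
    high-low, high-high) from the two measurement waves."""
    first = "high" if tyg_2011 >= cutoff else "low"
    second = "high" if tyg_2015 >= cutoff else "low"
    return f"{first}-{second}"

t = tyg_index(150.0, 100.0)   # TG 150 mg/dL, glucose 100 mg/dL
# Hypothetical cutoff of 8.6 for illustration only.
group = tyg_group(tyg_index(150.0, 100.0), tyg_index(90.0, 85.0), cutoff=8.6)
```

Grouping on the 2011 and 2015 values jointly, rather than on a single measurement, is what lets the analysis separate stable-high trajectories from transient elevations.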
Recently, large-scale deep learning models have been increasingly adopted for point cloud classification. However, these methods typically require collecting extensive datasets from multiple clients, which may lead to privacy leaks. Federated learning provides an effective solution to data leakage by eliminating the need for data transmission, relying instead on the exchange of model parameters. However, the uneven distribution of client data can still affect the model's ability to generalize effectively. To address these challenges, we propose a new framework for point cloud classification called the Federated Dynamic Aggregation Selection Strategy-based Multi-Receptive Field Fusion Classification Framework (FDASS-MRFCF). Specifically, we tackle these challenges with two key innovations: (1) During the client local training phase, we propose a Multi-Receptive Field Fusion Classification Model (MRFCM), which captures local and global structures in point cloud data through dynamic convolution and multi-scale feature fusion, enhancing the robustness of point cloud classification. (2) In the server aggregation phase, we introduce a Federated Dynamic Aggregation Selection Strategy (FDASS), which employs a hybrid strategy to average client model parameters, skip aggregation, or reallocate local models to different clients, thereby balancing global consistency and local diversity. We evaluate our framework using the ModelNet40 and ShapeNetPart benchmarks, demonstrating its effectiveness. The proposed method is expected to significantly advance the field of point cloud classification in a secure environment.
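The server-side averaging that FDASS builds on (before its skip or reallocate decisions) is the standard data-size-weighted FedAvg step, which can be sketched as:

```python
def fed_avg(client_params, client_sizes):
    """Aggregate per-client parameter vectors by a data-size-weighted
    average: the plain FedAvg step that dynamic strategies extend.

    client_params: one flat parameter list per client;
    client_sizes: number of local training samples per client."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
            for i in range(dim)]

# Toy round: three clients, two parameters each; the third client holds
# twice as much data and therefore pulls the average toward its values.
params = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]
global_params = fed_avg(params, sizes)
```

Only these parameter vectors cross the network, never the raw point clouds, which is the privacy property the abstract relies on; FDASS then decides per round whether to apply this average, skip it, or swap models between clients.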
As the core determinant of lithium-ion battery performance, electrode materials play a crucial role in defining the battery's capacity, cycling stability, and durability. During charging and discharging, electrode materials undergo complex ion intercalation and deintercalation processes, accompanied by defect formation and structural evolution. However, the microscopic mechanisms underlying processes such as cation disordering, lattice oxygen loss, and stage structure formation are still not fully understood. To address these challenges, we have developed the Electrode Dynamic Ion Intercalation/Deintercalation Simulator (EDIS), a software platform designed to simulate the dynamic processes of ion intercalation and deintercalation in electrode materials. Leveraging high-precision machine learning potentials, EDIS can efficiently model structural evolution and lithium-ion diffusion behavior under various states of charge and discharge, achieving accuracy approaching that of quantum mechanical methods in relevant chemical spaces. The software supports quantitative analysis of how variations in lithium-ion concentration and distribution affect lithium-ion transport properties, enables evaluation of the impact of structural defects, and allows for tracking of both structural evolution and transport characteristics during continuous cycling. EDIS is versatile and can be extended to sodium-ion batteries and related systems. By enabling in-depth analysis of these microscopic processes, EDIS provides a robust theoretical tool for mechanistic studies and the rational design of high-performance electrode materials for next-generation lithium-ion batteries.
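A lithium-ion diffusion coefficient of the kind such simulations report is conventionally obtained from the mean-squared displacement via the Einstein relation, D = MSD(t)/(6t) in 3D; whether EDIS uses exactly this estimator is not stated here. The sketch below recovers a known D from a synthetic random walk, not EDIS output:

```python
import random

def msd(trajectory, lag):
    """Mean-squared displacement at a given lag, averaged over time
    origins; trajectory is a list of (x, y, z) positions."""
    disp = []
    for i in range(len(trajectory) - lag):
        d2 = sum((b - a) ** 2 for a, b in zip(trajectory[i], trajectory[i + lag]))
        disp.append(d2)
    return sum(disp) / len(disp)

def diffusion_coefficient(trajectory, lag, dt):
    """Einstein relation in 3D: D = MSD(t) / (6 t)."""
    return msd(trajectory, lag) / (6.0 * lag * dt)

# Synthetic 3D random walk: per-axis step variance sigma^2 per timestep,
# so the exact diffusion coefficient is D = sigma^2 / (2 * dt).
rng = random.Random(7)
sigma, dt = 0.1, 1.0
pos, traj = [0.0, 0.0, 0.0], []
for _ in range(20000):
    pos = [x + rng.gauss(0.0, sigma) for x in pos]
    traj.append(tuple(pos))
d_est = diffusion_coefficient(traj, lag=100, dt=dt)
```

In practice the MSD of each mobile Li ion would be averaged over ions as well as time origins, and the lag chosen in the linear diffusive regime; the same two functions apply unchanged.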
Funding: Supported by the National Key R&D Program of China (Grant No. 2022YFA1004300) and the National Natural Science Foundation of China (Grant No. 12404004).
Abstract: Lead (Pb) is a typical low-melting-point ductile metal and serves as an important model material in the study of dynamic responses. Under shock-wave loading, its dynamic mechanical behavior comprises two key phenomena: plastic deformation and shock-induced phase transitions. The underlying mechanisms of these processes are still poorly understood, and revealing them remains challenging for experimental approaches. Non-equilibrium molecular dynamics (NEMD) simulations are an alternative theoretical tool for studying dynamic responses, as they capture atomic-scale mechanisms such as defect evolution and deformation pathways. However, owing to the limited accuracy of empirical interatomic potentials, the reliability of previous NEMD studies has been questioned. Using our newly developed machine learning potential for Pb-Sn alloys, we revisited the microstructural evolution under shock loading along various orientations. The results reveal that shock loading along the [001] orientation of Pb produces a fast, reversible, and massive phase transition together with stacking-fault evolution; unlike previous reports, no twinning occurs during plastic deformation. Loading along the [011] orientation leads to slow, irreversible plastic deformation and a localized FCC-BCC phase transition following the Pitsch orientation relationship. This study provides crucial theoretical insights into the dynamic mechanical response of Pb, offering theoretical input for understanding the microstructure-performance relationship under extreme conditions.
Funding: Supported by SungKyunKwan University and the BK21 FOUR (Graduate School Innovation) funded by the Ministry of Education (MOE, Korea) and the National Research Foundation of Korea (NRF).
Abstract: With the recent increase in data volume and diversity, traditional text representation techniques are struggling to capture context, particularly in environments with sparse data. To address these challenges, this study proposes a new model, the Masked Joint Representation Model (MJRM). MJRM approximates the original hypothesis by leveraging multiple elements in a limited context. It dynamically adapts to changes in characteristics based on data distribution through three main components. First, masking-based representation learning, termed selective dynamic masking, integrates topic modeling and sentiment clustering to generate and train multiple instances across different data subsets, whose predictions are then aggregated with optimized weights. This design alleviates sparsity, suppresses noise, and preserves contextual structures. Second, regularization-based improvements are applied. Third, techniques for addressing sparse data are used to perform final inference. As a result, MJRM improves performance by up to 4% compared to existing AI techniques. In our experiments, we analyzed the contribution of each factor, demonstrating that masking, dynamic learning, and aggregating multiple instances complement each other to improve performance. This shows that a masking-based multi-learning strategy is effective for context-aware sparse text classification and can be useful even in challenging situations such as data shortage or shifts in data distribution. We expect that the approach can be extended to diverse fields such as sentiment analysis, spam filtering, and domain-specific document classification.
Abstract: As the types of traffic requests increase, the elastic optical network (EON) is considered a promising architecture for carrying multiple types of traffic requests simultaneously, including immediate reservation (IR) and advance reservation (AR). Various resource allocation schemes for IR/AR requests have been designed in EON to reduce bandwidth blocking probability (BBP). However, these schemes do not consider the different transmission requirements of IR requests and cannot maintain a low BBP for high-priority requests. In this paper, multi-priority is considered in the hybrid IR/AR request scenario. We modify the asynchronous advantage actor critic (A3C) model and propose an A3C-assisted priority resource allocation (APRA) algorithm. APRA integrates the priority and transmission quality of IR requests to design the A3C reward function, then dynamically allocates dedicated resources for different IR requests according to their time-varying requirements. By maximizing the reward, the transmission quality of IR requests can be matched with their priority, and a lower BBP for high-priority IR requests can be ensured. Simulation results show that APRA reduces the BBP of high-priority IR requests from 0.0341 to 0.0138, and the overall network operation gain is improved by 883 compared to the scheme that does not consider priority.
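For context on the headline metric, the per-priority BBP figures the abstract reports can be computed as blocked bandwidth over requested bandwidth within each priority class. A minimal sketch (the tuple layout and demo data are illustrative, not from the paper):

```python
from collections import defaultdict

def per_priority_bbp(requests):
    """Bandwidth blocking probability per priority class.

    `requests` is an iterable of (priority, bandwidth, was_blocked) tuples;
    BBP for a class = blocked bandwidth / total requested bandwidth.
    """
    requested = defaultdict(float)
    blocked = defaultdict(float)
    for priority, bandwidth, was_blocked in requests:
        requested[priority] += bandwidth
        if was_blocked:
            blocked[priority] += bandwidth
    return {p: blocked[p] / requested[p] for p in requested}

demo = [(1, 10, False), (1, 10, False), (1, 10, True),  # high priority
        (2, 10, True), (2, 10, False)]                  # low priority
print(per_priority_bbp(demo))  # priority 1 -> 1/3, priority 2 -> 0.5
```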
Funding: Supported by the National Natural Science Foundation of China (Nos. T2121003 and U24B20156) and the Open Fund of the National Key Laboratory of Helicopter Aeromechanics (No. 2024-ZSJ-LB-02-06).
Abstract: In dynamic and uncertain reconnaissance missions, effective task assignment and path planning for multiple unmanned aerial vehicles (UAVs) present significant challenges. A stochastic multi-UAV reconnaissance scheduling problem is formulated as a combinatorial optimization task with nonlinear objectives and coupled constraints. To solve this non-deterministic polynomial (NP)-hard problem efficiently, a novel learning-enhanced pigeon-inspired optimization (L-PIO) algorithm is proposed. The algorithm integrates a Q-learning mechanism to dynamically regulate control parameters, enabling adaptive exploration–exploitation trade-offs across different optimization phases. Additionally, geometric abstraction techniques are employed to approximate complex reconnaissance regions using maximum inscribed rectangles and spiral path models, allowing for precise cost modeling of UAV paths. The formal objective function is developed to minimize global flight distance and completion time while maximizing reconnaissance priority and task coverage. A series of simulation experiments is conducted under three scenarios: static task allocation, dynamic task emergence, and UAV failure recovery. Comparative analysis with several recent algorithms demonstrates that L-PIO exhibits superior robustness, adaptability, and computational efficiency. The results verify the algorithm's effectiveness in addressing dynamic reconnaissance task planning in real-time multi-UAV applications.
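The Q-learning mechanism mentioned above reduces, at its core, to the standard tabular update rule. A hedged sketch of one such step (the state/action encoding below — optimization phases and parameter settings — is illustrative, not the paper's actual design):

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q[s][a]

# Assumed encoding: states = optimization phases, actions = control-parameter choices.
Q = {"early": {"high_explore": 0.0, "low_explore": 0.0},
     "late":  {"high_explore": 0.0, "low_explore": 0.0}}
v = q_update(Q, "early", "high_explore", r=1.0, s_next="late")
print(v)  # 0.1
```

In L-PIO's setting, the reward would reflect improvement in the objective (flight distance, completion time, coverage), steering the parameter choice toward exploration early and exploitation late.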
Funding: Supported by the Research Grant Fund from Kwangwoon University in 2023, the National Natural Science Foundation of China under Grant 62311540155, the Taishan Scholars Project Special Funds (tsqn202312035), and the open research foundation of the State Key Laboratory of Integrated Chips and Systems.
Abstract: Wearable wristband systems leverage deep learning to revolutionize hand gesture recognition in daily activities. Unlike existing approaches that often focus on static gestures and require extensive labeled data, the proposed wearable wristband with self-supervised contrastive learning excels at dynamic motion tracking and adapts rapidly across multiple scenarios. It features a four-channel sensing array composed of an ionic hydrogel with hierarchical microcone structures and ultrathin flexible electrodes, resulting in high-sensitivity capacitance output. Through wireless transmission from a Wi-Fi module, the proposed algorithm learns latent features from the unlabeled signals of random wrist movements. Remarkably, only few-shot labeled data are sufficient for fine-tuning the model, enabling rapid adaptation to various tasks. The system achieves a high accuracy of 94.9% in different scenarios, including the prediction of eight-direction commands and air-writing of all numbers and letters. The proposed method facilitates smooth transitions between multiple tasks without the need to modify the structure or undergo extensive task-specific training. Its utility has been further extended to enhance human–machine interaction over digital platforms, such as game controls, calculators, and three-language login systems, offering users a natural and intuitive way of communication.
Funding: Supported by the National Natural Science Foundation of China (Nos. 62072411, 62372343, 62402352, and 62403500), the Key Research and Development Program of Hubei Province (No. 2023BEB024), and the Open Fund of the Key Laboratory of Social Computing and Cognitive Intelligence (Dalian University of Technology), Ministry of Education (No. SCCI2024TB02).
Abstract: The proliferation of deep learning (DL) has amplified the demand for processing large and complex datasets for tasks such as modeling, classification, and identification. However, traditional DL methods compromise client privacy by collecting sensitive data, underscoring the necessity for privacy-preserving solutions like federated learning (FL). FL effectively addresses escalating privacy concerns by facilitating collaborative model training without necessitating the sharing of raw data. Given that FL clients autonomously manage training data, encouraging client engagement is pivotal for successful model training. To overcome challenges like unreliable communication and budget constraints, we present ENTIRE, a contract-based dynamic participation incentive mechanism for FL. ENTIRE ensures impartial model training by tailoring participation levels and payments to accommodate diverse client preferences. Our approach involves several key steps. Initially, we examine how random client participation impacts FL convergence in non-convex scenarios, establishing the correlation between client participation levels and model performance. Subsequently, we reframe model performance optimization as an optimal contract design challenge to guide the distribution of rewards among clients with varying participation costs. By balancing budget considerations with model effectiveness, we craft optimal contracts for different budgetary constraints, prompting clients to disclose their participation preferences and select suitable contracts for contributing to model training. Finally, we conduct a comprehensive experimental evaluation of ENTIRE using three real datasets. The results demonstrate a significant 12.9% enhancement in model performance, validating its adherence to anticipated economic properties.
Abstract: Breast cancer's heterogeneous progression demands innovative tools for accurate prediction. We present a hybrid framework that integrates machine learning (ML) and fractional-order dynamics to predict tumor growth across diagnostic and temporal scales. On the Wisconsin Diagnostic Breast Cancer dataset, seven ML algorithms were evaluated, with deep neural networks (DNNs) achieving the highest accuracy (97.72%). Key morphological features (area, radius, texture, and concavity) were identified as top malignancy predictors, aligning with clinical intuition. Beyond static classification, we developed a fractional-order dynamical model using Caputo derivatives to capture memory-driven tumor progression. The model revealed clinically interpretable patterns: lower fractional orders correlated with prolonged aggressive growth, while higher orders indicated rapid stabilization, mimicking indolent subtypes. Theoretical analyses were rigorously proven, and numerical simulations closely fit clinical data. The framework's clinical utility is demonstrated through an interactive graphical user interface (GUI) that integrates real-time risk assessment with growth trajectory simulations.
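For readers unfamiliar with the operator involved, the Caputo derivative and a generic fractional growth model of the kind the abstract describes can be written as follows (the logistic form and the parameters r, K are a common illustrative choice, not necessarily the paper's exact model):

```latex
% Caputo fractional derivative of order 0 < \alpha < 1
{}^{C}D_t^{\alpha} f(t) \;=\; \frac{1}{\Gamma(1-\alpha)}
    \int_0^t \frac{f'(\tau)}{(t-\tau)^{\alpha}}\, d\tau

% A generic fractional logistic tumor-growth model: the memory kernel
% (t-\tau)^{-\alpha} weights the entire growth history, so lower \alpha
% means stronger memory effects -- consistent with the abstract's reading
% of lower orders as prolonged aggressive growth.
{}^{C}D_t^{\alpha} V(t) \;=\; r\, V(t)\left(1 - \frac{V(t)}{K}\right)
```

At α = 1 the Caputo derivative reduces to the ordinary derivative and the model collapses to classical logistic growth.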
Funding: Supported by the National Natural Science Foundation of China Basic Science Center Program for "Multiscale Problems in Nonlinear Mechanics" (Grant No. 11988102) and the National Natural Science Foundation of China (Grant No. 12202451).
Abstract: This paper investigates the capabilities of large language models (LLMs) to leverage, learn, and create knowledge in solving computational fluid dynamics (CFD) problems through three categories of baseline problems. These categories include (1) conventional CFD problems that can be solved using existing numerical methods in LLMs, such as lid-driven cavity flow and the Sod shock tube problem; (2) problems that require new numerical methods beyond those available in LLMs, such as the recently developed Chien-physics-informed neural networks for singularly perturbed convection-diffusion equations; and (3) problems that cannot be solved using existing numerical methods in LLMs, such as ill-conditioned Hilbert linear algebraic systems. The evaluations indicate that reasoning LLMs overall outperform non-reasoning models in four test cases. Reasoning LLMs show excellent performance for CFD problems given tailored prompts, but their current capability in autonomous knowledge exploration and creation needs to be enhanced.
Funding: Financial support from the National Natural Science Foundation of China (Nos. 52222409 and 52074132) and the National Key Research and Development Program (No. 2022YFE0122000); partial financial support comes from the Science and Technology Development Program of Jilin Province (No. 20210301025GX) and the Fundamental Research Funds for the Central Universities, JLU.
Abstract: The kinetic properties of Mg alloy melts are crucial for determining the forming quality of castings, as they directly affect crystal nucleation and dendritic growth. However, accurately assessing the kinetic properties of molten Mg alloys remains challenging due to the difficulties in experimentally characterizing the high-temperature melts. Herein, we propose that molecular dynamics (MD) simulations driven by deep-learning-based interatomic potentials (DPs), referred to as DPMD, are a promising strategy to tackle this challenge. We develop MgAl-DP, MgSi-DP, MgCa-DP, and MgZn-DP to assess the kinetic properties of Mg-Al, Mg-Si, Mg-Ca, and Mg-Zn alloy melts. The reliability of our DPs is rigorously evaluated by comparing the DPMD results with those from ab initio MD (AIMD) simulations, as well as available experimental results. Our theoretically evaluated viscosity of Mg-Al melts shows excellent agreement with experimental results over a wide temperature range. Additionally, we found that the solute elements Ca and Zn exhibit sluggish kinetics in the studied melts, which supports the promising glass-forming ability of the Mg-Zn-Ca alloy system. The computational efficiency of DPMD simulations is several orders of magnitude higher than that of AIMD simulations, while maintaining ab initio-level accuracy. This makes DPMD a highly feasible protocol for building a comprehensive and reliable database of the kinetic properties of Mg alloy melts.
Funding: Supported by the National Key R&D Program of China (No. 2024YFF1206700), the National Natural Science Foundation of China (No. U23A20487), and the Hangzhou Chengxi Sci-tech Innovation Corridor Management Committee.
Abstract: Three-dimensional (3D) visualization of dynamic biological processes in deep tissue remains challenging due to the trade-off between temporal resolution and imaging depth. Here, we present a novel near-infrared-II (NIR-II, 900–1880 nm) fluorescence volumetric microscopic imaging method that combines an electrically tunable lens (ETL) with deep learning approaches for rapid 3D imaging. The technology achieves volumetric imaging at 4.2 frames per second (fps) across a 200 μm depth range in live mouse brain vasculature. Two specialized neural networks are utilized: a scale-recurrent network (SRN) for image enhancement and a cerebral vessel interpolation (CVI) network that enables 16-fold axial upsampling. The SRN, trained on two-photon fluorescence microscopic data, improves both lateral and axial resolution of NIR-II fluorescence wide-field microscopic images. The CVI network, adapted from video interpolation techniques, generates intermediate frames between acquired axial planes, resulting in smooth and continuous 3D vessel reconstructions. Using this integrated system, we visualize and quantify blood flow dynamics in individual vessels and are capable of measuring blood velocity at different depths. This approach maintains high lateral resolution while achieving rapid volumetric imaging, and is particularly suitable for studying dynamic vascular processes in deep tissue. Our method demonstrates the potential of combining optical engineering with artificial intelligence to advance biological imaging capabilities.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62575051), the Aeronautical Science Foundation of China (Grant No. 2023M038080001), the Equipment Pre-research Joint Fund of the Ministry of Education (Grant No. 8091B042228), and the Science and Technology Project of Sichuan Province (Grant Nos. 2023NSFSC1964 and 203NSFSC0033).
Abstract: The design and optimization of nonlinear fiber laser sources, such as soliton self-frequency shift (SSFS) tunable sources and supercontinuum (SC) sources, have traditionally relied on manual tuning and simulations, posing challenges for real-time applications. Machine learning has shown promise in characterizing fiber nonlinear propagation, but the optimization and design of nonlinear systems remain relatively unexplored, especially under multi-target optimization conditions. In this paper, we propose a method that combines deep reinforcement learning (DRL) and a deep neural network (DNN) to achieve fast synchronized optimization of ultrafast pulse nonlinear propagation in optical fibers under multi-target optimization tasks, with applications demonstrated in complex SSFS and SC generation systems in the mid-infrared band. The results indicate that a set of optimization parameters can be obtained in a few seconds, enabling rapid, automated tuning of pulse parameters in pursuit of diverse optimization objectives. This integration of DRL and DNN models holds transformative potential for the real-time optimization of not only fiber lasers but also a wide variety of complex photonic systems, paving the way for intelligent, adaptive optical system design and operation.
Funding: This research was supported by the National Key R&D Program of China under No. 2022YFA1004000 and the National Natural Science Foundation of China under Nos. 11991023 and 12371324.
Abstract: This study introduces a fund recommendation system based on the ε-greedy algorithm and an incremental learning framework. The model simulates the interaction process as customers browse the web pages of fund products. Customers click on their preferred fund products when visiting a fund recommendation web page, and the system collects these click sequences to continually estimate and update their utility function. The system generates product lists using the ε-greedy algorithm, in which each slot on the list is filled by exploitation with probability 1−ε and by exploration with probability ε. We perform a series of numerical tests to evaluate the estimation performance with different values of ε.
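The list-generation step described in the abstract can be sketched as follows — a minimal ε-greedy slate builder over current utility estimates (the fund names and utility values are hypothetical, and the paper's incremental utility-update step is omitted):

```python
import random

def eps_greedy_list(utilities, k, eps=0.1):
    """Build a k-item recommendation list: each slot exploits the current
    utility estimates with probability 1-eps and explores uniformly at
    random with probability eps."""
    ranked = sorted(utilities, key=utilities.get, reverse=True)
    chosen = []
    for _ in range(k):
        pool = [f for f in utilities if f not in chosen]
        if random.random() < eps:
            pick = random.choice(pool)                          # exploration
        else:
            pick = next(f for f in ranked if f not in chosen)   # exploitation
        chosen.append(pick)
    return chosen

random.seed(0)
est = {"fund_a": 0.9, "fund_b": 0.7, "fund_c": 0.4, "fund_d": 0.2}
print(eps_greedy_list(est, k=3, eps=0.1))
```

With ε = 0 the list is purely greedy (the top-k funds by estimated utility); raising ε injects random slots that gather the clicks needed to refine the utility estimates.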
Abstract: This study discusses a machine-learning-driven methodology for optimizing the aerodynamic performance of both conventional aircraft configurations, like the common research model (CRM), and non-conventional ones, like the Bionica box-wing. The approach leverages advanced parameterization techniques, such as class and shape transformation (CST) and Bezier curves, to reduce design complexity while preserving flexibility. Computational fluid dynamics (CFD) simulations are performed to generate a comprehensive dataset, which is used to train an extreme gradient boosting (XGBoost) model for predicting aerodynamic performance. The optimization process, using the non-dominated sorting genetic algorithm (NSGA-II), results in a 12.3% reduction in drag for the CRM wing and an 18% improvement in the lift-to-drag ratio for the Bionica box-wing. These findings validate the efficacy of machine-learning-based methods in aerodynamic optimization, demonstrating significant efficiency gains across both configurations.
Funding: Supported by the National Key Research and Development Program (2021YFB2500300), the National Natural Science Foundation of China (T2322015, 92472101, 22393903, 22393900, and 52394170), the Beijing Municipal Natural Science Foundation (L247015 and L233004), and the Tsinghua University Initiative Scientific Research Program.
Abstract: The rapid global transition towards sustainable energy systems has heightened the demand for high-performance lithium metal batteries (LMBs), where understanding interfacial phenomena is paramount. In this contribution, we present an on-the-fly machine learning molecular dynamics (OTF-MLMD) approach to probe the complex side reactions at lithium metal anode–electrolyte interfaces with exceptional accuracy and computational efficiency. The machine learning force field (MLFF) was first validated in a bulk-phase system comprising twenty 1,2-dimethoxyethane (DME) molecules, demonstrating energy fluctuations and structural parameters in close agreement with ab initio molecular dynamics (AIMD) benchmarks. Subsequent simulations of lithium–DME and lithium–electrolyte interfaces revealed minimal discrepancies in energy, bond lengths, and net charge variations (notably in FSI- species), underscoring the DFT-level precision of the approach. A further small-scale interfacial model enabled on-the-fly training over a mere 340 fs, which was then successfully transferred to a large-scale simulation encompassing nearly 300,000 atoms, representing the largest interfacial model in LMB research to date. The hierarchical validation strategy not only establishes the robustness of the MLFF in capturing both interfacial and bulk-phase chemistry but also paves the way for statistically meaningful simulations of battery interfaces. These findings highlight the transformative potential of OTF-MLMD in bridging the gap between atomistic accuracy and macroscopic modeling, affording a universal approach to understanding interfacial reactions in LMBs.
Funding: Supported by the Science and Technology Project of State Grid Sichuan Electric Power Company Chengdu Power Supply Company under Grant No. 521904240005.
Abstract: This paper presents a novel approach to dynamic pricing and distributed energy management in virtual power plant (VPP) networks using multi-agent reinforcement learning (MARL). As the energy landscape evolves towards greater decentralization and renewable integration, traditional optimization methods struggle to address the inherent complexities and uncertainties. Our proposed MARL framework enables adaptive, decentralized decision-making for both the distribution system operator and individual VPPs, optimizing economic efficiency while maintaining grid stability. We formulate the problem as a Markov decision process and develop a custom MARL algorithm that leverages actor-critic architectures and experience replay. Extensive simulations across diverse scenarios demonstrate that our approach consistently outperforms baseline methods, including Stackelberg game models and model predictive control, achieving an 18.73% reduction in costs and a 22.46% increase in VPP profits. The MARL framework shows particular strength in scenarios with high renewable energy penetration, where it improves system performance by 11.95% compared with traditional methods. Furthermore, our approach demonstrates superior adaptability to unexpected events and mis-predictions, highlighting its potential for real-world implementation.
Funding: Supported by the Start-up Project of Doctoral Research at Jiangxi University of Water Resources and Electric Power (No. 2024kyqd062), the Key Project of Science and Technology Research of the Jiangxi Provincial Education Department (No. GJJ180251), and the National Natural Science Foundation of China (No. 61961021).
Abstract: Deep reinforcement learning (DRL) is broadly employed in the optimization of wireless video transmission. Nevertheless, the instability of DRL algorithms limits further improvement of video transmission quality. A federated learning method based on distributed datasets was used to reduce network costs and increase the learning efficiency of the deep learning network model; it reduced excessive data transfer costs and broke down data silos. Intra-clustered dynamic federated deep reinforcement learning (IcD-FDRL) was constructed in clustered mobile edge-computing (CMEC) networks to promote video transmission quality through improved stability and efficiency of the DRL algorithm. The IcD-FDRL algorithm was then deployed at the edge of CMEC networks for intelligent-edge video transmission, satisfying the diversified needs of different users. Simulation analysis confirmed the effectiveness of IcD-FDRL in improving QoE, cache hit ratio, and training.
Funding: Supported by the Ministry of Higher Education Malaysia through the Fundamental Research Grant Scheme [FRGS/1/2023/TK04/USM/03/1].
Abstract: Conventional oncology faces challenges such as suboptimal drug delivery, tumor heterogeneity, and therapeutic resistance, indicating a need for more personalized, mechanistically grounded, and predictive treatment strategies. This review explores the convergence of computational fluid dynamics (CFD) and machine learning (ML) as an integrated framework to address these issues in modern cancer therapy. The paper discusses recent advancements in which CFD models simulate complex tumor microenvironmental conditions, such as interstitial fluid pressure (IFP) and drug perfusion, while ML enhances simulation workflows, automates image-based segmentation, and improves predictive accuracy. The synergy between CFD and ML improves scalability and enables patient-specific treatment planning. Methodologically, it covers multi-scale modeling approaches, nanotherapeutic simulations, imaging integration, and emerging AI-driven frameworks. The paper identifies gaps in current applications, including the need for robust clinical validation, real-time model adaptability, and ethical data integration. Future directions suggest that CFD–ML hybrids could serve as digital twins for tumor evolution, offering insights for adaptive therapies. The review advocates for a computationally augmented oncology ecosystem that combines biological complexity with engineering precision for next-generation cancer care.
Funding: The National Natural Science Foundation of China (Grant No. 82070434, LYQ).
Abstract: Background: Cardiovascular disease (CVD) remains a major health challenge globally, particularly in aging populations. Using data from the China Health and Retirement Longitudinal Study (CHARLS), this study examines the dynamics of the triglyceride-glucose (TyG) index, a marker of insulin resistance, and its relationship with CVD in Chinese adults aged 45 and older. Methods: This reanalysis utilized five waves of CHARLS data with multistage sampling. From 17,705 participants, 5,625 with TyG index and subsequent CVD data were included, excluding those lacking 2011 and 2015 TyG data. The TyG index was derived from glucose and triglyceride levels, and CVD outcomes were obtained via self-reports and records. Participants were divided into four groups based on TyG changes (2011–2015): low-low, low-high, high-low, and high-high. Results: After adjusting for covariates, the stable high group showed a significantly higher risk of incident CVD compared to the stable low group, with an HR of 1.18 (95% CI: 1.03–1.36). Similarly, for stroke risk, the stable high group had an HR of 1.45 (95% CI: 1.11–1.89). Survival curves indicated that individuals with stable high TyG levels had a significantly increased CVD risk compared to controls. Dynamic TyG change conferred a greater risk for CVD than abnormal glucose metabolism, notably for stroke. However, there was no statistical difference in the risk of incident heart disease alone between the stable low and stable high groups. Subgroup analyses underscored demographic disparities, with the stable high group consistently showing elevated risks, particularly among individuals under 65, females, and those with higher education, lower BMI, or higher depression scores. Machine learning models, including random forest, XGBoost, CoxBoost, DeepSurv, and GBM, underscored the predictive superiority of dynamic TyG over abnormal glucose metabolism for CVD. Conclusions: Dynamic TyG changes correlate with CVD risks. Monitoring these changes could help predict and manage cardiovascular health in middle-aged and older adults. Targeted interventions based on TyG index trends are crucial for reducing CVD risks in this population.
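The index and grouping described above can be made concrete. The TyG index is conventionally defined as ln(fasting triglycerides [mg/dL] × fasting glucose [mg/dL] / 2); a minimal sketch of the computation and the four-group assignment (the cutoff value here is an assumed threshold for illustration — the study's exact cutoff is not given in the abstract):

```python
import math

def tyg_index(triglycerides_mg_dl, glucose_mg_dl):
    """TyG = ln(fasting triglycerides [mg/dL] * fasting glucose [mg/dL] / 2)."""
    return math.log(triglycerides_mg_dl * glucose_mg_dl / 2)

def tyg_change_group(tyg_2011, tyg_2015, cutoff):
    """Assign one of the four change groups from the study design:
    low-low, low-high, high-low, high-high."""
    label = lambda v: "low" if v < cutoff else "high"
    return f"{label(tyg_2011)}-{label(tyg_2015)}"

t = tyg_index(150, 100)  # ln(7500), roughly 8.92
print(t, tyg_change_group(8.2, 9.1, cutoff=8.7))  # -> ... low-high
```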
Funding: Supported in part by the National Key Research and Development Program of China under Grant 2021YFB3101100; in part by the National Natural Science Foundation of China under Grants 42461057, 62272123, and 42371470; in part by the Fundamental Research Program of Shanxi Province under Grant 202303021212164; and in part by the Postgraduate Education Innovation Program of Shanxi Province under Grant 2024KY474.
Abstract: Recently, large-scale deep learning models have been increasingly adopted for point cloud classification. However, these methods typically require collecting extensive datasets from multiple clients, which may lead to privacy leaks. Federated learning provides an effective solution to data leakage by eliminating the need for data transmission, relying instead on the exchange of model parameters. However, the uneven distribution of client data can still affect the model's ability to generalize effectively. To address these challenges, we propose a new framework for point cloud classification called the Federated Dynamic Aggregation Selection Strategy-based Multi-Receptive Field Fusion Classification Framework (FDASS-MRFCF). Specifically, we tackle these challenges with two key innovations: (1) During the client local training phase, we propose a Multi-Receptive Field Fusion Classification Model (MRFCM), which captures local and global structures in point cloud data through dynamic convolution and multi-scale feature fusion, enhancing the robustness of point cloud classification. (2) In the server aggregation phase, we introduce a Federated Dynamic Aggregation Selection Strategy (FDASS), which employs a hybrid strategy to average client model parameters, skip aggregation, or reallocate local models to different clients, thereby balancing global consistency and local diversity. We evaluate our framework on the ModelNet40 and ShapeNetPart benchmarks, demonstrating its effectiveness. The proposed method is expected to significantly advance the field of point cloud classification in a secure environment.
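The parameter-averaging branch of the server's hybrid strategy is, at its simplest, a FedAvg-style weighted mean of client weights. A minimal sketch (flat Python lists stand in for real model tensors; the skip-aggregation and reallocation branches of FDASS are not shown):

```python
def weighted_average(client_params, client_sizes):
    """FedAvg-style aggregation: average each named parameter across clients,
    weighted by local dataset size."""
    total = sum(client_sizes)
    keys = client_params[0].keys()
    return {k: [sum(params[k][i] * n / total
                    for params, n in zip(client_params, client_sizes))
                for i in range(len(client_params[0][k]))]
            for k in keys}

# Two hypothetical clients with one parameter vector "w" each.
a = {"w": [1.0, 2.0]}
b = {"w": [3.0, 4.0]}
avg = weighted_average([a, b], client_sizes=[1, 3])
print(avg["w"])  # [2.5, 3.5]
```

Weighting by dataset size keeps the global model from being dominated by small clients, which matters under the uneven data distributions the abstract highlights.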
Funding: Supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB1040300) and the National Natural Science Foundation of China (Grant No. 52172258).
Abstract: As the core determinant of lithium-ion battery performance, electrode materials play a crucial role in defining the battery's capacity, cycling stability, and durability. During charging and discharging, electrode materials undergo complex ion intercalation and deintercalation processes, accompanied by defect formation and structural evolution. However, the microscopic mechanisms underlying processes such as cation disordering, lattice oxygen loss, and stage structure formation are still not fully understood. To address these challenges, we have developed the Electrode Dynamic Ion Intercalation/Deintercalation Simulator (EDIS), a software platform designed to simulate the dynamic processes of ion intercalation and deintercalation in electrode materials. Leveraging high-precision machine learning potentials, EDIS can efficiently model structural evolution and lithium-ion diffusion behavior under various states of charge and discharge, achieving accuracy approaching that of quantum mechanical methods in the relevant chemical spaces. The software supports quantitative analysis of how variations in lithium-ion concentration and distribution affect lithium-ion transport properties, enables evaluation of the impact of structural defects, and allows tracking of both structural evolution and transport characteristics during continuous cycling. EDIS is versatile and can be extended to sodium-ion batteries and related systems. By enabling in-depth analysis of these microscopic processes, EDIS provides a robust theoretical tool for mechanistic studies and the rational design of high-performance electrode materials for next-generation lithium-ion batteries.