Funding: Supported by the Research Grant Fund from Kwangwoon University in 2023, the National Natural Science Foundation of China under Grant 62311540155, the Taishan Scholars Project Special Funds (tsqn202312035), and the open research foundation of the State Key Laboratory of Integrated Chips and Systems.
Abstract: Wearable wristband systems leverage deep learning to revolutionize hand gesture recognition in daily activities. Unlike existing approaches that often focus on static gestures and require extensive labeled data, the proposed wearable wristband with self-supervised contrastive learning excels at dynamic motion tracking and adapts rapidly across multiple scenarios. It features a four-channel sensing array composed of an ionic hydrogel with hierarchical microcone structures and ultrathin flexible electrodes, resulting in high-sensitivity capacitance output. Through wireless transmission from a Wi-Fi module, the proposed algorithm learns latent features from the unlabeled signals of random wrist movements. Remarkably, only few-shot labeled data are needed to fine-tune the model, enabling rapid adaptation to various tasks. The system achieves a high accuracy of 94.9% in different scenarios, including the prediction of eight-direction commands and air-writing of all numbers and letters. The proposed method facilitates smooth transitions between multiple tasks without modifying the model structure or undergoing extensive task-specific training. Its utility has been further extended to enhance human–machine interaction on digital platforms, such as game controls, calculators, and three-language login systems, offering users a natural and intuitive means of communication.
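To make the self-supervised stage above concrete, the following is a minimal sketch (not the authors' code) of contrastive pretraining on unlabeled multi-channel capacitance windows; the encoder architecture, jitter augmentation, and temperature are illustrative assumptions, with a few-shot classifier head attached afterwards for fine-tuning.

```python
# Illustrative sketch of the contrastive pretraining idea: two augmented views of the
# same unlabeled 4-channel capacitance window are pulled together in latent space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, channels=4, latent=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, latent))
    def forward(self, x):                     # x: (batch, 4, time)
        return F.normalize(self.net(x), dim=1)

def nt_xent(z1, z2, tau=0.1):
    """SimCLR-style loss: matching views are positives, all other samples negatives."""
    B = z1.shape[0]
    z = torch.cat([z1, z2], dim=0)                                  # (2B, latent)
    sim = z @ z.t() / tau
    sim = sim.masked_fill(torch.eye(2 * B, dtype=torch.bool), float('-inf'))
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(B)])  # index of positive
    return F.cross_entropy(sim, targets)

encoder = Encoder()
x = torch.randn(16, 4, 256)                   # unlabeled wrist-motion windows
view1 = x + 0.05 * torch.randn_like(x)        # toy jitter augmentation
view2 = x + 0.05 * torch.randn_like(x)
loss = nt_xent(encoder(view1), encoder(view2))
loss.backward()                               # one pretraining step on unlabeled data
# Few-shot fine-tuning would then attach nn.Linear(64, n_classes) on top of the encoder.
```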
Funding: Supported by the National Natural Science Foundation of China (Nos. 62072411, 62372343, 62402352, 62403500), the Key Research and Development Program of Hubei Province (No. 2023BEB024), and the Open Fund of the Key Laboratory of Social Computing and Cognitive Intelligence (Dalian University of Technology), Ministry of Education (No. SCCI2024TB02).
Abstract: The proliferation of deep learning (DL) has amplified the demand for processing large and complex datasets for tasks such as modeling, classification, and identification. However, traditional DL methods compromise client privacy by collecting sensitive data, underscoring the necessity for privacy-preserving solutions like Federated Learning (FL). FL effectively addresses escalating privacy concerns by facilitating collaborative model training without necessitating the sharing of raw data. Given that FL clients autonomously manage training data, encouraging client engagement is pivotal for successful model training. To overcome challenges like unreliable communication and budget constraints, we present ENTIRE, a contract-based dynamic participation incentive mechanism for FL. ENTIRE ensures impartial model training by tailoring participation levels and payments to accommodate diverse client preferences. Our approach involves several key steps. Initially, we examine how random client participation impacts FL convergence in non-convex scenarios, establishing the correlation between client participation levels and model performance. Subsequently, we reframe model performance optimization as an optimal contract design challenge to guide the distribution of rewards among clients with varying participation costs. By balancing budget considerations with model effectiveness, we craft optimal contracts for different budgetary constraints, prompting clients to disclose their participation preferences and select suitable contracts for contributing to model training. Finally, we conduct a comprehensive experimental evaluation of ENTIRE on three real datasets. The results demonstrate a significant 12.9% enhancement in model performance, validating its adherence to the anticipated economic properties.
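The contract-selection step can be illustrated with a toy menu; the numbers below are hypothetical and only show the incentive-compatibility idea, not ENTIRE's derived contracts: each client type maximizes its own utility and thereby reveals its participation preference.

```python
# Hypothetical contract menu: (participation_level, payment) items published by the
# server. With linear client utility payment - cost * level, this particular menu
# is self-selecting: each cost type prefers a different item.
contracts = [(0.2, 1.8), (0.5, 3.4), (0.9, 4.3)]

def best_contract(unit_cost):
    """A rational client picks the item maximizing payment - cost * level."""
    return max(contracts, key=lambda c: c[1] - unit_cost * c[0])

for unit_cost in (2.0, 5.0, 9.0):          # low-, mid-, and high-cost client types
    level, pay = best_contract(unit_cost)
    print(f"cost={unit_cost}: picks level={level}, payment={pay}")
# Output: the low-cost type takes (0.9, 4.3), the mid-cost type (0.5, 3.4),
# and the high-cost type (0.2, 1.8), i.e., types separate as contract theory predicts.
```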
Funding: Supported by the National Natural Science Foundation of China Basic Science Center Program for "Multiscale Problems in Nonlinear Mechanics" (Grant No. 11988102) and the National Natural Science Foundation of China (Grant No. 12202451).
Abstract: This paper investigates the capabilities of large language models (LLMs) to leverage, learn, and create knowledge in solving computational fluid dynamics (CFD) problems through three categories of baseline problems. These categories include (1) conventional CFD problems that can be solved using existing numerical methods known to LLMs, such as lid-driven cavity flow and the Sod shock tube problem; (2) problems that require new numerical methods beyond those available in LLMs, such as the recently developed Chien-physics-informed neural networks for singularly perturbed convection-diffusion equations; and (3) problems that cannot be solved using existing numerical methods in LLMs, such as ill-conditioned Hilbert linear algebraic systems. The evaluations indicate that reasoning LLMs overall outperform non-reasoning models in four test cases. Reasoning LLMs show excellent performance on CFD problems when given tailored prompts, but their current capability for autonomous knowledge exploration and creation needs to be enhanced.
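Category (3) is easy to reproduce numerically; the following snippet shows why Hilbert systems defeat naive solvers: the condition number grows so fast that double-precision solutions lose essentially all accuracy.

```python
# The Hilbert matrix H[i, j] = 1 / (i + j + 1) is notoriously ill-conditioned,
# so even exact-arithmetic-looking solves of H x = b degrade rapidly with n.
import numpy as np

for n in (5, 8, 12):
    H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
    x_true = np.ones(n)
    b = H @ x_true                      # right-hand side with a known solution
    x = np.linalg.solve(H, b)
    print(f"n={n}: cond(H)={np.linalg.cond(H):.2e}, "
          f"max error={np.max(np.abs(x - x_true)):.2e}")
```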
Funding: Financial support from the National Natural Science Foundation of China (Nos. 52222409 and 52074132) and the National Key Research and Development Program (No. 2022YFE0122000); partial financial support from the Science and Technology Development Program of Jilin Province (No. 20210301025GX) and the Fundamental Research Funds for the Central Universities, JLU.
Abstract: The kinetic properties of Mg alloy melts are crucial for determining the forming quality of castings, as they directly affect crystal nucleation and dendritic growth. However, accurately assessing the kinetic properties of molten Mg alloys remains challenging due to the difficulties in experimentally characterizing the high-temperature melts. Herein, we propose that molecular dynamics (MD) simulations driven by deep-learning-based interatomic potentials (DPs), referred to as DPMD, are a promising strategy to tackle this challenge. We develop MgAl-DP, MgSi-DP, MgCa-DP, and MgZn-DP to assess the kinetic properties of Mg-Al, Mg-Si, Mg-Ca, and Mg-Zn alloy melts. The reliability of our DPs is rigorously evaluated by comparing the DPMD results with those from ab initio MD (AIMD) simulations, as well as available experimental results. Our theoretically evaluated viscosity of Mg-Al melts shows excellent agreement with experimental results over a wide temperature range. Additionally, we find that the solute elements Ca and Zn exhibit sluggish kinetics in the studied melts, which supports the promising glass-forming ability of the Mg-Zn-Ca alloy system. The computational efficiency of DPMD simulations is several orders of magnitude higher than that of AIMD simulations, while maintaining ab initio-level accuracy. This makes DPMD a highly feasible protocol for building a comprehensive and reliable database of kinetic properties of Mg alloy melts.
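As one concrete example of a kinetic property extracted from such trajectories, the sketch below computes a self-diffusion coefficient via the Einstein relation; this is a standard post-processing route and an assumption here, since the abstract does not state the authors' exact estimators.

```python
# Self-diffusion from an MD trajectory via the Einstein relation,
# D = lim_{t->inf} MSD(t) / (6 t) in three dimensions.
import numpy as np

def self_diffusion(positions, dt_ps, fit_start=0.5):
    """positions: (n_frames, n_atoms, 3) unwrapped coordinates in Angstrom."""
    n_frames = positions.shape[0]
    disp = positions - positions[0]               # displacement from the first frame
    msd = (disp ** 2).sum(axis=2).mean(axis=1)    # MSD(t), averaged over atoms
    t = np.arange(n_frames) * dt_ps
    lo = int(fit_start * n_frames)                # skip the early ballistic regime
    slope = np.polyfit(t[lo:], msd[lo:], 1)[0]    # fit the linear diffusive regime
    return slope / 6.0                            # D in Angstrom^2 / ps

# Toy usage with a random-walk trajectory standing in for DPMD output:
traj = np.cumsum(np.random.normal(0, 0.1, size=(2000, 64, 3)), axis=0)
print(f"D = {self_diffusion(traj, dt_ps=0.001):.3f} A^2/ps")
```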
Funding: Supported by the National Key R&D Program of China (No. 2024YFF1206700), the National Natural Science Foundation of China (No. U23A20487), and the Hangzhou Chengxi Sci-tech Innovation Corridor Management Committee.
Abstract: Three-dimensional (3D) visualization of dynamic biological processes in deep tissue remains challenging due to the trade-off between temporal resolution and imaging depth. Here, we present a novel near-infrared-II (NIR-II, 900–1880 nm) fluorescence volumetric microscopic imaging method that combines an electrically tunable lens (ETL) with deep learning approaches for rapid 3D imaging. The technology achieves volumetric imaging at 4.2 frames per second (fps) across a 200 μm depth range in live mouse brain vasculature. Two specialized neural networks are utilized: a scale-recurrent network (SRN) for image enhancement and a cerebral vessel interpolation (CVI) network that enables 16-fold axial upsampling. The SRN, trained on two-photon fluorescence microscopic data, improves both the lateral and axial resolution of NIR-II fluorescence wide-field microscopic images. The CVI network, adapted from video interpolation techniques, generates intermediate frames between acquired axial planes, resulting in smooth and continuous 3D vessel reconstructions. Using this integrated system, we visualize and quantify blood flow dynamics in individual vessels and measure blood velocity at different depths. This approach maintains high lateral resolution while achieving rapid volumetric imaging, and is particularly suitable for studying dynamic vascular processes in deep tissue. Our method demonstrates the potential of combining optical engineering with artificial intelligence to advance biological imaging capabilities.
Funding: Supported by the National Key R&D Program of China under No. 2022YFA1004000 and the National Natural Science Foundation of China under Nos. 11991023 and 12371324.
Abstract: This study introduces a fund recommendation system based on the ε-greedy algorithm and an incremental learning framework. The model simulates the interaction process when customers browse the web pages of fund products. Customers click on their preferred fund products when visiting a fund recommendation web page. The system collects customer click sequences to continually estimate and update their utility function. The system generates product lists using the ε-greedy algorithm, where each product on the list is selected with probability 1-ε as an exploitation strategy and with probability ε as an exploration strategy. We perform a series of numerical tests to evaluate the estimation performance for different values of ε.
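The list-generation rule reads directly as code; this minimal sketch uses toy utility estimates, and the list size and ε are illustrative rather than the paper's settings.

```python
# Epsilon-greedy list generation: each slot is filled from the current utility
# estimates with probability 1 - eps (exploit) and uniformly at random with
# probability eps (explore).
import numpy as np

rng = np.random.default_rng(0)

def recommend(utilities, list_size=5, eps=0.1):
    """utilities: estimated click-utility per fund; returns a ranked product list."""
    remaining = list(range(len(utilities)))
    chosen = []
    for _ in range(list_size):
        if rng.random() < eps:                            # exploration
            pick = rng.choice(remaining)
        else:                                             # exploitation
            pick = max(remaining, key=lambda i: utilities[i])
        chosen.append(int(pick))
        remaining.remove(pick)                            # no duplicates on one page
    return chosen

est_utilities = rng.random(20)                            # 20 candidate fund products
print(recommend(est_utilities, eps=0.2))
```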
Abstract: This study discusses a machine learning-driven methodology for optimizing the aerodynamic performance of both conventional, like the common research model (CRM), and non-conventional, like the Bionica box-wing, aircraft configurations. The approach leverages advanced parameterization techniques, such as class and shape transformation (CST) and Bezier curves, to reduce design complexity while preserving flexibility. Computational fluid dynamics (CFD) simulations are performed to generate a comprehensive dataset, which is used to train an extreme gradient boosting (XGBoost) model for predicting aerodynamic performance. The optimization process, using the non-dominated sorting genetic algorithm (NSGA-II), results in a 12.3% reduction in drag for the CRM wing and an 18% improvement in the lift-to-drag ratio for the Bionica box-wing. These findings validate the efficacy of the machine learning-based method in aerodynamic optimization, demonstrating significant efficiency gains across both configurations.
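The surrogate step, training a regressor on CFD samples so that NSGA-II can query it cheaply, can be sketched as follows; the features, response, and hyperparameters are synthetic stand-ins, not the paper's dataset.

```python
# An XGBoost surrogate trained on (shape parameters -> aerodynamic coefficient)
# pairs; the optimizer then calls the surrogate instead of a full CFD solve.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 8))        # e.g., 8 CST shape coefficients
y = 0.02 + 0.01 * (X ** 2).sum(axis=1)       # synthetic "drag coefficient" response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
surrogate = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
surrogate.fit(X_tr, y_tr)
print("surrogate R^2 on held-out CFD samples:", surrogate.score(X_te, y_te))
# Inside NSGA-II, candidate shapes would be scored via surrogate.predict(...).
```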
Funding: Supported by the National Key Research and Development Program (2021YFB2500300), the National Natural Science Foundation of China (T2322015, 92472101, 22393903, 22393900, and 52394170), the Beijing Municipal Natural Science Foundation (L247015 and L233004), and the Tsinghua University Initiative Scientific Research Program.
Abstract: The global rapid transition towards sustainable energy systems has heightened the demand for high-performance lithium metal batteries (LMBs), where understanding interfacial phenomena is paramount. In this contribution, we present an on-the-fly machine learning molecular dynamics (OTF-MLMD) approach to probe the complex side reactions at lithium metal anode–electrolyte interfaces with exceptional accuracy and computational efficiency. The machine learning force field (MLFF) was first validated in a bulk-phase system comprising twenty 1,2-dimethoxyethane (DME) molecules, demonstrating energy fluctuations and structural parameters in close agreement with ab initio molecular dynamics (AIMD) benchmarks. Subsequent simulations of lithium–DME and lithium–electrolyte interfaces revealed minimal discrepancies in energy, bond lengths, and net charge variations (notably in FSI⁻ species), underscoring the DFT-level precision of the approach. A further small-scale interfacial model enabled on-the-fly training over a mere 340 fs, which was then successfully transferred to a large-scale simulation encompassing nearly 300,000 atoms, representing the largest interfacial model in LMB research to date. The hierarchical validation strategy not only establishes the robustness of the MLFF in capturing both interfacial and bulk-phase chemistry but also paves the way for statistically meaningful simulations of battery interfaces. These findings highlight the transformative potential of OTF-MLMD in bridging the gap between atomistic accuracy and macroscopic modeling, affording a universal approach to understanding interfacial reactions in LMBs.
Funding: Supported by the Science and Technology Project of State Grid Sichuan Electric Power Company Chengdu Power Supply Company under Grant No. 521904240005.
Abstract: This paper presents a novel approach to dynamic pricing and distributed energy management in virtual power plant (VPP) networks using multi-agent reinforcement learning (MARL). As the energy landscape evolves towards greater decentralization and renewable integration, traditional optimization methods struggle to address the inherent complexities and uncertainties. Our proposed MARL framework enables adaptive, decentralized decision-making for both the distribution system operator and individual VPPs, optimizing economic efficiency while maintaining grid stability. We formulate the problem as a Markov decision process and develop a custom MARL algorithm that leverages actor-critic architectures and experience replay. Extensive simulations across diverse scenarios demonstrate that our approach consistently outperforms baseline methods, including Stackelberg game models and model predictive control, achieving an 18.73% reduction in costs and a 22.46% increase in VPP profits. The MARL framework shows particular strength in scenarios with high renewable energy penetration, where it improves system performance by 11.95% compared with traditional methods. Furthermore, our approach demonstrates superior adaptability to unexpected events and mispredictions, highlighting its potential for real-world implementation.
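Experience replay is the one component that translates directly into a short sketch; the capacity and batch size below are illustrative choices, not the paper's settings.

```python
# Experience replay: transitions are stored and sampled i.i.d. to decorrelate the
# actor-critic updates, which is key to stability in MARL training loops.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)      # old transitions evicted FIFO

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size=64):
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states = zip(*batch)
        return states, actions, rewards, next_states

buf = ReplayBuffer()
for t in range(1000):                             # toy VPP interaction loop
    buf.push(state=t, action=t % 3, reward=-abs(t % 7 - 3), next_state=t + 1)
states, actions, rewards, next_states = buf.sample()
```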
Funding: Supported by the Start-up Project of Doctoral Research at Jiangxi University of Water Resources and Electric Power (No. 2024kyqd062), the Key Project of Science and Technology Research of the Jiangxi Provincial Education Department (No. GJJ180251), and the National Natural Science Foundation of China (No. 61961021).
Abstract: Deep reinforcement learning (DRL) is broadly employed in the optimization of wireless video transmissions. Nevertheless, the instability of the DRL algorithm limits further improvement of video transmission quality. The federated learning method, based on distributed data sets, is used to reduce network costs and increase the learning efficiency of the deep learning network model; it cuts excessive data transfer costs and breaks down data silos. Intra-clustered dynamic federated deep reinforcement learning (IcD-FDRL) is constructed in clustered mobile edge-computing (CMEC) networks to promote video transmission quality through improved stability and efficiency of the DRL algorithm. The IcD-FDRL algorithm is then deployed at the edge of CMEC networks for intelligent-edge video transmission, satisfying the diversified needs of different users. Simulation analysis proves the effectiveness of IcD-FDRL in improving QoE, cache hit ratio, and training efficiency.
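The intra-cluster federation step can be illustrated with a FedAvg-style weight average; the exact aggregation in IcD-FDRL is not specified in the abstract, so this is only a sketch of the mechanism.

```python
# FedAvg-style aggregation: each edge node trains its local DRL network and the
# cluster head averages the weights, weighted by local sample counts.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """client_weights: list of dicts {layer_name: ndarray}; returns averaged dict."""
    total = sum(client_sizes)
    avg = {}
    for name in client_weights[0]:
        avg[name] = sum(w[name] * (n / total)
                        for w, n in zip(client_weights, client_sizes))
    return avg

# Three edge nodes with toy two-layer networks and unequal data volumes:
clients = [{"fc1": np.random.randn(4, 8), "fc2": np.random.randn(8, 2)}
           for _ in range(3)]
global_model = fed_avg(clients, client_sizes=[120, 300, 80])
print({k: v.shape for k, v in global_model.items()})
```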
Funding: Supported by the Ministry of Higher Education Malaysia under the Fundamental Research Grant Scheme [FRGS/1/2023/TK04/USM/03/1].
Abstract: Conventional oncology faces challenges such as suboptimal drug delivery, tumor heterogeneity, and therapeutic resistance, indicating a need for more personalized, mechanistically grounded, and predictive treatment strategies. This review explores the convergence of Computational Fluid Dynamics (CFD) and Machine Learning (ML) as an integrated framework to address these issues in modern cancer therapy. The paper discusses recent advancements where CFD models simulate complex tumor microenvironmental conditions, like interstitial fluid pressure (IFP) and drug perfusion, while ML accelerates simulation workflows, automates image-based segmentation, and improves predictive accuracy. The synergy between CFD and ML improves scalability and enables patient-specific treatment planning. Methodologically, it covers multi-scale modeling approaches, nanotherapeutic simulations, imaging integration, and emerging AI-driven frameworks. The paper identifies gaps in current applications, including the need for robust clinical validation, real-time model adaptability, and ethical data integration. Future directions suggest that CFD–ML hybrids could serve as digital twins for tumor evolution, offering insights for adaptive therapies. The review advocates for a computationally augmented oncology ecosystem that combines biological complexity with engineering precision for next-generation cancer care.
Funding: The National Natural Science Foundation of China (grant numbers 82070434, LYQ).
Abstract: Background: Cardiovascular disease (CVD) remains a major health challenge globally, particularly in aging populations. Using data from the China Health and Retirement Longitudinal Study (CHARLS), this study examines triglyceride-glucose (TyG) index dynamics, a marker for insulin resistance, and its relationship with CVD in Chinese adults aged 45 and older. Methods: This reanalysis utilized five waves of CHARLS data with multistage sampling. Of 17,705 participants, 5,625 with TyG index and subsequent CVD data were included, excluding those lacking 2011 and 2015 TyG data. The TyG index was derived from glucose and triglyceride levels, and CVD outcomes were ascertained via self-reports and records. Participants were divided into four groups based on TyG changes (2011–2015): low-low, low-high, high-low, and high-high. Results: After adjusting for covariates, the stable-high group showed a significantly higher risk of incident CVD compared with the stable-low group, with an HR of 1.18 (95% CI: 1.03–1.36). Similarly, for stroke risk, the stable-high group had an HR of 1.45 (95% CI: 1.11–1.89). Survival curves indicated that individuals with stable high TyG levels had a significantly increased CVD risk compared with controls. Dynamic TyG change indicated a greater risk for CVD than abnormal glucose metabolism, notably for stroke. However, there was no statistical difference in the single incidence risk of heart disease between the stable-low and stable-high groups. Subgroup analyses underscored demographic disparities, with the stable-high group consistently showing elevated risks, particularly among individuals under 65 years, females, and those with higher education, lower BMI, or higher depression scores. Machine learning models, including random forest, XGBoost, CoxBoost, DeepSurv, and GBM, underscored the predictive superiority of dynamic TyG over abnormal glucose metabolism for CVD. Conclusions: Dynamic TyG changes correlate with CVD risks. Monitoring these changes could help predict and manage cardiovascular health in middle-aged and older adults. Targeted interventions based on TyG index trends are crucial for reducing CVD risks in this population.
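The abstract does not restate the index formula; in this literature the TyG index is conventionally defined as ln(fasting triglycerides [mg/dL] × fasting glucose [mg/dL] / 2), and the sketch below assumes CHARLS follows that convention.

```python
# Standard TyG index computation (assumed definition, per the TyG literature).
import math

def tyg_index(triglycerides_mg_dl, glucose_mg_dl):
    return math.log(triglycerides_mg_dl * glucose_mg_dl / 2.0)

# Example: TG = 150 mg/dL, fasting glucose = 100 mg/dL
print(f"TyG = {tyg_index(150, 100):.2f}")   # ln(7500) ~ 8.92
```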
Abstract: Traditionally, heuristic re-planning algorithms are used to tackle the problem of dynamic task planning for multiple satellites. However, traditional heuristic strategies depend on the concrete tasks, which often affects the optimality of the result. Noticing that the historical information of cooperative task planning will impact later planning results, we propose a hybrid learning algorithm for dynamic multi-satellite task planning, based on multi-agent reinforcement learning with policy iteration combined with transfer learning. The reinforcement learning strategy of each satellite is described with neural networks. The policy neural network individuals with the best topological structure and weights are found by applying co-evolutionary search iteratively. To avoid the failure of historical learning caused by randomly occurring observation requests, a novel approach is proposed to balance the quality and efficiency of the task planning, which converts the historical learning strategy into the current initial learning strategy by applying the transfer learning algorithm. The simulations and analysis show the feasibility and adaptability of the proposed approach, especially for situations with randomly occurring observation requests.
Funding: Supported by the General Program (No. 60774022) and State Key Program (No. 60834001) of the National Natural Science Foundation of China.
Abstract: In this paper, the stability of iterative learning control with data dropouts is discussed. Through the super-vector formulation, an iterative learning control (ILC) system with data dropouts can be modeled as an asynchronous dynamical system with rate constraints on events in the iteration domain. The stability condition is provided in the form of linear matrix inequalities (LMIs), building on the stability theory of asynchronous dynamical systems. The analysis is supported by simulations.
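A toy simulation conveys the dropout model: the learning update is applied only on samples whose measurements arrive. The plant, gain, and dropout rate below are illustrative and do not reproduce the paper's LMI construction.

```python
# P-type ILC with random data dropouts: u_{k+1} = u_k + gamma * e_k, applied only
# where the measurement survives the lossy channel. Toy static plant y = 0.8 u.
import numpy as np

rng = np.random.default_rng(2)
N, trials, gamma, drop_rate = 50, 30, 0.5, 0.3
y_ref = np.sin(np.linspace(0, 2 * np.pi, N))       # desired trajectory over one trial
u = np.zeros(N)

def plant(u):
    return 0.8 * u

for k in range(trials):
    e = y_ref - plant(u)
    received = rng.random(N) > drop_rate           # which samples are delivered
    u = u + gamma * e * received                   # learn only from delivered data
    if k % 10 == 0:
        print(f"trial {k}: tracking error = {np.linalg.norm(e):.4f}")
# The error still contracts in expectation despite dropouts, echoing the paper's
# rate-constrained stability picture.
```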
Funding: Supported in part by the National Science Fund for Distinguished Young Scholars project under Grant No. 60725105, the National Basic Research Program of China (973 Program) under Grant No. 2009CB320404, the National Natural Science Foundation of China under Grant No. 61072068, and the Fundamental Research Funds for the Central Universities under Grant No. JY10000901031.
Abstract: A novel centralized approach for Dynamic Spectrum Allocation (DSA) in the Cognitive Radio (CR) network is presented in this paper. Instead of giving the solution in terms of formulas modeling the network environment, such as linear programming or convex optimization, the new approach acquires the capability of iteratively learning the environment's performance online by using a Reinforcement Learning (RL) algorithm, after observing the variability and uncertainty of heterogeneous wireless networks. Appropriate decision-making access actions can then be obtained by employing a Fuzzy Inference System (FIS), which ensures that the strategy can explore possible states and exploit experience sufficiently. The new approach effectively considers multiple objectives, such as spectrum efficiency and fairness between CR Access Points (APs). By interacting with the environment and accumulating comprehensive advantages, it can achieve the largest expected long-term reward on the desired objectives and implement the best action. Moreover, the present algorithm is relatively simple and does not require complex calculations. Simulation results show that the proposed approach achieves better performance than a fixed frequency planning scheme or a general dynamic spectrum allocation policy.
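The abstract names spectrum efficiency and inter-AP fairness as joint objectives; one common way to fold both into a scalar RL reward, assumed here rather than taken from the paper, is to mix total throughput with Jain's fairness index J = (Σx)² / (n Σx²).

```python
# Multi-objective reward sketch: blend normalized efficiency with Jain's fairness.
import numpy as np

def jain_fairness(throughputs):
    x = np.asarray(throughputs, dtype=float)
    return x.sum() ** 2 / (len(x) * (x ** 2).sum())

def reward(throughputs, alpha=0.5):
    """alpha weights the efficiency/fairness trade-off (an illustrative choice)."""
    x = np.asarray(throughputs, dtype=float)
    efficiency = x.sum() / (len(x) * x.max())   # 1.0 when every AP hits the peak rate
    return alpha * efficiency + (1 - alpha) * jain_fairness(x)

print(reward([10, 10, 10]))   # equal AP rates: fairness = 1.0, high reward
print(reward([28, 1, 1]))     # same total but unfair: noticeably lower reward
```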
Funding: Supported by the Natural Science Foundation of China under Grants No. 51972089 and No. 51672064.
Abstract: High-entropy diborides are a new category of ultra-high temperature ceramics, which are believed to be promising candidates for applications in hypersonic vehicles. However, knowledge of the high-temperature thermal and mechanical properties of high-entropy diborides is still lacking. In this work, variations of the thermal and elastic properties of high-entropy (Ti0.2Zr0.2Hf0.2Nb0.2Ta0.2)B2 with respect to temperature were predicted by molecular dynamics simulations. First, a deep learning potential for the Ti-Zr-Hf-Nb-Ta-B diboride system was fitted, with prediction errors in energy and force of 9.2 meV/atom and 208 meV/Å, respectively, in comparison with first-principles calculations. Then, the temperature-dependent lattice constants, anisotropic thermal expansions, anisotropic phonon thermal conductivities, and elastic properties of high-entropy (Ti0.2Zr0.2Hf0.2Nb0.2Ta0.2)B2 from 0 °C to 2400 °C were evaluated, where the predicted room-temperature values agree well with experimental measurements. In addition, the intrinsic lattice distortions of (Ti0.2Zr0.2Hf0.2Nb0.2Ta0.2)B2 were analyzed through the displacements of atoms from their ideal positions, which are on the order of 10⁻³ Å and one order of magnitude smaller than those in (Ti0.2Zr0.2Hf0.2Nb0.2Ta0.2)C. This indicates that lattice distortion in (Ti0.2Zr0.2Hf0.2Nb0.2Ta0.2)B2 is not as severe as expected. With the new paradigm of machine learning potentials, deep insight into high-entropy materials can be achieved in the future, since the chemical and structural complexity of high-entropy materials can be well handled by machine learning potentials.
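One of the evaluated quantities, thermal expansion, reduces to a simple fit over the predicted lattice constants; the a(T) values below are made-up placeholders standing in for the deep-potential MD output.

```python
# Linear thermal expansion coefficient alpha = (1/a0) * da/dT from lattice
# constants a(T) sampled over the simulated temperature range.
import numpy as np

T = np.array([0, 600, 1200, 1800, 2400], dtype=float)   # temperature, deg C
a = np.array([3.100, 3.112, 3.125, 3.139, 3.154])       # lattice constant, Angstrom (toy)

slope = np.polyfit(T, a, 1)[0]             # da/dT from a linear fit
alpha = slope / a[0]                       # (1/a0) * da/dT
print(f"alpha = {alpha:.3e} per deg C")    # order 1e-6, typical for diborides
```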
Funding: Supported in part by the National Natural Science Foundation of China (62222301, 62073085, 62073158, 61890930-5, 62021003), the National Key Research and Development Program of China (2021ZD0112302, 2021ZD0112301, 2018YFC1900800-5), and the Beijing Natural Science Foundation (JQ19013).
Abstract: Reinforcement learning (RL) has roots in dynamic programming, and it is called adaptive/approximate dynamic programming (ADP) within the control community. This paper reviews recent developments in ADP along with RL and its applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Some effective offline and online algorithms for ADP/adaptive critic control are displayed, where the main results for discrete-time systems and continuous-time systems are surveyed, respectively. Then, the research progress on adaptive critic control based on the event-triggered framework and under uncertain environments is discussed, where event-based design, robust stabilization, and game design are reviewed. Moreover, the extensions of ADP for addressing control problems under complex environments attract enormous attention. The ADP architecture is revisited from the perspective of data-driven and RL frameworks, showing how they significantly promote the ADP formulation. Finally, several typical control applications of RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, this comprehensive survey of ADP and RL for advanced control applications demonstrates their remarkable potential within the artificial intelligence era. In addition, they also play a vital role in promoting environmental protection and industrial intelligence.
Funding: Supported by the Joint Foundations of the National Natural Science Foundation of China and the Civil Aviation of China under Grant U1833102, the Natural Science Foundation of Liaoning Province under Grants 2020-HYLH-13 and 2019-ZD-0014, the Fundamental Research Funds for the Central Universities under Grant DUT21JC20, and the Engineering Research Center of Mobile Communications, Ministry of Education.
Abstract: The Cognitive Internet of Vehicles (CIoV) can improve spectrum utilization by accessing spectrum licensed to a primary user (PU) under the premise of not disturbing the PU's transmissions. However, traditional static spectrum access leaves the CIoV unable to adapt to varying spectrum environments. In this paper, a reinforcement learning based dynamic spectrum access scheme is proposed to improve the transmission performance of the CIoV in the licensed spectrum while avoiding harmful interference to the PU. The frame structure of the CIoV is separated into a sensing period and an access period, whereby the CIoV can optimize the transmission parameters in the access period according to the spectrum decisions made in the sensing period. Considering both detection probability and false alarm probability, a Q-learning based spectrum access algorithm is proposed for the CIoV to intelligently select the optimal channel, bandwidth, and transmit power under dynamic spectrum states and various levels of spectrum sensing performance. The simulations show that, compared with a traditional non-learning spectrum access algorithm, the proposed Q-learning algorithm can effectively improve the spectral efficiency and throughput of the CIoV and decrease the interference power to the PU.
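The decision loop maps naturally onto tabular Q-learning; the state and action spaces and the reward shaping below are simplified assumptions that only illustrate the update rule, not the paper's exact environment.

```python
# Tabular Q-learning for spectrum access: state = sensed spectrum condition,
# action = a (channel, power) choice; reward trades throughput against PU interference.
import numpy as np

rng = np.random.default_rng(3)
n_states, n_actions = 4, 6                 # sensed states x (channel, power) pairs
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    """Toy environment: higher actions mean more power; state 0 means PU active."""
    throughput = rng.normal(loc=1.0 + 0.2 * action, scale=0.1)
    pu_interference = 0.5 * action if state == 0 else 0.0
    next_state = rng.integers(n_states)
    return throughput - pu_interference, next_state

state = 0
for t in range(5000):
    action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    r, nxt = step(state, action)
    Q[state, action] += alpha * (r + gamma * Q[nxt].max() - Q[state, action])
    state = nxt

print("greedy action per sensed state:", Q.argmax(axis=1))
# The agent learns low power when the PU is active (state 0) and high power otherwise.
```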
Funding: Supported by the National Natural Science Foundation of China (No. 52175148), the Natural Science Foundation of Shaanxi Province (No. 2021KW-25), the Open Cooperation Innovation Fund of Xi'an Modern Chemistry Research Institute (No. SYJJ20210409), and the Fundamental Research Funds for the Central Universities (No. 3102018ZY015).
Abstract: Machine learning (ML) methods, with good applicability to complex and highly nonlinear sequences, have been attracting much attention in recent years for predicting the complicated mechanical properties of various materials. As widely known ML methods, back-propagation (BP) neural networks with and without optimization by a genetic algorithm (GA) are also established for comparison of time cost and prediction error. To further increase the prediction accuracy and efficiency, this paper proposes a long short-term memory (LSTM) network model to predict the dynamic compressive performance of concrete-like materials at high strain rates. Dynamic explicit analysis is performed in the finite element (FE) software ABAQUS to simulate various waveforms in split Hopkinson pressure bar (SHPB) experiments by applying different stress waves in the incident bar. The FE simulation accuracy is validated against SHPB experimental results from the viewpoint of the dynamic increase factor. In order to cover more extensive loading scenarios, 60 sets of FE simulations are conducted to generate three kinds of waveforms in the incident and transmission bars of SHPB experiments. By training the three proposed networks, nonlinear mapping relations can be reasonably established between the incident, reflected, and transmitted waves. Statistical measures are used to quantify the network prediction accuracy, confirming that the stress-strain curves of concrete-like materials at high strain rates predicted by the proposed networks agree sufficiently well with those from FE simulations. It is found that, compared with the BP network, the GA-BP network can effectively stabilize the network structure, indicating that the GA optimization improves the prediction accuracy of the SHPB dynamic responses by performing crossover and mutation operations on the weights and thresholds of the original BP network. By addressing long-time dependencies, the proposed LSTM network achieves better results than the BP and GA-BP networks, with a smaller mean square error (MSE) and a higher correlation coefficient. More importantly, the proposed LSTM algorithm, after training with a limited number of FE simulations, could replace the time-consuming and laborious FE pre- and post-processing and modelling.
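The incident-to-transmitted wave mapping is a sequence-to-sequence regression; the hedged sketch below trains an LSTM with the MSE loss mentioned in the abstract, with layer sizes and synthetic waveforms as assumptions.

```python
# LSTM regression from the incident strain wave to the transmitted wave; the 60
# FE-generated waveform pairs would replace the single toy pair used here.
import torch
import torch.nn as nn

class WaveLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):                  # x: (batch, time, 1) incident wave
        h, _ = self.lstm(x)
        return self.head(h)                # (batch, time, 1) predicted transmitted wave

model = WaveLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

t = torch.linspace(0, 1, 200).reshape(1, 200, 1)
incident = torch.sin(6.28 * 3 * t)                      # toy incident pulse
transmitted = 0.6 * torch.sin(6.28 * 3 * (t - 0.05))    # attenuated, delayed target

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(incident), transmitted)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.5f}")
```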
Funding: Supported by a grant from the National Natural Science Foundation of China (Grant Nos. 52109163 and 51979188).
Abstract: Social infrastructures such as dams are likely to be exposed to high risks of terrorist and military attacks, leading to increasing attention on their vulnerability and the catastrophic consequences of such events. This paper develops advanced deep learning approaches for structural dynamic response prediction and dam health diagnosis. First, improved long short-term memory (LSTM) networks are proposed for data-driven structural dynamic response analysis, with the data generated by a single-degree-of-freedom (SDOF) system and finite element numerical simulation, owing to the unavailability of abundant practical structural response data for concrete gravity dams under blast events. Three kinds of LSTM-based models are discussed for various cases of noise-contaminated signals, and the results prove that LSTM-based models have the potential for quick structural response estimation under blast loads. Furthermore, damage indicators (i.e., peak vibration velocity and dominant frequency) are extracted from the predicted velocity histories, and their relationship with the dam damage status from the numerical simulation is established. This study provides a deep learning based structural health monitoring (SHM) framework for quick assessment of dams subjected to underwater explosions through blast-induced monitoring data.
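The two damage indicators are directly computable from a predicted velocity history; the snippet below extracts peak vibration velocity and the dominant frequency via an FFT, with a synthetic decaying response as placeholder data.

```python
# Damage indicators from a velocity history: peak vibration velocity is the maximum
# absolute value, and the dominant frequency is the FFT bin with the largest amplitude.
import numpy as np

def damage_indicators(velocity, dt):
    ppv = np.max(np.abs(velocity))                        # peak vibration velocity
    spec = np.abs(np.fft.rfft(velocity - velocity.mean()))
    freqs = np.fft.rfftfreq(len(velocity), d=dt)
    return ppv, freqs[np.argmax(spec)]                    # dominant frequency (Hz)

dt = 1e-4                                                 # 10 kHz sampling (toy)
t = np.arange(0, 0.5, dt)
v = 0.08 * np.exp(-8 * t) * np.sin(2 * np.pi * 25 * t)    # decaying ~25 Hz response
ppv, f_dom = damage_indicators(v, dt)
print(f"PPV = {ppv:.3f} m/s, dominant frequency = {f_dom:.1f} Hz")
```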