The purpose of this review is to explore the intersection of computational engineering and biomedical science, highlighting the transformative potential this convergence holds for innovation in healthcare and medical research. The review covers key topics such as computational modelling, bioinformatics, machine learning in medical diagnostics, and the integration of wearable technology for real-time health monitoring. Major findings indicate that computational models have significantly enhanced the understanding of complex biological systems, while machine learning algorithms have improved the accuracy of disease prediction and diagnosis. The synergy between bioinformatics and computational techniques has led to breakthroughs in personalized medicine, enabling more precise treatment strategies. Additionally, the integration of wearable devices with advanced computational methods has opened new avenues for continuous health monitoring and early disease detection. The review emphasizes the need for interdisciplinary collaboration to further advance this field. Future research should focus on developing more robust and scalable computational models, enhancing data integration techniques, and addressing ethical considerations related to data privacy and security. By fostering innovation at the intersection of these disciplines, the potential to revolutionize healthcare delivery and outcomes becomes increasingly attainable.
The Literary Lab at Stanford University is one of the birthplaces of digital humanities and has maintained significant influence in this field over the years. Professor Hui Haifeng has been engaged in research on digital humanities and computational criticism in recent years. During his visiting scholarship at Stanford University, he participated in the activities of the Literary Lab. Taking this opportunity, he interviewed Professor Mark Algee-Hewitt, the director of the Literary Lab, discussing important topics such as the current state and reception of DH (digital humanities) in the English Department, the operations of the Literary Lab, and the landscape of computational criticism. Mark Algee-Hewitt's research focuses on the eighteenth and early nineteenth centuries in England and Germany and seeks to combine literary criticism with digital and quantitative analyses of literary texts. In particular, he is interested in the history of aesthetic theory and the development and transmission of aesthetic and philosophical concepts during the Enlightenment and Romantic periods. He is also interested in the relationship between aesthetic theory and the poetry of the long eighteenth century. Although his primary background is English literature, he also has a degree in computer science. He believes that the influence of digital humanities within the humanities disciplines is growing increasingly significant. This impact is evident in both the attraction and assistance it offers to students, as well as in the new interpretations it brings to traditional literary studies. He argues that the key to effectively integrating digital humanities into the English Department is to focus on literary research questions, exploring how digital tools can raise new questions or provide new insights into traditional research.
Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm can better coordinate the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
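As a rough illustration of the mechanics involved, the sketch below combines per-objective rewards (delay, energy, load balance, privacy entropy) into a scalarized Double-DQN target. The tabular Q-values, state and action names, weights, and reward values are illustrative assumptions, not the paper's implementation, which uses neural networks and learns the weights with RBFNs.

```python
# Sketch of a scalarized Double-DQN target under dynamic objective weights.
# All names and numbers are illustrative placeholders.

def double_dqn_target(q_online, q_target, next_state, reward, gamma=0.9):
    """Double DQN: the online net selects the action, the target net evaluates it."""
    best_action = max(q_online[next_state], key=q_online[next_state].get)
    return reward + gamma * q_target[next_state][best_action]

def scalarize(rewards, weights):
    """Combine per-objective rewards (delay, energy, load, privacy) into one scalar."""
    return sum(w * r for w, r in zip(weights, rewards))

weights = [0.4, 0.3, 0.2, 0.1]      # objective weights, adapted over time
rewards = [-1.2, -0.8, -0.3, 0.5]   # per-objective rewards for one transition
q_online = {"s1": {"offload": 1.0, "local": 0.5}}
q_target = {"s1": {"offload": 0.9, "local": 0.6}}
target = double_dqn_target(q_online, q_target, "s1", scalarize(rewards, weights))
```

Decoupling action selection (online network) from action evaluation (target network) is the standard DDQN remedy for the overestimation bias of vanilla Q-learning.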
As an essential element of intelligent transport systems, the Internet of Vehicles (IoV) has recently brought an immersive user experience. Meanwhile, the emergence of mobile edge computing (MEC) has enhanced the computational capability of vehicles, effectively reducing task processing latency and power consumption while meeting the quality-of-service requirements of vehicle users. However, there are still problems in the MEC-assisted IoV system, such as poor connectivity and high cost. Unmanned aerial vehicles (UAVs) equipped with MEC servers have become a promising approach for providing communication and computing services to mobile vehicles. Hence, in this article, an optimal framework for the UAV-assisted MEC system for IoV that minimizes the average system cost is presented. Through joint consideration of computational offloading decisions and computational resource allocation, the optimization problem of the proposed architecture is formulated to reduce system energy consumption and delay. To tackle this problem, the original non-convex problem is converted into a convex one, and a distributed optimization scheme based on the alternating direction method of multipliers (ADMM) is developed. The simulation results illustrate that the presented scheme enhances system performance dramatically relative to other schemes, and that it also exhibits good convergence.
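The ADMM machinery mentioned above can be illustrated on a toy problem. The objective below is a deliberately simplified stand-in (the paper's actual problem couples offloading decisions with resource allocation), so treat this only as a sketch of the characteristic x-update, z-update, dual-update iteration.

```python
# Minimal ADMM sketch on a toy consensus problem:
#   minimize (x - a)^2 + (z - b)^2  subject to  x = z
# whose optimum is x = z = (a + b) / 2. The augmented-Lagrangian updates
# below follow the standard scaled-dual form of ADMM.

def admm(a, b, rho=1.0, iters=100):
    x = z = u = 0.0
    for _ in range(iters):
        # x-update: argmin_x (x - a)^2 + (rho/2)(x - z + u)^2
        x = (2 * a + rho * (z - u)) / (2 + rho)
        # z-update: argmin_z (z - b)^2 + (rho/2)(x - z + u)^2
        z = (2 * b + rho * (x + u)) / (2 + rho)
        # scaled dual update on the residual x - z
        u = u + x - z
    return x, z

x, z = admm(1.0, 3.0)   # converges toward x = z = 2.0
```

In the paper's distributed setting, each vehicle or UAV would solve its own local subproblem in parallel, with the dual variables coordinating them toward consensus.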
Low Earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form an LEO satellite edge computing system, providing computing services for ground users globally. In this paper, the computation offloading and resource allocation problems are formulated as a mixed-integer nonlinear program (MINLP). A computation offloading algorithm based on the deep deterministic policy gradient (DDPG) is proposed to obtain the user offloading decisions and user uplink transmission power. A convex optimization algorithm based on the Lagrange multiplier method is used to obtain the optimal MEC server resource allocation scheme. In addition, the expression for the suboptimal user local CPU cycles is derived by a relaxation method. Simulation results show that the proposed algorithm achieves excellent convergence and significantly reduces the system utility value compared with other algorithms, albeit at a considerable time cost.
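The Lagrange-multiplier step admits a closed form for a common cost model. As an illustration only (the cost function below is an assumption, not necessarily the paper's exact objective), consider allocating a server capacity F among tasks to minimize the total computation delay sum_i c_i / f_i subject to sum_i f_i = F: stationarity of the Lagrangian gives f_i proportional to sqrt(c_i).

```python
import math

# Closed-form resource allocation via the Lagrange multiplier method.
# Minimize sum_i c_i / f_i  s.t.  sum_i f_i = F, f_i > 0, where c_i is the
# CPU-cycle demand of task i. Setting d/df_i [sum c_j/f_j + lam*(sum f_j - F)]
# to zero yields f_i = sqrt(c_i / lam), i.e., f_i proportional to sqrt(c_i).

def allocate(cycles, F):
    roots = [math.sqrt(c) for c in cycles]
    total = sum(roots)
    return [F * r / total for r in roots]

alloc = allocate([1.0, 4.0, 9.0], F=12.0)   # -> [2.0, 4.0, 6.0]
```

Note how the square root dampens the allocation: a task with 9x the cycles of another receives only 3x the capacity.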
In the field of edge computing, achieving low-latency computational task offloading with limited resources is a critical research challenge, particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications. In scenarios where edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance, thereby increasing system latency. Furthermore, in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency issues. To address these challenges, we propose a novel and efficient Deep Reinforcement Learning (DRL)-based approach aimed at minimizing average task latency. The proposed method incorporates three offloading strategies: local computation, direct offloading to the edge server in the local region, and device-to-device (D2D)-assisted offloading to edge servers in other regions. We formulate the task offloading process as a complex latency-minimization optimization problem. To solve it, we propose an advanced algorithm based on the Dueling Double Deep Q-Network (D3QN) architecture that incorporates the Prioritized Experience Replay (PER) mechanism. Experimental results demonstrate that, compared with existing offloading algorithms, the proposed method significantly reduces average task latency, enhances user experience, and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
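The PER mechanism can be sketched compactly. The proportional-prioritization variant below (with exponent alpha, a tiny buffer, and made-up priorities) is a generic illustration of the idea, not the paper's tuned implementation, and omits the importance-sampling correction used in full PER.

```python
import random

# Minimal sketch of proportional Prioritized Experience Replay sampling:
# transitions with larger TD-error priorities are replayed more often.

def sample_per(buffer, priorities, alpha=0.6):
    weights = [p ** alpha for p in priorities]   # priority^alpha
    total = sum(weights)
    r = random.uniform(0, total)
    acc = 0.0
    for item, w in zip(buffer, weights):
        acc += w
        if r <= acc:
            return item
    return buffer[-1]   # guard against floating-point edge cases

buffer = ["t1", "t2", "t3"]
priorities = [0.1, 2.0, 0.5]   # t2 has the largest TD error
sample = sample_per(buffer, priorities)
```

Production implementations replace the linear scan with a sum-tree so that sampling and priority updates run in O(log n).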
This study first demonstrates the potential of organic photoabsorbing blends in overcoming a critical limitation of metal oxide photoanodes in tandem modules: insufficient photogenerated current. Various organic blends, including PTB7-Th:FOIC, PTB7-Th:O6T-4F, PM6:Y6, and PM6:FM, were systematically tested. When coupled with electron transport layer (ETL) contacts, these blends exhibit exceptional charge separation and extraction, with PM6:Y6 achieving saturation photocurrents up to 16.8 mA cm^(-2) at 1.23 V_RHE (the oxygen evolution thermodynamic potential). For the first time, a tandem structure utilizing organic photoanodes has been computationally designed and fabricated, and the implementation of a double PM6:Y6 photoanode/photovoltaic structure resulted in photogenerated currents exceeding 7 mA cm^(-2) at 0 V_RHE (the hydrogen evolution thermodynamic potential) and anodic current onset potentials as low as -0.5 V_RHE. The organic-based approach presented herein paves the way for further exploration of different blend combinations to target specific oxidative reactions by selecting precise donor/acceptor candidates among the many that exist.
1 Summary
Mathematical modeling has become a cornerstone in understanding the complex dynamics of infectious diseases and chronic health conditions. With the advent of more refined computational techniques, researchers are now able to incorporate intricate features such as delays, stochastic effects, fractional dynamics, variable-order systems, and uncertainty into epidemic models. These advancements not only improve predictive accuracy but also enable deeper insights into disease transmission, control, and policy-making. Tashfeen et al.
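The refinements mentioned above (delays, stochasticity, fractional order) all extend the same compartmental skeleton. A minimal deterministic SIR model with forward-Euler integration, using illustrative parameter values, looks like this:

```python
# Minimal deterministic SIR model integrated with forward Euler.
# beta: transmission rate, gamma: recovery rate; populations are fractions,
# so S + I + R is conserved at 1. Parameter values are illustrative only.

def sir(beta, gamma, s0, i0, r0, dt=0.1, steps=1000):
    s, i, r = s0, i0, r0
    for _ in range(steps):
        new_inf = beta * s * i * dt   # newly infected this step
        new_rec = gamma * i * dt      # newly recovered this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

# R0 = beta / gamma = 3, so a large outbreak is expected.
s, i, r = sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, r0=0.0)
```

Delay, stochastic, or fractional variants replace the two flow terms with delayed arguments, random increments, or fractional-order derivatives, respectively, while keeping the same compartment bookkeeping.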
Adolescent idiopathic scoliosis (AIS) progresses dynamically during growth, which requires long-term collaboration and effort from clinicians, patients, and their families. It would be beneficial to have precise interventions based on cross-scale understanding of the etiology, with real-time sensing and actuation to enable early detection, screening, and personalized treatment. We argue that merging computational intelligence and wearable technologies can bridge the gap between the current trajectory of the techniques applied to AIS and this vision. Wearable technologies such as inertial measurement units (IMUs) and surface electromyography (sEMG) have shown great potential in monitoring spinal curvature and muscle activity in real time. For instance, IMUs can track the kinematics of the spine during daily activities, while sEMG can detect asymmetric muscle activation patterns that may contribute to scoliosis progression. Computational intelligence, particularly deep learning algorithms, can process these multi-modal data streams to identify early signs of scoliosis and adapt treatment strategies dynamically. By combining the two, we can find potential solutions for a better understanding of the disease and more effective, intelligent approaches to treatment and rehabilitation.
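One simple way to quantify the asymmetric activation that sEMG can detect is a normalized asymmetry index over paired left/right channels. The RMS-based definition below is a common convention assumed here for illustration (the signal values and threshold interpretation are hypothetical, not taken from the text).

```python
import math

# Illustrative sEMG asymmetry index over paired paraspinal channels:
#   AI = (RMS_left - RMS_right) / (RMS_left + RMS_right)
# AI near 0 suggests symmetric activation; a persistently large |AI|
# could flag the asymmetric patterns mentioned above.

def rms(signal):
    """Root-mean-square amplitude of a windowed sEMG signal."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def asymmetry_index(left, right):
    rl, rr = rms(left), rms(right)
    return (rl - rr) / (rl + rr)

left = [0.2, -0.3, 0.25, -0.2]     # hypothetical left-channel window (mV)
right = [0.1, -0.15, 0.12, -0.1]   # hypothetical right-channel window (mV)
ai = asymmetry_index(left, right)  # positive: left side more active
```

In practice such indices would be computed over many windows and activities, and fed, together with IMU kinematics, into the learning models described above.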
Adiabatic holonomic gates possess the geometric robustness of adiabatic geometric phases, i.e., dependence only on the evolution path in parameter space and not on the evolution details of the quantum system, which, when coordinated with decoherence-free subspaces, permits additional resilience to a collectively dephasing environment. However, the previous scheme [Phys. Rev. Lett. 95, 130501 (2005)] of adiabatic holonomic quantum computation in decoherence-free subspaces requires four-body interactions that are challenging in practical implementation. In this work, we put forward a scheme to realize universal adiabatic holonomic quantum computation in decoherence-free subspaces using only realistically available two-body interactions, thereby avoiding the difficulty of implementing four-body interactions. Furthermore, an arbitrary one-qubit gate in our scheme can be realized by a single-shot implementation, which eliminates the need to combine multiple gates to realize such a gate.
Low Earth orbit (LEO) satellite networks have the advantages of low transmission delay and low deployment cost, playing an important role in providing reliable services to ground users. This paper studies an efficient inter-satellite cooperative computation offloading (ICCO) algorithm for LEO satellite networks. Specifically, an ICCO system model is constructed, which considers using neighboring satellites in the LEO satellite network to collaboratively process tasks generated by ground user terminals, effectively improving resource utilization efficiency. Additionally, the optimization objective of minimizing the system's task computation offloading delay and energy consumption is established and decoupled into two sub-problems. For computational resource allocation, the convexity of the problem is proved through theoretical derivation, and the Lagrange multiplier method is used to obtain the optimal allocation of computational resources. For the task offloading decision, a dynamic sticky binary particle swarm optimization algorithm is designed to obtain the offloading decision iteratively. Simulation results show that the ICCO algorithm can effectively reduce delay and energy consumption.
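A generic binary PSO conveys the flavor of the offloading-decision search. The sketch below uses the standard sigmoid transfer function over bit velocities; the "sticky" variant in the paper adds a stickiness term that is omitted here, and the toy fitness function (offloading every task is cheapest) is purely an assumption for demonstration.

```python
import math
import random

# Generic binary PSO sketch for an offloading decision vector: each bit
# decides whether a task is offloaded (1) or computed locally (0).

def fitness(bits):
    """Illustrative cost: offloading costs 1.5 per task, local compute 2.0."""
    return sum(1.5 if b else 2.0 for b in bits)   # lower is better

def binary_pso(n_bits=6, n_particles=8, iters=50):
    random.seed(1)   # fixed seed for reproducibility
    X = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    gbest = min(X, key=fitness)[:]
    for _ in range(iters):
        for p in range(n_particles):
            for d in range(n_bits):
                r1, r2 = random.random(), random.random()
                V[p][d] += r1 * (pbest[p][d] - X[p][d]) + r2 * (gbest[d] - X[p][d])
                prob = 1.0 / (1.0 + math.exp(-V[p][d]))   # sigmoid transfer
                X[p][d] = 1 if random.random() < prob else 0
            if fitness(X[p]) < fitness(pbest[p]):
                pbest[p] = X[p][:]
        gbest = min(pbest, key=fitness)[:]
    return gbest

best = binary_pso()   # search for a low-cost offloading vector
```

The sigmoid transfer function is what adapts continuous-velocity PSO to binary decision variables; the sticky mechanism additionally biases each bit toward retaining its recent value.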
Over-the-air computation (AirComp) enables federated learning (FL) to rapidly aggregate local models at the central server by using the waveform superposition property of the wireless channel. In this paper, a robust transmission scheme for an AirComp-based FL system with imperfect channel state information (CSI) is proposed. To model CSI uncertainty, an expectation-based error model is utilized. The main objective is to maximize the number of selected devices that meet the mean-squared error (MSE) requirements for model broadcast and model aggregation. The problem is formulated as a combinatorial optimization problem and is solved in two steps. First, the priority order of devices is determined by a sparsity-inducing procedure. Then, a feasibility detection scheme is used to select the maximum number of devices while guaranteeing that the MSE requirements are met. An alternating optimization (AO) scheme is used to transform the resulting nonconvex problem into two convex subproblems. Numerical results illustrate the effectiveness and robustness of the proposed scheme.
Recently, the Fog-Radio Access Network (F-RAN) has gained considerable attention because of its flexible architecture that allows rapid response to user requirements. In this paper, computational offloading in F-RAN is considered, where multiple User Equipments (UEs) offload their computational tasks to the F-RAN through fog nodes. Each UE can select one of the fog nodes to offload its task, and each fog node may serve multiple UEs. The tasks are computed by the fog nodes or further offloaded to the cloud via a capacity-limited fronthaul link. In order to compute all UEs' tasks quickly, joint optimization of the UE-fog association and the radio and computation resources of the F-RAN is proposed to minimize the maximum latency over all UEs. This min-max problem is formulated as a Mixed Integer Nonlinear Program (MINP). To tackle it, the MINP is first reformulated as a continuous optimization problem, and then the Majorization-Minimization (MM) method is used to find a solution. The MM approach that we develop is unconventional in that each MM subproblem is solved inexactly with the same provable convergence guarantee as exact MM, thereby reducing the complexity of each MM iteration. In addition, a cooperative offloading model is considered, where the fog nodes compress and forward their received signals to the cloud. Under this model, a similar min-max latency optimization problem is formulated and tackled by the inexact MM. Simulation results show that the proposed algorithms outperform several offloading strategies, and that cooperative offloading exploits transmission diversity better than noncooperative offloading, achieving better latency performance.
Biotechnological strategies for plastic depolymerization and recycling have emerged as transformative approaches to combat the global plastic pollution crisis, aligning with the principles of a sustainable and circular economy. Despite advances in engineering PET hydrolases, the degradation process is frequently compromised by product inhibition and the heterogeneity of final products, thereby obstructing subsequent PET recondensation and impeding the synthesis of high-value derivatives. In this work, we utilized previously devised computational strategies to redesign a thermostable DuraMHETase, achieving an apparent melting temperature of 72 ℃ in complex with MHET and a 6-fold higher total turnover number (TTN) toward MHET than the wild-type enzyme at 60 ℃. The fused enzyme system composed of DuraMHETase and TurboPETase demonstrated higher efficiency than other PET hydrolases and the separated dual-enzyme systems. Furthermore, we identified both exo- and endo-PETase activities in DuraMHETase, whereas the endo-activity was previously unobserved at ambient temperatures. These results expand the functional scope of MHETase beyond mere intermediate hydrolysis and may provide guidance for the development of more synergistic approaches to plastic biodepolymerization and recycling.
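The product inhibition noted above is commonly modeled with a competitive-inhibition extension of Michaelis-Menten kinetics. The sketch below uses that textbook rate law with placeholder constants; the values are assumptions for illustration, not measured parameters for MHETase or the fused enzyme system.

```python
# Michaelis-Menten rate with competitive product inhibition:
#   v = Vmax * S / (Km * (1 + P/Ki) + S)
# As product P accumulates, the effective Km rises and the rate drops,
# which is the self-limiting behavior described for PET hydrolysis.
# All constants are illustrative placeholders.

def mm_rate(s, p, vmax=1.0, km=0.5, ki=0.2):
    return vmax * s / (km * (1.0 + p / ki) + s)

v_no_inhib = mm_rate(s=1.0, p=0.0)
v_inhib = mm_rate(s=1.0, p=0.5)   # accumulated product slows hydrolysis
```

Fusing the downstream MHETase to the PETase, as described above, keeps the intermediate concentration low at the reaction site, which in this picture keeps P/Ki small and the rate closer to its uninhibited value.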
The integration of physics-based modelling and data-driven artificial intelligence (AI) has emerged as a transformative paradigm in computational mechanics. This perspective reviews the development and current status of AI-empowered frameworks, including data-driven methods, physics-informed neural networks, and neural operators. While these approaches have demonstrated significant promise, challenges remain in terms of robustness, generalisation, and computational efficiency. We delineate four promising research directions: (1) modular neural architectures inspired by traditional computational mechanics, (2) physics-informed neural operators for resolution-invariant operator learning, (3) intelligent frameworks for multiphysics and multiscale biomechanics problems, and (4) structural optimisation strategies based on physics constraints and reinforcement learning. These directions represent a shift toward foundational frameworks that combine the strengths of physics and data, opening new avenues for the modelling, simulation, and optimisation of complex physical systems.
This paper investigates the capabilities of large language models (LLMs) to leverage, learn, and create knowledge in solving computational fluid dynamics (CFD) problems through three categories of baseline problems. These categories include (1) conventional CFD problems that can be solved using numerical methods already known to LLMs, such as lid-driven cavity flow and the Sod shock tube problem; (2) problems that require new numerical methods beyond those available to LLMs, such as the recently developed Chien-physics-informed neural networks for singularly perturbed convection-diffusion equations; and (3) problems that cannot be solved using the numerical methods available to LLMs, such as ill-conditioned Hilbert linear algebraic systems. The evaluations indicate that reasoning LLMs overall outperform non-reasoning models in four test cases. Reasoning LLMs show excellent performance on CFD problems given tailored prompts, but their current capability for autonomous knowledge exploration and creation needs to be enhanced.
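To make the second category concrete, the sketch below solves the simplest singularly perturbed convection-diffusion model problem with a classical upwind finite-difference scheme (not the Chien-physics-informed method the paper tests; the equation, boundary values, and parameters are illustrative). The small diffusion coefficient eps creates the boundary layer at x = 1 that makes such problems hard for naive central differencing.

```python
import math

# Upwind finite differences for the 1-D steady convection-diffusion problem
#   -eps * u'' + u' = 0,   u(0) = 0, u(1) = 1,
# with exact solution u(x) = (exp(x/eps) - 1) / (exp(1/eps) - 1).
# Upwinding the convection term keeps the scheme monotone at the cost of
# some extra numerical diffusion.

def solve_convection_diffusion(eps=0.1, n=100):
    h = 1.0 / n
    # Row i (interior): a*u[i-1] + b*u[i] + c*u[i+1] = d[i]
    a = [-eps / h**2 - 1.0 / h] * (n - 1)     # sub-diagonal
    b = [2 * eps / h**2 + 1.0 / h] * (n - 1)  # diagonal
    c = [-eps / h**2] * (n - 1)               # super-diagonal
    d = [0.0] * (n - 1)
    d[-1] = (eps / h**2) * 1.0                # boundary condition u(1) = 1
    # Thomas algorithm (tridiagonal forward elimination + back substitution)
    for i in range(1, n - 1):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * (n - 1)
    u[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return [0.0] + u + [1.0]

u = solve_convection_diffusion()                              # 101 nodal values
exact_09 = (math.exp(9.0) - 1.0) / (math.exp(10.0) - 1.0)     # u(0.9) analytically
```

As eps shrinks, the layer steepens and a uniform grid no longer resolves it, which is exactly the regime where the specialized methods discussed in the paper become necessary.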
In this study, the flow characteristics around a group of three piers arranged in tandem were investigated both numerically and experimentally. The simulation utilised the volume of fluid (VOF) model in conjunction with the k–ε turbulence model, implemented through the ANSYS FLUENT software, to model the free-surface flow. The simulation results were validated against laboratory measurements obtained using an acoustic Doppler velocimeter. The comparative analysis revealed discrepancies between the simulated and measured maximum velocities within the investigated flow field. However, the numerical results demonstrated a distinct vortex-induced flow pattern following the first pier and throughout the vicinity of the entire pier group, which aligned reasonably well with the experimental data. In the heavily narrowed spaces between the piers, simulated velocity profiles were overestimated in the free-surface region and underestimated in the areas near the bed to mid-stream when compared to measurements. These discrepancies diminished away from the regions with intense vortices, indicating that the employed model was capable of simulating relatively less disturbed flow turbulence. Furthermore, velocity results from both simulations and measurements were compared based on velocity distributions at three different depth ratios (0.15, 0.40, and 0.62) to assess vortex characteristics around the piers. This comparison revealed consistent results between experimental and simulated data. This research contributes to a deeper understanding of flow dynamics around complex interactive pier systems, which is critical for designing stable and sustainable hydraulic structures. Furthermore, the insights gained from this study provide valuable information for engineers aiming to develop effective strategies for controlling scour and minimizing destructive vortex effects, thereby guiding the design and maintenance of sustainable infrastructure.
Within the prefrontal-cingulate cortex, abnormalities in coupling between neuronal networks can disturb emotion-cognition interactions, contributing to the development of mental disorders such as depression. Despite this understanding, the neural circuit mechanisms underlying this phenomenon remain elusive. In this study, we present a biophysical computational model encompassing three crucial regions: the dorsolateral prefrontal cortex, the subgenual anterior cingulate cortex, and the ventromedial prefrontal cortex. The objective is to investigate the role of coupling relationships within prefrontal-cingulate cortical networks in balancing emotional and cognitive processes. The numerical results confirm that coupling weights play a crucial role in the balance of emotional-cognitive networks. Furthermore, our model predicts the pathogenic mechanism of depression resulting from abnormalities in the subgenual cortex, and network functionality was restored through intervention in the dorsolateral prefrontal cortex. This study uses computational modeling techniques to provide an insightful explanation relevant to the diagnosis and treatment of depression.
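The role of coupling weights can be conveyed with a toy two-region firing-rate model. This is a generic sigmoidal rate model, not the paper's calibrated biophysical model: the region labels, coupling values, and dynamics are illustrative assumptions only.

```python
import math

# Toy two-region firing-rate model: each region's rate relaxes toward a
# sigmoidal function of the input it receives from the other region.
# Stronger reciprocal coupling settles the pair at a higher joint activity
# level, illustrating how coupling weights shape network balance.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulate(w_dlpfc_to_sgacc, w_sgacc_to_dlpfc, steps=500, dt=0.01):
    r_dlpfc, r_sgacc = 0.1, 0.1   # initial firing rates (arbitrary units)
    for _ in range(steps):
        d_dlpfc = -r_dlpfc + sigmoid(w_sgacc_to_dlpfc * r_sgacc)
        d_sgacc = -r_sgacc + sigmoid(w_dlpfc_to_sgacc * r_dlpfc)
        r_dlpfc += dt * d_dlpfc
        r_sgacc += dt * d_sgacc
    return r_dlpfc, r_sgacc

r_strong = simulate(2.0, 2.0)   # strong reciprocal coupling
r_weak = simulate(0.5, 0.5)     # weakened coupling, lower steady activity
```

A model of this general kind, scaled up with biophysically grounded parameters and a third region, is what allows the paper to probe how an abnormal subgenual node can be compensated by intervening at the dorsolateral prefrontal cortex.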
Machine learning (ML) has been increasingly adopted to solve engineering problems, with performance gauged by accuracy, efficiency, and security. Notably, blockchain technology (BT) has been added to ML when security is a particular concern. Nevertheless, a research gap remains: prevailing solutions focus primarily on data security using blockchain but ignore computational security, leaving the traditional ML process vulnerable to off-chain risks. Therefore, the research objective is to develop a novel ML-on-blockchain (MLOB) framework to ensure the security of both the data and the computational process. The central tenet is to place both on the blockchain, execute them as blockchain smart contracts, and protect the execution records on-chain. The framework is established by developing a prototype and is further calibrated using a case study of industrial inspection. It is shown that the MLOB framework, compared with existing solutions in which ML and BT are isolated, is superior in terms of security (successfully defending against corruption in six designed attack scenarios) and maintains accuracy (0.01% difference from the baseline), albeit with slightly compromised efficiency (0.231 s of added latency). The key finding is that MLOB can significantly enhance the computational security of engineering computing without increasing computing power demands. This finding can alleviate concerns regarding the computational resource requirements of ML-BT integration. With proper adaptation, the MLOB framework can inform various novel solutions to achieve computational security in broader engineering challenges.
Funding: supported by the Key Science and Technology Program of Henan Province, China (Grant Nos. 242102210147, 242102210027) and the Fujian Province Young and Middle-aged Teacher Education Research Project (Science and Technology Category) (No. JZ240101). (Corresponding author: Dong Yuan)
Abstract: Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm can better coordinate the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
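The core of the multi-objective decision step can be illustrated with a minimal sketch: each per-objective DDQN head yields one row of Q-values, and a weight vector on the simplex scalarizes them into a single greedy action. The RBFN-driven weight adaptation described in the abstract is omitted; `scalarize_q` and `select_action` are hypothetical helper names for illustration, not code from the paper.

```python
import numpy as np

def scalarize_q(q_per_objective, weights):
    """Combine per-objective Q-value rows into a single scalarized
    Q-vector over actions via a weighted sum (linear scalarization)."""
    q = np.asarray(q_per_objective, dtype=float)  # shape: (n_objectives, n_actions)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                               # keep weights on the simplex
    return w @ q

def select_action(q_per_objective, weights):
    """Greedy action under the current objective weighting."""
    return int(np.argmax(scalarize_q(q_per_objective, weights)))
```

Shifting the weight vector changes which objective dominates the greedy choice, which is the lever the dynamic weight-update mechanism operates on.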
Funding: supported in part by the National Natural Science Foundation of China (NSFC) under Grant 62371012 and in part by the Beijing Natural Science Foundation under Grant 4252001.
Abstract: As an essential element of intelligent transport systems, the Internet of Vehicles (IoV) has recently brought an immersive user experience. Meanwhile, the emergence of mobile edge computing (MEC) has enhanced the computational capability of vehicles, which reduces task processing latency and power consumption effectively and meets the quality-of-service requirements of vehicle users. However, there are still some problems in the MEC-assisted IoV system, such as poor connectivity and high cost. Unmanned aerial vehicles (UAVs) equipped with MEC servers have become a promising approach for providing communication and computing services to mobile vehicles. Hence, in this article, an optimal framework for the UAV-assisted MEC system for IoV to minimize the average system cost is presented. Through joint consideration of computational offloading decisions and computational resource allocation, the optimization problem of our proposed architecture is formulated to reduce system energy consumption and delay. To tackle this issue, the original non-convex problem is converted into a convex one, and a distributed optimal scheme based on the alternating direction method of multipliers (ADMM) is developed. The simulation results illustrate that the presented scheme enhances system performance dramatically compared with other schemes, and that the proposed scheme also converges well.
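The abstract does not spell out its ADMM formulation, so as a generic illustration of the alternating direction method of multipliers (not the paper's actual problem), a minimal ADMM for the lasso can be sketched: a quadratic x-update, a soft-threshold z-update, and a dual ascent step. `admm_lasso` is an illustrative name.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=300):
    """Minimal ADMM for: min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)     # u: scaled dual variable
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.inv(AtA + rho * np.eye(n))              # cache for the x-update
    for _ in range(iters):
        x = L @ (Atb + rho * (z - u))                     # x-update (quadratic solve)
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft-threshold
        u = u + x - z                                     # dual ascent
    return z
```

The appeal in a distributed MEC setting is that the x- and z-updates decompose across agents, with only the dual variable coordinating them.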
Funding: supported by the National Natural Science Foundation of China under Grant No. 62231012, the Natural Science Foundation for Outstanding Young Scholars of Heilongjiang Province under Grant YQ2020F001, and the Heilongjiang Province Postdoctoral General Foundation under Grant AUGA4110004923.
Abstract: Low Earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form a LEO satellite edge computing system, providing computing services for ground users worldwide. In this paper, the computation offloading problem and the resource allocation problem are formulated as a mixed integer nonlinear programming (MINLP) problem. This paper proposes a computation offloading algorithm based on the deep deterministic policy gradient (DDPG) to obtain the user offloading decisions and user uplink transmission power, and uses a convex optimization algorithm based on the Lagrange multiplier method to obtain the optimal MEC server resource allocation scheme. In addition, an expression for the suboptimal user local CPU cycles is derived by a relaxation method. Simulation results show that the proposed algorithm achieves excellent convergence and significantly reduces the system utility value compared with other algorithms, at an acceptable time cost.
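The Lagrange-multiplier step for server resource allocation admits a classic closed form in a simplified setting. Assuming the server minimizes total computing delay sum(c_i / f_i) over CPU-frequency allocations f_i with budget sum(f_i) = F (a standard simplification, not necessarily the paper's exact objective), stationarity of the Lagrangian sum(c_i / f_i) + lam * (sum(f_i) - F) gives f_i = sqrt(c_i / lam), and enforcing the budget yields the square-root allocation below.

```python
import numpy as np

def allocate_cycles(c, F):
    """Closed-form allocation minimizing sum(c_i / f_i) s.t. sum(f_i) = F:
    f_i = F * sqrt(c_i) / sum_j sqrt(c_j)."""
    c = np.asarray(c, dtype=float)  # c_i: CPU cycles demanded by task i
    s = np.sqrt(c)
    return F * s / s.sum()
```

For example, with cycle demands [1, 4] and budget 3, the allocation is [1, 2], giving a total delay of 1/1 + 4/2 = 3, which no other split of the budget can beat.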
Funding: supported by the National Natural Science Foundation of China (62202215), the Liaoning Province Applied Basic Research Program (Youth Special Project, 2023JH2/101600038), the Shenyang Youth Science and Technology Innovation Talent Support Program (RC220458), the Guangxuan Program of Shenyang Ligong University (SYLUGXRC202216), the Basic Research Special Funds for Undergraduate Universities in Liaoning Province (LJ212410144067), the Natural Science Foundation of Liaoning Province (2024-MS-113), and science and technology funds from the Liaoning Education Department (LJKZ0242).
Abstract: In the field of edge computing, achieving low-latency computational task offloading with limited resources is a critical research challenge, particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications. In scenarios where edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance, thereby increasing system latency. Furthermore, in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency issues. To address these challenges, we propose a novel and efficient Deep Reinforcement Learning (DRL)-based approach aimed at minimizing average task latency. The proposed method incorporates three offloading strategies: local computation, direct offloading to the edge server in the local region, and device-to-device (D2D)-assisted offloading to edge servers in other regions. We formulate the task offloading process as a complex latency minimization optimization problem. To solve it, we propose an advanced algorithm based on the Dueling Double Deep Q-Network (D3QN) architecture, incorporating the Prioritized Experience Replay (PER) mechanism. Experimental results demonstrate that, compared with existing offloading algorithms, the proposed method significantly reduces average task latency, enhances user experience, and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
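The PER mechanism can be sketched independently of the D3QN agent: transitions are sampled with probability proportional to |TD error|^alpha and reweighted by importance weights (N * P(i))^(-beta). The toy buffer below assumes the standard PER formulation (no sum-tree optimization, illustrative names) rather than the paper's exact variant.

```python
import numpy as np

class PrioritizedReplay:
    """Toy prioritized experience replay: P(i) ~ (|TD error| + eps)^alpha,
    with bias-correcting importance weights (N * P(i))^(-beta)."""
    def __init__(self, alpha=0.6, beta=0.4, eps=1e-3):
        self.alpha, self.beta, self.eps = alpha, beta, eps
        self.data, self.prios = [], []

    def add(self, transition, td_error):
        self.data.append(transition)
        self.prios.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, k, rng=np.random):
        p = np.asarray(self.prios)
        p = p / p.sum()
        idx = rng.choice(len(self.data), size=k, p=p)
        w = (len(self.data) * p[idx]) ** (-self.beta)
        w = w / w.max()  # normalize so the largest weight is 1
        return [self.data[i] for i in idx], idx, w
```

High-error transitions are replayed more often, while the importance weights scale down their gradient contribution so the learned values remain unbiased.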
基金partly funded by a BIST Ignite Programme grant from the Barcelona Institute of Science and Technology(Code:MOLOPEC)financial support from LICROX and SOREC2 EUFunded projects(Codes:951843 and 101084326)+7 种基金the BIST Program,and Severo Ochoa Programpartially funded by CEX2019-000910-S(MCIN/AEI/10.13039/501100011033 and PID2020-112650RBI00),Fundació Cellex,Fundació Mir-PuigGeneralitat de Catalunya through CERCAfunding from the European Union’s Horizon Europe research and innovation programme under the Marie Skłodowska-Curie grant agreement No 101081441financial support by the Agencia Estatal de Investigación(grant PRE2018-084881)the financial support by from the European Union’s Horizon Europe research and innovation programme under the Marie Skłodowska-Curie grant agreement No 101081441support from the MCIN/AEI JdC-F Fellowship(FJC2020-043223-I)the Severo Ochoa Excellence Postdoctoral Fellowship(CEX2019-000910-S).
Abstract: This study first demonstrates the potential of organic photoabsorbing blends in overcoming a critical limitation of metal oxide photoanodes in tandem modules: insufficient photogenerated current. Various organic blends, including PTB7-Th:FOIC, PTB7-Th:O6T-4F, PM6:Y6, and PM6:FM, were systematically tested. When coupled with electron transport layer (ETL) contacts, these blends exhibit exceptional charge separation and extraction, with PM6:Y6 achieving saturation photocurrents up to 16.8 mA cm^(-2) at 1.23 V_RHE (the oxygen evolution thermodynamic potential). For the first time, a tandem structure utilizing organic photoanodes has been computationally designed and fabricated, and the implementation of a double PM6:Y6 photoanode/photovoltaic structure resulted in photogenerated currents exceeding 7 mA cm^(-2) at 0 V_RHE (the hydrogen evolution thermodynamic potential) and anodic current onset potentials as low as -0.5 V_RHE. The organic-based approach presented here paves the way for further exploration of different blend combinations to target specific oxidative reactions by selecting precise donor/acceptor candidates among the many available.
Abstract: Mathematical modeling has become a cornerstone in understanding the complex dynamics of infectious diseases and chronic health conditions. With the advent of more refined computational techniques, researchers are now able to incorporate intricate features such as delays, stochastic effects, fractional dynamics, variable-order systems, and uncertainty into epidemic models. These advancements not only improve predictive accuracy but also enable deeper insights into disease transmission, control, and policy-making. Tashfeen et al.
基金by National Natural Science Foundation of China(No.62306083)the Postdoctoral Science Foundation of Heilongjiang Province of China(LBH-Z22175)the Ministry of Industry and Information Technology。
Abstract: Adolescent idiopathic scoliosis (AIS) follows a dynamic progression during growth, which requires long-term collaboration and effort from clinicians, patients, and their families. It would be beneficial to have precise interventions based on cross-scale understanding of the etiology, with real-time sensing and actuation to enable early detection, screening, and personalized treatment. We argue that merging computational intelligence and wearable technologies can bridge the gap between the current trajectory of the techniques applied to AIS and this vision. Wearable technologies such as inertial measurement units (IMUs) and surface electromyography (sEMG) have shown great potential for monitoring spinal curvature and muscle activity in real time. For instance, IMUs can track the kinematics of the spine during daily activities, while sEMG can detect asymmetric muscle activation patterns that may contribute to scoliosis progression. Computational intelligence, particularly deep learning algorithms, can process these multi-modal data streams to identify early signs of scoliosis and adapt treatment strategies dynamically. By combining the two, we can find potential solutions for a better understanding of the disease and more effective, intelligent approaches to treatment and rehabilitation.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 12305021).
Abstract: Adiabatic holonomic gates possess the geometric robustness of adiabatic geometric phases, i.e., dependence only on the evolution path in parameter space and not on the evolution details of the quantum system, which, when coordinated with decoherence-free subspaces, permits additional resilience to a collective dephasing environment. However, the previous scheme [Phys. Rev. Lett. 95, 130501 (2005)] of adiabatic holonomic quantum computation in decoherence-free subspaces requires four-body interactions that are challenging in practical implementation. In this work, we put forward a scheme to realize universal adiabatic holonomic quantum computation in decoherence-free subspaces using only realistically available two-body interactions, thereby avoiding the difficulty of implementing four-body interactions. Furthermore, an arbitrary one-qubit gate in our scheme can be realized by a single-shot implementation, which eliminates the need to combine multiple gates to realize such a gate.
Funding: supported in part by a Sub-Project of the 2020 National Key Research and Development Plan (No. 2020YFC1511704), Beijing Information Science and Technology University (Nos. 2020KYNH212, 2021CGZH302), the Beijing Science and Technology Project (Grant No. Z211100004421009), and in part by the National Natural Science Foundation of China (Grant No. 62301058).
Abstract: Low Earth orbit (LEO) satellite networks have the advantages of low transmission delay and low deployment cost, playing an important role in providing reliable services to ground users. This paper studies an efficient inter-satellite cooperative computation offloading (ICCO) algorithm for LEO satellite networks. Specifically, an ICCO system model is constructed, which considers using neighboring satellites in the LEO satellite network to collaboratively process tasks generated by ground user terminals, effectively improving resource utilization efficiency. Additionally, the optimization objective of minimizing the system's task computation offloading delay and energy consumption is established and decoupled into two sub-problems. For computational resource allocation, the convexity of the problem is proved through theoretical derivation, and the Lagrange multiplier method is used to obtain the optimal allocation of computational resources. For the task offloading decision, a dynamic sticky binary particle swarm optimization algorithm is designed to obtain the offloading decision iteratively. Simulation results show that the ICCO algorithm can effectively reduce delay and energy consumption.
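The sticky variant used by the paper is not detailed in the abstract; for orientation, a plain Kennedy-Eberhart binary PSO can be sketched, in which velocities are squashed through a sigmoid and read as bit probabilities over a binary offloading-decision vector. This is a generic sketch on a toy cost, not the paper's dynamic sticky algorithm.

```python
import numpy as np

def binary_pso(cost, n_bits, n_particles=20, iters=60, seed=0):
    """Plain binary PSO: sigmoid(velocity) gives the probability of each bit
    being 1; personal and global bests steer the velocity update."""
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, (n_particles, n_bits))
    V = np.zeros((n_particles, n_bits))
    pbest = X.copy()
    pbest_f = np.array([cost(x) for x in X])
    g, g_f = pbest[pbest_f.argmin()].copy(), pbest_f.min()
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (g - X)
        X = (rng.random(X.shape) < 1.0 / (1.0 + np.exp(-V))).astype(int)
        f = np.array([cost(x) for x in X])
        better = f < pbest_f
        pbest[better], pbest_f[better] = X[better], f[better]
        if f.min() < g_f:
            g, g_f = X[f.argmin()].copy(), f.min()
    return g, g_f
```

In an offloading setting each bit would encode whether a task is processed locally or offloaded to a neighboring satellite, with `cost` evaluating the resulting delay-energy objective.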
Abstract: Over-the-air computation (AirComp) enables federated learning (FL) to rapidly aggregate local models at the central server by exploiting the waveform superposition property of the wireless channel. In this paper, a robust transmission scheme for an AirComp-based FL system with imperfect channel state information (CSI) is proposed. To model CSI uncertainty, an expectation-based error model is utilized. The main objective is to maximize the number of selected devices that meet the mean-squared error (MSE) requirements for model broadcast and model aggregation. The problem is formulated as a combinatorial optimization problem and solved in two steps. First, the priority order of devices is determined by a sparsity-inducing procedure. Then, a feasibility detection scheme is used to select the maximum number of devices for which the MSE requirements are guaranteed to be met. An alternating optimization (AO) scheme is used to transform the resulting nonconvex problem into two convex subproblems. Numerical results illustrate the effectiveness and robustness of the proposed scheme.
基金supported in part by the Natural Science Foundation of China (62171110,U19B2028 and U20B2070)。
Abstract: Recently, the Fog Radio Access Network (F-RAN) has gained considerable attention because of its flexible architecture that allows rapid response to user requirements. In this paper, computational offloading in F-RAN is considered, where multiple User Equipments (UEs) offload their computational tasks to the F-RAN through fog nodes. Each UE can select one fog node to offload its task, and each fog node may serve multiple UEs. The tasks are computed by the fog nodes or further offloaded to the cloud via a capacity-limited fronthaul link. In order to compute all UEs' tasks quickly, joint optimization of the UE-fog association and the radio and computation resources of the F-RAN is proposed to minimize the maximum latency over all UEs. This min-max problem is formulated as a Mixed Integer Nonlinear Program (MINP). To tackle it, the MINP is first reformulated as a continuous optimization problem, and then the Majorization Minimization (MM) method is used to find a solution. The MM approach we develop is unconventional in that each MM subproblem is solved inexactly with the same provable convergence guarantee as exact MM, thereby reducing the complexity of the MM iterations. In addition, a cooperative offloading model is considered, where the fog nodes compress-and-forward their received signals to the cloud. Under this model, a similar min-max latency optimization problem is formulated and tackled by the inexact MM. Simulation results show that the proposed algorithms outperform several offloading strategies, and that cooperative offloading can exploit transmission diversity better than noncooperative offloading to achieve better latency performance.
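The MM principle behind the solver can be seen on a toy scalar problem (not the paper's min-max latency program): a term that is concave in x^2 is upper-bounded by its tangent, and minimizing the resulting quadratic surrogate drives the true objective down monotonically. The paper's contribution is solving each such subproblem inexactly with the same guarantee; the sketch below solves it in closed form for clarity.

```python
import math

def mm_minimize(x0=5.0, eps=1e-6, iters=50):
    """Exact MM for the toy problem h(x) = sqrt(x^2 + eps) + 0.5*(x - 1)^2.
    Since sqrt(t + eps) is concave in t = x^2, its tangent at x_k gives the
    quadratic majorizer x^2 / (2*s_k) + const with s_k = sqrt(x_k^2 + eps);
    minimizing (surrogate + quadratic term) in closed form yields the update
    x_{k+1} = s_k / (s_k + 1), and h(x_k) decreases monotonically."""
    h = lambda x: math.sqrt(x * x + eps) + 0.5 * (x - 1.0) ** 2
    x, hist = x0, [h(x0)]
    for _ in range(iters):
        s = math.sqrt(x * x + eps)
        x = s / (s + 1.0)  # exact minimizer of the majorizing surrogate
        hist.append(h(x))
    return x, hist
```

The monotone-descent property follows because the surrogate touches h at x_k and lies above it everywhere else, so any point that lowers the surrogate also lowers h; this is exactly the property the inexact variant preserves.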
Abstract: Biotechnological strategies for plastic depolymerization and recycling have emerged as transformative approaches to combat the global plastic pollution crisis, aligning with the principles of a sustainable and circular economy. Despite advances in engineering PET hydrolases, the degradation process is frequently compromised by product inhibition and the heterogeneity of the final products, thereby obstructing subsequent PET recondensation and impeding the synthesis of high-value derivatives. In this work, we utilized previously devised computational strategies to redesign a thermostable DuraMHETase, achieving an apparent melting temperature of 72 °C in complex with MHET and a 6-fold higher total turnover number (TTN) toward MHET than the wild-type enzyme at 60 °C. The fused enzyme system composed of DuraMHETase and TurboPETase demonstrated higher efficiency than other PET hydrolases and the separated dual-enzyme systems. Furthermore, we identified both exo- and endo-PETase activities in DuraMHETase, whereas the endo-activity was previously unobserved at ambient temperatures. These results expand the functional scope of MHETase beyond mere intermediate hydrolysis and may provide guidance for the development of more synergistic approaches to plastic biodepolymerization and recycling.
基金supported by the Australian Research Council(Grant No.IC190100020)the Australian Research Council Indus〓〓try Fellowship(Grant No.IE230100435)the National Natural Science Foundation of China(Grant Nos.12032014 and T2488101)。
Abstract: The integration of physics-based modelling and data-driven artificial intelligence (AI) has emerged as a transformative paradigm in computational mechanics. This perspective reviews the development and current status of AI-empowered frameworks, including data-driven methods, physics-informed neural networks, and neural operators. While these approaches have demonstrated significant promise, challenges remain in terms of robustness, generalisation, and computational efficiency. We delineate four promising research directions: (1) modular neural architectures inspired by traditional computational mechanics, (2) physics-informed neural operators for resolution-invariant operator learning, (3) intelligent frameworks for multiphysics and multiscale biomechanics problems, and (4) structural optimisation strategies based on physics constraints and reinforcement learning. These directions represent a shift toward foundational frameworks that combine the strengths of physics and data, opening new avenues for the modelling, simulation, and optimisation of complex physical systems.
Funding: supported by the National Natural Science Foundation of China Basic Science Center Program for “Multiscale Problems in Nonlinear Mechanics” (Grant No. 11988102) and the National Natural Science Foundation of China (Grant No. 12202451).
Abstract: This paper investigates the capabilities of large language models (LLMs) to leverage, learn, and create knowledge in solving computational fluid dynamics (CFD) problems through three categories of baseline problems. These categories include (1) conventional CFD problems that can be solved using numerical methods already available to LLMs, such as lid-driven cavity flow and the Sod shock tube problem; (2) problems that require new numerical methods beyond those available to LLMs, such as the recently developed Chien-physics-informed neural networks for singularly perturbed convection-diffusion equations; and (3) problems that cannot be solved using numerical methods available to LLMs, such as ill-conditioned Hilbert linear algebraic systems. The evaluations indicate that reasoning LLMs overall outperform non-reasoning models in four test cases. Reasoning LLMs show excellent performance on CFD problems when given tailored prompts, but their current capability for autonomous knowledge exploration and creation needs to be enhanced.
Abstract: In this study, the flow characteristics around a group of three piers arranged in tandem were investigated both numerically and experimentally. The simulation utilised the volume of fluid (VOF) model in conjunction with the k–ε method (i.e., for flow turbulence representation), implemented through the ANSYS FLUENT software, to model the free-surface flow. The simulation results were validated against laboratory measurements obtained using an acoustic Doppler velocimeter. The comparative analysis revealed discrepancies between the simulated and measured maximum velocities within the investigated flow field. However, the numerical results demonstrated a distinct vortex-induced flow pattern following the first pier and throughout the vicinity of the entire pier group, which aligned reasonably well with the experimental data. In the heavily narrowed spaces between the piers, simulated velocity profiles were overestimated in the free-surface region and underestimated in the areas from near the bed to mid-stream when compared with measurements. These discrepancies diminished away from the regions with intense vortices, indicating that the employed model was capable of simulating relatively less disturbed flow turbulence. Furthermore, velocity results from both simulations and measurements were compared based on velocity distributions at three different depth ratios (0.15, 0.40, and 0.62) to assess vortex characteristics around the piers. This comparison revealed consistent results between the experimental and simulated data. This research contributes to a deeper understanding of flow dynamics around complex interactive pier systems, which is critical for designing stable and sustainable hydraulic structures. Furthermore, the insights gained from this study provide valuable information for engineers aiming to develop effective strategies for controlling scour and minimizing destructive vortex effects, thereby guiding the design and maintenance of sustainable infrastructure.
Funding: supported by the Major Research Instrument Development Project of the National Natural Science Foundation of China (82327810), the Foundation of the President of Hebei University (XZJJ202202), and the Hebei Province “333 Talent Project” (A202101058).
Abstract: Within the prefrontal-cingulate cortex, abnormalities in coupling between neuronal networks can disturb emotion-cognition interactions, contributing to the development of mental disorders such as depression. Despite this understanding, the neural circuit mechanisms underlying this phenomenon remain elusive. In this study, we present a biophysical computational model encompassing three crucial regions: the dorsolateral prefrontal cortex, the subgenual anterior cingulate cortex, and the ventromedial prefrontal cortex. The objective is to investigate the role of coupling relationships within prefrontal-cingulate cortex networks in balancing emotional and cognitive processes. The numerical results confirm that the coupling weights play a crucial role in the balance of emotional-cognitive networks. Furthermore, our model predicts the pathogenic mechanism of depression resulting from abnormalities in the subgenual cortex, and network functionality was restored through intervention in the dorsolateral prefrontal cortex. This study utilizes computational modeling techniques to provide insight into the diagnosis and treatment of depression.
Abstract: Machine learning (ML) has been increasingly adopted to solve engineering problems, with performance gauged by accuracy, efficiency, and security. Notably, blockchain technology (BT) has been added to ML when security is a particular concern. Nevertheless, there is a research gap: prevailing solutions focus primarily on data security using blockchain but ignore computational security, leaving the traditional ML process vulnerable to off-chain risks. Therefore, the research objective is to develop a novel ML-on-blockchain (MLOB) framework to ensure both data and computational process security. The central tenet is to place both on the blockchain, execute them as blockchain smart contracts, and protect the execution records on-chain. The framework is established by developing a prototype and further calibrated using a case study of industrial inspection. It is shown that the MLOB framework, compared with existing solutions in which ML and BT are isolated, is superior in terms of security (successfully defending against corruption in six designed attack scenarios) and maintains accuracy (0.01% difference from the baseline), albeit with slightly compromised efficiency (a 0.231-second latency increase). The key finding is that MLOB can significantly enhance the computational security of engineering computing without increasing computing power demands. This finding can alleviate concerns regarding the computational resource requirements of ML-BT integration. With proper adaptation, the MLOB framework can inform various novel solutions achieving computational security in broader engineering challenges.
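The tamper-evidence that on-chain execution records provide can be sketched as a simple hash chain (a toy illustration, not the MLOB smart-contract implementation): each record's hash commits to its predecessor, so altering any step of the recorded ML computation invalidates verification of every record downstream.

```python
import hashlib
import json

def record_hash(prev_hash, record):
    """Hash a computation record chained to its predecessor; records are
    serialized canonically so the digest is reproducible."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

def build_chain(records, genesis="0" * 64):
    """Produce the hash chain for an ordered list of execution records."""
    hashes, h = [], genesis
    for r in records:
        h = record_hash(h, r)
        hashes.append(h)
    return hashes

def verify_chain(records, hashes, genesis="0" * 64):
    """Recompute the chain and compare; any tampering breaks the match."""
    h = genesis
    for r, expected in zip(records, hashes):
        h = record_hash(h, r)
        if h != expected:
            return False
    return True
```

On an actual blockchain the hashes would be written by smart contracts and replicated across nodes; the sketch only shows why recorded inputs, parameters, and outputs cannot be silently rewritten after the fact.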