Funding: Supported in part by the National Natural Science Foundation of China under Grant 52307134 and by the Fundamental Research Funds for the Central Universities (xzy012025022).
Abstract: Active distribution network (ADN) planning is crucial for achieving a cost-effective transition to modern power systems, yet it poses significant challenges as the system scale increases. The advent of quantum computing offers a transformative approach to ADN planning. To fully leverage the potential of quantum computing, this paper proposes a photonic quantum acceleration algorithm. First, a quantum-accelerated framework for ADN planning is proposed on the basis of coherent photonic quantum computers. The ADN planning model is then formulated and decomposed into a discrete master problem and continuous subproblems to facilitate the quantum optimization process. The photonic quantum-embedded adaptive alternating direction method of multipliers (PQA-ADMM) algorithm is subsequently proposed to equivalently map the discrete master problem onto a quantum-interpretable model, enabling its deployment on a photonic quantum computer. Finally, a comparative analysis with various solvers, including Gurobi, demonstrates that the proposed PQA-ADMM algorithm achieves significant speedup on the modified IEEE 33-node and IEEE 123-node systems, highlighting its effectiveness.
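As a rough illustration of the decomposition pattern described above, the sketch below applies textbook consensus ADMM to a toy mixed-integer quadratic problem, with the discrete master step reduced to a binary projection. It is not the paper's PQA-ADMM, whose discrete step is mapped onto a photonic quantum computer; all problem data are synthetic.

```python
import numpy as np

# Consensus ADMM for a toy problem:
#   minimize 0.5*||A @ x - b||^2  subject to  x in {0,1}^n.
# The continuous subproblem is a ridge-regularized least-squares solve;
# the "discrete master" step is a simple projection onto {0,1}.

rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n))
x_true = rng.integers(0, 2, size=n).astype(float)
b = A @ x_true

rho = 1.0
x = np.zeros(n)              # continuous copy
z = np.zeros(n)              # discrete copy
u = np.zeros(n)              # scaled dual variable

AtA, Atb = A.T @ A, A.T @ b
lhs = AtA + rho * np.eye(n)  # factor once; reused every iteration

for _ in range(200):
    # Continuous subproblem: (AtA + rho*I) x = Atb + rho*(z - u).
    x = np.linalg.solve(lhs, Atb + rho * (z - u))
    # Discrete master step: project x + u onto the binary set {0,1}.
    z = (x + u > 0.5).astype(float)
    # Dual update drives the two copies toward consensus.
    u += x - z

print("recovered:   ", z)
print("ground truth:", x_true)
```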
Abstract: In this paper, sticker-based DNA computing was used to solve the independent set problem. First, the solution space was constructed using appropriate DNA memory complexes. We defined a new operation called "divide" and applied it in the construction of the solution space. Then, by applying a sticker-based parallel algorithm using biological operations, the independent set problem was solved in polynomial time.
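For reference, the classical formulation of the problem the DNA algorithm attacks can be stated in a few lines of Python; the exhaustive search below is exponential in the number of vertices, which is precisely the cost the sticker model's massive molecular parallelism sidesteps. The toy graph is illustrative only.

```python
from itertools import combinations

# Brute-force reference for the (maximum) independent set problem:
# find the largest vertex set in which no two vertices share an edge.

def is_independent(vertices, edges):
    return all(not (u in vertices and v in vertices) for u, v in edges)

def max_independent_set(n, edges):
    # Try candidate sets from largest to smallest; return the first hit.
    for k in range(n, 0, -1):
        for cand in combinations(range(n), k):
            if is_independent(set(cand), edges):
                return set(cand)
    return set()

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # toy 4-vertex graph
print(max_independent_set(4, edges))              # -> {1, 3}
```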
Abstract: Deep vein thrombosis (DVT) is a common vascular event that is potentially fatal when it leads to pulmonary embolism. Occurring as part of the broader phenomenon of venous thromboembolism (VTE), DVT classically arises when Virchow's triad of hypercoagulability, changes in blood flow (e.g. stasis), and endothelial dysfunction is fulfilled. Although such immobilisation is most often seen in bedbound patients and travellers on long-distance flights, there is increasing evidence that prolonged periods of work or leisure spent using computers while seated at a desk are an independent risk factor. In this report, we present two cases of "e-thrombosis" resulting from prolonged sitting while using a computer.
Abstract: Although AI and quantum computing (QC) are fast emerging as key enablers of the future Internet, experts believe they pose an existential threat to humanity. Responding to the frenzied release of ChatGPT/GPT-4, thousands of alarmed tech leaders recently signed an open letter calling for a pause in AI research to prepare for the catastrophic threats to humanity from uncontrolled AGI (Artificial General Intelligence). Perceived as an "epistemological nightmare", AGI is believed to be on the anvil with GPT-5. Two computing rules appear responsible for these risks: 1) mandatory third-party permissions, which allow computers to run applications at the expense of introducing vulnerabilities; and 2) the Halting Problem of Turing-complete AI programming languages, which potentially renders AGI unstoppable. The double whammy of these inherent weaknesses remains insurmountable under legacy systems. A recent cybersecurity breakthrough shows that banning all permissions reduces the computer attack surface to zero, delivering a new zero vulnerability computing (ZVC) paradigm. Deploying ZVC and blockchain, this paper formulates and supports a hypothesis: "Safe, secure, ethical, controllable AGI/QC is possible by conquering the two unassailable rules of computability." Pursued by a European consortium, testing/proving the proposed hypothesis will have a groundbreaking impact on the future digital infrastructure when AGI/QC starts powering the 75 billion internet devices by 2025.
Abstract: We are already familiar with computers: computers work for us at home, in offices and in factories. But it is also true that many children today are using computers at school before they can write. What does this mean for the future? Are these children lucky or not?
Funding: Supported by the National Fundamental Research Program under Grant No. 2006CB921106 and the National Natural Science Foundation of China under Grant Nos. 10325521 and 60433050.
Abstract: In this letter, we propose a duality computing mode, which resembles the particle-wave duality property when a quantum system such as a quantum computer passes through a double slit. In this mode, computing operations are not necessarily unitary. The duality mode provides a natural link between classical computing and quantum computing. In addition, the duality mode provides a new tool for quantum algorithm design.
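As a toy numerical illustration (not drawn from the letter itself), the sketch below forms a linear combination of two single-qubit unitaries, the kind of non-unitary operation the duality mode admits, and applies it to a state. Gates and coefficients are arbitrary choices.

```python
import numpy as np

# A sum of unitaries is generally NOT unitary, unlike ordinary gates.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard (unitary)
Z = np.array([[1, 0], [0, -1]], dtype=float)   # Pauli-Z (unitary)

M = 0.5 * H + 0.5 * Z                          # linear combination
print(np.allclose(M.conj().T @ M, np.eye(2)))  # False: M is non-unitary

psi = np.array([1.0, 0.0])    # |0>
phi = M @ psi                 # resulting (unnormalized) state
norm = np.linalg.norm(phi)    # success amplitude if realized probabilistically
print(phi / norm, norm**2)
```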
Abstract: A new approach to the implementation of variogram models and ordinary kriging using the R statistical language, in conjunction with Fortran, MPI (Message Passing Interface), and the "pbdDMAT" package within R, on the Bridges and Stampede Supercomputers is described. This new technique has led to great improvements in timing compared with R alone, or R with C and MPI. These improvements include processing and forecasting vectors of size 25,000 in an average time of 6 minutes on the Stampede Supercomputer and 2.5 minutes on the Bridges Supercomputer, compared with previous processing times of 3.5 hours.
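For readers unfamiliar with the method itself, a single-process ordinary-kriging sketch is given below, assuming an exponential covariance model with illustrative parameters; the paper's distributed R/Fortran/MPI/pbdDMAT implementation is not reproduced here.

```python
import numpy as np

# Ordinary kriging: solve the kriging system
#   [C  1] [w ]   [c0]
#   [1' 0] [mu] = [1 ]
# where C is the data covariance matrix and c0 the covariances to the
# prediction point; the prediction is w @ y.

def exp_cov(h, sill=1.0, corr_len=10.0):
    return sill * np.exp(-h / corr_len)

def ordinary_krige(X, y, x0):
    n = len(y)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    K = np.ones((n + 1, n + 1))          # border row/col enforce unbiasedness
    K[:n, :n] = exp_cov(D)
    K[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = exp_cov(np.linalg.norm(X - x0, axis=1))
    w = np.linalg.solve(K, rhs)[:n]      # drop the Lagrange multiplier
    return w @ y

gen = np.random.default_rng(1)
X = gen.uniform(0, 100, size=(50, 2))    # 50 sample locations
y = np.sin(X[:, 0] / 20) + 0.1 * gen.normal(size=50)
print(ordinary_krige(X, y, np.array([50.0, 50.0])))
```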
Funding: Supported by the National Natural Science Foundation of China (12325501 and 12447101).
Abstract: In the effort to develop useful quantum computers, simulating quantum machines with conventional classical computing resources is a key capability. Such simulations will always face limits, preventing the emulation of quantum computers at substantial scale; however, by pushing the envelope through optimal choices of algorithms and hardware, the value of simulator tools can be maximized. This work reviews state-of-the-art numerical simulation methods, i.e., classical algorithms that emulate quantum computer evolution under specific operations. We focus on the mainstream state-vector and tensor-network paradigms, while briefly mentioning alternative methods. Moreover, we review the diverse applications of simulation across different facets of quantum computer development, including understanding the fundamental differences between quantum and classical computations, exploring algorithmic design for quantum advantage, predicting quantum processor performance at the design stage, and efficiently characterizing fabricated devices for rapid iterations. This review complements recent surveys of current tools and implementations; here, we aim to provide readers with an essential understanding of the theoretical basis of classical simulation methods, a detailed discussion of their advantages and limitations, and an overview of the demands and challenges arising from practical use cases.
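The state-vector paradigm mentioned above can be illustrated in a few lines: an n-qubit state is a length-2^n complex vector, and a one-qubit gate is applied by reshaping so the target qubit becomes its own tensor axis. This minimal sketch (a generic textbook technique, not code from the review) also makes the 2^n memory scaling that limits such simulators directly visible.

```python
import numpy as np

def apply_1q_gate(state, gate, target, n):
    # View the flat vector as an n-way tensor, one axis per qubit.
    psi = state.reshape([2] * n)
    # Contract the gate with the target qubit's axis.
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)   # restore the qubit ordering
    return psi.reshape(-1)

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                          # |000>, qubit 0 leftmost
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = apply_1q_gate(state, H, target=0, n=n)
print(np.round(state, 3))               # (|000> + |100>)/sqrt(2)
```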
Funding: Supported by the National Natural Science Foundation of China (Grant No. 12174224).
Abstract: Nonadiabatic holonomic quantum computers serve as the physical platform for nonadiabatic holonomic quantum computation. As quantum computation has entered the noisy intermediate-scale era, building accurate intermediate-scale nonadiabatic holonomic quantum computers is clearly necessary. Given that measurements are the sole means of extracting information, they play an indispensable role in nonadiabatic holonomic quantum computers. Accordingly, developing methods to reduce measurement errors in nonadiabatic holonomic quantum computers is of great importance. However, while much attention has been given to research on nonadiabatic holonomic gates, research on reducing measurement errors in nonadiabatic holonomic quantum computers is severely lacking. In this study, we propose a measurement error reduction method tailored for intermediate-scale nonadiabatic holonomic quantum computers. We describe it as tailored because our method can not only reduce measurement errors in the computer but also help mitigate errors originating from nonadiabatic holonomic gates. Given these features, our method significantly advances the construction of accurate intermediate-scale nonadiabatic holonomic quantum computers.
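The paper's own method is not reproduced here, but a textbook baseline makes the goal concrete: readout errors can be partially undone by inverting a measured calibration matrix, as in the hypothetical single-qubit example below (error rates invented for illustration).

```python
import numpy as np

# Column j of A = distribution of measured outcomes when |j> is prepared.
p01, p10 = 0.05, 0.08          # P(read 1 | prepared 0), P(read 0 | prepared 1)
A = np.array([[1 - p01, p10],
              [p01, 1 - p10]])

true_p = np.array([0.7, 0.3])  # ideal single-qubit outcome probabilities
noisy_p = A @ true_p           # what the noisy readout reports

mitigated = np.linalg.solve(A, noisy_p)  # invert the calibration map
mitigated = np.clip(mitigated, 0, None)
mitigated /= mitigated.sum()             # re-project onto a distribution
print(noisy_p, mitigated)                # mitigated ~ [0.7, 0.3]
```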
Abstract: Background: Describing where distribution hotspots and coldspots are located is crucial for any science-based species management and governance. Thus, we created the world's first Super Species Distribution Models (SDMs), including all described primate species and the best available predictor set. These Super SDMs are built using an ensemble of modern machine learning algorithms, including Maxent, TreeNet, Random Forest, CART, CART Boosting and Bagging, and MARS, with the utilization of cloud supercomputers (as an add-on option for more powerful models). For the global cold/hotspot models, we obtained global distribution data from www.GBIF.org (approx. 420,000 raw occurrence records) and utilized the world's largest open-access environmental predictor set of 201 layers. For this analysis, all occurrences were merged into one multi-species (400+ species) pixel-based analysis. Results: We present the first quantified pixel-based global primate hotspot prediction for Central and Northern South America, West Africa, East Africa, Southeast Asia, Central Asia, and Southern Africa. The global primate coldspots are Antarctica, the Arctic, most temperate regions, and Oceania past the Wallace Line. We additionally describe all these modeled hotspots/coldspots and discuss the reasons behind this quantified understanding of where the world's non-human primates occur (or do not). Conclusions: This shows us where the focus of most future research and conservation management efforts should be, using state-of-the-art digital data indication tools with reasoning. These areas should be considered of the highest conservation management priority, ideally following 'no killing zones' and sustainable land stewardship approaches, if primates are to have a chance of survival.
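To make the modeling pipeline concrete, here is a schematic sketch of one ensemble member (Random Forest) trained on presence versus background points; the data are synthetic stand-ins for the GBIF occurrences and the 201 predictor layers, and no part of the paper's actual workflow is reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

gen = np.random.default_rng(0)
n_presence, n_background, n_predictors = 500, 500, 10

# Presences cluster in one region of predictor space; background is uniform.
presence = gen.normal(loc=0.7, scale=0.15, size=(n_presence, n_predictors))
background = gen.uniform(0, 1, size=(n_background, n_predictors))

X = np.vstack([presence, background])
y = np.concatenate([np.ones(n_presence), np.zeros(n_background)])

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# "Hotspot" score for a new grid cell = predicted occurrence probability.
cell = gen.uniform(0, 1, size=(1, n_predictors))
print(model.predict_proba(cell)[0, 1])
```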
Abstract: We consider the relevance of computer hardware and simulations not only to science and technology but also to social life. Evolutionary processes are part of all we know, from the physical and inanimate world to the simplest or most complex biological system. Evolution is manifested by landmark discoveries which deeply affect our social life. Demographic pressure, demand for improved living standards, and devastation of the natural environment pose new and complex challenges. We believe that the implementation of new computational models based on the latest scientific methodology can provide a reasonable chance of overcoming today's social problems. To ensure this goal, however, we need a change of mindset, placing findings obtained from modern science above traditional concepts and beliefs. In particular, the type of modeling used with success in the computational sciences must be extended to allow simulations of novel models for social life.
Funding: Supported by the National Key Research and Development Program of China under Grant 2022YFB3608300, and in part by the National Natural Science Foundation of China (NSFC) under Grants 62404050, U2341218, 62574056, and 62204052.
Abstract: Organic electrochemical transistor (OECT) devices show great promise for reservoir computing (RC) systems, but their lack of tunable dynamic characteristics limits their application in multi-temporal-scale tasks. In this study, we report an OECT-based neuromorphic device with tunable relaxation time (τ), achieved by introducing an additional vertical back-gate electrode into a planar structure. The dual-gate design enables τ reconfiguration from 93 to 541 ms. The tunable relaxation behaviors can be attributed to the combined effects of planar-gate-induced electrochemical doping and back-gate-induced electrostatic coupling, as verified by electrochemical impedance spectroscopy analysis. Furthermore, we used the τ-tunable OECT devices as physical reservoirs in an RC system for intelligent driving trajectory prediction, achieving a significant improvement in prediction accuracy from below 69% to 99%. These results demonstrate that the τ-tunable OECT is a promising candidate for multi-temporal-scale neuromorphic computing applications.
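As context for how such a device is used computationally, the sketch below emulates the RC pipeline with a software leaky-integrator reservoir whose relaxation time plays the role of the device's τ; only the linear readout is trained. All constants are illustrative, and the paper's trajectory-prediction task is replaced by a simple delayed-recall task.

```python
import numpy as np

gen = np.random.default_rng(0)
T, n_nodes = 2000, 50
W_in = gen.normal(scale=0.5, size=n_nodes)

def run_reservoir(u, tau_ms, dt_ms=10.0):
    leak = dt_ms / tau_ms             # faster leak = shorter memory
    x = np.zeros(n_nodes)
    states = np.empty((len(u), n_nodes))
    for t, ut in enumerate(u):
        # Leaky-integrator update standing in for the device dynamics.
        x = (1 - leak) * x + leak * np.tanh(W_in * ut + 0.1 * x.sum())
        states[t] = x
    return states

u = gen.uniform(-1, 1, size=T)
target = np.roll(u, 3)                # task: recall the input 3 steps ago

S = run_reservoir(u, tau_ms=541)      # one of the paper's reported taus
ridge = 1e-6                          # ridge-regression readout
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_nodes), S.T @ target)
pred = S @ W_out
print(np.corrcoef(pred[10:], target[10:])[0, 1])
```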
Funding: The authors thank Mr. Xiaoqiang Yue and Mr. Zheng Li from Xiangtan University for their assistance with our numerical experiments. Feng is partially supported by NSFC Grant 11201398, the Program for Changjiang Scholars and Innovative Research Team in University of China Grant IRT1179, and the Specialized Research Fund for the Doctoral Program of Higher Education of China Grant 20124301110003. Shu is partially supported by NSFC Grants 91130002 and 11171281, and the Scientific Research Fund of the Hunan Provincial Education Department of China Grant 12A138. Xu is partially supported by NSFC Grant 91130011 and NSF DMS-1217142. Zhang is partially supported by the Dean Startup Fund, Academy of Mathematics and System Sciences, and by NSFC Grant 91130011.
Abstract: The geometric multigrid method (GMG) is one of the most efficient solution techniques for discrete algebraic systems arising from elliptic partial differential equations. GMG utilizes a hierarchy of grids or discretizations and reduces the error at a number of frequencies simultaneously. Graphics processing units (GPUs) have recently burst onto the scientific computing scene as a technology that has yielded substantial performance and energy-efficiency improvements. A central challenge in implementing GMG on GPUs, though, is that computational work on coarse levels cannot fully utilize the capacity of a GPU. In this work, we perform numerical studies of GMG on CPU-GPU heterogeneous computers. Furthermore, we compare our implementation with an efficient CPU implementation of GMG and with the most popular fast Poisson solver, the Fast Fourier Transform, as implemented in the cuFFT library developed by NVIDIA.
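A minimal two-grid cycle for the 1D Poisson equation -u'' = f, the textbook core of GMG, is sketched below; a production GMG code recurses over a full grid hierarchy and, as studied in the paper, offloads fine-grid work to the GPU. Grid sizes and sweep counts are illustrative.

```python
import numpy as np

def smooth(u, f, h, sweeps=3):
    for _ in range(sweeps):       # weighted-Jacobi smoother, weight 0.8
        u[1:-1] += 0.8 * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]
                                - 2 * u[1:-1])
    return u

def two_grid(u, f, h):
    u = smooth(u, f, h)                          # pre-smoothing
    r = np.zeros_like(u)                         # residual r = f - A u
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    rc = r[::2].copy()                           # restrict (injection)
    n2 = len(rc)
    # Coarse solve: direct solve with the coarse-grid 1D Laplacian.
    A = (np.diag(2 * np.ones(n2 - 2)) - np.diag(np.ones(n2 - 3), 1)
         - np.diag(np.ones(n2 - 3), -1)) / (2 * h) ** 2
    ec = np.zeros(n2)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    # Prolong the coarse correction by linear interpolation.
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)
    return smooth(u + e, f, h)                   # post-smoothing

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
f = np.pi**2 * np.sin(np.pi * x)                 # exact solution sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
print(np.abs(u - np.sin(np.pi * x)).max())       # near discretization error
```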
Funding: Supported by the Key Science and Technology Program of Henan Province, China (Grant Nos. 242102210147, 242102210027) and the Fujian Province Young and Middle-aged Teacher Education Research Project (Science and Technology Category) (No. JZ240101). (Corresponding author: Dong Yuan.)
Abstract: Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to a nearby Roadside Unit (RSU), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFN), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm can better coordinate the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
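The Double DQN update at the heart of each agent can be written compactly; the sketch below shows only the generic double-estimator target computation, with the networks stubbed out as random linear maps and the paper's per-objective rewards and RBFN weighting omitted.

```python
import numpy as np

gen = np.random.default_rng(0)
n_state, n_action, gamma = 6, 4, 0.95

W_online = gen.normal(size=(n_action, n_state))  # stand-in Q networks
W_target = gen.normal(size=(n_action, n_state))

def q(W, s):
    return W @ s                                 # Q-values for all actions

s, s_next = gen.normal(size=n_state), gen.normal(size=n_state)
a, r, done = 2, 1.3, False                       # one (s, a, r, s') sample

# Double-DQN target: the online net selects the next action, the target
# net evaluates it, which damps Q-value overestimation.
a_star = int(np.argmax(q(W_online, s_next)))
y = r + (0.0 if done else gamma * q(W_target, s_next)[a_star])

td_error = y - q(W_online, s)[a]                 # drives the gradient step
print(a_star, round(y, 3), round(td_error, 3))
```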