Abstract: Because the metro pantograph-catenary system is constrained by train operating speed, a rigid overhead contact-line power supply is adopted. Real-time measurement and calibration of catenary parameters depend mainly on the structural state of the precision instruments and the correction of measurement-parameter errors. Traditional measurement instruments combine laser cameras with sensors, and suffer from slow data transmission and calibration, complex computation and difficult data fusion, and low model identifiability under the influence of the metro tunnel environment. This paper proposes an MPC (Model Predictive Control) algorithm based on fuzzy theory and compares it with a PID control strategy in terms of real-time target-point localization and data-optimization accuracy. AMESim/Simulink simulations show that the fuzzy-theory-based MPC algorithm improves the prediction and correction of instrument positioning points, raising the trajectory-coordination accuracy of measurement points by 15%, reducing the software's computational workload by 20%, and improving data-processing precision to ±10 mm.
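The fuzzy layer described above can be illustrated with a minimal sketch: triangular membership functions grade the measured position error and blend two correction gains. All membership breakpoints and gain values below are illustrative assumptions, not parameters from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_gain(error_mm):
    """Blend a gentle and a strong correction gain by fuzzy membership of
    the measured position error (mm). Breakpoints/gains are hypothetical."""
    mu_small = tri(abs(error_mm), -1.0, 0.0, 5.0)   # "error is small"
    mu_large = tri(abs(error_mm), 2.0, 10.0, 18.0)  # "error is large"
    k_small, k_large = 0.2, 0.8                     # illustrative gains
    total = mu_small + mu_large
    if total == 0:
        return k_small  # out-of-range errors fall back to the gentle gain
    return (mu_small * k_small + mu_large * k_large) / total

# Small errors receive a gentle correction, large errors a stronger one.
print(fuzzy_gain(0.5), fuzzy_gain(9.0))
```

The weighted-average defuzzification here is the simplest choice; a full controller would feed this gain into the MPC correction step.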
Abstract: To address the energy management problem of P1P3-configuration series-parallel hybrid electric vehicles (HEVs), this paper proposes an energy management strategy based on model predictive control (MPC). First, a system prediction model is built for the control algorithm, and a quadratic programming algorithm is used to solve the vehicle's fuel-consumption-minimization problem. Then, the proposed energy management strategy is validated by simulation on the MATLAB/Simulink platform under two standard driving cycles and compared with a rule-based energy management strategy. The results show that, relative to the rule-based strategy, the MPC-based strategy reduces fuel consumption per 100 km by 5.6% and 5.2% under the two driving cycles, respectively, effectively improving fuel economy.
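A minimal sketch of the quadratic-programming step inside such an MPC energy manager: over a short horizon, engine torque is chosen to minimize a quadratic fuel proxy plus a battery state-of-charge penalty, with the motor supplying the remaining demanded torque. All model coefficients, bounds, and the SOC model below are hypothetical, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical plant and cost parameters -- illustrative only.
N, soc0, soc_ref = 5, 0.60, 0.60
t_dem = np.array([80.0, 95.0, 110.0, 90.0, 70.0])  # demanded torque (N*m)
c2, c1 = 0.002, 0.05      # quadratic fuel-rate coefficients for the engine
alpha, w = 1e-4, 500.0    # SOC sensitivity to motor torque, SOC penalty weight

def cost(te):
    tm = t_dem - te                       # motor supplies the remainder
    soc_end = soc0 - alpha * tm.sum()     # crude battery depletion model
    fuel = np.sum(c2 * te**2 + c1 * te)   # fuel proxy over the horizon
    return fuel + w * (soc_end - soc_ref)**2

res = minimize(cost, x0=t_dem * 0.5, bounds=[(0.0, 120.0)] * N, method="SLSQP")
te_opt = res.x  # in receding-horizon MPC, only the first element is applied
```

Each control step would re-solve this problem with updated demand and SOC, which is what makes the scheme predictive rather than a one-shot optimization.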
Abstract: The advent of quantum computing poses a significant challenge to traditional cryptographic protocols, particularly those used in Secure Multiparty Computation (MPC), a fundamental cryptographic primitive for privacy-preserving computation. Classical MPC relies on cryptographic techniques such as homomorphic encryption, secret sharing, and oblivious transfer, which may become vulnerable in the post-quantum era due to the computational power of quantum adversaries. This study presents a review of 140 peer-reviewed articles published between 2000 and 2025, drawn from databases including MDPI, IEEE Xplore, Springer, and Elsevier, examining the applications, types, and security issues of quantum computing, together with proposed solutions, across different fields. The review explores the impact of quantum computing on MPC security, assesses emerging quantum-resistant MPC protocols, and examines hybrid classical-quantum approaches aimed at mitigating quantum threats. We analyze the role of Quantum Key Distribution (QKD), post-quantum cryptography (PQC), and quantum homomorphic encryption in securing multiparty computations. Additionally, we discuss the challenges of scalability, computational efficiency, and practical deployment of quantum-secure MPC frameworks in real-world applications such as privacy-preserving AI, secure blockchain transactions, and confidential data analysis. This review provides insights into future research directions and open challenges in ensuring secure, scalable, and quantum-resistant multiparty computation.
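Of the classical primitives named above, secret sharing is the easiest to make concrete. A minimal Shamir (t, n) threshold sharing over a prime field, for illustration only (the field prime and polynomial evaluation points are conventional choices, not tied to any reviewed protocol):

```python
import random

P = 2**61 - 1  # a Mersenne prime defining the finite field

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```

Note that Shamir sharing is information-theoretically secure and thus not itself broken by quantum adversaries; the quantum vulnerability discussed in the review concerns the computational primitives (e.g., oblivious transfer) that full MPC protocols build around it.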
Funding: Supported by the National Natural Science Foundation of China under Grant 62471205; in part by the Yunnan Fundamental Research Projects under Grant 202301AV070003; and in part by the Major Science and Technology Projects in Yunnan Province under Grant 202302AG050009.
Abstract: The global surge in Artificial Intelligence (AI) has been triggered by the impressive performance of deep-learning models based on the Transformer architecture. However, the efficacy of such models is increasingly dependent on the volume and quality of data. Data are often distributed across institutions and companies, making cross-organizational data transfer vulnerable to privacy breaches and subject to privacy laws and trade-secret regulations. These privacy and security concerns continue to pose major challenges to collaborative training and inference in multi-source data environments. These challenges are particularly significant for Transformer models, where the complex internal encryption computations drastically reduce computational efficiency, ultimately threatening the model's practical applicability. We hence introduce Secformer, an innovative architecture specifically designed to protect the privacy of Transformer-like models. Secformer separates the encoder and decoder modules, enabling the decomposition of computation flows in Transformer-like models and their efficient mapping to Multi-Party Computation (MPC) protocols. This design effectively addresses privacy leakage issues during the collaborative computation process of Transformer models. To prevent performance degradation caused by encrypted attention modules, we propose a modular design strategy that optimizes high-level components by reconstructing low-level operators. We further analyze the security of Secformer's core components, presenting security definitions and formal proofs. We construct a library of fundamental operators and core modules using atomic-level component designs as the basic building blocks for encoders and decoders. Moreover, these components can serve as foundational operators for other Transformer-like models. Extensive experimental evaluations demonstrate Secformer's excellent performance while preserving privacy and offering universal adaptability for Transformer-like models.
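The efficiency of mapping linear Transformer computations to MPC can be illustrated with additive secret sharing: applying a public weight matrix to a privately shared input needs only local computation per party, with no interaction. This is a generic two-party sketch, not Secformer's actual protocol; nonlinear modules such as attention softmax are the expensive part that motivates its low-level operator reconstruction.

```python
import numpy as np

P = 2**31 - 1  # prime modulus for the additive secret-sharing field
rng = np.random.default_rng(0)

def split(x):
    """Additively share an integer vector between two parties."""
    s0 = rng.integers(0, P, size=x.shape)
    s1 = (x - s0) % P
    return s0, s1

# A public weight matrix applied to a private input: each party multiplies
# its own share locally, and the result shares recombine exactly.
W = rng.integers(0, 100, size=(2, 3))  # public linear-layer weights
x = np.array([5, 7, 9])                # private input vector
x0, x1 = split(x)
y = (W @ x0 + W @ x1) % P
assert np.array_equal(y, (W @ x) % P)
```

Multiplying two private values, by contrast, requires interactive protocols (e.g., Beaver triples), which is why encrypted attention dominates the cost of a private Transformer.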
Abstract: The integration of large-scale foundation models (e.g., the GPT series and AlphaFold) into oncology is fundamentally transforming both research methodologies and clinical practices, driven by unprecedented advancements in computational power. This review synthesizes recent progress in the application of large language models to core oncological tasks, including medical imaging analysis, genomic interpretation, and personalized treatment planning. Underpinned by advanced computational infrastructures, such as graphics processing unit/tensor processing unit clusters, heterogeneous computing, and cloud platforms, these models enable superior representation learning and generalization across multimodal data sources. This review examines how these infrastructures overcome key bottlenecks in intelligent oncology through scalable optimization strategies, including mixed-precision training, memory optimization, and heterogeneous computing. Alongside these technical advancements, the review explores pressing challenges, such as data heterogeneity, limited model interpretability, regulatory uncertainties, and the environmental impact of artificial intelligence (AI) systems. Special emphasis is placed on emerging solutions, encompassing green AI and edge computing, which offer promising approaches for low-resource deployment scenarios. Additionally, the review highlights the critical role of interdisciplinary collaboration among oncology, computer science, ethics, and policy to ensure that AI systems are not only powerful but also transparent, safe, and clinically relevant. Finally, the review outlines potential avenues for future research aimed at developing robust, scalable, and human-centered frameworks for intelligent oncology.
Funding: Supported by the Key Science and Technology Program of Henan Province, China (Grant Nos. 242102210147, 242102210027) and the Fujian Province Young and Middle-aged Teacher Education Research Project (Science and Technology Category) (No. JZ240101). (Corresponding author: Dong Yuan.)
Abstract: Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment, so determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed: each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives from distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives with Radial Basis Function Networks (RBFNs), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared with existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
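The DDQN update underlying each agent decouples action selection (online network) from action evaluation (target network), which reduces the overestimation bias of vanilla DQN. A minimal sketch with toy numbers, not tied to the paper's reward design:

```python
import numpy as np

def ddqn_targets(rewards, q_online_next, q_target_next, gamma=0.99, done=None):
    """Double-DQN targets: the online network picks the next action,
    the target network evaluates it."""
    a_star = np.argmax(q_online_next, axis=1)                # selection
    q_eval = q_target_next[np.arange(len(rewards)), a_star]  # evaluation
    if done is None:
        done = np.zeros_like(rewards)
    return rewards + gamma * (1.0 - done) * q_eval

# Toy batch of 2 transitions with 3 actions.
r = np.array([1.0, 0.5])
q_on = np.array([[0.1, 0.9, 0.2], [0.3, 0.2, 0.8]])
q_tg = np.array([[0.0, 0.5, 1.0], [0.4, 0.1, 0.6]])
print(ddqn_targets(r, q_on, q_tg))  # r + gamma * q_tg[argmax q_on]
```

In the multi-objective setting described above, one such target would be computed per objective, with the RBFN-learned weights combining the resulting values.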
Funding: The authors express their appreciation to Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R384), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The cloud-fog computing paradigm has emerged as a novel hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading, and then executing tasks efficiently, is critical to achieving a trade-off between energy consumption and transmission delay. In such networks, processing a task at a fog node reduces transmission delay but increases energy consumption, whereas routing it to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency: executing lower-priority tasks before higher-priority ones can undermine the reliability and stability of the system. An efficient strategy for joint computation offloading and task scheduling is therefore required for operational efficacy. In this paper, we introduce a multi-objective, enhanced version of the Cheetah Optimizer (CO), namely MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks, minimizing two competing objectives: energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. Because the mathematical model of the original CO suffers from high computation time and slow convergence, MoECO increases the search capability of agents by steering the search strategy with a leader's location. An adaptive step-length operator diversifies the solutions and thereby improves the exploration phase (global search strategy), preventing the algorithm from becoming trapped in local optima. Moreover, the interaction factor in the exploitation phase is adjusted based on the location of the prey rather than an adjacent cheetah, increasing the exploitation (local search) capability of agents. Furthermore, MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize the designated objectives. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between the optimization objectives compared with baseline methods.
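The Pareto-optimal front mentioned above keeps only the non-dominated trade-offs between energy and delay. A minimal filter for minimization problems, applied here to hypothetical (energy, delay) outcomes of candidate offloading schedules:

```python
def pareto_front(points):
    """Return the non-dominated subset when minimizing all objectives.
    q dominates p if q is no worse in every objective and strictly
    better in at least one."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p)))
            and any(q[i] < p[i] for i in range(len(p)))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (energy, delay) outcomes of candidate offloading schedules:
solutions = [(3.0, 9.0), (4.0, 7.0), (5.0, 5.0), (6.0, 6.0), (7.0, 4.0)]
print(pareto_front(solutions))  # (6.0, 6.0) is dominated by (5.0, 5.0)
```

This O(n²) scan is fine for small candidate sets; production multi-objective optimizers typically use fast non-dominated sorting instead.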
Funding: Supported by the National Natural Science Foundation of China (62202215); the Liaoning Province Applied Basic Research Program (Youth Special Project, 2023JH2/101600038); the Shenyang Youth Science and Technology Innovation Talent Support Program (RC220458); the Guangxuan Program of Shenyang Ligong University (SYLUGXRC202216); the Basic Research Special Funds for Undergraduate Universities in Liaoning Province (LJ212410144067); the Natural Science Foundation of Liaoning Province (2024-MS-113); and science and technology funds from the Liaoning Education Department (LJKZ0242).
Abstract: In the field of edge computing, achieving low-latency computational task offloading with limited resources is a critical research challenge, particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications. In scenarios where edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance, thereby increasing system latency. Furthermore, in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency. To address these challenges, we propose a novel and efficient Deep Reinforcement Learning (DRL)-based approach aimed at minimizing average task latency. The proposed method incorporates three offloading strategies: local computation, direct offloading to the edge server in the local region, and device-to-device (D2D)-assisted offloading to edge servers in other regions. We formulate the task offloading process as a latency-minimization optimization problem and solve it with an algorithm based on the Dueling Double Deep Q-Network (D3QN) architecture, incorporating the Prioritized Experience Replay (PER) mechanism. Experimental results demonstrate that, compared with existing offloading algorithms, the proposed method significantly reduces average task latency, enhances user experience, and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
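The PER mechanism samples stored transitions in proportion to their TD-error magnitude and reweights them with importance-sampling weights to correct the induced bias. A minimal proportional-variant sketch (the alpha and beta hyperparameters are conventional defaults, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(42)

def per_sample(td_errors, batch_size, alpha=0.6, beta=0.4, eps=1e-6):
    """Proportional prioritized sampling: P(i) ~ (|TD error| + eps)^alpha,
    with importance-sampling weights (N * P(i))^(-beta) correcting the bias."""
    prios = (np.abs(td_errors) + eps) ** alpha
    probs = prios / prios.sum()
    idx = rng.choice(len(td_errors), size=batch_size, p=probs)
    weights = (len(td_errors) * probs[idx]) ** (-beta)
    return idx, weights / weights.max()  # normalize weights into (0, 1]

# Transitions with large TD errors are replayed more often.
td = np.array([0.1, 2.0, 0.05, 1.5, 0.3])
idx, w = per_sample(td, batch_size=3)
```

A full implementation would store priorities in a sum-tree for O(log n) sampling and anneal beta toward 1 over training.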