The advent of quantum computing poses a significant challenge to traditional cryptographic protocols, particularly those used in Secure Multiparty Computation (MPC), a fundamental cryptographic primitive for privacy-preserving computation. Classical MPC relies on cryptographic techniques such as homomorphic encryption, secret sharing, and oblivious transfer, which may become vulnerable in the post-quantum era due to the computational power of quantum adversaries. This study presents a review of 140 peer-reviewed articles published between 2000 and 2025, drawn from databases including MDPI, IEEE Xplore, Springer, and Elsevier, examining the applications, types, and security issues of quantum computing, along with proposed solutions, in different fields. This review explores the impact of quantum computing on MPC security, assesses emerging quantum-resistant MPC protocols, and examines hybrid classical-quantum approaches aimed at mitigating quantum threats. We analyze the role of Quantum Key Distribution (QKD), post-quantum cryptography (PQC), and quantum homomorphic encryption in securing multiparty computations. Additionally, we discuss the challenges of scalability, computational efficiency, and practical deployment of quantum-secure MPC frameworks in real-world applications such as privacy-preserving AI, secure blockchain transactions, and confidential data analysis. This review provides insights into future research directions and open challenges in ensuring secure, scalable, and quantum-resistant multiparty computation.
The cloud-fog computing paradigm has emerged as a hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading, and then executing tasks efficiently, is critical to achieving a trade-off between energy consumption and transmission delay. In this network, processing a task at fog nodes reduces transmission delay but increases energy consumption, while routing tasks to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency: for instance, executing lower-priority tasks before higher-priority jobs can disturb the reliability and stability of the system. Therefore, an efficient strategy for optimal computation offloading and task scheduling is required for operational efficacy. In this paper, we introduce a multi-objective and enhanced version of the Cheetah Optimizer (CO), namely MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks and minimize two competing objectives, i.e., energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. The mathematical modelling of CO needs improvement in computation time and convergence speed; therefore, MoECO increases the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step-length operator is adjusted to diversify the solutions, improving the exploration phase, i.e., the global search strategy. Consequently, this prevents the algorithm from getting trapped in a local optimum. Moreover, the interaction factor during the exploitation phase is adjusted based on the location of the prey instead of the adjacent cheetah, which increases the exploitation capability of agents, i.e., their local search capability. Furthermore, MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize the designated objectives. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between the optimization objectives compared to baseline methods.
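The Pareto-optimal front mentioned above can be computed with a simple dominance filter over candidate solutions. A minimal sketch in Python for the two objectives (energy, delay), assuming both are minimized; this illustrates the concept only and is not the paper's MATLAB implementation:

```python
def pareto_front(solutions):
    """Return the non-dominated subset of (energy, delay) pairs.

    Solution b dominates solution a if b is no worse in every
    objective and strictly better in at least one.
    """
    front = []
    for i, a in enumerate(solutions):
        dominated = False
        for j, b in enumerate(solutions):
            if (j != i
                    and all(bk <= ak for bk, ak in zip(b, a))
                    and any(bk < ak for bk, ak in zip(b, a))):
                dominated = True
                break
        if not dominated:
            front.append(a)
    return front
```

For example, among the candidates (1, 5), (2, 2), (3, 3), (5, 1), the point (3, 3) is dominated by (2, 2) and drops out; the other three form the trade-off front.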
Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating the Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm can better coordinate the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
In the realm of large-scale power-system energy storage, sodium-based batteries represent a cost-effective post-lithium energy storage technology, making inorganic solid-state sodium batteries (ISSSBs) a critical branch of this development. Inorganic solid-state electrolytes (ISSEs) are the core components of sodium batteries; however, they face significant challenges such as insufficient ionic conductivity, interfacial instability, and dendrite growth, all of which severely hinder practical application. This review critically assesses experimental protocols and theoretical frameworks related to mainstream ISSEs and systematizes optimization strategies aimed at overcoming these challenges. Leveraging integrated insights from both experimental and computational studies, the review first categorizes and summarizes the primary types of ISSEs, namely oxide-, sulfide-, and halide-based electrolytes. It then details interfacial optimization strategies focused on addressing three core interfacial issues: ion-transport barriers resulting from mechanical incompatibility, side reactions stemming from electrochemical mismatch, and dendrite formation. Finally, the review advocates prioritizing in-depth research that integrates experimental and theoretical approaches to establish a closed-loop methodology encompassing predictive design, multiscale investigation, mechanistic exploration, and high-throughput automated experimentation with feedback-driven refinement. This work serves as a comprehensive reference and systematic roadmap for future research on solid-state electrolytes (SSEs).
Federated learning often experiences slow and unstable convergence due to edge-side data heterogeneity. This problem becomes more severe when the edge participation rate is low, as the information collected from different edge devices varies significantly. As a result, communication overhead increases, which further slows down the convergence process. To address this challenge, we propose a simple yet effective federated learning framework that improves consistency among edge devices. The core idea is to cluster the lookahead gradients collected from edge devices on the cloud server to obtain personalized momentum for steering local updates. In parallel, a global momentum is applied during model aggregation, enabling faster convergence while preserving personalization. This strategy enables efficient propagation of the estimated global update direction to all participating edge devices and maintains alignment in local training, without introducing extra memory or communication overhead. We conduct extensive experiments on benchmark datasets such as CIFAR-100 and Tiny-ImageNet. The results confirm the effectiveness of our framework. On CIFAR-100, our method reaches 55% accuracy in 37 fewer rounds and achieves a competitive final accuracy of 65.46%. Even under extreme non-IID scenarios, it delivers significant improvements in both accuracy and communication efficiency. The implementation is publicly available at https://github.com/sjmp525/CollaborativeComputing/tree/FedCCM (accessed on 20 October 2025).
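A server-side sketch of the core idea follows: cluster the clients' lookahead gradients, keep one momentum buffer per cluster (to be sent back and steer local updates), and apply a separate global momentum at aggregation. This is a toy reconstruction under assumed update rules (the tiny k-means, `beta`, and `lr` values are illustrative), not the released FedCCM code:

```python
import numpy as np

def tiny_kmeans(X, k, iters=20, seed=0):
    """Cluster rows of X into k groups (plain Lloyd iterations)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(0)
    return labels

def aggregate(w, client_grads, cluster_m, global_m, k=2, beta=0.9, lr=0.1):
    """One server round: cluster lookahead gradients, update a
    per-cluster momentum buffer for personalized steering, and
    apply a global momentum when updating the shared model."""
    G = np.stack(client_grads)
    labels = tiny_kmeans(G, k)
    for c in range(k):
        if np.any(labels == c):
            cluster_m[c] = beta * cluster_m[c] + (1 - beta) * G[labels == c].mean(0)
    global_m = beta * global_m + (1 - beta) * G.mean(0)
    return w - lr * global_m, cluster_m, global_m, labels
```

Clients with similar update directions land in the same cluster and therefore receive the same personalized momentum, while the global momentum keeps the shared model moving along the averaged direction.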
Classical computation of electronic properties in large-scale materials remains challenging. Quantum computation has the potential to offer advantages in memory footprint and computational scaling. However, general and viable quantum algorithms for simulating large-scale materials are still limited. We propose and implement random-state quantum algorithms to calculate electronic-structure properties of real materials. Using a random-state circuit on a small number of qubits, we employ real-time evolution with first-order Trotter decomposition and the Hadamard test to obtain the electronic density of states, and we develop a modified quantum phase estimation algorithm to calculate the real-space local density of states via direct quantum measurements. Furthermore, we validate these algorithms by numerically computing the density of states and spatial distributions of electronic states in graphene, twisted bilayer graphene quasicrystals, and fractal lattices, covering system sizes from hundreds to thousands of atoms. Our results show that random-state quantum algorithms provide a general and qubit-efficient route to scalable simulations of electronic properties in large-scale periodic and aperiodic materials.
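The random-state trick behind this approach rests on the identity E[n⟨ψ|U(t)|ψ⟩] = Tr U(t) for Haar-random normalized states |ψ⟩, after which the density of states is the Fourier transform of Tr e^{−iHt} over t. A classical NumPy emulation of that trace estimator (exact diagonalization stands in for the Trotterized circuit and Hadamard test; sample counts are illustrative):

```python
import numpy as np

def trace_evolution(H, times, n_random=300, seed=0):
    """Estimate Tr[exp(-iHt)] with normalized random states.

    Each sample contributes n * <psi|U(t)|psi>, whose expectation
    over random psi equals the full trace Tr U(t)."""
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    vals, vecs = np.linalg.eigh(H)      # classical stand-in for circuit evolution
    out = []
    for t in times:
        phases = np.exp(-1j * vals * t)
        acc = 0.0
        for _ in range(n_random):
            psi = rng.normal(size=n) + 1j * rng.normal(size=n)
            psi /= np.linalg.norm(psi)
            c = vecs.conj().T @ psi     # amplitudes in the eigenbasis
            acc += n * np.vdot(c, phases * c)
        out.append(acc / n_random)
    return np.array(out)
```

At t = 0 the estimator is exact (every normalized state gives n⟨ψ|ψ⟩ = n), and for t > 0 the sampling error shrinks as the number of random states grows.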
In 6th Generation Mobile Networks (6G), the Space-Integrated-Ground (SIG) Radio Access Network (RAN) promises seamless coverage and exceptionally high Quality of Service (QoS) for diverse services. However, achieving this necessitates effective management of computation and wireless resources tailored to the requirements of various services. The heterogeneity of computation resources and interference among shared wireless resources pose significant coordination and management challenges. To address these problems, this work provides an overview of multi-dimensional resource management in the 6G SIG RAN, covering both computation and wireless resources. First, it reviews current investigations on computation and wireless resource management and analyzes existing deficiencies and challenges. Then, focusing on these challenges, the work proposes an MEC-based computation resource management scheme and a mixed-numerology-based wireless resource management scheme. Furthermore, it outlines promising future technologies, including joint model-driven and data-driven resource management and blockchain-based resource management within the 6G SIG network. The work also highlights remaining challenges, such as reducing communication costs associated with unstable ground-to-satellite links and overcoming barriers posed by spectrum isolation. Overall, this comprehensive approach aims to pave the way for efficient and effective resource management in future 6G networks.
Low Earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form an LEO satellite edge computing system, providing computing services for global ground users. In this paper, the computation offloading and resource allocation problems are formulated as a mixed-integer nonlinear program (MINLP). We propose a computation offloading algorithm based on the deep deterministic policy gradient (DDPG) to obtain the user offloading decisions and user uplink transmission power, and we use a convex optimization algorithm based on the Lagrange multiplier method to obtain the optimal MEC server resource allocation scheme. In addition, an expression for the suboptimal user local CPU cycles is derived by a relaxation method. Simulation results show that the proposed algorithm achieves excellent convergence and significantly reduces the system utility value compared with other algorithms, at a considerable time cost.
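The Lagrange-multiplier step has a simple closed form in the common model where total processing delay is Σᵢ cᵢ/fᵢ subject to a server capacity Σᵢ fᵢ = F: stationarity of the Lagrangian gives fᵢ ∝ √cᵢ. A sketch under that assumed delay model (the paper's exact utility function may differ):

```python
import math

def allocate_cycles(cycles, F):
    """Minimize sum(c_i / f_i) subject to sum(f_i) = F, f_i > 0.

    Setting d/df_i [c_i/f_i + lam * f_i] = 0 gives f_i = sqrt(c_i/lam);
    substituting into the capacity constraint fixes lam, yielding
    f_i = F * sqrt(c_i) / sum_j sqrt(c_j)."""
    s = sum(math.sqrt(c) for c in cycles)
    return [F * math.sqrt(c) / s for c in cycles]
```

A task needing four times the CPU cycles of another thus gets only twice the capacity, which is exactly what equalizes the marginal delay reduction per unit of allocated capacity.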
With respect to oceanic fluid dynamics, certain models have appeared, e.g., an extended time-dependent (3+1)-dimensional shallow water wave equation for an ocean or a river, which we investigate in this paper. Using symbolic computation, we find, on the one hand, a set of bilinear auto-Bäcklund transformations, which can connect certain solutions of that equation with other solutions of the same equation, and, on the other hand, a set of similarity reductions, which reduce that equation to a known ordinary differential equation. The results in this paper depend on all the oceanic variable coefficients in that equation.
Privacy-Preserving Computation (PPC) comprises the techniques, schemes, and protocols that ensure privacy and confidentiality in the context of secure computation and data analysis. Most current PPC techniques rely on the complexity of cryptographic operations, which quantum computers are expected to solve efficiently in the near future. This review explores how PPC can be built on top of quantum computing itself to alleviate these future threats. We analyze quantum proposals for Secure Multi-party Computation, Oblivious Transfer, and Homomorphic Encryption from the last decade, focusing on their maturity and the challenges they currently face. Our findings show a strong focus on purely theoretical works, but a rise in experimental consideration of these techniques over the last 5 years. The applicability of these techniques to actual use cases remains underexplored; addressing it could enable a practical assessment of these techniques.
This paper explores the rich structure of peakon and pseudo-peakon solutions for a class of higher-order b-family equations, referred to as the J-th b-family (J-bF) equations. We propose several conjectures concerning the weak solutions of these equations, including a b-independent pseudo-peakon solution, a b-independent peakon solution, and a b-dependent peakon solution. These conjectures are analytically verified for J≤14 and/or J≤9 using the symbolic computation system MAPLE, which includes a built-in definition of the higher-order derivatives of the sign function. The b-independent pseudo-peakon solution is a third-order pseudo-peakon for general arbitrary constants, with higher-order pseudo-peakons derived under specific parameter constraints. Additionally, we identify both b-independent and b-dependent peakon solutions, highlighting their distinct properties and the nuanced relationship between the parameters b and J. The existence of these solutions underscores the rich dynamical structure of the J-bF equations and generalizes previous results for lower-order equations. Future research directions include higher-order generalizations, rigorous proofs of the conjectures, interactions between different types of peakons and pseudo-peakons, stability analysis, and potential physical applications. These advancements significantly contribute to the understanding of peakon systems and their broader implications in mathematics and physics.
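For context, the single-peakon weak solution of the classical first-order b-family equation, $u_t - u_{xxt} + (b+1)\,u u_x = b\, u_x u_{xx} + u u_{xxx}$, takes the well-known form below for every value of $b$; the J-bF conjectures above generalize this peaked shape to higher-order members of the family:

```latex
u(x,t) = c\, e^{-|x - ct|},
\qquad
u_x(x,t) = -\operatorname{sgn}(x - ct)\, u(x,t),
```

understood in the weak (distributional) sense, since $u$ is continuous but not differentiable at the crest $x = ct$; this is exactly why derivatives of the sign function enter the symbolic verification.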
We present a new perspective on the P vs NP problem by demonstrating that its answer is inherently observer-dependent in curved spacetime, revealing an oversight in the classical formulation of computational complexity theory. By incorporating general relativistic effects into complexity theory through a gravitational correction factor, we prove that problems can transition between complexity classes depending on the observer’s reference frame and local gravitational environment. This insight emerges from recognizing that the definition of polynomial time implicitly assumes a universal time metric, an assumption that breaks down in curved spacetime due to gravitational time dilation. We demonstrate the existence of gravitational phase transitions in problem complexity, where an NP-complete problem in one reference frame becomes polynomially solvable in another frame experiencing extreme gravitational time dilation. Through rigorous mathematical formulation, we establish a gravitationally modified complexity theory that extends classical complexity classes to incorporate observer-dependent effects, leading to a complete framework for understanding how computational complexity transforms across different spacetime reference frames. This finding parallels other self-referential insights in mathematics and physics, such as Gödel’s incompleteness theorems and Einstein’s relativity, suggesting a deeper connection between computation, gravitation, and the nature of mathematical truth.
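The premise that "polynomial time" depends on whose clock is used rests on gravitational time dilation. For a static observer in the Schwarzschild geometry the standard rate is dτ/dt = √(1 − r_s/r); a quick numeric illustration of that factor (the paper's own "gravitational correction factor" is defined in its text and may differ):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_dilation(M, r):
    """dtau/dt for a static observer at radius r > r_s outside mass M."""
    r_s = 2 * G * M / c**2   # Schwarzschild radius
    return math.sqrt(1 - r_s / r)
```

At r = 2 r_s the local clock runs at 1/√2 of the far-away rate, so a computation taking T seconds locally is seen to take √2 T seconds by a distant observer; this is the kind of frame dependence the abstract builds on.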
The rapid adoption of machine learning in sensitive domains, such as healthcare, finance, and government services, has heightened the need for robust, privacy-preserving techniques. Traditional machine learning approaches lack built-in privacy mechanisms, exposing sensitive data to risks, which motivates the development of Privacy-Preserving Machine Learning (PPML) methods. Despite significant advances in PPML, a comprehensive and focused exploration of Secure Multi-Party Computation (SMPC) within this context remains underdeveloped. This review aims to bridge this knowledge gap by systematically analyzing the role of SMPC in PPML, offering a structured overview of current techniques, challenges, and future directions. Using a semi-systematic mapping study methodology, this paper surveys recent literature spanning SMPC protocols, PPML frameworks, implementation approaches, threat models, and performance metrics. Emphasis is placed on identifying trends, technical limitations, and comparative strengths of leading SMPC-based methods. Our findings reveal that while SMPC offers strong cryptographic guarantees for privacy, challenges such as computational overhead, communication costs, and scalability persist. The paper also discusses critical vulnerabilities, practical deployment issues, and variations in protocol efficiency across use cases.
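A cryptographic building block of many SMPC protocols surveyed in such reviews is additive secret sharing: each party splits its input into random shares that individually reveal nothing, yet the shares can be combined to recover aggregate results. A minimal, semi-honest illustration of a secure sum (toy code, not a production protocol):

```python
import random

P = 2**61 - 1  # public prime modulus (a Mersenne prime)

def share(x, n):
    """Split x into n additive shares modulo P; any n-1 shares
    are uniformly random and leak nothing about x."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def secure_sum(inputs, n_parties=3):
    """Each input is shared across the parties; every party adds the
    shares it holds, and only the combined partial sums are opened."""
    partials = [0] * n_parties
    for x in inputs:
        for i, s in enumerate(share(x, n_parties)):
            partials[i] = (partials[i] + s) % P
    return sum(partials) % P
```

In PPML this pattern underlies, e.g., secure aggregation of model updates: the server learns only the sum of the parties' contributions, never an individual value.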
Ciphertext-Policy Attribute-Based Encryption (CP-ABE) enables fine-grained access control over ciphertexts, making it a promising approach for managing data stored in the cloud-enabled Internet of Things. However, existing schemes often suffer from privacy breaches due to the explicit attachment of access policies or only partial hiding of critical attribute content. Additionally, resource-constrained IoT devices, especially those using wireless communication, frequently cannot afford the decryption costs. In this paper, we propose an efficient and fine-grained access control scheme with fully hidden policies (named FHAC). FHAC conceals all attributes in the policy and utilizes Bloom filters to locate them efficiently. A test phase before decryption helps authorized users find matches between their attributes and the access policy, while dictionary attacks are thwarted by providing unauthorized users with invalid values. The heavy computational overhead of both the test phase and most of the decryption phase is outsourced to two cloud servers. Additionally, users can verify the correctness of multiple outsourced decryption results simultaneously. Security analysis and performance comparisons demonstrate FHAC's effectiveness in protecting policy privacy and achieving efficient decryption.
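A Bloom filter lets a recipient test whether one of its attributes appears in the hidden policy without the policy ever listing attributes in the clear, at the cost of a small false-positive rate. A generic sketch of the data structure (not FHAC's exact construction; `m` and `k` are illustrative parameters):

```python
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k   # m bits, k hash functions
        self.bits = 0           # bit array packed into one integer

    def _positions(self, item):
        # derive k independent positions from salted SHA-256 digests
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        # False means definitely absent; True means present with
        # high probability (false positives possible, no negatives)
        return all(self.bits >> p & 1 for p in self._positions(item))
```

The one-sided error is what makes this safe as a pre-decryption test: a negative answer lets the user skip decryption outright, while a rare false positive merely wastes one decryption attempt.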
As the demand for cross-departmental data collaboration continues to grow, traditional encryption methods struggle to balance data privacy with computational efficiency. This paper proposes a cross-departmental privacy-preserving computation framework based on BFV homomorphic encryption, threshold decryption, and blockchain technology. The proposed scheme leverages homomorphic encryption to enable secure computations among sales, finance, and taxation departments, ensuring that sensitive data remains encrypted throughout the entire process. A threshold decryption mechanism is employed to prevent single-point data leakage, while blockchain and IPFS are integrated to ensure verifiability and tamper-proof storage of computation results. Experimental results demonstrate that, with 5,000 sample data entries, the framework performs efficiently and scales well in key stages such as sales encryption, cost calculation, and tax assessment, thereby validating its practical feasibility and security.
The wide application of smart contracts allows companies to implement complex distributed collaborative businesses that involve the calculation of complex functions, such as matrix operations. However, complex functions such as matrix operations are difficult to implement on Ethereum Virtual Machine (EVM)-based smart contract platforms due to the limitations of their distributed security environment. Existing off-chain methods often significantly reduce contract execution efficiency, so implementation through a platform software development kit (SDK) interface has become a feasible way to reduce overheads; however, this approach cannot verify operation correctness and may leak sensitive user data. To solve these problems, we propose a verifiable EVM-based cross-language smart contract implementation scheme for complex operations, especially matrix operations, which guarantees operation correctness and user privacy while ensuring computational efficiency. In this scheme, a verifiable interaction process is designed to verify the computation process and results, and a matrix blinding technique is introduced to protect sensitive user data during calculation. Security analysis and performance tests show that the proposed scheme satisfies the correctness and privacy requirements of cross-language smart contract implementation at a small additional efficiency cost.
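Correctness of an outsourced matrix product can be checked far more cheaply than recomputing it. Freivalds' algorithm is the standard probabilistic check and is shown below purely to illustrate the "verifiable interaction" idea; the paper's actual verification and blinding protocol is its own design:

```python
import numpy as np

def freivalds_check(A, B, C, trials=10, seed=0):
    """Accept C as A @ B with error probability <= 2**-trials,
    using O(n^2) work per trial instead of an O(n^3) recompute."""
    rng = np.random.default_rng(seed)
    n = C.shape[1]
    for _ in range(trials):
        r = rng.integers(0, 2, size=(n, 1))  # random 0/1 test vector
        # A @ (B @ r) needs only matrix-vector products
        if not np.array_equal(A @ (B @ r), C @ r):
            return False
    return True
```

For privacy, a client can additionally blind its input before outsourcing, e.g. send A + R for a structured random R whose product with B the client can remove cheaply afterwards; combining such blinding with a cheap check is the general shape of verifiable private outsourcing.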
Distributed computing is an important topic in the field of wireless communications and networking, and its high efficiency in handling large amounts of data is particularly noteworthy. Although distributed computing benefits from its ability to process data in parallel, a communication burden between different servers is incurred, thereby delaying the computation process. Recent research has applied coding in distributed computing to reduce the communication burden, where repetitive computation is utilized to enable multicast opportunities so that the same coded information can be reused across different servers. To handle the computation tasks in practical heterogeneous systems, we propose a novel coding scheme to effectively mitigate the "straggling effect" in distributed computing. We assume that there are two types of servers in the system whose only difference is their computational capability; the servers with lower computational capability are called stragglers. Given any ratio of fast servers to slow servers and any gap in computational capability between them, we achieve approximately the same computation time for both fast and slow servers by assigning different amounts of computation tasks to them, thus reducing the overall computation time. Furthermore, we investigate the information-theoretic lower bound on the inter-server communication load and show that it is within a constant multiplicative gap of the upper bound achieved by our scheme. Various simulations also validate the effectiveness of the proposed scheme.
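The equal-finish-time assignment underlying the scheme can be stated in one line: give each server work proportional to its speed, so that load/speed is the same everywhere and no one waits on a straggler. A sketch of only this balancing step (the coded multicast design itself is more involved):

```python
def split_load(total_work, speeds):
    """Assign load_i = total * speed_i / sum(speeds), so that every
    server's finish time load_i / speed_i is identical."""
    s = sum(speeds)
    return [total_work * v / s for v in speeds]
```

With one fast server at speed 3 and one straggler at speed 1, 100 units of work split as 75/25 and both finish at time 25, whereas an even 50/50 split would leave the fast server idle while the straggler runs until time 50.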
In this study, the flow characteristics around a group of three piers arranged in tandem were investigated both numerically and experimentally. The simulation utilised the volume of fluid (VOF) model in conjunction with the k–ε method (for flow turbulence representation), implemented through the ANSYS FLUENT software, to model the free-surface flow. The simulation results were validated against laboratory measurements obtained using an acoustic Doppler velocimeter. The comparative analysis revealed discrepancies between the simulated and measured maximum velocities within the investigated flow field. However, the numerical results demonstrated a distinct vortex-induced flow pattern behind the first pier and throughout the vicinity of the entire pier group, which aligned reasonably well with the experimental data. In the heavily narrowed spaces between the piers, simulated velocity profiles were overestimated in the free-surface region and underestimated in the areas from near the bed to mid-stream when compared with measurements. These discrepancies diminished away from the regions with intense vortices, indicating that the employed model was capable of simulating relatively less disturbed flow turbulence. Furthermore, velocity results from both simulations and measurements were compared based on velocity distributions at three different depth ratios (0.15, 0.40, and 0.62) to assess vortex characteristics around the piers. This comparison revealed consistent results between experimental and simulated data. This research contributes to a deeper understanding of flow dynamics around complex interactive pier systems, which is critical for designing stable and sustainable hydraulic structures. Furthermore, the insights gained from this study provide valuable information for engineers aiming to develop effective strategies for controlling scour and minimizing destructive vortex effects, thereby guiding the design and maintenance of sustainable infrastructure.
With the rapid advancements in technology and science, optimization theory and algorithms have become increasingly important. A wide range of real-world problems can be classified as optimization challenges, and meta-heuristic algorithms have shown remarkable effectiveness in solving these challenges across diverse domains, such as machine learning, process control, and engineering design, showcasing their capability to address complex optimization problems. The Stochastic Fractal Search (SFS) algorithm is one of the most popular meta-heuristic optimization methods, inspired by the fractal growth patterns of natural materials. Since its introduction by Hamid Salimi in 2015, SFS has garnered significant attention from researchers and has been applied to diverse optimization problems across multiple disciplines. Its popularity can be attributed to several factors, including its simplicity, practical computational efficiency, ease of implementation, rapid convergence, high effectiveness, and ability to address single- and multi-objective optimization problems, often outperforming other established algorithms. This review paper offers a comprehensive and detailed analysis of the SFS algorithm, covering its standard version, modifications, hybridizations, and multi-objective implementations. The paper also examines SFS applications across diverse domains, including power and energy systems, image processing, machine learning, wireless sensor networks, environmental modeling, economics and finance, and numerous engineering challenges. Furthermore, the paper critically evaluates the SFS algorithm's performance, benchmarking its effectiveness against recently published meta-heuristic algorithms. In conclusion, the review highlights key findings and suggests potential directions for future developments and modifications of the SFS algorithm.
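SFS's central ingredient is a diffusion phase in which new points are generated by Gaussian random walks around promising solutions. The full algorithm also has an updating phase and a fractal-inspired schedule for the number of walks; the sketch below isolates only the diffusion idea on the standard sphere benchmark:

```python
import random

def sphere(x):
    """Benchmark objective: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return sum(v * v for v in x)

def sfs_diffusion(f, best, sigma, walks, rng):
    """One diffusion phase: Gaussian random walks around the current
    best point; keep the best sampled point (never worse than best)."""
    cand = best
    for _ in range(walks):
        p = [x + rng.gauss(0.0, sigma) for x in best]
        if f(p) < f(cand):
            cand = p
    return cand
```

Repeating the phase contracts the search around successively better points, which is the mechanism behind the rapid convergence the review attributes to SFS.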
文摘The advent of quantum computing poses a significant challenge to traditional cryptographic protocols,particularly those used in SecureMultiparty Computation(MPC),a fundamental cryptographic primitive for privacypreserving computation.Classical MPC relies on cryptographic techniques such as homomorphic encryption,secret sharing,and oblivious transfer,which may become vulnerable in the post-quantum era due to the computational power of quantum adversaries.This study presents a review of 140 peer-reviewed articles published between 2000 and 2025 that used different databases like MDPI,IEEE Explore,Springer,and Elsevier,examining the applications,types,and security issues with the solution of Quantum computing in different fields.This review explores the impact of quantum computing on MPC security,assesses emerging quantum-resistant MPC protocols,and examines hybrid classicalquantum approaches aimed at mitigating quantum threats.We analyze the role of Quantum Key Distribution(QKD),post-quantum cryptography(PQC),and quantum homomorphic encryption in securing multiparty computations.Additionally,we discuss the challenges of scalability,computational efficiency,and practical deployment of quantumsecure MPC frameworks in real-world applications such as privacy-preserving AI,secure blockchain transactions,and confidential data analysis.This review provides insights into the future research directions and open challenges in ensuring secure,scalable,and quantum-resistant multiparty computation.
Funding: The authors express their appreciation to the Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R384), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The cloud-fog computing paradigm has emerged as a novel hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading, and then executing tasks efficiently, is critical to achieving a trade-off between energy consumption and transmission delay. In such a network, processing a task at fog nodes reduces transmission delay but increases energy consumption, while routing tasks to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency: for instance, executing lower-priority tasks before higher-priority jobs can disturb the reliability and stability of the system. Therefore, an efficient strategy for optimal computation offloading and task scheduling is required for operational efficacy. In this paper, we introduce a multi-objective and enhanced version of the Cheetah Optimizer (CO), named MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks and minimize two competing objectives, i.e., energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. The mathematical modelling of CO needs improvement in computation time and convergence speed; MoECO therefore increases the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step-length operator is adjusted to diversify the solutions, improving the exploration phase, i.e., the global search strategy; consequently, the algorithm avoids getting trapped in local optima. Moreover, the interaction factor during the exploitation phase is adjusted based on the location of the prey instead of the adjacent cheetah, which increases the exploitation capability of agents, i.e., local search capability. Furthermore, MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize the designated objectives. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between the optimization objectives compared to baseline methods.
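The Pareto-optimal front mentioned in the abstract can be illustrated with a generic non-dominated filter. The sketch below is not the paper's MATLAB implementation, and the (energy, delay) values for the candidate offloading plans are hypothetical; it only shows how a front of mutually non-dominating trade-offs is extracted.

```python
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated points (all objectives minimized).

    Point p dominates q if p <= q in every objective and p < q in at least one.
    """
    pts = np.asarray(points, dtype=float)
    keep = []
    for i in range(len(pts)):
        dominated = any(
            j != i and np.all(pts[j] <= pts[i]) and np.any(pts[j] < pts[i])
            for j in range(len(pts))
        )
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical (energy, delay) values for five candidate offloading plans.
candidates = [(5.0, 2.0), (3.0, 4.0), (6.0, 1.0), (4.0, 3.0), (5.5, 3.5)]
front = pareto_front(candidates)  # the last plan is dominated by the first
```

Here the first four plans are incomparable trade-offs, while (5.5, 3.5) is dominated by (5.0, 2.0) and dropped.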
Funding: Supported by the Key Science and Technology Program of Henan Province, China (Grant Nos. 242102210147, 242102210027) and the Fujian Province Young and Middle-aged Teacher Education Research Project (Science and Technology Category) (No. JZ240101). (Corresponding author: Dong Yuan.)
Abstract: Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFN), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
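For readers unfamiliar with the DDQN building block used above, the core update is the Double DQN bootstrapped target: the online network selects the next action and the target network evaluates it, which reduces the overestimation bias of vanilla DQN. The toy numbers below (4 actions, a batch of 3 transitions, a scalarized reward) are assumptions for illustration, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy batch: 3 transitions, 4 actions.
n_actions = 4
q_online = rng.normal(size=(3, n_actions))   # Q(s', a) from the online network
q_target = rng.normal(size=(3, n_actions))   # Q(s', a) from the target network
rewards = np.array([1.0, 0.5, -0.2])         # e.g. a weighted sum of per-objective rewards
done = np.array([False, False, True])        # terminal transitions get no bootstrap
gamma = 0.95

# Double DQN: action selection by the online net, evaluation by the target net.
next_a = q_online.argmax(axis=1)
bootstrap = q_target[np.arange(3), next_a]
td_target = rewards + gamma * bootstrap * (~done)
```

In the paper's multi-objective variant, each objective has its own reward function and the weights combining them are adapted dynamically; the target construction per objective follows the same pattern.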
Funding: Supported by the National Natural Science Foundation of China (52076076, 52006065), the Fundamental Research Funds for the Central Universities (2025JC003), and the Beijing Municipal Natural Science Foundation (3242022).
Abstract: In the realm of large-scale power-system energy storage, sodium-based batteries represent a cost-effective post-lithium energy storage technology, making inorganic solid-state sodium batteries (ISSSB) a critical branch of this development. Inorganic solid-state electrolytes (ISSEs) are the core components of sodium batteries; however, they face significant challenges such as insufficient ionic conductivity, interfacial instability, and dendrite growth, all of which severely hinder practical application. This review critically assesses experimental protocols and theoretical frameworks related to mainstream ISSEs and systematizes optimization strategies aimed at overcoming these challenges. Leveraging integrated insights from both experimental and computational studies, the review first categorizes and summarizes the primary types of ISSEs, namely oxide-, sulfide-, and halide-based electrolytes. It then details interfacial optimization strategies focused on addressing three core interfacial issues: ion-transport barriers resulting from mechanical incompatibility, side reactions stemming from electrochemical mismatch, and dendrite formation. Finally, the review advocates prioritizing in-depth research that integrates experimental and theoretical approaches to establish a closed-loop methodology encompassing predictive design, multiscale investigation, mechanistic exploration, and high-throughput automated experimentation, with feedback-driven refinement. This work serves as a comprehensive reference and systematic roadmap for future research on solid-state electrolytes (SSEs).
Funding: Supported by the National Natural Science Foundation of China (62462040), the Yunnan Fundamental Research Projects (202501AT070345), and the Major Science and Technology Projects in Yunnan Province (202202AD080013).
Abstract: Federated learning often experiences slow and unstable convergence due to edge-side data heterogeneity. This problem becomes more severe when the edge participation rate is low, as the information collected from different edge devices varies significantly. As a result, communication overhead increases, which further slows down convergence. To address this challenge, we propose a simple yet effective federated learning framework that improves consistency among edge devices. The core idea is to cluster the lookahead gradients collected from edge devices on the cloud server to obtain personalized momentum for steering local updates. In parallel, a global momentum is applied during model aggregation, enabling faster convergence while preserving personalization. This strategy enables efficient propagation of the estimated global update direction to all participating edge devices and maintains alignment in local training, without introducing extra memory or communication overhead. We conduct extensive experiments on benchmark datasets such as CIFAR-100 and Tiny-ImageNet. The results confirm the effectiveness of our framework. On CIFAR-100, our method reaches 55% accuracy with 37 fewer rounds and achieves a competitive final accuracy of 65.46%. Even under extreme non-IID scenarios, it delivers significant improvements in both accuracy and communication efficiency. The implementation is publicly available at https://github.com/sjmp525/CollaborativeComputing/tree/FedCCM (accessed on 20 October 2025).
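The clustering step at the heart of this framework can be sketched generically: the server groups the lookahead gradients reported by edge devices and derives a per-cluster momentum, plus one global momentum used at aggregation. Everything below (the toy gradients, the plain k-means, the momentum coefficient) is a hypothetical illustration, not the released FedCCM code.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=20):
    """Plain k-means with deterministic, spread-out initialization."""
    centroids = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = X[labels == c].mean(axis=0)
    return labels, centroids

# Hypothetical lookahead gradients from 6 edge devices (dimension 4):
# two groups of devices with clearly different data distributions.
grads = np.vstack([
    rng.normal(0.0, 0.1, size=(3, 4)) + np.array([1.0, 0.0, 0.0, 0.0]),
    rng.normal(0.0, 0.1, size=(3, 4)) + np.array([0.0, 1.0, 0.0, 0.0]),
])

labels, centroids = kmeans(grads, k=2)

# Personalized momentum per cluster steers that cluster's local updates;
# a single global momentum is applied during model aggregation.
beta = 0.9
personal = {c: (1 - beta) * centroids[c] for c in range(2)}  # momentum starts at 0
global_momentum = (1 - beta) * grads.mean(axis=0)
```

Devices in the same cluster receive the same personalized momentum, so their local updates stay aligned without any extra per-device communication.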
Funding: Supported by the Major Project for the Integration of Science, Education and Industry (Grant No. 2025ZDZX02).
Abstract: Classical computation of electronic properties in large-scale materials remains challenging. Quantum computation has the potential to offer advantages in memory footprint and computational scaling; however, general and viable quantum algorithms for simulating large-scale materials are still limited. We propose and implement random-state quantum algorithms to calculate electronic-structure properties of real materials. Using a random-state circuit on a small number of qubits, we employ real-time evolution with first-order Trotter decomposition and the Hadamard test to obtain the electronic density of states, and we develop a modified quantum phase estimation algorithm to calculate the real-space local density of states via direct quantum measurements. Furthermore, we validate these algorithms by numerically computing the density of states and spatial distributions of electronic states in graphene, twisted bilayer graphene quasicrystals, and fractal lattices, covering system sizes from hundreds to thousands of atoms. Our results show that random-state quantum algorithms provide a general and qubit-efficient route to scalable simulations of electronic properties in large-scale periodic and aperiodic materials.
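The quantity the Hadamard test estimates here is the time correlation C(t) = ⟨ψ|e^{-iHt}|ψ⟩ of a random state, whose Fourier transform peaks at the eigenvalues of H, i.e., the density of states. The sketch below is a purely classical numpy emulation on a toy 2x2 Hamiltonian, not the quantum circuit of the paper; it only shows why the Fourier transform of C(t) recovers spectral peaks.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy Hamiltonian with eigenvalues -1 and +1.
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])
evals, evecs = np.linalg.eigh(H)

# Random normalized state (stand-in for the random-state circuit).
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# Exact time correlation C(t) = <psi| e^{-iHt} |psi>, via the eigenbasis.
ts = np.linspace(0.0, 200.0, 4001)
coeffs = np.abs(evecs.conj().T @ psi) ** 2
C = np.array([np.sum(coeffs * np.exp(-1j * evals * t)) for t in ts])

# DOS(E) ~ Re \int C(t) e^{iEt} dt; peaks appear at the eigenvalues of H.
Es = np.linspace(-2.0, 2.0, 401)
dt = ts[1] - ts[0]
dos = np.array([np.real(np.sum(C * np.exp(1j * E * ts)) * dt) for E in Es])
peak_E = Es[np.argmax(dos)]  # lands on one of the eigenvalues (+/-1)
```

On a quantum device, C(t) would instead be estimated shot by shot from Hadamard-test measurements after Trotterized evolution; the post-processing is the same Fourier step.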
Funding: Supported by the National Key Research and Development Program of China (No. 2021YFB2900504).
Abstract: In 6th Generation Mobile Networks (6G), the Space-Integrated-Ground (SIG) Radio Access Network (RAN) promises seamless coverage and exceptionally high Quality of Service (QoS) for diverse services. Achieving this, however, necessitates effective management of computation and wireless resources tailored to the requirements of various services. The heterogeneity of computation resources and interference among shared wireless resources pose significant coordination and management challenges. To address these problems, this work provides an overview of multi-dimensional resource management in the 6G SIG RAN, covering both computation and wireless resources. It first reviews current investigations into computation and wireless resource management and analyzes existing deficiencies and challenges. Focusing on these challenges, the work then proposes an MEC-based computation resource management scheme and a mixed-numerology-based wireless resource management scheme. Furthermore, it outlines promising future technologies, including joint model-driven and data-driven resource management and blockchain-based resource management within the 6G SIG network. The work also highlights remaining challenges, such as reducing the communication costs associated with unstable ground-to-satellite links and overcoming barriers posed by spectrum isolation. Overall, this comprehensive approach aims to pave the way for efficient and effective resource management in future 6G networks.
Funding: Supported by the National Natural Science Foundation of China (No. 62231012), the Natural Science Foundation for Outstanding Young Scholars of Heilongjiang Province under Grant YQ2020F001, and the Heilongjiang Province Postdoctoral General Foundation under Grant AUGA4110004923.
Abstract: Low Earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form an LEO satellite edge computing system, providing computing services for ground users worldwide. In this paper, the computation offloading and resource allocation problems are formulated as a mixed-integer nonlinear program (MINLP). We propose a computation offloading algorithm based on the deep deterministic policy gradient (DDPG) to obtain the user offloading decisions and user uplink transmission power, and we use a convex optimization algorithm based on the Lagrange multiplier method to obtain the optimal MEC server resource allocation. In addition, an expression for the suboptimal user local CPU cycles is derived via a relaxation method. Simulation results show that the proposed algorithm converges well and significantly reduces the system utility value compared with other algorithms, at a considerable time cost.
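The Lagrange-multiplier step for MEC CPU allocation admits a standard closed form for one common formulation, minimizing total computing delay sum(c_i/f_i) subject to sum(f_i) = F: the KKT conditions give f_i proportional to sqrt(c_i). This is a generic textbook result under those assumptions, not necessarily the paper's exact objective or constraints; the workloads and capacity below are hypothetical.

```python
import numpy as np

def allocate_cpu(cycles, F):
    """Closed-form allocation minimizing sum(c_i / f_i) s.t. sum(f_i) = F.

    From the KKT conditions: f_i = F * sqrt(c_i) / sum_j sqrt(c_j).
    """
    c = np.asarray(cycles, dtype=float)
    s = np.sqrt(c)
    return F * s / s.sum()

cycles = np.array([2e9, 8e9, 18e9])  # hypothetical task workloads (CPU cycles)
F = 12e9                             # hypothetical total MEC capacity (cycles/s)
f = allocate_cpu(cycles, F)          # -> [2e9, 4e9, 6e9]
delay = np.sum(cycles / f)           # total computing delay: 6.0 s
```

The square-root rule beats an equal split here (equal shares of 4e9 each would give 0.5 + 2.0 + 4.5 = 7.0 s), illustrating why heavier tasks receive proportionally more, but sublinearly more, CPU.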
Funding: Financially supported by the Scientific Research Foundation of North China University of Technology (Grant Nos. 11005136024XN147-87 and 110051360024XN151-86).
Abstract: With respect to oceanic fluid dynamics, certain models have appeared, e.g., an extended time-dependent (3+1)-dimensional shallow water wave equation in an ocean or a river, which we investigate in this paper. Using symbolic computation, we find, on the one hand, a set of bilinear auto-Bäcklund transformations, which can connect certain solutions of that equation with other solutions of the equation itself, and, on the other hand, a set of similarity reductions, which take the equation to a known ordinary differential equation. The results in this paper depend on all the oceanic variable coefficients in the equation.
Funding: Supported by the Basque Government through the ELKARTEK program for Research and Innovation, under the BRTAQUANTUM project (Grant Agreement No. KK-2022/00041).
Abstract: Privacy-Preserving Computation (PPC) comprises the techniques, schemes, and protocols that ensure privacy and confidentiality in the context of secure computation and data analysis. Most current PPC techniques rely on the complexity of cryptographic operations, which quantum computers are expected to solve efficiently in the near future. This review explores how PPC can be built on top of quantum computing itself to alleviate these future threats. We analyze quantum proposals for Secure Multi-party Computation, Oblivious Transfer, and Homomorphic Encryption from the last decade, focusing on their maturity and the challenges they currently face. Our findings show a strong focus on purely theoretical works, but also a rise in the experimental consideration of these techniques in the last five years. The applicability of these techniques to actual use cases remains underexplored, and addressing it could lead to their practical assessment.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12235007, 12271324, and 11975131).
Abstract: This paper explores the rich structure of peakon and pseudo-peakon solutions for a class of higher-order b-family equations, referred to as the J-th b-family (J-bF) equations. We propose several conjectures concerning the weak solutions of these equations, including a b-independent pseudo-peakon solution, a b-independent peakon solution, and a b-dependent peakon solution. These conjectures are analytically verified for J≤14 and/or J≤9 using the symbolic computation system MAPLE, which includes a built-in definition of the higher-order derivatives of the sign function. The b-independent pseudo-peakon solution is a third-order pseudo-peakon for general arbitrary constants, with higher-order pseudo-peakons derived under specific parameter constraints. Additionally, we identify both b-independent and b-dependent peakon solutions, highlighting their distinct properties and the nuanced relationship between the parameters b and J. The existence of these solutions underscores the rich dynamical structure of the J-bF equations and generalizes previous results for lower-order equations. Future research directions include higher-order generalizations, rigorous proofs of the conjectures, interactions between different types of peakons and pseudo-peakons, stability analysis, and potential physical applications. These advancements significantly contribute to the understanding of peakon systems and their broader implications in mathematics and physics.
Abstract: We present a new perspective on the P vs NP problem by demonstrating that its answer is inherently observer-dependent in curved spacetime, revealing an oversight in the classical formulation of computational complexity theory. By incorporating general relativistic effects into complexity theory through a gravitational correction factor, we prove that problems can transition between complexity classes depending on the observer’s reference frame and local gravitational environment. This insight emerges from recognizing that the definition of polynomial time implicitly assumes a universal time metric, an assumption that breaks down in curved spacetime due to gravitational time dilation. We demonstrate the existence of gravitational phase transitions in problem complexity, where an NP-complete problem in one reference frame becomes polynomially solvable in another frame experiencing extreme gravitational time dilation. Through rigorous mathematical formulation, we establish a gravitationally modified complexity theory that extends classical complexity classes to incorporate observer-dependent effects, leading to a complete framework for understanding how computational complexity transforms across different spacetime reference frames. This finding parallels other self-referential insights in mathematics and physics, such as Gödel’s incompleteness theorems and Einstein’s relativity, suggesting a deeper connection between computation, gravitation, and the nature of mathematical truth.
Abstract: The rapid adoption of machine learning in sensitive domains, such as healthcare, finance, and government services, has heightened the need for robust, privacy-preserving techniques. Traditional machine learning approaches lack built-in privacy mechanisms, exposing sensitive data to risks and motivating the development of Privacy-Preserving Machine Learning (PPML) methods. Despite significant advances in PPML, a comprehensive and focused exploration of Secure Multi-Party Computation (SMPC) within this context remains underdeveloped. This review aims to bridge this knowledge gap by systematically analyzing the role of SMPC in PPML, offering a structured overview of current techniques, challenges, and future directions. Using a semi-systematic mapping study methodology, this paper surveys recent literature spanning SMPC protocols, PPML frameworks, implementation approaches, threat models, and performance metrics. Emphasis is placed on identifying trends, technical limitations, and comparative strengths of leading SMPC-based methods. Our findings reveal that while SMPC offers strong cryptographic guarantees for privacy, challenges such as computational overhead, communication costs, and scalability persist. The paper also discusses critical vulnerabilities, practical deployment issues, and variations in protocol efficiency across use cases.
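The simplest SMPC primitive underlying many of the surveyed protocols is additive secret sharing: each party splits its private input into random shares that sum to the input modulo a public prime, and only aggregates of shares are ever revealed. The sketch below is a minimal illustration of a three-party secure sum under that assumption, not any specific framework from the review.

```python
import random

P = 2**61 - 1  # public prime modulus

def share(secret, n):
    """Split `secret` into n additive shares mod P; any n-1 shares look random."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

# Three hypothetical parties jointly compute the sum of their private inputs.
inputs = [12, 30, 7]
shares = [share(x, 3) for x in inputs]

# Party j locally adds the j-th share of every input ...
partial = [sum(s[j] for s in shares) % P for j in range(3)]
# ... and only these partial sums are published; their total reveals just the sum.
total = sum(partial) % P  # == 49, with no party learning another's input
```

Real PPML protocols extend this idea to multiplications (e.g., via Beaver triples) and to fixed-point arithmetic for model training, which is where the surveyed overhead and scalability issues arise.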
Funding: Supported in part by the National Key R&D Program of China (Grant No. 2019YFB2101700), the National Natural Science Foundation of China (Grant Nos. 62272102, 62172320, and U21A20466), the Open Research Fund of the Key Laboratory of Cryptography of Zhejiang Province (Grant No. ZCL21015), the Qinghai Key R&D and Transformation Projects (Grant No. 2021-GX-112), the Natural Science Foundation of Nanjing University of Posts and Telecommunications (Grant No. NY222141), the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (Grant No. 22KJB520029), and the Henan Key Laboratory of Network Cryptography Technology (No. LNCT2022-A10).
Abstract: Ciphertext-Policy Attribute-Based Encryption (CP-ABE) enables fine-grained access control over ciphertexts, making it a promising approach for managing data stored in the cloud-enabled Internet of Things. However, existing schemes often suffer from privacy breaches due to the explicit attachment of access policies or the partial hiding of critical attribute content. Additionally, resource-constrained IoT devices, especially those adopting wireless communication, frequently cannot afford the decryption costs. In this paper, we propose an efficient and fine-grained access control scheme with fully hidden policies, named FHAC. FHAC conceals all attributes in the policy and utilizes Bloom filters to efficiently locate them. A test phase before decryption helps authorized users find matches between their attributes and the access policy, while dictionary attacks are thwarted by providing unauthorized users with invalid values. The heavy computational overhead of the test phase and most of the decryption phase is outsourced to two cloud servers. Additionally, users can verify the correctness of multiple outsourced decryption results simultaneously. Security analysis and performance comparisons demonstrate FHAC's effectiveness in protecting policy privacy and achieving efficient decryption.
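A Bloom filter, as used by FHAC to locate hidden attributes, answers membership queries with no false negatives and a tunable false-positive rate. The sketch below is a generic construction (the filter size, hash count, and attribute strings are illustrative assumptions), not the FHAC scheme itself, which additionally hides the attributes cryptographically.

```python
import hashlib

class BloomFilter:
    def __init__(self, m=256, k=4):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, item):
        # Derive k bit positions from a single SHA-256 digest.
        digest = hashlib.sha256(item.encode()).digest()
        return [int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.m
                for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item):
        # May report false positives, never false negatives.
        return all(self.bits[p] for p in self._positions(item))

# Hypothetical policy attributes; a real scheme would insert blinded values.
bf = BloomFilter()
for attr in ["dept:cardiology", "role:doctor", "clearance:2"]:
    bf.add(attr)
```

A user can now cheaply test whether each of their attributes might appear in the policy before attempting the expensive (outsourced) decryption, which is exactly the efficiency role the filter plays in the test phase.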
Abstract: As the demand for cross-departmental data collaboration continues to grow, traditional encryption methods struggle to balance data privacy with computational efficiency. This paper proposes a cross-departmental privacy-preserving computation framework based on BFV homomorphic encryption, threshold decryption, and blockchain technology. The proposed scheme leverages homomorphic encryption to enable secure computations between sales, finance, and taxation departments, ensuring that sensitive data remains encrypted throughout the entire process. A threshold decryption mechanism is employed to prevent single-point data leakage, while blockchain and IPFS are integrated to ensure verifiability and tamper-proof storage of computation results. Experimental results on 5,000 sample data entries demonstrate that the framework performs efficiently and is highly scalable in key stages such as sales encryption, cost calculation, and tax assessment, validating its practical feasibility and security.
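The threshold idea behind the decryption mechanism can be illustrated with Shamir's (t, n) secret sharing: a secret (here, a stand-in for a decryption key) is split so that any t of n shares reconstruct it, while fewer reveal nothing. This is a generic sketch of the threshold concept, not the BFV-specific threshold decryption used in the paper.

```python
import random

P = 2_147_483_647  # prime modulus (2^31 - 1)

def make_shares(secret, t, n):
    """Shamir (t, n) sharing: evaluate a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(1234, t=3, n=5)   # 5 departments, any 3 can decrypt
recovered = reconstruct(shares[:3])    # == 1234
```

Any three departments can cooperate to recover the value, so no single compromised party causes data leakage, which is the "no single-point leakage" property the framework relies on.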
Funding: Supported in part by the National Natural Science Foundation of China under Grants 62272007 and U23B2002; in part by the Excellent Young Talents Project of the Beijing Municipal University Teacher Team Construction Support Plan under Grant BPHR202203031; in part by the Yunnan Key Laboratory of Blockchain Application Technology under Grant 2021105AG070005 (YNB202102); and in part by the Open Topics of the Key Laboratory of Blockchain Technology and Data Security, Ministry of Industry and Information Technology of the People's Republic of China, under Grant 20243222.
Abstract: The wide application of smart contracts allows industry companies to implement complex distributed collaborative businesses that involve the calculation of complex functions, such as matrix operations. However, complex functions such as matrix operations are difficult to implement on Ethereum Virtual Machine (EVM)-based smart contract platforms due to the limitations of their distributed security environment. Existing off-chain methods often significantly reduce contract execution efficiency, so implementing a platform software development kit (SDK) interface has become a feasible way to reduce overheads; however, this approach cannot verify operation correctness and may leak sensitive user data. To solve these problems, we propose a verifiable EVM-based smart contract cross-language implementation scheme for complex operations, especially matrix operations, which guarantees operation correctness and user privacy while ensuring computational efficiency. In this scheme, a verifiable interaction process is designed to verify the computation process and results, and a matrix blinding technique is introduced to protect sensitive user data during calculation. Security analysis and performance tests show that the proposed scheme satisfies the correctness and privacy requirements of the cross-language implementation of smart contracts at a small additional efficiency cost.
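Cheap randomized verification of an outsourced matrix product is a well-known idea, classically realized by Freivalds' algorithm: checking C = A·B with random vectors costs O(n^2) per round instead of O(n^3). This is offered as an illustration of the verification concept only; the paper's verifiable interaction process is its own protocol and is not claimed to be Freivalds' check.

```python
import numpy as np

rng = np.random.default_rng(7)

def freivalds(A, B, C, rounds=30):
    """Probabilistically verify C == A @ B in O(n^2) per round.

    Each round multiplies by a random 0/1 vector; a wrong C is caught with
    probability >= 1/2 per round, so 30 rounds miss with probability <= 2^-30.
    """
    n = C.shape[1]
    for _ in range(rounds):
        r = rng.integers(0, 2, size=(n, 1))
        if not np.array_equal(A @ (B @ r), C @ r):
            return False
    return True

A = rng.integers(0, 10, size=(4, 4))
B = rng.integers(0, 10, size=(4, 4))
C = A @ B  # result claimed by the (untrusted) off-chain computation
```

The key point mirrored by the paper's design: verification is asymptotically cheaper than recomputation, so an on-chain or client-side verifier can stay lightweight.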
Funding: Supported by NSF China (Nos. T2421002, 62061146002, and 62020106005).
Abstract: Distributed computing is an important topic in the field of wireless communications and networking, and its efficiency in handling large amounts of data is particularly noteworthy. Although distributed computing benefits from its ability to process data in parallel, it incurs a communication burden between servers, which delays the computation process. Recent research has applied coding in distributed computing to reduce the communication burden, using repetitive computation to enable multicast opportunities so that the same coded information can be reused across different servers. To handle computation tasks in practical heterogeneous systems, we propose a novel coding scheme that effectively mitigates the "straggling effect" in distributed computing. We assume there are two types of servers in the system whose only difference is their computational capability; the servers with lower computational capability are called stragglers. Given any ratio of fast servers to slow servers and any gap in computational capability between them, we achieve approximately the same computation time for both by assigning them different amounts of computation tasks, thus reducing the overall computation time. Furthermore, we investigate the information-theoretic lower bound of the inter-communication load and show that it is within a constant multiplicative gap of the upper bound achieved by our scheme. Various simulations also validate the effectiveness of the proposed scheme.
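The basic straggler-mitigation mechanism in coded computing can be seen in a classic MDS-coded matrix-vector multiply: encode k data blocks into n coded blocks with a Vandermonde code, and recover the full result from any k worker outputs, so up to n-k stragglers can be ignored. This sketch shows that generic construction under hypothetical sizes, not the paper's heterogeneous two-class scheme.

```python
import numpy as np

rng = np.random.default_rng(3)

k, n = 3, 5                        # k data blocks, n workers: tolerates n-k stragglers
A = rng.normal(size=(6, 4))        # split into k row-blocks of 2 rows each
x = rng.normal(size=4)
blocks = np.split(A, k)

alphas = np.arange(1, n + 1, dtype=float)  # distinct evaluation points
# Worker j stores the coded block sum_i alphas[j]**i * blocks[i] (Vandermonde code).
coded = [sum(a ** i * blocks[i] for i in range(k)) for a in alphas]

# Each worker computes its coded partial product; suppose workers 1 and 3 straggle.
results = {j: coded[j] @ x for j in range(n) if j not in (1, 3)}

# Any k results suffice: invert the k x k Vandermonde system to decode.
idx = sorted(results)[:k]
V = np.vander(alphas[idx], k, increasing=True)   # V[r, i] = alpha_r ** i
stacked = np.array([results[j] for j in idx])
decoded = np.linalg.solve(V, stacked)            # recovers each blocks[i] @ x
y = decoded.reshape(-1)                          # equals A @ x
```

The paper's contribution layers a heterogeneity-aware load split on top of this kind of redundancy, sizing fast and slow servers' workloads so both finish at roughly the same time.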
Abstract: In this study, the flow characteristics around a group of three piers arranged in tandem were investigated both numerically and experimentally. The simulation utilised the volume of fluid (VOF) model in conjunction with the k–ε method (i.e., for representing flow turbulence), implemented through the ANSYS FLUENT software, to model the free-surface flow. The simulation results were validated against laboratory measurements obtained using an acoustic Doppler velocimeter. The comparative analysis revealed discrepancies between the simulated and measured maximum velocities within the investigated flow field. However, the numerical results demonstrated a distinct vortex-induced flow pattern following the first pier and throughout the vicinity of the entire pier group, which aligned reasonably well with experimental data. In the heavily narrowed spaces between the piers, simulated velocity profiles were overestimated in the free-surface region and underestimated in the areas near the bed to mid-stream when compared to measurements. These discrepancies diminished away from the regions with intense vortices, indicating that the employed model was capable of simulating relatively less disturbed flow turbulence. Furthermore, velocity results from simulations and measurements were compared at three different depth ratios (0.15, 0.40, and 0.62) to assess vortex characteristics around the piers; this comparison revealed consistent results between the experimental and simulated data. This research contributes to a deeper understanding of flow dynamics around complex interactive pier systems, which is critical for designing stable and sustainable hydraulic structures. The insights gained also provide valuable information for engineers aiming to develop effective strategies for controlling scour and minimizing destructive vortex effects, thereby guiding the design and maintenance of sustainable infrastructure.
Funding: Supported by Prince Sattam bin Abdulaziz University through project number (2024/RV/06).
Abstract: With the rapid advancements in technology and science, optimization theory and algorithms have become increasingly important. A wide range of real-world problems is classified as optimization challenges, and meta-heuristic algorithms have shown remarkable effectiveness in solving these challenges across diverse domains, such as machine learning, process control, and engineering design, showcasing their capability to address complex optimization problems. The Stochastic Fractal Search (SFS) algorithm is one of the most popular meta-heuristic optimization methods, inspired by the fractal growth patterns of natural materials. Since its introduction by Hamid Salimi in 2015, SFS has garnered significant attention from researchers and has been applied to diverse optimization problems across multiple disciplines. Its popularity can be attributed to several factors, including its simplicity, practical computational efficiency, ease of implementation, rapid convergence, high effectiveness, and ability to address single- and multi-objective optimization problems, often outperforming other established algorithms. This review paper offers a comprehensive and detailed analysis of the SFS algorithm, covering its standard version, modifications, hybridizations, and multi-objective implementations. The paper also examines several SFS applications across diverse domains, including power and energy systems, image processing, machine learning, wireless sensor networks, environmental modeling, economics and finance, and numerous engineering challenges. Furthermore, the paper critically evaluates the SFS algorithm's performance, benchmarking its effectiveness against recently published meta-heuristic algorithms. In conclusion, the review highlights key findings and suggests potential directions for future developments and modifications of the SFS algorithm.
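The flavor of SFS can be conveyed by its diffusion phase: particles perform Gaussian random walks centered on the best point found so far, with a step scale that shrinks over the generations. The sketch below is a deliberately simplified illustration of that phase only (the full algorithm also has an update phase and several Gaussian-walk variants), minimizing the sphere benchmark function under assumed population and iteration settings.

```python
import numpy as np

rng = np.random.default_rng(5)

def sphere(x):
    return float(np.sum(x ** 2))

def sfs_diffusion(f, dim=5, pop=20, walks=3, iters=200, bounds=(-5.0, 5.0)):
    """Simplified SFS diffusion phase: Gaussian walks around the best point
    with a step size that shrinks as the search progresses."""
    lo, hi = bounds
    points = rng.uniform(lo, hi, size=(pop, dim))
    fitness = np.array([f(p) for p in points])
    for g in range(1, iters + 1):
        best = points[fitness.argmin()].copy()
        step = np.log(g + 1) / (g + 1)              # shrinking walk scale
        for i in range(pop):
            sigma = step * (np.abs(points[i] - best) + 0.1)
            for _ in range(walks):
                cand = rng.normal(best, sigma)      # Gaussian walk around the best
                fc = f(cand)
                if fc < fitness[i]:                 # greedy acceptance per particle
                    points[i], fitness[i] = cand, fc
    return points[fitness.argmin()], float(fitness.min())

best_x, best_f = sfs_diffusion(sphere)  # converges close to the origin
```

The shrinking log(g)/g step factor is what gives SFS its characteristic early exploration and late refinement; the review discusses many modifications that tune exactly this balance.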