Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Determining reasonable service caching and computation offloading strategies is therefore crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
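The abstract does not specify how the per-objective rewards and RBFN-learned weights are combined; the following is a minimal illustrative sketch, assuming a simplex-normalized weighted sum of objective rewards and a Gaussian RBF unit of the kind an RBFN weight-updater could be built from. Function names and shapes are assumptions, not the paper's actual implementation.

```python
import numpy as np

def rbf_unit(x, center, gamma=1.0):
    """Gaussian radial basis unit, the building block of an RBFN."""
    x, center = np.asarray(x, float), np.asarray(center, float)
    return np.exp(-gamma * np.sum((x - center) ** 2))

def scalarize(rewards, weights):
    """Collapse per-objective rewards (e.g. delay, energy, load balance,
    privacy entropy) into one training signal for a DDQN agent,
    keeping the weights normalized on the probability simplex."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    return float(np.dot(w, np.asarray(rewards, float)))
```

In a dynamic-weight scheme, `scalarize` would be re-invoked each step with weights produced by the RBFN from observed changes in the objective values.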
The cloud-fog computing paradigm has emerged as a hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading, and then executing efficiently, is a critical issue in achieving a trade-off between energy consumption and transmission delay. In this network, a task processed at fog nodes reduces transmission delay but increases energy consumption, while routing tasks to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency: for instance, executing lower-priority tasks before higher-priority jobs can disturb the reliability and stability of the system. Therefore, an efficient strategy for optimal computation offloading and task scheduling is required for operational efficacy. In this paper, we introduce a multi-objective and enhanced version of the Cheetah Optimizer (CO), namely MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks, minimizing two competing objectives: energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. The mathematical model of CO needs improvement in computation time and convergence speed; MoECO therefore increases the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step-length operator is adjusted to diversify the solutions and thus improve the exploration phase, i.e., the global search strategy. Consequently, this prevents the algorithm from getting trapped in a local optimum. Moreover, the interaction factor during the exploitation phase is adjusted based on the location of the prey instead of the adjacent cheetah, which increases the exploitation capability of agents, i.e., local search. Furthermore, MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize the designated objectives. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between the optimization objectives compared to baseline methods.
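The leader-guided search move that the abstract describes can be sketched generically as follows. This is a hedged illustration of the idea (drift toward the leader plus a bounded random perturbation for diversification), not the MoECO update equations, whose exact form is given only in the paper.

```python
import random

def leader_guided_move(position, leader, step=0.5, jitter=0.1, rng=None):
    """One search move: drift each coordinate toward the current leader's
    position, plus a small random perturbation that diversifies the
    exploration phase and helps escape local optima."""
    rng = rng or random.Random(0)
    return [p + step * (l - p) + jitter * (rng.random() - 0.5)
            for p, l in zip(position, leader)]
```

With `jitter=0.0` the move is a pure contraction toward the leader; the jitter term is what keeps the population diverse.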
In the field of edge computing, achieving low-latency computational task offloading with limited resources is a critical research challenge, particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications. In scenarios where edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance, thereby increasing system latency. Furthermore, in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency issues. To address these challenges, we propose a novel and efficient Deep Reinforcement Learning (DRL)-based approach aimed at minimizing average task latency. The proposed method incorporates three offloading strategies: local computation, direct offloading to the edge server in the local region, and device-to-device (D2D)-assisted offloading to edge servers in other regions. We formulate the task offloading process as a complex latency-minimization optimization problem. To solve it, we propose an advanced algorithm based on the Dueling Double Deep Q-Network (D3QN) architecture, incorporating the Prioritized Experience Replay (PER) mechanism. Experimental results demonstrate that, compared with existing offloading algorithms, the proposed method significantly reduces average task latency, enhances user experience, and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
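The PER mechanism mentioned above has a standard proportional form: transitions with larger temporal-difference error are replayed more often. A minimal sketch, assuming the common `priority = (|td_error| + eps)^alpha` rule (the paper's exact buffer, which typically also uses importance-sampling weights and a sum-tree for efficiency, is not shown in the abstract):

```python
import random

class PrioritizedReplay:
    """Minimal proportional prioritized experience replay buffer."""
    def __init__(self, alpha=0.6, seed=0):
        self.alpha = alpha
        self.rng = random.Random(seed)
        self.items, self.priorities = [], []

    def add(self, transition, td_error):
        # Larger TD error -> larger priority -> sampled more often.
        self.items.append(transition)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self):
        # Roulette-wheel selection proportional to priority.
        r = self.rng.random() * sum(self.priorities)
        acc = 0.0
        for item, p in zip(self.items, self.priorities):
            acc += p
            if r <= acc:
                return item
        return self.items[-1]
```

Production implementations replace the linear scan with a sum-tree so sampling is O(log n) instead of O(n).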
The advent of quantum computing poses a significant challenge to traditional cryptographic protocols, particularly those used in Secure Multiparty Computation (MPC), a fundamental cryptographic primitive for privacy-preserving computation. Classical MPC relies on cryptographic techniques such as homomorphic encryption, secret sharing, and oblivious transfer, which may become vulnerable in the post-quantum era due to the computational power of quantum adversaries. This study presents a review of 140 peer-reviewed articles published between 2000 and 2025, drawn from databases including MDPI, IEEE Xplore, Springer, and Elsevier, examining the applications, types, and security issues of quantum computing, together with proposed solutions, across different fields. The review explores the impact of quantum computing on MPC security, assesses emerging quantum-resistant MPC protocols, and examines hybrid classical-quantum approaches aimed at mitigating quantum threats. We analyze the role of Quantum Key Distribution (QKD), post-quantum cryptography (PQC), and quantum homomorphic encryption in securing multiparty computations. Additionally, we discuss the challenges of scalability, computational efficiency, and practical deployment of quantum-secure MPC frameworks in real-world applications such as privacy-preserving AI, secure blockchain transactions, and confidential data analysis. This review provides insights into future research directions and open challenges in ensuring secure, scalable, and quantum-resistant multiparty computation.
Processes supported by process-aware information systems are subject to continuous and often subtle changes due to evolving operational, organizational, or regulatory factors. These changes, referred to as incremental concept drift, gradually alter the behavior or structure of processes, making their detection and localization a challenging task. Traditional process mining techniques frequently assume process stationarity and are limited in their ability to detect such drift, particularly from a control-flow perspective. The objective of this research is to develop an interpretable and robust framework capable of detecting and localizing incremental concept drift in event logs, with a specific emphasis on the structural evolution of control-flow semantics in processes. We propose DriftXMiner, a control-flow-aware hybrid framework that combines statistical, machine learning, and process model analysis techniques. The approach comprises three key components: (1) a Cumulative Drift Scanner that tracks directional statistical deviations to detect early drift signals; (2) a Temporal Clustering and Drift-Aware Forest Ensemble (DAFE) to capture distributional and classification-level changes in process behavior; and (3) Petri net-based process model reconstruction, which enables the precise localization of structural drift using transition deviation metrics and replay fitness scores. Experimental validation on the BPI Challenge 2017 event log demonstrates that DriftXMiner effectively identifies and localizes gradual and incremental process drift over time. The framework achieves a detection accuracy of 92.5%, a localization precision of 90.3%, and an F1-score of 0.91, outperforming competitive baselines such as CUSUM+Histograms and ADWIN+Alpha Miner. Visual analyses further confirm that identified drift points align with transitions in control-flow models and behavioral cluster structures. DriftXMiner offers a novel and interpretable solution for incremental concept drift detection and localization in dynamic, process-aware systems. By integrating statistical signal accumulation, temporal behavior profiling, and structural process mining, the framework enables fine-grained drift explanation and supports adaptive process intelligence in evolving environments. Its modular architecture supports extension to streaming data and real-time monitoring contexts.
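The "cumulative directional deviation" idea behind the Cumulative Drift Scanner (and the CUSUM baseline it is compared against) can be sketched with a classical one-sided CUSUM statistic. This is the textbook form, offered only as an illustration of the signal-accumulation principle; DriftXMiner's actual scanner is defined in the paper, not here.

```python
def cusum_scan(stream, baseline, slack=0.5, threshold=5.0):
    """One-sided CUSUM: accumulate deviations above `baseline + slack`
    and report the index where the statistic first crosses `threshold`.
    Small transient deviations decay back to zero; sustained drift
    accumulates and triggers detection."""
    s = 0.0
    for i, x in enumerate(stream):
        s = max(0.0, s + (x - baseline - slack))
        if s > threshold:
            return i
    return -1  # no drift detected
```

The `slack` parameter suppresses noise, while `threshold` trades detection delay against false alarms.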
Beam-tracking simulations have been extensively utilized in the study of collective beam instabilities in circular accelerators. Traditionally, many simulation codes have relied on central processing unit (CPU)-based methods, tracking on a single CPU core or parallelizing the computation across multiple cores via the Message Passing Interface (MPI). Although these approaches work well for single-bunch tracking, scaling them to multiple bunches significantly increases the computational load, which often necessitates a dedicated multi-CPU cluster. To address this challenge, alternative methods leveraging General-Purpose computing on Graphics Processing Units (GPGPU) have been proposed, enabling tracking studies on a standalone desktop personal computer (PC). However, frequent CPU-GPU interactions, including data transfers and synchronization operations during tracking, can introduce communication overheads, potentially reducing the overall effectiveness of GPU-based computations. In this study, we propose a novel approach that eliminates this overhead by performing the entire tracking simulation process exclusively on the GPU, thereby enabling the simultaneous processing of all bunches and their macro-particles. Specifically, we introduce MBTRACK2-CUDA, a Compute Unified Device Architecture (CUDA) port of MBTRACK2, which facilitates efficient tracking of single- and multi-bunch collective effects by leveraging fully GPU-resident computation.
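MBTRACK2-CUDA's kernels are not shown in the abstract; as a CPU-side illustration of the data layout it describes (all bunches and macro-particles advanced together as one array operation, the same pattern a GPU-resident code keeps on the device between turns), here is a hedged NumPy sketch of one longitudinal tracking turn. The map and parameter names are generic textbook forms, not the code's actual API.

```python
import numpy as np

def track_one_turn(z, dp, slip_factor, circumference, rf_kick):
    """Advance every bunch and macro-particle one turn at once.
    `z` and `dp` are (n_bunches, n_particles) arrays of longitudinal
    position and relative momentum deviation."""
    dp = dp + rf_kick(z)                      # energy kick at the RF cavity
    z = z - slip_factor * circumference * dp  # longitudinal drift
    return z, dp
```

On a GPU the same whole-array operations run as kernels over device-resident arrays, so no per-turn host-device transfer is needed.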
As Internet of Things (IoT) applications expand, Mobile Edge Computing (MEC) has emerged as a promising architecture to overcome the real-time processing limitations of mobile devices. Edge-side computation offloading plays a pivotal role in MEC performance but remains challenging due to complex task topologies, conflicting objectives, and limited resources. This paper addresses high-dimensional multi-objective offloading for serial heterogeneous tasks in MEC. We jointly consider task heterogeneity, high-dimensional objectives, and flexible resource scheduling, modeling the problem as a many-objective optimization problem. To solve it, we propose a flexible framework integrating an improved cooperative co-evolutionary algorithm based on decomposition (MOCC/D) and a flexible scheduling strategy. Experimental results on benchmark functions and simulation scenarios show that the proposed method outperforms existing approaches in both convergence and solution quality.
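Decomposition-based algorithms such as the MOCC/D family typically turn one many-objective problem into a set of single-objective subproblems via a scalarizing function; the Tchebycheff form is the standard choice and is sketched below. This is the generic textbook function, offered as background rather than MOCC/D's specific decomposition.

```python
def tchebycheff(objectives, weights, ideal):
    """Tchebycheff scalarization: for one weight vector, score a solution
    by its maximum weighted distance to the ideal point. Each weight
    vector defines one single-objective subproblem to minimize."""
    return max(w * abs(f - z)
               for f, w, z in zip(objectives, weights, ideal))
```

Sweeping many weight vectors and minimizing each subproblem yields a spread of solutions approximating the Pareto front.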
Physics-informed neural networks (PINNs) have emerged as a promising class of scientific machine learning techniques that integrate governing physical laws into neural network training. Their ability to enforce differential equations, constitutive relations, and boundary conditions within the loss function provides a physically grounded alternative to traditional data-driven models, particularly for solid and structural mechanics, where data are often limited or noisy. This review offers a comprehensive assessment of recent developments in PINNs, combining bibliometric analysis, theoretical foundations, application-oriented insights, and methodological innovations. A bibliometric survey indicates a rapid increase in publications on PINNs since 2018, with prominent research clusters focused on numerical methods, structural analysis, and forecasting. Building upon this trend, the review consolidates advancements across five principal application domains: forward structural analysis, inverse modeling and parameter identification, structural and topology optimization, assessment of structural integrity, and manufacturing processes. These applications are propelled by substantial methodological advancements, encompassing rigorous enforcement of boundary conditions, modified loss functions, adaptive training, domain decomposition strategies, multi-fidelity and transfer learning approaches, and hybrid finite element-PINN integration. These advances address recurring challenges in solid mechanics, such as high-order governing equations, material heterogeneity, complex geometries, localized phenomena, and limited experimental data. Despite remaining challenges in computational cost, scalability, and experimental validation, PINNs are increasingly evolving into specialized, physics-aware tools for practical solid and structural mechanics applications.
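The defining feature of a PINN loss (PDE residual plus boundary penalty) can be illustrated on a 1-D Poisson problem. In this hedged sketch, finite differences on a uniform grid stand in for the automatic differentiation a real PINN would use on the network output; the weighting scheme and names are illustrative.

```python
import numpy as np

def pinn_style_loss(u, x, f, u_left, u_right, w_pde=1.0, w_bc=1.0):
    """Composite PINN-style loss for u''(x) = f(x) on a uniform grid:
    mean-squared PDE residual in the interior plus a penalty on the
    two Dirichlet boundary conditions."""
    h = x[1] - x[0]
    residual = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2 - f[1:-1]
    loss_pde = float(np.mean(residual ** 2))
    loss_bc = float((u[0] - u_left) ** 2 + (u[-1] - u_right) ** 2)
    return w_pde * loss_pde + w_bc * loss_bc
```

The exact solution u = x²/2 for f = 1 with u(0) = 0, u(1) = 0.5 drives both terms to (numerically) zero, which is the training target.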
Classical computation of electronic properties in large-scale materials remains challenging. Quantum computation has the potential to offer advantages in memory footprint and computational scaling. However, general and viable quantum algorithms for simulating large-scale materials are still limited. We propose and implement random-state quantum algorithms to calculate electronic-structure properties of real materials. Using a random-state circuit on a small number of qubits, we employ real-time evolution with first-order Trotter decomposition and the Hadamard test to obtain the electronic density of states, and we develop a modified quantum phase estimation algorithm to calculate the real-space local density of states via direct quantum measurements. Furthermore, we validate these algorithms by numerically computing the density of states and spatial distributions of electronic states in graphene, twisted bilayer graphene quasicrystals, and fractal lattices, covering system sizes from hundreds to thousands of atoms. Our results demonstrate that random-state quantum algorithms provide a general and qubit-efficient route to scalable simulations of electronic properties in large-scale periodic and aperiodic materials.
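The random-state idea underlying these algorithms (estimating the density of states as an average spectral weight of normalized random vectors) can be sketched classically. Here exact diagonalization stands in for the paper's Trotterized time evolution and Hadamard test, and the Lorentzian broadening, function names, and parameters are illustrative assumptions.

```python
import math
import numpy as np

def dos_random_states(H, energies, eta=0.5, n_states=20, seed=0):
    """Stochastic density-of-states estimate: average the spectral
    weights of normalized random states |r>, broadened by a Lorentzian
    of width eta. Converges to the true DOS as n_states grows."""
    rng = np.random.default_rng(seed)
    dim = H.shape[0]
    vals, vecs = np.linalg.eigh(H)
    dos = np.zeros(len(energies))
    for _ in range(n_states):
        r = rng.normal(size=dim) + 1j * rng.normal(size=dim)
        r /= np.linalg.norm(r)
        weights = np.abs(vecs.conj().T @ r) ** 2  # overlaps with eigenstates
        for e, w in zip(vals, weights):
            dos += w * (eta / math.pi) / ((energies - e) ** 2 + eta ** 2)
    return dim * dos / n_states
```

The qubit efficiency in the quantum setting comes from never forming the eigendecomposition: time evolution plus measurement replaces the `eigh` call used in this classical stand-in.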
We introduce a model to implement incremental update of views. The principle is that unless a view is accessed, the modification related to the view is not computed; this modification information is used only when views are updated. Modification information is embodied in the classes (including inheritance classes and nesting classes) that derive the view. We establish a modify list consisting of tuples (one tuple for each view related to the class) to implement view update. A method is used to keep views from re-update.
Key words: object-oriented database; incremental computation; view computation; engineering information system. CLC number: TP 391. Foundation item: Supported by the National Natural Science Foundation of China (60235025). Biography: Guo Hai-ying (1971-), female, Ph.D., research direction: CAD and engineering information systems.
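The lazy-maintenance principle described above (log modifications per view, fold them in only on access) can be sketched as follows. The class and method names are illustrative, and this flat-list version ignores the inheritance/nesting structure the paper's model handles.

```python
class LazyView:
    """Deferred view maintenance: base modifications are appended to the
    view's modify list and applied only when the view is actually read."""
    def __init__(self, rows):
        self.rows = list(rows)
        self.modify_list = []  # pending (op, value) tuples

    def record(self, op, value):
        self.modify_list.append((op, value))  # no recomputation here

    def read(self):
        for op, value in self.modify_list:    # applied only on access
            if op == "add":
                self.rows.append(value)
            elif op == "remove" and value in self.rows:
                self.rows.remove(value)
        self.modify_list.clear()              # prevents re-update
        return list(self.rows)
```

Clearing the modify list after `read` is the sketch's analogue of the paper's mechanism for keeping views from being re-updated.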
A method is presented for incrementally computing success patterns of logic programs. The set of success patterns of a logic program with respect to an abstraction is formulated as the success set of an equational logic program modulo an equality theory that is induced by the abstraction. The method is exemplified via depth and stump abstractions. Also presented are algorithms for computing most general unifiers modulo equality theories induced by depth and stump abstractions.
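A depth abstraction of the kind mentioned above truncates terms at a fixed depth so that only finitely many abstract patterns exist. A minimal sketch, using tuples `('f', arg1, ...)` for compound terms and strings for variables/constants; the representation and placeholder symbol are assumptions, not the paper's formalism.

```python
def depth_abstract(term, k):
    """Depth-k abstraction: every subterm at depth >= k collapses to the
    placeholder '_'. Abstracted terms form a finite domain, which makes
    the fixpoint computation of success patterns terminate."""
    if k == 0:
        return '_'
    if not isinstance(term, tuple):
        return term
    functor, *args = term
    return (functor, *[depth_abstract(a, k - 1) for a in args])
```

Unification modulo this abstraction then treats `'_'` as matching any subterm, which is what the paper's most-general-unifier algorithms formalize.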
The purpose of this review is to explore the intersection of computational engineering and biomedical science, highlighting the transformative potential this convergence holds for innovation in healthcare and medical research. The review covers key topics such as computational modelling, bioinformatics, machine learning in medical diagnostics, and the integration of wearable technology for real-time health monitoring. Major findings indicate that computational models have significantly enhanced the understanding of complex biological systems, while machine learning algorithms have improved the accuracy of disease prediction and diagnosis. The synergy between bioinformatics and computational techniques has led to breakthroughs in personalized medicine, enabling more precise treatment strategies. Additionally, the integration of wearable devices with advanced computational methods has opened new avenues for continuous health monitoring and early disease detection. The review emphasizes the need for interdisciplinary collaboration to further advance this field. Future research should focus on developing more robust and scalable computational models, enhancing data integration techniques, and addressing ethical considerations related to data privacy and security. By fostering innovation at the intersection of these disciplines, the potential to revolutionize healthcare delivery and outcomes becomes increasingly attainable.
A new analytical model for geometric size and forming force prediction in incremental flanging (IF) is presented in this work. The complex deformation characteristics of IF are considered in the modeling process, so the model can accurately describe the strain and stress states in IF. Based on strain analysis, the model can predict the material thickness distribution and neck height after IF. By considering contact area, strain characteristics, material thickness changes, and friction, the model can predict the specific moments and corresponding values of the maximum axial forming force and maximum horizontal forming force during IF. In addition, an IF experiment involving different tool diameters, flanging diameters, and opening hole diameters is conducted. On the basis of the experimental strain paths, the strain characteristics of different deformation zones are studied, and the stable strain ratio is quantitatively described through two dimensionless parameters: relative tool diameter and relative hole diameter. Then, the evolution of material thickness and forming force in IF, and the variation of minimum material thickness, neck height, maximum axial forming force, and maximum horizontal forming force with flanging parameters, are studied, and the reliability of the analytical model is verified in this process. Finally, the influence of the horizontal forming force on tool design and the fluctuation of the forming force are explained.
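For context, the classical first-order estimate of wall thinning in incremental forming is the sine law, sketched below. The paper's strain-based model is considerably richer (it accounts for contact, friction, and flanging geometry); this baseline is included only to show the kind of thickness prediction such models refine.

```python
import math

def sine_law_thickness(t0, wall_angle_deg):
    """Sine-law thickness estimate after incremental forming at wall
    angle alpha: t = t0 * sin(90 deg - alpha). Steeper walls thin more."""
    return t0 * math.sin(math.radians(90.0 - wall_angle_deg))
```

At a 60-degree wall angle the sine law predicts the sheet thins to half its initial thickness, which is why steep-wall parts are prone to fracture.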
This paper presents the design of a morphing aircraft with asymmetrically variable wingtip anhedral angles, inspired by biomimetic mechanisms, to enhance lateral maneuver capability. First, we establish a lateral dynamic model considering the additional forces and moments generated during the morphing process, and convert it into a Multiple Input Multiple Output (MIMO) virtual control system by introducing virtual inputs. Second, a classical dynamic inversion controller is designed for the outer-loop system, and a new Global Fast Terminal Incremental Sliding Mode Controller based on a Nonlinear Disturbance Observer (NDO-GFTISMC) is proposed for the inner-loop system, in which an adaptive law is implemented to weaken control-surface chattering and the NDO is integrated to compensate for unknown disturbances. The whole control system is proven to be semi-globally uniformly ultimately bounded based on the multi-Lyapunov function method. Furthermore, considering tracking errors and the characteristics of the actuators, a quadratic-programming-based dynamic control allocation law is designed, which allocates the virtual control inputs to the asymmetrically deformed wingtip and the rudder. Actuator dynamic models are incorporated to ensure the physical realizability of the designed allocation law. Finally, comparative experimental results validate the effectiveness of the designed control system and control allocation law. The NDO-GFTISMC features faster convergence, stronger robustness, and 81.25% and 75.0% reductions in maximum state tracking error under uncertainty compared to the Incremental Nonlinear Dynamic Inversion controller based on NDO (NDO-INDI) and the Incremental Sliding Mode Controller based on NDO (NDO-ISMC), respectively. The design of the morphing aircraft significantly enhances lateral maneuver capability, maintaining a substantial control margin during lateral maneuvering, reducing the burden on the rudder surface, and effectively solving the actuator saturation problem of traditional aircraft during lateral maneuvering.
Low earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form the LEO satellite edge computing system, providing computing services for global ground users. In this paper, the computation offloading problem and the resource allocation problem are formulated as a mixed-integer nonlinear programming (MINLP) problem. This paper proposes a computation offloading algorithm based on the deep deterministic policy gradient (DDPG) to obtain the user offloading decisions and user uplink transmission power, and uses a convex optimization algorithm based on the Lagrange multiplier method to obtain the optimal MEC server resource allocation scheme. In addition, the expression for suboptimal user local CPU cycles is derived by a relaxation method. Simulation results show that the proposed algorithm achieves excellent convergence and significantly reduces the system utility values at considerable time cost compared with other algorithms.
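The Lagrange-multiplier step admits a closed form under a common delay-minimization objective. The abstract does not state the paper's exact objective; assuming the frequent formulation of minimizing total execution delay sum(c_i / f_i) subject to sum(f_i) = F, the KKT stationarity condition c_i / f_i² = λ yields the square-root allocation sketched below.

```python
import math

def allocate_mec_cpu(task_cycles, total_capacity):
    """Closed-form KKT solution for min sum(c_i / f_i) s.t. sum(f_i) = F:
    each task gets CPU frequency proportional to sqrt(c_i)."""
    roots = [math.sqrt(c) for c in task_cycles]
    s = sum(roots)
    return [total_capacity * r / s for r in roots]
```

Heavier tasks get more CPU, but only in proportion to the square root of their cycle demand, which equalizes the marginal delay reduction across users.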
The influence of geometric configuration on the friction characteristics during incremental sheet forming of AA5052 was analyzed by integrating surface morphology and its characteristic parameters, along with plastic strain, contact pressure, and contact area. The interface promotes lubrication and support when wall angles are ≤40°, a thin (0.5 mm) sheet is used, and a large (10 mm) tool radius is employed. This mainly results in micro-plowing and plastic extrusion flow, leading to a lower friction coefficient. However, when wall angles exceed 40°, significant plastic-strain roughening occurs, leading to inadequate lubrication of the newly formed surface. Increased sheet thickness and decreased tool radius elevate the contact pressure. These conditions trigger micro-cutting and adhesion, potentially leading to localized scuffing and dimple tears, and a higher friction coefficient. The friction mechanisms remain unaffected by the part's plane curve features. As the forming process progresses, abrasive wear intensifies, and the surface morphology evolves unfavorably for lubrication and friction reduction.
Incremental Nonlinear Dynamic Inversion (INDI) is a control approach that has gained popularity in flight control over the past decade. Besides the INDI law, several common additional components complement an INDI-based controller. This paper, the second part of a two-part series of surveys on INDI, aims to summarize the modern trends in INDI and its related components. Besides a comprehensive components specification, it addresses their most common challenges, compares different variants, and discusses proposed advances. Further important aspects of INDI are gain design, stability, and robustness. This paper also provides an overview of research conducted concerning these aspects. The paper is written in a tutorial style to familiarize researchers with the essential specifics and pitfalls of INDI and its components. At the same time, it can also serve as a reference for readers already familiar with INDI.
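The core INDI law the survey builds on can be stated in one line: rather than inverting the full dynamics, the controller commands an increment on the previous input that closes the gap between the desired virtual control and the measured state derivative. A scalar sketch (the multivariable version replaces `g_inv` with the inverse of the control-effectiveness matrix):

```python
def indi_increment(u_prev, nu_cmd, x_dot_measured, g_inv):
    """Incremental NDI law: u = u0 + g^{-1} * (nu - x_dot_measured).
    Model knowledge is needed only for the control effectiveness g,
    which is the source of INDI's robustness to model mismatch."""
    return u_prev + g_inv * (nu_cmd - x_dot_measured)
```

When the measured derivative already matches the command, the increment is zero and the previous input is held, which is exactly the sensor-based feedback property the survey's components (filters, delays, effectiveness estimators) are built around.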
In 6th Generation Mobile Networks (6G), the Space-Integrated-Ground (SIG) Radio Access Network (RAN) promises seamless coverage and exceptionally high Quality of Service (QoS) for diverse services. However, achieving this necessitates effective management of computation and wireless resources tailored to the requirements of various services. The heterogeneity of computation resources and interference among shared wireless resources pose significant coordination and management challenges. To address these problems, this work provides an overview of multi-dimensional resource management in the 6G SIG RAN, covering both computation and wireless resources. First, it reviews current investigations on computation and wireless resource management and analyzes existing deficiencies and challenges. Then, focusing on these challenges, the work proposes an MEC-based computation resource management scheme and a mixed-numerology-based wireless resource management scheme. Furthermore, it outlines promising future technologies, including joint model-driven and data-driven resource management and blockchain-based resource management within the 6G SIG network. The work also highlights remaining challenges, such as reducing the communication costs associated with unstable ground-to-satellite links and overcoming barriers posed by spectrum isolation. Overall, this comprehensive approach aims to pave the way for efficient and effective resource management in future 6G networks.
The Literary Lab at Stanford University is one of the birthplaces of digital humanities and has maintained significant influence in this field over the years. Professor Hui Haifeng has been engaged in research on digital humanities and computational criticism in recent years. During his visiting scholarship at Stanford University, he participated in the activities of the Literary Lab. Taking this opportunity, he interviewed Professor Mark Algee-Hewitt, the director of the Literary Lab, discussing important topics such as the current state and reception of DH (digital humanities) in the English Department, the operations of the Literary Lab, and the landscape of computational criticism. Mark Algee-Hewitt's research focuses on the eighteenth and early nineteenth centuries in England and Germany and seeks to combine literary criticism with digital and quantitative analyses of literary texts. In particular, he is interested in the history of aesthetic theory and the development and transmission of aesthetic and philosophical concepts during the Enlightenment and Romantic periods. He is also interested in the relationship between aesthetic theory and the poetry of the long eighteenth century. Although his primary background is English literature, he also has a degree in computer science. He believes that the influence of digital humanities within the humanities disciplines is growing increasingly significant. This impact is evident both in the attraction and assistance it offers to students and in the new interpretations it brings to traditional literary studies. He argues that the key to effectively integrating digital humanities into the English Department is to focus on literary research questions, exploring how digital tools can raise new questions or provide new insights into traditional research.
Privacy-Preserving Computation (PPC) comprises the techniques, schemes, and protocols that ensure privacy and confidentiality in the context of secure computation and data analysis. Most current PPC techniques rely on the complexity of cryptographic operations, which are expected to be efficiently solved by quantum computers soon. This review explores how PPC can be built on top of quantum computing itself to alleviate these future threats. We analyze quantum proposals for Secure Multi-party Computation, Oblivious Transfer, and Homomorphic Encryption from the last decade, focusing on their maturity and the challenges they currently face. Our findings show a strong focus on purely theoretical works, but a rise in experimental consideration of these techniques over the last five years. The applicability of these techniques to actual use cases is an underexplored aspect, whose study could lead to their practical assessment.
Funding: Supported by the Key Science and Technology Program of Henan Province, China (Grant Nos. 242102210147 and 242102210027) and the Fujian Province Young and Middle-aged Teacher Education Research Project (Science and Technology Category) (No. JZ240101). (Corresponding author: Dong Yuan)
Abstract: Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating the Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
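The abstract above describes per-objective DDQN agents whose rewards are combined through dynamically updated objective weights. As a rough illustration only (the toy Q-tables, objective count, and weights below are invented stand-ins, not the paper's design), a weight-scalarized double-DQN target update can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting: 3 objectives (delay, energy, load), 4 discrete actions.
n_obj, n_actions, gamma = 3, 4, 0.9

# One Q-table per objective (stand-ins for per-objective DDQN heads).
q_online = rng.random((n_obj, n_actions))
q_target = rng.random((n_obj, n_actions))

rewards = np.array([-1.2, -0.4, -0.8])   # per-objective rewards for one step
weights = np.array([0.5, 0.3, 0.2])      # dynamic objective weights (sum to 1)

# Double-DQN rule: the action is chosen by the ONLINE net and evaluated by the
# TARGET net, here applied to the weight-scalarized Q-values.
scalar_online = weights @ q_online        # shape (n_actions,)
a_star = int(np.argmax(scalar_online))

# Per-objective TD targets using the jointly selected action.
td_targets = rewards + gamma * q_target[:, a_star]
```

In the paper, the weights themselves are adapted online (via RBFN-learned value changes); here they are fixed for brevity.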
Funding: The authors express their appreciation to Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R384), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The cloud-fog computing paradigm has emerged as a novel hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading, and then executing tasks efficiently, is a critical issue in achieving a trade-off between energy consumption and transmission delay. In such a network, processing tasks at fog nodes reduces transmission delay but increases energy consumption, while routing tasks to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency: executing lower-priority tasks before higher-priority jobs can disturb the reliability and stability of the system. Therefore, an efficient strategy for optimal computation offloading and task scheduling is required for operational efficacy. In this paper, we introduce a multi-objective and enhanced version of the Cheetah Optimizer (CO), namely MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks so as to minimize two competing objectives, i.e., energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. The mathematical modelling of CO needs improvement in computation time and convergence speed. Therefore, MoECO is proposed to increase the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step-length operator is adjusted to diversify the solutions and thus improve the exploration phase, i.e., the global search strategy. Consequently, this prevents the algorithm from getting trapped in local optimal solutions. Moreover, the interaction factor during the exploitation phase is also adjusted based on the location of the prey instead of the adjacent cheetah, which increases the exploitation capability of agents, i.e., their local search capability. Furthermore, MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize the designated objectives. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between optimization objectives compared to baseline methods.
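A Pareto-optimal front, as used by MoECO to expose the energy/delay trade-off, can be extracted from a set of candidate solutions with a simple dominance test. The candidate (energy, delay) points below are made up for illustration:

```python
def dominates(a, b):
    """True if solution a dominates b (minimization of all objectives)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of (energy, delay) points."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

candidates = [(5.0, 1.0), (3.0, 2.0), (4.0, 1.5), (3.0, 3.0), (6.0, 0.5)]
front = pareto_front(candidates)   # (3.0, 3.0) is dominated by (3.0, 2.0)
```

The quadratic scan is fine for small populations; metaheuristics typically use faster non-dominated sorting for large archives.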
Funding: Supported by the National Natural Science Foundation of China (62202215), the Liaoning Province Applied Basic Research Program (Youth Special Project, 2023JH2/101600038), the Shenyang Youth Science and Technology Innovation Talent Support Program (RC220458), the Guangxuan Program of Shenyang Ligong University (SYLUGXRC202216), the Basic Research Special Funds for Undergraduate Universities in Liaoning Province (LJ212410144067), the Natural Science Foundation of Liaoning Province (2024-MS-113), and science and technology funds from the Liaoning Education Department (LJKZ0242).
Abstract: In the field of edge computing, achieving low-latency computational task offloading with limited resources is a critical research challenge, particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications. In scenarios where edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance, thereby increasing system latency. Furthermore, in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency issues. To address these challenges, we propose a novel and efficient Deep Reinforcement Learning (DRL)-based approach aimed at minimizing average task latency. The proposed method incorporates three offloading strategies: local computation, direct offloading to the edge server in the local region, and device-to-device (D2D)-assisted offloading to edge servers in other regions. We formulate the task offloading process as a complex latency-minimization optimization problem. To solve it, we propose an advanced algorithm based on the Dueling Double Deep Q-Network (D3QN) architecture that incorporates the Prioritized Experience Replay (PER) mechanism. Experimental results demonstrate that, compared with existing offloading algorithms, the proposed method significantly reduces average task latency, enhances user experience, and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
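The Prioritized Experience Replay mechanism mentioned above samples transitions in proportion to their TD errors and corrects the induced bias with importance-sampling weights. A minimal proportional-PER sketch (the buffer contents and hyperparameters are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy replay buffer of 6 transitions with |TD-error|-based priorities.
td_errors = np.array([0.1, 2.0, 0.5, 0.05, 1.0, 0.3])
alpha, beta, eps = 0.6, 0.4, 1e-3

priorities = (np.abs(td_errors) + eps) ** alpha
probs = priorities / priorities.sum()

# Sample a minibatch proportionally to priority.
batch = rng.choice(len(td_errors), size=4, p=probs, replace=False)

# Importance-sampling weights correct the non-uniform sampling bias;
# normalizing by the max keeps updates bounded.
n = len(td_errors)
is_weights = (n * probs[batch]) ** (-beta)
is_weights /= is_weights.max()
```

In a full agent, `beta` is annealed toward 1 over training and priorities are refreshed after each gradient step.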
Abstract: The advent of quantum computing poses a significant challenge to traditional cryptographic protocols, particularly those used in Secure Multiparty Computation (MPC), a fundamental cryptographic primitive for privacy-preserving computation. Classical MPC relies on cryptographic techniques such as homomorphic encryption, secret sharing, and oblivious transfer, which may become vulnerable in the post-quantum era due to the computational power of quantum adversaries. This study presents a review of 140 peer-reviewed articles published between 2000 and 2025, drawn from databases including MDPI, IEEE Xplore, Springer, and Elsevier, examining the applications, types, and security issues of quantum computing in different fields, together with proposed solutions. The review explores the impact of quantum computing on MPC security, assesses emerging quantum-resistant MPC protocols, and examines hybrid classical-quantum approaches aimed at mitigating quantum threats. We analyze the role of Quantum Key Distribution (QKD), post-quantum cryptography (PQC), and quantum homomorphic encryption in securing multiparty computations. Additionally, we discuss the challenges of scalability, computational efficiency, and practical deployment of quantum-secure MPC frameworks in real-world applications such as privacy-preserving AI, secure blockchain transactions, and confidential data analysis. This review provides insights into future research directions and open challenges in ensuring secure, scalable, and quantum-resistant multiparty computation.
Abstract: Processes supported by process-aware information systems are subject to continuous and often subtle changes due to evolving operational, organizational, or regulatory factors. These changes, referred to as incremental concept drift, gradually alter the behavior or structure of processes, making their detection and localization a challenging task. Traditional process mining techniques frequently assume process stationarity and are limited in their ability to detect such drift, particularly from a control-flow perspective. The objective of this research is to develop an interpretable and robust framework capable of detecting and localizing incremental concept drift in event logs, with a specific emphasis on the structural evolution of control-flow semantics in processes. We propose DriftXMiner, a control-flow-aware hybrid framework that combines statistical, machine learning, and process model analysis techniques. The approach comprises three key components: (1) a Cumulative Drift Scanner that tracks directional statistical deviations to detect early drift signals; (2) a Temporal Clustering and Drift-Aware Forest Ensemble (DAFE) to capture distributional and classification-level changes in process behavior; and (3) Petri net-based process model reconstruction, which enables the precise localization of structural drift using transition deviation metrics and replay fitness scores. Experimental validation on the BPI Challenge 2017 event log demonstrates that DriftXMiner effectively identifies and localizes gradual and incremental process drift over time. The framework achieves a detection accuracy of 92.5%, a localization precision of 90.3%, and an F1-score of 0.91, outperforming competitive baselines such as CUSUM+Histograms and ADWIN+Alpha Miner. Visual analyses further confirm that identified drift points align with transitions in control-flow models and behavioral cluster structures. DriftXMiner offers a novel and interpretable solution for incremental concept drift detection and localization in dynamic, process-aware systems. By integrating statistical signal accumulation, temporal behavior profiling, and structural process mining, the framework enables fine-grained drift explanation and supports adaptive process intelligence in evolving environments. Its modular architecture supports extension to streaming data and real-time monitoring contexts.
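The Cumulative Drift Scanner above tracks directional statistical deviations; a classical way to do this is a one-sided CUSUM statistic. The sketch below is a generic CUSUM detector on a synthetic metric stream, not DriftXMiner's actual implementation (slack `k` and threshold `h` are illustrative):

```python
def cusum(values, target, k=0.5, h=4.0):
    """One-sided CUSUM: return the index where the cumulative upward
    deviation from `target` first exceeds threshold h, else None."""
    s = 0.0
    for i, x in enumerate(values):
        s = max(0.0, s + (x - target - k))   # accumulate deviations above target+k
        if s > h:
            return i
    return None

# Stable behaviour for 20 steps, then a gradual upward drift in a process metric.
stream = [10.0] * 20 + [10.0 + 0.3 * t for t in range(1, 21)]
drift_at = cusum(stream, target=10.0)   # fires a few steps after drift onset
```

Note the detection lag: the drift starts at index 20 but the statistic needs several samples to accumulate past `h`, which is the usual CUSUM trade-off between sensitivity and false alarms.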
Funding: Supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (MSIT) (No. RS-2022-00143178) and the Ministry of Education (MOE) (Nos. 2022R1A6A3A13053896 and 2022R1F1A1074616), Republic of Korea.
Abstract: Beam-tracking simulations have been extensively utilized in the study of collective beam instabilities in circular accelerators. Traditionally, many simulation codes have relied on central processing unit (CPU)-based methods, tracking on a single CPU core or parallelizing the computation across multiple cores via the Message Passing Interface (MPI). Although these approaches work well for single-bunch tracking, scaling them to multiple bunches significantly increases the computational load, which often necessitates a dedicated multi-CPU cluster. To address this challenge, alternative methods leveraging General-Purpose computing on Graphics Processing Units (GPGPU) have been proposed, enabling tracking studies on a standalone desktop personal computer (PC). However, frequent CPU-GPU interactions, including data transfers and synchronization operations during tracking, can introduce communication overheads, potentially reducing the overall effectiveness of GPU-based computations. In this study, we propose a novel approach that eliminates this overhead by performing the entire tracking simulation exclusively on the GPU, thereby enabling the simultaneous processing of all bunches and their macro-particles. Specifically, we introduce MBTRACK2-CUDA, a Compute Unified Device Architecture (CUDA)-ported version of MBTRACK2, which facilitates efficient tracking of single- and multi-bunch collective effects by leveraging fully GPU-resident computation.
Funding: Supported by the Youth Talent Project of the Scientific Research Program of the Hubei Provincial Department of Education under Grant Q20241809 and the Doctoral Scientific Research Foundation of Hubei University of Automotive Technology under Grant 202404.
Abstract: As Internet of Things (IoT) applications expand, Mobile Edge Computing (MEC) has emerged as a promising architecture to overcome the real-time processing limitations of mobile devices. Edge-side computation offloading plays a pivotal role in MEC performance but remains challenging due to complex task topologies, conflicting objectives, and limited resources. This paper addresses high-dimensional multi-objective offloading for serial heterogeneous tasks in MEC. We jointly consider task heterogeneity, high-dimensional objectives, and flexible resource scheduling, modeling the problem as a many-objective optimization problem. To solve it, we propose a flexible framework integrating an improved cooperative co-evolutionary algorithm based on decomposition (MOCC/D) and a flexible scheduling strategy. Experimental results on benchmark functions and simulation scenarios show that the proposed method outperforms existing approaches in both convergence and solution quality.
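Decomposition-based algorithms such as the MOCC/D framework above typically reduce a many-objective problem to scalar subproblems, for example via the Tchebycheff approach, g(x | λ) = max_i λ_i·|f_i(x) − z_i*|. A minimal sketch (the weight vector, ideal point, and objective values below are illustrative, not from the paper):

```python
import numpy as np

def tchebycheff(f, lam, z_star):
    """Tchebycheff scalarization of objective vector f for weight vector lam,
    relative to the ideal point z_star (minimization)."""
    return np.max(lam * np.abs(f - z_star))

z_star = np.array([0.0, 0.0])   # ideal point, assumed known for this toy case
lam = np.array([0.7, 0.3])      # one weight vector of the decomposition

f_a = np.array([1.0, 4.0])      # objective values of two candidate solutions
f_b = np.array([2.0, 1.0])
better = f_a if tchebycheff(f_a, lam, z_star) < tchebycheff(f_b, lam, z_star) else f_b
```

A decomposition algorithm maintains many such weight vectors, each defining one subproblem whose neighbors cooperate during evolution.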
Funding: Funded by the National Research Council of Thailand (contract No. N42A671047).
Abstract: Physics-informed neural networks (PINNs) have emerged as a promising class of scientific machine learning techniques that integrate governing physical laws into neural network training. Their ability to enforce differential equations, constitutive relations, and boundary conditions within the loss function provides a physically grounded alternative to traditional data-driven models, particularly for solid and structural mechanics, where data are often limited or noisy. This review offers a comprehensive assessment of recent developments in PINNs, combining bibliometric analysis, theoretical foundations, application-oriented insights, and methodological innovations. A bibliometric survey indicates a rapid increase in publications on PINNs since 2018, with prominent research clusters focused on numerical methods, structural analysis, and forecasting. Building upon this trend, the review consolidates advancements across five principal application domains: forward structural analysis, inverse modeling and parameter identification, structural and topology optimization, assessment of structural integrity, and manufacturing processes. These applications are propelled by substantial methodological advancements, encompassing rigorous enforcement of boundary conditions, modified loss functions, adaptive training, domain decomposition strategies, multi-fidelity and transfer learning approaches, as well as hybrid finite element-PINN integration. These advances address recurring challenges in solid mechanics, such as high-order governing equations, material heterogeneity, complex geometries, localized phenomena, and limited experimental data. Despite remaining challenges in computational cost, scalability, and experimental validation, PINNs are increasingly evolving into specialized, physics-aware tools for practical solid and structural mechanics applications.
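The PINN loss composition described above (PDE residual plus boundary-condition penalty) can be illustrated without a neural network by evaluating the same composite loss for candidate solution fields, using finite differences in place of automatic differentiation. The 1D Poisson problem below is a hypothetical example chosen for its known exact solution, not one drawn from the review:

```python
import numpy as np

# Toy problem: u''(x) = -sin(x) on [0, pi] with u(0) = u(pi) = 0,
# whose exact solution is u(x) = sin(x).
x = np.linspace(0.0, np.pi, 101)
h = x[1] - x[0]

def physics_loss(u):
    """PINN-style composite loss: mean-squared PDE residual + BC penalty."""
    # Interior residual of the governing ODE: u'' + sin(x) = 0.
    u_xx = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
    r = u_xx + np.sin(x[1:-1])
    # Boundary-condition penalty term, as in a PINN loss function.
    bc = u[0]**2 + u[-1]**2
    return np.mean(r**2) + bc

exact = np.sin(x)
wrong = 0.5 * x * (np.pi - x)   # satisfies the BCs but not the ODE
```

A PINN would minimize this loss over network parameters; here the loss cleanly separates the physically consistent candidate from the merely boundary-consistent one.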
Funding: Supported by the Major Project for the Integration of Science, Education and Industry (Grant No. 2025ZDZX02).
Abstract: Classical computation of electronic properties in large-scale materials remains challenging. Quantum computation has the potential to offer advantages in memory footprint and computational scaling; however, general and viable quantum algorithms for simulating large-scale materials are still limited. We propose and implement random-state quantum algorithms to calculate electronic-structure properties of real materials. Using a random-state circuit on a small number of qubits, we employ real-time evolution with first-order Trotter decomposition and the Hadamard test to obtain the electronic density of states, and we develop a modified quantum phase estimation algorithm to calculate the real-space local density of states via direct quantum measurements. Furthermore, we validate these algorithms by numerically computing the density of states and spatial distributions of electronic states in graphene, twisted bilayer graphene quasicrystals, and fractal lattices, covering system sizes from hundreds to thousands of atoms. Our results demonstrate that random-state quantum algorithms provide a general and qubit-efficient route to scalable simulations of electronic properties in large-scale periodic and aperiodic materials.
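The random-state idea above (estimating the density of states from the time-evolved overlap of random vectors) has a direct classical analogue that can be sketched in a few lines. The toy Hamiltonian below is a random symmetric matrix rather than one of the paper's materials, and the time evolution uses exact diagonalization instead of Trotterized circuits:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy Hamiltonian: a random symmetric matrix standing in for a tight-binding model.
n = 64
A = rng.standard_normal((n, n))
H = (A + A.T) / 2
evals, evecs = np.linalg.eigh(H)

def corr(t, n_samples=20):
    """<phi|exp(-iHt)|phi> averaged over normalized random vectors phi,
    which estimates Tr[exp(-iHt)] / n (the random-state trace trick)."""
    acc = 0.0j
    for _ in range(n_samples):
        phi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        phi /= np.linalg.norm(phi)
        c = evecs.conj().T @ phi                      # expand in eigenbasis
        acc += np.sum(np.abs(c) ** 2 * np.exp(-1j * evals * t))
    return acc / n_samples

# Fourier-transforming C(t) over a time grid (with windowing) yields a
# broadened density of states; here we just tabulate the correlation.
ts = np.linspace(0.0, 8.0, 160)
ct = np.array([corr(t) for t in ts])
```

On a quantum device, the real and imaginary parts of each overlap are what the Hadamard test measures; the classical sketch only illustrates the post-processing.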
Abstract: We introduce a model to implement incremental update of views. The principle is that unless a view is accessed, the modification related to the view is not computed; this modification information is used only when views are updated. Modification information is embodied in the classes (including inheritance classes and nesting classes) that derive the view. We establish a modify list consisting of tuples (one tuple for each view related to the class) to implement view update, and a method is used to keep views from re-update.
Keywords: object-oriented database; incremental computation; view computation; engineering information system. CLC number: TP 391.
Foundation item: Supported by the National Natural Science Foundation of China (60235025).
Biography: Guo Hai-ying (1971-), female, Ph.D.; research direction: CAD and engineering information systems.
Abstract: A method is presented for incrementally computing success patterns of logic programs. The set of success patterns of a logic program with respect to an abstraction is formulated as the success set of an equational logic program modulo an equality theory that is induced by the abstraction. The method is exemplified via depth and stump abstractions. Also presented are algorithms for computing most general unifiers modulo the equality theories induced by depth and stump abstractions.
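A depth abstraction of the kind mentioned above truncates every term at a fixed depth, replacing deeper subterms with a wildcard so that the abstract domain is finite. A toy version over tuple-encoded terms (the tuple encoding and the `'_'` wildcard are assumptions of this sketch, not the paper's notation):

```python
def depth_abstract(term, k):
    """Replace every subterm below depth k with the wildcard '_'.
    Terms are tuples like ('f', ('g', 'x'), 'y'); atoms are plain strings."""
    if k == 0:
        return '_'
    if isinstance(term, tuple):
        functor, *args = term
        # The functor survives; each argument is abstracted one level deeper.
        return (functor, *(depth_abstract(a, k - 1) for a in args))
    return term

t = ('f', ('g', ('h', 'x')), 'y')   # represents the term f(g(h(x)), y)
```

Two terms are then equal modulo the induced equality theory exactly when their depth-k abstractions coincide, which is what unification modulo the theory exploits.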
Abstract: The purpose of this review is to explore the intersection of computational engineering and biomedical science, highlighting the transformative potential this convergence holds for innovation in healthcare and medical research. The review covers key topics such as computational modelling, bioinformatics, machine learning in medical diagnostics, and the integration of wearable technology for real-time health monitoring. Major findings indicate that computational models have significantly enhanced the understanding of complex biological systems, while machine learning algorithms have improved the accuracy of disease prediction and diagnosis. The synergy between bioinformatics and computational techniques has led to breakthroughs in personalized medicine, enabling more precise treatment strategies. Additionally, the integration of wearable devices with advanced computational methods has opened new avenues for continuous health monitoring and early disease detection. The review emphasizes the need for interdisciplinary collaboration to further advance this field. Future research should focus on developing more robust and scalable computational models, enhancing data integration techniques, and addressing ethical considerations related to data privacy and security. By fostering innovation at the intersection of these disciplines, the potential to revolutionize healthcare delivery and outcomes becomes increasingly attainable.
Funding: Supported in part by the National Key R&D Program of China (No. 2023YFB3407003) and the National Natural Science Foundation of China (No. 52375378).
Abstract: A new analytical model for geometric size and forming force prediction in incremental flanging (IF) is presented in this work. The complex deformation characteristics of IF are considered in the modeling process, so the model can accurately describe the strain and stress states in IF. Based on strain analysis, the model can predict the material thickness distribution and neck height after IF. By considering contact area, strain characteristics, material thickness changes, and friction, the model can predict the specific moments and corresponding values of the maximum axial forming force and maximum horizontal forming force during IF. In addition, an IF experiment involving different tool diameters, flanging diameters, and opening hole diameters is conducted. On the basis of the experimental strain paths, the strain characteristics of different deformation zones are studied, and the stable strain ratio is quantitatively described through two dimensionless parameters: relative tool diameter and relative hole diameter. Then, the changes in material thickness and forming force in IF, and the variation of minimum material thickness, neck height, maximum axial forming force, and maximum horizontal forming force with the flanging parameters, are studied, and the reliability of the analytical model is verified in the process. Finally, the influence of the horizontal forming force on tool design and the fluctuation of the forming force are explained.
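For context, the textbook first-order estimate of wall thinning in incremental sheet forming is the sine law, t_f = t_0·sin(90° − α) = t_0·cos(α) for wall angle α; the paper's analytical model refines such estimates by accounting for the complex strain state in flanging. A quick numerical check of the generic sine law (this is the standard approximation, not the paper's model):

```python
import math

def sine_law_thickness(t0_mm, wall_angle_deg):
    """Sine-law estimate of final wall thickness after incremental forming:
    t_f = t_0 * sin(90 deg - alpha) = t_0 * cos(alpha)."""
    return t0_mm * math.cos(math.radians(wall_angle_deg))

t0 = 1.0                                   # initial sheet thickness in mm
t_wall = sine_law_thickness(t0, 45.0)      # steeper walls thin more
```

The monotone thinning with wall angle is the qualitative behaviour the paper's thickness-distribution prediction must reproduce and improve upon.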
Funding: Supported by the National Natural Science Foundation of China (Nos. 62103052 and 52175214).
Abstract: This paper presents the design of a morphing aircraft with asymmetrically variable wingtip anhedral angles, inspired by biomimetic mechanisms, to enhance lateral maneuver capability. First, we establish a lateral dynamic model considering the additional forces and moments arising during the morphing process, and convert it into a Multiple-Input Multiple-Output (MIMO) virtual control system by introducing virtual inputs. Second, a classical dynamic inversion controller is designed for the outer-loop system. A new Global Fast Terminal Incremental Sliding Mode Controller (NDO-GFTISMC) is proposed for the inner-loop system, in which an adaptive law is implemented to weaken control-surface chattering, and a Nonlinear Disturbance Observer (NDO) is integrated to compensate for unknown disturbances. The whole control system is proven semi-globally uniformly ultimately bounded based on the multi-Lyapunov function method. Furthermore, considering tracking errors and the characteristics of the actuators, a quadratic programming-based dynamic control allocation law is designed, which allocates the virtual control inputs to the asymmetrically deformed wingtip and the rudder. Actuator dynamic models are incorporated to ensure the physical realizability of the designed allocation law. Finally, comparative experimental results validate the effectiveness of the designed control system and control allocation law. The NDO-GFTISMC features faster convergence, stronger robustness, and 81.25% and 75.0% reductions in maximum state tracking error under uncertainty compared to the Incremental Nonlinear Dynamic Inversion controller based on NDO (NDO-INDI) and the Incremental Sliding Mode Controller based on NDO (NDO-ISMC), respectively. The design of the morphing aircraft significantly enhances lateral maneuver capability, maintaining a substantial control margin during lateral maneuvering, reducing the burden on the rudder surface, and effectively solving the actuator saturation problem of traditional aircraft during lateral maneuvering.
Funding: Supported by the National Natural Science Foundation of China under Grant No. 62231012, the Natural Science Foundation for Outstanding Young Scholars of Heilongjiang Province under Grant YQ2020F001, and the Heilongjiang Province Postdoctoral General Foundation under Grant AUGA4110004923.
Abstract: Low Earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form a LEO satellite edge computing system, providing computing services for global ground users. In this paper, the computation offloading and resource allocation problems are formulated as a mixed-integer nonlinear program (MINLP). We propose a computation offloading algorithm based on the deep deterministic policy gradient (DDPG) to obtain the user offloading decisions and user uplink transmission power, and use a convex optimization algorithm based on the Lagrange multiplier method to obtain the optimal MEC server resource allocation scheme. In addition, an expression for the suboptimal user local CPU cycles is derived by a relaxation method. Simulation results show that the proposed algorithm achieves excellent convergence and significantly reduces the system utility value compared with other algorithms, at a considerable time cost.
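Lagrange-multiplier-based CPU allocation of the kind mentioned above has a well-known closed form when minimizing a weighted sum of computation delays under a total-capacity constraint: stationarity of the Lagrangian gives f_i ∝ √(w_i·c_i). The weights, cycle counts, and capacity below are illustrative, not values from the paper:

```python
import numpy as np

# Hypothetical setup: minimize sum_i w_i * c_i / f_i  subject to  sum_i f_i = F.
# Setting d/df_i [w_i c_i / f_i + mu f_i] = 0 yields f_i proportional to sqrt(w_i c_i).
w = np.array([1.0, 2.0, 0.5])   # per-user delay weights
c = np.array([4.0, 1.0, 9.0])   # required CPU cycles per task
F = 10.0                        # total MEC server CPU capacity

f = F * np.sqrt(w * c) / np.sqrt(w * c).sum()   # closed-form optimal allocation
delay = np.sum(w * c / f)

# Sanity check against a uniform split: the closed form should do no worse.
uniform = np.sum(w * c / (F / len(w)))
```

The same KKT reasoning extends to per-user minimum-rate constraints, at the cost of a one-dimensional search for the multiplier instead of a closed form.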
Funding: The authors acknowledge the support of the Key Research and Development Program of Shaanxi Province, China (No. 2021GXLH-Z-049).
Abstract: The influence of geometric configuration on the friction characteristics during incremental sheet forming of AA5052 was analyzed by integrating surface morphology and its characteristic parameters, along with plastic strain, contact pressure, and contact area. The interface promotes lubrication and support when the wall angle is ≤40°, a thin sheet (0.5 mm) is used, and a large tool radius (10 mm) is employed. This mainly results in micro-plowing and plastic extrusion flow, leading to a lower friction coefficient. However, when wall angles exceed 40°, significant plastic-strain roughening occurs, leading to inadequate lubrication on the newly formed surface. Increased sheet thickness and decreased tool radius elevate the contact pressure. These conditions trigger micro-cutting and adhesion, potentially leading to localized scuffing and dimple tears, and a higher friction coefficient. The friction mechanisms remain unaffected by the part's plane-curve features. As the forming process progresses, abrasive wear intensifies, and the surface morphology evolves unfavorably for lubrication and friction reduction.
Abstract: Incremental Nonlinear Dynamic Inversion (INDI) is a control approach that has gained popularity in flight control over the past decade. Besides the INDI law itself, several common additional components complement an INDI-based controller. This paper, the second part of a two-part series of surveys on INDI, aims to summarize modern trends in INDI and its related components. Besides a comprehensive specification of these components, it addresses their most common challenges, compares different variants, and discusses proposed advances. Further important aspects of INDI are gain design, stability, and robustness, and this paper also provides an overview of research conducted on these aspects. The paper is written in a tutorial style to familiarize researchers with the essential specifics and pitfalls of INDI and its components, while also serving as a reference for readers already familiar with INDI.
Funding: Supported by the National Key Research and Development Program of China (No. 2021YFB2900504).
Abstract: In 6th Generation (6G) mobile networks, the Space-Integrated-Ground (SIG) Radio Access Network (RAN) promises seamless coverage and exceptionally high Quality of Service (QoS) for diverse services. However, achieving this necessitates effective management of computation and wireless resources tailored to the requirements of various services. The heterogeneity of computation resources and interference among shared wireless resources pose significant coordination and management challenges. To address these problems, this work provides an overview of multi-dimensional resource management in the 6G SIG RAN, covering both computation and wireless resources. It first reviews current investigations on computation and wireless resource management and analyzes existing deficiencies and challenges. Then, focusing on these challenges, the work proposes an MEC-based computation resource management scheme and a mixed-numerology-based wireless resource management scheme. Furthermore, it outlines promising future technologies, including joint model-driven and data-driven resource management and blockchain-based resource management within the 6G SIG network. The work also highlights remaining challenges, such as reducing the communication costs associated with unstable ground-to-satellite links and overcoming the barriers posed by spectrum isolation. Overall, this comprehensive approach aims to pave the way for efficient and effective resource management in future 6G networks.
Abstract: The Literary Lab at Stanford University is one of the birthplaces of digital humanities and has maintained significant influence in this field over the years. Professor Hui Haifeng has been engaged in research on digital humanities and computational criticism in recent years. During his visiting scholarship at Stanford University, he participated in the activities of the Literary Lab. Taking this opportunity, he interviewed Professor Mark Algee-Hewitt, the director of the Literary Lab, discussing important topics such as the current state and reception of DH (digital humanities) in the English Department, the operations of the Literary Lab, and the landscape of computational criticism. Mark Algee-Hewitt's research focuses on the eighteenth and early nineteenth centuries in England and Germany and seeks to combine literary criticism with digital and quantitative analyses of literary texts. In particular, he is interested in the history of aesthetic theory and the development and transmission of aesthetic and philosophical concepts during the Enlightenment and Romantic periods. He is also interested in the relationship between aesthetic theory and the poetry of the long eighteenth century. Although his primary background is English literature, he also has a degree in computer science. He believes that the influence of digital humanities within the humanities disciplines is growing increasingly significant. This impact is evident both in the attraction and assistance it offers to students and in the new interpretations it brings to traditional literary studies. He argues that the key to effectively integrating digital humanities into the English Department is to focus on literary research questions, exploring how digital tools can raise new questions or provide new insights into traditional research.
Funding: Supported by the Basque Government through the ELKARTEK program for Research and Innovation, under the BRTAQUANTUM project (Grant Agreement No. KK-2022/00041).
Abstract: Privacy-Preserving Computation (PPC) comprises the techniques, schemes, and protocols which ensure privacy and confidentiality in the context of secure computation and data analysis. Most current PPC techniques rely on the complexity of cryptographic operations, which are expected to be efficiently solved by quantum computers soon. This review explores how PPC can be built on top of quantum computing itself to alleviate these future threats. We analyze quantum proposals for Secure Multi-party Computation, Oblivious Transfer, and Homomorphic Encryption from the last decade, focusing on their maturity and the challenges they currently face. Our findings show a strong focus on purely theoretical works, but a rise in experimental consideration of these techniques in the last five years. The applicability of these techniques to actual use cases remains underexplored, and studying it could enable their practical assessment.