Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives from distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives with Radial Basis Function Networks (RBFNs), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared with existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
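The multi-objective reward aggregation described above can be sketched in a minimal form. The snippet below is illustrative only: it replaces the paper's RBFN-based weight learner with a simple normalized update driven by recent per-objective cost changes; the function names, the learning rate, and the update rule are assumptions, not taken from the paper.

```python
def scalarize(costs, weights):
    # Scalar reward from per-objective costs (e.g. delay, energy, load
    # imbalance, negative privacy entropy): lower weighted cost -> higher reward.
    return -sum(w * c for w, c in zip(weights, costs))

def update_weights(weights, cost_deltas, lr=0.5):
    # Toy stand-in for the RBFN weight learner: shift weight toward
    # objectives whose cost recently worsened, then renormalize so the
    # weights stay a convex combination.
    raw = [max(w + lr * d, 1e-6) for w, d in zip(weights, cost_deltas)]
    total = sum(raw)
    return [r / total for r in raw]
```

With four equally weighted objectives, an increase in the first objective's cost shifts weight toward it on the next decision step.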
In the field of edge computing, achieving low-latency computational task offloading with limited resources is a critical research challenge, particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications. In scenarios where edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance, thereby increasing system latency. Furthermore, in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency issues. To address these challenges, we propose a novel and efficient Deep Reinforcement Learning (DRL)-based approach aimed at minimizing average task latency. The proposed method incorporates three offloading strategies: local computation, direct offloading to the edge server in the local region, and device-to-device (D2D)-assisted offloading to edge servers in other regions. We formulate the task offloading process as a latency-minimization optimization problem. To solve it, we propose an advanced algorithm based on the Dueling Double Deep Q-Network (D3QN) architecture that incorporates the Prioritized Experience Replay (PER) mechanism. Experimental results demonstrate that, compared with existing offloading algorithms, the proposed method significantly reduces average task latency, enhances user experience, and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
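The PER mechanism named above samples transitions in proportion to their temporal-difference error rather than uniformly. A minimal proportional-priority buffer is sketched below; the class name, capacity handling, and the alpha exponent default are illustrative choices, not details from the paper.

```python
import random

class PrioritizedReplay:
    """Proportional prioritized experience replay (sketch).

    Transition i is sampled with probability p_i**alpha / sum_j p_j**alpha,
    where p_i is derived from its TD error, as in the PER scheme
    combined with D3QN in the abstract above."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prios = [], []

    def add(self, transition, td_error):
        # Small epsilon keeps zero-error transitions sampleable.
        p = (abs(td_error) + 1e-5) ** self.alpha
        if len(self.data) >= self.capacity:
            self.data.pop(0)
            self.prios.pop(0)
        self.data.append(transition)
        self.prios.append(p)

    def sample(self, batch_size):
        total = sum(self.prios)
        probs = [p / total for p in self.prios]
        idx = random.choices(range(len(self.data)), weights=probs, k=batch_size)
        return [self.data[i] for i in idx], idx
```

A transition stored with a large TD error dominates the sampling distribution, which is the point of prioritization.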
Classical computation of electronic properties in large-scale materials remains challenging. Quantum computation has the potential to offer advantages in memory footprint and computational scaling. However, general and v... We propose and implement random-state quantum algorithms to calculate electronic-structure properties of real materials. Using a random state circuit on a small number of qubits, we employ real-time evolution with first-order Trotter decomposition and the Hadamard test to obtain the electronic density of states, and we develop a modified quantum phase estimation algorithm to calculate the real-space local density of states via direct quantum measurements. Furthermore, we validate these algorithms by numerically computing the density of states and spatial distributions of electronic states in graphene, twisted bilayer graphene quasicrystals, and fractal lattices, covering system sizes from hundreds to thousands of atoms. Our results manifest that the random-state quantum algorithms provide a general and qubit-efficient route to scalable simulations of electronic properties in large-scale periodic and aperiodic materials.
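The random-state density-of-states idea can be emulated classically: for a random state |psi>, the correlation C(t) = <psi|e^{-iHt}|psi> is a sum of phases at the eigenvalues of H, and its Fourier transform peaks at those eigenvalues. The sketch below works directly in the eigenbasis of a toy Hamiltonian, so it is a numerical illustration of the estimator, not a quantum circuit; the grid, time step, and seed are arbitrary assumptions.

```python
import cmath
import random

def random_state_dos(energies, e_grid, n_t=400, dt=0.05, seed=1):
    # Random state expressed in the eigenbasis: amplitudes c_n drawn
    # from a complex Gaussian, then normalized.
    rng = random.Random(seed)
    amps = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in energies]
    norm = sum(abs(a) ** 2 for a in amps)
    weights = [abs(a) ** 2 / norm for a in amps]
    # C(t_k) = sum_n |c_n|^2 exp(-i E_n t_k), the quantity the Hadamard
    # test estimates on hardware.
    corr = [sum(w * cmath.exp(-1j * e * k * dt) for w, e in zip(weights, energies))
            for k in range(n_t)]
    # Discrete Fourier transform back to the energy axis: peaks at E_n.
    dos = []
    for eg in e_grid:
        val = sum((c * cmath.exp(1j * eg * k * dt)).real for k, c in enumerate(corr))
        dos.append(val / n_t)
    return dos
```

For a two-level toy spectrum at E = ±1, the reconstructed curve peaks at the eigenvalues and stays small in the gap.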
Beam-tracking simulations have been extensively utilized in the study of collective beam instabilities in circular accelerators. Traditionally, many simulation codes have relied on central processing unit (CPU)-based methods, tracking on a single CPU core, or parallelizing the computation across multiple cores via the message passing interface (MPI). Although these approaches work well for single-bunch tracking, scaling them to multiple bunches significantly increases the computational load, which often necessitates the use of a dedicated multi-CPU cluster. To address this challenge, alternative methods leveraging General-Purpose computing on Graphics Processing Units (GPGPU) have been proposed, enabling tracking studies on a standalone desktop personal computer (PC). However, frequent CPU-GPU interactions, including data transfers and synchronization operations during tracking, can introduce communication overheads, potentially reducing the overall effectiveness of GPU-based computations. In this study, we propose a novel approach that eliminates this overhead by performing the entire tracking simulation process exclusively on the GPU, thereby enabling the simultaneous processing of all bunches and their macro-particles. Specifically, we introduce MBTRACK2-CUDA, a Compute Unified Device Architecture (CUDA)-ported version of MBTRACK2, which facilitates efficient tracking of single- and multi-bunch collective effects by leveraging fully GPU-resident computation.
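The per-particle work in such a tracker is an identical map applied to every macro-particle each turn, which is why it parallelizes so well on a GPU. The plain-Python sketch below shows the simplest such map, a linear betatron rotation; the function name, the linear one-turn map, and the tune value are illustrative assumptions standing in for the CUDA kernels of MBTRACK2-CUDA.

```python
import math

def track(particles, tune, n_turns):
    # One-turn linear map: rotate each (x, p) phase-space pair by the
    # betatron phase advance 2*pi*tune. On a GPU this loop body would be
    # one kernel applied to all bunches and macro-particles in parallel.
    c, s = math.cos(2 * math.pi * tune), math.sin(2 * math.pi * tune)
    for _ in range(n_turns):
        particles = [(c * x + s * p, -s * x + c * p) for x, p in particles]
    return particles
```

A quick sanity check on such a map is that the linear invariant x^2 + p^2 of each particle is preserved over many turns.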
As Internet of Things (IoT) applications expand, Mobile Edge Computing (MEC) has emerged as a promising architecture to overcome the real-time processing limitations of mobile devices. Edge-side computation offloading plays a pivotal role in MEC performance but remains challenging due to complex task topologies, conflicting objectives, and limited resources. This paper addresses high-dimensional multi-objective offloading for serial heterogeneous tasks in MEC. We jointly consider task heterogeneity, high-dimensional objectives, and flexible resource scheduling, modeling the problem as a many-objective optimization. To solve it, we propose a flexible framework integrating an improved cooperative co-evolutionary algorithm based on decomposition (MOCC/D) and a flexible scheduling strategy. Experimental results on benchmark functions and simulation scenarios show that the proposed method outperforms existing approaches in both convergence and solution quality.
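Decomposition-based algorithms such as MOCC/D turn one many-objective problem into many scalar subproblems, one per weight vector. The Tchebycheff form below is the standard MOEA/D choice and is shown only as an assumption; the abstract does not state which scalarization MOCC/D uses.

```python
def tchebycheff(objectives, weights, ideal):
    # Tchebycheff scalarization g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|.
    # Each weight vector defines one scalar subproblem; minimizing g pushes
    # the solution toward the ideal point z* along that weight direction.
    return max(w * abs(f - z) for w, f, z in zip(weights, objectives, ideal))
```

On the same subproblem, a candidate closer to the ideal point scores strictly lower, which is what drives selection.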
Privacy-Preserving Computation (PPC) comprises the techniques, schemes, and protocols that ensure privacy and confidentiality in the context of secure computation and data analysis. Most current PPC techniques rely on the complexity of cryptographic problems that quantum computers are expected to solve efficiently in the near future. This review explores how PPC can be built on top of quantum computing itself to alleviate these future threats. We analyze quantum proposals for Secure Multi-party Computation, Oblivious Transfer, and Homomorphic Encryption from the last decade, focusing on their maturity and the challenges they currently face. Our findings show a strong focus on purely theoretical works, but a rise in experimental consideration of these techniques over the last 5 years. The applicability of these techniques to actual use cases is an underexplored aspect whose study could lead to their practical assessment.
Mathematical modeling has become a cornerstone in understanding the complex dynamics of infectious diseases and chronic health conditions. With the advent of more refined computational techniques, researchers are now able to incorporate intricate features such as delays, stochastic effects, fractional dynamics, variable-order systems, and uncertainty into epidemic models. These advancements not only improve predictive accuracy but also enable deeper insights into disease transmission, control, and policy-making. Tashfeen et al.
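All of the extensions listed above (delays, stochasticity, fractional dynamics) build on the same compartmental baseline. A forward-Euler step of the classic SIR model is sketched below as that baseline; the parameter values in the usage are arbitrary assumptions for illustration.

```python
def sir_step(s, i, r, beta, gamma, dt):
    # One forward-Euler step of the basic SIR model:
    #   ds/dt = -beta*s*i,  di/dt = beta*s*i - gamma*i,  dr/dt = gamma*i.
    # s, i, r are population fractions, so s + i + r stays 1 by construction.
    new_infections = beta * s * i * dt
    recoveries = gamma * i * dt
    return s - new_infections, i + new_infections - recoveries, r + recoveries
```

Iterating the step conserves the total population fraction while moving mass from susceptible to recovered.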
The purpose of this review is to explore the intersection of computational engineering and biomedical science, highlighting the transformative potential this convergence holds for innovation in healthcare and medical research. The review covers key topics such as computational modelling, bioinformatics, machine learning in medical diagnostics, and the integration of wearable technology for real-time health monitoring. Major findings indicate that computational models have significantly enhanced the understanding of complex biological systems, while machine learning algorithms have improved the accuracy of disease prediction and diagnosis. The synergy between bioinformatics and computational techniques has led to breakthroughs in personalized medicine, enabling more precise treatment strategies. Additionally, the integration of wearable devices with advanced computational methods has opened new avenues for continuous health monitoring and early disease detection. The review emphasizes the need for interdisciplinary collaboration to further advance this field. Future research should focus on developing more robust and scalable computational models, enhancing data integration techniques, and addressing ethical considerations related to data privacy and security. By fostering innovation at the intersection of these disciplines, the potential to revolutionize healthcare delivery and outcomes becomes increasingly attainable.
In 6th Generation Mobile Networks (6G), the Space-Integrated-Ground (SIG) Radio Access Network (RAN) promises seamless coverage and exceptionally high Quality of Service (QoS) for diverse services. However, achieving this necessitates effective management of computation and wireless resources tailored to the requirements of various services. The heterogeneity of computation resources and interference among shared wireless resources pose significant coordination and management challenges. To solve these problems, this work provides an overview of multi-dimensional resource management in the 6G SIG RAN, covering both computation and wireless resources. First, it reviews current investigations into computation and wireless resource management and analyzes existing deficiencies and challenges. Then, focusing on these challenges, the work proposes an MEC-based computation resource management scheme and a mixed-numerology-based wireless resource management scheme. Furthermore, it outlines promising future technologies, including joint model-driven and data-driven resource management and blockchain-based resource management within the 6G SIG network. The work also highlights remaining challenges, such as reducing communication costs associated with unstable ground-to-satellite links and overcoming barriers posed by spectrum isolation. Overall, this comprehensive approach aims to pave the way for efficient and effective resource management in future 6G networks.
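"Mixed numerology" here builds on the 5G NR numerology family, where subcarrier spacing and slot length scale as powers of two (SCS = 15 * 2^mu kHz, slot = 1/2^mu ms, per 3GPP). The mapping below encodes that relation; the budget-based picker is an illustrative policy of my own, not a scheme from the paper.

```python
def numerology(mu):
    # 3GPP NR numerology mu -> (subcarrier spacing in kHz, slot length in ms):
    # SCS = 15 * 2**mu kHz and slot = 1 / 2**mu ms.
    return 15 * 2 ** mu, 1.0 / 2 ** mu

def pick_numerology(slot_budget_ms, candidates=range(5)):
    # Illustrative policy: give a service the smallest mu whose slot still
    # fits its latency budget (low-latency services get shorter slots).
    for mu in candidates:
        if numerology(mu)[1] <= slot_budget_ms:
            return mu
    raise ValueError("budget below the shortest supported slot")
```

Mixing numerologies on shared spectrum is exactly what creates the inter-numerology interference the abstract's scheme has to manage.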
To fundamentally alleviate excavation chamber clogging during slurry tunnel boring machine (TBM) advance in hard rock, a large-diameter short screw conveyor was adopted for the slurry TBM of the Qingdao Jiaozhou Bay Second Undersea Tunnel. To evaluate the discharging performance of the short screw conveyor in different cases, a full-scale transient slurry-rock two-phase model of a short screw conveyor actively discharging rocks was established using a computational fluid dynamics-discrete element method (CFD-DEM) coupling approach. In the fluid domain of the coupled model, sliding mesh technology was utilized to describe the rotations of the atmospheric composite cutterhead and the short screw conveyor. In the particle domain, dynamic particle factories were established to produce rock particles with the rotation of the cutterhead. The accuracy and reliability of the CFD-DEM simulation results were validated via a field test and a model test. Furthermore, a comprehensive parameter analysis was conducted to examine the effects of TBM operating parameters, the geometric design of the screw conveyor, and rock size on the discharging performance of the short screw conveyor. Accordingly, a reasonable rotational speed of the screw conveyor was suggested and applied to the Jiaozhou Bay Second Undersea Tunnel project. The findings in this paper could provide valuable references for addressing excavation chamber clogging during ultra-large-diameter slurry TBM tunneling in hard rock in similar future projects.
The Literary Lab at Stanford University is one of the birthplaces of digital humanities and has maintained significant influence in this field over the years. Professor Hui Haifeng has been engaged in research on digital humanities and computational criticism in recent years. During his visiting scholarship at Stanford University, he participated in the activities of the Literary Lab. Taking this opportunity, he interviewed Professor Mark Algee-Hewitt, the director of the Literary Lab, discussing important topics such as the current state and reception of DH (digital humanities) in the English Department, the operations of the Literary Lab, and the landscape of computational criticism. Mark Algee-Hewitt's research focuses on the eighteenth and early nineteenth centuries in England and Germany and seeks to combine literary criticism with digital and quantitative analyses of literary texts. In particular, he is interested in the history of aesthetic theory and the development and transmission of aesthetic and philosophical concepts during the Enlightenment and Romantic periods. He is also interested in the relationship between aesthetic theory and the poetry of the long eighteenth century. Although his primary background is English literature, he also has a degree in computer science. He believes that the influence of digital humanities within the humanities disciplines is growing increasingly significant. This impact is evident in both the attraction and assistance it offers to students, as well as in the new interpretations it brings to traditional literary studies. He argues that the key to effectively integrating digital humanities into the English Department is to focus on literary research questions, exploring how digital tools can raise new questions or provide new insights into traditional research.
The emphasis on the simplification of cognitive and motor tasks in recent results on morphological computation has made possible the construction of appropriate "mimetic bodies" able to render the accompanying computations simpler, according to a general appeal to the "simplexity" of animal embodied cognition. A new activity of what we can call "distributed computation" holds the promise of originating a new generation of robots with better adaptability and a restricted number of required control parameters. The framework of distributed computation helps us see them in a more naturalized and prudent perspective, avoiding ontological or metaphysical considerations. Despite this progress, problems regarding the epistemological limitations of computational modeling remain to be solved.
Low earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form the LEO satellite edge computing system, providing computing services for global ground users. In this paper, the computation offloading and resource allocation problems are formulated as a mixed integer nonlinear program (MINLP). This paper proposes a computation offloading algorithm based on the deep deterministic policy gradient (DDPG) to obtain the user offloading decisions and user uplink transmission power, and uses a convex optimization algorithm based on the Lagrange multiplier method to obtain the optimal MEC server resource allocation scheme. In addition, the expression for suboptimal user local CPU cycles is derived by a relaxation method. Simulation results show that the proposed algorithm achieves excellent convergence, and that it significantly reduces the system utility values at a considerable time cost compared with other algorithms.
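The Lagrange-multiplier step mentioned above has a well-known closed form in the simplest setting: minimizing total processing delay sum_k c_k / f_k subject to sum_k f_k = F yields f_k proportional to sqrt(c_k) from the KKT conditions. The sketch below implements that unweighted case; the paper's actual objective may add weights or extra constraints, so treat this as an assumption-laden illustration.

```python
import math

def allocate_cpu(cycles, total_f):
    # Closed-form allocation minimizing sum_k c_k / f_k subject to
    # sum_k f_k = total_f. Setting d/df_k [c_k/f_k + lam*f_k] = 0 gives
    # f_k = sqrt(c_k / lam), so f_k is proportional to sqrt(c_k).
    roots = [math.sqrt(c) for c in cycles]
    total_root = sum(roots)
    return [total_f * r / total_root for r in roots]
```

The allocation spends the CPU budget exactly and beats an equal split on total delay whenever task demands differ.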
In this study, the flow characteristics around a group of three piers arranged in tandem were investigated both numerically and experimentally. The simulation utilised the volume of fluid (VOF) model in conjunction with the k–ε method (i.e., for flow turbulence representation), implemented through the ANSYS FLUENT software, to model the free-surface flow. The simulation results were validated against laboratory measurements obtained using an acoustic Doppler velocimeter. The comparative analysis revealed discrepancies between the simulated and measured maximum velocities within the investigated flow field. However, the numerical results demonstrated a distinct vortex-induced flow pattern following the first pier and throughout the vicinity of the entire pier group, which aligned reasonably well with experimental data. In the heavily narrowed spaces between the piers, simulated velocity profiles were overestimated in the free-surface region and underestimated in the areas near the bed to the mid-stream when compared to measurements. These discrepancies diminished away from the regions with intense vortices, indicating that the employed model was capable of simulating relatively less disturbed flow turbulence. Furthermore, velocity results from both simulations and measurements were compared based on velocity distributions at three different depth ratios (0.15, 0.40, and 0.62) to assess vortex characteristics around the piers. This comparison revealed consistent results between experimental and simulated data. This research contributes to a deeper understanding of flow dynamics around complex interactive pier systems, which is critical for designing stable and sustainable hydraulic structures. Furthermore, the insights gained from this study provide valuable information for engineers aiming to develop effective strategies for controlling scour and minimizing destructive vortex effects, thereby guiding the design and maintenance of sustainable infrastructure.
As the demand for cross-departmental data collaboration continues to grow, traditional encryption methods struggle to balance data privacy with computational efficiency. This paper proposes a cross-departmental privacy-preserving computation framework based on BFV homomorphic encryption, threshold decryption, and blockchain technology. The proposed scheme leverages homomorphic encryption to enable secure computations between sales, finance, and taxation departments, ensuring that sensitive data remains encrypted throughout the entire process. A threshold decryption mechanism is employed to prevent single-point data leakage, while blockchain and IPFS are integrated to ensure verifiability and tamper-proof storage of computation results. Experimental results demonstrate that with 5,000 sample data entries, the framework performs efficiently and is highly scalable in key stages such as sales encryption, cost calculation, and tax assessment, thereby validating its practical feasibility and security.
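The threshold idea above (no single party can decrypt alone) is usually grounded in secret sharing. The sketch below is a plain Shamir (t, n) scheme over a prime field, offered purely as a conceptual stand-in: the paper's actual mechanism is threshold decryption of BFV ciphertexts, which would require a full homomorphic encryption library, and all names and the field size here are illustrative choices.

```python
import random

PRIME = 2 ** 127 - 1  # Mersenne prime defining the finite field

def make_shares(secret, t, n, seed=None):
    # Shamir (t, n) sharing: secret is the constant term of a random
    # degree-(t-1) polynomial; share i is the point (i, poly(i)).
    # Any t shares reconstruct the secret; fewer reveal nothing.
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(t - 1)]

    def poly(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc

    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the polynomial at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

Any three of five shares suffice, mirroring how a quorum of departments could jointly authorize a decryption.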
This study first demonstrates the potential of organic photoabsorbing blends in overcoming a critical limitation of metal oxide photoanodes in tandem modules: insufficient photogenerated current. Various organic blends, including PTB7-Th:FOIC, PTB7-Th:O6T-4F, PM6:Y6, and PM6:FM, were systematically tested. When coupled with electron transport layer (ETL) contacts, these blends exhibit exceptional charge separation and extraction, with PM6:Y6 achieving saturation photocurrents up to 16.8 mA cm^(-2) at 1.23 VRHE (the oxygen evolution thermodynamic potential). For the first time, a tandem structure utilizing organic photoanodes has been computationally designed and fabricated, and the implementation of a double PM6:Y6 photoanode/photovoltaic structure resulted in photogenerated currents exceeding 7 mA cm^(-2) at 0 VRHE (the hydrogen evolution thermodynamic potential) and anodic current onset potentials as low as -0.5 VRHE. The herein-presented organic-based approach paves the way for further exploration of different blend combinations to target specific oxidative reactions by selecting precise donor/acceptor candidates among the multiple existing ones.
As an essential element of intelligent transport systems, the Internet of vehicles (IoV) has recently brought an immersive user experience. Meanwhile, the emergence of mobile edge computing (MEC) has enhanced the computational capability of vehicles, which reduces task processing latency and power consumption effectively and meets the quality of service requirements of vehicle users. However, there are still problems in MEC-assisted IoV systems, such as poor connectivity and high cost. Unmanned aerial vehicles (UAVs) equipped with MEC servers have become a promising approach for providing communication and computing services to mobile vehicles. Hence, in this article, an optimal framework for the UAV-assisted MEC system for IoV to minimize the average system cost is presented. Through joint consideration of computational offloading decisions and computational resource allocation, the optimization problem of our proposed architecture is formulated to reduce system energy consumption and delay. To tackle this issue, the original non-convex problem is converted into a convex one, and an alternating direction method of multipliers-based distributed optimal scheme is developed. The simulation results illustrate that the presented scheme dramatically enhances system performance with regard to other schemes, and the convergence of the proposed scheme is also significant.
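The ADMM machinery referenced above alternates minimization over split variables with a dual update. The scalar toy below runs that loop on min (x-a)^2 + (z-b)^2 subject to x = z, whose solution is x = z = (a+b)/2; it illustrates only the update pattern, not the paper's convexified offloading program, and rho and the iteration count are arbitrary assumptions.

```python
def admm_consensus(a, b, rho=1.0, iters=100):
    # ADMM on min_x,z (x-a)^2 + (z-b)^2  s.t.  x = z.
    # Augmented Lagrangian (scaled dual u):
    #   (x-a)^2 + (z-b)^2 + (rho/2)*(x - z + u)^2
    x = z = u = 0.0
    for _ in range(iters):
        x = (2 * a + rho * (z - u)) / (2 + rho)  # x-minimization step
        z = (2 * b + rho * (x + u)) / (2 + rho)  # z-minimization step
        u += x - z                               # dual (price) update
    return x, z
```

The dual update drives x and z into consensus, the same mechanism that lets the paper's scheme split the offloading problem across agents.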
With the rapid growth of social media, the spread of fake news has become a growing problem, misleading the public and causing significant harm. As social media content is often composed of both images and text, the use of multimodal approaches for fake news detection has gained significant attention. To solve the problems in previous multimodal fake news detection algorithms, such as insufficient feature extraction and insufficient use of semantic relations between modalities, this paper proposes the MFFFND-Co (Multimodal Feature Fusion Fake News Detection with Co-Attention Block) model. First, the model deeply explores textual content, image content, and frequency-domain features. Then, it employs a Co-Attention mechanism for cross-modal fusion. Additionally, a semantic consistency detection module is designed to quantify semantic deviations, thereby enhancing the performance of fake news detection. Experimentally verified on two commonly used datasets, Twitter and Weibo, the model achieved F1 scores of 90.0% and 94.0%, respectively, significantly outperforming the pre-modified MFFFND (Multimodal Feature Fusion Fake News Detection with Attention Block) model and surpassing other baseline models. This improves the accuracy of fake information detection in artificial intelligence and engineering software applications.
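At the heart of any co-attention block is scaled dot-product cross-attention: queries from one modality attend over keys/values from the other. The pure-Python sketch below shows that single core operation without batching, multiple heads, or learned projections, all of which a real MFFFND-Co implementation would add.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def cross_attention(queries, keys, values):
    # Scaled dot-product cross-attention: e.g. text-token queries attending
    # over image-region keys/values (or vice versa) inside a co-attention block.
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out
```

Because the output is a convex combination of the values, a query aligned with one key pulls the fused feature toward that key's value.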
The dynamic routing mechanism in evolvable networks enables adaptive reconfiguration of topological structures and transmission pathways based on real-time task requirements and data characteristics. However, the heightened architectural complexity and expanded parameter dimensionality of evolvable networks present significant implementation challenges when deployed in resource-constrained environments. Because they ignore critical paths, traditional pruning strategies cannot achieve the desired trade-off between accuracy and efficiency. For this reason, a critical path retention pruning (CPRP) method is proposed. By deeply traversing the computational graph, the dependency relationships among nodes are derived. The nodes are then grouped and sorted according to their contribution value, and redundant operations are removed as much as possible while ensuring that the critical path is not affected. As a result, computational efficiency is improved while higher accuracy is maintained. On the CIFAR benchmark, the experimental results demonstrate that CPRP-induced pruning incurs accuracy degradation below 4.00%, while outperforming traditional feature-agnostic grouping methods by an average 8.98% accuracy improvement. Simultaneously, the pruned model attains a 2.41x inference acceleration while achieving 48.92% parameter compression and a 53.40% reduction in floating-point operations (FLOPs).
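The CPRP idea of "prune low-contribution nodes but never the critical path" can be reduced to a small graph exercise. The sketch below uses a single longest-cost path through a DAG as the critical path and drops the lowest-contribution nodes outside it; the real method's dependency grouping and multi-path handling are simplified away, and every name here is an illustrative assumption.

```python
def critical_path(graph, cost):
    # Longest-cost path through a DAG given as {node: [successors]};
    # returns the set of nodes on that path.
    order, seen = [], set()

    def visit(n):
        if n in seen:
            return
        seen.add(n)
        for m in graph.get(n, []):
            visit(m)
        order.append(n)  # post-order: successors appear before n

    for n in graph:
        visit(n)
    best = {}
    for n in order:
        tail = max((best[m] for m in graph.get(n, [])), default=(0, []))
        best[n] = (cost[n] + tail[0], [n] + tail[1])
    return set(max(best.values())[1])

def prune(graph, cost, contribution, keep_ratio):
    # Drop the lowest-contribution nodes while protecting the critical path.
    critical = critical_path(graph, cost)
    removable = sorted((n for n in cost if n not in critical),
                       key=lambda n: contribution[n])
    n_drop = int(len(cost) * (1 - keep_ratio))
    return set(removable[:n_drop])
```

In the diamond graph used in the check, the expensive branch a-b-d is protected even though b has a tiny contribution score, so only the off-path node c is eligible for removal.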
Both evolutionary computation (EC) and multiagent systems (MAS) study the emergence of intelligence through the interaction and cooperation of a group of individuals. EC focuses on solving various complex optimization problems, while MAS provides a flexible model for distributed artificial intelligence. Since their group interaction mechanisms can be borrowed from each other, many studies have attempted to combine EC and MAS. With the rapid development of the Internet of Things, the confluence of EC and MAS has become more and more important, and related articles have shown a continuously growing trend during the last decades. In this survey, we first elaborate on the mutual assistance of EC and MAS from two aspects, agent-based EC and EC-assisted MAS. Agent-based EC aims to introduce characteristics of MAS into EC to improve the performance and parallelism of EC, while EC-assisted MAS aims to use EC to better solve optimization problems in MAS. Furthermore, we review studies that combine the cooperation mechanisms of EC and MAS, which greatly leverage the strengths of both sides. A description framework is built to elaborate existing studies. Promising future research directions are also discussed in conjunction with emerging technologies and real-world applications.
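The EC loop the survey builds on can be shown at its absolute smallest: the (1+1) evolutionary algorithm on OneMax, where each bit flips with probability 1/n and the offspring replaces the parent only if it is at least as fit. This is a textbook illustration of the generic mutate-and-select cycle, not a method from the survey; the problem size and seed are arbitrary.

```python
import random

def one_plus_one_ea(n=20, generations=1000, seed=3):
    # (1+1) EA on OneMax (maximize the number of 1-bits):
    # mutate each bit with probability 1/n, keep the child if it is
    # at least as fit (elitist selection). Returns the final bit string
    # and the fitness history.
    rng = random.Random(seed)
    parent = [0] * n
    history = [0]
    for _ in range(generations):
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        if sum(child) >= sum(parent):
            parent = child
        history.append(sum(parent))
    return parent, history
```

Elitist selection makes the fitness trajectory monotone, which is the property agent-based EC variants distribute across a population of agents.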
Funding (VEC cloud-edge offloading study): supported by the Key Science and Technology Program of Henan Province, China (Grant Nos. 242102210147, 242102210027) and the Fujian Province Young and Middle-aged Teacher Education Research Project (Science and Technology Category) (No. JZ240101). (Corresponding author: Dong Yuan.)
Funding: Supported by the National Natural Science Foundation of China (62202215), the Liaoning Province Applied Basic Research Program (Youth Special Project, 2023JH2/101600038), the Shenyang Youth Science and Technology Innovation Talent Support Program (RC220458), the Guangxuan Program of Shenyang Ligong University (SYLUGXRC202216), the Basic Research Special Funds for Undergraduate Universities in Liaoning Province (LJ212410144067), the Natural Science Foundation of Liaoning Province (2024-MS-113), and the science and technology funds from the Liaoning Education Department (LJKZ0242).
Abstract: In the field of edge computing, achieving low-latency computational task offloading with limited resources is a critical research challenge, particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications. In scenarios where edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance, thereby increasing system latency. Furthermore, in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency issues. To address these challenges, we propose a novel and efficient Deep Reinforcement Learning (DRL)-based approach aimed at minimizing average task latency. The proposed method incorporates three offloading strategies: local computation, direct offloading to the edge server in the local region, and device-to-device (D2D)-assisted offloading to edge servers in other regions. We formulate the task offloading process as a complex latency minimization optimization problem. To solve it, we propose an advanced algorithm based on the Dueling Double Deep Q-Network (D3QN) architecture that incorporates the Prioritized Experience Replay (PER) mechanism. Experimental results demonstrate that, compared with existing offloading algorithms, the proposed method significantly reduces average task latency, enhances user experience, and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
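The PER mechanism mentioned above can be sketched in its standard proportional form: transitions with larger TD error get proportionally higher sampling probability. This is the generic textbook variant, not necessarily the paper's exact buffer; real implementations use a sum-tree for O(log N) sampling, and all values below are made up.

```python
# Hedged sketch of proportional Prioritized Experience Replay (PER).
# Priority p_i = (|td_error| + eps)^alpha; sampling probability p_i / sum(p).
import random

class PrioritizedReplay:
    def __init__(self, alpha=0.6, eps=1e-3):
        self.alpha, self.eps = alpha, eps
        self.items, self.priorities = [], []

    def add(self, transition, td_error):
        self.items.append(transition)
        self.priorities.append((abs(td_error) + self.eps) ** self.alpha)

    def probabilities(self):
        total = sum(self.priorities)
        return [p / total for p in self.priorities]

    def sample(self, k, rng=random):
        # weighted sampling with replacement, proportional to priority
        return rng.choices(self.items, weights=self.priorities, k=k)

buf = PrioritizedReplay(alpha=1.0, eps=0.0)
buf.add("t1", td_error=3.0)
buf.add("t2", td_error=1.0)
probs = buf.probabilities()   # "t1" is sampled three times as often as "t2"
batch = buf.sample(5)
```

A full D3QN agent would also apply importance-sampling weights when learning from the sampled batch to correct the bias PER introduces.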
Funding: Supported by the Major Project for the Integration of Science, Education and Industry (Grant No. 2025ZDZX02).
Abstract: Classical computation of electronic properties in large-scale materials remains challenging. Quantum computation has the potential to offer advantages in memory footprint and computational scaling. However, general and viable quantum algorithms for simulating large-scale materials are still limited. We propose and implement random-state quantum algorithms to calculate electronic-structure properties of real materials. Using a random-state circuit on a small number of qubits, we employ real-time evolution with first-order Trotter decomposition and the Hadamard test to obtain the electronic density of states, and we develop a modified quantum phase estimation algorithm to calculate the real-space local density of states via direct quantum measurements. Furthermore, we validate these algorithms by numerically computing the density of states and spatial distributions of electronic states in graphene, twisted bilayer graphene quasicrystals, and fractal lattices, covering system sizes from hundreds to thousands of atoms. Our results show that random-state quantum algorithms provide a general and qubit-efficient route to scalable simulations of electronic properties in large-scale periodic and aperiodic materials.
Funding: Supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (MSIT) (No. RS-2022-00143178) and the Ministry of Education (MOE) (Nos. 2022R1A6A3A13053896 and 2022R1F1A1074616), Republic of Korea.
Abstract: Beam-tracking simulations have been extensively utilized in the study of collective beam instabilities in circular accelerators. Traditionally, many simulation codes have relied on central processing unit (CPU)-based methods, tracking on a single CPU core or parallelizing the computation across multiple cores via the Message Passing Interface (MPI). Although these approaches work well for single-bunch tracking, scaling them to multiple bunches significantly increases the computational load, which often necessitates a dedicated multi-CPU cluster. To address this challenge, alternative methods leveraging General-Purpose computing on Graphics Processing Units (GPGPU) have been proposed, enabling tracking studies on a standalone desktop personal computer (PC). However, frequent CPU-GPU interactions, including data transfers and synchronization operations during tracking, can introduce communication overhead, potentially reducing the overall effectiveness of GPU-based computation. In this study, we propose a novel approach that eliminates this overhead by performing the entire tracking simulation exclusively on the GPU, thereby enabling the simultaneous processing of all bunches and their macro-particles. Specifically, we introduce MBTRACK2-CUDA, a Compute Unified Device Architecture (CUDA)-ported version of MBTRACK2, which facilitates efficient tracking of single- and multi-bunch collective effects by leveraging fully GPU-resident computation.
Funding: Supported by the Youth Talent Project of the Scientific Research Program of the Hubei Provincial Department of Education under Grant Q20241809 and the Doctoral Scientific Research Foundation of Hubei University of Automotive Technology under Grant 202404.
Abstract: As Internet of Things (IoT) applications expand, Mobile Edge Computing (MEC) has emerged as a promising architecture to overcome the real-time processing limitations of mobile devices. Edge-side computation offloading plays a pivotal role in MEC performance but remains challenging due to complex task topologies, conflicting objectives, and limited resources. This paper addresses high-dimensional multi-objective offloading for serial heterogeneous tasks in MEC. We jointly consider task heterogeneity, high-dimensional objectives, and flexible resource scheduling, modeling the problem as a many-objective optimization problem. To solve it, we propose a flexible framework integrating an improved cooperative co-evolutionary algorithm based on decomposition (MOCC/D) and a flexible scheduling strategy. Experimental results on benchmark functions and simulation scenarios show that the proposed method outperforms existing approaches in both convergence and solution quality.
Funding: Supported by the Basque Government through the ELKARTEK program for Research and Innovation, under the BRTAQUANTUM project (Grant Agreement No. KK-2022/00041).
Abstract: Privacy-Preserving Computation (PPC) comprises the techniques, schemes, and protocols that ensure privacy and confidentiality in the context of secure computation and data analysis. Most current PPC techniques rely on the computational hardness of cryptographic problems that quantum computers are expected to solve efficiently in the near future. This review explores how PPC can be built on top of quantum computing itself to alleviate these future threats. We analyze quantum proposals for Secure Multi-party Computation, Oblivious Transfer, and Homomorphic Encryption from the last decade, focusing on their maturity and the challenges they currently face. Our findings show a strong focus on purely theoretical works, but a rise in the experimental consideration of these techniques over the last five years. The applicability of these techniques to actual use cases is an underexplored aspect whose study could lead to their practical assessment.
Abstract: 1 Summary. Mathematical modeling has become a cornerstone in understanding the complex dynamics of infectious diseases and chronic health conditions. With the advent of more refined computational techniques, researchers are now able to incorporate intricate features such as delays, stochastic effects, fractional dynamics, variable-order systems, and uncertainty into epidemic models. These advancements not only improve predictive accuracy but also enable deeper insights into disease transmission, control, and policy-making. Tashfeen et al.
Abstract: The purpose of this review is to explore the intersection of computational engineering and biomedical science, highlighting the transformative potential this convergence holds for innovation in healthcare and medical research. The review covers key topics such as computational modelling, bioinformatics, machine learning in medical diagnostics, and the integration of wearable technology for real-time health monitoring. Major findings indicate that computational models have significantly enhanced the understanding of complex biological systems, while machine learning algorithms have improved the accuracy of disease prediction and diagnosis. The synergy between bioinformatics and computational techniques has led to breakthroughs in personalized medicine, enabling more precise treatment strategies. Additionally, the integration of wearable devices with advanced computational methods has opened new avenues for continuous health monitoring and early disease detection. The review emphasizes the need for interdisciplinary collaboration to further advance this field. Future research should focus on developing more robust and scalable computational models, enhancing data integration techniques, and addressing ethical considerations related to data privacy and security. By fostering innovation at the intersection of these disciplines, the potential to revolutionize healthcare delivery and outcomes becomes increasingly attainable.
Funding: Supported by the National Key Research and Development Program of China (No. 2021YFB2900504).
Abstract: In 6th Generation Mobile Networks (6G), the Space-Integrated-Ground (SIG) Radio Access Network (RAN) promises seamless coverage and exceptionally high Quality of Service (QoS) for diverse services. However, achieving this necessitates effective management of computation and wireless resources tailored to the requirements of various services. The heterogeneity of computation resources and interference among shared wireless resources pose significant coordination and management challenges. To solve these problems, this work provides an overview of multi-dimensional resource management in the 6G SIG RAN, covering both computation and wireless resources. First, it reviews current investigations on computation and wireless resource management and analyzes existing deficiencies and challenges. Then, focusing on these challenges, the work proposes an MEC-based computation resource management scheme and a mixed-numerology-based wireless resource management scheme. Furthermore, it outlines promising future technologies, including joint model-driven and data-driven resource management and blockchain-based resource management within the 6G SIG network. The work also highlights remaining challenges, such as reducing communication costs associated with unstable ground-to-satellite links and overcoming barriers posed by spectrum isolation. Overall, this comprehensive approach aims to pave the way for efficient and effective resource management in future 6G networks.
Funding: Supported by the Fundamental Research Funds for the Central Universities (Grant No. 2023YJS053) and the National Natural Science Foundation of China (Grant No. 52278386).
Abstract: To fundamentally alleviate excavation chamber clogging during slurry tunnel boring machine (TBM) advancement in hard rock, a large-diameter short screw conveyor was adopted for the slurry TBM of the Qingdao Jiaozhou Bay Second Undersea Tunnel. To evaluate the discharging performance of the short screw conveyor in different cases, a full-scale transient slurry-rock two-phase model of a short screw conveyor actively discharging rocks was established using the computational fluid dynamics-discrete element method (CFD-DEM) coupling approach. In the fluid domain of the coupling model, sliding mesh technology was utilized to describe the rotations of the atmospheric composite cutterhead and the short screw conveyor. In the particle domain, dynamic particle factories were established to produce rock particles with the rotation of the cutterhead. The accuracy and reliability of the CFD-DEM simulation results were validated via field and model tests. Furthermore, a comprehensive parameter analysis was conducted to examine the effects of TBM operating parameters, the geometric design of the screw conveyor, and rock size on discharging performance. Accordingly, a reasonable rotational speed of the screw conveyor was suggested and applied to the Jiaozhou Bay Second Undersea Tunnel project. The findings of this paper provide valuable references for addressing excavation chamber clogging during ultra-large-diameter slurry TBM tunneling in hard rock in similar future projects.
Abstract: The Literary Lab at Stanford University is one of the birthplaces of digital humanities and has maintained significant influence in this field over the years. Professor Hui Haifeng has been engaged in research on digital humanities and computational criticism in recent years. During his visiting scholarship at Stanford University, he participated in the activities of the Literary Lab. Taking this opportunity, he interviewed Professor Mark Algee-Hewitt, the director of the Literary Lab, discussing important topics such as the current state and reception of DH (digital humanities) in the English Department, the operations of the Literary Lab, and the landscape of computational criticism. Mark Algee-Hewitt's research focuses on the eighteenth and early nineteenth centuries in England and Germany and seeks to combine literary criticism with digital and quantitative analyses of literary texts. In particular, he is interested in the history of aesthetic theory and the development and transmission of aesthetic and philosophical concepts during the Enlightenment and Romantic periods. He is also interested in the relationship between aesthetic theory and the poetry of the long eighteenth century. Although his primary background is English literature, he also has a degree in computer science. He believes that the influence of digital humanities within the humanities disciplines is growing increasingly significant. This impact is evident in both the attraction and assistance it offers to students, as well as in the new interpretations it brings to traditional literary studies. He argues that the key to effectively integrating digital humanities into the English Department is to focus on literary research questions, exploring how digital tools can raise new questions or provide new insights into traditional research.
Abstract: The emphasis on the simplification of cognitive and motor tasks by recent results of morphological computation has rendered possible the construction of appropriate "mimetic bodies" able to render the accompanying computations simpler, according to a general appeal to the "simplexity" of animal embodied cognition. A new activity of what we can call "distributed computation" holds the promise of originating a new generation of robots with better adaptability and a reduced number of required control parameters. The framework of distributed computation helps us see them in a more naturalized and prudent perspective, avoiding ontological or metaphysical considerations. Despite this progress, problems regarding the epistemological limitations of computational modeling remain to be solved.
Funding: Supported by the National Natural Science Foundation of China under Grant No. 62231012, the Natural Science Foundation for Outstanding Young Scholars of Heilongjiang Province under Grant YQ2020F001, and the Heilongjiang Province Postdoctoral General Foundation under Grant AUGA4110004923.
Abstract: Low Earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form a LEO satellite edge computing system, providing computing services for global ground users. In this paper, the computation offloading and resource allocation problems are formulated as a mixed-integer nonlinear program (MINLP). This paper proposes a computation offloading algorithm based on deep deterministic policy gradient (DDPG) to obtain the user offloading decisions and user uplink transmission power, and uses a convex optimization algorithm based on the Lagrange multiplier method to obtain the optimal MEC server resource allocation scheme. In addition, an expression for suboptimal user local CPU cycles is derived by a relaxation method. Simulation results show that the proposed algorithm achieves excellent convergence and significantly reduces the system utility values, at a considerable time cost, compared with other algorithms.
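To illustrate the kind of Lagrange-multiplier subproblem the abstract mentions: for the classic convex allocation min Σᵢ wᵢcᵢ/fᵢ subject to Σᵢ fᵢ = F, the KKT conditions give the closed form fᵢ* = F·√(wᵢcᵢ) / Σⱼ√(wⱼcⱼ). This is the textbook result, shown here only as a plausible instance of the server-side allocation step; the weights and cycle counts are made-up numbers, not the paper's data.

```python
# Hedged sketch: closed-form CPU-frequency allocation from the Lagrange
# multiplier method for min sum_i w_i*c_i/f_i s.t. sum_i f_i = F, f_i > 0.
import math

def allocate(F, w, c):
    """KKT solution: f_i proportional to sqrt(w_i * c_i)."""
    roots = [math.sqrt(wi * ci) for wi, ci in zip(w, c)]
    s = sum(roots)
    return [F * r / s for r in roots]

def total_delay(f, w, c):
    return sum(wi * ci / fi for wi, ci, fi in zip(w, c, f))

F = 10.0                 # total MEC server capacity (hypothetical, GHz)
w = [1.0, 2.0, 1.0]      # per-user weights (hypothetical)
c = [4.0, 2.0, 1.0]      # required CPU cycles (hypothetical, Gcycles)
f = allocate(F, w, c)    # optimal split; beats an equal split
```

By Cauchy-Schwarz, any feasible split (e.g., the equal split `[F/3]*3`) yields a weighted delay at least as large as this allocation's.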
Abstract: In this study, the flow characteristics around a group of three piers arranged in tandem were investigated both numerically and experimentally. The simulation utilised the volume of fluid (VOF) model in conjunction with the k-ε method (i.e., for flow turbulence representation), implemented through the ANSYS FLUENT software, to model the free-surface flow. The simulation results were validated against laboratory measurements obtained using an acoustic Doppler velocimeter. The comparative analysis revealed discrepancies between the simulated and measured maximum velocities within the investigated flow field. However, the numerical results demonstrated a distinct vortex-induced flow pattern following the first pier and throughout the vicinity of the entire pier group, which aligned reasonably well with experimental data. In the heavily narrowed spaces between the piers, simulated velocity profiles were overestimated in the free-surface region and underestimated in the areas near the bed to the mid-stream when compared to measurements. These discrepancies diminished away from the regions with intense vortices, indicating that the employed model was capable of simulating relatively less disturbed flow turbulence. Furthermore, velocity results from both simulations and measurements were compared based on velocity distributions at three different depth ratios (0.15, 0.40, and 0.62) to assess vortex characteristics around the piers. This comparison revealed consistent results between experimental and simulated data. This research contributes to a deeper understanding of flow dynamics around complex interactive pier systems, which is critical for designing stable and sustainable hydraulic structures. Furthermore, the insights gained from this study provide valuable information for engineers aiming to develop effective strategies for controlling scour and minimizing destructive vortex effects, thereby guiding the design and maintenance of sustainable infrastructure.
Abstract: As the demand for cross-departmental data collaboration continues to grow, traditional encryption methods struggle to balance data privacy with computational efficiency. This paper proposes a cross-departmental privacy-preserving computation framework based on BFV homomorphic encryption, threshold decryption, and blockchain technology. The proposed scheme leverages homomorphic encryption to enable secure computations between sales, finance, and taxation departments, ensuring that sensitive data remain encrypted throughout the entire process. A threshold decryption mechanism is employed to prevent single-point data leakage, while blockchain and IPFS are integrated to ensure verifiability and tamper-proof storage of computation results. Experimental results demonstrate that, with 5,000 sample data entries, the framework performs efficiently and is highly scalable in key stages such as sales encryption, cost calculation, and tax assessment, thereby validating its practical feasibility and security.
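The framework above rests on the additive homomorphic property: sums can be computed on ciphertexts without decrypting the operands. The paper uses BFV (a lattice-based scheme); as a self-contained stand-in, the toy Paillier example below demonstrates the same property with tiny, deliberately insecure parameters. It is purely illustrative and is not BFV, threshold decryption, or anything suitable for real data.

```python
# Toy Paillier cryptosystem with tiny insecure parameters, shown only to
# illustrate additive homomorphism: Dec(Enc(a) * Enc(b) mod n^2) = a + b.
import math

p, q = 17, 19
n = p * q                                     # public modulus (toy size)
n2 = n * n
g = n + 1                                     # standard choice of generator
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)           # precomputed decryption factor

def enc(m, r):
    assert math.gcd(r, n) == 1                # r must be invertible mod n
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = enc(42, r=7), enc(100, r=11)
total = dec((c1 * c2) % n2)                   # 42 + 100, computed on ciphertexts
```

In the cross-departmental setting this is what lets, say, a tax service aggregate encrypted sales figures without ever seeing the individual values.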
Funding: Partly funded by a BIST Ignite Programme grant from the Barcelona Institute of Science and Technology (Code: MOLOPEC); financial support from the LICROX and SOREC2 EU-funded projects (Codes: 951843 and 101084326); the BIST Program and the Severo Ochoa Program; partially funded by CEX2019-000910-S (MCIN/AEI/10.13039/501100011033 and PID2020-112650RBI00), Fundació Cellex, Fundació Mir-Puig, and Generalitat de Catalunya through CERCA; funding from the European Union's Horizon Europe research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 101081441; financial support from the Agencia Estatal de Investigación (grant PRE2018-084881); support from the MCIN/AEI JdC-F Fellowship (FJC2020-043223-I); and the Severo Ochoa Excellence Postdoctoral Fellowship (CEX2019-000910-S).
Abstract: This study first demonstrates the potential of organic photoabsorbing blends in overcoming a critical limitation of metal oxide photoanodes in tandem modules: insufficient photogenerated current. Various organic blends, including PTB7-Th:FOIC, PTB7-Th:O6T-4F, PM6:Y6, and PM6:FM, were systematically tested. When coupled with electron transport layer (ETL) contacts, these blends exhibit exceptional charge separation and extraction, with PM6:Y6 achieving saturation photocurrents up to 16.8 mA cm^(-2) at 1.23 VRHE (the oxygen evolution thermodynamic potential). For the first time, a tandem structure utilizing organic photoanodes has been computationally designed and fabricated, and the implementation of a double PM6:Y6 photoanode/photovoltaic structure resulted in photogenerated currents exceeding 7 mA cm^(-2) at 0 VRHE (the hydrogen evolution thermodynamic potential) and anodic current onset potentials as low as -0.5 VRHE. The organic-based approach presented here paves the way for further exploration of different blend combinations to target specific oxidative reactions by selecting precise donor/acceptor candidates among the many existing ones.
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) under Grant 62371012 and in part by the Beijing Natural Science Foundation under Grant 4252001.
Abstract: As an essential element of intelligent transport systems, the Internet of Vehicles (IoV) has recently brought an immersive user experience. Meanwhile, the emergence of mobile edge computing (MEC) has enhanced the computational capability of vehicles, which effectively reduces task processing latency and power consumption and meets the quality-of-service requirements of vehicle users. However, there are still problems in the MEC-assisted IoV system, such as poor connectivity and high cost. Unmanned aerial vehicles (UAVs) equipped with MEC servers have become a promising approach for providing communication and computing services to mobile vehicles. Hence, in this article, an optimal framework for the UAV-assisted MEC system for IoV that minimizes the average system cost is presented. Through joint consideration of computation offloading decisions and computational resource allocation, the optimization problem of the proposed architecture is formulated to reduce system energy consumption and delay. To tackle this issue, the original non-convex problem is converted into a convex one, and a distributed optimal scheme based on the alternating direction method of multipliers (ADMM) is developed. The simulation results illustrate that the presented scheme dramatically enhances system performance relative to other schemes, and the convergence of the proposed scheme is also significant.
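The ADMM-based distributed scheme mentioned above can be illustrated on the smallest possible consensus problem: two agents jointly minimize (x - a)² + (x - b)² over a shared variable, whose optimum is (a + b)/2. This is a generic textbook consensus-ADMM loop, not the paper's algorithm, and the constants are hypothetical.

```python
# Minimal illustrative consensus ADMM: each agent keeps a local copy x_i and
# a scaled dual u_i; a consensus variable z is updated by averaging.
def consensus_admm(a, b, rho=1.0, iters=100):
    x1 = x2 = z = 0.0
    u1 = u2 = 0.0
    for _ in range(iters):
        # Local updates: argmin_x (x - t)^2 + (rho/2) * (x - z + u)^2
        x1 = (2 * a + rho * (z - u1)) / (2 + rho)
        x2 = (2 * b + rho * (z - u2)) / (2 + rho)
        # Consensus step: average of (x_i + u_i)
        z = (x1 + u1 + x2 + u2) / 2
        # Dual ascent on the residuals
        u1 += x1 - z
        u2 += x2 - z
    return z

x_star = consensus_admm(a=3.0, b=7.0)   # converges toward (3 + 7) / 2 = 5
```

In the UAV-MEC setting, each "agent" would be a subproblem (offloading decision, resource share) coupled through shared constraints in the same split-and-average pattern.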
Funding: Supported by the Communication University of China (HG23035) and partly supported by the Fundamental Research Funds for the Central Universities (CUC230A013).
Abstract: With the rapid growth of social media, the spread of fake news has become a growing problem, misleading the public and causing significant harm. As social media content is often composed of both images and text, multimodal approaches for fake news detection have gained significant attention. To address the problems of previous multimodal fake news detection algorithms, such as insufficient feature extraction and insufficient use of semantic relations between modalities, this paper proposes the MFFFND-Co (Multimodal Feature Fusion Fake News Detection with Co-Attention Block) model. First, the model deeply explores textual content, image content, and frequency-domain features. Then, it employs a co-attention mechanism for cross-modal fusion. Additionally, a semantic consistency detection module is designed to quantify semantic deviations, thereby enhancing the performance of fake news detection. Experimentally verified on two commonly used datasets, Twitter and Weibo, the model achieved F1 scores of 90.0% and 94.0%, respectively, significantly outperforming the pre-modification MFFFND (Multimodal Feature Fusion Fake News Detection with Attention Block) model and surpassing other baseline models, thereby improving the accuracy of fake-information detection.
Funding: Supported by the National Key Research and Development Program of China (No. 2022ZD0119003) and the National Natural Science Foundation of China (No. 61834005).
Abstract: The dynamic routing mechanism in evolvable networks enables adaptive reconfiguration of topological structures and transmission pathways based on real-time task requirements and data characteristics. However, the heightened architectural complexity and expanded parameter dimensionality of evolvable networks present significant implementation challenges in resource-constrained environments. Because critical paths are ignored, traditional pruning strategies cannot achieve a desirable trade-off between accuracy and efficiency. For this reason, a critical path retention pruning (CPRP) method is proposed. By deeply traversing the computational graph, the dependency relationships among nodes are derived. The nodes are then grouped and sorted according to their contribution values, and redundant operations are removed as far as possible while ensuring that critical paths are not affected. As a result, computational efficiency is improved while high accuracy is maintained. On the CIFAR benchmark, experimental results demonstrate that CPRP-induced pruning incurs accuracy degradation below 4.00%, while outperforming traditional feature-agnostic grouping methods by an average accuracy improvement of 8.98%. Simultaneously, the pruned model attains a 2.41x inference acceleration while achieving 48.92% parameter compression and a 53.40% reduction in floating-point operations (FLOPs).
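The "score, sort, and remove while protecting critical structure" idea can be sketched with a simple contribution-based channel pruner: groups of weights are scored by L1 norm, the lowest-scoring groups are dropped, and a caller-supplied set of critical channels always survives. This mirrors the retention idea in the abstract but is a simplified stand-in, not the CPRP algorithm; all channel names and weights below are hypothetical.

```python
# Hedged sketch of contribution-based structured pruning with a protected
# "critical" set, loosely analogous to critical-path retention.
def prune_channels(channels, keep_ratio, critical):
    """channels: {name: [weights]}; critical: names that must survive."""
    # Contribution score: L1 norm of each channel's weights.
    scores = {name: sum(abs(w) for w in ws) for name, ws in channels.items()}
    n_keep = max(len(critical), round(keep_ratio * len(channels)))
    ranked = sorted(scores, key=scores.get, reverse=True)
    kept = set(critical)               # critical channels are never pruned
    for name in ranked:
        if len(kept) >= n_keep:
            break
        kept.add(name)
    return {name: ws for name, ws in channels.items() if name in kept}

chans = {"c0": [0.9, -0.8], "c1": [0.01, 0.02],
         "c2": [0.5, 0.4], "c3": [0.02, -0.01]}
# Keep half the channels; "c3" is weak but marked critical, so it survives
# while the low-contribution "c1" is removed.
pruned = prune_channels(chans, keep_ratio=0.5, critical={"c3"})
```

A graph-aware method like CPRP would derive the protected set from computational-graph dependencies rather than take it as input.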
基金supported in part by the National Key Research and Development Project(2023YFE0206200)the National Natural Science Foundation of China(U23B2058)+3 种基金in part by Guangdong Regional Joint Foundation Key Project(2022B1515120076)the National Research Foundation of Korea(NRF)grant funded by the Korea government(MSIT)(RS-2025-00555463&RS-2025-25456394)the Tianjin Top Scientist Studio Project(24JRRCRC00030)the Tianjin Belt and Road Joint Laboratory(24PTLYHZ00250).
Abstract: Both evolutionary computation (EC) and multiagent systems (MAS) study the emergence of intelligence through the interaction and cooperation of a group of individuals. EC focuses on solving various complex optimization problems, while MAS provides a flexible model for distributed artificial intelligence. Since their group interaction mechanisms can be borrowed from each other, many studies have attempted to combine EC and MAS. With the rapid development of the Internet of Things, the confluence of EC and MAS has become increasingly important, and related articles have shown a continuously growing trend over recent decades. In this survey, we first elaborate on the mutual assistance of EC and MAS from two aspects: agent-based EC and EC-assisted MAS. Agent-based EC aims to introduce characteristics of MAS into EC to improve the performance and parallelism of EC, while EC-assisted MAS aims to use EC to better solve optimization problems in MAS. Furthermore, we review studies that combine the cooperation mechanisms of EC and MAS, which greatly leverage the strengths of both sides. A description framework is built to organize existing studies. Promising future research directions are also discussed in conjunction with emerging technologies and real-world applications.