Journal Articles
255,008 articles found
A Multi-Objective Deep Reinforcement Learning Algorithm for Computation Offloading in Internet of Vehicles
Authors: Junjun Ren, Guoqiang Chen, Zheng-Yi Chai, Dong Yuan. Computers, Materials & Continua, 2026, No. 1, pp. 2111-2136 (26 pages).
Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, the limited storage capacity and energy budget of RSUs make it challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment, so determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFN), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
Keywords: deep reinforcement learning; Internet of Vehicles; multi-objective optimization; cloud-edge computing; computation offloading; service caching
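The dynamic weight adaptation described in the abstract (learned with an RBFN in the paper) can be sketched in miniature: scalarize per-objective rewards with a weight vector and nudge the weights toward objectives whose costs recently worsened. The softmax update below is an illustrative assumption, not the paper's RBFN mechanism, and all names are hypothetical.

```python
import numpy as np

def scalarize(rewards, weights):
    """Weighted-sum scalarization of per-objective rewards."""
    return float(np.dot(weights, rewards))

def update_weights(weights, deltas, lr=0.5):
    """Shift weight toward objectives whose cost recently increased
    (larger positive delta in cost draws more attention)."""
    logits = np.log(weights + 1e-12) + lr * deltas
    w = np.exp(logits - logits.max())
    return w / w.sum()

# Two objectives: task delay and energy consumption (costs; reward = -cost).
weights = np.array([0.5, 0.5])
prev = np.array([1.0, 1.0])   # previous per-objective costs
cur = np.array([1.4, 0.9])    # delay worsened, energy improved
weights = update_weights(weights, cur - prev)
r = scalarize(-cur, weights)  # scalar reward fed to a DDQN agent
```

With these numbers, the delay objective's weight grows above 0.5 while the weights still sum to one, so the agent's next updates emphasize the objective that degraded.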
DRL-Based Cross-Regional Computation Offloading Algorithm
Authors: Lincong Zhang, Yuqing Liu, Kefeng Wei, Weinan Zhao, Bo Qian. Computers, Materials & Continua, 2026, No. 1, pp. 901-918 (18 pages).
In the field of edge computing, achieving low-latency computational task offloading with limited resources is a critical research challenge, particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications. In scenarios where edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance, thereby increasing system latency. Furthermore, in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency issues. To address these challenges, we propose a novel and efficient Deep Reinforcement Learning (DRL)-based approach aimed at minimizing average task latency. The proposed method incorporates three offloading strategies: local computation, direct offloading to the edge server in the local region, and device-to-device (D2D)-assisted offloading to edge servers in other regions. We formulate the task offloading process as a complex latency minimization optimization problem. To solve it, we propose an advanced algorithm based on the Dueling Double Deep Q-Network (D3QN) architecture, incorporating the Prioritized Experience Replay (PER) mechanism. Experimental results demonstrate that, compared with existing offloading algorithms, the proposed method significantly reduces average task latency, enhances user experience, and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
Keywords: edge computing; computational task offloading; deep reinforcement learning; D3QN; device-to-device communication; system latency optimization
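The PER mechanism the abstract mentions is standard enough to sketch: transitions are replayed with probability proportional to their TD error raised to a power alpha. A minimal proportional-sampling buffer (without the sum-tree and importance-sampling weights a production D3QN would use) might look like:

```python
import random

class PrioritizedReplay:
    """Minimal proportional prioritized experience replay (PER):
    transitions with larger TD error are sampled more often."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prios = [], []

    def push(self, transition, td_error):
        if len(self.data) >= self.capacity:          # drop oldest when full
            self.data.pop(0); self.prios.pop(0)
        self.data.append(transition)
        self.prios.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, k):
        total = sum(self.prios)
        probs = [p / total for p in self.prios]
        idx = random.choices(range(len(self.data)), weights=probs, k=k)
        return [self.data[i] for i in idx], idx

buf = PrioritizedReplay(capacity=1000)
buf.push(("s", "a", -1.0, "s2"), td_error=0.1)
buf.push(("s", "a", -0.2, "s2"), td_error=5.0)   # high-error transition
batch, _ = buf.sample(32)                        # dominated by the second entry
```

In a full D3QN the sampled indices would also carry importance weights to correct the bias this non-uniform sampling introduces.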
CUDA-based GPU-only computation for efficient tracking simulation of single and multi-bunch collective effects
Authors: Keon Hee Kim, Eun-San Kim. Nuclear Science and Techniques, 2026, No. 1, pp. 61-79 (19 pages).
Beam-tracking simulations have been extensively utilized in the study of collective beam instabilities in circular accelerators. Traditionally, many simulation codes have relied on central processing unit (CPU)-based methods, tracking on a single CPU core or parallelizing the computation across multiple cores via the Message Passing Interface (MPI). Although these approaches work well for single-bunch tracking, scaling them to multiple bunches significantly increases the computational load, which often necessitates a dedicated multi-CPU cluster. To address this challenge, alternative methods leveraging General-Purpose computing on Graphics Processing Units (GPGPU) have been proposed, enabling tracking studies on a standalone desktop personal computer (PC). However, frequent CPU-GPU interactions, including data transfers and synchronization operations during tracking, can introduce communication overheads, potentially reducing the overall effectiveness of GPU-based computations. In this study, we propose a novel approach that eliminates this overhead by performing the entire tracking simulation exclusively on the GPU, thereby enabling the simultaneous processing of all bunches and their macro-particles. Specifically, we introduce MBTRACK2-CUDA, a Compute Unified Device Architecture (CUDA)-ported version of MBTRACK2, which facilitates efficient tracking of single- and multi-bunch collective effects by leveraging fully GPU-resident computation.
Keywords: code development; GPU computing; collective effects
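The GPU-resident idea, keeping every bunch's macro-particles in one device array and advancing them together so no per-bunch host transfers are needed, can be illustrated with a NumPy stand-in. The actual MBTRACK2-CUDA kernels are not shown here; the linear one-turn map and tune value are illustrative assumptions.

```python
import numpy as np

def one_turn(coords, tune=0.31):
    """Apply one linear turn to all bunches and macro-particles at once.
    coords: (n_bunch, n_macro, 2) array of (x, x') phase-space pairs."""
    mu = 2 * np.pi * tune
    R = np.array([[np.cos(mu), np.sin(mu)],
                  [-np.sin(mu), np.cos(mu)]])   # phase-space rotation
    return coords @ R.T                          # one batched matrix multiply

rng = np.random.default_rng(0)
coords = rng.normal(size=(100, 1000, 2))         # 100 bunches x 1000 macro-particles
for _ in range(64):                              # 64 turns, data stays "resident"
    coords = one_turn(coords)
```

Because the whole loop operates on one resident array, a CUDA port of this pattern would launch kernels turn by turn without copying coordinates back to the host, which is exactly the overhead the abstract targets.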
High-Dimensional Multi-Objective Computation Offloading for MEC in Serial Isomerism Tasks via Flexible Optimization Framework
Authors: Zheng Yao, Puqing Chang. Computers, Materials & Continua, 2026, No. 1, pp. 1160-1177 (18 pages).
As Internet of Things (IoT) applications expand, Mobile Edge Computing (MEC) has emerged as a promising architecture to overcome the real-time processing limitations of mobile devices. Edge-side computation offloading plays a pivotal role in MEC performance but remains challenging due to complex task topologies, conflicting objectives, and limited resources. This paper addresses high-dimensional multi-objective offloading for serial heterogeneous tasks in MEC. We jointly consider task heterogeneity, high-dimensional objectives, and flexible resource scheduling, modeling the problem as many-objective optimization. To solve it, we propose a flexible framework integrating an improved cooperative co-evolutionary algorithm based on decomposition (MOCC/D) and a flexible scheduling strategy. Experimental results on benchmark functions and simulation scenarios show that the proposed method outperforms existing approaches in both convergence and solution quality.
Keywords: edge computing; offloading; serial isomerism applications; many-objective optimization; flexible resource scheduling
Random State Approach to Quantum Computation of Electronic-Structure Properties
Authors: Yiran Bai, Feng Xiong, Xueheng Kuang. Chinese Physics Letters, 2026, No. 1, pp. 89-104 (16 pages).
Classical computation of electronic properties in large-scale materials remains challenging. Quantum computation has the potential to offer advantages in memory footprint and computational scaling. However, general and viable quantum algorithms for simulating large-scale materials are still limited. We propose and implement random-state quantum algorithms to calculate electronic-structure properties of real materials. Using a random-state circuit on a small number of qubits, we employ real-time evolution with first-order Trotter decomposition and the Hadamard test to obtain the electronic density of states, and we develop a modified quantum phase estimation algorithm to calculate the real-space local density of states via direct quantum measurements. Furthermore, we validate these algorithms by numerically computing the density of states and spatial distributions of electronic states in graphene, twisted bilayer graphene quasicrystals, and fractal lattices, covering system sizes from hundreds to thousands of atoms. Our results show that random-state quantum algorithms provide a general and qubit-efficient route to scalable simulations of electronic properties in large-scale periodic and aperiodic materials.
Keywords: periodic materials; aperiodic materials; random-state circuit; random-state quantum algorithms; electronic-structure properties; density of states; quantum computation
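The Hadamard test used here to extract spectral information estimates Re<psi|U|psi> from the ancilla's probability of reading 0. A small statevector simulation of that circuit (an illustration, not the paper's implementation) confirms the identity Re<psi|U|psi> = 2*P(0) - 1:

```python
import numpy as np

def hadamard_test_real(U, psi):
    """Statevector simulation of the Hadamard test on (ancilla x system):
    after H, controlled-U, H, the ancilla gives P(0) = (1 + Re<psi|U|psi>)/2."""
    n = len(psi)
    # after the first H on the ancilla: (|0>psi + |1>psi)/sqrt(2)
    state = np.concatenate([psi, psi]).astype(complex) / np.sqrt(2)
    state[n:] = U @ state[n:]            # controlled-U acts on the |1> branch
    top, bot = state[:n].copy(), state[n:].copy()
    state[:n] = (top + bot) / np.sqrt(2) # second H on the ancilla
    state[n:] = (top - bot) / np.sqrt(2)
    p0 = np.sum(np.abs(state[:n]) ** 2)
    return 2 * p0 - 1

# single-qubit check: Re<+|Z|+> = 0
Z = np.diag([1.0, -1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
print(hadamard_test_real(Z, plus))   # ~ 0.0
```

On hardware the probability P(0) is estimated from repeated measurements rather than read off the statevector, which is where the sampling cost of the method comes from.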
Data-Driven Healthcare: The Role of Computational Methods in Medical Innovation (cited: 1)
Authors: Hariharasakthisudhan Ponnarengan, Sivakumar Rajendran, Vikas Khalkar, Gunapriya Devarajan, Logesh Kamaraj. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, No. 1, pp. 1-48 (48 pages).
The purpose of this review is to explore the intersection of computational engineering and biomedical science, highlighting the transformative potential this convergence holds for innovation in healthcare and medical research. The review covers key topics such as computational modelling, bioinformatics, machine learning in medical diagnostics, and the integration of wearable technology for real-time health monitoring. Major findings indicate that computational models have significantly enhanced the understanding of complex biological systems, while machine learning algorithms have improved the accuracy of disease prediction and diagnosis. The synergy between bioinformatics and computational techniques has led to breakthroughs in personalized medicine, enabling more precise treatment strategies. Additionally, the integration of wearable devices with advanced computational methods has opened new avenues for continuous health monitoring and early disease detection. The review emphasizes the need for interdisciplinary collaboration to further advance this field. Future research should focus on developing more robust and scalable computational models, enhancing data integration techniques, and addressing ethical considerations related to data privacy and security. By fostering innovation at the intersection of these disciplines, the potential to revolutionize healthcare delivery and outcomes becomes increasingly attainable.
Keywords: computational models; biomedical engineering; bioinformatics; machine learning; wearable technology
Computation and wireless resource management in 6G space-integrated-ground access networks (cited: 1)
Authors: Ning Hui, Qian Sun, Lin Tian, Yuanyuan Wang, Yiqing Zhou. Digital Communications and Networks, 2025, No. 3, pp. 768-777 (10 pages).
In 6th Generation Mobile Networks (6G), the Space-Integrated-Ground (SIG) Radio Access Network (RAN) promises seamless coverage and exceptionally high Quality of Service (QoS) for diverse services. However, achieving this necessitates effective management of computation and wireless resources tailored to the requirements of various services. The heterogeneity of computation resources and interference among shared wireless resources pose significant coordination and management challenges. To solve these problems, this work provides an overview of multi-dimensional resource management in the 6G SIG RAN, covering both computation and wireless resources. It first reviews current investigations on computation and wireless resource management and analyzes existing deficiencies and challenges. Focusing on these challenges, the work then proposes an MEC-based computation resource management scheme and a mixed-numerology-based wireless resource management scheme. Furthermore, it outlines promising future technologies, including joint model-driven and data-driven resource management and blockchain-based resource management within the 6G SIG network. The work also highlights remaining challenges, such as reducing communication costs associated with unstable ground-to-satellite links and overcoming barriers posed by spectrum isolation. Overall, this comprehensive approach aims to pave the way for efficient and effective resource management in future 6G networks.
Keywords: space-integrated-ground; radio access network; MEC-based computation resource management; mixed numerology-based wireless resource management
Digital Humanities, Computational Criticism and the Stanford Literary Lab: An Interview with Mark Algee-Hewitt
Authors: Hui Haifeng, Mark Algee-Hewitt. Foreign Literature Studies (外国文学研究, PKU Core), 2025, No. 4, pp. 1-10 (10 pages).
The Literary Lab at Stanford University is one of the birthplaces of digital humanities and has maintained significant influence in this field over the years. Professor Hui Haifeng has been engaged in research on digital humanities and computational criticism in recent years. During his visiting scholarship at Stanford University, he participated in the activities of the Literary Lab. Taking this opportunity, he interviewed Professor Mark Algee-Hewitt, the director of the Literary Lab, discussing important topics such as the current state and reception of DH (digital humanities) in the English Department, the operations of the Literary Lab, and the landscape of computational criticism. Mark Algee-Hewitt's research focuses on the eighteenth and early nineteenth centuries in England and Germany and seeks to combine literary criticism with digital and quantitative analyses of literary texts. In particular, he is interested in the history of aesthetic theory and the development and transmission of aesthetic and philosophical concepts during the Enlightenment and Romantic periods. He is also interested in the relationship between aesthetic theory and the poetry of the long eighteenth century. Although his primary background is English literature, he also has a degree in computer science. He believes that the influence of digital humanities within the humanities disciplines is growing increasingly significant. This impact is evident in both the attraction and assistance it offers to students, as well as in the new interpretations it brings to traditional literary studies. He argues that the key to effectively integrating digital humanities into the English Department is to focus on literary research questions, exploring how digital tools can raise new questions or provide new insights into traditional research.
Keywords: digital humanities; computational criticism; literary research; Literary Lab
Privacy-preserving computation meets quantum computing:A scoping review
Authors: Aitor Gómez-Goiri, Iñaki Seco-Aguirre, Oscar Lage, Alejandra Ruiz. Digital Communications and Networks, 2025, No. 6, pp. 1707-1721 (15 pages).
Privacy-Preserving Computation (PPC) comprises the techniques, schemes, and protocols that ensure privacy and confidentiality in the context of secure computation and data analysis. Most current PPC techniques rely on the complexity of cryptographic operations, which quantum computers are expected to solve efficiently soon. This review explores how PPC can be built on top of quantum computing itself to alleviate these future threats. We analyze quantum proposals for Secure Multi-party Computation, Oblivious Transfer, and Homomorphic Encryption from the last decade, focusing on their maturity and the challenges they currently face. Our findings show a strong focus on purely theoretical works, but a rise in experimental consideration of these techniques in the last five years. The applicability of these techniques to actual use cases remains underexplored, and studying it could lead to their practical assessment.
Keywords: quantum computing; privacy-preserving computation; oblivious transfer; secure multi-party computation; homomorphic encryption; scoping review
Computational Offloading and Resource Allocation for Internet of Vehicles Based on UAV-Assisted Mobile Edge Computing System
Authors: Fang Yujie, Li Meng, Si Pengbo, Yang Ruizhe, Sun Enchang, Zhang Yanhua. China Communications, 2025, No. 9, pp. 333-351 (19 pages).
As an essential element of intelligent transport systems, the Internet of Vehicles (IoV) has recently brought an immersive user experience. Meanwhile, the emergence of mobile edge computing (MEC) has enhanced the computational capability of vehicles, which effectively reduces task processing latency and power consumption and meets the quality-of-service requirements of vehicle users. However, problems remain in the MEC-assisted IoV system, such as poor connectivity and high cost. Unmanned aerial vehicles (UAVs) equipped with MEC servers have become a promising approach for providing communication and computing services to mobile vehicles. Hence, in this article, an optimal framework for the UAV-assisted MEC system for IoV that minimizes the average system cost is presented. Through joint consideration of computation offloading decisions and computational resource allocation, the optimization problem of the proposed architecture is formulated to reduce system energy consumption and delay. To tackle this issue, the original non-convex problem is converted into a convex one, and a distributed optimal scheme based on the alternating direction method of multipliers (ADMM) is developed. Simulation results illustrate that the presented scheme dramatically enhances system performance relative to other schemes and that it converges reliably.
Keywords: computation offloading; Internet of Vehicles; mobile edge computing; resource optimization; unmanned aerial vehicle
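The ADMM step the abstract relies on alternates two sub-minimizations with a dual update. A toy consensus instance (not the paper's UAV-MEC formulation; objective and values are made up for illustration) shows the three updates:

```python
# Toy consensus ADMM: minimize f(x) + g(z) subject to x = z, with
# f(x) = (x - a)^2 and g(z) = (z - b)^2. The optimum is x = z = (a + b)/2.
def admm(a, b, rho=1.0, iters=100):
    x = z = u = 0.0                                # u: scaled dual variable
    for _ in range(iters):
        # argmin_x (x - a)^2 + (rho/2)(x - z + u)^2
        x = (2 * a + rho * (z - u)) / (2 + rho)
        # argmin_z (z - b)^2 + (rho/2)(x - z + u)^2
        z = (2 * b + rho * (x + u)) / (2 + rho)
        u = u + x - z                              # dual ascent on x = z
    return x, z

x, z = admm(a=1.0, b=3.0)
print(x, z)   # both converge to ~2.0
```

Each sub-update has a closed form here because f and g are quadratic; in the paper's setting the same alternation decomposes the joint offloading and allocation problem across agents.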
Assessment of slurry chamber clogging alleviation during ultra-large-diameter slurry tunnel boring machine tunneling in hard-rock using computational fluid dynamics-discrete element method: A case study (cited: 1)
Authors: Yidong Guo, Xinggao Li, Dalong Jin, Hongzhi Liu, Yingran Fang. Journal of Rock Mechanics and Geotechnical Engineering, 2025, No. 8, pp. 4715-4734 (20 pages).
To fundamentally alleviate excavation chamber clogging during slurry tunnel boring machine (TBM) advance in hard rock, a large-diameter short screw conveyor was adopted for the slurry TBM of the Qingdao Jiaozhou Bay Second Undersea Tunnel. To evaluate the discharging performance of the short screw conveyor in different cases, a full-scale transient slurry-rock two-phase model of a short screw conveyor actively discharging rocks was established using a coupled computational fluid dynamics-discrete element method (CFD-DEM) approach. In the fluid domain of the coupled model, sliding mesh technology was utilized to describe the rotations of the atmospheric composite cutterhead and the short screw conveyor. In the particle domain, dynamic particle factories were established to produce rock particles with the rotation of the cutterhead. The accuracy and reliability of the CFD-DEM simulation results were validated via field and model tests. Furthermore, a comprehensive parameter analysis was conducted to examine the effects of TBM operating parameters, the geometric design of the screw conveyor, and the size of rocks on discharging performance. Accordingly, a reasonable rotational speed of the screw conveyor was suggested and applied to the Jiaozhou Bay Second Undersea Tunnel project. The findings provide valuable references for addressing excavation chamber clogging during ultra-large-diameter slurry TBM tunneling in hard rock in similar future projects.
Keywords: slurry tunnel boring machine (TBM); short screw conveyor; slurry chamber clogging; computational fluid dynamics-discrete element method (CFD-DEM) coupled modeling; engineering application
DDPG-Based Intelligent Computation Offloading and Resource Allocation for LEO Satellite Edge Computing Network
Authors: Jia Min, Wu Jian, Zhang Liang, Wang Xinyu, Guo Qing. China Communications, 2025, No. 3, pp. 1-15 (15 pages).
Low Earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form a LEO satellite edge computing system, providing computing services for global ground users. In this paper, the computation offloading and resource allocation problems are formulated as a mixed-integer nonlinear program (MINLP). We propose a computation offloading algorithm based on the deep deterministic policy gradient (DDPG) to obtain user offloading decisions and user uplink transmission power, and use a convex optimization algorithm based on the Lagrange multiplier method to obtain the optimal MEC server resource allocation scheme. In addition, an expression for the suboptimal user local CPU cycles is derived by a relaxation method. Simulation results show that the proposed algorithm achieves excellent convergence and significantly reduces the system utility value, at considerable time cost, compared with other algorithms.
Keywords: computation offloading; deep deterministic policy gradient; low Earth orbit satellite; mobile edge computing; resource allocation
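The Lagrange-multiplier step for MEC CPU allocation has a classic closed form when total computing delay sum_i c_i/f_i is minimized subject to sum_i f_i = F: stationarity of the Lagrangian gives f_i proportional to sqrt(c_i). The sketch below illustrates that textbook sub-result, not the paper's full DDPG pipeline, and the workloads are made up.

```python
import numpy as np

def allocate_cycles(c, F):
    """Split total server CPU frequency F among users to minimize
    sum_i c_i / f_i subject to sum_i f_i = F.
    From d/df_i [c_i/f_i + lam*f_i] = 0: f_i = sqrt(c_i/lam),
    and the budget fixes lam, giving f_i = F*sqrt(c_i)/sum_j sqrt(c_j)."""
    s = np.sqrt(np.asarray(c, dtype=float))
    return F * s / s.sum()

c = np.array([1.0, 4.0, 9.0])    # per-user task workloads (CPU cycles)
f = allocate_cycles(c, F=6.0)
print(f)   # [1. 2. 3.]  -- shares proportional to sqrt(workload)
```

With these numbers the total delay is 1/1 + 4/2 + 9/3 = 6; an equal split f = [2, 2, 2] would give 7, so the square-root rule is strictly better here.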
Harnessing the Power of PM6:Y6 Semitransparent Photoanodes by Computational Balancement of Photon Absorption in Photoanode/Photovoltaic Organic Tandems: >7 mA cm^(-2) Solar Synthetic Fuels Production at Bias-Free Potentials
Authors: Francisco Bernal-Texca, Emmanouela Andrioti, Jordi Martorell, Carles Ros. Energy & Environmental Materials, 2025, No. 1, pp. 197-202 (6 pages).
This study first demonstrates the potential of organic photoabsorbing blends in overcoming a critical limitation of metal oxide photoanodes in tandem modules: insufficient photogenerated current. Various organic blends, including PTB7-Th:FOIC, PTB7-Th:O6T-4F, PM6:Y6, and PM6:FM, were systematically tested. When coupled with electron transport layer (ETL) contacts, these blends exhibit exceptional charge separation and extraction, with PM6:Y6 achieving saturation photocurrents up to 16.8 mA cm^(-2) at 1.23 V_RHE (the oxygen evolution thermodynamic potential). For the first time, a tandem structure utilizing organic photoanodes has been computationally designed and fabricated; the implementation of a double PM6:Y6 photoanode/photovoltaic structure resulted in photogenerated currents exceeding 7 mA cm^(-2) at 0 V_RHE (the hydrogen evolution thermodynamic potential) and anodic current onset potentials as low as -0.5 V_RHE. The organic-based approach presented here paves the way for further exploration of different blend combinations to target specific oxidative reactions by selecting precise donor/acceptor candidates among the many existing ones.
Keywords: computational; hydrogen; organic; photoanodes; photovoltaics; tandem
Introduction to the Special Issue on Mathematical Aspects of Computational Biology and Bioinformatics-II
Authors: Dumitru Baleanu, Carla M. A. Pinto, Sunil Kumar. Computer Modeling in Engineering & Sciences, 2025, No. 5, pp. 1297-1299 (3 pages).
1 Summary
Mathematical modeling has become a cornerstone in understanding the complex dynamics of infectious diseases and chronic health conditions. With the advent of more refined computational techniques, researchers are now able to incorporate intricate features such as delays, stochastic effects, fractional dynamics, variable-order systems, and uncertainty into epidemic models. These advancements not only improve predictive accuracy but also enable deeper insights into disease transmission, control, and policy-making. Tashfeen et al.
Keywords: mathematical modeling; computational techniques; infectious diseases; chronic health conditions; stochastic effects; fractional dynamics; variable order; delays
Merging computational intelligence and wearable technologies for adolescent idiopathic scoliosis: a quest for multiscale modelling, long-term monitoring and personalized treatment
Authors: Chun-Zhi Yi, Xiao-Lei Sun. Medical Data Mining, 2025, No. 2, pp. 21-30 (10 pages).
Adolescent idiopathic scoliosis (AIS) follows a dynamic progression during growth, which requires long-term collaboration and effort from clinicians, patients, and their families. It would be beneficial to have precise interventions based on cross-scale understanding of the etiology together with real-time sensing and actuation, to enable early detection, screening, and personalized treatment. We argue that merging computational intelligence and wearable technologies can bridge the gap between the current trajectory of techniques applied to AIS and this vision. Wearable technologies such as inertial measurement units (IMUs) and surface electromyography (sEMG) have shown great potential in monitoring spinal curvature and muscle activity in real time. For instance, IMUs can track the kinematics of the spine during daily activities, while sEMG can detect asymmetric muscle activation patterns that may contribute to scoliosis progression. Computational intelligence, particularly deep learning algorithms, can process these multi-modal data streams to identify early signs of scoliosis and adapt treatment strategies dynamically. By combining the two, we can find potential solutions for a better understanding of the disease and more effective, intelligent treatment and rehabilitation.
Keywords: adolescent idiopathic scoliosis; computational intelligence; wearable technologies
A Study for Inter-Satellite Cooperative Computation Offloading in LEO Satellite Networks
Authors: Gang Yuanshuo, Zhang Yuexia, Wu Peng, Zheng Hui, Fan Guangteng. China Communications, 2025, No. 2, pp. 12-25 (14 pages).
Low Earth orbit (LEO) satellite networks have the advantages of low transmission delay and low deployment cost, playing an important role in providing reliable services to ground users. This paper studies an efficient inter-satellite cooperative computation offloading (ICCO) algorithm for LEO satellite networks. Specifically, an ICCO system model is constructed that uses neighboring satellites in the LEO network to collaboratively process tasks generated by ground user terminals, effectively improving resource utilization efficiency. Additionally, the optimization objective of minimizing system task computation offloading delay and energy consumption is established and decoupled into two sub-problems. For computational resource allocation, the convexity of the problem is proved through theoretical derivation, and the Lagrange multiplier method is used to obtain the optimal allocation of computational resources. For the task offloading decision, a dynamic sticky binary particle swarm optimization algorithm is designed to obtain the offloading decision iteratively. Simulation results show that the ICCO algorithm can effectively reduce delay and energy consumption.
Keywords: computation offloading; inter-satellite cooperation; LEO satellite networks
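Binary PSO with a sigmoid transfer function is the standard backbone of such offloading-decision searches; the paper's "dynamic sticky" variant is not reproduced here, and the delay values below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(bits, local, edge):
    """Total delay of an offloading decision vector (bit i = 1: offload task i)."""
    return np.where(bits == 1, edge, local).sum()

def binary_pso(local, edge, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    n = len(local)
    x = rng.integers(0, 2, size=(n_particles, n))          # random 0/1 decisions
    v = np.zeros((n_particles, n))
    pbest = x.copy()
    pcost = np.array([cost(p, local, edge) for p in x])
    g = pbest[pcost.argmin()].copy()                       # global best
    for _ in range(iters):
        r1, r2 = rng.random(v.shape), rng.random(v.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        # sigmoid transfer: velocity sets the probability of each bit being 1
        x = (rng.random(v.shape) < 1 / (1 + np.exp(-v))).astype(int)
        fx = np.array([cost(p, local, edge) for p in x])
        better = fx < pcost
        pbest[better], pcost[better] = x[better], fx[better]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()

local = np.array([5.0, 1.0, 4.0, 1.0])   # hypothetical local computing delays
edge  = np.array([2.0, 3.0, 1.0, 2.0])   # hypothetical offloading delays
best, d = binary_pso(local, edge)
```

For this 4-task toy the optimum is to offload tasks 0 and 2 only (total delay 5); the swarm reliably beats the all-local baseline of 11.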
Adiabatic holonomic quantum computation in decoherence-free subspace with two-body interaction
Authors: Xiaoyu Sun, Lei Qiao, Peizi Zhao. Chinese Physics B, 2025, No. 9, pp. 97-102 (6 pages).
Adiabatic holonomic gates possess the geometric robustness of adiabatic geometric phases, i.e., they depend only on the evolution path in parameter space and not on the evolution details of the quantum system, which, when coordinated with decoherence-free subspaces, permits additional resilience to a collective dephasing environment. However, the previous scheme [Phys. Rev. Lett. 95, 130501 (2005)] of adiabatic holonomic quantum computation in decoherence-free subspaces requires four-body interactions that are challenging to implement in practice. In this work, we put forward a scheme to realize universal adiabatic holonomic quantum computation in decoherence-free subspaces using only realistically available two-body interactions, thereby avoiding the difficulty of implementing four-body interactions. Furthermore, an arbitrary one-qubit gate in our scheme can be realized in a single-shot implementation, which eliminates the need to combine multiple gates to realize such a gate.
Keywords: adiabatic evolution; holonomic quantum computation; decoherence-free subspaces
Secure and Privacy-Preserving Cross-Departmental Computation Framework Based on BFV and Blockchain
Authors: Peng Zhao, Yu Du. Journal of Electronic Research and Application, 2025, No. 6, pp. 207-217 (11 pages).
As the demand for cross-departmental data collaboration continues to grow, traditional encryption methods struggle to balance data privacy with computational efficiency. This paper proposes a cross-departmental privacy-preserving computation framework based on BFV homomorphic encryption, threshold decryption, and blockchain technology. The proposed scheme leverages homomorphic encryption to enable secure computations between sales, finance, and taxation departments, ensuring that sensitive data remain encrypted throughout the entire process. A threshold decryption mechanism is employed to prevent single-point data leakage, while blockchain and IPFS are integrated to ensure verifiability and tamper-proof storage of computation results. Experimental results demonstrate that, with 5,000 sample data entries, the framework performs efficiently and is highly scalable in key stages such as sales encryption, cost calculation, and tax assessment, validating its practical feasibility and security.
Keywords: homomorphic encryption; zero-knowledge proof; blockchain; cross-departmental privacy-preserving computation
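The BFV scheme used in this paper is lattice-based and needs a full library to demonstrate. As a stand-in, the additively homomorphic property that makes encrypted cross-departmental sums possible can be sketched with textbook Paillier in pure Python — toy key sizes, illustration only, not the paper's BFV/threshold construction:

```python
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Toy primes (insecure sizes; real deployments use thousands of bits)
p, q = 293, 433
n = p * q
n2 = n * n
lam = lcm(p - 1, q - 1)
g = n + 1                      # standard generator choice for Paillier

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def enc(m):
    """Encrypt m under the public key (n, g)."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    """Decrypt c with the private key (lam, mu)."""
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic addition: Enc(m1) * Enc(m2) mod n^2 decrypts to m1 + m2
c1, c2 = enc(40), enc(2)
assert dec((c1 * c2) % n2) == 42
```

In the framework described above, each department would hold only a share of the decryption key, so no single party could decrypt intermediate ciphertexts on its own; the toy here omits that threshold step.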
Latency minimization for multiuser computation offloading in fog-radio access networks
19
Authors: Wei Zhang, Shafei Wang, Ye Pan, Qiang Li, Jingran Lin, Xiaoxiao Wu. Digital Communications and Networks, 2025, No. 1, pp. 160-171 (12 pages)
Recently, the Fog-Radio Access Network (F-RAN) has gained considerable attention because of its flexible architecture, which allows rapid response to user requirements. In this paper, computational offloading in F-RAN is considered, where multiple User Equipments (UEs) offload their computational tasks to the F-RAN through fog nodes. Each UE can select one fog node to offload its task, and each fog node may serve multiple UEs. The tasks are computed by the fog nodes or further offloaded to the cloud via a capacity-limited fronthaul link. In order to compute all UEs' tasks quickly, joint optimization of the UE-fog association and the radio and computation resources of the F-RAN is proposed to minimize the maximum latency over all UEs. This min-max problem is formulated as a Mixed Integer Nonlinear Program (MINP). To tackle it, the MINP is first reformulated as a continuous optimization problem, and the Majorization-Minimization (MM) method is then used to find a solution. The MM approach we develop is unconventional in that each MM subproblem is solved inexactly, yet with the same provable convergence guarantee as exact MM, thereby reducing the complexity of each MM iteration. In addition, a cooperative offloading model is considered, in which the fog nodes compress-and-forward their received signals to the cloud. Under this model, a similar min-max latency optimization problem is formulated and tackled by the inexact MM. Simulation results show that the proposed algorithms outperform existing offloading strategies, and that cooperative offloading exploits transmission diversity better than noncooperative offloading, achieving better latency performance.
Keywords: fog-radio access network; fog computing; majorization minimization; WMMSE
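The UE-fog association part of this min-max problem is combinatorial, which is what makes the MINP hard. A brute-force toy — with a hypothetical latency model (transmit time plus a CPU shared equally among a node's assigned UEs), not the paper's inexact-MM solver — makes the "minimize the maximum latency" objective concrete:

```python
import itertools

def makespan(assign, tx, cyc, F):
    """Max latency over UEs: latency of UE u on node f is its transmit
    time tx[u][f] plus compute time, where node f's CPU speed F[f] is
    shared equally among the UEs assigned to f (a simplifying assumption)."""
    loads = {}
    for f in assign:
        loads[f] = loads.get(f, 0) + 1
    return max(tx[u][f] + cyc[u] * loads[f] / F[f]
               for u, f in enumerate(assign))

def best_association(tx, cyc, F):
    """Exhaustively search all UE-to-fog assignments (toy sizes only:
    the search space grows as n_fog ** n_ue, hence the MINP hardness)."""
    n_ue, n_fog = len(tx), len(F)
    best = None
    for assign in itertools.product(range(n_fog), repeat=n_ue):
        m = makespan(assign, tx, cyc, F)
        if best is None or m < best[0]:
            best = (m, assign)
    return best
```

For two UEs and two fog nodes with tx = [[1, 2], [2, 1]], unit workloads, and unit CPU speeds, the search picks the diagonal assignment (each UE to its nearby node), with makespan 2; any shared node raises the worst-case latency. The exponential blow-up of this search is precisely why the paper relaxes the problem to a continuous one and applies MM.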
Robust Transmission Design for Federated Learning Through Over-the-Air Computation
20
Authors: Hamideh Zamanpour Abyaneh, Saba Asaad, Amir Masoud Rabiei. China Communications, 2025, No. 3, pp. 65-75 (11 pages)
Over-the-air computation (AirComp) enables federated learning (FL) to rapidly aggregate local models at the central server by exploiting the waveform-superposition property of the wireless channel. In this paper, a robust transmission scheme for an AirComp-based FL system with imperfect channel state information (CSI) is proposed. To model CSI uncertainty, an expectation-based error model is utilized. The main objective is to maximize the number of selected devices that meet the mean-squared error (MSE) requirements for model broadcast and model aggregation. The problem is formulated as a combinatorial optimization problem and solved in two steps. First, the priority order of devices is determined by a sparsity-inducing procedure. Then, a feasibility detection scheme selects the maximum number of devices for which the MSE requirements can be guaranteed. An alternating optimization (AO) scheme transforms the resulting nonconvex problem into two convex subproblems. Numerical results illustrate the effectiveness and robustness of the proposed scheme.
Keywords: federated learning; imperfect CSI; optimization; over-the-air computing; robust design
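A minimal sketch of the uncoded AirComp aggregation step that this abstract builds on — scalar updates, real positive channel gains, and channel inversion under a per-device power cap are all simplifying assumptions here, not the paper's robust design with imperfect CSI:

```python
import math
import random

def aircomp_aggregate(models, h, p_max=1.0, sigma=0.0):
    """Each device k pre-scales its scalar update by b_k = sqrt(eta)/h_k
    (channel inversion), the multiple-access channel sums the transmitted
    signals, and the server divides by K*sqrt(eta) to estimate the mean.
    Assumes real positive channel gains h_k and |model_k| <= 1."""
    K = len(models)
    # Largest eta such that every device respects its power cap p_max:
    # |b_k|^2 = eta / h_k^2 <= p_max  =>  eta <= p_max * h_k^2 for all k.
    eta = min(p_max * hk * hk for hk in h)
    b = [math.sqrt(eta) / hk for hk in h]
    y = sum(hk * bk * w for hk, bk, w in zip(h, b, models))
    y += random.gauss(0.0, sigma)          # receiver noise
    return y / (K * math.sqrt(eta))        # estimate of the model mean
```

With sigma = 0 the estimate is the exact mean of the local models; with noise, the aggregation MSE is sigma**2 / (K**2 * eta), which is why the weakest channel (the minimum in eta) limits performance and why device selection under MSE constraints, as in the paper, matters.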