Funding: Supported in part by the National Natural Science Foundation of China under Grant No. 61701197, in part by the National Key Research and Development Program of China under Grant No. 2021YFA1000500(4), and in part by the 111 Project under Grant No. B23008.
Abstract: As Internet of Vehicles (IoV) technology continues to advance, edge computing has become an important tool for assisting vehicles in handling complex tasks. However, offloading tasks to edge servers may expose vehicles to malicious external attacks, resulting in information loss or even tampering and thereby creating serious security vulnerabilities. Blockchain technology can maintain a shared ledger among servers; under the Raft consensus mechanism, the system remains operational as long as more than half of the nodes are alive, which preserves its robustness and security. To protect vehicle information, we propose a security framework that integrates the Raft consensus mechanism from blockchain technology with edge computing. To address the additional latency introduced by blockchain, we derive a theoretical formula for the system delay and propose a convex optimization solution that minimizes it, ensuring that the system meets the requirements for low latency and high reliability. Simulation results demonstrate that the optimized data extraction rate significantly reduces the system delay, with relatively stable variations in latency. Moreover, the proposed optimization solution based on this model can provide valuable insights for enhancing security and efficiency in future network environments, such as 5G and next-generation smart city systems.
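The paper's derived delay formula is not reproduced here; as a hedged sketch of the optimization step only, the snippet below minimizes an assumed convex delay model over the data extraction rate r. All constants (S, F_LOC, F_EDGE, C, R_UP, K_RAFT) are illustrative placeholders, not values from the paper.

```python
# Hypothetical convex system-delay model: a stand-in, not the paper's derived formula.
from scipy.optimize import minimize_scalar

S = 8e6        # task data size in bits (assumed)
F_LOC = 1e9    # local CPU rate, cycles/s (assumed)
F_EDGE = 8e9   # edge CPU rate, cycles/s (assumed)
C = 500        # CPU cycles per bit (assumed)
R_UP = 2e7     # uplink rate, bit/s (assumed)
K_RAFT = 5e-14 # coefficient of a quadratic Raft-consensus overhead term (assumed)

def system_delay(r):
    """Total delay as a function of the data extraction rate r in [0, 1]."""
    local = (1 - r) * S * C / F_LOC          # part processed on the vehicle
    upload = r * S / R_UP                    # extracted part sent over the uplink
    edge = r * S * C / F_EDGE                # edge-server processing
    raft = K_RAFT * (r * S) ** 2             # consensus/replication overhead
    return local + upload + edge + raft

# The model is convex in r, so a bounded scalar solver finds the global optimum.
res = minimize_scalar(system_delay, bounds=(0.0, 1.0), method="bounded")
print(f"optimal extraction rate r* = {res.x:.3f}, delay = {res.fun:.3f} s")
```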
Abstract: The inclusion of blockchain in smart homes increases data security and accuracy within home ecosystems but introduces latency that hinders real-time interaction. This study addresses the challenge of blockchain latency in smart homes through the development and application of the Blockchain Low Latency (BLL) model using Hyperledger Fabric v2.2. The BLL model optimizes four fundamental blockchain parameters that drive latency: transmission rate, endorsement policy, batch size, and batch timeout. Hypothesis testing on these parameters showed that a rate of 30 transactions per second (tps), an OutOf(2) endorsement policy in which any two of five peers endorse, a batch size of 10, and a batch timeout of 1 s considerably decrease latency. The BLL model achieved an average latency of 0.39 s, approximately 30 times faster than Ethereum's average latency of 12 s, thereby enhancing the efficiency of blockchain-based smart home applications. The results demonstrate that, despite the latency blockchain introduces, proper parameter selection in blockchain configurations can eliminate these latency problems, making blockchain technology more viable for real-time Internet of Things (IoT) applications such as smart homes. Future work involves applying the proposed model to a larger overlay, deploying it in real-world smart home environments with sensor devices, extending the configuration to accommodate a large number of transactions, and adjusting the overlay to the complexity of the network. This study therefore provides practical recommendations for addressing latency in blockchain systems, relates theoretical advancements to real-life IoT applications, and stresses the significance of parameter optimization for maximum effectiveness.
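To give a concrete feel for how the tuned parameters interact, the minimal sketch below simulates only the ordering-service batching stage: a block is cut when either the batch size is reached or the batch timeout expires. Endorsement, validation, and commit are ignored, and the function mean_batching_delay is an assumption of this note, not part of the BLL model.

```python
# Simplified batching-stage model (not the BLL model): shows how batch size and
# batch timeout interact with the transaction arrival rate.
import random

def mean_batching_delay(tps, batch_size, batch_timeout_s, n_tx=100_000, seed=0):
    """Average time a transaction waits before its block is cut."""
    rng = random.Random(seed)
    t = 0.0
    pending = []          # arrival times of transactions in the open batch
    delays = []
    for _ in range(n_tx):
        t += rng.expovariate(tps)            # Poisson arrivals at `tps` tx/s
        # cut the open batch if its timeout expired before this arrival
        if pending and t - pending[0] >= batch_timeout_s:
            cut_time = pending[0] + batch_timeout_s
            delays.extend(cut_time - a for a in pending)
            pending = []
        pending.append(t)
        if len(pending) == batch_size:       # size-triggered cut
            delays.extend(t - a for a in pending)
            pending = []
    return sum(delays) / len(delays)

# Parameters reported in the study: 30 tps, batch size 10, batch timeout 1 s.
print(f"{mean_batching_delay(30, 10, 1.0):.3f} s average batching delay")
```

At 30 tps a batch of 10 fills in roughly a third of a second, so cuts are usually size-triggered and the batching delay stays well under the 1 s timeout, consistent with sub-second end-to-end latency once the other pipeline stages are added.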
Funding: Supported by the National Natural Science Foundation of China (No. 61903023), the Natural Science Foundation of Beijing Municipality (No. 4204110), the State Key Laboratory of Rail Traffic Control and Safety (No. RCS2020ZT006, RCS2021ZT006), and the Fundamental Research Funds for the Central Universities (No. 2020JBM087).
Abstract: Response speed is vital for a railway environment monitoring system, especially for sudden-onset disasters. The edge-cloud collaboration scheme has been proven efficient at reducing latency. However, the data characteristics and communication demands of the tasks in a railway environment monitoring system differ and change over time, and each task contributes differently to the system latency. Hence, two valid-latency minimization strategies based on the edge-cloud collaboration scheme are developed in this paper. First, processing resources are allocated to the tasks according to their priorities, and the tasks are processed in parallel with the allocated resources to minimize the system valid latency. However, differences in the data volume of the tasks would waste the resources of tasks that finish early. Therefore, tasks with similar priorities are graded into the same group, and serial and parallel processing strategies are applied intra-group and inter-group simultaneously. Compared with four other strategies in four railway monitoring scenarios, the proposed strategies prove latency-efficient for high-priority tasks while the system valid latency is reduced as well. The security and efficiency of the railway environment monitoring system can be greatly improved with the proposed scheme and strategies.
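One plausible reading of the grouped strategy, sketched below under stated assumptions, is that priority groups are served one after another (highest first) while tasks inside a group share the full capacity in parallel. The task list, cycle counts, and priority weights are illustrative, not the paper's design.

```python
# Hedged sketch: fully parallel priority-weighted allocation vs. grouped processing.
from dataclasses import dataclass

F_EDGE = 10e9   # total edge processing rate, cycles/s (assumed)
CPB = 1000      # CPU cycles per bit (assumed)

@dataclass
class Task:
    name: str
    priority: int      # larger = more urgent
    data_bits: float

tasks = [
    Task("landslide-alarm", 3, 2e6),
    Task("wind-speed",      3, 1e6),
    Task("video-patrol",    2, 8e6),
    Task("track-log",       1, 4e6),
]

def parallel_weighted(tasks):
    """All tasks run at once; each gets a priority-proportional share of F_EDGE."""
    total_w = sum(t.priority for t in tasks)
    return {t.name: t.data_bits * CPB / (F_EDGE * t.priority / total_w) for t in tasks}

def grouped(tasks):
    """Equal-priority groups run serially (highest first); tasks inside a group
    share the full capacity equally, so small tasks do not idle reserved resources."""
    finish, elapsed = {}, 0.0
    for p in sorted({t.priority for t in tasks}, reverse=True):
        group = [t for t in tasks if t.priority == p]
        share = F_EDGE / len(group)
        longest = 0.0
        for t in group:
            done = t.data_bits * CPB / share
            finish[t.name] = elapsed + done
            longest = max(longest, done)
        elapsed += longest
    return finish

print("parallel:", {k: round(v, 3) for k, v in parallel_weighted(tasks).items()})
print("grouped: ", {k: round(v, 3) for k, v in grouped(tasks).items()})
```

With these numbers the high-priority alarms finish in 0.2-0.4 s under grouping versus 0.3-0.6 s under plain weighted sharing, illustrating why the high-priority tasks benefit most.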
Abstract: Advancements in cloud computing and virtualization technologies have revolutionized enterprise application development with innovative ways to design and develop complex systems. Microservices architecture is one of the recent techniques in which enterprise systems are developed as fine-grained, smaller components that are deployed independently. This methodology brings numerous benefits such as scalability, resilience, flexibility in development, and faster time to market; along with these advantages, microservices bring some challenges too. To complete a user's request, multiple microservices must be invoked one after another as a chain, and in most applications more than one chain of microservices runs in parallel to fulfill a particular requirement. This results in competition for resources and more inter-service communication, which increases the overall latency of the application. A new approach is proposed in this paper to handle complex chains of microservices and reduce the latency of user requests. A machine learning technique is used to predict the waiting time of different types of requests, the communication time among services distributed across different physical machines is estimated from it, and the obtained insights are fed into an algorithm that calculates request priorities dynamically and selects suitable service instances to minimize latency based on the shortest queue waiting time. Experiments were conducted on both interactive and non-interactive workloads to test the effectiveness of the solution, and the approach proved very effective in reducing latency for long service chains.
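A minimal sketch of the instance-selection idea follows; the EWMA waiting-time predictor, the service names, and the latency numbers are illustrative assumptions standing in for the paper's machine learning model.

```python
# Routing sketch: each service keeps several instances; a request goes to the
# instance whose predicted queue wait plus estimated communication time is smallest.
ALPHA = 0.3  # EWMA smoothing factor (assumed)

class Instance:
    def __init__(self, name, host, net_ms):
        self.name, self.host = name, host
        self.net_ms = net_ms          # estimated communication time to the caller
        self.pred_wait_ms = 0.0       # predicted queue waiting time

    def observe(self, measured_wait_ms):
        """Update the waiting-time prediction from a completed request."""
        self.pred_wait_ms = (1 - ALPHA) * self.pred_wait_ms + ALPHA * measured_wait_ms

def pick_instance(instances):
    """Choose the instance minimizing predicted wait + communication time."""
    return min(instances, key=lambda i: i.pred_wait_ms + i.net_ms)

# Hypothetical chain: gateway -> auth -> catalog, instances on different machines.
chain = {
    "auth":    [Instance("auth-1", "m1", 0.4), Instance("auth-2", "m2", 1.1)],
    "catalog": [Instance("cat-1", "m2", 1.1), Instance("cat-2", "m3", 2.0)],
}
chain["auth"][0].observe(9.0)      # auth-1 currently has a long queue
chain["auth"][1].observe(2.0)
route = [pick_instance(chain[svc]).name for svc in ("auth", "catalog")]
print("selected route:", route)    # ['auth-2', 'cat-1']
```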
Funding: This work was supported by the National Key R&D Program of China (2021YFB2900604).
Abstract: A low-Earth-orbit (LEO) satellite network can provide full-coverage access services worldwide and is an essential candidate for future 6G networking. However, the large variability in the geographic distribution of the Earth's population leads to an uneven distribution of access-service volume, and the limited resources of individual satellites are far from sufficient to serve the traffic in hotspot areas. To enhance the forwarding capability of satellite networks, we first assess how hotspot areas under different load cases and spatial scales affect the overall network throughput of an LEO satellite network. We then propose a multi-region cooperative traffic scheduling algorithm that migrates low-grade traffic from hotspot areas to coldspot areas for forwarding, significantly increasing the overall throughput of the satellite network at the cost of some end-to-end forwarding latency. The algorithm can exploit satellite resources globally and improve the utilization of network resources. We model the cooperative multi-region scheduling of large-scale LEO satellites and, based on this model, build a system testbed in OMNeT++ to compare the proposed method with existing techniques. The simulations show that our proposed method reduces the packet loss probability by 30% and improves the resource utilization ratio by 3.69%.
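The core migration idea can be sketched with the toy greedy rule below, assuming illustrative region capacities and traffic grades; the paper's scheduling algorithm is considerably more elaborate.

```python
# Toy sketch: move the lowest-grade flows out of overloaded hotspot regions into
# the least-loaded regions with spare forwarding capacity.
from dataclasses import dataclass

CAPACITY = {"east-asia": 10.0, "n-atlantic": 10.0, "s-pacific": 10.0}  # Gbps, assumed

@dataclass
class Flow:
    region: str
    grade: int      # 0 = lowest-grade (migratable) traffic
    gbps: float

flows = [
    Flow("east-asia", 2, 6.0), Flow("east-asia", 1, 3.0), Flow("east-asia", 0, 4.0),
    Flow("n-atlantic", 2, 5.0), Flow("n-atlantic", 0, 2.0),
    Flow("s-pacific", 1, 1.0),
]

def load(region):
    return sum(f.gbps for f in flows if f.region == region)

def migrate_low_grade():
    """Greedy rule: while a hotspot exceeds capacity, shift its lowest-grade flow
    (or part of it) to the currently least-loaded region with spare capacity."""
    for hot in sorted(CAPACITY, key=load, reverse=True):
        for f in sorted((x for x in flows if x.region == hot), key=lambda x: x.grade):
            overload = load(hot) - CAPACITY[hot]
            if overload <= 0:
                break
            cold = min(CAPACITY, key=load)
            room = CAPACITY[cold] - load(cold)
            if cold == hot or room <= 0:
                break
            moved = min(f.gbps, overload, room)
            f.gbps -= moved
            flows.append(Flow(cold, f.grade, moved))   # rerouted over the coldspot

migrate_low_grade()
for r in CAPACITY:
    print(f"{r}: {load(r):.1f} / {CAPACITY[r]:.0f} Gbps")
```

The rerouted flows take longer inter-satellite paths, which is the latency the abstract says is traded for throughput.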
Funding: Supported in part by the National Natural Science Foundation of China under Grants 62171052 and 61971054, and in part by the Science and Technology on Information Transmission and Dissemination in Communication Networks Laboratory Foundation under Grant HHX21641X002.
Abstract: Multi-beam satellite communication systems can improve resource utilization and system capacity effectively. However, inter-beam interference, especially in satellite systems with full frequency reuse, greatly degrades system performance due to the characteristics of multi-beam satellite antennas. In this article, user scheduling and resource allocation in a multi-beam satellite system with full frequency reuse, where every beam can use the full bandwidth, are studied jointly. Under strong inter-beam interference, we aim to minimize the system latency experienced by the users while downloading data. To solve this problem, deep reinforcement learning is used to schedule users and allocate bandwidth and power so as to mitigate the inter-beam interference. Simulation results are compared with reference algorithms to verify the effectiveness of the proposed algorithm.
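The quantity such a scheduler trades off can be made concrete with the standard full-frequency-reuse link model below; the beam gains, powers, bandwidth, and data volumes are placeholders, not the paper's system parameters.

```python
# With full frequency reuse, each user's rate is limited by inter-beam interference;
# download latency then follows from the Shannon rate. All numbers are assumed.
import numpy as np

B = 500e6                     # bandwidth per beam, Hz (assumed)
N0 = 1e-20                    # noise power spectral density, W/Hz (assumed)
P = np.array([10.0, 10.0, 10.0])          # transmit power per beam, W (assumed)
# G[k, j] = channel gain from beam j towards the user served by beam k (assumed)
G = np.array([[1.0e-12, 2.0e-13, 1.0e-13],
              [2.0e-13, 1.0e-12, 2.5e-13],
              [1.0e-13, 2.5e-13, 1.0e-12]])
DATA = np.array([200e6, 80e6, 120e6])     # bits each user must download (assumed)

signal = P * np.diag(G)                   # desired-beam received power
interference = G @ P - signal             # power leaked from the other beams
sinr = signal / (interference + N0 * B)
rate = B * np.log2(1 + sinr)              # achievable rate, bit/s per user
latency = DATA / rate
for k, (s, r, t) in enumerate(zip(sinr, rate, latency)):
    print(f"user {k}: SINR {10*np.log10(s):5.1f} dB, "
          f"rate {r/1e6:6.1f} Mbit/s, latency {t:.2f} s")
```

A DRL scheduler would adjust which users are served and how P and B are split so that these per-user latencies, used as a (negative) reward, are minimized.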
Funding: Supported by the National Postdoctoral Science Foundation of China (2014M550068).
Abstract: Large application latency causes revenue loss for cloud infrastructure providers in the cloud data center. The existing controllers in a software-defined networking architecture can fetch and process traffic information in the network, so they can optimize only the network latency of applications. However, the serving latency of applications is also an important factor in the user experience delivered for arriving requests: unintelligent request routing causes large serving latency when arriving requests are allocated to overloaded virtual machines. To deal with the request routing problem, this paper proposes a workload-aware software-defined networking controller architecture together with request routing algorithms that minimize the total round-trip time for every type of request by considering both the congestion in the network and the workload in the virtual machines (VMs). Finally, the proposed algorithms are evaluated in a simulated prototype, and the simulation results show that the proposed methodology is efficient compared with existing approaches.
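A simplified illustration of workload-aware routing follows, assuming an M/M/1-style serving-latency estimate and hypothetical VM parameters; it is a sketch of the idea, not the paper's algorithm.

```python
# Route each request flow to the VM minimizing estimated network RTT plus a
# queueing-based serving-latency estimate. Parameters are illustrative.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    net_rtt_ms: float       # controller's estimate of path latency to this VM
    service_rate: float     # requests/s the VM can serve
    arrival_rate: float     # requests/s already routed to it

    def expected_latency_ms(self):
        """Network RTT plus an M/M/1 sojourn-time estimate for serving latency."""
        if self.arrival_rate >= self.service_rate:       # overloaded VM
            return float("inf")
        return self.net_rtt_ms + 1000.0 / (self.service_rate - self.arrival_rate)

def route(vms, requests_per_s):
    """Greedily assign the request flow to the VM with the lowest total latency."""
    best = min(vms, key=VM.expected_latency_ms)
    best.arrival_rate += requests_per_s
    return best.name

vms = [VM("vm-1", 2.0, 200, 180), VM("vm-2", 6.0, 200, 40), VM("vm-3", 9.0, 400, 100)]
for _ in range(3):
    print("routed to", route(vms, 30))
```

Note how the heavily loaded vm-1 is avoided even though it has the shortest network path, which is exactly the case a traffic-only controller would get wrong.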
Funding: Supported by the National Natural Science Foundation of China (62202215), the Liaoning Province Applied Basic Research Program (Youth Special Project, 2023JH2/101600038), the Shenyang Youth Science and Technology Innovation Talent Support Program (RC220458), the Guangxuan Program of Shenyang Ligong University (SYLUGXRC202216), the Basic Research Special Funds for Undergraduate Universities in Liaoning Province (LJ212410144067), the Natural Science Foundation of Liaoning Province (2024-MS-113), and the science and technology funds from the Liaoning Education Department (LJKZ0242).
Abstract: In edge computing, achieving low-latency computational task offloading with limited resources is a critical research challenge, particularly in resource-constrained, latency-sensitive vehicular networks where rapid response is mandatory for safety-critical applications. When edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance and thereby increases system latency; in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency. To address these challenges, we propose an efficient Deep Reinforcement Learning (DRL)-based approach that minimizes average task latency. The proposed method incorporates three offloading strategies: local computation, direct offloading to the edge server in the local region, and device-to-device (D2D)-assisted offloading to edge servers in other regions. We formulate the task offloading process as a latency minimization optimization problem and solve it with an algorithm based on the Dueling Double Deep Q-Network (D3QN) architecture augmented with the Prioritized Experience Replay (PER) mechanism. Experimental results demonstrate that, compared with existing offloading algorithms, the proposed method significantly reduces average task latency, enhances user experience, and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
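A minimal PyTorch sketch of the dueling double-DQN components is given below, with an assumed state layout and the three offloading actions from the abstract; prioritized experience replay is omitted for brevity.

```python
# Dueling double-DQN skeleton (illustrative hyperparameters and state features).
import torch
import torch.nn as nn

ACTIONS = ["local", "edge_local_region", "d2d_to_other_region"]  # 3 offloading choices
STATE_DIM = 8   # e.g. task size, deadline, channel gains, server queues (assumed)

class DuelingQNet(nn.Module):
    def __init__(self, state_dim=STATE_DIM, n_actions=len(ACTIONS), hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)            # state-value stream V(s)
        self.adv = nn.Linear(hidden, n_actions)      # advantage stream A(s, a)

    def forward(self, s):
        h = self.body(s)
        v, a = self.value(h), self.adv(h)
        return v + a - a.mean(dim=1, keepdim=True)   # Q(s, a) with mean-centred A

online, target = DuelingQNet(), DuelingQNet()
target.load_state_dict(online.state_dict())

# Double-DQN target: the online net picks the action, the target net evaluates it.
gamma = 0.99
s, s_next = torch.randn(32, STATE_DIM), torch.randn(32, STATE_DIM)
r, done = torch.randn(32), torch.zeros(32)           # r would be the negative latency
with torch.no_grad():
    best_a = online(s_next).argmax(dim=1, keepdim=True)
    q_next = target(s_next).gather(1, best_a).squeeze(1)
    td_target = r + gamma * (1 - done) * q_next
print(td_target.shape)  # torch.Size([32])
```

In a PER variant, the absolute TD error of each transition would set its sampling priority in the replay buffer, with importance-sampling weights correcting the induced bias.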