Based on the monitoring and discovery service 4 (MDS4) model, a monitoring model for a data grid which supports reliable storage and intrusion tolerance is designed. The load characteristics and indicators of computing resources in the monitoring model are analyzed. Then, a time-series autoregressive prediction model is devised, and an autoregressive support vector regression (ARSVR) monitoring method is put forward to predict the node load of the data grid. Finally, a model for the historical observation sequence is set up using the autoregressive (AR) model and the model order is determined. The support vector regression (SVR) model is trained using historical data and the regression function is obtained. Simulation results show that the ARSVR method can effectively predict the node load.
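A minimal sketch of the ARSVR idea, assuming scikit-learn's SVR as the regressor: the load series is windowed with an AR order p (which the paper determines from the historical sequence) and the SVR learns the mapping from the last p observations to the next one. The order, kernel settings, and synthetic series below are illustrative assumptions.

```python
# ARSVR-style sketch (illustrative): lag the node-load series with AR order p,
# then train an SVR on the lagged windows. numpy + scikit-learn.
import numpy as np
from sklearn.svm import SVR

def make_lagged(series, p):
    """Turn a 1-D load series into (X, y): p past values -> next value."""
    X = np.array([series[i:i + p] for i in range(len(series) - p)])
    y = series[p:]
    return X, y

p = 5  # assumed AR order; the paper determines it from the data
load = np.sin(np.linspace(0, 20, 300)) + 0.1 * np.random.randn(300)  # stand-in history
X, y = make_lagged(load, p)
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:-20], y[:-20])
pred = svr.predict(X[-20:])  # one-step-ahead predictions on the held-out tail
print(f"MAE on held-out tail: {np.abs(pred - y[-20:]).mean():.4f}")
```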
Multispectral low earth orbit (LEO) satellites are characterized by a large volume of captured data and high spatial resolution, which can provide rich image information and data support for a variety of fields, but their limited computing resources make it difficult for them to satisfy low-delay, low-energy task processing requirements. To address these problems, this paper presents the LEO satellites cooperative task offloading and computing resource allocation (LEOC-TC) algorithm. Firstly, a LEO satellites cooperative task offloading system is designed so that the multispectral LEO satellites in the system can process their tasks locally or offload them to other LEO satellites with servers, thus providing high-quality information-processing services for multispectral LEO satellites. Secondly, an optimization problem is established with the objective of minimizing the weighted sum of the total task processing delay and the total energy consumed by the multispectral LEO satellites, and this problem is split into an offloading-ratio subproblem and a computing-resource subproblem. Finally, a Bernoulli mapping tuna swarm optimization algorithm is used to solve the two subproblems separately in order to satisfy the system's demands for low delay and low energy consumption. Simulation results show that the total task processing cost of the LEOC-TC algorithm can be reduced by 63.32%, 66.67%, and 80.72% compared to the random offloading ratio algorithm, the average resource offloading algorithm, and the local computing algorithm, respectively.
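As a rough illustration of such an objective, the sketch below scans offloading ratios against a weighted delay-plus-energy cost for a single task. The cost model, constants, and the assumption that local and offloaded portions run in parallel are illustrative, not the paper's formulation (which a Bernoulli-mapping tuna swarm optimizer solves rather than a grid scan).

```python
# Illustrative weighted delay/energy cost for one task split between local
# processing (fraction alpha) and offloading (fraction 1 - alpha).
def task_cost(alpha, bits, f_local, f_remote, rate, tx_power,
              kappa=1e-28, cycles_per_bit=1000, w=0.5):
    """Weighted sum of processing delay and energy for local fraction alpha."""
    local_cycles = alpha * bits * cycles_per_bit
    remote_cycles = (1 - alpha) * bits * cycles_per_bit
    t_local = local_cycles / f_local                 # local compute delay
    t_tx = (1 - alpha) * bits / rate                 # inter-satellite link delay
    t_remote = remote_cycles / f_remote              # remote compute delay
    delay = max(t_local, t_tx + t_remote)            # both parts run in parallel
    energy = kappa * f_local ** 2 * local_cycles + tx_power * t_tx
    return w * delay + (1 - w) * energy

# Scan candidate ratios for a toy task (8 Mbit, 0.5 GHz local vs 2 GHz remote CPU).
best = min((task_cost(a / 10, 8e6, 5e8, 2e9, 2e7, 0.5), a / 10) for a in range(11))
print(f"min cost {best[0]:.3f} with {1 - best[1]:.1f} of the task offloaded")
```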
Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decision and resource allocation in MEC-enabled STNs in this paper. The problem of minimizing the average sum task completion delay of all IoT devices over all time periods is formulated. We decompose this optimization problem into a task offloading decision problem and a computing resource allocation problem. A joint optimization scheme of offloading decision and resource allocation is then proposed, which consists of a task offloading decision algorithm based on the device-cooperation-aided upper confidence bound (UCB) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme performs better than other baseline schemes.
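A bare-bones UCB1 loop for the offloading-decision flavor of the problem is sketched below; the arm names, the Gaussian delay model, and treating reward as negative observed delay are assumptions for illustration, and the paper's device-cooperation aid is not reproduced.

```python
# Minimal UCB1 sketch: learn which offloading target has the lowest (unknown)
# completion delay by balancing exploration and exploitation.
import math, random

targets = ["local", "sat_edge_1", "sat_edge_2"]
true_delay = {"local": 0.9, "sat_edge_1": 0.5, "sat_edge_2": 0.7}  # hidden from the learner
counts = {t: 0 for t in targets}
totals = {t: 0.0 for t in targets}

for step in range(1, 2001):
    if step <= len(targets):                       # play each arm once first
        choice = targets[step - 1]
    else:                                          # reward = -mean delay, plus UCB bonus
        choice = max(targets, key=lambda t: -totals[t] / counts[t]
                     + math.sqrt(2 * math.log(step) / counts[t]))
    delay = random.gauss(true_delay[choice], 0.1)  # noisy observed completion delay
    counts[choice] += 1
    totals[choice] += delay

print({t: counts[t] for t in targets})  # the lowest-delay target dominates
```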
Infrared and visible light image fusion technology integrates feature information from two different modalities into a fused image to obtain more comprehensive information. However, in low-light scenarios, the illumination degradation of visible light images makes it difficult for existing fusion methods to extract texture detail information from the scene, and relying solely on the target saliency information provided by infrared images is far from sufficient. To address this challenge, this paper proposes a lightweight infrared and visible light image fusion method based on low-light enhancement, named LLE-Fuse. The method improves on the MobileOne Block, using an Edge-MobileOne Block embedded with the Sobel operator to perform feature extraction and downsampling on the source images. The intermediate features obtained at different scales are then fused by a cross-modal attention fusion module. In addition, the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm is used for image enhancement of both infrared and visible light images, guiding the network model to learn low-light enhancement capabilities through an enhancement loss. Upon completion of network training, the Edge-MobileOne Block is optimized into a direct-connection structure similar to MobileNetV1 through structural reparameterization, effectively reducing computational resource consumption. Finally, in extensive experimental comparisons, our method achieved improvements of 4.6%, 40.5%, 156.9%, 9.2%, and 98.6% in the evaluation metrics Standard Deviation (SD), Visual Information Fidelity (VIF), Entropy (EN), and Spatial Frequency (SF), respectively, compared to the best results of the compared algorithms, while only being 1.5 ms/it slower than the fastest method.
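CLAHE itself is a standard operation; the standalone OpenCV snippet below shows the enhancement step on a synthetic low-light frame, separate from LLE-Fuse's training pipeline. The clip limit and tile size are typical defaults, not the paper's settings.

```python
# CLAHE pre-enhancement on an 8-bit grayscale frame using OpenCV.
import cv2
import numpy as np

def clahe_enhance(gray_u8, clip_limit=2.0, tile=(8, 8)):
    """Contrast Limited Adaptive Histogram Equalization on a uint8 image."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    return clahe.apply(gray_u8)

# Synthetic low-light frame: dark, low-contrast values.
dark = (np.random.rand(256, 256) * 40).astype(np.uint8)
enhanced = clahe_enhance(dark)
print("std before:", dark.std(), "std after:", enhanced.std())  # contrast rises
```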
The emergence of multi-access edge computing (MEC) aims at extending cloud computing capabilities to the edge of the radio access network. As large-scale internet of things (IoT) services grow rapidly, a single edge infrastructure provider (EIP) may not be sufficient to handle the data traffic generated by these services. Most existing work addressed the computing resource shortage by optimizing task schedules, whereas other work overcame the issue by placing computing resources on demand. However, when considering a scenario with multiple EIPs, an urgent challenge is how to generate a coalition structure that maximizes each EIP's gain with a suitable price for the computing resource block corresponding to a container. To this end, we design a scheme of EIP collaboration with a market price for containers under a scenario that considers a collection of service providers (SPs) with different budgets and several EIPs distributed across geographical locations. First, we bring in the net-profit market price model to generate a more reasonable equilibrium price and select the optimal EIPs for each SP by a convex program. Then we use a mathematical model to maximize each EIP's profit and form stable coalitions between EIPs by a distributed coalition formation algorithm. Numerical results demonstrate that our proposed collaborative scheme among EIPs enhances EIPs' gain and increases users' surplus.
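A toy hedonic merge loop conveys the flavor of distributed coalition formation among EIPs; the gain function below (pooled capacity with a mild superadditive synergy) is a made-up placeholder, not the paper's pricing and profit model.

```python
# Toy coalition formation among EIPs: greedily merge two coalitions whenever the
# merge strictly raises total gain. The gain function is a placeholder.
from itertools import combinations

CAPACITY = {"eip1": 4.0, "eip2": 3.0, "eip3": 5.0, "eip4": 2.0}

def gain(coalition):
    """Pooled capacity with a mild synergy bonus for larger coalitions."""
    total = sum(CAPACITY[e] for e in coalition)
    return total * (1 + 0.1 * (len(coalition) - 1))

coalitions = [{e} for e in CAPACITY]
merged = True
while merged:
    merged = False
    for a, b in combinations(list(coalitions), 2):
        if gain(a | b) > gain(a) + gain(b):   # merge only when strictly profitable
            coalitions.remove(a)
            coalitions.remove(b)
            coalitions.append(a | b)
            merged = True
            break
print(coalitions)  # converges to the grand coalition under this toy gain
```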
In 6th Generation Mobile Networks (6G), the Space-Integrated-Ground (SIG) Radio Access Network (RAN) promises seamless coverage and exceptionally high Quality of Service (QoS) for diverse services. However, achieving this necessitates effective management of computation and wireless resources tailored to the requirements of various services. The heterogeneity of computation resources and interference among shared wireless resources pose significant coordination and management challenges. To solve these problems, this work provides an overview of multi-dimensional resource management in the 6G SIG RAN, covering both computation and wireless resources. First, it reviews current investigations on computation and wireless resource management and analyzes existing deficiencies and challenges. Then, focusing on these challenges, the work proposes an MEC-based computation resource management scheme and a mixed numerology-based wireless resource management scheme. Furthermore, it outlines promising future technologies, including joint model-driven and data-driven resource management technology and blockchain-based resource management technology within the 6G SIG network. The work also highlights remaining challenges, such as reducing communication costs associated with unstable ground-to-satellite links and overcoming barriers posed by spectrum isolation. Overall, this comprehensive approach aims to pave the way for efficient and effective resource management in future 6G networks.
Research has been conducted to reduce resource consumption in 3D medical image segmentation for diverse resource-constrained environments. However, decreasing the number of parameters to enhance computational efficiency can also lead to performance degradation. Moreover, these methods face challenges in balancing global and local features, increasing the risk of errors in multi-scale segmentation. This issue is particularly pronounced when segmenting small and complex structures within the human body. To address this problem, we propose a multi-stage hierarchical architecture composed of a detector and a segmentor. The detector extracts regions of interest (ROIs) in a 3D image, while the segmentor performs segmentation in the extracted ROI. Removing unnecessary areas in the detector allows the segmentation to be performed on a more compact input. The segmentor is designed with multiple stages, where each stage utilizes a different input size, and implements a stage-skipping mechanism that deactivates certain stages based on the initial input size. This approach avoids unnecessary computation when segmenting the essential regions, reducing computational overhead. The proposed framework preserves segmentation performance while reducing resource consumption, enabling segmentation even in resource-constrained environments.
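A hedged sketch of the detector-then-segmentor flow with stage skipping is shown below: stages whose working resolution exceeds the extracted ROI's size are deactivated. The shapes, the fixed stand-in detector, and the skipping rule are illustrative assumptions, not the paper's networks.

```python
# Detector -> multi-stage segmentor sketch with stage skipping (illustrative).
import numpy as np

def detect_roi(volume):
    """Stand-in detector: return a fixed bounding box (z0, z1, y0, y1, x0, x1)."""
    return (8, 40, 16, 80, 16, 80)

def segment(volume, stage_sizes=(32, 64, 128)):
    z0, z1, y0, y1, x0, x1 = detect_roi(volume)
    roi = volume[z0:z1, y0:y1, x0:x1]             # segment only the compact ROI
    mask = np.zeros(roi.shape, dtype=np.uint8)
    for size in stage_sizes:
        if size > max(roi.shape):                 # stage skipping: ROI already smaller
            print(f"skipping stage with input size {size}")
            continue
        mask |= (roi > roi.mean()).astype(np.uint8)  # placeholder per-stage refinement
    return mask

vol = np.random.rand(64, 128, 128).astype(np.float32)
print(segment(vol).shape)   # (32, 64, 64): only the ROI was processed
```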
In MEC-enabled vehicular networks with limited wireless and computation resources, stringent delay and high reliability requirements are challenging issues. In order to reduce the total delay in the network as well as ensure the reliability of Vehicular UE (VUE), a Joint Allocation of Wireless resource and MEC Computing resource (JAWC) algorithm is proposed. The JAWC algorithm includes two steps: V2X link clustering and MEC computation resource scheduling. In the V2X link clustering, a Spectral Radius based Interference Cancellation scheme (SR-IC) is proposed to obtain the optimal resource allocation matrix. By converting the calculation of the SINR into the calculation of a matrix's maximum row sum, the accumulated interference of VUE can be constrained and the SINR calculation complexity can be effectively reduced. In the MEC computation resource scheduling, by transforming the original optimization problem into a convex problem, the optimal task offloading proportion of VUE and the MEC computation resource allocation can be obtained. The simulation further demonstrates that the JAWC algorithm can significantly reduce the total delay as well as ensure the communication reliability of VUE in the MEC-enabled vehicular network.
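The matrix step in SR-IC rests on a standard linear-algebra fact: a matrix's spectral radius is bounded by its maximum absolute row sum (the infinity norm), so a row-sum constraint caps accumulated interference without an eigendecomposition. A quick numerical check of the bound:

```python
# Spectral radius <= maximum row sum for a nonnegative matrix (numerical check).
import numpy as np

rng = np.random.default_rng(0)
H = rng.random((6, 6))                 # nonnegative "interference" matrix (toy values)
spectral_radius = max(abs(np.linalg.eigvals(H)))
max_row_sum = H.sum(axis=1).max()      # cheap O(n^2) surrogate for the O(n^3) eigensolve
print(f"rho(H) = {spectral_radius:.4f} <= max row sum = {max_row_sum:.4f}")
assert spectral_radius <= max_row_sum + 1e-9
```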
To accommodate the diversified emerging use cases in 5G, radio access networks (RAN) are required to be more flexible, open, and versatile, and are evolving towards cloudification, intelligence and openness. Embedding computing capabilities within the RAN helps transform it into a natural, cost-effective radio edge computing platform, offering a great opportunity to further enhance RAN agility for diversified services and improve users' quality of experience (QoE). In this article, a logical architecture enabling deep convergence of communication and computing in the RAN is proposed based on O-RAN. The scenarios and potential benefits of sharing RAN computing resources are first analyzed. Then, the requirements, design principles and logical architecture are introduced. The key technologies involved are also discussed, including heterogeneous computing infrastructure, unified computing and communication task modeling, joint communication and computing orchestration, and RAN computing data routing. Following that, a VR use case is studied to illustrate the superiority of the joint communication and computing optimization. Finally, challenges and future trends are highlighted to provide some insights on potential future work for researchers in this field.
The centralized radio access cellular network infrastructure based on the centralized Super Base Station (CSBS) is a promising solution to reduce the high construction cost and energy consumption of conventional cellular networks. With CSBS, the computing resources for communication protocol processing can be managed flexibly according to the protocol load to improve resource efficiency. Since the protocol load changes frequently and may exceed the capacity of processors, load balancing is needed. However, existing load balancing mechanisms used in data centers cannot satisfy the real-time requirements of communication protocol processing. Therefore, a new computing resource adjustment scheme is proposed for communication protocol processing in the CSBS architecture. First of all, the main principles of protocol processing resource adjustment are summarized, followed by an analysis of the processing resource outage probability, i.e., the probability that the computing resources become inadequate for protocol processing as the load changes. Following the adjustment principles, the proposed scheme is designed to reduce the processing resource outage probability based on an optimized connected graph constructed by the approximate Kruskal algorithm. Simulation results show that compared with conventional load balancing mechanisms, the proposed scheme can greatly reduce both the occurrences of inadequate processing resources and the additional resource consumption of adjustment.
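For reference, plain Kruskal's algorithm with union-find is sketched below, since the scheme builds its optimized connected graph with an approximate variant of it; the approximation itself and the outage-probability analysis are not reproduced here.

```python
# Kruskal's minimum spanning tree with a union-find structure.
def kruskal(n, edges):
    """edges: list of (weight, u, v); returns the MST edge list for n nodes."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges):          # consider edges in ascending weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # adding this edge creates no cycle
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3), (5, 1, 3)]
print(kruskal(4, edges))  # [(1, 2, 1), (2, 3, 2), (0, 2, 3)]
```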
Dispersed computing can link all devices with computing capabilities on a global scale to form a fully decentralized network, which can make full use of idle computing resources. Realizing the overall resource allocation of a dispersed computing system is a significant challenge: by jointly managing the task requests of external users and the resource allocation of the internal system to achieve dynamic balance, the efficient and stable operation of the system can be guaranteed. In this paper, we first propose a task-resource joint management model, which quantifies the dynamic transformation relationship between the resources consumed by task requests and the resources occupied by the system in dispersed computing. Secondly, to avoid downtime caused by an overload of resources, we introduce intelligent control into the task-resource joint management model. The existence and stability of the positive periodic solution of the model can be obtained by theoretical analysis, which means that the stable operation of dispersed computing can be guaranteed through the intelligent feedback control strategy. Additionally, to improve system utilization, the task-resource joint management model with bi-directional intelligent control is further explored. Setting control thresholds for the two resources not only restrains system resource overload but also applies positive incentive control when a large number of idle resources appear. The existence and stability of the positive periodic solution of the model are proved theoretically; that is, the model effectively avoids the two extreme cases and ensures the efficient and stable operation of the system. Finally, numerical simulation verifies the correctness and validity of the theoretical results.
Many Task Computing (MTC) is a new class of computing paradigm in which the aggregate number of tasks, quantity of computing, and volumes of data may be extremely large. With the advent of cloud computing and the big data era, scheduling and executing large-scale computing tasks efficiently and allocating resources to tasks reasonably are becoming quite challenging problems. To improve both task execution and resource utilization efficiency, we present a task scheduling algorithm with resource attribute selection, which can select the optimal node to execute a task according to the task's resource requirements and the fitness between the resource node and the task. Experimental results show significant improvement in execution throughput and resource utilization compared with three other algorithms and four scheduling frameworks. In the scheduling algorithm comparison, the throughput is 77% higher than the Min-Min algorithm and resource utilization can reach 91%. In the scheduling framework comparison, the throughput (with work-stealing) is at least 30% higher than the other frameworks and resource utilization reaches 94%. The scheduling algorithm can serve as a good model for practical MTC applications.
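A hedged sketch of the attribute-selection idea: score each candidate node by how tightly its free resources fit the task's requirements and pick the best fit. The best-fit scoring rule below is an illustrative assumption, not the paper's exact fitness function.

```python
# Fitness-based node selection sketch: prefer the tightest feasible fit.
def fitness(node, task):
    """Higher is better: infeasible nodes score -inf, loose fits are penalized."""
    score = 0.0
    for attr in ("cpu", "mem", "disk"):
        if node[attr] < task[attr]:
            return float("-inf")               # node cannot run the task at all
        score -= node[attr] - task[attr]       # penalize overshoot of spare capacity
    return score

nodes = [
    {"name": "n1", "cpu": 8, "mem": 16, "disk": 100},
    {"name": "n2", "cpu": 4, "mem": 8,  "disk": 50},
    {"name": "n3", "cpu": 2, "mem": 4,  "disk": 20},
]
task = {"cpu": 3, "mem": 6, "disk": 30}
best = max(nodes, key=lambda n: fitness(n, task))
print(best["name"])  # n2: feasible and closest to the task's requirements
```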
In order to lower the power consumption and improve the resource utilization of current cloud computing systems, this paper proposes two resource pre-allocation algorithms based on the "shut down the redundant, turn on the demanded" strategy. Firstly, a green cloud computing model is presented, abstracting the task scheduling problem to a virtual machine deployment issue via virtualization technology. Secondly, the future workload of the system needs to be predicted: a cubic exponential smoothing algorithm based on the conservative control (CESCC) strategy is proposed, which combines the current state and resource distribution of the system to calculate the resource demand for the next period of task requests. Then, a multi-objective constrained optimization model of power consumption and a low-energy resource allocation algorithm based on probabilistic matching (RA-PM) are proposed. To reduce power consumption further, a resource allocation algorithm based on improved simulated annealing (RA-ISA) is designed. Experimental results show that the prediction and conservative control strategy make resource pre-allocation keep up with demand, improving the efficiency of real-time response and the stability of the system. Both RA-PM and RA-ISA can activate fewer hosts, achieve better load balance among the set of highly applicable hosts, maximize the utilization of resources, and greatly reduce the power consumption of cloud computing systems.
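Below is a compact version of Brown's triple ("cubic") exponential smoothing, the predictor family that CESCC builds on; the conservative-control adjustment, the smoothing constant, and the toy history are assumptions for illustration.

```python
# Brown's triple (cubic) exponential smoothing: m-step-ahead workload forecast.
def cubic_smoothing_forecast(x, alpha=0.4, m=1):
    s1 = s2 = s3 = x[0]                       # initialize all three smoothers
    for v in x:
        s1 = alpha * v + (1 - alpha) * s1     # first smoothing
        s2 = alpha * s1 + (1 - alpha) * s2    # second smoothing
        s3 = alpha * s2 + (1 - alpha) * s3    # third smoothing
    a = 3 * s1 - 3 * s2 + s3
    b = alpha / (2 * (1 - alpha) ** 2) * (
        (6 - 5 * alpha) * s1 - 2 * (5 - 4 * alpha) * s2 + (4 - 3 * alpha) * s3)
    c = alpha ** 2 / (1 - alpha) ** 2 * (s1 - 2 * s2 + s3)
    return a + b * m + 0.5 * c * m * m        # quadratic extrapolation m steps ahead

history = [52, 54, 57, 61, 64, 66, 71, 75, 78, 84]  # e.g., request counts per period
print(f"next-period demand estimate: {cubic_smoothing_forecast(history):.1f}")
```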
Resource reconstruction algorithms are studied in this paper to solve the problem of resource on-demand allocation and improve the efficiency of resource utilization in a virtual computing resource pool. Based on the idea of resource virtualization and an analysis of resource status transitions, the resource allocation process and the necessity of resource reconstruction are presented. Resource reconstruction algorithms are designed to determine the resource reconstruction types, and it is shown that they can achieve the goal of resource on-demand allocation through three methodologies: resource combination, resource split, and resource random adjustment. The effects that resource users have on the resource reconstruction results, the deviation between resources and requirements, and the uniformity of resource distribution are studied in three experiments. The experiments show that resource reconstruction is closely related to resource requirements, but not to the current distribution of resources. The algorithms can complete the resource adjustment at a low cost and easily form logical resources that match the demands of resource users.
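The three reconstruction moves can be pictured with a toy pool of resource blocks, as in the hedged sketch below; the thresholds and block model are illustrative assumptions rather than the paper's algorithms.

```python
# Toy sketch of the three reconstruction moves on a pool of resource blocks.
import random

def reconstruct(blocks, demand):
    """blocks: list of free block sizes; reshape the pool toward `demand`."""
    if sum(blocks) >= demand and all(b < demand for b in blocks):
        # resource combination: merge fragments until one block can serve the demand
        blocks.sort()
        merged = 0
        while merged < demand and blocks:
            merged += blocks.pop()
        return blocks + [merged], "combine"
    for i, b in enumerate(blocks):
        if b >= 2 * demand:
            # resource split: carve a right-sized block out of an oversized one
            blocks[i] = b - demand
            return blocks + [demand], "split"
    random.shuffle(blocks)  # resource random adjustment: perturb the pool layout
    return blocks, "random"

print(reconstruct([2, 3, 4], demand=6))    # -> ([2, 7], 'combine')
print(reconstruct([15, 3], demand=6))      # -> ([9, 3, 6], 'split')
```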
In a centralized cellular network architecture, the concept of the virtualized Base Station (VBS) becomes attractive since it enables all base stations (BSs) to share computing resources in a dynamic manner. This can significantly improve the utilization efficiency of computing resources. In this paper, we study the computing resource allocation strategy for one VBS by considering the non-negligible delay introduced by switches. Specifically, we formulate the VBS's sum computing rate maximization as a set optimization problem. To address this problem, we first propose a computing resource scheduling algorithm, namely weight before one-step-greedy (WBOSG), which has linear computation complexity and considerable performance. Then, the OSG retreat (OSG-R) algorithm is developed to further improve system performance at the expense of computational complexity. Simulation results under practical settings are provided to validate the two proposed algorithms.
Vehicular Edge Computing (VEC) brings computational resources into close proximity to service requestors and thus supports explosive computing demands from smart vehicles. However, the limited computing capability of VEC cannot simultaneously respond to large numbers of offloading requests, restricting the performance of the VEC system. Besides, the mass of traffic data can put tremendous pressure on the front-haul links between vehicles and the edge server. To strengthen the performance of VEC, in this paper we propose to place services at the edge server beforehand, e.g., by deploying the service/task-oriented data (e.g., related libraries and databases) in advance at the network edge, instead of downloading it from the remote data center or offloading it from vehicles at runtime. We formulate the service placement problem in VEC to minimize the average response latency for all requested services along the slotted timeline. Specifically, the optimization problem spanning all time slots is converted into per-slot optimization problems based on Lyapunov optimization. Then a greedy heuristic is introduced into the drift-plus-penalty-based algorithm to seek an approximate solution. The simulation results reveal its advantages over other approaches in terms of optimal values, and our strategy can satisfy the long-term energy constraint.
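A minimal drift-plus-penalty loop in the Lyapunov style is sketched below: a virtual queue Q accumulates energy overruns, and each slot picks the placement minimizing V·latency + Q·energy, which is how the long-term energy constraint gets folded into per-slot decisions. The candidate model and constants are illustrative assumptions, not the paper's system.

```python
# Drift-plus-penalty sketch: per-slot placement choice under a long-term
# energy budget enforced through a virtual queue.
import random

V = 50.0            # penalty weight: larger V favors latency over energy
e_budget = 1.0      # per-slot average energy constraint
Q = 0.0             # virtual queue tracking accumulated energy debt

for slot in range(5):
    # Candidate placements: (average response latency, energy used this slot).
    candidates = [(random.uniform(5, 20), random.uniform(0.5, 2.0)) for _ in range(4)]
    # Per-slot rule: minimize V * latency + Q * energy (drift-plus-penalty).
    latency, energy = min(candidates, key=lambda c: V * c[0] + Q * c[1])
    Q = max(Q + energy - e_budget, 0.0)        # queue grows when over budget
    print(f"slot {slot}: latency={latency:.1f} energy={energy:.2f} Q={Q:.2f}")
```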
Web-based computing resource publishing is an efficient way to provide additional computing capacity for users who need more computing resources than they themselves can afford, by making use of idle computing resources on the Web. Extensibility and reliability are crucial for agent publishing. The parent-child agent framework and the primary-slave agent framework are proposed and discussed in detail.
Load-time series data in mobile cloud computing for the Internet of Vehicles (IoV) usually have composite linear and nonlinear characteristics. In order to accurately describe the dynamic trend of such loads, this study designs a load prediction method using the resource scheduling model for mobile cloud computing in IoV. Firstly, a chaotic analysis algorithm is implemented to process the load-time series and construct learning samples for load prediction. Secondly, a support vector machine (SVM) is used to establish a load prediction model, and an improved artificial bee colony (IABC) algorithm is designed to enhance the learning ability of the SVM. Finally, a CloudSim simulation platform is created to select the per-minute CPU load history data in a mobile cloud computing system composed of 50 vehicles as the data set, and a comparison experiment is conducted using a grey model, a back propagation neural network, a radial basis function (RBF) neural network and an SVM with an RBF kernel. As shown in the experimental results, the prediction accuracy of the proposed method is significantly higher than the other models, with a significantly reduced real-time prediction error for resource loading in mobile cloud environments. Compared with single-prediction models, the proposed method can build multidimensional time series to capture complex load time series, fit and describe load change trends, approximate the load's time variability more precisely, and deliver strong generalization ability for load prediction models of mobile cloud computing resources.
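A heavily simplified stand-in for the IABC-tuned SVM predictor is sketched below: random local perturbations of the best "food source" over (C, gamma) play the role of the bee colony, with scikit-learn's SVR as the base model; the chaotic-analysis step and the full IABC employed/onlooker/scout structure are not reproduced.

```python
# Simplified IABC-style search over SVR hyperparameters (C, gamma).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
t = np.arange(400, dtype=float)
load = 50 + 10 * np.sin(t / 15) + rng.normal(0, 1, t.size)  # stand-in per-minute CPU load
idx = rng.permutation(t.size)                               # random train/validation split
Xtr, ytr = t[idx[:300], None], load[idx[:300]]
Xva, yva = t[idx[300:], None], load[idx[300:]]

def food_quality(C, gamma):
    """Fitness of a (C, gamma) food source = negative validation MSE."""
    model = SVR(C=C, gamma=gamma).fit(Xtr, ytr)
    return -np.mean((model.predict(Xva) - yva) ** 2)

best, best_fit = (1.0, 0.1), food_quality(1.0, 0.1)
for _ in range(30):  # each iteration: one "bee" perturbs the best source
    C = abs(best[0] * (1 + rng.normal(0, 0.5))) + 1e-3
    g = abs(best[1] * (1 + rng.normal(0, 0.5))) + 1e-6
    fit = food_quality(C, g)
    if fit > best_fit:
        best, best_fit = (C, g), fit
print(f"best (C, gamma) = ({best[0]:.3f}, {best[1]:.4f}), val MSE = {-best_fit:.3f}")
```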
Conventional resource provisioning algorithms focus on how to maximize resource utilization and meet a fixed constraint on response time, which is written into the service level agreement (SLA). Unfortunately, the expected response time is highly variable and usually longer than the SLA value, which leads to poor resource utilization and unnecessary server migrations. We develop a framework for customer-driven dynamic resource allocation in cloud computing, termed CDSMS (customer-driven service management system). The framework's contributions are twofold. First, it can reduce the total number of migrations by adjusting the response-time parameters dynamically according to customers' profiles. Second, it can automatically choose the best resource provisioning algorithm in different scenarios to improve resource utilization. Finally, we perform a series of experiments on a real cloud computing platform. Experimental results show that CDSMS provides a satisfactory solution for predicting the expected response time and the interval period between two tasks, and reduces the total resource usage cost.
Federated Edge Learning (FEL), an emerging distributed Machine Learning (ML) paradigm, enables model training in a distributed environment while ensuring user privacy by keeping each user's data physically separated. However, with the development of complex application scenarios such as the Internet of Things (IoT) and Smart Earth, conventional resource allocation schemes can no longer effectively support the growing computational and communication demands. Therefore, joint resource optimization may be the key to the scaling problem. This paper simultaneously addresses the multifaceted challenges of computation and communication under growing multi-resource demands. We systematically review joint allocation strategies for different resources (computation, data, communication, and network topology) in FEL, and summarize their advantages in improving system efficiency, reducing latency, enhancing resource utilization, and strengthening robustness. In addition, we present the potential of joint optimization to indirectly enhance privacy preservation by reducing communication requirements. This work not only provides theoretical support for resource management in federated learning (FL) systems, but also provides ideas for potential optimal deployment in multiple real-world scenarios. By thoroughly discussing current challenges and future research directions, it offers important insights into multi-resource optimization in complex application environments.