Effective resource management in the Internet of Things and fog computing is essential for efficient and scalable networks. However, existing methods often fail in dynamic and high-demand environments, leading to resource bottlenecks and increased energy consumption. This study addresses these limitations by proposing the Quantum-Inspired Adaptive Resource Management (QIARM) model, which introduces novel algorithms inspired by quantum principles for enhanced resource allocation. QIARM employs a quantum superposition-inspired technique for multi-state resource representation and an adaptive learning component to dynamically adjust resources in real time. In addition, an energy-aware scheduling module minimizes power consumption by selecting optimal configurations based on energy metrics. The simulation was carried out over a 360-minute period with eight distinct scenarios. The proposed quantum-inspired resource management framework achieves up to 98% task-offload success and reduces energy consumption by 20%, addressing the critical challenges of scalability and efficiency in dynamic fog computing environments.
Unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) has been deemed a promising solution for energy-constrained devices to run smart applications with computation-intensive and latency-sensitive requirements, especially in infrastructure-limited areas or emergency scenarios. However, the multi-UAV-assisted MEC network remains largely unexplored. In this paper, dynamic trajectory optimization and computation offloading are studied in a multi-UAV-assisted MEC system where multiple UAVs fly over a target area with different trajectories to serve ground users. By considering the dynamic channel conditions and random task arrivals, and by jointly optimizing the UAVs' trajectories, user association, and subchannel assignment, the minimization of the average long-term sum of user energy consumption is formulated. To address this problem, which involves both discrete and continuous variables, a hybrid-decision deep reinforcement learning (DRL)-based intelligent energy-efficient resource allocation and trajectory optimization algorithm, named the HDRT algorithm, is proposed, in which a deep Q network (DQN) and deep deterministic policy gradient (DDPG) are invoked to handle the discrete and continuous variables, respectively. Simulation results show that the proposed HDRT algorithm converges quickly and outperforms other benchmarks in terms of user energy consumption and latency.
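As a rough illustration of the hybrid discrete/continuous decision structure described above (not the HDRT implementation itself), the following sketch pairs a DQN-style epsilon-greedy discrete choice with a DDPG-style continuous action. The linear "networks", dimensions, and noise level are illustrative assumptions.

```python
# Illustrative sketch (not the paper's HDRT implementation): a hybrid decision
# step combining a DQN-style discrete choice (e.g., user/subchannel association)
# with a DDPG-style continuous action (e.g., a bounded trajectory adjustment).
# The "networks" below are random linear maps standing in for trained models;
# all shapes and names are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_DISCRETE, CONT_DIM = 8, 4, 2

# Stand-ins for a trained Q-network and a trained deterministic actor.
W_q = rng.normal(size=(N_DISCRETE, STATE_DIM))   # Q(s, a_d) ~ W_q @ s
W_mu = rng.normal(size=(CONT_DIM, STATE_DIM))    # mu(s) ~ tanh(W_mu @ s)

def hybrid_decision(state, epsilon=0.1):
    """Return (discrete_action, continuous_action) for one state."""
    # Discrete branch: epsilon-greedy over Q-values (DQN-style).
    if rng.random() < epsilon:
        a_d = int(rng.integers(N_DISCRETE))
    else:
        a_d = int(np.argmax(W_q @ state))
    # Continuous branch: deterministic policy plus exploration noise (DDPG-style),
    # clipped to a feasible range.
    a_c = np.tanh(W_mu @ state) + 0.05 * rng.normal(size=CONT_DIM)
    a_c = np.clip(a_c, -1.0, 1.0)
    return a_d, a_c

state = rng.normal(size=STATE_DIM)
print(hybrid_decision(state))
```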
Based on the Neumann series and the epsilon-algorithm, an efficient computation of the dynamic responses of systems with arbitrary time-varying characteristics is investigated. By avoiding the calculation of the inverses of the equivalent stiffness matrices in each time step, the computational effort of the proposed method is reduced compared with a full Newmark analysis. The validity and applications of the proposed method are illustrated by a 4-DOF spring-mass system with periodically time-varying stiffness and a truss structure with arbitrarily time-varying lumped mass. The results show that the proposed method yields good approximations compared with the responses obtained by a full Newmark analysis.
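For readers unfamiliar with the underlying idea, the following minimal sketch shows how a truncated Neumann series can approximate the solution of a perturbed system (K0 + dK)x = f from a single factorization of K0. The matrix sizes, perturbation magnitude, and truncation order are assumptions, and the paper's epsilon-algorithm acceleration is not shown.

```python
# Minimal sketch (assumptions only, not the paper's algorithm): a truncated
# Neumann series approximates (K0 + dK)^{-1} f without factorizing the
# perturbed matrix at every step, given a single factorization of K0.
#   (K0 + dK)^{-1} = (I + K0^{-1} dK)^{-1} K0^{-1}
#                  ~ sum_{k=0}^{m} (-K0^{-1} dK)^k K0^{-1},
# valid when the spectral radius of K0^{-1} dK is below 1.
import numpy as np

def neumann_solve(K0_solve, dK, f, m=10):
    """Approximate x = (K0 + dK)^{-1} f with m Neumann terms.
    K0_solve(b) should return K0^{-1} b (e.g., from a cached LU factorization)."""
    term = K0_solve(f)                    # k = 0 term: K0^{-1} f
    x = term.copy()
    for _ in range(m):
        term = -K0_solve(dK @ term)       # next term: -K0^{-1} dK (previous term)
        x += term
    return x

rng = np.random.default_rng(1)
K0 = np.diag(rng.uniform(2.0, 4.0, size=5))            # baseline stiffness (toy)
dK = 0.1 * rng.normal(size=(5, 5)); dK = dK + dK.T     # small time-varying change
f = rng.normal(size=5)

K0_solve = lambda b: np.linalg.solve(K0, b)            # stands in for a cached factorization
x_approx = neumann_solve(K0_solve, dK, f)
x_exact = np.linalg.solve(K0 + dK, f)
print(np.max(np.abs(x_approx - x_exact)))              # small residual if the series converges
```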
In this article, the secure computation efficiency (SCE) problem is studied in a massive multiple-input multiple-output (mMIMO)-assisted mobile edge computing (MEC) network. We first derive the secure transmission rate of the mMIMO link under imperfect channel state information. Based on this, the SCE maximization problem is formulated by jointly optimizing the local computation frequency, the offloading time, the downloading time, and the transmit powers of the users and the base station. Because the formulated problem is difficult to solve directly, we first transform the fractional objective function into a subtractive form via the Dinkelbach method. Next, the original problem is transformed into a convex one by applying the successive convex approximation technique, and an iterative algorithm is proposed to obtain the solutions. Finally, simulations are conducted to show that the performance of the proposed scheme is superior to that of the other schemes.
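The Dinkelbach step mentioned above follows a standard pattern; the sketch below shows the generic iteration on a toy one-dimensional ratio. The objective functions, grid search, and bounds are assumptions, not the paper's SCE subproblem.

```python
# Hedged sketch of the generic Dinkelbach iteration (not the paper's exact
# subproblem): to maximize a ratio f(x)/g(x) with g(x) > 0, repeatedly solve
# the parametric problem max_x f(x) - lam * g(x) and update lam = f(x*)/g(x*)
# until f(x*) - lam * g(x*) is numerically zero. The inner maximization here
# is a simple grid search over a toy one-dimensional example.
import numpy as np

def dinkelbach(f, g, candidates, tol=1e-9, max_iter=100):
    lam = 0.0
    x_star = candidates[0]
    for _ in range(max_iter):
        vals = f(candidates) - lam * g(candidates)   # parametric objective
        x_star = candidates[np.argmax(vals)]
        F = f(x_star) - lam * g(x_star)
        lam = f(x_star) / g(x_star)                  # update the ratio parameter
        if abs(F) < tol:
            break
    return x_star, lam

# Toy example: maximize a "rate over energy"-style ratio for x in [0.1, 5].
f = lambda x: np.log1p(2.0 * x)          # concave numerator, e.g., an achievable rate
g = lambda x: 0.5 + 0.3 * x              # affine denominator, e.g., an energy cost
xs = np.linspace(0.1, 5.0, 2001)
print(dinkelbach(f, g, xs))
```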
Cloud computing has become an essential technology for the management and processing of large datasets, offering scalability, high availability, and fault tolerance. However, optimizing data replication across multiple data centers poses a significant challenge, especially when balancing competing goals such as latency, storage cost, energy consumption, and network efficiency. This study introduces a dynamic optimization algorithm, Dynamic Multi-Objective Gannet Optimization (DMGO), designed to enhance data replication efficiency in cloud environments. Unlike traditional static replication systems, DMGO adapts dynamically to variations in network conditions, system demand, and resource availability. The approach uses multi-objective optimization to balance data access latency, storage efficiency, and operational costs. DMGO continuously evaluates data center performance and adjusts replication strategies in real time to maintain optimal system efficiency. Experimental evaluations conducted in a simulated cloud environment demonstrate that DMGO significantly outperforms conventional static algorithms, achieving faster data access, lower storage overhead, reduced energy consumption, and improved scalability. The proposed methodology offers a robust and adaptable solution for modern cloud systems, ensuring efficient resource utilization while maintaining high performance.
The four-decade quest for synthesizing ambient-stable polymeric nitrogen, a promising high-energy-density material, remains an unsolved challenge in materials science. We develop a multi-stage computational strategy employing density-functional tight-binding-based rapid screening combined with density functional theory refinement and global structure searching, effectively bridging computational efficiency with quantum accuracy. This integrated approach identifies four novel polymeric nitrogen phases (Fddd, P3221, I4m2, and P6522) that are thermodynamically stable at ambient pressure. Remarkably, the helical P6522 configuration demonstrates exceptional thermal resilience up to 1500 K, representing a predicted polymeric nitrogen structure that maintains stability under both atmospheric pressure and high-temperature extremes. Our methodology establishes a paradigm-shifting framework for the accelerated discovery of metastable energetic materials, resolving critical bottlenecks in theoretical predictions while providing experimentally actionable targets for polymeric nitrogen synthesis.
The Computing Power Network (CPN) is emerging as an important research interest in beyond-5G (B5G) and 6G networks. This paper constructs a CPN based on federated learning (FL), where all multi-access edge computing (MEC) servers are linked to a computing power center via wireless links. Through this FL procedure, each MEC server in the CPN can independently train learning models using local data, thus preserving data privacy. However, it is challenging to motivate MEC servers to participate efficiently in the FL process and to ensure their energy efficiency. To address these issues, we first introduce an incentive mechanism based on the Stackelberg game framework to motivate the MEC servers. We then formulate a comprehensive algorithm that jointly optimizes the communication resource allocation (wireless bandwidth and transmission power) and the computation resource allocation (computation capacity of the MEC servers) while ensuring the local training accuracy of each MEC server. Numerical results validate that the proposed incentive mechanism and joint optimization algorithm improve the energy efficiency and performance of the considered CPN.
Our living environments are gradually being occupied by an abundance of digital objects that have networking and computing capabilities. After these devices are plugged into a network, they initially advertise their presence and capabilities in the form of services so that they can be discovered and, if desired, exploited by the user or by other networked devices. With the increasing number of these devices attached to the network, the complexity of configuring and controlling them increases, which may lead to major processing and communication overhead. Hence, the devices are no longer expected to act merely as primitive stand-alone appliances that provide only the facilities and services they were designed for, but also to offer complex services that emerge from unique combinations of devices. This creates the necessity for these devices to be equipped with some sort of intelligence and self-awareness to enable them to be self-configuring and self-programming. However, with this "smart evolution", the cognitive load to configure and control such spaces becomes immense. One way to relieve this load is to employ artificial intelligence (AI) techniques to create an intelligent "presence" in which the system is able to recognize the users and autonomously program the environment to be energy efficient and responsive to the users' needs and behaviours. These AI mechanisms should be embedded in the users' environments and should operate in a non-intrusive manner. This paper shows how computational intelligence (CI), an emerging domain of AI, can be employed and embedded in our living spaces to help such environments be more energy efficient, intelligent, adaptive, and convenient to their users.
In this paper, we investigate video quality enhancement using computation offloading to a mobile cloud computing (MCC) environment. Our objective is to reduce the computational complexity required to convert a low-resolution video into a high-resolution video while minimizing computation at the mobile client and additional communication costs. To do so, we propose an energy-efficient computation offloading framework for video streaming services in MCC over fifth-generation (5G) cellular networks. In the proposed framework, the mobile client offloads the computational burden of video enhancement to the cloud, which renders the side information (SI) needed to enhance the video without requiring much computation by the client. The cloud detects edges from the upsampled ultra-high-definition (UHD) video and then compresses and transmits them as side information along with the original low-resolution video (e.g., full HD). Finally, the mobile client decodes the received content and integrates the SI with the original content, producing a high-quality video. In our extensive simulation experiments, we observed that the amount of computation needed to construct a UHD video at the client is 50%-60% lower than that required to decode a UHD video compressed by legacy video encoding algorithms. Moreover, the bandwidth required to transmit a full HD video and its side information is around 70% lower than that required for a normal UHD video. The subjective quality of the enhanced UHD video is similar to that of the original UHD video, even though the client pays lower communication costs with reduced computing power.
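As a hedged illustration of the side-information idea (not the paper's pipeline), the sketch below upsamples a low-resolution frame and extracts a binary edge map that a client could use during enhancement. The upsampling factor, Sobel kernel, and threshold are assumptions.

```python
# Rough illustration (assumptions only, not the paper's pipeline): upsample a
# low-resolution frame and extract an edge map that could serve as side
# information for client-side enhancement. Uses nearest-neighbour upsampling
# and a simple Sobel gradient; a real system would use a video codec and a
# tuned edge detector.
import numpy as np

def upsample_nearest(frame, factor=2):
    return np.kron(frame, np.ones((factor, factor)))

def sobel_edges(img, threshold=0.25):
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx, gy = np.zeros_like(img), np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    mag = np.hypot(gx, gy)
    return (mag > threshold * mag.max()).astype(np.uint8)   # binary edge map as SI

low_res = np.random.default_rng(2).random((8, 8))    # toy stand-in for a low-resolution frame
side_info = sobel_edges(upsample_nearest(low_res))   # edge map of the upsampled frame
print(side_info.shape, int(side_info.sum()))
```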
Mobile-edge computing (MEC) is a promising technology for the fifth-generation (5G) and sixth-generation (6G) architectures, providing resourceful computing capabilities for Internet of Things (IoT) devices and applications such as virtual reality, mobile devices, and smart cities. In general, these IoT applications bring higher energy consumption than traditional applications, while the devices running them are usually energy-constrained. To provide persistent operation, many works have studied the offloading problem to save energy. However, the dynamic environment dramatically increases the difficulty of optimizing the offloading decision. In this paper, we aim to minimize the energy consumption of the entire MEC system under a latency constraint while fully accounting for the dynamic environment. Under a Markov game formulation, we propose a multi-agent deep reinforcement learning approach based on a bi-level actor-critic learning structure to jointly optimize the offloading decision and resource allocation. This approach solves the combinatorial optimization problem using an asymmetric method and computes the Stackelberg equilibrium, which is a better convergence point than the Nash equilibrium in terms of Pareto superiority. Our method adapts better to a dynamic environment during data transmission than a single-agent strategy and can effectively tackle the coordination problem in the multi-agent environment. The simulation results show that the proposed method decreases the total computational overhead by 17.8% compared to an actor-critic-based method, and by 31.3%, 36.5%, and 44.7% compared with random offloading, all-local execution, and all-offloading execution, respectively.
In this paper, the complexity and performance of auxiliary vector (AV)-based reduced-rank filtering are addressed. The AV filters presented in previous papers have the general form of the sum of the signature vector of the desired signal and a set of weighted AVs, and they can be classified into three categories according to the orthogonality of their AVs and the optimality of the AV weight coefficients. The AV filter with orthogonal AVs and optimal weight coefficients has the best performance, but it requires considerable computational complexity and suffers from numerically unstable operation. In order to reduce its computational load while keeping its superior performance, several low-complexity algorithms are proposed to efficiently calculate the AVs and their weight coefficients. The diagonal loading technique is also introduced to solve the numerical instability problem without increasing complexity. The performance of the three types of AV filters is also compared through their application to direct-sequence code-division multiple-access (DS-CDMA) systems for interference suppression.
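The diagonal loading step referred to above is a standard regularization; the following generic sketch (with assumed dimensions and loading level, not the paper's AV construction) shows how loading a sample covariance matrix improves its conditioning before the filter is formed.

```python
# Small generic illustration (assumed parameters, not the paper's AV filter):
# diagonal loading regularizes a sample covariance matrix before it is used to
# build a reduced-rank/MVDR-style filter, trading a small bias for numerical
# stability when the matrix is ill-conditioned.
import numpy as np

rng = np.random.default_rng(3)
N, K = 16, 20                      # filter length, number of snapshots (K close to N -> ill-conditioned R)
s = np.ones(N) / np.sqrt(N)        # signature vector of the desired signal (toy choice)
X = rng.normal(size=(N, K))        # received snapshots (noise-only toy data)
R = X @ X.T / K                    # sample covariance

delta = 0.1 * np.trace(R) / N      # loading level: a common heuristic, a fraction of the average eigenvalue
R_dl = R + delta * np.eye(N)       # diagonally loaded covariance

w = np.linalg.solve(R_dl, s)       # filter direction ~ R_dl^{-1} s
w /= s @ w                         # normalize for a distortionless response toward s
print(np.linalg.cond(R), np.linalg.cond(R_dl))   # loading reduces the condition number
```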
Poisson's equation is solved numerically by two direct methods, viz. the Block Cyclic Reduction (BCR) method and the Fourier method. Qualitative and quantitative comparison of the numerical solutions obtained by the two methods indicates that the BCR method is superior to the Fourier method in terms of speed and accuracy. Therefore, the BCR method is applied to solve \(\nabla^2 \psi = \zeta\) and \(\nabla^2 \chi = D\) from observed vorticity and divergence values. Thereafter, the rotational and divergent components of the horizontal monsoon wind in the lower troposphere are reconstructed and compared with the results obtained by the Successive Over-Relaxation (SOR) method, as this indirect method is more commonly used for obtaining the streamfunction (\(\psi\)) and velocity potential (\(\chi\)) fields in NWP models. It is found that the results of the BCR method are more reliable than those of the SOR method.
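For reference, the Fourier (spectral) method mentioned above can be sketched as follows for a doubly periodic grid; the grid size, spacing, and test field are illustrative assumptions, and the BCR method itself is not reproduced here.

```python
# Sketch of the Fourier (spectral) approach to the 2-D Poisson equation
# del^2 psi = zeta on a doubly periodic grid (illustrative assumptions only;
# boundary handling in an NWP setting would differ).
import numpy as np

def poisson_fft(zeta, dx):
    ny, nx = zeta.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    zeta_hat = np.fft.fft2(zeta)
    psi_hat = np.zeros_like(zeta_hat)
    nonzero = k2 > 0
    psi_hat[nonzero] = -zeta_hat[nonzero] / k2[nonzero]   # psi_hat = -zeta_hat / |k|^2
    return np.real(np.fft.ifft2(psi_hat))                  # the mean of psi is left at zero

# Verify against a known field: psi = sin(x) * cos(2y)  =>  zeta = del^2 psi = -5 * psi
n, L = 64, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x)
psi_true = np.sin(X) * np.cos(2 * Y)
zeta = -5.0 * psi_true
psi = poisson_fft(zeta, dx=L / n)
print(np.max(np.abs(psi - psi_true)))   # should be near machine precision
```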
With the emergence of the promising technique of mobile edge computing (MEC), the use of edge computing and cloud computing capabilities to realize HTTP adaptive video streaming transmission in MEC-based 5G networks has been widely studied. Although much work has been done, most existing works focus on network resource utilization or quality of experience (QoE) promotion, while energy efficiency is largely ignored. In this paper, different from previous works, in order to realize energy-efficient video transmission in MEC-enhanced 5G networks, we propose a joint caching and transcoding scheduling strategy for HTTP adaptive video streaming. We formulate the problem of energy-efficient joint caching and transcoding as an integer programming problem that minimizes the system energy consumption. Because solving the optimization problem directly incurs huge computational complexity, a heuristic algorithm based on simulated annealing is proposed to iteratively approach the global optimum with lower complexity and higher accuracy. Finally, numerical simulation results demonstrate that the proposed scheme achieves excellent performance.
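The simulated-annealing heuristic referred to above follows a standard skeleton; the sketch below shows that skeleton on an assumed toy cost model rather than the paper's caching/transcoding integer program.

```python
# Generic simulated-annealing skeleton (a sketch of the heuristic idea, with an
# assumed toy objective - not the paper's caching/transcoding formulation):
# binary decision vector, single-bit-flip neighbourhood, geometric cooling.
import numpy as np

def simulated_annealing(cost, n_vars, t0=1.0, alpha=0.995, n_iter=5000, seed=4):
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=n_vars)          # initial caching/transcoding decision
    best_x, best_c = x.copy(), cost(x)
    cur_c, t = best_c, t0
    for _ in range(n_iter):
        cand = x.copy()
        cand[rng.integers(n_vars)] ^= 1          # flip one decision bit
        c = cost(cand)
        if c < cur_c or rng.random() < np.exp((cur_c - c) / t):
            x, cur_c = cand, c                   # accept improving or, probabilistically, worsening moves
            if cur_c < best_c:
                best_x, best_c = x.copy(), cur_c
        t *= alpha                               # geometric cooling schedule
    return best_x, best_c

# Toy energy model: a fixed cost per cached item plus a penalty for uncovered demand.
demand = np.array([5, 3, 8, 2, 7, 1, 4, 6], dtype=float)
cost = lambda x: 3.0 * x.sum() + (demand * (1 - x)).sum()
print(simulated_annealing(cost, n_vars=len(demand)))
```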
Practical real-world scenarios such as the Internet, social networks, and biological networks present the challenges of data scarcity and complex correlations, which limit the applications of artificial intelligence. The graph structure is a typical tool used to formulate such correlations, but it is incapable of modeling high-order correlations among different objects in a system; thus, the graph structure cannot fully convey the intricate correlations among objects. Confronted with these two challenges, hypergraph computation models high-order correlations among data, knowledge, and rules through hyperedges and leverages these high-order correlations to enhance the data. Additionally, hypergraph computation achieves collaborative computation using data and high-order correlations, thereby offering greater modeling flexibility. In particular, we introduce three types of hypergraph computation methods: ① hypergraph structure modeling, ② hypergraph semantic computing, and ③ efficient hypergraph computing. We then specify how to adopt hypergraph computation in practice by focusing on specific tasks such as three-dimensional (3D) object recognition, revealing that hypergraph computation can reduce the data requirement by 80% while achieving comparable performance, or improve the performance by 52% given the same data, compared with a traditional data-based method. A comprehensive overview of the applications of hypergraph computation in diverse domains, such as intelligent medicine and computer vision, is also provided. Finally, we introduce an open-source deep learning library, DeepHypergraph (DHG), which can serve as a tool for the practical use of hypergraph computation.
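To make the hyperedge-based modeling concrete, the following sketch (with assumed toy data, not taken from the article) builds an incidence matrix and performs one round of hyperedge-mediated feature smoothing in the spirit of hypergraph neural network propagation.

```python
# Illustrative sketch (assumed toy data, not from the article): a hypergraph is
# represented by an incidence matrix H (vertices x hyperedges), and one round of
# hyperedge-mediated feature smoothing is applied,
#   X' = Dv^{-1} H De^{-1} H^T X,
# a simplified form of the propagation used in hypergraph neural networks.
import numpy as np

# 5 vertices, 3 hyperedges; hyperedge 0 = {0,1,2}, 1 = {1,3}, 2 = {2,3,4}.
H = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [0, 0, 1],
], dtype=float)

X = np.arange(10, dtype=float).reshape(5, 2)     # toy vertex features

De_inv = np.diag(1.0 / H.sum(axis=0))            # inverse hyperedge degrees
Dv_inv = np.diag(1.0 / H.sum(axis=1))            # inverse vertex degrees

X_smoothed = Dv_inv @ H @ De_inv @ H.T @ X       # vertices exchange features via shared hyperedges
print(X_smoothed)
```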
A huge calculation burden and difficulty in convergence are the two central conundrums of nonlinear topology optimization (NTO). To this end, a multi-resolution nonlinear topology optimization (MR-NTO) method is proposed based on the multi-resolution design strategy (MRDS) and the additive hyperelasticity technique (AHT), taking into account both geometric nonlinearity and material nonlinearity. The MR-NTO strategy is established in the framework of the solid isotropic material with penalization (SIMP) method, while the Neo-Hookean hyperelastic material model characterizes the material nonlinearity. A coarse analysis grid is employed for the finite element (FE) calculation, and a fine material grid is applied to describe the material configuration. To alleviate the convergence problem and reduce the complexity of the sensitivity calculation, the software ANSYS coupled with AHT is utilized to perform the nonlinear FE calculation. A strategy for redistributing strain energy is proposed for the sensitivity analysis, i.e., transforming the strain energy of the analysis element into that of the material element, covering both the Neo-Hookean and second-order Yeoh materials. Numerical examples highlight three distinct advantages of the proposed method: it can (1) significantly improve the computational efficiency, (2) make up for the shortcoming that NTO based on AHT may have difficulty converging, especially for 3D problems, and (3) successfully cope with high-resolution 3D complex NTO problems on a personal computer.
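For reference, commonly cited forms of the two hyperelastic strain-energy densities named above are given below; the exact constants and volumetric terms used in the paper may differ.

```latex
% Commonly cited forms (assumed for illustration; the paper's exact constants
% and volumetric terms may differ). Compressible Neo-Hookean strain-energy
% density, with shear modulus \mu, Lame parameter \lambda, first invariant
% I_1 = \operatorname{tr}(\mathbf{C}), and volume ratio J = \det\mathbf{F}:
W_{\mathrm{NH}} = \frac{\mu}{2}\,(I_1 - 3) - \mu \ln J + \frac{\lambda}{2}\,(\ln J)^2
% Second-order (two-term) Yeoh model in terms of the deviatoric invariant \bar{I}_1:
W_{\mathrm{Yeoh}} = C_{10}\,(\bar{I}_1 - 3) + C_{20}\,(\bar{I}_1 - 3)^2
```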
In the age of online workload explosion, the number of cloud users is increasing exponentially. Therefore, large-scale data centers are required in the cloud environment, which leads to high energy consumption. Hence, optimal resource utilization is essential to improve the energy efficiency of cloud data centers. Most of the existing literature focuses on virtual machine (VM) consolidation for increasing energy efficiency at the cost of service-level agreement (SLA) degradation. To improve on the existing approaches, a load-aware three-gear THReshold (LATHR) policy combined with a modified best fit decreasing (MBFD) algorithm is proposed for minimizing total energy consumption while improving the quality of service in terms of SLA. It offers promising results under dynamic workloads and a variable number of VMs (1-290) allocated to individual hosts. The outcomes of the proposed work are measured in terms of SLA, energy consumption, instruction energy ratio (IER), and the number of migrations against varied numbers of VMs. The experimental results show that the proposed technique reduced SLA violations (by 55%, 26%, and 39%) and energy consumption (by 17%, 12%, and 6%) compared to the median absolute deviation (MAD), interquartile range (IQR), and double threshold (THR) overload detection policies, respectively.
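The best-fit-decreasing idea behind MBFD can be sketched as generic bin packing; the demands, capacity, and tie-breaking below are assumptions, and the paper's modifications for power and SLA thresholds are not modeled.

```python
# Sketch of the best-fit-decreasing idea behind VM placement (generic bin
# packing with assumed CPU demands/capacities - not the paper's MBFD variant,
# which additionally accounts for power and SLA thresholds): sort VMs by
# demand, place each on the active host whose remaining capacity fits it most
# tightly, and open a new host only when necessary.
def best_fit_decreasing(vm_demands, host_capacity):
    hosts = []                                   # remaining capacity per active host
    placement = {}                               # VM index -> host index
    order = sorted(range(len(vm_demands)), key=lambda i: vm_demands[i], reverse=True)
    for i in order:
        d = vm_demands[i]
        # pick the host with the least remaining capacity that still fits the VM
        candidates = [(rem, h) for h, rem in enumerate(hosts) if rem >= d]
        if candidates:
            _, h = min(candidates)
        else:
            hosts.append(host_capacity)          # power on a new host
            h = len(hosts) - 1
        hosts[h] -= d
        placement[i] = h
    return placement, hosts

demands = [0.3, 0.7, 0.2, 0.5, 0.4, 0.6]         # normalized CPU demands (assumed)
print(best_fit_decreasing(demands, host_capacity=1.0))
```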
Attribute-based encryption (ABE) supports the fine-grained sharing of encrypted data. In some common designs, attributes are managed by an attribute authority that is supposed to be fully trustworthy. This implies that the attribute authority can access all encrypted data, which is known as the key escrow problem. In addition, because all access privileges are defined over a single attribute universe and attributes are shared among multiple data users, user revocation is inefficient in existing ABE schemes. In this paper, we propose a novel scheme that solves the key escrow problem and supports efficient user revocation. First, an access controller is introduced into the existing scheme, and secret keys are generated jointly by the attribute authority and the access controller. Second, an efficient user revocation mechanism is achieved using a version key that supports forward and backward security. The analysis proves that our scheme is secure and efficient in user authorization and revocation.
In this paper, we investigate energy efficiency maximization for mobile edge computing (MEC) in intelligent reflecting surface (IRS)-assisted unmanned aerial vehicle (UAV) communications. In particular, the UAV can collect the computing tasks of the terrestrial users and transmit the results back to them after computing. We jointly optimize the users' transmit beamforming and uploading ratios, the phase shift matrix of the IRS, and the UAV trajectory to improve the energy efficiency. The formulated optimization problem is highly non-convex and difficult to solve directly. Therefore, we decompose the original problem into three sub-problems. We first propose a successive convex approximation (SCA)-based method to design the beamforming of the users and the phase shift matrix of the IRS, and we apply the Lagrange dual method to obtain a closed-form expression for the uploading ratios. For the trajectory optimization, we propose a block coordinate descent (BCD)-based method to obtain a locally optimal solution. Finally, we propose an alternating optimization (AO)-based overall algorithm and show that its complexity is equal to or lower than that of existing algorithms. Simulation results show the superiority of the proposed method over existing schemes in terms of energy efficiency.
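The block-coordinate-descent step mentioned above follows a standard template; the sketch below applies it to an assumed convex quadratic with a two-block split, which is illustrative only and unrelated to the paper's trajectory sub-problem.

```python
# Minimal block-coordinate-descent sketch (toy objective and assumed block
# split, not the paper's sub-problem): alternately minimize a smooth function
# over one block of variables while the other block is held fixed, which
# converges to the unique minimizer for this convex quadratic example.
import numpy as np

def bcd_quadratic(A, b, n1, n_rounds=50):
    """Minimize 0.5 x^T A x - b^T x by alternating over blocks x[:n1] and x[n1:]."""
    n = len(b)
    x = np.zeros(n)
    blocks = [np.arange(n1), np.arange(n1, n)]
    for _ in range(n_rounds):
        for idx in blocks:
            rest = np.setdiff1d(np.arange(n), idx)
            # Exact minimization over the active block with the other block fixed:
            #   A[idx, idx] x_idx = b_idx - A[idx, rest] x_rest
            rhs = b[idx] - A[np.ix_(idx, rest)] @ x[rest]
            x[idx] = np.linalg.solve(A[np.ix_(idx, idx)], rhs)
    return x

rng = np.random.default_rng(5)
M = rng.normal(size=(6, 6))
A = M @ M.T + 6 * np.eye(6)      # symmetric positive definite -> unique minimizer
b = rng.normal(size=6)
x_bcd = bcd_quadratic(A, b, n1=3)
print(np.max(np.abs(x_bcd - np.linalg.solve(A, b))))   # close to the direct solution
```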
Mobile edge computing has emerged as a new paradigm to enhance computing capabilities by offloading complicated tasks to a nearby cloud server. To conserve energy as well as maintain quality of service, a low-time-complexity algorithm is proposed to complete task offloading and server allocation. In this paper, a multi-user scenario with multiple tasks and a single server is considered for small networks, taking full account of factors including data size, bandwidth, and channel state information. Furthermore, we consider a multi-server scenario for larger networks, where the influence of task priority is taken into consideration. To jointly minimize delay and energy cost, we propose a distributed unsupervised-learning-based offloading framework for task offloading and server allocation. We exploit a memory pool that stores input data and the corresponding decisions as key-value pairs, from which the model learns to solve the optimization problems. To further reduce the time cost and achieve near-optimal performance, we use convolutional neural networks, built upon fully connected networks, to process large volumes of data. Numerical results show that the proposed algorithm performs better than other offloading schemes and can generate near-optimal offloading decisions in a timely manner.
In this article, we construct the most powerful family of simultaneous iterative methods, in terms of global convergence behavior, among all existing methods in the literature for finding all roots of non-linear equations. Convergence analysis proves that the order of convergence of the family of derivative-free simultaneous iterative methods is nine. Our main aim is to examine the most regularly used simultaneous iterative methods for finding all roots of non-linear equations by studying their dynamical planes, numerical experiments, and CPU time. Dynamical planes of the iterative methods are drawn using MATLAB to compare the global convergence properties of the simultaneous iterative methods. The convergence behavior of the higher-order simultaneous iterative methods is also illustrated by residual graphs obtained from some numerical test examples. Numerical test examples, dynamical behavior, and computational efficiency are provided to demonstrate the performance and dominant efficiency of the newly constructed derivative-free family of simultaneous iterative methods over existing higher-order simultaneous methods in the literature.
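For context, the classical Weierstrass (Durand-Kerner) iteration sketched below illustrates what a simultaneous root-finding method looks like; it is a second-order baseline for polynomial roots, not the ninth-order derivative-free family constructed in the article.

```python
# Sketch of the classical Weierstrass (Durand-Kerner) method, the prototypical
# simultaneous iteration that finds all roots of a polynomial at once. This is
# a baseline for illustration only; it is not the ninth-order derivative-free
# family constructed in the article.
import numpy as np

def durand_kerner(coeffs, n_iter=100, tol=1e-12):
    """Find all roots of the polynomial given by `coeffs` (highest degree first)."""
    coeffs = np.asarray(coeffs, dtype=complex)
    coeffs = coeffs / coeffs[0]                       # make the polynomial monic
    n = len(coeffs) - 1
    p = lambda x: np.polyval(coeffs, x)
    z = (0.4 + 0.9j) ** np.arange(1, n + 1)           # standard spread of starting guesses
    for _ in range(n_iter):
        z_new = z.copy()
        for i in range(n):
            denom = np.prod([z[i] - z[j] for j in range(n) if j != i])
            z_new[i] = z[i] - p(z[i]) / denom         # Weierstrass correction
        if np.max(np.abs(z_new - z)) < tol:
            z = z_new
            break
        z = z_new
    return z

# Example: roots of x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
print(np.sort_complex(durand_kerner([1, -6, 11, -6])))
```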