Funding: Supported by the Deanship of Scientific Research and Graduate Studies at King Khalid University under research grant number (R.G.P.2/93/45).
Abstract: The deployment of the Internet of Things (IoT) with smart sensors has facilitated the emergence of fog computing as an important technology for delivering services to smart environments such as campuses, smart cities, and smart transportation systems. Fog computing tackles a range of challenges, including processing, storage, bandwidth, latency, and reliability, by locally distributing secure information through end nodes. Consisting of endpoints, fog nodes, and back-end cloud infrastructure, it provides advanced capabilities beyond traditional cloud computing. In smart environments, particularly within smart city transportation systems, the abundance of devices and nodes poses significant challenges related to power consumption and system reliability. To address the challenges of latency, energy consumption, and fault tolerance in these environments, this paper proposes a latency-aware, fault-tolerant framework for resource scheduling and data management, referred to as the FORD framework, for smart cities in fog environments. This framework is designed to meet the demands of time-sensitive applications, such as those in smart transportation systems. The FORD framework incorporates latency-aware resource scheduling to optimize task execution in smart city environments, leveraging resources from both fog and cloud environments. Through simulation-based executions, tasks are allocated to the nearest available nodes with minimum latency. In the event of execution failure, a fault-tolerant mechanism is employed to ensure the successful completion of tasks. Upon successful execution, data is efficiently stored in the cloud data center, ensuring data integrity and reliability within the smart city ecosystem.
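Illustration: a minimal Python sketch of the two ideas combined in this abstract, latency-aware placement and fault-tolerant re-execution. The node list, latency figures, and retry policy are hypothetical; this is not the FORD framework's actual simulation-based scheduler.

```python
import random

# Hypothetical fog/cloud nodes with a measured latency (ms) from the task source.
nodes = [
    {"name": "fog-1", "latency_ms": 4, "available": True},
    {"name": "fog-2", "latency_ms": 7, "available": True},
    {"name": "cloud-dc", "latency_ms": 40, "available": True},
]

def execute(task_id, node):
    # Stand-in for the simulated execution; fails at random here.
    return random.random() > 0.2

def store_in_cloud(task_id, node):
    print(f"task {task_id} finished on {node['name']}; result stored in the cloud data center")

def schedule_task(task_id, nodes, max_retries=3):
    """Pick the available node with minimum latency; on failure, exclude it and retry."""
    for _ in range(max_retries):
        candidates = [n for n in nodes if n["available"]]
        if not candidates:
            break
        node = min(candidates, key=lambda n: n["latency_ms"])
        if execute(task_id, node):
            store_in_cloud(task_id, node)   # persist the result after success
            return node["name"]
        node["available"] = False           # fault tolerance: mark the node failed, retry elsewhere
    return None

print(schedule_task("t-42", nodes))
```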
Funding: Supported by the National Natural Science Foundation of China (61571149, 62001139), the Initiation Fund for Postdoctoral Research in Heilongjiang Province (LBH-Q19098), and the Natural Science Foundation of Heilongjiang Province (LH2020F0178).
Abstract: Fog computing has emerged as an important technology which can improve the performance of computation-intensive and latency-critical communication networks. Nevertheless, fog computing Internet-of-Things (IoT) systems are susceptible to malicious eavesdropping attacks during information transmission, and this issue has not been adequately addressed. In this paper, we propose a physical-layer secure fog computing IoT system model, which is able to improve the physical-layer security of fog computing IoT networks against the malicious eavesdropping of multiple eavesdroppers. The secrecy rate of the proposed model is analyzed, and the quantum galaxy-based search algorithm (QGSA) is proposed to solve the hybrid task scheduling and resource management problem of the network. The computational complexity and convergence of the proposed algorithm are analyzed. Simulation results validate the efficiency of the proposed model and reveal the influence of various environmental parameters on fog computing IoT networks. Moreover, the simulation results demonstrate that the proposed hybrid task scheduling and resource management scheme can effectively enhance secrecy performance across different communication scenarios.
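Illustration: the secrecy rate analyzed here is conventionally the positive gap between the legitimate channel capacity and the strongest eavesdropper's capacity. A small Python sketch with invented SNR values follows; the paper's channel model and QGSA optimizer are not reproduced.

```python
import math

def secrecy_rate(snr_legitimate, snr_eavesdroppers):
    """Wiretap-channel secrecy rate in bit/s/Hz:
    max(0, log2(1 + SNR_main) - log2(1 + max eavesdropper SNR))."""
    c_main = math.log2(1 + snr_legitimate)
    c_eve = max(math.log2(1 + s) for s in snr_eavesdroppers)
    return max(0.0, c_main - c_eve)

# Hypothetical linear SNR values for one legitimate fog link and three eavesdroppers.
print(secrecy_rate(snr_legitimate=15.0, snr_eavesdroppers=[2.0, 3.5, 1.2]))
```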
Funding: Supported in part by the National Natural Science Foundation of China under Grant No. 61473066, in part by the Natural Science Foundation of Hebei Province under Grant No. F2021501020, in part by the S&T Program of Qinhuangdao under Grant No. 202401A195, in part by the Science Research Project of Hebei Education Department under Grant No. QN2025008, and in part by the Innovation Capability Improvement Plan Project of Hebei Province under Grant No. 22567637H.
Abstract: Recently, one of the main challenges facing the smart grid is insufficient computing resources and intermittent energy supply for various distributed components (such as monitoring systems for renewable energy power stations). To solve the problem, we propose an energy harvesting based task scheduling and resource management framework to provide robust and low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem with regard to task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem. Then, solutions are derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle intermittency and instability problems. Finally, we design an energy management algorithm based on sampling average approximation for edge computing servers to derive the optimal charging/discharging strategies, number of energy storage units, and renewable energy utilization. The simulation results show the efficiency and superiority of our proposed framework.
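Illustration: the abstract notes that the offloading, time-switching, and resource-allocation problem can be decoupled into a typical knapsack problem. Below is a textbook 0/1 knapsack dynamic program in Python with hypothetical task values and resource weights; the paper's actual decomposition and its two solution algorithms are not shown.

```python
def knapsack(values, weights, capacity):
    """Standard 0/1 knapsack DP: maximize total value within the resource budget."""
    n = len(values)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]                      # skip task i-1
            if weights[i - 1] <= c:                      # or offload it if it fits
                dp[i][c] = max(dp[i][c], dp[i - 1][c - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]

# Hypothetical energy savings (values) and edge-resource demands (weights) per task.
print(knapsack(values=[6, 10, 12], weights=[1, 2, 3], capacity=5))  # -> 22
```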
Funding: Supported by the National Natural Science Foundation of China (Nos. U2033203, U1833126, 61773203, 61304190).
Abstract: Air traffic flow management has been a major means for balancing air traffic demand and airport or airspace capacity to reduce congestion and flight delays. However, unpredictable factors, such as weather and equipment malfunctions, can cause dynamic changes in airport and sector capacity, resulting in significant alterations to optimized flight schedules and the calculated pre-departure slots. Therefore, taking capacity uncertainties into account is essential to create a more resilient flight schedule. This paper addresses the flight pre-departure sequencing issue and introduces a capacity uncertainty model for optimizing flight schedules at the airport network level. The goal of the model is to reduce the total cost of flight delays while increasing the robustness of the optimized schedule. A chance-constrained model is developed to address the capacity uncertainty of airports and sectors, and the significance of airports and sectors in the airport network is considered when setting the violation probability. The performance of the model is evaluated using real flight data by comparing the results with those of the deterministic model. The development of the model based on the characteristics of this special optimization mechanism can significantly enhance its performance in addressing the pre-departure flight scheduling problem at the airport network level.
Funding: Project partially supported by the Spanish Ministry of Science and Innovation (No. BIA2011-23602) and the European Community with the European Regional Development Fund (FEDER), Spain.
Abstract: This paper aims to present a comprehensive proposal for project scheduling and control by applying fuzzy earned value. It goes a step further than the existing literature: in the formulation of the fuzzy earned value we consider not only its duration, but also cost and production, and alternatives in the scheduling between the earliest and latest times. The mathematical model is implemented in a prototypical construction project with all the estimated values taken as fuzzy numbers. Our findings suggest that different possible schedules and the fuzzy arithmetic provide more objective results in uncertain environments than the traditional methodology. The proposed model allows for controlling the vagueness of the environment through the adjustment of the α-cut, adapting it to the specific circumstances of the project.
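Illustration: a rough Python sketch of the α-cut and fuzzy-arithmetic idea mentioned above, evaluating a cost-performance-style ratio on triangular fuzzy numbers with interval arithmetic at a chosen α. The fuzzy numbers and the specific earned-value ratio are illustrative assumptions, not the paper's model.

```python
def alpha_cut(tri, alpha):
    """Interval [lo, hi] of a triangular fuzzy number (a, b, c) at level alpha."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def interval_div(x, y):
    """Interval division, assuming the denominator interval does not contain zero."""
    candidates = [x[0] / y[0], x[0] / y[1], x[1] / y[0], x[1] / y[1]]
    return (min(candidates), max(candidates))

# Hypothetical fuzzy earned value (EV) and actual cost (AC), in monetary units,
# given as triangular numbers (pessimistic, most likely, optimistic).
EV = (90, 100, 115)
AC = (95, 105, 120)

for alpha in (0.0, 0.5, 1.0):
    cpi = interval_div(alpha_cut(EV, alpha), alpha_cut(AC, alpha))
    print(f"alpha={alpha}: cost-performance ratio in [{cpi[0]:.3f}, {cpi[1]:.3f}]")
```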
Funding: Supported by the National Natural Science Foundation of China (72201229, 72025103, 72394360, 72394362, 72361137001, 72071173, and 71831008).
Abstract: Technological advancements in unmanned aerial vehicles (UAVs) have revolutionized various industries, enabling the widespread adoption of UAV-based solutions. In engineering management, UAV-based inspection has emerged as a highly efficient method for identifying hidden risks in high-risk construction environments, surpassing traditional inspection techniques. Building on this foundation, this paper delves into the optimization of UAV inspection routing and scheduling, addressing the complexity introduced by factors such as no-fly zones, monitoring-interval time windows, and multiple monitoring rounds. To tackle this challenging problem, we propose a mixed-integer linear programming (MILP) model that optimizes inspection task assignments, monitoring sequence schedules, and charging decisions. The comprehensive consideration of these factors differentiates our problem from the conventional vehicle routing problem (VRP), leading to a mathematically intractable model for commercial solvers in the case of large-scale instances. To overcome this limitation, we design a tailored variable neighborhood search (VNS) metaheuristic, customizing the algorithm to efficiently solve our model. Extensive numerical experiments are conducted to validate the efficacy of our proposed algorithm, demonstrating its scalability for both large-scale and real-scale instances. Sensitivity experiments and a case study based on an actual engineering project are also conducted, providing valuable insights for engineering managers to enhance inspection work efficiency.
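Illustration: the tailored VNS metaheuristic is problem-specific, but its control loop follows the standard shake / local-search / move-or-not pattern. The Python skeleton below runs on a toy ordering problem; the UAV-specific neighborhoods, charging decisions, and MILP model are not reproduced.

```python
import random

def vns(initial, neighborhoods, cost, max_iters=200):
    """Basic Variable Neighborhood Search: shake in neighborhood k, do a local
    descent, accept and restart at k=0 on improvement, otherwise try a larger k."""
    best = initial
    for _ in range(max_iters):
        k = 0
        while k < len(neighborhoods):
            candidate = local_search(neighborhoods[k](best), cost)
            if cost(candidate) < cost(best):
                best, k = candidate, 0
            else:
                k += 1
    return best

def local_search(sol, cost):
    """First-improvement descent with adjacent swaps."""
    improved = True
    while improved:
        improved = False
        for i in range(len(sol) - 1):
            cand = sol[:i] + [sol[i + 1], sol[i]] + sol[i + 2:]
            if cost(cand) < cost(sol):
                sol, improved = cand, True
    return sol

def swap_two(s):
    s = s[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def shuffle_all(s):
    s = s[:]
    random.shuffle(s)
    return s

# Toy instance: order hypothetical inspection points (1-D coordinates) to minimize travel.
points = {0: 5.0, 1: 1.0, 2: 9.0, 3: 3.0}
tour_cost = lambda order: sum(abs(points[a] - points[b]) for a, b in zip(order, order[1:]))
print(vns(list(points), [swap_two, shuffle_all], tour_cost))
```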
Funding: Supported in part by the Initiative Scientific Research Program in Tsinghua University under Grant No. 20161080066, and in part by the International Science & Technology Cooperation Program of China under Grant No. 2013DFB10070.
Abstract: Recently, several novel computing paradigms have been proposed, e.g., fog computing and edge computing. In such more decentralized computing paradigms, the location and resources for code execution and data storage of end applications can optionally be distributed among different places or machines. In this paper, we argue that this situation requires a new transparent and user-centric approach to unify resource management and code scheduling from the perspective of end users. We elaborate our vision and propose a software-defined code scheduling framework. The proposed framework allows the code execution or data storage of end applications to be adaptively done at appropriate machines with the help of a performance and capacity monitoring facility, intelligently improving application performance for end users. A pilot system and preliminary results show the advantages of the framework and thus of the advocated vision for end users.
Abstract: The recent rapid development of China's foreign trade has led to a significant increase in waterway transportation and automated container ports. Automated terminals can significantly improve the loading and unloading efficiency of container terminals. These terminals can also increase a port's transportation volume while ensuring the quality of cargo loading and unloading, which has become an inevitable trend in the future development of ports. However, the continuous growth of the port's transportation volume has increased the horizontal transportation pressure on the automated terminal, and the problems of route conflicts and road locks faced by automated guided vehicles (AGVs) have become increasingly prominent. Accordingly, this work takes the Xiamen Yuanhai automated container terminal as an example and focuses on analyzing the interference caused by path conflicts in its horizontal-transportation AGV scheduling. Results show that path conflict, the most prominent interference factor, causes AGV scheduling to deviate from the original plan. Consequently, disruption management was used to establish a disturbance recovery model, and the Dijkstra algorithm combined with time windows is adopted to plan a conflict-free path. Compared with the rescheduling method, both the transportation path deviation and its degree under the disruption management method are much lower: the transportation path deviation degree of the disruption management method is only 5.56%, whereas that of the rescheduling method is 44.44%.
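Illustration: a minimal Python sketch of the "Dijkstra combined with time windows" idea, where an edge is taken only if the target node is free at the arrival time, so reserved windows force a wait or a detour. The layout, travel times, and reservation below are invented and do not reflect the Xiamen Yuanhai terminal.

```python
import heapq

def time_window_dijkstra(graph, start, goal, reserved, t0=0):
    """graph: node -> list of (neighbor, travel_time);
    reserved: node -> list of (busy_from, busy_to) windows held by other AGVs."""
    def blocked(node, t):
        return any(a <= t <= b for a, b in reserved.get(node, []))

    pq = [(t0, start, [start])]
    best = {}
    while pq:
        t, node, path = heapq.heappop(pq)
        if node == goal:
            return t, path
        if best.get(node, float("inf")) <= t:
            continue
        best[node] = t
        for nxt, dt in graph.get(node, []):
            arrival = t + dt
            if blocked(nxt, arrival):
                arrival = t + dt + 1      # simple policy: wait one time unit, then retry
                if blocked(nxt, arrival):
                    continue               # still occupied: skip this edge for now
            heapq.heappush(pq, (arrival, nxt, path + [nxt]))
    return None

# Hypothetical terminal layout and one conflicting reservation on node "B".
graph = {"A": [("B", 2), ("C", 3)], "B": [("D", 2)], "C": [("D", 2)], "D": []}
reserved = {"B": [(1, 4)]}
print(time_window_dijkstra(graph, "A", "D", reserved))  # detours via C
```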
Abstract: The rapid acceptance of cloud technology by industry explains the increasing need for energy conservation and the adoption of energy-aware scheduling methods in the cloud. Power consumption is a top-of-mind issue in the cloud because the use of cloud storage by individuals and organizations is growing rapidly. Developing an efficient power-management processor architecture has therefore gained considerable attention. However, conventional power management mechanisms fail to consider task scheduling policies. This work presents a novel energy-aware framework for power management, leading to the development of the Inclusive Power-Cognizant Processor Controller (IPCPC) for efficient power utilization. To evaluate the performance of the proposed method, simulation experiments using random tasks as well as tasks collected from Google Trace Logs were conducted to validate the superiority of IPCPC. The experiments based on real-world Google Trace Logs show that the proposed framework leads to less than 9% of the total server power consumption per task, which demonstrates a reduction in the overall power required.
Funding: This work was supported by the Program for Huazhong University of Science and Technology (HUST) Academic Frontier Youth Team (2017QYTD04) and the Program for HUST Graduate Innovation and Entrepreneurship Fund (2019YGSCXCY037). The authors acknowledge Grant DMETKF2018019 by the State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology. This study was also financially supported by the Guangdong Science and Technology Project (2016B020240001) and the Guangdong Natural Science Foundation (2018A030310150).
Abstract: Efficient fast-charging technology is necessary for extending the driving range of electric vehicles. However, lithium-ion cells generate immense heat at high-current charging rates. In order to address this problem, an efficient fast charging-cooling scheduling method is urgently needed. In this study, a liquid cooling-based thermal management system equipped with mini-channels was designed for the fast-charging process of a lithium-ion battery module. A neural network-based regression model was proposed based on 81 sets of experimental data; it consisted of three sub-models and considered three outputs: maximum temperature, temperature standard deviation, and energy consumption. Each sub-model had a desirable testing accuracy (99.353%, 97.332%, and 98.381%) after training. The regression model was employed to predict all three outputs over a full dataset combining different charging current rates (0.5C, 1C, 1.5C, 2C, and 2.5C (1C = 5 A)) at three different charging stages and a range of coolant rates (0.0006, 0.0012, and 0.0018 kg·s^(-1)). An optimal charging-cooling schedule was selected from the predicted dataset and was validated by the experiments. The results indicated that the battery module's state of charge increased by 0.5 after 15 min, with an energy consumption lower than 0.02 J. The maximum temperature and temperature standard deviation could be controlled within 33.35 °C and 0.8 °C, respectively. The approach described herein can be used by the electric vehicle industry in real fast-charging conditions. Moreover, the optimal fast charging-cooling schedule can be predicted from the experimental data obtained, which in turn can significantly improve the efficiency of the charging process design as well as control energy consumption during cooling.
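Illustration: the described regression model (three sub-models, each mapping charging and coolant settings to one output) can be approximated with off-the-shelf tools. The scikit-learn sketch below uses synthetic data purely to show the structure; the 81 experimental points, network architectures, and accuracies reported above are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for the experiments: inputs are three stage charging rates (C)
# plus the coolant flow rate (kg/s); outputs are invented functions of the inputs.
X = np.column_stack([
    rng.uniform(0.5, 2.5, 200),          # stage-1 charging rate
    rng.uniform(0.5, 2.5, 200),          # stage-2 charging rate
    rng.uniform(0.5, 2.5, 200),          # stage-3 charging rate
    rng.uniform(0.0006, 0.0018, 200),    # coolant flow rate
])
targets = {
    "max_temperature": 25 + 4 * X[:, :3].sum(axis=1) - 3000 * X[:, 3],
    "temp_std": 0.2 + 0.2 * X[:, :3].std(axis=1),
    "energy_consumption": 5 + 8000 * X[:, 3],
}

# One sub-model per output, mirroring the three-sub-model design described above.
models = {}
for name, y in targets.items():
    m = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
    m.fit(X, y)
    models[name] = m

candidate = np.array([[1.5, 1.0, 0.5, 0.0012]])   # one candidate charging-cooling schedule
print({name: float(m.predict(candidate)[0]) for name, m in models.items()})
```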
Abstract: With the rapid improvement of China's economic level and the rapid development of e-commerce, the demand for logistics warehousing, one of the most important links in the logistics transportation system, has also greatly increased. The dispatching management system applied in the warehouse has a wide range of applications in logistics storage systems. Improving the operational efficiency of the dispatching management system can effectively improve the efficiency of the automated warehouse system and reduce the cost of logistics transportation. Therefore, this paper designs a dispatching system with a tracking car as the control target and realizes the positioning and path-planning functions of the tracking car in the system. At present, the tracking car can reliably receive the commands sent by the system and accurately follow the established path. At the same time, the system software can obtain the video images taken by the camera on the car and play them smoothly.
Abstract: Staff scheduling and rostering problems, with applications in several areas from transportation systems to hospitals, have been widely addressed by researchers. This is not the case for hospitality services, which have been largely overlooked by the quantitative research literature. The purpose of this paper is to provide some insights into the application of staff scheduling and rostering problems to hospitality management operations, reviewing existing approaches developed in similar areas, such as nurse rostering, and examining adaptable problem models, such as tour scheduling.
基金Project "Seismic Data Share" from Ministry of Science and Technology of China.
Abstract: Grid technology is regarded as the third-generation Internet technology, and resource management is at its core. Aiming at the resource management problems encountered in the preliminary construction of CEDAGrid (China Earthquake Disaster Alleviation and Simulation Grid), this paper presents a resource management and job scheduling model, ProRMJS, to solve them. On a platform where each computing node is assumed to be able to provide computation services, ProRMJS uses a "computation pool" to support the scheduler; the scheduler then allocates jobs dynamically according to the computing capability and status of each node to ensure the stability of the platform. At the same time, ProRMJS monitors the status of the job on each node and sets a time threshold to manage job scheduling. By estimating the computing capability of each node, ProRMJS allocates jobs on demand, rather than simply assuming that every node can finish its assigned job. When calculating the computing capability of each node, ProRMJS allows for the various factors that affect that capability, and the efficiency of the platform is thereby improved. Finally, the validity of the model is verified by an example.
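Illustration: a toy Python rendering of the allocation idea, estimating each node's computing capability, handing out jobs accordingly, and flagging jobs whose predicted runtime exceeds a time threshold. Node capabilities, job sizes, and the threshold are hypothetical; this is not the ProRMJS implementation.

```python
# Hypothetical nodes in the "computation pool": capability in work units per second.
nodes = {"node-a": 4.0, "node-b": 2.0, "node-c": 1.0}
jobs = [("j1", 8.0), ("j2", 5.0), ("j3", 3.0), ("j4", 2.0)]   # (job id, work units)
TIME_THRESHOLD = 5.0   # seconds: jobs predicted to exceed this would be re-allocated

def allocate(jobs, nodes):
    """Greedy on-demand allocation: give each job to the node that would finish it earliest."""
    finish_time = {n: 0.0 for n in nodes}
    plan = []
    for job_id, work in sorted(jobs, key=lambda j: -j[1]):     # largest jobs first
        node = min(nodes, key=lambda n: finish_time[n] + work / nodes[n])
        runtime = work / nodes[node]
        finish_time[node] += runtime
        flagged = runtime > TIME_THRESHOLD                     # would trip the monitor's threshold
        plan.append((job_id, node, round(runtime, 2), "reallocate" if flagged else "ok"))
    return plan, finish_time

plan, loads = allocate(jobs, nodes)
print(plan)
print(loads)
```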
Funding: Projects (2009AA01Z124, 2009AA01Z102) supported by the National High Technology Research and Development Program of China; Projects (60970036, 61076025) supported by the National Natural Science Foundation of China.
Abstract: Chip multiprocessors (CMPs) allow thread-level parallelism, thus increasing performance. However, this comes at the cost of thermal problems: CMPs require more power, creating a non-uniform power map and hotspots. Aiming at this problem, a thread scheduling algorithm, the greedy scheduling algorithm, was proposed to reduce thermal emergencies and to improve throughput. The greedy scheduling algorithm was implemented in the Linux kernel on Intel's Quad-Core system. The experimental results show that the greedy scheduling algorithm can reduce hardware dynamic thermal management (DTM) by 9.6%-78.5% in various combinations of workloads, and achieves throughput 5.2% higher on average, and up to 9.7% higher, than the Linux standard scheduler.
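Illustration: the greedy idea, pairing the hottest threads with the coolest cores at each scheduling interval, can be shown in a few lines of Python. Core temperatures and per-thread heat figures are invented; this is not the authors' kernel implementation.

```python
# Hypothetical per-core temperatures (°C) and per-thread heat contributions (°C per interval).
core_temp = {"core0": 62.0, "core1": 55.0, "core2": 71.0, "core3": 58.0}
thread_heat = {"ffmpeg": 9.0, "gcc": 6.0, "idleloop": 1.0, "zip": 4.0}

def greedy_assign(core_temp, thread_heat):
    """Assign the hottest thread to the currently coolest core, repeatedly."""
    temps = dict(core_temp)
    assignment = {}
    for thread, heat in sorted(thread_heat.items(), key=lambda kv: -kv[1]):
        core = min(temps, key=temps.get)      # coolest core right now
        assignment[thread] = core
        temps[core] += heat                   # account for the heat this thread adds
    return assignment, temps

assignment, predicted = greedy_assign(core_temp, thread_heat)
print(assignment)   # hottest threads land on the coolest cores
print(predicted)    # predicted temperatures after one interval
```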
Funding: Supported by the Natural Science Fund for Distinguished Young Scholars of Jiangsu Province under Grant BK20220067.
Abstract: With the support of Vehicle-to-Everything (V2X) technology and computing power networks, the existing intersection traffic order is expected to benefit from efficiency improvements and energy savings by new schemes such as de-signalization. How to effectively manage autonomous vehicles for traffic control with high throughput at unsignalized intersections while ensuring safety has been a research hotspot. This paper proposes a collision-free autonomous vehicle scheduling framework based on edge-cloud computing power networks for unsignalized intersections where the lanes entering the intersections are undirectional, and designs an efficient communication system and protocol. First, by analyzing the collision point occupation time, this paper formulates an absolute value programming problem. Second, this problem is solved with low complexity by the Edge Intelligence Optimal Entry Time (EI-OET) algorithm based on edge-cloud computing power support. Then, the communication system and protocol are designed for the proposed scheduling scheme to realize efficient and low-latency vehicular communications. Finally, simulation experiments compare the proposed scheduling framework with directional and traditional traffic light scheduling mechanisms, and the experimental results demonstrate its high efficiency, low latency, and low complexity.
文摘<span style="font-family:Verdana;">In the present deregulated electricity market, power system congestion is the main complication that an independent system operator (ISO) faces on a regular basis. Transmission line congestion trigger serious problems for smooth functioning in restructured power system causing an increase in the cost of transmission hence affecting market efficiency. Thus, it is of utmost importance for the investigation of various techniques in order to relieve congestion in the transmission network. Generation rescheduling is one of the most efficacious techniques to do away with the problem of congestion. For optimiz</span><span style="font-family:Verdana;">ing the congestion cost, this work suggests a hybrid optimization based on</span><span style="font-family:Verdana;"> two effective algorithms viz Teaching learning-based optimization (TLBO) algorithm and Particle swarm optimization (PSO) algorithm. For binding the constraints, the traditional penalty function technique is incorporated. Modified IEEE 30-bus test system and modified IEEE 57-bus test system are used to inspect the usefulness of the suggested methodology.</span>
Funding: Supported by the National Natural Science Foundation of China (Grant No. 52475543), the Natural Science Foundation of Henan (Grant No. 252300421101), the Henan Province University Science and Technology Innovation Talent Support Plan (Grant No. 24HASTIT048), and the Science and Technology Innovation Team Project of Zhengzhou University of Light Industry (Grant No. 23XNKJTD0101).
Abstract: Aircraft assembly is characterized by stringent precedence constraints, limited resource availability, spatial restrictions, and a high degree of manual intervention. These factors lead to considerable variability in operator workloads and significantly increase the complexity of scheduling. To address this challenge, this study investigates the Aircraft Pulsating Assembly Line Scheduling Problem (APALSP) under skilled operator allocation, with the objective of minimizing assembly completion time. A mathematical model considering skilled operator allocation is developed, and a Q-learning improved Particle Swarm Optimization algorithm (QLPSO) is proposed. In the algorithm design, a reverse scheduling strategy is adopted to effectively manage large-scale precedence constraints. Moreover, a reverse sequence encoding method is introduced to generate operation sequences, while a time decoding mechanism is employed to determine completion times. The problem is further reformulated as a Markov Decision Process (MDP) with explicitly defined state and action spaces. Within QLPSO, the Q-learning mechanism adaptively adjusts inertia weights and learning factors, thereby achieving a balance between exploration capability and convergence performance. To validate the effectiveness of the proposed approach, extensive computational experiments are conducted on benchmark instances of different scales, including small, medium, large, and ultra-large cases. The results demonstrate that QLPSO consistently delivers stable and high-quality solutions across all scenarios. In ultra-large-scale instances, it improves the best solution by 25.2% compared with the Genetic Algorithm (GA) and enhances the average solution by 16.9% over the Q-learning algorithm, showing clear advantages over the comparative methods. These findings not only confirm the effectiveness of the proposed algorithm but also provide valuable theoretical references and practical guidance for the intelligent scheduling optimization of aircraft pulsating assembly lines.
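Illustration: the coupling described above, a Q-learning agent that picks the PSO inertia weight and learning factors each iteration based on whether the swarm improved, can be sketched compactly in Python. The test function, state definition, and parameter triples below are assumptions for illustration, not the paper's APALSP encoding.

```python
import random

# Candidate (inertia w, cognitive c1, social c2) triples the Q-agent chooses between.
ACTIONS = [(0.9, 2.0, 2.0), (0.6, 1.5, 1.7), (0.4, 1.0, 2.5)]
sphere = lambda x: sum(v * v for v in x)          # toy minimization objective

def qlpso(dim=5, swarm=20, iters=100, alpha=0.3, gamma=0.9, eps=0.2):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=sphere)[:]
    Q = [[0.0] * len(ACTIONS) for _ in range(2)]  # states: 0 = improved last step, 1 = stalled
    state, prev_best = 0, sphere(gbest)
    for _ in range(iters):
        # Epsilon-greedy choice of the PSO parameter triple for this iteration.
        a = random.randrange(len(ACTIONS)) if random.random() < eps else max(
            range(len(ACTIONS)), key=lambda i: Q[state][i])
        w, c1, c2 = ACTIONS[a]
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=sphere)[:]
        reward = 1.0 if sphere(gbest) < prev_best else -1.0   # did this choice help?
        nxt = 0 if reward > 0 else 1
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state, prev_best = nxt, sphere(gbest)
    return gbest, sphere(gbest)

print(qlpso())
```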
Funding: Supported by the National Key Research and Development Program of China under Grant 2022YFB2901501, and in part by the Science and Technology Innovation Leading Talents Subsidy Project of Central Plains under Grant 244200510038.
Abstract: The rapid growth of distributed data-centric applications and AI workloads increases demand for low-latency, high-throughput communication, necessitating frequent and flexible updates to network routing configurations. However, maintaining consistent forwarding states during these updates is challenging, particularly when rerouting multiple flows simultaneously. Existing approaches pay little attention to multi-flow updates, where improper update sequences across data plane nodes may construct deadlock dependencies. Moreover, these methods typically involve excessive control-data plane interactions, incurring significant resource overhead and performance degradation. This paper presents P4LoF, an efficient loop-free update approach that enables the controller to reroute multiple flows through minimal interactions. P4LoF first utilizes a greedy-based algorithm to generate the shortest update dependency chain for the single-flow update. These chains are then dynamically merged into a dependency graph and resolved as a Shortest Common Super-sequence (SCS) problem to produce the update sequence of the multi-flow update. To address deadlock dependencies in multi-flow updates, P4LoF builds a deadlock-fix forwarding model that leverages the flexible packet processing capabilities of the programmable data plane. Experimental results show that P4LoF reduces control-data plane interactions by at least 32.6% with modest overhead, while effectively guaranteeing loop-free consistency.
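Illustration: the shortest-common-supersequence step, merging per-flow update chains into one ordering that preserves every chain, is the part that generalizes cleanly. A two-chain dynamic-programming version is sketched below with hypothetical switch names; the greedy chain construction and the deadlock-fix forwarding model are not shown.

```python
def shortest_common_supersequence(a, b):
    """DP over two per-flow update chains; the result contains both as subsequences."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]   # longest-common-subsequence table
    for i in range(n):
        for j in range(m):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] else max(dp[i][j + 1], dp[i + 1][j])
    # Backtrack, emitting the merged update order.
    out, i, j = [], n, m
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            out.append(a[i - 1]); i -= 1
        else:
            out.append(b[j - 1]); j -= 1
    out.extend(reversed(a[:i])); out.extend(reversed(b[:j]))
    return out[::-1]

# Hypothetical single-flow update chains over programmable switches s1..s4.
chain_flow1 = ["s2", "s3", "s1"]
chain_flow2 = ["s2", "s4", "s1"]
print(shortest_common_supersequence(chain_flow1, chain_flow2))  # one ordering covering both
```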
Funding: Supported by the National Natural Science Foundation of China (32494793).
Abstract: Cellulose frameworks have emerged as promising materials for light management due to their exceptional light-scattering capabilities and sustainable nature. Conventional biomass-derived cellulose frameworks face a fundamental trade-off between haze and transparency, coupled with impractical thicknesses (≥1 mm). Inspired by the squid's skin-peeling mechanism, this work develops a peroxyformic acid (HCOOOH)-enabled precision peeling strategy to isolate intact 10-μm-thick bamboo green (BG) frameworks, 100× thinner than wood-based counterparts, while achieving an unprecedented optical performance (88% haze with 80% transparency). This performance surpasses delignified biomass (transparency <40% at 1 mm) and matches engineered cellulose composites, yet requires no energy-intensive nanofibrillation. The preserved native cellulose I crystalline structure (64.76% crystallinity) and wax-coated uniaxial fibril alignment (Hermans factor: 0.23) contribute to high mechanical strength (903 MPa modulus) and broadband light scattering. As a light-management layer in polycrystalline silicon solar cells, the BG framework boosts photoelectric conversion efficiency by 0.41% absolute (18.74% → 19.15%), outperforming synthetic anti-reflective coatings. The work establishes a scalable, waste-to-wealth route for optical-grade cellulose materials in next-generation optoelectronics.
Funding: Supported by the National Natural Science Foundation of China (Nos. 62374029, 22175029, 62474033, and W2433038), the Young Elite Scientists Sponsorship Program by CAST (No. YESS20220550), the Sichuan Science and Technology Program (No. 2024NSFSC0250), the Natural Science Foundation of Shenzhen Innovation Committee (JCYJ20210324135614040), and the Fundamental Research Funds for the Central Universities of China (No. ZYGX2022J032).
Abstract: Perovskite solar cells (PSCs) have emerged as promising photovoltaic technologies owing to their remarkable power conversion efficiency (PCE). However, heat accumulation under continuous illumination remains a critical bottleneck, severely affecting device stability and long-term operational performance. Herein, we present a multifunctional strategy by incorporating highly thermally conductive Ti_(3)C_(2)T_(X) MXene nanosheets into the perovskite layer to simultaneously enhance thermal management and optoelectronic properties. The Ti_(3)C_(2)T_(X) nanosheets, embedded at perovskite grain boundaries, construct efficient thermal conduction pathways, significantly improving the thermal conductivity and diffusivity of the film. This leads to a notable reduction in the device's steady-state operating temperature from 42.96 °C to 39.97 °C under 100 mW cm^(−2) illumination, thereby alleviating heat-induced performance degradation. Beyond thermal regulation, Ti_(3)C_(2)T_(X), with high conductivity and negatively charged surface terminations, also serves as an effective defect passivation agent, reducing trap-assisted recombination while simultaneously facilitating charge extraction and transport by optimizing interfacial energy alignment. As a result, the Ti_(3)C_(2)T_(X)-modified PSCs achieve a champion PCE of 25.13% and exhibit outstanding thermal stability, retaining 80% of the initial PCE after 500 h of thermal aging at 85 °C and 30±5% relative humidity (in contrast, control PSCs retain only 58% after 200 h). Moreover, under continuous maximum power point tracking in an N2 atmosphere, the Ti_(3)C_(2)T_(X)-modified PSCs retained 70% of the initial PCE after 500 h, whereas the control PSCs dropped sharply to 20%. These findings highlight the synergistic role of Ti_(3)C_(2)T_(X) in thermal management and optoelectronic performance, paving the way for the development of high-efficiency and heat-resistant perovskite photovoltaics.