With the in-depth advancement of rural revitalization and urban-rural integration strategies, the integration of agriculture, culture, and tourism has become an important path for promoting high-quality development in metropolitan suburbs. Taking Cihui Subdistrict in Wuhan as an example, this research systematically sorts out its resource endowments, development models, and implementation effectiveness of agriculture-culture-tourism integration through field research and case analysis. It further delves into the existing problems, such as insufficient planning and coordination, weak factor support, and insufficient industrial integration, along with their underlying causes. On this basis, targeted countermeasures are proposed from the aspects of scientific planning, industrial collaboration, talent introduction and cultivation, brand building, and policy optimization. The study aims to build an integrated development system of agriculture, culture, and tourism tailored to the characteristics of metropolitan suburbs, providing theoretical references and policy inspiration for similar regions.
This paper employs the PPO (Proximal Policy Optimization) algorithm to study the risk hedging problem of Shanghai Stock Exchange (SSE) 50ETF options. First, the action and state spaces were designed based on the characteristics of the hedging task, and a reward function was developed according to the cost function of the options. Second, combining the concept of curriculum learning, the agent was guided to adopt a simulated-to-real learning approach for dynamic hedging tasks, reducing the learning difficulty and addressing the issue of insufficient option data. A dynamic hedging strategy for 50ETF options was constructed. Finally, numerical experiments demonstrate the superiority of the designed algorithm over traditional hedging strategies in terms of hedging effectiveness.
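The reward design described above can be illustrated with a small sketch. This is a guess at the general shape of such a reward (hedged P&L error plus a proportional transaction cost), not the paper's exact cost function; all names and the cost rate are illustrative.

```python
def hedging_reward(option_value_change, hedge_position, spot_change,
                   trade_size, cost_rate=0.001):
    """One-step hedging reward: penalize the residual P&L of the hedged
    portfolio plus a proportional transaction cost.  An assumed form,
    not the paper's exact cost function."""
    pnl_error = option_value_change - hedge_position * spot_change
    transaction_cost = cost_rate * abs(trade_size)
    return -(abs(pnl_error) + transaction_cost)
```

Under this form a perfectly hedged step with no trading earns the maximum reward of zero, and both tracking error and turnover pull the reward down.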
Bionic gait learning of quadruped robots based on reinforcement learning has become a hot research topic. The proximal policy optimization (PPO) algorithm has a low probability of learning a successful gait from scratch due to problems such as reward sparsity. To solve this problem, we propose an experience evolution proximal policy optimization (EEPPO) algorithm, which integrates PPO with prior knowledge highlighted by an evolutionary strategy. We use successfully trained samples as prior knowledge to guide the learning direction in order to increase the success probability of the learning algorithm. To verify the effectiveness of the proposed EEPPO algorithm, we have conducted simulation experiments of the quadruped robot gait learning task on Pybullet. Experimental results show that the central pattern generator based radial basis function (CPG-RBF) network and the policy network are simultaneously updated to achieve the quadruped robot's bionic diagonal trot gait learning task using key information such as the robot's speed, posture, and joint states. Experimental comparison with the traditional soft actor-critic (SAC) algorithm validates the superiority of the proposed EEPPO algorithm, which can learn a more stable diagonal trot gait on flat terrain.
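One plausible reading of how successful samples guide the learning direction is to mix retained elite rollouts into each PPO update batch. The sketch below shows only that mixing step; the function names, buffer layout, and mixing ratio are assumptions for illustration, not details taken from the paper.

```python
import random

def build_training_batch(new_samples, elite_buffer, elite_frac=0.25,
                         batch_size=64):
    """Mix elite (successfully trained) samples kept by an evolutionary
    selection step into a PPO update batch, so that sparse successes
    keep steering the policy.  The ratio elite_frac is an assumption."""
    n_elite = min(int(batch_size * elite_frac), len(elite_buffer))
    batch = random.sample(elite_buffer, n_elite) if n_elite else []
    n_new = min(batch_size - n_elite, len(new_samples))
    batch += random.sample(new_samples, n_new)
    return batch
```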
Currently, most of the policies for the dynamic demand vehicle routing problem are based on traditional methods for static problems, as there is no general method for constructing a real-time optimization policy for the case of dynamic demand. Here, a new approach to building a real-time optimization policy, based on a combination of rules from the static sub-problems, is proposed. The real-time optimization policy divides the dynamic problem into a series of static sub-problems along the time axis and then solves the static ones. The static sub-problems' transformation and solution rules include the division rule, batch rule, objective rule, action rule, and algorithm rule, among others. Different combinations of these rules may constitute a variety of real-time optimization policies. According to this general method, two new policies called flexible G/G/m and flexible D/G/m were developed. The competitive analysis and the simulation results of these two policies proved that both are improvements upon the best existing policy.
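The division-and-batch idea can be sketched as a rolling loop that closes a time window, hands the accumulated requests to a static solver, and moves on. This is a minimal illustration of the general scheme only; the fixed window length, the function names, and the use of a pluggable `solve_static` are assumptions, not the paper's specific rules.

```python
def real_time_policy(events, batch_window, solve_static):
    """Divide a stream of dynamic requests into static sub-problems
    along the time axis (division and batch rules), solving each batch
    with a static solver (algorithm rule).  Names are illustrative."""
    routes = []
    batch, window_end = [], batch_window
    for t, request in sorted(events):
        if t >= window_end:                  # division rule: close the window
            if batch:
                routes.append(solve_static(batch))
            batch, window_end = [], window_end + batch_window
        batch.append(request)                # batch rule: accumulate requests
    if batch:                                # flush the final window
        routes.append(solve_static(batch))
    return routes
```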
In communication networks with a policy-based Transport Control on-Demand (TCoD) function, the transport control policies have a great impact on network effectiveness. To evaluate and optimize the transport policies in a communication network, a policy-based TCoD network model is given and a comprehensive evaluation index system of network effectiveness is put forward from both the network application and handling mechanism perspectives. A TCoD network prototype system based on Asynchronous Transfer Mode/Multi-Protocol Label Switching (ATM/MPLS) is introduced and some experiments are performed on it. The prototype system is evaluated and analyzed with the comprehensive evaluation index system. The results show that the index system can be used to judge whether the communication network can meet the application requirements, and can provide references for the optimization of the transport policies so as to improve communication network effectiveness.
In this paper, we study the robustness property of policy optimization (particularly the Gauss-Newton gradient descent algorithm, which is equivalent to policy iteration in reinforcement learning) subject to noise at each iteration. By invoking the concept of input-to-state stability and utilizing Lyapunov's direct method, it is shown that, if the noise is sufficiently small, the policy iteration algorithm converges to a small neighborhood of the optimal solution even in the presence of noise at each iteration. Explicit expressions for the upper bound on the noise and the size of the neighborhood to which the policies ultimately converge are provided. Based on Willems' fundamental lemma, a learning-based policy iteration algorithm is proposed. The persistent excitation condition can be readily guaranteed by checking the rank of the Hankel matrix related to an exploration signal. The robustness of the learning-based policy iteration to measurement noise and unknown system disturbances is theoretically demonstrated by the input-to-state stability of the policy iteration. Several numerical simulations are conducted to demonstrate the efficacy of the proposed method.
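For the linear-quadratic case, the equivalence between the Gauss-Newton update and policy iteration can be illustrated in the noise-free setting: policy evaluation solves a Lyapunov equation for the current gain, and policy improvement applies the Gauss-Newton step. The discrete-time sketch below assumes the initial gain K0 is stabilizing; the paper's actual analysis additionally covers noisy evaluations and data-driven evaluation via Willems' lemma, which this sketch does not attempt.

```python
import numpy as np

def policy_iteration_lqr(A, B, Q, R, K0, iters=30, eval_iters=500):
    """Noise-free policy iteration for discrete-time LQR.
    Evaluation: solve P = Q + K'RK + (A-BK)' P (A-BK) by fixed-point
    iteration.  Improvement: the Gauss-Newton / policy-iteration step
    K = (R + B'PB)^{-1} B'PA.  Assumes K0 is stabilizing."""
    n = A.shape[0]
    K, P = K0, np.zeros((n, n))
    for _ in range(iters):
        Acl = A - B @ K
        P = np.zeros((n, n))
        for _ in range(eval_iters):          # policy evaluation (Lyapunov)
            P = Q + K.T @ R @ K + Acl.T @ P @ Acl
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # improvement
    return K, P
```

On a scalar example with A = 0.5, B = Q = R = 1, the iterates converge to the Riccati solution, consistent with the convergence result the paper perturbs with noise.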
In order to achieve an intelligent and automated self-management network, dynamic policy configuration and selection are needed. A certain policy suits only a certain network environment; if the network environment changes, that policy no longer suits it. Thereby, policy-based management should also have a similar "natural selection" process: useful policies are retained, and policies that have lost their effectiveness are eliminated. A policy optimization method based on evolutionary learning was proposed. Policies with high hit (shooting) counts gain priority, policies with low counts have their priority lowered, and policies that go long without being hit become dormant. Thus survival of the fittest is realized, and the degree of self-learning in policy management is improved.
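The priority bookkeeping described above can be sketched in a few lines. The field names, decay factor, and dormancy threshold below are illustrative assumptions; the paper's actual update rule may differ.

```python
def update_policy_priorities(policies, decay=0.9, dormant_threshold=0.05):
    """Evolutionary bookkeeping for policy management: priority rises
    with a policy's hit (shooting) count, decays when the policy goes
    unused, and long-unused policies go dormant.  A minimal sketch."""
    for p in policies:
        if p["hits"] > 0:
            p["priority"] += p["hits"]       # reward frequently-hit policies
        else:
            p["priority"] *= decay           # fade unused policies
        p["dormant"] = p["priority"] < dormant_threshold
        p["hits"] = 0                        # reset counter for next period
    return policies
```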
This paper examines the transformation and development of the Xinhui Chenpi industry under the rural revitalization strategy in China. The study highlights the significant growth of the industry, with the annual production of chenpi reaching approximately 7,000 tons and the total output value surpassing 26 billion yuan in 2024. The paper proposes strategies to foster sustainable growth in industries facing challenges such as inefficient production processes, inconsistent product quality, and a lack of policy awareness among operators. These strategies include optimizing support policies, enhancing regulatory frameworks, and leveraging digital technologies for brand building and market expansion. The research contributes to understanding the development trajectory of the Xinhui Chenpi industry and provides insights for policymakers and industry practitioners.
Against the backdrop of uneven pressure on the three-pillar pension system and a mismatch between pension funds and the demographic structure, a large number of employees in new forms of employment remain outside the pension security system, facing relatively high pension risks. Given their high job mobility, weak long-term planning ability, and large income fluctuations, and on the basis of maintaining the balance of the three-pillar pension system, individual pension schemes may become a breakthrough point for improving the pension situation of employees in new forms of employment. In line with the national goal of building a multi-level and multi-pillar old-age insurance system, to study the supplementary role of the third-pillar individual pension policy for employees in new forms of employment, this article constructs an evaluation system using the analytic hierarchy process and designs a questionnaire. After conducting a questionnaire survey in six cities in Shandong Province, the collected data are analyzed. It is found that the short-term effect of the current policy is that residents' awareness of pension issues is gradually improving and the participation rate is increasing, but behavior remains short-term and residents generally tend to avoid pension risks. Therefore, regarding the deepening of the individual pension system, the article puts forward four suggestions: (1) conduct comprehensive publicity through multiple channels and with emphasis on key points; (2) enhance the system's attractiveness according to the characteristics of the target population; (3) improve the public's awareness of pension planning and financial literacy; (4) strengthen the connection and transformation among the different pillars of the pension system.
Hydrogen energy is a crucial support for China's low-carbon energy transition. With the large-scale integration of renewable energy, the combination of hydrogen and integrated energy systems has become one of the most promising directions of development. This paper proposes an optimized scheduling model for a hydrogen-coupled electro-heat-gas integrated energy system (HCEHG-IES) using generative adversarial imitation learning (GAIL). The model aims to enhance renewable-energy absorption, reduce carbon emissions, and improve grid-regulation flexibility. First, the optimal scheduling problem of the HCEHG-IES under uncertainty is modeled as a Markov decision process (MDP). To overcome the limitations of conventional deep reinforcement learning algorithms (including long optimization time, slow convergence, and subjective reward design), this study augments the PPO algorithm by incorporating a discriminator network and expert data. The newly developed algorithm, termed GAIL, enables the agent to perform imitation learning from expert data. Based on this model, dynamic scheduling decisions are made in continuous state and action spaces, generating optimal energy-allocation and management schemes. Simulation results indicate that, compared with traditional reinforcement-learning algorithms, the proposed algorithm offers better economic performance. Guided by expert data, the agent avoids blind optimization, shortens the offline training time, and improves convergence performance. In the online phase, the algorithm enables flexible energy utilization, thereby promoting renewable-energy absorption and reducing carbon emissions.
This article studies an inshore-offshore fishery model with impulsive diffusion. The existence and global asymptotic stability of both the trivial periodic solution and the positive periodic solution are obtained. The complexity of this system is also analyzed. Moreover, the optimal harvesting policy is given for the inshore subpopulation, which includes the maximum sustainable yield and the corresponding harvesting effort.
This paper employs a stochastic endogenous growth model, extended to the case of a recursive utility function which can disentangle intertemporal substitution from risk aversion, to analyze productive government expenditure and optimal fiscal policy, with particular stress on the importance of factor income. First, the explicit solutions of the central planner's stochastic optimization problem are derived; the growth-maximizing and welfare-maximizing government expenditure policies are obtained, and whether they conflict or coincide depends upon intertemporal substitution. Second, the explicit solutions of the representative individual's stochastic optimization problem, which permits taxing capital income and labor income separately, are derived, and it is found that the effect of risk on growth crucially depends on the degree of risk aversion, the intertemporal elasticity of substitution, and the capital income share. Finally, a flexible optimal tax policy which can be internally adjusted to a certain extent is derived, and it is found that the distribution of factor income plays an important role in designing the optimal tax policy.
This paper aims to improve the performance of a class of distributed parameter systems through the optimal switching of actuators and controllers based on event-driven control. It is assumed that among the available multiple actuators, only one actuator can receive the control signal and be activated over an unfixed time interval, while the other actuators remain dormant. After incorporating a state observer into the event generator, the event-driven control loop and the minimum inter-event time are ultimately bounded. Based on event-driven state feedback control, time intervals of unfixed length can be obtained. The optimal switching policy is based on finite-horizon linear quadratic optimal control at the beginning of each time subinterval. A simulation example demonstrates the effectiveness of the proposed policy.
This paper studies the optimal policy for joint control of admission, routing, service, and jockeying in a queueing system consisting of two exponential servers in parallel. Jobs arrive according to a Poisson process. Upon each arrival, an admission/routing decision is made, and the accepted job is routed to one of the two servers, each of which is associated with a queue. After each service completion, a server has the option of serving a job from its own queue, serving a jockeying job from the other queue, or staying idle. The system performance is inclusive of the revenues from accepted jobs, the costs of holding jobs in queues, the service costs, and the job jockeying costs. To maximize the total expected discounted return, we formulate a Markov decision process (MDP) model for this system. The value iteration method is employed to characterize the optimal policy as a hedging point policy. Numerical studies verify the structure of the hedging point policy, which is convenient for implementing control actions in practice.
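The value iteration step can be sketched generically. The code below is standard discounted value iteration on a tabular MDP, not a reconstruction of the paper's specific queueing state space; P[a] and r[a] are illustrative names for the transition matrix and reward vector under action a.

```python
import numpy as np

def value_iteration(P, r, gamma=0.95, tol=1e-10):
    """Discounted value iteration of the kind used to characterize the
    optimal admission/routing/jockeying policy.  P[a] is the transition
    matrix under action a and r[a] the reward vector; a generic sketch,
    not the paper's specific queueing model."""
    n = P[0].shape[0]
    V = np.zeros(n)
    while True:
        # Bellman backup: Q[a, s] = r[a][s] + gamma * sum_s' P[a][s, s'] V[s']
        Q = np.array([r[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)   # values and greedy policy
        V = V_new
```

On a two-state toy MDP the fixed point and the greedy action per state are easy to verify by hand, which is how the hedging point structure would be read off in the paper's setting.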
The maintenance model of a simple repairable system is studied. We assume that there are two types of failure, namely type Ⅰ failure (repairable failure) and type Ⅱ failure (irreparable failure). Whenever a type Ⅰ failure occurs, the system is repaired immediately, which is failure repair (FR). Between the (n-1)th and the nth FR, the system is preventively repaired (PR) when its consecutive working time reaches λ^(n-1) T, where λ and T are specified values. Further, we assume that the system resumes working when the repair is finished and is replaced at the occurrence of the Nth type Ⅰ failure or the first type Ⅱ failure, whichever occurs first. In practice, the system degrades with the increasing number of repairs. That is, the consecutive working time of the system forms a decreasing generalized geometric process (GGP), whereas the successive repair time forms an increasing GGP. A simple bivariate policy (T, N) repairable model is introduced based on the GGP. An alternating search method is used to minimize the cost rate function C(N, T), and the optimal (T, N)^(*) is obtained. Finally, numerical cases are applied to demonstrate the reasonability of this model.
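The alternating search over the bivariate policy can be sketched as: fix N and search T, then fix T and search N, repeating until neither coordinate improves. The sketch below assumes discretized grids and a black-box `cost_rate(T, N)`; it does not reproduce the paper's specific GGP cost-rate expression.

```python
def optimize_bivariate_policy(cost_rate, T_grid, N_grid):
    """Alternating (coordinate-wise) search for the bivariate policy
    (T, N) minimizing the long-run cost rate C(N, T).  cost_rate is a
    generic stand-in for the paper's GGP-based cost function."""
    T, N = T_grid[0], N_grid[0]
    improved = True
    while improved:
        improved = False
        best_T = min(T_grid, key=lambda t: cost_rate(t, N))  # search T, N fixed
        if cost_rate(best_T, N) < cost_rate(T, N):
            T, improved = best_T, True
        best_N = min(N_grid, key=lambda n: cost_rate(T, n))  # search N, T fixed
        if cost_rate(T, best_N) < cost_rate(T, N):
            N, improved = best_N, True
    return T, N
```

Coordinate-wise search of this kind converges to a point where neither variable can improve alone; for a well-behaved cost surface that is the reported optimum (T, N)^(*).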
To investigate the equilibrium relationships between the volatility of capital and income, taxation, and ance in a stochastic control model, the uniqueness of the solution to this model was proved by using the method of dynamic programming under the introduction of a distributive disturbance and elastic labor supply. Furthermore, the effects of the two types of shocks on labor-leisure choice, the economic growth rate, and welfare were numerically analyzed, and the optimal tax policy was then derived.
Reinforcement learning encounters formidable challenges when tasked with intricate decision-making scenarios, primarily due to expansive parameterized action spaces and the vastness of the corresponding policy landscapes. To surmount these difficulties, we devise a practical structured action graph model augmented by guiding policies that integrate trust region constraints. Based on this, we propose guided proximal policy optimization with structured action graph (GPPO-SAG), which has demonstrated pronounced efficacy in refining policy learning and enhancing performance across sophisticated tasks characterized by parameterized action spaces. Rigorous empirical evaluations of our model have been performed on comprehensive gaming platforms, including the entire suite of StarCraft II and Hearthstone, yielding exceptionally favorable outcomes. Our source code is at https://github.com/sachiel321/GPPO-SAG.
At the beginning of 2025, the carbon price in China's national carbon market exhibited a continuous unilateral downward trajectory, a departure from the overall steady upward trend in carbon prices since the market launched in 2021. The analysis suggests that the primary reason for the recent decline in carbon prices is the reversal of supply and demand dynamics in the carbon market, with increased quota supply amid a sluggish economy. It is expected that downward pressure on carbon prices will persist in the short term, but with more industries being included and continued policy optimization and improvement, a rise in China's medium- to long-term carbon prices is highly probable. Recommendations for enterprises involved in carbon asset operations and management: first, refining carbon asset reserves and trading strategies; second, accelerating internal CCER project development; third, exploring carbon financial instrument applications; fourth, establishing and improving internal carbon pricing mechanisms; fifth, proactively planning for new industry inclusion.
The Unmanned Aerial Vehicle (UAV) stands as a burgeoning electric transportation carrier, holding substantial promise for the logistics sector. A reinforcement learning framework, Centralized-S Proximal Policy Optimization (C-SPPO), based on a centralized decision process and considering policy entropy (S), is proposed. The proposed framework aims to plan the best scheduling scheme with the objective of minimizing both the timeout of order requests and the flight impact of UAVs that may lead to conflicts. In this framework, the intents of the matching act are generated from the observations of UAV agents, and the ultimate conflict-free matching results are output under the guidance of a centralized decision maker. Concurrently, a pre-activation operation is introduced to further enhance cooperation among UAV agents. Simulation experiments based on real-world data from New York City are conducted. The results indicate that the proposed C-SPPO outperforms the baseline algorithms in Average Delay Time (ADT), Maximum Delay Time (MDT), Order Delay Rate (ODR), Average Flight Distance (AFD), and Flight Impact Ratio (FIR). Furthermore, the framework demonstrates scalability to scenarios of different sizes without requiring additional training.
Dynamic soaring, inspired by the wind-riding flight of birds such as albatrosses, is a biomimetic technique which leverages wind fields to enhance the endurance of unmanned aerial vehicles (UAVs). Achieving a precise soaring trajectory is crucial for maximizing energy efficiency during flight. Existing nonlinear programming methods are heavily dependent on the choice of initial values, which is hard to determine. Therefore, this paper introduces a deep reinforcement learning method based on a differentially flat model for dynamic soaring trajectory planning and optimization. Initially, the gliding trajectory is parameterized using Fourier basis functions, achieving a flexible trajectory representation with a minimal number of hyperparameters. Subsequently, the trajectory optimization problem is formulated as a dynamic interactive process of Markov decision-making. The hyperparameters of the trajectory are optimized using the Proximal Policy Optimization (PPO2) algorithm from deep reinforcement learning (DRL), reducing the strong reliance on initial value settings in the optimization process. Finally, a comparison between the proposed method and the nonlinear programming method reveals that the trajectory generated by the proposed approach is smoother while meeting the same performance requirements. Specifically, the proposed method achieves a 34% reduction in maximum thrust, a 39.4% decrease in maximum thrust difference, and a 33% reduction in maximum airspeed difference.
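The Fourier parameterization of one trajectory coordinate can be sketched as a truncated Fourier series whose coefficients are the hyperparameters the RL agent tunes. The coefficient layout and number of harmonics below are illustrative assumptions; the paper's flat-output representation may pack the parameters differently.

```python
import numpy as np

def fourier_trajectory(coeffs, period, t):
    """Evaluate one trajectory coordinate parameterized as a truncated
    Fourier series: coeffs = (a0, a1, b1, a2, b2, ...), period is the
    soaring cycle time.  The harmonic count is a design choice."""
    w = 2 * np.pi / period
    value = coeffs[0]
    for k in range(1, (len(coeffs) - 1) // 2 + 1):
        value += coeffs[2 * k - 1] * np.cos(k * w * t)
        value += coeffs[2 * k] * np.sin(k * w * t)
    return value
```

A handful of coefficients per coordinate then fully determines a smooth periodic trajectory, which is what keeps the PPO2 search space small.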
Funding (Cihui Subdistrict agriculture-culture-tourism study): sponsored by the Innovation and Entrepreneurship Training Program for College Students of Wuhan Polytechnic University (202510496165) and the Youth Project of the Education Department of Hubei Province (24Q195).
Funding (SSE 50ETF option hedging study): supported by the Foundation of the Key Laboratory of System Control and Information Processing, Ministry of Education, China (Scip20240111); the Aeronautical Science Foundation of China (Grant 2024Z071108001); and the Foundation of the Key Laboratory of Traffic Information and Safety of Anhui Higher Education Institutes, Anhui Sanlian University (KLAHEI18018).
Funding (EEPPO quadruped gait study): supported by the National Natural Science Foundation of China (No. 62103009).
Funding (dynamic demand vehicle routing study): supported by the National Natural Science Foundation of China (71461006, 71461007, 71761009); the Hainan Province Planning Program of Philosophy and Social Science (HNSK(YB)19-06, HNSK(YB)19-11); and a Key Program of the Hainan Educational Committee (hnky2019ZD-10).
Funding (policy-based TCoD network study): supported by the National 863 Program (No. 2007AA-701210).
Funding (robust policy iteration study): supported in part by the National Science Foundation (Nos. ECCS-2210320, CNS-2148304).
Funding: National Natural Science Foundation of China (No. 60534020); Cultivation Fund of the Key Scientific and Technical Innovation Project from the Ministry of Education of China (No. 706024); International Science Cooperation Foundation of Shanghai, China (No. 061307041)
Abstract: To achieve an intelligent, automated self-managing network, dynamic policy configuration and selection are needed. A given policy suits only a particular network environment; once the environment changes, that policy may no longer apply. Policy-based management should therefore include a similar "natural selection" process: useful policies are retained, while policies that have lost their effectiveness are eliminated. A policy optimization method based on evolutionary learning is proposed. Depending on how often policies are triggered (hit), the priority of a frequently hit policy is raised, a policy with a low hit rate receives a lower priority, and a policy that has gone unhit for a long time becomes dormant. Survival of the fittest among policies is thus realized, and the degree of self-learning in policy management is improved.
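The "survival of the fittest" priority update can be sketched in a few lines. All names, multipliers, and thresholds below are hypothetical, chosen only to illustrate the mechanism: frequently hit policies gain priority, rarely hit ones decay, and long-unhit ones go dormant.

```python
from dataclasses import dataclass

# Illustrative sketch (names and constants are hypothetical, not from
# the paper) of evolutionary policy priority management.
@dataclass
class Policy:
    name: str
    priority: float = 1.0
    hits: int = 0
    idle_rounds: int = 0
    dormant: bool = False

def evolve(policies, hit_names, dormancy_threshold=3):
    for p in policies:
        if p.name in hit_names:
            p.hits += 1
            p.idle_rounds = 0
            p.priority *= 1.2            # reward frequently hit policies
        else:
            p.idle_rounds += 1
            p.priority *= 0.9            # decay unused policies
            if p.idle_rounds >= dormancy_threshold:
                p.dormant = True         # long-unhit policies go dormant

pool = [Policy("qos-reroute"), Policy("rate-limit"), Policy("legacy-filter")]
for round_hits in [{"qos-reroute"}, {"qos-reroute", "rate-limit"},
                   {"qos-reroute"}, {"qos-reroute"}]:
    evolve(pool, round_hits)
print([(p.name, round(p.priority, 3), p.dormant) for p in pool])
```

The reward/decay factors and the dormancy threshold are the tuning knobs of such a scheme; the paper's method sets priorities from hit counts in the same spirit.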
Funding: Research on the Digital Transformation of the Xinhui Dried Tangerine Peel Industry under the Rural Revitalization Strategy (2023HSQX100).
Abstract: This paper examines the transformation and development of the Xinhui chenpi (dried tangerine peel) industry under China's rural revitalization strategy. The study highlights the industry's significant growth: annual chenpi production reached approximately 7,000 tons and total output value surpassed 26 billion yuan in 2024. The paper proposes strategies to foster sustainable growth in the face of challenges such as inefficient production processes, inconsistent product quality, and a lack of policy awareness among operators. These strategies include optimizing support policies, enhancing regulatory frameworks, and leveraging digital technologies for brand building and market expansion. The research contributes to understanding the development trajectory of the Xinhui chenpi industry and provides insights for policymakers and industry practitioners.
Funding: Funded by the National College Students' Innovation and Entrepreneurship Training Program (No. 202410456025); supported by the China Center of the Serbian Academy of Sciences and Arts and the Hong Kong Institute of Humanities and Natural Sciences and Technology.
Abstract: Against the backdrop of uneven pressure on the three-pillar pension system and a mismatch between pension funds and the demographic structure, a large number of employees in new forms of employment remain outside the pension security system and face relatively high pension risks. Given their high job mobility, weak long-term planning ability, and large income fluctuations, individual pension schemes may, while maintaining the balance of the three-pillar system, become a breakthrough point for improving the pension situation of these employees. In line with the national goal of building a multi-level, multi-pillar old-age insurance system, and to study the supplementary role of the third-pillar individual pension policy for employees in new forms of employment, this article constructs an evaluation system using the analytic hierarchy process (AHP) and designs a questionnaire. After a questionnaire survey in six cities in Shandong Province, the collected data are analyzed. The short-term effect of the current policy is that residents' awareness of pension issues is gradually improving and the participation rate is increasing, but behavior remains short-term and residents generally tend to avoid pension risks. Regarding the deepening of the individual pension system, the article therefore puts forward four suggestions: (1) conduct comprehensive publicity through multiple channels and with emphasis on key points; (2) enhance the system's attractiveness according to the characteristics of the target population; (3) improve the public's awareness of pension planning and financial literacy; (4) strengthen the connection and conversion among the different pillars of the pension system.
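The AHP step used to build the evaluation system can be illustrated compactly. In this toy example the criteria and the pairwise comparison matrix are invented, not the paper's actual indicators: weights are the normalized principal eigenvector of the comparison matrix, and a consistency ratio checks whether the judgments are usable.

```python
import numpy as np

# Minimal AHP sketch.  Criteria and comparison values are illustrative
# only (Saaty 1-9 scale); the paper's real indicator system differs.
criteria = ["awareness", "participation", "risk attitude"]
# A[i][j] = importance of criterion i relative to criterion j
A = np.array([[1.0,  2.0, 4.0],
              [0.5,  1.0, 2.0],
              [0.25, 0.5, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # normalized priority weights

n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)  # consistency index
RI = 0.58                             # Saaty's random index for n = 3
CR = CI / RI                          # CR < 0.1 means judgments are consistent
print(dict(zip(criteria, np.round(w, 3))), round(CR, 3))
```

The matrix above is perfectly consistent, so CR is essentially zero; real survey-derived matrices rarely are, which is why the CR check matters.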
Funding: Supported by State Grid Corporation Technology Project (No. 522437250003).
Abstract: Hydrogen energy is a crucial support for China's low-carbon energy transition. With the large-scale integration of renewable energy, combining hydrogen with integrated energy systems has become one of the most promising directions of development. This paper proposes an optimized scheduling model for a hydrogen-coupled electro-heat-gas integrated energy system (HCEHG-IES) using generative adversarial imitation learning (GAIL). The model aims to enhance renewable-energy absorption, reduce carbon emissions, and improve grid-regulation flexibility. First, the optimal scheduling problem of the HCEHG-IES under uncertainty is modeled as a Markov decision process (MDP). To overcome the limitations of conventional deep reinforcement learning algorithms, including long optimization time, slow convergence, and subjective reward design, this study augments the PPO algorithm with a discriminator network and expert data in the GAIL framework, which enables the agent to perform imitation learning from expert data. Based on this model, dynamic scheduling decisions are made in continuous state and action spaces, generating optimal energy-allocation and management schemes. Simulation results indicate that, compared with traditional reinforcement learning algorithms, the proposed algorithm offers better economic performance. Guided by expert data, the agent avoids blind optimization, shortens the offline training time, and improves convergence performance. In the online phase, the algorithm enables flexible energy utilization, thereby promoting renewable-energy absorption and reducing carbon emissions.
Abstract: This article studies an inshore-offshore fishery model with impulsive diffusion. The existence and global asymptotic stability of both the trivial periodic solution and the positive periodic solution are obtained, and the complexity of the system is analyzed. Moreover, the optimal harvesting policy is given for the inshore subpopulation, including the maximum sustainable yield and the corresponding harvesting effort.
Abstract: This paper employs a stochastic endogenous growth model, extended to a recursive utility function that can disentangle intertemporal substitution from risk aversion, to analyze productive government expenditure and optimal fiscal policy, with particular stress on the importance of factor income. First, explicit solutions of the central planner's stochastic optimization problem are derived; the growth-maximizing and welfare-maximizing government expenditure policies are obtained, and whether they conflict or coincide depends upon intertemporal substitution. Second, explicit solutions of the representative individual's stochastic optimization problem, in which capital income and labor income may be taxed separately, are derived; the effect of risk on growth is found to depend crucially on the degree of risk aversion, the intertemporal elasticity of substitution, and the capital income share. Finally, a flexible optimal tax policy that can be internally adjusted to a certain extent is derived, and the distribution of factor income is found to play an important role in designing the optimal tax policy.
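For orientation, in the deterministic benchmark of Barro (1990) that this class of models extends, output depends on productive government expenditure $g$, and the growth-maximizing expenditure share equals the output elasticity of public services. This well-known result is stated here only as background; it is not the paper's stochastic, recursive-utility formula:

```latex
y = A\,k^{1-\alpha} g^{\alpha}, \qquad
\left(\frac{g}{y}\right)^{*} = \alpha .
```

The paper's contribution is precisely that, once risk and recursive preferences are introduced, the welfare-maximizing share generally deviates from this benchmark in a way governed by intertemporal substitution.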
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 61174021 and 61104155); the Fundamental Research Funds for the Central Universities, China (Grant Nos. JUDCF13037 and JUSRP51322B); the Programme of Introducing Talents of Discipline to Universities, China (Grant No. B12018); and the Jiangsu Innovation Program for Graduates, China (Grant No. CXZZ13-0740)
Abstract: This paper aims to improve the performance of a class of distributed parameter systems through optimal switching of actuators and controllers based on event-driven control. It is assumed that, among the available actuators, only one can receive the control signal and be activated over an unfixed time interval, while the other actuators remain dormant. After a state observer is incorporated into the event generator, the event-driven control loop and the minimum inter-event time are ultimately bounded. Based on event-driven state feedback control, time intervals of unfixed length can be obtained. The optimal switching policy is based on finite-horizon linear quadratic optimal control at the beginning of each time subinterval. A simulation example demonstrates the effectiveness of the proposed policy.
Funding: Supported by the National Social Science Fund of China (19BGL100).
Abstract: This paper studies the optimal policy for joint control of admission, routing, service, and jockeying in a queueing system consisting of two exponential servers in parallel. Jobs arrive according to a Poisson process. Upon each arrival, an admission/routing decision is made, and an accepted job is routed to one of the two servers, each of which has its own queue. After each service completion, a server has the option of serving a job from its own queue, serving a jockeying job from the other queue, or staying idle. The system performance measure comprises the revenues from accepted jobs, the costs of holding jobs in queues, the service costs, and the job jockeying costs. To maximize the total expected discounted return, we formulate a Markov decision process (MDP) model for this system. The value iteration method is employed to characterize the optimal policy as a hedging-point policy. Numerical studies verify the structure of the hedging-point policy, which is convenient for implementing control actions in practice.
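The value-iteration step can be illustrated on a stripped-down, single-queue admission-control version of the problem. All parameters below are invented, and the paper's model additionally has two servers, routing, and jockeying; but even in this reduced setting the optimal admission rule emerges as a threshold on the queue length, i.e., a hedging point.

```python
import numpy as np

# Single-queue admission-control sketch (parameters invented).  A job
# arriving at rate lam can be accepted for lump-sum revenue r or
# rejected; queued jobs cost h per unit time; service rate is mu.
# Uniformization converts the continuous-time discounted MDP into a
# discrete-time fixed-point equation, solved by value iteration.
lam, mu, beta = 1.0, 1.2, 0.1     # arrival, service, discount rates
r, h, N = 5.0, 1.0, 20            # revenue, holding cost, queue cap

V = np.zeros(N + 1)
for _ in range(5000):
    Vn = np.empty_like(V)
    for x in range(N + 1):
        down = V[max(x - 1, 0)]                            # service completion
        up = max(r + V[x + 1], V[x]) if x < N else V[x]    # accept vs reject
        Vn[x] = (-h * x + lam * up + mu * down) / (beta + lam + mu)
    if np.max(np.abs(Vn - V)) < 1e-10:
        V = Vn
        break
    V = Vn

# The optimal rule is a threshold ("hedging point") on the queue length.
policy = [x < N and r + V[x + 1] >= V[x] for x in range(N + 1)]
threshold = policy.index(False)
print("reject once the queue reaches", threshold)
```

The threshold structure follows from the concavity of the value function in the queue length, which is what the paper's numerical studies verify in the richer two-server setting.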
Funding: Supported by the National Natural Science Foundation of China (61573014) and the Fundamental Research Funds for the Central Universities (JB180702).
Abstract: The maintenance model of a simple repairable system is studied. We assume that there are two types of failure: type Ⅰ (repairable) failure and type Ⅱ (irreparable) failure. Whenever a type Ⅰ failure occurs, the system is repaired immediately, which is a failure repair (FR). Between the (n-1)th and the nth FR, the system is preventively repaired (PR) when its consecutive working time reaches λ^(n-1)T, where λ and T are specified values. Further, we assume that the system resumes working when a repair is finished and is replaced at the Nth type Ⅰ failure or the first type Ⅱ failure, whichever occurs first. In practice, a system degrades as the number of repairs increases; that is, the consecutive working times of the system form a decreasing generalized geometric process (GGP), whereas the successive repair times form an increasing GGP. A simple bivariate policy (T, N) repairable model is introduced based on the GGP. An alternating search method is used to minimize the cost rate function C(N, T), and the optimal (T, N)^(*) is obtained. Finally, numerical cases demonstrate the reasonability of this model.
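The renewal-reward calculation behind such a cost rate can be sketched for the N-dimension alone. All numbers below are hypothetical, not the paper's: expected working times decrease geometrically, expected repair times grow geometrically, and a one-dimensional search over the replacement index N locates the minimum (the paper searches over (T, N) jointly and includes the PR schedule).

```python
# Hypothetical geometric-process numbers (not the paper's).  Expected
# working times decrease as mu / a**(n-1) with a > 1; expected repair
# times grow as nu * b**(n-1) with b > 1.  The long-run cost rate of a
# "replace at the N-th failure" policy follows from renewal-reward:
# (total expected cost per cycle) / (expected cycle length).
mu, a = 100.0, 1.05    # first working time and its decay ratio
nu, b = 5.0, 1.10      # first repair time and its growth ratio
c_rep, c_repl, reward = 20.0, 4000.0, 30.0   # repair cost/h, replacement cost, revenue/h

def cost_rate(N):
    work = sum(mu / a ** (n - 1) for n in range(1, N + 1))
    repair = sum(nu * b ** (n - 1) for n in range(1, N))
    cycle = work + repair
    # negative values mean the system earns a net profit per unit time
    return (c_rep * repair + c_repl - reward * work) / cycle

best_N = min(range(1, 60), key=cost_rate)
print(best_N, round(cost_rate(best_N), 2))
```

Replacing too early wastes the replacement cost; replacing too late accumulates ever-longer repairs from the degrading system, so the cost rate is minimized at an interior N.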
Abstract: To investigate the equilibrium relationships among the volatility of capital and income, taxation, and insurance in a stochastic control model, the uniqueness of the solution to the model is proved using the method of dynamic programming, with distributive disturbance and elastic labor supply introduced. Furthermore, the effects of the two types of shocks on the labor-leisure choice, the economic growth rate, and welfare are numerically analyzed, and the optimal tax policy is then derived.
Funding: Supported by the National Nature Science Foundation of China (Nos. 62073324, 6200629, 61771471 and 91748131) and in part by the InnoHK Project, China.
Abstract: Reinforcement learning encounters formidable challenges in intricate decision-making scenarios, primarily due to expansive parameterized action spaces and the vastness of the corresponding policy landscapes. To surmount these difficulties, we devise a practical structured action graph model augmented by guiding policies that integrate trust region constraints. On this basis, we propose guided proximal policy optimization with structured action graph (GPPO-SAG), which demonstrates pronounced efficacy in refining policy learning and enhancing performance on sophisticated tasks characterized by parameterized action spaces. Rigorous empirical evaluations have been performed on comprehensive gaming platforms, including the entire suite of StarCraft II and Hearthstone, yielding exceptionally favorable outcomes. Our source code is at https://github.com/sachiel321/GPPO-SAG.
Abstract: At the beginning of 2025, the carbon price in China's national carbon market exhibited a continuous unilateral downward trajectory, a departure from the overall steady upward trend since the market launched in 2021. The analysis suggests that the primary reason for the recent decline is a reversal of supply and demand dynamics in the carbon market, with increased quota supply amid a sluggish economy. Downward pressure on carbon prices is expected to persist in the short term, but with more industries being included and continued policy optimization and improvement, a rise in China's medium- to long-term carbon prices is highly probable. Recommendations for enterprises involved in carbon asset operations and management: first, refine carbon asset reserves and trading strategies; second, accelerate internal CCER project development; third, explore carbon financial instrument applications; fourth, establish and improve internal carbon pricing mechanisms; fifth, proactively plan for the inclusion of new industries.
Funding: Supported by the Chinese Special Research Project for Civil Aircraft (No. MJZ17N22); the National Natural Science Foundation of China (Nos. U2133207, U2333214); the China Postdoctoral Science Foundation (No. 2023M741687); and the National Social Science Fund of China (No. 22&ZD169).
Abstract: The Unmanned Aerial Vehicle (UAV) is a burgeoning electric transportation carrier holding substantial promise for the logistics sector. A reinforcement learning framework, Centralized-S Proximal Policy Optimization (C-SPPO), based on a centralized decision process and considering policy entropy (S), is proposed. The framework plans the best scheduling scheme with the objective of minimizing both the timeout of order requests and the flight impact of UAVs that may lead to conflicts. In this framework, matching intents are generated from the observations of UAV agents, and the final conflict-free matching results are output under the guidance of a centralized decision maker. Concurrently, a pre-activation operation is introduced to further enhance cooperation among UAV agents. Simulation experiments based on real-world data from New York City show that the proposed C-SPPO outperforms the baseline algorithms in the Average Delay Time (ADT), the Maximum Delay Time (MDT), the Order Delay Rate (ODR), the Average Flight Distance (AFD), and the Flight Impact Ratio (FIR). Furthermore, the framework scales to scenarios of different sizes without requiring additional training.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52372398 & 62003272).
Abstract: Dynamic soaring, inspired by the wind-riding flight of birds such as albatrosses, is a biomimetic technique that leverages wind fields to enhance the endurance of unmanned aerial vehicles (UAVs). Achieving a precise soaring trajectory is crucial for maximizing energy efficiency during flight. Existing nonlinear programming methods depend heavily on the choice of initial values, which are hard to determine. This paper therefore introduces a deep reinforcement learning method based on a differentially flat model for dynamic soaring trajectory planning and optimization. First, the gliding trajectory is parameterized using Fourier basis functions, achieving a flexible trajectory representation with a minimal number of hyperparameters. The trajectory optimization problem is then formulated as a dynamic, interactive Markov decision process, and the trajectory hyperparameters are optimized with the Proximal Policy Optimization (PPO2) algorithm from deep reinforcement learning (DRL), reducing the strong reliance on initial value settings. Finally, a comparison with the nonlinear programming method shows that the trajectory generated by the proposed approach is smoother while meeting the same performance requirements: the proposed method achieves a 34% reduction in maximum thrust, a 39.4% decrease in maximum thrust difference, and a 33% reduction in maximum airspeed difference.
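The Fourier-basis parameterization can be sketched for a single trajectory coordinate. The coefficient values below are arbitrary, chosen only for illustration: a periodic soaring loop of period Tp is written as a truncated Fourier series, so the position and its derivatives (which the differentially flat model needs) all come from the same small set of hyperparameters that the RL agent would tune.

```python
import numpy as np

# Fourier-basis trajectory sketch (coefficients are arbitrary, for
# illustration only): z(t) = a0 + sum_k [a_k cos(k w t) + b_k sin(k w t)].
Tp = 10.0                     # loop period, seconds
w = 2 * np.pi / Tp

def traj(t, a0, a, b):
    """Return position z(t) and its analytic derivative dz/dt."""
    z = a0 + sum(a[k] * np.cos((k + 1) * w * t) + b[k] * np.sin((k + 1) * w * t)
                 for k in range(len(a)))
    dz = sum((k + 1) * w * (-a[k] * np.sin((k + 1) * w * t)
                            + b[k] * np.cos((k + 1) * w * t))
             for k in range(len(a)))
    return z, dz

a0, a, b = 50.0, [10.0, 2.0], [0.0, 1.0]   # altitude-like coordinate
t = np.linspace(0.0, Tp, 201)
z, dz = traj(t, a0, a, b)
print(round(float(z[0]), 2), round(float(z[-1]), 2))  # periodic: ends where it starts
```

Because the representation is analytic, smoothness is built in and only the handful of coefficients (a0, a_k, b_k) need optimizing, which is what keeps the hyperparameter count minimal in the paper's formulation.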