In the era of the Internet of Things (IoT), the crowdsourcing process is driven by data collected by devices that interact with each other and with the physical world. As part of the IoT ecosystem, task assignment has become an important goal of the research community. Existing task assignment algorithms can be categorized as offline (performing well on datasets but struggling to achieve good real-life results) or online (working well with real-life input but difficult to optimize for in-depth assignments). This paper proposes a Cross-regional Online Task (CROT) assignment problem based on the online assignment model. Given the CROT problem, an Online Task Assignment across Regions based on Prediction (OTARP) algorithm is proposed. OTARP is a two-stage graph-driven bilateral assignment strategy that uses edge cloud and graph embedding to complete task assignments. The first stage uses historical data to make offline predictions, with a graph-driven method for offline bipartite graph matching. The second stage uses the bipartite graph to complete the online task assignment process. The paper accelerates the task assignment process through multiple assignment rounds and optimizes it by combining offline guidance with online assignment strategies. To encourage crowd workers to complete tasks across regions, an incentive strategy is designed to encourage worker movement; to avoid idling while workers move, a drop-by-rider problem is used to help workers accept more tasks, increase the number of assignments, and improve utility. Finally, comparison experiments on real datasets evaluate the proposed algorithm's performance in terms of crowd worker utility and the number of matches.
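The online half of the two-stage idea above can be illustrated with a minimal sketch: offline-predicted scores guide a greedy online matching of arriving tasks to free workers. All names and the score table here are hypothetical, not the paper's actual OTARP implementation.

```python
def online_assign(tasks, workers, score):
    """Greedily give each arriving task to the best still-free worker.

    score maps (task, worker) pairs to an offline-predicted utility;
    the pair is matched only when the predicted utility is positive.
    """
    free = set(workers)
    assignment = {}
    for t in tasks:                      # tasks arrive one at a time (online)
        if not free:
            break
        best = max(free, key=lambda w: score[(t, w)])
        if score[(t, best)] > 0:
            assignment[t] = best
            free.remove(best)
    return assignment
```

A real bipartite-matching stage would also revisit earlier choices (e.g., via augmenting paths); the greedy rule is just the simplest online baseline.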
Battlefield environment simulation is an important part of battlefield environment information support and needs to be built around the task process. At present, interoperability between battlefield environment simulation systems and command and control systems is still imperfect, and the traditional simulation data model cannot support war fighters' efficient and accurate understanding and analysis of battlefield environment information. A task-oriented battlefield environment simulation process model therefore needs to be constructed to effectively analyze the key information demands of the command and control system. The structured characteristics of tasks and the simulation process are analyzed, and a conceptual model of the simulation process is constructed using object-oriented methods. The data model and formal syntax of GeoBML are analyzed, and the logical model of the simulation process is constructed in a formal language. The object data structure of the simulation process is defined, and an object model of the simulation process that maps to tasks is constructed. Finally, battlefield environment simulation platform modules are designed and applied based on this model, verifying that the model can effectively express the real-time dynamic correlation between battlefield environment simulation data and operational tasks.
In dynamic, complex, and unbounded Grid systems, failures of Grid resources caused by malicious attacks and hardware faults are inevitable and adversely affect the execution of tasks. To mitigate this problem, a makespan- and reliability-driven (MRD) sufferage scheduling algorithm is designed and implemented. Unlike traditional Grid scheduling algorithms, it addresses the reliability of tasks as well as their makespan. Simulation results show that the MRD sufferage algorithm increases task reliability and can trade reliability off against makespan by adjusting the weighting parameter in its cost function, so it is well suited to complex Grid computing environments.
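The sufferage heuristic with a weighted cost can be sketched as follows. The cost function and all inputs are illustrative assumptions; the paper's exact cost form may differ, and a full sufferage scheduler would also update machine ready times after each assignment.

```python
def mrd_cost(t_completion, reliability, w):
    # Weighted cost trading completion time against failure chance.
    return w * t_completion + (1.0 - w) * (1.0 - reliability)

def mrd_sufferage(tasks, machines, ct, rel, w=0.5):
    """Repeatedly assign the task that would 'suffer' most if denied
    its best machine (largest gap between best and second-best cost)."""
    schedule, pending = {}, set(tasks)
    while pending:
        pick, target, worst = None, None, -1.0
        for t in pending:
            costs = sorted((mrd_cost(ct[t, m], rel[t, m], w), m) for m in machines)
            suff = costs[1][0] - costs[0][0] if len(costs) > 1 else costs[0][0]
            if suff > worst:
                pick, target, worst = t, costs[0][1], suff
        schedule[pick] = target
        pending.remove(pick)
    return schedule
```

Setting `w = 1` recovers a pure-makespan sufferage rule; `w = 0` schedules purely for reliability, matching the trade-off the abstract describes.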
In the present context of increasing social demand for natural science education, growing public awareness of environmental and biodiversity protection, and the elevation of ecological civilization to a state strategy, it is time to explore a new model for botany field practice. A new task-driven model for botany field practice will greatly deepen students' thinking about plants and nature, plants and the environment, and plants and ecological civilization, and will strengthen students' initiative and practical ability to protect and rationally utilize plant resources.
Aiming at the low solution accuracy and high decision pressure that a single agent faces in large-scale dynamic task allocation (DTA) with a high-dimensional decision space, this paper combines deep reinforcement learning (DRL) theory with a multi-agent architecture and proposes an improved Multi-Agent Deep Deterministic Policy Gradient algorithm (MADDPG-D2) with a dual experience replay pool and dual noise to improve DTA efficiency. Building on the traditional MADDPG algorithm, it introduces a double-noise mechanism to enlarge the action exploration space in the early stage of training and a double experience pool to improve data utilization; in addition, prior-knowledge techniques are applied during training to accelerate agent learning and solve the cold-start problem. Finally, MADDPG-D2 is compared and analyzed on a digital battlefield of ground-air confrontation. The experimental results show that agents trained by MADDPG-D2 achieve higher win rates and average rewards, use resources more reasonably, and better overcome the difficulty traditional single-agent algorithms have in high-dimensional decision spaces, demonstrating the superiority and soundness of the proposed multi-agent MADDPG-D2 algorithm for DTA.
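One plausible reading of the "dual experience pool" idea is two replay buffers split by reward, sampled in a fixed ratio. This is a hypothetical sketch: the paper's actual split criterion, ratio, and buffer management are not specified here.

```python
import random

class DualReplay:
    """Two experience pools: ordinary transitions and high-reward ones.
    Batches mix both, so rare high-reward experience is replayed more often."""

    def __init__(self, reward_threshold):
        self.threshold = reward_threshold
        self.ordinary, self.priority = [], []

    def add(self, transition, reward):
        pool = self.priority if reward >= self.threshold else self.ordinary
        pool.append(transition)

    def sample(self, batch_size, priority_frac=0.5):
        # Draw up to priority_frac of the batch from the high-reward pool,
        # then top up from the ordinary pool.
        n_pri = min(int(batch_size * priority_frac), len(self.priority))
        batch = random.sample(self.priority, n_pri)
        n_ord = min(batch_size - n_pri, len(self.ordinary))
        batch += random.sample(self.ordinary, n_ord)
        return batch
```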
This study presents a machine-learning-based method for predicting the fragment velocity distribution of warhead fragmentation under explosive loading. The fragment resultant velocities are correlated with key design parameters, including casing dimensions and detonation positions. The paper details the finite element analysis of fragmentation, the characterization of the dynamic hardening and fracture models, the generation of comprehensive datasets, and the training of the ANN model. The results show the influence of casing dimensions on fragment velocity distributions, with resultant velocity tending to increase with reduced thickness and increased length and diameter. The model's accurate predictions on both training and testing datasets demonstrate its potential for real-time prediction of fragmentation performance.
As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce the latency and energy consumption of edge computing, deep learning is used to learn task offloading strategies by interacting with the environment. In actual application scenarios, the users of edge computing change dynamically, and existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing that leverages meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem that minimizes both delay and energy consumption and models the task offloading strategy as a directed acyclic graph (DAG). We further propose a distributed edge computing adaptive task offloading algorithm rooted in MRL, which integrates multiple Markov decision processes (MDPs) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt offloading strategies across diverse network environments. To jointly optimize delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into the framework. Simulation results demonstrate the superiority of the proposed solution, achieving a 21% reduction in time delay and a 19% decrease in energy consumption compared with alternative task offloading schemes, while adapting swiftly to changes in the network environment.
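The selection step at the heart of NSGA-II is non-dominated sorting over the two objectives. A minimal sketch of extracting the Pareto front of (delay, energy) pairs, both minimized (illustrative only, not the paper's implementation):

```python
def pareto_front(points):
    """Return the non-dominated (delay, energy) pairs, in input order.

    a dominates b when a is no worse in both objectives and strictly
    better in at least one.
    """
    def dominates(a, b):
        return a[0] <= b[0] and a[1] <= b[1] and (a[0] < b[0] or a[1] < b[1])

    return [p for p in points
            if not any(dominates(q, p) for q in points)]
```

Full NSGA-II layers this into successive fronts and adds crowding-distance selection; the single-front extraction above is the core dominance test.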
Methane (CH4), the predominant component of natural gas and shale gas, is regarded as a promising carbon feedstock for chemical synthesis [1]. However, given the extreme stability of the CH4 molecule, it is challenging to achieve high activity and high selectivity simultaneously for target products under mild conditions, especially when synthesizing high-value C2+ chemicals such as ethanol [2]. Photocatalytic conversion of methane to ethanol is promising because it proceeds at ambient temperature and pressure. Currently, however, the apparent quantum efficiency (AQE) of solar-driven methane-to-ethanol conversion is generally below 0.5% [3,4], and the stability of photocatalysts remains inadequate, leaving substantial room for improvement.
One of the main challenges facing the smart grid is insufficient computing resources and intermittent energy supply for various distributed components (such as monitoring systems for renewable energy power stations). To address this, we propose an energy-harvesting-based task scheduling and resource management framework that provides robust, low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem over task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem; solutions are then derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle intermittency and instability. Finally, we design an energy management algorithm based on sample average approximation for edge computing servers to derive the optimal charging/discharging strategies, number of energy storage units, and renewable energy utilization. Simulation results show the efficiency and superiority of the proposed framework.
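Once the minimization problem is reduced to a knapsack form, the textbook 0/1 dynamic program applies. The item/offloading mapping below is purely illustrative, not the paper's formulation:

```python
def knapsack_max_value(values, weights, capacity):
    """Classic 0/1 knapsack DP: maximize total value under a weight cap.

    In an offloading reading, each 'item' is a candidate task to offload,
    its 'weight' a resource demand, its 'value' the energy saved.
    """
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downward so each item is taken at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```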
With the rapid expansion of social media, analyzing emotions and their causes in texts has gained significant importance. Emotion-cause pair extraction identifies causal relationships between emotions and their triggers within a text, facilitating a deeper understanding of expressed sentiments and their underlying reasons; this comprehension is crucial for informed strategic decisions in business and societal contexts. However, recent multi-task learning approaches often cannot simultaneously model extracted features and their interactions, or suffer inconsistent label predictions between emotion-cause pair extraction and the independent auxiliary tasks of emotion and cause extraction. To address these issues, this study proposes an emotion-cause pair extraction method that incorporates joint feature encoding and task alignment mechanisms. The model has two primary components. First, joint feature encoding simultaneously generates features for emotion-cause pairs and clauses, enhancing feature interactions between emotion clauses, cause clauses, and emotion-cause pairs. Second, task alignment reduces the labeling distance between emotion-cause pair extraction and the two auxiliary tasks, capturing deep semantic interactions among tasks. The method is evaluated on a Chinese benchmark corpus using 10-fold cross-validation, measuring precision, recall, and F1 score. Experimental results show an F1 score of 76.05%, surpassing the state of the art by 1.03%, with significant improvements in emotion-cause pair extraction (ECPE) and cause extraction (CE) over existing methods, validating the model's effectiveness. A limitation of the study lies in its data sources, which may restrict the generalizability of the findings.
The conventional Kibble–Zurek mechanism, which describes driven dynamics across critical points via the adiabatic-impulse scenario (AIS), has attracted broad attention. However, driven dynamics at a tricritical point, which has two independent relevant directions, has not been adequately studied. Here, we employ the time-dependent variational principle to study driven critical dynamics at a one-dimensional supersymmetric Ising tricritical point. For the relevant direction along the Ising critical line, the AIS apparently breaks down. Nevertheless, we find that the critical dynamics can still be described by finite-time scaling, in which the driving rate has dimension r_μ = z + 1/ν_μ, with z the dynamic exponent and ν_μ the correlation length exponent in this direction. For driving along the other direction, the driving rate has dimension r_p = z + 1/ν_p, with ν_p the corresponding correlation length exponent. Our work brings a new fundamental perspective on nonequilibrium critical dynamics near tricritical points, which could be realized in programmable quantum processors based on Rydberg atom arrays.
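For reference, the two scaling dimensions quoted in the abstract can be stated in standard notation, together with the generic Kibble–Zurek estimate for the frozen correlation length that such a dimension implies (the last relation is the textbook form, not a result specific to this paper):

```latex
[r_\mu] = z + \frac{1}{\nu_\mu}, \qquad
[r_p]   = z + \frac{1}{\nu_p}, \qquad
\hat{\xi} \sim r_\mu^{-1/(z + 1/\nu_\mu)} = r_\mu^{-\nu_\mu/(1 + z\nu_\mu)} .
```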
As the number of distributed power supplies on the user side increases, smart grids are becoming larger and more complex. These changes bring new security challenges, especially with the widespread adoption of data-driven control methods. This paper introduces a novel black-box false data injection attack (FDIA) method that exploits the measurement modules of distributed power supplies within smart grids, highlighting its effectiveness in bypassing conventional security measures. Unlike traditional methods that manipulate data within communication networks, this approach injects false data directly at the point of measurement, using a generative adversarial network (GAN) to generate stealthy attack vectors. The method requires no detailed knowledge of the target system, making it practical for real-world attacks. Experiments demonstrate the attack's impact on power system stability, highlighting the significant cybersecurity risks that data-driven algorithms introduce into smart grids.
Blockchain technology, based on decentralized data storage and distributed consensus, has become a promising solution for addressing data security risks and providing privacy protection in the Internet of Things (IoT) thanks to its tamper-proof and non-repudiation features. Although blockchain typically does not require the endorsement of third-party trust organizations, it must perform substantial mathematical computation to prevent malicious attacks, which imposes strict computation requirements on participating devices. Offloading the computation tasks that support blockchain consensus to edge service nodes or the cloud, while preserving data privacy for IoT applications, can effectively address the limited computation and energy resources of IoT devices. However, how to make reasonable offloading decisions for IoT devices remains an open issue. Exploiting the self-learning ability of reinforcement learning (RL), this paper proposes an RL-enabled Swarm Intelligence Optimization Algorithm (RLSIOA) that improves the quality of initial solutions and efficiently optimizes computation task offloading decisions. The algorithm considers the factors affecting the revenue IoT devices obtain from executing consensus algorithms (e.g., Proof-of-Work) and optimizes both the proportion of sub-tasks to offload and the scale of computing resources to rent from the edge and cloud so as to maximize device revenue. Experimental results show that RLSIOA obtains higher-quality offloading decisions at lower latency cost than representative benchmark algorithms.
Fog computing is a key enabling technology for 6G systems, providing the quick, reliable computing and data storage services required by several 6G applications. Artificial intelligence (AI) algorithms will be an integral part of 6G systems, and efficient fog-based task offloading will improve their performance and reliability. This paper focuses on Partial Offloading of a Task to Multiple Helpers (POMH), in which larger tasks are divided into smaller subtasks processed in parallel, expediting task completion. However, POMH presents challenges: tasks must be split and the subtasks scaled according to many interdependent factors so that all subtasks of a task finish simultaneously, preventing resource wastage. Additionally, applying matching theory to POMH yields dynamic preference profiles for helping devices as subtask sizes change, producing a difficult-to-solve externalities problem. This paper introduces a novel many-to-one matching-based algorithm designed to address the externalities problem and optimize resource allocation in POMH scenarios, together with a new time-efficient preference-profiling technique that further improves time optimization. The performance of the proposed technique is thoroughly evaluated against baseline schemes, and the simulation findings show that it outperforms existing methods in the literature, yielding a remarkable 52% reduction in task latency, particularly under high workloads.
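The standard starting point for many-to-one matching is a deferred-acceptance (Gale–Shapley style) procedure: subtasks propose to helpers in preference order, and each helper keeps only its best proposals up to capacity. This sketch ignores the externalities the paper tackles (preferences here are fixed, whereas in POMH they shift with subtask sizes); all inputs are hypothetical.

```python
def many_to_one_match(prefs, rank, capacity):
    """Deferred acceptance: subtasks propose, helpers tentatively accept.

    prefs:    subtask -> ordered list of helpers it prefers
    rank:     helper  -> {subtask: rank}, lower rank = more preferred
    capacity: helper  -> max number of subtasks it can host
    """
    matched = {h: [] for h in rank}
    nxt = {s: 0 for s in prefs}          # next helper each subtask will try
    free = list(prefs)
    while free:
        s = free.pop()
        if nxt[s] >= len(prefs[s]):
            continue                     # exhausted its list; stays unmatched
        h = prefs[s][nxt[s]]
        nxt[s] += 1
        matched[h].append(s)
        matched[h].sort(key=lambda x: rank[h][x])
        if len(matched[h]) > capacity[h]:
            free.append(matched[h].pop())  # bump the worst-ranked proposer
    return matched
```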
Current research on robot calibration can be roughly divided into two categories, both with inherent limitations. Model-based methods find it difficult to model and compensate the pose errors arising from configuration-dependent geometric and non-geometric error sources, whereas the accuracy of data-driven methods depends on a large amount of measurement data. Using a 5-DOF (degrees of freedom) hybrid machining robot as an exemplar, this study presents a model-data-driven approach to the calibration of robotic manipulators. An f-DOF realistic robot containing various error sources is viewed as a 6-DOF fictitious robot having error-free parameters but erroneous actuated/virtual joint motions. The calibration process involves four steps: (1) formulating the linear map relating the pose error twist to the joint motion errors; (2) parameterizing the joint motion errors using second-order polynomials in the nominal actuated joint variables; (3) identifying the polynomial coefficients using weighted least squares plus principal component analysis; and (4) compensating the compensable pose errors by updating the nominal actuated joint variables. The merit of this approach is that it compensates pose errors caused by configuration-dependent geometric and non-geometric error sources using finitely many measurement configurations. Experimental studies on a prototype machine illustrate the effectiveness of the proposed approach.
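Step (2)-(3) can be illustrated for a single joint: fit a second-order polynomial error model to measured joint errors by least squares. This is a one-joint toy version under invented data; the paper fits all joints together with weighted least squares plus PCA.

```python
import numpy as np

def fit_joint_error(q, err, degree=2):
    """Least-squares fit of err(q) ~ c2*q^2 + c1*q + c0.

    q:   nominal actuated joint values at the measurement configurations
    err: measured joint motion errors at those configurations
    Returns coefficients highest power first (numpy polynomial order).
    """
    A = np.vander(np.asarray(q, dtype=float), degree + 1)  # columns: q^2, q, 1
    coef, *_ = np.linalg.lstsq(A, np.asarray(err, dtype=float), rcond=None)
    return coef
```

With the coefficients identified, step (4) amounts to subtracting the predicted error from the nominal joint command at each configuration.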
Driven critical dynamics in quantum phase transitions holds significant theoretical importance and has practical applications in fast-developing quantum devices. While scaling corrections have been shown to play important roles in fully characterizing equilibrium quantum criticality, their impact on nonequilibrium critical dynamics has not been extensively explored. In this work, we investigate driven critical dynamics in a two-dimensional quantum Heisenberg model. We find that in this model the scaling corrections arising from both finite system size and finite driving rate must be incorporated into the finite-time scaling form to properly describe the nonequilibrium scaling behavior. In addition, improved scaling relations are obtained from the expansion of the full scaling form. We numerically verify these scaling forms and improved scaling relations for different starting states using a nonequilibrium quantum Monte Carlo algorithm.
Despite significant progress in the Prognostics and Health Management (PHM) domain using systems that learn patterns from data, machine learning (ML) still faces challenges of limited generalization and weak interpretability. A promising way to overcome these challenges is to embed domain knowledge into the ML pipeline, enriching the model with additional pattern information. In this paper, we review the latest developments in PHM under the concept of Knowledge-Driven Machine Learning (KDML). We propose a hierarchical framework that defines KDML in PHM in terms of scientific paradigms, knowledge sources, knowledge representations, and knowledge embedding methods. Using this framework, we examine current research to demonstrate how various forms of knowledge can be integrated into the ML pipeline and provide a roadmap to specific usage. We then present several case studies illustrating concrete implementations of KDML in PHM, covering inductive experience, physical models, and signal processing, and analyze the gains in generalization capability and interpretability that KDML can achieve. Finally, we discuss the challenges, potential applications, and usage recommendations of KDML in PHM, with particular focus on the interpretability needed for trustworthy deployment of artificial intelligence in PHM.
Acute lung injury (ALI) is characterized by excessive reactive oxygen species (ROS) levels and an inflammatory response in the lung. Scavenging ROS can suppress the excessive inflammatory response and thereby treat ALI. Herein, we designed a novel nanozyme (P@Co) comprising polydopamine (PDA) nanoparticles (NPs) loaded with ultra-small Co which, combined with near-infrared (NIR) irradiation, efficiently scavenges intracellular ROS and suppresses inflammatory responses against ALI. In lipopolysaccharide (LPS)-induced macrophages, P@Co+NIR showed excellent antioxidant and anti-inflammatory capacities, lowering intracellular ROS levels, decreasing the expression of interleukin-6 (IL-6) and tumor necrosis factor-α (TNF-α), and inducing directional M2 polarization of macrophages. Significantly, it reduced acute lung inflammation, relieved diffuse alveolar damage, and up-regulated heat shock protein 70 (HSP70) expression, producing a synergistically enhanced ALI therapeutic effect. This work offers a novel strategy for the clinical treatment of ROS-related diseases.
With the development of vehicular networks and the construction of roadside units, Vehicular Ad Hoc Networks (VANETs) are increasingly promoting cooperative computing among vehicles. Vehicular edge computing (VEC) mitigates resource constraints by enabling task offloading to edge cloud infrastructure, reducing the computational burden on connected vehicles. However, this sharing-based, distributed computing paradigm requires ensuring the credibility and reliability of the various computation nodes, and existing vehicular edge computing platforms have not adequately considered vehicle misbehavior. We propose a practical task offloading algorithm based on reputation assessment to address task offloading in vehicular edge computing under an unreliable environment, integrating deep reinforcement learning with reputation management. Simulation experiments conducted with Veins demonstrate the feasibility and effectiveness of the proposed method.
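A minimal flavor of reputation-gated offloading: update each node's reputation as an exponential moving average of observed task outcomes, and only offload to nodes clearing a trust threshold. This is an illustrative stand-in for the paper's assessment model, not its actual algorithm; all parameters are assumptions.

```python
def update_reputation(rep, node, success, alpha=0.2):
    """EMA of observed outcomes; unknown nodes start at a neutral 0.5."""
    obs = 1.0 if success else 0.0
    rep[node] = (1.0 - alpha) * rep.get(node, 0.5) + alpha * obs
    return rep[node]

def choose_offload_target(rep, candidates, min_rep=0.4):
    # Offload only to nodes whose reputation clears the trust threshold;
    # among those, pick the most reputable.
    trusted = [n for n in candidates if rep.get(n, 0.5) >= min_rep]
    return max(trusted, key=lambda n: rep[n]) if trusted else None
```

In the paper's setting the selection policy is learned by deep reinforcement learning rather than the fixed argmax rule shown here.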
Funding (CROT/OTARP paper): Supported in part by the National Natural Science Foundation of China under Grants 62072392, 61822602, 61772207, 61802331, 61602399, 61702439, 61773331, and 62062034; the China Postdoctoral Science Foundation under Grants 2019T120732 and 2017M622691; the Natural Science Foundation of Shandong Province under Grant ZR2016FM42; the Major Scientific and Technological Innovation Projects of Shandong Province under Grant 2019JZZY020131; and the Key Projects of the Shandong Natural Science Foundation under Grant ZR2020KF019.
Funding (battlefield environment simulation paper): The National Natural Science Foundation of China (41271393).
Abstract: In dynamic, complex, and unbounded Grid systems, failures of Grid resources caused by malicious attacks and hardware faults are inevitable and have an adverse effect on the execution of tasks. To mitigate this problem, a makespan- and reliability-driven (MRD) sufferage scheduling algorithm is designed and implemented. Unlike traditional Grid scheduling algorithms, this algorithm addresses the reliability of tasks as well as their makespan. Simulation results show that the MRD sufferage scheduling algorithm can increase task reliability and can trade off reliability against makespan by adjusting the weighting parameter in its cost function, so it is well suited to complex Grid computing environments.
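A minimal sketch of a sufferage-style scheduler with a weighted cost is shown below. The weighting form `w * finish_time + (1 - w) * fail_prob` is an assumption for illustration, not the paper's exact cost function.

```python
def mrd_sufferage(exec_time, fail_prob, w=0.7):
    """Sufferage scheduling with a weighted makespan/reliability cost.
    exec_time[t][m]: runtime of task t on machine m; fail_prob[m]:
    failure probability of machine m. At each step, schedule the task
    that would 'suffer' most if denied its best machine."""
    n_tasks, n_mach = len(exec_time), len(exec_time[0])
    ready = [0.0] * n_mach          # machine ready times
    schedule = {}
    undone = set(range(n_tasks))
    while undone:
        best_task, best_mach, best_suff = None, None, -1.0
        for t in undone:
            costs = sorted(
                (w * (ready[m] + exec_time[t][m]) + (1 - w) * fail_prob[m], m)
                for m in range(n_mach))
            suff = costs[1][0] - costs[0][0]  # second-best minus best cost
            if suff > best_suff:
                best_task, best_mach, best_suff = t, costs[0][1], suff
        schedule[best_task] = best_mach
        ready[best_mach] += exec_time[best_task][best_mach]
        undone.remove(best_task)
    return schedule, max(ready)
```

Setting `w = 1.0` recovers pure makespan-driven sufferage; lowering `w` shifts work toward more reliable machines, mirroring the trade-off described in the abstract.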
Funding: Supported by the Special Fund for Reform of the Teaching Model of Huanggang Normal University (2016CK06, 2018CE42).
Abstract: In the present context of increasing social demand for natural science education, growing public awareness of environmental and biodiversity protection, and the elevation of ecological civilization to a national strategy, it is the right time to explore a new botany field practice model. A new task-driven model for botany field practice will greatly deepen students' thinking about plants and nature, plants and the environment, and plants and ecological civilization, and will inevitably enhance students' initiative and practical ability to protect and rationally utilize plant resources.
Funding: This research was funded by the National Natural Science Foundation of China, Grant Number 62106283.
Abstract: To address the low solution accuracy and high decision pressure of a single agent facing large-scale dynamic task allocation (DTA) in a high-dimensional decision space, this paper combines deep reinforcement learning (DRL) theory with a multi-agent architecture and proposes an improved Multi-Agent Deep Deterministic Policy Gradient algorithm (MADDPG-D2) with a dual experience replay pool and dual noise to improve the efficiency of DTA. The algorithm builds on the traditional Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm: a double noise mechanism is introduced to enlarge the action exploration space in the early stage of training, and a double experience pool is introduced to improve data utilization. At the same time, to accelerate the training of the agents and solve the cold-start problem, prior-knowledge techniques are applied during training. Finally, the MADDPG-D2 algorithm is compared and analyzed on a digital battlefield of ground-air confrontation. The experimental results show that agents trained by MADDPG-D2 achieve higher win rates and average rewards, utilize resources more reasonably, and better overcome the difficulty that traditional single-agent algorithms face in high-dimensional decision spaces. The MADDPG-D2 algorithm based on a multi-agent architecture proposed in this paper is therefore both effective and well founded for DTA.
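One plausible reading of a "dual experience pool" is an ordinary pool plus a high-reward pool that are sampled together. The sketch below illustrates that idea only; it is not the MADDPG-D2 authors' design, and the reward threshold and mixing ratio are hypothetical parameters.

```python
import random
from collections import deque

class DualReplayBuffer:
    """Two replay pools: an ordinary pool and a high-reward ('elite')
    pool; training batches mix samples from both to improve the reuse
    of informative transitions."""
    def __init__(self, capacity=10000, reward_threshold=0.0, mix=0.5):
        self.ordinary = deque(maxlen=capacity)
        self.elite = deque(maxlen=capacity)
        self.reward_threshold = reward_threshold
        self.mix = mix  # fraction of each batch drawn from the elite pool

    def add(self, transition):
        _state, _action, reward, _next_state = transition
        pool = self.elite if reward > self.reward_threshold else self.ordinary
        pool.append(transition)

    def sample(self, batch_size):
        n_elite = min(int(batch_size * self.mix), len(self.elite))
        n_ord = min(batch_size - n_elite, len(self.ordinary))
        return (random.sample(list(self.elite), n_elite)
                + random.sample(list(self.ordinary), n_ord))
```

In an actor-critic loop, `sample()` would feed the critic update in place of a single uniform replay buffer.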
基金supported by Poongsan-KAIST Future Research Center Projectthe fund support provided by the National Research Foundation of Korea(NRF)grant funded by the Korea government(MSIT)(Grant No.2023R1A2C2005661)。
Abstract: This study presents a machine-learning-based method for predicting the fragment velocity distribution in warhead fragmentation under explosive loading conditions. The fragment resultant velocities are correlated with key design parameters, including casing dimensions and detonation positions. The paper details the finite element analysis of fragmentation, the characterization of the dynamic hardening and fracture models, the generation of comprehensive datasets, and the training of the ANN model. The results show the influence of casing dimensions on fragment velocity distributions, with resultant velocity tending to increase with reduced thickness and with increased length and diameter. The model's predictive capability is demonstrated through accurate predictions on both the training and testing datasets, showing its potential for real-time prediction of fragmentation performance.
基金funded by the Fundamental Research Funds for the Central Universities(J2023-024,J2023-027).
Abstract: As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce the latency and energy consumption of edge computing, deep learning is used to learn task offloading strategies by interacting with the entities involved. In actual application scenarios, the users of edge computing change dynamically, and existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing that leverages the potential of meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem aimed at minimizing both delay and energy consumption. We model the task offloading strategy using a directed acyclic graph (DAG). Furthermore, we propose a distributed edge computing adaptive task offloading algorithm rooted in MRL. This algorithm integrates multiple Markov decision processes (MDPs) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies responsively across diverse network environments. To achieve joint optimization of delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into our framework. Simulation results demonstrate the superiority of our proposed solution, achieving a 21% reduction in time delay and a 19% decrease in energy consumption compared to alternative task offloading schemes. Moreover, our scheme adapts rapidly to changes across a variety of network environments.
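The NSGA-II component rests on fast non-dominated sorting, which partitions candidate (delay, energy) trade-offs into Pareto fronts. A compact sketch of the generic textbook algorithm (not the paper's code) follows:

```python
def fast_non_dominated_sort(points):
    """Fast non-dominated sort (NSGA-II): split minimization points
    into successive Pareto fronts. points: list of (delay, energy)."""
    n = len(points)
    dominates = lambda a, b: (all(x <= y for x, y in zip(a, b))
                              and any(x < y for x, y in zip(a, b)))
    dominated_by = [0] * n            # how many points dominate i
    dominee = [[] for _ in range(n)]  # points that i dominates
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominee[i].append(j)
            elif dominates(points[j], points[i]):
                dominated_by[i] += 1
    fronts = [[i for i in range(n) if dominated_by[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominee[i]:
                dominated_by[j] -= 1
                if dominated_by[j] == 0:  # only dominated by earlier fronts
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]  # drop trailing empty front
```

Front 0 contains the non-dominated delay/energy trade-offs from which a final offloading policy would be selected.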
基金the support from the National Natural Science Foundation of China(52202306)Program from Guangdong Introducing Innovative and Entrepreneurial Teams(2019ZT08L101 and RCTDPT-2020-001)+1 种基金Shenzhen Key Laboratory of Eco-materials and Renewable Energy(ZDSYS20200922160400001)the Provincial Talent Plan of Guangdong(2023TB0012).
Abstract: Methane (CH4), the predominant component of natural gas and shale gas, is regarded as a promising carbon feedstock for chemical synthesis [1]. However, given the extreme stability of the CH4 molecule, it is quite challenging to simultaneously achieve high activity and selectivity for target products under mild conditions, especially when synthesizing high-value C2+ chemicals such as ethanol [2]. The photocatalytic conversion of methane to ethanol is a promising route to achieving this transformation under ambient temperature and pressure. Currently, the apparent quantum efficiency (AQE) of solar-driven methane-to-ethanol conversion is generally below 0.5% [3,4]. Furthermore, the stability of photocatalysts remains inadequate, leaving substantial room for improvement.
基金supported in part by the National Natural Science Foundation of China under Grant No.61473066in part by the Natural Science Foundation of Hebei Province under Grant No.F2021501020+2 种基金in part by the S&T Program of Qinhuangdao under Grant No.202401A195in part by the Science Research Project of Hebei Education Department under Grant No.QN2025008in part by the Innovation Capability Improvement Plan Project of Hebei Province under Grant No.22567637H
Abstract: Recently, one of the main challenges facing the smart grid is insufficient computing resources and intermittent energy supply for its various distributed components (such as monitoring systems for renewable energy power stations). To solve this problem, we propose an energy-harvesting-based task scheduling and resource management framework that provides robust and low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem over task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem. Solutions are then derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle intermittency and instability problems. Finally, we design an energy management algorithm based on sample average approximation for edge computing servers to derive the optimal charging/discharging strategies, the number of energy storage units, and the renewable energy utilization. Simulation results show the efficiency and superiority of the proposed framework.
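The knapsack reduction can be illustrated with the classic 0/1 dynamic program, treating offloading gains as item values and resource demands as item weights. This shows the standard algorithm, not the paper's exact transformation.

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack by dynamic programming: maximum total value
    achievable within the capacity. O(n * capacity) time, O(capacity)
    space using a 1-D table."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities in reverse so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

Here `values[i]` would be the energy saved by offloading task i and `weights[i]` its resource demand, with `capacity` the edge server's budget.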
Abstract: With the rapid expansion of social media, analyzing emotions and their causes in texts has gained significant importance. Emotion-cause pair extraction identifies causal relationships between emotions and their triggers within a text, facilitating a deeper understanding of expressed sentiments and their underlying reasons. This comprehension is crucial for making informed strategic decisions in various business and societal contexts. However, recent approaches employing multi-task learning frameworks often face challenges such as the inability to simultaneously model extracted features and their interactions, or inconsistencies in label prediction between emotion-cause pair extraction and independent assistant tasks such as emotion extraction and cause extraction. To address these issues, this study proposes an emotion-cause pair extraction method that incorporates joint feature encoding and a task alignment mechanism. The model consists of two primary components. First, joint feature encoding simultaneously generates features for emotion-cause pairs and clauses, enhancing the feature interactions among emotion clauses, cause clauses, and emotion-cause pairs. Second, the task alignment technique reduces the labeling distance between emotion-cause pair extraction and the two assistant tasks, capturing deep semantic interactions among the tasks. The proposed method is evaluated on a Chinese benchmark corpus using 10-fold cross-validation, assessing precision, recall, and F1 score. Experimental results demonstrate that the model achieves an F1 score of 76.05%, surpassing the state of the art by 1.03%. The proposed model exhibits significant improvements in emotion-cause pair extraction (ECPE) and cause extraction (CE) compared to existing methods, validating its effectiveness. This research introduces a novel approach based on joint feature encoding and task alignment mechanisms, contributing to advancements in emotion-cause pair extraction. However, the study's data sources are a limitation, potentially restricting the generalizability of the findings.
基金supported by the National Natural Science Foundation of China(Grant Nos.12222515,12075324 for S.Yin,and 12347107,1257-4160 for Y.F.Jiang)the National Key R&D Program of China(Grant No.2022YFA1402703 for Y.F.Jiang)+1 种基金the Science and Technology Projects in Guangdong Province(Grant No.2021QN02X561 for S.Yin)the Science and Technology Projects in Guangzhou City(Grant No.2025A04J5408 for S.Yin)。
Abstract: The conventional Kibble–Zurek mechanism, which describes driven dynamics across critical points based on the adiabatic-impulse scenario (AIS), has attracted broad attention. However, driven dynamics at a tricritical point with two independent relevant directions has not been adequately studied. Here, we employ the time-dependent variational principle to study driven critical dynamics at a one-dimensional supersymmetric Ising tricritical point. For the relevant direction along the Ising critical line, the AIS apparently breaks down. Nevertheless, we find that the critical dynamics can still be described by finite-time scaling, in which the driving rate has a dimension of r_μ = z + 1/ν_μ, with z and ν_μ being the dynamic exponent and the correlation length exponent in this direction, respectively. For driven dynamics along the other direction, the driving rate has a dimension of r_p = z + 1/ν_p, with ν_p being the other correlation length exponent. Our work brings a new fundamental perspective to nonequilibrium critical dynamics near a tricritical point, which could be realized in programmable quantum processors based on Rydberg atom systems.
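A generic finite-time scaling ansatz consistent with the quoted dimensions reads as follows; the observable $M$, its scaling dimension $x_M$, and the scaling function $f$ are illustrative placeholders, not taken from the paper:

```latex
% Finite-time scaling for driving the relevant field g_\mu at rate r:
% the rate r carries dimension r_\mu, so the combination
% g_\mu r^{-1/(\nu_\mu r_\mu)} is dimensionless.
M(g_\mu, r) = r^{\,x_M / r_\mu}\,
  f\!\left( g_\mu\, r^{-1/(\nu_\mu r_\mu)} \right),
\qquad r_\mu = z + \frac{1}{\nu_\mu}.
```

The second relevant direction follows the same form with $\nu_p$ and $r_p = z + 1/\nu_p$ substituted.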
基金supported by the National Natural Science Foundation of China(62302234).
Abstract: As the number of distributed power supplies on the user side increases, smart grids are becoming larger and more complex. These changes bring new security challenges, especially with the widespread adoption of data-driven control methods. This paper introduces a novel black-box false data injection attack (FDIA) method that exploits the measurement modules of distributed power supplies within smart grids, highlighting its effectiveness in bypassing conventional security measures. Unlike traditional methods that focus on data manipulation within communication networks, this approach injects false data directly at the point of measurement, using a generative adversarial network (GAN) to generate stealthy attack vectors. The method requires no detailed knowledge of the target system, making it practical for real-world attacks. The attack's impact on power system stability is demonstrated through experiments, highlighting the significant cybersecurity risks that data-driven algorithms introduce into smart grids.
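The benchmark such stealthy vectors must defeat is the classical residual test of least-squares state estimation: any injection of the form a = Hc shifts the estimated state but leaves the residual unchanged. The pure-Python sketch below illustrates this textbook property for a tiny 2-state DC model; it is not the paper's GAN-based method, and the 3-measurement system is hypothetical.

```python
def lstsq_residual(H, z):
    """Residual norm ||z - H x_hat|| of least-squares state estimation
    for a 2-state system, via the normal equations (H^T H) x = H^T z."""
    a = sum(h[0] * h[0] for h in H); b = sum(h[0] * h[1] for h in H)
    d = sum(h[1] * h[1] for h in H)
    p = sum(h[0] * zi for h, zi in zip(H, z))
    q = sum(h[1] * zi for h, zi in zip(H, z))
    det = a * d - b * b
    x = ((d * p - b * q) / det, (a * q - b * p) / det)
    res = [zi - (h[0] * x[0] + h[1] * x[1]) for h, zi in zip(H, z)]
    return sum(e * e for e in res) ** 0.5

H = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]   # measurement matrix rows
z = [1.0, 2.0, 3.0]                         # consistent with state (1, 2)
c = (0.5, -0.2)                             # attacker's shift of the state
a_vec = [h[0] * c[0] + h[1] * c[1] for h in H]        # stealthy a = H c
z_attacked = [zi + ai for zi, ai in zip(z, a_vec)]
# Both residuals are (numerically) zero, so a residual-threshold
# detector cannot distinguish the attacked measurements.
```

A GAN-based black-box attack, as in the abstract, learns to approximate such consistent vectors without knowing H explicitly.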
基金supported by the Project of Science and Technology Research Program of Chongqing Education Commission of China(No.KJZD-K202401105)High-Quality Development Action Plan for Graduate Education at Chongqing University of Technology(No.gzljg2023308,No.gzljd2024204)+1 种基金the Graduate Innovation Program of Chongqing University of Technology(No.gzlcx20233197)Yunnan Provincial Key R&D Program(202203AA080006).
Abstract: Blockchain technology, based on decentralized data storage and distributed consensus design, has become a promising solution for addressing data security risks and providing privacy protection in the Internet of Things (IoT), thanks to its tamper-proof and non-repudiation features. Although blockchain typically does not require the endorsement of third-party trust organizations, it usually must perform substantial mathematical computation to prevent malicious attacks, which places strict requirements on the computation resources of participating devices. Offloading the computation tasks required to support blockchain consensus to edge service nodes or the cloud, while providing data privacy protection for IoT applications, can effectively address the limited computation and energy resources of IoT devices. However, how to make reasonable offloading decisions for IoT devices remains an open issue. Leveraging the excellent self-learning ability of reinforcement learning (RL), this paper proposes an RL-enabled Swarm Intelligence Optimization Algorithm (RLSIOA) that aims to improve the quality of initial solutions and achieve efficient optimization of computation task offloading decisions. The algorithm considers the various factors that may affect the revenue obtained by IoT devices executing consensus algorithms (e.g., Proof-of-Work), and it optimizes the proportion of sub-tasks to be offloaded and the scale of computing resources to be rented from the edge and cloud so as to maximize device revenue. Experimental results show that RLSIOA obtains higher-quality offloading decisions at lower latency costs than representative benchmark algorithms.
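The RL ingredient can be illustrated with a toy single-state Q-learning loop that picks a discrete offload ratio to maximize revenue. This is a generic bandit-style sketch under assumed parameters, not RLSIOA itself (which couples RL with swarm intelligence search).

```python
import random

def q_learning_offload(revenue, n_actions=5, episodes=2000,
                       alpha=0.2, eps=0.2, seed=0):
    """Tabular, single-state Q-learning over discrete offload ratios
    0, 0.25, ..., 1.0. revenue(ratio) is the device's payoff for
    offloading that fraction of its consensus workload."""
    rng = random.Random(seed)
    q = [0.0] * n_actions
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = rng.randrange(n_actions)
        else:
            a = max(range(n_actions), key=q.__getitem__)
        ratio = a / (n_actions - 1)
        q[a] += alpha * (revenue(ratio) - q[a])  # bandit-style update
    return max(range(n_actions), key=q.__getitem__) / (n_actions - 1)
```

With a revenue curve peaking at a 75% offload ratio, the learned policy converges to that ratio.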
基金supported and funded by theDeanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University(IMSIU)(grant number IMSIU-RP23082).
Abstract: Fog computing is a key enabling technology of 6G systems, as it provides the quick and reliable computing and data storage services required by several 6G applications. Artificial intelligence (AI) algorithms will be an integral part of 6G systems, and efficient task offloading techniques using fog computing will improve their performance and reliability. This paper focuses on the scenario of Partial Offloading of a Task to Multiple Helpers (POMH), in which larger tasks are divided into smaller subtasks and processed in parallel, thereby expediting task completion. However, POMH presents challenges such as breaking tasks into subtasks and scaling these subtasks based on many interdependent factors to ensure that all subtasks of a task finish simultaneously, preventing resource wastage. Additionally, applying matching theory to POMH scenarios results in dynamic preference profiles of the helping devices due to changing subtask sizes, yielding an externalities problem that is difficult to solve. This paper introduces a novel many-to-one matching-based algorithm designed to address the externalities problem and optimize resource allocation within POMH scenarios. Additionally, we propose a new time-efficient preference profiling technique that further enhances time optimization in POMH scenarios. The performance of the proposed technique is thoroughly evaluated against alternative baseline schemes, revealing many advantages of the proposed approach. The simulation findings show that the proposed matching-based offloading technique outperforms existing methodologies in the literature, yielding a remarkable 52% reduction in task latency, particularly under high workloads.
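A standard starting point for many-to-one matching is the deferred-acceptance algorithm with helper capacities, sketched below. This is a textbook baseline without externalities (fixed preference lists), not the paper's externality-aware algorithm; the subtask/helper names are hypothetical.

```python
def many_to_one_match(task_prefs, helper_prefs, capacity):
    """Deferred acceptance: subtasks propose to helpers in preference
    order; each helper keeps its most-preferred proposals up to its
    capacity and rejects the rest."""
    rank = {h: {t: i for i, t in enumerate(prefs)}
            for h, prefs in helper_prefs.items()}
    next_prop = {t: 0 for t in task_prefs}   # next helper to propose to
    matched = {h: [] for h in helper_prefs}
    free = list(task_prefs)
    while free:
        t = free.pop()
        if next_prop[t] >= len(task_prefs[t]):
            continue  # t exhausted its list; it stays unmatched
        h = task_prefs[t][next_prop[t]]
        next_prop[t] += 1
        matched[h].append(t)
        matched[h].sort(key=lambda x: rank[h][x])
        if len(matched[h]) > capacity[h]:
            free.append(matched[h].pop())    # reject the worst proposal
    return matched
```

In a POMH setting, dynamic subtask sizes would make `helper_prefs` change as the match evolves, which is exactly the externalities problem the paper addresses.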
基金Supported by National Natural Science Foundation of China(Grant Nos.52325501,U24B2047).
Abstract: Current research on robot calibration can be roughly classified into two categories, both of which have inherent limitations. Model-based methods struggle to model and compensate the pose errors arising from configuration-dependent geometric and non-geometric error sources, whereas the accuracy of data-driven methods depends on a large amount of measurement data. Using a 5-DOF (degrees of freedom) hybrid machining robot as an exemplar, this study presents a model-data-driven approach to the calibration of robotic manipulators. An f-DOF realistic robot containing various source errors is visualized as a 6-DOF fictitious robot having error-free parameters but erroneous actuated/virtual joint motions. The calibration process involves four steps: (1) formulating the linear map relating the pose error twist to the joint motion errors; (2) parameterizing the joint motion errors using second-order polynomials in the nominal actuated joint variables; (3) identifying the polynomial coefficients using weighted least squares plus principal component analysis; and (4) compensating the compensable pose errors by updating the nominal actuated joint variables. The merit of this approach is that it enables compensation of the pose errors caused by configuration-dependent geometric and non-geometric error sources using a finite set of measurement configurations. Experimental studies on a prototype machine illustrate the effectiveness of the proposed approach.
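Step (3) can be illustrated for a single joint with a weighted least-squares fit of a second-order polynomial error model. This is a simplified one-joint sketch: the actual method identifies coefficients through the full pose-error map and adds principal component analysis, which is omitted here.

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j]
                              for j in range(i + 1, n))) / M[i][i]
    return x

def fit_joint_error(q, dq, weights):
    """Weighted least-squares fit of the joint motion error
    dq(q) ~ c0 + c1*q + c2*q^2 from measured configurations,
    via the weighted normal equations."""
    basis = [(1.0, qi, qi * qi) for qi in q]
    ata = [[sum(w * r[i] * r[j] for r, w in zip(basis, weights))
            for j in range(3)] for i in range(3)]
    atb = [sum(w * r[i] * e for r, e, w in zip(basis, dq, weights))
           for i in range(3)]
    return solve3(ata, atb)
```

Step (4) would then subtract the fitted polynomial from the nominal joint command at each configuration.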
基金supported by the National Natural Science Foundation of China(Grant Nos.12104109,12222515,and 12075324)the Science and Technology Projects in Guangzhou(Grant No.2024A04J2092)the Science and Technology Projects in Guangdong Province(Grant No.211193863020).
Abstract: Driven critical dynamics in quantum phase transitions holds significant theoretical importance and also has practical applications in fast-developing quantum devices. While scaling corrections have been shown to play important roles in fully characterizing equilibrium quantum criticality, their impact on nonequilibrium critical dynamics has not been extensively explored. In this work, we investigate the driven critical dynamics of a two-dimensional quantum Heisenberg model. We find that in this model the scaling corrections arising from both finite system size and finite driving rate must be incorporated into the finite-time scaling form in order to properly describe the nonequilibrium scaling behaviors. In addition, improved scaling relations are obtained from the expansion of the full scaling form. We numerically verify these scaling forms and improved scaling relations for different starting states using a nonequilibrium quantum Monte Carlo algorithm.
基金Supported in part by Science Center for Gas Turbine Project(Project No.P2022-DC-I-003-001)National Natural Science Foundation of China(Grant No.52275130).
Abstract: Despite significant progress in the Prognostics and Health Management (PHM) domain using systems that learn patterns from data, machine learning (ML) still faces challenges related to limited generalization and weak interpretability. A promising way to overcome these challenges is to embed domain knowledge into the ML pipeline, enriching the model with additional pattern information. In this paper, we review the latest developments in PHM encapsulated under the concept of Knowledge-Driven Machine Learning (KDML). We propose a hierarchical framework to define KDML in PHM, covering scientific paradigms, knowledge sources, knowledge representations, and knowledge embedding methods. Using this framework, we examine current research to demonstrate how various forms of knowledge can be integrated into the ML pipeline and provide a roadmap for their specific usage. Furthermore, we present several case studies that illustrate specific implementations of KDML in the PHM domain, including inductive experience, physical models, and signal processing. We analyze the improvements in generalization capability and interpretability that KDML can achieve. Finally, we discuss the challenges, potential applications, and usage recommendations of KDML in PHM, with a particular focus on the critical need for interpretability to ensure the trustworthy deployment of artificial intelligence in PHM.
基金financially supported by the Key Research&Development Program of Guangxi(No.GuiKeAB22080088)the Joint Project on Regional High-Incidence Diseases Research of Guangxi Natural Science Foundation(No.2023GXNSFDA026023)+3 种基金the Natural Science Foundation of Guangxi(No.2023JJA140322)the National Natural Science Foundation of China(No.82360372)the High-level Medical Expert Training Program of Guangxi“139 Plan Funding(No.G202003010)the Medical Appropriate Technology Development and Popularization and Application Project of Guangxi(No.S2020099)。
Abstract: Acute lung injury (ALI) is characterized by excessive reactive oxygen species (ROS) levels and an inflammatory response in the lung. Scavenging ROS can inhibit the excessive inflammatory response and thereby treat ALI. Herein, we designed a novel nanozyme (P@Co) comprising polydopamine (PDA) nanoparticles (NPs) loaded with ultra-small Co which, combined with near-infrared (NIR) irradiation, can efficiently scavenge intracellular ROS and suppress inflammatory responses against ALI. In lipopolysaccharide (LPS)-induced macrophages, P@Co + NIR exhibited excellent antioxidant and anti-inflammatory capacities by lowering intracellular ROS levels, decreasing the expression levels of interleukin-6 (IL-6) and tumor necrosis factor-α (TNF-α), and inducing directional M2 polarization of macrophages. Significantly, it lowered acute lung inflammation, relieved diffuse alveolar damage, and up-regulated heat shock protein 70 (HSP70) expression, resulting in a synergistically enhanced ALI therapeutic effect. This work offers a novel strategy for the clinical treatment of ROS-related diseases.
基金supported by the Open Foundation of Henan Key Laboratory of Cyberspace Situation Awareness(No.HNTS2022020)the Science and Technology Research Program of Henan Province of China(232102210134,182102210130)Key Research Projects of Henan Provincial Universities(25B520005).
Abstract: With the development of vehicle networks and the construction of roadside units, Vehicular Ad Hoc Networks (VANETs) increasingly promote cooperative computing patterns among vehicles. Vehicular edge computing (VEC) offers an effective solution to resource constraints by enabling task offloading to edge cloud infrastructure, thereby reducing the computational burden on connected vehicles. However, this sharing-based, distributed computing paradigm requires ensuring the credibility and reliability of the various computation nodes. Existing vehicular edge computing platforms have not adequately considered the misbehavior of vehicles. We propose a practical task offloading algorithm based on reputation assessment to address the task offloading problem in vehicular edge computing under an unreliable environment. The approach integrates deep reinforcement learning and reputation management to address task offloading challenges. Simulation experiments conducted using Veins demonstrate the feasibility and effectiveness of the proposed method.