With the rapid advancement of satellite communication technologies, space information networks (SINs) have become essential infrastructure for complex service delivery and cross-domain task coordination, facilitating the transition toward an intent-driven, task-oriented coordination paradigm across the space, ground, and user segments. This study presents a novel intent-driven task-oriented network (IDTN) framework to address task scheduling and resource allocation challenges in SINs. The scheduling problem is formulated as a three-sided matching game that incorporates the preference attributes of entities across all network segments. To manage the variability of random task arrivals and dynamic resources, a context-aware linear upper-confidence-bound online learning mechanism is integrated to reduce decision-making uncertainty. Simulation results demonstrate the effectiveness of the proposed IDTN framework. Compared with conventional baseline methods, the framework achieves significant performance improvements, including a 4.4%-28.9% increase in average system reward, a 6.2%-34.5% improvement in resource utilization, and a 5.6%-35.7% enhancement in user satisfaction. The proposed framework is expected to facilitate the integration and orchestration of space-based platforms.
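The context-aware linear upper-confidence-bound mechanism named above follows the general LinUCB pattern: each candidate scheduling option is treated as an arm whose expected reward is assumed linear in a context vector, and the arm with the highest optimistic estimate is chosen. The sketch below is a minimal stdlib illustration under that assumption; the class name, the reward model, and the Sherman–Morrison incremental inverse update are illustrative choices, not taken from the paper.

```python
import math


def mat_vec(M, v):
    """Multiply a square matrix (list of lists) by a vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]


def dot(u, v):
    return sum(a * b for a, b in zip(u, v))


class LinUCBArm:
    """One arm of a linear-UCB contextual bandit (hypothetical sketch)."""

    def __init__(self, d, alpha=1.0):
        self.alpha = alpha
        # A^-1 starts as the identity (ridge prior A = I); b accumulates reward-weighted contexts.
        self.A_inv = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]
        self.b = [0.0] * d

    def ucb(self, x):
        theta = mat_vec(self.A_inv, self.b)            # ridge-regression estimate
        bonus = self.alpha * math.sqrt(dot(x, mat_vec(self.A_inv, x)))
        return dot(theta, x) + bonus                   # optimistic score: mean + exploration bonus

    def update(self, x, reward):
        # Sherman-Morrison rank-1 update of A^-1 after observing (x, reward).
        Ax = mat_vec(self.A_inv, x)
        denom = 1.0 + dot(x, Ax)
        d = len(x)
        self.A_inv = [[self.A_inv[i][j] - Ax[i] * Ax[j] / denom
                       for j in range(d)] for i in range(d)]
        self.b = [bi + reward * xi for bi, xi in zip(self.b, x)]
```

A scheduler would score each candidate assignment with `ucb(context)`, pick the maximizer, observe the realized reward, and call `update`, so that decision uncertainty shrinks as observations accumulate.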
This study compares the relative efficacy of the continuation task and the model-as-feedback writing (MAFW) task in EFL writing development. Ninety intermediate-level Chinese EFL learners were randomly assigned to a continuation group, a MAFW group, and a control group, each with 30 learners. A pretest and a posttest were used to gauge L2 writing development. Results showed that the continuation task outperformed the MAFW task not only in enhancing the overall quality of L2 writing, but also in promoting the quality of three components of L2 writing, namely content, organization, and language. The finding has important implications for L2 writing teaching and learning.
The iterative continuation task (ICT) requires English as a foreign language (EFL) learners to read a segment of an English novel and write a continuation that aligns with the preceding segment over successive turns, offering exposure to diverse grammatical structures and opportunities for contextualized usage. Given the importance of integrating technology into second language (L2) writing and the critical role that grammar plays in L2 writing development, automated written corrective feedback provided by Grammarly has gained significant attention. This study investigates the impact of Grammarly on grammar learning strategies, grammar grit, and grammar competence among EFL college students engaged in ICT. The study employed a mixed-methods sequential exploratory design: 56 participants were divided into an experimental group (n=28), which received Grammarly feedback for ICT, and a control group (n=28), which completed ICT without Grammarly feedback. Quantitative results revealed that both groups improved in L2 grammar learning strategies, grit, and competence. For the experimental group, significant pre- to post-test differences were observed across all variables of L2 grammar learning strategies, grit, and competence. For the control group, significant differences were observed only in the affective dimension of grammar learning strategies, the Consistency of Interest (COI) dimension of grammar grit, and grammar competence; however, the control group showed a significantly larger improvement in grammar competence. Qualitative analysis revealed both positive and negative perceptions of Grammarly. The pedagogical implications of integrating Grammarly and ICT for L2 grammar development are discussed.
High-throughput transcriptomics has evolved from bulk RNA-seq to single-cell and spatial profiling, yet its clinical translation still depends on effective integration across diverse omics and data modalities. Emerging foundation models and multimodal learning frameworks are enabling scalable and transferable representations of cellular states, while advances in interpretability and real-world data integration are bridging the gap between discovery and clinical application. This paper outlines a concise roadmap for AI-driven, transcriptome-centered multi-omics integration in precision medicine (Figure 1).
In the era of the Internet of Things (IoT), the crowdsourcing process is driven by data collected by devices that interact with each other and with the physical world. As a part of the IoT ecosystem, task assignment has become an important goal of the research community. Existing task assignment algorithms can be categorized as offline (performing better on datasets but struggling to achieve good real-life results) or online (working well with real-life input but difficult to optimize for in-depth assignments). This paper proposes a Cross-regional Online Task (CROT) assignment problem based on the online assignment model. Given the CROT problem, an Online Task Assignment across Regions based on Prediction (OTARP) algorithm is proposed. OTARP is a two-stage, graph-driven bilateral assignment strategy that uses edge cloud and graph embedding to complete task assignments. The first stage uses historical data to make offline predictions, with a graph-driven method for offline bipartite graph matching. The second stage uses the bipartite graph to complete the online task assignment process. This paper proposes accelerating the task assignment process through multiple assignment rounds and optimizing it by combining offline guidance with online assignment strategies. To encourage crowd workers to complete crowd tasks across regions, an incentive strategy is designed to encourage crowd workers' movement. To avoid idleness while crowd workers move, a drop-by-rider formulation helps crowd workers accept more crowd tasks, optimizing the number of assignments and increasing utility. Finally, through comparison experiments on real datasets, the performance of the proposed algorithm is evaluated in terms of crowd worker utility and matching number.
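The offline bipartite-graph matching step in the first stage can be grounded in classical maximum bipartite matching. Below is a minimal stdlib sketch using Kuhn's augmenting-path algorithm; the adjacency format and function name are illustrative assumptions, not OTARP's actual graph-embedding-driven matcher.

```python
def max_bipartite_matching(adj, n_right):
    """Maximum bipartite matching via augmenting paths (Kuhn's algorithm).

    adj[u] lists the right-side vertices (e.g., workers) that left-side
    vertex u (e.g., a task) can be assigned to.  Returns match_right,
    where match_right[v] is the left vertex matched to v, or -1 if free.
    """
    match_right = [-1] * n_right

    def try_assign(u, seen):
        # Try to place u, recursively displacing current occupants
        # along an augmenting path.
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_right[v] == -1 or try_assign(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    for u in range(len(adj)):
        try_assign(u, set())
    return match_right
```

In a two-stage scheme like the one described, a matching computed offline from predicted edges can then seed the online round-by-round assignment.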
The battlefield environment simulation process is an important part of battlefield environment information support and needs to be built around the task process. At present, interoperability between battlefield environment simulation systems and command and control systems is still imperfect, and the traditional simulation data model cannot support war fighters' efficient and accurate understanding and analysis of battlefield environment information. Therefore, a task-oriented battlefield environment simulation process model needs to be constructed to effectively analyze the key information demands of the command and control system. The structured characteristics of tasks and the simulation process are analyzed, and a conceptual model of the simulation process is constructed using an object-oriented method. The data model and formal syntax of GeoBML are analyzed, and the logical model of the simulation process is constructed with a formal language. The object data structure of the simulation process is defined, and an object model of the simulation process that maps tasks is constructed. Finally, battlefield environment simulation platform modules are designed and applied based on this model, verifying that the model can effectively express the real-time dynamic correlation between battlefield environment simulation data and operational tasks.
In dynamic, complex, and unbounded Grid systems, failures of Grid resources caused by malicious attacks and hardware faults are inevitable and have an adverse effect on the execution of tasks. To mitigate this problem, a makespan- and reliability-driven (MRD) sufferage scheduling algorithm is designed and implemented. Unlike traditional Grid scheduling algorithms, this algorithm addresses the reliability of tasks as well as their makespan. Simulation results show that the MRD sufferage scheduling algorithm increases task reliability and can trade reliability off against makespan by adjusting the weighting parameter in its cost function, so it is well suited to complex Grid computing environments.
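The sufferage heuristic at the core of such schedulers assigns, at each step, the task that would "suffer" most if denied its best resource (largest gap between its best and second-best cost). The sketch below combines completion time with a per-machine failure metric through a weighting parameter `w`; this particular cost function is an assumption for illustration, since the abstract does not give MRD's exact formula.

```python
def sufferage_schedule(exec_time, w=1.0, fail_rate=None):
    """Sufferage scheduling sketch with a hypothetical combined cost.

    exec_time[t][m] is the running time of task t on machine m;
    cost = w * finish_time + (1 - w) * fail_rate[m].
    Returns (assignment, makespan), where assignment[t] is t's machine.
    """
    n_tasks, n_mach = len(exec_time), len(exec_time[0])
    ready = [0.0] * n_mach                    # machine availability times
    if fail_rate is None:
        fail_rate = [0.0] * n_mach
    assignment = [-1] * n_tasks
    unscheduled = set(range(n_tasks))
    while unscheduled:
        best_pick = None                      # (sufferage, task, best machine)
        for t in unscheduled:
            costs = sorted(
                (w * (ready[m] + exec_time[t][m]) + (1 - w) * fail_rate[m], m)
                for m in range(n_mach))
            # Sufferage = second-best cost minus best cost (best cost if only one machine).
            suff = (costs[1][0] - costs[0][0]) if n_mach > 1 else costs[0][0]
            if best_pick is None or suff > best_pick[0]:
                best_pick = (suff, t, costs[0][1])
        _, t, m = best_pick
        assignment[t] = m
        ready[m] += exec_time[t][m]
        unscheduled.remove(t)
    return assignment, max(ready)
```

With `w` close to 1 the schedule is makespan-driven; lowering `w` shifts weight toward reliable machines, mirroring the trade-off the abstract describes.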
In the present context of increasing social demand for natural science education, growing public awareness of environmental biodiversity protection, and the elevation of ecological civilization to a state strategy, it is the right time to explore a new model for botany field practice. A new task-driven model for botany field practice will greatly stimulate students' thinking about plants and nature, plants and the environment, and plants and ecological civilization, and will strengthen students' initiative and practical ability to protect and rationally utilize plant resources.
To address the low solution accuracy and high decision pressure that a single agent faces in large-scale dynamic task allocation (DTA) with a high-dimensional decision space, this paper combines deep reinforcement learning (DRL) theory with a multi-agent architecture and proposes MADDPG-D2, an improved Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm with a dual experience replay pool and dual noise, to improve the efficiency of DTA. Building on the traditional MADDPG algorithm, a double-noise mechanism is introduced to enlarge the action exploration space in the early stage of training, and a double experience pool is introduced to improve data utilization. To accelerate training and solve the cold-start problem, prior-knowledge techniques are applied during training. Finally, MADDPG-D2 is compared and analyzed on a digital battlefield of ground-air confrontation. The experimental results show that agents trained by MADDPG-D2 achieve higher win rates and average rewards, utilize resources more reasonably, and better overcome the difficulty that traditional single-agent algorithms face in high-dimensional decision spaces. The proposed multi-agent MADDPG-D2 algorithm thus shows clear advantages for DTA.
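A dual experience replay pool can be pictured as two buffers with different admission rules, so that informative transitions are replayed more often. The reward-threshold split and mixing ratio below are assumptions for illustration; the abstract does not specify the paper's exact criterion.

```python
import random
from collections import deque


class DualReplayBuffer:
    """Two-pool experience replay sketch (the split criterion is an assumption).

    Pool A holds ordinary transitions; pool B holds transitions whose
    reward exceeds a threshold, so that high-value experiences make up
    a fixed fraction of every sampled batch.
    """

    def __init__(self, capacity=10000, reward_threshold=0.0, mix=0.5):
        self.pool_a = deque(maxlen=capacity)
        self.pool_b = deque(maxlen=capacity)
        self.threshold = reward_threshold
        self.mix = mix                      # target fraction of each batch from pool B

    def add(self, state, action, reward, next_state, done):
        item = (state, action, reward, next_state, done)
        (self.pool_b if reward > self.threshold else self.pool_a).append(item)

    def sample(self, batch_size):
        n_b = min(int(batch_size * self.mix), len(self.pool_b))
        n_a = min(batch_size - n_b, len(self.pool_a))
        return (random.sample(list(self.pool_b), n_b)
                + random.sample(list(self.pool_a), n_a))
```

In a MADDPG-style loop, each agent would `add` its transitions after every environment step and train its critic on `sample(batch_size)` batches.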
This study presents a machine learning-based method for predicting fragment velocity distribution in warhead fragmentation under explosive loading conditions. The fragment resultant velocities are correlated with key design parameters, including casing dimensions and detonation positions. The paper details the finite element analysis for fragmentation, the characterization of the dynamic hardening and fracture models, the generation of comprehensive datasets, and the training of the ANN model. The results show the influence of casing dimensions on fragment velocity distributions, with resultant velocity tending to increase with reduced casing thickness and increased length and diameter. The model's predictive capability is demonstrated through accurate predictions for both training and testing datasets, showing its potential for real-time prediction of fragmentation performance.
As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce the latency and energy consumption of edge computing, deep learning is used to learn task offloading strategies by interacting with the entities involved. In actual application scenarios, however, the users of edge computing change dynamically, and existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing that leverages the potential of meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem aimed at minimizing both delay and energy consumption. We model the task offloading strategy using a directed acyclic graph (DAG). Furthermore, we propose a distributed edge computing adaptive task offloading algorithm rooted in MRL. This algorithm integrates multiple Markov decision processes (MDP) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies responsively across diverse network environments. To achieve joint optimization of delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into our framework. Simulation results demonstrate the superiority of our proposed solution, achieving a 21% reduction in time delay and a 19% decrease in energy consumption compared to alternative task offloading schemes. Moreover, our scheme exhibits remarkable adaptability, responding swiftly to changes in various network environments.
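NSGA-II's joint optimization of delay and energy rests on Pareto dominance. Below is a minimal sketch of extracting the non-dominated set, the first step of NSGA-II's non-dominated sorting; the (delay, energy) tuples are hypothetical candidate offloading plans, not data from the paper.

```python
def pareto_front(points):
    """Return the non-dominated subset when minimizing every objective.

    q dominates p if q is no worse than p in all objectives and
    strictly better in at least one.
    """
    def dominates(q, p):
        return (all(qi <= pi for qi, pi in zip(q, p))
                and any(qi < pi for qi, pi in zip(q, p)))

    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Full NSGA-II then peels off successive fronts and uses crowding distance within each front to keep the population diverse along the delay-energy trade-off curve.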
Despite significant progress in the Prognostics and Health Management (PHM) domain using systems that learn patterns from data, machine learning (ML) still faces challenges related to limited generalization and weak interpretability. A promising approach to overcoming these challenges is to embed domain knowledge into the ML pipeline, enhancing the model with additional pattern information. In this paper, we review the latest developments in PHM, encapsulated under the concept of Knowledge-Driven Machine Learning (KDML). We propose a hierarchical framework to define KDML in PHM, which includes scientific paradigms, knowledge sources, knowledge representations, and knowledge embedding methods. Using this framework, we examine current research to demonstrate how various forms of knowledge can be integrated into the ML pipeline and provide a roadmap to specific usage. Furthermore, we present several case studies that illustrate specific implementations of KDML in the PHM domain, including inductive experience, physical models, and signal processing. We analyze the improvements in generalization capability and interpretability that KDML can achieve. Finally, we discuss the challenges, potential applications, and usage recommendations of KDML in PHM, with a particular focus on the critical need for interpretability to ensure trustworthy deployment of artificial intelligence in PHM.
This paper focuses on the problem of multi-station, multi-robot spot welding task assignment and proposes a deep reinforcement learning (DRL) framework made up of a shared graph attention network and independent policy networks. The graph of welding spot distribution is encoded by the graph attention network. Independent policy networks with an attention-mechanism decoder handle the encoded graph and decide how to assign robots to different tasks. The policy network converts the large-scale welding spot allocation problem into multiple small-scale single-robot welding path planning problems, each of which is quickly solved with existing methods. The model is then trained through reinforcement learning. In addition, a task balancing method is used to allocate tasks across multiple stations. The proposed algorithm is compared with classical algorithms, and the results show that the DRL-based algorithm produces higher-quality solutions.
Methane (CH4), the predominant component of natural gas and shale gas, is regarded as a promising carbon feedstock for chemical synthesis [1]. However, given the extreme stability of the CH4 molecule, it is quite challenging to simultaneously achieve high activity and selectivity for target products under mild conditions, especially when synthesizing high-value C2+ chemicals such as ethanol [2]. Photocatalytic conversion of methane to ethanol is a promising route to achieving this transformation at ambient temperature and pressure. Currently, the apparent quantum efficiency (AQE) of solar-driven methane-to-ethanol conversion is generally below 0.5% [3,4]. Furthermore, the stability of photocatalysts remains inadequate, leaving substantial room for improvement.
Recently, one of the main challenges facing the smart grid has been insufficient computing resources and intermittent energy supply for various distributed components (such as monitoring systems for renewable energy power stations). To solve this problem, we propose an energy harvesting-based task scheduling and resource management framework that provides robust, low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem over task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem. Then, solutions are derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle intermittency and instability problems. Finally, we design an energy management algorithm based on sample average approximation for edge computing servers to derive the optimal charging/discharging strategies, the number of energy storage units, and renewable energy utilization. The simulation results show the efficiency and superiority of our proposed framework.
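Since the minimization problem above decouples into a typical knapsack problem, its core can be solved with the standard 0/1 knapsack dynamic program. In the sketch below, the values and weights are illustrative placeholders (e.g., per-task energy savings and resource demands), not quantities from the paper.

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack: best total value achievable within the capacity.

    Classic one-dimensional DP; iterating capacity downward ensures
    each item is used at most once.
    """
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

For example, `knapsack([60, 100, 120], [10, 20, 30], 50)` selects the last two items for a total value of 220. The DP runs in O(n * capacity) time, which is practical once the offloading problem has been discretized.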
With the rapid expansion of social media, analyzing emotions and their causes in texts has gained significant importance. Emotion-cause pair extraction identifies causal relationships between emotions and their triggers within a text, facilitating a deeper understanding of expressed sentiments and their underlying reasons. This comprehension is crucial for making informed strategic decisions in various business and societal contexts. However, recent approaches employing multi-task learning frameworks often face challenges such as the inability to simultaneously model extracted features and their interactions, or inconsistencies in label prediction between emotion-cause pair extraction and independent assistant tasks such as emotion and cause extraction. To address these issues, this study proposes an emotion-cause pair extraction methodology that incorporates joint feature encoding and task alignment mechanisms. The model consists of two primary components. First, joint feature encoding simultaneously generates features for emotion-cause pairs and clauses, enhancing feature interactions between emotion clauses, cause clauses, and emotion-cause pairs. Second, the task alignment technique reduces the labeling distance between emotion-cause pair extraction and the two assistant tasks, capturing deep semantic interactions among tasks. The proposed method is evaluated on a Chinese benchmark corpus using 10-fold cross-validation, assessing precision, recall, and F1 score. Experimental results demonstrate that the model achieves an F1 score of 76.05%, surpassing the state of the art by 1.03%. The proposed model exhibits significant improvements in emotion-cause pair extraction (ECPE) and cause extraction (CE) compared to existing methods, validating its effectiveness. This research introduces a novel approach based on joint feature encoding and task alignment mechanisms, contributing to advancements in emotion-cause pair extraction. However, the study's limitation lies in its data sources, which may restrict the generalizability of the findings.
The conventional Kibble–Zurek mechanism, which describes driven dynamics across critical points based on the adiabatic-impulse scenario (AIS), has attracted broad attention. However, driven dynamics at a tricritical point, which has two independent relevant directions, have not been adequately studied. Here, we employ the time-dependent variational principle to study driven critical dynamics at a one-dimensional supersymmetric Ising tricritical point. For the relevant direction along the Ising critical line, the AIS apparently breaks down. Nevertheless, we find that the critical dynamics can still be described by finite-time scaling, in which the driving rate has a dimension of r_μ = z + 1/ν_μ, with z and ν_μ being the dynamic exponent and the correlation length exponent in this direction, respectively. For driven dynamics along the other relevant direction, the driving rate has a dimension of r_p = z + 1/ν_p, with ν_p being another correlation length exponent. Our work brings a new fundamental perspective to nonequilibrium critical dynamics near tricritical points, which could be realized in programmable quantum processors based on Rydberg atom systems.
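The finite-time-scaling statement above can be written as a schematic scaling ansatz. Here Q is a generic observable with equilibrium scaling dimension Δ_Q and f_Q a universal scaling function; these are placeholders for illustration rather than quantities taken from the paper.

```latex
Q(g, R) \sim R^{\Delta_Q / r_\mu}\, f_Q\!\left( g\, R^{-1/(\nu_\mu r_\mu)} \right),
\qquad r_\mu = z + \frac{1}{\nu_\mu},
```

with the analogous form along the second relevant direction obtained by replacing ν_μ with ν_p (so r_p = z + 1/ν_p). Data taken at different driving rates R should then collapse onto a single curve when plotted against the scaled variable g R^{-1/(ν_μ r_μ)}.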
As the number of distributed power supplies on the user side increases, smart grids are becoming larger and more complex. These changes bring new security challenges, especially with the widespread adoption of data-driven control methods. This paper introduces a novel black-box false data injection attack (FDIA) method that exploits the measurement modules of distributed power supplies within smart grids, highlighting its effectiveness in bypassing conventional security measures. Unlike traditional methods that focus on data manipulation within communication networks, this approach directly injects false data at the point of measurement, using a generative adversarial network (GAN) to generate stealthy attack vectors. The method requires no detailed knowledge of the target system, making it practical for real-world attacks. The attack's impact on power system stability is demonstrated through experiments, highlighting the significant cybersecurity risks that data-driven algorithms introduce into smart grids.
Blockchain technology, based on decentralized data storage and distributed consensus design, has become a promising solution for addressing data security risks and providing privacy protection in the Internet of Things (IoT), thanks to its tamper-proof and non-repudiation features. Although blockchain typically does not require the endorsement of third-party trust organizations, it usually must perform substantial mathematical calculations to prevent malicious attacks, which places stricter requirements on the computation resources of participating devices. Offloading the computation tasks required to support blockchain consensus to edge service nodes or the cloud, while providing data privacy protection for IoT applications, can effectively address the limited computation and energy resources of IoT devices. However, how to make reasonable offloading decisions for IoT devices remains an open issue. Leveraging the excellent self-learning ability of Reinforcement Learning (RL), this paper proposes an RL-enabled Swarm Intelligence Optimization Algorithm (RLSIOA) that aims to improve the quality of initial solutions and achieve efficient optimization of computation task offloading decisions. The algorithm considers various factors that may affect the revenue obtained by IoT devices executing consensus algorithms (e.g., Proof-of-Work), and it optimizes the proportion of sub-tasks to be offloaded and the scale of computing resources to be rented from the edge and cloud so as to maximize device revenue. Experimental results show that RLSIOA obtains higher-quality offloading decisions at lower latency cost than representative benchmark algorithms.
Funding (IDTN study): supported by the National Key Research and Development Program of China (2020YFB1807700) and the Innovation Capability Support Program of Shaanxi (2024RS-CXTD-01).
Funding (CROT/OTARP study): supported in part by the National Natural Science Foundation of China under Grants 62072392, 61822602, 61772207, 61802331, 61602399, 61702439, 61773331, and 62062034; the China Postdoctoral Science Foundation under Grants 2019T120732 and 2017M622691; the Natural Science Foundation of Shandong Province under Grant ZR2016FM42; the Major Scientific and Technological Innovation Projects of Shandong Province under Grant 2019JZZY020131; and the Key Projects of the Shandong Natural Science Foundation under Grant ZR2020KF019.
Abstract: In the era of the Internet of Things (IoT), the crowdsourcing process is driven by data collected by devices that interact with each other and with the physical world. As part of the IoT ecosystem, task assignment has become an important goal of the research community. Existing task assignment algorithms can be categorized as offline (performing well on historical datasets but struggling to achieve good results in real-life settings) or online (handling real-life input well but difficult to optimize for in-depth assignments). This paper proposes a Cross-regional Online Task (CROT) assignment problem based on the online assignment model. For the CROT problem, an Online Task Assignment across Regions based on Prediction (OTARP) algorithm is proposed. OTARP is a two-stage, graph-driven bilateral assignment strategy that uses the edge cloud and graph embedding to complete task assignments. The first stage uses historical data to make offline predictions, with a graph-driven method for offline bipartite graph matching; the second stage uses the bipartite graph to complete the online task assignment process. This paper proposes accelerating the task assignment process through multiple assignment rounds and optimizing it by combining offline guidance with online assignment strategies. To encourage crowd workers to complete tasks across regions, an incentive strategy is designed to motivate worker movement, and a drop-by-rider formulation helps workers accept more tasks en route, avoiding idleness during movement, increasing the number of assignments, and improving utility. Finally, through comparison experiments on real datasets, the performance of the proposed algorithm is evaluated in terms of crowd worker utility and matching number.
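The two-stage idea (offline prediction from history, then online bipartite matching) can be sketched as follows. All names, the averaging-based predictor, and the greedy matcher are illustrative assumptions, not OTARP's actual procedure:

```python
# Minimal sketch of a two-stage assignment in the spirit of OTARP
# (names and the scoring rule are hypothetical, not the paper's code).

def offline_predict(history):
    """Stage 1: derive predicted worker-task affinity scores from
    historical (worker, task, utility) records by averaging."""
    scores = {}
    for worker, task, utility in history:
        total, n = scores.get((worker, task), (0.0, 0))
        scores[(worker, task)] = (total + utility, n + 1)
    return {k: total / n for k, (total, n) in scores.items()}

def online_assign(arriving_tasks, workers, scores):
    """Stage 2: greedily match each arriving task to the best free worker
    on the predicted bipartite graph."""
    free = set(workers)
    assignment = {}
    for task in arriving_tasks:  # tasks arrive one by one (online setting)
        best = max(free, key=lambda w: scores.get((w, task), 0.0), default=None)
        if best is not None:
            assignment[task] = best
            free.remove(best)
    return assignment
```

In a full system the stage-1 scores would come from graph embeddings rather than simple averages, and stage 2 would rematch over multiple rounds.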
Funding: The National Natural Science Foundation of China (41271393).
Abstract: The battlefield environment simulation process is an important part of battlefield environment information support and needs to be built around the task process. At present, interoperability between battlefield environment simulation systems and command and control systems is still imperfect, and traditional simulation data models cannot satisfy war fighters' need for efficient and accurate understanding and analysis of battlefield environment information. Therefore, a task-oriented battlefield environment simulation process model needs to be constructed to effectively analyze the key information demands of the command and control system. The structural characteristics of tasks and the simulation process are analyzed, and a simulation process concept model is constructed with an object-oriented method. The data model and formal syntax of GeoBML are analyzed, and a logical model of the simulation process is constructed in a formal language. The object data structure of the simulation process is defined, and an object model of the simulation process that maps to tasks is constructed. Finally, battlefield environment simulation platform modules are designed and applied based on this model, verifying that the model can effectively express the real-time dynamic correlation between battlefield environment simulation data and operational tasks.
Abstract: In dynamic, complex, and unbounded Grid systems, failures of Grid resources caused by malicious attacks and hardware faults are inevitable and adversely affect the execution of tasks. To mitigate this problem, a makespan- and reliability-driven (MRD) sufferage scheduling algorithm is designed and implemented. Unlike traditional Grid scheduling algorithms, this algorithm addresses the reliability of tasks as well as their makespan. Simulation results show that the MRD sufferage algorithm increases task reliability and can trade reliability off against makespan by adjusting the weighting parameter in its cost function, so it is well suited to complex Grid computing environments.
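A minimal sketch of a sufferage-style heuristic with a weighted makespan/reliability cost, in the spirit of MRD; the exact form of the cost function and the weight `w` are assumptions, since the abstract does not specify them:

```python
# Sufferage scheduling sketch: at each step, assign the task that would
# "suffer" most (largest gap between its best and second-best host cost).
# The cost blends completion time and failure exposure via weight w.

def mrd_sufferage(tasks, hosts, exec_time, fail_rate, w=0.5):
    """tasks/hosts: lists of ids. exec_time[(t, h)]: run time of t on h.
    fail_rate[h]: failure rate of host h. w trades makespan (w) against
    unreliability (1 - w)."""
    ready = {h: 0.0 for h in hosts}  # earliest free time per host
    schedule = {}
    unassigned = set(tasks)
    while unassigned:
        best_task, best_host, best_sufferage = None, None, -1.0
        for t in unassigned:
            costs = sorted(
                (w * (ready[h] + exec_time[(t, h)])
                 + (1 - w) * fail_rate[h] * exec_time[(t, h)], h)
                for h in hosts)
            sufferage = costs[1][0] - costs[0][0] if len(costs) > 1 else costs[0][0]
            if sufferage > best_sufferage:
                best_task, best_host, best_sufferage = t, costs[0][1], sufferage
        schedule[best_task] = best_host
        ready[best_host] += exec_time[(best_task, best_host)]
        unassigned.remove(best_task)
    return schedule
```

Setting `w` close to 1 recovers a pure makespan-driven sufferage scheduler; lowering it shifts assignments toward more reliable hosts.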
Funding: Supported by the Special Fund for Reform of the Teaching Model of Huanggang Normal University (2016CK06, 2018CE42).
Abstract: In the present context of increasing social demand for natural science education, growing public awareness of environmental biodiversity protection, and the elevation of ecological civilization to a state strategy, it is the right time to explore a new model of botany field practice. The attempt at a new task-driven model for botany field practice will greatly deepen students' thinking about plants and nature, plants and the environment, and plants and ecological civilization, and will inevitably strengthen students' sense of initiative and practical ability to protect and rationally utilize plant resources.
Funding: This research was funded by the National Natural Science Foundation of China, Grant Number 62106283.
Abstract: To address the low solution accuracy and high decision pressure that a single agent faces in large-scale dynamic task allocation (DTA) with a high-dimensional decision space, this paper combines deep reinforcement learning (DRL) theory with a multi-agent architecture and proposes MADDPG-D2, an improved Multi-Agent Deep Deterministic Policy Gradient algorithm with a dual experience replay pool and dual noise, to improve the efficiency of DTA. Building on the traditional MADDPG algorithm, a dual-noise mechanism is introduced to enlarge the action exploration space in the early stage of training, and a dual experience pool is introduced to improve data utilization. At the same time, to accelerate agent training and solve the cold-start problem, a priori knowledge is applied during training. Finally, MADDPG-D2 is compared and analyzed on a digital battlefield of ground-air confrontation. The experimental results show that agents trained by MADDPG-D2 achieve higher win rates and average rewards, utilize resources more reasonably, and better overcome the difficulty that traditional single-agent algorithms face in high-dimensional decision spaces. The proposed multi-agent MADDPG-D2 algorithm thus shows clear superiority and rationality for DTA.
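The dual experience pool can be illustrated with a minimal sketch; the split-by-reward rule and the sampling mix below are assumptions, not the paper's implementation:

```python
import random
from collections import deque

# Sketch of a dual experience replay pool in the spirit of MADDPG-D2.
# High-reward transitions are stored separately so that rare successes
# are replayed more often than in a single uniform buffer.

class DualReplayPool:
    def __init__(self, capacity=10000, reward_threshold=0.0, good_ratio=0.5):
        self.regular = deque(maxlen=capacity)
        self.good = deque(maxlen=capacity)
        self.reward_threshold = reward_threshold
        self.good_ratio = good_ratio  # fraction of each batch from 'good'

    def add(self, state, action, reward, next_state, done):
        pool = self.good if reward > self.reward_threshold else self.regular
        pool.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        n_good = min(int(batch_size * self.good_ratio), len(self.good))
        batch = random.sample(self.good, n_good)
        n_reg = min(batch_size - n_good, len(self.regular))
        batch += random.sample(self.regular, n_reg)
        return batch
```

During training, each agent's critic update would draw its mini-batches from `sample()` instead of a single replay buffer.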
Funding: Supported by the Poongsan-KAIST Future Research Center Project and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (Grant No. 2023R1A2C2005661).
Abstract: This study presents a machine learning-based method for predicting the fragment velocity distribution of a warhead under explosive loading conditions. The fragment resultant velocities are correlated with key design parameters, including casing dimensions and detonation positions. The paper details the finite element analysis of fragmentation, the characterization of the dynamic hardening and fracture models, the generation of comprehensive datasets, and the training of the ANN model. The results show the influence of casing dimensions on fragment velocity distributions, with resultant velocity tending to increase with reduced thickness and with increased length and diameter. The model's predictive capability is demonstrated through accurate predictions on both the training and testing datasets, showing its potential for real-time prediction of fragmentation performance.
Funding: Funded by the Fundamental Research Funds for the Central Universities (J2023-024, J2023-027).
Abstract: As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce the latency and energy consumption of edge computing, deep learning is used to learn task offloading strategies by interacting with network entities. In real application scenarios, however, the users of edge computing change dynamically, and existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing that leverages meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem that minimizes both delay and energy consumption, and models the task offloading strategy as a directed acyclic graph (DAG). We further propose a distributed edge computing adaptive task offloading algorithm rooted in MRL, which integrates multiple Markov decision processes (MDPs) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies responsively across diverse network environments. To achieve joint optimization of delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into our framework. Simulation results demonstrate the superiority of the proposed solution, achieving a 21% reduction in time delay and a 19% decrease in energy consumption compared with alternative task offloading schemes. Moreover, our scheme adapts swiftly to changes in various network environments.
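The NSGA-II component rests on non-dominated sorting over the (delay, energy) objectives. A minimal sketch of extracting the first Pareto front, with illustrative names and numbers rather than the paper's code:

```python
# Non-dominated sorting core of NSGA-II over (delay, energy) pairs,
# both minimized. Candidate plans and values are illustrative.

def dominates(a, b):
    """True if solution a is no worse than b in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# Candidate offloading plans scored as (delay, energy):
plans = [(21.0, 19.0), (25.0, 15.0), (30.0, 30.0), (20.0, 22.0)]
print(pareto_front(plans))  # (30.0, 30.0) is dominated by (21.0, 19.0)
```

Full NSGA-II repeats this sorting into successive fronts and adds crowding-distance selection; the first front alone already shows how the delay/energy trade-off is kept rather than collapsed into one score.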
Funding: Supported in part by the Science Center for Gas Turbine Project (Project No. P2022-DC-I-003-001) and the National Natural Science Foundation of China (Grant No. 52275130).
Abstract: Despite significant progress in the Prognostics and Health Management (PHM) domain using systems that learn patterns from data, machine learning (ML) still faces challenges of limited generalization and weak interpretability. A promising approach to overcoming these challenges is to embed domain knowledge into the ML pipeline, enhancing the model with additional pattern information. In this paper, we review the latest developments in PHM under the concept of Knowledge-Driven Machine Learning (KDML). We propose a hierarchical framework to define KDML in PHM, covering scientific paradigms, knowledge sources, knowledge representations, and knowledge embedding methods. Using this framework, we examine current research to show how various forms of knowledge can be integrated into the ML pipeline and provide a roadmap to specific usage. Furthermore, we present several case studies that illustrate concrete implementations of KDML in the PHM domain, including inductive experience, physical models, and signal processing, and analyze the improvements in generalization capability and interpretability that KDML can achieve. Finally, we discuss the challenges, potential applications, and usage recommendations of KDML in PHM, with a particular focus on the critical need for interpretability to ensure trustworthy deployment of artificial intelligence in PHM.
Funding: National Key Research and Development Program of China, Grant/Award Number: 2021YFB1714700; Postdoctoral Research Foundation of China, Grant/Award Number: 2024M752364; Postdoctoral Fellowship Program of CPSF, Grant/Award Number: GZB20240525.
Abstract: This paper focuses on the problem of multi-station, multi-robot spot-welding task assignment and proposes a deep reinforcement learning (DRL) framework made up of a shared graph attention network and independent policy networks. The distribution graph of welding spots is encoded by the graph attention network; independent policy networks with an attention-based decoder process the encoded graph and decide how to assign robots to different tasks. The policy network converts the large-scale welding spot allocation problem into multiple small-scale single-robot welding path planning problems, which are then solved quickly with existing methods. The model is trained through reinforcement learning. In addition, a task balancing method is used to allocate tasks across multiple stations. The proposed algorithm is compared with classical algorithms, and the results show that the DRL-based algorithm produces higher-quality solutions.
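As a hedged illustration of the station-level task-balancing step, a longest-processing-time-first heuristic over stations; the abstract does not specify the paper's balancing rule, so this particular heuristic is an assumption:

```python
import heapq

# Balance welding jobs across stations: assign each job (largest first)
# to the currently least-loaded station (classic LPT heuristic).

def balance_tasks(task_sizes, n_stations):
    """task_sizes: mapping task id -> processing size.
    Returns a mapping station index -> list of assigned tasks."""
    heap = [(0.0, s) for s in range(n_stations)]  # (current load, station)
    heapq.heapify(heap)
    assignment = {s: [] for s in range(n_stations)}
    for task, size in sorted(task_sizes.items(), key=lambda kv: -kv[1]):
        load, s = heapq.heappop(heap)  # least-loaded station
        assignment[s].append(task)
        heapq.heappush(heap, (load + size, s))
    return assignment
```

Each station's task list would then be handed to the per-robot path planner described in the abstract.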
Funding: Supported by the National Natural Science Foundation of China (52202306); the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (2019ZT08L101 and RCTDPT-2020-001); the Shenzhen Key Laboratory of Eco-materials and Renewable Energy (ZDSYS20200922160400001); and the Provincial Talent Plan of Guangdong (2023TB0012).
Abstract: Methane (CH4), the predominant component of natural gas and shale gas, is regarded as a promising carbon feedstock for chemical synthesis [1]. However, given the extreme stability of CH4 molecules, it is quite challenging to simultaneously achieve high activity and selectivity for target products under mild conditions, especially when synthesizing high-value C2+ chemicals such as ethanol [2]. The photocatalytic conversion of methane to ethanol is promising because it can proceed at ambient temperature and pressure. Currently, however, the apparent quantum efficiency (AQE) of solar-driven methane-to-ethanol conversion is generally below 0.5% [3,4], and the stability of photocatalysts remains inadequate, leaving substantial room for further improvement.
Funding: Supported in part by the National Natural Science Foundation of China under Grant No. 61473066; the Natural Science Foundation of Hebei Province under Grant No. F2021501020; the S&T Program of Qinhuangdao under Grant No. 202401A195; the Science Research Project of Hebei Education Department under Grant No. QN2025008; and the Innovation Capability Improvement Plan Project of Hebei Province under Grant No. 22567637H.
Abstract: One of the main challenges currently facing the smart grid is insufficient computing resources and intermittent energy supply for its various distributed components (such as monitoring systems for renewable energy power stations). To solve this problem, we propose an energy-harvesting-based task scheduling and resource management framework that provides robust, low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem over task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem; solutions are then derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle intermittency and instability. Finally, we design an energy management algorithm based on sample average approximation for edge computing servers to derive the optimal charging/discharging strategies, number of energy storage units, and renewable energy utilization. Simulation results show the efficiency and superiority of the proposed framework.
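The knapsack reduction can be sketched with a standard 0/1 dynamic program; the framing of costs as per-task transmission energy and values as saved local energy is an assumption about how the decoupled problem maps onto the knapsack:

```python
# 0/1 knapsack sketch: choose which tasks to offload under an energy
# budget so as to maximize the local energy saved.

def knapsack_offload(costs, values, budget):
    """costs/values: per-task integer energy cost and energy saving.
    Returns (best total saving, set of selected task indices)."""
    n = len(costs)
    dp = [0] * (budget + 1)  # dp[b] = best value within budget b
    choice = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        for b in range(budget, costs[i] - 1, -1):
            if dp[b - costs[i]] + values[i] > dp[b]:
                dp[b] = dp[b - costs[i]] + values[i]
                choice[i][b] = True
    # Trace back which tasks were selected.
    selected, b = set(), budget
    for i in range(n - 1, -1, -1):
        if choice[i][b]:
            selected.add(i)
            b -= costs[i]
    return dp[budget], selected
```

This is the textbook O(n x budget) dynamic program; the paper's two solution algorithms are not specified in the abstract and may differ.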
Abstract: With the rapid expansion of social media, analyzing emotions and their causes in texts has gained significant importance. Emotion-cause pair extraction identifies causal relationships between emotions and their triggers within a text, facilitating a deeper understanding of expressed sentiments and their underlying reasons. This comprehension is crucial for making informed strategic decisions in various business and societal contexts. However, recent approaches that employ multi-task learning frameworks often face challenges such as the inability to simultaneously model extracted features and their interactions, or inconsistencies in label prediction between emotion-cause pair extraction and the independent assistant tasks of emotion extraction and cause extraction. To address these issues, this study proposes an emotion-cause pair extraction method that incorporates joint feature encoding and a task alignment mechanism. The model consists of two primary components. First, joint feature encoding simultaneously generates features for emotion-cause pairs and clauses, enhancing feature interactions among emotion clauses, cause clauses, and emotion-cause pairs. Second, a task alignment technique is applied to reduce the labeling distance between emotion-cause pair extraction and the two assistant tasks, capturing deep semantic interactions among tasks. The proposed method is evaluated on a Chinese benchmark corpus using 10-fold cross-validation, with precision, recall, and F1 score as the key metrics. Experimental results show that the model achieves an F1 score of 76.05%, surpassing the state of the art by 1.03%, with significant improvements in emotion-cause pair extraction (ECPE) and cause extraction (CE) over existing methods. This research contributes a novel approach based on joint feature encoding and task alignment mechanisms to advance emotion-cause pair extraction; however, the study's data sources may restrict the generalizability of the findings.
基金supported by the National Natural Science Foundation of China(Grant Nos.12222515,12075324 for S.Yin,and 12347107,1257-4160 for Y.F.Jiang)the National Key R&D Program of China(Grant No.2022YFA1402703 for Y.F.Jiang)+1 种基金the Science and Technology Projects in Guangdong Province(Grant No.2021QN02X561 for S.Yin)the Science and Technology Projects in Guangzhou City(Grant No.2025A04J5408 for S.Yin)。
Abstract: The conventional Kibble–Zurek mechanism, which describes driven dynamics across critical points based on the adiabatic-impulse scenario (AIS), has attracted broad attention. However, driven dynamics at a tricritical point with two independent relevant directions have not been adequately studied. Here, we employ the time-dependent variational principle to study driven critical dynamics at a one-dimensional supersymmetric Ising tricritical point. For the relevant direction along the Ising critical line, the AIS clearly breaks down. Nevertheless, we find that the critical dynamics can still be described by finite-time scaling, in which the driving rate has dimension r_μ = z + 1/ν_μ, where z and ν_μ are the dynamic exponent and the correlation length exponent in this direction, respectively. For driven dynamics along the other relevant direction, the driving rate has dimension r_p = z + 1/ν_p, with ν_p the corresponding correlation length exponent. Our work brings a new fundamental perspective to nonequilibrium critical dynamics near tricritical points, which could be realized in programmable quantum processors based on Rydberg atom systems.
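As a hedged illustration of what a driving-rate dimension implies, the standard finite-time scaling ansatz for a generic observable O driven at rate R along the μ direction can be written as follows; the choice of observable and its scaling dimension Δ_O are illustrative, not taken from the paper:

```latex
% Standard finite-time scaling form for an observable O with equilibrium
% scaling dimension \Delta_O, driven at rate R along distance g from the
% tricritical point in the \mu direction (illustrative, not the paper's
% specific scaling function f):
O(g, R) = R^{\Delta_O / r_\mu}\, f\!\left( g\, R^{-1/(r_\mu \nu_\mu)} \right),
\qquad r_\mu = z + \frac{1}{\nu_\mu}.
```

The analogous form along the second relevant direction replaces (ν_μ, r_μ) by (ν_p, r_p), consistent with the two dimensions quoted in the abstract.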
Funding: Supported by the National Natural Science Foundation of China (62302234).
Abstract: As the number of distributed power supplies on the user side increases, smart grids are becoming larger and more complex. These changes bring new security challenges, especially with the widespread adoption of data-driven control methods. This paper introduces a novel black-box false data injection attack (FDIA) method that exploits the measurement modules of distributed power supplies within smart grids and demonstrates its effectiveness in bypassing conventional security measures. Unlike traditional methods that manipulate data within communication networks, this approach injects false data directly at the point of measurement, using a generative adversarial network (GAN) to generate stealthy attack vectors. The method requires no detailed knowledge of the target system, making it practical for real-world attacks. The attack's impact on power system stability is demonstrated through experiments, highlighting the significant cybersecurity risks introduced by data-driven algorithms in smart grids.
Funding: Supported by the Project of the Science and Technology Research Program of the Chongqing Education Commission of China (No. KJZD-K202401105); the High-Quality Development Action Plan for Graduate Education at Chongqing University of Technology (No. gzljg2023308, No. gzljd2024204); the Graduate Innovation Program of Chongqing University of Technology (No. gzlcx20233197); and the Yunnan Provincial Key R&D Program (202203AA080006).
Abstract: Blockchain technology, based on decentralized data storage and distributed consensus design, has become a promising solution for addressing data security risks and providing privacy protection in the Internet of Things (IoT), thanks to its tamper-proof and non-repudiation features. Although blockchain typically does not require the endorsement of third-party trust organizations, it usually must perform substantial mathematical computation to prevent malicious attacks, which imposes strict requirements on the computation resources of participating devices. Offloading the computation tasks that support blockchain consensus to edge service nodes or the cloud, while preserving data privacy for IoT applications, can effectively overcome the computation and energy limits of IoT devices. However, how to make reasonable offloading decisions for IoT devices remains an open issue. Leveraging the excellent self-learning ability of Reinforcement Learning (RL), this paper proposes an RL-enabled Swarm Intelligence Optimization Algorithm (RLSIOA) that improves the quality of initial solutions and efficiently optimizes computation task offloading decisions. The algorithm considers the various factors that affect the revenue of IoT devices executing consensus algorithms (e.g., Proof-of-Work) and optimizes both the proportion of sub-tasks to offload and the scale of computing resources rented from the edge and cloud, so as to maximize device revenue. Experimental results show that RLSIOA obtains higher-quality offloading decisions at lower latency cost than representative benchmark algorithms.
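As a hedged sketch of the swarm-intelligence search over the offloading proportion, a minimal PSO-like optimizer over x in [0, 1]; the toy revenue model and the particle-update constants are illustrative assumptions, not RLSIOA itself (which additionally uses RL to seed the initial solutions):

```python
import random

# Swarm-style search for the offload proportion x that maximizes a toy
# device revenue: consensus reward minus rented-edge cost for the
# offloaded share and local energy cost for the remainder.

def revenue(x, reward=10.0, edge_price=3.0, local_energy_cost=6.0):
    return reward - edge_price * x - local_energy_cost * (1 - x)

def swarm_optimize(fitness, n_particles=20, iters=50, seed=0):
    rng = random.Random(seed)
    pos = [rng.random() for _ in range(n_particles)]
    vel = [0.0] * n_particles
    best = list(pos)                     # per-particle best position
    gbest = max(pos, key=fitness)        # global best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.5 * vel[i] + 1.5 * r1 * (best[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(1.0, max(0.0, pos[i] + vel[i]))  # clamp to [0, 1]
            if fitness(pos[i]) > fitness(best[i]):
                best[i] = pos[i]
        gbest = max(best, key=fitness)
    return gbest
```

With these toy parameters, offloading saves more local energy than renting edge resources costs, so the swarm converges toward full offloading (x near 1).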