Journal Articles: 3 articles found
Study of highly excited vibrational dynamics of HCP integrable system with dynamic potential methods (Cited: 1)
Authors: Aixing Wang, Lifeng Sun, Chao Fang, Yibao Liu. Chinese Physics B (SCIE, EI, CAS, CSCD), 2020, Issue 1, pp. 203-211 (9 pages).
The highly excited vibrational dynamics of the phosphaethyne (HCP) integrable system are investigated on the basis of its dynamic potentials. Taking into consideration the 2:1 Fermi resonance between the H–C–P bending vibrational mode and the C–P stretching vibrational mode, it is found that the effects of the H–C stretching vibrational mode on the vibrational dynamic features of the HCP integrable system are significant and vary regularly with the polyad number (P number). The geometrical profiles of the dynamic potentials and the corresponding fixed points are sensitive to variation of the H–C stretching vibrational strength when the P numbers are small, but are not sensitive when the P numbers become larger and the corresponding threshold values become lower. The phase space trajectories of different energy levels in a designated dynamic potential (P = 28) were studied, and the results indicate that the dynamic potentials govern the various dynamic environments in which the vibrational states lie. Furthermore, the action integrals of the energy levels contained in the dynamic potential (P = 28) were quantitatively analyzed and elucidated. It was determined that the dynamic environments can be identified by the numerical values of the action integrals of the phase space trajectories, which is equivalent to identification by the dynamic potentials.
Keywords: phosphaethyne (HCP); highly excited vibrational state; fixed point; phase space trajectory
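As a rough illustration of the quantity the abstract refers to (not reproduced from the paper, whose exact conventions and variables may differ), the classical action integral evaluated along a closed phase space trajectory at energy E is commonly written as

% Illustrative sketch only: the semiclassical action of a closed
% phase-space trajectory; the symbols q, p, H, and I(E) are generic
% and are not taken from the paper.
\begin{equation}
  I(E) = \frac{1}{2\pi} \oint_{H(q,\,p)=E} p \,\mathrm{d}q .
\end{equation}

Trajectories lying in different dynamic environments of a given dynamic potential would then be distinguished by the numerical values that I(E) takes on them, which is the identification described in the abstract.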
The impact of new relationship learning on artificial intelligence technology innovation (Cited: 2)
Authors: Ying Xue, Chao Fang, Ying Dong. International Journal of Innovation Studies, 2021, Issue 1, pp. 2-8 (7 pages).
Economic globalization makes competition among organizations increasingly fierce, and the "dependence relationship" among organizations is of particular importance. The influence of relationship learning (based on organizational relationships) on the technological innovation of organizations has become increasingly prominent. Artificial intelligence (AI) technology, as a disruptive emerging technology that is rapidly globalizing, is currently in a stage of high-speed development. China's AI technology is developing rapidly in the civilian market, and its application in the military field has also risen to the level of a national top-level strategic goal. At present, the research and development (R&D) systems of the military and civilian markets are relatively independent, and the development of AI technology in civil and military applications is unbalanced, which will hinder AI technology innovation in the future. To promote the balanced development of AI technology in both military and civilian applications, it is necessary to establish a new organizational relationship of "integrated development" among military and civilian R&D organizations. Relationship learning based on "integrated development" will help organizations quickly open up information-sharing channels, rapidly improve their capacity for common understanding, promote the continuous improvement of relationship memory, and provide favorable support for the innovation and development of AI technology in China.
Keywords: Artificial intelligence; Relationship learning; Integrated development; Technological innovation
OSCAR: OOD State-Conservative Offline Reinforcement Learning for Sequential Decision Making
Authors: Yi Ma, Chao Wang, Chen Chen, Jinyi Liu, Zhaopeng Meng, Yan Zheng, Jianye Hao. CAAI Artificial Intelligence Research, 2023, Issue 1, pp. 91-101 (11 pages).
Offline reinforcement learning (RL) is a data-driven learning paradigm for sequential decision making. Mitigating the overestimation of values at out-of-distribution (OOD) states, induced by the distribution shift between the learning policy and the previously collected offline dataset, lies at the core of offline RL. To tackle this problem, some methods underestimate the values of states generated by learned dynamics models, or of state-action pairs whose actions are sampled from policies different from the behavior policy. However, since these generated states or state-action pairs are not guaranteed to be OOD, staying conservative on them may adversely affect the in-distribution ones. In this paper, we propose an OOD state-conservative offline RL method (OSCAR), which addresses this limitation by explicitly generating reliable OOD states located near the manifold of the offline dataset, and then designing a conservative policy evaluation approach that combines the vanilla Bellman error with a regularization term that underestimates the values of only these generated OOD states. In this way, we prevent the value errors of OOD states from propagating to in-distribution states through value bootstrapping and policy improvement. We also theoretically prove that the proposed conservative policy evaluation approach is guaranteed to underestimate the values of OOD states. OSCAR, along with several strong baselines, is evaluated on the offline decision-making benchmark D4RL and the autonomous driving benchmark SMARTS. Experimental results show that OSCAR outperforms the baselines on a large portion of the benchmarks and attains the highest average return, substantially outperforming existing offline RL methods.
Keywords: offline reinforcement learning; out-of-distribution; decision making
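The conservative policy evaluation idea described in the abstract can be illustrated with a short, hypothetical loss function. This is a minimal sketch, not the authors' code: the names q_net, target_q_net, policy, and generate_ood_states, and the weight beta, are assumptions introduced only for illustration.

# Sketch (assumed names, not OSCAR's implementation) of a conservative
# critic loss: the vanilla Bellman error on dataset transitions plus a
# regularizer that pushes down predicted values at generated OOD states.
import torch
import torch.nn.functional as F


def conservative_critic_loss(q_net, target_q_net, policy, batch,
                             generate_ood_states, gamma=0.99, beta=1.0):
    s, a, r, s_next, done = batch  # tensors sampled from the offline dataset

    # Vanilla Bellman error on in-distribution transitions.
    with torch.no_grad():
        a_next = policy(s_next)
        target = r + gamma * (1.0 - done) * target_q_net(s_next, a_next)
    bellman_err = F.mse_loss(q_net(s, a), target)

    # Regularizer: underestimate values only at states generated near the
    # data manifold but treated as OOD (hypothetical generator).
    s_ood = generate_ood_states(s)
    a_ood = policy(s_ood)
    ood_penalty = q_net(s_ood, a_ood).mean()

    return bellman_err + beta * ood_penalty

In an actual implementation the OOD-state generator and the penalty would follow the paper's construction; the point of the sketch is only that the penalty term lowers value estimates at generated OOD states while the Bellman term is computed on dataset transitions alone.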