At present, energy consumption is one of the main bottlenecks in autonomous mobile robot development. To address the challenge of high energy consumption in path planning for autonomous mobile robots navigating unknown and complex environments, this paper proposes an Attention-Enhanced Dueling Deep Q-Network (AD-Dueling DQN), which integrates a multi-head attention mechanism and a prioritized experience replay strategy into a Dueling-DQN reinforcement learning framework. A multi-objective reward function, centered on energy efficiency, is designed to comprehensively consider path length, terrain slope, motion smoothness, and obstacle avoidance, enabling optimal low-energy trajectory generation in 3D space from the source. The incorporation of a multi-head attention mechanism allows the model to dynamically focus on energy-critical state features, such as slope gradients and obstacle density, thereby significantly improving its ability to recognize and avoid energy-intensive paths. Additionally, the prioritized experience replay mechanism accelerates learning from key decision-making experiences, suppressing inefficient exploration and guiding the policy toward low-energy solutions more rapidly. The effectiveness of the proposed path planning algorithm is validated through simulation experiments conducted in multiple off-road scenarios. Results demonstrate that AD-Dueling DQN consistently achieves the lowest average energy consumption across all tested environments. Moreover, the proposed method exhibits faster convergence and greater training stability compared to baseline algorithms, highlighting its global optimization capability under energy-aware objectives in complex terrains. This study offers an efficient and scalable intelligent control strategy for the development of energy-conscious autonomous navigation systems.
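The two building blocks this abstract names, attention re-weighting of state features and dueling value aggregation, can be sketched in a few lines of NumPy. This is a minimal single-head illustration with random weights and hypothetical feature tokens, not the authors' network:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    # Scaled dot-product self-attention: each state-feature token is
    # re-weighted by its relevance to the others.
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[-1])
    return softmax(scores, axis=-1) @ tokens

def dueling_q(value, advantages):
    # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
    return value + (advantages - advantages.mean())

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))   # hypothetical tokens: slope, obstacle density, ...
context = self_attention(tokens).mean(axis=0)  # pooled attended state
W_v, W_a = rng.normal(size=(8,)), rng.normal(size=(8, 5))
value = context @ W_v              # state value V(s)
advantages = context @ W_a         # per-action advantages A(s, a), 5 actions
q_values = dueling_q(value, advantages)
```

Because the advantages are mean-centered, the mean of the Q-values recovers V(s) exactly, which is what makes the dueling decomposition identifiable.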
Background: This study explored the value of integrating problem-based learning (PBL) and team-based learning (TBL) methods into plastic and reconstructive surgery clinical practice. By addressing the challenges faced in traditional teaching, this study aimed to enhance educational outcomes and prepare students for real-world surgical scenarios, thereby improving patient care in this specialized field. Methods: Fifty undergraduate students majoring in clinical medicine at the Shanghai Jiao Tong University School of Medicine were selected as research subjects. They were randomly divided into experimental and control groups. The experimental group received the combined PBL-TBL teaching method, whereas the control group received traditional teaching. The teaching effect was evaluated based on student satisfaction and academic performance. Results: Student satisfaction in the experimental group was higher than in the control group (P<0.05). Subjective scoring of academic performance by instructors was also higher in the experimental group than in the control group (P<0.05). Conclusion: The PBL-TBL combination had a significant effect when applied in plastic and reconstructive surgery clinical practice.
At non-signalized intersections under near-saturated traffic flow, complex road conditions result in heavy congestion and accidents, reducing the traffic efficiency of intelligent vehicles, and the traffic environment shared by smart vehicles and other vehicles frequently experiences conflicting start-and-stop motion. The fine-grained scheduling of autonomous vehicles (AVs) at non-signalized intersections, a promising technique for exploring optimal driving paths both for today's assisted driving and for driverless cars in the near future, has attracted significant attention owing to its high potential for improving road safety and traffic efficiency. Existing fine-grained scheduling primarily focuses on signalized intersection scenarios; applying it directly to non-signalized intersections is challenging because each AV can move freely without traffic-signal control, which may cause frequent driving collisions and low road traffic efficiency. Therefore, this study proposes a novel algorithm, fine-grained scheduling of automated vehicles at non-signalized intersections via dual reinforced training (FS-DRL), to address this issue. For FS-DRL, we first use a grid to describe the non-signalized intersection and propose a convolutional neural network (CNN)-based fast decision model that can rapidly yield a coarse-grained scheduling decision for each AV in a distributed manner. We then load these coarse-grained scheduling decisions onto a deep Q-learning network (DQN) for further evaluation. We use an adaptive learning rate to maximize the reward function and employ a parameter ε to trade off the fast speed of coarse-grained scheduling in the CNN against optimal fine-grained scheduling in the DQN. In addition, we prove that using this adaptive learning rate leads to a converged loss rate with an extremely small number of training loops. The simulation results show that, compared with Dijkstra, RNN, and ant colony-based scheduling, FS-DRL yields a high accuracy of 96.5% on the sample, with improved performance of approximately 61.54%-85.37% in terms of average conflict and traffic efficiency.
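The ε tradeoff between the CNN's fast coarse decisions and the DQN's fine-grained refinement can be sketched as a simple stochastic mixing rule. The abstract only states that ε trades speed against optimality; the exact mixing rule below is an assumption for illustration:

```python
import random

def select_action(coarse_action, dqn_q_values, epsilon, rng):
    # With probability epsilon, keep the CNN's fast coarse-grained decision;
    # otherwise refine it with the DQN's greedy (fine-grained) action.
    # Hypothetical interface, not the paper's exact scheduling rule.
    if rng.random() < epsilon:
        return coarse_action  # fast, coarse-grained
    return max(range(len(dqn_q_values)), key=dqn_q_values.__getitem__)

rng = random.Random(42)
q = [0.1, 0.9, 0.3]          # toy Q-values over three scheduling actions
actions = [select_action(0, q, 0.2, rng) for _ in range(1000)]
```

With ε = 0.2, roughly a fifth of the decisions fall back on the coarse CNN output and the rest follow the DQN's argmax, so ε directly controls how often latency is traded for optimality.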
Autonomous driving technology is constantly advancing toward more complex scenes, and there is a growing demand for end-to-end data-driven control. However, end-to-end path tracking often encounters challenges in learning efficiency and generalization. To address this issue, this paper designs a deep deterministic policy gradient (DDPG)-based reinforcement learning strategy that integrates imitation learning and feedforward exploration into the path-following process. In imitation learning, path tracking control data generated by the model predictive control (MPC) method is used to train an end-to-end deep neural network steering control model. A feedforward exploration behavior is additionally predicted from road curvature and vehicle speed, and both it and the imitation-learned behavior are incorporated into DDPG reinforcement learning to obtain decision-making experience and action predictions for the path tracking process. In the reinforcement learning process, imitation learning is used to initialize the pre-training parameters of the actor network, and a feedforward steering technique with random noise is adopted for strategy exploration. For the reward function, a hierarchical progressive reward form and a constrained objective reward function referring to MPC are designed, and the actor-critic network architecture is determined. Finally, the path tracking performance of the designed method is verified by comparing various training results, simulations, and HIL tests. The results show that the designed method can effectively exploit pre-training and feedforward prior experience to obtain optimal path tracking performance for an autonomous vehicle, and has better generalization ability than other methods. This study provides an efficient control scheme for improving the end-to-end control performance of autonomous vehicles.
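The pre-training step, fitting the actor to MPC demonstrations before reinforcement learning, is behavior cloning with a mean-squared-error loss. A minimal sketch with a linear actor and synthetic MPC data (the paper's actor is a deep network, and the steering law here is invented for illustration):

```python
import numpy as np

def pretrain_actor(states, mpc_steering, lr=0.1, epochs=200):
    # Behavior cloning: fit a linear actor to MPC demonstrations by
    # minimizing mean squared error with gradient descent.
    n, d = states.shape
    w = np.zeros(d)
    for _ in range(epochs):
        pred = states @ w
        grad = 2.0 / n * states.T @ (pred - mpc_steering)
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 3))          # [lateral error, heading error, curvature]
w_true = np.array([-0.8, -0.4, 1.2])  # hypothetical MPC-like steering law
y = X @ w_true                        # MPC-generated steering commands
w_hat = pretrain_actor(X, y)
mse = float(np.mean((X @ w_hat - y) ** 2))
```

Warm-starting the actor this way gives the subsequent DDPG phase a policy that already tracks reasonably, which is what lets the feedforward-plus-noise exploration focus on refinement rather than learning from scratch.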
This paper introduces autonomous driving image perception technology, including deep learning models (such as CNN and RNN) and their applications, analyzing the limitations of traditional algorithms. It elaborates on the shortcomings of Faster R-CNN and the YOLO series models, proposes various improvement techniques such as data fusion, attention mechanisms, and model compression, and introduces relevant datasets, evaluation metrics, and testing frameworks to demonstrate the advantages of the improved models.
This study proposes an automatic control system for Autonomous Underwater Vehicle (AUV) docking, utilizing a digital twin (DT) environment based on the HoloOcean platform, which integrates six-degree-of-freedom (6-DOF) motion equations and hydrodynamic coefficients to create a realistic simulation. Although conventional model-based and visual servoing approaches often struggle in dynamic underwater environments due to limited adaptability and extensive parameter-tuning requirements, deep reinforcement learning (DRL) offers a promising alternative. In the positioning stage, the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm is employed for synchronized depth and heading control, which offers stable training, reduced overestimation bias, and superior handling of continuous control compared to other DRL methods. During the searching stage, zig-zag heading motion combined with a state-of-the-art object detection algorithm facilitates docking station localization. For the docking stage, this study proposes an innovative Image-based DDPG (I-DDPG), enhanced and trained in a Unity-MATLAB simulation environment, to achieve visual target tracking. Furthermore, integrating a DT environment enables efficient and safe policy training, reduces dependence on costly real-world tests, and improves sim-to-real transfer performance. Both simulation and real-world experiments were conducted, demonstrating the effectiveness of the system in improving AUV control strategies and supporting the transition from simulation to real-world operations in underwater environments. The results highlight the scalability and robustness of the proposed system, as evidenced by the TD3 controller achieving 25% less oscillation than the adaptive fuzzy controller when reaching the target depth, thereby demonstrating superior stability, accuracy, and potential for broader and more complex autonomous underwater tasks.
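The reduced overestimation bias credited to TD3 comes from its clipped double-Q target, which bootstraps from the minimum of two target critics. A sketch of that standard mechanism (Fujimoto et al.'s rule, shown with scalar placeholder critic outputs rather than the paper's AUV networks):

```python
import numpy as np

def td3_target(reward, done, next_q1, next_q2, gamma=0.99):
    # Clipped double-Q target: y = r + gamma * (1 - done) * min(Q1', Q2').
    # Taking the minimum of the two target critics curbs the upward bias
    # that a single bootstrapped critic tends to accumulate.
    return reward + gamma * (1.0 - done) * np.minimum(next_q1, next_q2)

y = td3_target(reward=1.0, done=0.0, next_q1=5.0, next_q2=4.0)
```

Here the optimistic critic estimate (5.0) is ignored in favor of the conservative one (4.0), so the target is 1.0 + 0.99 × 4.0 rather than 1.0 + 0.99 × 5.0.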
This paper intends to promote a college English autonomous teaching and learning approach by introducing the whole process of its implementation and feedback from the learners. The theoretical and practical framework of this approach takes multiple autonomous-learning research and practice models as its core, a process syllabus as its guidance, task-based teaching as its essential principle, group cooperation and reciprocal learning as its means, extracurricular activities, online learning, and a self-access center as its learning environment, a formative assessment system as its guarantee, and the cultivation of learners' comprehensive practical English skills and autonomy as its goal. Through this approach, we provide learners with a favorable environment in which they can learn by themselves and through reflection and practice, so that they learn how to learn, how to behave, and how to survive.
Based on a literature review of autonomous learning, the study puts forward four steps for using TED to enhance student autonomous learning: preparation, activity design, presentation, and evaluation. By doing so, both teachers and students can achieve their teaching and learning objectives.
Nowadays, English as a world language is becoming more and more important, and English learning is accordingly becoming more and more popular. An important objective for English learners is to improve their communicative competence, and autonomous learning is a good way to do so. In this paper, the two terms, autonomous learning and communicative competence, and their relationship are introduced from the perspective of English learning. Autonomous learning is self-managed learning, contrary to passive and mechanical learning, in keeping with the intrinsic nature of language learning. Communicative competence is a concept introduced by Dell Hymes and discussed and refined by many other linguists. According to Hymes, communicative competence is the ability not only to apply the grammatical rules of a language in order to form grammatically correct sentences, but also to know when, where, and to whom to use these sentences. Communicative competence includes four aspects: possibility, feasibility, appropriateness, and performance. Improved communicative competence is a result of autonomous learning, and autonomous learning is motivated by the desire to improve communicative competence. English is a bridge connecting China to the world, and fostering students' communicative competence through autonomous learning is a vital element of improving English learning in China.
The thesis introduces a comparative study of students' autonomous listening practice in a web-based autonomous learning center and traditional teacher-dominated listening practice in a conventional language lab. The purpose of the study is to find out how students' listening strategies differ between these two approaches and thereby which one better facilitates students' listening proficiency.
The paper, against the backdrop of web-based autonomous learning put forward by the recent college English teaching reform, aims to explore teachers' roles in this learning process as perceived by students, through questionnaires and interviews. It further analyzes the possible reasons why students perceive their teachers' roles in this way, in the hope of providing implications for web-based college English autonomous learning.
Autonomous learning is one of the objectives of multimedia college English teaching. On the basis of a test of students' autonomous learning ability and an analysis of the results, this paper attempts to explore the feasibility of fostering autonomous learning ability in college English teaching.
Autonomous study emphasizes the learner's initiative, enthusiasm, and creativity. In all fields of education, there is growing emphasis on "learner-centered" teaching methods and on learner autonomy. Many experts and scholars have found that learning strategies play an important role in English language learning, but the importance of affective strategy use in English learning is often ignored. Therefore, this paper focuses on the frequency of affective strategy use in English learning and the relationships among such strategies, so as to enable college students to use positive affective strategies effectively and improve their autonomous learning ability.
Research into the English autonomous learning ability of non-English-major university students shows that their autonomous learning ability is weak because they do not value the use of learning strategies. The use of learning strategies can promote the formation and enhancement of learners' autonomous learning ability. Metacognitive strategy is a high-level management skill that enables learners to actively plan, regulate, monitor, and evaluate their own learning process. Extensive research has shown that whether metacognitive strategies are used successfully can directly affect students' learning results. It is therefore necessary for teachers to cultivate and train students in the use of metacognitive strategies in university English teaching.
The paper is a literature review aiming to examine the effectiveness of web-based college English learning, focusing mainly on learners' autonomous learning. Previous studies indicate that web-based learning can improve learners' autonomous learning, although they also report some problems. This paper therefore first summarizes and critiques research on web-based autonomous learning and the factors influencing learners' autonomous learning ability; it then indicates areas that deserve further study.
Learner autonomy has been a hot topic in foreign language learning and teaching since the 1960s, especially in relation to lifelong skills. As globalization develops, intercultural communication becomes more and more significant for college students. This essay attempts to explore the main approaches to cultivating and improving students' autonomous learning ability and intercultural communicative competence in foreign language teaching.
Obstacle avoidance becomes a very challenging task for an autonomous underwater vehicle (AUV) exploring an unknown underwater environment. Successful control in such cases may be achieved with model-based classical control techniques such as PID and MPC, but these require an accurate mathematical model of the AUV and may fail due to parametric uncertainties, disturbances, or plant-model mismatch. On the other hand, a model-free reinforcement learning (RL) algorithm can be designed from the actual behavior of the AUV plant in an unknown environment, and the learned control is not affected by model uncertainties the way a classical control approach is. Unlike model-based control, a model-free RL-based controller does not require manual re-tuning as the environment changes. A standard one-step Q-learning-based control can be utilized for obstacle avoidance, but its tendency to explore all possible actions at a given state may increase the number of collisions. Hence, a modified Q-learning-based control approach is proposed to deal with these problems in unknown environments. Furthermore, function approximation with a neural network (NN) is utilized to overcome the continuous-state and large state-space problems that arise in RL-based controller design. The proposed modified Q-learning algorithm is validated in MATLAB simulations by comparing it with the standard Q-learning algorithm for single-obstacle avoidance, and the same algorithm is applied to multiple-obstacle avoidance problems.
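The baseline the abstract contrasts against, standard one-step Q-learning, is a single temporal-difference update per transition. A tiny sketch with toy obstacle-avoidance states and actions (illustrative only, not the authors' modified variant or their AUV state space):

```python
def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    # Standard one-step Q-learning update:
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

ACTIONS = ["left", "right"]
Q = {s: {a: 0.0 for a in ACTIONS} for s in ("free", "near_obstacle", "goal")}
# one transition: turning away from an obstacle is rewarded
q_update(Q, "near_obstacle", "left", r=1.0, s_next="free")
```

Each update shifts only one state-action entry toward the bootstrapped target; the table-free NN function approximation mentioned in the abstract replaces the dictionary lookup when states are continuous.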
Providing autonomous systems with an effective quantity and quality of information about a desired task is challenging. In particular, autonomous vehicles must have a reliable vision of their workspace to robustly accomplish driving functions. In machine vision, deep learning techniques, and specifically convolutional neural networks, have proven to be the state-of-the-art technology in the field. As these networks typically involve millions of parameters and elements, designing an optimal architecture for deep learning structures is a difficult task that is under investigation by researchers globally. This study experimentally evaluates the impact of three major architectural properties of convolutional networks, namely the number of layers, the number of filters, and the filter size, on their performance. Several models with different properties are developed, equally trained, and then applied to an autonomous car in a realistic simulation environment. A new ensemble approach is also proposed to calculate and update weights for the models according to their mean-squared-error values. Performance results are reported and compared with respect to the design properties. Surprisingly, the number of filters by itself does not largely affect performance efficiency; rather, proper allocation of filters with different kernel sizes across the layers introduces a considerable improvement in performance. The achievements of this study provide researchers with a clear clue and direction for designing optimal network architectures for deep learning purposes.
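The three architectural properties under study translate directly into parameter counts. A quick sketch (hypothetical layer configurations, not the paper's models) shows how kernel size can dominate the budget even when the filter counts are held fixed:

```python
def conv2d_params(in_channels, out_channels, kernel_size):
    # Learnable parameters in one 2-D conv layer (weights + biases):
    # kernel_size^2 * in_channels * out_channels + out_channels
    return kernel_size ** 2 * in_channels * out_channels + out_channels

def total_params(channels, kernel_sizes):
    # Parameter count of a plain conv stack, given the input channel count
    # followed by per-layer output channels, and one kernel size per layer.
    total, prev = 0, channels[0]
    for out_ch, k in zip(channels[1:], kernel_sizes):
        total += conv2d_params(prev, out_ch, k)
        prev = out_ch
    return total

# two hypothetical stacks: same filter counts, different kernel sizes
small_kernels = total_params([3, 16, 32], [3, 3])  # 3x3 kernels
large_kernels = total_params([3, 16, 32], [5, 5])  # 5x5 kernels
```

With identical filter counts, moving from 3×3 to 5×5 kernels nearly triples the parameters (5088 vs. 14048 here), which is why allocating kernel sizes across layers matters more than the raw filter count.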
Unmanned Aerial Vehicles (UAVs) play a vital role in military warfare. In a variety of battlefield mission scenarios, UAVs are required to fly safely to designated locations without human intervention. Therefore, finding a suitable method to solve the UAV Autonomous Motion Planning (AMP) problem can improve the success rate of UAV missions to a certain extent. In recent years, many studies have used Deep Reinforcement Learning (DRL) methods to address the AMP problem and have achieved good results. From the perspective of sampling, this paper designs a sampling method with double screening, combines it with the Deep Deterministic Policy Gradient (DDPG) algorithm, and proposes the Relevant Experience Learning-DDPG (REL-DDPG) algorithm. REL-DDPG uses a Prioritized Experience Replay (PER) mechanism to break the correlation of consecutive experiences in the experience pool, finds the experiences most similar to the current state to learn from, following theories from human education, and expands the influence of the learning process on action selection in the current state. All experiments are conducted in a complex unknown simulation environment constructed from the parameters of a real UAV. The training experiments show that REL-DDPG improves the convergence speed and the convergence result compared with the state-of-the-art DDPG algorithm, while the testing experiments show the applicability of the algorithm and investigate its performance under different parameter conditions.
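The PER mechanism the abstract builds on samples transitions in proportion to their priority rather than uniformly. A sketch of the standard proportional variant (Schaul et al.'s machinery; the priorities below are toy TD errors, not values from the paper's experiments):

```python
import numpy as np

def per_sample(priorities, batch_size, alpha=0.6, beta=0.4, rng=None):
    # Proportional PER: sample transition i with
    # P(i) = p_i^alpha / sum_j p_j^alpha, and return importance-sampling
    # weights (N * P(i))^(-beta), normalized by their maximum.
    if rng is None:
        rng = np.random.default_rng(0)
    p = np.asarray(priorities, dtype=float) ** alpha
    probs = p / p.sum()
    idx = rng.choice(len(probs), size=batch_size, p=probs)
    weights = (len(probs) * probs[idx]) ** (-beta)
    return idx, weights / weights.max()

# one transition with a large TD error dominates the sampling
idx, w = per_sample([0.1, 0.1, 5.0, 0.1], batch_size=256)
```

The high-priority transition is drawn far more often than the others, and the importance weights (at most 1 after normalization) compensate for that bias in the gradient update.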
This paper proposes an autonomous maneuver decision method using transfer learning pigeon-inspired optimization (TLPIO) for unmanned combat aerial vehicles (UCAVs) in dogfight engagements. First, a nonlinear F-16 aircraft model and automatic control system are constructed on a MATLAB/Simulink platform. Second, a 3-degrees-of-freedom (3-DOF) aircraft model is used as a maneuvering command generator, and an expanded elemental maneuver library is designed so that the aircraft's reachable state set can be obtained. Then, the game matrix is composed with the air combat situation evaluation function, calculated according to the angle and range threats. Finally, and crucially, the objective function to be optimized is designed using the game's mixed strategy, and the optimal mixed strategy is obtained by TLPIO. Notably, the proposed TLPIO does not initialize its population randomly but instead adopts a transfer learning method based on Kullback-Leibler (KL) divergence to initialize the population, which improves the search accuracy of the optimization algorithm. The convergence and time complexity of TLPIO are also discussed, and comparison with other classical optimization algorithms highlights the advantage of TLPIO. In the air combat simulations, three initial scenarios are set: opposite, offensive, and defensive conditions. The effectiveness of the proposed autonomous maneuver decision method is verified by the simulation results.
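The KL divergence that guides TLPIO's transfer-based population initialization has a closed form for Gaussians. The abstract only states that KL divergence is used; the univariate form below is an illustrative building block, not the paper's exact transfer rule:

```python
import math

def kl_gaussian(mu1, sigma1, mu2, sigma2):
    # D( N(mu1, sigma1^2) || N(mu2, sigma2^2) ) =
    #   log(sigma2/sigma1) + (sigma1^2 + (mu1 - mu2)^2) / (2*sigma2^2) - 1/2
    return (math.log(sigma2 / sigma1)
            + (sigma1 ** 2 + (mu1 - mu2) ** 2) / (2.0 * sigma2 ** 2)
            - 0.5)

same = kl_gaussian(0.0, 1.0, 0.0, 1.0)     # identical distributions -> 0
shifted = kl_gaussian(0.0, 1.0, 2.0, 1.0)  # source far from target -> large
```

Scoring candidate source distributions by such a divergence is one way a transfer scheme can prefer populations that already resemble the target problem, rather than starting from a random spread.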
Funding (PBL-TBL clinical teaching study): Supported by grants from the National Natural Science Foundation of China (grant nos. 82472554 and 82202449) and the Fund for Excellent Young Scholars of Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine (grant no. JYYQ006).
Funding (FS-DRL study): Supported by the National Natural Science Foundation of China (Grant No. 61803206), the Jiangsu Provincial Natural Science Foundation (Grant No. 222300420468), and the Jiangsu Provincial Key R&D Program (Grant No. BE2017008-2).
Funding (end-to-end path tracking study): Supported by the National Natural Science Foundation of China (Grant No. 52405104), the Jiangxi Provincial Natural Science Foundation (Grant Nos. 20242BAB20249 and 20232BAB204041), and the Science and Technology Project of the Department of Transportation of Jiangxi Province (Grant No. 2025QN003).
Abstract: Autonomous driving technology is steadily advancing toward more complex scenes, and there is growing demand for end-to-end data-driven control. However, the end-to-end path tracking process often encounters challenges in learning efficiency and generalization. To address this issue, this paper designs a deep deterministic policy gradient (DDPG)-based reinforcement learning strategy that integrates imitation learning and feedforward exploration into the path following process. In imitation learning, path tracking control data generated by a model predictive control (MPC) method are used to train an end-to-end deep-neural-network steering control model. A feedforward exploration behavior is then predicted from road curvature and vehicle speed, and both it and the imitation-learned policy are incorporated into DDPG reinforcement learning to obtain decision-making experience and action predictions for the path tracking process. During reinforcement learning, imitation learning is used to initialize the pre-training parameters of the actor network, and a feedforward steering technique with random noise is adopted for strategy exploration. For the reward function, a hierarchical progressive reward and a constrained objective reward referring to MPC are designed, and the actor-critic network architecture is determined. Finally, the path tracking performance of the designed method is verified through various training results, simulations, and HIL tests. The results show that the designed method can effectively exploit pre-training and feedforward prior experience to obtain optimal path tracking performance for an autonomous vehicle, and has better generalization ability than other methods. This study provides an efficient control scheme for improving the end-to-end control performance of autonomous vehicles.
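The curvature-and-speed-based feedforward steering term described above can be sketched under a kinematic bicycle (Ackermann) assumption; the wheelbase value, function name, and pure-geometry formulation are illustrative assumptions, not the paper's exact design.

```python
import math
import random

def feedforward_steer(curvature, wheelbase=2.7, noise_std=0.0):
    """Sketch of a feedforward steering angle computed from road
    curvature (kinematic bicycle / Ackermann assumption), with optional
    Gaussian noise standing in for the exploration perturbation."""
    delta = math.atan(wheelbase * curvature)  # Ackermann feedforward term
    return delta + random.gauss(0.0, noise_std)
```

On a straight road (zero curvature) the feedforward angle vanishes and only the exploration noise perturbs the action, which is the intended role of such a term in guiding early exploration.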
Abstract: This paper introduces autonomous driving image perception technology, including deep learning models (such as CNNs and RNNs) and their applications, and analyzes the limitations of traditional algorithms. It elaborates on the shortcomings of Faster R-CNN and the YOLO series, proposes improvement techniques such as data fusion, attention mechanisms, and model compression, and introduces relevant datasets, evaluation metrics, and testing frameworks to demonstrate the advantages of the improved models.
Funding: Supported by the National Science and Technology Council, Taiwan [Grant NSTC 111-2628-E-006-005-MY3]; supported by the Ocean Affairs Council, Taiwan; and sponsored in part by the Higher Education Sprout Project, Ministry of Education, to the Headquarters of University Advancement at National Cheng Kung University (NCKU).
Abstract: This study proposes an automatic control system for Autonomous Underwater Vehicle (AUV) docking, utilizing a digital twin (DT) environment based on the HoloOcean platform, which integrates six-degree-of-freedom (6-DOF) motion equations and hydrodynamic coefficients to create a realistic simulation. Although conventional model-based and visual servoing approaches often struggle in dynamic underwater environments due to limited adaptability and extensive parameter-tuning requirements, deep reinforcement learning (DRL) offers a promising alternative. In the positioning stage, the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm is employed for synchronized depth and heading control, offering stable training, reduced overestimation bias, and superior handling of continuous control compared with other DRL methods. During the searching stage, zig-zag heading motion combined with a state-of-the-art object detection algorithm facilitates docking-station localization. For the docking stage, this study proposes an innovative image-based DDPG (I-DDPG), enhanced and trained in a Unity-MATLAB simulation environment, to achieve visual target tracking. Furthermore, integrating a DT environment enables efficient and safe policy training, reduces dependence on costly real-world tests, and improves sim-to-real transfer performance. Both simulation and real-world experiments were conducted, demonstrating the effectiveness of the system in improving AUV control strategies and supporting the transition from simulation to real-world operations in underwater environments. The results highlight the scalability and robustness of the proposed system: the TD3 controller achieved 25% less oscillation than an adaptive fuzzy controller when reaching the target depth, demonstrating superior stability, accuracy, and potential for broader and more complex autonomous underwater tasks.
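TD3's reduced overestimation bias, mentioned above, comes chiefly from its clipped double-Q target, which bootstraps from the minimum of two critic estimates. A minimal scalar sketch follows; the function name and signature are illustrative, not from the paper.

```python
def td3_target(reward, q1_next, q2_next, gamma=0.99, done=False):
    """Clipped double-Q target used by TD3: bootstrapping from the
    smaller of the two critics' next-state estimates curbs the
    overestimation bias that plain DDPG suffers from."""
    q_min = min(q1_next, q2_next)  # pessimistic critic estimate
    return reward + (0.0 if done else gamma * q_min)
```

In a full implementation this target would be computed per transition over a replay batch and regressed against by both critics.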
Abstract: This paper promotes a college English autonomous teaching and learning approach by describing the whole process of its implementation and the feedback from learners. The theoretical and practical framework of this approach is: multiple autonomous learning research and practice models as its core, a process syllabus as its guidance, task-based teaching as its essential principle, group cooperation and reciprocal learning as its means, extracurricular activities, online learning, and a self-access center as its learning environment, a formative assessment system as its guarantee, and the cultivation of learners' comprehensive practical English skills and autonomy as its goal. Through this approach, we provide learners with a favorable environment where they can learn by themselves and through reflection and practice, so that they learn how to learn, how to behave, and how to survive.
Abstract: Based on a literature review of autonomous learning, this study puts forward four steps for using TED talks to enhance students' autonomous learning: preparation, activity design, presentation, and evaluation. By following these steps, both teachers and students can achieve their teaching and learning objectives.
Abstract: Nowadays, English as a world language is becoming more and more important, and English learning is becoming increasingly popular. An important objective for English learners is to improve their communicative competence, and autonomous learning is a good way to do so. This paper introduces two terms, autonomous learning and communicative competence, and their relationship from the perspective of English learning. Autonomous learning is self-managed learning, contrary to passive and mechanical learning, in keeping with the intrinsic nature of language learning. Communicative competence is a concept introduced by Dell Hymes and discussed and refined by many other linguists. According to Hymes, communicative competence is the ability not only to apply the grammatical rules of a language to form grammatically correct sentences, but also to know when, where, and to whom to use these sentences. Communicative competence includes four aspects: possibility, feasibility, appropriateness, and performance. Improved communicative competence is the result of autonomous learning, while autonomous learning is the motivation for improving communicative competence. English is a bridge connecting China to the world, and fostering students' communicative competence through autonomous learning is a vital element of improving English learning in China.
Abstract: This thesis presents a comparative study of students' autonomous listening practice in a web-based autonomous learning center and traditional teacher-dominated listening practice in a conventional language lab. The purpose of the study is to find how students' listening strategies differ between the two approaches and thereby determine which one better facilitates students' listening proficiency.
Abstract: Against the backdrop of the web-based autonomous learning put forward by the recent college English teaching reform, this paper explores teachers' roles in this learning process as perceived by students, using questionnaires and interviews. It further analyzes the possible reasons why students perceive their teachers' roles in this way, in the hope of providing some implications for web-based college English autonomous learning.
Abstract: Autonomous learning is one of the objectives of multimedia college English teaching. On the basis of a test of students' autonomous learning ability and an analysis of the results, this paper explores the feasibility of fostering autonomous learning ability in college English teaching.
Abstract: Autonomous study emphasizes the learner's initiative, enthusiasm, and creativity. In all fields of education, there is growing emphasis on learner-centered teaching methods and learner autonomy. Many experts and scholars have found that learning strategies play an important role in English language learning, but the importance of affective strategy use is often overlooked. Therefore, this paper focuses on the frequency of affective strategy use in English learning and the relationships among these strategies, so as to enable college students to use positive affective strategies effectively and improve their autonomous learning ability.
Abstract: Research into the English autonomous learning ability of non-English-major college students shows that this ability is weak because students do not value the use of learning strategies. The use of learning strategies can promote the formation and enhancement of learners' autonomous learning ability. Metacognitive strategy is a high-level management skill that enables learners to actively plan, regulate, monitor, and evaluate their own learning process. Numerous studies have shown that whether metacognitive strategy is used successfully can directly affect students' learning results. It is therefore necessary for teachers to cultivate and train students to use metacognitive strategies in university English teaching.
Abstract: This paper is a literature review examining the effectiveness of web-based college English learning, focusing mainly on learners' autonomous learning. Previous studies indicate that web-based learning can improve learners' autonomous learning, while also revealing some problems. This paper therefore first summarizes and critiques research on web-based autonomous learning and the factors influencing learners' autonomous learning ability; it then indicates areas that deserve further study.
Abstract: Learner autonomy has been a hot topic in foreign language learning and teaching since the 1960s, especially in relation to lifelong skills. As globalization develops, intercultural communication becomes more and more significant for college students. This essay explores the main approaches to cultivating and improving students' autonomous learning ability and intercultural communicative competence in foreign language teaching.
Funding: The authors acknowledge the support of the Centre of Excellence (CoE) in Complex and Nonlinear Dynamical Systems (CNDS), through TEQIP-II, VJTI, Mumbai, India.
Abstract: Obstacle avoidance is a very challenging task for an autonomous underwater vehicle (AUV) exploring an unknown underwater environment. Successful control in such cases may be achieved using model-based classical control techniques such as PID and MPC, but these require an accurate mathematical model of the AUV and may fail due to parametric uncertainties, disturbances, or plant-model mismatch. On the other hand, a model-free reinforcement learning (RL) algorithm can be designed from the actual behavior of the AUV plant in an unknown environment, and the learned controller is not affected by model uncertainties the way a classical controller is. Unlike a model-based controller, a model-free RL-based controller does not require manual retuning as the environment changes. A standard one-step Q-learning-based controller can be used for obstacle avoidance, but it tends to explore all possible actions in a given state, which may increase the number of collisions. Hence, a modified Q-learning-based control approach is proposed to deal with these problems in unknown environments. Furthermore, function approximation with a neural network (NN) is used to overcome the continuous-state and large state-space problems that arise in RL-based controller design. The proposed modified Q-learning algorithm is validated in MATLAB simulations by comparing it with the standard Q-learning algorithm for single-obstacle avoidance; the same algorithm is then applied to multiple-obstacle avoidance problems.
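For reference, the standard one-step Q-learning update that serves as the baseline above can be sketched as follows; the paper's modification to curb exploration is not specified in the abstract, so only the textbook rule is shown.

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Textbook one-step Q-learning update on a tabular Q (a list of
    per-state action-value lists): move Q[s][a] toward the bootstrapped
    target r + gamma * max_a' Q[s_next][a']."""
    td_target = r + gamma * max(Q[s_next])  # greedy bootstrap over next state
    Q[s][a] += alpha * (td_target - Q[s][a])
    return Q[s][a]
```

The neural-network function approximation mentioned in the abstract replaces the table `Q` with a learned mapping from continuous states to action values.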
Abstract: Providing autonomous systems with an effective quantity and quality of information from a desired task is challenging. In particular, autonomous vehicles must have a reliable vision of their workspace to robustly accomplish driving functions. In machine vision, deep learning techniques, and specifically convolutional neural networks, have proven to be the state-of-the-art technology in the field. Because these networks typically involve millions of parameters and elements, designing an optimal architecture for deep learning structures is a difficult task under global investigation by researchers. This study experimentally evaluates the impact of three major architectural properties of convolutional networks, namely the number of layers, the number of filters, and the filter size, on their performance. Several models with different properties are developed, trained identically, and then applied to an autonomous car in a realistic simulation environment. A new ensemble approach is also proposed to calculate and update weights for the models based on their mean squared error values. Performance results are reported and compared across the design properties. Surprisingly, the number of filters by itself does not largely affect performance efficiency; rather, a proper allocation of filters with different kernel sizes across the layers yields a considerable improvement. The findings of this study give researchers a clear clue and direction in designing optimal network architectures for deep learning purposes.
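The MSE-based ensemble weighting could plausibly take the inverse-error form below; this normalization rule is an assumption for illustration, as the abstract does not give the exact formula.

```python
def ensemble_weights(mse_values):
    """Hypothetical inverse-MSE weighting: each model's weight is its
    inverse error normalized to sum to one, so lower-error models
    dominate the fused prediction."""
    inverse = [1.0 / m for m in mse_values]
    total = sum(inverse)
    return [w / total for w in inverse]
```

The fused steering command would then be the weighted sum of the individual models' outputs under these weights.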
Funding: Co-supported by the National Natural Science Foundation of China (Nos. 62003267, 61573285), the Aeronautical Science Foundation of China (ASFC) (No. 20175553027), and the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2020JQ-220).
Abstract: Unmanned Aerial Vehicles (UAVs) play a vital role in military warfare. In a variety of battlefield mission scenarios, UAVs are required to fly safely to designated locations without human intervention. Finding a suitable method to solve the UAV Autonomous Motion Planning (AMP) problem can therefore improve the success rate of UAV missions to a certain extent. In recent years, many studies have applied Deep Reinforcement Learning (DRL) methods to the AMP problem with good results. From a sampling perspective, this paper designs a double-screening sampling method, combines it with the Deep Deterministic Policy Gradient (DDPG) algorithm, and proposes the Relevant Experience Learning-DDPG (REL-DDPG) algorithm. REL-DDPG uses a Prioritized Experience Replay (PER) mechanism to break the correlation of consecutive experiences in the experience pool, finds the experiences most similar to the current state to learn from, following theories of human education, and expands the influence of the learning process on action selection in the current state. All experiments are run in a complex unknown simulation environment constructed from the parameters of a real UAV. Training experiments show that REL-DDPG improves both convergence speed and the converged result compared with the state-of-the-art DDPG algorithm, while testing experiments show the applicability of the algorithm and investigate its performance under different parameter settings.
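The "experiences most similar to the current state" idea can be sketched as a nearest-neighbor lookup over stored transitions; the Euclidean distance metric and the dictionary layout are illustrative assumptions, not REL-DDPG's actual similarity measure.

```python
import math

def most_relevant(experiences, state, k=2):
    """Rank stored experiences by Euclidean distance between their
    recorded state and the current state, and return the k closest,
    which would then be replayed preferentially."""
    return sorted(experiences, key=lambda e: math.dist(e["state"], state))[:k]
```

In a full PER scheme this relevance score would set or scale the sampling priority rather than deterministically selecting the top-k transitions.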
Funding: Supported by the Science and Technology Innovation 2030 Key Project of "New Generation Artificial Intelligence" (2018AAA0100803) and the National Natural Science Foundation of China (U20B2071, 91948204, T2121003, U1913602).
Abstract: This paper proposes an autonomous maneuver decision method using transfer learning pigeon-inspired optimization (TLPIO) for unmanned combat aerial vehicles (UCAVs) in dogfight engagements. Firstly, a nonlinear F-16 aircraft model and automatic control system are constructed on a MATLAB/Simulink platform. Secondly, a 3-degrees-of-freedom (3-DOF) aircraft model is used as a maneuvering command generator, and an expanded elemental maneuver library is designed so that the aircraft's reachable state set can be obtained. The game matrix is then composed with an air-combat situation evaluation function calculated from angle and range threats. Finally, and crucially, the objective function to be optimized is designed using the game's mixed strategy, and the optimal mixed strategy is obtained by TLPIO. Notably, TLPIO does not initialize its population randomly; instead, it adopts a transfer learning method based on Kullback-Leibler (KL) divergence to initialize the population, which improves the search accuracy of the optimization algorithm. The convergence and time complexity of TLPIO are also discussed, and comparison with other classical optimization algorithms highlights its advantage. In the air-combat simulation, three initial scenarios are set: opposite, offensive, and defensive conditions. Simulation results verify the effectiveness of the proposed autonomous maneuver decision method.
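The KL divergence used for the transfer-based population initialization has the standard discrete form below; how TLPIO maps it onto pigeon positions is not given in the abstract, so only the divergence itself is shown.

```python
import math

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence D(p || q) between two
    probability distributions given as equal-length sequences.
    Terms with p_i = 0 contribute zero by convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)
```

A smaller divergence between a source-task distribution and the target-task distribution would indicate a better candidate population to transfer.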