Journal Articles
2,916 articles found
Rhodia Rare Earths Reinforces its position in Asia
1
China Rare Earth Information, 1999, No. 6, p. 3 (1 page)
Keywords: Rhodia Rare Earths reinforces its position in Asia
Auxin Binding Protein 1 Reinforces Resistance to Sugarcane Mosaic Virus in Maize (Cited by 7)
2
Authors: Pengfei Leng, Qing Ji, Torben Asp, Ursula K. Frei, Christina R. Ingvardsen, Yongzhong Xing, Bruno Studer, Margaret Redinbaugh, Mark Jones, Priyanka Gajjar, Sisi Liu, Fei Li, Guangtang Pan, Mingliang Xu, Thomas Lübberstedt. Molecular Plant (SCIE, CAS, CSCD), 2017, No. 10, pp. 1357-1360 (4 pages)
Dear Editor, Sugarcane mosaic virus (SCMV) causes severe viral diseases in maize worldwide (Fuchs and Gruntzig, 1995), resulting in significant losses in grain and forage yield in susceptible cultivars of maize and related crops. The most promising solution is to cultivate resistant varieties, which contribute to sustainable crop production. Two epistatically interacting major SCMV resistance loci (Scmv1 and Scmv2) are required to confer complete resistance against SCMV in the resistant near-isogenic line F7RPJRR (the letters left of the slash refer to the genotype at Scmv2 on chromosome 3 and those on the right refer to the genotype at Scmv1 on chromosome 6, with R indicating a resistance allele and S a susceptibility allele) (Xing et al., 2006).
Keywords: Auxin Binding Protein 1; reinforces resistance; Sugarcane Mosaic Virus; maize
An Improved Reinforcement Learning-Based 6G UAV Communication for Smart Cities
3
Authors: Vi Hoai Nam, Chu Thi Minh Hue, Dang Van Anh. Computers, Materials & Continua, 2026, No. 1, pp. 2030-2044 (15 pages)
Unmanned Aerial Vehicles (UAVs) have become integral components in smart city infrastructures, supporting applications such as emergency response, surveillance, and data collection. However, the high mobility and dynamic topology of Flying Ad Hoc Networks (FANETs) present significant challenges for maintaining reliable, low-latency communication. Conventional geographic routing protocols often struggle in situations where link quality varies and mobility patterns are unpredictable. To overcome these limitations, this paper proposes an improved routing protocol based on reinforcement learning. This new approach integrates Q-learning with mechanisms that are both link-aware and mobility-aware. The proposed method optimizes the selection of relay nodes by using an adaptive reward function that takes into account energy consumption, delay, and link quality. Additionally, a Kalman filter is integrated to predict UAV mobility, improving the stability of communication links under dynamic network conditions. Simulation experiments were conducted using realistic scenarios, varying the number of UAVs to assess scalability. An analysis was conducted on key performance metrics, including the packet delivery ratio, end-to-end delay, and total energy consumption. The results demonstrate that the proposed approach significantly improves the packet delivery ratio by 12%–15% and reduces delay by up to 25.5% when compared to conventional GEO and QGEO protocols. However, this improvement comes at the cost of higher energy consumption due to additional computations and control overhead. Despite this trade-off, the proposed solution ensures reliable and efficient communication, making it well-suited for large-scale UAV networks operating in complex urban environments.
Keywords: UAV; FANET; smart cities; reinforcement learning; Q-learning
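The routing scheme described above can be sketched as tabular Q-learning over candidate relay nodes with a composite reward. Everything below (the weight values, the state encoding, the update details) is an illustrative assumption, not the paper's exact design:

```python
import random

# Composite reward: link quality is a benefit, delay and energy are costs.
# The weights w_q, w_d, w_e are illustrative, not the paper's values.
def composite_reward(link_quality, delay, energy, w_q=0.5, w_d=0.3, w_e=0.2):
    return w_q * link_quality - w_d * delay - w_e * energy

class RelaySelector:
    """Epsilon-greedy tabular Q-learning over candidate relay nodes."""

    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = {}  # (state, relay) -> Q-value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def choose(self, state, relays):
        if random.random() < self.eps:
            return random.choice(relays)  # explore
        return max(relays, key=lambda r: self.q.get((state, r), 0.0))

    def update(self, state, relay, reward, next_state, next_relays):
        # Standard one-step Q-learning backup.
        best_next = max((self.q.get((next_state, r), 0.0)
                         for r in next_relays), default=0.0)
        old = self.q.get((state, relay), 0.0)
        self.q[(state, relay)] = old + self.alpha * (
            reward + self.gamma * best_next - old)
```

The paper additionally feeds Kalman-filter mobility predictions into the decision; here the state is left as an opaque key.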
Recurrent MAPPO for Joint UAV Trajectory and Traffic Offloading in Space-Air-Ground Integrated Networks
4
Authors: Zheyuan Jia, Fenglin Jin, Jun Xie, Yuan He. Computers, Materials & Continua, 2026, No. 1, pp. 447-461 (15 pages)
This paper investigates the traffic offloading optimization challenge in Space-Air-Ground Integrated Networks (SAGIN) through a novel Recursive Multi-Agent Proximal Policy Optimization (RMAPPO) algorithm. The exponential growth of mobile devices and data traffic has substantially increased network congestion, particularly in urban areas and regions with limited terrestrial infrastructure. Our approach jointly optimizes unmanned aerial vehicle (UAV) trajectories and satellite-assisted offloading strategies to simultaneously maximize data throughput, minimize energy consumption, and maintain equitable resource distribution. The proposed RMAPPO framework incorporates recurrent neural networks (RNNs) to model temporal dependencies in UAV mobility patterns and utilizes a decentralized multi-agent reinforcement learning architecture to reduce communication overhead while improving system robustness. The proposed RMAPPO algorithm was evaluated through simulation experiments, with the results indicating that it significantly enhances the cumulative traffic offloading rate of nodes and reduces the energy consumption of UAVs.
Keywords: space-air-ground integrated networks; UAV; traffic offloading; reinforcement learning
A Q-Learning Improved Particle Swarm Optimization for Aircraft Pulsating Assembly Line Scheduling Problem Considering Skilled Operator Allocation
5
Authors: Xiaoyu Wen, Haohao Liu, Xinyu Zhang, Haoqi Wang, Yuyan Zhang, Guoyong Ye, Hongwen Xing, Siren Liu, Hao Li. Computers, Materials & Continua, 2026, No. 1, pp. 1503-1529 (27 pages)
Aircraft assembly is characterized by stringent precedence constraints, limited resource availability, spatial restrictions, and a high degree of manual intervention. These factors lead to considerable variability in operator workloads and significantly increase the complexity of scheduling. To address this challenge, this study investigates the Aircraft Pulsating Assembly Line Scheduling Problem (APALSP) under skilled operator allocation, with the objective of minimizing assembly completion time. A mathematical model considering skilled operator allocation is developed, and a Q-Learning improved Particle Swarm Optimization algorithm (QLPSO) is proposed. In the algorithm design, a reverse scheduling strategy is adopted to effectively manage large-scale precedence constraints. Moreover, a reverse sequence encoding method is introduced to generate operation sequences, while a time decoding mechanism is employed to determine completion times. The problem is further reformulated as a Markov Decision Process (MDP) with explicitly defined state and action spaces. Within QLPSO, the Q-learning mechanism adaptively adjusts inertia weights and learning factors, thereby achieving a balance between exploration capability and convergence performance. To validate the effectiveness of the proposed approach, extensive computational experiments are conducted on benchmark instances of different scales, including small, medium, large, and ultra-large cases. The results demonstrate that QLPSO consistently delivers stable and high-quality solutions across all scenarios. In ultra-large-scale instances, it improves the best solution by 25.2% compared with the Genetic Algorithm (GA) and enhances the average solution by 16.9% over the Q-learning algorithm, showing clear advantages over the comparative methods. These findings not only confirm the effectiveness of the proposed algorithm but also provide valuable theoretical references and practical guidance for the intelligent scheduling optimization of aircraft pulsating assembly lines.
Keywords: aircraft pulsating assembly lines; skilled operator; reinforcement learning; PSO; reverse scheduling
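The core QLPSO idea, a Q-learning layer that switches the PSO inertia weight and learning factors, can be sketched as follows. The parameter presets, the single-state setup, and the reward signal are assumptions for illustration only, not the paper's tuned design:

```python
import random

# Candidate (inertia weight w, cognitive factor c1, social factor c2) presets;
# illustrative values, not the paper's parameters.
PRESETS = [(0.9, 2.0, 1.0),   # exploratory: high inertia
           (0.6, 1.7, 1.7),   # balanced
           (0.4, 1.0, 2.0)]   # exploitative: strong social pull

class ParamScheduler:
    """Q-learning over coarse swarm states (e.g. stagnation level) and presets."""

    def __init__(self, n_states, alpha=0.2, gamma=0.8, eps=0.1):
        self.q = [[0.0] * len(PRESETS) for _ in range(n_states)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def choose(self, state):
        if random.random() < self.eps:
            return random.randrange(len(PRESETS))
        row = self.q[state]
        return row.index(max(row))

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[next_state])
        self.q[state][action] += self.alpha * (
            reward + self.gamma * best_next - self.q[state][action])
```

A natural reward here is the relative improvement of the swarm's best fitness after one PSO iteration under the chosen preset.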
A Deep Reinforcement Learning-Based Partitioning Method for Power System Parallel Restoration
6
Authors: Changcheng Li, Weimeng Chang, Dahai Zhang, Jinghan He. Energy Engineering, 2026, No. 1, pp. 243-264 (22 pages)
Effective partitioning is crucial for enabling parallel restoration of power systems after blackouts. This paper proposes a novel partitioning method based on deep reinforcement learning. First, the partitioning decision process is formulated as a Markov decision process (MDP) model to maximize the modularity. Corresponding key partitioning constraints on parallel restoration are considered. Second, based on the partitioning objective and constraints, the reward function of the partitioning MDP model is set by adopting a relative deviation normalization scheme to reduce mutual interference between the reward and penalty in the reward function. A soft bonus scaling mechanism is introduced to mitigate overestimation caused by abrupt jumps in the reward. Then, the deep Q-network method is applied to solve the partitioning MDP model and generate partitioning schemes. Two experience replay buffers are employed to speed up the training process of the method. Finally, case studies on the IEEE 39-bus test system demonstrate that the proposed method can generate a high-modularity partitioning result that meets all key partitioning constraints, thereby improving the parallelism and reliability of the restoration process. Moreover, simulation results demonstrate that an appropriate discount factor is crucial for ensuring both the convergence speed and the stability of the partitioning training.
Keywords: partitioning method; parallel restoration; deep reinforcement learning; experience replay buffer; partitioning modularity
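The partitioning objective named above is modularity. A small reference implementation of Newman's modularity for an unweighted, undirected network is shown below (the deep-RL machinery itself is not reproduced here):

```python
def modularity(adj, labels):
    """Newman modularity of a partition.

    adj is a symmetric 0/1 adjacency matrix (list of lists) and labels[i]
    is the partition (island) id of node i. Computes
    Q = (1 / 2m) * sum_ij (A_ij - k_i * k_j / 2m) * delta(c_i, c_j).
    """
    n = len(adj)
    two_m = sum(sum(row) for row in adj)  # 2m for an undirected graph
    if two_m == 0:
        return 0.0
    deg = [sum(row) for row in adj]
    q = 0.0
    for i in range(n):
        for j in range(n):
            if labels[i] == labels[j]:
                q += adj[i][j] - deg[i] * deg[j] / two_m
    return q / two_m
```

In the paper's setting the RL agent's reward would be built from this quantity (after the relative-deviation normalization the abstract describes), with constraint violations entering as penalties.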
A Multi-Objective Deep Reinforcement Learning Algorithm for Computation Offloading in Internet of Vehicles
7
Authors: Junjun Ren, Guoqiang Chen, Zheng-Yi Chai, Dong Yuan. Computers, Materials & Continua, 2026, No. 1, pp. 2111-2136 (26 pages)
Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFN), thereby efficiently approximating the Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm can better coordinate the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
Keywords: deep reinforcement learning; Internet of Vehicles; multi-objective optimization; cloud-edge computing; computation offloading; service caching
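The DDQN agents mentioned above rely on the Double-DQN target, which uses the online network to pick the next action and the target network to evaluate it, curbing Q-value overestimation. A table-based sketch of just that target computation (the dict-based tables and the gamma value are illustrative):

```python
def double_q_target(reward, next_state, q_online, q_target, actions, gamma=0.99):
    """Double-DQN target: the online table selects the action, the target
    table evaluates it. Both dicts map (state, action) to Q-values."""
    best = max(actions, key=lambda a: q_online.get((next_state, a), 0.0))
    return reward + gamma * q_target.get((next_state, best), 0.0)
```

In the multi-objective setup of the abstract, each objective's agent would form such a target from its own reward function, with the objective weights adapted separately by the RBFN component.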
Artificial Intelligence (AI)-Enabled Unmanned Aerial Vehicle (UAV) Systems for Optimizing User Connectivity in Sixth-Generation (6G) Ubiquitous Networks
8
Authors: Zeeshan Ali Haider, Inam Ullah, Ahmad Abu Shareha, Rashid Nasimov, Sufyan Ali Memon. Computers, Materials & Continua, 2026, No. 1, pp. 534-549 (16 pages)
The advent of sixth-generation (6G) networks introduces unprecedented challenges in achieving seamless connectivity, ultra-low latency, and efficient resource management in highly dynamic environments. Although fifth-generation (5G) networks transformed mobile broadband and machine-type communications at massive scales, their scaling, interference management, and latency properties remain limiting in dense, high-mobility settings. To overcome these limitations, artificial intelligence (AI) and unmanned aerial vehicles (UAVs) have emerged as potential solutions for developing versatile, dynamic, and energy-efficient communication systems. The study proposes an AI-based UAV architecture that utilizes cooperative reinforcement learning (CoRL) to manage an autonomous network. The UAVs collaborate by sharing local observations and real-time state exchanges to optimize user connectivity, movement directions, power allocation, and resource distribution. Unlike conventional centralized or autonomous methods, CoRL involves joint state sharing and conflict-sensitive reward shaping, which ensures fair coverage, less interference, and enhanced adaptability in a dynamic urban environment. Simulations conducted in smart city scenarios with 10 UAVs and 50 ground users demonstrate that the proposed CoRL-based UAV system increases user coverage by up to 10%, achieves convergence 40% faster, and reduces latency and energy consumption by 30% compared with centralized and decentralized baselines. Furthermore, the distributed nature of the algorithm ensures scalability and flexibility, making it well-suited for future large-scale 6G deployments. The results highlight that AI-enabled UAV systems enhance connectivity, support ultra-reliable low-latency communications (URLLC), and improve 6G network efficiency. Future work will extend the framework with adaptive modulation, beamforming-aware positioning, and real-world testbed deployment.
Keywords: 6G networks; UAV-based communication; cooperative reinforcement learning; network optimization; user connectivity; energy efficiency
Energy Optimization for Autonomous Mobile Robot Path Planning Based on Deep Reinforcement Learning
9
Authors: Longfei Gao, Weidong Wang, Dieyun Ke. Computers, Materials & Continua, 2026, No. 1, pp. 984-998 (15 pages)
At present, energy consumption is one of the main bottlenecks in autonomous mobile robot development. To address the challenge of high energy consumption in path planning for autonomous mobile robots navigating unknown and complex environments, this paper proposes an Attention-Enhanced Dueling Deep Q-Network (AD-Dueling DQN), which integrates a multi-head attention mechanism and a prioritized experience replay strategy into a Dueling-DQN reinforcement learning framework. A multi-objective reward function, centered on energy efficiency, is designed to comprehensively consider path length, terrain slope, motion smoothness, and obstacle avoidance, enabling optimal low-energy trajectory generation in 3D space from the source. The incorporation of a multi-head attention mechanism allows the model to dynamically focus on energy-critical state features, such as slope gradients and obstacle density, thereby significantly improving its ability to recognize and avoid energy-intensive paths. Additionally, the prioritized experience replay mechanism accelerates learning from key decision-making experiences, suppressing inefficient exploration and guiding the policy toward low-energy solutions more rapidly. The effectiveness of the proposed path planning algorithm is validated through simulation experiments conducted in multiple off-road scenarios. Results demonstrate that AD-Dueling DQN consistently achieves the lowest average energy consumption across all tested environments. Moreover, the proposed method exhibits faster convergence and greater training stability compared to baseline algorithms, highlighting its global optimization capability under energy-aware objectives in complex terrains. This study offers an efficient and scalable intelligent control strategy for the development of energy-conscious autonomous navigation systems.
Keywords: autonomous mobile robot; deep reinforcement learning; energy optimization; multi-head attention mechanism; prioritized experience replay; dueling deep Q-network
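The Dueling-DQN backbone referenced above splits the Q-function into a state value V(s) and per-action advantages A(s, a). The standard recombination, with the mean advantage subtracted for identifiability, can be sketched as:

```python
def dueling_q(value, advantages):
    """Q(s, a) = V(s) + A(s, a) - mean over a' of A(s, a').

    Subtracting the mean advantage removes the ambiguity of shifting value
    mass between the V and A streams."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]
```

In the full network both streams are produced by learned heads; here they are plain numbers for illustration.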
Evaluation of Reinforcement Learning-Based Adaptive Modulation in Shallow Sea Acoustic Communication
10
Authors: Yifan Qiu, Xiaoyu Yang, Feng Tong, Dongsheng Chen. Journal of Harbin Engineering University (English Edition), 2026, No. 1, pp. 292-299 (8 pages)
While reinforcement learning-based underwater acoustic adaptive modulation shows promise for enabling environment-adaptive communication, as supported by extensive simulation-based research, its practical performance remains underexplored in field investigations. To evaluate the practical applicability of this emerging technique in adverse shallow sea channels, a field experiment was conducted using three communication modes: orthogonal frequency division multiplexing (OFDM), M-ary frequency-shift keying (MFSK), and direct sequence spread spectrum (DSSS) for reinforcement learning-driven adaptive modulation. Specifically, a Q-learning method is used to select the optimal modulation mode according to the channel quality quantified by signal-to-noise ratio, multipath spread length, and Doppler frequency offset. Experimental results demonstrate that the reinforcement learning-based adaptive modulation scheme outperformed fixed threshold detection in terms of total throughput and average bit error rate, surpassing conventional adaptive modulation strategies.
Keywords: adaptive modulation; shallow sea underwater acoustic modulation; reinforcement learning
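The selection loop described above can be sketched as Q-learning over modulation modes with a discretized channel state. The thresholds, rewards, and the simplified bandit-style update below are illustrative assumptions, not the experiment's calibrated design:

```python
import random

MODES = ["OFDM", "MFSK", "DSSS"]

def channel_state(snr_db, multipath_ms, doppler_hz):
    """Discretize channel quality into a coarse hashable state.
    Thresholds are illustrative assumptions."""
    return (snr_db > 10.0, multipath_ms > 5.0, abs(doppler_hz) > 2.0)

class AdaptiveModulation:
    """Epsilon-greedy selection of a modulation mode per channel state."""

    def __init__(self, alpha=0.3, eps=0.1):
        self.q = {}  # (state, mode) -> estimated reward
        self.alpha, self.eps = alpha, eps

    def choose(self, state):
        if random.random() < self.eps:
            return random.choice(MODES)
        return max(MODES, key=lambda m: self.q.get((state, m), 0.0))

    def update(self, state, mode, reward):
        # One-step (bandit-style) update; the paper's full Q-learning would
        # also bootstrap on the next channel state.
        old = self.q.get((state, mode), 0.0)
        self.q[(state, mode)] = old + self.alpha * (reward - old)
```

A reward combining achieved throughput and a bit-error-rate penalty would drive the updates in practice.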
DRL-Based Cross-Regional Computation Offloading Algorithm
11
Authors: Lincong Zhang, Yuqing Liu, Kefeng Wei, Weinan Zhao, Bo Qian. Computers, Materials & Continua, 2026, No. 1, pp. 901-918 (18 pages)
In the field of edge computing, achieving low-latency computational task offloading with limited resources is a critical research challenge, particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications. In scenarios where edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance, thereby increasing system latency. Furthermore, in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency issues. To address these challenges, we propose a novel and efficient Deep Reinforcement Learning (DRL)-based approach aimed at minimizing average task latency. The proposed method incorporates three offloading strategies: local computation, direct offloading to the edge server in the local region, and device-to-device (D2D)-assisted offloading to edge servers in other regions. We formulate the task offloading process as a complex latency minimization optimization problem. To solve it, we propose an advanced algorithm based on the Dueling Double Deep Q-Network (D3QN) architecture incorporating the Prioritized Experience Replay (PER) mechanism. Experimental results demonstrate that, compared with existing offloading algorithms, the proposed method significantly reduces average task latency, enhances user experience, and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
Keywords: edge computing; computational task offloading; deep reinforcement learning; D3QN; device-to-device communication; system latency optimization
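The Prioritized Experience Replay mechanism named above samples transitions in proportion to their TD error and corrects the resulting bias with importance-sampling weights. A minimal sketch of those two computations (the alpha, beta, and epsilon values are commonly used defaults, assumed here rather than taken from the paper):

```python
def per_probabilities(td_errors, alpha=0.6, eps=1e-3):
    """Sampling probability grows with |TD error|; eps keeps every
    transition sampleable, alpha controls how strong the prioritization is."""
    prios = [(abs(d) + eps) ** alpha for d in td_errors]
    total = sum(prios)
    return [p / total for p in prios]

def importance_weights(probs, n, beta=0.4):
    """Importance-sampling weights correct the bias of non-uniform sampling;
    normalized by the maximum weight for stability."""
    w = [(n * p) ** (-beta) for p in probs]
    m = max(w)
    return [x / m for x in w]
```

During training, the weights scale each sampled transition's loss, and beta is typically annealed toward 1.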
Hybrid AI-IoT Framework with Digital Twin Integration for Predictive Urban Infrastructure Management in Smart Cities
12
Authors: Abdullah Alourani, Mehtab Alam, Ashraf Ali, Ihtiram Raza Khan, Chandra Kanta Samal. Computers, Materials & Continua, 2026, No. 1, pp. 462-493 (32 pages)
The evolution of cities into digitally managed environments requires computational systems that can operate in real time while supporting predictive and adaptive infrastructure management. Earlier approaches have often advanced one dimension, such as Internet of Things (IoT)-based data acquisition, Artificial Intelligence (AI)-driven analytics, or digital twin visualization, without fully integrating these strands into a single operational loop. As a result, many existing solutions encounter bottlenecks in responsiveness, interoperability, and scalability, while also leaving concerns about data privacy unresolved. This research introduces a hybrid AI-IoT-Digital Twin framework that combines continuous sensing, distributed intelligence, and simulation-based decision support. The design incorporates multi-source sensor data, lightweight edge inference through Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models, and federated learning enhanced with secure aggregation and differential privacy to maintain confidentiality. A digital twin layer extends these capabilities by simulating city assets such as traffic flows and water networks, generating what-if scenarios, and issuing actionable control signals. Complementary modules, including model compression and synchronization protocols, are embedded to ensure reliability in bandwidth-constrained and heterogeneous urban environments. The framework is validated in two urban domains: traffic management, where it adapts signal cycles based on real-time congestion patterns, and pipeline monitoring, where it anticipates leaks through pressure and vibration data. Experimental results show a 28% reduction in response time, a 35% decrease in maintenance costs, and a marked reduction in false positives relative to conventional baselines. The architecture also demonstrates stability across 50+ edge devices under federated training and resilience to uneven node participation. The proposed system provides a scalable and privacy-aware foundation for predictive urban infrastructure management. By closing the loop between sensing, learning, and control, it reduces operator dependence, enhances resource efficiency, and supports transparent governance models for emerging smart cities.
Keywords: smart cities; digital twin; AI-IoT framework; predictive infrastructure management; edge computing; reinforcement learning; optimization methods; federated learning; urban systems modeling; smart governance
Gulfstream G550 Reinforces Reliability and Capabilities with World Speed Record
13
《今日民航》 (China Civil Aviation Today), 2019, No. 3, p. 98 (1 page)
Gulfstream Aerospace's high-performing G550 recently established a new city-pair speed record, connecting Shanghai with Seattle in 10 hours and 29 minutes, bringing the number of records the jet has earned to 55.
Keywords: Gulfstream G550; reinforces reliability and capabilities with World Speed Record
Book Review: Enno Freiherr von Fircks. (2024). Conservativism: A Cultural-Psychological Exploration. Cham: Springer Nature. xx+124 pp. ISBN 978-3-031-51205-6 (eBook)
14
Authors: Nan Xu, Tingting Hu. Journal of Psychology in Africa, 2025, No. 3, pp. 429-430 (2 pages)
In the last few years, scholars have continued to refine the understanding of conservatism through cultural psychology. The application of Symbolic Action Theory, particularly Ernst Boesch's framework, remains central to discussions, illustrating how conservatism is not merely a political stance but a dynamic system of behaviors that reinforces societal values through symbolic actions (Kölbl, 2020). Additionally, recent studies have highlighted the interaction between cultural conservatism and collective identity, with particular focus on how conservatism can manifest in diverse cultural settings (Yilmaz & Alper, 2019). Research also underscores the relevance of symbolic politics in shaping both individual attitudes and broader political actions, contributing to the understanding of how symbolic meanings sustain social conservatism in different cultural contexts (Chin & Levey, 2022).
Keywords: cultural psychology; conservatism; symbolic action theory; symbolic politics; collective identity
Attention Mechanism-Based Dynamic Allocation for UAV Swarms (Cited by 1)
15
Authors: Zhang Jiabao, Chen Yong, Xue Wenjun, Qin Jia, Li Yuanheng. Fire Control & Command Control (《火力与指挥控制》, PKU Core), 2025, No. 2, pp. 86-92, 99 (8 pages)
In the Russia-Ukraine war and the Israel-Palestine conflict, UAV swarms supported by artificial intelligence have been frequently used for sensing and strike missions; such AI-generated ultra-fast kill chains have shown great warfighting potential in today's complex urban operations. Dynamic allocation of UAV swarms against multiple pop-up targets suffers from slow system response, weak resource scheduling, and low collaboration efficiency. This paper builds an attention mechanism-based decision model for dynamic UAV swarm allocation that improves solution efficiency, shortens the time spent in the command decision-making phase, and generalizes well; the model is trained with reinforcement learning (the REINFORCE algorithm with a baseline), which simplifies the training procedure.
Keywords: UAV swarm; dynamic allocation; attention mechanism; REINFORCE with baseline
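The training scheme mentioned in the abstract, REINFORCE with a baseline, can be illustrated on a toy two-action problem. The softmax-preference policy and running-average baseline below are a generic textbook sketch, not the paper's attention-based allocation model:

```python
import math
import random

def softmax(prefs):
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    s = sum(exps)
    return [e / s for e in exps]

def train(reward_means, steps=2000, lr=0.1, seed=0):
    """REINFORCE with a running-average baseline on a 2-armed bandit."""
    rng = random.Random(seed)
    prefs = [0.0, 0.0]
    baseline = 0.0
    for _ in range(steps):
        probs = softmax(prefs)
        a = 0 if rng.random() < probs[0] else 1
        r = reward_means[a] + rng.gauss(0.0, 0.1)
        baseline += 0.05 * (r - baseline)   # baseline tracks average reward
        adv = r - baseline                  # variance-reduced learning signal
        for i in range(2):                  # grad log-softmax: 1{i=a} - pi_i
            grad = (1.0 if i == a else 0.0) - probs[i]
            prefs[i] += lr * adv * grad
    return softmax(prefs)
```

Subtracting the baseline leaves the gradient unbiased but reduces its variance, which is exactly the benefit the "with baseline" qualifier refers to.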
In-situ Si particle-reinforced joints of hypereutectic Al-60Si alloys by ultrasonic-assisted soldering (Cited by 2)
16
Authors: Yuan-xing LI, Xiang-bo ZHENG, Chao-zheng ZHAO, Zong-tao ZHU, Yu-jie BAI, Hui CHEN. Transactions of Nonferrous Metals Society of China, 2025, No. 1, pp. 77-90 (14 pages)
To improve the wettability of hypereutectic Al-60Si alloy and enhance the mechanical properties of the joints, Al-60Si alloy was joined by ultrasonic soldering with Sn-9Zn solder, and a sound joint with in-situ Si particle reinforcement was obtained. The oxide film of the Al-60Si alloy at the interface was identified by transmission electron microscopy (TEM) analysis as amorphous Al2O3. The oxide on the Si particles in the base metal was also alumina. The oxide film of the Al-60Si alloy was observed to be removed by ultrasonic vibration rather than by holding treatment. Si particle-reinforced joints (35.7 vol.%) were obtained by increasing the ultrasonication time. The maximum shear strength peaked at 99.5 MPa for soldering at 330 °C with an ultrasonic vibration time of 50 s. A model of the formation of the Si particle-reinforced joint under ultrasound was proposed, in which ultrasonic vibration is considered to promote the dissolution of Al and the migration of Si particles.
Keywords: hypereutectic Al-60Si alloy; ultrasonic-assisted soldering; Si particle reinforcement; Sn-9Zn solder
Recent Advances in Sustainable Concrete and Steel Alternatives for Marine Infrastructure (Cited by 2)
17
Authors: Kiran Napte, Ganesh E. Kondhalkar, Shilpa Vishal Patil, Pallavi Vishnu Kharat, Snehal Mayur Banarase, Anant Sidhappa Kurhade, Shital Yashwant Waware. Sustainable Marine Structures, 2025, No. 2, pp. 107-131 (25 pages)
Marine infrastructure is increasingly vulnerable to harsh environmental conditions that accelerate the degradation of traditional materials such as Portland cement concrete and carbon steel. This review systematically investigates recent advancements in sustainable alternatives, including geopolymer concrete, engineered cementitious composites (ECC), bio-concrete, fiber-reinforced polymers (FRPs), and bamboo, stainless steel, and steel-CFRP hybrid bars. Each material is evaluated based on marine durability, mechanical performance, environmental impact, and cost feasibility using life cycle assessment, durability modelling, and a multi-criteria decision-support framework. The results reveal that geopolymer concrete and FRP reinforcements exhibit superior corrosion resistance and environmental benefits, while ECC and steel-CFRP composites offer structural resilience with moderate environmental trade-offs. However, challenges remain in long-term performance validation, standardization, and market integration. The review concludes that a combined approach involving innovative materials, computational tools, and sustainability assessment is essential for advancing marine infrastructure. Outlook recommendations include focused field studies, development of regulatory guidelines, and interdisciplinary collaboration to drive the practical adoption of eco-efficient materials in coastal and offshore construction.
Keywords: bio-concrete; self-healing materials; corrosion-resistant reinforcement; fiber-reinforced polymer (FRP) composites; geopolymer concrete; life cycle assessment in construction; sustainable marine infrastructure
Graph-based multi-agent reinforcement learning for collaborative search and tracking of multiple UAVs (Cited by 2)
18
Authors: Bocheng ZHAO, Mingying HUO, Zheng LI, Wenyu FENG, Ze YU, Naiming QI, Shaohai WANG. Chinese Journal of Aeronautics, 2025, No. 3, pp. 109-123 (15 pages)
This paper investigates the challenges associated with Unmanned Aerial Vehicle (UAV) collaborative search and target tracking in dynamic and unknown environments characterized by a limited field of view. The primary objective is to explore the unknown environments to locate and track targets effectively. To address this problem, we propose a novel Multi-Agent Reinforcement Learning (MARL) method based on Graph Neural Networks (GNNs). Firstly, a method is introduced for encoding continuous-space multi-UAV problem data into spatial graphs which establish essential relationships among agents, obstacles, and targets. Secondly, a Graph Attention Network (GAT) model is presented, which focuses exclusively on adjacent nodes, learns attention weights adaptively, and allows agents to better process information in dynamic environments. Reward functions are specifically designed to tackle exploration challenges in environments with sparse rewards. By introducing a framework that integrates centralized training and distributed execution, the advancement of models is facilitated. Simulation results show that the proposed method outperforms the existing MARL method in search rate and tracking performance with fewer collisions. The experiments show that the proposed method can be extended to applications with a larger number of agents, which provides a potential solution to the challenging problem of multi-UAV autonomous tracking in dynamic unknown environments.
Keywords: unmanned aerial vehicle (UAV); multi-agent reinforcement learning (MARL); graph attention network (GAT); tracking; dynamic and unknown environment
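The GAT model described above computes attention weights only over a node's adjacent nodes. A sketch of that neighborhood-restricted softmax and the resulting feature aggregation (in a real GAT the raw scores come from a learned scoring function over node-feature pairs, which is omitted here):

```python
import math

def attention_weights(scores):
    """Softmax over a node's neighbor scores only: attention is restricted
    to the local neighborhood, as in GAT. Max-subtraction for stability."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    t = sum(exps)
    return [e / t for e in exps]

def aggregate(neighbor_feats, scores):
    """Attention-weighted combination of neighbor feature vectors."""
    w = attention_weights(scores)
    dim = len(neighbor_feats[0])
    return [sum(w[k] * neighbor_feats[k][d] for k in range(len(w)))
            for d in range(dim)]
```

Because the softmax is taken per neighborhood, each agent's update cost scales with its number of neighbors rather than with the whole swarm, which is what makes the approach extensible to larger agent counts.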
NeOR: neural exploration with feature-based visual odometry and tracking-failure-reduction policy (Cited by 1)
19
Authors: ZHU Ziheng, LIU Jialing, CHEN Kaiqi, TONG Qiyi, LIU Ruyu. Optoelectronics Letters, 2025, No. 5, pp. 290-297 (8 pages)
Embodied visual exploration is critical for building intelligent visual agents. This paper presents the neural exploration with feature-based visual odometry and tracking-failure-reduction policy (NeOR), a framework for embodied visual exploration that possesses the efficient exploration capabilities of deep reinforcement learning (DRL)-based exploration policies and leverages feature-based visual odometry (VO) for more accurate mapping and positioning results. An improved local policy is also proposed to reduce tracking failures of feature-based VO in weakly textured scenes through a refined multi-discrete action space, keyframe fusion, and an auxiliary task. The experimental results demonstrate that NeOR has better mapping and positioning accuracy compared to other entirely learning-based exploration frameworks and improves the robustness of feature-based VO by significantly reducing tracking failures in weakly textured scenes.
Keywords: intelligent visual agents; deep reinforcement learning (DRL); embodied visual exploration; feature-based visual odometry; tracking-failure-reduction policy; neural exploration
Comprehensive recovery of rare earth elements and gypsum from phosphogypsum: A wastewater-free process combining gravity separation and hydrometallurgy (Cited by 1)
20
Authors: Jialin Qing, Dapeng Zhao, Li Zeng, Guiqing Zhang, Liang Zhou, Jiawei Du, Qinggang Li, Zuoying Cao, Shengxi Wu. Journal of Rare Earths, 2025, No. 2, pp. 362-370, I0005 (10 pages)
Comprehensive utilization of phosphogypsum (PG) has attracted much attention, especially for the recovery of rare earth elements (REEs) and gypsum, due to the issues of stockpiling, environmental pollution, and waste of associated resources. Traditional utilization methods suffer from low REE leaching efficiency, huge amounts of CaSO4-saturated wastewater, and high recovery cost. To solve these issues, this study investigated the occurrence of REEs in PG and the leaching of REEs. The results show that REEs in PG are present in three forms: (1) REE mineral inclusions, (2) REEs isomorphously substituting Ca2+ in the gypsum lattice, and (3) dispersed soluble REE salts. Acid leaching results demonstrate that (1) the dissolution of the gypsum matrix is the controlling factor of REE leaching; (2) H2SO4 is a promising leachant considering the recycling of the leachate; and (3) the gypsum matrix undergoes recrystallization during acid leaching and releases the soluble REEs from PG into aqueous solution. For the recovery of the undissolved REE mineral inclusions, wet sieving concentrated 37.1 wt% of the REEs in a 10.7 wt% mass fraction, increasing the REE content from 309 to 1071 ppm. Finally, a green process combining gravity separation and hydrometallurgy is proposed. This process has the merits of being wastewater free, with considerable REE recovery (about a 10% increase compared with traditional processes), excellent gypsum purification (>95 wt% CaSO4·2H2O, with <0.06 wt% soluble P2O5 and <0.015 wt% soluble F), and reagent savings (about 2/3 less reagent consumption than non-cyclical leaching).
Keywords: phosphogypsum; rare earths; wastewater free; recrystallization; reinforcement; gravity separation