Funding: Supported by the Scientific Research Fund of Hunan Provincial Natural Science Foundation under Grant 20231J60257, the Hunan Provincial Engineering Research Center for Intelligent Rehabilitation Robotics and Assistive Equipment under Grant 2025SH501, and Inha University and the Design of a Conflict Detection and Validation Tool under Grant HX2024123.
Abstract: Image inpainting refers to synthesizing missing content in an image from known information in order to restore occluded or damaged regions. As image inpainting tasks grow more complex and data scales increase, existing deep learning methods still show limitations: they struggle to capture long-range dependencies, and their performance on multi-scale image structures is suboptimal. To address these problems, this paper proposes an image inpainting method based on a parallel dual-branch learnable Transformer network. The encoder of the generator consists of two parallel branches, one stacking CNN blocks and the other Transformer blocks, to extract local and global feature information from images. A dual-branch fusion module then combines the features obtained from the two branches. Additionally, a gated full-scale skip connection module is proposed to further enhance the coherence of the inpainting results and alleviate information loss. Experimental results on three public datasets demonstrate the superior performance of the proposed method.
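The abstract does not detail how the dual-branch fusion module combines the CNN and Transformer features. One common pattern for merging a local and a global feature map is a learned sigmoid gate that produces a per-channel convex combination of the two branches; the sketch below illustrates that generic pattern in NumPy. The function name, shapes, and gating formula are illustrative assumptions, not the authors' design:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(local_feat, global_feat, w_gate, b_gate):
    """Fuse CNN (local) and Transformer (global) features of shape (C, H, W).

    A per-channel gate g in (0, 1) is computed from pooled descriptors of
    both branches; the output is a convex combination of the two inputs.
    Hypothetical sketch: not the fusion module from the paper.
    """
    # Channel descriptor: global average pool of each branch, concatenated -> (2C,)
    desc = np.concatenate([local_feat.mean(axis=(1, 2)),
                           global_feat.mean(axis=(1, 2))])
    g = sigmoid(w_gate @ desc + b_gate)   # (C,) gate values in (0, 1)
    g = g[:, None, None]                  # broadcast over H and W
    return g * local_feat + (1.0 - g) * global_feat

# Toy example: C=4 channels, 8x8 feature maps, small random gate parameters.
rng = np.random.default_rng(0)
local = rng.standard_normal((4, 8, 8))
glob = rng.standard_normal((4, 8, 8))
w = rng.standard_normal((4, 8)) * 0.1
b = np.zeros(4)
fused = gated_fusion(local, glob, w, b)
assert fused.shape == (4, 8, 8)
```

With zero gate parameters the gate is exactly 0.5, so the output degenerates to the plain average of the two branches; a trained gate instead learns, per channel, how much to trust the local versus the global branch.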
Funding: The authors extend their appreciation to Prince Sattam bin Abdulaziz University for funding this research work through project number PSAU/2024/01/32082.
Abstract: In Human–Robot Interaction (HRI), generating robot trajectories that accurately reflect user intentions while ensuring physical realism remains challenging, especially in unstructured environments. In this study, we develop a multimodal framework that integrates symbolic task reasoning with continuous trajectory generation, employing transformer models and adversarial training to map high-level intent to robotic motion. Information from multiple sources, such as voice traits, hand and body keypoints, visual observations, and recorded paths, is integrated simultaneously; these signals are mapped into a shared representation that supports interpretable reasoning while enabling smooth and realistic motion generation. Two learning strategies are investigated. The first creates grammar-constrained Linear Temporal Logic (LTL) expressions from multimodal human inputs and then decodes these expressions into robot trajectories. The second generates trajectories directly from symbolic intent and linguistic data, bypassing the intermediate logical representation: transformer encoders combine the multiple input modalities, and autoregressive transformer decoders generate motion sequences. Smoothness and speed limits imposed during training increase the likelihood of physical feasibility, and an adversarial discriminator guides the generated trajectories toward the distribution of actual robot motion, improving their realism and stability. Tests on the NATSGLD dataset indicate that the complete system exhibits stable training behaviour and strong performance. In normalised coordinates, the logic-based pipeline achieves an Average Displacement Error (ADE) of 0.040 and a Final Displacement Error (FDE) of 0.036; the adversarial generator performs substantially better, reducing ADE to 0.021 and FDE to 0.018. Visual examination confirms that the generated trajectories closely align with observed motion patterns while preserving smooth temporal dynamics.
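ADE and FDE are standard trajectory metrics: ADE averages the Euclidean distance between predicted and ground-truth positions over all timesteps, while FDE measures that distance at the final timestep only. A minimal NumPy sketch of the two metrics (the trajectory shapes and the helper name are illustrative assumptions; the definitions themselves are standard):

```python
import numpy as np

def ade_fde(pred, gt):
    """Average / Final Displacement Error for trajectories of shape (T, 2).

    ADE: mean Euclidean distance over all T timesteps.
    FDE: Euclidean distance at the final timestep only.
    """
    dists = np.linalg.norm(pred - gt, axis=-1)  # (T,) per-step distances
    return dists.mean(), dists[-1]

# Toy example: a prediction uniformly offset from ground truth by 0.03 in x,
# so every per-step distance (and hence both metrics) equals 0.03.
gt = np.stack([np.linspace(0.0, 1.0, 10), np.zeros(10)], axis=-1)  # (10, 2)
pred = gt + np.array([0.03, 0.0])
ade, fde = ade_fde(pred, gt)
assert np.isclose(ade, 0.03) and np.isclose(fde, 0.03)
```

Because FDE looks only at the endpoint, a model can have a low ADE but a high FDE when errors accumulate late in the trajectory, which is why the two numbers are usually reported together, as in the results above.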