Journal Articles
410 articles found
The Claim That "suppose/supposing, When Introducing Conditional Adverbial Clauses, Are Used Only in Questions" Is Inaccurate
1
Author: 夏罗英. 《语言教育》, 1996, No. 6, p. 79 (1 page)
The article "How to Understand These Three Sentences" in your journal (1995, No. 12, p. 30) was quite enlightening. However, one of its notes states: "When suppose or supposing introduces a conditional adverbial clause, it is used only in questions." The present author considers this claim inaccurate. Consider the following evidence: "Suppose white were black, you might be right." (《英汉大词典》, Vol. 2, p. 3490); "Suppose (Supposing) you miss your tiger, he is not likely to miss you." (《英华大词典》, 2nd revised ed., p. 1399)
Keywords: adverbial clause, suppose, supposing
The Relationship between Students' Problem Posing and Problem Solving Abilities and Beliefs: A Small-Scale Study with Chinese Elementary School Children
2
Authors: CHEN Limin, Wim VAN DOOREN, Lieven VERSCHAFFEL. Frontiers of Education in China, 2013, No. 1, pp. 147-161 (15 pages)
The goal of the present study is to investigate the relationship between pupils' problem posing and problem solving abilities, their beliefs about problem posing and problem solving, and their general mathematics abilities, in a Chinese context. Five instruments, i.e., a problem posing test, a problem solving test, a problem posing questionnaire, a problem solving questionnaire, and a standard achievement test, were administered to 69 Chinese fifth-grade pupils to assess these five variables and analyze their mutual relationships. Results revealed strong correlations between pupils' problem posing and problem solving abilities and beliefs, and their general mathematical abilities.
Keywords: problem posing, problem solving, Chinese pupils, mathematics education
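The correlation analysis described in this abstract is a standard Pearson computation across the measured variables; a toy illustration with made-up scores (not the study's data) could look like:

```python
import numpy as np

# Hypothetical scores for illustration only (NOT the study's data):
# columns = problem-posing score, problem-solving score, general maths score
scores = np.array([
    [3, 4, 70],
    [5, 6, 85],
    [2, 3, 60],
    [4, 5, 78],
], dtype=float)

# Pairwise Pearson correlations between the three variables;
# rowvar=False treats columns as variables and rows as pupils.
r = np.corrcoef(scores, rowvar=False)
print(r[0, 1])  # posing vs. solving correlation
```

With `rowvar=False`, `r` is a 3x3 symmetric matrix whose off-diagonal entries are the pairwise correlations the study reports as "strong."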
Application of the Improved PF-Flow-Style-VTON in Virtual Try-On
3
Authors: TIAN Jiajia, HUANG Rong, DONG Aihua, WANG Zhijie. Journal of Donghua University (English Edition), 2026, No. 1, pp. 104-117 (14 pages)
During the image generation phase, the parser-free Flow-Style-VTON model (PF-Flow-Style-VTON), which utilizes distilled appearance flows, faces two main challenges: blurring, deformation, occlusion, or loss of the arm or palm regions in the generated image when these regions of the person occlude the garment; and blurring and deformation in the generated image when the person performs large pose movements and the target garment is complex with detailed patterns. To solve these two problems, an improved virtual try-on network model, denoted as IPF-Flow-Style-VTON, is proposed. Firstly, a target warped garment mask refinement module (M-RM) is introduced to refine the warped garment mask and remove erroneous information in the arm and palm regions, thereby improving the quality of subsequent image generation. Secondly, an improved global attention module (GAM) is integrated into the original image generation network, enhancing the ResUNet's understanding of global context and optimizing the fusion of local features and global information, thereby further improving image generation quality. Finally, the UniPose model is used to provide the pose keypoint information of the target person image, guiding task execution during the image generation phase. Experiments conducted on the VITON dataset show that the proposed method outperforms the original method, Flow-Style-VTON, by 5.4%, 0.3%, 6.7%, and 2.2% in Fréchet inception distance (FID), structural similarity index measure (SSIM), learned perceptual image patch similarity (LPIPS), and peak signal-to-noise ratio (PSNR), respectively. Overall, the proposed method effectively improves upon the shortcomings of the original network and achieves better visual results.
Keywords: virtual try-on, image generation network, pose keypoint, deep learning
Research on Hand Pose Estimation Algorithms for AI Mistake-Proofing
4
Author: 孙世丹. 《电子质量》, 2026, No. 1, pp. 112-117 (6 pages)
To meet the demand for artificial intelligence (AI) mistake-proofing in real production, hand pose estimation algorithms were studied in depth. After surveying and experimentally comparing several algorithms, a hand pose estimation system based on YOLOv8Pose was built. First, the hand training data in the COCO-WholeBody dataset were curated: erroneous annotations were screened out and removed, effectively improving data quality and subsequent model training accuracy. Second, the GroupNorm pruning algorithm was applied to compress the YOLOv8Pose model, significantly reducing its resource consumption while largely preserving detection accuracy, so that it can run on resource-constrained edge computing devices. Finally, the optimized model was deployed on the Rockchip RK3588 hardware platform, verifying the feasibility and effectiveness of the proposed algorithm in real industrial scenarios. This work provides an efficient AI solution for real-time, accurate operation monitoring on industrial production lines and contributes to improving production automation and quality control.
Keywords: pose estimation, YOLOv8Pose, edge deployment, mistake-proofing design
An intelligent segmentation method for leakage points in central serous chorioretinopathy based on fluorescein angiography images
5
Authors: Jian-Guo Xu, Yong-Chi Liu, Fen Zhou, Jian-Xin Shen, Zhi-Peng Yan, Xin-Ya Hu, Wei-Hua Yang. International Journal of Ophthalmology (English edition), 2026, No. 3, pp. 421-433 (13 pages)
AIM: To construct an intelligent segmentation scheme for precise localization of central serous chorioretinopathy (CSC) leakage points, thereby enabling ophthalmologists to deliver accurate laser treatment without navigational laser equipment. METHODS: A dataset with dual labels (point-level and pixel-level) was first established based on fundus fluorescein angiography (FFA) images of CSC and subsequently divided into training (102 images), validation (40 images), and test (40 images) datasets. An intelligent segmentation method was then developed, based on the You Only Look Once version 8 Pose Estimation (YOLOv8-Pose) model and the segment anything model (SAM), to segment CSC leakage points. Next, the YOLOv8-Pose model was trained for 200 epochs, and the best-performing model was selected to form the optimal combination with SAM. Additionally, five classic U-Net series models [i.e., U-Net, recurrent residual U-Net (R2U-Net), attention U-Net (AttU-Net), recurrent residual attention U-Net (R2AttU-Net), and nested U-Net (UNet++)] were initialized with three random seeds and trained for 200 epochs, resulting in a total of 15 baseline models for comparison. Finally, based on metrics including the Dice similarity coefficient (DICE), intersection over union (IoU), precision, recall, the precision-recall (PR) curve, and the receiver operating characteristic (ROC) curve, the proposed method was compared with the baseline models through quantitative and qualitative experiments on leakage point segmentation, demonstrating its effectiveness. RESULTS: As training epochs increased, the mAP50-95, recall, and precision of the YOLOv8-Pose model rose significantly and then stabilized, and the model achieved a preliminary localization success rate of 90% (36 of 40 test images) for CSC leakage points. Using manually expert-annotated pixel-level labels as the ground truth, the proposed method achieved a DICE of 57.13%, an IoU of 45.31%, a precision of 45.91%, a recall of 93.57%, an area under the PR curve (AUC-PR) of 0.78, and an area under the ROC curve (AUC-ROC) of 0.97, enabling more accurate segmentation of CSC leakage points. CONCLUSION: By combining the precise localization capability of the YOLOv8-Pose model with the robust and flexible segmentation ability of SAM, the proposed method not only demonstrates the effectiveness of YOLOv8-Pose in detecting keypoint coordinates of CSC leakage points but also establishes a novel "detect-then-segment" approach for accurate segmentation of CSC leakage points, providing a potential auxiliary means for automatic, precise, real-time localization of leakage points during traditional laser photocoagulation for CSC.
Keywords: You Only Look Once version 8 Pose Estimation, segment anything model, central serous chorioretinopathy, leakage point segmentation
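The DICE and IoU scores reported in this abstract follow their standard definitions for binary masks; as a quick illustration (not the authors' code), they can be computed as:

```python
import numpy as np

def dice_iou(pred: np.ndarray, gt: np.ndarray):
    """Dice similarity coefficient and IoU for two binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()   # |pred ∩ gt|
    union = np.logical_or(pred, gt).sum()    # |pred ∪ gt|
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / union
    return dice, iou
```

Note that DICE = 2·IoU/(1+IoU), so the paper's DICE of 57.13% and IoU of 45.31% are mutually consistent up to rounding.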
Transformer-Driven Multimodal for Human-Object Detection and Recognition for Intelligent Robotic Surveillance
6
Authors: Aman Aman Ullah, Yanfeng Wu, Shaheryar Najam, Nouf Abdullah Almujally, Ahmad Jalal, Hui Liu. Computers, Materials & Continua, 2026, No. 4, pp. 1364-1383 (20 pages)
Human-object detection and recognition is essential for elderly monitoring and assisted living; however, models relying solely on pose or scene context often struggle in cluttered or visually ambiguous settings. To address this, we present SCENET-3D, a transformer-driven multimodal framework that unifies human-centric skeleton features with scene-object semantics for intelligent robotic vision through a three-stage pipeline. In the first stage, scene analysis, rich geometric and texture descriptors are extracted from RGB frames, including surface-normal histograms, angles between neighboring normals, Zernike moments, directional standard deviation, and Gabor-filter responses. In the second stage, scene-object analysis, non-human objects are segmented and represented using local feature descriptors and complementary surface-normal information. In the third stage, human-pose estimation, silhouettes are processed through an enhanced MoveNet to obtain 2D anatomical keypoints, which are fused with depth information and converted into RGB-based point clouds to construct pseudo-3D skeletons. Features from all three stages are fused and fed into a transformer encoder with multi-head attention to resolve visually similar activities. Experiments on UCLA (95.8%), ETRI-Activity3D (89.4%), and CAD-120 (91.2%) demonstrate that combining pseudo-3D skeletons with rich scene-object fusion significantly improves generalizable activity recognition, enabling safer elderly care, natural human-robot interaction, and robust context-aware robotic perception in real-world environments.
Keywords: human object detection, elderly care, RGB-based pose estimation, scene context analysis, object recognition, Gabor features, point cloud reconstruction
A Method for Detecting Dangerous Behaviors of Underground Coal Mine Personnel (Cited by: 1)
7
Authors: 张旭辉, 余恒翰, 杜昱阳, 杨文娟, 赵亦辉, 万继成, 王彦群, 赵典, 汤杜炜. 《工矿自动化》 (PKU Core), 2025, No. 5, pp. 64-71 (8 pages)
Detecting dangerous behaviors of underground personnel is a key link in coal mine safety prevention and control. When existing object detection techniques are applied to dangerous behavior detection, factors such as complex underground working conditions, equipment occlusion, dense multi-target scenes, and dust interference cause problems such as inaccurate feature extraction; moreover, dangerous personnel behaviors have not been clearly defined. Taking the YOLOv8-pose model as the baseline architecture, a PMR-YOLO model was constructed to detect human keypoints in underground surveillance images with improved accuracy and speed: a DCNv4-PConv hybrid module fusing DCNv4 and PConv replaces standard convolution, a mixed local channel attention (MLCA) module is added, and a receptive-field attention convolution (RFAConv) module replaces the detection head. On this basis, a personnel behavior recognition algorithm was designed: underground personnel behaviors are divided into nine categories, and a human skeleton is formed from the keypoints detected by the YOLOv8-pose model to determine the behavior category. Ablation, comparison, and behavior recognition experiments on the DsLMF+ dataset show that introducing the DCNv4-PConv hybrid, MLCA, and RFAConv modules effectively improves the precision, recall, and mean average precision (mAP) of the YOLOv8-pose model; the PMR-YOLO model achieves precision, recall, and mAP of 0.893, 0.841, and 0.852 for human keypoint feature extraction, improvements of 6.9%, 14.4%, and 10.5% over YOLOv8-pose, respectively; and the detection method based on PMR-YOLO effectively recognizes the nine behavior categories of underground personnel with recognition accuracy of no less than 96%.
Keywords: video recognition, dangerous behavior detection, personnel behavior recognition, YOLOv8-pose model, human keypoint detection
Multiplexable tilted fiber Bragg grating sensors based on dispersive microwave-photonic frequency domain reflectometry
8
Authors: CHEN ZHU, RUIMIN JIE, CHENXI HUANG, HUAIJUN GUAN, JIE HUANG. Photonics Research, 2025, No. 12, pp. 3453-3465 (13 pages)
Tilted fiber Bragg gratings (TFBGs) have seen rapid development and widespread deployment in diverse sensing applications, ranging from biochemical detection to in-situ monitoring in energy systems. Their ability to generate rich spectral features via cladding mode coupling enables highly sensitive, multi-parameter, and label-free sensing. However, accurate interrogation typically requires broadband spectral measurements to resolve fine spectral structures, posing significant challenges for scalable and cost-effective multiplexing in practical settings.
Keywords: tilted fiber Bragg gratings (TFBGs), fine spectral structures, biochemical detection, cladding mode coupling, multiplexing, energy systems
An Efficient and Accurate Solution for the PnPL Problem (Cited by: 1)
9
Authors: Ridma Basnayaka, Qida Yu. Instrumentation, 2025, No. 3, pp. 63-75 (13 pages)
Camera pose estimation from point and line correspondences is critical in various applications, including robotics, augmented reality, 3D reconstruction, and autonomous navigation. Existing methods, such as the Perspective-n-Point (PnP) and Perspective-n-Line (PnL) approaches, offer limited accuracy and robustness in environments with occlusions, noise, or sparse feature data. This paper presents a unified solution, Efficient and Accurate Pose Estimation from Point and Line Correspondences (EAPnPL), combining point-based and line-based constraints to improve pose estimation accuracy and computational efficiency, particularly in low-altitude UAV navigation and obstacle avoidance. The proposed method utilizes quaternion parameterization of the rotation matrix to overcome singularity issues and address challenges in traditional rotation matrix-based formulations. A hybrid optimization framework is developed to integrate both point and line constraints, providing a more robust and stable solution in complex scenarios. The method is evaluated using synthetic and real-world datasets, demonstrating significant improvements in performance over existing techniques. The results indicate that the EAPnPL method enhances accuracy and reduces computational complexity, making it suitable for real-time applications in autonomous UAV systems. This approach offers a promising solution to the limitations of existing camera pose estimation methods, with potential applications in low-altitude navigation, autonomous robotics, and 3D scene reconstruction.
Keywords: camera pose estimation, efficient and accurate pose estimation (EAPnPL), UAV navigation, obstacle avoidance, point-and-line correspondences
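The quaternion parameterization mentioned in this abstract represents rotations without the singularities of angle-based parameterizations; a minimal conversion from a unit quaternion to a rotation matrix (a generic sketch, not the EAPnPL implementation) looks like:

```python
import numpy as np

def quat_to_rotmat(q: np.ndarray) -> np.ndarray:
    """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix.

    The input is normalized first, so any nonzero quaternion is accepted.
    """
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])
```

For example, the quaternion (cos 45°, 0, 0, sin 45°) maps to a 90° rotation about the z-axis; optimizing over the four quaternion components (with a unit-norm constraint) avoids gimbal lock entirely.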
Action Similarity Computation from 3D Human Keypoints across Different Viewpoints
10
Authors: 李子贺, 王一丁. 《计算机与现代化》, 2025, No. 7, pp. 63-68 (6 pages)
Online fitness and dance instruction videos are now abundant, but the videos learners shoot of themselves to compare against the instruction cannot be guaranteed to share the instructor's viewpoint; differences in angle and scale make action similarity hard to compare. To address this problem, an action similarity evaluation algorithm for monocular videos shot from different viewpoints is proposed, based on existing 3D human pose estimation techniques. For two action videos from different viewpoints, 2D human keypoints are first extracted with a YOLOv8pose network and then lifted to 3D keypoints with a GraphMLP network. A Euclidean distance matrix is computed from the two 3D keypoint sequences, and the DTW algorithm finds the corresponding frames of the two action sequences. The 3D keypoints of corresponding frames are brought into a common orientation through rotation and scaling, aligning action sequences from different viewpoints, and the cosine similarity of skeleton bone vectors is finally used as the similarity metric. Experiments with motion-capture animations from different viewpoints verify the effectiveness of the proposed method.
Keywords: YOLOv8pose, GraphMLP, human pose estimation, DTW, cosine similarity, different viewpoints
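The frame-alignment step described in this abstract, DTW over a pairwise distance matrix followed by cosine similarity of bone vectors, can be sketched as follows; this is an illustrative reimplementation, not the paper's code:

```python
import numpy as np

def dtw_path(D: np.ndarray):
    """Dynamic time warping over a pairwise cost matrix D; returns aligned index pairs."""
    n, m = D.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = D[i - 1, j - 1] + min(acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1])
    # Backtrack from the corner to recover the optimal alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def bone_cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two bone vectors (1.0 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

After DTW pairs up corresponding frames, averaging `bone_cosine` over all bones of each aligned pair yields a per-frame similarity score that is invariant to translation and scale.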
An Integrated Framework of Grasp Detection and Imitation Learning for Space Robotics Applications (Cited by: 1)
11
Authors: Yuming Ning, Tuanjie Li, Yulin Zhang, Ziang Li, Wenqian Du, Yan Zhang. Chinese Journal of Mechanical Engineering, 2025, No. 4, pp. 316-335 (20 pages)
Robots are key to expanding the scope of space applications. End-to-end training for robot vision-based detection and precision operations is challenging owing to constraints such as extreme environments and high computational overhead. This study proposes a lightweight integrated framework for grasp detection and imitation learning, named GD-IL; it comprises a grasp detection algorithm based on manipulability and a Gaussian mixture model (manipulability-GMM), and a grasp trajectory generation algorithm based on a two-stage robot imitation learning algorithm (TS-RIL). In the manipulability-GMM algorithm, we apply GMM clustering and ellipse regression to the object point cloud, propose two judgment criteria to generate multiple candidate grasp bounding boxes for the robot, and use manipulability as a metric for selecting the optimal grasp bounding box. The stages of the TS-RIL algorithm are grasp trajectory learning and robot pose optimization. In the first stage, the robot grasp trajectory is characterized using a second-order dynamic movement primitive model and Gaussian mixture regression (GMR). By adjusting the functional form of the forcing term, the robot closely approximates the target-grasping trajectory. In the second stage, a robot pose optimization model is built based on the derived pose error formula and the manipulability metric. This model allows the robot to adjust its configuration in real time while grasping, thereby effectively avoiding singularities. Finally, an algorithm verification platform is developed based on the Robot Operating System, and a series of comparative experiments are conducted in real-world scenarios. The experimental results demonstrate that GD-IL significantly improves the effectiveness and robustness of grasp detection and trajectory imitation learning, outperforming existing state-of-the-art methods in execution efficiency, manipulability, and success rate.
Keywords: grasp detection, robot imitation learning, manipulability, dynamic movement primitives, Gaussian mixture model and Gaussian mixture regression, pose optimization
Hourglass-GCN for 3D Human Pose Estimation Using Skeleton Structure and View Correlation
12
Authors: Ange Chen, Chengdong Wu, Chuanjiang Leng. Computers, Materials & Continua (SCIE, EI), 2025, No. 1, pp. 173-191 (19 pages)
Previous multi-view 3D human pose estimation methods neither correlate different human joints in each view nor explicitly model learnable correlations between the same joints in different views, meaning that skeleton structure information is not utilized and multi-view pose information is not completely fused. Moreover, existing graph convolutional operations do not consider the specificity of different joints and different views of pose information when processing skeleton graphs, so the correlation weights between nodes in the graph and their neighborhood nodes are shared. Existing Graph Convolutional Networks (GCNs) cannot efficiently extract global and deep-level skeleton structure information and view correlations. To solve these problems, pre-estimated multi-view 2D poses are organized into a multi-view skeleton graph that fuses skeleton priors and view correlations explicitly to handle occlusion, with skeleton-edges and symmetry-edges representing the structural correlations between adjacent joints in each view of the skeleton graph, and view-edges representing the correlations between the same joints in different views. To let the graph convolution operation mine elaborate and sufficient skeleton structure information and view correlations, different correlation weights are assigned to different categories of neighborhood nodes and further to each node in the graph. Based on the graph convolution operation proposed above, a Residual Graph Convolution (RGC) module is designed as the basic module and combined with a simplified Hourglass architecture to construct Hourglass-GCN as our 3D pose estimation network. Hourglass-GCN, with a symmetrical and concise architecture, processes three scales of multi-view skeleton graphs to efficiently extract local-to-global scale and shallow-to-deep level skeleton features. Experimental results on the common large 3D pose datasets Human3.6M and MPI-INF-3DHP show that Hourglass-GCN outperforms some excellent methods in 3D pose estimation accuracy.
Keywords: 3D human pose estimation, multi-view skeleton graph, elaborate graph convolution operation, Hourglass-GCN
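For contrast with the per-node, per-category weighting this abstract argues for, the shared-weight graph convolution it criticizes is the standard normalized-adjacency formulation; a minimal NumPy sketch of that baseline (not the paper's RGC module):

```python
import numpy as np

def gcn_layer(X: np.ndarray, A: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One shared-weight graph-convolution step: D^(-1/2)(A+I)D^(-1/2) X W.

    X: node features (num_nodes x in_dim), A: adjacency, W: weights (in_dim x out_dim).
    Every node aggregates its neighborhood with the SAME weights, which is
    exactly the limitation the Hourglass-GCN paper addresses.
    """
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # symmetric normalization
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W
```

In the paper's setting, the adjacency would encode skeleton-edges, symmetry-edges, and view-edges, and the single `W` would be replaced by edge-category-specific and node-specific weights.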
Self-Supervised Monocular Depth Estimation with Scene Dynamic Pose
13
Authors: Jing He, Haonan Zhu, Chenhao Zhao, Minrui Zhao. Computers, Materials & Continua, 2025, No. 6, pp. 4551-4573 (23 pages)
Self-supervised monocular depth estimation has emerged as a major research focus in recent years, primarily due to the elimination of ground-truth depth dependence. However, the prevailing architectures in this domain suffer from inherent limitations: existing pose network branches infer camera ego-motion exclusively under static-scene and Lambertian-surface assumptions. These assumptions are often violated in real-world scenarios due to dynamic objects, non-Lambertian reflectance, and unstructured background elements, leading to pervasive artifacts such as depth discontinuities ("holes"), structural collapse, and ambiguous reconstruction. To address these challenges, we propose a novel framework that integrates scene dynamic pose estimation into the conventional self-supervised depth network, enhancing its ability to model complex scene dynamics. Our contributions are threefold: (1) a pixel-wise dynamic pose estimation module that jointly resolves the pose transformations of moving objects and localized scene perturbations; (2) a physically-informed loss function that couples dynamic pose and depth predictions, designed to mitigate depth errors arising from high-speed distant objects and geometrically inconsistent motion profiles; (3) an efficient SE(3) transformation parameterization that streamlines network complexity and temporal pre-processing. Extensive experiments on the KITTI and NYU-V2 benchmarks show that our framework achieves state-of-the-art performance in both quantitative metrics and qualitative visual fidelity, significantly improving the robustness and generalization of monocular depth estimation under dynamic conditions.
Keywords: monocular depth estimation, self-supervised learning, scene dynamic pose estimation, dynamic-depth constraint, pixel-wise dynamic pose
Animal Pose Estimation Based on YOLO-POSE
14
Authors: Binbin Zhou, Lei Liu. 《国际计算机前沿大会会议论文集》, 2025, No. 1, pp. 234-246 (13 pages)
With the development of computer vision technology, deep learning-based pose estimation and target detection have been widely used in the fields of human behavior analysis and intelligent security. However, owing to the complexity of animal poses and the diversity of species, existing pose estimation methods still face many challenges when applied to animal targets. To solve this problem, an improved YOLO-Pose model is proposed to improve the accuracy and efficiency of animal pose estimation. On the basis of the original YOLO-Pose model, a separable kernel attention mechanism is introduced and adapted to animal targets, and combined with the spatial pyramid pooling of YOLO-Pose, the multiscale feature fusion capability of the model is improved. The experimental results show that the improved YOLO-Pose model achieves excellent performance on both the public animal pose dataset and the AP-10K dataset, significantly improving target detection and pose estimation.
Keywords: animal, pose estimation, LSKA, YOLO-Pose
Manifold-Optimized Error-State Kalman Filter for Robust Pose Estimation in Unmanned Aerial Vehicles
15
Authors: Bolin Jia, Zongwen Bai, Yiqun Gao, Dong Wang, Meili Zhou, Peiqi Gao, Pei Zhang, Zhang Yang. Journal of Electronic Research and Application, 2025, No. 2, pp. 247-257 (11 pages)
This paper presents a manifold-optimized Error-State Kalman Filter (ESKF) framework for unmanned aerial vehicle (UAV) pose estimation, integrating Inertial Measurement Unit (IMU) data with GPS or LiDAR to enhance estimation accuracy and robustness. We employ a manifold-based optimization approach, leveraging exponential and logarithmic mappings to transform rotation vectors into rotation matrices. The proposed ESKF framework ensures state variables remain near the origin, effectively mitigating singularity issues and enhancing numerical stability. Additionally, because the state variables are small in magnitude, second-order terms can be neglected, simplifying Jacobian matrix computation and improving computational efficiency. Furthermore, we introduce a novel Kalman filter gain computation strategy that dynamically adapts to low-dimensional and high-dimensional observation equations, enabling efficient processing across different sensor modalities. For resource-constrained UAV platforms in particular, this method significantly reduces computational cost, making it highly suitable for real-time UAV applications.
Keywords: UAV pose estimation, Error-State Kalman Filter, manifold, GPS, LiDAR
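The exponential and logarithmic mappings between rotation vectors and rotation matrices mentioned in this abstract are the standard SO(3) maps (Rodrigues' formula and its inverse); a minimal NumPy sketch, not the paper's implementation:

```python
import numpy as np

def so3_exp(phi: np.ndarray) -> np.ndarray:
    """Rotation vector -> rotation matrix via Rodrigues' formula."""
    theta = np.linalg.norm(phi)
    if theta < 1e-10:
        return np.eye(3)                     # small-angle limit
    a = phi / theta                          # unit rotation axis
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])         # skew-symmetric matrix of the axis
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def so3_log(R: np.ndarray) -> np.ndarray:
    """Rotation matrix -> rotation vector (inverse map, valid for theta in (0, pi))."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-10:
        return np.zeros(3)
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2 * np.sin(theta)) * v
```

In an ESKF, the nominal attitude lives on the manifold as a rotation matrix while the small error state lives in the tangent space as a rotation vector, which is exactly why the state stays near the origin and second-order terms can be dropped.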
High-accuracy real-time satellite pose estimation for in-orbit applications
16
Authors: Zi WANG, Jinghao WANG, Jiyang YU, Zhang LI, Qifeng YU. Chinese Journal of Aeronautics, 2025, No. 6, pp. 130-142 (13 pages)
Vision-based relative pose estimation plays a pivotal role in various space missions. Deep learning enhances monocular spacecraft pose estimation, but high computational demands necessitate model simplification for onboard systems. In this paper, we aim to achieve an optimal balance between accuracy and computational efficiency. We present a Perspective-n-Point (PnP) based method for spacecraft pose estimation, leveraging lightweight neural networks to localize semantic keypoints and reduce computational load. Since the accuracy of keypoint localization is closely related to the heatmap resolution, we devise an efficient upsampling module to increase the resolution of heatmaps with minimal overhead. Furthermore, the heatmaps predicted by lightweight models tend to show high-level noise. To tackle this issue, we propose a weighting strategy based on the statistical characteristics of the predicted semantic keypoints, substantially improving pose estimation accuracy. The experiments carried out on the SPEED dataset underscore the prospect of our method for engineering applications. We dramatically reduce the model parameters to 0.7 M, merely 2.5% of that required by the top-performing method, and achieve lower pose estimation error and better real-time performance.
Keywords: keypoint detection, lightweight models, non-cooperative satellite, pose estimation, weighted PnP
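Decoding a keypoint location from a predicted heatmap is commonly done with a soft-argmax; the sketch below is a generic illustration of that step with the peak value used as a crude confidence (the paper's statistical weighting strategy is not reproduced here):

```python
import numpy as np

def heatmap_to_keypoint(h: np.ndarray):
    """Soft-argmax: heatmap -> sub-pixel (x, y) location plus a confidence proxy.

    A softmax turns the heatmap into a probability map; the expected pixel
    coordinates under that distribution give a differentiable, sub-pixel estimate.
    """
    p = np.exp(h - h.max())
    p /= p.sum()
    ys, xs = np.mgrid[0:h.shape[0], 0:h.shape[1]]
    x = float((p * xs).sum())                # expected column index
    y = float((p * ys).sum())                # expected row index
    conf = float(h.max())                    # crude proxy; the paper derives weights statistically
    return x, y, conf
```

In a weighted-PnP pipeline, such per-keypoint confidences scale each reprojection residual so that noisy keypoints influence the pose solution less.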
Skeleton-Based Action Recognition Using Graph Convolutional Network with Pose Correction and Channel Topology Refinement
17
Authors: Yuxin Gao, Xiaodong Duan, Qiguo Dai. Computers, Materials & Continua, 2025, No. 4, pp. 701-718 (18 pages)
Graph convolutional networks (GCNs), as essential tools in human action recognition tasks, have achieved excellent performance in previous studies. However, most current skeleton-based action recognition methods using GCNs rely on a shared topology, which cannot flexibly adapt to the diverse correlations between joints under different motion features. Moreover, the video-shooting angle or occlusion of body parts may introduce errors when extracting human pose coordinates with estimation algorithms. In this work, we propose a novel graph convolutional learning framework, called PCCTR-GCN, which integrates pose correction and channel topology refinement for skeleton-based human action recognition. Firstly, a pose correction module (PCM) is introduced, which corrects the pose coordinates input to the network to reduce errors in pose feature extraction. Secondly, channel topology refinement graph convolution (CTR-GC) is employed, which can dynamically learn topology features and aggregate joint features in different channel dimensions so as to enhance the performance of graph convolution networks in feature extraction. Finally, considering that the joint stream and bone stream of skeleton data, together with their dynamic information, are also important for distinguishing different actions, we employ a multi-stream data fusion approach to improve the network's recognition performance. We evaluate the model using top-1 and top-5 classification accuracy. On the benchmark datasets iMiGUE and Kinetics, the top-1 classification accuracy reaches 55.08% and 36.5%, respectively, while the top-5 classification accuracy reaches 89.98% and 59.2%, respectively. On the NTU dataset, for the two benchmark RGB+D settings (X-Sub and X-View), the classification accuracy achieves 89.7% and 95.4%, respectively.
Keywords: pose correction, multi-stream fusion, GCN, action recognition
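The bone stream mentioned in this abstract is typically derived from the joint stream by differencing each joint against its parent in the skeleton tree; a generic sketch (the parent list below is a made-up toy skeleton, not any dataset's actual topology):

```python
import numpy as np

def joints_to_bones(joints: np.ndarray, parents: list) -> np.ndarray:
    """Joint stream -> bone stream: vector from each joint's parent to the joint.

    joints:  (num_joints, dims) coordinates.
    parents: parent index per joint; -1 marks the root (its bone is zero).
    """
    bones = np.zeros_like(joints)
    for j, p in enumerate(parents):
        if p >= 0:
            bones[j] = joints[j] - joints[p]
    return bones
```

Joint and bone streams (and their frame-to-frame differences as motion streams) are then fed through separate network branches whose scores are fused, which is the multi-stream fusion the paper employs.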
Review of Pose Estimation Methods for Spacecraft Targets
18
Authors: LI Shoucheng, LI Jing, CHEN Qiang, LI Xindong, WANG Junzheng. Aerospace China, 2025, No. 1, pp. 53-58 (6 pages)
Pose estimation of spacecraft targets is a key technology for achieving space operation tasks, such as the cleaning of failed satellites and the detection and scanning of non-cooperative targets. This paper reviews target pose estimation methods based on image feature extraction and PnP, target estimation methods based on registration, and spacecraft target pose estimation methods based on deep learning, and introduces the corresponding research methods.
Keywords: spacecraft, pose estimation, non-cooperative targets, feature extraction, deep learning
High-Precision Fish Pose Estimation Method Based on Improved HRNet
19
Authors: PENG Qiujun, LI Weiran, LIU Yeqiang, LI Zhenbo. 《智慧农业(中英文)》, 2025, No. 3, pp. 160-172 (13 pages)
[Objective] Fish pose estimation (FPE) provides fish physiological information, facilitating health monitoring in aquaculture, and aids decision-making in areas such as fish behavior recognition. When fish are injured or deficient, they often display abnormal behaviors and noticeable changes in the positioning of their body parts. Moreover, the unpredictable posture and orientation of fish during swimming, combined with their rapid swimming speed, restrict the current scope of research in FPE. In this research, an FPE model named HPFPE is presented to capture the swimming posture of fish and accurately detect their key points. [Methods] On the one hand, the model incorporates the CBAM module into the HRNet framework. The attention module enhances accuracy without adding computational complexity, while effectively capturing a broader range of contextual information. On the other hand, the model incorporates dilated convolution to increase the receptive field, allowing it to capture more spatial context. [Results and Discussions] Experiments showed that compared with the baseline method, the average precision (AP) of HPFPE based on different backbones and input sizes on the oplegnathus punctatus datasets increased by 0.62, 1.35, 1.76, and 1.28 percentage points, respectively, while the average recall (AR) also increased by 0.85, 1.50, 1.40, and 1.00 percentage points, respectively. Additionally, HPFPE outperformed other mainstream methods, including DeepPose, CPM, SCNet, and Lite-HRNet. Furthermore, when compared to other methods on the ornamental fish data, HPFPE achieved the highest AP and AR values of 52.96% and 59.50%, respectively. [Conclusions] The proposed HPFPE can accurately estimate fish posture and assess swimming patterns, serving as a valuable reference for applications such as fish behavior recognition.
Keywords: aquaculture, computer vision, fish pose estimation, key point, attention mechanism
AARPose:Real-time and accurate drogue pose measurement based on monocular vision for autonomous aerial refueling
20
Authors: Shuyuan WEN, Yang GAO, Bingrui HU, Zhongyu LUO, Zhenzhong WEI, Guangjun ZHANG. Chinese Journal of Aeronautics, 2025, No. 6, pp. 552-572 (21 pages)
Real-time and accurate drogue pose measurement during docking is basic and critical for Autonomous Aerial Refueling (AAR). Vision measurement is the most practicable technique, but its measurement accuracy and robustness are easily affected by the limited computing power of airborne equipment, complex aerial scenes, and partial occlusion. To address these challenges, we propose a novel drogue keypoint detection and pose measurement algorithm based on monocular vision, and realize real-time processing on airborne embedded devices. Firstly, a lightweight network is designed with structural re-parameterization to reduce computational cost and improve inference speed, and a sub-pixel level keypoint prediction head and loss functions are adopted to improve keypoint detection accuracy. Secondly, a closed-form solution of the drogue pose is computed based on double spatial circles, followed by nonlinear refinement based on Levenberg-Marquardt optimization. Both virtual and physical simulation experiments were used to test the proposed method. In the virtual simulation, the mean pixel error of the proposed method is 0.787 pixels, significantly superior to that of other methods. In the physical simulation, the mean relative measurement error is 0.788%, and the mean processing time is 13.65 ms on embedded devices.
Keywords: autonomous aerial refueling, vision measurement, deep learning, real-time, lightweight, accurate, monocular vision, drogue pose measurement