Journal Articles
4 articles found
TQU-GraspingObject: 3D Common Objects Detection, Recognition, and Localization on Point Cloud for Hand Grasping in Sharing Environments
1
Authors: Thi-Loan Nguyen, Huy-Nam Chu, The-Thanh Hua, Trung-Nghia Phung, Van-Hung Le. Computers, Materials & Continua, 2026, Issue 5, pp. 1701-1722 (22 pages)
To support the process of grasping objects on a tabletop for the blind or a robotic arm, it is necessary to address fundamental computer vision tasks, such as detecting, recognizing, and locating objects in space, and determining the position of the grasping information. These results can then be used to guide the visually impaired or to execute grasping tasks with a robotic arm. In this paper, we collected, annotated, and published the benchmark TQU-GraspingObject dataset for testing, validation, and evaluation of deep learning (DL) models for detecting, recognizing, and localizing grasping objects in 2D and 3D space, especially 3D point cloud data. Our dataset was collected in a shared room, with common everyday objects placed on the tabletop in jumbled positions, using an Intel RealSense D435 (IR-D435). The dataset includes more than 63k RGB-D pairs and related data such as segmented 3D object point clouds, coordinate-system normalization matrices, normalized 3D object point clouds, and hand poses for grasping each object. We also conducted experiments on four DL networks with strong performance: SSD-MobileNetV3, ResNet50-Transformer, ResNet101-Transformer, and YOLOv12. The results show that YOLOv12 performs best at detecting and recognizing objects in images. All data, annotations, the toolkit, source code, point cloud data, and results are publicly available on our project website: https://github.com/HuaTThanhIT2327Tqu/datasetv2.
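The coordinate-system normalization matrices mentioned in the abstract can be applied to a raw point cloud as a standard homogeneous transform. Below is a minimal sketch; the function name and the centering example are illustrative assumptions, not part of the dataset's toolkit:

```python
import numpy as np

def normalize_point_cloud(points, norm_matrix):
    """Apply a 4x4 coordinate-system normalization matrix to an (N, 3) point cloud."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4) homogeneous coords
    transformed = homo @ norm_matrix.T                          # row-vector convention
    return transformed[:, :3]

# Example: a normalization matrix that translates the cloud's centroid to the origin
cloud = np.array([[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]])
T = np.eye(4)
T[:3, 3] = -cloud.mean(axis=0)
centered = normalize_point_cloud(cloud, T)  # cloud centered at the origin
```

The same helper works for any rigid transform (rotation plus translation) expressed as a 4x4 matrix, which is the usual way such normalization matrices are stored.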
Keywords: grasping objects for the blind/robot arm; TQU-GraspingObject benchmark dataset; 3D point cloud data; deep learning (DL); object detection/recognition; Intel RealSense D435 (IR-D435)
Automated Rock Detection and Shape Analysis from Mars Rover Imagery and 3D Point Cloud Data (cited 11 times)
2
Authors: 邸凯昌, 岳宗玉, 刘召芹, 王树良. Journal of Earth Science, SCIE CAS CSCD, 2013, Issue 1, pp. 125-135 (11 pages)
A new object-oriented method has been developed for the extraction of Mars rocks from Mars rover data. It is based on a combination of Mars rover imagery and 3D point cloud data. First, Navcam or Pancam images taken by the Mars rovers are segmented into homogeneous objects with a mean-shift algorithm. Then, the objects in the segmented images are classified into small rock candidates, rock shadows, and large objects. Rock shadows and large objects are considered as the regions within which large rocks may exist. In these regions, large rock candidates are extracted through ground-plane fitting with the 3D point cloud data. Small and large rock candidates are combined and postprocessed to obtain the final rock extraction results. The shape properties of the rocks (angularity, circularity, width, height, and width-height ratio) have been calculated for subsequent geological studies.
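Ground-plane fitting of the kind described in the abstract can be sketched as a least-squares plane fit followed by a height threshold on the residuals. This is a simplified illustration under assumed function names, not the paper's exact procedure (which operates per candidate region):

```python
import numpy as np

def fit_ground_plane(points):
    """Least-squares fit of the plane z = a*x + b*y + c to an (N, 3) point cloud."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def points_above_plane(points, coeffs, min_height=0.05):
    """Keep points whose height above the fitted plane exceeds min_height (meters)."""
    a, b, c = coeffs
    height = points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)
    return points[height > min_height]
```

Points that rise sufficiently above the fitted plane within a shadow or large-object region would then become large-rock candidates.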
Keywords: Mars rover; rock extraction; rover image; 3D point cloud data
Development of vehicle-recognition method on water surfaces using LiDAR data: SPD² (spherically stratified point projection with diameter and distance)
3
Authors: Eon-ho Lee, Hyeon Jun Jeon, Jinwoo Choi, Hyun-Taek Choi, Sejin Lee. Defence Technology, SCIE EI CAS CSCD, 2024, Issue 6, pp. 95-104 (10 pages)
Swarm robot systems are an important application of autonomous unmanned surface vehicles on water surfaces. For monitoring natural environments and conducting security activities within a certain range using a surface vehicle, a swarm robot system is more efficient than the operation of a single vehicle, as the former can reduce cost and save time. It is necessary to detect adjacent surface obstacles robustly to operate a cluster of unmanned surface vehicles. For this purpose, a LiDAR (light detection and ranging) sensor is used, as it can simultaneously obtain 3D information for all directions, relatively robustly and accurately, irrespective of the surrounding environmental conditions. Although a GPS (global positioning system) error range exists, obtaining measurements of the surface-vessel position can still ensure stability during platoon maneuvering. In this study, a three-layer convolutional neural network is applied to classify types of surface vehicles. The aim of this approach is to redefine the sparse 3D point cloud data as 2D image data with a connotative meaning and subsequently utilize this transformed data for object classification. Hence, we propose a descriptor that converts the 3D point cloud data into 2D image data. To use this descriptor effectively, it is necessary to perform a clustering operation that separates the point clouds for each object, for which we developed voxel-based clustering. Using the descriptor, the 3D point cloud data can be converted into a 2D feature image, and the converted 2D image is provided as an input to the network. We verify the validity of the proposed 3D point cloud feature descriptor using experimental data in the simulator, and we explore the feasibility of real-time object classification within this framework.
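Voxel-based clustering of the kind the abstract describes can be sketched as a connected-components search over occupied voxels: points are hashed into voxels, and voxels that touch (26-connectivity) are merged into one object. The implementation below is an illustrative assumption, not the authors' code:

```python
import numpy as np
from collections import deque

def voxel_clustering(points, voxel_size=0.5):
    """Group an (N, 3) point cloud into clusters of 26-connected occupied voxels.

    Returns a list of clusters, each a list of point indices.
    """
    keys = np.floor(points / voxel_size).astype(int)
    voxel_map = {}
    for idx, key in enumerate(map(tuple, keys)):
        voxel_map.setdefault(key, []).append(idx)

    offsets = [(dx, dy, dz) for dx in (-1, 0, 1)
                            for dy in (-1, 0, 1)
                            for dz in (-1, 0, 1)]
    clusters, visited = [], set()
    for start in voxel_map:
        if start in visited:
            continue
        queue, members = deque([start]), []
        visited.add(start)
        while queue:  # breadth-first flood fill over neighboring occupied voxels
            v = queue.popleft()
            members.extend(voxel_map[v])
            for off in offsets:
                n = (v[0] + off[0], v[1] + off[1], v[2] + off[2])
                if n in voxel_map and n not in visited:
                    visited.add(n)
                    queue.append(n)
        clusters.append(members)
    return clusters
```

Each resulting cluster would then be passed to the descriptor stage to produce one 2D feature image per object.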
Keywords: object classification; clustering; 3D point cloud data; LiDAR (light detection and ranging); surface vehicle
A centroid measurement method based on 3D scanning (cited 1 time)
4
Authors: HE Xin, LI Zhen. Journal of Measurement Science and Instrumentation, 2025, Issue 2, pp. 186-194 (9 pages)
The centroid coordinate serves as a critical control parameter in motion systems, including aircraft, missiles, rockets, and drones, directly influencing their motion dynamics and control performance. Traditional methods for centroid measurement often necessitate custom equipment and specialized positioning devices, leading to high costs and limited accuracy. Here, we present a centroid measurement method that integrates 3D scanning technology, enabling accurate measurement of the centroid across various types of objects without the need for specialized positioning fixtures. A theoretical framework for centroid measurement was established, combining the principle of the multi-point weighing method with 3D scanning technology. The measurement accuracy was evaluated using a designed standard component. Experimental results demonstrate that the discrepancies between the theoretical and measured centroids of a standard component with various materials and complex shapes in the X, Y, and Z directions are 0.003 mm, 0.009 mm, and 0.105 mm, respectively, yielding a spatial deviation of 0.106 mm. Qualitative verification was conducted through experimental validation on three distinct types of products, which confirmed the reliability of the proposed method and allowed accurate centroid measurements of various products without requiring positioning fixtures. This advancement significantly broadens the applicability and scope of centroid measurement devices, offering new theoretical insights and methodologies for the measurement of complex parts and systems.
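The multi-point weighing principle underlying the method computes the in-plane centroid as the force-weighted average of the weighing-sensor positions: x_c = Σ(F_i·x_i)/ΣF_i, and likewise for y. A minimal planar sketch, with a hypothetical sensor layout (the paper combines this with 3D scanning to recover the full 3D centroid, which is not reproduced here):

```python
def centroid_from_weighing(sensor_xy, forces):
    """Planar centroid from multi-point weighing.

    sensor_xy: list of (x, y) positions of the weighing sensors.
    forces: load reading at each sensor (same units, e.g. newtons).
    Returns (x_c, y_c, total_force).
    """
    total = sum(forces)
    x_c = sum(f * p[0] for f, p in zip(forces, sensor_xy)) / total
    y_c = sum(f * p[1] for f, p in zip(forces, sensor_xy)) / total
    return x_c, y_c, total

# Three sensors under a part: equal readings put the centroid at the triangle's center
x, y, w = centroid_from_weighing([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
                                 [10.0, 10.0, 10.0])
```

Repeating the measurement with the object in a second orientation (located via the 3D scan rather than a fixture) yields the remaining coordinate.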
Keywords: centroid measurement; mass characteristic parameters; 3D scanning; 3D point cloud data; no specialized positioning fixtures; multi-point weighing method