Journal Literature
24 articles found.
1. ScenePalette: Contextually Exploring Object Collections Through Multiplex Relations in 3D Scenes
Authors: Shao-Kui Zhang, Wei-Yu Xie, Chen Wang, Song-Hai Zhang. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2024, Issue 5, pp. 1180-1192 (13 pages).
This paper presents ScenePalette, a modeling tool that allows users to “draw” 3D scenes interactively by placing objects on a canvas based on their contextual relationships. ScenePalette is inspired by an important intuition often ignored in previous work: a real-world 3D scene consists of a contextually reasonable organization of objects; e.g., people typically place one double bed with several subordinate objects into a bedroom rather than several beds of different shapes. ScenePalette abstracts 3D repositories as multiplex networks and accordingly encodes implicit relations between or among objects. Specifically, basic statistics such as co-occurrence, in combination with more advanced relations, are used to capture object relationships at different levels. Extensive experiments demonstrate that the latent space of ScenePalette has rich contexts that are essential for contextual representation and exploration.
Keywords: computer graphics; 3D scene context; 3D repository exploration; multiplex network embedding
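The "basic statistics such as co-occurrence" mentioned in the abstract can be sketched as a simple pairwise count over a scene repository. The scene lists and function name below are illustrative, not taken from the paper:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(scenes):
    """Count how often each unordered pair of object categories
    appears together in the same scene."""
    counts = Counter()
    for objects in scenes:
        # sorted(set(...)) gives each unordered pair a canonical key
        for pair in combinations(sorted(set(objects)), 2):
            counts[pair] += 1
    return counts

scenes = [
    ["double_bed", "nightstand", "lamp"],
    ["double_bed", "nightstand", "wardrobe"],
    ["sofa", "lamp"],
]
counts = cooccurrence_counts(scenes)
print(counts[("double_bed", "nightstand")])  # 2
```

Such counts are one natural edge weight for the multiplex network the paper describes; the paper additionally layers richer relations on top of them.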
2. Fusion Prototypical Network for 3D Scene Graph Prediction
Authors: Jiho Bae, Bogyu Choi, Sumin Yeon, Suwon Lee. Computer Modeling in Engineering & Sciences, 2025, Issue 6, pp. 2991-3003 (13 pages).
Scene graph prediction has emerged as a critical task in computer vision, focusing on transforming complex visual scenes into structured representations by identifying objects, their attributes, and the relationships among them. Extending this to 3D semantic scene graph (3DSSG) prediction introduces an additional layer of complexity because it requires processing point-cloud data to accurately capture the spatial and volumetric characteristics of a scene. A significant challenge in 3DSSG is the long-tailed distribution of object and relationship labels, which leaves certain classes severely underrepresented and yields suboptimal performance in these rare categories. To address this, we propose a fusion prototypical network (FPN), which combines the strengths of conventional neural networks for 3DSSG with a prototypical network: the former are known for their ability to handle complex scene graph predictions, while the latter excels in few-shot learning scenarios. By leveraging this fusion, our approach enhances overall prediction accuracy and substantially improves the handling of underrepresented labels. Through extensive experiments on the 3DSSG dataset, we demonstrate that the FPN achieves state-of-the-art performance in 3D scene graph prediction as a single model and effectively mitigates the impact of the long-tailed distribution, providing a more balanced and comprehensive understanding of complex 3D environments.
Keywords: 3D scene graph prediction; prototypical network; 3D scene understanding
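The core step of a prototypical network — averaging support embeddings into one prototype per class, then classifying a query by its nearest prototype — can be sketched as follows. The tiny 2D embeddings are made up for illustration; the paper's fusion with a scene graph network is not shown:

```python
def class_prototypes(support):
    """Mean embedding per class; support maps label -> list of vectors."""
    return {label: tuple(sum(dim) / len(vecs) for dim in zip(*vecs))
            for label, vecs in support.items()}

def nearest_prototype(query, protos):
    """Classify a query embedding by squared Euclidean distance."""
    sqdist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(protos, key=lambda label: sqdist(query, protos[label]))

support = {"chair": [(0.0, 1.0), (0.2, 0.8)],
           "table": [(1.0, 0.0), (0.8, 0.2)]}
protos = class_prototypes(support)
print(nearest_prototype((0.1, 0.9), protos))  # chair
```

Because a prototype is just a mean, a rare class with a handful of examples still gets a usable class representative, which is the property the FPN exploits for the long tail.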
3. Structure-aware fusion network for 3D scene understanding
Authors: Haibin YAN, Yating LV, Venice Erin LIONG. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2022, Issue 5, pp. 194-203 (10 pages).
In this paper, we propose a Structure-Aware Fusion Network (SAFNet) for 3D scene understanding. As 2D images present more detailed information while 3D point clouds convey more geometric information, fusing the two complementary kinds of data can improve the discriminative ability of a model. Fusion is a very challenging task since 2D and 3D data are essentially different and come in different formats. Existing methods first extract 2D multi-view image features, then aggregate them into sparse 3D point clouds, and achieve superior performance. However, they ignore the structural relations between pixels and points and directly fuse the two modalities without adaptation. To address this, we propose a structural deep metric learning method on pixels and points to explore these relations and further use them to adaptively map the images and point clouds into a common canonical space for prediction. Extensive experiments on the widely used ScanNetV2 and S3DIS datasets verify the performance of the proposed SAFNet.
Keywords: 3D point clouds; data fusion; structure-aware; 3D scene understanding; deep metric learning
4. Semantic Driven Design Reuse for 3D Scene Modeling
Authors: 曹雪, 蔡鸿明, 步丰林. Journal of Shanghai Jiaotong University (Science), EI, 2012, Issue 2, pp. 233-236 (4 pages).
The increasing scale and complexity of 3D scene design work call for an efficient way to understand designs within a multi-disciplinary team and to exploit the experience and underlying knowledge in previous works for reuse. However, previous research has paid little attention to relationship maintenance and design reuse at the knowledge level. We propose a novel semantic-driven design reuse system, including a property computation algorithm that lets the system compute properties during the modeling process to maintain semantic consistency, and a vertex-statistics-based algorithm that lets the system recognize scene design patterns as a universal semantic model for scenes of the same type. With the universal semantic model, the system guides the modeling process of future design work through suggestions and constraints on operations. The proposed framework enables the reuse of 3D scene designs at both the model level and the knowledge level.
Keywords: semantic-driven modeling; design reuse; ontology; 3D scene
5. 3D scene graph prediction from point clouds
Authors: Fanfan WU, Feihu YAN, Weimin SHI, Zhong ZHOU. Virtual Reality & Intelligent Hardware, EI, 2022, Issue 1, pp. 76-88 (13 pages).
Background: In this study, we propose a novel 3D scene graph prediction approach for scene understanding from point clouds. Methods: The approach automatically organizes the entities of a scene into a graph, where objects are nodes and their relationships are modeled as edges. More specifically, we employ DGCNN to capture the features of objects and their relationships in the scene. A Graph Attention Network (GAT) is introduced to exploit latent features obtained from the initial estimation and further refine the object arrangement in the graph structure. A loss function modified from cross-entropy with a variable weight is proposed to address the multi-category problem in predicting objects and predicates. Results: Experiments reveal that the proposed approach performs favorably against state-of-the-art methods in predicate classification and relationship prediction, and achieves comparable performance on object classification. Conclusions: The proposed approach can form an abstract description of the scene space from point clouds.
Keywords: scene understanding; 3D scene graph; point cloud; DGCNN; GAT
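A "cross-entropy with a variable weight" for imbalanced labels can be sketched with inverse-frequency class weights. The weighting formula below is a common choice for this problem, not necessarily the paper's exact scheme, and the label names are invented:

```python
import math

def weighted_ce(probs, label, class_counts):
    """Cross-entropy on one prediction, scaled by an inverse-frequency
    class weight so rare labels contribute larger losses."""
    total = sum(class_counts.values())
    weight = total / (len(class_counts) * class_counts[label])
    return -weight * math.log(probs[label])

counts = {"chair": 90, "rare_predicate": 10}
probs = {"chair": 0.5, "rare_predicate": 0.5}
# Same predicted probability, but the rare label is penalized more:
print(weighted_ce(probs, "rare_predicate", counts))
print(weighted_ce(probs, "chair", counts))
```

With this weighting, misclassifying a rare predicate costs roughly as much in aggregate as misclassifying a frequent object class, which counteracts the long-tailed label distribution.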
6. Analysis of Color Landscape Characteristics in “Beautiful Village” of China Based on 3D Real Scene Models
Authors: Yiyi Cen, Wenzheng Jia, Wen Dai, Chun Wang, He Wu. Revue Internationale de Géomatique, 2024, Issue 1, pp. 93-109 (17 pages).
Color, as a significant element of village landscapes, serves various functions, such as enhancing aesthetic appeal and attractiveness and conveying emotions and cultural values. To explore the three-dimensional spatial characteristics of color landscapes in beautiful villages, this study conducted a comparative experiment involving eight provincial-level beautiful villages and eight ordinary villages in Jinzhai County. Landscape pattern indices were used to analyze the color landscape patterns on the facades of these villages, complemented by a quantitative analysis of color attributes using the Munsell color system. The results indicate that (1) natural landscape colors in beautiful villages are primarily concentrated in the yellow-red to green-yellow interval, while those in ordinary villages are widely distributed in the red to blue-green interval; artificial landscape colors in beautiful villages are mainly characterized by medium value, with chroma concentrated in the low-chroma range. (2) The proportions of color areas for forests, grasslands, and building walls in beautiful villages are higher by 14.76%, 2.17%, and 5.16%, respectively, than in ordinary villages; however, the proportion of yellow exposed areas in ordinary villages is more than twice that of beautiful villages. (3) The Landscape Shape Index for forests, grasslands, and buildings in beautiful villages is 5.23, 8.01, and 8.19, respectively, indicating higher irregularity in color patches. (4) Ordinary villages exhibit a higher Shannon's diversity index, indicating a more complex distribution of colors, whereas beautiful villages contain a higher number of connected dominant patches. This study can provide a scientific basis for village color planning and layout.
Keywords: color landscape; landscape pattern; Jinzhai County; 3D real scene; beautiful village
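The Shannon's diversity index used in result (4) is computed from class area proportions, SHDI = -Σ pᵢ ln pᵢ. A minimal sketch, with invented class areas:

```python
import math

def shannon_diversity(areas):
    """SHDI = -sum(p_i * ln p_i) over landscape class area proportions."""
    total = sum(areas.values())
    return -sum((a / total) * math.log(a / total)
                for a in areas.values() if a > 0)

# An even split between two color classes maximizes diversity
# for that class count (SHDI = ln 2):
print(shannon_diversity({"forest": 50.0, "grassland": 50.0}))
```

A higher SHDI means color area is spread more evenly across classes, which is why the ordinary villages' more scattered colors score higher than the beautiful villages' dominant, connected patches.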
7. Sequential selection and calibration of video frames for 3D outdoor scene reconstruction
Authors: Weilin Sun, Manyi Li, Peng Li, Xiao Cao, Xiangxu Meng, Lei Meng. CAAI Transactions on Intelligence Technology, 2024, Issue 6, pp. 1500-1514 (15 pages).
3D scene understanding and reconstruction aims to obtain a concise scene representation from images and to reconstruct the complete scene, including the scene layout and object bounding boxes and shapes. Existing holistic scene understanding methods primarily recover scenes from single images, with a focus on indoor scenes. Due to the complexity of the real world, the information provided by a single image is limited, resulting in issues such as object occlusion and omission. Furthermore, data captured from outdoor scenes exhibits sparsity, strong temporal dependencies, and a lack of annotations. Consequently, understanding and reconstructing outdoor scenes is highly challenging. The authors propose a sparse multi-view image-based 3D scene reconstruction framework (SMSR). It divides the scene reconstruction task into three stages: initial prediction, refinement, and fusion. The first two stages extract 3D scene representations from each viewpoint, while the final stage selects, calibrates, and fuses object positions and orientations across viewpoints. SMSR effectively addresses the issue of object omission by utilizing small-scale sequential scene information. Experimental results on the general outdoor scene dataset UrbanScene3D-Art Sci and our proprietary dataset, Software College Aerial Time-series Images, demonstrate that SMSR achieves superior performance in scene understanding and reconstruction.
Keywords: 3D outdoor scene reconstruction; 3D scene understanding; multi-view fusion
8. Swin3D++: Effective Multi-Source Pretraining for 3D Indoor Scene Understanding
Authors: Yu-Qi Yang, Yu-Xiao Guo, Yang Liu. Computational Visual Media, 2025, Issue 3, pp. 465-481 (17 pages).
Data diversity and abundance are essential for improving the performance and generalization of models in natural language processing and 2D vision. However, the 3D vision domain suffers from a lack of 3D data, and simply combining multiple 3D datasets for pretraining a 3D backbone does not yield significant improvement, because the domain discrepancies among different 3D datasets impede effective feature learning. In this work, we identify the main sources of domain discrepancy between 3D indoor scene datasets and propose Swin3D++, an enhanced architecture based on Swin3D for efficient pretraining on multi-source 3D point clouds. Swin3D++ introduces domain-specific mechanisms into Swin3D's modules to address domain discrepancies and enhance the network's capability for multi-source pretraining. Moreover, we devise a simple source-augmentation strategy to increase the pretraining data scale and facilitate supervised pretraining. We validate the effectiveness of our design and demonstrate that Swin3D++ surpasses state-of-the-art 3D pretraining methods on typical indoor scene understanding tasks.
Keywords: 3D scenes; indoor; pretraining; multi-source data; data augmentation
9. Virtual Huanghe River System: Framework and Technology (cited 2 times)
Authors: LU Heli, LIU Guifang, SUN Jiulin. Chinese Geographical Science (SCIE, CSCD), 2006, Issue 3, pp. 255-259 (5 pages).
Virtual Reality provides a new approach for geographical research. In this paper, a framework for the Virtual Huanghe (Yellow) River System is first presented from a technological point of view; it comprises five main modules: data sources, a 3D simulation terrain database, a 3D simulation model database, 3D simulation implementation, and the application system. The key technologies for constructing the Virtual Huanghe River System are then discussed in detail: 1) OpenGL, the 3D graphics development toolkit, is employed to realize dynamic real-time navigation. 2) MO and OpenGL technologies are used to enable mutual response between the 3D scene and the 2D electronic map, exploiting the advantages of both: the macroscopic view, integrality, and conciseness of the 2D electronic map combined with the locality, reality, and visualization of the 3D scene. At the same time, the abstraction and ambiguity of the 2D electronic map and the loss of direction during virtual navigation in the 3D scene are overcome.
Keywords: Virtual Reality; Virtual Huanghe River System; dynamic real-time navigation; mutual response between 3D scene and 2D electronic map
10. Generation and Control of Game Virtual Environment (cited 2 times)
Authors: Myeong Won Lee, Jae Moon Lee. International Journal of Automation and Computing, EI, 2007, Issue 1, pp. 25-29 (5 pages).
In this paper, we present a framework for the generation and control of an Internet-based 3-dimensional game virtual environment that allows a character to navigate through the environment. Our framework includes 3-dimensional terrain mesh data processing, a map editor, scene processing, collision processing, and walkthrough control. We also define an environment-specific semantic information editor, which can be applied using specific locations obtained from the real world. Users can insert text information related to the character's real position in the real world during navigation in the game virtual environment.
Keywords: virtual environment; virtual reality; 3D game; 3D navigation; 3D scene management
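The collision processing step in a framework like this is commonly a broad-phase axis-aligned bounding-box test. This generic sketch is an illustration of that standard technique, not the paper's implementation:

```python
def aabb_overlap(a, b):
    """True if two 3D axis-aligned boxes intersect.
    Each box is ((min_x, min_y, min_z), (max_x, max_y, max_z))."""
    (amin, amax), (bmin, bmax) = a, b
    # Boxes overlap iff their extents overlap on every axis.
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

character = ((0, 0, 0), (1, 2, 1))
wall = ((0.5, 0, 0), (3, 3, 0.2))
print(aabb_overlap(character, wall))  # True
```

During walkthrough control, a test like this against nearby terrain and object boxes decides whether a proposed character move is allowed.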
11. RWNeRF: Robust Watermarking Scheme for Neural Radiance Fields Based on Invertible Neural Networks
Authors: Wenquan Sun, Jia Liu, Weina Dong, Lifeng Chen, Fuqiang Di. Computers, Materials & Continua (SCIE, EI), 2024, Issue 9, pp. 4065-4083 (19 pages).
As neural radiance fields continue to advance in 3D content representation, the copyright issues surrounding implicitly represented 3D models become increasingly pressing. In response to this challenge, this paper treats the embedding and extraction of neural radiance field watermarks as inverse problems of image transformations and proposes a scheme for protecting neural radiance field copyrights using invertible neural network watermarking. Leveraging 2D image watermarking technology for 3D scene protection, the scheme embeds watermarks within the training images of neural radiance fields through the forward process of an invertible neural network and extracts them from images rendered by the neural radiance field through the reverse process, thereby ensuring copyright protection for both the neural radiance fields and the associated 3D scenes. However, challenges such as information loss during rendering and deliberate tampering necessitate an image quality enhancement module to increase the scheme's robustness. This module restores distorted images through neural network processing before watermark extraction. Additionally, embedding watermarks in each training image enables watermark information to be extracted from multiple viewpoints. The proposed watermarking method achieves a PSNR (Peak Signal-to-Noise Ratio) exceeding 37 dB for images containing watermarks and 22 dB for recovered watermarked images, as evaluated on the Lego, Hotdog, and Chair datasets. These results demonstrate the efficacy of the scheme in enhancing copyright protection.
Keywords: neural radiance fields; 3D scene; robust watermarking; invertible neural networks
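The PSNR figures the abstract reports (37 dB and 22 dB) follow the standard definition PSNR = 10·log10(MAX²/MSE). A minimal version over flat pixel sequences:

```python
import math

def psnr(reference, distorted, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two equal-length
    sequences of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: no noise
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher is better: a watermarked render at 37 dB is close to imperceptibly different from the clean render, while 22 dB indicates visible but tolerable distortion in the recovered images.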
12. Digital Simulation Platform for Satellite Launch Mission Verification
Authors: ZU Yunyu, ZHANG Chi, BU Xiangwei, XU Guoguang, WU Kao, XU Lijie, WANG Chenxi, LIU Chang. Aerospace China, 2023, Issue 4, pp. 9-16 (8 pages).
Simulating a satellite launch mission on a general platform that can operate in a fully digital mode as well as in a semi-physical way is an important means of certifying mission design performance and technical feasibility; it involves complex system simulation methods such as multi-disciplinary coupling and multi-language modeling, as well as interactive and virtual simulation technologies. This paper introduces the design of a digital simulation platform for satellite launch mission verification. The platform has the advantages of high generality and extensibility and is easy to build. The Functional Mockup Interface (FMI) standard is adopted to integrate multi-source models. A WebGL-based 3D visual simulation tool is adopted to implement the virtual display system, which can display the rocket launch process and rocket-satellite separation with high-fidelity 3D virtual scenes. A configuration tool was developed to map the 3D objects in the visual scene to simulation physical variables for complex rocket flight control mechanisms, which greatly improves the platform's generality and extensibility. Finally, real-time performance was tested, and the YL-1 launch mission was used to demonstrate the functions of the platform. The platform will be used to construct a digital twin system for satellite launch missions in the future.
Keywords: digital simulation platform; Functional Mockup Interface; satellite launch mission; WebGL-based 3D virtual scenes
13. 3D Indoor Scene Geometry Estimation from a Single Omnidirectional Image: A Comprehensive Survey
Authors: Ming Meng, Yonggui Zhu, Yufei Zhao, Zhaoxin Li, Zhe Zhu. Computational Visual Media, 2025, Issue 3, pp. 431-464 (34 pages).
This paper surveys the technology used for three-dimensional indoor scene geometry estimation from a single 360° omnidirectional image, which is pivotal in extracting 3D structural information from indoor environments. The technology transforms omnidirectional data into a 3D model depicting the spatial structure, object positions, and scene layout. Its significance spans various domains, including virtual reality (VR), augmented reality (AR), mixed reality (MR), game development, urban planning, and robot navigation. We begin by revisiting foundational concepts of omnidirectional imaging and detailing the problems, applications, and challenges in this field. Our review categorizes the fundamental tasks of structure recovery, depth estimation, and layout recovery. We also review pertinent datasets and evaluation metrics, providing the latest research as a reference. Finally, we summarize the field and discuss potential future trends to inform and guide further research.
Keywords: 3D scene geometry; omnidirectional images; structure recovery; depth estimation; layout recovery
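Omnidirectional images of the kind this survey covers are usually stored in the equirectangular projection, where each pixel maps to a viewing direction via longitude and latitude. This is the standard mapping, sketched here for background, and is not a method from the survey itself:

```python
import math

def pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit direction vector,
    with longitude spanning [-pi, pi) and latitude [pi/2, -pi/2]."""
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    return (math.cos(lat) * math.sin(lon),
            math.sin(lat),
            math.cos(lat) * math.cos(lon))

# The image centre looks straight ahead along +z:
x, y, z = pixel_to_ray(512, 256, 1024, 512)
```

Depth or layout estimated per pixel combines with this ray to place 3D points, which is how a single panorama yields scene geometry.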
14. ROI-constrained visualization of flood scenes to improve perception efficiency (cited 1 time)
Authors: Jigang You, Jun Zhu, Weilian Li, Yukun Guo, Lin Fu, Pei Dang. International Journal of Digital Earth (SCIE, EI), 2023, Issue 1, pp. 3065-3084 (20 pages).
Efficient and intuitive representation of floods can improve people's perception, which is useful for flood emergency management and decision making. However, current methods of visualizing flood disaster scenes suffer from data redundancy and low efficiency. The interference of a complex background can be avoided through region of interest (ROI) extraction, because attention is quickly attracted by a few salient visual objects. First, the characteristics of flood disaster scene objects are analyzed, and a method for scene division and data organization is established. Second, the region of interest is extracted according to time-series data of the flood evolution process simulated using cellular automata, and a dynamic identification model for the objects of interest is established. Then, a dynamic scheduling queue model with service interruption is designed to optimize the rendering efficiency of flood scenes and improve perception efficiency. Finally, a prototype visualization system was developed; experimental results show that approximately 30% of the redundant data is eliminated and scene rendering efficiency is increased by approximately 15%. Visualization of non-ROI regions is de-emphasized using the rules of human visual cognition, which improves the perception efficiency of flood scenes.
Keywords: floods; region of interest; perception efficiency; 3D scenes; adaptive optimization
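The dynamic scheduling queue can be sketched as a priority queue in which ROI objects preempt non-ROI ones, with viewing distance as the tiebreaker. The object tuples and priority scheme are illustrative, not taken from the paper:

```python
import heapq

def render_order(objects):
    """objects: (name, in_roi, distance) tuples. ROI objects are
    served first; within each group, nearer objects come first."""
    heap = [((0 if in_roi else 1, dist), name)
            for name, in_roi, dist in objects]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

order = render_order([("terrain", False, 5.0),
                      ("flooded_house", True, 20.0),
                      ("water_front", True, 2.0)])
print(order)  # ['water_front', 'flooded_house', 'terrain']
```

"Service interruption" would then amount to re-heapifying when the ROI changes, so a newly salient object jumps ahead of queued background work.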
15. 3D indoor scene modeling from RGB-D data: a survey (cited 6 times)
Authors: Kang Chen, Yu-Kun Lai, Shi-Min Hu. Computational Visual Media, 2015, Issue 4, pp. 267-278 (12 pages).
3D scene modeling has long been a fundamental problem in computer graphics and computer vision. With the popularity of consumer-level RGB-D cameras, there is a growing interest in digitizing real-world indoor 3D scenes. However, modeling indoor 3D scenes remains a challenging problem because of the complex structure of interior objects and the poor quality of RGB-D data acquired by consumer-level sensors. Various methods have been proposed to tackle these challenges. In this survey, we provide an overview of recent advances in indoor scene modeling techniques, as well as public datasets and code libraries that can facilitate experiments and evaluation.
Keywords: RGB-D camera; 3D indoor scenes; geometric modeling; semantic modeling; survey
16. A real 3D scene rendering optimization method based on region of interest and viewing frustum prediction in virtual reality (cited 1 time)
Authors: Pei Dang, Jun Zhu, Jianlin Wu, Weilian Li, Jigang You, Lin Fu, Yiqun Shi, Yuhang Gong. International Journal of Digital Earth (SCIE, EI), 2022, Issue 1, pp. 1081-1100 (20 pages).
As an important technology of digital construction, real 3D models can improve the immersion and realism of virtual reality (VR) scenes. The large amount of data in real 3D scenes demands more effective rendering methods, but current rendering optimization methods have shortcomings and cannot render real 3D scenes in virtual reality. In this study, the location of the viewing frustum is predicted by a Kalman filter, and eye-tracking equipment is used to recognize the region of interest (ROI) in the scene. The real 3D models of interest within the predicted frustum are then rendered first. Experimental results show that the method can predict the frustum location approximately 200 ms in advance with approximately 87% accuracy, improves scene rendering efficiency by 8.3%, and reduces motion sickness by approximately 54.5%. These findings help promote the use of real 3D models in virtual reality and of ROI recognition methods. In future work, we will further improve the prediction accuracy of viewing frustums in virtual reality and the application of eye tracking in virtual geographic scenes.
Keywords: real 3D scene; virtual reality; Kalman filter; region of interest; viewing frustum
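The Kalman-filter prediction of the viewing frustum can be illustrated with a minimal scalar filter on one pose coordinate. The noise parameters below are illustrative, and the paper presumably tracks the full frustum pose with a richer motion model:

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """Scalar Kalman filter (constant-position model): returns the
    filtered estimates used to anticipate the next frustum position.
    q: process noise variance, r: measurement noise variance."""
    x, p = measurements[0], 1.0
    estimates = []
    for z in measurements[1:]:
        p += q                 # predict: uncertainty grows between frames
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # correct with the new head-pose measurement
        p *= 1.0 - k
        estimates.append(x)
    return estimates

# The estimate converges toward a steady viewpoint position:
est = kalman_1d([0.0] + [5.0] * 50)
```

In the paper's pipeline the analogous prediction is extrapolated roughly 200 ms ahead so the renderer can prefetch models that are about to enter the frustum.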
17. HDR-Net-Fusion: Real-time 3D dynamic scene reconstruction with a hierarchical deep reinforcement network (cited 1 time)
Authors: Hao-Xuan Song, Jiahui Huang, Yan-Pei Cao, Tai-Jiang Mu. Computational Visual Media (EI, CSCD), 2021, Issue 4, pp. 419-435 (17 pages).
Reconstructing dynamic scenes with commodity depth cameras has many applications in computer graphics, computer vision, and robotics. However, due to noise and erroneous observations from data capture devices and the inherently ill-posed nature of non-rigid registration with insufficient information, traditional approaches often produce low-quality geometry with holes, bumps, and misalignments. We propose a novel 3D dynamic reconstruction system, named HDR-Net-Fusion, which learns to simultaneously reconstruct and refine the geometry on the fly with a sparse embedded deformation graph of surfels, using a hierarchical deep reinforcement (HDR) network. The latter comprises two parts: a global HDR-Net, which rapidly detects local regions with large geometric errors, and a local HDR-Net serving as a local patch refinement operator to promptly complete and enhance such regions. Training the global HDR-Net is formulated as a novel reinforcement learning problem that implicitly learns the region selection strategy, with the goal of improving overall reconstruction quality. The applicability and efficiency of our approach are demonstrated on a large-scale dynamic reconstruction dataset. Our method reconstructs geometry of higher quality than traditional methods.
Keywords: dynamic 3D scene reconstruction; deep reinforcement learning; point cloud completion; deep neural networks
18. Template-based knowledge reuse method for generating high-speed railway virtual construction scenes (cited 1 time)
Authors: Heng Zhang, Wen Zhao, Zujie Han, Jun Zhu, Qing Zhu, Xinwen Ning, Dengke Fan, Hua Wang, Fengpin Jia, Wei Fang, Bin Yang, Weilian Li. International Journal of Digital Earth (SCIE, EI), 2023, Issue 1, pp. 1144-1163 (20 pages).
Virtual construction has become an important approach to the high-quality development of high-speed railways, but existing methods suffer from low efficiency in generating virtual construction scenes and an inability to reuse construction knowledge. To support rapid visual representation of multiple types of construction processes and methods, a template-based knowledge reuse method is proposed. The method uses a component-based modeling mode to build body-structure models of a high-speed railway project and generate a 3D scene; decomposes the construction process and builds a construction process knowledge base; establishes joint linkage models of construction machinery to form a construction method knowledge template; and fuses multiple types of information in time sequence to visualize the construction process. Based on this method, a prototype system was developed and virtual construction experiments were carried out. The results show that the method achieves the reuse of construction knowledge at several levels, including the construction machinery, construction method, and work site levels. Compared with animation software for virtual construction, the method improves production efficiency by 87%. Moreover, it can provide a multilevel knowledge reuse scheme for diversified virtual construction.
Keywords: high-speed railway; 3D scene; virtual construction; parametric template; knowledge reuse
19. Perspectives on point cloud-based 3D scene modeling and XR presentation within the cloud-edge-client architecture (cited 1 time)
Authors: Hongjia Wu, Hongxin Zhang, Jiang Cheng, Jianwei Guo, Wei Chen. Visual Informatics, EI, 2023, Issue 3, pp. 59-64 (6 pages).
With the support of edge computing, the synergy and collaboration among the central cloud, edge cloud, and terminal devices form an integrated computing ecosystem known as the cloud-edge-client architecture. This integration unlocks the value of data and computational power, presenting significant opportunities for large-scale 3D scene modeling and XR presentation. In this paper, we explore these perspectives and highlight new challenges in point cloud-based 3D scene modeling and XR presentation within the cloud-edge-client integrated architecture. We also propose a novel cloud-edge-client integrated technology framework and demonstrate a municipal governance application that addresses these challenges.
Keywords: cloud-edge-client integrated architecture; 3D scene perception and modeling; XR rendering; cloud-edge-client integrated visualization
20. A Method for 3D Scene Description and Segmentation in an Object Record
Author: Chen Tingbiao (Department of Radio Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, P.R. China). The Journal of China Universities of Posts and Telecommunications (EI, CSCD), 1996, Issue 1, pp. 37-42 (6 pages).
In this paper, a novel data- and rule-driven system for 3D scene description and segmentation in an unknown environment is presented. The system generates hierarchies of features that correspond to structural elements such as boundaries and shape classes of individual objects, as well as relationships between objects. It is implemented as a high-level component added to an existing low-level binocular vision system [1]. Based on a pair of matched stereo images produced by that system, 3D segmentation is first performed to group object boundary data into several edge sets, each of which is believed to belong to a particular object. Gross features of each object are then extracted and stored in an object record. The final structural description of the scene is produced from the information in the object record, a set of rules, and a rule implementor. The system is designed to handle partially occluded objects of different shapes and sizes on the 2D imager. Experimental results have shown its success in computing both object- and structural-level descriptions of common man-made objects.
Keywords: image segmentation; 3D scene description; object record; image understanding