Funding: Supported by the Glocal University 30 Project Fund of Gyeongsang National University in 2025.
Abstract: Scene graph prediction has emerged as a critical task in computer vision, focusing on transforming complex visual scenes into structured representations by identifying objects, their attributes, and the relationships among them. Extending this to 3D semantic scene graph (3DSSG) prediction introduces an additional layer of complexity, because it requires processing point-cloud data to accurately capture the spatial and volumetric characteristics of a scene. A significant challenge in 3DSSG is the long-tailed distribution of object and relationship labels, which leaves certain classes severely underrepresented and yields suboptimal performance on these rare categories. To address this, we propose a fusion prototypical network (FPN), which combines the strengths of conventional neural networks for 3DSSG with a prototypical network: the former are known for their ability to handle complex scene graph predictions, while the latter excels in few-shot learning scenarios. By leveraging this fusion, our approach enhances overall prediction accuracy and substantially improves the handling of underrepresented labels. Through extensive experiments on the 3DSSG dataset, we demonstrate that the FPN achieves state-of-the-art performance in 3D scene graph prediction as a single model and effectively mitigates the impact of the long-tailed distribution, providing a more balanced and comprehensive understanding of complex 3D environments.
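For illustration, the prototype-based classification that the prototypical half of such a network performs can be sketched as follows. This is a minimal sketch under assumptions: the function names and the plain Euclidean metric are illustrative, not the paper's implementation.

```python
import numpy as np

def prototypes(embeddings, labels):
    """Average the support embeddings of each class into one prototype."""
    classes = sorted(set(labels))
    return classes, np.stack([embeddings[np.array(labels) == c].mean(axis=0)
                              for c in classes])

def classify(query, classes, protos):
    """Assign the query to the class whose prototype is nearest (Euclidean)."""
    dists = np.linalg.norm(protos - query, axis=1)
    return classes[int(np.argmin(dists))]
```

Because a rare class needs only a handful of support examples to form a usable prototype, this style of classifier degrades far more gracefully on the long tail than a softmax head trained on imbalanced counts.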
Funding: Supported by the National Natural Science Foundation of China (No. 61976023).
Abstract: In this paper, we propose a Structure-Aware Fusion Network (SAFNet) for 3D scene understanding. As 2D images present more detailed information while 3D point clouds convey more geometric information, fusing the two complementary sources of data can improve the discriminative ability of the model. Fusion is a challenging task, since 2D and 3D data are essentially different and come in different formats. Existing methods first extract 2D multi-view image features, then aggregate them into sparse 3D point clouds, and achieve superior performance. However, these methods ignore the structural relations between pixels and points and directly fuse the two modalities of data without adaptation. To address this, we propose a structural deep metric learning method on pixels and points to explore these relations, and further utilize them to adaptively map the images and point clouds into a common canonical space for prediction. Extensive experiments on the widely used ScanNetV2 and S3DIS datasets verify the performance of the proposed SAFNet.
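The core idea of metric learning across modalities, pulling a corresponding pixel/point pair together in a common space while pushing non-matching pairs apart, can be sketched with a standard triplet loss. This is a generic stand-in for illustration only, not SAFNet's actual objective.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss: the matching pair must end up at least `margin`
    closer than the non-matching pair, otherwise a penalty is incurred."""
    d_pos = np.linalg.norm(anchor - positive)   # distance to the match
    d_neg = np.linalg.norm(anchor - negative)   # distance to the non-match
    return max(0.0, d_pos - d_neg + margin)
```

Training embeddings of both modalities with such a loss is one common way to obtain a shared canonical space in which pixel and point features become directly comparable.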
Funding: The National Natural Science Foundation of China (Nos. 61073086 and 70871078) and the National High Technology Research and Development Program (863) of China (No. 2008AA04Z126).
Abstract: The increasing scale and complexity of 3D scene design work call for an efficient way to understand designs in multi-disciplinary teams and to exploit the experience and underlying knowledge of previous works for reuse. However, previous research pays little attention to relationship maintenance and design reuse at the knowledge level. We propose a novel semantics-driven design reuse system, including a property computation algorithm that enables the system to compute properties during the modeling process to maintain semantic consistency, and a vertex-statistics-based algorithm that enables the system to recognize a scene design pattern as a universal semantic model for scenes of the same type. With the universal semantic model, the system guides the modeling process of future design works through suggestions and constraints on operations. The proposed framework empowers the reuse of 3D scene designs at both the model level and the knowledge level.
Funding: Supported by the National Natural Science Foundation of China (61872024) and the National Key R&D Program of China under Grant 2018YFB2100603.
Abstract: Background: In this study, we propose a novel 3D scene graph prediction approach for scene understanding from point clouds. Methods: The approach automatically organizes the entities of a scene in a graph, where objects are nodes and their relationships are modeled as edges. More specifically, we employ DGCNN to capture the features of objects and their relationships in the scene. A Graph Attention Network (GAT) is introduced to exploit latent features obtained from the initial estimation to further refine the object arrangement in the graph structure. A loss function modified from cross-entropy with variable weights is proposed to solve the multi-category problem in the prediction of objects and predicates. Results: Experiments reveal that the proposed approach performs favorably against state-of-the-art methods in terms of predicate classification and relationship prediction, and achieves comparable performance on object classification. Conclusions: The 3D scene graph prediction approach can form an abstract description of the scene space from point clouds.
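The general shape of a per-class weighted cross-entropy, the family of loss the abstract describes, can be sketched as follows. The exact weighting scheme here is an illustrative assumption, not the paper's formula.

```python
import numpy as np

def weighted_cross_entropy(probs, label, weights):
    """Cross-entropy with a per-class weight, so that rare classes
    contribute more to the loss and are not drowned out by frequent ones."""
    return -weights[label] * np.log(probs[label])
```

With `weights` set inversely proportional to class frequency, mispredicting a rare predicate costs more than mispredicting a common one, which counteracts the multi-category imbalance.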
Abstract: To improve coal-yard management and guarantee coal-inventory accuracy in different environments, the key technologies of Unity3D for 3D visualization of industrial coal yards are studied. The data-acquisition module uses a laser scanner to collect point-cloud data of the industrial coal yard and reconstructs the data with the patch-based multi-view stereo (PMVS) algorithm; the data are then fed into the 3D scene construction module, which uses Dynamo for Revit to generate a 3D scene model of the coal yard. The scene rendering and visualization module renders the model with Unity3D and completes its visual presentation; the constructed model and the point-cloud data are combined to compute the volume of the coal piles in each area of the yard, realizing coal-yard inventory. Test results show that the 3D scene model generated by this technology fully preserves the morphological details of the coal piles and reliably computes the volumes of coal piles of different heights.
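Volume estimation from a point cloud is commonly done by rasterizing the pile into a height field and summing column volumes; a minimal sketch follows. The grid resolution and the max-height-per-cell rule are assumptions for illustration, not the system's exact procedure.

```python
import numpy as np

def pile_volume(points, cell=0.5):
    """Estimate pile volume from an (N, 3) point cloud: bin x/y into a
    grid, keep the maximum z per cell, and sum cell_area * height."""
    xy = np.floor(points[:, :2] / cell).astype(int)
    heights = {}
    for (ix, iy), z in zip(map(tuple, xy), points[:, 2]):
        heights[(ix, iy)] = max(heights.get((ix, iy), 0.0), z)
    return cell * cell * sum(heights.values())
```

A finer `cell` tracks the pile surface more closely at the cost of needing a denser scan, which is the usual accuracy/coverage trade-off in coal-inventory work.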
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. U2034202, 41871289, and 42171397) and the Sichuan Science and Technology Program (Grant No. 2020JDTD0003).
Abstract: As an important technology of digital construction, real 3D models can improve the immersion and realism of virtual reality (VR) scenes. The large amount of data in real 3D scenes requires more effective rendering methods, but current rendering optimization methods have defects and cannot render real 3D scenes in virtual reality. In this study, the location of the viewing frustum is predicted by a Kalman filter, and eye-tracking equipment is used to recognize the region of interest (ROI) in the scene. Finally, the real 3D model of interest in the predicted frustum is rendered first. The experimental results show that the proposed method can predict the frustum location approximately 200 ms in advance, the prediction accuracy is approximately 87%, the scene rendering efficiency is improved by 8.3%, and motion sickness is reduced by approximately 54.5%. These results help promote the use of real 3D models in virtual reality and of ROI recognition methods. In future work, we will further improve the prediction accuracy of viewing frustums in virtual reality and the application of eye tracking in virtual geographic scenes.
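The kind of look-ahead a Kalman filter provides can be illustrated in one dimension with a constant-velocity model. This is a textbook sketch under assumed noise parameters, not the study's actual frustum tracker.

```python
import numpy as np

def kalman_predict(track, q=1e-4, r=0.04, dt=1.0):
    """Constant-velocity Kalman filter over a 1D position track; returns
    the one-step-ahead position prediction after filtering all samples."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise
    R = np.array([[r]])                     # measurement noise
    x = np.array([[track[0]], [0.0]])       # state: [position, velocity]
    P = np.eye(2)                           # state covariance
    for z in track[1:]:
        x = F @ x                           # predict state
        P = F @ P @ F.T + Q                 # predict covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)          # correct with z
        P = (np.eye(2) - K @ H) @ P
    return float((F @ x)[0, 0])             # extrapolate one step ahead
```

Extending the same recursion to the 6-DoF head pose gives the frustum location a frame or two before it is needed, which is what makes pre-rendering the predicted region possible.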
Funding: The National Natural Science Foundation of China (U22B2034) and the Fundamental Research Funds for the Central Universities (226-2022-00064).
Abstract: With the support of edge computing, the synergy and collaboration among the central cloud, edge cloud, and terminal devices form an integrated computing ecosystem known as the cloud-edge-client architecture. This integration unlocks the value of data and computational power, presenting significant opportunities for large-scale 3D scene modeling and XR presentation. In this paper, we explore the perspectives and highlight new challenges in point-cloud-based 3D scene modeling and XR presentation within the cloud-edge-client integrated architecture. We also propose a novel cloud-edge-client integrated technology framework and a demonstration of a municipal governance application to address these challenges.
Abstract: In this paper, a novel data- and rule-driven system for 3D scene description and segmentation in an unknown environment is presented. The system generates hierarchies of features that correspond to structural elements, such as boundaries and shape classes of individual objects, as well as relationships between objects. It is implemented as a high-level component added to an existing low-level binocular vision system [1]. Based on a pair of matched stereo images produced by that system, 3D segmentation is first performed to group object boundary data into several edge sets, each of which is believed to belong to a particular object. Then gross features of each object are extracted and stored in an object record. The final structural description of the scene is accomplished with information in the object record, a set of rules, and a rule implementor. The system is designed to handle partially occluded objects of different shapes and sizes on the 2D imager. Experimental results have shown its success in computing both object- and structural-level descriptions of common man-made objects.
Funding: Supported by the National Natural Science Foundation of China under Grant No. 61832016, the Key Research Projects of the Foundation Strengthening Program of China under Grant No. 2020JCJQZD01412, and the Tsinghua-Tencent Joint Laboratory for Internet Innovation Technology.
Abstract: This paper presents ScenePalette, a modeling tool that allows users to "draw" 3D scenes interactively by placing objects on a canvas based on their contextual relationships. ScenePalette is inspired by an important intuition that was often ignored in previous work: a real-world 3D scene consists of a contextually reasonable organization of objects; e.g., people typically place one double bed with several subordinate objects in a bedroom, rather than beds of different shapes. ScenePalette abstracts 3D repositories as multiplex networks and accordingly encodes implicit relations between or among objects. Specifically, basic statistics such as co-occurrence, in combination with more advanced relations, are used to model object relationships at different levels. Extensive experiments demonstrate that the latent space of ScenePalette has rich contexts that are essential for contextual representation and exploration.
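The co-occurrence statistic mentioned above, one layer of such a multiplex network, is simple to compute over a scene repository. A minimal sketch follows; the toy scene data are invented for illustration.

```python
from collections import Counter
from itertools import combinations

def co_occurrence(scenes):
    """Count how often each unordered pair of object categories
    appears together in the same scene."""
    counts = Counter()
    for scene in scenes:
        for a, b in combinations(sorted(set(scene)), 2):
            counts[(a, b)] += 1
    return counts

# Hypothetical repository: each scene is a list of object categories.
scenes = [["bed", "nightstand", "lamp"],
          ["bed", "nightstand"],
          ["desk", "lamp"]]
```

Pairs with high counts ("bed" with "nightstand") encode the contextual regularities that let a tool suggest subordinate objects as the user places an anchor object.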
Funding: The National Key R&D Program of China (Grant No. 2021YFC3300203); the TaiShan Scholars Program (Grant No. tsqn202211289); the Oversea Innovation Team Project of the "20 Regulations for New Universities" funding program of Jinan (Grant No. 2021GXRC073); and the Excellent Youth Scholars Program of Shandong Province (Grant No. 2022HWYQ-048).
Abstract: 3D scene understanding and reconstruction aims to obtain a concise scene representation from images and to reconstruct the complete scene, including the scene layout, object bounding boxes, and shapes. Existing holistic scene understanding methods primarily recover scenes from single images, with a focus on indoor scenes. Due to the complexity of the real world, the information provided by a single image is limited, resulting in issues such as object occlusion and omission. Furthermore, data captured from outdoor scenes exhibit sparsity, strong temporal dependencies, and a lack of annotations. Consequently, the task of understanding and reconstructing outdoor scenes is highly challenging. The authors propose a sparse multi-view-image-based 3D scene reconstruction framework (SMSR). It divides the scene reconstruction task into three stages: initial prediction, refinement, and fusion. The first two stages extract 3D scene representations from each viewpoint, while the final stage involves the selection, calibration, and fusion of object positions and orientations across different viewpoints. SMSR effectively addresses the issue of object omission by utilizing small-scale sequential scene information. Experimental results on the general outdoor scene dataset UrbanScene3D-Art Sci and our proprietary dataset, Software College Aerial Time-series Images, demonstrate that SMSR achieves superior performance in scene understanding and reconstruction.
Abstract: Natural-language-driven object tracking guides visual object tracking with natural-language descriptions; by fusing textual descriptions with visual information from images, it enables machines to perceive and understand the real three-dimensional world "as humans do." With the development of deep learning, new methods keep emerging in this field. However, most existing methods are limited to 2D space and fail to exploit pose information in 3D space, so they cannot perform 3D perception as naturally as humans do; meanwhile, traditional 3D object tracking relies on expensive sensors and faces limitations in data acquisition and processing, which makes 3D object tracking even more complex. To address these challenges, this paper proposes a new task, Natural Language-driven Object Tracking in 3D (NLOT3D) from a monocular viewpoint, and builds the corresponding dataset, NLOT3D-SPD. In addition, this paper designs an end-to-end NLOT3D-TR (Natural Language-driven Object Tracking in 3D based on Transformer) model, which fuses cross-modal visual and textual features and achieves excellent experimental results on the NLOT3D-SPD dataset. This paper provides a comprehensive benchmark for the NLOT3D task, together with comparative experiments and ablation studies, supporting further development of 3D object tracking.
Funding: Supported by the Key Program of the National Natural Science Foundation of China (Grant No. 41930104).
Abstract: Three-dimensional (3D) high-fidelity surface models play an important role in urban scene construction. However, the data quantity of such models is large and places a tremendous burden on rendering. Many applications must balance the visual quality of the models with rendering efficiency. This study provides a practical texture-baking processing pipeline for generating 3D models that reduces model complexity while preserving visually pleasing details. Concretely, we apply mesh simplification to the original model and use texture baking to create three types of baked textures, namely a diffuse map, a normal map, and a displacement map. The simplified model with the baked textures has a pleasing visualization effect in a rendering engine. Furthermore, we discuss the influence of various factors in the process on the results, as well as the functional principles and characteristics of the baked textures. The proposed approach is very useful for real-time rendering on limited rendering hardware, as no additional memory or computing capacity is required to properly preserve the relief details of the model. Each step in the pipeline is described in detail to facilitate its realization.
Funding: The National Natural Science Foundation of China (Grant No. 42301478), the Natural Science Foundation of Anhui Province (No. 2208085QD108), and the Major Project of Natural Science Research of the Anhui Provincial Department of Education (Grant No. KJ2021ZD0130).
Abstract: Color, as a significant element of village landscapes, serves various functions, such as enhancing aesthetic appeal and attractiveness and conveying emotions and cultural values. To explore the three-dimensional spatial characteristics of color landscapes in beautiful villages, this study conducted a comparative experiment involving eight provincial-level beautiful villages and eight ordinary villages in Jinzhai County. Landscape pattern indices were used to analyze the color landscape patterns on the facades of these villages, complemented by a quantitative analysis of color attributes using the Munsell color system. The results indicate that (1) natural landscape colors in beautiful villages are primarily concentrated in the yellow-red to green-yellow interval, while those in ordinary villages are widely distributed in the red to blue-green interval; artificial landscape colors in beautiful villages are mainly characterized by medium value, with chroma concentrated in the low-chroma range. (2) The proportions of color areas for forests, grasslands, and building walls in beautiful villages are higher by 14.76%, 2.17%, and 5.16%, respectively, compared to ordinary villages. However, the proportion of yellow exposed areas in ordinary villages is more than twice that of beautiful villages. (3) The Landscape Shape Index for forests, grasslands, and buildings in beautiful villages is 5.23, 8.01, and 8.19, respectively, indicating higher irregularity in color patches. (4) Ordinary villages exhibit a higher Shannon's diversity index, indicating a more complex distribution of colors, whereas beautiful villages demonstrate a higher number of connected dominant patches. This study can provide a scientific basis for village color planning and layout.
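The Shannon's diversity index used in result (4) is a standard landscape-ecology metric; a minimal sketch of its computation over color-patch proportions follows. The example proportions are illustrative, not the study's data.

```python
import math

def shannon_diversity(proportions):
    """Shannon's diversity index H' = -sum(p_i * ln p_i), where p_i is
    the area proportion of patch type i (proportions sum to 1)."""
    return -sum(p * math.log(p) for p in proportions if p > 0)
```

H' is 0 when one color class covers everything and grows as coverage spreads evenly over more classes, which is why a higher value signals a more complex color distribution.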