Funding: Supported partially by the National Natural Science Foundation of China (No. U19A2063) and the Jilin Provincial Science & Technology Development Program of China (No. 20230201080GX).
Abstract: Currently, the main idea of iterative rendering methods is to allocate a fixed number of samples to pixels that have not been fully rendered, based on a computed completion rate. This strategy ignores how pixel values changed during earlier rendering passes, which may result in unnecessary additional iterations.
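As a minimal sketch of the allocation strategy this abstract critiques (hypothetical function and parameter names, not the paper's code): every pixel short of a target sample count receives the same fixed per-iteration budget, regardless of how its running estimate evolved in earlier passes.

```python
def completion_rate_allocation(pixels, budget_per_pixel=4, target_spp=64):
    """Fixed allocation: every unfinished pixel gets the same number of
    samples per iteration, ignoring the history of its estimate."""
    schedule = {}
    for p, spp in pixels.items():
        if spp < target_spp:  # pixel not "fully rendered" yet
            schedule[p] = budget_per_pixel
    return schedule

# p2 has reached the target and is skipped; p0 and p1 get equal budgets,
# even though p0 is nearly converged and p1 is far from it.
pixels = {"p0": 60, "p1": 10, "p2": 64}
print(completion_rate_allocation(pixels))  # → {'p0': 4, 'p1': 4}
```

The inefficiency the abstract points at is visible here: a nearly converged pixel and a noisy one receive identical budgets.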
Funding: Supported by the National Key R&D Program of China under Grant No. 2022YFB3303203 and the National Natural Science Foundation of China under Grant No. 62272275.
Abstract: With technological advancements, virtual reality (VR), once limited to high-end professional applications, is rapidly expanding into entertainment and broader consumer domains. However, the inherent contradiction between mobile hardware computing power and the demand for high-resolution, high-refresh-rate rendering has intensified, leading to critical bottlenecks, including frame latency and power overload, which constrain large-scale application of VR systems. This study systematically analyzes four key technologies for efficient VR rendering: (1) foveated rendering, which dynamically reduces rendering precision in peripheral regions based on the physiological characteristics of the human visual system (HVS), thereby significantly decreasing the graphics computation load; (2) stereo rendering, optimized through consistent stereo-rendering acceleration algorithms; (3) cloud rendering, utilizing object-based decomposition and illumination-based decomposition for distributed resource scheduling; and (4) low-power rendering, integrating parameter-optimized rendering, super-resolution technology, and frame-generation technology to enhance mobile energy efficiency. Through a systematic review of the core principles and optimization approaches of these technologies, this study establishes research benchmarks for developing efficient VR systems that achieve high fidelity and low latency, while providing theoretical support for the engineering implementation and industrial advancement of VR rendering technologies.
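The foveated-rendering idea in item (1) can be illustrated with a toy falloff function (all radii and constants here are illustrative assumptions, not values from any cited system): shading quality stays full inside a foveal radius around the gaze point and decays smoothly with distance from it.

```python
import math

def shading_rate(px, py, gaze, full_res_radius=200.0, falloff=0.004):
    """Toy foveation: full shading rate inside the foveal radius, then a
    smooth reduction with screen-space distance from the gaze point,
    clamped so the periphery is never shaded below a quarter rate."""
    d = math.hypot(px - gaze[0], py - gaze[1])
    if d <= full_res_radius:
        return 1.0
    return max(0.25, 1.0 / (1.0 + falloff * (d - full_res_radius)))

gaze = (960, 540)                       # assumed gaze at screen center
print(shading_rate(960, 540, gaze))     # 1.0 at the gaze point
print(shading_rate(1800, 540, gaze))    # reduced rate in the periphery
```

Real systems drive this from eye-tracked angular eccentricity and hardware variable-rate-shading tiers; the sketch only shows the monotone quality falloff that saves peripheral computation.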
Funding: Supported by the Bavarian Academic Forum (BayWISS), as part of the joint academic partnership digitalization program.
Abstract: Background In recent years, the demand for interactive photorealistic three-dimensional (3D) environments has increased in various fields, including architecture, engineering, and entertainment. However, balancing quality and efficiency in high-performance 3D applications and virtual reality (VR) remains challenging. Methods This study addresses this issue by revisiting and extending view interpolation for image-based rendering (IBR), which enables the exploration of spacious open environments in 3D and VR. To this end, we introduce multimorphing, a novel rendering method based on a spatial data structure of 2D image patches called the image graph. Using this approach, novel views can be rendered with up to six degrees of freedom from only a sparse set of views. The rendering process requires neither 3D reconstruction of the geometry nor per-pixel depth information; all relevant data for the output are extracted from the local morphing cells of the image graph. Detecting parallax image regions during preprocessing reduces rendering artifacts by extrapolating image patches from adjacent cells in real time. In addition, a GPU-based solution is presented to resolve exposure inconsistencies within a dataset, enabling seamless brightness transitions when moving between areas with varying light intensities. Results Experiments on multiple real-world and synthetic scenes demonstrate that the presented method achieves high, "VR-compatible" frame rates, even on mid-range and legacy hardware. While achieving adequate visual quality even for sparse datasets, it outperforms other IBR and current neural rendering approaches. Conclusions Using the correspondence-based decomposition of input images into morphing cells of 2D image patches, multidimensional image morphing provides high-performance novel-view generation, supporting open 3D and VR environments. Nevertheless, handling morphing artifacts in parallax image regions remains a topic for future research.
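The core of any view-interpolation scheme like the one above is a weighted blend of corresponding image patches from the views that form a cell. This is a generic sketch of that blend with barycentric-style weights (the function name, patch layout, and values are illustrative assumptions, not the paper's implementation):

```python
def blend_views(weights, patches):
    """Weighted blend of corresponding image patches from the views forming
    a morphing cell; weights are non-negative and sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must be barycentric"
    h, w = len(patches[0]), len(patches[0][0])
    out = [[0.0] * w for _ in range(h)]
    for wgt, patch in zip(weights, patches):
        for i in range(h):
            for j in range(w):
                out[i][j] += wgt * patch[i][j]
    return out

# Two 1x2 grayscale patches blended 75/25 (illustrative values only).
print(blend_views([0.75, 0.25], [[[0.0, 1.0]], [[1.0, 0.0]]]))  # → [[0.25, 0.75]]
```

In multimorphing the patches are first warped by the cell's correspondences before blending; the sketch shows only the final interpolation step.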
Funding: Supported by the National Natural Science Foundation of China under Grants 61631010 and 61806085.
Abstract: High-fidelity tactile rendering offers significant potential for improving the richness and immersion of touchscreen interactions. This study focuses on a quantitative description of tactile rendering fidelity using a custom-designed hybrid electrovibration and mechanical vibration (HEM) device. An electrovibration and mechanical vibration (EMV) algorithm that renders 3D gratings with different physical heights was proposed and shown to achieve 81% accuracy in shape recognition. Models of tactile rendering fidelity were established based on the evaluation of the height-discrimination threshold, and the psychophysical-physical relationships between the discrimination and reference heights were well described by a modification of Weber's law, with correlation coefficients above 0.9. The physiological-physical relationship between the pulse firing rate and the physical stimulation voltage was modeled using the Izhikevich spiking model with a logarithmic relationship.
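Weber's law in its classical form states that the discrimination threshold grows proportionally with the reference stimulus, ΔH = k·H. A minimal least-squares fit of the Weber fraction k from (reference, threshold) pairs looks as follows; the data values are made up for illustration and are not from the paper:

```python
def fit_weber(references, thresholds):
    """Least-squares fit of Weber's law dH = k * H (a line through the
    origin): k = sum(H * dH) / sum(H^2)."""
    num = sum(h * d for h, d in zip(references, thresholds))
    den = sum(h * h for h in references)
    return num / den

# Illustrative data only: discrimination thresholds growing with the
# reference grating height, as Weber's law predicts.
refs = [0.2, 0.4, 0.8, 1.6]       # reference heights (mm)
thrs = [0.03, 0.06, 0.12, 0.24]   # measured thresholds (mm)
print(round(fit_weber(refs, thrs), 3))  # → 0.15, the fitted Weber fraction
```

The "modification" of Weber's law the abstract mentions would add further terms (e.g., a constant offset for near-zero references); the plain proportional fit above is the baseline such modifications extend.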
Funding: Supported by the National Natural Science Foundation of China (No. 62072020) and the Leading Talents in Innovation and Entrepreneurship of Qingdao, China (19-3-2-21-zhc).
Abstract: Background Physics-based differentiable rendering (PBDR) aims to propagate gradients from scene parameters to image pixels or vice versa. The physically correct gradients obtained can be used in various applications, including inverse rendering and machine learning. Currently, two categories of methods are prevalent in the PBDR community: reparameterization and boundary-sampling methods. State-of-the-art boundary-sampling methods rely on a guiding structure to calculate the gradients efficiently: they take the rays generated in traditional path tracing and project them onto the object silhouette boundary to initialize the guiding structure. Methods In this study, we augment previous projective-sampling-based boundary-sampling methods in a bidirectional manner. Specifically, we use not only the rays spawned from the sensors but also the rays emitted by the emitters to initialize the guiding structure. Results To demonstrate the benefits of our technique, we perform a comparative analysis of differentiable-rendering and inverse-rendering performance, using a range of synthetic scenes and evaluating our method against state-of-the-art projective-sampling-based differentiable rendering methods. Conclusions The experiments show that our method achieves lower-variance gradients in forward differentiable rendering and better geometry-reconstruction quality in inverse rendering.
Funding: Supported by the National Natural Science Foundation of China (22003035, 21963006, 22073061), the Project of Shaanxi Province Youth Science and Technology New Star (2023KJXX-076), and the National Training Program of Innovation and Entrepreneurship for Undergraduates (202314390018).
Abstract: The utilization of phosphors that achieve full-spectrum lighting has emerged as a prevailing trend in the advancement of white light-emitting diode (WLED) lighting. In this study, we successfully prepared a novel green phosphor Ba_(2)Sc_(2)((BO_(3))_(2)B_(2)O_(5)):Ce^(3+) (BSBO:Ce^(3+)) that can be utilized for full-spectrum lighting and low-temperature sensing. BSBO:Ce^(3+) exhibits a broad-band excitation spectrum centered at 410 nm and a broad-band emission spectrum centered at 525 nm. Its internal and external quantum efficiencies are 99% and 49%, respectively. The thermal stability of BSBO:Ce^(3+) can be improved by substituting some Sc atoms with smaller cations. The thermal-quenching mechanism of BSBO:Ce^(3+) and the lattice occupancy of Ce ions in BSBO are discussed in detail. Furthermore, by combining the green phosphor BSBO:Ce^(3+) with a commercial blue phosphor and a red phosphor on a 405 nm chip, a white light source was obtained with a high average color rendering index (CRI) of 96.6, a low correlated color temperature (CCT) of 3988 K, and a high luminous efficacy of 88.0 lm/W. The luminous efficacy of the WLED exhibits negligible degradation during a 1000 h light-aging experiment. Moreover, an emission peak at 468 nm appears when excited at 352 nm and 80 K; however, the relative intensity of the peaks at 468 and 525 nm gradually weakens with increasing temperature, indicating the potential of this material as a low-temperature sensor.
Abstract: Objective Real-time rendering applications (such as games and virtual reality) place ever-higher demands on resolution and refresh rate, so real-time super-resolution of rendered images is essential for real-time rendering. However, existing video super-resolution algorithms and real-time rendering live in different data-processing pipelines, which makes them hard to apply directly within a real-time rendering pipeline. Methods To address this, we propose a real-time neural supersampling method based on a recurrent-frame structure. It fully exploits the low-resolution scene geometry data generated by the real-time rendering pipeline to improve the supersampling network's perception of 3D spatial information; it integrates a recurrent-frame framework into the supersampling method, introducing features from the previous frame's reconstruction to improve the current frame's reconstruction and thereby achieve temporal stability; and it places a reweighting network and an attention network in the feature-extraction module to improve the effectiveness of the extracted features. In addition, this paper proposes a real-time rendering workflow for neural supersampling that deploys the supersampling network onto the graphics computing pipeline and integrates it with the real-time rendering pipeline. Results Compared with NSRR (neural super-sampling for real-time rendering), a baseline that is likewise real-time and effective, our method improves the average peak signal-to-noise ratio (PSNR) by 0.4 dB while also being slightly faster. After deployment into the real-time rendering pipeline, lightweight pruning preserves real-time performance, and in some scenes the results still surpass the non-real-time deployed NSRR. Ablation experiments on the network modules also confirm the effectiveness of each submodule for the neural supersampling task. Conclusion The proposed neural supersampling network and the accompanying neural supersampling rendering workflow achieve better results while offering practical value.
Abstract: The ray casting algorithm can produce high-quality images in volume rendering; however, it suffers from heavy computational demands and slow rendering speed. Improving the re-sampling speed is key to accelerating the ray casting algorithm. An algorithm is introduced that reduces matrix computation by exploiting the matrix-transformation characteristics of re-sampling points across two coordinate systems. The projection of the 3D dataset onto the image plane is used to reduce the number of rays, and a bounding-box technique avoids sampling empty voxels. By extending the Bresenham algorithm to three dimensions, each re-sampling point is computed incrementally. Experimental results show a two- to three-fold improvement in rendering speed with the optimized algorithm, with image quality similar to that of the traditional algorithm. The optimized algorithm produces images of the required quality while reducing the total number of operations and speeding up volume rendering.
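The 3D extension of Bresenham's algorithm mentioned above can be sketched as integer line stepping along the dominant axis, so re-sampling points along a ray are generated without per-step floating-point arithmetic. This is a generic textbook-style sketch, not the paper's code; for brevity it handles only the x-dominant case.

```python
def bresenham_3d(x0, y0, z0, x1, y1, z1):
    """3D integer line stepping (Bresenham-style): visits the voxels along
    a ray, generating re-sampling points with additions and comparisons only."""
    dx, dy, dz = abs(x1 - x0), abs(y1 - y0), abs(z1 - z0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    sz = 1 if z1 > z0 else -1
    points = [(x0, y0, z0)]
    if dx >= dy and dx >= dz:            # x is the driving axis
        e_y, e_z = 2 * dy - dx, 2 * dz - dx   # error terms for y and z
        while x0 != x1:
            x0 += sx
            if e_y >= 0:                 # error overflow: step in y
                y0 += sy
                e_y -= 2 * dx
            if e_z >= 0:                 # error overflow: step in z
                z0 += sz
                e_z -= 2 * dx
            e_y += 2 * dy
            e_z += 2 * dz
            points.append((x0, y0, z0))
        return points
    # The y- and z-dominant cases follow symmetrically; omitted in this sketch.
    raise NotImplementedError("y-/z-dominant cases omitted in this sketch")

print(bresenham_3d(0, 0, 0, 5, 2, 1))
```

Each step costs only integer additions and comparisons, which is precisely the saving over recomputing re-sampling positions with matrix arithmetic per point.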
Abstract: As a cornerstone for applications such as autonomous driving, 3D urban perception is a burgeoning field of study. Enhancing the performance and robustness of these perception systems is crucial for ensuring the safety of next-generation autonomous vehicles. In this work, we introduce a novel neural scene representation called Street Detection Gaussians (SDGs), which redefines urban 3D perception through an integrated architecture unifying reconstruction and detection. At its core lies a dynamic Gaussian representation, where time-conditioned parameterization enables simultaneous modeling of static environments and dynamic objects through physically constrained Gaussian evolution. The framework's radar-enhanced perception module learns cross-modal correlations between sparse radar data and dense visual features, resulting in a 22% reduction in occlusion errors compared to vision-only systems. A differentiable rendering pipeline back-propagates semantic detection losses through the entire 3D reconstruction process, enabling the joint optimization of geometric and semantic fidelity. Evaluated on the Waymo Open Dataset and the KITTI Dataset, the system achieves real-time performance (135 Frames Per Second (FPS)), photorealistic quality (Peak Signal-to-Noise Ratio (PSNR) of 34.9 dB), and state-of-the-art detection accuracy (78.1% Mean Average Precision (mAP)), demonstrating a 3.8× end-to-end improvement over existing hybrid approaches while enabling seamless integration with autonomous driving stacks.
基金supported by ZTE Industry-University-Institute Cooperation Funds under Grant No.IA20230921014。
文摘In this paper,we provide a comprehensive examination of the evolution of graphics Application Programming Interfaces(APIs).We begin by exploring traditional graphics APIs,elucidating their distinct features and inherent challenges.This sets the stage for a detailed exploration of modern graphics APIs,with a focus on four critical design principles.These principles are further analyzed through specific case studies and categorical examinations.The paper then introduces MoerEngine,a bespoke rendering engine,as a practical case to demonstrate the real-world application of these modern principles in software engineering.In conclusion,the study offers insights into the potential future trajectory of graphics APIs,spotlighting emerging design patterns and technological innovations.It also ventures to predict the development trends and capabilities of next-generation graphics APIs.
Abstract: To address the artifacts and texture blurring that neural radiance fields (NeRF) suffer in novel-view synthesis under sparse input views and in complex scenes, we propose EMD-NeRF (NeRF based on explicit feature matching and scaled dot-product attention). A multi-scale feature-extraction network extracts multi-scale feature information from the sparse input views. A fused dot-product module computes view-interaction information as a shared branch. Cosine similarity serves as the matching cue for similarity-embedded volume rendering. A regularization loss enhances the quality of the scene's color-density field, improving the realism of the rendered novel views. Experimental results on multiple open datasets demonstrate the effectiveness of the proposed method.
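The cosine-similarity matching cue used above is a standard scoring of correspondence between feature vectors, shown here in a minimal form (the function is a generic sketch, not the EMD-NeRF implementation):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors: 1.0 for identical
    directions, 0.0 for orthogonal ones; the usual cue for scoring
    feature correspondences across views."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction → 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal features → 0.0
```

Because it normalizes out vector magnitude, cosine similarity compares only the direction of the feature vectors, which makes it robust to per-view differences in feature scale.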