With technological advancements, virtual reality (VR), once limited to high-end professional applications, is rapidly expanding into entertainment and broader consumer domains. However, the inherent contradiction between mobile hardware computing power and the demand for high-resolution, high-refresh-rate rendering has intensified, leading to critical bottlenecks, including frame latency and power overload, which constrain large-scale applications of VR systems. This study systematically analyzes four key technologies for efficient VR rendering: (1) foveated rendering, which dynamically reduces rendering precision in peripheral regions based on the physiological characteristics of the human visual system (HVS), thereby significantly decreasing the graphics computation load; (2) stereo rendering, optimized through consistent stereo rendering acceleration algorithms; (3) cloud rendering, utilizing object-based decomposition and illumination-based decomposition for distributed resource scheduling; and (4) low-power rendering, integrating parameter-optimized rendering, super-resolution technology, and frame-generation technology to enhance mobile energy efficiency. Through a systematic review of the core principles and optimization approaches of these technologies, this study establishes research benchmarks for developing efficient VR systems that achieve high fidelity and low latency, while providing further theoretical support for the engineering implementation and industrial advancement of VR rendering technologies. Funding: Supported by the National Key R&D Program of China under grant No. 2022YFB3303203 and the National Natural Science Foundation of China under grant No. 62272275.
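As a rough illustration of the foveated-rendering idea summarized in (1), the sketch below builds a per-pixel shading-rate map from the angular distance to the gaze point. The function name, the eccentricity thresholds, and the 1x/2x/4x rate levels are illustrative assumptions, not values taken from the surveyed systems.

```python
import numpy as np

def shading_rate_map(width, height, gaze_xy, fovea_deg=5.0, mid_deg=15.0, pixels_per_deg=40.0):
    """Return a per-pixel shading-rate map (1 = full rate, 2/4 = coarser).

    gaze_xy is the gaze position in pixels. The angular thresholds and the
    1x/2x/4x rate levels are illustrative assumptions.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    # Approximate visual eccentricity from the pixel distance to the gaze point.
    dist_px = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    ecc_deg = dist_px / pixels_per_deg
    rate = np.full((height, width), 4, dtype=np.int32)   # far periphery: 1 sample per 4x4 block
    rate[ecc_deg < mid_deg] = 2                           # near periphery: 1 sample per 2x2 block
    rate[ecc_deg < fovea_deg] = 1                         # fovea: full resolution
    return rate

if __name__ == "__main__":
    rates = shading_rate_map(1920, 1080, gaze_xy=(960, 540))
    work = np.mean(1.0 / rates.astype(float) ** 2)        # fraction of full-rate shading work
    print(f"approximate shading work vs. full rate: {work:.2%}")
```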
Currently, the main idea of iterative rendering methods is to allocate a fixed number of samples to pixels that have not been fully rendered, based on the computed completion rate. This strategy, however, ignores how pixel values changed during previous rendering passes, which may result in unnecessary additional iterations. Funding: Supported in part by the National Natural Science Foundation of China (No. U19A2063) and the Jilin Provincial Science & Technology Development Program of China (No. 20230201080GX).
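To make the criticism concrete, here is a minimal sketch contrasting the fixed per-pixel allocation described above with a variant that weights the remaining budget by how much each pixel changed in the previous pass. The function names and the change-based weighting rule are illustrative assumptions, not the allocation scheme of any specific method.

```python
import numpy as np

def allocate_fixed(unconverged_mask, samples_per_pass=4):
    """Baseline criticized above: every unconverged pixel gets the same budget."""
    return unconverged_mask.astype(np.int32) * samples_per_pass

def allocate_by_change(prev_mean, curr_mean, unconverged_mask, total_budget):
    """Illustrative alternative: weight the budget by how much each pixel still
    changed in the previous pass (a proxy for its remaining variance)."""
    change = np.abs(curr_mean - prev_mean) * unconverged_mask
    weights = change / (change.sum() + 1e-12)
    return np.floor(weights * total_budget).astype(np.int32)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.random((4, 4))
    curr = prev + rng.normal(0.0, 0.05, (4, 4))
    mask = np.ones((4, 4), dtype=bool)
    print(allocate_fixed(mask).sum(), allocate_by_change(prev, curr, mask, 64).sum())
```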
The visual noise of each light-intensity area differs when an image is rendered with the Monte Carlo method. However, existing denoising algorithms have limited performance under complex lighting conditions and tend to lose detailed information. We therefore propose a rendered-image denoising method with filtering guided by lighting information. First, we design an image segmentation algorithm based on lighting information to segment the image into different illumination areas. Then, we establish a parameter prediction model guided by lighting information for filtering (PGLF) to predict the filtering parameters of the different illumination areas. For each illumination area, we use these filtering parameters to construct an area filter, and the filters are guided by the lighting information to perform sub-area filtering. Finally, the filtering results are fused with auxiliary features to output the denoised image, improving the overall denoising effect. On physically based rendering tool (PBRT) scenes and the Tungsten dataset, the experimental results show that, compared with other guided-filtering denoising methods, our method improves the peak signal-to-noise ratio (PSNR) by 4.2164 dB on average and the structural similarity index (SSIM) by 7.8% on average. This shows that our method can better reduce noise in complex lighting scenes and improve image quality. Funding: Supported by the National Natural Science Foundation of China (No. U19A2063), the Jilin Provincial Development Program of Science and Technology (No. 20230201080GX), and the Jilin Province Education Department Scientific Research Project (No. JJKH20230851KJ).
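A minimal sketch of the sub-area filtering idea follows: segment pixels into illumination bands and filter each band with its own strength. The fixed band edges and Gaussian sigmas below are placeholders for the parameters that PGLF predicts from lighting information, and the fusion with auxiliary features is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def filter_by_illumination(image, luminance, band_edges=(0.33, 0.66), sigmas=(2.0, 1.2, 0.6)):
    """Split an HxWx3 image into illumination bands (from an HxW luminance map),
    filter each band with its own strength, and recombine. The fixed band edges
    and sigmas stand in for the parameters a learned predictor would supply."""
    bands = np.digitize(luminance, band_edges)      # 0 = dark, 1 = mid, 2 = bright
    out = np.zeros_like(image)
    for band, sigma in enumerate(sigmas):
        mask = (bands == band)
        if not mask.any():
            continue
        # Filter the whole image at this strength, keep only this band's pixels.
        filtered = gaussian_filter(image, sigma=(sigma, sigma, 0))
        out[mask] = filtered[mask]
    return out
```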
Background Physics-based differentiable rendering (PBDR) aims to propagate gradients from scene parameters to image pixels or vice versa. The physically correct gradients obtained can be used in various applications, including inverse rendering and machine learning. Currently, two categories of methods are prevalent in the PBDR community: reparameterization and boundary-sampling methods. The state-of-the-art boundary-sampling methods rely on a guiding structure to calculate the gradients efficiently. They utilize the rays generated in traditional path-tracing methods and project them onto the object silhouette boundary to initialize the guiding structure. Methods In this study, we propose an augmentation of previous projective-sampling-based boundary-sampling methods in a bidirectional manner. Specifically, we utilize the rays spawned from the sensors and also employ the rays emitted by the emitters to initialize the guiding structure. Results To demonstrate the benefits of our technique, we perform a comparative analysis of differentiable rendering and inverse rendering performance. We utilize a range of synthetic scene examples and evaluate our method against state-of-the-art projective-sampling-based differentiable rendering methods. Conclusions The experiments show that our method achieves lower-variance gradients in the forward differentiable rendering process and better geometry reconstruction quality in the inverse-rendering results. Funding: Supported by the National Natural Science Foundation of China (No. 62072020) and the Leading Talents in Innovation and Entrepreneurship of Qingdao, China (19-3-2-21-zhc).
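For readers unfamiliar with what "gradients from scene parameters to image pixels" means in practice, the sketch below computes reference gradients of an arbitrary, hypothetical `render` function by central finite differences; boundary-sampling PBDR methods aim to produce the same gradients analytically and with far lower variance. This is only a generic validation tool, not the estimator proposed in the paper.

```python
import numpy as np

def finite_difference_gradient(render, params, h=1e-3):
    """Reference gradients d(image)/d(theta_k) by central differences.

    `render(params) -> image` is any (hypothetical) renderer that maps a
    parameter vector to an image array. Analytic PBDR methods are judged by
    how closely and how cheaply they reproduce these gradients.
    """
    params = np.asarray(params, dtype=float)
    base = render(params)
    grads = np.zeros((params.size,) + base.shape)
    for k in range(params.size):
        dp = np.zeros_like(params)
        dp[k] = h
        grads[k] = (render(params + dp) - render(params - dp)) / (2.0 * h)
    return grads
```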
Background In recent years, the demand for interactive photorealistic three-dimensional (3D) environments has increased in various fields, including architecture, engineering, and entertainment. However, achieving a balance between the quality and efficiency of high-performance 3D applications and virtual reality (VR) remains challenging. Methods This study addresses this issue by revisiting and extending view interpolation for image-based rendering (IBR), which enables the exploration of spacious open environments in 3D and VR. To this end, we introduce multimorphing, a novel rendering method based on a spatial data structure of 2D image patches, called the image graph. Using this approach, novel views can be rendered with up to six degrees of freedom using only a sparse set of views. The rendering process does not require 3D reconstruction of the geometry or per-pixel depth information; all relevant data for the output are extracted from the local morphing cells of the image graph. The detection of parallax image regions during preprocessing reduces rendering artifacts by extrapolating image patches from adjacent cells in real time. In addition, a GPU-based solution is presented to resolve exposure inconsistencies within a dataset, enabling seamless transitions of brightness when moving between areas with varying light intensities. Results Experiments on multiple real-world and synthetic scenes demonstrate that the presented method achieves high, "VR-compatible" frame rates, even on mid-range and legacy hardware. While achieving adequate visual quality even for sparse datasets, it outperforms other IBR and current neural rendering approaches. Conclusions Using the correspondence-based decomposition of input images into morphing cells of 2D image patches, multidimensional image morphing provides high-performance novel-view generation, supporting open 3D and VR environments. Nevertheless, the handling of morphing artifacts in the parallax image regions remains a topic for future research. Funding: Supported by the Bavarian Academic Forum (BayWISS), as part of the joint academic partnership digitalization program.
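For contrast with multimorphing, the sketch below shows the naive image-based baseline it improves on: distance-weighted blending of the k nearest captured views, with no image graph, morphing cells, or parallax handling. The function name and the inverse-distance weighting rule are illustrative assumptions.

```python
import numpy as np

def blend_nearest_views(view_images, view_positions, novel_position, k=3, eps=1e-6):
    """Render a novel view by distance-weighted blending of the k nearest
    captured views. This is only the naive IBR baseline that multimorphing
    improves upon; it ignores morphing cells and parallax correction."""
    positions = np.asarray(view_positions, dtype=float)
    d = np.linalg.norm(positions - np.asarray(novel_position, dtype=float), axis=1)
    nearest = np.argsort(d)[:k]
    weights = 1.0 / (d[nearest] + eps)
    weights /= weights.sum()
    stack = np.stack([view_images[i] for i in nearest]).astype(float)
    return np.tensordot(weights, stack, axes=1)   # weighted average of the k views
```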
High-fidelity tactile rendering offers significant potential for improving the richness and immersion of touchscreen interactions. This study focuses on a quantitative description of tactile rendering fidelity using a custom-designed hybrid electrovibration and mechanical vibration (HEM) device. An electrovibration and mechanical vibration (EMV) algorithm that renders 3D gratings with different physical heights was proposed and shown to achieve 81% accuracy in shape recognition. Models of tactile rendering fidelity were established based on the evaluation of the height discrimination threshold, and the psychophysical-physical relationships between the discrimination and reference heights were well described by a modification of Weber's law, with correlation coefficients higher than 0.9. The physiological-physical relationship between the pulse firing rate and the physical stimulation voltage was modeled using the Izhikevich spiking model with a logarithmic relationship. Funding: Supported by the National Natural Science Foundation of China under Grants 61631010 and 61806085.
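A small worked example of the kind of psychophysical fit described above: fitting a modified Weber's law, dH = k*H + c, to discrimination thresholds by least squares. The data values and the specific modified form are illustrative assumptions, not the study's measurements or model.

```python
import numpy as np

# Hypothetical example data: reference heights H (mm) and measured
# discrimination thresholds dH (mm); not values from the study.
H = np.array([0.2, 0.4, 0.8, 1.6, 3.2])
dH = np.array([0.06, 0.09, 0.16, 0.28, 0.55])

# One common "modified Weber's law": dH = k*H + c, where the constant c models
# a sensory floor. The exact modification used in the paper is not specified
# here, so this functional form is an assumption.
A = np.vstack([H, np.ones_like(H)]).T
(k, c), *_ = np.linalg.lstsq(A, dH, rcond=None)
pred = k * H + c
r = np.corrcoef(dH, pred)[0, 1]
print(f"Weber fraction k = {k:.3f}, floor c = {c:.3f} mm, correlation r = {r:.2f}")
```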
The utilization of phosphors that achieve full-spectrum lighting has emerged as a prevailing trend in the advancement of white light-emitting diode (WLED) lighting. In this study, we successfully prepared a novel green phosphor Ba₂Sc₂((BO₃)₂B₂O₅):Ce³⁺ (BSBO:Ce³⁺) that can be utilized for full-spectrum lighting and low-temperature sensors. BSBO:Ce³⁺ exhibits a broad-band excitation spectrum centered at 410 nm and a broad-band emission spectrum centered at 525 nm. The internal and external quantum efficiencies of BSBO:Ce³⁺ are 99% and 49%, respectively. The thermal stability of BSBO:Ce³⁺ can be improved by substituting some of the Sc atoms with smaller cations. The thermal quenching mechanism of BSBO:Ce³⁺ and the lattice occupancy of Ce ions in BSBO are discussed in detail. Furthermore, by combining the green phosphor BSBO:Ce³⁺, a commercial blue phosphor, and a red phosphor on a 405 nm chip, a white light source was obtained with a high average color rendering index (CRI) of 96.6, a low correlated color temperature (CCT) of 3988 K, and a high luminous efficacy of 88.0 lm/W. The luminous efficacy of the WLED exhibits negligible degradation during a 1000 h light-aging experiment. Moreover, an emission peak at 468 nm appears under 352 nm excitation at 80 K, and the relative intensity of the peaks at 468 and 525 nm gradually weakens with increasing temperature, indicating the potential of this material as a low-temperature sensor. Funding: Supported by the National Natural Science Foundation of China (22003035, 21963006, 22073061), the Project of Shaanxi Province Youth Science and Technology New Star (2023KJXX-076), and the National Training Program of Innovation and Entrepreneurship for Undergraduates (202314390018).
To reconstruct and render the weak and repetitive textures of damaged aviation functional surfaces, an improved neural radiance field, named TranSR-NeRF, is proposed. In this paper, a data acquisition system was designed and built. The acquired images generated initial point clouds through TransMVSNet. Meanwhile, after features were extracted from the images through the improved SE-ConvNeXt network, the extracted features were aligned and fused with the initial point cloud to generate a high-quality neural point cloud. After ray tracing and sampling of the neural point cloud, the ResMLP neural network designed in this paper, which introduces spatial coordinate and relative positional encoding, was used to regress the volume density and radiance under a given viewing angle. Reconstruction and rendering of the damaged functional surface at arbitrary-scale super-resolution is thereby realized. The influence of illumination conditions and background environment on model performance is also studied through experiments, and comparison and ablation experiments for the improved methods proposed in this paper are conducted. The experimental results show that the improved model performs well. Finally, an application experiment on an object detection task is carried out, and the results show that the model has good practicality. Funding: Supported by the National Science and Technology Major Project, China (No. J2019-Ⅲ-0009-0053) and the National Natural Science Foundation of China (No. 12075319).
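As context for the density/radiance regression mentioned above, the sketch below shows the standard emission-absorption compositing that NeRF-style renderers apply to the per-sample outputs of such a network. It is the generic volume-rendering quadrature, not the TranSR-NeRF architecture itself.

```python
import numpy as np

def composite_ray(densities, radiances, deltas):
    """Standard NeRF-style emission-absorption compositing along one ray.

    densities: (N,)   volume densities sigma_i predicted at the ray samples
    radiances: (N, 3) RGB radiance predicted at the ray samples
    deltas:    (N,)   distances between consecutive samples
    This is the generic quadrature used by neural radiance fields, not the
    specific TranSR-NeRF network described above.
    """
    alpha = 1.0 - np.exp(-densities * deltas)                      # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha))[:-1])  # transmittance T_i
    weights = trans * alpha
    color = (weights[:, None] * radiances).sum(axis=0)
    return color, weights
```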
Binaural rendering is of great interest to virtual reality and immersive media. Although humans can naturally use their two ears to perceive the spatial information contained in sounds, it is a challenging task for machines to achieve binaural rendering, since the description of a sound field often requires multiple channels and even the metadata of the sound sources. In addition, the perceived sound varies from person to person even in the same sound field. Previous methods generally rely on individual-dependent head-related transfer function (HRTF) datasets and optimization algorithms that act on HRTFs. In practical applications, existing methods have two major drawbacks. The first is a high personalization cost, as traditional methods meet personalized needs by measuring HRTFs. The second is insufficient accuracy, because traditional methods retain the part of the information that is more important in perception at the cost of discarding the rest. It is therefore desirable to develop novel techniques that achieve personalization and accuracy at low cost. To this end, we focus on the binaural rendering of ambisonics and propose 1) a channel-shared encoder and channel-compared attention integrated into neural networks, and 2) a loss function quantifying interaural level differences to deal with spatial information. To verify the proposed method, we collect and release the first paired ambisonic-binaural dataset and introduce three metrics to evaluate the content-information and spatial-information accuracy of end-to-end methods. Extensive experimental results on the collected dataset demonstrate the superior performance of the proposed method and the shortcomings of previous methods. Funding: Supported in part by the National Natural Science Foundation of China (62176059, 62101136).
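One plausible form of the interaural-level-difference term described above is sketched below: a broadband ILD in dB computed from the left/right RMS levels, with the loss being the mismatch between the rendered and reference binaural signals. The paper's exact definition (for example, computed per time-frequency bin) may differ.

```python
import numpy as np

def ild_db(left, right, eps=1e-8):
    """Interaural level difference in dB between left/right waveforms."""
    rms_l = np.sqrt(np.mean(left ** 2) + eps)
    rms_r = np.sqrt(np.mean(right ** 2) + eps)
    return 20.0 * np.log10(rms_l / rms_r)

def ild_loss(pred_lr, target_lr):
    """Penalize the mismatch of interaural level difference between the rendered
    and the reference binaural signals. One plausible broadband form; the
    paper's loss may instead be evaluated per time-frequency bin."""
    return abs(ild_db(*pred_lr) - ild_db(*target_lr))
```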
Three-dimensional surfaces are typically modeled as implicit surfaces. However, direct rendering of implicit surfaces is not simple, especially when such surfaces contain finely detailed shapes. One approach is ray-casting, where the field of the implicit surface is assumed to be piecewise polynomials defined on the grid of a rectangular domain. A critical issue for direct rendering based on ray-casting is the computational cost of finding intersections between surfaces and rays. In particular, ray-casting requires many function evaluations along each ray, severely slowing the rendering speed. In this paper, a method is proposed to achieve direct rendering of polynomial-based implicit surfaces in real time by strategically narrowing the search range and designing the shader to exploit the structure of the piecewise polynomials. In experiments, the proposed method achieved high frame rates for different test cases, with a speed-up factor ranging from 1.1 to 218.2. In addition, the proposed method demonstrated better efficiency at high cell resolutions. In terms of memory consumption, the proposed method saved between 90.94% and 99.64% in different test cases. Generally, the proposed method became more memory-efficient as the cell resolution increased. Funding: Supported by JSPS KAKENHI Grant Number 21K11928.
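The sketch below illustrates the general idea of narrowing the search range along a ray: step through the ray in cell-sized intervals and run a bounded bisection only where the implicit field changes sign. The step size, iteration count, and example field are illustrative; the paper's shader-level exploitation of piecewise polynomials is not reproduced.

```python
import numpy as np

def intersect_ray(f, origin, direction, t_max, cell_size=0.1, bisect_iters=20):
    """Find the first root of the implicit field f along a ray.

    Instead of densely sampling the whole ray, step through it in cell-sized
    intervals and refine with bisection only where the field changes sign.
    """
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    t0, f0 = 0.0, f(origin)
    while t0 < t_max:
        t1 = min(t0 + cell_size, t_max)
        f1 = f(origin + t1 * direction)
        if f0 * f1 < 0.0:                      # sign change: a root lies in [t0, t1]
            lo, hi = t0, t1
            for _ in range(bisect_iters):      # cheap, bounded refinement
                mid = 0.5 * (lo + hi)
                if f0 * f(origin + mid * direction) < 0.0:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
        t0, f0 = t1, f1
    return None

# Example: a unit sphere expressed as a polynomial implicit field.
sphere = lambda p: p @ p - 1.0
print(intersect_ray(sphere, origin=(0.3, 0.2, -3.0), direction=(0.0, 0.0, 1.0), t_max=10.0))
```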
With the development of virtual reality (VR) technology, more and more industries are beginning to integrate with VR technology. To address the problem that the lighting effect of Caideng (traditional festive lanterns) cannot be rendered directly in digital Caideng scenes, this article analyzes the lighting model and combines it with the lighting characteristics of Caideng scenes to design an optimized lighting model algorithm that fuses the bidirectional transmission distribution function (BTDF) model. This algorithm can efficiently render the lighting effect of Caideng models in a virtual environment. Image optimization methods are then used to enhance the immersive experience in VR. Finally, a Caideng roaming interactive system was designed based on this method. The results show that the frame rate of the system is stable during operation, remaining above 60 fps, and that the system delivers a good immersive experience.
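As a toy illustration of fusing a transmission (BTDF) term into a lantern's lighting model, the sketch below adds a diffuse transmission contribution from the light inside the shell to an ordinary Lambertian reflection term. The coefficients, the diffuse BTDF form, and the uniform interior lighting are simplifying assumptions, not the article's model.

```python
import numpy as np

def shade_lantern_point(n, wi, light_radiance, interior_radiance,
                        kd=np.array([0.9, 0.3, 0.2]), kt=np.array([0.8, 0.5, 0.4])):
    """Toy shading of a thin translucent lantern shell: a Lambertian reflection
    term for exterior light plus a Lambertian-style transmission (BTDF) term for
    the light source inside the lantern. All coefficients are illustrative.

    n, wi: unit surface normal and unit direction toward the exterior light.
    """
    n = np.asarray(n, dtype=float)
    wi = np.asarray(wi, dtype=float)
    cos_r = max(np.dot(n, wi), 0.0)                 # exterior light hits the front face
    reflected = kd / np.pi * light_radiance * cos_r
    # Interior light arrives from the back side and is transmitted through the
    # shell; the interior field is treated as uniform for simplicity.
    transmitted = kt / np.pi * interior_radiance
    return reflected + transmitted
```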
BACKGROUND The auricle, or auricula, defines the visible boundaries of the external ear and is essential in forensic investigations, including facial reconstruction and human remains identification. Beyond its forensic significance, auricular morphology attracts interest from various fields, such as medicine and industry. The size of the ears is culturally associated with health and longevity, while surgical techniques for ear reconstruction address both congenital and aesthetic concerns. AIM To determine whether known correlations with various measurements and observations regarding sex and age could also be established through computed tomography (CT). METHODS Computed tomography scans of the head from 342 females and 329 males aged 18 to 97 years (mean = 60 ± 19 years) were included in this study. Different auricular lengths, widths, and perimeters were measured for both sides. Additionally, the preauricular area was assessed using three-dimensional volume rendering technique images. RESULTS The measured auricular dimensions in centimeters are presented as mean values (right/left) for males (length 1: 6.91 ± 0.51/6.93 ± 0.52; length 2: 2.83 ± 0.35/2.84 ± 0.34; width 1: 3.94 ± 0.32/4.01 ± 0.36; width 2: 3.51 ± 0.34/3.46 ± 0.31; perimeter: 17.66 ± 1.25/17.71 ± 1.28) and females (length 1: 6.44 ± 0.5/6.48 ± 0.51; length 2: 2.7 ± 0.32/2.71 ± 0.33; width 1: 3.6 ± 0.32/3.68 ± 0.31; width 2: 3.3 ± 0.3/3.26 ± 0.27; perimeter: 16.36 ± 1.2/16.46 ± 1.2). A positive correlation with age was shown for all measurements, with the highest value for the perimeter in both males (r-value, right/left: 0.49/0.47) and females (r-value, right/left: 0.53/0.53). After confounding factors were excluded, the preauricular vertical line was first seen at 45 years. The mean age for males with preauricular vertical lines was 66.65 ± 10.92 years (95%CI: 63.99-69.3), while without vertical lines it was 44.48 ± 16.15 years (95%CI: 41.21-47.74); for females, it was 70.18 ± 12.44 years (95%CI: 68.9-71.46) with and 47.87 ± 17.09 years (95%CI: 45.96-49.78) without vertical lines. CONCLUSION In this study, we pioneered the use of CT volumetric data to examine human auricle morphology and achieved a precise 3D (pre-)auricular assessment. Sex-specific positive correlations between ear dimensions and age, as well as the mean age for the appearance of preauricular lines, were identified, providing valuable insights into the capabilities of modern CT devices. Funding: Schulz B received funding from Guerbet AG, No. 8050.
Three-dimensional (3D) fetal ultrasound has been widely used in prenatal examinations. Realistic and real-time volumetric ultrasound rendering can enhance the effectiveness of diagnoses and assist obstetricians and pregnant mothers in communicating. However, this remains a challenging task because (1) there is a large amount of speckle noise in ultrasound images and (2) ultrasound images usually have low contrast, making it difficult to distinguish different tissues and organs. Traditional local-illumination-based methods do not achieve satisfactory results, and the real-time requirement makes the task even more challenging. This study presents a novel real-time volume-rendering method equipped with a global illumination model for 3D fetal ultrasound visualization. The method renders direct illumination and indirect illumination separately by calculating single-scattering and multiple-scattering radiances, respectively. The indirect illumination effect is simulated using volumetric photon mapping. A novel screen-space density estimation is proposed to calculate each photon's brightness, avoiding complicated storage structures and accelerating computation. This study also proposes a high-dynamic-range approach to address the issue that the dynamic range of fetal skin exceeds that of the display device. Experiments show that our technique, compared to conventional methodologies, can generate realistic rendering results with far more depth information. Funding: Supported by a grant from the General Research Fund of the Hong Kong Research Grants Council, No. 15218521, and a grant under the scheme of Collaborative Research with World-leading Research Groups in the Hong Kong Polytechnic University, No. G-SACF.
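A minimal sketch of a screen-space density estimate of photon brightness: splat projected photons into a screen buffer and smooth with a kernel, avoiding a world-space photon map. The splatting scheme and the Gaussian kernel are generic choices; the paper's estimator, weights, and HDR handling are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def screen_space_photon_density(photon_px, photon_power, width, height, kernel_sigma=3.0):
    """Estimate per-pixel indirect radiance by splatting projected photons into a
    screen-space buffer and smoothing with a kernel (a generic screen-space
    density estimate, not the paper's estimator).

    photon_px:    (N, 2) integer pixel coordinates of the projected photons
    photon_power: (N,)   power carried by each photon
    """
    buf = np.zeros((height, width), dtype=np.float64)
    xs = np.clip(photon_px[:, 0], 0, width - 1)
    ys = np.clip(photon_px[:, 1], 0, height - 1)
    np.add.at(buf, (ys, xs), photon_power)            # accumulate photon power per pixel
    return gaussian_filter(buf, sigma=kernel_sigma)   # kernel density estimate on screen
```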
Painting is done according to the artist's style, and the most representative elements of that style are the texture and shape of the brush strokes. Computer simulations can reproduce an artist's painting by taking such strokes and pasting them onto an image; this is called stroke-based rendering. Because the image is built from strokes, the quality of the result depends on the number and quality of the strokes available, yet it is not easy to render with a large amount of stroke information, since only a limited number of strokes can be scanned. In this work, we produce rendering results using a mass of stroke data generated by expanding existing strokes through warping. Through this, we obtain results of higher quality than previous studies. Finally, we also examine the correlation between the amount of stroke data and the quality of the results. Funding: Supported by the Chung-Ang University Research Scholarship Grants in 2017.
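A bare-bones stroke-based rendering loop, for orientation: stamp a stroke footprint repeatedly onto a canvas, tinting each stamp with the mean color of the region it covers. A single stroke mask stands in for the large bank of warped strokes used in the work above, and stroke placement and ordering are simplified.

```python
import numpy as np

def paint_with_strokes(target, stroke_alpha, n_strokes=2000, seed=0):
    """Greedy stroke-based rendering sketch.

    target:       (H, W, 3) float image in [0, 1]
    stroke_alpha: (h, w) opacity mask of one (scanned or warped) stroke,
                  assumed smaller than the target image
    """
    rng = np.random.default_rng(seed)
    H, W, _ = target.shape
    h, w = stroke_alpha.shape
    canvas = np.ones_like(target)                      # start from a white canvas
    for _ in range(n_strokes):
        y = rng.integers(0, H - h)
        x = rng.integers(0, W - w)
        patch = target[y:y + h, x:x + w]
        color = patch.reshape(-1, 3).mean(axis=0)      # tint from the covered region
        a = stroke_alpha[..., None]
        canvas[y:y + h, x:x + w] = (1 - a) * canvas[y:y + h, x:x + w] + a * color
    return canvas
```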
The use of compressed meshes in parallel rendering architectures is still an unexplored area, the main challenge of which is to partition and sort the encoded mesh in the compression domain. This paper presents a mesh compression scheme, PRMC (Parallel Rendering based Mesh Compression), supplying encoded meshes that can be partitioned and sorted in a parallel rendering system even in the encoded domain. First, we segment the mesh into submeshes and clip the submeshes' boundaries into Runs, and then piecewise-compress the submeshes and Runs respectively. With the help of several auxiliary index tables, compressed submeshes and Runs can serve as rendering primitives in a parallel rendering system. Based on PRMC, we design and implement a parallel rendering architecture. Compared with an uncompressed representation, experimental results showed that PRMC meshes applied in a cluster parallel rendering system can dramatically reduce the communication requirement. Funding: Project supported by the National Basic Research Program (973) of China (No. 2002CB312105), the National Natural Science Foundation of China (No. 60573074), the Natural Science Foundation of Shanxi Province, China (No. 20041040), the Shanxi Foundation of Tackling Key Problem in Science and Technology (No. 051129), and the Key NSFC Project of "Digital Olympic Museum" (No. 60533080), China.
Background In recent years, with the rapid development of the mobile Internet and Web3D technologies, a large number of web-based online 3D visualization applications have emerged. Web3D applications, including Web3D online tourism, Web3D online architecture, Web3D online education environments, Web3D online medical care, and Web3D online shopping, are examples of applications that leverage 3D rendering on the web. These applications have pushed the boundaries of traditional web applications that use text, sound, image, video, and 2D animation as their main communication media, and instead use 3D virtual scenes as the main interaction object, enabling a user experience that delivers a strong sense of immersion. This paper approaches these emerging Web3D applications, which have a growing impact on people's lives, through real-time rendering technology, the core technology of Web3D. It discusses all the major 3D graphics APIs of Web3D and the well-known Web3D engines at home and abroad, and classifies the real-time rendering frameworks of Web3D applications into different categories. Results This study then analyzes the specific demands that different fields pose to Web3D applications, referring to representative Web3D applications in each particular field. Conclusions Our survey results show that Web3D applications based on real-time rendering have penetrated deeply into many sectors of society and even the family, a trend that influences every line of industry. Funding: Supported by the Science and Technology Program of the Educational Commission of Jiangxi Province, China (DA202104172), the Innovation and Entrepreneurship Course Program of Nanchang Hangkong University (KCPY1910), and the Teaching Reform Research Program of Nanchang Hangkong University (JY21040).
Rendering is a computer graphics image-generation technique: starting from a geometric scene model stored in the computer, it adds color, texture, and material and, according to the specified lighting conditions and the lighting relationships within the scene, computes a highly realistic view image. Rendering 3D animation is extremely demanding on computer performance. To reduce the time spent rendering 3D animation, distributed rendering of 3D animation was tested with Cinema 4D Team Render, part of the Cinema 4D software from the German company Maxon. The results show that this approach can indeed reduce the time spent on 3D animation rendering severalfold.