With technological advancements, virtual reality (VR), once limited to high-end professional applications, is rapidly expanding into entertainment and broader consumer domains. However, the inherent contradiction between mobile hardware computing power and the demand for high-resolution, high-refresh-rate rendering has intensified, leading to critical bottlenecks, including frame latency and power overload, which constrain large-scale applications of VR systems. This study systematically analyzes four key technologies for efficient VR rendering: (1) foveated rendering, which dynamically reduces rendering precision in peripheral regions based on the physiological characteristics of the human visual system (HVS), thereby significantly decreasing graphics computation load; (2) stereo rendering, optimized through consistent stereo rendering acceleration algorithms; (3) cloud rendering, utilizing object-based decomposition and illumination-based decomposition for distributed resource scheduling; and (4) low-power rendering, integrating parameter-optimized rendering, super-resolution technology, and frame-generation technology to enhance mobile energy efficiency. Through a systematic review of the core principles and optimization approaches of these technologies, this study establishes research benchmarks for developing efficient VR systems that achieve high fidelity and low latency while providing further theoretical support for the engineering implementation and industrial advancement of VR rendering technologies.
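The foveated-rendering idea in the abstract above, full shading precision at the gaze point and progressively coarser shading with eccentricity, can be sketched as a simple eccentricity-to-shading-rate mapping. The linear falloff and all constants below are illustrative assumptions; the survey does not prescribe them.

```python
def shading_rate(eccentricity_deg, fovea_deg=5.0, falloff=0.05):
    """Relative shading rate in (0, 1]: full rate inside the foveal
    region, decaying linearly with angular distance from the gaze
    point outside it, with a floor to keep the periphery visible.
    All parameters are hypothetical, chosen for illustration only."""
    ecc = max(0.0, eccentricity_deg - fovea_deg)
    return max(0.125, 1.0 - falloff * ecc)
```

A renderer would use such a function to pick, per tile, how many of the full-resolution samples to actually shade.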
Currently, the main idea of iterative rendering methods is to allocate a fixed number of samples to pixels that have not been fully rendered by calculating the completion rate. It is obvious that this strategy ignores the changes in pixel values during the previous rendering process, which may result in additional iterative operations.
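The fixed-allocation strategy the abstract critiques can be sketched as follows; the function and parameter names, and the equal-share batching scheme, are illustrative assumptions rather than any paper's actual code.

```python
def allocate_samples(done, target, batch):
    """Fixed completion-rate allocation: every pixel below its target
    sample count gets an equal share of the per-iteration budget,
    ignoring how the pixel's value changed in earlier passes."""
    unfinished = [i for i, d in enumerate(done) if d < target]
    if not unfinished:
        return {}
    per_pixel = max(1, batch // len(unfinished))
    return {i: per_pixel for i in unfinished}
```

The critique is visible in the signature itself: the decision depends only on sample counts, never on the rendered values.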
Background Physics-based differentiable rendering (PBDR) aims to propagate gradients from scene parameters to image pixels or vice versa. The physically correct gradients obtained can be used in various applications, including inverse rendering and machine learning. Currently, two categories of methods are prevalent in the PBDR community: reparameterization and boundary sampling methods. The state-of-the-art boundary sampling methods rely on a guiding structure to calculate the gradients efficiently. They utilize the rays generated in traditional path-tracing methods and project them onto the object silhouette boundary to initialize the guiding structure. Methods In this study, we propose an augmentation of previous projective-sampling-based boundary-sampling methods in a bidirectional manner. Specifically, we utilize the rays spawned from the sensors and also employ the rays emitted by the emitters to initialize the guiding structure. Results To demonstrate the benefits of our technique, we perform a comparative analysis of differentiable rendering and inverse rendering performance. We utilize a range of synthetic scene examples and evaluate our method against state-of-the-art projective-sampling-based differentiable rendering methods. Conclusions The experiments show that our method achieves lower-variance gradients in the forward differentiable rendering process and better geometry reconstruction quality in the inverse-rendering results.
Background In recent years, the demand for interactive photorealistic three-dimensional (3D) environments has increased in various fields, including architecture, engineering, and entertainment. However, achieving a balance between quality and efficiency for high-performance 3D applications and virtual reality (VR) remains challenging. Methods This study addresses this issue by revisiting and extending view interpolation for image-based rendering (IBR), which enables the exploration of spacious open environments in 3D and VR. We introduce multimorphing, a novel rendering method based on a spatial data structure of 2D image patches, called the image graph. Using this approach, novel views can be rendered with up to six degrees of freedom using only a sparse set of views. The rendering process requires neither 3D reconstruction of the geometry nor per-pixel depth information; all relevant data for the output are extracted from the local morphing cells of the image graph. The detection of parallax image regions during preprocessing reduces rendering artifacts by extrapolating image patches from adjacent cells in real time. In addition, a GPU-based solution is presented to resolve exposure inconsistencies within a dataset, enabling seamless transitions of brightness when moving between areas with varying light intensities. Results Experiments on multiple real-world and synthetic scenes demonstrate that the presented method achieves high, "VR-compatible" frame rates, even on mid-range and legacy hardware. While achieving adequate visual quality even for sparse datasets, it outperforms other IBR and current neural rendering approaches. Conclusions Using the correspondence-based decomposition of input images into morphing cells of 2D image patches, multidimensional image morphing provides high-performance novel view generation, supporting open 3D and VR environments. Nevertheless, the handling of morphing artifacts in parallax image regions remains a topic for future research.
High-fidelity tactile rendering offers significant potential for improving the richness and immersion of touchscreen interactions. This study focuses on a quantitative description of tactile rendering fidelity using a custom-designed hybrid electrovibration and mechanical vibration (HEM) device. An electrovibration and mechanical vibration (EMV) algorithm that renders 3D gratings with different physical heights was proposed and shown to achieve 81% accuracy in shape recognition. Models of tactile rendering fidelity were established based on the evaluation of the height discrimination threshold, and the psychophysical-physical relationships between the discrimination and reference heights were well described by a modification of Weber's law, with correlation coefficients higher than 0.9. The physiological-physical relationship between the pulse firing rate and the physical stimulation voltage was modeled using the Izhikevich spiking model with a logarithmic relationship.
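The psychophysical model described above, discrimination thresholds that grow with the reference height under a modification of Weber's law, can be sketched as a linear fit ΔH = k·H + c. The additive constant c is one common modification of Weber's law; the paper's exact functional form is not given here, so this is only an assumed variant.

```python
import numpy as np

def fit_modified_weber(ref_heights, thresholds):
    """Fit dH = k*H + c (Weber's law with an assumed additive
    constant) by least squares; returns (k, c, r), where r is the
    correlation coefficient between reference height and threshold."""
    H = np.asarray(ref_heights, dtype=float)
    dH = np.asarray(thresholds, dtype=float)
    A = np.vstack([H, np.ones_like(H)]).T   # design matrix [H, 1]
    (k, c), *_ = np.linalg.lstsq(A, dH, rcond=None)
    r = np.corrcoef(H, dH)[0, 1]
    return k, c, r
```

A correlation coefficient above 0.9, as reported in the abstract, would indicate the linear form captures the threshold data well.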
The utilization of phosphors that achieve full-spectrum lighting has emerged as a prevailing trend in the advancement of white light-emitting diode (WLED) lighting. In this study, we successfully prepared a novel green phosphor Ba_(2)Sc_(2)((BO_(3))_(2)B_(2)O_(5)):Ce^(3+) (BSBO:Ce^(3+)) that can be utilized for full-spectrum lighting and low-temperature sensing. BSBO:Ce^(3+) exhibits a broad-band excitation spectrum centered at 410 nm and a broad-band emission spectrum centered at 525 nm. The internal and external quantum efficiencies of BSBO:Ce^(3+) are 99% and 49%, respectively. The thermal stability of BSBO:Ce^(3+) can be improved by substituting part of the Sc atoms with smaller cations. The thermal quenching mechanism of BSBO:Ce^(3+) and the lattice occupancy of Ce ions in BSBO are discussed in detail. Furthermore, by combining the green phosphor BSBO:Ce^(3+), a commercial blue phosphor, and a red phosphor on a 405 nm chip, a white light source was obtained with a high average color rendering index (CRI) of 96.6, a low correlated color temperature (CCT) of 3988 K, and a high luminous efficacy of 88.0 lm/W. The luminous efficacy of the WLED exhibits negligible degradation during a 1000 h light-aging experiment. Moreover, an emission peak at 468 nm appears when excited at 352 nm and 80 K; however, the relative intensity of the peaks at 468 and 525 nm gradually weakens with increasing temperature, indicating the potential of this material as a low-temperature sensor.
To reconstruct and render the weak and repetitive textures of damaged functional surfaces in aviation, an improved neural radiance field, named TranSR-NeRF, is proposed. In this paper, a data acquisition system was designed and built. The acquired images generated initial point clouds through TransMVSNet. Meanwhile, after features were extracted from the images through the improved SE-ConvNeXt network, they were aligned and fused with the initial point cloud to generate a high-quality neural point cloud. After ray tracing and sampling of the neural point cloud, the ResMLP neural network designed in this paper, which introduces spatial-coordinate and relative positional encoding, was used to regress the volume density and radiance under a given viewing angle. Reconstruction and rendering of the damaged functional surface at arbitrary-scale super-resolution is thereby realized. The influence of illumination conditions and the background environment on model performance is also studied through experiments, and comparison and ablation experiments for the proposed improvements are conducted. The experimental results show that the improved model performs well. Finally, an application experiment on an object detection task is carried out, and the results show that the model has good practicability.
Binaural rendering is of great interest to virtual reality and immersive media. Although humans naturally use their two ears to perceive the spatial information contained in sounds, binaural rendering is a challenging task for machines, since the description of a sound field often requires multiple channels and even the metadata of the sound sources. In addition, the perceived sound varies from person to person even in the same sound field. Previous methods generally rely on individual-dependent head-related transfer function (HRTF) datasets and optimization algorithms that act on HRTFs. In practical applications, there are two major drawbacks to existing methods. The first is the high cost of personalization, as traditional methods meet personalized needs by measuring HRTFs. The second is insufficient accuracy, because the optimization goal of traditional methods is to retain the perceptually more important part of the information at the cost of discarding another part. It is therefore desirable to develop novel techniques that achieve personalization and accuracy at a low cost. To this end, we focus on the binaural rendering of ambisonic signals and propose 1) a channel-shared encoder and channel-compared attention integrated into neural networks and 2) a loss function quantifying interaural level differences to deal with spatial information. To verify the proposed method, we collect and release the first paired ambisonic-binaural dataset and introduce three metrics to evaluate the content-information and spatial-information accuracy of end-to-end methods. Extensive experimental results on the collected dataset demonstrate the superior performance of the proposed method and the shortcomings of previous methods.
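A loss that quantifies interaural level differences (ILD), as the abstract proposes, can be sketched in a minimal broadband form. The paper's actual loss operates inside a neural network and is presumably computed per frequency band; this whole-signal energy-ratio version is only an illustrative assumption.

```python
import numpy as np

def ild_db(left, right, eps=1e-8):
    """Broadband interaural level difference in dB: the ratio of
    mean signal power in the left channel to that in the right."""
    p_left = np.mean(np.square(left)) + eps
    p_right = np.mean(np.square(right)) + eps
    return 10.0 * np.log10(p_left / p_right)

def ild_loss(pred_l, pred_r, true_l, true_r):
    """Penalize mismatch between predicted and reference ILD,
    preserving the spatial (level-difference) cue."""
    return abs(ild_db(pred_l, pred_r) - ild_db(true_l, true_r))
```

Such a term would typically be added to a waveform or spectral reconstruction loss rather than used alone.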
Three-dimensional surfaces are typically modeled as implicit surfaces. However, direct rendering of implicit surfaces is not simple, especially when such surfaces contain finely detailed shapes. One approach is ray-casting, where the field of the implicit surface is assumed to be piecewise polynomials defined on the grid of a rectangular domain. A critical issue for direct rendering based on ray-casting is the computational cost of finding intersections between surfaces and rays. In particular, ray-casting requires many function evaluations along each ray, severely slowing the rendering speed. In this paper, a method is proposed to achieve direct rendering of polynomial-based implicit surfaces in real time by strategically narrowing the search range and designing the shader to exploit the structure of the piecewise polynomials. In experiments, the proposed method achieved high framerate performance for different test cases, with a speed-up factor ranging from 1.1 to 218.2. In addition, the proposed method demonstrated better efficiency with high cell resolution. In terms of memory consumption, the proposed method saved between 90.94% and 99.64% in different test cases. Generally, the proposed method became more memory-efficient as the cell resolution increased.
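Restricted to a single ray, a polynomial field becomes a univariate polynomial in the ray parameter t, so the surface-intersection search reduces to root finding. A minimal sketch of that reduction follows; it uses numpy's general-purpose root finder rather than the paper's specialized shader-side search, so it illustrates the problem, not the proposed acceleration.

```python
import numpy as np

def first_hit(poly_coeffs, t_min=0.0, t_max=np.inf):
    """Given the implicit field restricted to a ray as a univariate
    polynomial f(t) (coefficients highest-degree first), return the
    smallest real root in [t_min, t_max], or None if the ray misses
    the surface within that range."""
    roots = np.roots(poly_coeffs)
    hits = [r.real for r in roots
            if abs(r.imag) < 1e-9 and t_min <= r.real <= t_max]
    return min(hits) if hits else None
```

In a piecewise setting, the same search would run cell by cell along the ray, stopping at the first cell that yields a hit.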
BACKGROUND The auricle, or auricula, defines the visible boundaries of the external ear and is essential in forensic investigations, including facial reconstruction and human remains identification. Beyond its forensic significance, auricular morphology attracts interest from various fields, such as medicine and industry. The size of the ears is culturally associated with health and longevity, while surgical techniques for ear reconstruction address both congenital and aesthetic concerns. AIM To determine whether known correlations with various measurements and observations regarding sex and age could also be established through computed tomography (CT). METHODS Computed tomography scans of the head from 342 females and 329 males aged 18 to 97 years (mean = 60 ± 19 years) were included in this study. Different auricular lengths, widths, and perimeters were measured for both sides. Additionally, the preauricular area was assessed using three-dimensional volume rendering technique images. RESULTS The measured auricular dimensions in centimeters are presented as mean values (right/left) for males (length 1: 6.91±0.51/6.93±0.52; length 2: 2.83±0.35/2.84±0.34; width 1: 3.94±0.32/4.01±0.36; width 2: 3.51±0.34/3.46±0.31; perimeter: 17.66±1.25/17.71±1.28) and females (length 1: 6.44±0.5/6.48±0.51; length 2: 2.7±0.32/2.71±0.33; width 1: 3.6±0.32/3.68±0.31; width 2: 3.3±0.3/3.26±0.27; perimeter: 16.36±1.2/16.46±1.2). A positive correlation with age was shown in all measurements, with the highest value for the perimeter in both males (r-value: right/left: 0.49/0.47) and females (r-value: right/left: 0.53/0.53). After confounding factors were excluded, the preauricular vertical line was first seen at 45 years. The mean age for males with preauricular vertical lines was 66.65 ± 10.92 years (95% CI: 63.99-69.3), while without vertical lines it was 44.48 ± 16.15 years (95% CI: 41.21-47.74); for females, it was 70.18 ± 12.44 years (95% CI: 68.9-71.46) with and 47.87 ± 17.09 years (95% CI: 45.96-49.78) without vertical lines. CONCLUSION In this study, we pioneered the use of CT volumetric data to examine human auricle morphology and achieved a precise 3D (pre-)auricular assessment. Sex-specific positive correlations between ear dimensions and age, as well as the mean age for the appearance of preauricular lines, were identified, providing valuable insights into the capabilities of modern CT devices.
Three-dimensional (3D) fetal ultrasound has been widely used in prenatal examinations. Realistic, real-time volumetric ultrasound rendering can enhance the effectiveness of diagnoses and assist communication between obstetricians and pregnant mothers. However, this remains a challenging task because (1) there is a large amount of speckle noise in ultrasound images and (2) ultrasound images usually have low contrast, making it difficult to distinguish different tissues and organs. Traditional local-illumination-based methods do not achieve satisfactory results, and the real-time requirement makes the task even more challenging. This study presents a novel real-time volume-rendering method equipped with a global illumination model for 3D fetal ultrasound visualization. The method renders direct and indirect illumination separately by calculating single-scattering and multiple-scattering radiances, respectively. The indirect illumination effect is simulated using volumetric photon mapping. To calculate each photon's brightness, a novel screen-space density estimation is proposed, which avoids complicated storage structures and accelerates computation. This study also proposes a high-dynamic-range approach to address the issue of fetal skin having a dynamic range exceeding that of the display device. Experiments show that our technique, compared with conventional methodologies, generates realistic rendering results with far more depth information.
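The density estimation step that turns photon hits into brightness can be sketched as a fixed-radius gather over splatted photon positions. This is a loose CPU-side simplification under stated assumptions: the paper's method works in screen space on the GPU precisely to avoid explicit photon storage, which this sketch still uses.

```python
import math

def density_estimate(photon_xy, photon_power, query_xy, radius):
    """Estimate radiance at a query pixel as the total power of
    photons landing within `radius` pixels, divided by the gather
    disc's area (the classic photon-mapping density estimate)."""
    qx, qy = query_xy
    total = sum(power
                for (x, y), power in zip(photon_xy, photon_power)
                if (x - qx) ** 2 + (y - qy) ** 2 <= radius * radius)
    return total / (math.pi * radius * radius)
```

Shrinking the radius sharpens the estimate but raises its variance, which is the usual photon-mapping tradeoff.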
With the development of virtual reality (VR) technology, more and more industries are beginning to integrate with it. To address the problem that the lighting effects of Caideng (colored lanterns) cannot be rendered directly in digital Caideng scenes, this article analyzes the lighting model and combines it with the lighting characteristics of Caideng scenes to design an optimized lighting algorithm that incorporates the bidirectional transmittance distribution function (BTDF) model. This algorithm can efficiently render the lighting effects of Caideng models in a virtual environment, and image optimization processing methods enhance the immersive experience in VR. Finally, a Caideng roaming interactive system was designed based on this method. The results show that the frame rate of the system is stable during operation, maintained above 60 fps, and provides a good immersive experience.
Painting is done according to the artist's style, and the most representative elements of that style are the texture and shape of the brush strokes. Computer simulations can reproduce an artist's painting by taking such strokes and pasting them onto an image; this is called stroke-based rendering. The quality of the result depends on the number and quality of the strokes, since the image is built from them, but there is a limit to how many strokes can be scanned, which makes it difficult to render with a large amount of stroke information. In this work, we produce rendering results using mass data: large numbers of strokes generated by expanding existing strokes through warping. Through this, we produce results of higher quality than previous studies. Finally, we also examine the correlation between the amount of data and the quality of the results.
The use of compressed meshes in parallel rendering architectures is still an unexplored area, the main challenge of which is partitioning and sorting the encoded mesh in the compression domain. This paper presents a mesh compression scheme, PRMC (Parallel Rendering based Mesh Compression), that supplies encoded meshes which can be partitioned and sorted in a parallel rendering system even in the encoded domain. First, we segment the mesh into submeshes and clip the submeshes' boundaries into Runs, and then compress the submeshes and Runs piecewise. With the help of several auxiliary index tables, compressed submeshes and Runs can serve as rendering primitives in a parallel rendering system. Based on PRMC, we design and implement a parallel rendering architecture. Compared with an uncompressed representation, experimental results show that PRMC meshes applied in a cluster parallel rendering system can dramatically reduce the communication requirement.
Background In recent years, with the rapid development of the mobile Internet and Web3D technologies, a large number of web-based online 3D visualization applications have emerged. Web3D applications, including Web3D online tourism, architecture, education, medical care, and shopping, are examples that leverage 3D rendering on the web. These applications have pushed the boundaries of traditional web applications, which use text, sound, images, video, and 2D animation as their main communication media, and instead use 3D virtual scenes as the main interaction object, enabling a user experience with a strong sense of immersion. This paper approaches the emerging Web3D applications that are strongly impacting people's lives through "real-time rendering technology", the core technology of Web3D. It discusses the major 3D graphics APIs of Web3D and well-known Web3D engines in China and abroad, and classifies the real-time rendering frameworks of Web3D applications into different categories. Results Finally, this study analyzed the specific demands posed by different fields on Web3D applications by referring to representative Web3D applications in each particular field. Conclusions Our survey results show that Web3D applications based on real-time rendering have penetrated deep into many sectors of society and even the family, a trend that influences every industry.
Since the 1980s, various techniques have been used in the field of medicine for the post-processing of medical imaging data from computed tomography (CT) and magnetic resonance (MR). They include multiplanar reformations (MPR), maximum intensity projection (MIP), and volume rendering (VR). This paper presents the prototype of a new means of post-processing radiological examinations such as CT and MR, a technique that, for the first time, provides photorealistic visualizations of the human body. This new procedure was inspired by the quality of images achieved by animation software such as the programs used in the entertainment industry, particularly to produce animated films; thus the name: Cinematic Rendering. It is already foreseeable that this new method of depiction will quickly be incorporated into the set of instruments employed in so-called virtual anatomy (teaching anatomy through radiological depictions of the human body via X-ray, CT, and MR, in addition to computer animation programs designed especially for human anatomy). Its potential for medical applications will have to be evaluated by future scientific investigations.
The ray casting algorithm can obtain a better-quality image in volume rendering; however, it suffers from heavy computation and slow rendering speed. Improving the re-sampling speed is therefore key to speeding up the ray casting algorithm. An algorithm is introduced to reduce matrix computation by exploiting the matrix transformation characteristics of re-sampling points between two coordinate systems. The projection of 3D datasets onto the image plane is adopted to reduce the number of rays, and a bounding-box technique avoids sampling in empty voxels. By extending the Bresenham algorithm to three dimensions, each re-sampling point is calculated. Experimental results show a two- to three-fold improvement in rendering speed using the optimized algorithm, with image quality similar to that of the traditional algorithm. The optimized algorithm can produce the required quality images while reducing the total number of operations and speeding up volume rendering.
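Extending Bresenham's line algorithm to three dimensions, as the abstract describes for stepping through re-sampling points, means choosing a driving axis and carrying two error terms instead of one. A minimal integer sketch (the standard textbook extension, not the paper's exact formulation):

```python
def bresenham3d(p0, p1):
    """Voxel coordinates along the integer line from p0 to p1,
    stepping the driving axis every iteration and the other two axes
    when their accumulated error terms overflow."""
    x, y, z = p0
    x1, y1, z1 = p1
    dx, dy, dz = abs(x1 - x), abs(y1 - y), abs(z1 - z)
    sx = 1 if x1 > x else -1
    sy = 1 if y1 > y else -1
    sz = 1 if z1 > z else -1
    points = [(x, y, z)]
    if dx >= dy and dx >= dz:          # x is the driving axis
        e1, e2 = 2 * dy - dx, 2 * dz - dx
        while x != x1:
            if e1 > 0: y += sy; e1 -= 2 * dx
            if e2 > 0: z += sz; e2 -= 2 * dx
            e1 += 2 * dy; e2 += 2 * dz; x += sx
            points.append((x, y, z))
    elif dy >= dx and dy >= dz:        # y is the driving axis
        e1, e2 = 2 * dx - dy, 2 * dz - dy
        while y != y1:
            if e1 > 0: x += sx; e1 -= 2 * dy
            if e2 > 0: z += sz; e2 -= 2 * dy
            e1 += 2 * dx; e2 += 2 * dz; y += sy
            points.append((x, y, z))
    else:                              # z is the driving axis
        e1, e2 = 2 * dy - dz, 2 * dx - dz
        while z != z1:
            if e1 > 0: y += sy; e1 -= 2 * dz
            if e2 > 0: x += sx; e2 -= 2 * dz
            e1 += 2 * dy; e2 += 2 * dx; z += sz
            points.append((x, y, z))
    return points
```

Because every step is integer arithmetic, re-sampling positions along a ray are generated without per-step floating-point interpolation.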
Point-based rendering is a common method widely used in point cloud rendering. It realizes rendering by turning the points into a base geometry. The critical step in point-based rendering is setting an appropriate rendering radius for the base geometry, usually calculated as the average Euclidean distance from the rendered point to its N nearest neighboring points. This method effectively reduces the appearance of empty spaces between points in rendering. However, it also means that the rendering radius of outlier points far from the central region of the point cloud sequence can be large, which impacts perceptual quality. To solve this problem, we propose an algorithm for point-based point cloud rendering that uses outlier detection to optimize the perceptual quality of rendering. The algorithm determines whether the detected points are outliers using a combination of local and global geometric features. For detected outliers, the minimum radius is used for rendering. We examine the performance of the proposed method in terms of both objective quality and perceptual quality. The experimental results show that the peak signal-to-noise ratio (PSNR) of the point cloud sequences is improved under all geometric quantization levels, and the PSNR improvement is more evident in dense point clouds. Specifically, the PSNR of the point cloud sequences is improved by 3.6% on average compared with the original algorithm. The proposed method significantly improves the perceptual quality of the rendered point clouds, and the results of ablation studies prove the feasibility and effectiveness of the proposed method.
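The radius rule the abstract starts from, the mean distance to the N nearest neighbors, and the proposed fix, clamping outliers to the minimum radius, can be sketched as below. The outlier test here is a simple multiple-of-median heuristic assumed for illustration; the paper combines local and global geometric features instead.

```python
import numpy as np

def render_radii(points, k=4, m=3.0, r_min=1e-3):
    """Per-point rendering radius: mean Euclidean distance to the k
    nearest neighbours. Radii larger than m times the median radius
    are treated as outliers and clamped to the minimum radius r_min
    (a hypothetical stand-in for the paper's outlier detector)."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-distance
    radii = np.sort(d, axis=1)[:, :k].mean(axis=1)
    radii[radii > m * np.median(radii)] = r_min
    return radii
```

Without the clamp, the lone far-away point in the test below would be splatted with a radius two orders of magnitude larger than its neighbors'.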
Background Realistic rendering has been an important goal of many interactive applications, which require efficient virtual simulation of special effects that are common in the real world. However, refraction is often ignored in these applications, because rendering the refraction effect is extremely complicated and time-consuming. Methods In this study, a simple, efficient, and fast rendering technique for water refraction effects is proposed. The technique comprises a broad phase and a narrow phase. In the broad phase, the water surface is considered flat, and the vertices of underwater meshes are transformed based on Snell's Law. In the narrow phase, the effects of waves on the water surface are examined: every pixel on the water surface mesh is collected by a screen-space method with an extra rendering pass, and the broad phase redirects most pixels that need to be recalculated in the narrow phase to pixels in the rendering buffer. Results We analyzed the performance of three conventional methods and ours in rendering refraction effects for the same scenes. The proposed method obtains a higher frame rate and better physical accuracy compared with the other methods. It has been used in several game scenes, where realistic water refraction effects can be generated efficiently. Conclusions The two-phase water refraction method offers a tradeoff between efficiency and quality. It is easy to implement in modern game engines, and thus improves the quality of rendered scenes in video games and other real-time applications.
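The broad-phase transformation of underwater vertices rests on the standard vector form of Snell's Law. A minimal sketch for the flat-surface case follows; the air-to-water refractive indices are assumed defaults, and the paper's actual vertex transform is not reproduced here.

```python
import math

def refract(incident, normal, n1=1.0, n2=1.33):
    """Refract a unit incident direction through a surface with unit
    normal (pointing against the incident ray) via Snell's Law.
    Returns the refracted unit direction, or None on total internal
    reflection."""
    eta = n1 / n2
    cos_i = -sum(i * n for i, n in zip(incident, normal))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                    # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * i + (eta * cos_i - cos_t) * n
                 for i, n in zip(incident, normal))
```

At normal incidence the ray passes straight through; past the critical angle (water-to-air), the function reports total internal reflection.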
A new algorithm is proposed for restoring disocclusion regions in depth-image-based rendering (DIBR) warped images. Current solutions include layered depth images (LDI), pre-filtering methods, and post-processing methods. LDI is complicated, and pre-filtering of depth images causes noticeable geometric distortions in cases of large-baseline warping. This paper presents a depth-aided inpainting method that inherits merits from Criminisi's inpainting algorithm. The proposed method incorporates a depth cue into texture estimation. The algorithm efficiently handles depth ambiguity by penalizing, with larger Lagrange multipliers, filling points closer to the warping position compared with the surrounding existing points. We perform morphological operations on depth images to accelerate the algorithm's convergence and adopt a luma-first strategy to adapt to various color sampling formats. Experiments on test multi-view sequences show that our method is superior in depth differentiation and geometric fidelity in the restoration of warped images. Peak signal-to-noise ratio (PSNR) statistics on non-hole regions and whole-image comparisons also compare favorably with those obtained by state-of-the-art techniques.
Funding: Supported by the National Key R&D Program of China under grant No. 2022YFB3303203 and the National Natural Science Foundation of China under grant No. 62272275.
Funding: Supported partially by the National Natural Science Foundation of China (No. U19A2063) and the Jilin Provincial Science & Technology Development Program of China (No. 20230201080GX).
Abstract: Currently, the main idea of iterative rendering methods is to allocate a fixed number of samples to pixels that have not been fully rendered, based on a computed completion rate. This strategy ignores how pixel values changed during the previous rendering passes, which may result in unnecessary additional iterations.
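The criticism above suggests allocating samples in proportion to how much each pixel's estimate is still changing, rather than by a fixed quota. A minimal sketch of such change-proportional allocation (the function and scheme are illustrative, not the paper's algorithm):

```python
def adaptive_allocate(changes, budget, min_spp=1):
    """Distribute a per-pass sample budget over unfinished pixels in
    proportion to how much each pixel's running estimate changed during
    the previous pass, instead of a fixed per-pixel quota."""
    total = sum(changes)
    if total == 0:  # nothing changed last pass: spread the budget evenly
        base, rem = divmod(budget, len(changes))
        return [base + (1 if i < rem else 0) for i in range(len(changes))]
    return [max(min_spp, round(budget * c / total)) for c in changes]

# The pixel whose estimate moved most receives most of the new samples.
alloc = adaptive_allocate([0.8, 0.1, 0.1], budget=100)
assert alloc == [80, 10, 10]
```

Pixels whose estimates have stabilized receive only the minimum quota, so the budget concentrates where convergence is slowest.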
Funding: Supported by the National Natural Science Foundation of China (No. 62072020) and the Leading Talents in Innovation and Entrepreneurship of Qingdao, China (19-3-2-21-zhc).
Abstract: Background Physics-based differentiable rendering (PBDR) aims to propagate gradients from scene parameters to image pixels or vice versa. The physically correct gradients obtained can be used in various applications, including inverse rendering and machine learning. Currently, two categories of methods are prevalent in the PBDR community: reparameterization methods and boundary-sampling methods. The state-of-the-art boundary-sampling methods rely on a guiding structure to calculate the gradients efficiently. They utilize the rays generated in traditional path tracing and project them onto the object silhouette boundary to initialize the guiding structure. Methods In this study, we augment previous projective-sampling-based boundary-sampling methods in a bidirectional manner. Specifically, we utilize both the rays spawned from the sensors and the rays emitted by the emitters to initialize the guiding structure. Results To demonstrate the benefits of our technique, we compare differentiable-rendering and inverse-rendering performance on a range of synthetic scenes, evaluating our method against state-of-the-art projective-sampling-based differentiable rendering methods. Conclusions The experiments show that our method achieves lower-variance gradients in the forward differentiable rendering process and better geometry reconstruction quality in the inverse-rendering results.
Funding: Supported by the Bavarian Academic Forum (BayWISS), as part of the joint academic partnership digitalization program.
Abstract: Background In recent years, the demand for interactive photorealistic three-dimensional (3D) environments has increased in various fields, including architecture, engineering, and entertainment. However, achieving a balance between quality and efficiency in high-performance 3D applications and virtual reality (VR) remains challenging. Methods This study addresses this issue by revisiting and extending view interpolation for image-based rendering (IBR), which enables the exploration of spacious open environments in 3D and VR. We introduce multimorphing, a novel rendering method based on a spatial data structure of 2D image patches, called the image graph. Using this approach, novel views can be rendered with up to six degrees of freedom using only a sparse set of views. The rendering process requires neither 3D reconstruction of the geometry nor per-pixel depth information; all relevant data for the output are extracted from the local morphing cells of the image graph. The detection of parallax image regions during preprocessing reduces rendering artifacts by extrapolating image patches from adjacent cells in real time. In addition, a GPU-based solution is presented to resolve exposure inconsistencies within a dataset, enabling seamless brightness transitions when moving between areas with varying light intensities. Results Experiments on multiple real-world and synthetic scenes demonstrate that the presented method achieves high "VR-compatible" frame rates, even on mid-range and legacy hardware. While achieving adequate visual quality even for sparse datasets, it outperforms other IBR and current neural rendering approaches. Conclusions Using the correspondence-based decomposition of input images into morphing cells of 2D image patches, multidimensional image morphing provides high-performance novel-view generation, supporting open 3D and VR environments. Nevertheless, the handling of morphing artifacts in the parallax image regions remains a topic for future research.
Funding: Supported by the National Natural Science Foundation of China under Grants 61631010 and 61806085.
Abstract: High-fidelity tactile rendering offers significant potential for improving the richness and immersion of touchscreen interactions. This study focuses on a quantitative description of tactile rendering fidelity using a custom-designed hybrid electrovibration and mechanical vibration (HEM) device. An electrovibration and mechanical vibration (EMV) algorithm that renders 3D gratings with different physical heights was proposed and shown to achieve 81% accuracy in shape recognition. Models of tactile rendering fidelity were established based on evaluation of the height discrimination threshold, and the psychophysical-physical relationships between the discrimination and reference heights were well described by a modification of Weber's law, with correlation coefficients higher than 0.9. The physiological-physical relationship between the pulse firing rate and the physical stimulation voltage was modeled using the Izhikevich spiking model with a logarithmic relationship.
Funding: Supported by the National Natural Science Foundation of China (22003035, 21963006, 22073061), the Project of Shaanxi Province Youth Science and Technology New Star (2023KJXX-076), and the National Training Program of Innovation and Entrepreneurship for Undergraduates (202314390018).
Abstract: The utilization of phosphors that achieve full-spectrum lighting has emerged as a prevailing trend in the advancement of white light-emitting diode (WLED) lighting. In this study, we successfully prepared a novel green phosphor Ba_(2)Sc_(2)((BO_(3))_(2)B_(2)O_(5)):Ce^(3+) (BSBO:Ce^(3+)) that can be utilized for full-spectrum lighting and low-temperature sensors. BSBO:Ce^(3+) exhibits a broad-band excitation spectrum centered at 410 nm and a broad-band emission spectrum centered at 525 nm. The internal and external quantum efficiencies of BSBO:Ce^(3+) are 99% and 49%, respectively. The thermal stability of BSBO:Ce^(3+) can be improved by substituting some Sc atoms with smaller cations. The thermal quenching mechanism of BSBO:Ce^(3+) and the lattice occupancy of Ce ions in BSBO are discussed in detail. Furthermore, by combining the green phosphor BSBO:Ce^(3+) with a commercial blue phosphor and a red phosphor on a 405 nm chip, a white light source was obtained with a high average color rendering index (CRI) of 96.6, a low correlated color temperature (CCT) of 3988 K, and a high luminous efficacy of 88.0 lm/W. The luminous efficacy of the WLED exhibits negligible degradation during a 1000 h light-aging experiment. Moreover, an emission peak at 468 nm appears when excited at 352 nm at 80 K; however, the relative intensity of the peaks at 468 and 525 nm gradually weakens with increasing temperature, indicating the potential of this material as a low-temperature sensor.
Funding: Supported by the National Science and Technology Major Project, China (No. J2019-Ⅲ-0009-0053) and the National Natural Science Foundation of China (No. 12075319).
Abstract: To reconstruct and render the weak and repetitive textures of damaged functional surfaces of aviation components, an improved neural radiance field, named TranSR-NeRF, is proposed. A data acquisition system was designed and built. The acquired images generated initial point clouds through TransMVSNet. Meanwhile, features extracted from the images by the improved SE-ConvNeXt network were aligned and fused with the initial point cloud to generate a high-quality neural point cloud. After ray tracing and sampling of the neural point cloud, the ResMLP neural network designed in this paper, which introduces spatial-coordinate and relative positional encoding, was used to regress the volume density and radiance under a given viewing angle. Reconstruction and rendering of the damaged functional surface at arbitrary-scale super-resolution is thus realized. The influence of illumination conditions and background environment on model performance is also studied experimentally, and comparison and ablation experiments for the proposed improvements are conducted. The experimental results show that the improved model is effective. Finally, an application experiment on an object-detection task shows that the model has good practicability.
Funding: Supported in part by the National Natural Science Foundation of China (62176059, 62101136).
Abstract: Binaural rendering is of great interest to virtual reality and immersive media. Although humans naturally use their two ears to perceive the spatial information contained in sounds, binaural rendering is a challenging task for machines, since describing a sound field often requires multiple channels and even the metadata of the sound sources. In addition, the perceived sound varies from person to person, even in the same sound field. Previous methods generally rely on individual-dependent head-related transfer function (HRTF) datasets and optimization algorithms that act on HRTFs. In practical applications, existing methods have two major drawbacks. The first is a high personalization cost, as traditional methods meet personalized needs by measuring HRTFs. The second is insufficient accuracy, because traditional methods retain the perceptually more important part of the information at the cost of discarding the rest. It is therefore desirable to develop techniques that achieve both personalization and accuracy at low cost. To this end, we focus on the binaural rendering of ambisonics and propose (1) a channel-shared encoder and channel-compared attention integrated into neural networks, and (2) a loss function quantifying interaural level differences to deal with spatial information. To verify the proposed method, we collect and release the first paired ambisonic-binaural dataset and introduce three metrics to evaluate the content-information and spatial-information accuracy of end-to-end methods. Extensive experimental results on the collected dataset demonstrate the superior performance of the proposed method and the shortcomings of previous methods.
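An interaural-level-difference (ILD) loss of the kind mentioned above can be sketched as follows. This is a plausible formulation for illustration only, not the authors' exact loss function:

```python
import math

def ild_db(left, right, eps=1e-12):
    """Interaural level difference (ILD) in dB between two waveforms."""
    p_l = sum(x * x for x in left) / len(left) + eps
    p_r = sum(x * x for x in right) / len(right) + eps
    return 10.0 * math.log10(p_l / p_r)

def ild_loss(pred_l, pred_r, ref_l, ref_r):
    """Penalize mismatch between predicted and reference ILDs."""
    return abs(ild_db(pred_l, pred_r) - ild_db(ref_l, ref_r))

# Reference: left ear at twice the amplitude of the right (ILD ~ +6 dB).
ref_l = [math.sin(0.01 * n) for n in range(1000)]
ref_r = [0.5 * x for x in ref_l]
# A prediction with the same 2:1 level ratio incurs (near-)zero ILD loss...
assert ild_loss([2 * x for x in ref_l], [2 * x for x in ref_r], ref_l, ref_r) < 1e-9
# ...while swapping the ears is heavily penalized (~12 dB error).
assert ild_loss(ref_r, ref_l, ref_l, ref_r) > 10.0
```

Because the loss compares level ratios rather than raw waveforms, it isolates the spatial cue that plain sample-wise losses tend to wash out.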
Funding: Supported by JSPS KAKENHI Grant Number 21K11928.
Abstract: Three-dimensional surfaces are typically modeled as implicit surfaces. However, direct rendering of implicit surfaces is not simple, especially when such surfaces contain finely detailed shapes. One approach is ray casting, where the field of the implicit surface is assumed to be piecewise polynomial, defined on the grid of a rectangular domain. A critical issue for direct rendering based on ray casting is the computational cost of finding intersections between surfaces and rays. In particular, ray casting requires many function evaluations along each ray, severely slowing rendering. In this paper, a method is proposed to achieve real-time direct rendering of polynomial-based implicit surfaces by strategically narrowing the search range and designing the shader to exploit the structure of piecewise polynomials. In experiments, the proposed method achieved high framerates across different test cases, with speed-up factors ranging from 1.1 to 218.2, and demonstrated better efficiency at high cell resolutions. In terms of memory consumption, the proposed method saved between 90.94% and 99.64% in the different test cases, and generally became more memory-efficient as the cell resolution increased.
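The costly step this paper targets, root-finding along each ray, can be illustrated with a generic coarse sign scan followed by bisection. The names and step counts are arbitrary; this baseline search is exactly what narrowed search ranges and polynomial-aware shaders accelerate:

```python
def first_root(f, t0, t1, coarse=32, iters=40):
    """Find the first t in [t0, t1] where f changes sign, by coarse scanning
    followed by bisection refinement."""
    step = (t1 - t0) / coarse
    a, fa = t0, f(t0)
    for i in range(1, coarse + 1):
        b = t0 + i * step
        fb = f(b)
        if fa * fb <= 0:            # sign change: a root lies in [a, b]
            for _ in range(iters):  # bisection refinement
                m = 0.5 * (a + b)
                if fa * f(m) <= 0:
                    b = m
                else:
                    a, fa = m, f(m)
            return 0.5 * (a + b)
        a, fa = b, fb
    return None  # no intersection within the search range

# Field along a ray entering a unit sphere: f(t) = (t - 3)^2 - 1, roots at 2 and 4.
t_hit = first_root(lambda t: (t - 3.0) ** 2 - 1.0, 0.0, 10.0)
assert abs(t_hit - 2.0) < 1e-6  # the entry point is found first
```

Every call to `f` here corresponds to one of the per-ray function evaluations whose count the paper's interval-narrowing strategy reduces.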
Funding: Schulz B received funding from Guerbet AG, No. 8050.
Abstract: BACKGROUND The auricle, or auricula, defines the visible boundaries of the external ear and is essential in forensic investigations, including facial reconstruction and human-remains identification. Beyond its forensic significance, auricular morphology attracts interest from various fields, such as medicine and industry. The size of the ears is culturally associated with health and longevity, while surgical techniques for ear reconstruction address both congenital and aesthetic concerns. AIM To determine whether known correlations between various measurements and observations regarding sex and age could also be established through computed tomography (CT). METHODS CT scans of the head from 342 females and 329 males aged 18 to 97 years (mean = 60 ± 19 years) were included in this study. Different auricular lengths, widths, and perimeters were measured for both sides. Additionally, the preauricular area was assessed using three-dimensional volume-rendering-technique images. RESULTS The measured auricular dimensions in centimeters are presented as mean values (right/left) for males (length 1: 6.91 ± 0.51/6.93 ± 0.52; length 2: 2.83 ± 0.35/2.84 ± 0.34; width 1: 3.94 ± 0.32/4.01 ± 0.36; width 2: 3.51 ± 0.34/3.46 ± 0.31; perimeter: 17.66 ± 1.25/17.71 ± 1.28) and females (length 1: 6.44 ± 0.5/6.48 ± 0.51; length 2: 2.7 ± 0.32/2.71 ± 0.33; width 1: 3.6 ± 0.32/3.68 ± 0.31; width 2: 3.3 ± 0.3/3.26 ± 0.27; perimeter: 16.36 ± 1.2/16.46 ± 1.2). A positive correlation with age was shown in all measurements, with the highest value for the perimeter in both males (r-value: right/left: 0.49/0.47) and females (r-value: right/left: 0.53/0.53). After confounding factors were excluded, the preauricular vertical line was first seen at 45 years. The mean age for males with preauricular vertical lines was 66.65 ± 10.92 years (95% CI: 63.99-69.3), while without vertical lines it was 44.48 ± 16.15 years (95% CI: 41.21-47.74); for females, it was 70.18 ± 12.44 years (95% CI: 68.9-71.46) with and 47.87 ± 17.09 years (95% CI: 45.96-49.78) without vertical lines. CONCLUSION In this study, we pioneered the use of CT volumetric data to examine human auricle morphology and achieved a precise 3D (pre-)auricular assessment. Sex-specific positive correlations between ear dimensions and age, as well as the mean age for the appearance of preauricular lines, were identified, providing valuable insights into the capabilities of modern CT devices.
Funding: Supported by a grant from the General Research Fund of the Hong Kong Research Grants Council, No. 15218521, and a grant under the scheme of Collaborative Research with World-leading Research Groups at The Hong Kong Polytechnic University, No. G-SACF.
Abstract: Three-dimensional (3D) fetal ultrasound has been widely used in prenatal examinations. Realistic, real-time volumetric ultrasound rendering can enhance the effectiveness of diagnoses and assist communication between obstetricians and pregnant mothers. However, this remains a challenging task because (1) there is a large amount of speckle noise in ultrasound images and (2) ultrasound images usually have low contrast, making it difficult to distinguish different tissues and organs. Traditional local-illumination-based methods do not achieve satisfactory results, and the real-time requirement makes the task even more challenging. This study presents a novel real-time volume-rendering method equipped with a global illumination model for 3D fetal ultrasound visualization. The method renders direct and indirect illumination separately by calculating single-scattering and multiple-scattering radiances, respectively. The indirect illumination effect is simulated using volumetric photon mapping, and each photon's brightness is calculated with a novel screen-space density estimation, avoiding complicated storage structures and accelerating computation. A high-dynamic-range approach is also proposed to address fetal skin whose dynamic range exceeds that of the display device. Experiments show that our technique, compared with conventional methodologies, generates realistic rendering results with far more depth information.
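The screen-space density estimation mentioned above can be illustrated with a toy version that simply counts splatted photons in a small pixel neighbourhood; the function and window size are illustrative assumptions, not the paper's method:

```python
def screen_density(photon_px, width, height, radius=1):
    """Estimate per-pixel photon density by splatting photons onto a screen
    grid and averaging counts over a (2r+1)^2 neighbourhood, so no 3D photon
    storage structure (e.g. a kd-tree) is needed."""
    grid = [[0] * width for _ in range(height)]
    for x, y in photon_px:
        if 0 <= x < width and 0 <= y < height:
            grid[y][x] += 1
    area = (2 * radius + 1) ** 2

    def density(px, py):
        total = 0
        for yy in range(max(0, py - radius), min(height, py + radius + 1)):
            for xx in range(max(0, px - radius), min(width, px + radius + 1)):
                total += grid[yy][xx]
        return total / area

    return density

d = screen_density([(2, 2), (2, 3), (3, 2), (9, 9)], 10, 10)
assert d(2, 2) == 3 / 9   # three photons fall inside the 3x3 window at (2, 2)
assert d(9, 0) == 0       # no photons near this pixel
```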
Abstract: With the development of virtual reality (VR) technology, more and more industries are beginning to integrate with VR. To address the problem that the lighting effects of Caideng (festive lanterns) cannot be rendered directly in digital Caideng scenes, this article analyzes lighting models and, combining them with the lighting characteristics of Caideng scenes, designs an optimized lighting-model algorithm that incorporates the bidirectional transmittance distribution function (BTDF). This algorithm can efficiently render the lighting effects of Caideng models in a virtual environment, and image optimization methods further enhance the immersive experience in VR. Finally, a Caideng roaming interactive system was designed based on this method. The results show that the system's frame rate is stable during operation, remaining above 60 fps, and that it delivers a good immersive experience.
Funding: This research was supported by the Chung-Ang University Research Scholarship Grants in 2017.
Abstract: A painting reflects the artist's style, most visibly in the texture and shape of the brush strokes. Computer simulation can reproduce an artist's painting by taking such strokes and pasting them onto an image; this is called stroke-based rendering. Because the image is composed from strokes, the quality of the result depends on the number and quality of the strokes available, and there is a practical limit to how many strokes can be scanned. In this work, we produce rendering results using mass data, generating large numbers of strokes by expanding existing strokes through warping. With this approach, we obtain results of higher quality than previous studies. Finally, we also analyze the correlation between the amount of data and the quality of the results.
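A toy version of the warping-based stroke expansion might look as follows, bending an existing stroke polyline while keeping its endpoints in place. The sinusoidal warp and its parameters are illustrative assumptions, not the paper's warping method:

```python
import math
import random

def warp_stroke(points, amplitude=0.2, seed=0):
    """Create a new brush-stroke variant by smoothly bending an existing
    polyline with a random sinusoidal offset; the endpoints stay (almost)
    fixed so the stroke's anchors survive."""
    rng = random.Random(seed)
    a = rng.uniform(-amplitude, amplitude)  # random bend strength per variant
    n = len(points)
    return [(x, y + a * math.sin(math.pi * i / (n - 1)))
            for i, (x, y) in enumerate(points)]

base = [(float(i), 0.0) for i in range(5)]          # a straight 5-point stroke
w1, w2 = warp_stroke(base, seed=1), warp_stroke(base, seed=2)
assert abs(w1[0][1]) < 1e-12 and abs(w1[-1][1]) < 1e-12  # endpoints preserved
assert w1 != w2                                     # each seed: a new variant
```

Sweeping the seed turns one scanned stroke into many, which is the essence of expanding a small scanned set into mass data.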
Funding: Project supported by the National Basic Research Program (973) of China (No. 2002CB312105), the National Natural Science Foundation of China (No. 60573074), the Natural Science Foundation of Shanxi Province, China (No. 20041040), the Shanxi Foundation for Tackling Key Problems in Science and Technology (No. 051129), and the Key NSFC Project of "Digital Olympic Museum" (No. 60533080), China.
Abstract: The use of compressed meshes in parallel rendering architectures is still an unexplored area, the main challenge of which is to partition and sort an encoded mesh in the compression domain. This paper presents a mesh compression scheme, PRMC (Parallel Rendering based Mesh Compression), that supplies encoded meshes which can be partitioned and sorted in a parallel rendering system even in the encoded domain. First, we segment the mesh into submeshes and clip the submeshes' boundaries into Runs, and then piecewise-compress the submeshes and Runs respectively. With the help of several auxiliary index tables, the compressed submeshes and Runs can serve as rendering primitives in a parallel rendering system. Based on PRMC, we design and implement a parallel rendering architecture. Compared with an uncompressed representation, experimental results showed that PRMC meshes applied in a cluster parallel rendering system can dramatically reduce the communication requirement.
Funding: Supported by the Science and Technology Program of the Educational Commission of Jiangxi Province, China (DA202104172), the Innovation and Entrepreneurship Course Program of Nanchang Hangkong University (KCPY1910), and the Teaching Reform Research Program of Nanchang Hangkong University (JY21040).
Abstract: Background In recent years, with the rapid development of the mobile Internet and Web3D technologies, a large number of web-based online 3D visualization applications have emerged. Web3D online tourism, online architecture, online education environments, online medical care, and online shopping are examples of applications that leverage 3D rendering on the web. These applications have pushed the boundaries of traditional web applications, which use text, sound, images, video, and 2D animation as their main communication media, by making 3D virtual scenes the main interaction object, enabling a user experience with a strong sense of immersion. This paper approaches the emerging Web3D applications that are strongly influencing people's lives through real-time rendering technology, the core technology of Web3D. It discusses the major 3D graphics APIs of Web3D and the well-known Web3D engines in China and abroad, and classifies the real-time rendering frameworks of Web3D applications into different categories. Results This study also analyzed the specific demands posed by different fields on Web3D applications by referring to representative Web3D applications in each particular field. Conclusions Our survey results show that Web3D applications based on real-time rendering have penetrated deeply into many sectors of society and even the family, a trend that influences every line of industry.
Abstract: Since the 1980s, various techniques have been used in medicine for the post-processing of medical imaging data from computed tomography (CT) and magnetic resonance (MR). They include multiplanar reformation (MPR), maximum intensity projection (MIP), and volume rendering (VR). This paper presents the prototype of a new means of post-processing radiological examinations such as CT and MR, a technique that, for the first time, provides photorealistic visualizations of the human body. This new procedure was inspired by the quality of images achieved by animation software used in the entertainment industry, particularly to produce animated films; thus the name: Cinematic Rendering. It is already foreseeable that this new method of depiction will quickly be incorporated into the set of instruments employed in so-called virtual anatomy (teaching anatomy through radiological depictions of the human body via X-ray, CT, and MR, together with computer animation programs designed especially for human anatomy). Its potential for medical applications will have to be evaluated by future scientific investigations.
Abstract: The ray casting algorithm can obtain better-quality images in volume rendering; however, it suffers from heavy computational demands and slow rendering speed. Improving the resampling speed is key to accelerating the ray casting algorithm. An algorithm is introduced to reduce matrix computation by exploiting the matrix-transformation characteristics of resampling points between two coordinate systems. The projection of the 3D dataset onto the image plane is adopted to reduce the number of rays, and a bounding-box technique avoids sampling empty voxels. By extending the Bresenham algorithm to three dimensions, each resampling point is calculated incrementally. Experimental results show a two- to three-fold improvement in rendering speed using the optimized algorithm, with image quality similar to that of the traditional algorithm. The optimized algorithm can thus produce the required image quality while reducing the total number of operations and speeding up volume rendering.
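The 3D extension of the Bresenham algorithm used for the resampling points can be sketched as follows. This is a standard integer 3D Bresenham traversal, given here for illustration rather than as the paper's exact code; its appeal for resampling is that each step needs only additions and comparisons:

```python
def bresenham3d(p0, p1):
    """Integer 3D Bresenham line: visit every grid point between p0 and p1
    using only incremental error terms (no per-step multiplications)."""
    (x, y, z), (x1, y1, z1) = p0, p1
    dx, dy, dz = abs(x1 - x), abs(y1 - y), abs(z1 - z)
    sx = 1 if x1 >= x else -1
    sy = 1 if y1 >= y else -1
    sz = 1 if z1 >= z else -1
    points = [(x, y, z)]
    if dx >= dy and dx >= dz:          # x is the driving axis
        ey, ez = 2 * dy - dx, 2 * dz - dx
        for _ in range(dx):
            x += sx
            if ey >= 0: y += sy; ey -= 2 * dx
            if ez >= 0: z += sz; ez -= 2 * dx
            ey += 2 * dy; ez += 2 * dz
            points.append((x, y, z))
    elif dy >= dx and dy >= dz:        # y is the driving axis
        ex, ez = 2 * dx - dy, 2 * dz - dy
        for _ in range(dy):
            y += sy
            if ex >= 0: x += sx; ex -= 2 * dy
            if ez >= 0: z += sz; ez -= 2 * dy
            ex += 2 * dx; ez += 2 * dz
            points.append((x, y, z))
    else:                              # z is the driving axis
        ex, ey = 2 * dx - dz, 2 * dy - dz
        for _ in range(dz):
            z += sz
            if ex >= 0: x += sx; ex -= 2 * dz
            if ey >= 0: y += sy; ey -= 2 * dz
            ex += 2 * dx; ey += 2 * dy
            points.append((x, y, z))
    return points

line = bresenham3d((0, 0, 0), (4, 2, 1))
assert line[0] == (0, 0, 0) and line[-1] == (4, 2, 1)
assert len(line) == 5  # one resampling point per unit step on the x axis
```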
Abstract: Point-based rendering is a common method widely used in point cloud rendering. It realizes rendering by turning points into base geometry. The critical step in point-based rendering is to set an appropriate rendering radius for the base geometry, usually calculated as the average Euclidean distance from the rendered point to its N nearest neighboring points. This method effectively reduces the appearance of empty spaces between points in rendering. However, it also causes a problem: the rendering radius of outlier points far from the central region of the point cloud sequence can become large, which degrades perceptual quality. To solve this problem, we propose an algorithm for point-based point cloud rendering that uses outlier detection to optimize the perceptual quality of rendering. The algorithm determines whether detected points are outliers using a combination of local and global geometric features; for detected outliers, the minimum radius is used for rendering. We examine the performance of the proposed method in terms of both objective quality and perceptual quality. The experimental results show that the peak signal-to-noise ratio (PSNR) of the point cloud sequences is improved under all geometric quantization levels, and the PSNR improvement is more evident for dense point clouds. Specifically, the PSNR of the point cloud sequences is improved by 3.6% on average compared with the original algorithm. The proposed method significantly improves the perceptual quality of the rendered point clouds, and the results of ablation studies prove its feasibility and effectiveness.
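The neighbour-distance radius plus outlier clamping described above can be sketched as follows; the z-score test and all parameters are illustrative stand-ins for the paper's combined local and global geometric features:

```python
import math

def render_radii(points, k=4, z_thresh=2.0, min_radius=0.01):
    """Per-point rendering radius: the mean Euclidean distance to the k
    nearest neighbours, with globally outlying radii (z-score > z_thresh)
    clamped to min_radius. Brute-force O(n^2) neighbour search, for
    illustration only."""
    radii = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        radii.append(sum(dists[:k]) / k)
    mean = sum(radii) / len(radii)
    std = math.sqrt(sum((r - mean) ** 2 for r in radii) / len(radii))
    return [min_radius if std > 0 and (r - mean) / std > z_thresh else r
            for r in radii]

# A 5x5 unit grid plus one far-away outlier point.
pts = [(float(x), float(y)) for x in range(5) for y in range(5)] + [(100.0, 100.0)]
radii = render_radii(pts)
assert radii[-1] == 0.01                       # outlier clamped to the minimum
assert all(0.9 < r < 1.5 for r in radii[:-1])  # grid points keep a ~1.0 radius
```

Without the clamp, the lone outlier would be splatted with a radius of over a hundred units, producing exactly the oversized-splat artifact the paper targets.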
Funding: Supported by the Fundamental Research Funds for the Central Universities, the National Key R&D Program of China (2018YFB1403900), and the High-quality and Cutting-edge Disciplines Construction Project for Universities in Beijing (Internet Information, Communication University of China).
Abstract: Background Realistic rendering has been an important goal of many interactive applications, which require efficient virtual simulation of special effects that are common in the real world. However, refraction is often ignored in these applications, because rendering refraction effects is extremely complicated and time-consuming. Methods In this study, a simple, efficient, and fast technique for rendering water refraction effects is proposed. The technique comprises a broad phase and a narrow phase. In the broad phase, the water surface is considered flat, and the vertices of underwater meshes are transformed based on Snell's law. In the narrow phase, the effects of waves on the water surface are examined: every pixel on the water-surface mesh is collected by a screen-space method with an extra rendering pass, and the broad phase redirects most pixels that need to be recalculated in the narrow phase to pixels in the rendering buffer. Results We analyzed the performance of three conventional methods and ours when rendering refraction effects for the same scenes. The proposed method obtains a higher frame rate and better physical accuracy than the other methods. It has been used in several game scenes, where realistic water refraction effects can be generated efficiently. Conclusions The two-phase water refraction method offers a tradeoff between efficiency and quality. It is easy to implement in modern game engines, and thus improves the quality of rendered scenes in video games and other real-time applications.
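The broad-phase transformation relies on Snell's law; a minimal vector form of the refraction computation (the standard graphics formulation, not the paper's shader code) is:

```python
import math

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n (facing
    against d), where eta = n1 / n2 -- the vector form of Snell's law.
    Returns None on total internal reflection."""
    cos_i = -sum(a * b for a, b in zip(d, n))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection: no refracted ray
    return tuple(eta * a + (eta * cos_i - math.sqrt(k)) * b
                 for a, b in zip(d, n))

# Air (n1 = 1.0) into water (n2 = 1.33): a 45-degree ray bends toward the normal.
s = math.sqrt(0.5)
out = refract((s, -s, 0.0), (0.0, 1.0, 0.0), 1.0 / 1.33)
assert abs(out[0] - s / 1.33) < 1e-9  # Snell: sin(theta_t) = sin(45deg) / 1.33
```

In the broad phase, a vector computed this way determines where each underwater vertex appears to sit when viewed through the (assumed flat) water surface.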
Funding: Project supported by the National Natural Science Foundation of China (No. 60802013) and the Natural Science Foundation of Zhejiang Province, China (No. Y106574).
Abstract: A new algorithm is proposed for restoring disocclusion regions in depth-image-based rendering (DIBR) warped images. Current solutions include layered depth images (LDI), pre-filtering methods, and post-processing methods. The LDI is complicated, and pre-filtering of depth images causes noticeable geometric distortions in cases of large-baseline warping. This paper presents a depth-aided inpainting method that inherits merits from Criminisi's inpainting algorithm. The proposed method incorporates a depth cue into texture estimation. The algorithm handles depth ambiguity efficiently by penalizing, via larger Lagrange multipliers, filling points closer to the warping position than the surrounding existing points. We perform morphological operations on depth images to accelerate the algorithm's convergence, and adopt a luma-first strategy to adapt to various color sampling formats. Experiments on test multi-view sequences showed that our method is superior in depth differentiation and geometric fidelity when restoring warped images. Peak signal-to-noise ratio (PSNR) statistics on non-hole regions and whole-image comparisons also compare favorably with state-of-the-art techniques.
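The depth-aided priority idea can be sketched by modulating the Criminisi confidence/data priority with a depth-similarity weight. The Gaussian weight below is a simplified, hypothetical stand-in for the Lagrange-multiplier penalty described above:

```python
import math

def depth_weight(patch_depths, background_depth, sigma=1.0):
    """Weight in (0, 1]: near 1 when a candidate source patch lies at the
    background depth. Disocclusions uncovered by warping should be filled
    from the background, not from the foreground object that moved."""
    d = sum(patch_depths) / len(patch_depths)
    return math.exp(-((d - background_depth) ** 2) / (2.0 * sigma ** 2))

def priority(confidence, data_term, patch_depths, background_depth):
    """Criminisi-style filling priority C(p) * D(p), modulated by the
    depth cue."""
    return confidence * data_term * depth_weight(patch_depths, background_depth)

# An equally textured background patch (depth ~10) outranks a foreground
# patch (depth ~2) when filling a hole exposed at background depth 10.
bg = priority(0.9, 0.5, [10.0, 10.1, 9.9], background_depth=10.0)
fg = priority(0.9, 0.5, [2.0, 2.1, 1.9], background_depth=10.0)
assert bg > fg
```

This captures the key behaviour the abstract describes: candidate fill sources near the warping (foreground) depth are penalized, so holes are completed from background texture.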