With technological advancements, virtual reality (VR), once limited to high-end professional applications, is rapidly expanding into entertainment and broader consumer domains. However, the inherent contradiction between mobile hardware computing power and the demand for high-resolution, high-refresh-rate rendering has intensified, leading to critical bottlenecks, including frame latency and power overload, which constrain large-scale applications of VR systems. This study systematically analyzes four key technologies for efficient VR rendering: (1) foveated rendering, which dynamically reduces rendering precision in peripheral regions based on the physiological characteristics of the human visual system (HVS), thereby significantly decreasing graphics computation load; (2) stereo rendering, optimized through consistent stereo rendering acceleration algorithms; (3) cloud rendering, utilizing object-based decomposition and illumination-based decomposition for distributed resource scheduling; and (4) low-power rendering, integrating parameter-optimized rendering, super-resolution technology, and frame-generation technology to enhance mobile energy efficiency. Through a systematic review of the core principles and optimization approaches of these technologies, this study establishes research benchmarks for developing efficient VR systems that achieve high fidelity and low latency while providing further theoretical support for the engineering implementation and industrial advancement of VR rendering technologies.
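As a minimal illustration of the foveated-rendering principle summarized above (reduced shading precision with increasing distance from the gaze point), the following sketch maps a pixel's angular eccentricity to a resolution scale factor. The function name, falloff shape, and all constants are illustrative assumptions, not taken from the surveyed systems.

```python
import numpy as np

def foveated_scale(pixel_xy, gaze_xy, px_per_degree=40.0,
                   fovea_deg=5.0, falloff=0.3, min_scale=0.25):
    """Return a shading-resolution scale in (0, 1] for a pixel.

    Pixels within `fovea_deg` of the gaze point are shaded at full
    resolution; beyond that the resolution falls off linearly with
    eccentricity, down to `min_scale`. All constants are illustrative.
    """
    ecc_deg = np.linalg.norm(np.asarray(pixel_xy) - np.asarray(gaze_xy)) / px_per_degree
    scale = 1.0 - falloff * max(0.0, ecc_deg - fovea_deg) / fovea_deg
    return float(np.clip(scale, min_scale, 1.0))

# Example: a pixel 800 px from the gaze point (~20 degrees eccentricity)
print(foveated_scale((1600, 900), (800, 900)))
```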
Currently, the main idea of iterative rendering methods is to allocate a fixed number of samples to pixels that have not been fully rendered by calculating the completion rate. It is obvious that this strategy ignores the changes in pixel values during the previous rendering process, which may result in additional iterative operations.
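For contrast with the fixed-allocation strategy criticized above, the sketch below shows one hypothetical variance-aware alternative, in which each iteration's sample budget is weighted by the estimated error of every pixel's running estimate. It illustrates the idea only and is not the method proposed by the cited work.

```python
import numpy as np

def allocate_samples(pixel_m2, pixel_count, budget):
    """Distribute `budget` new samples across pixels in one iteration.

    Instead of giving every unfinished pixel a fixed number of samples,
    weight each pixel by the variance of its running estimate (Welford
    statistics: sum of squared deviations M2 and sample count).
    """
    var = np.where(pixel_count > 1, pixel_m2 / np.maximum(pixel_count - 1, 1), 1.0)
    # The error of the mean estimate shrinks with the number of samples taken.
    error = var / np.maximum(pixel_count, 1)
    weights = error / error.sum()
    return np.floor(weights * budget).astype(int)

# Example: 4 pixels; the noisier ones receive more of the 1000-sample budget.
counts = np.array([8, 8, 8, 8])
m2 = np.array([0.1, 0.1, 4.0, 8.0])
print(allocate_samples(m2, counts, 1000))
```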
Background Physics-based differentiable rendering (PBDR) aims to propagate gradients from scene parameters to image pixels or vice versa. The physically correct gradients obtained can be used in various applications, including inverse rendering and machine learning. Currently, two categories of methods are prevalent in the PBDR community: reparameterization and boundary-sampling methods. The state-of-the-art boundary-sampling methods rely on a guiding structure to calculate the gradients efficiently. They utilize the rays generated in traditional path-tracing methods and project them onto the object silhouette boundary to initialize the guiding structure. Methods In this study, we propose an augmentation of previous projective-sampling-based boundary-sampling methods in a bidirectional manner. Specifically, we utilize the rays spawned from the sensors and also employ the rays emitted by the emitters to initialize the guiding structure. Results To demonstrate the benefits of our technique, we perform a comparative analysis of differentiable rendering and inverse rendering performance. We utilize a range of synthetic scene examples and evaluate our method against state-of-the-art projective-sampling-based differentiable rendering methods. Conclusions The experiments show that our method achieves lower-variance gradients in the forward differentiable rendering process and better geometry-reconstruction quality in the inverse-rendering results.
Background In recent years, the demand for interactive photorealistic three-dimensional (3D) environments has increased in various fields, including architecture, engineering, and entertainment. However, achieving a balance between the quality and efficiency of high-performance 3D applications and virtual reality (VR) remains challenging. Methods This study addresses this issue by revisiting and extending view interpolation for image-based rendering (IBR), which enables the exploration of spacious open environments in 3D and VR. We introduce multimorphing, a novel rendering method based on a spatial data structure of 2D image patches, called the image graph. Using this approach, novel views can be rendered with up to six degrees of freedom using only a sparse set of views. The rendering process requires neither 3D reconstruction of the geometry nor per-pixel depth information, and all relevant data for the output are extracted from the local morphing cells of the image graph. The detection of parallax image regions during preprocessing reduces rendering artifacts by extrapolating image patches from adjacent cells in real time. In addition, a GPU-based solution is presented to resolve exposure inconsistencies within a dataset, enabling seamless transitions of brightness when moving between areas with varying light intensities. Results Experiments on multiple real-world and synthetic scenes demonstrate that the presented method achieves high, "VR-compatible" frame rates, even on mid-range and legacy hardware. While achieving adequate visual quality even for sparse datasets, it outperforms other IBR and current neural rendering approaches. Conclusions Using the correspondence-based decomposition of input images into morphing cells of 2D image patches, multidimensional image morphing provides high-performance novel-view generation, supporting open 3D and VR environments. Nevertheless, the handling of morphing artifacts in the parallax image regions remains a topic for future research.
High-fidelity tactile rendering offers significant potential for improving the richness and immersion of touchscreen interactions. This study focuses on a quantitative description of tactile rendering fidelity using a custom-designed hybrid electrovibration and mechanical vibration (HEM) device. An electrovibration and mechanical vibration (EMV) algorithm that renders 3D gratings with different physical heights was proposed and shown to achieve 81% accuracy in shape recognition. Models of tactile rendering fidelity were established based on the evaluation of the height discrimination threshold, and the psychophysical-physical relationships between the discrimination and reference heights were well described by a modification of Weber's law, with correlation coefficients higher than 0.9. The physiological-physical relationship between the pulse firing rate and the physical stimulation voltage was modeled using the Izhikevich spiking model with a logarithmic relationship.
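The abstract does not give the exact modified form of Weber's law used; a commonly used modification, which keeps a nonzero threshold at small reference heights, is (this specific form is an assumption, not taken from the paper):

\Delta H = k\,H + c, \qquad \frac{\Delta H}{H} = k + \frac{c}{H},

where \Delta H is the height-discrimination threshold, H is the reference height, and k and c are constants fitted to the psychophysical data; the classical Weber's law is recovered when c = 0.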
The utilization of phosphors that achieve full-spectrum lighting has emerged as a prevailing trend in the advancement of white light-emitting diode (WLED) lighting. In this study, we successfully prepared a novel green phosphor Ba₂Sc₂((BO₃)₂B₂O₅):Ce³⁺ (BSBO:Ce³⁺) that can be utilized for full-spectrum lighting and low-temperature sensors. BSBO:Ce³⁺ exhibits a broad-band excitation spectrum centered at 410 nm and a broad-band emission spectrum centered at 525 nm. The internal and external quantum efficiencies of BSBO:Ce³⁺ are 99% and 49%, respectively. The thermal stability of BSBO:Ce³⁺ can be improved by substituting part of the Sc atoms with smaller cations. The thermal quenching mechanism of BSBO:Ce³⁺ and the lattice occupancy of Ce ions in BSBO are discussed in detail. Furthermore, by combining the green phosphor BSBO:Ce³⁺, a commercial blue phosphor, and a red phosphor on a 405 nm chip, a white light source was obtained with a high average color rendering index (CRI) of 96.6, a low correlated color temperature (CCT) of 3988 K, and a high luminous efficacy of 88.0 lm/W. The luminous efficacy of the WLED exhibits negligible degradation during a 1000 h light-aging experiment. Moreover, an emission peak at 468 nm appears when excited at 352 nm and 80 K; however, the relative intensity of the peaks at 468 and 525 nm gradually weakens with increasing temperature, indicating the potential of this material as a low-temperature sensor.
A painting reflects the artist's style, and the most representative elements of that style are the texture and shape of the brush strokes. Computer simulations can reproduce the artist's painting by taking such strokes and pasting them onto an image; this is called stroke-based rendering. Because the strokes are the building blocks of the image, the quality of the result depends on the number and quality of the available strokes. Rendering with a large amount of stroke information is difficult, however, since only a limited number of strokes can be scanned. In this work, we produce rendering results from a large stroke dataset generated by expanding existing strokes through warping. With this approach, we obtain results of higher quality than those of previous studies. Finally, we also analyze the correlation between the amount of stroke data and the quality of the results.
Use of compressed meshes in parallel rendering architectures is still an unexplored area, the main challenge of which is to partition and sort the encoded mesh in the compression domain. This paper presents a mesh compression scheme, PRMC (Parallel Rendering based Mesh Compression), that supplies encoded meshes which can be partitioned and sorted in a parallel rendering system even in the encoded domain. First, we segment the mesh into submeshes and clip the submeshes' boundaries into Runs, and then compress the submeshes and Runs piecewise. With the help of several auxiliary index tables, the compressed submeshes and Runs can serve as rendering primitives in a parallel rendering system. Based on PRMC, we design and implement a parallel rendering architecture. Compared with an uncompressed representation, experimental results show that PRMC meshes applied in a cluster parallel rendering system can dramatically reduce the communication requirement.
Background In recent years, with the rapid development of mobile Internet and Web3D technologies, a large number of web-based online 3D visualization applications have emerged. Web3D applications, including Web3D online tourism, Web3D online architecture, Web3D online education environments, Web3D online medical care, and Web3D online shopping, are examples of applications that leverage 3D rendering on the web. These applications have pushed the boundaries of traditional web applications, which use text, sound, images, video, and 2D animation as their main communication media, by adopting 3D virtual scenes as the main interaction object, enabling a user experience with a strong sense of immersion. This paper approaches the emerging Web3D applications that exert a strong impact on people's lives through real-time rendering technology, the core technology of Web3D. It discusses the major 3D graphics APIs of Web3D and the well-known Web3D engines in China and abroad, and classifies the real-time rendering frameworks of Web3D applications into different categories. Results Finally, this study analyzes the specific demands posed by different fields on Web3D applications by referring to representative Web3D applications in each particular field. Conclusions Our survey results show that Web3D applications based on real-time rendering have penetrated deeply into many sectors of society and even the family, a trend that influences every line of industry.
Since the 1980s, various techniques have been used in the field of medicine for the post-processing of medical imaging data from computed tomography (CT) and magnetic resonance (MR). They include multiplanar reformations (MPR), maximum intensity projection (MIP), and volume rendering (VR). This paper presents the prototype of a new means of post-processing radiological examinations such as CT and MR, a technique that, for the first time, provides photorealistic visualizations of the human body. This new procedure was inspired by the quality of images achieved by animation software such as the programs used in the entertainment industry, particularly to produce animated films; thus the name Cinematic Rendering. It is already foreseeable that this new method of depiction will quickly be incorporated into the set of instruments employed in so-called virtual anatomy (teaching anatomy through radiological depictions of the human body via X-ray, CT, and MR, in addition to computer animation programs designed especially for human anatomy). Its potential for medical applications will have to be evaluated by future scientific investigations.
The ray casting algorithm can obtain better-quality images in volume rendering; however, it suffers from problems such as heavy computational cost and slow rendering speed. Improving the re-sampling speed is key to speeding up the ray casting algorithm. An algorithm is introduced that reduces matrix computation by exploiting the transformation characteristics of re-sampling points between two coordinate systems. The projection of the 3-D dataset onto the image plane is adopted to reduce the number of rays, and a bounding-box technique avoids sampling in empty voxels. By extending the Bresenham algorithm to three dimensions, each re-sampling point is calculated incrementally. Experimental results show a two- to three-fold improvement in rendering speed with the optimized algorithm, while achieving image quality similar to that of the traditional algorithm. The optimized algorithm can thus produce images of the required quality, reducing the total number of operations and speeding up volume rendering.
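The core optimization mentioned above, computing successive re-sampling points incrementally rather than transforming each one with a full matrix multiply, can be sketched as a generic incremental ray march. The details below (nearest-neighbour lookup, bounding test) are assumptions and do not reproduce the authors' exact 3-D Bresenham formulation.

```python
import numpy as np

def resample_ray(origin, direction, step, n_samples, volume, out_of_bounds=0.0):
    """Collect re-sampling points along a ray through a voxel volume.

    Only the first point is positioned explicitly; every later point is
    obtained by adding a constant per-step increment, avoiding a full
    matrix transform per sample.
    """
    increment = step * np.asarray(direction, dtype=float)
    p = np.asarray(origin, dtype=float).copy()
    samples = []
    for _ in range(n_samples):
        i, j, k = np.floor(p).astype(int)
        if 0 <= i < volume.shape[0] and 0 <= j < volume.shape[1] and 0 <= k < volume.shape[2]:
            samples.append(volume[i, j, k])     # nearest-neighbour re-sampling
        else:
            samples.append(out_of_bounds)       # outside the bounding box: empty space
        p += increment                          # incremental update, no matrix multiply
    return np.array(samples)

# Example: march a ray through a small random volume.
vol = np.random.rand(32, 32, 32)
print(resample_ray((0.0, 0.0, 0.0), (1.0, 0.5, 0.25), 1.0, 8, vol))
```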
Point-based rendering is a common method widely used in point cloud rendering. It realizes rendering by turning the points into the base geometry. The critical step in point-based rendering is to set an appropriate rendering radius for the base geometry, usually calculated using the average Euclidean distance of the N nearest neighboring points to the rendered point. This method effectively reduces the appearance of empty spaces between points in rendering. However, it also causes the problem that the rendering radius of outlier points far away from the central region of the point cloud sequence could be large, which impacts the perceptual quality. To solve the above problem, we propose an algorithm for point-based point cloud rendering through outlier detection to optimize the perceptual quality of rendering. The algorithm determines whether the detected points are outliers using a combination of local and global geometric features. For the detected outliers, the minimum radius is used for rendering. We examine the performance of the proposed method in terms of both objective quality and perceptual quality. The experimental results show that the peak signal-to-noise ratio (PSNR) of the point cloud sequences is improved under all geometric quantization levels, and the PSNR improvement ratio is more evident in dense point clouds. Specifically, the PSNR of the point cloud sequences is improved by 3.6% on average compared with the original algorithm. The proposed method significantly improves the perceptual quality of the rendered point clouds, and the results of ablation studies prove the feasibility and effectiveness of the proposed method.
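A minimal sketch of the radius-selection step described above, assuming a KD-tree for the neighbour search and a simple global threshold as the outlier test (the paper's actual detector combines local and global geometric features):

```python
import numpy as np
from scipy.spatial import cKDTree

def rendering_radii(points, n_neighbors=8, outlier_factor=3.0, min_radius=1e-3):
    """Per-point rendering radius for point-based (splat) rendering.

    The radius is the mean distance to the N nearest neighbours; points
    whose mean neighbour distance is far above the global median are
    treated as outliers and rendered with the minimum radius instead.
    """
    tree = cKDTree(points)
    # k = n_neighbors + 1 because the query returns the point itself at distance 0.
    dists, _ = tree.query(points, k=n_neighbors + 1)
    mean_dist = dists[:, 1:].mean(axis=1)
    is_outlier = mean_dist > outlier_factor * np.median(mean_dist)
    return np.where(is_outlier, min_radius, mean_dist)

# Example: a dense cluster plus one stray point, which gets the clamped radius.
pts = np.vstack([np.random.rand(500, 3), [[10.0, 10.0, 10.0]]])
radii = rendering_radii(pts)
print(radii[-1], radii[:-1].mean())
```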
Background Realistic rendering has been an important goal of several interactive applications, which requires an efficient virtual simulation of many special effects that are common in the real world. However, refraction is often ignored in these applications, because rendering the refraction effect is extremely complicated and time-consuming. Methods In this study, a simple, efficient, and fast rendering technique for water refraction effects is proposed. This technique comprises a broad phase and a narrow phase. In the broad phase, the water surface is considered flat, and the vertices of underwater meshes are transformed based on Snell's law. In the narrow phase, the effects of waves on the water surface are examined: every pixel on the water surface mesh is collected by a screen-space method with an extra rendering pass. The broad phase redirects most pixels that need to be recalculated in the narrow phase to pixels in the rendering buffer. Results We analyzed the performance of three conventional methods and ours in rendering refraction effects for the same scenes. The proposed method obtains a higher frame rate and better physical accuracy compared with the other methods. It has been used in several game scenes, where realistic water refraction effects can be generated efficiently. Conclusions The two-phase water refraction method offers a tradeoff between efficiency and quality. It is easy to implement in modern game engines and thus improves the quality of rendered scenes in video games and other real-time applications.
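In the broad phase, where the water surface is treated as flat, the transformation of underwater vertices follows Snell's law; the standard vector form is sketched below, with generic indices of refraction and total-internal-reflection handling (not specific to the cited implementation).

```python
import numpy as np

def refract(incident, normal, n1=1.0, n2=1.33):
    """Refract a unit `incident` direction at a surface with unit `normal`.

    Standard vector form of Snell's law (n1 -> n2, e.g. air to water).
    Returns None on total internal reflection.
    """
    eta = n1 / n2
    cos_i = -float(np.dot(normal, incident))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection, no transmitted ray
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * incident + (eta * cos_i - cos_t) * normal

# Example: a ray hitting a flat water surface (normal pointing up) at 45 degrees.
d = np.array([np.sqrt(0.5), -np.sqrt(0.5), 0.0])
print(refract(d, np.array([0.0, 1.0, 0.0])))
```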
This paper presents a new solution for haptic-based teleoperation to control a large-sized slave robot for space exploration, which includes two specially designed haptic joysticks, a hybrid master-slave motion mapping method, and a haptic feedback model rendering the operating resistance and the interactive feedback on the slave side. Two devices, using the 3R and DELTA mechanisms respectively, are developed to be manipulated with both of the user's hands to control the position and orientation of a large-sized slave robot. The hybrid motion mapping method combines rate control and variably scaled position mapping to realize accurate and efficient master-slave control. Haptic feedback for these two mapping modes is designed with an emphasis on ergonomics to improve the immersion of haptic-based teleoperation. A stiffness estimation method is used to calculate the contact stiffness on the slave side, and the contact force rendered by a traditional spring-damping model is stably displayed to the user on the master side. Experiments using virtual environments to simulate the slave side are conducted to validate the effectiveness and efficiency of the proposed solution.
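The "traditional spring-damping model" used to render the contact force has the standard form sketched below; the stiffness and damping constants are placeholders, and in the paper the stiffness is additionally estimated online on the slave side.

```python
def contact_force(penetration_depth, penetration_velocity, k=500.0, d=5.0):
    """Spring-damper contact force for haptic rendering (1-D case).

    F = k * x + d * x_dot while the proxy penetrates the surface (x > 0);
    zero force otherwise. k and d are illustrative constants in N/m and N*s/m.
    """
    if penetration_depth <= 0.0:
        return 0.0
    return k * penetration_depth + d * penetration_velocity

# Example: 2 mm penetration, moving 10 mm/s deeper.
print(contact_force(0.002, 0.01))  # -> 1.05 N
```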
A new algorithm is proposed for restoring disocclusion regions in depth-image-based rendering (DIBR) warped images. Current solutions include layered depth images (LDI), pre-filtering methods, and post-processing methods. The LDI is complicated, and pre-filtering of depth images causes noticeable geometrical distortions in cases of large-baseline warping. This paper presents a depth-aided inpainting method that inherits merits from Criminisi's inpainting algorithm. The proposed method features the incorporation of a depth cue into texture estimation. The algorithm efficiently handles depth ambiguity by penalizing larger Lagrange multipliers for filling points closer to the warping position compared with the surrounding existing points. We perform morphological operations on depth images to accelerate the algorithm's convergence, and adopt a luma-first strategy to adapt to various color sampling formats. Experiments on test multi-view sequences showed that our method has superiority in depth differentiation and geometrical fidelity in the restoration of warped images. Also, peak signal-to-noise ratio (PSNR) statistics on non-hole regions and whole-image comparisons both compare favorably to those obtained by state-of-the-art techniques.
A white organic light-emitting diode (WOLED) with a very high color rendering index and a simple structure was successfully fabricated. The optimized device exhibits a maximum total efficiency of 13.1 lm/W and 5.4 lm/W at 1,000 cd/m². A peak color rendering index of 90 and a relatively stable color over a wide range of luminance were obtained. In addition, it was demonstrated that the 4,4′,4″-tri(9-carbazolyl)triphenylamine host strongly influenced the performance of this WOLED. These results may be beneficial to the design of both materials and device architectures for high-performance WOLEDs.
The laser scanning confocal endomicroscope (LSCEM) has emerged as an imaging modality that provides noninvasive, in vivo imaging of biological tissue on a microscopic scale. Scientific visualization of LSCEM datasets captured by current imaging systems requires these datasets to be fully acquired and brought to a separate rendering machine. To extend the features and capabilities of this modality, we propose a system that is capable of performing real-time visualization of LSCEM datasets. Using field-programmable gate arrays, our system performs three tasks in parallel: (1) automated control of dataset acquisition; (2) imaging-rendering system synchronization; and (3) real-time volume rendering of dynamic datasets. Through fusion of the LSCEM imaging and volume rendering processes, acquired datasets can be visualized in real time to provide an immediate perception of the image quality and the biological condition of the subject, further assisting in real-time cancer diagnosis. Subsequently, the imaging procedure can be improved for more accurate diagnosis, reducing the need to repeat the process due to unsatisfactory datasets.
AIM: To compare the damage caused by light-emitting diodes (LEDs) with different color rendering indexes (CRIs) to the ocular surface and retina of rats. METHODS: Twenty Sprague-Dawley (SD) rats were randomly divided into four groups: the first group was a normal control group without any intervention; the other three groups were exposed to LEDs with low (LED-L), medium (LED-M), and high (LED-H) CRI, respectively, for 12 h a day, continuously for 4 wk. Changes in tear secretion (Schirmer I test, SIt), tear film break-up time (BUT), and corneal fluorescein sodium staining (CFS) scores were compared at different times (1 d before the experiment, and 2 and 4 wk after the experiment). The histopathological changes of the rat lacrimal gland and retina were observed at 4 wk, and the expression of tumor necrosis factor-α (TNF-α) and interleukin-6 (IL-6) in the lacrimal gland was detected by immunofluorescence. RESULTS: With increasing light exposure time, the CFS value of each exposed group continued to increase, and the BUT and SIt scores continued to decrease; these values differed from those of the control group, and the differences between the exposed groups were statistically significant. Hematoxylin-eosin (HE) staining showed that the lacrimal glands of each exposed group exhibited varying degrees of acinar atrophy, vacuole distribution, and increased eosinophilic granules; the retina showed an obvious reduction of the photoreceptor cell layer and changes in retinal thickness; the LED-L group showed the most significant changes in all tests. Immunofluorescence indicated that the positive expression of TNF-α and IL-6 in the lacrimal glands of each exposed group was higher than that of the control group. CONCLUSION: LED exposure for 4 wk can cause pathological changes in the lacrimal gland and retina of rats and increase the expression of TNF-α and IL-6 in the lacrimal gland; the degree of damage is negatively correlated with the CRI.
To deal with the difficult issues of terrain data model simplification and crack elimination, this paper proposes an improved level-of-detail (LOD) terrain rendering algorithm, in which a variation coefficient of elevation is introduced to express the undulation of the topography. The coefficient is then used to construct a node evaluation function in the terrain data model simplification step. Furthermore, an edge reduction strategy is combined with an improved restrictive quadtree segmentation to handle the crack problem. The experimental results demonstrate that, compared with a traditional LOD algorithm, the proposed method can reduce the number of rendered triangles and increase the rendering speed while preserving the rendering effect.
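A sketch of how an elevation variation coefficient can drive a quadtree node evaluation function; the combination rule and thresholds below are illustrative assumptions rather than the paper's exact formula.

```python
import numpy as np

def elevation_variation(heights):
    """Coefficient of variation of the elevations inside a quadtree node."""
    h = np.asarray(heights, dtype=float)
    mean = h.mean()
    return h.std() / abs(mean) if mean != 0 else h.std()

def should_subdivide(node_heights, node_size, view_distance,
                     base_threshold=2.0, roughness_weight=4.0):
    """Node evaluation: subdivide when screen-space importance is high.

    Larger, closer, and rougher (higher variation coefficient) nodes are
    refined first; flat or distant nodes keep a coarse representation.
    """
    cv = elevation_variation(node_heights)
    importance = node_size * (1.0 + roughness_weight * cv) / max(view_distance, 1e-6)
    return importance > base_threshold

# Example: a rough nearby tile is refined, a flat distant one is not.
rough = [10.0, 42.0, 5.0, 37.0]
flat = [20.0, 20.5, 19.8, 20.1]
print(should_subdivide(rough, node_size=64.0, view_distance=50.0))   # True
print(should_subdivide(flat, node_size=64.0, view_distance=800.0))   # False
```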
The visual fidelity of bleeding simulation in a surgical simulator is critical, since it affects not only the degree of visual realism but also the user's medical judgment and treatment in real-life settings. The conventional marching cubes surface rendering algorithm provides an excellent visual effect when rendering gushing blood; however, it is insufficient for blood flow, which is very common in surgical procedures, because in this case the rendered surface and depth textures of the blood are rough. In this paper, we propose a new method, called mixed depth rendering, for rendering blood flow in surgical simulation. A smooth height field is created to minimize the height difference between neighboring particles on the bleeding surface. The color and transparency of each bleeding area are determined by the number of bleeding particles, which is consistent with the real visual effect. In addition, the method incurs little extra computational cost. Rendering of blood flow in a variety of surgical scenarios shows that the visual feedback is much improved. The proposed mixed depth rendering method is also used in a neurosurgery simulator that we developed.