Funding: Supported by the Horizon Europe project Hungry EcoCities, grant agreement No. 101069990.
Abstract: This paper proposes a novel content-aware method for automatic focusing of the scene on a 3D display. The method addresses a common problem: visualized content is often out of focus, which adversely affects the perceived 3D content. The method outperforms the existing focusing method, reducing the error by almost 30%. Both the existing and the novel focusing are extended with a depth-of-field enhancement of the scene to mitigate out-of-focus artifacts. The relation between the total depth range of the scene and the visual quality of the result is discussed and evaluated in human perception experiments. A space-warping method for synthetic scenes is proposed to reduce out-of-focus artifacts while maintaining the scene appearance. A user study was conducted to evaluate the proposed methods and to identify the crucial parameters of the scene-focusing process on the 3D stereoscopic display by Looking Glass Factory. The study confirmed the efficiency of the proposals and revealed that the depth-of-field artifact mitigation might not be suitable for all scenes, despite the theoretical hypotheses. Overall, this paper proposes a set of methods that can be used to produce the best user experience with an arbitrary scene displayed on a 3D display.
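The abstract does not spell out how the content-aware focus distance is chosen. As a purely hypothetical sketch (function and parameter names are assumptions, not the paper's method), a content-aware autofocus could pick the focal plane that minimizes a weighted out-of-focus error over the scene's depths, where the weights encode per-pixel content importance:

```python
def content_aware_focus(depths, weights=None, candidates=256):
    """Pick a focal depth minimizing a weighted out-of-focus error.

    depths  : flat list of per-pixel scene depths
    weights : per-pixel importance (e.g. saliency); uniform if None
    Hypothetical illustration only, not the paper's actual algorithm.
    """
    if weights is None:
        weights = [1.0] * len(depths)
    lo, hi = min(depths), max(depths)
    # Candidate focal planes spanning the scene's depth range.
    planes = [lo + (hi - lo) * i / (candidates - 1) for i in range(candidates)]
    total_w = sum(weights)

    def cost(plane):
        # Out-of-focus blur of a pixel grows with its distance from the
        # focal plane; score a plane by weighted mean absolute deviation.
        return sum(w * abs(d - plane) for d, w in zip(depths, weights)) / total_w

    return min(planes, key=cost)
```

With uniform weights this reduces to the weighted median of the depths; skewing the weights toward salient content pulls the focal plane toward it, which is the intuition behind a content-aware criterion.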
Funding: Supported by the Ministry of Education, Youth and Sports from the National Programme of Sustainability (NPU II), project IT4Innovations excellence in science, LQ1602.
Abstract: Light field rendering is an image-based rendering method that uses no 3D models, only images of the scene, as input to render new views. A light field approximation, represented as a set of images, suffers from so-called refocusing artifacts due to the different depth values of the pixels in the scene. Without information about depths in the scene, proper focusing of the light field scene is limited to a single focusing distance. This work addresses the correct focusing method and proposes a real-time solution for focusing light field scenes, based on statistical analysis of the pixel values contributing to the final image. Unlike existing techniques, this method needs no precomputed or acquired depth information. Memory requirements and streaming bandwidth are reduced, and real-time rendering is possible even for high-resolution light field data, yielding visually satisfactory results. An experimental evaluation of the proposed method, implemented on a GPU, is presented in this paper.
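The abstract names statistical analysis of the contributing pixel values as the focusing criterion but gives no details. As an illustrative sketch (all names and the 1-D simplification are assumptions), one common variance-based variant of this idea works as follows: when the tested refocusing disparity matches the true scene depth, all cameras sample the same surface point, so the variance of the contributing samples is minimal:

```python
def best_focus(views, offsets, pixel, disparities):
    """Estimate the in-focus disparity for one output pixel of a 1-D
    light field from statistics of the contributing samples.

    views       : list of 1-D image rows, one per camera
    offsets     : camera positions relative to the virtual view
    pixel       : target pixel index in the virtual view
    disparities : candidate focusing disparities to test
    Hypothetical sketch only, not the paper's exact method.
    """
    def variance_at(s):
        samples = []
        for view, x in zip(views, offsets):
            q = pixel + round(s * x)   # pixel shift induced by refocusing at s
            if 0 <= q < len(view):
                samples.append(view[q])
        mean = sum(samples) / len(samples)
        return sum((v - mean) ** 2 for v in samples) / len(samples)

    # The disparity whose contributing samples agree best is taken as in focus.
    return min(disparities, key=variance_at)
```

Running this per pixel needs only the views themselves, which matches the abstract's point that no precomputed or acquired depth map is required; a GPU version would evaluate the per-pixel statistics in a shader over all output pixels in parallel.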