Funding: Sponsored by the National Key R&D Program of China (No. 2017YFB1002702) and the National Natural Science Foundation of China (Nos. 61572058 and 61472363).
Abstract: Depth-image-based rendering (DIBR) is widely used in 3DTV, free-viewpoint video, and interactive 3D graphics applications. Typically, synthetic images generated by DIBR-based systems contain various distortions, particularly geometric distortions induced by object disocclusion. Ensuring the quality of synthetic images is critical to maintaining adequate system service. However, traditional 2D image quality metrics are ineffective for evaluating synthetic images because they are not sensitive to geometric distortion. In this paper, we propose a novel no-reference image quality assessment method for synthetic images based on convolutional neural networks, introducing local image saliency as prediction weights. Due to the lack of existing training data, we construct a new DIBR synthetic image dataset as part of our contribution. Experiments were conducted on both the public benchmark IRCCyN/IVC DIBR image dataset and our own dataset. Results demonstrate that our proposed metric outperforms traditional 2D image quality metrics and state-of-the-art DIBR-related metrics.
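To illustrate the saliency-weighting idea, the sketch below (not the authors' released code; the tiny CNN, patch size, and pooling function are illustrative assumptions) scores image patches with a small convolutional network and pools the patch scores with normalized saliency weights, so more salient regions contribute more to the final quality prediction.

```python
# Minimal sketch, assuming a patch-based CNN quality model with
# saliency-weighted pooling; hyperparameters are placeholders.
import torch
import torch.nn as nn

class PatchQualityCNN(nn.Module):
    """Tiny CNN mapping a 32x32 RGB patch to a scalar quality score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Linear(64, 1)

    def forward(self, patches):                 # patches: (N, 3, 32, 32)
        f = self.features(patches).flatten(1)   # (N, 64)
        return self.regressor(f).squeeze(1)     # (N,) patch-wise scores

def image_quality(patch_scores, patch_saliency):
    """Saliency-weighted pooling: salient patches dominate the image score."""
    w = patch_saliency / (patch_saliency.sum() + 1e-8)
    return (w * patch_scores).sum()

# Usage with random tensors standing in for extracted patches and their
# mean local saliency values.
patches = torch.rand(16, 3, 32, 32)
saliency = torch.rand(16)
scores = PatchQualityCNN()(patches)
print(image_quality(scores, saliency).item())
```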
Funding: The authors are grateful to Zhejiang Gongshang University for its valuable computing resources and outstanding laboratory facilities. This work was supported by the National Natural Science Foundation of China (Grant No. 62172366), the Zhejiang Provincial Natural Science Foundation of China (Grant No. LY22F020013), the "Pioneer" and "Leading Goose" R&D Program of Zhejiang Province (Grant No. 2023C01150), the Major Sci-Tech Innovation Project of Hangzhou City (Grant No. 2022AIZD0110), and the "Digital+" Discipline Construction Project of Zhejiang Gongshang University (Grant No. SZJ2022B009).
Abstract: We propose a normalizing flow based on the wavelet framework for super-resolution (SR), called WDFSR. It learns the conditional distribution mapping between low-resolution images in the RGB domain and high-resolution images in the wavelet domain to simultaneously generate high-resolution images of different styles. Some flow-based models are sensitive to the dataset, which causes training fluctuations that reduce the mapping ability of the model and weaken generalization; to address this, we design a method that combines a T-distribution with a QR decomposition layer. Our method alleviates this problem while preserving the model's ability to map different distributions and produce higher-quality images. Good contextual condition features promote training and strengthen conditional distribution mapping, so we propose a Refinement layer combined with an attention mechanism to refine and fuse the extracted condition features and improve image quality. Extensive experiments on several SR datasets demonstrate that WDFSR outperforms most general CNN- and flow-based models in terms of PSNR and perceptual quality. We also show that our framework works well for other low-level vision tasks, such as low-light enhancement. The pretrained models and source code, with guidance for reference, are available at https://github.com/Lisbegin/WDFSR.
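For reference, the sketch below shows one plausible reading of a "QR decomposition layer" in a normalizing flow: an invertible 1x1 convolution whose weight is kept in QR-factored form, which makes the log-determinant cheap and numerically stable. This is an illustrative assumption only, not the released WDFSR implementation; the layer name, initialization, and re-orthogonalization step are all hypothetical.

```python
# Minimal sketch of a QR-parameterized invertible 1x1 convolution,
# assuming a Glow-style flow layer; not taken from the WDFSR repository.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QRInvertibleConv1x1(nn.Module):
    def __init__(self, channels):
        super().__init__()
        q0, r0 = torch.linalg.qr(torch.randn(channels, channels))
        self.q = nn.Parameter(q0)   # re-orthogonalized on every forward pass
        self.r = nn.Parameter(r0)   # only its upper triangle is used

    def forward(self, x):           # x: (B, C, H, W)
        b, c, h, w = x.shape
        q, _ = torch.linalg.qr(self.q)          # project back to an orthogonal matrix
        r = torch.triu(self.r)                  # upper-triangular factor
        weight = (q @ r).view(c, c, 1, 1)       # full 1x1 conv weight W = Q R
        y = F.conv2d(x, weight)
        # log|det| per image: Q contributes 0 (orthogonal); R contributes the
        # log of its diagonal magnitudes, applied once per spatial location.
        logdet = h * w * torch.log(torch.abs(torch.diagonal(r)) + 1e-8).sum()
        return y, logdet

# Usage on a random feature map.
x = torch.randn(2, 8, 16, 16)
y, logdet = QRInvertibleConv1x1(8)(x)
print(y.shape, logdet.item())
```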