In-loop filters have been comprehensively explored during the development of video coding standards due to their remarkable noise-reduction capabilities. In the early stages of video coding, in-loop filters such as the deblocking filter, sample adaptive offset, and adaptive loop filter were applied separately to each component. Recently, cross-component filters have been studied to improve chroma fidelity by exploiting correlations between the luma and chroma channels. This paper introduces the cross-component filters used in state-of-the-art video coding standards, including the cross-component adaptive loop filter and cross-component sample adaptive offset. Cross-component filters aim to reduce compression artifacts based on the correlation between different components and to provide more accurate reconstructed pixel values. We present their origin, development, and status in current video coding standards. Finally, we discuss the further evolution of cross-component filters.
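The core idea the abstract describes — refining a chroma sample using co-located luma information — can be sketched as a small linear correction. The cross-shaped support, coefficient handling, and 4:4:4 sampling assumption below are simplifications for illustration, not the standardized CCALF design:

```python
import numpy as np

def cc_filter_sketch(luma, chroma, coeffs):
    """Illustrative cross-component refinement: add a correction derived
    from co-located luma differences to each chroma sample.
    Assumes luma and chroma have the same resolution (4:4:4); real codecs
    must also handle 4:2:0 subsampling and chroma phase alignment."""
    h, w = chroma.shape
    out = chroma.astype(np.float64).copy()
    pad = np.pad(luma.astype(np.float64), 1, mode="edge")
    # hypothetical cross-shaped filter support around the co-located sample
    taps = [(-1, 0), (0, -1), (0, 1), (1, 0)]
    for y in range(h):
        for x in range(w):
            center = pad[y + 1, x + 1]
            # linear combination of luma gradients drives the chroma correction
            corr = sum(c * (pad[y + 1 + dy, x + 1 + dx] - center)
                       for (dy, dx), c in zip(taps, coeffs))
            out[y, x] += corr
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```

Using luma *differences* rather than absolute values keeps the correction zero in flat luma regions, which mirrors why cross-component filtering mainly helps around edges and textures where luma and chroma distortions correlate.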
The Joint Video Experts Team (JVET) has announced the latest generation of the Versatile Video Coding (VVC, H.266) standard. The in-loop filter in VVC inherits the De-Blocking Filter (DBF) and Sample Adaptive Offset (SAO) of High Efficiency Video Coding (HEVC, H.265), and adds the Adaptive Loop Filter (ALF) to minimize the error between the original sample and the decoded sample. However, for chaotic moving video encoded at low bitrates, serious blocking artifacts remain after in-loop filtering due to severe quantization distortion of texture details. To tackle this problem, this paper proposes a Convolutional Neural Network (CNN) based VVC in-loop filter for low-bitrate encoding of chaotic moving video. First, a blur-aware attention network is designed to perceive the blurring effect and restore texture details. Then, a deep in-loop filtering method based on the blur-aware network is proposed to replace the VVC in-loop filter. Experimental results show that the proposed method saves an average of 8.3% of bit consumption at similar subjective quality. Meanwhile, at comparable bitrates, the proposed method reconstructs more texture information, significantly reducing blocking artifacts and improving visual quality.
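The "blur-aware" gating concept described above can be illustrated without a trained network: estimate where detail is weak, then apply a restoration residual weighted by that map. The hand-crafted unsharp residual and gating function below are stand-ins for the paper's learned CNN, shown only to make the attention-weighting idea concrete:

```python
import numpy as np

def blur_aware_filter_sketch(frame, strength=0.5):
    """Toy stand-in for a blur-aware attention filter: compute a
    high-frequency residual (detail lost to quantization blur), and add it
    back weighted by a per-pixel gate. The actual method is a trained CNN;
    this hand-crafted version only illustrates the gating mechanism."""
    f = frame.astype(np.float64)
    pad = np.pad(f, 1, mode="edge")
    h, w = f.shape
    # 3x3 box-filter local mean via shifted sums
    nbhd = sum(pad[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    mean = nbhd / 9.0
    # unsharp residual: the detail the filter tries to restore
    residual = f - mean
    # per-pixel gate: damp the correction where residual energy is large,
    # so strong edges are not over-sharpened (a hypothetical choice)
    attention = 1.0 / (1.0 + np.abs(residual))
    out = f + strength * attention * residual
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```

In a flat (fully blurred) region the residual is zero, so the output passes through unchanged; the correction concentrates around texture, which is where the abstract reports the largest subjective gains.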
Funding: supported in part by the National Science Foundation of China under Grant No. 62031013, the PCL-CMCC Foundation for Science and Innovation under Grant No. 2024ZY1C0040, the New Cornerstone Science Foundation through the Xplorer Prize, and the High-performance Computing Platform of Peking University.
Funding: supported by the National Natural Science Foundation of China under Grants U20A20157, 61771082, 62271096, and 61871062; the General Program of the Chongqing Natural Science Foundation under Grant cstc2021jcyj-msxmX0032; the Natural Science Foundation of Chongqing, China (cstc2020jcyj-zdxmX0024); the Science and Technology Research Program of the Chongqing Municipal Education Commission under Grant KJQN202300632; and the University Innovation Research Group of Chongqing (CXQT20017).