Funding: This work was supported by the National Natural Science Foundation of China under Grant Nos. 61532002, 61672237, 61672077 and 61672149, the U.S. National Science Foundation under Grant Nos. IIS-1715985, IIS-0949467, IIS-1047715, and IIS-1049448, and the National High Technology Research and Development 863 Program of China under Grant No. 2015AA016404.
Abstract: Hybrid approaches that combine video data with pure physics-based simulation have been popular in computer graphics over the recent decade. The key motivation is to retain the salient advantages of both data-driven methods and model-centric numerical simulation, while overcoming certain difficulties of each. The Eulerian method, widely employed in flow simulation, stores variables such as velocity and density on regular Cartesian grids, so it can be associated with (volumetric) video data on the same domain. This paper proposes a novel method for flow simulation that tightly couples video-based reconstruction with physically-based simulation and makes use of meaningful physical attributes during re-simulation. First, we reconstruct the density field from a single-view video. Second, we estimate the velocity field using the reconstructed density field as a prior. In this iterative process, the pressure projection is treated as a physical constraint, and the result of each step is corrected by the obtained velocity field in the Eulerian framework. Third, we use the reconstructed density and velocity fields to guide the Eulerian simulation toward anticipated new results. Through the guidance of video data, we can produce new flows that closely match the real scene captured during data acquisition. Moreover, in the multigrid Eulerian simulation, we can generate new visual effects that cannot be created from the raw video alone, with the goal of easily producing many more visually interesting results while respecting true physical attributes. We demonstrate the salient advantages of our hybrid method with a variety of animation examples.
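To make the pressure-projection constraint concrete, here is a minimal NumPy sketch of the standard Eulerian projection step (a Jacobi-iterated Poisson solve on a unit-spacing grid with simple zero boundaries). It illustrates the textbook operation the abstract refers to, not the paper's actual multigrid solver; all names and discretization choices are illustrative assumptions.

```python
import numpy as np

def project(u, v, iters=100):
    """Make a 2-D velocity field (u, v) approximately divergence-free.

    Solve the discrete Poisson equation laplacian(p) = div(u, v) with
    Jacobi iterations, then subtract the pressure gradient. Axis 1 is x,
    axis 0 is y; grid spacing is 1; p = 0 on the boundary.
    """
    div = np.zeros_like(u)
    # central-difference divergence on interior cells
    div[1:-1, 1:-1] = 0.5 * (u[1:-1, 2:] - u[1:-1, :-2]
                             + v[2:, 1:-1] - v[:-2, 1:-1])
    p = np.zeros_like(u)
    for _ in range(iters):
        p[1:-1, 1:-1] = 0.25 * (p[1:-1, 2:] + p[1:-1, :-2]
                                + p[2:, 1:-1] + p[:-2, 1:-1]
                                - div[1:-1, 1:-1])
    u2, v2 = u.copy(), v.copy()
    # subtract the pressure gradient from the interior velocities
    u2[1:-1, 1:-1] -= 0.5 * (p[1:-1, 2:] - p[1:-1, :-2])
    v2[1:-1, 1:-1] -= 0.5 * (p[2:, 1:-1] - p[:-2, 1:-1])
    return u2, v2
```

In a guided re-simulation, a step like this can be applied after blending the simulated velocity toward the video-reconstructed one, so the guided field still satisfies the incompressibility constraint.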
Funding: Supported by JSPS KAKENHI Grant Number JP22K12101.
Abstract: Recently, deep learning-based video compressive sensing reconstruction (VCSR) technologies have significantly improved reconstructed video quality by taking advantage of spatial and temporal correlations. However, existing VCSR work mainly focuses on improving deep learning-based motion compensation without optimizing local and global information, leaving much room for further improvement. This paper proposes a novel VCSR method, JVCSR+, which simultaneously optimizes feature information, removes reconstruction artifacts, and increases the resolution. Specifically, the measurement matrix in the proposed compressive sensing (CS) module is learned adaptively, so that the sampled measurements retain more image structure information for better reconstruction. An average search module is also proposed to detect more suitable reference areas, thereby attaining superior motion compensation performance. Within the loop, the enhanced frame is used as a reference to improve recovery of the current frame. Furthermore, we propose an out-of-loop super-resolution module for VCSR to obtain high-quality images at low bitrates. Extensive experiments demonstrate that the proposed JVCSR+ achieves promising performance compared with state-of-the-art CS methods within the same bitrate range.
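As a rough illustration of a data-adaptive measurement matrix, the sketch below uses PCA as a stand-in for JVCSR+'s learned sampling: the top principal directions of a set of training patches give the rank-m linear measurements that best preserve patch structure under a linear least-squares decoder. The paper's module is a trained network, not PCA, and these function names are hypothetical.

```python
import numpy as np

def adaptive_measurement_matrix(patches, m):
    """Data-adaptive CS sampling matrix Phi of shape (m, n).

    patches: (N, n) array, one flattened image patch per row.
    The rows of Phi are the top-m principal directions of the patch
    set, so measurements y = Phi @ x keep the directions along which
    the training patches vary the most.
    """
    centered = patches - patches.mean(axis=0)
    # right singular vectors = principal directions of the patch set
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:m]

def reconstruct(phi, y):
    """Linear least-squares decoder: x_hat = pinv(Phi) @ y."""
    return np.linalg.pinv(phi) @ y
```

If the patches truly lie in an m-dimensional subspace, this sampler recovers them exactly from m measurements; a learned matrix plays the same role for the far richer statistics of natural video patches.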
Funding: Supported by the National Natural Science Foundation of China (61320106006, 61532006, 61502042).
Abstract: Existing learning-based super-resolution (SR) reconstruction algorithms are mainly designed for single images and ignore the spatio-temporal relationship between video frames. To apply the advantages of learning-based algorithms to video SR, a novel video SR reconstruction algorithm based on a deep convolutional neural network (CNN) and spatio-temporal similarity (STCNN-SR) is proposed in this paper. It is a deep learning method for video SR reconstruction that considers not only the mapping relationship among associated low-resolution (LR) and high-resolution (HR) image blocks, but also the non-local complementary and redundant spatio-temporal information between adjacent low-resolution video frames. Reconstruction speed is improved considerably with the pre-trained end-to-end reconstruction coefficients. Moreover, video SR performance is further improved by an optimization process based on spatio-temporal similarity. Experimental results demonstrate that the proposed algorithm achieves competitive SR quality in both subjective and objective evaluations, compared with other state-of-the-art algorithms.
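A minimal sketch of the spatio-temporal similarity idea: exhaustive sum-of-squared-differences (SSD) matching of a patch against an adjacent frame, which is how non-local complementary detail can be located across frames. STCNN-SR's actual similarity computation is not specified here; this only illustrates the matching concept, and the function name is hypothetical.

```python
import numpy as np

def best_match(patch, frame, stride=1):
    """Find the location in `frame` whose patch is most similar to
    `patch` under the SSD metric, by exhaustive search.

    Returns ((row, col), ssd) of the best match. In a video SR
    pipeline, the matched patch from an adjacent LR frame supplies
    complementary information for reconstructing the current frame.
    """
    ph, pw = patch.shape
    best_ssd, best_pos = np.inf, (0, 0)
    for i in range(0, frame.shape[0] - ph + 1, stride):
        for j in range(0, frame.shape[1] - pw + 1, stride):
            ssd = np.sum((frame[i:i + ph, j:j + pw] - patch) ** 2)
            if ssd < best_ssd:
                best_ssd, best_pos = ssd, (i, j)
    return best_pos, best_ssd
```

Real systems restrict the search to a small window around the patch's position (motion is usually small between adjacent frames) and keep several top matches rather than one, but the similarity criterion is the same.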