We present a compact and lightweight augmented reality (AR) near-eye display system capable of providing adaptive view volumes centered on a real-world object along the user's line of sight. In contrast to prior gaze-contingent approaches, which require extremely high-precision eye-tracking hardware, our method employs a calibrated time-of-flight sensor and piezo actuators for vari-focal and multi-focal functionality in an object-contingent manner, without complex computations. To address the challenge of decomposing multi-focal images across diverse depth ranges, we propose a volume-aware decomposition network, trained on RGB-D datasets with a wide distribution of view volume sizes. By augmenting the depth range of the training dataset, our neural decomposition network generates decomposed images for the prototype in real time, adapting to each target view volume. Simulation and experimental results validate that our approach achieves significantly higher image quality (up to 14.4 dB PSNR) than baselines trained on fixed depth ranges, for both shallow and deep view volumes. The proposed method enables practical and robust object-contingent focal plane adaptation for AR applications.
Funding: Korea Evaluation Institute of Industrial Technology (RS-2025-02307192); Institute of Information & Communications Technology Planning & Evaluation (RS-2024-00337012); Defense Acquisition Program Administration (23-CM-DI-11).
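The abstract mentions augmenting the depth range of the RGB-D training data so the decomposition network generalizes to both shallow and deep view volumes, but gives no implementation details. A minimal, hypothetical sketch of one plausible scheme (all function names and parameter ranges here are our own assumptions, not the paper's): linearly remap each training sample's depth channel into a randomly drawn view volume.

```python
import random

def remap_depth(depth, near, far):
    """Linearly remap a flat list of depth samples into the target view volume [near, far]."""
    d_min, d_max = min(depth), max(depth)
    scale = (far - near) / max(d_max - d_min, 1e-8)  # guard against a constant depth map
    return [near + (d - d_min) * scale for d in depth]

def sample_view_volume(rng, near_lo=0.25, near_hi=1.0, extent_lo=0.1, extent_hi=3.0):
    """Draw a random view volume [near, far] (metres, assumed ranges) so the
    network sees a wide distribution of view volume sizes during training."""
    near = rng.uniform(near_lo, near_hi)
    return near, near + rng.uniform(extent_lo, extent_hi)

rng = random.Random(0)
depth = [rng.uniform(0.5, 2.0) for _ in range(64)]   # stand-in RGB-D depth channel
near, far = sample_view_volume(rng)
aug = remap_depth(depth, near, far)                  # augmented depth, spanning [near, far]
```

Because the remap is applied per sample with a freshly drawn volume, each epoch effectively exposes the network to differently scaled depth distributions at no extra capture cost.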