Object-contingent multi-focal near-eye display enabled by a neural decomposition network for augmented reality
Authors: JIWOON YEOM, YOONMO YANG, JAE HA CHOI, HYOUN SOO JANG, JAE WOONG KIM, HEEYOON JEONG, KWANG-SOON CHOI. Photonics Research, 2025, Issue 11, pp. 3182-3198 (17 pages)
We present a compact and lightweight augmented reality (AR) near-eye display system capable of providing adaptive view volumes centered on a real-world object along the user's line of sight. In contrast to prior gaze-contingent approaches, which require extremely high-precision eye-tracking hardware, our method employs a calibrated time-of-flight sensor and piezo actuators for vari-focal and multi-focal functionality in an object-contingent manner, without complex computations. To address the challenge of decomposing multi-focal images across diverse depth ranges, we propose a volume-aware decomposition network, trained on RGB-D datasets with a wide distribution of view volume sizes. By augmenting the depth range of the training dataset, our neural decomposition network generates decomposed images for the prototype in real time, adapting to each target view volume. Simulation and experimental results validate that our approach achieves significantly higher image quality (up to 14.4 dB PSNR) than baselines trained on fixed depth ranges, for both shallow and deep view volumes. The proposed method enables practical and robust object-contingent focal plane adaptation for AR applications.
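The depth-range augmentation described in the abstract can be illustrated with a minimal sketch: normalized RGB-D depth maps are remapped into randomly sampled view volumes of varying size, so the decomposition network sees both shallow and deep depth ranges during training. The function name, the diopter bounds, and the uniform sampling scheme below are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def augment_depth_range(depth, near, far, rng):
    """Remap a normalized depth map into a randomly sampled view volume.

    depth: (H, W) array with values in [0, 1] (0 = nearest, 1 = farthest).
    near, far: outer bounds (e.g. in diopters) from which a target
               view volume [d0, d1] is drawn.
    """
    # Sample a random sub-range to emulate view volumes of varying size,
    # in the spirit of the dataset augmentation the abstract describes.
    d0 = rng.uniform(near, far)
    d1 = rng.uniform(d0, far)
    # Linearly rescale the normalized depths into the sampled volume.
    return d0 + depth * (d1 - d0)

rng = np.random.default_rng(0)
depth = np.linspace(0.0, 1.0, 5).reshape(1, 5)
aug = augment_depth_range(depth, near=0.0, far=3.0, rng=rng)
```

Each training sample thus lands in a different target volume, which is what allows a single network to serve both shallow and deep view volumes at inference time.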
Keywords: neural decomposition network, augmented reality, time-of-flight sensor, multi-focal, adaptive view volumes, piezo actuators, object-contingent, near-eye display