Journal Articles
3 articles found
1. Effects of Visual and Auditory Instructions on Space Station Procedural Tasks
Authors: Yan Zhao, You Li, Ao Jiang, HongRui Zhang, HaoTian She, WenHao Zhan. Space (Science & Technology), 2024, No. 1, pp. 479-489 (11 pages).
In order to compare the effects of visual and auditory instructions on the crew when guiding astronauts through procedural tasks in the space station, subjects were recruited in this study to complete a programmed task: starting from the node module, locating the scientific cabinet and spectrometer, and finally operating the orbital replaceable unit on the spectrometer. Task performance, eye movement parameters, and the cognitive load induced by the two kinds of instructions were statistically analyzed. The results showed highly significant differences between the two instruction types in task completion time, NASA-TLX (Task Load Index) total score, and eye movement indices (P<0.01), as well as significant differences in error rate and effort (P<0.05). This study shows that visual instruction interaction outperforms auditory instruction. Our work provides an important reference for selecting the human-computer interaction mode for procedural tasks on space stations. It also supplies experience and theoretical evidence that have so far been missing and demonstrates the benefits of augmented reality assistance in terms of task performance and human factors.
Keywords: programmed task; visual and auditory instructions; cognitive load; node module; visual instructions; auditory instructions; space station; procedural tasks
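The comparison reported above comes down to paired significance tests on per-subject measures (completion time, NASA-TLX total score, error rate) under the two instruction conditions. The sketch below is a minimal illustration of that kind of analysis with SciPy; the data values, variable names, and the choice of a paired t-test are assumptions, not the authors' actual pipeline.

# Minimal sketch of a paired comparison between visual and auditory
# instruction conditions (hypothetical data, not the study's dataset).
import numpy as np
from scipy import stats

# Per-subject task completion times in seconds (assumed values).
visual_time = np.array([312.0, 298.5, 305.2, 330.1, 287.9, 301.4])
auditory_time = np.array([355.3, 349.0, 362.8, 371.5, 340.2, 358.7])

# Paired t-test: each subject performs the task under both conditions.
t_stat, p_value = stats.ttest_rel(visual_time, auditory_time)
print(f"completion time: t = {t_stat:.2f}, p = {p_value:.4f}")

# The same pattern would apply to NASA-TLX totals, error rates, and
# eye-movement indices; p < 0.01 would correspond to the "highly
# significant" differences reported in the abstract.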
2. Industrial applications of AR headsets: a review of the devices and experience (cited by: 1)
Authors: Artem B. Solomashenko, Olga L. Afanaseva, Maria V. Shishova, Igor E. Gulianskii, Sergey A. Sobolnikov, Nikolay V. Petrov. Light (Advanced Manufacturing), 2025, No. 2, pp. 166-195 (30 pages).
This review considers the modern industrial applications of augmented reality headsets. It draws on a synthesis of information from open sources and company press releases, as well as the first-hand experiences of industry representatives. The research also incorporates insights from profile events and in-depth discussions with skilled professionals. A specific focus is placed on the ergonomic characteristics of headsets: image quality, user-friendliness, and so on. To provide an objective evaluation of the various headsets, a metric is proposed that depends on the specific application case. This enables a comprehensive comparison of the devices in terms of their quantitative characteristics, which is of particular importance for a rapidly developing industry.
Keywords: augmented reality; head-mounted display; headset; applications; manufacturing; assembly; ergonomics; diffractive waveguide; visual instructions; field of view
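The application-dependent metric mentioned in the abstract is not reproduced in this listing; the sketch below only illustrates the general idea of scoring headsets with weights that change per use case. The criteria, weights, and scores are hypothetical and are not the authors' formula.

# Generic illustration of an application-dependent weighted score for
# comparing AR headsets (criteria, weights, and values are hypothetical).
from typing import Dict

def weighted_score(specs: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted sum of normalized criteria; the weights depend on the use case."""
    return sum(weights[k] * specs.get(k, 0.0) for k in weights)

# Normalized (0..1) characteristics of two hypothetical devices.
headset_a = {"image_quality": 0.9, "field_of_view": 0.7, "comfort": 0.6}
headset_b = {"image_quality": 0.7, "field_of_view": 0.8, "comfort": 0.9}

# Assembly guidance might weight comfort heavily; inspection, image quality.
assembly_weights = {"image_quality": 0.3, "field_of_view": 0.3, "comfort": 0.4}
print(weighted_score(headset_a, assembly_weights))
print(weighted_score(headset_b, assembly_weights))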
3. Mini-InternVL: a flexible-transfer pocket multi-modal model with 5% parameters and 90% performance
Authors: Zhangwei Gao, Zhe Chen, Erfei Cui, Yiming Ren, Weiyun Wang, Jinguo Zhu, Hao Tian, Shenglong Ye, Junjun He, Xizhou Zhu, Lewei Lu, Tong Lu, Yu Qiao, Jifeng Dai, Wenhai Wang. Visual Intelligence, 2024, No. 1, pp. 392-408 (17 pages).
Multi-modal large language models (MLLMs) have demonstrated impressive performance in vision-language tasks across a wide range of domains. However, the large model scale and associated high computational cost pose significant challenges for training and deploying MLLMs on consumer-grade GPUs or edge devices, thereby hindering their widespread application. In this work, we introduce Mini-InternVL, a series of MLLMs with parameters ranging from 1 billion to 4 billion, which achieves 90% of the performance with only 5% of the parameters. This significant improvement in efficiency and effectiveness makes our models more accessible and applicable in various real-world scenarios. To further promote the adoption of our models, we are developing a unified adaptation framework for Mini-InternVL, which enables our models to transfer to and outperform specialized models in downstream tasks, including autonomous driving, medical image processing, and remote sensing. We believe that our models can provide valuable insights and resources to advance the development of efficient and effective MLLMs.
Keywords: lightweight multi-modal large language model; vision-language model; knowledge distillation; visual instruction tuning
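Knowledge distillation, listed among the keywords, is the standard way to let a small model recover most of a larger model's performance. The sketch below shows a generic soft-label distillation loss in PyTorch; it is a minimal illustration under assumed tensor shapes, not the Mini-InternVL training recipe.

# Generic soft-label knowledge-distillation loss (PyTorch): a small
# "student" is trained to match a large "teacher" via softened logits.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student distributions."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

# Assumed shapes: a batch of 4 samples over a 32k-token vocabulary.
student_logits = torch.randn(4, 32000, requires_grad=True)
teacher_logits = torch.randn(4, 32000)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()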