Augmented reality (AR) is an emerging, dynamic technology that effectively supports education at different levels, and the widespread adoption of mobile devices has amplified its impact. As demand for AR applications in education continues to grow, educators actively seek innovative and immersive methods to engage students in learning. Exploring these possibilities, however, also requires identifying and overcoming barriers to effective educational integration. One such barrier is three-dimensional (3D) modeling: creating 3D objects for AR education applications can be challenging and time-consuming for educators. To address this, we developed a pipeline that creates realistic 3D objects from two-dimensional (2D) photographs; augmented and virtual reality applications can then use these objects. We evaluated the proposed pipeline based on the usability of the resulting 3D objects and on performance metrics. A co-creation team of 117 respondents was surveyed with open-ended questions to evaluate the precision of the 3D objects created by the proposed photogrammetry pipeline. Analyzing the survey data with descriptive-analytical methods, we found that the pipeline produces 3D models that are rated as accurate when compared with real-world objects, with a mean score above 8. This study adds new knowledge on creating 3D objects for AR applications using photogrammetry and discusses potential problems and future research directions for 3D objects in the education sector.
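The abstract above describes a photogrammetry pipeline that reconstructs 3D objects from 2D photographs but gives no implementation details. Purely as a hedged illustration, and not the authors' actual pipeline, the sketch below shows a typical first stage of such a pipeline: detecting and matching image features across photographs with OpenCV. The file names are placeholders.

```python
# Minimal sketch of the feature-matching stage of a photogrammetry pipeline
# (illustrative only; not the pipeline described in the abstract).
import cv2

def match_features(path_a: str, path_b: str, ratio: float = 0.75):
    """Detect SIFT keypoints in two photographs and keep Lowe-ratio-filtered matches."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test rejects ambiguous matches before triangulation.
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    return kp_a, kp_b, good

if __name__ == "__main__":
    # "view1.jpg" / "view2.jpg" are hypothetical input photographs.
    _, _, matches = match_features("view1.jpg", "view2.jpg")
    print(f"{len(matches)} putative correspondences found")
```

Matched correspondences such as these would then feed pose estimation, triangulation, and meshing steps in a full reconstruction pipeline.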
Endoscopic transnasal optic nerve decompression surgery plays a crucial role in the minimally invasive treatment of complex traumatic optic neuropathy. However, a major challenge during the procedure is the inability to visualize the optic nerve intraoperatively. To address this issue, an endoscopic image-based augmented reality surgical navigation system was developed in this study. The system virtually fuses the optic nerve onto the endoscopic images, helping surgeons determine the optic nerve's position and reducing surgical risk. First, a calibration algorithm based on a checkerboard grid of immobile points is proposed, building upon existing calibration methods. Additionally, to tackle the accuracy issues associated with augmented reality technology, an optical navigation and visual fusion compensation algorithm is proposed to improve intraoperative tracking accuracy. To evaluate the system's performance, model experiments were carefully designed and conducted. The results confirm the accuracy and stability of the proposed system, with an average tracking error of (0.99 ± 0.46) mm, demonstrating the effectiveness of the proposed algorithm in improving the navigation system's accuracy. Furthermore, the system successfully displays hidden optic nerves and other deep tissues, showing promising potential for future applications in orbital and maxillofacial surgery.
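The abstract does not specify the calibration algorithm itself. As a rough, hedged sketch of the general idea of checkerboard-based camera calibration (the standard technique the authors build upon, not their immobile-point variant), the following uses OpenCV's routines; the board geometry and frame file names are assumptions.

```python
# Generic checkerboard camera calibration (illustrative sketch only;
# not the immobile-point algorithm described in the abstract).
import glob
import cv2
import numpy as np

PATTERN = (9, 6)     # inner corners per row/column (assumed board geometry)
SQUARE_MM = 5.0      # assumed square size in millimetres

# Planar object points for one board view, expressed in board coordinates.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_pts, img_pts = [], []
for path in glob.glob("calib_*.png"):          # hypothetical endoscope frames
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# Estimate the intrinsic matrix and distortion coefficients from all views.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("Intrinsic matrix:\n", K)
```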
Augmented reality (AR) is a technology that superimposes digital information onto real-world objects via head-mounted display devices to improve surgical finesse through visually enhanced medical information. With the rapid development of digital technology, AR has been increasingly adopted in orthopedic surgeries worldwide, especially in total knee arthroplasty procedures, which demand high precision. By overlaying digital information onto the surgeon's field of view, AR systems enhance precision, improve alignment accuracy, and reduce the risk of complications associated with malalignment. Despite this accuracy, some concerns remain, including the learning curve, long-term outcomes, and technical limitations. Furthermore, it is essential for health practitioners to gain trust in the use of AR.
The aim of this study was to assess the potential of surgical guides as a complementary tool to augmented reality (AR) for enhancing the safety and precision of pedicle screw placement in spinal surgery. Four operators were divided into an AR navigation group using surgical guides and a free-hand group; each group consisted of a novice and an experienced spine surgeon. A total of 80 pedicle screws were implanted. First, the AR group reconstructed a 3D model and planned the screw insertion route from the computed tomography data of the L2 lumbar vertebra. The Microsoft HoloLens™ 2 was then used to recognize the vertebral model, and the planned virtual path was superimposed on the real bone model. Next, the screws were placed according to the projected trajectory. Finally, a Micron Tracker was used to measure the deviation of the screws from the preoperatively planned trajectory, and the pedicle screws were evaluated using the Gertzbein-Robbins scale. In the AR group, the linear deviations of the experienced surgeon and the novice were (1.59 ± 0.39) mm and (1.73 ± 0.52) mm, respectively, and the angular deviations were 2.72° ± 0.61° and 2.87° ± 0.63°, respectively. In the free-hand group, the linear deviations of the experienced surgeon and the novice were (2.88 ± 0.58) mm and (5.25 ± 0.62) mm, respectively, and the angular deviations were 4.41° ± 1.18° and 7.15° ± 1.45°, respectively. Both kinds of deviation differed significantly between the two groups (P < 0.05). The screw accuracy rate was 95% in the AR navigation group and 77.5% in the free-hand group. These results indicate that integrating surgical guides with AR is an innovative technique that can substantially enhance the safety and precision of spinal surgery and help inexperienced surgeons complete the procedure.
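The abstract reports linear and angular deviations between planned and actual screw trajectories but does not state how they are computed. The sketch below works one common convention (entry-point distance for the linear deviation, angle between axis vectors for the angular deviation); this is an assumption, not the paper's exact measurement protocol, and the coordinates are hypothetical.

```python
# Sketch: linear and angular deviation between a planned and an actual screw
# trajectory (one common convention; the paper's exact definition may differ).
import numpy as np

def trajectory_deviation(planned_entry, planned_tip, actual_entry, actual_tip):
    planned_entry, planned_tip = np.asarray(planned_entry), np.asarray(planned_tip)
    actual_entry, actual_tip = np.asarray(actual_entry), np.asarray(actual_tip)

    # Linear deviation: distance between the two entry points (mm).
    linear_mm = np.linalg.norm(actual_entry - planned_entry)

    # Angular deviation: angle between the two screw axes (degrees).
    v1 = planned_tip - planned_entry
    v2 = actual_tip - actual_entry
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle_deg = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return linear_mm, angle_deg

# Hypothetical coordinates in millimetres.
lin, ang = trajectory_deviation([0, 0, 0], [0, 5, 40], [1.2, 0.8, 0], [1.0, 6.1, 39])
print(f"linear deviation: {lin:.2f} mm, angular deviation: {ang:.2f} deg")
```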
The purpose of this study was to establish a multivariate nonlinear regression model to predict tumor displacement during brain tumor resection surgery, and to integrate it with augmented reality technology for three-dimensional visualization, thereby improving the complete resection rate and the success rate of surgery. Based on the patient's preoperative MRI data, a 3D virtual model is reconstructed and 3D printed, and a biomimetic brain model is created by gel injection molding. Cerebrospinal fluid loss and tumor cyst fluid loss are taken as the independent variables, and the displacement of the highest point along the vertical bone-window direction after the patient is positioned for surgery is taken as the dependent variable. An orthogonal experiment is conducted on the biomimetic model to establish a predictive model, which is then incorporated into the augmented reality navigation system. To validate the predictive model, five participants wore HoloLens 2 devices and overlaid the patient's 3D virtual model onto the physical head model. The spatial coordinates of the tumor's highest point after displacement were then measured on both the physical and virtual models (actual and predicted coordinates, respectively); the difference between these coordinates represents the model's prediction error. The results indicate that the errors between measured and predicted displacement of the tumor's highest point range from −0.6787 mm to 0.2957 mm on the X axis and from −0.4314 mm to 0.2253 mm on the Y axis. The relative errors for each experimental group are within 10%, demonstrating a good fit. This regression-modeling approach is a preliminary attempt to predict brain tumor displacement in specific situations and offers surgeons a new tool: combined with augmented reality visualization, it addresses the need to predict tumor displacement and precisely locate brain anatomical structures in a simple and cost-effective manner.
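The abstract does not give the functional form of the multivariate nonlinear regression. Purely as an illustration, the sketch below fits a generic second-order polynomial in the two stated independent variables (cerebrospinal fluid loss and cyst fluid loss) with scipy; both the functional form and the sample data are assumptions, not values from the study.

```python
# Sketch: fitting a multivariate nonlinear regression for tumor displacement.
# The quadratic form and the sample data are illustrative assumptions only.
import numpy as np
from scipy.optimize import curve_fit

def displacement_model(X, a, b, c, d, e, f):
    """Second-order polynomial in CSF loss (x1) and cyst fluid loss (x2)."""
    x1, x2 = X
    return a + b * x1 + c * x2 + d * x1**2 + e * x2**2 + f * x1 * x2

# Hypothetical orthogonal-experiment data: fluid losses in mL, displacement in mm.
csf_loss  = np.array([0, 5, 10, 0, 5, 10, 0, 5, 10], dtype=float)
cyst_loss = np.array([0, 0, 0, 3, 3, 3, 6, 6, 6], dtype=float)
disp_mm   = np.array([0.0, 0.9, 2.1, 0.7, 1.8, 3.2, 1.5, 2.9, 4.6])

params, _ = curve_fit(displacement_model, (csf_loss, cyst_loss), disp_mm)
pred = displacement_model((np.array([4.0]), np.array([2.0])), *params)
print(f"predicted displacement for 4 mL CSF / 2 mL cyst loss: {pred[0]:.2f} mm")
```

A fitted model of this kind can then be evaluated inside the AR navigation loop whenever updated fluid-loss estimates become available.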
With the continual advance of medical technology and the growth of public health demands, the quality of nursing services has become a core indicator of the effectiveness of the medical system. The clinical practice ability of nursing staff is directly related to patient safety during diagnosis and treatment and to the rehabilitation process. However, the current model for training clinical nursing talent faces bottlenecks such as limited practical scenarios and fragmented case cognition. This study focuses on the teaching application of augmented reality (AR) technology in hospital settings and systematically reviews research progress on improving the clinical practice ability of trainee nurses through AR-based immersive teaching. By constructing clinical teaching scenarios that blend the virtual and the real, AR technology can dynamically simulate complex case-handling processes and strengthen nursing students' three-dimensional understanding of condition assessment, operating norms, and emergency plans. As the core base for practical teaching, hospitals that integrate AR technology can effectively shorten the transition from theoretical teaching to clinical practice, improve the clinical practice level of trainee nurses, and provide an innovative model for optimizing the cultivation of clinical nursing talent.
Closed thoracic drainage can be performed with a steel-needle-guided chest tube to treat pleural effusion or pneumothorax in the clinic. However, the puncture procedure is not visible during surgery, which increases the risk of surgical failure; a visualization system for closed thoracic drainage is therefore needed. Augmented reality (AR) technology can help visualize the internal anatomical structure and determine the insertion point on the body surface. The structure of the currently used steel-needle-guided chest tube was modified by integrating an ultrafine-diameter camera to provide real-time visualization of the puncture process. In simulation experiments, the overall registration error of the AR method was within (3.59 ± 0.53) mm, indicating its potential for clinical application. The ultrafine-diameter camera module and the improved steel-needle-guided chest tube reflect the position of the needle tip in the body in a timely manner. A comparative experiment showed that video guidance improves the safety of the puncture process compared with the traditional method. Finally, a qualitative evaluation of the system's usability was conducted through a questionnaire. This system facilitates visualization of the closed thoracic drainage puncture procedure and provides an implementation scheme that enhances the accuracy and safety of the operative step, helping to shorten the learning curve and improve doctors' proficiency.
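One standard way to quantify an overall registration error of the kind reported above, (3.59 ± 0.53) mm, is the mean and standard deviation of the distances between corresponding landmark points after registration. The sketch below illustrates that convention with hypothetical point sets; it is not the paper's exact evaluation procedure.

```python
# Sketch: reporting an overall registration error as mean ± std of distances
# between corresponding landmarks (a common convention; details may differ).
import numpy as np

def registration_error(reference_pts, registered_pts):
    reference_pts = np.asarray(reference_pts, dtype=float)
    registered_pts = np.asarray(registered_pts, dtype=float)
    d = np.linalg.norm(reference_pts - registered_pts, axis=1)  # per-landmark error (mm)
    return d.mean(), d.std(ddof=1)

# Hypothetical landmark coordinates in millimetres.
ref = [[10, 20, 30], [40, 22, 31], [25, 60, 28], [15, 45, 50]]
reg = [[13, 21, 29], [43, 23, 33], [28, 58, 30], [18, 47, 48]]
mean_err, std_err = registration_error(ref, reg)
print(f"registration error: ({mean_err:.2f} ± {std_err:.2f}) mm")
```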
BACKGROUND Computer-assisted systems have attracted increasing interest in orthopaedic surgery in recent years, as they enhance precision compared with conventional hardware. The expansion of computer assistance is evolving with the employment of augmented reality; yet, the accuracy of augmented reality navigation systems has not been determined. AIM To examine the accuracy of component alignment and restoration of the affected limb's mechanical axis in primary total knee arthroplasty (TKA) using an augmented reality navigation system, and to assess whether such systems are conspicuously fruitful for an accomplished knee surgeon. METHODS From May 2021 to December 2021, 30 patients (25 women and five men) underwent a primary unilateral TKA; revision cases were excluded. A preoperative radiographic procedure was performed to evaluate the limb's axial alignment. All patients were operated on by the same team, without a tourniquet, using three distinct prostheses with the assistance of the Knee+™ augmented reality navigation system in every operation. Postoperatively, the same radiographic protocol was executed to evaluate the implants' position, orientation, and coronal-plane alignment. Measurements of femoral varus and flexion and of tibial varus and posterior slope were recorded at three stages: first, the expected values from the augmented reality system were documented; then the same values were calculated after each cut; and finally, the same measurements were recorded radiologically after the operations. For statistical analysis, Lin's concordance correlation coefficient was estimated, and the Wilcoxon signed-rank test was performed when needed. RESULTS A statistically significant difference between mean expected values and radiographic measurements was observed for femoral flexion only (Z score = 2.67, P value = 0.01); nonetheless, this difference was statistically significantly lower than 1 degree (Z score = -4.21, P value < 0.01). Regarding discrepancies between expected values and controlled measurements, a statistically significant difference in tibial varus values was detected (Z score = -2.33, P value = 0.02), which was also statistically significantly lower than 1 degree (Z score = -4.99, P value < 0.01). CONCLUSION The results indicate satisfactory postoperative coronal alignment without outliers across all three implants used. Augmented reality navigation systems can bolster orthopaedic surgeons' accuracy in achieving precise axial alignment; however, further research is required to evaluate their efficacy and potential.
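For readers unfamiliar with the two statistics named in the methods above, the sketch below computes Lin's concordance correlation coefficient and runs a Wilcoxon signed-rank test on hypothetical paired measurements; it reproduces the type of analysis, not the study's data.

```python
# Sketch: Lin's concordance correlation coefficient and the Wilcoxon
# signed-rank test for paired expected vs. radiographic measurements.
import numpy as np
from scipy.stats import wilcoxon

def lins_ccc(x, y):
    """Lin's CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical paired femoral-flexion values (degrees): AR-expected vs. radiographic.
expected     = np.array([3.0, 2.5, 4.0, 3.5, 2.0, 3.0, 4.5, 2.5])
radiographic = np.array([3.4, 2.9, 4.2, 3.9, 2.4, 3.3, 4.9, 3.0])

print("Lin's CCC:", round(lins_ccc(expected, radiographic), 3))
stat, p = wilcoxon(expected, radiographic)
print(f"Wilcoxon signed-rank: statistic={stat}, p={p:.3f}")
```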
Objective: This study aimed to explore the applications of three-dimensional (3D) technology, including virtual reality, augmented reality (AR), and 3D printing, in medicine, particularly in renal interventions for cancer treatment. Methods: Specialized software transforms 2D medical images into precise 3D digital models, facilitating improved anatomical understanding and surgical planning. Patient-specific 3D-printed anatomical models are utilized for preoperative planning, intraoperative guidance, and surgical education. AR technology enables the overlay of digital perceptions onto real-world surgical environments. Results: Patient-specific 3D-printed anatomical models have multiple applications, such as preoperative planning, intraoperative guidance, trainee education, and patient counseling. Virtual reality replaces the real world with a computer-generated 3D environment, while AR overlays digitally created perceptions onto the existing reality. Advances in 3D modeling technology have sparked considerable interest in their application to partial nephrectomy for renal cancer. 3D printing, also known as additive manufacturing, constructs 3D objects from computer-aided designs or digital 3D models. Using 3D-printed preoperative renal models benefits surgical planning, offering a more reliable assessment of the tumor's relationship with vital anatomical structures and enabling better preparation for procedures. AR technology allows surgeons to visualize patient-specific renal anatomical structures and their spatial relationships with surrounding organs by projecting CT/MRI images onto a live laparoscopic video. Incorporating patient-specific 3D digital models into healthcare enhances best practice, resulting in improved patient care, increased patient satisfaction, and cost savings for the healthcare system.
Overtaking is a crucial maneuver in road transportation that requires a clear view of the road ahead. However, limited visibility past the vehicle ahead can make it challenging for drivers to assess the safety of overtaking maneuvers, leading to accidents and fatalities. In this paper, we consider atrous convolution, a powerful tool for explicitly adjusting the field of view of a filter and for controlling the resolution of feature responses generated by deep convolutional neural networks, in the context of semantic image segmentation. This article explores the potential of see-through vehicles as a solution to enhance overtaking safety. See-through vehicles leverage advanced technologies such as cameras, sensors, and displays to provide drivers with a real-time view of the vehicle ahead, including areas hidden from their direct line of sight. To address the problems of safe passing and occlusion by large vehicles, we designed a see-through vehicle system using a windshield display in the rear car together with cameras in both cars. A server in the rear car segments the car ahead, and the video stream from the front car is displayed on the segmented portion. Our see-through system enlarges the driver's field of vision and helps the driver change lanes, pass a large vehicle that is blocking the view, and safely overtake other vehicles. Our network was trained and tested on the Cityscapes dataset using semantic segmentation and achieved an F1-score of 97.1%. This transparent technique informs the driver of the concealed traffic situation that the front vehicle has obscured. The article also discusses the challenges and opportunities of implementing see-through vehicles in real-world scenarios, including technical, regulatory, and user-acceptance factors.
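The abstract's mention of atrous convolution can be made concrete with a short sketch: in PyTorch, the dilation argument of a standard 2D convolution enlarges the filter's field of view without adding parameters or reducing feature-map resolution. This is a generic illustration of the operation, not the paper's segmentation network.

```python
# Sketch: atrous (dilated) convolution enlarges the receptive field without
# extra parameters or loss of resolution (generic example, not the paper's model).
import torch
import torch.nn as nn

x = torch.randn(1, 3, 128, 256)  # dummy RGB input (batch, channels, H, W)

# Standard 3x3 convolution: receptive field 3x3.
conv_standard = nn.Conv2d(3, 16, kernel_size=3, padding=1)

# Atrous 3x3 convolution with dilation 2: receptive field 5x5, same output size,
# same number of weights. Setting padding = dilation keeps spatial dimensions.
conv_atrous = nn.Conv2d(3, 16, kernel_size=3, padding=2, dilation=2)

print(conv_standard(x).shape)  # torch.Size([1, 16, 128, 256])
print(conv_atrous(x).shape)    # torch.Size([1, 16, 128, 256])
print(sum(p.numel() for p in conv_standard.parameters()),
      sum(p.numel() for p in conv_atrous.parameters()))  # identical parameter counts
```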
Six-degrees-of-freedom (6DoF) input interfaces are essential for manipulating virtual objects through translation or rotation in three-dimensional (3D) space. A traditional outside-in tracking controller requires the installation of expensive hardware in advance. While inside-out tracking controllers have been proposed, they often suffer from limitations such as interaction restricted to the tracking range of the sensor (e.g., a sensor on the head-mounted display (HMD)) or the need to modify pose values to function as an input interface (e.g., a sensor on the controller). This study investigates 6DoF pose estimation methods that do not restrict the tracking range, using a smartphone as a controller in augmented reality (AR) environments. Our approach proposes methods for estimating the initial pose of the controller and correcting the pose using an inside-out tracking approach. In addition, seven pose estimation algorithms are presented as candidates, depending on the tracking range of the device sensor, the tracking method (e.g., marker recognition, visual-inertial odometry (VIO)), and whether modification of the initial pose is necessary. The performance of the algorithms was evaluated through two experiments (discrete and continuous data). The results demonstrate that correcting the initial pose improves final pose accuracy. Furthermore, they highlight the importance of selecting a tracking algorithm based on the tracking range of the devices and the actual input values of the 3D interaction.
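The pose-correction idea sketched in the abstract, an initial pose refined by subsequent inside-out tracking, can be expressed with 4×4 homogeneous transforms: the corrected pose is the initial controller pose composed with the relative motion reported by VIO. The code below is a simplified, assumed formulation for illustration, not the paper's seven algorithms, and all poses are hypothetical.

```python
# Sketch: composing an initial 6DoF pose with a relative VIO motion to obtain
# the corrected controller pose (simplified illustration only).
import numpy as np

def pose_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def rot_z(deg: float) -> np.ndarray:
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0,          0,         1]])

# Hypothetical initial pose of the smartphone controller in the AR world frame
# (e.g., obtained once via marker recognition).
T_world_init = pose_matrix(rot_z(30.0), np.array([0.2, 0.0, 1.0]))

# Hypothetical relative motion since initialization, reported by on-device VIO.
T_init_now = pose_matrix(rot_z(10.0), np.array([0.05, 0.02, 0.0]))

# Corrected current pose = initial pose composed with the relative motion.
T_world_now = T_world_init @ T_init_now
print(np.round(T_world_now, 3))
```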
Background A task assigned to space exploration satellites involves detecting the physical environment within a certain region of space. However, space detection data are complex and abstract, and are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment. Methods A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and correspondences between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The visualization process was optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results Sampling, feature extraction, and uniform visualization of detection data with complex types, long time spans, and uneven spatial distributions were achieved. Real-time visualization of large-scale spatial structures on augmented reality devices, particularly low-performance devices, was also investigated. Conclusions The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure and changes of the spatial environment using augmented reality, and assist in intuitively discovering spatial environmental events and evolutionary rules.
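The sketch below illustrates tone mapping by histogram equalization, the general technique named in the methods above, on synthetic data; it is not the paper's exact statistical variant or its detection data.

```python
# Sketch: histogram-equalization tone mapping of a scalar attribute field
# (generic technique; not the exact statistical variant used in the paper).
import numpy as np

def equalize(values: np.ndarray, bins: int = 256) -> np.ndarray:
    """Map values to [0, 1] so that their distribution becomes roughly uniform."""
    hist, edges = np.histogram(values, bins=bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                              # normalized cumulative distribution
    centers = 0.5 * (edges[:-1] + edges[1:])
    return np.interp(values, centers, cdf)      # map each value to its CDF position

# Synthetic, heavily skewed attribute data standing in for detection measurements.
rng = np.random.default_rng(0)
raw = rng.exponential(scale=3.0, size=10_000)
mapped = equalize(raw)
print("raw range:   ", round(raw.min(), 4), "-", round(raw.max(), 2))
print("mapped range:", round(mapped.min(), 4), "-", round(mapped.max(), 4))
```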
Water covers most of the Earth's surface, yet in many areas of the world it is nowhere near a good ecological or recreational state, and only a small fraction of it is potable. As climate-change-induced extreme weather events become ever more prevalent, further issues arise, such as worsening water quality. Protecting invaluable and usable drinking water is therefore critical. Environmental agencies must continuously check water sources to determine whether they are in a good or healthy state with respect to pollutant levels and ecological status. The currently available tools are better suited for stationary laboratory use, and domain specialists lack suitable tools for on-site visualisation and interactive exploration of environmental data; meanwhile, data collection for laboratory analysis requires substantial time and effort. We therefore developed an augmented reality system with a Microsoft HoloLens 2 device to explore the visualisation of water quality and status in situ. The developed prototype visualises geo-referenced sensor measurements incorporated into the perspective of the surroundings. Anyone interested in the condition of water bodies can quickly examine and retrieve an overview of water body status using augmented reality and then take the necessary steps to address the current situation.
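Placing geo-referenced sensor measurements into the perspective of the surroundings requires converting latitude/longitude into a local metric frame around the viewer. The sketch below uses a simple equirectangular (local tangent-plane) approximation; it is an assumed, simplified stand-in, not the HoloLens prototype's actual anchoring code, and all coordinates are hypothetical.

```python
# Sketch: converting geo-referenced sensor positions to local east/north offsets
# (metres) around the AR user, using an equirectangular approximation.
import math

EARTH_RADIUS_M = 6_371_000.0

def geo_to_local(lat_deg: float, lon_deg: float,
                 origin_lat_deg: float, origin_lon_deg: float):
    """Return (east, north) offsets in metres of a point relative to the origin."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    lat0, lon0 = math.radians(origin_lat_deg), math.radians(origin_lon_deg)
    east = (lon - lon0) * math.cos(lat0) * EARTH_RADIUS_M
    north = (lat - lat0) * EARTH_RADIUS_M
    return east, north

# Hypothetical sensor location vs. a hypothetical user position on a lakeshore.
sensor = (60.4521, 22.2680)
user = (60.4515, 22.2665)
e, n = geo_to_local(*sensor, *user)
print(f"place measurement label {e:.1f} m east, {n:.1f} m north of the user")
```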
Augmented Reality (AR) offers new opportunities for Citizen Science (CS) projects regarding data visualization, data collection, and training of participants. Since limited research exists on the use of AR in CS projects, an online survey of CS project managers was conducted in this study to determine the extent of its current use. The survey identifies the areas in which CS project managers themselves see the greatest potential for AR in their projects, as well as the reasons against its use. A total of 53 CS project managers participated in the survey and shared their opinions and concerns. Of all participating CS projects, only three currently use AR; however, 27 CS projects indicated that AR could be beneficial for their project. Projects with a geographic focus, in which participants are involved in collecting spatial data, expressed this opinion in particular. The projects identified potential for AR especially in the areas of data visualization and the attraction and motivation of participants. Arguments against the use of AR, named by 23 CS projects, include remote study areas, financial considerations, and the lack of a practical use case. This study shows initial trends regarding the use of AR in CS projects and highlights specific use cases for its application.
The impact of augmented reality (AR) technology on consumer behavior has attracted increasing academic attention. While early research has provided valuable insights, many challenges remain. This article reviews recent studies, analyzing AR's technical features, marketing concepts, and mechanisms of action from a consumer perspective. By refining existing frameworks and introducing a new model based on situation awareness theory, the paper offers a deeper exploration of AR marketing. Finally, it proposes directions for future research in this emerging field.
Virtual reality (VR) and augmented reality (AR) technologies have become increasingly important instruments in art education as information technology develops rapidly, transforming the conventional approach to art education. This study investigates the present situation, benefits, difficulties, and likely development trends of VR and AR technologies in art education. Through literature analysis and case studies, the paper presents the fundamental ideas of VR and AR technologies together with their various uses in art education, namely virtual museums, interactive art production, art history instruction, and remote art collaboration. The research examines how these technologies can improve students' immersion, raise their learning motivation, and encourage innovative ideas and multidisciplinary cooperation. Practical concerns, including technology costs, content production obstacles, user acceptance, privacy, and ethical questions, are also discussed. Finally, the article offers ideas and suggestions to help VR and AR technologies be effectively integrated into art education through teacher training, curriculum design, technology infrastructure development, and multidisciplinary cooperation. This study offers useful advice for art teachers as well as important references for legislators and technology developers working together to further the creative growth of art education.
Background: Augmented reality (AR) technology is used to reconstruct three-dimensional (3D) images of hepatic and biliary structures from computed tomography and magnetic resonance imaging data, and to superimpose the virtual images onto a view of the surgical field. In liver surgery, these superimposed virtual images help the surgeon to visualize intrahepatic structures and therefore to operate precisely and improve clinical outcomes. Data Sources: The keywords "augmented reality", "liver", "laparoscopic" and "hepatectomy" were used to search publications in the PubMed database. The primary sources were peer-reviewed journals up to December 2016; additional articles were identified by manually searching the references of the key articles. Results: In general, AR technology mainly comprises 3D reconstruction, display, registration, and tracking techniques, and has recently been adopted gradually for liver surgeries, including laparoscopy and laparotomy, with video-based AR-assisted laparoscopic resection as the main technical application. By applying AR technology, blood vessels and tumor structures in the liver can be displayed during surgery, which permits precise navigation during complex surgical procedures. Liver deformation and registration errors during surgery were the main factors limiting the application of AR technology. Conclusions: With recent advances, AR technologies have the potential to improve hepatobiliary surgical procedures. However, additional clinical studies will be required to evaluate AR as a tool for reducing postoperative morbidity and mortality and for improving long-term clinical outcomes. Future research is needed on the fusion of multiple imaging modalities, improving biomechanical liver modeling, and enhancing image-data processing and tracking technologies to increase the accuracy of current AR methods.
The popularity of wearable devices and smartphones has fueled the development of Mobile Augmented Reality (MAR), which provides immersive experiences over the real world using techniques such as computer vision and deep learning. However, hardware-specific MAR is costly and heavy, and app-based MAR requires an additional download and installation and lacks cross-platform ability. These limitations hamper the pervasive adoption of MAR. This paper argues that mobile Web AR (MWAR) holds the potential to become a practical and pervasive solution that can effectively scale to millions of end users, because MWAR can be developed as a lightweight, cross-platform, and low-cost solution for end-to-end delivery of MAR. The main challenges in making MWAR a reality lie in the low efficiency of dense computing in Web browsers, the large delay of real-time interactions over mobile networks, and the lack of standardization. The good news is that the newly emerging 5G and Beyond 5G (B5G) cellular networks can mitigate these issues to some extent via techniques such as network slicing, device-to-device communication, and mobile edge computing. In this paper, we first give an overview of the challenges and opportunities of MWAR in the 5G era. We then describe our design and development of a generic service-oriented framework (called MWAR5) that provides a scalable, flexible, and easy-to-deploy MWAR solution. We evaluate the performance of our MWAR5 system in an actually deployed 5G trial network under collaborative configurations, with encouraging results. Moreover, we share experiences and insights from our development and deployment, including some exciting future directions for MWAR over 5G and B5G networks.
To evaluate the feasibility and accuracy of a three-dimensional augmented reality system incorporating integral videography for imaging oral and maxillofacial regions, based on preoperative computed tomography data. Three-dimensional surface models of the jawbones, derived from the computed tomography data, were used to create the integral videography images of a subject's maxillofacial area. The three-dimensional augmented reality system (integral videography display, computed tomography, a position tracker, and a computer) was used to generate a three-dimensional overlay that was projected onto the surgical site via a half-silvered mirror. Thereafter, a feasibility study was performed on a volunteer, and the accuracy of the system was verified on a solid model while simulating bone resection. Positional registration was attained by identifying and tracking the positions of the patient and the surgical instrument. Thus, integral videography images of jawbones, teeth, and the surgical tool were superimposed in the correct position. Stereoscopic images viewed from various angles were accurately displayed, and changing the viewing angle did not impair the surgeon's ability to simultaneously observe the three-dimensional images and the patient without special glasses. The difference in the three-dimensional position of each measuring point between the solid model and the augmented reality navigation was almost negligible (<1 mm), indicating that the system is highly accurate. This augmented reality system was highly accurate and effective for surgical navigation and for overlaying a three-dimensional computed tomography image on a patient's surgical area, enabling the surgeon to understand the positional relationship between the preoperative image and the actual surgical site with the naked eye.
文摘Augmented reality(AR)is an emerging dynamic technology that effectively supports education across different levels.The increased use of mobile devices has an even greater impact.As the demand for AR applications in education continues to increase,educators actively seek innovative and immersive methods to engage students in learning.However,exploring these possibilities also entails identifying and overcoming existing barriers to optimal educational integration.Concurrently,this surge in demand has prompted the identification of specific barriers,one of which is three-dimensional(3D)modeling.Creating 3D objects for augmented reality education applications can be challenging and time-consuming for the educators.To address this,we have developed a pipeline that creates realistic 3D objects from the two-dimensional(2D)photograph.Applications for augmented and virtual reality can then utilize these created 3D objects.We evaluated the proposed pipeline based on the usability of the 3D object and performance metrics.Quantitatively,with 117 respondents,the co-creation team was surveyed with openended questions to evaluate the precision of the 3D object created by the proposed photogrammetry pipeline.We analyzed the survey data using descriptive-analytical methods and found that the proposed pipeline produces 3D models that are positively accurate when compared to real-world objects,with an average mean score above 8.This study adds new knowledge in creating 3D objects for augmented reality applications by using the photogrammetry technique;finally,it discusses potential problems and future research directions for 3D objects in the education sector.
基金the National Natural Science Foundation of China(Nos.82330063 and M-0019)the Interdisciplinary Program of Shanghai Jiao Tong University(Nos.YG2022QN056,YG2023ZD19,and YG2023ZD15)+2 种基金the Cross Disciplinary Research Fund of Shanghai Ninth People’s Hospital,Shanghai Jiao Tong University School of Medicine(No.JYJC202115)the Translation Clinical R&D Project of Medical Robot of Shanghai Ninth People’s Hospital,Shanghai Jiao Tong University School of Medicine(No.IMR-NPH202002)the Shanghai Key Clinical Specialty,Shanghai Eye Disease Research Center(No.2022ZZ01003)。
文摘Endoscopic transnasal optic nerve decompression surgery plays a crucial role in minimal invasive treatment of complex traumatic optic neuropathy.However,a major challenge faced during the procedure is the inability to visualize the optic nerve intraoperatively.To address this issue,an endoscopic image-based augmented reality surgical navigation system is developed in this study.The system aims to virtually fuse the optic nerve onto the endoscopic images,assisting surgeons in determining the optic nerve’s position and reducing surgical risks.First,a calibration algorithm based on a checkerboard grid of immobile points is proposed,building upon existing calibration methods.Additionally,to tackle accuracy issues associated with augmented reality technology,an optical navigation and visual fusion compensation algorithm is proposed to improve the intraoperative tracking accuracy.To evaluate the system’s performance,model experiments were meticulously designed and conducted.The results confirm the accuracy and stability of the proposed system,with an average tracking error of(0.99±0.46)mm.This outcome demonstrates the effectiveness of the proposed algorithm in improving the augmented reality surgical navigation system’s accuracy.Furthermore,the system successfully displays hidden optic nerves and other deep tissues,thus showcasing the promising potential for future applications in orbital and maxillofacial surgery.
基金Supported by The Hunan Provincial Natural Science Foundation of China,No.2023JJ30773,No.2025JJ60480,and No.2025JJ60552The Scientific Research Program of The Hunan Provincial Health Commission,No.202204072544+4 种基金The Science and Technology Innovation Program of Hunan Province,No.2024RC3053The CBT ECR/MCR Scheme,No.324910-0028/07National Natural Science Foundation of China,No.32300652The Scientific Research Program of Hunan Provincial Health Commission,No.W20243023The Scientific Research Launch Project for New Employees of The Second Xiangya Hospital of Central South University.
文摘Augmented reality(AR)is a technology that superimposes digital information onto real-world objects via head-mounted display devices to improve surgical finesse through visually enhanced medical information.With the rapid development of digital technology,AR has been increasingly adopted in orthopedic surgeries across the globe,especially in total knee arthroplasty procedures which demand high precision.By overlaying digital information onto the surgeon's field of view,AR systems enhance precision,improve alignment accuracy,and reduce the risk of complications associated with malalignment.Some concerns have been raised despite accuracy,including the learning curve,long-term outcomes,and technical limitations.Furthermore,it is essential for health practitioners to gain trust in the utilisation of AR.
基金the National Natural Science Foundation of China(No.11502146)the 1 Batch of 2021 MOE of PRC Industry University Collaborative Education Program(No.202101042008)。
文摘The aim of this study was to assess the potential of surgical guides as a complementary tool to augmented reality(AR)in enhancing the safety and precision of pedicle screw placement in spinal surgery.Four trainers were divided into the AR navigation group using surgical guides and the free-hand group.Each group consisted of a novice and an experienced spine surgeon.A total of 80 pedicle screws were implanted.First,the AR group reconstructed the 3D model and planned the screw insertion route according to the computed tomography data of L2 lumbar vertebrae.Then,the Microsoft HoloLens™2 was used to identify the vertebral model,and the planned virtual path was superimposed on the real cone model.Next,the screw was placed according to the projected trajectory.Finally,Micron Tracker was used to measure the deviation of screws from the preoperatively planned trajectory,and pedicle screws were evaluated using the Gertzbein-Robbins scale.In the AR group,the linear deviations of the experienced doctor and the novice were(1.59±0.39)mm and(1.73±0.52)mm respectively,and the angle deviations were 2.72°±0.61°and 2.87°±0.63°respectively.In the free-hand group,the linear deviations of the experienced doctor and the novice were(2.88±0.58)mm and(5.25±0.62)mm respectively,and the angle deviations were 4.41°±1.18°and 7.15°±1.45°respectively.Both kinds of deviations between the two groups were significantly different(P<0.05).The screw accuracy rate was 95%in the AR navigation group and 77.5%in the free-hand group.The results of this study indicate that the integration of surgical guides and AR is an innovative technique that can substantially enhance the safety and precision of spinal surgery and assist inexperienced doctors in completing the surgery.
基金the University of Shanghai for Science and Technology’s Medical Engineering Interdisciplinary Project(No.10-22-308-520)the Ministry of Education’s First Batch of Industry-Education Cooperation Collaborative Education Projects(No.202101042008)+1 种基金the Fundamental Research Funds for the Central Universities(No.YG2019QNA34)the Shanghai Municipal Health Commission for Youth Clinical Research Project(No.20194Y0134)。
文摘The purpose of this study is to establish a multivariate nonlinear regression mathematical model to predict the displacement of tumor during brain tumor resection surgery.And the study will be integrated with augmented reality technology to achieve three-dimensional visualization,thereby enhancing the complete resection rate of tumor and the success rate of surgery.Based on the preoperative MRI data of the patients,a 3D virtual model is reconstructed and 3D printed.A brain biomimetic model is created using gel injection molding.By considering cerebrospinal fluid loss and tumor cyst fluid loss as independent variables,the highest point displacement in the vertical bone window direction is determined as the dependent variable after positioning the patient for surgery.An orthogonal experiment is conducted on the biomimetic model to establish a predictive model,and this model is incorporated into the augmented reality navigation system.To validate the predictive model,five participants wore HoloLens2 devices,overlaying the patient’s 3D virtual model onto the physical head model.Subsequently,the spatial coordinates of the tumor’s highest point after displacement were measured on both the physical and virtual models(actual coordinates and predicted coordinates,respectively).The difference between these coordinates represents the model’s prediction error.The results indicate that the measured and predicted errors for the displacement of the tumor’s highest point on the X and Y axes range from−0.6787 mm to 0.2957 mm and−0.4314 mm to 0.2253 mm,respectively.The relative errors for each experimental group are within 10%,demonstrating a good fit of the model.This method of establishing a regression model represents a preliminary attempt to predict brain tumor displacement in specific situations.It also provides a new approach for surgeons.By combining augmented reality visualization,it addresses the need for predicting tumor displacement and precisely locating brain anatomical structures in a simple and cost-effective manner.
文摘With the iteration and upgrading of medical technology and the continuous growth of public health demands,the quality of nursing services has become a core indicator for measuring the effectiveness of the medical system.The clinical practice ability of nursing staff is directly related to the safety of patient diagnosis and treatment and the rehabilitation process.However,the current clinical nursing talent training model is facing bottlenecks such as limited practical scenarios and fragmented case cognition.This study focuses on the teaching application of augmented reality(AR)technology in hospital Settings and systematically reviews the research progress on the improvement of clinical practice ability of trainee nurses based on the AR immersive teaching model.By constructing a clinical teaching scenario that integrates virtual and real,AR technology can dynamically simulate complex case handling processes and enhance nursing students’three-dimensional cognition of condition assessment,operation norms,and emergency plans.Hospitals,as the core base for practical teaching,can effectively shorten the connection cycle between theoretical teaching and clinical practice by integrating AR technology,improve the clinical practice level of trainee nurses,and provide an innovative model for optimizing the path of clinical nursing talent cultivation.
基金the Shanghai Municipal Education Commission-Gaofeng Clinical Medicine Grant(No.20172005)。
文摘Closed thoracic drainage can be performed using a steel-needle-guided chest tube to treat pleural effusion or pneumothorax in clinics.However,the puncture procedure during surgery is invisible,increasing the risk of surgical failure.Therefore,it is necessary to design a visualization system for closed thoracic drainage.Augmented reality(AR)technology can assist in visualizing the internal anatomical structure and determining the insertion point on the body surface.The structure of the currently used steel-needle-guided chest tube was modified by integrating it with an ultrafine diameter camera to provide real-time visualization of the puncture process.After simulation experiments,the overall registration error of the AR method was measured to be within(3.59±0.53)mm,indicating its potential for clinical application.The ultrafine diameter camera module and improved steel-needle-guided chest tube can timely reflect the position of the needle tip in the human body.A comparative experiment showed that video guidance could improve the safety of the puncture process compared to the traditional method.Finally,a qualitative evaluation of the usability of the system was conducted through a questionnaire.This system facilitates the visualization of closed thoracic drainage puncture procedure and pro-vides an implementation scheme to enhance the accuracy and safety of the operative step,which is conducive to reducing the learning curve and improving the proficiency of the doctors.
文摘BACKGROUND Computer-assisted systems obtained an increased interest in orthopaedic surgery over the last years,as they enhance precision compared to conventional hardware.The expansion of computer assistance is evolving with the employment of augmented reality.Yet,the accuracy of augmented reality navigation systems has not been determined.AIM To examine the accuracy of component alignment and restoration of the affected limb’s mechanical axis in primary total knee arthroplasty(TKA),utilizing an augmented reality navigation system and to assess whether such systems are conspicuously fruitful for an accomplished knee surgeon.METHODS From May 2021 to December 2021,30 patients,25 women and five men,under-went a primary unilateral TKA.Revision cases were excluded.A preoperative radiographic procedure was performed to evaluate the limb’s axial alignment.All patients were operated on by the same team,without a tourniquet,utilizing three distinct prostheses with the assistance of the Knee+™augmented reality navigation system in every operation.Postoperatively,the same radiographic exam protocol was executed to evaluate the implants’position,orientation and coronal plane alignment.We recorded measurements in 3 stages regarding femoral varus and flexion,tibial varus and posterior slope.Firstly,the expected values from the Augmented Reality system were documented.Then we calculated the same values after each cut and finally,the same measurements were recorded radiolo-gically after the operations.Concerning statistical analysis,Lin’s concordance correlation coefficient was estimated,while Wilcoxon Signed Rank Test was performed when needed.RESULTS A statistically significant difference was observed regarding mean expected values and radiographic mea-surements for femoral flexion measurements only(Z score=2.67,P value=0.01).Nonetheless,this difference was statistically significantly lower than 1 degree(Z score=-4.21,P value<0.01).In terms of discrepancies in the calculations of expected values and controlled measurements,a statistically significant difference between tibial varus values was detected(Z score=-2.33,P value=0.02),which was also statistically significantly lower than 1 degree(Z score=-4.99,P value<0.01).CONCLUSION The results indicate satisfactory postoperative coronal alignment without outliers across all three different implants utilized.Augmented reality navigation systems can bolster orthopaedic surgeons’accuracy in achieving precise axial alignment.However,further research is required to further evaluate their efficacy and potential.
文摘Objective:This study aimed to explore the applications of three-dimensional (3D) technology, including virtual reality, augmented reality (AR), and 3D printing system, in the field of medicine, particularly in renal interventions for cancer treatment.Methods:A specialized software transforms 2D medical images into precise 3D digital models, facilitating improved anatomical understanding and surgical planning. Patient-specific 3D printed anatomical models are utilized for preoperative planning, intraoperative guidance, and surgical education. AR technology enables the overlay of digital perceptions onto real-world surgical environments.Results:Patient-specific 3D printed anatomical models have multiple applications, such as preoperative planning, intraoperative guidance, trainee education, and patient counseling. Virtual reality involves substituting the real world with a computer-generated 3D environment, while AR overlays digitally created perceptions onto the existing reality. The advances in 3D modeling technology have sparked considerable interest in their application to partial nephrectomy in the realm of renal cancer. 3D printing, also known as additive manufacturing, constructs 3D objects based on computer-aided design or digital 3D models. Utilizing 3D-printed preoperative renal models provides benefits for surgical planning, offering a more reliable assessment of the tumor's relationship with vital anatomical structures and enabling better preparation for procedures. AR technology allows surgeons to visualize patient-specific renal anatomical structures and their spatial relationships with surrounding organs by projecting CT/MRI images onto a live laparoscopic video. Incorporating patient-specific 3D digital models into healthcare enhances best practice, resulting in improved patient care, increased patient satisfaction, and cost saving for the healthcare system.
基金financially supported by the Ministry of Trade,Industry and Energy(MOTIE)and Korea Institute for Advancement of Technology(KIAT)through the International Cooperative R&D Program(Project No.P0016038)supported by the MSIT(Ministry of Sci-ence and ICT),Korea,under the ITRC(Information Technology Research Center)support program(IITP-2022-RS-2022-00156354)supervised by the IITP(Institute for Information&Communications Technology Planning&Evaluation).
文摘Overtaking is a crucial maneuver in road transportation that requires a clear view of the road ahead.However,limited visibility of ahead vehicles can often make it challenging for drivers to assess the safety of overtaking maneuvers,leading to accidents and fatalities.In this paper,we consider atrous convolution,a powerful tool for explicitly adjusting the field-of-view of a filter as well as controlling the resolution of feature responses generated by Deep Convolutional Neural Networks in the context of semantic image segmentation.This article explores the potential of seeing-through vehicles as a solution to enhance overtaking safety.See-through vehicles leverage advanced technologies such as cameras,sensors,and displays to provide drivers with a real-time view of the vehicle ahead,including the areas hidden from their direct line of sight.To address the problems of safe passing and occlusion by huge vehicles,we designed a see-through vehicle system in this study,we employed a windshield display in the back car together with cameras in both cars.The server within the back car was used to segment the car,and the segmented portion of the car displayed the video from the front car.Our see-through system improves the driver’s field of vision and helps him change lanes,cross a large car that is blocking their view,and safely overtake other vehicles.Our network was trained and tested on the Cityscape dataset using semantic segmentation.This transparent technique will instruct the driver on the concealed traffic situation that the front vehicle has obscured.For our findings,we have achieved 97.1% F1-score.The article also discusses the challenges and opportunities of implementing see-through vehicles in real-world scenarios,including technical,regulatory,and user acceptance factors.
文摘Six degrees of freedom(6DoF)input interfaces are essential formanipulating virtual objects through translation or rotation in three-dimensional(3D)space.A traditional outside-in tracking controller requires the installation of expensive hardware in advance.While inside-out tracking controllers have been proposed,they often suffer from limitations such as interaction limited to the tracking range of the sensor(e.g.,a sensor on the head-mounted display(HMD))or the need for pose value modification to function as an input interface(e.g.,a sensor on the controller).This study investigates 6DoF pose estimation methods without restricting the tracking range,using a smartphone as a controller in augmented reality(AR)environments.Our approach involves proposing methods for estimating the initial pose of the controller and correcting the pose using an inside-out tracking approach.In addition,seven pose estimation algorithms were presented as candidates depending on the tracking range of the device sensor,the tracking method(e.g.,marker recognition,visual-inertial odometry(VIO)),and whether modification of the initial pose is necessary.Through two experiments(discrete and continuous data),the performance of the algorithms was evaluated.The results demonstrate enhanced final pose accuracy achieved by correcting the initial pose.Furthermore,the importance of selecting the tracking algorithm based on the tracking range of the devices and the actual input value of the 3D interaction was emphasized.
文摘Background A task assigned to space exploration satellites involves detecting the physical environment within a certain space.However,space detection data are complex and abstract.These data are not conducive for researchers'visual perceptions of the evolution and interaction of events in the space environment.Methods A time-series dynamic data sampling method for large-scale space was proposed for sample detection data in space and time,and the corresponding relationships between data location features and other attribute features were established.A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data.The visualization process is optimized for rendering by merging materials,reducing the number of patches,and performing other operations.Results The results of sampling,feature extraction,and uniform visualization of the detection data of complex types,long duration spans,and uneven spatial distributions were obtained.The real-time visualization of large-scale spatial structures using augmented reality devices,particularly low-performance devices,was also investigated.Conclusions The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space,express the structure and changes in the spatial environment using augmented reality,and assist in intuitively discovering spatial environmental events and evolutionary rules.
基金supported by the Freshwater Competence Centre,Academy of Finland(Decision No.345008)the Nordic University Cooperation on Edge Intelligence(Grant No.168043).
文摘Water covers most of the Earth’s surface and is nowhere near a good ecological or recreational state in many areas of the world.Moreover,only a small fraction of the water is potable.As climate change-induced extreme weather events become ever more prevalent,more and more issues arise,such as worsening water quality problems.Therefore,protecting invaluable and useable drinking water is critical.Environmental agencies must continuously check water sources to determine whether they are in a good or healthy state regarding pollutant levels and ecological status.The currently available tools are better suited for stationary laboratory use,and domain specialists lack suitable tools for onsite visualisation and interactive exploration of environmental data.Meanwhile,data collection for laboratory analysis requires substantial time and significant effort.We,therefore,developed an augmented reality system with a Microsoft HoloLens 2 device to explore the visualisation of water quality and status in situ.The developed prototype visualises geo-referenced sensor measurements incorporated into the perspective of the surroundings.Any users interested in water bodies’conditions can quickly examine and retrieve an overview of water body status using augmented reality and then take necessary steps to address the current situation.
Funding: Supported by the Open Access Publication Funds of Technische Universität Braunschweig.
Abstract: Augmented reality (AR) offers new opportunities for citizen science (CS) projects in data visualization, data collection, and the training of participants. Because little research exists on the use of AR in CS projects, this study conducted an online survey of CS project managers to determine the extent of its current use. The survey identifies the areas in which CS project managers themselves see the greatest potential for AR in their projects, as well as the reasons that argue against its use. A total of 53 CS project managers participated in the survey and shared their opinions and concerns. Of the participating projects, only three currently use AR, but 27 indicated that AR could benefit their project; projects with a geographic focus, in which participants help collect spatial data, expressed this view in particular. The projects identified the greatest potential for AR in data visualization and in attracting and motivating participants. Arguments against the use of AR, named by 23 projects, include remote study areas, financial considerations, and the lack of a practical use case. This study shows initial trends regarding the use of AR in CS projects and highlights specific use cases for its application.
Funding: Guizhou University of Finance and Economics 2024 Student Self-Funded Research Project (Project No. 2024ZXSY001).
Abstract: The impact of augmented reality (AR) technology on consumer behavior has attracted increasing academic attention. While early research has provided valuable insights, many challenges remain. This article reviews recent studies, analyzing AR's technical features, marketing concepts, and mechanisms of action from a consumer perspective. By refining existing frameworks and introducing a new model grounded in situation awareness theory, the paper offers a deeper exploration of AR marketing. Finally, it proposes directions for future research in this emerging field.
Abstract: As information technology develops rapidly, virtual reality (VR) and augmented reality (AR) technologies have become increasingly important instruments in art education, transforming the conventional approach to teaching art. This study investigates the present situation, benefits, difficulties, and likely development trends of VR and AR technologies in art education. Through literature analysis and case studies, the paper presents the fundamental ideas of VR and AR technologies together with their various uses in art education, namely virtual museums, interactive art production, art history instruction, and remote art collaboration. The research examines how these technologies might improve students' immersion, raise their learning motivation, and encourage innovative ideas and multidisciplinary cooperation. Practical concerns, including technology costs, content production obstacles, user acceptance, privacy, and ethical questions, are also discussed. Finally, the article offers ideas and suggestions to help VR and AR technologies be effectively integrated into art education through teacher training, curriculum design, technology infrastructure development, and multidisciplinary cooperation. This study offers useful advice for art teachers as well as important references for legislators and technology developers working together to further the creative growth of art education.
Funding: Supported by grants from the Mission Plan Program of Beijing Municipal Administration of Hospitals (SML20152201), Beijing Municipal Administration of Hospitals Clinical Medicine Development Special Funding (ZYLX201712), the National Natural Science Foundation of China (81427803), and the Beijing Tsinghua Changgung Hospital Fund (12015C1039).
Abstract: Background: Augmented reality (AR) technology is used to reconstruct three-dimensional (3D) images of hepatic and biliary structures from computed tomography and magnetic resonance imaging data and to superimpose the virtual images onto a view of the surgical field. In liver surgery, these superimposed virtual images help the surgeon visualize intrahepatic structures and therefore operate precisely and improve clinical outcomes. Data Sources: The keywords "augmented reality", "liver", "laparoscopic" and "hepatectomy" were used to search publications in the PubMed database. The primary sources were peer-reviewed journals up to December 2016; additional articles were identified by a manual search of references found in the key articles. Results: AR technology mainly comprises 3D reconstruction, display, registration, and tracking techniques and has recently been adopted gradually for liver surgeries, including laparoscopy and laparotomy, with video-based AR-assisted laparoscopic resection as the main technical application. By applying AR technology, blood vessels and tumor structures in the liver can be displayed during surgery, which permits precise navigation during complex surgical procedures. Intraoperative liver deformation and registration errors were the main factors limiting the application of AR technology. Conclusions: With recent advances, AR technologies have the potential to improve hepatobiliary surgical procedures. However, additional clinical studies will be required to evaluate AR as a tool for reducing postoperative morbidity and mortality and for improving long-term clinical outcomes. Future research is needed on the fusion of multiple imaging modalities, improved biomechanical liver modeling, and enhanced image data processing and tracking technologies to increase the accuracy of current AR methods.
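Superimposing a reconstructed vessel or tumor model onto the laparoscopic view ultimately comes down to projecting registered 3D points through the camera's intrinsic matrix onto the video frame. The pinhole-projection sketch below is a generic illustration of that overlay step; it is not taken from any of the reviewed systems, and the names are ours.

```python
import numpy as np

def project_points(points_3d: np.ndarray, K: np.ndarray,
                   R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Project registered 3D model points (N x 3, in the patient/CT frame)
    into pixel coordinates of the laparoscopic camera.

    K    : 3x3 camera intrinsic matrix
    R, t : rigid transform (camera <- patient) obtained from registration
    """
    cam = (R @ points_3d.T).T + t          # transform into the camera frame
    uv = (K @ cam.T).T                     # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]          # perspective divide -> pixel coordinates

# Toy usage: a point one metre straight ahead projects to the principal point.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
print(project_points(np.array([[0.0, 0.0, 1.0]]), K, np.eye(3), np.zeros(3)))  # -> [[320. 240.]]
```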
Funding: Supported in part by the National Key R&D Program of China under Grant 2018YFE0205503, in part by the National Natural Science Foundation of China (NSFC) under Grant 61671081, in part by the Funds for International Cooperation and Exchange of NSFC under Grant 61720106007, in part by the 111 Project under Grant B18008, in part by the Beijing Natural Science Foundation under Grant 4172042, in part by the Fundamental Research Funds for the Central Universities under Grant 2018XKJC01, and in part by the BUPT Excellent Ph.D. Students Foundation under Grant CX2019213.
Abstract: The popularity of wearable devices and smartphones has fueled the development of Mobile Augmented Reality (MAR), which provides immersive experiences over the real world using techniques such as computer vision and deep learning. However, hardware-specific MAR is costly and heavy, and App-based MAR requires an additional download and installation and lacks cross-platform portability. These limitations hamper the pervasive adoption of MAR. This paper argues that mobile Web AR (MWAR) has the potential to become a practical and pervasive solution that can scale effectively to millions of end-users, because MWAR can be developed as a lightweight, cross-platform, and low-cost vehicle for end-to-end delivery of MAR. The main challenges to making MWAR a reality lie in the low efficiency of dense computing in Web browsers, the large delays of real-time interaction over mobile networks, and the lack of standardization. The good news is that the newly emerging 5G and Beyond-5G (B5G) cellular networks can mitigate these issues to some extent through techniques such as network slicing, device-to-device communication, and mobile edge computing. In this paper, we first give an overview of the challenges and opportunities of MWAR in the 5G era. We then describe our design and development of a generic service-oriented framework (called MWAR5) that provides a scalable, flexible, and easy-to-deploy MWAR solution. We evaluate the performance of the MWAR5 system in an actually deployed 5G trial network under collaborative configurations, with encouraging results. Moreover, we share experiences and insights from our development and deployment, including some exciting future directions for MWAR over 5G and B5G networks.
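The role of mobile edge computing in cutting interaction delay can be illustrated with a simple latency budget: a frame's recognition task is worth offloading only when the uplink transfer, edge computation, and downlink result together beat on-device processing. The sketch below is a generic back-of-the-envelope illustration of that trade-off, not part of the MWAR5 framework; all names and figures are hypothetical.

```python
def offload_latency_ms(input_kb: float, result_kb: float,
                       uplink_mbps: float, downlink_mbps: float,
                       edge_compute_ms: float) -> float:
    """Round-trip latency (ms) of sending one frame's features to an edge server
    and receiving the recognition result (Mbit/s == kbit/ms)."""
    uplink_ms = input_kb * 8.0 / uplink_mbps
    downlink_ms = result_kb * 8.0 / downlink_mbps
    return uplink_ms + edge_compute_ms + downlink_ms

def should_offload(local_compute_ms: float, **kwargs) -> bool:
    """Offload only when the edge round trip beats on-device computation."""
    return offload_latency_ms(**kwargs) < local_compute_ms

# Toy usage: a 5G-like uplink makes offloading a 200 kB feature packet worthwhile.
print(should_offload(local_compute_ms=120.0,
                     input_kb=200.0, result_kb=2.0,
                     uplink_mbps=50.0, downlink_mbps=100.0,
                     edge_compute_ms=25.0))  # -> True (~57 ms round trip)
```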
Funding: Supported by a Grant-in-Aid for Scientific Research (22659366) from the Japan Society for the Promotion of Science.
Abstract: This study evaluated the feasibility and accuracy of a three-dimensional augmented reality system incorporating integral videography for imaging oral and maxillofacial regions, based on preoperative computed tomography data. Three-dimensional surface models of the jawbones, built from the computed tomography data, were used to create integral videography images of a subject's maxillofacial area. The three-dimensional augmented reality system (integral videography display, computed tomography, a position tracker, and a computer) generated a three-dimensional overlay that was projected onto the surgical site via a half-silvered mirror. A feasibility study was then performed on a volunteer, and the accuracy of the system was verified on a solid model while simulating bone resection. Positional registration was attained by identifying and tracking the positions of the patient and the surgical instrument, so that integral videography images of the jawbones, teeth, and surgical tool were superimposed in the correct position. Stereoscopic images viewed from various angles were displayed accurately, and changing the viewing angle did not impair the surgeon's ability to observe the three-dimensional images and the patient simultaneously without special glasses. The difference in three-dimensional position between each measuring point on the solid model and its counterpart in the augmented reality navigation was almost negligible (<1 mm), indicating that the system was highly accurate. This augmented reality system was highly accurate and effective for surgical navigation and for overlaying a three-dimensional computed tomography image onto a patient's surgical area, enabling the surgeon to understand the positional relationship between the preoperative image and the actual surgical site with the naked eye.
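Positional registration of this kind is commonly solved as a rigid alignment between corresponding fiducial points measured by the position tracker and their counterparts in the CT coordinate system; a standard way to compute it is the SVD-based (Kabsch) least-squares solution. The sketch below is a generic illustration of point-based registration and its residual error, not the paper's specific algorithm; the names are ours.

```python
import numpy as np

def rigid_registration(ct_points: np.ndarray, tracker_points: np.ndarray):
    """Least-squares rigid transform (R, t) mapping CT-space fiducials onto the
    same fiducials measured by the position tracker (both arrays are N x 3)."""
    ct_centroid = ct_points.mean(axis=0)
    tr_centroid = tracker_points.mean(axis=0)
    H = (ct_points - ct_centroid).T @ (tracker_points - tr_centroid)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tr_centroid - R @ ct_centroid
    return R, t

def registration_error_mm(ct_points, tracker_points, R, t):
    """Root-mean-square residual of the alignment (same units as the input, e.g., mm)."""
    residuals = tracker_points - (ct_points @ R.T + t)
    return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))

# Toy usage: tracker points are a known rotation + translation of the CT fiducials.
ct = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], dtype=float)
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
tracker = ct @ R_true.T + np.array([5.0, -2.0, 1.0])
R_est, t_est = rigid_registration(ct, tracker)
print(registration_error_mm(ct, tracker, R_est, t_est))  # ~0.0 for noise-free data
```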