Funding: National Natural Science Foundation of China (62171305, 62405206, 62004135, 62001317, 62111530301); Natural Science Foundation of Jiangsu Province (BK20240778, BK20241917); State Key Laboratory of Advanced Optical Communication Systems and Networks, China (2023GZKF08); China Postdoctoral Science Foundation (2024M752314); Postdoctoral Fellowship Program of CPSF (GZC20231883); Innovative and Entrepreneurial Talent Program of Jiangsu Province (JSSCRC2021527).
Abstract: Photonic platforms are gradually emerging as a promising option for meeting the ever-growing demand of artificial intelligence, among which photonic time-delay reservoir computing (TDRC) is widely anticipated. While such a computing paradigm needs only a single photonic device as the nonlinear node for data processing, its performance relies heavily on the fading memory provided by the delay feedback loop (FL), which restricts the extensibility of physical implementations, especially for highly integrated chips. Here, we present a simplified photonic scheme that allows more flexible parameter configurations by leveraging a designed quasi-convolution coding (QC), which completely removes the dependence on the FL. Unlike delay-based TDRC, the encoded data in QC-based RC (QRC) enable temporal feature extraction, facilitating augmented memory capability. Thus, the proposed QRC can handle time-related tasks and sequential data without implementing an FL. Furthermore, the hardware can be implemented with a low-power, easily integrable vertical-cavity surface-emitting laser for high-performance parallel processing. We validate the concept through simulations and experiments comparing QRC and TDRC, in which the structurally simpler QRC performs better across various benchmark tasks. Our results point to a promising route toward the hardware implementation of deep neural networks.
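As context for the reservoir-computing framework compared above, the sketch below shows only the generic RC training step: collecting nonlinear node states and fitting a linear readout by ridge regression. The tanh node, random input mask, and one-step memory task are illustrative stand-ins and do not model the authors' photonic QRC or TDRC hardware.

```python
# Minimal sketch of a generic reservoir-computing readout (ridge regression on
# collected node states). Illustrates the RC training principle only; the
# quasi-convolution coding and the photonic device are NOT modeled here.
import numpy as np

def train_readout(states, targets, ridge=1e-6):
    """Fit linear readout weights W so that states @ W approximates targets."""
    A = states.T @ states + ridge * np.eye(states.shape[1])
    return np.linalg.solve(A, states.T @ targets)

# Toy usage: a tanh nonlinearity driven by a random input mask stands in for
# the nonlinear node (purely illustrative).
rng = np.random.default_rng(0)
u = rng.normal(size=500)                  # scalar input sequence
mask = rng.uniform(-1, 1, size=(1, 50))   # input masking into 50 virtual nodes
states = np.tanh(u[:, None] * mask)       # (500, 50) node responses
target = np.roll(u, 1)[:, None]           # 1-step memory task
W = train_readout(states, target)
print("train MSE:", np.mean((states @ W - target) ** 2))
```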
Abstract: Augmented reality (AR) is an emerging dynamic technology that effectively supports education across different levels, and the increased use of mobile devices amplifies its impact. As the demand for AR applications in education continues to grow, educators actively seek innovative and immersive methods to engage students in learning. Exploring these possibilities, however, also requires identifying and overcoming barriers to optimal educational integration; one such barrier is three-dimensional (3D) modeling, since creating 3D objects for AR education applications can be challenging and time-consuming for educators. To address this, we have developed a pipeline that creates realistic 3D objects from two-dimensional (2D) photographs; applications for augmented and virtual reality can then utilize these 3D objects. We evaluated the proposed pipeline based on the usability of the 3D objects and on performance metrics. Quantitatively, the co-creation team of 117 respondents was surveyed with open-ended questions to evaluate the precision of the 3D objects created by the proposed photogrammetry pipeline. We analyzed the survey data using descriptive-analytical methods and found that the proposed pipeline produces 3D models that are rated as accurate when compared to real-world objects, with a mean score above 8. This study adds new knowledge on creating 3D objects for augmented reality applications using the photogrammetry technique, and it discusses potential problems and future research directions for 3D objects in the education sector.
Funding: Supported by the Natural Science Foundation of China (No. 41804112, author: Chengyun Song).
Abstract: Existing semi-supervised medical image segmentation algorithms use copy-paste data augmentation to correct the labeled-unlabeled data distribution mismatch. However, current copy-paste methods have three limitations: (1) training the model solely on copy-paste mixed images from labeled and unlabeled inputs loses much of the labeled information; (2) low-quality pseudo-labels can cause confirmation bias in pseudo-supervised learning on unlabeled data; (3) segmentation performance in low-contrast and local regions is less than optimal. We design a Stochastic Augmentation-Based Dual-Teaching Auxiliary Training Strategy (SADT), which enhances feature diversity and learns high-quality features to overcome these problems. More precisely, SADT trains the Student Network using pseudo-label-based training from Teacher Network 1 and supervised learning on labeled data, which prevents the loss of scarce labeled data. We introduce a bi-directional copy-paste mask with progressive high-entropy filtering to reduce data distribution disparities and mitigate confirmation bias in pseudo-supervision. For the mixed images, Deep-Shallow Spatial Contrastive Learning (DSSCL) is proposed in the feature spaces of Teacher Network 2 and the Student Network to improve segmentation in low-contrast and local areas. In this procedure, the features extracted by the Student Network are subjected to a random feature perturbation technique. Extensive experiments on two openly available datasets show that the proposed SADT performs much better than state-of-the-art semi-supervised medical segmentation techniques. Using only 10% of the labeled data for training, SADT achieves a Dice score of 90.10% on the ACDC (Automatic Cardiac Diagnosis Challenge) dataset.
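For reference, the Dice score reported above (90.10% on ACDC) is the standard Dice similarity coefficient; a minimal NumPy version is sketched below. This is the common definition, not code from the SADT implementation.

```python
# Standard Dice similarity coefficient for binary segmentation masks:
# Dice = 2|P ∩ G| / (|P| + |G|).
import numpy as np

def dice_score(pred, gt, eps=1e-7):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:3] = 1   # toy prediction
gt   = np.zeros((4, 4), dtype=int); gt[1:4, 1:4] = 1     # toy ground truth
print(round(dice_score(pred, gt), 3))  # 0.615
```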
Funding: Supported by the Hunan Provincial Natural Science Foundation of China, No. 2023JJ30773, No. 2025JJ60480, and No. 2025JJ60552; the Scientific Research Program of the Hunan Provincial Health Commission, No. 202204072544; the Science and Technology Innovation Program of Hunan Province, No. 2024RC3053; the CBT ECR/MCR Scheme, No. 324910-0028/07; the National Natural Science Foundation of China, No. 32300652; the Scientific Research Program of the Hunan Provincial Health Commission, No. W20243023; and the Scientific Research Launch Project for New Employees of The Second Xiangya Hospital of Central South University.
Abstract: Augmented reality (AR) is a technology that superimposes digital information onto real-world objects via head-mounted display devices to improve surgical finesse through visually enhanced medical information. With the rapid development of digital technology, AR has been increasingly adopted in orthopedic surgeries across the globe, especially in total knee arthroplasty procedures, which demand high precision. By overlaying digital information onto the surgeon's field of view, AR systems enhance precision, improve alignment accuracy, and reduce the risk of complications associated with malalignment. Despite this accuracy, some concerns have been raised, including the learning curve, long-term outcomes, and technical limitations. Furthermore, it is essential for health practitioners to gain trust in the utilisation of AR.
Funding: The Science Foundation of China University of Petroleum, Beijing (Grant No. 2462024YJRC021); the National Natural Science Foundation of China (Grant Nos. U24B2031 and 52104013).
Abstract: 1. Introduction. With the increasing demand for petroleum and natural gas resources, along with technological advancements in exploration and production, the primary frontier of oil and gas resources has shifted from conventional oil and gas development to the domains of "Two Deeps, One Unconventional, One Mature," which include deep onshore, deepwater, unconventional resources, and mature oilfields [1].
Funding: The University of Shanghai for Science and Technology's Medical Engineering Interdisciplinary Project (No. 10-22-308-520); the Ministry of Education's First Batch of Industry-Education Cooperation Collaborative Education Projects (No. 202101042008); the Fundamental Research Funds for the Central Universities (No. YG2019QNA34); the Shanghai Municipal Health Commission Youth Clinical Research Project (No. 20194Y0134).
Abstract: The purpose of this study is to establish a multivariate nonlinear regression model to predict tumor displacement during brain tumor resection surgery, and to integrate it with augmented reality technology for three-dimensional visualization, thereby improving the complete tumor resection rate and the success rate of surgery. Based on the patients' preoperative MRI data, a 3D virtual model is reconstructed and 3D printed, and a brain biomimetic model is created using gel injection molding. Taking cerebrospinal fluid loss and tumor cyst fluid loss as the independent variables, the displacement of the highest point in the vertical bone-window direction after the patient is positioned for surgery is used as the dependent variable. An orthogonal experiment is conducted on the biomimetic model to establish a predictive model, which is then incorporated into the augmented reality navigation system. To validate the predictive model, five participants wore HoloLens 2 devices, overlaying the patient's 3D virtual model onto the physical head model. The spatial coordinates of the tumor's highest point after displacement were then measured on both the physical and virtual models (actual and predicted coordinates, respectively); the difference between these coordinates represents the model's prediction error. The results indicate that the measured and predicted errors for the displacement of the tumor's highest point on the X and Y axes range from −0.6787 mm to 0.2957 mm and from −0.4314 mm to 0.2253 mm, respectively. The relative errors for each experimental group are within 10%, demonstrating a good fit of the model. This regression-modeling method is a preliminary attempt to predict brain tumor displacement in specific situations and provides a new approach for surgeons. Combined with augmented reality visualization, it addresses the need to predict tumor displacement and precisely locate brain anatomical structures in a simple and cost-effective manner.
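A two-variable nonlinear regression of the kind described above can be illustrated with a generic quadratic surface fitted by least squares, as sketched below. The functional form, the value ranges, and the synthetic data are assumptions for illustration only and are not the model reported in the study.

```python
# Illustrative fit of displacement = f(CSF loss, cyst fluid loss) with a
# quadratic surface; synthetic placeholder data, not the paper's fitted model.
import numpy as np
from scipy.optimize import curve_fit

def quad_surface(X, a, b, c, d, e, f):
    x1, x2 = X
    return a + b*x1 + c*x2 + d*x1**2 + e*x2**2 + f*x1*x2

rng = np.random.default_rng(1)
x1 = rng.uniform(0, 20, 40)   # cerebrospinal fluid loss (mL), assumed range
x2 = rng.uniform(0, 10, 40)   # tumor cyst fluid loss (mL), assumed range
y = 0.1 + 0.05*x1 + 0.08*x2 + 0.002*x1*x2 + rng.normal(0, 0.05, 40)  # synthetic displacement (mm)

params, _ = curve_fit(quad_surface, (x1, x2), y)
pred = quad_surface((np.array([10.0]), np.array([5.0])), *params)
print("predicted displacement (mm):", round(float(pred[0]), 3))
```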
Funding: The National Natural Science Foundation of China (Nos. 82330063 and M-0019); the Interdisciplinary Program of Shanghai Jiao Tong University (Nos. YG2022QN056, YG2023ZD19, and YG2023ZD15); the Cross Disciplinary Research Fund of Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine (No. JYJC202115); the Translation Clinical R&D Project of Medical Robot of Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine (No. IMR-NPH202002); the Shanghai Key Clinical Specialty, Shanghai Eye Disease Research Center (No. 2022ZZ01003).
Abstract: Endoscopic transnasal optic nerve decompression plays a crucial role in the minimally invasive treatment of complex traumatic optic neuropathy. However, a major challenge during the procedure is the inability to visualize the optic nerve intraoperatively. To address this issue, an endoscopic image-based augmented reality surgical navigation system is developed in this study. The system virtually fuses the optic nerve onto the endoscopic images, assisting surgeons in determining the optic nerve's position and reducing surgical risk. First, a calibration algorithm based on a checkerboard grid of immobile points is proposed, building upon existing calibration methods. Additionally, to tackle the accuracy issues associated with augmented reality technology, an optical navigation and visual fusion compensation algorithm is proposed to improve intraoperative tracking accuracy. To evaluate the system's performance, model experiments were carefully designed and conducted. The results confirm the accuracy and stability of the proposed system, with an average tracking error of (0.99 ± 0.46) mm, demonstrating the effectiveness of the proposed algorithm in improving the accuracy of the augmented reality surgical navigation system. Furthermore, the system successfully displays hidden optic nerves and other deep tissues, showing promising potential for future applications in orbital and maxillofacial surgery.
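Since the proposed calibration builds on a checkerboard grid, the sketch below shows plain OpenCV checkerboard camera calibration as background. The pattern size and image folder are assumptions, and the paper's immobile-point variant and endoscope-specific handling are not reproduced.

```python
# Generic checkerboard camera calibration with OpenCV (background only).
# Nothing runs unless images exist in the hypothetical folder below.
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners per row/column (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # square size = 1 unit

obj_pts, img_pts, img_size = [], [], None
for path in glob.glob("calib_images/*.png"):       # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

if obj_pts:
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, img_size, None, None)
    print("reprojection RMS (pixels):", rms)
```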
Funding: The Shanghai Municipal Education Commission-Gaofeng Clinical Medicine Grant (No. 20172005).
Abstract: Closed thoracic drainage can be performed with a steel-needle-guided chest tube to treat pleural effusion or pneumothorax in the clinic. However, the puncture procedure during surgery is invisible, increasing the risk of surgical failure, so a visualization system for closed thoracic drainage is needed. Augmented reality (AR) technology can assist in visualizing the internal anatomical structure and determining the insertion point on the body surface. The structure of the currently used steel-needle-guided chest tube was modified by integrating it with an ultrafine-diameter camera to provide real-time visualization of the puncture process. In simulation experiments, the overall registration error of the AR method was measured to be within (3.59 ± 0.53) mm, indicating its potential for clinical application. The ultrafine-diameter camera module and the improved steel-needle-guided chest tube reflect the position of the needle tip in the human body in a timely manner. A comparative experiment showed that video guidance can improve the safety of the puncture process compared with the traditional method. Finally, a qualitative evaluation of the system's usability was conducted through a questionnaire. This system facilitates visualization of the closed thoracic drainage puncture procedure and provides an implementation scheme to enhance the accuracy and safety of this operative step, which helps reduce the learning curve and improve the proficiency of doctors.
Abstract: With the iteration and upgrading of medical technology and the continuous growth of public health demands, the quality of nursing services has become a core indicator for measuring the effectiveness of the medical system. The clinical practice ability of nursing staff is directly related to the safety of patient diagnosis and treatment and to the rehabilitation process. However, the current clinical nursing talent training model faces bottlenecks such as limited practical scenarios and fragmented case cognition. This study focuses on the teaching application of augmented reality (AR) technology in hospital settings and systematically reviews research progress on improving the clinical practice ability of trainee nurses through AR-based immersive teaching. By constructing clinical teaching scenarios that integrate the virtual and the real, AR technology can dynamically simulate complex case-handling processes and enhance nursing students' three-dimensional cognition of condition assessment, operating norms, and emergency plans. Hospitals, as the core base for practical teaching, can effectively shorten the gap between theoretical teaching and clinical practice by integrating AR technology, improve the clinical practice level of trainee nurses, and provide an innovative model for optimizing clinical nursing talent cultivation.
Funding: The National Natural Science Foundation of China (No. 11502146); the first batch of 2021 MOE of PRC Industry-University Collaborative Education Program (No. 202101042008).
Abstract: The aim of this study was to assess the potential of surgical guides as a complementary tool to augmented reality (AR) for enhancing the safety and precision of pedicle screw placement in spinal surgery. Four operators were divided into an AR navigation group using surgical guides and a free-hand group, with each group consisting of a novice and an experienced spine surgeon; a total of 80 pedicle screws were implanted. First, the AR group reconstructed the 3D model and planned the screw insertion route from the computed tomography data of the L2 lumbar vertebra. Then, the Microsoft HoloLens 2 was used to identify the vertebral model, and the planned virtual path was superimposed on the real bone model. Next, the screw was placed according to the projected trajectory. Finally, a Micron Tracker was used to measure the deviation of the screws from the preoperatively planned trajectory, and pedicle screws were evaluated using the Gertzbein-Robbins scale. In the AR group, the linear deviations of the experienced surgeon and the novice were (1.59 ± 0.39) mm and (1.73 ± 0.52) mm, and the angular deviations were 2.72° ± 0.61° and 2.87° ± 0.63°, respectively. In the free-hand group, the linear deviations of the experienced surgeon and the novice were (2.88 ± 0.58) mm and (5.25 ± 0.62) mm, and the angular deviations were 4.41° ± 1.18° and 7.15° ± 1.45°, respectively. Both kinds of deviation differed significantly between the two groups (P < 0.05). The screw accuracy rate was 95% in the AR navigation group and 77.5% in the free-hand group. These results indicate that the integration of surgical guides and AR is an innovative technique that can substantially enhance the safety and precision of spinal surgery and assist inexperienced surgeons in completing the procedure.
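Linear and angular deviations of the kind quoted above are commonly computed as the distance between entry points and the angle between the planned and actual screw axes; a minimal sketch of that computation follows, with made-up coordinates. It may differ from the exact Micron Tracker measurement protocol used in the study.

```python
# Linear deviation = entry-point distance; angular deviation = angle between
# planned and actual trajectory direction vectors (illustrative values).
import numpy as np

def trajectory_deviation(planned_entry, planned_dir, actual_entry, actual_dir):
    linear = np.linalg.norm(np.asarray(actual_entry) - np.asarray(planned_entry))
    u = np.asarray(planned_dir, float) / np.linalg.norm(planned_dir)
    v = np.asarray(actual_dir, float) / np.linalg.norm(actual_dir)
    angle = np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))
    return linear, angle

lin, ang = trajectory_deviation([0, 0, 0], [0, 0, 1], [1.2, 0.8, 0.3], [0.05, 0.02, 1.0])
print(f"linear deviation: {lin:.2f} mm, angular deviation: {ang:.2f} deg")
```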
Funding: Supported in part by the National Key R&D Program of China under Grant 2018YFE0205503; in part by the National Natural Science Foundation of China (NSFC) under Grant 61671081; in part by the Funds for International Cooperation and Exchange of NSFC under Grant 61720106007; in part by the 111 Project under Grant B18008; in part by the Beijing Natural Science Foundation under Grant 4172042; in part by the Fundamental Research Funds for the Central Universities under Grant 2018XKJC01; and in part by the BUPT Excellent Ph.D. Students Foundation under Grant CX2019213.
Abstract: The popularity of wearable devices and smartphones has fueled the development of Mobile Augmented Reality (MAR), which provides immersive experiences over the real world using techniques such as computer vision and deep learning. However, hardware-specific MAR is costly and heavy, and app-based MAR requires an additional download and installation and lacks cross-platform ability. These limitations hamper the pervasive promotion of MAR. This paper argues that mobile Web AR (MWAR) holds the potential to become a practical and pervasive solution that can effectively scale to millions of end users, because MWAR can be developed as a lightweight, cross-platform, and low-cost solution for end-to-end delivery of MAR. The main challenges in making MWAR a reality lie in the low efficiency of dense computing in Web browsers, the large delay of real-time interactions over mobile networks, and the lack of standardization. The good news is that the newly emerging 5G and beyond-5G (B5G) cellular networks can mitigate these issues to some extent via techniques such as network slicing, device-to-device communication, and mobile edge computing. In this paper, we first give an overview of the challenges and opportunities of MWAR in the 5G era. We then describe the design and development of a generic service-oriented framework (called MWAR5) that provides a scalable, flexible, and easy-to-deploy MWAR solution. We evaluate the performance of the MWAR5 system in an actually deployed 5G trial network under collaborative configurations, with encouraging results. Moreover, we share experiences and insights from our development and deployment, including some exciting future directions of MWAR over 5G and B5G networks.
Abstract: In this paper, a class of augmented Lagrangians of Di Pillo and Grippo (DGALs) was considered for solving equality-constrained problems via unconstrained minimization techniques. The relationship was further discussed between the unconstrained minimizers of DGALs on the product space of problem variables and multipliers, and the solutions of the constrained problem together with the corresponding values of the Lagrange multipliers. The resulting properties indicate more precisely that this class of DGALs consists of exact multiplier penalty functions. Therefore, a solution of the equality-constrained problem and the corresponding values of the Lagrange multipliers can be found by performing a single unconstrained minimization of a DGAL on the product space of problem variables and multipliers.
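For background, the classical augmented Lagrangian for the equality-constrained problem min f(x) subject to h(x) = 0 is shown below. DGAL-type functions further augment this with terms in both x and the multiplier (for example, penalizing the gradient of the Lagrangian), so that a single unconstrained minimization over the pair (x, λ) recovers both the solution and the multipliers; the display here is background notation only, not the exact Di Pillo-Grippo functional.

```latex
% Classical augmented Lagrangian for  min f(x)  s.t.  h(x) = 0  (background only).
\[
  L_c(x,\lambda) \;=\; f(x) \;+\; \lambda^{\top} h(x) \;+\; \tfrac{c}{2}\,\lVert h(x)\rVert^{2},
  \qquad c > 0 .
\]
```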
Funding: Supported by the Key Research Program of the Chinese Academy of Sciences (ZDRE-KT-2021-3).
Abstract: Augmented solar images were used to study the adaptability of four representative image feature extraction and matching algorithms in the space weather domain: the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features algorithm, the binary robust invariant scalable keypoints algorithm, and the oriented FAST and rotated BRIEF (ORB) algorithm. The performance of these algorithms was estimated in terms of matching accuracy, feature point richness, and running time. The experiments showed that no algorithm achieved high accuracy while keeping the running time low, and none of the algorithms is well suited for feature extraction and matching of augmented solar images. To solve this problem, an improved method was proposed that uses two-frame matching to combine the accuracy advantage of the SIFT algorithm with the speed advantage of the ORB algorithm. Our method and the four representative algorithms were then applied to augmented solar images. The application experiments showed that our method achieved a recognition rate similar to that of the SIFT algorithm, which is significantly higher than those of the other algorithms, while obtaining a running time similar to that of the ORB algorithm, which is significantly lower than those of the other algorithms.
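As a concrete reference for the feature extraction and matching step, the sketch below shows plain ORB detection plus brute-force Hamming matching in OpenCV. The file names are hypothetical, and this is not the paper's combined SIFT/ORB two-frame method.

```python
# ORB keypoint extraction and brute-force matching between two frames
# (generic OpenCV usage; hypothetical input files).
import cv2

img1 = cv2.imread("solar_frame1.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
img2 = cv2.imread("solar_frame2.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

if img1 is not None and img2 is not None:
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(f"{len(matches)} putative matches between the two frames")
```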
Funding: The National Key R&D Program of China (2018YFB1004901); the Independent Innovation Team Project of Jinan City (2019GXRC013).
Abstract: Background: Augmented reality classrooms have become an interesting research topic in the field of education, but there are some limitations. Firstly, most researchers use cards to operate experiments, and the large number of cards causes difficulty and inconvenience for users. Secondly, most users conduct experiments only in the visual modality, and such single-modal interaction greatly reduces the users' real sense of interaction. To solve these problems, we propose a Multimodal Interaction Algorithm based on Augmented Reality (ARGEV), which is based on visual and tactile feedback in augmented reality. In addition, we design a Virtual and Real Fusion Interactive Tool Suite (VRFITS) with gesture recognition and intelligent equipment. Methods: The ARGEV method fuses gestures, intelligent equipment, and virtual models. We use a gesture recognition model trained with a convolutional neural network to recognize gestures in AR and to trigger vibration feedback after recognizing a five-finger grasp gesture. We establish a coordinate mapping relationship between real hands and the virtual model to achieve the fusion of gestures and the virtual model. Results: The average accuracy of gesture recognition was 99.04%. We verify and apply VRFITS in an Augmented Reality Chemistry Lab (ARCL), and the overall operation load of ARCL is reduced by 29.42% compared with traditional virtual simulation experiments. Conclusions: We achieve real-time fusion of gestures, the virtual model, and intelligent equipment in ARCL. Compared with the NOBOOK virtual simulation experiment, ARCL improves the users' real sense of operation and interaction efficiency.
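A coordinate mapping between real hands and a virtual model can be illustrated with a homogeneous rigid transform applied to tracked hand points, as in the sketch below. The rotation, translation, and function names are illustrative assumptions, not values or code from the ARGEV system.

```python
# Mapping a tracked hand point from the hand-tracking frame into the
# virtual-model frame via a 4x4 homogeneous transform (illustrative only).
import numpy as np

def make_transform(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def map_point(T, p):
    """Map a 3D point p into the virtual-model frame."""
    return (T @ np.append(p, 1.0))[:3]

theta = np.deg2rad(30)                        # example rotation about z (assumed)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
T_hand_to_model = make_transform(R, t=[0.10, -0.05, 0.30])   # illustrative offset (m)
print(map_point(T_hand_to_model, np.array([0.2, 0.0, 0.1])))
```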
Funding: Guizhou University of Finance and Economics 2024 Student Self-Funded Research Project Funding (Project No. 2024ZXSY001).
Abstract: The impact of augmented reality (AR) technology on consumer behavior has increasingly attracted academic attention. While early research has provided valuable insights, many challenges remain. This article reviews recent studies, analyzing AR's technical features, marketing concepts, and action mechanisms from a consumer perspective. By refining existing frameworks and introducing a new model based on situation awareness theory, the paper offers a deeper exploration of AR marketing. Finally, it proposes directions for future research in this emerging field.
Funding: Supported by grants from the Mission Plan Program of Beijing Municipal Administration of Hospitals (SML20152201); Beijing Municipal Administration of Hospitals Clinical Medicine Development Special Funding (ZYLX201712); the National Natural Science Foundation of China (81427803); and the Beijing Tsinghua Changgung Hospital Fund (12015C1039).
Abstract: Background: Augmented reality (AR) technology is used to reconstruct three-dimensional (3D) images of hepatic and biliary structures from computed tomography and magnetic resonance imaging data, and to superimpose the virtual images onto a view of the surgical field. In liver surgery, these superimposed virtual images help the surgeon visualize intrahepatic structures and therefore operate precisely and improve clinical outcomes. Data Sources: The keywords "augmented reality", "liver", "laparoscopic" and "hepatectomy" were used to search publications in the PubMed database. The primary source of literature was peer-reviewed journals up to December 2016; additional articles were identified by a manual search of the references in the key articles. Results: In general, AR technology mainly comprises 3D reconstruction, display, registration, and tracking techniques, and it has recently been adopted gradually for liver surgeries including laparoscopy and laparotomy, with video-based AR-assisted laparoscopic resection as the main technical application. By applying AR technology, blood vessels and tumor structures in the liver can be displayed during surgery, which permits precise navigation during complex surgical procedures. Liver deformation and registration errors during surgery were the main factors limiting the application of AR technology. Conclusions: With recent advances, AR technologies have the potential to improve hepatobiliary surgical procedures. However, additional clinical studies will be required to evaluate AR as a tool for reducing postoperative morbidity and mortality and for improving long-term clinical outcomes. Future research is needed on the fusion of multiple imaging modalities, improving biomechanical liver modeling, and enhancing image data processing and tracking technologies to increase the accuracy of current AR methods.
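The registration step mentioned in the review typically starts from rigid point-set alignment; the sketch below is the standard SVD-based (Kabsch) solution for paired 3D points, given only as background, since clinical AR liver systems add deformable modeling and tracking on top of it.

```python
# Rigid point-set registration (Kabsch/SVD): find R, t minimizing the
# distance between paired 3D point sets. Background technique only.
import numpy as np

def rigid_register(src, dst):
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # fix a reflection if one appears
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

src = np.random.default_rng(2).random((6, 3))
true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
dst = src @ true_R.T + np.array([0.5, -0.2, 1.0])   # synthetic transformed points
R, t = rigid_register(src, dst)
print("registration residual:", np.linalg.norm(src @ R.T + t - dst))
```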
Abstract: Mitigation of sonic boom to an acceptable level is a key requirement for the next generation of supersonic transports, and designing a supersonic aircraft with an ideal ground signature has always been the focus of research on sonic boom reduction. This paper presents an inverse design approach that optimizes the near-field signature of an aircraft so that, after propagation through the atmosphere, it is close to a shaped ideal ground signature. Using the Proper Orthogonal Decomposition (POD) method, a guessed input to the augmented Burgers equation is obtained inversely. Through multiple POD iterations, the guessed ground signatures successively approach the target ground signature until the convergence criterion is reached. Finally, the corresponding equivalent area distribution is calculated from the optimal near-field signature through the classical Whitham F-function theory. To validate this method, an optimization example of the Lockheed Martin 1021 is demonstrated. The modified configuration has a fully shaped ground signature and achieves a 7.94 PLdB reduction in perceived loudness. This improvement is achieved by shaping the original near-field signature into wiggles and damping them through atmospheric attenuation. Lastly, a nonphysical ground signature is set as the target to test the robustness of this inverse design method, showing that the method is robust for various inputs.
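The Proper Orthogonal Decomposition used in the iterative inverse design can be illustrated with a plain SVD of a snapshot matrix of signatures, as sketched below on synthetic data; the Burgers propagation, F-function step, and iteration logic of the paper are not reproduced.

```python
# SVD-based POD of an ensemble of 1-D signatures: returns the mean, the
# leading spatial modes, and the relative energy of each mode.
import numpy as np

def pod_modes(snapshots, n_modes):
    """snapshots: (n_samples, n_points) array."""
    mean = snapshots.mean(axis=0)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    return mean, Vt[:n_modes], (s**2) / np.sum(s**2)

# Toy ensemble of pressure signatures (synthetic, illustrative only).
x = np.linspace(0, 1, 200)
rng = np.random.default_rng(3)
snaps = np.array([np.sin(2*np.pi*(x - 0.05*k)) * np.exp(-((x - 0.5)/0.2)**2)
                  for k in range(20)]) + 0.01*rng.normal(size=(20, 200))
mean, modes, energy = pod_modes(snaps, n_modes=3)
print("energy captured by 3 modes: %.3f" % energy[:3].sum())
```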