Funding: Supported in part by the National Natural Science Foundation of China under Grant 62172368 and the Natural Science Foundation of Zhejiang Province under Grant LR22F020003.
Abstract: The development of brain-computer interfaces (BCIs) based on motor imagery (MI) has greatly improved the quality of life of patients with movement disorders. The classification of upper-limb MI has been widely studied and applied in many fields, including rehabilitation. However, the physiological representations of left and right lower-limb movements are very similar and are activated deep in the cerebral cortex, making their features difficult to distinguish; classifying lower-limb motor imagery is therefore more challenging. In this study, we propose a feature extraction method based on functional connectivity, which uses phase-locking values (PLVs) to construct a functional connectivity matrix as the feature representation of the left and right legs, effectively avoiding the problem that the physiological representations of the two lower limbs are too close to each other during movement. In addition, considering the topology and the temporal characteristics of the electroencephalogram (EEG), we designed a temporal-spatial graph convolutional network (TSGCN) to capture spatiotemporal information for classification. Experimental results show that the accuracy of the proposed method is higher than that of existing methods, achieving an average classification accuracy of 73.58% on the internal dataset. Finally, this study explains the network mechanism of left- and right-foot MI from the perspective of graph-theoretic features and demonstrates the feasibility of decoding lower-limb MI.
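The abstract does not spell out how the connectivity matrix is built, but the standard phase-locking value between two EEG channels can be sketched as follows; the Hilbert-transform phase extraction, the channel layout, and the signal shapes are assumptions for illustration, not details taken from the paper:

```python
import numpy as np
from scipy.signal import hilbert

def plv_matrix(eeg):
    """Compute a phase-locking-value connectivity matrix.

    eeg: array of shape (channels, samples).
    Returns a symmetric (channels, channels) matrix with ones on the diagonal.
    """
    # Instantaneous phase of each channel via the analytic signal
    phases = np.angle(hilbert(eeg, axis=1))
    n = eeg.shape[0]
    plv = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dphi = phases[i] - phases[j]
            # PLV = magnitude of the mean unit phasor of the phase difference
            plv[i, j] = plv[j, i] = np.abs(np.mean(np.exp(1j * dphi)))
    return plv
```

Two channels with a constant phase offset yield a PLV near 1, while independent noise yields a value near 0; the resulting matrix is what a classifier (or a graph network such as the TSGCN described above) would consume as features.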
Funding: Supported by the National Natural Science Foundation of China (61772468, 62172368) and the Fundamental Research Funds for the Provincial Universities of Zhejiang (RF-B2019001).
Abstract: Background: Eye-tracking technology for mobile devices has made significant progress. However, owing to limited computing capacity and the complexity of usage contexts, conventional image-feature-based technology cannot extract features accurately, which degrades performance. Methods: This study proposes a novel approach that combines appearance- and feature-based eye-tracking methods. Face and eye-region detection was conducted to obtain inputs to the appearance model, which detects the feature points. The feature points were then used to generate feature vectors, such as the corner-center-to-pupil-center vector, from which the gaze-fixation coordinates were calculated. Results: To identify the feature vectors with the best performance, we compared different vectors under different image resolutions and illumination conditions. The best average gaze-fixation accuracy, a visual angle of 1.93°, was achieved when the image resolution was 96 × 48 pixels and the light sources illuminated the eye from the front. Conclusions: Compared with current methods, our method improves gaze-fixation accuracy and usability.
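The abstract does not specify how the corner-center-to-pupil-center vector is mapped to screen coordinates; a minimal sketch using a second-order polynomial basis fitted by least squares, a common calibration choice assumed here rather than the authors' actual mapping, might look like this:

```python
import numpy as np

def gaze_features(pupil, corner):
    """Second-order polynomial basis of the corner-center-to-pupil-center vector."""
    dx, dy = pupil[0] - corner[0], pupil[1] - corner[1]
    return np.array([1.0, dx, dy, dx * dy, dx * dx, dy * dy])

def fit_gaze_map(feature_rows, screen_points):
    """Least-squares fit from feature vectors to known calibration screen points."""
    F = np.vstack(feature_rows)          # (n_points, 6)
    S = np.asarray(screen_points)        # (n_points, 2)
    coef, *_ = np.linalg.lstsq(F, S, rcond=None)
    return coef                          # (6, 2)

def predict_gaze(coef, pupil, corner):
    """Map a new eye-feature measurement to a screen coordinate."""
    return gaze_features(pupil, corner) @ coef
```

In practice the calibration points would come from asking the user to fixate known on-screen targets; with at least six well-spread points the 6-term basis is identifiable.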
Funding: Supported by the National Natural Science Foundation of China (62172368) and the Natural Science Foundation of Zhejiang Province (LR22F020003).
Abstract: Background: Navigation assistance is essential for users roaming virtual reality (VR) scenes; however, traditional navigation methods require users to manually request a map for viewing, which leads to low immersion and a poor user experience. Methods: To address this issue, we first collected data on when users required navigation assistance in a VR environment, including various eye-movement features such as gaze fixation, pupil size, and gaze angle. We then used the boosting-based XGBoost algorithm to train a prediction model and finally used it to predict whether users require navigation assistance in a roaming task. Results: In evaluation, the accuracy, precision, recall, and F1-score of our model all reached approximately 95%. In addition, by applying the model to a VR scene, we implemented an adaptive navigation assistance system driven by the user's real-time eye-movement data. Conclusions: Compared with traditional navigation assistance methods, our adaptive method allows users to roam a VR environment more immersively and effectively.
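As a dependency-free illustration of the boosting idea behind XGBoost, the following sketch trains AdaBoost with decision stumps on hypothetical eye-movement features (e.g., fixation duration, pupil size, gaze angle); it is a stand-in for the idea only, not the authors' pipeline or the XGBoost library:

```python
import numpy as np

def train_adaboost_stumps(X, y, rounds=20):
    """Boost decision stumps; y must be in {-1, +1}.

    Returns a list of (feature, threshold, polarity, alpha) weak learners.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # sample weights, reweighted each round
    model = []
    for _ in range(rounds):
        best = None
        # Exhaustively pick the stump with the lowest weighted error
        for f in range(d):
            for thr in np.unique(X[:, f]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, thr, pol, pred)
        err, f, thr, pol, pred = best
        err = max(err, 1e-12)                      # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)      # learner weight
        w *= np.exp(-alpha * y * pred)             # upweight mistakes
        w /= w.sum()
        model.append((f, thr, pol, alpha))
    return model

def boost_predict(model, X):
    """Sign of the weighted vote of all stumps: +1 = needs navigation help."""
    score = np.zeros(len(X))
    for f, thr, pol, alpha in model:
        score += alpha * np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
    return np.where(score >= 0, 1, -1)
```

Each row of X would hold one user's eye-movement features for a time window; the binary label marks whether the user needed assistance in that window.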
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 61772468, 62172368), the National Key Research & Development Program of China (2016YFB1001403), and the Fundamental Research Funds for the Provincial Universities of Zhejiang (RF-B2019001).
Abstract: Code review is intended to find bugs in early development phases, improving code quality for later integration and testing. However, owing to limited experience with algorithm design or software development, individual novice programmers face challenges when reviewing code. In this paper, we use collaborative eye tracking to record gaze data from multiple reviewers and share gaze visualizations among them during the code review process. The visualizations, such as borders highlighting the currently reviewed code lines and transition lines connecting related reviewed lines, reveal visual attention on program functions, which facilitates understanding and bug tracing. This can help novice reviewers confirm potential bugs or avoid repeatedly reviewing the same code, and may even help improve their reviewing skills. We built a prototype system and conducted a user study with paired reviewers. The results showed that the shared real-time visualization allowed reviewers to find bugs more efficiently.
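The visualizations described above need gaze samples mapped onto code lines before borders or transition lines can be drawn. A minimal sketch of that mapping is below; the editor metrics (LINE_HEIGHT, TOP_MARGIN) are assumed constants, not values from the prototype system:

```python
LINE_HEIGHT = 18  # assumed pixels per rendered code line
TOP_MARGIN = 40   # assumed pixels above the first line in the editor view

def gaze_to_line(y_px):
    """Map a vertical gaze coordinate (pixels) to a 1-based code line number."""
    return max(1, int((y_px - TOP_MARGIN) // LINE_HEIGHT) + 1)

def transitions(gaze_ys):
    """Collapse a gaze trace into (from_line, to_line) pairs for transition lines."""
    lines = [gaze_to_line(y) for y in gaze_ys]
    out = []
    for prev, cur in zip(lines, lines[1:]):
        if cur != prev:  # only record an edge when attention moves to a new line
            out.append((prev, cur))
    return out
```

The currently fixated line drives the highlight border, and the accumulated (from, to) pairs drive the transition lines shared with the other reviewer.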