Funding: Supported by the Ongoing Research Funding Program (ORFFT-2025-054-1), King Saud University, Riyadh, Saudi Arabia.
Abstract: AIM: To evaluate the efficacy of the total computer vision syndrome questionnaire (CVS-Q) score as a predictive tool for identifying individuals with symptomatic binocular vision anomalies and refractive errors. METHODS: A total of 141 healthy computer users underwent comprehensive clinical visual function assessments, including evaluations of refractive errors, accommodation (amplitude of accommodation, positive relative accommodation, negative relative accommodation, accommodative accuracy, and accommodative facility), and vergence (phoria, positive and negative fusional vergence, near point of convergence, and vergence facility). Total CVS-Q scores were recorded to explore potential associations between symptom scores and the aforementioned clinical visual function parameters. RESULTS: The cohort included 54 males (38.3%) with a mean age of 23.9±0.58 y and 87 age-matched females (61.7%) with a mean age of 23.9±0.53 y. The multiple regression model was statistically significant [R²=0.60, F=13.28, degrees of freedom (DF)=17, 122, P<0.001]. This indicates that 60% of the variance in total CVS-Q scores (reflecting reported symptoms) could be explained by four clinical measurements: amplitude of accommodation, positive relative accommodation, exophoria at distance and near, and positive fusional vergence at near. CONCLUSION: The total CVS-Q score is a valid and reliable tool for predicting the presence of various nonstrabismic binocular vision anomalies and refractive errors in symptomatic computer users.
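To make the reported statistics concrete, the following is a minimal sketch of fitting and summarising a multiple linear regression of this kind in Python. The predictor names, the synthetic data, and the coefficients are illustrative placeholders, not the study's dataset or model.

```python
# Hypothetical sketch: fitting a multiple linear regression of total CVS-Q score
# on clinical visual-function measures, as in the abstract. Column names and the
# synthetic data below are illustrative placeholders, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 141  # cohort size reported in the abstract
df = pd.DataFrame({
    "amp_accommodation": rng.normal(11, 2, n),          # amplitude of accommodation (D)
    "pos_rel_accommodation": rng.normal(-2.4, 0.8, n),  # PRA (D)
    "exophoria_near": rng.normal(-4, 3, n),             # near phoria (prism dioptres)
    "pfv_near": rng.normal(20, 6, n),                   # positive fusional vergence at near
})
# Synthetic outcome with noise, only to make the sketch runnable.
df["cvs_q_total"] = (30 - 1.2 * df["amp_accommodation"]
                     + 2.0 * df["pos_rel_accommodation"]
                     - 0.5 * df["exophoria_near"]
                     - 0.3 * df["pfv_near"]
                     + rng.normal(0, 4, n))

X = sm.add_constant(df.drop(columns="cvs_q_total"))
model = sm.OLS(df["cvs_q_total"], X).fit()
print(model.rsquared, model.fvalue, model.f_pvalue)  # R², F statistic, overall p-value
```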
Funding: Funded by the Ratchadaphiseksomphot Endowment Fund, Chulalongkorn University, grant number RSF-AnH-69-06-21-01.
Abstract: This paper presents an automated imaging-to-CAD reconstruction system that combines telecentric vision and deep learning for high-accuracy digital reconstruction of printed circuit boards (PCBs). The framework integrates a telecentric camera with a Cartesian scanning platform to capture distortion-free, high-resolution PCB images, which are stitched into a single orthographic composite. A YOLO-based detection model, trained on a dataset of 270 PCB images across 23 component classes with data augmentation, identifies and localizes electronic components with a mean average precision of 0.932. Detected components are automatically matched to corresponding 3D CAD models from a part library and assembled within a Fusion 360 environment, producing a 3D digital replica. Experimental results show a similarity score of 0.894 and dimensional deviations below 2%, outperforming both SensoPart image measurement and manual vernier methods. The proposed approach bridges optical metrology and CAD automation, providing a scalable solution for AI-assisted reverse engineering, digital archiving, and intelligent manufacturing.
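As a rough illustration of the detection-and-matching stage, the sketch below runs a trained YOLO detector on a stitched PCB composite and looks up a CAD file for each detected class. The weight path, image path, class names, and part-library layout are assumptions for illustration; the Fusion 360 assembly step is not shown.

```python
# Hypothetical sketch of the detection-and-matching step described in the abstract:
# run a trained YOLO model on the stitched orthographic PCB image and map each
# detected component class to a 3D CAD file in a part library. Paths, weights,
# and the class-to-file mapping are illustrative assumptions.
from ultralytics import YOLO

PART_LIBRARY = {
    "resistor_0603": "library/resistor_0603.step",
    "capacitor_0805": "library/capacitor_0805.step",
    "soic8": "library/soic8.step",
}

model = YOLO("pcb_components.pt")          # trained detection weights (assumed path)
results = model("pcb_composite.png")[0]    # single stitched composite image

placements = []
for box in results.boxes:
    cls_name = results.names[int(box.cls)]       # predicted component class
    x_c, y_c, w, h = box.xywh[0].tolist()        # pixel-space bounding box
    cad_file = PART_LIBRARY.get(cls_name)        # matching CAD model, if any
    if cad_file:
        placements.append({"class": cls_name, "cad": cad_file,
                           "center_px": (x_c, y_c), "size_px": (w, h)})

# `placements` would then drive the CAD assembly (e.g. via the Fusion 360 API).
print(placements)
```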
Funding: Funded by the Research Project THTETN.05/24-25, Vietnam Academy of Science and Technology.
Abstract: Satellite image segmentation plays a crucial role in remote sensing, supporting applications such as environmental monitoring, land use analysis, and disaster management. However, traditional segmentation methods often rely on large amounts of labeled data, which are costly and time-consuming to obtain, especially in large-scale or dynamic environments. To address this challenge, we propose the Semi-Supervised Multi-View Picture Fuzzy Clustering (SS-MPFC) algorithm, which improves segmentation accuracy and robustness, particularly in complex and uncertain remote sensing scenarios. SS-MPFC unifies three paradigms: semi-supervised learning, multi-view clustering, and picture fuzzy set theory. This integration allows the model to effectively utilize a small number of labeled samples, fuse complementary information from multiple data views, and handle the ambiguity and uncertainty inherent in satellite imagery. We design a novel objective function that jointly incorporates picture fuzzy membership functions across multiple views of the data and embeds pairwise semi-supervised constraints (must-link and cannot-link) directly into the clustering process to enhance segmentation accuracy. Experiments conducted on several benchmark satellite datasets demonstrate that SS-MPFC significantly outperforms existing state-of-the-art methods in segmentation accuracy, noise robustness, and semantic interpretability. On the Augsburg dataset, SS-MPFC achieves a Purity of 0.8158 and an Accuracy of 0.6860, highlighting its outstanding robustness and efficiency. These results demonstrate that SS-MPFC offers a scalable and effective solution for real-world satellite-based monitoring systems, particularly in scenarios where rapid annotation is infeasible, such as wildfire tracking, agricultural monitoring, and dynamic urban mapping.
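The pairwise constraints mentioned above can be illustrated with a small sketch that counts how many must-link and cannot-link constraints a given cluster assignment violates; such a count is the kind of term a semi-supervised objective penalises. The labels and constraint pairs below are invented for illustration, not drawn from the paper's datasets.

```python
# Hypothetical sketch of pairwise semi-supervised constraints: must-link pairs should
# share a cluster label, cannot-link pairs should not. Labels and pairs are illustrative.
import numpy as np

labels = np.array([0, 0, 1, 2, 1, 2])          # cluster assignments for 6 pixels/samples
must_link = [(0, 1), (2, 4)]                   # pairs required to share a cluster
cannot_link = [(0, 2), (3, 5)]                 # pairs required to differ

def constraint_violations(labels, must_link, cannot_link):
    """Count how many pairwise constraints the current assignment violates."""
    ml = sum(labels[i] != labels[j] for i, j in must_link)
    cl = sum(labels[i] == labels[j] for i, j in cannot_link)
    return ml + cl

print(constraint_violations(labels, must_link, cannot_link))  # 1: pair (3, 5) violates
```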
Funding: Supported by the National Natural Science Foundation of China (Nos. 62373009 and 62073004).
Abstract: Compared to monocular depth estimation, multi-view depth estimation often yields more accurate results. However, traditional multi-view depth estimation methods often fail to leverage semantic information fully and struggle to effectively fuse information from multiple views, leading to suboptimal prediction performance in challenging scenarios such as texture-less regions and reflective surfaces. To address these limitations, we present MVI-Depth, a novel framework with two core innovations: (1) a Semantic Fusion Module (SFM) that establishes semantic correspondence, and (2) a Depth Updating Module (DUM) enabling iterative depth refinement. Specifically, MVI-Depth initially establishes a main view representation that integrates single-view depth, depth features, and semantic features. Subsequent feature extraction from neighbouring views enables the construction of the original cost volume. Recognising the inherent limitations of direct cost volume utilisation in complex scenes, the proposed SFM constructs an aligned semantic cost volume to exploit the complementarity between semantic and depth information, forming an improved final cost volume. The final cost volume is updated through the proposed DUM to achieve iterative depth optimisation. Comprehensive evaluations demonstrate that MVI-Depth achieves superior performance across all standard metrics on both the ScanNet and KITTI benchmarks, outperforming existing methods. Additional experiments on the 7-Scenes dataset further confirm the framework's robust generalisation capabilities in diverse environments.
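For readers unfamiliar with cost volumes, the sketch below shows one simple way a correlation-style cost volume over depth hypotheses can be assembled once source-view features have been warped to each hypothesised depth plane. It is a generic illustration under those assumptions, not MVI-Depth's actual construction; shapes and feature maps are random placeholders.

```python
# Hypothetical sketch of a correlation-style cost volume across depth hypotheses,
# in the spirit of the "original cost volume" in the abstract. The warping of source
# features to each depth plane is assumed to have been done already.
import torch

C, H, W, D = 32, 48, 64, 16                    # channels, height, width, depth hypotheses
ref_feat = torch.randn(C, H, W)                # reference (main view) features
warped_src = torch.randn(D, C, H, W)           # source features warped to each depth plane

# Per-pixel correlation between the reference and each warped source hypothesis.
cost_volume = (ref_feat.unsqueeze(0) * warped_src).mean(dim=1)   # shape (D, H, W)
depth_index = cost_volume.argmax(dim=0)                          # best hypothesis per pixel
print(cost_volume.shape, depth_index.shape)
```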
Abstract: Lung cancer remains a major global health challenge, with early diagnosis crucial for improved patient survival. Traditional diagnostic techniques, including manual histopathology and radiological assessments, are prone to errors and variability. Deep learning methods, particularly Vision Transformers (ViT), have shown promise for improving diagnostic accuracy by effectively extracting global features. However, ViT-based approaches face challenges related to computational complexity and limited generalizability. This research proposes the DualSet ViT-PSO-SVM framework, integrating a ViT with dual attention mechanisms, Particle Swarm Optimization (PSO), and Support Vector Machines (SVM), aiming for efficient and robust lung cancer classification across multiple medical image datasets. The study utilized three publicly available datasets: LIDC-IDRI, LUNA16, and TCIA, encompassing computed tomography (CT) scans and histopathological images. Data preprocessing included normalization, augmentation, and segmentation. Dual attention mechanisms enhanced the ViT's feature extraction capabilities. PSO optimized feature selection, and SVM performed classification. Model performance was evaluated on individual and combined datasets, benchmarked against CNN-based and standard ViT approaches. The DualSet ViT-PSO-SVM significantly outperformed existing methods, achieving superior accuracy rates of 97.85% (LIDC-IDRI), 98.32% (LUNA16), and 96.75% (TCIA). Cross-dataset evaluations demonstrated strong generalization capabilities and stability across similar imaging modalities. The proposed framework effectively bridges advanced deep learning techniques with clinical applicability, offering a robust diagnostic tool for lung cancer detection, reducing complexity, and improving diagnostic reliability and interpretability.
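A minimal sketch of the PSO-plus-SVM stage is given below: binary feature masks are evolved by a particle swarm and scored by cross-validated SVM accuracy. The data are synthetic, the swarm hyperparameters are arbitrary, and the ViT feature extractor and dual attention mechanisms are outside the sketch.

```python
# Hypothetical sketch of PSO-style feature selection feeding an SVM classifier,
# analogous to the PSO + SVM stage in the abstract. Data and settings are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=30, n_informative=8, random_state=0)

def fitness(mask):
    """Cross-validated SVM accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

n_particles, n_iters = 8, 5
positions = rng.random((n_particles, X.shape[1]))   # continuous positions in [0, 1]
velocities = rng.normal(0, 0.1, positions.shape)
personal_best = positions.copy()
personal_best_fit = np.array([fitness(p > 0.5) for p in positions])
global_best = personal_best[personal_best_fit.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(positions.shape), rng.random(positions.shape)
    velocities = (0.7 * velocities
                  + 1.5 * r1 * (personal_best - positions)
                  + 1.5 * r2 * (global_best - positions))
    positions = np.clip(positions + velocities, 0.0, 1.0)
    fits = np.array([fitness(p > 0.5) for p in positions])
    improved = fits > personal_best_fit
    personal_best[improved] = positions[improved]
    personal_best_fit[improved] = fits[improved]
    global_best = personal_best[personal_best_fit.argmax()].copy()

print("selected features:", np.flatnonzero(global_best > 0.5))
```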
Funding: Financially supported by the National Science Fund for Distinguished Young Scholars, China (No. 52025041), the National Natural Science Foundation of China (Nos. 52450003, U2341267, and 52174294), the National Postdoctoral Program for Innovative Talents, China (No. BX20240437), and the Fundamental Research Funds for the Central Universities, China (Nos. FRF-IDRY-23-037 and FRF-TP-20-02C2).
Abstract: The rapid advancements in computer vision (CV) technology have transformed the traditional approaches to material microstructure analysis. This review outlines the history of CV and explores the applications of deep-learning (DL)-driven CV in four key areas of materials science: microstructure-based performance prediction, microstructure information generation, microstructure defect detection, and crystal structure-based property prediction. CV has significantly reduced the cost of traditional experimental methods used in material performance prediction. Moreover, recent progress in generating microstructure images and detecting microstructural defects using CV has increased the efficiency and reliability of material performance assessments. DL-driven CV models can accelerate the design of new materials with optimized performance by integrating predictions based on both crystal and microstructural data, thereby enabling the discovery and innovation of next-generation materials. Finally, the review provides insights into the rapid interdisciplinary developments in the field of materials science and future prospects.
Abstract: In contemporary computer vision, convolutional neural networks (CNNs) and vision transformers (ViTs) represent the two primary architectural paradigms for image recognition. While both approaches have been widely adopted in medical imaging applications, they operate on fundamentally different computational principles. This report provides brief application notes on ViTs and CNNs, focusing in particular on scenarios that guide the selection of one architecture over the other in practical medical implementations. Generally, CNNs rely on convolutional kernels, localized receptive fields, and weight sharing, enabling efficient hierarchical feature extraction. These properties contribute to strong performance in detecting spatially constrained patterns such as textures, edges, and anatomical boundaries, while maintaining relatively low computational requirements. ViTs, on the other hand, decompose images into smaller segments referred to as tokens and employ self-attention mechanisms to model relationships across the entire image. This global modeling capability allows ViTs to capture long-range dependencies that may be difficult for convolution-based architectures to learn. However, ViTs typically achieve optimal performance when trained on extremely large datasets or when supported by extensive pretraining, as their reduced inductive bias requires greater data exposure to learn robust representations. This report briefly examines the architectural structure, underlying mathematical foundations, and relative performance characteristics of CNNs and ViTs, drawing upon recent findings from contemporary research. Emphasis is placed on understanding how differences in data availability, computational resources, and task requirements influence model effectiveness across medical imaging domains. Most importantly, the report serves as a concise application guide for practitioners seeking to make informed implementation decisions between these two influential deep learning frameworks.
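The contrast described above can be made concrete with a short PyTorch sketch: a convolution processes local neighbourhoods with shared weights, while a ViT-style block first patchifies the image into tokens and then applies self-attention across all tokens. Layer sizes and tensor contents are illustrative only.

```python
# Hypothetical sketch contrasting the two paradigms: a convolution operates on a
# local receptive field with shared weights, whereas a ViT-style block splits the
# image into patch tokens and lets self-attention relate every token to every other.
import torch
import torch.nn as nn

img = torch.randn(1, 3, 224, 224)                       # one RGB image

# CNN view: local 3x3 kernels slid over the image, weights shared across positions.
conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
cnn_features = conv(img)                                # (1, 64, 224, 224)

# ViT view: 16x16 patches -> 196 tokens, then global self-attention between tokens.
patch_embed = nn.Conv2d(3, 192, kernel_size=16, stride=16)   # common patchify trick
tokens = patch_embed(img).flatten(2).transpose(1, 2)          # (1, 196, 192)
attention = nn.MultiheadAttention(embed_dim=192, num_heads=3, batch_first=True)
vit_features, attn_weights = attention(tokens, tokens, tokens)

print(cnn_features.shape, vit_features.shape, attn_weights.shape)
# attn_weights is (1, 196, 196): every patch attends to every other patch.
```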
Abstract: … of patients. In medicine and rehabilitation engineering, machine vision technology has been widely used to design intelligent prostheses to help patients restore limb function. Grip strength control is one of the key challenges in developing prosthetic hands; for example, patients need to appropriately control the grip strength of the prosthetic hand depending on the nature and size of the object to be gripped, to prevent it from slipping or being damaged. This study combines machine learning and deep learning techniques to determine object grip information by analyzing images of such objects, including their type, texture, and size, so as to select the appropriate grip strength threshold. The electromyographic gesture-control mode is integrated with the visual recognition system to achieve active detection and control of the intelligent prosthetic hand. The system is also ported to the K210 main control board for offline recognition, achieving more efficient real-time performance. The experimental results demonstrate that the system achieves an object recognition accuracy rate of 90%, and the real-machine recognition rate is above 85%. The system successfully implements adaptive grasping for eggs (fragile items) and water bottles (rigid objects).
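A minimal sketch of the class-to-grip-threshold mapping implied above is shown below; the object classes and force values are hypothetical placeholders rather than the system's calibrated settings.

```python
# Hypothetical sketch of grip-strength selection: the recognised object class (and
# optionally its apparent size) selects a grip-force ceiling before the
# electromyographic control closes the hand. Classes and forces are illustrative.
GRIP_THRESHOLDS_N = {          # approximate grip-force ceilings per object class (newtons)
    "egg": 3.0,                # fragile: close gently and stop early
    "paper_cup": 5.0,
    "water_bottle": 15.0,      # rigid: a firmer grip is safe
    "default": 8.0,
}

def select_grip_threshold(object_class: str, size_scale: float = 1.0) -> float:
    """Pick a grip-force threshold from the recognised class, scaled by object size."""
    base = GRIP_THRESHOLDS_N.get(object_class, GRIP_THRESHOLDS_N["default"])
    return base * size_scale

# Example: the vision module reports an egg roughly 0.9x the reference size.
print(select_grip_threshold("egg", size_scale=0.9))   # -> 2.7 N ceiling for the controller
```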
Funding: Supported by the National Natural Science Foundation of China (Nos. 62301092 and 62301093).
Abstract: Vision Transformers (ViTs) have achieved remarkable success across various artificial intelligence-based computer vision applications. However, their demanding computational and memory requirements pose significant challenges for deployment on resource-constrained edge devices. Although post-training quantization (PTQ) provides a promising solution by reducing model precision with minimal calibration data, aggressive low-bit quantization typically leads to substantial performance degradation. To address this challenge, we present the truncated uniform-log2 quantizer and progressive bit-decline reconstruction method for vision Transformer quantization (TP-ViT). It is an innovative PTQ framework specifically designed for ViTs, featuring two key technical contributions: (1) the truncated uniform-log2 quantizer, a novel quantization approach that effectively handles outlier values in post-Softmax activations, significantly reducing quantization errors; and (2) the bit-decline optimization strategy, which employs transition weights to gradually reduce bit precision while maintaining model performance under extreme quantization conditions. Comprehensive experiments on image classification, object detection, and instance segmentation tasks demonstrate TP-ViT's superior performance compared to state-of-the-art PTQ methods, particularly in challenging 3-bit quantization scenarios. Our framework achieves a notable 6.18-percentage-point improvement in top-1 accuracy for ViT-small under 3-bit quantization. These results validate TP-ViT's robustness and general applicability, paving the way for more efficient deployment of ViT models in computer vision applications on edge hardware.
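To give a feel for why a log2-style quantizer suits post-Softmax values (which cluster near zero with a few large outliers), the sketch below applies a generic b-bit log2 quantizer to toy attention weights. This is an assumption-laden illustration, not the paper's truncated uniform-log2 formulation; the bit-width and tensor contents are arbitrary.

```python
# Hypothetical sketch of log2-style quantization of post-Softmax attention values,
# the kind of distribution the truncated uniform-log2 quantizer targets. This is a
# generic log2 quantizer for illustration, not the paper's exact formulation.
import torch

def log2_quantize(x: torch.Tensor, bits: int = 3):
    """Quantize values in (0, 1] to 2^-q, with q stored in `bits` bits."""
    eps = 1e-12
    q = torch.clamp(torch.round(-torch.log2(x.clamp_min(eps))), 0, 2 ** bits - 1)
    return 2.0 ** (-q), q                     # dequantized values and integer codes

attn = torch.softmax(torch.randn(4, 8), dim=-1)      # toy post-Softmax attention rows
dequant, codes = log2_quantize(attn, bits=3)
print(codes.unique())                                 # integer codes in [0, 7]
print((attn - dequant).abs().max())                   # worst-case quantization error
```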