AIM: To evaluate the efficacy of the total computer vision syndrome questionnaire (CVS-Q) score as a predictive tool for identifying individuals with symptomatic binocular vision anomalies and refractive errors. METHODS: A total of 141 healthy computer users underwent comprehensive clinical visual function assessments, including evaluations of refractive errors, accommodation (amplitude of accommodation, positive relative accommodation, negative relative accommodation, accommodative accuracy, and accommodative facility), and vergence (phoria, positive and negative fusional vergence, near point of convergence, and vergence facility). Total CVS-Q scores were recorded to explore potential associations between symptom scores and the aforementioned clinical visual function parameters. RESULTS: The cohort included 54 males (38.3%) with a mean age of 23.9±0.58y and 87 age-matched females (61.7%) with a mean age of 23.9±0.53y. The multiple regression model was statistically significant [R²=0.60, F=13.28, degrees of freedom (DF)=17, 122, P<0.001]. This indicates that 60% of the variance in total CVS-Q scores (reflecting reported symptoms) could be explained by four clinical measurements: amplitude of accommodation, positive relative accommodation, exophoria at distance and near, and positive fusional vergence at near. CONCLUSION: The total CVS-Q score is a valid and reliable tool for predicting the presence of various nonstrabismic binocular vision anomalies and refractive errors in symptomatic computer users.
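As background for the statistics reported above: for a regression with k predictors and n observations, the F-statistic follows from R² as F = (R²/k) / ((1-R²)/(n-k-1)). A minimal single-predictor sketch in Python; the data here are made up for illustration, not the study's measurements:

```python
# Illustrative only: ordinary least squares with one predictor, plus the
# model-level R^2 and F-statistic that a multiple regression would report.
def r_squared_and_f(x, y, k=1):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot
    f = (r2 / k) / ((1 - r2) / (n - k - 1))   # F with (k, n-k-1) DF
    return r2, f

# Hypothetical data: a symptom score rising with a clinical measure.
r2, f = r_squared_and_f([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 7.8, 10.0])
```

With more predictors the same R²-to-F relationship holds; only the degrees of freedom change.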
Lung cancer remains a major global health challenge, with early diagnosis crucial for improved patient survival. Traditional diagnostic techniques, including manual histopathology and radiological assessments, are prone to errors and variability. Deep learning methods, particularly Vision Transformers (ViT), have shown promise for improving diagnostic accuracy by effectively extracting global features. However, ViT-based approaches face challenges related to computational complexity and limited generalizability. This research proposes the DualSet ViT-PSO-SVM framework, integrating a ViT with dual attention mechanisms, Particle Swarm Optimization (PSO), and Support Vector Machines (SVM), aiming for efficient and robust lung cancer classification across multiple medical image datasets. The study utilized three publicly available datasets: LIDC-IDRI, LUNA16, and TCIA, encompassing computed tomography (CT) scans and histopathological images. Data preprocessing included normalization, augmentation, and segmentation. Dual attention mechanisms enhanced the ViT's feature extraction capabilities. PSO optimized feature selection, and SVM performed classification. Model performance was evaluated on individual and combined datasets, benchmarked against CNN-based and standard ViT approaches. The DualSet ViT-PSO-SVM significantly outperformed existing methods, achieving superior accuracy rates of 97.85% (LIDC-IDRI), 98.32% (LUNA16), and 96.75% (TCIA). Cross-dataset evaluations demonstrated strong generalization capabilities and stability across similar imaging modalities. The proposed framework effectively bridges advanced deep learning techniques with clinical applicability, offering a robust diagnostic tool for lung cancer detection, reducing complexity, and improving diagnostic reliability and interpretability.
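The PSO feature-selection stage described above can be illustrated with a binary PSO sketch. The fitness function below is a toy stand-in for the cross-validated SVM accuracy the framework would actually optimize, and the particle counts and coefficients are illustrative defaults, not the paper's settings:

```python
import math
import random

random.seed(0)

N_FEATURES = 8
INFORMATIVE = {0, 1, 2}   # hypothetical "useful" feature indices

def fitness(mask):
    # Stand-in for SVM validation accuracy: reward informative features,
    # lightly penalize redundant ones.
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & INFORMATIVE) - 0.3 * len(chosen - INFORMATIVE)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def binary_pso(n_particles=10, n_iter=30, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(n_particles)]
    vel = [[0.0] * N_FEATURES for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(N_FEATURES):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Binary PSO: velocity sets the probability of the bit.
                pos[i][d] = 1 if random.random() < sigmoid(vel[i][d]) else 0
            f = fitness(pos[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest, gbest_fit

best_mask, best_fit = binary_pso()
```

In the full pipeline, each particle's mask would select a subset of ViT features and the fitness would come from training an SVM on that subset.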
The rapid advancements in computer vision (CV) technology have transformed the traditional approaches to material microstructure analysis. This review outlines the history of CV and explores the applications of deep-learning (DL)-driven CV in four key areas of materials science: microstructure-based performance prediction, microstructure information generation, microstructure defect detection, and crystal structure-based property prediction. CV has significantly reduced the cost of traditional experimental methods used in material performance prediction. Moreover, recent progress made in generating microstructure images and detecting microstructural defects using CV has led to increased efficiency and reliability in material performance assessments. DL-driven CV models can accelerate the design of new materials with optimized performance by integrating predictions based on both crystal and microstructural data, thereby allowing for the discovery and innovation of next-generation materials. Finally, the review provides insights into the rapid interdisciplinary developments in the field of materials science and future prospects.
Vision Transformers (ViTs) have achieved remarkable success across various artificial intelligence-based computer vision applications. However, their demanding computational and memory requirements pose significant challenges for deployment on resource-constrained edge devices. Although post-training quantization (PTQ) provides a promising solution by reducing model precision with minimal calibration data, aggressive low-bit quantization typically leads to substantial performance degradation. To address this challenge, we present the truncated uniform-log2 quantizer and progressive bit-decline reconstruction method for vision Transformer quantization (TP-ViT). It is an innovative PTQ framework specifically designed for ViTs, featuring two key technical contributions: (1) a truncated uniform-log2 quantizer, a novel quantization approach that effectively handles outlier values in post-Softmax activations, significantly reducing quantization errors; (2) a bit-decline optimization strategy, which employs transition weights to gradually reduce bit precision while maintaining model performance under extreme quantization conditions. Comprehensive experiments on image classification, object detection, and instance segmentation tasks demonstrate TP-ViT's superior performance compared to state-of-the-art PTQ methods, particularly in challenging 3-bit quantization scenarios. Our framework achieves a notable 6.18-percentage-point improvement in top-1 accuracy for ViT-small under 3-bit quantization. These results validate TP-ViT's robustness and general applicability, paving the way for more efficient deployment of ViT models in computer vision applications on edge hardware.
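The general idea behind log2 quantization of post-Softmax activations, contribution (1) above, can be sketched as follows. This is an illustrative reconstruction of generic log2 quantization with an assumed truncation threshold, not the paper's exact TP-ViT formulation:

```python
import math

def log2_quantize(x, n_bits=3, trunc=2.0 ** -8):
    """Map a post-Softmax value x in (0, 1] to an unsigned log2 code.

    `trunc` clips the long tail of near-zero attention values before taking
    the log -- a stand-in for the truncation idea, threshold assumed here.
    """
    qmax = 2 ** n_bits - 1
    x = max(x, trunc)                  # truncate tiny outliers
    q = round(-math.log2(x))           # code 0 -> 1.0, 1 -> 0.5, 2 -> 0.25, ...
    return max(0, min(q, qmax))

def log2_dequantize(q):
    return 2.0 ** (-q)
```

Log2 codes suit Softmax outputs because attention weights are concentrated near zero with a few large outliers, exactly where a uniform grid wastes levels.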
Recent advances in deep learning have significantly improved flood detection and segmentation from aerial and satellite imagery. However, conventional convolutional neural networks (CNNs) often struggle in complex flood scenarios involving reflections, occlusions, or indistinct boundaries due to limited contextual modeling. To address these challenges, we propose a hybrid flood segmentation framework that integrates a Vision Transformer (ViT) encoder with a U-Net decoder, enhanced by a novel Flood-Aware Refinement Block (FARB). The FARB module improves boundary delineation and suppresses noise by combining residual smoothing with spatial-channel attention mechanisms. We evaluate our model on a UAV-acquired flood imagery dataset, demonstrating that the proposed ViT-UNet+FARB architecture outperforms existing CNN- and Transformer-based models in terms of accuracy and mean Intersection over Union (mIoU). Detailed ablation studies further validate the contribution of each component, confirming that the FARB design significantly enhances segmentation quality. Owing to its better performance and computational efficiency, the proposed framework is well-suited for flood monitoring and disaster response applications, particularly in resource-constrained environments.
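The mIoU metric used for evaluation is the per-class intersection-over-union averaged over classes. A minimal sketch on flattened label lists (toy labels, not the UAV dataset):

```python
def mean_iou(pred, gt, n_classes):
    """Mean Intersection over Union for flat label lists (e.g. flood / background)."""
    ious = []
    for c in range(n_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:                      # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious)

# 0 = background, 1 = flood; the third pixel is a false positive.
score = mean_iou([0, 1, 1, 0], [0, 1, 0, 0], n_classes=2)
```

For 2-D masks, the same computation applies after flattening each mask to a list of pixel labels.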
[Significance] In alignment with the national germplasm security strategy, current research efforts are accelerating the adoption of precision breeding in sheep. Within whole-genome selection, accurate phenotyping of body morphometrics is critical for assessing growth performance and breeding value. Traditional manual measurements are inefficient, prone to human error, and may cause stress to sheep, limiting their suitability for precision sheep management. By summarizing the applications of sheep body size measurement technologies and analyzing their development directions, this paper provides theoretical references and practical guidance for the research and application of non-contact sheep body size measurement. [Progress] This review synthesizes progress across three principal methodological paradigms: two-dimensional (2D) image-based techniques, three-dimensional (3D) point cloud-based approaches, and integrated 2D-3D fusion systems. 2D methods, employing either handcrafted geometric features or deep learning-based keypoint detection algorithms, are cost-effective and operationally simple but sensitive to variation in imaging conditions and unable to capture critical circumference metrics. 3D point-cloud approaches enable precise reconstruction of full animal morphology, supporting comprehensive body-size acquisition with higher accuracy, yet face challenges including high hardware costs, complex data workflows, and sensitivity to posture variability. Hybrid 2D-3D fusion systems combine semantic richness from RGB imagery with geometric completeness from point clouds. Having been effectively validated in other livestock species, e.g., cattle and pigs, these fusion systems have demonstrated excellent performance, providing important technical references and practical insights for sheep body size measurement. [Conclusions and Prospects] Firstly, future research should focus on constructing large-scale, high-quality datasets for sheep body size measurement that encompass diverse breeds, growth stages, and environmental conditions, thereby enhancing model robustness and generalization. Secondly, the development of lightweight artificial intelligence models is essential. Techniques such as model compression, quantization, and algorithmic optimization can substantially reduce computational complexity and storage requirements, facilitating deployment in resource-constrained environments. Thirdly, the 3D point cloud processing pipeline should be streamlined to improve the efficiency of data acquisition, filtering, registration, and segmentation, while promoting the integration of low-cost, high-resilience vision systems into practical farming scenarios. Fourthly, specific emphasis should be placed on improving the accuracy of curved-dimensional measurements, such as chest circumference, abdominal circumference, and shank circumference, through advances in pose standardization, refined 3D segmentation strategies, and multimodal data fusion. Finally, the cross-fertilization of sheep body size measurement technologies with analogous methods for other livestock species offers a promising pathway for mutual learning and collaborative innovation, accelerating the industrialization of automated sheep morphometric systems and supporting the development of intelligent, data-driven pasture management practices.
AIM: To investigate the association between functional outcomes and postoperative patient satisfaction 5y after small incision lenticule extraction (SMILE) and femtosecond laser-assisted in situ keratomileusis (FS-LASIK). METHODS: This is a cross-sectional study. The patients underwent basic ophthalmic examinations, axial length measurement, wide-field fundus photography, and accommodation function testing. Behavioral habits data were collected using a self-administered questionnaire, and visual symptoms were assessed with the Quality of Vision (QoV) questionnaire. Postoperative satisfaction was also recorded. RESULTS: A total of 410 subjects [820 eyes, 160 males (39.02%) and 250 females (60.98%)] who had undergone SMILE or FS-LASIK 5y ago were enrolled. The mean (standard deviation, SD) age of all patients was 29.83y (6.69). The mean (SD) preoperative manifest SE was -5.80 (2.04) diopters (D; range: -0.88 to -13.75). Patient satisfaction at 5y after undergoing SMILE or FS-LASIK was 91.70%. Patients were categorized into two groups: a dissatisfied group and a satisfied group. Significant differences were observed between the two groups in terms of age (P=0.012), sex (P=0.021), preoperative degree of myopia (P=0.049), postoperative visual symptoms (frequency, P=0.043; severity, P<0.001; bothersomeness, P=0.018), difficulty driving at night (P=0.001), and accommodative amplitude (AMP, P=0.020). Multivariate analysis confirmed that female sex (P=0.024), severity of visual symptoms (P=0.009), and difficulty driving at night (P=0.006) were significantly associated with lower satisfaction. The dissatisfied group showed higher rates of starbursts, double or multiple images, and high myopia, but lower age. The frequency, severity, and bothersomeness of distortion decreased with increasing age. CONCLUSION: Patient satisfaction 5y after SMILE and FS-LASIK is high and stable. Difficulty driving at night, sex, and severity of visual symptoms are important factors influencing patient satisfaction. Special attention should be paid to younger highly myopic female patients, particularly those with starbursts and double or multiple images. It is crucial to monitor postoperative visual outcomes and provide patients with comprehensive preoperative counseling to enhance long-term satisfaction.
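Multivariate analyses of this kind are typically logistic models relating patient factors to the odds of dissatisfaction. A minimal one-predictor sketch fitted by gradient descent; the data are fabricated for illustration and exp(w) plays the role of an odds ratio:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=500):
    """One-predictor logistic regression via batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)     # predicted probability of label 1
            gw += (p - y) * x          # gradient of the log-loss w.r.t. w
            gb += p - y
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Hypothetical: higher symptom severity (x) -> dissatisfaction (label 1).
w, b = fit_logistic([-2.0, -1.0, 1.0, 2.0], [0, 0, 1, 1])
odds_ratio = math.exp(w)   # odds multiplier per unit of the predictor
```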
In the competitive retail industry of the digital era, data-driven insights into gender-specific customer behavior are essential. They support the optimization of store performance, layout design, product placement, and targeted marketing. However, existing computer vision solutions often rely on facial recognition to gather such insights, raising significant privacy and ethical concerns. To address these issues, this paper presents a privacy-preserving customer analytics system through two key strategies. First, we deploy a deep learning framework using YOLOv9s, trained on the RCA-TVGender dataset. Cameras are positioned perpendicular to observation areas to reduce facial visibility while maintaining accurate gender classification. Second, we apply AES-128 encryption to customer position data, ensuring secure access and regulatory compliance. Our system achieved strong overall performance, with 81.5% mAP@50, 77.7% precision, and 75.7% recall. Moreover, a 90-min observational study confirmed the system's ability to generate privacy-protected heatmaps revealing distinct behavioral patterns between male and female customers. For instance, women spent more time in certain areas and showed interest in different products. These results confirm the system's effectiveness in enabling personalized layout and marketing strategies without compromising privacy.
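A heatmap like the one described is, at its core, a 2-D histogram of position samples after authorized decryption. A minimal binning sketch, with made-up floor dimensions, grid size, and positions rather than the system's actual data:

```python
def position_heatmap(positions, width, height, bins_x, bins_y):
    """Bin (x, y) floor positions into a coarse occupancy grid.

    Coarse binning also aids privacy: individual trajectories dissolve
    into aggregate counts per cell.
    """
    grid = [[0] * bins_x for _ in range(bins_y)]
    for x, y in positions:
        cx = min(int(x / width * bins_x), bins_x - 1)   # clamp edge cases
        cy = min(int(y / height * bins_y), bins_y - 1)
        grid[cy][cx] += 1
    return grid

# Hypothetical positions on a 10 m x 10 m floor, binned into a 2 x 2 grid.
heatmap = position_heatmap([(0.0, 0.0), (5.0, 5.0), (9.9, 9.9)], 10.0, 10.0, 2, 2)
```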
Accurate identification of breast cancer molecular subtypes is critical for guiding treatment and improving patient outcomes. Traditional molecular subtyping via immunohistochemistry (IHC) testing is invasive, time-consuming, and may not fully represent tumor heterogeneity. This study proposes a non-invasive approach using digital mammography images and deep learning algorithms for classifying breast cancer molecular subtypes. Four pretrained models, including two Convolutional Neural Networks (MobileNet_V3_Large and VGG-16) and two Vision Transformers (ViT_B_16 and ViT_Base_Patch16_Clip_224), were fine-tuned to classify images into HER2-enriched, Luminal, Normal-like, and Triple Negative subtypes. Hyperparameter tuning, including learning rate adjustment and layer freezing strategies, was applied to optimize performance. Among the evaluated models, ViT_Base_Patch16_Clip_224 achieved the highest test accuracy (94.44%), with equally high precision, recall, and F1-score of 0.94, demonstrating excellent generalization. MobileNet_V3_Large achieved the same accuracy but showed less training stability. In contrast, VGG-16 recorded the lowest performance, indicating a limitation in its generalizability for this classification task. The study also highlighted the superior performance of the Vision Transformer models over CNNs, particularly due to their ability to capture global contextual features and the benefit of CLIP-based pretraining in ViT_Base_Patch16_Clip_224. To enhance clinical applicability, a graphical user interface (GUI) named "BCMS Dx" was developed for streamlined subtype prediction. Deep learning applied to mammography has proven effective for accurate and non-invasive molecular subtyping. The proposed Vision Transformer-based model and supporting GUI offer a promising direction for augmenting diagnostic workflows, minimizing the need for invasive procedures, and advancing personalized breast cancer management.
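The precision, recall, and F1 figures quoted above follow directly from per-class true/false positive and false negative counts. A minimal sketch with hypothetical counts, not the study's confusion matrix:

```python
def precision_recall_f1(tp, fp, fn):
    """Per-class classification metrics from raw counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical counts for one subtype, e.g. Luminal vs. the rest.
p, r, f1 = precision_recall_f1(tp=9, fp=1, fn=1)
```

Macro-averaging these per-class values over the four subtypes yields the aggregate scores reported for each model.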
AIM: To evaluate the differences in near point of convergence (NPC), fusional vergence, saccadic eye movements, versional eye movements, and heterophoria between patients diagnosed with Parkinson's disease (PD) and healthy subjects. METHODS: A cross-sectional comparative study was conducted, enrolling two cohorts: a PD group and a healthy control group. The PD group was recruited via non-random convenience sampling, while the control group was selected randomly from individuals without PD. All participants were screened according to predefined inclusion and exclusion criteria before undergoing a comprehensive optometric assessment, which included measurements of uncorrected visual acuity, corrected visual acuity, and objective and subjective refraction. Subsequently, binocular vision function evaluations were performed, covering NPC measurement, fusional vergence reserve assessment at both distance and near, saccadic eye movement testing, and versional eye movement and heterophoria assessment. RESULTS: A total of 42 PD patients and 41 healthy controls were included in the final analysis. The two groups were well-matched in terms of sex distribution [29 males (69.0%) in the PD group vs 29 males (70.7%) in the control group, P=0.867] and mean age (55.3±9.6y in the PD group vs 54.9±9.8y in the control group, P=0.866). The prevalence of abnormal versional eye movements was significantly higher in the PD group than in the control group (23.81%, 95%CI: 12.05%-39.45% vs 7.32%, 95%CI: 1.54%-19.92%; P=0.025). Near exophoria was more prevalent in PD patients (61.90%, 95%CI: 45.64%-76.43%) than in controls (17.07%, 95%CI: 7.15%-32.06%), with a significant difference [odds ratio (OR)=7.99; 95%CI: 2.83-21.99; P<0.001]. The mean NPC was significantly greater (more receded) in the PD group than in the control group (9.01±3.74 cm vs 7.20±2.15 cm; P=0.007). A statistically significant positive correlation was observed between PD severity and NPC values (Pearson's correlation coefficient=0.309; P=0.046). Except for the distance base-out break and distance base-out recovery values, all other fusional vergence parameters were significantly lower in the PD group than in the control group (P<0.05). The mean saccadic test score was significantly lower in PD patients than in controls (3.29±0.57 vs 3.78±0.42; P<0.001). Among all fusional vergence indices, near base-in blur yielded the highest area under the curve (AUC=0.877), with a sensitivity of 69% and specificity of 90%, followed by distance base-out blur (AUC=0.824, sensitivity=97.6%, specificity=66.7%), near base-out blur (AUC=0.814, sensitivity=76.2%, specificity=72.7%), near base-out break (AUC=0.749, sensitivity=78.6%, specificity=67.6%), and near base-out recovery (AUC=0.749, sensitivity=95.2%, specificity=50%). CONCLUSION: PD is associated with significant binocular vision function impairment, with receded NPC and reduced near fusional vergence reserves being the most prominent disorders. These findings highlight the potential value of binocular vision assessment as a non-invasive biomarker for the early detection and clinical monitoring of PD.
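The AUC values reported for the fusional vergence indices can be computed non-parametrically as the probability that a randomly chosen case outscores a randomly chosen control (the Mann-Whitney formulation). A small sketch with made-up scores, not the study's measurements:

```python
def auc(scores, labels):
    """Rank-based AUC: fraction of positive-negative pairs ranked correctly."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0   # ties count half
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical index values: label 1 = PD patient, 0 = control.
score = auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])
```

Sweeping a threshold over such scores and reading off sensitivity and specificity at each cut-point traces the ROC curve whose area this computes.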
Foreign body classification on coal conveyor belts is a critical component of intelligent coal mining systems. Previous approaches have primarily utilized convolutional neural networks (CNNs) to effectively integrate spatial and semantic information. However, the performance of CNN-based methods remains limited in classification accuracy, primarily due to insufficient exploration of local image characteristics. Unlike CNNs, the Vision Transformer (ViT) captures discriminative features by modeling relationships between local image patches. However, such methods typically require a large number of training samples to perform effectively. In the context of foreign body classification on coal conveyor belts, the limited availability of training samples hinders the full exploitation of ViT's capabilities. To address this issue, we propose an efficient approach, termed Key Part-level Attention Vision Transformer (KPA-ViT), which incorporates key local information into the transformer architecture to enrich the training information. It comprises three main components: a key-point detection module, a key local mining module, and an attention module. To extract key local regions, a key-point detection strategy is first employed to identify the positions of key points. Subsequently, the key local mining module extracts the relevant local features based on these detected points. Finally, an attention module composed of self-attention and cross-attention blocks is introduced to integrate global and key part-level information, thereby enhancing the model's ability to learn discriminative features. Compared to recent transformer-based frameworks such as ViT, Swin-Transformer, and EfficientViT, the proposed KPA-ViT achieves performance improvements of 9.3%, 6.6%, and 2.8%, respectively, on the CUMT-BelT dataset, demonstrating its effectiveness.
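The self- and cross-attention blocks that fuse global and key part-level information both reduce to scaled dot-product attention; in cross-attention the query comes from one token stream (e.g. global) and the keys/values from the other (e.g. key parts). A minimal single-query sketch with toy vectors, not the KPA-ViT implementation:

```python
import math

def softmax(xs):
    m = max(xs)                        # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(query, keys, values):
    """Single-head scaled dot-product attention for one query vector."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)          # how much each key part contributes
    dim_v = len(values[0])
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(dim_v)]

# A global token (query) attending over two key-part tokens.
out = cross_attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0], [0.0]])
```

In the full model this runs per head over batches of learned projections; the sketch keeps only the core weighting mechanism.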
Fund (CVS-Q study): Supported by the Ongoing Research Funding Program (ORFFT-2025-054-1), King Saud University, Riyadh, Saudi Arabia.
Fund (materials CV review): Financially supported by the National Science Fund for Distinguished Young Scholars, China (No. 52025041); the National Natural Science Foundation of China (Nos. 52450003, U2341267, and 52174294); the National Postdoctoral Program for Innovative Talents, China (No. BX20240437); and the Fundamental Research Funds for the Central Universities, China (Nos. FRF-IDRY-23-037 and FRF-TP-20-02C2).
Fund (TP-ViT study): Supported by the National Natural Science Foundation of China (Nos. 62301092 and 62301093).
Fund (flood segmentation study): Supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2024-00405278), and partially supported by the Jeju Industry-University Convergence District Project for Promoting Industry-Campus Cooperation, funded by the Ministry of Trade, Industry and Energy (MOTIE, Korea) (Project Number: P0029950).
Abstract: [Significance] In alignment with the national germplasm security strategy, current research efforts are accelerating the adoption of precision breeding in sheep. Within whole-genome selection, accurate phenotyping of body morphometrics is critical for assessing growth performance and breeding value. Traditional manual measurements are inefficient, prone to human error, and may cause stress to sheep, limiting their suitability for precision sheep management. By summarizing the applications of sheep body size measurement technologies and analyzing their development directions, this paper provides theoretical references and practical guidance for the research and application of non-contact sheep body size measurement. [Progress] This review synthesizes progress across three principal methodological paradigms: two-dimensional (2D) image-based techniques, three-dimensional (3D) point cloud-based approaches, and integrated 2D-3D fusion systems. 2D methods, employing either handcrafted geometric features or deep learning-based keypoint detection algorithms, are cost-effective and operationally simple but sensitive to variation in imaging conditions and unable to capture critical circumference metrics. 3D point-cloud approaches enable precise reconstruction of full animal morphology, supporting comprehensive body-size acquisition with higher accuracy, yet face challenges including high hardware costs, complex data workflows, and sensitivity to posture variability. Hybrid 2D-3D fusion systems combine the semantic richness of RGB imagery with the geometric completeness of point clouds. Having been effectively validated in other livestock species, e.g., cattle and pigs, these fusion systems have demonstrated excellent performance, providing important technical references and practical insights for sheep body size measurement. [Conclusions and Prospects] Firstly, future research should focus on constructing large-scale, high-quality datasets for sheep body size measurement that encompass diverse breeds, growth stages, and environmental conditions, thereby enhancing model robustness and generalization. Secondly, the development of lightweight artificial intelligence models is essential. Techniques such as model compression, quantization, and algorithmic optimization can substantially reduce computational complexity and storage requirements, facilitating deployment in resource-constrained environments. Thirdly, the 3D point cloud processing pipeline should be streamlined to improve the efficiency of data acquisition, filtering, registration, and segmentation, while promoting the integration of low-cost, high-resilience vision systems into practical farming scenarios. Fourthly, specific emphasis should be placed on improving the accuracy of curved-surface measurements, such as chest circumference, abdominal circumference, and shank circumference, through advances in pose standardization, refined 3D segmentation strategies, and multimodal data fusion. Finally, the cross-fertilization of sheep body size measurement technologies with analogous methods for other livestock species offers a promising pathway for mutual learning and collaborative innovation, accelerating the industrialization of automated sheep morphometric systems and supporting the development of intelligent, data-driven pasture management practices.
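The 2D keypoint-based pipelines surveyed above ultimately convert detected landmarks into linear body measurements. The sketch below shows one such conversion under stated assumptions: the keypoint names, pixel coordinates, and cm-per-pixel calibration factor are all hypothetical, not from any system in the review.

```python
import math

# Hypothetical sketch: turning two detected 2D keypoints into a body-length
# estimate via pixel distance times a calibration factor. All names and
# values below are illustrative assumptions.

def body_length_cm(keypoints, cm_per_pixel):
    """Estimate body length from shoulder and hip keypoints (pixel coords)."""
    return math.dist(keypoints["shoulder"], keypoints["hip"]) * cm_per_pixel

kps = {"shoulder": (120.0, 340.0), "hip": (520.0, 340.0)}  # toy detections
print(body_length_cm(kps, cm_per_pixel=0.25))  # 400 px scaled to cm
```

This also illustrates the review's stated limitation of 2D methods: a single-view pixel distance can yield lengths and heights, but circumference metrics require 3D geometry.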
Funding: Supported by Research and Transformation Application of Capital Clinical Diagnosis and Treatment Technology by the Beijing Municipal Commission of Science and Technology (No. Z201100005520043).
Abstract: AIM: To investigate the association between functional outcomes and postoperative patient satisfaction 5y after small incision lenticule extraction (SMILE) and femtosecond laser-assisted in situ keratomileusis (FS-LASIK). METHODS: This was a cross-sectional study. The patients underwent basic ophthalmic examinations, axial length measurement, wide-field fundus photography, and accommodation function testing. Behavioral habits data were collected using a self-administered questionnaire, and visual symptoms were assessed with the Quality of Vision (QoV) questionnaire. Postoperative satisfaction was also recorded. RESULTS: Totally 410 subjects [820 eyes, 160 males (39.02%) and 250 females (60.98%)] who had undergone SMILE or FS-LASIK 5y ago were enrolled. The mean (standard deviation, SD) age of all patients was 29.83y (6.69). The mean (SD) preoperative manifest SE was -5.80 (2.04) diopters (D; range: -0.88 to -13.75). Patient satisfaction at 5y after undergoing SMILE or FS-LASIK was 91.70%. Patients were categorized into two groups: a dissatisfied group and a satisfied group. Significant differences were observed between the two groups in terms of age (P=0.012), sex (P=0.021), preoperative degree of myopia (P=0.049), postoperative visual symptoms (frequency, P=0.043; severity, P<0.001; bothersomeness, P=0.018), difficulty driving at night (P=0.001), and accommodative amplitude (AMP, P=0.020). Multivariate analysis confirmed that female sex (P=0.024), severity of visual symptoms (P=0.009), and difficulty driving at night (P=0.006) were significantly associated with lower satisfaction. The dissatisfied group showed higher rates of starbursts, double or multiple images, and high myopia, but lower age. The frequency, severity, and bothersomeness of distortion decreased with increasing age. CONCLUSION: Patient satisfaction 5y after SMILE and FS-LASIK is high and stable. Difficulty driving at night, sex, and severity of visual symptoms are important factors influencing patient satisfaction. Special attention should be paid to younger, highly myopic female patients, particularly those with starbursts and double or multiple images. It is crucial to monitor postoperative visual outcomes and provide patients with comprehensive preoperative counseling to enhance long-term satisfaction.
Abstract: In the competitive retail industry of the digital era, data-driven insights into gender-specific customer behavior are essential. They support the optimization of store performance, layout design, product placement, and targeted marketing. However, existing computer vision solutions often rely on facial recognition to gather such insights, raising significant privacy and ethical concerns. To address these issues, this paper presents a privacy-preserving customer analytics system built on two key strategies. First, we deploy a deep learning framework using YOLOv9s, trained on the RCA-TVGender dataset. Cameras are positioned perpendicular to observation areas to reduce facial visibility while maintaining accurate gender classification. Second, we apply AES-128 encryption to customer position data, ensuring secure access and regulatory compliance. Our system achieved strong overall performance, with 81.5% mAP@50, 77.7% precision, and 75.7% recall. Moreover, a 90-min observational study confirmed the system's ability to generate privacy-protected heatmaps revealing distinct behavioral patterns between male and female customers. For instance, women spent more time in certain areas and showed interest in different products. These results confirm the system's effectiveness in enabling personalized layout and marketing strategies without compromising privacy.
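The privacy-protected heatmaps described here can be produced by binning anonymized (x, y) floor positions into a coarse occupancy grid. A minimal sketch follows; the grid dimensions and sample positions are illustrative assumptions, and the paper's AES-128 encryption step is omitted.

```python
# Sketch: aggregate anonymized customer positions into a visit-count grid.
# Floor size, grid resolution, and positions are toy values.

def build_heatmap(positions, width, height, cols, rows):
    """Bin (x, y) floor positions into a rows x cols visit-count grid."""
    grid = [[0] * cols for _ in range(rows)]
    for x, y in positions:
        c = min(int(x / width * cols), cols - 1)   # clamp edge points
        r = min(int(y / height * rows), rows - 1)
        grid[r][c] += 1
    return grid

# Toy positions on a 10 m x 6 m floor, binned into a 2 x 2 grid.
pts = [(1.0, 1.0), (2.0, 5.0), (9.0, 1.0), (9.5, 5.5), (8.0, 4.0)]
print(build_heatmap(pts, width=10.0, height=6.0, cols=2, rows=2))
```

Because only bin counts survive aggregation, such a heatmap exposes dwell patterns without retaining any per-individual trajectory.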
Funding: Funded by the Ministry of Higher Education (MoHE) Malaysia through the Fundamental Research Grant Scheme—Early Career Researcher (FRGS-EC), grant number FRGS-EC/1/2024/ICT02/UNIMAP/02/8.
Abstract: Accurate determination of breast cancer molecular subtypes is critical for guiding treatment and improving patient outcomes. Traditional molecular subtyping via the immunohistochemistry (IHC) test is invasive, time-consuming, and may not fully represent tumor heterogeneity. This study proposes a non-invasive approach using digital mammography images and deep learning algorithms for classifying breast cancer molecular subtypes. Four pretrained models, including two Convolutional Neural Networks (MobileNet_V3_Large and VGG-16) and two Vision Transformers (ViT_B_16 and ViT_Base_Patch16_Clip_224), were fine-tuned to classify images into HER2-enriched, Luminal, Normal-like, and Triple Negative subtypes. Hyperparameter tuning, including learning rate adjustment and layer freezing strategies, was applied to optimize performance. Among the evaluated models, ViT_Base_Patch16_Clip_224 achieved the highest test accuracy (94.44%), with equally high precision, recall, and F1-score of 0.94, demonstrating excellent generalization. MobileNet_V3_Large achieved the same accuracy but showed less training stability. In contrast, VGG-16 recorded the lowest performance, indicating a limitation in its generalizability for this classification task. The study also highlighted the superior performance of the Vision Transformer models over CNNs, attributable to their ability to capture global contextual features and the benefit of CLIP-based pretraining in ViT_Base_Patch16_Clip_224. To enhance clinical applicability, a graphical user interface (GUI) named "BCMS Dx" was developed for streamlined subtype prediction. Deep learning applied to mammography has proven effective for accurate and non-invasive molecular subtyping. The proposed Vision Transformer-based model and supporting GUI offer a promising direction for augmenting diagnostic workflows, minimizing the need for invasive procedures, and advancing personalized breast cancer management.
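The precision, recall, and F1 figures reported for these multi-class subtype models are typically macro-averaged over classes. A minimal sketch of that computation follows; the label lists are toy data, not the study's predictions.

```python
# Macro-averaged precision / recall / F1 for multi-class labels.
# Subtype names are abbreviated; the example labels are hypothetical.

def macro_prf(y_true, y_pred):
    labels = sorted(set(y_true) | set(y_pred))
    ps, rs, fs = [], [], []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        ps.append(prec); rs.append(rec); fs.append(f1)
    n = len(labels)
    return sum(ps) / n, sum(rs) / n, sum(fs) / n

y_true = ["Luminal", "HER2", "Luminal", "TN", "HER2", "TN"]
y_pred = ["Luminal", "HER2", "HER2", "TN", "HER2", "Luminal"]
print(tuple(round(v, 3) for v in macro_prf(y_true, y_pred)))
```

Macro averaging weights each subtype equally, which matters here because rare subtypes (e.g., Normal-like) would otherwise be drowned out by the majority class.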
Funding: Supported by Mashhad University of Medical Sciences.
Abstract: AIM: To evaluate the differences in near point of convergence (NPC), fusional vergence, saccadic eye movements, versional eye movements, and heterophoria between patients diagnosed with Parkinson's disease (PD) and healthy subjects. METHODS: A cross-sectional comparative study was conducted, enrolling two cohorts: a PD group and a healthy control group. The PD group was recruited via non-random convenience sampling, while the control group was selected randomly from individuals without PD. All participants were screened according to predefined inclusion and exclusion criteria before undergoing a comprehensive optometric assessment, which included measurements of uncorrected visual acuity, corrected visual acuity, and objective and subjective refraction. Subsequently, binocular vision function evaluations were performed, covering NPC measurement, fusional vergence reserve assessment at both distance and near, saccadic eye movement testing, and versional eye movement and heterophoria assessment. RESULTS: A total of 42 PD patients and 41 healthy controls were included in the final analysis. The two groups were well matched in terms of sex distribution [29 males (69.0%) in the PD group vs 29 males (70.7%) in the control group, P=0.867] and mean age (55.3±9.6y in the PD group vs 54.9±9.8y in the control group, P=0.866). The prevalence of abnormal versional eye movements was significantly higher in the PD group than in the control group (23.81%, 95%CI: 12.05%-39.45% vs 7.32%, 95%CI: 1.54%-19.92%; P=0.025). Near exophoria was more prevalent in PD patients (61.90%, 95%CI: 45.64%-76.43%) than in controls (17.07%, 95%CI: 7.15%-32.06%), with a significant difference [odds ratio (OR)=7.99; 95%CI: 2.83-21.99; P<0.001]. The mean NPC was significantly greater (more receded) in the PD group than in the control group (9.01±3.74 cm vs 7.20±2.15 cm; P=0.007). A statistically significant positive correlation was observed between PD severity and NPC values (Pearson's correlation coefficient=0.309; P=0.046). Except for distance base-out break and distance base-out recovery values, all other fusional vergence parameters were significantly lower in the PD group than in the control group (P<0.05). The mean saccadic test score was significantly lower in PD patients than in controls (3.29±0.57 vs 3.78±0.42; P<0.001). Among all fusional vergence indices, near base-in blur yielded the highest area under the curve (AUC=0.877), with a sensitivity of 69% and specificity of 90%, followed by distance base-out blur (AUC=0.824, sensitivity=97.6%, specificity=66.7%), near base-out blur (AUC=0.814, sensitivity=76.2%, specificity=72.7%), near base-out break (AUC=0.749, sensitivity=78.6%, specificity=67.6%), and near base-out recovery (AUC=0.749, sensitivity=95.2%, specificity=50%). CONCLUSION: PD is associated with significant binocular vision function impairment, with receded NPC and reduced near fusional vergence reserves being the most prominent disorders. These findings highlight the potential value of binocular vision assessment as a non-invasive biomarker for the early detection and clinical monitoring of PD.
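The near-exophoria odds ratio and its 95% CI can be checked from a 2x2 table using the standard Woolf (log) method. In the sketch below the counts (26/16 for PD, 7/34 for controls) are reconstructed from the reported percentages and group sizes, so the OR comes out close to, but not exactly at, the published 7.99.

```python
import math

# Odds ratio with Woolf 95% CI for a 2x2 table [[a, b], [c, d]].
# Counts are reconstructed from reported percentages, hence approximate.

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Return (OR, lower, upper) for exposed/unexposed counts in two groups."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)

# Near exophoria: PD 26 of 42, controls 7 of 41 (reconstructed counts).
or_, lo, hi = odds_ratio_ci(26, 16, 7, 34)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

The confidence limits computed this way land on the reported 2.83-21.99 interval, supporting the reconstruction.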
Funding: Funded by the National Key Research and Development Program of China (grant number 2023YFC2907600), the National Natural Science Foundation of China (grant number 52504132), and the Tiandi Science and Technology Co., Ltd. Science and Technology Innovation Venture Capital Special Project (grant number 2023-TD-ZD011-004).
Abstract: Foreign body classification on coal conveyor belts is a critical component of intelligent coal mining systems. Previous approaches have primarily utilized convolutional neural networks (CNNs) to effectively integrate spatial and semantic information. However, the performance of CNN-based methods remains limited in classification accuracy, primarily due to insufficient exploration of local image characteristics. Unlike CNNs, the Vision Transformer (ViT) captures discriminative features by modeling relationships between local image patches. However, such methods typically require a large number of training samples to perform effectively. In the context of foreign body classification on coal conveyor belts, the limited availability of training samples hinders the full exploitation of ViT's capabilities. To address this issue, we propose an efficient approach, termed Key Part-level Attention Vision Transformer (KPA-ViT), which incorporates key local information into the transformer architecture to enrich the training signal. It comprises three main components: a key-point detection module, a key local mining module, and an attention module. To extract key local regions, a key-point detection strategy is first employed to identify the positions of key points. Subsequently, the key local mining module extracts the relevant local features based on these detected points. Finally, an attention module composed of self-attention and cross-attention blocks is introduced to integrate global and key part-level information, thereby enhancing the model's ability to learn discriminative features. Compared to recent transformer-based frameworks such as ViT, Swin-Transformer, and EfficientViT, the proposed KPA-ViT achieves performance improvements of 9.3%, 6.6%, and 2.8%, respectively, on the CUMT-BelT dataset, demonstrating its effectiveness.
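The self- and cross-attention blocks in architectures like KPA-ViT are built from scaled dot-product attention. The sketch below shows that core operation with plain Python lists standing in for tensors; it is a generic illustration, not the paper's implementation, and all vectors are toy values.

```python
import math

# Scaled dot-product attention over token vectors (generic illustration).
# A query token attends over key/value tokens; weights come from a softmax
# of query-key similarity scaled by sqrt(dimension).

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(query, keys, values):
    """Weighted sum of value vectors, weights from query-key similarity."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

# A global token attending over two key-part tokens; part 1 aligns with it.
query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
print([round(v, 3) for v in attention(query, keys, values)])
```

In a cross-attention block, the query would come from the global branch and the keys/values from the key part-level features, so the output blends the two streams with similarity-driven weights.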