In the aerospace field, residual stress directly affects the strength, fatigue life and dimensional stability of thin-walled structural components and is a key factor in ensuring flight safety and reliability. Current research on residual stress, in China and abroad, focuses mainly on optimizing traditional detection technology, controlling stress during the manufacturing process, and evaluating service performance. Within this body of work, studies of residual stress detection methods concentrate on improving the accuracy, sensitivity, reliability and other performance aspects of existing techniques, which still face challenges such as a very limited detection range, low efficiency, large errors and a narrow range of applications.
Legume foods are not only trendy but also rich in nutrients and offer unique health benefits. Nevertheless, allergies to soy and other legumes have emerged as critical issues in food safety, presenting significant challenges to the food processing industry and impacting consumer health. The complexity of legume allergens, coupled with inadequate allergen identification methods and the absence of robust detection and evaluation systems, complicates the management of these allergens. Here, we provide a comprehensive and critical review covering various aspects of legume allergy, including the types of legume allergens, the mechanisms behind these allergies, and the immunoglobulin E (IgE)-binding epitopes involved, and we summarize and discuss detection techniques and the impact of different processing techniques on sensitization to legume proteins. Furthermore, this paper provides an overview of research advances in diagnostic and therapeutic strategies for legume allergens and discusses current challenges and prospects for studying legume allergens. Funding: National Natural Science Foundation of China (32460627 and 32272359); Special Research Fund of Natural Science (Special Post) of Guizhou University (2022)54.
Deep learning-based object detection has revolutionized various fields, including agriculture. This paper presents a systematic review, based on the PRISMA 2020 approach, of object detection techniques in agriculture, exploring the evolution of different methods and applications over the past three years and highlighting the shift from conventional computer vision to deep learning-based methodologies owing to their enhanced efficacy in real time. The review emphasizes the integration of advanced models, such as You Only Look Once (YOLO) v9 and v10, EfficientDet, Transformer-based models, and hybrid frameworks that improve precision, accuracy, and scalability for crop monitoring and disease detection. The review also highlights benchmark datasets and evaluation metrics. It addresses limitations, like domain adaptation challenges, dataset heterogeneity, and occlusion, while offering insights into prospective research avenues, such as multimodal learning, explainable AI, and federated learning. Furthermore, the main aim of this paper is to serve as a thorough resource guide for scientists, researchers, and stakeholders implementing deep learning-based object detection methods for the development of intelligent, robust, and sustainable agricultural systems.
As modern power systems grow in complexity, accurate and efficient fault detection has become increasingly important. While many existing reviews focus on a single modality, this paper presents a comprehensive survey from a dual-modality perspective, covering infrared imaging and voiceprint analysis, two complementary, non-contact techniques that capture different fault characteristics. Infrared imaging excels at detecting thermal anomalies, while voiceprint signals provide insight into mechanical vibrations and internal discharge phenomena. We review both traditional signal processing and deep learning-based approaches for each modality, categorized by key processing stages such as feature extraction and classification. The paper highlights how these modalities address distinct fault types and how they may be fused to improve robustness and accuracy. Representative datasets are summarized, and practical challenges such as noise interference, limited fault samples, and deployment constraints are discussed. By offering a cross-modal, comparative analysis, this work aims to bridge fragmented research and guide the future development of intelligent fault detection systems. The review concludes with research trends, including multimodal fusion, lightweight models, and self-supervised learning. Funding: Science and Technology Project of State Grid Corporation of China (52094024003D).
Purpose – For the concrete mix commonly used in railway tunnel linings, concrete model specimens were made, and rebound (springback) and core drilling tests were conducted at different ages. The rebound strength was compared with the compressive strength of core samples with a diameter of 100 mm and a height-to-diameter ratio of 1:1. By comparing the measured strength values, the relationship between the values obtained by the different strength measurement methods was analyzed. Design/methodology/approach – A comparative test of the core drilling method and the rebound method was conducted on the side walls of tunnel linings in several railways under construction to study the feasibility of the rebound method in engineering quality supervision and inspection. Findings – Tests showed that the rebound strength was positively correlated with the core drilling strength. The core drilling test strength was significantly higher than the rebound test strength, and the strength continued to increase after 56 days of age. The rebound method is suitable for general surveys of concrete strength during construction but not for direct supervision and inspection. Originality/value – By studying the correlation between the test strengths of tunnel lining concrete obtained with the two methods, the differences between the results of the different methods are identified, providing a reference for the testing and evaluation of tunnel lining strength in railway engineering.
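To illustrate how a rebound-to-core-strength conversion of the kind studied here can be fitted, the following Python sketch performs a least-squares linear fit on paired measurements; the sample values and the linear form are illustrative assumptions, not data or formulas from this work.

```python
import numpy as np

# Hypothetical paired measurements (MPa): rebound-method strength vs. drilled-core strength.
# These numbers are placeholders for illustration only, not data from the study.
rebound = np.array([22.0, 25.5, 28.0, 30.5, 33.0, 35.5, 38.0])
core = np.array([27.5, 31.0, 34.5, 37.0, 40.5, 43.0, 46.5])

# Least-squares linear fit: core_strength ~ a * rebound_strength + b
a, b = np.polyfit(rebound, core, deg=1)

# Pearson correlation coefficient quantifies how strongly the two methods agree.
r = np.corrcoef(rebound, core)[0, 1]

print(f"conversion curve: core ~ {a:.2f} * rebound + {b:.2f}, r = {r:.3f}")

# Applying the fitted curve to a new rebound reading gives an estimated core strength.
print(f"estimated core strength for rebound 32 MPa: {a * 32 + b:.1f} MPa")
```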
Since the beginning of the 21st century, modern medical technology has advanced rapidly, and cryomedicine has also seen significant progress. Notable developments include the application of cryomedicine in assisted reproduction and the cryopreservation of sperm, eggs and embryos, as well as the preservation of skin, fingers, and other isolated tissues. However, cryopreservation of large and complex tissues or organs remains highly challenging. In addition to the damage caused by the freezing and rewarming processes and the inherent complexity of tissues and organs, there is an urgent need to address issues related to damage detection and the investigation of injury mechanisms. This paper provides a retrospective analysis of existing methods for assessing tissue and organ viability. Although current techniques can detect damage to some extent, they tend to be relatively simple, time-consuming, and limited in their ability to provide timely and comprehensive assessments of viability. By summarizing and evaluating these approaches, our study aims to contribute to the improvement of viability detection methods and to promote further development in this critical area.
Pedestrian detection has been a hot spot in computer vision over the past decades owing to a wide spectrum of promising applications, and a major challenge is the false positives that occur during detection. The emergence of various Convolutional Neural Network (CNN)-based detection strategies has substantially enhanced pedestrian detection accuracy but still does not solve this problem well. This paper analyzes the detection framework of two-stage CNN detection methods in depth and finds that false positives in the detection results arise because the training strategy misclassifies some false proposals, weakening the classification capability of the subsequent subnetwork and making false positives hard to suppress. To solve this problem, this paper proposes a pedestrian-sensitive training algorithm that helps two-stage CNN detection methods learn to distinguish pedestrian from non-pedestrian samples and suppress false positives in the final detection results. The core of the proposed algorithm is a redesigned training-proposal generation scheme for two-stage CNN detection methods, which avoids a certain number of false proposals that would otherwise mislead the training process. With the help of the proposed algorithm, the detection accuracy of MetroNext, a smaller and more accurate metro passenger detector, is further improved, which further reduces false positives in its metro passenger detection results. Experiments on various challenging benchmark datasets demonstrate that the proposed algorithm is effective in improving pedestrian detection accuracy by removing false positives. Compared with existing state-of-the-art detection networks, PSTNet demonstrates better overall performance in accuracy, total number of parameters, and inference time; it can therefore become a practical solution for detecting pedestrians on various hardware platforms, especially mobile and edge devices.
The detection of stellar flares is crucial to understanding dynamic processes at the stellar surface and their potential impact on surrounding exoplanetary systems. Extensive time series data acquired by the Transiting Exoplanet Survey Satellite (TESS) offer valuable opportunities for large-scale flare studies. A variety of methods are currently employed for flare detection, with machine learning (ML) approaches demonstrating strong potential for automated classification tasks, particularly in the analysis of astronomical time series. This review provides an overview of the methods used to detect stellar flares in TESS data and evaluates their performance and effectiveness. It includes our assessment of both traditional detection techniques and more recent methods, such as ML algorithms, highlighting their strengths and limitations. By addressing current challenges and identifying promising approaches, this manuscript aims to support further studies and promote the development of stellar flare research. Funding: National Natural Science Foundation of China (12473104 and U2031144).
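As a concrete example of the traditional detection techniques surveyed above, the sketch below flags candidate flares as short runs of points that rise several standard deviations above a rolling-median baseline; the window size, threshold, and synthetic light curve are illustrative assumptions rather than settings from any particular TESS pipeline.

```python
import numpy as np

def detect_flares(time, flux, window=25, sigma=3.0, min_points=2):
    """Flag candidate flare events as outliers above a rolling-median baseline."""
    flux = np.asarray(flux, dtype=float)
    baseline = np.array([
        np.median(flux[max(0, i - window): i + window + 1]) for i in range(len(flux))
    ])
    residual = flux - baseline
    outlier = residual > sigma * np.std(residual)

    # Keep only runs of at least `min_points` consecutive outliers to reduce single-point noise hits.
    events, start = [], None
    for i, flag in enumerate(outlier):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_points:
                events.append((time[start], time[i - 1]))
            start = None
    if start is not None and len(outlier) - start >= min_points:
        events.append((time[start], time[-1]))
    return events

# Synthetic quiet star with one injected flare, for illustration only.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
f = 1.0 + 0.001 * rng.standard_normal(t.size)
f[900:910] += 0.01 * np.exp(-np.arange(10) / 3.0)  # fast rise, exponential decay
print(detect_flares(t, f))
```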
Due to their high water content, stimulus responsiveness, and biocompatibility, hydrogels, which are functional materials with a three-dimensional network structure, are widely applied in fields such as biomedicine, environmental monitoring, and flexible electronics. This paper provides a systematic review of hydrogel characterization methods and their applications, focusing on primary evaluation techniques for physical properties (e.g., mechanical strength, swelling behavior, and pore structure), chemical properties (e.g., composition, crosslink density, and degradation behavior), biocompatibility, and functional properties (e.g., drug release, environmental stimulus response, and conductivity). It analyzes the challenges currently faced by characterization methods, such as a lack of standardization, difficulties in dynamic monitoring, an insufficient micro-macro correlation, and poor adaptability to complex environments. It proposes solutions, such as a hierarchical standardization system, in situ imaging technology, cross-scale characterization, and biomimetic testing platforms. Looking ahead, hydrogel characterization techniques will evolve toward intelligent, real-time, multimodal coupling and standardized approaches. These techniques will provide superior technical support for precision medicine, environmental restoration, and flexible electronics. They will also offer systematic methodological guidance for the performance optimization and practical application of hydrogel materials.
In recent years, there has been a concerted effort to improve anomaly detection techniques, particularly in the context of high-dimensional, distributed clinical data. Analysing patient data within clinical settings reveals a pronounced focus on refining diagnostic accuracy, personalising treatment plans, and optimising resource allocation to enhance clinical outcomes. Nonetheless, this domain faces unique challenges, such as irregular data collection, inconsistent data quality, and patient-specific structural variations. This paper proposes a novel hybrid approach that integrates heuristic and stochastic methods for anomaly detection in patient clinical data to address these challenges. The strategy combines HPO-based optimal Density-Based Spatial Clustering of Applications with Noise (DBSCAN) for clustering patient exercise data, facilitating efficient anomaly identification. Subsequently, a stochastic method based on the Interquartile Range (IQR) filters unreliable data points, ensuring that medical tools and professionals receive only the most pertinent and accurate information. The primary objective of this study is to equip healthcare professionals and researchers with a robust tool for managing extensive, high-dimensional clinical datasets, enabling effective isolation and removal of aberrant data points. Furthermore, a sophisticated regression model has been developed using Automated Machine Learning (AutoML) to assess the impact of the ensemble abnormal-pattern detection approach. Various statistical error estimation techniques validate the efficacy of the hybrid approach alongside AutoML. Experimental results show that implementing this hybrid model on patient rehabilitation data leads to a notable enhancement in AutoML performance, with an average improvement of 0.041 in the R2 score, surpassing the effectiveness of traditional regression models.
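A minimal sketch of the two filtering stages described above, using scikit-learn's DBSCAN followed by per-feature interquartile-range fences, is given below; the toy data and parameter values are illustrative assumptions, not the tuned settings or the HPO procedure reported in the study.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Toy high-dimensional patient exercise features (rows = sessions, cols = measurements).
X = np.vstack([rng.normal(0, 1, (200, 8)), rng.normal(6, 1, (10, 8))])  # last 10 rows are anomalous

# Stage 1: density-based clustering; DBSCAN labels sparse points as noise (label -1).
X_scaled = StandardScaler().fit_transform(X)
labels = DBSCAN(eps=1.5, min_samples=5).fit_predict(X_scaled)
cluster_mask = labels != -1  # keep points that belong to some dense cluster

# Stage 2: IQR-based filtering of the retained data, applied per feature.
kept = X[cluster_mask]
q1, q3 = np.percentile(kept, [25, 75], axis=0)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
iqr_mask = np.all((kept >= lower) & (kept <= upper), axis=1)

clean = kept[iqr_mask]
print(f"{X.shape[0]} sessions -> {clean.shape[0]} retained after DBSCAN + IQR filtering")
```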
Jaundice, a common condition in newborns, is characterized by yellowing of the skin and eyes due to elevated levels of bilirubin in the blood. Timely detection and management of jaundice are crucial to prevent potential complications. Traditional jaundice assessment methods rely on visual inspection or invasive blood tests, which are subjective and painful for infants, respectively. Although several automated methods for jaundice detection have been developed over the past few years, few reviews consolidating these developments have been published to date, making it essential to systematically evaluate and present the existing advancements. This paper fills this gap by providing a thorough survey of automated methods for jaundice detection in neonates. The primary focus of the survey is to review the existing methodologies, techniques, and technologies used for neonatal jaundice detection. The key findings indicate that image-based bilirubinometers and transcutaneous bilirubinometers are promising non-invasive alternatives and provide a good trade-off between accuracy and ease of use. However, their effectiveness varies with factors such as skin pigmentation, gestational age, and measurement site. Spectroscopic and biosensor-based techniques show high sensitivity but need further clinical validation. Despite these advancements, several challenges, including device calibration, large-scale validation, and regulatory barriers, still confront researchers. Standardization, regulatory compliance, and seamless integration into healthcare workflows are the key hurdles to be addressed. By consolidating current knowledge and discussing the challenges and opportunities in this field, this survey aims to contribute to the advancement of automatic jaundice detection and ultimately improve neonatal care. Funding: Indian Council of Medical Research (ICMR), New Delhi, Government of India, Grant No. EM/SG/Dev.Res/124/0812-2023.
Attacks are growing more complex and dangerous as network capabilities improve at a rapid pace. Network intrusion detection is usually regarded as an efficient means of dealing with security attacks, and many approaches have been presented, utilizing various strategies and focusing on different types of traffic. Anomaly-based network intrusion monitoring is an essential area of intrusion detection research and development, yet despite the substantial research into anomaly-based network intrusion detection algorithms, there is still a lack of comprehensive literature reviews evaluating current methodologies and datasets. We explore and evaluate the top 50 publications on anomaly-based intrusion detection through an in-depth review of the related literature. Our work thoroughly maps the technological landscape of the subject in order to support future research in this sector. The examination is carried out from several relevant angles: application areas, data preprocessing and threat detection approaches, assessment measures, and datasets. From each of these viewpoints, we identify unresolved research difficulties and underexplored research areas. Finally, we outline five promising research areas for the future.
Detecting faces under occlusion remains a significant challenge in computer vision due to variations caused by masks, sunglasses, and other obstructions. Addressing this issue is crucial for applications such as surveillance, biometric authentication, and human-computer interaction. This paper provides a comprehensive review of face detection techniques developed to handle occluded faces. Studies are categorized into four main approaches: feature-based, machine learning-based, deep learning-based, and hybrid methods. We analyzed state-of-the-art studies within each category, examining their methodologies, strengths, and limitations based on widely used benchmark datasets, highlighting their adaptability to partial and severe occlusions. The review also identifies key challenges, including dataset diversity, model generalization, and computational efficiency. Our findings reveal that deep learning methods dominate recent studies, benefiting from their ability to extract hierarchical features and handle complex occlusion patterns. More recently, researchers have increasingly explored Transformer-based architectures, such as the Vision Transformer (ViT) and Swin Transformer, to further improve detection robustness under challenging occlusion scenarios. In addition, hybrid approaches, which aim to combine traditional and modern techniques, are emerging as a promising direction for improving robustness. This review provides valuable insights for researchers aiming to develop more robust face detection systems and for practitioners seeking to deploy reliable solutions in real-world, occlusion-prone environments. Further improvements and the proposal of broader datasets are required to develop more scalable, robust, and efficient models that can handle complex occlusions in real-world scenarios. Funding: A’Sharqiyah University, Sultanate of Oman, Research Project grant number BFP/RGP/ICT/22/490.
The choice of biopsy method is critical in diagnosing prostate cancer (PCa). This retrospective cohort study compared systematic biopsy (SB) with cognitive fusion-targeted biopsy combined with SB (CB) in detecting PCa and clinically significant prostate cancer (csPCa). Data from 2572 men who underwent either SB or CB at Fudan University Shanghai Cancer Center (Shanghai, China) between January 2019 and December 2023 were analyzed. Propensity score matching (PSM) was used to balance baseline characteristics, and detection rates were compared before and after PSM. Subgroup analyses based on prostate-specific antigen (PSA) levels and Prostate Imaging-Reporting and Data System (PI-RADS) scores were performed. The primary and secondary outcomes were the detection rates of PCa and csPCa, respectively. Of the 2572 men, 1778 were included in the PSM analysis. Before PSM, CB had higher detection rates than SB for both PCa (62.9% vs 52.4%, odds ratio [OR]: 1.54, P<0.001) and csPCa (54.9% vs 43.3%, OR: 1.60, P<0.001). After PSM, CB remained superior in detecting PCa (63.1% vs 47.9%, OR: 1.86, P<0.001) and csPCa (55.0% vs 38.2%, OR: 1.98, P<0.001). In patients with PSA 4–12 ng ml−1 (defined as >4 ng ml−1 and ≤12 ng ml−1 throughout), CB detected more PCa (59.8% vs 40.7%, OR: 2.17, P<0.001) and csPCa (48.1% vs 27.7%, OR: 2.42, P<0.001). CB also showed superior csPCa detection in patients with PI-RADS 3 lesions (32.1% vs 18.0%, OR: 2.15, P=0.038). Overall, CB significantly improves PCa and csPCa detection, especially in patients with PSA 4–12 ng ml−1 or PI-RADS 3 lesions. Funding: National Natural Science Foundation of China (82373355, 82172703, 82303856, and 82473505); Discipline Leader Project of Shanghai Municipal Health Commission (2022XD013); AoXiang Project of Shanghai Anti-Cancer Association (SACA-AX202302).
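The odds ratios quoted above come from comparing detection odds between the two biopsy strategies; the sketch below shows how such a ratio is computed from 2x2 counts and how propensity scores for matching can be estimated with logistic regression. All counts and covariates are illustrative placeholders, not patient data from this cohort.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def odds_ratio(detected_a, total_a, detected_b, total_b):
    """Odds ratio for detection in group A (e.g., CB) vs. group B (e.g., SB)."""
    odds_a = detected_a / (total_a - detected_a)
    odds_b = detected_b / (total_b - detected_b)
    return odds_a / odds_b

# Illustrative counts only (not the study's data).
print(f"OR = {odds_ratio(detected_a=120, total_a=200, detected_b=90, total_b=200):.2f}")

# Propensity scores for matching: probability of receiving CB given baseline covariates,
# estimated with logistic regression on hypothetical covariates (e.g., age, PSA, PI-RADS).
rng = np.random.default_rng(0)
covariates = rng.normal(size=(1000, 3))
treatment = rng.integers(0, 2, size=1000)  # 1 = CB, 0 = SB (placeholder assignment)
propensity = LogisticRegression().fit(covariates, treatment).predict_proba(covariates)[:, 1]
print(f"first five propensity scores: {np.round(propensity[:5], 3)}")
```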
Seismic data plays a pivotal role in fault detection, offering critical insights into subsurface structures and seismic hazards. Understanding fault detection from seismic data is essential for mitigating seismic risks and guiding land-use plans. This paper presents a comprehensive review of existing methodologies for fault detection, focusing on the application of Machine Learning (ML) and Deep Learning (DL) techniques to enhance accuracy and efficiency. Various ML and DL approaches are analyzed with respect to fault segmentation, adaptive learning, and fault detection models. These techniques, benchmarked against established seismic datasets, reveal significant improvements over classical methods in terms of accuracy and computational efficiency. Additionally, this review highlights emerging trends, including hybrid model applications and the integration of real-time data processing for seismic fault detection. By providing a detailed comparative analysis of current methodologies, this review aims to guide future research and foster advancements in the effectiveness and reliability of seismic studies. Ultimately, the study seeks to bridge the gap between theoretical investigations and practical implementations in fault detection.
Modern intrusion detection systems (MIDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class intrusion detection using the KDD99 and IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with data preprocessing that incorporates both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized, model-ready inputs. Dimensionality reduction is achieved via the Harris Hawks Optimization (HHO) algorithm, a nature-inspired metaheuristic modeled on hawks’ hunting strategies. HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance. Following feature selection, SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types. A stacked architecture is then employed, combining the strengths of XGBoost, SVM, and Random Forest as base learners. This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers. The model was evaluated using standard classification metrics: precision, recall, F1-score, and overall accuracy. The best overall performance was an accuracy of 99.44% on UNSW-NB15, demonstrating the model's effectiveness. After balancing, the model showed a clear improvement in detecting attacks. We tested the model on four datasets to show the effectiveness of the proposed approach and performed an ablation study to check the effect of each parameter; the proposed model is also computationally efficient. To support transparency and trust in decision-making, explainable AI (XAI) techniques are incorporated that provide both global and local insight into feature contributions and offer intuitive visualizations for individual predictions. This makes the framework suitable for practical deployment in cybersecurity environments that demand both precision and accountability. Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R104), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
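A compressed sketch of the preprocessing, oversampling, and stacked-ensemble stages described above is shown below using scikit-learn, imbalanced-learn, and XGBoost; the HHO feature-selection step is omitted, the logistic meta-learner and all hyperparameters are illustrative assumptions, and the synthetic data stands in for the intrusion datasets.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import QuantileTransformer, RobustScaler
from sklearn.svm import SVC
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier

# Synthetic imbalanced multi-class stand-in for an intrusion-detection dataset.
X, y = make_classification(n_samples=4000, n_features=20, n_informative=12,
                           n_classes=3, weights=[0.8, 0.15, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.3, random_state=0)

# Preprocessing: RobustScaler damps outliers, QuantileTransformer reshapes skewed features.
prep = make_pipeline(RobustScaler(), QuantileTransformer(output_distribution="normal", random_state=0))
X_tr_p, X_te_p = prep.fit_transform(X_tr), prep.transform(X_te)

# SMOTE is applied to the training split only, to rebalance minority attack classes.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr_p, y_tr)

# Stacked ensemble: XGBoost, SVM and Random Forest base learners with a logistic meta-learner.
stack = StackingClassifier(
    estimators=[("xgb", XGBClassifier(n_estimators=200, eval_metric="mlogloss")),
                ("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier(n_estimators=200))],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_bal, y_bal)
print(classification_report(y_te, stack.predict(X_te_p)))
```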
Software systems are vulnerable to security breaches as they expand in complexity and functionality. The confidentiality, integrity, and availability of data are gravely threatened by flaws in a system's design, implementation, or configuration. To guarantee the durability and robustness of software, vulnerability identification and fixation have become crucial areas of focus for developers, cybersecurity experts, and industry. This paper presents a thorough multi-phase mathematical model for efficient patch management and vulnerability detection. To model these processes, the framework incorporates the notion of a learning phenomenon, describing vulnerability fixation with a logistic learning function. Furthermore, the authors use numerical methods to approximate the solution of the proposed framework where an analytical solution is difficult to attain. The suggested architecture is demonstrated through statistical analysis of patch datasets, which offers a solid basis for the research conclusions. According to the computational results, learning dynamics improve security response and lead to more effective vulnerability management. The suggested model offers a systematic approach to proactive vulnerability mitigation and has important uses in risk assessment, software maintenance, and cybersecurity. This study helps create more robust software systems by increasing patch management effectiveness, benefiting developers, cybersecurity experts, and sectors looking to reduce security threats in a growing digital world. Funding: Institute of Eminence, Delhi University, Delhi, India, Faculty Research Program, Ref. No./IoE/2024-25/12/FRP (grants received by the first and third authors).
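To make the learning-driven fixation dynamics concrete, the sketch below numerically integrates a simple patch-fixation equation in which the per-vulnerability fix rate follows a logistic learning curve; the functional form, parameter values, and Euler step are illustrative assumptions, not the exact formulation of the proposed framework.

```python
import numpy as np

def logistic_rate(t, b_max=0.25, k=1.2, t_mid=6.0):
    """Fix rate per remaining vulnerability, rising along a logistic learning curve."""
    return b_max / (1.0 + np.exp(-k * (t - t_mid)))

def fixed_vulnerabilities(total=100.0, t_end=30.0, dt=0.01):
    """Euler integration of dF/dt = b(t) * (total - F), with F(0) = 0."""
    steps = int(t_end / dt)
    t, fixed = 0.0, 0.0
    trajectory = []
    for _ in range(steps):
        fixed += logistic_rate(t) * (total - fixed) * dt
        t += dt
        trajectory.append((t, fixed))
    return trajectory

traj = fixed_vulnerabilities()
for week in (5, 10, 20, 30):
    t, f = traj[int(week / 0.01) - 1]
    print(f"t = {t:4.1f}: {f:5.1f} vulnerabilities fixed")
```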
Precipitation events, which follow a life cycle of initiation, development, and decay, represent the fundamental form of precipitation. Comprehensive and accurate detection of these events is crucial for effective water resource management and flood control. However, current investigations of their spatio-temporal patterns remain limited, largely because of the lack of systematic detection indices specifically designed for precipitation events, which constrains event-scale research. In this study, we defined a set of precipitation event detection indices (PEDI) consisting of five conventional and fourteen extreme indices that characterize precipitation events in terms of intensity, duration, and frequency. Applications of the PEDI revealed the spatial patterns of hourly precipitation events in China and its first- and second-order river basins from 2008 to 2017. Both conventional and extreme precipitation events displayed spatial distribution patterns in which intensity, duration, and frequency gradually decreased from southeast to northwest China. Compared with northwest China, the average values of most PEDIs in southeast China were usually 2-10 times greater for first-order river basins and 3-15 times greater for second-order basins. The PEDI could serve as a reference method for investigating precipitation events at global, regional, and basin scales. Funding: National Key Research and Development Program of China (2023YFC3206605 and 2021YFC3201102); National Natural Science Foundation of China (41971035).
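As a minimal illustration of how events can be delimited in an hourly series before indices are computed, the sketch below groups consecutive wet hours into events and derives duration, total depth, and mean intensity for each; the wet-hour threshold and sample series are assumptions for demonstration and do not reproduce the PEDI definitions.

```python
import numpy as np

def detect_events(hourly_precip, wet_threshold=0.1):
    """Split an hourly precipitation series (mm/h) into events of consecutive wet hours."""
    wet = np.asarray(hourly_precip) >= wet_threshold
    events, start = [], None
    for i, is_wet in enumerate(wet):
        if is_wet and start is None:
            start = i
        elif not is_wet and start is not None:
            events.append((start, i))  # event spans hours [start, i)
            start = None
    if start is not None:
        events.append((start, len(wet)))
    return events

def event_indices(hourly_precip, events):
    """Duration (h), total depth (mm) and mean intensity (mm/h) for each event."""
    p = np.asarray(hourly_precip, dtype=float)
    return [
        {"duration_h": e - s, "total_mm": p[s:e].sum(), "intensity_mm_per_h": p[s:e].mean()}
        for s, e in events
    ]

# Illustrative hourly record (mm/h).
series = [0, 0, 0.5, 1.2, 2.0, 0, 0, 0, 3.5, 4.1, 1.0, 0.2, 0, 0]
evts = detect_events(series)
print(event_indices(series, evts))
```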
Traffic sign detection is an important part of autonomous driving, and its recognition accuracy and speed are directly related to road traffic safety. Although convolutional neural networks (CNNs) have made certain breakthroughs in this field, in complex scenes, such as image blur and target occlusion, traffic sign detection continues to exhibit limited accuracy, accompanied by false positives and missed detections. To address these problems, a traffic sign detection algorithm, You Only Look Once-based Skip Dynamic Way (YOLO-SDW), built on You Only Look Once version 8 small (YOLOv8s), is proposed. Firstly, a Skip Connection Reconstruction (SCR) module is introduced to efficiently integrate fine-grained feature information and enhance detection accuracy in complex scenes. Secondly, a C2f module based on Dynamic Snake Convolution (C2f-DySnake) is proposed to dynamically adjust the receptive field, improve the algorithm's feature extraction for blurred or occluded targets, and reduce false and missed detections. Finally, the Wise Powerful IoU v2 (WPIoUv2) loss function is proposed to further improve detection accuracy. Experimental results show that the average precision mAP@0.5 of YOLO-SDW on the TT100K dataset is 89.2% and mAP@0.5:0.95 is 68.5%, which are 4% and 3.3% higher than the YOLOv8s baseline, respectively. YOLO-SDW thus ensures real-time performance while achieving higher accuracy. Funding: Key Research and Development Program of Henan Province (251111211200); National Natural Science Foundation of China (U2004163).
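Because the reported mAP@0.5 and mAP@0.5:0.95 scores rest on the intersection-over-union between predicted and ground-truth boxes, a small generic IoU computation is sketched below for reference; it is not the WPIoUv2 loss proposed in the paper.

```python
def box_iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# A prediction counts toward mAP@0.5 when its IoU with a ground-truth box is at least 0.5;
# mAP@0.5:0.95 averages precision over IoU thresholds from 0.5 to 0.95 in steps of 0.05.
pred, gt = (10, 10, 60, 60), (20, 15, 70, 65)
print(f"IoU = {box_iou(pred, gt):.3f}")
```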
In printed circuit board (PCB) manufacturing, surface defects can significantly affect product quality. To address the performance degradation, high false detection rates, and missed detections caused by complex backgrounds in current intelligent inspection algorithms, this paper proposes CG-YOLOv8, a lightweight and improved model based on YOLOv8n for PCB surface defect detection. The proposed method optimizes the network architecture and compresses parameters to reduce model complexity while maintaining high detection accuracy, thereby enhancing the capability of identifying diverse defects under complex conditions. Specifically, a cascaded multi-receptive field (CMRF) module is adopted to replace the SPPF module in the backbone to improve feature perception, and an inverted residual mobile block (IRMB) is integrated into the C2f module to further enhance performance. Additionally, conventional convolution layers are replaced with GSConv to reduce computational cost, and a lightweight Convolutional Block Attention Module based Convolution (CBAMConv) module is introduced after Grouped Spatial Convolution (GSConv) to preserve accuracy through attention mechanisms. The detection head is also optimized by removing the medium- and large-scale detection layers, thereby enhancing the model's ability to detect small-scale defects and further reducing complexity. Experimental results show that, compared to the original YOLOv8n, the proposed CG-YOLOv8 reduces the parameter count by 53.9%, improves mAP@0.5 by 2.2%, and increases precision and recall by 2.0% and 1.8%, respectively. These improvements demonstrate that CG-YOLOv8 offers an efficient and lightweight solution for PCB surface defect detection.
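For reference, a minimal PyTorch sketch of the Convolutional Block Attention Module (CBAM) that the CBAMConv layer builds on is shown below: channel attention from pooled descriptors passed through a shared MLP, followed by spatial attention from a 7x7 convolution. The reduction ratio and kernel size follow common CBAM defaults and are not taken from this paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Shared MLP applied to both average- and max-pooled channel descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # Channel-wise average and max maps are concatenated and convolved into one spatial mask.
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        x = x * self.ca(x)     # reweight channels first
        return x * self.sa(x)  # then reweight spatial locations

# Example: attach CBAM to a feature map with 64 channels.
feat = torch.randn(1, 64, 40, 40)
print(CBAM(64)(feat).shape)  # torch.Size([1, 64, 40, 40])
```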
文摘In the aerospace field, residual stress directly affects the strength, fatigue life and dimensional stability of thin-walled structural components, and is a key factor to ensure flight safety and reliability. At present, research on residual stress at home and abroad mainly focuses on the optimization of traditional detection technology, stress control of manufacturing process and service performance evaluation, among which research on residual stress detection methods mainly focuses on the improvement of the accuracy, sensitivity, reliability and other performance of existing detection methods, but it still faces many challenges such as extremely small detection range, low efficiency, large error and limited application range.
基金financially supported by National Natural Science Foundation of China(32460627 and 32272359)the Special Research Fund of Natural Science(Special Post)of Guizhou University(2022)54。
文摘Legume foods are not only trendy but also rich in nutrients and offer unique health benefits.Nevertheless,allergies to soy and other legumes have emerged as critical issues in food safety,presenting significant challenges to the food processing industry and impacting consumer health.The complexity of legume allergens,coupled with inadequate allergen identification methods and the absence of robust detection and evaluation systems,complicates the management of these allergens.Here,we provide a comprehensive and critical review,mentioning various aspects related to legume allergies,including the types of legume allergens,the mechanisms behind these allergies,and the immunoglobulin E(Ig E)-binding epitopes involved,summarizing and discussing the detection techniques and the impact of different processing techniques on sensitization to legume proteins.Furthermore,this paper provides an overview of research advances in diagnostic and therapeutic strategies for legume allergens and discusses current challenges and prospects for studying legume allergens.
文摘Deep learning-based object detection has revolutionized various fields,including agriculture.This paper presents a systematic review based on the PRISMA 2020 approach for object detection techniques in agriculture by exploring the evolution of different methods and applications over the past three years,highlighting the shift from conventional computer vision to deep learning-based methodologies owing to their enhanced efficacy in real time.The review emphasizes the integration of advanced models,such as You Only Look Once(YOLO)v9,v10,EfficientDet,Transformer-based models,and hybrid frameworks that improve the precision,accuracy,and scalability for crop monitoring and disease detection.The review also highlights benchmark datasets and evaluation metrics.It addresses limitations,like domain adaptation challenges,dataset heterogeneity,and occlusion,while offering insights into prospective research avenues,such as multimodal learning,explainable AI,and federated learning.Furthermore,the main aim of this paper is to serve as a thorough resource guide for scientists,researchers,and stakeholders for implementing deep learning-based object detection methods for the development of intelligent,robust,and sustainable agricultural systems.
基金supported by Science and Technology Project of State Grid Corporation of China(52094024003D).
文摘As modern power systems grow in complexity,accurate and efficient fault detection has become increasingly important.While many existing reviews focus on a single modality,this paper presents a comprehensive survey from a dual-modality perspective-infrared imaging and voiceprint analysis-two complementary,non-contact techniques that capture different fault characteristics.Infrared imaging excels at detecting thermal anomalies,while voiceprint signals provide insight into mechanical vibrations and internal discharge phenomena.We review both traditional signal processing and deep learning-based approaches for each modality,categorized by key processing stages such as feature extraction and classification.The paper highlights how these modalities address distinct fault types and how they may be fused to improve robustness and accuracy.Representative datasets are summarized,and practical challenges such as noise interference,limited fault samples,and deployment constraints are discussed.By offering a cross-modal,comparative analysis,this work aims to bridge fragmented research and guide future development in intelligent fault detection systems.The review concludes with research trends including multimodal fusion,lightweight models,and self-supervised learning.
文摘Purpose–For the commonly used concrete mix for railway tunnel linings,concrete model specimens were made,and springback and core drilling tests were conducted at different ages.The springback strength was measured to the compressive strength of the core sample with a diameter of 100mm and a height-to-diameter ratio of 1:1.By comparing the measured strength values,the relationship between the measured values under different strength measurement methods was analyzed.Design/methodology/approach–A comparative test of the core drilling method and the rebound method was conducted on the side walls of tunnel linings in some under-construction railways to study the feasibility of the rebound method in engineering quality supervision and inspection.Findings–Tests showed that the rebound strength was positively correlated with the core drill strength.The core drill test strength was significantly higher than the rebound test strength,and the strength still increased after 56 days of age.The rebound method is suitable for the general survey of concrete strength during the construction process and is not suitable for direct supervision and inspection.Originality/value–By studying the correlation of test strength of tunnel lining concrete using two methods,the differences in test results of different methods are proposed to provide a reference for the test and evaluation of tunnel lining strength in railway engineering.
文摘Since the beginning of the 21st century,modern medical technology has advanced rapidly,and the cryomedicine has also seen significant progress.Notable developments include the application of cryomedicine in assisted reproduction and the cryopreservation of sperm,eggs and embryos,as well as the preservation of skin,fingers,and other isolated tissues.However,cryopreservation of large and complex tissues or organs remains highly challenging.In addition to the damage caused by the freezing and rewarming processes and the inherent complexity of tissues and organs,there is an urgent need to address issues related to damage detection and the investigation of injury mechanisms.It provides a retrospective analysis of existing methods for assessing tissue and organ viability.Although current techniques can detect damage to some extent,they tend to be relatively simple,time-consuming,and limited in their ability to provide timely and comprehensive assessments of viability.By summarizing and evaluating these approaches,our study aims to contribute to the improvement of viability detection methods and to promote further development in this critical area.
文摘Pedestrian detection has been a hot spot in computer vision over the past decades due to the wide spectrum of promising applications,and the major challenge is false positives that occur during pedestrian detection.The emergence of various Convolutional Neural Network-based detection strategies substantially enhances pedestrian detection accuracy but still does not solve this problem well.This paper deeply analyzes the detection framework of the two-stage CNN detection methods and finds out false positives in detection results are due to its training strategy misclassifying some false proposals,thus weakening the classification capability of the following subnetwork and hardly suppressing false ones.To solve this problem,this paper proposes a pedestrian-sensitive training algorithm to help two-stage CNN detection methods effectively learn to distinguish the pedestrian and non-pedestrian samples and suppress the false positives in the final detection results.The core of the proposed algorithm is to redesign the training proposal generating scheme for the two-stage CNN detection methods,which can avoid a certain number of false ones that mislead its training process.With the help of the proposed algorithm,the detection accuracy of the MetroNext,a smaller and more accurate metro passenger detector,is further improved,which further decreases false ones in its metro passenger detection results.Based on various challenging benchmark datasets,experiment results have demonstrated that the feasibility of the proposed algorithm is effective in improving pedestrian detection accuracy by removing false positives.Compared with the existing state-of-the-art detection networks,PSTNet demonstrates better overall prediction performance in accuracy,total number of parameters,and inference time;thus,it can become a practical solution for hunting pedestrians on various hardware platforms,especially for mobile and edge devices.
基金supported by the National Natural Science Foundation of China(12473104 and U2031144).
文摘The detection of stellar flares is crucial to understanding dynamic processes at the stellar surface and their potential impact on surrounding exoplanetary systems.Extensive time series data acquired by the Transiting Exoplanet Survey Satellite(TESS)offer valuable opportunities for large-scale flare studies.A variety of methods is currently employed for flare detection,with machine learning(ML)approaches demonstrating strong potential for automated classification tasks,particularly for the analysis of astronomical time series.This review provides an overview of the methods used to detect stellar flares in TESS data and evaluates their performance and effectiveness.It includes our assessment of both traditional detection techniques and more recent methods,such as ML algorithms,highlighting their strengths and limitations.By addressing current challenges and identifying promising approaches,this manuscript aims to support further studies and promote the development of stellar flare research.
文摘Due to their high water content,stimulus responsiveness,and biocompatibility,hydrogels,which are functional materials with a three-dimensional network structure,are widely applied in fields such as biomedicine,environmental monitoring,and flexible electronics.This paper provides a systematic review of hydrogel charac-terization methods and their applications,focusing on primary evaluation techniques for physical properties(e.g.,mechanical strength,swelling behavior,and pore structure),chemical properties(e.g.,composition,crosslink density,and degradation behavior),biocompatibility,and functional properties(e.g.,drug release,environmental stimulus response,and conductivity).It analyzes the challenges currently faced by characterization methods,such as a lack of standardization,difficulties in dynamic monitoring,an insufficient micro-macro correlation,and poor adaptability to complex environments.It proposes solutions,such as a hierarchical standardization system,in situ imaging technology,cross-scale characterization,and biomimetic testing platforms.Looking ahead,hydrogel characterization techniques will evolve toward intelligent,real-time,multimodal coupling and standardized approaches.These techniques will provide superior technical support for precision medicine,environmental restoration,and flexible electronics.They will also offer systematic methodological guidance for the performance optimization and practical application of hydrogel materials.
文摘In recent years,there has been a concerted effort to improve anomaly detection tech-niques,particularly in the context of high-dimensional,distributed clinical data.Analysing patient data within clinical settings reveals a pronounced focus on refining diagnostic accuracy,personalising treatment plans,and optimising resource allocation to enhance clinical outcomes.Nonetheless,this domain faces unique challenges,such as irregular data collection,inconsistent data quality,and patient-specific structural variations.This paper proposed a novel hybrid approach that integrates heuristic and stochastic methods for anomaly detection in patient clinical data to address these challenges.The strategy combines HPO-based optimal Density-Based Spatial Clustering of Applications with Noise for clustering patient exercise data,facilitating efficient anomaly identification.Subsequently,a stochastic method based on the Interquartile Range filters unreliable data points,ensuring that medical tools and professionals receive only the most pertinent and accurate information.The primary objective of this study is to equip healthcare pro-fessionals and researchers with a robust tool for managing extensive,high-dimensional clinical datasets,enabling effective isolation and removal of aberrant data points.Furthermore,a sophisticated regression model has been developed using Automated Machine Learning(AutoML)to assess the impact of the ensemble abnormal pattern detection approach.Various statistical error estimation techniques validate the efficacy of the hybrid approach alongside AutoML.Experimental results show that implementing this innovative hybrid model on patient rehabilitation data leads to a notable enhance-ment in AutoML performance,with an average improvement of 0.041 in the R2 score,surpassing the effectiveness of traditional regression models.
基金funded by the Indian Council of Medical Research(ICMR),New Delhi,Government of India under Grant No.EM/SG/Dev.Res/124/0812-2023.
文摘Jaundice,common condition in newborns,is characterized by yellowing of the skin and eyes due to elevated levels of bilirubin in the blood.Timely detection and management of jaundice are crucial to prevent potential complications.Traditional jaundice assessment methods rely on visual inspection or invasive blood tests that are subjective and painful for infants,respectively.Although several automated methods for jaundice detection have been developed during the past few years,a limited number of reviews consolidating these developments have been presented till date,making it essential to systematically evaluate and present the existing advancements.This paper fills this gap by providing a thorough survey of automated methods for jaundice detection in neonates.The primary focus of the survey is to review the existing methodologies,techniques,and technologies used for neonatal jaundice detection.The key findings from the review indicate that image-based bilirubinometers and transcutaneous bilirubinometers are promising non-invasive alternatives,and provide a good trade-off between accuracy and ease of use.However,their effectiveness varies with factors like skin pigmentation,gestational age,and measurement site.Spectroscopic and biosensor-based techniques show high sensitivity but need further clinical validation.Despite advancements,several challenges including device calibration,large-scale validation,and regulatory barriers still haunt the researchers.Standardization,regulatory compliances,and seamless integration into healthcare workflows are the key hurdles to be addressed.By consolidating the current knowledge and discussing the challenges and opportunities in this field,this survey aims to contribute to the advancement of automatic jaundice detection and ultimately improve neonatal care.
文摘Attacks are growing more complex and dangerous as network capabilities improve at a rapid pace.Network intrusion detection is usually regarded as an efficient means of dealing with security attacks.Many ways have been presented,utilizing various strategies and focusing on different types of visitors.Anomaly-based network intrusion monitoring is an essential area of intrusion detection investigation and development.Despite extensive research on anomaly-based network detection,there is still a lack of comprehensive literature reviews covering current methodologies and datasets.Despite the substantial research into anomaly-based network intrusion detection algorithms,there is a dearth of a research evaluation of new methodologies and datasets.We explore and evaluate 50 highest publications on anomaly-based intrusion detection using an in-depth review of related literature techniques.Our work thoroughly explores the technological environment of the subject in order to help future research in this sector.Our examination is carried out from the relevant angles:application areas,data preprocessing and threat detection approaches,assessment measures,and datasets.We select unresolved research difficulties and underexplored research areas from every viewpoint recommendation of the study.Finally,we outline five potentially increased research areas for the future.
基金funded by A’Sharqiyah University,Sultanate of Oman,under Research Project grant number(BFP/RGP/ICT/22/490).
文摘Detecting faces under occlusion remains a significant challenge in computer vision due to variations caused by masks,sunglasses,and other obstructions.Addressing this issue is crucial for applications such as surveillance,biometric authentication,and human-computer interaction.This paper provides a comprehensive review of face detection techniques developed to handle occluded faces.Studies are categorized into four main approaches:feature-based,machine learning-based,deep learning-based,and hybrid methods.We analyzed state-of-the-art studies within each category,examining their methodologies,strengths,and limitations based on widely used benchmark datasets,highlighting their adaptability to partial and severe occlusions.The review also identifies key challenges,including dataset diversity,model generalization,and computational efficiency.Our findings reveal that deep learning methods dominate recent studies,benefiting from their ability to extract hierarchical features and handle complex occlusion patterns.More recently,researchers have increasingly explored Transformer-based architectures,such as Vision Transformer(ViT)and Swin Transformer,to further improve detection robustness under challenging occlusion scenarios.In addition,hybrid approaches,which aim to combine traditional andmodern techniques,are emerging as a promising direction for improving robustness.This review provides valuable insights for researchers aiming to develop more robust face detection systems and for practitioners seeking to deploy reliable solutions in real-world,occlusionprone environments.Further improvements and the proposal of broader datasets are required to developmore scalable,robust,and efficient models that can handle complex occlusions in real-world scenarios.
基金supported financially by the National Nature Science Foundation of China(No.82373355,No.82172703,No.82303856,and No.82473505)the Discipline Leader Project of Shanghai Municipal Health Commission(No.2022XD013)the AoXiang Project of Shanghai Anti-Cancer Association(No.SACA-AX202302).
文摘The choice of biopsy method is critical in diagnosing prostate cancer(PCa).This retrospective cohort study compared systematic biopsy(SB)or cognitive fusion-targeted biopsy combined with SB(CB)in detecting PCa and clinically significant prostate cancer(csPCa).Data from 2572 men who underwent either SB or CB in Fudan University Shanghai Cancer Center(Shanghai,China)between January 2019 and December 2023 were analyzed.Propensity score matching(PSM)was used to balance baseline characteristics,and detection rates were compared before and after PSM.Subgroup analyses based on prostate-specific antigen(PSA)levels and Prostate Imaging-Reporting and Data System(PI-RADS)scores were performed.Primary and secondary outcomes were the detection rates of PCa and csPCa,respectively.Of 2572 men,1778 were included in the PSM analysis.Before PSM,CB had higher detection rates for both PCa(62.9%vs 52.4%,odds ratio[OR]:1.54,P<0.001)and csPCa(54.9%vs 43.3%,OR:1.60,P<0.001)compared to SB.After PSM,CB remained superior in detecting PCa(63.1%vs 47.9%,OR:1.86,P<0.001)and csPCa(55.0%vs 38.2%,OR:1.98,P<0.001).In patients with PSA 4–12 ng ml−1(>4 ng ml-1 and≤12 ng ml-1,which is also applicable to the following text),CB detected more PCa(59.8%vs 40.7%,OR:2.17,P<0.001)and csPCa(48.1%vs 27.7%,OR:2.42,P<0.001).CB also showed superior csPCa detection in those with PI-RADS 3 lesions(32.1%vs 18.0%,OR:2.15,P=0.038).Overall,CB significantly improves PCa and csPCa detection,especially in patients with PSA 4–12 ng ml−1 or PI-RADS 3 lesions.
文摘Seismic data plays a pivotal role in fault detection,offering critical insights into subsurface structures and seismic hazards.Understanding fault detection from seismic data is essential for mitigating seismic risks and guiding land-use plans.This paper presents a comprehensive review of existing methodologies for fault detection,focusing on the application of Machine Learning(ML)and Deep Learning(DL)techniques to enhance accuracy and efficiency.Various ML and DL approaches are analyzed with respect to fault segmentation,adaptive learning,and fault detection models.These techniques,benchmarked against established seismic datasets,reveal significant improvements over classical methods in terms of accuracy and computational efficiency.Additionally,this review highlights emerging trends,including hybrid model applications and the integration of real-time data processing for seismic fault detection.By providing a detailed comparative analysis of current methodologies,this review aims to guide future research and foster advancements in the effectiveness and reliability of seismic studies.Ultimately,the study seeks to bridge the gap between theoretical investigations and practical implementations in fault detection.
基金funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2025R104)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Modern intrusion detection systems(MIDS)face persistent challenges in coping with the rapid evolution of cyber threats,high-volume network traffic,and imbalanced datasets.Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively.This study introduces an advanced,explainable machine learning framework for multi-class IDS using the KDD99 and IDS datasets,which reflects real-world network behavior through a blend of normal and diverse attack classes.The methodology begins with sophisticated data preprocessing,incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions,ensuring standardized and model-ready inputs.Critical dimensionality reduction is achieved via the Harris Hawks Optimization(HHO)algorithm—a nature-inspired metaheuristic modeled on hawks’hunting strategies.HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance.Following feature selection,the SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types.The stacked architecture is then employed,combining the strengths of XGBoost,SVM,and RF as base learners.This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers.The model was evaluated using standard classification metrics:precision,recall,F1-score,and overall accuracy.The best overall performance was recorded with an accuracy of 99.44%for UNSW-NB15,demonstrating the model’s effectiveness.After balancing,the model demonstrated a clear improvement in detecting the attacks.We tested the model on four datasets to show the effectiveness of the proposed approach and performed the ablation study to check the effect of each parameter.Also,the proposed model is computationaly efficient.To support transparency and trust in decision-making,explainable AI(XAI)techniques are incorporated that provides both global and local insight into feature contributions,and offers intuitive visualizations for individual predictions.This makes it suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
Funding: Supported by grants received by the first and third authors from the Institute of Eminence, Delhi University, Delhi, India, as part of the Faculty Research Program via Ref. No./IoE/2024-25/12/FRP.
Abstract: Software systems become more vulnerable to security breaches as they expand in complexity and functionality. The confidentiality, integrity, and availability of data are gravely threatened by flaws in a system's design, implementation, or configuration. To guarantee the durability and robustness of software, vulnerability identification and fixation have become crucial areas of focus for developers, cybersecurity experts, and industries. This paper presents a thorough multi-phase mathematical model for efficient patch management and vulnerability detection. To model these processes distinctively, the framework incorporates the notion of a learning phenomenon, describing vulnerability fixation with a logistic learning function. Furthermore, the authors use numerical methods to approximate the solution of the proposed framework where an analytical solution is difficult to attain. The suggested systematic architecture is demonstrated through statistical analysis using patch datasets, which offers a solid basis for the research conclusions. The computational results show that learning dynamics improve the security response and lead to more effective vulnerability management. The suggested model offers a systematic approach to proactive vulnerability mitigation and has important uses in risk assessment, software maintenance, and cybersecurity. This study helps create more robust software systems by increasing patch management effectiveness, benefiting developers, cybersecurity experts, and sectors looking to reduce security threats in a growing digital world.
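The fixation dynamics can be illustrated with a small numerical experiment. The sketch below integrates a rate equation in which the fix rate follows a logistic learning curve, using SciPy's ODE solver; the functional form and parameter values are illustrative assumptions rather than the exact equations of the proposed framework.

```python
# Numerical sketch of a fixation model with a logistic learning rate (SciPy).
# Functional form and parameters are illustrative assumptions, not the paper's equations.
import numpy as np
from scipy.integrate import solve_ivp

N = 500.0                            # total vulnerabilities eventually fixable
b_max, k, t_mid = 0.15, 0.8, 10.0    # learning-rate ceiling, steepness, midpoint

def learning_rate(t):
    # Logistic "learning" curve: the fix rate rises as team experience accumulates.
    return b_max / (1.0 + np.exp(-k * (t - t_mid)))

def fixation(t, F):
    # dF/dt = b(t) * (N - F): remaining vulnerabilities are fixed at the learned rate.
    return learning_rate(t) * (N - F[0])

sol = solve_ivp(fixation, t_span=(0, 60), y0=[0.0], t_eval=np.linspace(0, 60, 7))
for t, f in zip(sol.t, sol.y[0]):
    print(f"week {t:4.0f}: ~{f:6.1f} vulnerabilities fixed")
```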
Funding: National Key Research and Development Program of China, No. 2023YFC3206605 and No. 2021YFC3201102; National Natural Science Foundation of China, No. 41971035.
Abstract: Precipitation events, which follow a life cycle of initiation, development, and decay, represent the fundamental form of precipitation. Comprehensive and accurate detection of these events is crucial for effective water resource management and flood control. However, current investigations of their spatio-temporal patterns remain limited, largely because of the lack of systematic detection indices specifically designed for precipitation events, which constrains event-scale research. In this study, we defined a set of precipitation event detection indices (PEDI) that consists of five conventional and fourteen extreme indices to characterize precipitation events from the perspectives of intensity, duration, and frequency. Applications of the PEDI revealed the spatial patterns of hourly precipitation events in China and its first- and second-order river basins from 2008 to 2017. Both conventional and extreme precipitation events displayed spatial distribution patterns in which intensity, duration, and frequency gradually decreased from southeast to northwest China. Compared with those in northwest China, the average values of most PEDIs in southeast China were usually 2-10 times greater for first-order river basins and 3-15 times greater for second-order basins. The PEDI could serve as a reference method for investigating precipitation events at global, regional, and basin scales.
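A minimal example of event-scale index computation is sketched below: an hourly series is split into events (runs of consecutive wet hours), from which per-event duration, total depth, frequency, and mean intensity follow directly. The 0.1 mm wet-hour threshold and the one-dry-hour gap rule are illustrative assumptions, not the PEDI definitions themselves.

```python
# Minimal sketch of event-scale indices from an hourly precipitation series (NumPy).
# Threshold and gap rule are illustrative assumptions, not the paper's PEDI definitions.
import numpy as np

def detect_events(hourly_mm, wet_threshold=0.1):
    """Split an hourly series into events: runs of consecutive wet hours
    (>= wet_threshold mm) separated by at least one dry hour."""
    wet = hourly_mm >= wet_threshold
    events, start = [], None
    for i, w in enumerate(wet):
        if w and start is None:
            start = i
        elif not w and start is not None:
            events.append((start, i))
            start = None
    if start is not None:
        events.append((start, len(hourly_mm)))
    return events

# Synthetic one-month hourly series standing in for gauge or merged-product data.
rng = np.random.default_rng(0)
series = np.where(rng.random(24 * 30) < 0.1, rng.gamma(2.0, 1.5, 24 * 30), 0.0)

events = detect_events(series)
durations = np.array([e - s for s, e in events])            # hours per event
totals = np.array([series[s:e].sum() for s, e in events])   # mm per event
print(f"frequency: {len(events)} events/month")
print(f"mean duration: {durations.mean():.1f} h, "
      f"mean intensity: {(totals / durations).mean():.2f} mm/h")
```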
Funding: Funded by the Key Research and Development Program of Henan Province (No. 251111211200) and the National Natural Science Foundation of China (Grant No. U2004163).
Abstract: Traffic sign detection is an important part of autonomous driving, and its recognition accuracy and speed are directly related to road traffic safety. Although convolutional neural networks (CNNs) have made certain breakthroughs in this field, in complex scenes such as image blur and target occlusion, traffic sign detection continues to exhibit limited accuracy, accompanied by false positives and missed detections. To address these problems, a traffic sign detection algorithm, You Only Look Once-based Skip Dynamic Way (YOLO-SDW), built on You Only Look Once version 8 small (YOLOv8s), is proposed. Firstly, a Skip Connection Reconstruction (SCR) module is introduced to efficiently integrate fine-grained feature information and enhance detection accuracy in complex scenes. Secondly, a C2f module based on Dynamic Snake Convolution (C2f-DySnake) is proposed to dynamically adjust the receptive field, improve the algorithm's feature extraction for blurred or occluded targets, and reduce false and missed detections. Finally, the Wise Powerful IoU v2 (WPIoUv2) loss function is proposed to further improve detection accuracy. Experimental results show that YOLO-SDW achieves an mAP@0.5 of 89.2% and an mAP@0.5:0.95 of 68.5% on the TT100K dataset, which are 4% and 3.3% higher than the YOLOv8s baseline, respectively. YOLO-SDW ensures real-time performance while delivering higher accuracy.
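Since two of the three improvements concern feature fusion and the bounding-box regression loss, the background sketch below computes a plain IoU loss for axis-aligned boxes in PyTorch. The WPIoUv2 loss builds on this quantity with additional weighting and focusing terms that are not reproduced here.

```python
# Background sketch (PyTorch): plain IoU loss for boxes in (x1, y1, x2, y2) format.
# WPIoUv2 adds weighting and focusing terms on top of this quantity (not shown).
import torch

def iou_loss(pred, target, eps=1e-7):
    """Return 1 - IoU for batches of boxes; both tensors have shape (N, 4)."""
    # Intersection rectangle.
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

    # Union = area(pred) + area(target) - intersection.
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    return 1.0 - inter / (union + eps)

pred = torch.tensor([[10., 10., 50., 50.]])
target = torch.tensor([[20., 20., 60., 60.]])
print(iou_loss(pred, target))   # higher value means worse overlap
```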
Funding: Funded by the Joint Funds of the National Natural Science Foundation of China (U2341223) and the Beijing Municipal Natural Science Foundation (No. 4232067).
Abstract: In printed circuit board (PCB) manufacturing, surface defects can significantly affect product quality. To address the performance degradation, high false detection rates, and missed detections caused by complex backgrounds in current intelligent inspection algorithms, this paper proposes CG-YOLOv8, a lightweight and improved model based on YOLOv8n for PCB surface defect detection. The proposed method optimizes the network architecture and compresses parameters to reduce model complexity while maintaining high detection accuracy, thereby enhancing the capability to identify diverse defects under complex conditions. Specifically, a cascaded multi-receptive field (CMRF) module replaces the SPPF module in the backbone to improve feature perception, and an inverted residual mobile block (IRMB) is integrated into the C2f module to further enhance performance. Additionally, conventional convolution layers are replaced with Grouped Spatial Convolution (GSConv) to reduce computational cost, and a lightweight Convolutional Block Attention Module based Convolution (CBAMConv) module is introduced after GSConv to preserve accuracy through attention mechanisms. The detection head is also optimized by removing the medium- and large-scale detection layers, enhancing the model's ability to detect small-scale defects and further reducing complexity. Experimental results show that, compared to the original YOLOv8n, the proposed CG-YOLOv8 reduces the parameter count by 53.9%, improves mAP@0.5 by 2.2%, and increases precision and recall by 2.0% and 1.8%, respectively. These improvements demonstrate that CG-YOLOv8 offers an efficient and lightweight solution for PCB surface defect detection.
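As an illustration of the lightweight convolution mentioned above, the sketch below implements a GSConv-style block as it is commonly formulated: a dense convolution producing half of the output channels, a cheap depthwise convolution producing the other half, concatenation, and a channel shuffle. Kernel sizes and other details may differ from the exact variant adopted in CG-YOLOv8.

```python
# Sketch (PyTorch) of a GSConv-style block: dense conv + depthwise conv,
# concatenation, channel shuffle. Details may differ from the CG-YOLOv8 variant.
import torch
import torch.nn as nn

class GSConvSketch(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, s=1):
        super().__init__()
        half = out_ch // 2
        # Dense convolution produces half of the output channels.
        self.dense = nn.Sequential(
            nn.Conv2d(in_ch, half, k, s, k // 2, bias=False),
            nn.BatchNorm2d(half), nn.SiLU())
        # Cheap depthwise convolution produces the other half.
        self.depthwise = nn.Sequential(
            nn.Conv2d(half, half, 5, 1, 2, groups=half, bias=False),
            nn.BatchNorm2d(half), nn.SiLU())

    def forward(self, x):
        a = self.dense(x)
        b = self.depthwise(a)
        y = torch.cat([a, b], dim=1)
        # Channel shuffle so dense and depthwise features are interleaved.
        n, c, h, w = y.shape
        return y.view(n, 2, c // 2, h, w).transpose(1, 2).reshape(n, c, h, w)

if __name__ == "__main__":
    out = GSConvSketch(64, 128)(torch.randn(1, 64, 40, 40))
    print(out.shape)   # torch.Size([1, 128, 40, 40])
```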