Recognising human-object interactions (HOI) is a challenging task for traditional machine learning models, including convolutional neural networks (CNNs). Existing models show limited transferability across complex datasets such as D3D-HOI and SYSU 3D HOI. The conventional architecture of CNNs restricts their ability to handle HOI scenarios with high complexity. HOI recognition requires improved feature extraction methods to overcome the current limitations in accuracy and scalability. This work proposes a novel quantum gate-enabled hybrid CNN (QEH-CNN) for effective HOI recognition. The model enhances CNN performance by integrating quantum computing components. The framework begins with bilateral image filtering, followed by multi-object tracking (MOT) and Felzenszwalb superpixel segmentation. A watershed algorithm refines object boundaries by cleaning merged superpixels. Feature extraction combines a histogram of oriented gradients (HOG), Global Image Statistics for Texture (GIST) descriptors, and a novel 23-joint keypoint extraction method using relative joint angles and joint proximity measures. A fuzzy optimization process refines the extracted features before feeding them into the QEH-CNN model. The proposed model achieves 95.06% accuracy on the D3D-HOI dataset and 97.29% on the SYSU 3D HOI dataset. The integration of quantum computing enhances feature optimization, leading to improved accuracy and overall model efficiency.
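As an illustration of the keypoint-feature idea described above, the following minimal sketch computes relative joint angles and scale-normalized joint proximities from a 23-joint 2D skeleton. The angle triples and the normalization scheme are hypothetical choices, not the paper's exact formulation.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b between segments b->a and b->c, in radians."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def keypoint_features(joints, angle_triples):
    """Relative joint angles plus scale-normalized pairwise joint proximities."""
    angles = [joint_angle(joints[i], joints[j], joints[k])
              for i, j, k in angle_triples]
    # Pairwise distances between all joints; keep the upper triangle only.
    dists = np.linalg.norm(joints[:, None, :] - joints[None, :, :], axis=-1)
    prox = dists[np.triu_indices(len(joints), k=1)]
    prox = prox / (prox.max() + 1e-9)  # invariant to subject scale
    return np.concatenate([angles, prox])
```

For 23 joints this yields 253 proximity values plus one value per angle triple; a fuzzy optimization stage, as in the paper, would then refine such a vector before classification.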
Human object detection and recognition is essential for elderly monitoring and assisted living; however, models relying solely on pose or scene context often struggle in cluttered or visually ambiguous settings. To address this, we present SCENET-3D, a transformer-driven multimodal framework that unifies human-centric skeleton features with scene-object semantics for intelligent robotic vision through a three-stage pipeline. In the first stage, scene analysis, rich geometric and texture descriptors are extracted from RGB frames, including surface-normal histograms, angles between neighboring normals, Zernike moments, directional standard deviation, and Gabor-filter responses. In the second stage, scene-object analysis, non-human objects are segmented and represented using local feature descriptors and complementary surface-normal information. In the third stage, human-pose estimation, silhouettes are processed through an enhanced MoveNet to obtain 2D anatomical keypoints, which are fused with depth information and converted into RGB-based point clouds to construct pseudo-3D skeletons. Features from all three stages are fused and fed into a transformer encoder with multi-head attention to resolve visually similar activities. Experiments on UCLA (95.8%), ETRI-Activity3D (89.4%), and CAD-120 (91.2%) demonstrate that combining pseudo-3D skeletons with rich scene-object fusion significantly improves generalizable activity recognition, enabling safer elderly care, natural human–robot interaction, and robust context-aware robotic perception in real-world environments.
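The pseudo-3D skeleton construction described above amounts to back-projecting 2D keypoints with per-joint depth through a pinhole camera model. A minimal sketch, assuming known camera intrinsics (fx, fy, cx, cy); this is the standard back-projection, not necessarily the paper's exact procedure.

```python
import numpy as np

def keypoints_to_pseudo3d(kpts_2d, depth, fx, fy, cx, cy):
    """Back-project 2D pixel keypoints with depth into camera-frame 3D points.

    kpts_2d: (N, 2) array of (u, v) pixel coordinates.
    depth:   (N,) array of depth values (same unit as the output, e.g. meters).
    """
    u, v = kpts_2d[:, 0], kpts_2d[:, 1]
    z = np.asarray(depth, dtype=float)
    x = (u - cx) * z / fx   # pinhole model: u = fx * x / z + cx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```

A keypoint at the principal point maps to (0, 0, depth); the resulting (N, 3) skeleton can then be fed to a transformer encoder alongside scene-object features.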
Objectives This study aimed to design and evaluate a detection system for the accidental dislodgement of head-and-neck medical supplies through hand position recognition and tracking in Intensive Care Unit (ICU) patients. Methods We conducted a single-center, prospective, parallel-group feasibility randomized controlled trial. We recruited 80 participants using convenience sampling from the ICU of a hospital in Ningbo City, Zhejiang Province, between March 2025 and June 2025, and they were randomly assigned to either the control group (routine care) or the intervention group (routine care plus an image recognition-based detection system). The system continuously tracked patients' hand positions via bedside cameras and generated real-time alarms when hands entered predefined risk zones, notifying on-duty nurses to enable early intervention. System stability was assessed by continuous system uptime; system performance and clinical feasibility were evaluated by the frequencies of risk actions and accidental dislodgement of medical supplies (ADMS). Results All 80 participants completed the intervention, with 40 patients in each group. The baseline characteristics and median observation time of the two groups were balanced (intervention group: 48 h/patient vs. control group: 49 h/patient). Compared with the control group, the intervention group showed fewer ADMS (2/40 vs. 9/40) and detected more risk actions per 100 h (36 vs. 25); all system-detected events had corroborating images with complete concordance on manual review, and all nurse-recorded hand-contact events were accurately captured. Conclusions The study demonstrated that the image recognition-based detection system can function stably in clinical settings, providing accurate and continuous surveillance while supporting the early detection of risk actions. By reducing the observation burden and offering real-time cognitive support, the system complements routine nursing care and serves as an additional safety measure in ICU practice. With further optimization and larger multicenter validation, this approach could make a significant contribution to the development of smart ICUs and the broader digital transformation of nursing care.
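The risk-zone alarm logic described in this study can be sketched as a geometric containment check with a frame-count debounce, so that a hand must dwell in a zone for several consecutive frames before an alarm fires. The zone format and the `hold` parameter are illustrative assumptions, not the deployed system's actual rules.

```python
def in_zone(point, zone):
    """Axis-aligned check: is (x, y) inside zone = (x0, y0, x1, y1)?"""
    x, y = point
    x0, y0, x1, y1 = zone
    return x0 <= x <= x1 and y0 <= y <= y1

def check_frame(hand_points, zones, state, hold=3):
    """Return names of zones whose dwell count just reached `hold` frames.

    state maps zone name -> consecutive frames with a hand inside; it is
    updated in place so the caller can feed frames one at a time.
    """
    alarms = []
    for name, zone in zones.items():
        inside = any(in_zone(p, zone) for p in hand_points)
        state[name] = state.get(name, 0) + 1 if inside else 0
        if state[name] == hold:
            alarms.append(name)
    return alarms
```

In a real deployment the hand points would come from a per-frame pose or hand detector, and an alarm would notify the on-duty nurse.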
Face recognition has emerged as one of the most prominent applications of image analysis and understanding, gaining considerable attention in recent years. This growing interest is driven by two key factors: its extensive applications in law enforcement and the commercial domain, and the rapid advancement of practical technologies. Despite the significant advancements, modern recognition algorithms still struggle in real-world conditions such as varying lighting, occlusion, and diverse facial postures. In such scenarios, human perception remains well above the capabilities of present technology. Using a systematic mapping study, this paper presents an in-depth review of face detection and face recognition algorithms, offering a detailed survey of advancements made between 2015 and 2024. We analyze key methodologies, highlighting their strengths and limitations in the application context. Additionally, we examine the various datasets used for face detection and recognition, focusing on task-specific applications, size, diversity, and complexity. By analyzing these algorithms and datasets, this survey serves as a valuable resource for researchers, identifying the research gaps in the field of face detection and recognition and outlining potential directions for future research.
A two-stage algorithm based on deep learning for the detection and recognition of can bottom spray codes and numbers is proposed to address the problems of small character areas and fast production line speeds in can bottom spray code number recognition. In the coding number detection stage, Differentiable Binarization Network is used as the backbone network, combined with the Attention and Dilation Convolutions Path Aggregation Network feature fusion structure to enhance the model's detection performance. For text recognition, the Scene Visual Text Recognition coding number recognition network is trained end-to-end, which alleviates coding recognition errors caused by image color distortion due to variations in lighting and background noise. In addition, model pruning and quantization are used to reduce the number of model parameters to meet deployment requirements in resource-constrained environments. A comparative experiment was conducted using a dataset of can bottom spray code numbers collected on-site, and a transfer experiment was conducted using a dataset of packaging box production dates. The experimental results show that the algorithm proposed in this study can effectively locate the coding of cans at different positions on the roller conveyor and can accurately identify the coding numbers at high production line speeds. The Hmean value of the coding number detection is 97.32%, and the accuracy of the coding number recognition is 98.21%. This verifies that the proposed algorithm achieves high accuracy in coding number detection and recognition.
Load time series analysis is critical for resource management and optimization decisions, especially automated analysis techniques. Existing research has insufficiently interpreted the overall characteristics of samples, leading to significant differences in load level detection conclusions for samples with different characteristics (trend, seasonality, cyclicality). Achieving automated, feature-adaptive, and quantifiable analysis methods remains a challenge. This paper proposes a Threshold Recognition-based Load Level Detection Algorithm (TRLLD), which effectively identifies different load level regions in samples of arbitrary size and distribution type based on sample characteristics. By utilizing distribution density uniformity, the algorithm classifies data points and ultimately obtains normalized load values. In the feature recognition step, the algorithm employs the Density Uniformity Index Based on Differences (DUID), High Load Level Concentration (HLLC), and Low Load Level Concentration (LLLC) to assess sample characteristics. These indices are independent of specific load values, providing a standardized perspective on features and ensuring high efficiency and strong interpretability. Compared to traditional methods, the proposed approach demonstrates better adaptive and real-time analysis capabilities. Experimental results indicate that it can effectively identify high-load and low-load regions in 16 groups of time series samples with different load characteristics, yielding highly interpretable results. The correlation between the DUID and sample density distribution uniformity reaches 98.08%. When introducing 10% MAD-intensity noise, the maximum relative error is 4.72%, showcasing high robustness. Notably, it exhibits significant advantages in general and low-sample scenarios.
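A simplified, hypothetical reading of the density-uniformity and load-level ideas above: a uniformity index built from consecutive-gap statistics, and a quantile-based split of normalized load values into low/mid/high regions. The actual DUID, HLLC, and LLLC definitions are in the paper and are not reproduced here.

```python
import numpy as np

def duid(sample):
    """Illustrative density-uniformity index: evenly spaced values -> ~1."""
    x = np.sort(np.asarray(sample, dtype=float))
    gaps = np.diff(x)
    if gaps.mean() == 0:
        return 1.0
    return float(max(0.0, 1.0 - gaps.std() / gaps.mean()))

def load_levels(sample, low_q=0.25, high_q=0.75):
    """Min-max normalize loads, then label points by quantile thresholds."""
    x = np.asarray(sample, dtype=float)
    norm = (x - x.min()) / (x.max() - x.min() + 1e-12)  # normalized load values
    lo, hi = np.quantile(norm, [low_q, high_q])
    labels = np.where(norm >= hi, "high", np.where(norm <= lo, "low", "mid"))
    return labels, norm
```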
In this work, a novel electrochemical sensor based on covalent organic framework@carbon black@molecularly imprinted polydopamine (COF@CB@MPDA) was developed for the selective recognition and determination of ciprofloxacin (CF). COF@CB@MPDA possessed good water dispersibility and was synthesized by the self-polymerization of dopamine under alkaline conditions in the presence of the COF, CB, and CF. The high-surface-area COF enhanced the adsorption of CF, whilst CB gave the composites high electrical conductivity to improve the sensitivity of the proposed COF@CB@MPDA/glassy carbon electrode (GCE) sensor. The specific recognition of CF by COF@CB@MPDA involved hydrogen bonding and van der Waals interactions. Under optimized conditions, the sensor showed a good linear relationship with CF concentration over the range of 5.0×10^(–7) to 1.0×10^(–4) mol/L, with a limit of detection (LOD) of 9.53×10^(–8) mol/L. Further, the developed sensor exhibited high selectivity, repeatability, and stability for CF detection in milk and milk powders. The method used to fabricate the COF@CB@MPDA/GCE sensor could be easily adapted for the selective recognition and detection of other antibacterial agents and organic pollutants in the environment.
Enantiomer identification is of paramount industrial value and physiological significance. The construction of sensitive chiral sensors with high enantiomeric discrimination ability is highly desirable. In this work, a chiral covalent organic framework/anodic aluminum oxide (c-COF/AAO) membrane was prepared for electrochemical enantioselective recognition and sensing. Benefiting from its remarkable asymmetry, the as-prepared nanofluidic c-COF/AAO presents a distinct ion current rectification (ICR) characteristic, enabling sensitive bioanalysis. In addition, owing to the large surface area, high chemical stability, and perfect ion selectivity of the chiral COF, the prepared c-COF/AAO membrane presents exceptionally selective mass transport and thereby enables excellent chiral discrimination for S-/R-Naproxen (S-/R-Npx) enantiomers. It is especially noteworthy that a detection limit as low as 3.88 pmol/L is achieved. These results raise the possibility of a facile, stable, and low-cost method for sensitive enantioselective recognition and detection.
The increased accessibility of social networking services (SNSs) has facilitated communication and information sharing among users. However, it has also heightened concerns about digital safety, particularly for children and adolescents who are increasingly exposed to online grooming crimes. Early and accurate identification of grooming conversations is crucial in preventing long-term harm to victims. However, research on grooming detection in South Korea remains limited, as existing models are trained primarily on English text and fail to reflect the unique linguistic features of SNS conversations, leading to inaccurate classifications. To address these issues, this study proposes a novel framework that integrates optical character recognition (OCR) technology with KcELECTRA, a deep learning-based natural language processing (NLP) model that shows excellent performance in processing colloquial Korean. In the proposed framework, the KcELECTRA model is fine-tuned on an extensive dataset, including Korean social media conversations, Korean ethical verification data from AI-Hub, and Korean hate speech data from HuggingFace, to enable more accurate classification of text extracted from social media conversation images. Experimental results show that the proposed framework achieves an accuracy of 0.953, outperforming existing transformer-based models. Furthermore, OCR technology shows high accuracy in extracting text from images, demonstrating that the proposed framework is effective for online grooming detection. The proposed framework is expected to contribute to the more accurate detection of grooming text and the prevention of grooming-related crimes.
In order to meet the requirements of accurate identification of surface defects on copper strip in industrial production, a machine vision-based surface defect detection model, CSC-YOLO, is proposed. The model uses YOLOv4-tiny as the benchmark network. First, K-means clustering is introduced into the benchmark network to obtain anchor frames that match the self-built dataset. Second, a cross-region fusion module is introduced in the backbone network to solve the difficult target recognition problem by fusing contextual semantic information. Third, the spatial pyramid pooling-efficient channel attention network (SPP-E) module is introduced in the path aggregation network (PANet) to enhance feature extraction. Fourth, to prevent the loss of channel information, a lightweight attention mechanism is introduced to improve the performance of the network. Finally, the performance of the model is improved by adding adjustment factors that correct the loss function for the dimensional characteristics of the surface defects. CSC-YOLO was tested on the self-built dataset of copper strip surface defects, and the experimental results showed that the mAP of the model reaches 93.58%, a 3.37% improvement over the benchmark network, while the FPS, although lower than that of the benchmark network, reaches 104. CSC-YOLO thus meets the real-time requirements of copper strip production. Comparison experiments with Faster RCNN, SSD300, YOLOv3, YOLOv4, Resnet50-YOLOv4, YOLOv5s, YOLOv7, and other algorithms show that the proposed algorithm obtains a faster computation speed while maintaining a higher detection accuracy.
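The K-means anchor step described above is a standard YOLO technique: dataset box sizes are clustered under a 1 − IoU distance so that the anchors match typical defect shapes. A minimal sketch, simplified with a mean-based centroid update, which is not necessarily CSC-YOLO's exact recipe:

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (w, h) pairs, treating boxes as sharing a common corner."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0])
             * np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = ((boxes[:, 0] * boxes[:, 1])[:, None]
             + (anchors[:, 0] * anchors[:, 1])[None, :] - inter)
    return inter / union

def kmeans_anchors(boxes, k, iters=50, seed=0):
    """Cluster dataset box sizes with a 1 - IoU distance to pick anchors."""
    boxes = np.asarray(boxes, dtype=float)
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), size=k, replace=False)].copy()
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)  # max IoU = min distance
        for j in range(k):
            members = boxes[assign == j]
            if len(members):
                anchors[j] = members.mean(axis=0)
    return anchors
```

Run on a defect dataset's ground-truth (width, height) pairs, this yields anchor shapes tailored to the self-built data rather than the COCO defaults.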
During tests on transient pressure signals in an explosion field, false triggers caused by field interference can lead to test failure. To improve the stability of the test system, a signal detection and recognition technology is proposed for the transient pressure test system. In the process of signal acquisition, electrical levels are first monitored in real time to find effective abrupt changes and mark them; the effective data segments are then detected, so that the effective signals can finally be acquired in turn. The experimental results show that the shock wave signal can be collected effectively and the reliability of the test system can be improved after the removal of interference.
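The false-trigger rejection idea above can be sketched as requiring a level change to persist for several consecutive samples before it counts as an effective abrupt change, so a one-sample interference spike is ignored. The threshold-and-hold rule here is an illustrative stand-in for the paper's detection logic.

```python
def find_trigger(signal, threshold, hold):
    """Return the index where `signal` first exceeds `threshold` for
    `hold` consecutive samples, or -1 if no sustained change is found."""
    run = 0
    for i, v in enumerate(signal):
        run = run + 1 if v > threshold else 0   # count consecutive exceedances
        if run >= hold:
            return i - hold + 1                  # start of the sustained change
    return -1
```

An isolated spike is rejected, while a sustained shock-wave rise is marked at its onset.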
To ensure revulsive driving of intelligent vehicles at intersections, a method is presented to detect and recognize the traffic lights. First, the stabling siding at intersections is detected by applying the Hough transformation. Then, the colors of the traffic lights are detected with color space transformation. Finally, self-associative memory is used to recognize the countdown characters of the traffic lights. Test results at 20 real intersections show that the ratio of correct stabling siding recognition reaches up to 90%, and the ratios of recognition of traffic lights and divided characters are 85% and 97%, respectively. The research proves that the method is efficient for the detection of stabling siding and is robust enough to recognize the characters from images with noise and broken edges.
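The color-detection step above can be illustrated with a toy classifier that labels a light patch from its mean RGB channel ratios. The thresholds are hypothetical and far simpler than a real color-space transformation, but they show the shape of the decision.

```python
import numpy as np

def classify_light(patch):
    """Classify a traffic-light patch as red/yellow/green from mean RGB.

    patch: any array-like reshapeable to (N, 3) RGB pixels.
    Thresholds are illustrative, not calibrated values.
    """
    r, g, b = np.asarray(patch, dtype=float).reshape(-1, 3).mean(axis=0)
    if r > 1.5 * b and g > 1.5 * b and abs(r - g) < 0.3 * max(r, g):
        return "yellow"   # red and green both strong, roughly balanced
    if r > 1.5 * g:
        return "red"
    if g > 1.5 * r:
        return "green"
    return "unknown"
```

A production system would first localize the lamp region, convert to a more illumination-robust color space (e.g. HSV), and only then threshold.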
Visible and infrared (RGB-IR) fusion object detection plays an important role in security, disaster relief, and related applications. In recent years, deep-learning-based RGB-IR fusion detection methods have been developing rapidly but still struggle to deal with the complex and changing scenarios captured by drones, mainly for two reasons: (A) RGB-IR fusion detectors are susceptible to inferior inputs that degrade performance and stability; (B) RGB-IR fusion detectors are susceptible to redundant features that reduce accuracy and efficiency. In this paper, an innovative RGB-IR fusion detection framework based on global-local feature optimization, named GLFDet, is proposed to improve the detection performance and efficiency for drone-captured objects. The key components of GLFDet are a Global Feature Optimization (GFO) module, a Local Feature Optimization (LFO) module, and a Channel Separation Fusion (CSF) module. Specifically, GFO calculates the information content of the input image in the frequency domain and optimizes the features holistically. Then, LFO dynamically selects high-value features and filters out low-value features before fusion, which significantly improves the efficiency of fusion. Finally, CSF fuses the RGB and IR features across the corresponding channels, which avoids rearranging the channel relationships and enhances model stability. Extensive experimental results show that the proposed method achieves the best performance on three popular RGB-IR datasets: DroneVehicle, VEDAI, and LLVIP. In addition, GLFDet is more lightweight than other comparable models, making it more appealing for edge devices such as drones. The code is available at https://github.com/laochen330/GLFDet.
An object learning and recognition system is implemented for humanoid robots to discover and memorize objects through simple interactions with non-expert users. When an object is presented, the system makes use of motion information over consecutive frames to extract object features and implements machine learning based on the bag-of-visual-words approach. Instead of using a local feature descriptor alone, the proposed system uses co-occurring local features in order to increase feature discriminative power in both the object model learning and inference stages. For objects with different textures, a hybrid sampling strategy is considered. This hybrid approach minimizes the consumption of computational resources and helps achieve the good performance demonstrated on a set of a dozen different daily objects.
Behavior recognition of Hu sheep contributes to their intensive and intelligent farming. Due to the generally high density of Hu sheep farming, severe occlusion occurs among different behaviors and even among sheep performing the same behavior, leading to missed and false detections in existing behavior recognition methods. A high-low frequency aggregated attention and negative sample comprehensive score loss and comprehensive score soft non-maximum suppression YOLO (HLNC-YOLO) was proposed for identifying the behavior of Hu sheep, addressing the missed and erroneous detections caused by occlusion between Hu sheep in intensive farming. First, images of four typical behaviors (standing, lying, eating, and drinking) were collected from the sheep farm to construct the Hu sheep behavior dataset (HSBD). Next, to address the occlusion issues, the C2F-HLAtt module, which combines high-low frequency aggregation attention, was integrated into the YOLO v8 backbone during the training phase to perceive occluded objects, and an auxiliary reversible branch was introduced to retain more effective features. Comprehensive score regression loss (CSLoss) was used to reduce the scores of suboptimal boxes and enhance the comprehensive scores of occluded object boxes. Finally, the soft comprehensive score non-maximal suppression (Soft-CS-NMS) algorithm filtered prediction boxes during inference. Tested on the HSBD, HLNC-YOLO achieved a mean average precision (mAP@50) of 87.8% with a memory footprint of 17.4 MB, an improvement of 7.1, 2.2, 4.6, and 11 percentage points over YOLO v8, YOLO v9, YOLO v10, and Faster R-CNN, respectively. The research indicates that HLNC-YOLO accurately identifies the behavior of Hu sheep in intensive farming and possesses generalization capabilities, providing technical support for smart farming.
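Soft-NMS variants such as the Soft-CS-NMS above replace hard suppression with score decay for overlapping boxes, so an occluded sheep's box is down-weighted rather than discarded. A minimal Gaussian Soft-NMS sketch in its standard formulation, without the paper's comprehensive-score term:

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, format (x0, y0, x1, y1)."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(np.asarray(box)) + area(boxes) - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay overlapping scores instead of deleting boxes."""
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float).copy()
    keep, idx = [], list(range(len(boxes)))
    while idx:
        best = max(idx, key=lambda i: scores[i])
        keep.append(best)
        idx.remove(best)
        if idx:
            ious = iou(boxes[best], boxes[idx])
            scores[idx] *= np.exp(-ious ** 2 / sigma)   # Gaussian decay
        idx = [i for i in idx if scores[i] > score_thresh]
    return keep, scores
```

With two coincident boxes, the weaker one survives with a heavily decayed score instead of being removed outright, which is exactly what helps under occlusion.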
Online examinations have become a dominant assessment mode, increasing concerns over academic integrity. To address the critical challenge of detecting cheating behaviours, this study proposes a hybrid deep learning approach that combines visual detection and temporal behaviour classification. The methodology utilises object detection models—You Only Look Once (YOLOv12), Faster Region-based Convolutional Neural Network (RCNN), and Single Shot Detector (SSD) MobileNet—integrated with classification models such as Convolutional Neural Networks (CNN), Bidirectional Gated Recurrent Unit (Bi-GRU), and CNN-LSTM (Long Short-Term Memory). Two distinct datasets were used: the Online Exam Proctoring (EOP) dataset from Michigan State University and the School of Computer Science, Duy Tan University (SCS-DTU) dataset collected in a controlled classroom setting. A diverse set of cheating behaviours, including book usage, unauthorised interaction, internet access, and mobile phone use, was categorised. Comprehensive experiments evaluated the models based on accuracy, precision, recall, training time, inference speed, and memory usage. We evaluate nine detector-classifier pairings under a unified budget and score them via a calibrated harmonic mean of detection and classification accuracies, enabling deployment-oriented selection under latency and memory constraints. Macro-Precision/Recall/F1 and Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) are reported for the top configurations, revealing consistent advantages of object-centric pipelines for fine-grained cheating cues. The highest overall score is achieved by YOLOv12+CNN (97.15% accuracy), while SSD-MobileNet+CNN provides the best speed-efficiency trade-off for edge devices. This research provides valuable insights into selecting and deploying appropriate deep learning models for maintaining exam integrity under varying resource constraints.
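The harmonic-mean scoring mentioned above combines detection and classification accuracies so that a weak stage cannot be masked by a strong one. A minimal (uncalibrated) sketch of that combination rule:

```python
def harmonic_score(det_acc, cls_acc):
    """Harmonic mean of detection and classification accuracies in [0, 1].

    Unlike the arithmetic mean, this drops toward the weaker of the two,
    penalising pipelines where one stage carries the other.
    """
    if det_acc + cls_acc == 0:
        return 0.0
    return 2 * det_acc * cls_acc / (det_acc + cls_acc)
```

For example, a pairing with detection 0.8 and classification 0.6 scores about 0.686, below their arithmetic mean of 0.7.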
Next-generation fire safety systems demand precise detection and motion recognition of flames. In-sensor computing, which integrates sensing, memory, and processing capabilities, has emerged as a key technology for flame detection. However, hardware-level functional demonstrations of artificial vision systems in the solar-blind ultraviolet (UV) band (200-280 nm) are hindered by weak detection capability. Here, we propose Ga_(2)O_(3)/In_(2)Se_(3) heterojunctions for a ferroelectric (Fe) optoelectronic sensor (OES) array (5×5 pixels), which is capable of ultraweak UV light detection with ultrahigh detectivity through ferroelectric regulation and features configurable multimode functionality. The Fe-OES array can directly sense different flame motions and simulate the non-spiking gradient neurons of the insect visual system. Moreover, the flame signal can be effectively amplified in combination with leaky integrate-and-fire neuron hardware. Using this Fe-OES system and neuromorphic hardware, we successfully demonstrate three flame processing tasks: efficient flame detection across all time periods with terminal and cloud-based alarms; flame motion recognition with a lightweight convolutional neural network achieving 96.47% accuracy; and flame light recognition with 90.51% accuracy by means of a photosensitive artificial neural system. This work provides effective tools and approaches for addressing a variety of complex flame detection tasks.
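The leaky integrate-and-fire neuron mentioned above can be sketched in a few lines: the membrane potential decays each step, accumulates input, and emits a spike with a reset once it crosses threshold. The decay factor and threshold here are illustrative software values, not the hardware's parameters.

```python
def lif_spikes(inputs, tau=0.9, threshold=1.0):
    """Leaky integrate-and-fire: membrane decays by tau, spikes reset it."""
    v, spikes = 0.0, []
    for x in inputs:
        v = tau * v + x          # leak, then integrate the new input
        if v >= threshold:
            spikes.append(1)
            v = 0.0              # reset after firing
        else:
            spikes.append(0)
    return spikes
```

A sub-threshold input that persists eventually accumulates into a spike, which is how sustained flame signals get amplified relative to transient noise.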
Small object detection has been a focus of attention since the emergence of deep learning-based object detection. Although classical object detection frameworks have made significant contributions to the development of object detection, many issues remain to be resolved in detecting small objects due to the inherent complexity and diversity of real-world visual scenes. In particular, the YOLO (You Only Look Once) series of detection models, renowned for their real-time performance, have undergone numerous adaptations aimed at improving the detection of small targets. In this survey, we summarize the state-of-the-art YOLO-based small object detection methods. The review presents a systematic categorization of YOLO-based approaches for small object detection, organized into four methodological avenues: attention-based feature enhancement, detection-head optimization, loss function design, and multi-scale feature fusion strategies. We then examine the principal challenges addressed by each category. Finally, we analyze the performance of these methods on public benchmarks and, by comparing current approaches, identify limitations and outline directions for future research.
The rapid proliferation of Internet of Things (IoT) devices in critical healthcare infrastructure has introduced significant security and privacy challenges that demand innovative, distributed architectural solutions. This paper proposes FE-ACS (Fog-Edge Adaptive Cybersecurity System), a novel hierarchical security framework that intelligently distributes AI-powered anomaly detection algorithms across edge, fog, and cloud layers to optimize security efficacy, latency, and privacy. Our comprehensive evaluation demonstrates that FE-ACS achieves superior detection performance with an AUC-ROC of 0.985 and an F1-score of 0.923, while maintaining significantly lower end-to-end latency (18.7 ms) compared to cloud-centric (152.3 ms) and fog-only (34.5 ms) architectures. The system exhibits exceptional scalability, supporting up to 38,000 devices with logarithmic performance degradation, a 67× improvement over conventional cloud-based approaches. By incorporating differential privacy mechanisms with balanced privacy-utility tradeoffs (ε = 1.0–1.5), FE-ACS maintains 90%–93% detection accuracy while ensuring strong privacy guarantees for sensitive healthcare data. Computational efficiency analysis reveals that our architecture achieves a detection rate of 12,400 events per second with only 12.3 mJ energy consumption per inference. In healthcare risk assessment, FE-ACS demonstrates robust operational viability with low patient safety risk (14.7%) and high system reliability (94.0%). The proposed framework represents a significant advancement in distributed security architectures, offering a scalable, privacy-preserving, and real-time solution for protecting healthcare IoT ecosystems against evolving cyber threats.
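The differential-privacy component above can be illustrated with the standard Laplace mechanism, which an ε budget in the 1.0–1.5 range would parameterize. This sketch releases a single noisy count and is only an illustration, not the paper's pipeline.

```python
import numpy as np

def laplace_private_count(true_count, epsilon, sensitivity=1.0, seed=None):
    """Release a count with epsilon-differential privacy via Laplace noise.

    Noise scale is sensitivity / epsilon: a smaller epsilon (stronger
    privacy) means larger noise and lower utility.
    """
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    return true_count + rng.laplace(0.0, scale)
```

For example, an edge node could report the number of anomalous events per window this way, keeping any single device's contribution plausibly deniable.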
Breast cancer screening programs rely heavily on mammography for early detection; however, diagnostic performance is strongly affected by inter-reader variability, breast density, and the limitations of conventional computer-aided detection systems. Recent advances in deep learning have enabled more robust and scalable solutions for large-scale screening, yet a systematic comparison of modern object detection architectures on nationally representative datasets remains limited. This study presents a comprehensive quantitative comparison of prominent deep learning-based object detection architectures for Artificial Intelligence-assisted mammography analysis using the MammosighTR dataset, developed within the Turkish National Breast Cancer Screening Program. The dataset comprises 12,740 patient cases collected between 2016 and 2022, annotated with BI-RADS categories, breast density levels, and lesion localization labels. A total of 31 models were evaluated, including One-Stage, Two-Stage, and Transformer-based architectures, under a unified experimental framework at both the patient and breast levels. The results demonstrate that Two-Stage architectures consistently outperform One-Stage models, achieving approximately 2%–4% higher Macro F1-Scores and more balanced precision-recall trade-offs, with Double-Head R-CNN and Dynamic R-CNN yielding the highest overall performance (Macro F1 ≈ 0.84–0.86). This advantage is primarily attributed to the region proposal mechanism and improved class balance inherent to Two-Stage designs. One-Stage detectors exhibited higher sensitivity and faster inference, reaching Recall values above 0.88, but experienced minor reductions in Precision and overall accuracy (≈1%–2%) compared with Two-Stage models. Among Transformer-based architectures, Deformable DEtection TRansformer demonstrated strong robustness and consistency across datasets, achieving Macro F1-Scores comparable to CNN-based detectors (≈0.83–0.85) while exhibiting minimal performance degradation under distributional shifts. Breast density-based analysis revealed increased misclassification rates in the medium-density categories (types B and C), whereas Transformer-based architectures maintained more stable performance in high-density type D tissue. These findings quantitatively confirm that both architectural design and tissue characteristics play a decisive role in diagnostic accuracy. Overall, the study provides a reproducible benchmark and highlights the potential of hybrid approaches that combine the accuracy of Two-Stage detectors with the contextual modeling capability of Transformer architectures for clinically reliable breast cancer screening systems.
Funding: Supported and funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Recognising human-object interactions (HOI) is a challenging task for traditional machine learning models, including convolutional neural networks (CNNs). Existing models show limited transferability across complex datasets such as D3D-HOI and SYSU 3D HOI. The conventional architecture of CNNs restricts their ability to handle HOI scenarios with high complexity. HOI recognition requires improved feature extraction methods to overcome the current limitations in accuracy and scalability. This work proposes a novel quantum gate-enabled hybrid CNN (QEH-CNN) for effective HOI recognition. The model enhances CNN performance by integrating quantum computing components. The framework begins with bilateral image filtering, followed by multi-object tracking (MOT) and Felzenszwalb superpixel segmentation. A watershed algorithm refines object boundaries by cleaning merged superpixels. Feature extraction combines a histogram of oriented gradients (HOG), Global Image Statistics for Texture (GIST) descriptors, and a novel 23-joint keypoint extraction method using relative joint angles and joint proximity measures. A fuzzy optimization process refines the extracted features before feeding them into the QEH-CNN model. The proposed model achieves 95.06% accuracy on the D3D-HOI dataset and 97.29% on the SYSU 3D HOI dataset. The integration of quantum computing enhances feature optimization, leading to improved accuracy and overall model efficiency.
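The 23-joint keypoint features described above are built from relative joint angles and joint proximity measures. A minimal sketch of those two geometric primitives in plain Python (the joint coordinates are illustrative assumptions, not the paper's exact formulation):

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    # Clamp to guard against floating-point drift outside [-1, 1]
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))

def joint_proximity(p, q):
    """Euclidean distance between two joints."""
    return math.hypot(p[0] - q[0], p[1] - q[1])
```

A full feature vector would evaluate these over all relevant joint triples and pairs of the 23-joint skeleton.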
Funding: Funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Human object detection and recognition is essential for elderly monitoring and assisted living; however, models relying solely on pose or scene context often struggle in cluttered or visually ambiguous settings. To address this, we present SCENET-3D, a transformer-driven multimodal framework that unifies human-centric skeleton features with scene-object semantics for intelligent robotic vision through a three-stage pipeline. In the first stage, scene analysis, rich geometric and texture descriptors are extracted from RGB frames, including surface-normal histograms, angles between neighboring normals, Zernike moments, directional standard deviation, and Gabor-filter responses. In the second stage, scene-object analysis, non-human objects are segmented and represented using local feature descriptors and complementary surface-normal information. In the third stage, human-pose estimation, silhouettes are processed through an enhanced MoveNet to obtain 2D anatomical keypoints, which are fused with depth information and converted into RGB-based point clouds to construct pseudo-3D skeletons. Features from all three stages are fused and fed into a transformer encoder with multi-head attention to resolve visually similar activities. Experiments on UCLA (95.8%), ETRI-Activity3D (89.4%), and CAD-120 (91.2%) demonstrate that combining pseudo-3D skeletons with rich scene-object fusion significantly improves generalizable activity recognition, enabling safer elderly care, natural human-robot interaction, and robust context-aware robotic perception in real-world environments.
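The pseudo-3D skeleton construction above lifts 2D keypoints with depth into 3D. A common way to do this is pinhole back-projection; the sketch below assumes hypothetical camera intrinsics (fx, fy, cx, cy) and is not necessarily the exact conversion SCENET-3D uses:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project a 2D keypoint (u, v) with metric depth into a
    camera-frame 3D point using the pinhole model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def pseudo_3d_skeleton(keypoints_2d, depth_at, fx, fy, cx, cy):
    """Lift each 2D joint to 3D using the depth sampled at its pixel.
    depth_at maps (u, v) pixel coordinates to depth values."""
    return [backproject(u, v, depth_at[(u, v)], fx, fy, cx, cy)
            for u, v in keypoints_2d]
```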
Abstract: Objectives: This study aimed to design and evaluate a detection system for the accidental dislodgement of head-and-neck medical supplies through hand position recognition and tracking in Intensive Care Unit (ICU) patients. Methods: We conducted a single-center, prospective, parallel-group feasibility randomized controlled trial. We recruited 80 participants using convenience sampling from the ICU of a hospital in Ningbo City, Zhejiang Province, between March 2025 and June 2025, and they were randomly assigned to either the control group (routine care) or the intervention group (routine care plus an image recognition-based detection system). The system continuously tracked patients' hand positions via bedside cameras and generated real-time alarms when hands entered predefined risk zones, notifying on-duty nurses to enable early intervention. System stability was assessed by continuous system uptime; system performance and clinical feasibility were evaluated by the frequencies of risk actions and accidental dislodgement of medical supplies (ADMS). Results: All 80 participants completed the intervention, with 40 patients in each group. The baseline characteristics and median observation time of the two groups were balanced (intervention group: 48 h/patient vs. control group: 49 h/patient). Compared with the control group, the intervention group showed fewer ADMS (2/40 vs. 9/40) and detected more risk actions per 100 h (36 vs. 25); all system-detected events had corroborating images with complete concordance on manual review, and all nurse-recorded hand-contact events were accurately captured. Conclusions: The study demonstrated that the image recognition-based detection system can function stably in clinical settings, providing accurate and continuous surveillance while supporting the early detection of risk actions. By reducing the observation burden and offering real-time cognitive support, the system complements routine nursing care and serves as an additional safety measure in ICU practice. With further optimization and larger multicenter validation, this approach could contribute significantly to the development of smart ICUs and the broader digital transformation of nursing care.
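The alarm logic above reduces to testing whether a tracked hand position falls inside a predefined risk zone. A minimal sketch, assuming axis-aligned rectangular zones (the zone format is an assumption, not taken from the paper):

```python
def in_risk_zone(point, zone):
    """True if a tracked (x, y) hand position lies inside an
    axis-aligned rectangular risk zone (x1, y1, x2, y2)."""
    x, y = point
    x1, y1, x2, y2 = zone
    return x1 <= x <= x2 and y1 <= y <= y2

def should_alarm(hand_positions, zones):
    """Raise an alarm if any tracked hand enters any risk zone."""
    return any(in_risk_zone(p, z) for p in hand_positions for z in zones)
```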
Abstract: Face recognition has emerged as one of the most prominent applications of image analysis and understanding, gaining considerable attention in recent years. This growing interest is driven by two key factors: its extensive applications in law enforcement and the commercial domain, and the rapid advancement of practical technologies. Despite the significant advancements, modern recognition algorithms still struggle in real-world conditions such as varying lighting, occlusion, and diverse facial postures. In such scenarios, human perception remains well above the capabilities of present technology. Using a systematic mapping study, this paper presents an in-depth review of face detection and face recognition algorithms, offering a detailed survey of advancements made between 2015 and 2024. We analyze key methodologies, highlighting their strengths and limitations in the application context. Additionally, we examine the datasets used for face detection and recognition, focusing on task-specific applications, size, diversity, and complexity. By analyzing these algorithms and datasets, this survey serves as a valuable resource for researchers, identifying the research gaps in the field of face detection and recognition and outlining potential directions for future research.
Abstract: A two-stage algorithm based on deep learning for the detection and recognition of can bottom spray codes and numbers is proposed to address the problems of small character areas and fast production line speeds in can bottom spray code number recognition. In the coding number detection stage, Differentiable Binarization Network is used as the backbone network, combined with the Attention and Dilation Convolutions Path Aggregation Network feature fusion structure to enhance the model's detection effect. For text recognition, using the Scene Visual Text Recognition coding number recognition network for end-to-end training can alleviate coding recognition errors caused by image color distortion due to variations in lighting and background noise. In addition, model pruning and quantization are used to reduce the number of model parameters to meet deployment requirements in resource-constrained environments. A comparative experiment was conducted using the dataset of can bottom spray code numbers collected on-site, and a transfer experiment was conducted using a dataset of packaging box production dates. The experimental results show that the algorithm proposed in this study can effectively locate the coding of cans at different positions on the roller conveyor and can accurately identify the coding numbers at high production line speeds. The Hmean value of the coding number detection is 97.32%, and the accuracy of the coding number recognition is 98.21%. This verifies that the proposed algorithm has high accuracy in coding number detection and recognition.
Abstract: Load time series analysis is critical for resource management and optimization decisions, especially automated analysis techniques. Existing research has insufficiently interpreted the overall characteristics of samples, leading to significant differences in load level detection conclusions for samples with different characteristics (trend, seasonality, cyclicality). Achieving automated, feature-adaptive, and quantifiable analysis methods remains a challenge. This paper proposes a Threshold Recognition-based Load Level Detection Algorithm (TRLLD), which effectively identifies different load level regions in samples of arbitrary size and distribution type based on sample characteristics. By utilizing distribution density uniformity, the algorithm classifies data points and ultimately obtains normalized load values. In the feature recognition step, the algorithm employs the Density Uniformity Index Based on Differences (DUID), High Load Level Concentration (HLLC), and Low Load Level Concentration (LLLC) to assess sample characteristics. These indices are independent of specific load values, providing a standardized perspective on features and ensuring high efficiency and strong interpretability. Compared to traditional methods, the proposed approach demonstrates better adaptive and real-time analysis capabilities. Experimental results indicate that it can effectively identify high-load and low-load regions in 16 groups of time series samples with different load characteristics, yielding highly interpretable results. The correlation between the DUID and sample density distribution uniformity reaches 98.08%. When 10% MAD-intensity noise is introduced, the maximum relative error is 4.72%, showcasing high robustness. Notably, it exhibits significant advantages in general and low-sample scenarios.
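TRLLD's DUID/HLLC/LLLC indices are not specified in the abstract; purely as an illustration of labeling load level regions on normalized values, one could write the following (the 0.3/0.7 cut-offs are arbitrary assumptions, not the paper's method):

```python
def load_levels(series, hi=0.7, lo=0.3):
    """Min-max normalize a load series to [0, 1] and label each point
    as a high, mid, or low load level region (illustrative thresholds)."""
    mn, mx = min(series), max(series)
    span = (mx - mn) or 1.0  # guard against a constant series
    norm = [(x - mn) / span for x in series]
    return ["high" if v >= hi else "low" if v <= lo else "mid" for v in norm]
```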
Funding: Supported by the Project of Key R&D Program of Shandong Province (2023CXGC010712).
Abstract: In this work, a novel electrochemical sensor based on covalent organic framework@carbon black@molecularly imprinted polydopamine (COF@CB@MPDA) was developed for selective recognition and determination of ciprofloxacin (CF). COF@CB@MPDA possessed good water dispersibility and was synthesized by the self-polymerization of dopamine under alkaline conditions in the presence of the COF, CB, and CF. The high-surface-area COF enhanced the adsorption of CF, whilst CB gave the composites high electrical conductivity to improve the sensitivity of the proposed COF@CB@MPDA/glassy carbon electrode (GCE) sensor. The specific recognition of CF by COF@CB@MPDA involved hydrogen bonding and van der Waals interactions. Under optimized conditions, the sensor showed a good linear relationship with CF concentration over the range of 5.0×10^(–7) to 1.0×10^(–4) mol/L, with a limit of detection (LOD) of 9.53×10^(–8) mol/L. Further, the developed sensor exhibited high selectivity, repeatability, and stability for CF detection in milk and milk powders. The method used to fabricate the COF@CB@MPDA/GCE sensor could be easily adapted for the selective recognition and detection of other antibacterial agents and organic pollutants in the environment.
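The reported linear range and LOD follow standard electrochemical calibration practice; the sketch below fits a least-squares calibration slope and applies the common LOD = 3σ_blank/slope convention (the numbers are illustrative, not the paper's data):

```python
def calibration_slope(conc, current):
    """Least-squares slope of the current-vs-concentration line."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(current) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, current))
    sxx = sum((x - mx) ** 2 for x in conc)
    return sxy / sxx

def limit_of_detection(sigma_blank, slope):
    """LOD = 3 * sigma_blank / slope (the common IUPAC 3-sigma convention)."""
    return 3.0 * sigma_blank / slope
```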
Funding: Supported by grants from the National Natural Science Foundation of China (Nos. 22274076, 22304084), the Primary Research & Development Plan of Jiangsu Province (No. BE2022793), the Natural Science Foundation of Jiangsu Province of China (No. BK20230377), and the Jiangsu Provincial Department of Education (No. 211090B52303).
Abstract: Enantiomer identification is of paramount industrial value and physiological significance. The construction of sensitive chiral sensors with high enantiomeric discrimination ability is highly desirable. In this work, a chiral covalent organic framework/anodic aluminum oxide (c-COF/AAO) membrane was prepared for electrochemical enantioselective recognition and sensing. Benefiting from its remarkable asymmetry, the as-prepared nanofluidic c-COF/AAO presents a distinct ion current rectification (ICR) characteristic, enabling sensitive bioanalysis. In addition, owing to the large surface area, high chemical stability, and perfect ion selectivity of the chiral COF, the prepared c-COF/AAO membrane presents exceptionally selective mass transport and thereby enables excellent chiral discrimination of S-/R-Naproxen (S-/R-Npx) enantiomers. It is especially noteworthy that a detection limit as low as 3.88 pmol/L is achieved. These results raise the possibility of a facile, stable, and low-cost method for sensitive enantioselective recognition and detection.
Funding: Supported by the IITP (Institute of Information & Communications Technology Planning & Evaluation)-ITRC (Information Technology Research Center) grant funded by the Korean government (Ministry of Science and ICT) (IITP-2025-RS-2024-00438056).
Abstract: The increased accessibility of social networking services (SNSs) has facilitated communication and information sharing among users. However, it has also heightened concerns about digital safety, particularly for children and adolescents, who are increasingly exposed to online grooming crimes. Early and accurate identification of grooming conversations is crucial in preventing long-term harm to victims. However, research on grooming detection in South Korea remains limited, as existing models are trained primarily on English text and fail to reflect the unique linguistic features of SNS conversations, leading to inaccurate classifications. To address these issues, this study proposes a novel framework that integrates optical character recognition (OCR) technology with KcELECTRA, a deep learning-based natural language processing (NLP) model that shows excellent performance in processing colloquial Korean. In the proposed framework, the KcELECTRA model is fine-tuned on an extensive dataset, including Korean social media conversations, Korean ethical verification data from AI-Hub, and Korean hate speech data from HuggingFace, to enable more accurate classification of text extracted from social media conversation images. Experimental results show that the proposed framework achieves an accuracy of 0.953, outperforming existing transformer-based models. Furthermore, OCR technology shows high accuracy in extracting text from images, demonstrating that the proposed framework is effective for online grooming detection. The proposed framework is expected to contribute to more accurate detection of grooming text and the prevention of grooming-related crimes.
Funding: The Key Project of Basic Research of Yunnan Province (No. 202101AS070016).
Abstract: In order to meet the requirements of accurate identification of surface defects on copper strip in industrial production, a machine-vision-based surface defect detection model, CSC-YOLO, is proposed. The model uses YOLOv4-tiny as the benchmark network. First, K-means clustering is introduced into the benchmark network to obtain anchor frames that match the self-built dataset. Second, a cross-region fusion module is introduced in the backbone network to solve the difficult target recognition problem by fusing contextual semantic information. Third, the spatial pyramid pooling-efficient channel attention network (SPP-E) module is introduced in the path aggregation network (PANet) to enhance feature extraction. Fourth, to prevent the loss of channel information, a lightweight attention mechanism is introduced to improve the performance of the network. Finally, the performance of the model is improved by adding adjustment factors that correct the loss function for the dimensional characteristics of the surface defects. CSC-YOLO was tested on the self-built dataset of surface defects in copper strip, and the experimental results showed that the mAP of the model reaches 93.58%, a 3.37% improvement over the benchmark network, while the FPS, although lower than that of the benchmark network, reached 104. CSC-YOLO thus meets the real-time requirements of copper strip production. Comparison experiments with Faster RCNN, SSD300, YOLOv3, YOLOv4, Resnet50-YOLOv4, YOLOv5s, YOLOv7, and other algorithms show that the algorithm obtains faster computation speed while maintaining higher detection accuracy.
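K-means anchor clustering as used in the first step is a well-known YOLO technique: box width-height pairs are clustered with 1 − IoU as the distance so the anchors match the dataset's defect shapes. A self-contained sketch (the sample boxes are made up):

```python
import random

def iou_wh(box, cluster):
    """IoU of two (w, h) boxes assumed to share a top-left corner."""
    w, h = min(box[0], cluster[0]), min(box[1], cluster[1])
    inter = w * h
    union = box[0] * box[1] + cluster[0] * cluster[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """Cluster (w, h) pairs with 1 - IoU distance; return sorted anchors."""
    rng = random.Random(seed)
    centers = rng.sample(boxes, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for b in boxes:
            # Assign each box to the center with the highest IoU
            i = max(range(k), key=lambda j: iou_wh(b, centers[j]))
            groups[i].append(b)
        new = [(sum(b[0] for b in g) / len(g), sum(b[1] for b in g) / len(g))
               if g else c for g, c in zip(groups, centers)]
        if new == centers:  # converged
            break
        centers = new
    return sorted(centers)
```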
Funding: The 11th Postgraduate Technology Innovation Project of North University of China (No. 20141142).
Abstract: During tests on transient pressure signals in an explosion field, false triggers caused by field interference can lead to test failure. To improve the stability of the test system, a signal detection and recognition technology is proposed for the transient pressure test system. In the process of signal acquisition, electrical levels are first monitored in real time to find effective abrupt changes and mark them; then the effective data segments are detected; thus the effective signals are finally acquired in turn. The experimental results show that the shock wave signal can be collected effectively and the reliability of the test system can be improved after removal of interferences.
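The acquisition logic above (monitor levels, mark abrupt changes, keep only effective segments) can be sketched as a simple threshold-plus-duration filter; the threshold and minimum-length parameters below are illustrative assumptions:

```python
def detect_segments(signal, threshold, min_len=3):
    """Return (start, end) index pairs where |signal| stays at or above
    threshold for at least min_len consecutive samples; shorter bursts
    are treated as spurious triggers and discarded."""
    segments, start = [], None
    for i, x in enumerate(signal):
        if abs(x) >= threshold:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_len:
                segments.append((start, i))
            start = None
    if start is not None and len(signal) - start >= min_len:
        segments.append((start, len(signal)))
    return segments
```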
Funding: The Cultivation Fund of the Key Scientific and Technical Innovation Project of Higher Education of Ministry of Education (No. 705020).
Abstract: To support guided driving of intelligent vehicles at intersections, a method is presented to detect and recognize traffic lights. First, the stabling siding at intersections is detected by applying the Hough transformation. Then, the colors of traffic lights are detected with color space transformation. Finally, self-associative memory is used to recognize the countdown characters of the traffic lights. Test results at 20 real intersections show that the ratio of correct stabling siding recognition reaches up to 90%, and the ratios of recognition of traffic lights and divided characters are 85% and 97%, respectively. The research proves that the method is efficient for the detection of stabling siding and is robust enough to recognize the characters from images with noise and broken edges.
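The color-space step above can be illustrated with a simple hue-based classifier: convert RGB to HSV and bin the hue into red/yellow/green ranges (the hue boundaries below are common heuristics, not the paper's values):

```python
import colorsys

def light_color(r, g, b):
    """Classify an RGB pixel (0-255 channels) as a traffic-light colour
    by its HSV hue; dark or desaturated pixels are rejected."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    deg = h * 360
    if s < 0.3 or v < 0.3:
        return "none"
    if deg < 20 or deg > 340:
        return "red"
    if 40 <= deg <= 70:
        return "yellow"
    if 90 <= deg <= 160:
        return "green"
    return "none"
```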
Funding: Supported by the National Natural Science Foundation of China (No. 62276204), the Fundamental Research Funds for the Central Universities, China (No. YJSJ24011), the Natural Science Basic Research Program of Shaanxi, China (Nos. 2022JM-340 and 2023-JC-QN-0710), and the China Postdoctoral Science Foundation (Nos. 2020T130494 and 2018M633470).
Abstract: Visible and infrared (RGB-IR) fusion object detection plays an important role in security, disaster relief, etc. In recent years, deep-learning-based RGB-IR fusion detection methods have been developing rapidly, but they still struggle to deal with the complex and changing scenarios captured by drones, mainly for two reasons: (A) RGB-IR fusion detectors are susceptible to inferior inputs that degrade performance and stability. (B) RGB-IR fusion detectors are susceptible to redundant features that reduce accuracy and efficiency. In this paper, an innovative RGB-IR fusion detection framework based on global-local feature optimization, named GLFDet, is proposed to improve the detection performance and efficiency for drone-captured objects. The key components of GLFDet include a Global Feature Optimization (GFO) module, a Local Feature Optimization (LFO) module, and a Channel Separation Fusion (CSF) module. Specifically, GFO calculates the information content of the input image in the frequency domain and optimizes the features holistically. Then, LFO dynamically selects high-value features and filters out low-value features before fusion, which significantly improves the efficiency of fusion. Finally, CSF fuses the RGB and IR features across the corresponding channels, which avoids the rearrangement of the channel relationships and enhances model stability. Extensive experimental results show that the proposed method achieves the best performance on three popular RGB-IR datasets: DroneVehicle, VEDAI, and LLVIP. In addition, GLFDet is more lightweight than other comparable models, making it more appealing for edge devices such as drones. The code is available at https://github.com/laochen330/GLFDet.
Funding: The National Natural Science Foundation of China (Nos. 60672094, 60971098).
Abstract: An object learning and recognition system is implemented for humanoid robots to discover and memorize objects through simple interactions with non-expert users. When an object is presented, the system makes use of motion information over consecutive frames to extract object features and implements machine learning based on the bag-of-visual-words approach. Instead of using a local feature descriptor alone, the proposed system uses co-occurring local features in order to increase feature discriminative power in both the object model learning and inference stages. For different objects with different textures, a hybrid sampling strategy is considered. This hybrid approach minimizes the consumption of computational resources and helps achieve good performance, demonstrated on a set of a dozen different everyday objects.
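The bag-of-visual-words stage above quantizes local descriptors against a learned codebook and represents an object as a word histogram. A minimal sketch with a toy codebook (a real system would cluster SIFT-like descriptors to build the codebook):

```python
def bovw_histogram(descriptors, codebook):
    """Quantize each local descriptor to its nearest visual word
    (squared Euclidean distance) and return an L1-normalized histogram."""
    hist = [0] * len(codebook)
    for d in descriptors:
        i = min(range(len(codebook)),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(d, codebook[j])))
        hist[i] += 1
    total = sum(hist) or 1
    return [c / total for c in hist]
```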
Abstract: Behavior recognition of Hu sheep contributes to their intensive and intelligent farming. Due to the generally high density of Hu sheep farming, severe occlusion occurs among different behaviors and even among sheep performing the same behavior, leading to missed and false detections in existing behavior recognition methods. A high-low frequency aggregated attention and negative sample comprehensive score loss and comprehensive score soft non-maximum suppression YOLO (HLNC-YOLO) is proposed for identifying the behavior of Hu sheep, addressing the missed and erroneous detections caused by occlusion between Hu sheep in intensive farming. First, images of four typical behaviors (standing, lying, eating, and drinking) were collected from a sheep farm to construct the Hu sheep behavior dataset (HSBD). Next, to address the occlusion issues, the C2F-HLAtt module, which combines high-low frequency aggregation attention, was integrated into the YOLO v8 backbone during training to perceive occluded objects, and an auxiliary reversible branch was introduced to retain more effective features. The comprehensive score regression loss (CSLoss) is used to reduce the scores of suboptimal boxes and enhance the comprehensive scores of occluded object boxes. Finally, the soft comprehensive score non-maximal suppression (Soft-CS-NMS) algorithm filters prediction boxes during inference. Tested on the HSBD, HLNC-YOLO achieved a mean average precision (mAP@50) of 87.8% with a memory footprint of 17.4 MB, an improvement of 7.1, 2.2, 4.6, and 11 percentage points over YOLO v8, YOLO v9, YOLO v10, and Faster R-CNN, respectively. The research indicates that HLNC-YOLO accurately identifies the behavior of Hu sheep in intensive farming and possesses generalization capabilities, providing technical support for smart farming.
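Soft-NMS, on which Soft-CS-NMS builds, decays the scores of overlapping boxes instead of discarding them outright, so an occluded sheep behind a higher-scoring one is not suppressed entirely. A sketch of the standard Gaussian variant (not the paper's comprehensive-score version):

```python
import math

def iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: rather than deleting boxes that overlap the
    current best detection, decay their scores by exp(-iou^2 / sigma)."""
    dets = sorted(zip(boxes, scores), key=lambda d: -d[1])
    keep = []
    while dets:
        best = dets.pop(0)
        keep.append(best)
        dets = [(b, s * math.exp(-iou(best[0], b) ** 2 / sigma)) for b, s in dets]
        dets = [d for d in dets if d[1] > score_thresh]
        dets.sort(key=lambda d: -d[1])
    return keep
```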
Abstract: Online examinations have become a dominant assessment mode, increasing concerns over academic integrity. To address the critical challenge of detecting cheating behaviours, this study proposes a hybrid deep learning approach that combines visual detection and temporal behaviour classification. The methodology utilises object detection models, namely You Only Look Once (YOLOv12), Faster Region-based Convolutional Neural Network (RCNN), and Single Shot Detector (SSD) MobileNet, integrated with classification models such as Convolutional Neural Networks (CNN), Bidirectional Gated Recurrent Unit (Bi-GRU), and CNN-LSTM (Long Short-Term Memory). Two distinct datasets were used: the Online Exam Proctoring (EOP) dataset from Michigan State University and the School of Computer Science, Duy Tan University (SCS-DTU) dataset collected in a controlled classroom setting. A diverse set of cheating behaviours, including book usage, unauthorised interaction, internet access, and mobile phone use, was categorised. Comprehensive experiments evaluated the models based on accuracy, precision, recall, training time, inference speed, and memory usage. We evaluate nine detector-classifier pairings under a unified budget and score them via a calibrated harmonic mean of detection and classification accuracies, enabling deployment-oriented selection under latency and memory constraints. Macro-Precision/Recall/F1 and Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) are reported for the top configurations, revealing consistent advantages of object-centric pipelines for fine-grained cheating cues. The highest overall score is achieved by YOLOv12+CNN (97.15% accuracy), while SSD-MobileNet+CNN provides the best speed-efficiency trade-off for edge devices. This research provides valuable insights into selecting and deploying appropriate deep learning models for maintaining exam integrity under varying resource constraints.
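The harmonic-mean scoring above combines detection and classification accuracy so that a pairing must do well at both stages to score highly. Ignoring the calibration step (whose details are not given in the abstract), the core score is:

```python
def harmonic_score(det_acc, cls_acc):
    """Harmonic mean of detection and classification accuracy.
    Unlike the arithmetic mean, it strongly penalizes an imbalance
    between the two stages (a weak stage drags the score down)."""
    if det_acc == 0 or cls_acc == 0:
        return 0.0
    return 2 * det_acc * cls_acc / (det_acc + cls_acc)
```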
Funding: Supported by the Major Program (JD) of Hubei Province under Grant No. 2023BAA009, the National Natural Science Foundation of China (Grant No. 22105162), the Natural Science Foundation of Hubei Province (Grant No. 2023AFB623), and the Original Exploration Seed Fund of Hubei University.
Abstract: Next-generation fire safety systems demand precise detection and motion recognition of flames. In-sensor computing, which integrates sensing, memory, and processing capabilities, has emerged as a key technology in flame detection. However, the implementation of hardware-level functional demonstrations based on artificial vision systems in the solar-blind ultraviolet (UV) band (200-280 nm) is hindered by weak detection capability. Here, we propose Ga_(2)O_(3)/In_(2)Se_(3) heterojunctions for a ferroelectric (Fe) optoelectronic sensor (OES) array (5×5 pixels), which is capable of ultraweak UV light detection with ultrahigh detectivity through ferroelectric regulation and features configurable multimode functionality. The Fe-OES array can directly sense different flame motions and simulate the non-spiking gradient neurons of the insect visual system. Moreover, the flame signal can be effectively amplified in combination with leaky integrate-and-fire neuron hardware. Using this Fe-OES system and neuromorphic hardware, we successfully demonstrate three flame processing tasks: efficient flame detection across all time periods with terminal and cloud-based alarms; flame motion recognition with a lightweight convolutional neural network achieving 96.47% accuracy; and flame light recognition with 90.51% accuracy by means of a photosensitive artificial neural system. This work provides effective tools and approaches for addressing a variety of complex flame detection tasks.
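The leaky integrate-and-fire neuron mentioned above can be sketched in a few lines of discrete-time Python; the time constant and threshold are illustrative, and this software model ignores the hardware implementation:

```python
def lif_neuron(inputs, tau=10.0, threshold=1.0, v_reset=0.0):
    """Discrete-time leaky integrate-and-fire neuron (dt = 1):
    the membrane potential leaks toward rest, integrates the input
    current, and emits a spike (then resets) on crossing threshold."""
    v, spikes = v_reset, []
    for i in inputs:
        v += (-v + i) / tau  # leaky integration step
        if v >= threshold:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes
```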
Funding: Supported in part by the Chongqing Research Program of Basic Research and Frontier Technology under Grant CSTB2025NSCQ-GPX1309.
Abstract: Small object detection has been a focus of attention since the emergence of deep learning-based object detection. Although classical object detection frameworks have made significant contributions to the development of object detection, many issues remain to be resolved in detecting small objects due to the inherent complexity and diversity of real-world visual scenes. In particular, the YOLO (You Only Look Once) series of detection models, renowned for their real-time performance, have undergone numerous adaptations aimed at improving the detection of small targets. In this survey, we summarize the state-of-the-art YOLO-based small object detection methods. This review presents a systematic categorization of YOLO-based approaches for small-object detection, organized into four methodological avenues, namely attention-based feature enhancement, detection-head optimization, loss-function design, and multi-scale feature fusion strategies. We then examine the principal challenges addressed by each category. Finally, we analyze the performance of these methods on public benchmarks and, by comparing current approaches, identify limitations and outline directions for future research.
Funding: Supported by the Deanship of Graduate Studies and Scientific Research at Jouf University under grant No. (DGSSR-2025-02-01276).
Abstract: The rapid proliferation of Internet of Things (IoT) devices in critical healthcare infrastructure has introduced significant security and privacy challenges that demand innovative, distributed architectural solutions. This paper proposes FE-ACS (Fog-Edge Adaptive Cybersecurity System), a novel hierarchical security framework that intelligently distributes AI-powered anomaly detection algorithms across edge, fog, and cloud layers to optimize security efficacy, latency, and privacy. Our comprehensive evaluation demonstrates that FE-ACS achieves superior detection performance with an AUC-ROC of 0.985 and an F1-score of 0.923, while maintaining significantly lower end-to-end latency (18.7 ms) compared to cloud-centric (152.3 ms) and fog-only (34.5 ms) architectures. The system exhibits exceptional scalability, supporting up to 38,000 devices with logarithmic performance degradation, a 67× improvement over conventional cloud-based approaches. By incorporating differential privacy mechanisms with balanced privacy-utility tradeoffs (ε=1.0–1.5), FE-ACS maintains 90%–93% detection accuracy while ensuring strong privacy guarantees for sensitive healthcare data. Computational efficiency analysis reveals that our architecture achieves a detection rate of 12,400 events per second with only 12.3 mJ energy consumption per inference. In healthcare risk assessment, FE-ACS demonstrates robust operational viability with low patient safety risk (14.7%) and high system reliability (94.0%). The proposed framework represents a significant advancement in distributed security architectures, offering a scalable, privacy-preserving, and real-time solution for protecting healthcare IoT ecosystems against evolving cyber threats.
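The differential privacy mechanism above (ε = 1.0–1.5) is not specified in detail; a generic sketch of the standard Laplace mechanism, not FE-ACS's exact scheme, using the difference-of-exponentials sampler for Laplace noise:

```python
import random

def laplace_noise(scale, rng):
    """Laplace(0, scale) sample: the difference of two i.i.d.
    exponentials with mean `scale` is Laplace-distributed."""
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def privatize(value, sensitivity, epsilon, rng):
    """epsilon-differentially private release of a numeric query value
    via the Laplace mechanism (noise scale = sensitivity / epsilon)."""
    return value + laplace_noise(sensitivity / epsilon, rng)
```

Smaller ε means a larger noise scale and stronger privacy, which is the privacy-utility tradeoff the abstract quantifies.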
Abstract: Breast cancer screening programs rely heavily on mammography for early detection; however, diagnostic performance is strongly affected by inter-reader variability, breast density, and the limitations of conventional computer-aided detection systems. Recent advances in deep learning have enabled more robust and scalable solutions for large-scale screening, yet a systematic comparison of modern object detection architectures on nationally representative datasets remains limited. This study presents a comprehensive quantitative comparison of prominent deep learning-based object detection architectures for Artificial Intelligence-assisted mammography analysis using the MammosighTR dataset, developed within the Turkish National Breast Cancer Screening Program. The dataset comprises 12,740 patient cases collected between 2016 and 2022, annotated with BI-RADS categories, breast density levels, and lesion localization labels. A total of 31 models were evaluated, including One-Stage, Two-Stage, and Transformer-based architectures, under a unified experimental framework at both patient and breast levels. The results demonstrate that Two-Stage architectures consistently outperform One-Stage models, achieving approximately 2%–4% higher Macro F1-Scores and more balanced precision–recall trade-offs, with Double-Head R-CNN and Dynamic R-CNN yielding the highest overall performance (Macro F1≈0.84–0.86). This advantage is primarily attributed to the region proposal mechanism and improved class balance inherent to Two-Stage designs. One-Stage detectors exhibited higher sensitivity and faster inference, reaching Recall values above 0.88, but experienced minor reductions in Precision and overall accuracy (≈1%–2%) compared with Two-Stage models. Among Transformer-based architectures, the Deformable DEtection TRansformer demonstrated strong robustness and consistency across datasets, achieving Macro F1-Scores comparable to CNN-based detectors (≈0.83–0.85) while exhibiting minimal performance degradation under distributional shifts. Breast density-based analysis revealed increased misclassification rates in medium-density categories (types B and C), whereas Transformer-based architectures maintained more stable performance in high-density type D tissue. These findings quantitatively confirm that both architectural design and tissue characteristics play a decisive role in diagnostic accuracy. Overall, the study provides a reproducible benchmark and highlights the potential of hybrid approaches that combine the accuracy of Two-Stage detectors with the contextual modeling capability of Transformer architectures for clinically reliable breast cancer screening systems.
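The Macro F1-Score used throughout the comparison is the unweighted mean of per-class F1 values, which is why it rewards balanced performance across BI-RADS classes rather than raw accuracy. A small self-contained implementation:

```python
def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of per-class F1 scores: every class contributes
    equally regardless of its frequency in the data."""
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```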