Healthcare networks are transitioning from manual records to electronic health records, but this shift introduces vulnerabilities such as secure communication issues, privacy concerns, and the presence of malicious nodes. Existing machine and deep learning-based anomaly detection methods often rely on centralized training, leading to reduced accuracy and potential privacy breaches. Therefore, this study proposes a Blockchain-based Federated Learning architecture for Malicious Node Detection (BFL-MND) model. It trains models locally within healthcare clusters, sharing only model updates instead of patient data, preserving privacy and improving accuracy. Cloud and edge computing enhance the model's scalability, while blockchain ensures secure, tamper-proof access to health data. Using the PhysioNet dataset, the proposed model achieves an accuracy of 0.95, F1 score of 0.93, precision of 0.94, and recall of 0.96, outperforming baseline models like random forest (0.88), adaptive boosting (0.90), logistic regression (0.86), perceptron (0.83), and deep neural networks (0.92).
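The BFL-MND abstract above describes clusters sharing model updates rather than patient records. A minimal federated-averaging sketch of that idea (the cluster names, sample counts, and weight vectors below are illustrative, not the paper's exact aggregation rule):

```python
# Minimal federated-averaging sketch: each healthcare cluster trains
# locally and shares only its model update (a weight vector), never raw
# patient records. The aggregator averages updates weighted by each
# cluster's local sample count.

def fed_avg(updates):
    """Weighted average of local model updates.

    updates: list of (num_samples, weight_vector) pairs.
    Returns the aggregated global weight vector.
    """
    total = sum(n for n, _ in updates)
    dim = len(updates[0][1])
    global_w = [0.0] * dim
    for n, w in updates:
        for i in range(dim):
            global_w[i] += (n / total) * w[i]
    return global_w

# Two hypothetical healthcare clusters with different data volumes.
clusters = [
    (100, [0.2, 0.4]),  # cluster A: 100 local samples
    (300, [0.4, 0.8]),  # cluster B: 300 local samples
]
print(fed_avg(clusters))
```

Larger clusters pull the global model more strongly, which is the standard FedAvg weighting; the blockchain layer described in the abstract would sit around this exchange to make the update log tamper-proof.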
Effective resource management in the Internet of Things and fog computing is essential for efficient and scalable networks. However, existing methods often fail in dynamic and high-demand environments, leading to resource bottlenecks and increased energy consumption. This study addresses these limitations by proposing the Quantum Inspired Adaptive Resource Management (QIARM) model, which introduces novel algorithms inspired by quantum principles for enhanced resource allocation. QIARM employs a quantum superposition-inspired technique for multi-state resource representation and an adaptive learning component to dynamically adjust resources in real time. In addition, an energy-aware scheduling module minimizes power consumption by selecting optimal configurations based on energy metrics. The simulation was carried out in a 360-minute environment with eight distinct scenarios. The proposed quantum-inspired resource management framework achieves up to 98% task-offload success and reduces energy consumption by 20%, addressing critical challenges of scalability and efficiency in dynamic fog computing environments.
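The energy-aware scheduling module described above selects configurations based on energy metrics. A toy sketch of that selection rule, under the assumption that a configuration is feasible when its capacity covers the task demand (names, capacities, and wattages are made up):

```python
# Energy-aware scheduling sketch: among configurations able to serve a
# task's resource demand, pick the one with the lowest power draw.

def schedule(task_demand, configs):
    """configs: list of (name, capacity, energy_watts) tuples.
    Returns the feasible configuration with minimal energy, or None."""
    feasible = [c for c in configs if c[1] >= task_demand]
    return min(feasible, key=lambda c: c[2]) if feasible else None

configs = [("edge-small", 2, 10), ("edge-large", 8, 35), ("cloud", 64, 120)]
print(schedule(4, configs))    # -> ('edge-large', 8, 35)
print(schedule(100, configs))  # -> None (no single node can host the task)
```

QIARM's multi-state representation would keep several such candidates "in superposition" and collapse to one per decision epoch; this sketch only shows the final energy-minimizing choice.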
The rapid evolution of wireless technologies and the advent of 6G networks present new challenges and opportunities for Internet of Things (IoT) applications, particularly in terms of ultra-reliable, secure, and energy-efficient communication. This study explores the integration of Reconfigurable Intelligent Surfaces (RIS) into IoT networks to enhance communication performance. Unlike traditional passive reflector-based approaches, RIS is leveraged as an active optimization tool to improve both backscatter and direct communication modes, addressing critical IoT challenges such as energy efficiency, limited communication range, and double-fading effects in backscatter communication. We propose a novel computational framework that combines RIS functionality with Physical Layer Security (PLS) mechanisms, optimized through the Deep Deterministic Policy Gradient (DDPG) algorithm. This framework adaptively adjusts RIS configurations and transmitter beamforming to mitigate key challenges, including imperfect channel state information (CSI) and hardware limitations such as quantized RIS phase shifts. By optimizing both RIS settings and beamforming in real time, our approach outperforms traditional methods by significantly increasing secrecy rates, improving spectral efficiency, and enhancing energy efficiency. Notably, the framework adapts more effectively to the dynamic nature of wireless channels than conventional optimization techniques, providing scalable solutions for large-scale RIS deployments. Our results demonstrate substantial improvements in communication performance, setting a new benchmark for secure, efficient, and scalable 6G communication. This work offers valuable insights for the future of IoT networks, with a focus on computational optimization, high spectral efficiency, and energy-aware operations.
Face recognition has emerged as one of the most prominent applications of image analysis and understanding, gaining considerable attention in recent years. This growing interest is driven by two key factors: its extensive applications in law enforcement and the commercial domain, and the rapid advancement of practical technologies. Despite significant advancements, modern recognition algorithms still struggle in real-world conditions such as varying lighting, occlusion, and diverse facial poses. In such scenarios, human perception remains well above the capabilities of present technology. Using a systematic mapping study, this paper presents an in-depth review of face detection and face recognition algorithms, surveying advancements made between 2015 and 2024. We analyze key methodologies, highlighting their strengths and limitations in the application context. Additionally, we examine various datasets used for face detection and recognition, focusing on task-specific applications, size, diversity, and complexity. By analyzing these algorithms and datasets, this survey serves as a valuable resource for researchers, identifying research gaps in the field of face detection and recognition and outlining potential directions for future research.
The healthcare sector involves many steps to ensure efficient care for patients, such as appointment scheduling, consultation plans, online follow-up, and more. However, existing healthcare mechanisms are unable to facilitate a large number of patients, as these systems are centralized and hence vulnerable to various issues, including single points of failure, performance bottlenecks, and substantial monetary costs. Furthermore, these mechanisms cannot efficiently safeguard data against unauthorized access. To address these issues, this study proposes a blockchain-based authentication mechanism that authenticates all healthcare stakeholders based on their credentials. Furthermore, we utilize the capabilities of the InterPlanetary File System (IPFS) to store Electronic Health Records (EHRs) in a distributed way. The IPFS platform addresses not only the high cost of storing data on the blockchain but also the single point of failure in the traditional centralized data storage model. The simulation results demonstrate that our model outperforms the benchmark schemes and provides an efficient mechanism for managing healthcare sector operations. The results show that it takes approximately 3.5 s for the smart contract to authenticate a node and provide it with the decryption key, which is ultimately used to access the data. The simulation results also show that our proposed model outperforms existing solutions in terms of execution time and scalability: our smart contract executes around 9000 transactions in just 6.5 s, while benchmark schemes require approximately 7 s for the same number of transactions.
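IPFS addresses content by a hash of its bytes, which is what makes off-chain EHR storage tamper-evident. A simplified stand-in for that mechanism (this is not IPFS's real multihash/CID format, and the record fields are made up):

```python
import hashlib
import json

# Content-addressing sketch: the record's identifier is a hash of its
# serialized bytes, so any tampering with the stored record changes the
# identifier and is immediately detectable. Only this small identifier
# would need to live on-chain; the bulk data lives off-chain.

def store(ehr_store, record):
    """Serialize a record deterministically, store it under its hash."""
    data = json.dumps(record, sort_keys=True).encode()
    cid = hashlib.sha256(data).hexdigest()
    ehr_store[cid] = data
    return cid

db = {}
cid = store(db, {"patient": "anon-001", "bp": "120/80"})
print(cid[:16], "records stored:", len(db))
```

Storing the same record twice yields the same identifier, so deduplication comes for free; a mismatched hash on retrieval signals tampering.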
The term sustainable or green supply chain refers to the concept of incorporating sustainable environmental procedures into the traditional supply chain. Green supply chain management offers a chance to revise procedures, materials, and operational ideas. Handling the fuzziness of assessment data and the psychological states of experts in the decision-making procedure are two important issues. The main contribution of this analysis is to derive the theory of the Archimedean Bonferroni mean operator for complex q-rung orthopair fuzzy (CQROF) information, called the CQROF Archimedean Bonferroni mean and CQROF weighted Archimedean Bonferroni mean operators, which are valuable, dominant, and classical types of aggregation operators used for examining the interrelationship among a finite number of attributes in modern data fusion theory. Useful properties of the initiated theories are also examined, along with some special cases. Additionally, an extended TODIM tool using prospect theory based on CQROF information is developed, which plays an essential role in the environment of fuzzy set theory. Finally, a real-life green supply chain management problem is evaluated using the initiated CQROF operators, fully illustrating the feasibility and efficiency of the proposed work through a comparison with existing and prevailing theories.
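The abstract does not reproduce the operator's definition. For orientation, the classical (crisp) Bonferroni mean on which the CQROF operators build is:

```latex
% Classical Bonferroni mean of nonnegative arguments a_1,...,a_n, with p, q >= 0:
BM^{p,q}(a_1,\ldots,a_n) =
  \left( \frac{1}{n(n-1)}
         \sum_{\substack{i,j=1 \\ i \neq j}}^{n} a_i^{\,p}\, a_j^{\,q}
  \right)^{\frac{1}{p+q}}
```

The cross terms a_i^p a_j^q are what let the operator capture interrelationships between pairs of attributes; the CQROF variants in the paper replace the products and powers with operations generated by Archimedean t-norms and t-conorms over complex q-rung orthopair membership grades.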
The controller is a main component in the Software-Defined Networking (SDN) framework, which plays a significant role in enabling programmability and orchestration for 5G and next-generation networks. In SDN, frequent communication occurs between network switches and the controller, which manages and directs traffic flows. If the controller is not strategically placed within the network, this communication can experience increased delays, negatively affecting network performance. Specifically, an improperly placed controller can lead to higher end-to-end (E2E) delay, as switches must traverse more hops or encounter greater propagation delays when communicating with the controller. This paper introduces a novel approach using Deep Q-Learning (DQL) to dynamically place controllers in Software-Defined Internet of Things (SD-IoT) environments, with the goal of minimizing E2E delay between switches and controllers. E2E delay, a crucial metric for network performance, is influenced by two key factors: hop count, which measures the number of network nodes data must traverse, and propagation delay, which accounts for the physical distance between nodes. Our approach models the controller placement problem as a Markov Decision Process (MDP). In this model, the network configuration at any given time is represented as a "state," while "actions" correspond to potential decisions regarding the placement of controllers or the reassignment of switches to controllers. Using a Deep Q-Network (DQN) to approximate the Q-function, the system learns the optimal controller placement by maximizing the cumulative reward, which is defined as the negative of the E2E delay. Essentially, the lower the delay, the higher the reward the system receives, enabling it to continuously improve its controller placement strategy. The experimental results show that our DQL-based method significantly reduces E2E delay when compared to traditional benchmark placement strategies. By dynamically learning from the network's real-time conditions, the proposed method ensures that controller placement remains efficient and responsive, reducing communication delays and enhancing overall network performance.
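The MDP formulation above can be sketched with a one-step tabular Q-learning update using reward = negative E2E delay. The paper uses a DQN approximator instead of a table, and the states ("cfg0", "cfg1") and placement actions below are hypothetical placeholders:

```python
# Toy tabular version of the paper's reward shaping: reward is the
# negative E2E delay, so lower delay means higher cumulative reward.

def q_update(Q, s, a, reward, s_next, alpha=0.5, gamma=0.9):
    """One Q-learning update: Q(s,a) += alpha*(r + gamma*max Q(s',.) - Q(s,a))."""
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    Q[s][a] += alpha * (reward + gamma * best_next - Q[s][a])
    return Q[s][a]

Q = {"cfg0": {"place_at_n1": 0.0, "place_at_n2": 0.0},
     "cfg1": {"place_at_n1": 1.0}}
e2e_delay_ms = 20.0  # measured delay after taking the action
v = q_update(Q, "cfg0", "place_at_n2", -e2e_delay_ms, "cfg1")
print(v)
```

Actions that lead to lower measured delay receive less negative rewards, so their Q-values drift upward and the greedy placement policy gradually favors them.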
Cloud computing has emerged as a vital platform for processing resource-intensive workloads in smart manufacturing environments, enabling scalable and flexible access to remote data centers over the internet. In these environments, Virtual Machines (VMs) are employed to manage workloads, with their optimal placement on Physical Machines (PMs) being crucial for maximizing resource utilization. However, achieving high resource utilization in cloud data centers remains a challenge due to multiple conflicting objectives, particularly in scenarios involving inter-VM communication dependencies, which are common in smart manufacturing applications. This manuscript presents an AI-driven approach utilizing a modified Multi-Objective Particle Swarm Optimization (MOPSO) algorithm, enhanced with improved mutation and crossover operators, to efficiently place VMs. This approach aims to minimize the impact on networking devices during inter-VM communication while enhancing resource utilization. The proposed algorithm is benchmarked against other multi-objective algorithms, such as the Multi-Objective Evolutionary Algorithm with Decomposition (MOEA/D), demonstrating its superiority in optimizing resource allocation in cloud-based environments for smart manufacturing.
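Multi-objective optimizers such as MOPSO and MOEA/D compare VM-placement candidates by Pareto dominance across the conflicting objectives. A minimal dominance check, assuming all objectives are minimized (the objective vectors below, e.g. network impact and wasted capacity, are illustrative):

```python
# Pareto-dominance check: solution a dominates b if a is no worse on
# every objective and strictly better on at least one. Objectives are
# assumed to be minimized.

def dominates(a, b):
    """a, b: tuples of objective values (lower is better)."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

print(dominates((1.0, 2.0), (1.5, 2.0)))  # True: better on one, equal on the other
print(dominates((1.0, 3.0), (1.5, 2.0)))  # False: a trade-off, neither dominates
```

Candidates that no other candidate dominates form the Pareto front, which is what the modified MOPSO maintains and refines over iterations.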
The Internet of Things (IoT) is a smart networking infrastructure of physical devices, i.e., things, that are embedded with sensors, actuators, software, and other technologies to connect and share data with the respective server module. Although IoT is a cornerstone of different application domains, device authenticity, i.e., of server(s) and ordinary devices, is the most crucial issue and must be resolved on a priority basis. Therefore, various field-proven methodologies have been presented to streamline the verification process of communicating devices; however, to our knowledge, location-aware authentication has not been reported, although it is a crucial metric, especially in scenarios where devices are mobile. This paper presents a lightweight and location-aware device-to-server authentication technique where a device's membership with the nearest server is subject to its location information along with other measures. Initially, the Media Access Control (MAC) address and the Advanced Encryption Standard (AES), along with a 128-bit secret shared key λ_(i), are utilized by the Trusted Authority (TA) to generate MaskIDs, which are used instead of the original ID for every device, i.e., server and member, and are shared in the offline phase. Secondly, the TA shares a list of authentic devices, i.e., server S_(j) and members C_(i), with every device in the IoT for the onward verification process, which must be executed before the actual communication process begins. Additionally, every device should be located within the coverage area of a server, and this location information is used in the authentication process. A thorough analytical evaluation was carried out to check the susceptibility of the proposed and existing authentication approaches against well-known intruder attacks, i.e., man-in-the-middle, masquerading, and device and server impersonation, especially in the IoT domain. Moreover, the proposed authentication approach and existing state-of-the-art approaches were simulated in a real IoT environment to verify their performance in terms of various evaluation metrics, i.e., processing, communication, and storage overheads. The results verify the superiority of the proposed scheme over existing state-of-the-art approaches, particularly in terms of communication, storage, and processing costs.
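The MaskID step above maps a device's MAC address to a pseudonym under the 128-bit shared secret λ_i. The paper specifies AES for this; Python's standard library has no AES, so the sketch below substitutes HMAC-SHA256 purely to illustrate the keyed, deterministic identifier-to-pseudonym mapping. The key and MAC address are made-up placeholders:

```python
import hashlib
import hmac

# MaskID sketch (HMAC-SHA256 standing in for the paper's AES): the same
# MAC address and key always yield the same pseudonym, so the TA can
# precompute and distribute MaskIDs offline, while an eavesdropper
# without the key cannot link a MaskID back to the real MAC address.

def mask_id(mac_addr, shared_key):
    tag = hmac.new(shared_key, mac_addr.encode(), hashlib.sha256)
    return tag.hexdigest()[:32]  # truncate to a 128-bit pseudonym

key = bytes(16)  # placeholder; the TA would distribute a real 128-bit key
print(mask_id("aa:bb:cc:dd:ee:ff", key))
```

Determinism is what makes the offline-phase distribution in the abstract workable: both the TA and the device derive identical MaskIDs without further communication.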
Software project outcomes heavily depend on natural language requirements, which often cause diverse interpretations and issues like ambiguities and incomplete or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods. However, existing studies do not generalize efficiently when extended to other datasets. Therefore, this paper proposes a hybrid approach combining multiple techniques to explore their effectiveness on bug identification problems. The methods involve feature selection, used to reduce the dimensionality and redundancy of features and select only the relevant ones; transfer learning, used to train and test the model on different datasets to analyze how much of the learning transfers to other datasets; and an ensemble method, utilized to explore the increase in performance from combining multiple classifiers in a model. Four National Aeronautics and Space Administration (NASA) and four Promise datasets are used in the study, showing an increase in the model's performance through better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers were combined. The study reveals that an amalgam of techniques such as feature selection, transfer learning, and ensemble methods helps optimize software bug prediction models and yields a high-performing, useful end model.
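A minimal sketch of the ensemble idea above: combine several classifiers' bug predictions by hard majority vote. The paper's pipeline (feature selection, transfer learning, AUC-ROC evaluation) is richer; the model outputs below are made up:

```python
from collections import Counter

# Hard-voting ensemble sketch: each module gets the label that most
# base classifiers agree on.

def majority_vote(predictions_per_model):
    """predictions_per_model: list of equal-length label lists,
    one list per base classifier. Returns per-sample majority labels."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*predictions_per_model)]

m1 = ["bug", "clean", "bug"]    # hypothetical classifier 1
m2 = ["bug", "bug", "clean"]    # hypothetical classifier 2
m3 = ["clean", "clean", "clean"]  # hypothetical classifier 3
print(majority_vote([m1, m2, m3]))  # -> ['bug', 'clean', 'clean']
```

Voting only helps when the base classifiers make partly independent errors, which is why the paper pairs the ensemble with feature selection and cross-dataset transfer rather than relying on it alone.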
Over the past few years, the application and usage of Machine Learning (ML) techniques have increased exponentially due to the continuously increasing size of data and computing capacity. Despite the popularity of ML techniques, only a few research studies have focused on the application of ML, especially supervised learning techniques, to Requirement Engineering (RE) activities and the problems that occur in them. The authors present a systematic mapping of past work to investigate studies that focused on the application of supervised learning techniques in RE activities between 2002 and 2023, examining the research trends, main RE activities, ML algorithms, and data sources studied during this period. Forty-five research studies were selected based on our exclusion and inclusion criteria. The results show that the scientific community used 57 algorithms. Among those, researchers most often used the following five ML algorithms in RE activities: Decision Tree, Support Vector Machine, Naïve Bayes, K-nearest neighbour Classifier, and Random Forest. The results show that researchers used these algorithms in eight major RE activities: requirements analysis, failure prediction, effort estimation, quality, traceability, business rules identification, content classification, and detection of problems in requirements written in natural language. The selected research studies used 32 private and 41 public data sources. The most popular data sources detected in the selected studies are the Metric Data Programme from NASA, Predictor Models in Software Engineering, and the iTrust Electronic Health Care System.
This study investigates the application of Learnable Memory Vision Transformers (LMViT) for detecting metal surface flaws, comparing their performance with traditional CNNs, specifically ResNet18 and ResNet50, as well as other transformer-based models including Token to Token ViT, ViT without memory, and Parallel ViT. Leveraging a widely-used steel surface defect dataset, the research applies data augmentation and t-distributed stochastic neighbor embedding (t-SNE) to enhance feature extraction and understanding. These techniques mitigated overfitting, stabilized training, and improved generalization capabilities. The LMViT model achieved a test accuracy of 97.22%, significantly outperforming ResNet18 (88.89%) and ResNet50 (88.90%), as well as the Token to Token ViT (88.46%), ViT without memory (87.18%), and Parallel ViT (91.03%). Furthermore, LMViT exhibited superior training and validation performance, attaining a validation accuracy of 98.2% compared to 91.0% for ResNet18, 96.0% for ResNet50, and 89.12%, 87.51%, and 91.21% for Token to Token ViT, ViT without memory, and Parallel ViT, respectively. The findings highlight LMViT's ability to capture long-range dependencies in images, an area where CNNs struggle due to their reliance on local receptive fields and hierarchical feature extraction. The additional transformer-based models also demonstrate improved performance over CNNs in capturing complex features, with LMViT excelling particularly at detecting subtle and complex defects, which is critical for maintaining product quality and operational efficiency in industrial applications. For instance, the LMViT model successfully identified fine scratches and minor surface irregularities that CNNs often misclassify. This study not only demonstrates LMViT's potential for real-world defect detection but also underscores the promise of other transformer-based architectures like Token to Token ViT, ViT without memory, and Parallel ViT in industrial scenarios where complex spatial relationships are key. Future research may focus on enhancing LMViT's computational efficiency for deployment in real-time quality control systems.
In radiology, magnetic resonance imaging (MRI) is an essential diagnostic tool that provides detailed images of a patient's anatomical and physiological structures. MRI is particularly effective for detecting soft tissue anomalies. Traditionally, radiologists manually interpret these images, which can be labor-intensive and time-consuming due to the vast amount of data. To address this challenge, machine learning and deep learning approaches can be utilized to improve the accuracy and efficiency of anomaly detection in MRI scans. This manuscript presents the use of the Deep AlexNet50 model for MRI classification with discriminative learning methods. There are three stages of learning: in the first stage, the whole dataset is used to learn the features; in the second stage, some layers of AlexNet50 are frozen and the model is trained with an augmented dataset; and in the third stage, the full AlexNet50 is trained with the augmented dataset. The method used three publicly available MRI classification datasets for analysis: the Harvard whole brain atlas (HWBA-dataset), the School of Biomedical Engineering of Southern Medical University (SMU-dataset), and The National Institute of Neuroscience and Hospitals brain MRI dataset (NINS-dataset). Various hyperparameter optimizers like Adam, stochastic gradient descent (SGD), root mean square propagation (RMSprop), Adamax, and AdamW have been used to compare the performance of the learning process. The HWBA-dataset yields the best classification performance. We evaluated the performance of the proposed classification model using several quantitative metrics, achieving an average accuracy of 98%.
Improving early diagnosis of autism spectrum disorder (ASD) in children increasingly relies on predictive models that are reliable and accessible to non-experts. This study aims to develop such models using Python-based tools to improve ASD diagnosis in clinical settings. We performed exploratory data analysis to ensure data quality and identify key patterns in pediatric ASD data. We selected the categorical boosting (CatBoost) algorithm to effectively handle the large number of categorical variables. We used the PyCaret automated machine learning (AutoML) tool to make the models user-friendly for clinicians without extensive machine learning expertise. In addition, we applied Shapley additive explanations (SHAP), an explainable artificial intelligence (XAI) technique, to improve the interpretability of the models. Models developed using CatBoost and other AI algorithms showed high accuracy in diagnosing ASD in children. SHAP provided clear insights into the influence of each variable on diagnostic outcomes, making model decisions transparent and understandable to healthcare professionals. By integrating robust machine learning methods with user-friendly tools such as PyCaret and leveraging XAI techniques such as SHAP, this study contributes to the development of reliable, interpretable, and accessible diagnostic tools for ASD. These advances hold great promise for supporting informed decision-making in clinical settings, ultimately improving early identification and intervention strategies for ASD in the pediatric population. However, the study is limited by the dataset's demographic imbalance and the lack of external clinical validation, which should be addressed in future research.
The classification of respiratory sounds is crucial in diagnosing and monitoring respiratory diseases. However, auscultation is highly subjective, making it challenging to analyze respiratory sounds accurately. Although deep learning has been increasingly applied to this task, most existing approaches have primarily relied on supervised learning. Since supervised learning requires large amounts of labeled data, recent studies have explored self-supervised and semi-supervised methods to overcome this limitation. However, these approaches have largely assumed a closed-set setting, where the classes present in the unlabeled data are considered identical to those in the labeled data. In contrast, this study explores an open-set semi-supervised learning setting, where the unlabeled data may contain additional, unknown classes. To address this challenge, a distance-based prototype network is employed to classify respiratory sounds in an open-set setting. In the first stage, the prototype network is trained using labeled and unlabeled data to derive prototype representations of known classes. In the second stage, distances between unlabeled data and known class prototypes are computed, and samples exceeding an adaptive threshold are identified as unknown. A new prototype is then calculated for this unknown class. In the final stage, semi-supervised learning is employed to classify labeled and unlabeled data into known and unknown classes. Compared to conventional closed-set semi-supervised learning approaches, the proposed method achieved an average classification accuracy improvement of 2%–5%. Additionally, in cases of data scarcity, utilizing unlabeled data further improved classification performance by 6%–8%. The findings of this study are expected to significantly enhance respiratory sound classification performance in practical clinical settings.
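The second stage above hinges on distance-to-prototype with a threshold that flags unknowns. A fixed-threshold toy sketch of that decision rule (the paper's prototypes are learned and its threshold is adaptive; the class names, prototype vectors, and threshold here are illustrative):

```python
import math

# Open-set decision sketch: assign a sample to its nearest class
# prototype unless even the nearest one is farther than the threshold,
# in which case flag the sample as "unknown".

def classify(x, prototypes, threshold):
    """x: feature tuple; prototypes: {class_name: prototype_tuple}."""
    dists = {c: math.dist(x, p) for c, p in prototypes.items()}
    label = min(dists, key=dists.get)
    return label if dists[label] <= threshold else "unknown"

protos = {"wheeze": (0.0, 0.0), "crackle": (3.0, 0.0)}
print(classify((0.5, 0.0), protos, threshold=1.0))    # near a known prototype
print(classify((10.0, 10.0), protos, threshold=1.0))  # far from every prototype
```

Samples flagged "unknown" would then seed the new prototype described in the abstract, after which the final semi-supervised stage classifies over the enlarged class set.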
The smart home platform integrates with Internet of Things (IoT) devices, smartphones, and cloud servers, enabling seamless and convenient services. It gathers and manages extensive user data, including personal information, device operations, and patterns of user behavior. Such data plays an essential role in criminal investigations, highlighting the growing importance of specialized smart home forensics. Given the rapid advancement in smart home software and hardware technologies, many companies are introducing new devices and services that expand the market. Consequently, scalable and platform-specific forensic research is necessary to support efficient digital investigations across diverse smart home ecosystems. This study thoroughly examines the core components and structures of smart homes, proposing a generalized architecture that represents various operational environments. A three-stage smart home forensics framework is introduced: (1) analyzing application functions to infer relevant data, (2) extracting and processing data from interconnected devices, and (3) identifying data valuable for investigative purposes. The framework's applicability is validated using testbeds from the Samsung SmartThings and Xiaomi Mi Home platforms, offering practical insights for real-world forensic applications. The results demonstrate that the proposed forensic framework effectively acquires and classifies relevant digital evidence in smart home platforms, confirming its practical applicability in smart home forensic investigations.
Image watermarking is a powerful tool for media protection and can provide promising results when combined with other defense mechanisms. Image watermarking can be used to protect the copyright of digital media by embedding a unique identifier that identifies the owner of the content. It can also be used to verify the authenticity of digital media, such as images or videos, by ascertaining the watermark information. In this paper, a mathematical chaos-based image watermarking technique is proposed using the discrete wavelet transform (DWT), a chaotic map, and the Laplacian operator. The DWT decomposes the image into its frequency components, chaos provides an extra security defense by encrypting the watermark signal, and the Laplacian operator with optimization is applied to the mid-frequency bands to find the sharp areas in the image. These mid-frequency bands are used to embed the watermarks by modifying the coefficients in these bands. The mid-sub-band maintains the invisible property of the watermark, and chaos combined with the second-order derivative Laplacian makes the watermark resilient to attacks. Comprehensive experiments demonstrate that this approach is effective against common signal processing attacks, i.e., compression, noise addition, and filtering. Moreover, this approach also maintains image quality, as measured by the peak signal-to-noise ratio (PSNR) and structural similarity index metric (SSIM). The highest achieved PSNR and SSIM values are 55.4 dB and 1, respectively. In the same way, normalized correlation (NC) values are almost 10%–20% higher than in comparative research. These results support copyright protection in multimedia content.
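The scheme above embeds watermark bits in mid-frequency DWT sub-bands of the image. As a simplified illustration of the transform itself, here is a one-level 1-D Haar DWT and its inverse; the paper's 2-D decomposition, chaotic encryption, and Laplacian band selection are more involved, and the signal values are illustrative:

```python
import math

# One-level 1-D Haar DWT sketch: the approximation band carries the
# coarse signal, the detail band carries the higher-frequency content
# of the kind a watermark would be embedded into.

def haar_dwt(signal):
    """signal: even-length list. Returns (approx, detail) coefficient lists."""
    s = 1 / math.sqrt(2)
    pairs = list(zip(signal[::2], signal[1::2]))
    approx = [(a + b) * s for a, b in pairs]
    detail = [(a - b) * s for a, b in pairs]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt: interleave reconstructed sample pairs."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) * s, (a - d) * s]
    return out

sig = [4.0, 2.0, 5.0, 5.0]
A, D = haar_dwt(sig)
print(haar_idwt(A, D))  # reconstructs the original signal
```

Because the transform is perfectly invertible, small controlled changes to selected coefficients (the embedding step) survive reconstruction while remaining visually imperceptible in the pixel domain.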
This study proposes an advanced vision-based technology for detecting glass products and identifying defects in a smart glass factory production environment. Leveraging artificial intelligence (AI) and computer vision, the research aims to automate glass detection processes and maximize production efficiency. The primary focus is on developing a precise glass detection and quality management system tailored to smart manufacturing environments. The proposed system utilizes various YOLO (You Only Look Once) models for glass detection, comparing their performance to identify the most effective architecture. Input images are preprocessed using a Gaussian Mixture Model (GMM) to remove background noise present in factory environments. This approach minimizes distractions caused by varying backgrounds and enables accurate glass identification and defect detection. Traditional manual inspection methods often require skilled labor, are time-intensive, and may lack consistency. In contrast, the proposed vision-based system ensures high accuracy and reliability through non-contact inspection. The performance of the system was evaluated using video data collected from an actual glass factory. This assessment verified the accuracy, reliability, and practicality of the system, demonstrating its effectiveness in real-world production scenarios. Beyond automating glass detection and defect identification, the proposed system integrates into manufacturing environments to support data-driven decision-making. This enables real-time monitoring, defect prediction, and improved production efficiency. Moreover, this research is expected to serve as a model for enhancing quality control and productivity across various manufacturing industries, driving innovation in smart manufacturing.
Automatic detection of leukemia, or blood cancer, is one of the most challenging tasks to be addressed in the healthcare system. Analysis of white blood cells (WBCs) in microscopic slide images of blood or bone marrow plays a crucial part in early identification and facilitates medical experts. For Acute Lymphocytic Leukemia (ALL), blood or marrow is the preferred sample to be analyzed by experts before the disease spreads through the whole body and the condition worsens. Researchers have done a great deal of work in this field, and a few literature reviews have been published that offer a comprehensive analysis of various artificial-intelligence-based techniques, such as machine and deep learning, for the detection of ALL. The systematic review in this article follows the PRISMA guidelines and presents the most recent advancements in the field. Image segmentation techniques were broadly studied from various online databases, including Google Scholar, Science Direct, and PubMed, and categorized as image-processing-based, traditional machine- and deep-learning-based, and advanced deep-learning-based models. Convolutional Neural Networks (CNNs) are covered from traditional models through the recent advancements used for the classification of ALL into its subtypes. A critical analysis of the existing methods is provided to offer clarity on the current state of the field. Finally, the paper concludes with insights and suggestions for future research, aiming to guide new researchers in the development of advanced automated systems for detecting life-threatening diseases.
Intrusion attempts against Internet of Things (IoT) devices have significantly increased in the last few years. These devices are now easy targets for hackers because of their built-in security flaws. Combining a Self-Organizing Map (SOM), a hybrid anomaly-detection component that provides dimensionality reduction with inherent clustering behavior, with Extreme Gradient Boosting (XGBoost) for multi-class classification can improve network traffic intrusion detection. The proposed model is evaluated on the NSL-KDD dataset. The hybrid approach outperforms the baseline models, a multilayer perceptron and a SOM-KNN (k-nearest neighbors) model, in precision, recall, and F1-score, highlighting its scalability, adaptability, and real-world applicability. The paper therefore proposes a highly efficient deployment strategy for resource-constrained network edges. The results reveal that precision, recall, and F1-scores rise 10%–30% for the benign, probing, and Denial of Service (DoS) classes. In particular, the DoS, probe, and benign classes improved their F1-scores by 7.91%, 32.62%, and 12.45%, respectively.
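The SOM stage described above can be sketched in a few lines of NumPy: traffic feature vectors are mapped onto a small 2-D grid whose node coordinates then serve as reduced features for the downstream classifier. Grid size, learning-rate schedule, and neighborhood width below are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal Self-Organizing Map (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.dstack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-3    # shrinking neighborhood
        for x in data:
            # Best-matching unit (BMU): grid node closest to the sample.
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood pulls nearby nodes toward x.
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=2)
                       / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
    return weights

def bmu_coords(weights, x):
    # Grid coordinates of a sample's BMU: the 2-D reduced representation.
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)
```

In the hybrid setup, these BMU coordinates (or distances to all nodes) would be fed to XGBoost for the multi-class decision.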
Funding: Northern Border University, Arar, KSA, under project number "NBU-FFR-2025-3555-07".
Abstract: Healthcare networks are transitioning from manual records to electronic health records, but this shift introduces vulnerabilities such as secure-communication issues, privacy concerns, and the presence of malicious nodes. Existing machine- and deep-learning-based anomaly detection methods often rely on centralized training, leading to reduced accuracy and potential privacy breaches. Therefore, this study proposes a Blockchain-based Federated Learning architecture for Malicious Node Detection (BFL-MND). It trains models locally within healthcare clusters, sharing only model updates instead of patient data, preserving privacy and improving accuracy. Cloud and edge computing enhance the model's scalability, while blockchain ensures secure, tamper-proof access to health data. Using the PhysioNet dataset, the proposed model achieves an accuracy of 0.95, an F1 score of 0.93, a precision of 0.94, and a recall of 0.96, outperforming baseline models such as random forest (0.88), adaptive boosting (0.90), logistic regression (0.86), the perceptron (0.83), and deep neural networks (0.92).
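The federated step above, sharing only model updates rather than patient data, can be illustrated with FedAvg-style weighted aggregation. The function below is a generic sketch, not the BFL-MND aggregation rule; parameter shapes and client weighting by sample count are assumptions.

```python
import numpy as np

def fedavg(client_updates, client_sizes):
    """Weighted average of per-cluster model parameters (FedAvg-style).

    client_updates: list of parameter lists (one list of arrays per client)
    client_sizes:   number of local samples per client, used as weights
    """
    total = sum(client_sizes)
    agg = [np.zeros_like(p) for p in client_updates[0]]
    for params, n in zip(client_updates, client_sizes):
        for a, p in zip(agg, params):
            a += (n / total) * p   # clients with more data count more
    return agg
```

Each healthcare cluster would compute `params` locally; only these arrays (never raw records) leave the cluster, and the aggregated result is redistributed for the next round.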
Funding: Researchers Supporting Project Number (RSPD2025R947), King Saud University, Riyadh, Saudi Arabia.
Abstract: Effective resource management in the Internet of Things and fog computing is essential for efficient and scalable networks. However, existing methods often fail in dynamic and high-demand environments, leading to resource bottlenecks and increased energy consumption. This study addresses these limitations by proposing the Quantum Inspired Adaptive Resource Management (QIARM) model, which introduces novel algorithms inspired by quantum principles for enhanced resource allocation. QIARM employs a quantum-superposition-inspired technique for multi-state resource representation and an adaptive learning component to dynamically adjust resources in real time. In addition, an energy-aware scheduling module minimizes power consumption by selecting optimal configurations based on energy metrics. The simulation was carried out over 360 minutes across eight distinct scenarios. The proposed framework achieves up to 98% task-offload success and reduces energy consumption by 20%, addressing critical challenges of scalability and efficiency in dynamic fog computing environments.
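One common way to read "quantum-superposition-inspired multi-state representation" is to hold all candidate resource configurations simultaneously as normalized amplitudes and "collapse" to one with probability equal to the squared amplitude. The sketch below is purely illustrative of that idea and is not taken from the paper's equations; the scoring scheme and parameters are assumptions.

```python
import numpy as np

def superposition_state(scores):
    """Encode candidate configurations as normalized amplitudes,
    weighted by a fitness score (illustrative assumption)."""
    amps = np.sqrt(np.asarray(scores, dtype=float))
    return amps / np.linalg.norm(amps)

def measure(amplitudes, rng=None):
    """'Collapse' to one configuration with probability |amplitude|^2."""
    rng = rng or np.random.default_rng(0)
    probs = amplitudes ** 2
    return int(rng.choice(len(probs), p=probs / probs.sum()))
```

An adaptive learner would then re-score configurations after each measurement, shifting amplitude toward configurations that met the energy and latency targets.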
Funding: Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. (G-1436-611-225).
Abstract: The rapid evolution of wireless technologies and the advent of 6G networks present new challenges and opportunities for Internet of Things (IoT) applications, particularly in terms of ultra-reliable, secure, and energy-efficient communication. This study explores the integration of Reconfigurable Intelligent Surfaces (RIS) into IoT networks to enhance communication performance. Unlike traditional passive reflector-based approaches, RIS is leveraged as an active optimization tool to improve both backscatter and direct communication modes, addressing critical IoT challenges such as energy efficiency, limited communication range, and double-fading effects in backscatter communication. We propose a novel computational framework that combines RIS functionality with Physical Layer Security (PLS) mechanisms, optimized through the Deep Deterministic Policy Gradient (DDPG) algorithm. This framework adaptively adjusts RIS configurations and transmitter beamforming to mitigate key challenges, including imperfect channel state information (CSI) and hardware limitations such as quantized RIS phase shifts. By optimizing both RIS settings and beamforming in real time, our approach outperforms traditional methods by significantly increasing secrecy rates, improving spectral efficiency, and enhancing energy efficiency. Notably, the framework adapts more effectively to the dynamic nature of wireless channels than conventional optimization techniques, providing scalable solutions for large-scale RIS deployments. Our results demonstrate substantial improvements in communication performance, setting a new benchmark for secure, efficient, and scalable 6G communication. This work offers valuable insights for the future of IoT networks, with a focus on computational optimization, high spectral efficiency, and energy-aware operations.
Abstract: Face recognition has emerged as one of the most prominent applications of image analysis and understanding, gaining considerable attention in recent years. This growing interest is driven by two key factors: its extensive applications in law enforcement and the commercial domain, and the rapid advancement of practical technologies. Despite significant advancements, modern recognition algorithms still struggle in real-world conditions such as varying lighting, occlusion, and diverse facial poses. In such scenarios, human perception is still well above the capabilities of present technology. Using a systematic mapping study, this paper presents an in-depth review of face detection and face recognition algorithms, surveying advancements made between 2015 and 2024. We analyze key methodologies, highlighting their strengths and limitations in the application context. Additionally, we examine various face detection/recognition datasets, focusing on their task-specific applications, size, diversity, and complexity. By analyzing these algorithms and datasets, this survey serves as a valuable resource for researchers, identifying the research gaps in the field of face detection and recognition and outlining potential directions for future research.
Funding: Ongoing Research Funding program (ORF-2025-636), King Saud University, Riyadh, Saudi Arabia.
Abstract: The healthcare sector involves many steps to ensure efficient care for patients, such as appointment scheduling, consultation planning, online follow-up, and more. However, existing healthcare mechanisms are unable to serve a large number of patients, as these systems are centralized and hence vulnerable to various issues, including single points of failure, performance bottlenecks, and substantial monetary costs. Furthermore, they lack an efficient mechanism for protecting data against unauthorized access. To address these issues, this study proposes a blockchain-based authentication mechanism that authenticates all healthcare stakeholders based on their credentials. It also utilizes the capabilities of the InterPlanetary File System (IPFS) to store Electronic Health Records (EHRs) in a distributed way. The IPFS platform addresses not only the high cost of data storage on the blockchain but also the single point of failure in the traditional centralized data-storage model. The simulation results demonstrate that our model outperforms the benchmark schemes and provides an efficient mechanism for managing healthcare-sector operations. It takes approximately 3.5 s for the smart contract to authenticate a node and provide it with the decryption key, which is ultimately used to access the data. Our proposed model also outperforms existing solutions in execution time and scalability: its smart contract processes around 9000 transactions in just 6.5 s, while benchmark schemes require approximately 7 s for the same number of transactions.
Funding: Regional Innovation Strategy (RIS) through the National Research Foundation of Korea funded by the Ministry of Education, Grant/Award Number: 2021RIS-001 (1345341783); Brain Pool program funded by the Ministry of Science and ICT through the National Research Foundation of Korea, Grant/Award Number: NRF-2022H1D3A2A02060097.
Abstract: The term sustainable, or green, supply chain refers to the concept of incorporating environmentally sustainable procedures into the traditional supply chain. Green supply chain management offers a chance to revise procedures, materials, and operational ideas. Handling the fuzziness of assessment data and the psychological states of experts in the decision-making procedure are two important issues. The main contribution of this analysis is to derive the theory of the Archimedean Bonferroni mean operator for complex q-rung orthopair fuzzy (CQROF) information, yielding the CQROF Archimedean Bonferroni mean and CQROF weighted Archimedean Bonferroni mean operators. These are valuable, dominant, and classical aggregation operators used for examining the interrelationships among a finite number of attributes in modern data-fusion theory. Inspirational and well-used properties of the initiated theories are also diagnosed, along with some special cases. Additionally, an extended TODIM tool based on prospect theory is developed for CQROF information, which plays an essential and critical role in fuzzy set theory. Finally, a real-life green supply chain management problem is evaluated using the initiated CQROF operators, fully illustrating the feasibility and efficiency of the proposed work through a comparison between the proposed and prevailing theories.
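For context, the classical (real-valued) Bonferroni mean that the CQROF operators generalize can be written as below; the complex q-rung orthopair extension in the paper replaces the products and sums with Archimedean t-norm and t-conorm operations.

```latex
\mathrm{BM}^{p,q}(a_1,\ldots,a_n)
  = \left( \frac{1}{n(n-1)} \sum_{\substack{i,j=1 \\ i \neq j}}^{n} a_i^{\,p}\, a_j^{\,q} \right)^{\!\frac{1}{p+q}},
\qquad a_i \ge 0,\; p, q \ge 0 .
```

The pairwise products $a_i^{\,p} a_j^{\,q}$ over $i \neq j$ are what let the operator capture interrelationships between pairs of attributes, the property the abstract highlights.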
Funding: Researcher Supporting Project number (RSPD2024R582), King Saud University, Riyadh, Saudi Arabia.
Abstract: The controller is a main component of the Software-Defined Networking (SDN) framework and plays a significant role in enabling programmability and orchestration for 5G and next-generation networks. In SDN, frequent communication occurs between network switches and the controller, which manages and directs traffic flows. If the controller is not strategically placed within the network, this communication can experience increased delays, negatively affecting network performance. Specifically, an improperly placed controller can lead to higher end-to-end (E2E) delay, as switches must traverse more hops or incur greater propagation delays when communicating with the controller. This paper introduces a novel approach using Deep Q-Learning (DQL) to dynamically place controllers in Software-Defined Internet of Things (SD-IoT) environments, with the goal of minimizing E2E delay between switches and controllers. E2E delay, a crucial metric for network performance, is influenced by two key factors: hop count, which measures the number of network nodes data must traverse, and propagation delay, which accounts for the physical distance between nodes. Our approach models the controller placement problem as a Markov Decision Process (MDP). In this model, the network configuration at any given time is represented as a "state," while "actions" correspond to potential decisions about placing controllers or reassigning switches to controllers. Using a Deep Q-Network (DQN) to approximate the Q-function, the system learns the optimal controller placement by maximizing the cumulative reward, defined as the negative of the E2E delay. Essentially, the lower the delay, the higher the reward the system receives, enabling it to continuously improve its placement strategy. The experimental results show that our DQL-based method significantly reduces E2E delay compared to traditional benchmark placement strategies. By dynamically learning from the network's real-time conditions, the proposed method ensures that controller placement remains efficient and responsive, reducing communication delays and enhancing overall network performance.
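The reward design above (reward = negative E2E delay) can be demonstrated with a tabular toy version. The real system uses a DQN over full network states; this sketch collapses the problem to choosing a single controller location from a delay matrix, so everything here is an illustrative simplification, not the paper's algorithm.

```python
import numpy as np

def q_learning_placement(delay, episodes=500, lr=0.5, eps=0.1, seed=0):
    """Toy tabular Q-learning: each action is a candidate controller
    location; the reward is the negative mean switch-to-controller delay.

    delay: matrix of shape (n_switches, n_candidate_locations)
    """
    rng = np.random.default_rng(seed)
    n_candidates = delay.shape[1]
    q = np.zeros(n_candidates)
    for _ in range(episodes):
        # epsilon-greedy selection over candidate locations
        a = rng.integers(n_candidates) if rng.random() < eps else int(np.argmax(q))
        r = -delay[:, a].mean()          # lower delay => higher reward
        q[a] += lr * (r - q[a])          # incremental value update
    return int(np.argmax(q))
```

The learned argmax converges to the location with the smallest mean switch-to-controller delay, which is exactly the objective the DQN optimizes over richer, time-varying states.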
Funding: Researchers Supporting Project Number (RSPD2025R947), King Saud University, Riyadh, Saudi Arabia.
Abstract: Cloud computing has emerged as a vital platform for processing resource-intensive workloads in smart manufacturing environments, enabling scalable and flexible access to remote data centers over the internet. In these environments, Virtual Machines (VMs) are employed to manage workloads, and their optimal placement on Physical Machines (PMs) is crucial for maximizing resource utilization. However, achieving high resource utilization in cloud data centers remains a challenge due to multiple conflicting objectives, particularly in scenarios involving inter-VM communication dependencies, which are common in smart manufacturing applications. This manuscript presents an AI-driven approach utilizing a modified Multi-Objective Particle Swarm Optimization (MOPSO) algorithm, enhanced with improved mutation and crossover operators, to place VMs efficiently. The approach aims to minimize the impact on networking devices during inter-VM communication while enhancing resource utilization. The proposed algorithm is benchmarked against other multi-objective algorithms, such as the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), demonstrating its superiority in optimizing resource allocation in cloud-based environments for smart manufacturing.
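The particle-swarm core that MOPSO builds on can be shown with a plain single-objective PSO. This is a stripped-down stand-in for the paper's modified MOPSO (no Pareto archive, mutation, or crossover operators), and the coefficients `w`, `c1`, `c2` are conventional defaults assumed for illustration.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=100, seed=0):
    """Plain PSO: each particle tracks its personal best, and the swarm
    shares one global best; velocities blend inertia plus both pulls."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5           # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())
```

For VM placement, `x` would encode a candidate VM-to-PM assignment and `f` a cost mixing network impact and utilization; the multi-objective variant keeps an archive of non-dominated assignments instead of a single global best.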
Abstract: The Internet of Things (IoT) is a smart networking infrastructure of physical devices, i.e., things, that are embedded with sensors, actuators, software, and other technologies to connect and share data with the respective server module. Although IoTs are cornerstones of different application domains, device authenticity, i.e., of servers and ordinary devices, is the most crucial issue and must be resolved on a priority basis. Various field-proven methodologies have been presented to streamline the verification of communicating devices; however, to our knowledge, location-aware authentication has not been reported, even though location is a crucial metric, especially in scenarios where devices are mobile. This paper presents a lightweight and location-aware device-to-server authentication technique where a device's membership with the nearest server is subject to its location information along with other measures. Initially, the Media Access Control (MAC) address and the Advanced Encryption Standard (AES), along with a 128-bit shared secret key, λ_(i), are utilized by a Trusted Authority (TA) to generate MaskIDs, which are used instead of the original IDs for every device, i.e., server and member, and are shared in the offline phase. Secondly, the TA shares a list of authentic devices, i.e., servers S_(j) and members C_(i), with every device in the IoT for the onward verification process, which must be executed before the actual communication process is initialized. Additionally, every device should be located within the coverage area of a server, and this location information is used in the authentication process. A thorough analytical analysis was carried out to check the susceptibility of the proposed and existing authentication approaches to well-known intruder attacks, i.e., man-in-the-middle, masquerading, and device and server impersonation, especially in the IoT domain. Moreover, the proposed authentication scheme and existing state-of-the-art approaches were simulated in a real IoT environment to verify their performance in terms of various evaluation metrics, i.e., processing, communication, and storage overheads. These results verified the superiority of the proposed scheme over existing state-of-the-art approaches, particularly in terms of communication, storage, and processing costs.
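The MaskID idea, deriving a pseudonymous identifier from a device's MAC address under a 128-bit shared key, can be sketched as follows. The paper specifies AES; HMAC-SHA256 is substituted here only because it is available in the Python standard library, and the function name and truncation length are illustrative assumptions.

```python
import hashlib
import hmac

def make_mask_id(mac: str, secret_key: bytes) -> str:
    """Derive a pseudonymous 128-bit MaskID from a MAC address and a
    shared secret key (HMAC-SHA256 stand-in for the paper's AES)."""
    digest = hmac.new(secret_key, mac.encode(), hashlib.sha256).hexdigest()
    return digest[:32]   # 128 bits, hex-encoded
```

The TA would compute such a MaskID for every server and member in the offline phase; devices then exchange MaskIDs instead of real identifiers, so an eavesdropper without λ_(i) cannot link traffic back to a physical device.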
Funding: Researchers Supporting Project Number (RSPD2024R947), King Saud University, Riyadh, Saudi Arabia.
Abstract: Software project outcomes heavily depend on natural-language requirements, which often cause diverse interpretations and issues such as ambiguous, incomplete, or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods. However, such studies do not generalize efficiently when extended to other datasets. Therefore, this paper proposes a hybrid approach combining multiple techniques and explores their effectiveness on bug identification problems. The methods involve feature selection, used to reduce the dimensionality and redundancy of features and select only the relevant ones; transfer learning, used to train and test the model on different datasets to analyze how much of the learning transfers to other datasets; and ensemble methods, used to explore the increase in performance from combining multiple classifiers in one model. Four National Aeronautics and Space Administration (NASA) and four Promise datasets are used in the study, which shows an increase in model performance, with better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers were combined. It reveals that an amalgam of techniques such as those used in this study, i.e., feature selection, transfer learning, and ensemble methods, helps optimize software bug-prediction models and provides a high-performing, useful end model.
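The ensemble step, combining multiple classifiers into one model, reduces in its simplest form to majority voting over each classifier's binary bug/no-bug predictions. The helper below is a generic illustration of that combination rule, not the study's specific ensemble.

```python
import numpy as np

def majority_vote(predictions):
    """Combine binary predictions from several classifiers by majority
    vote.  predictions: array-like of shape (n_models, n_samples)."""
    votes = np.asarray(predictions)
    # A sample is flagged buggy when more than half of the models agree.
    return (votes.sum(axis=0) * 2 > votes.shape[0]).astype(int)
```

In a soft-voting variant, per-class probabilities would be averaged instead, which usually pairs better with AUC-ROC evaluation because it preserves a continuous score.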
Funding: Research Center of the College of Computer and Information Sciences, King Saud University, Grant/Award Number: RSPD2024R947.
Abstract: Over the past few years, the application and usage of Machine Learning (ML) techniques have increased exponentially due to the continuously increasing size of data and computing capacity. Despite the popularity of ML techniques, only a few research studies have focused on the application of ML, especially supervised learning techniques, to Requirement Engineering (RE) activities and the problems that occur in them. The authors present a systematic mapping of past work that applied supervised learning techniques to RE activities between 2002 and 2023, investigating the research trends, main RE activities, ML algorithms, and data sources studied during this period. Forty-five research studies were selected based on our exclusion and inclusion criteria. The results show that the scientific community used 57 algorithms. Among those, researchers most often used the following five ML algorithms in RE activities: Decision Tree, Support Vector Machine, Naïve Bayes, K-Nearest Neighbor Classifier, and Random Forest. The results show that researchers used these algorithms in eight major RE activities: requirements analysis, failure prediction, effort estimation, quality, traceability, business-rules identification, content classification, and detection of problems in requirements written in natural language. The selected research studies used 32 private and 41 public data sources. The most popular data sources detected in the selected studies are the Metric Data Programme from NASA, Predictor Models in Software Engineering, and the iTrust Electronic Health Care System.
Funding: Woosong University Academic Research 2024.
Abstract: This study investigates the application of Learnable Memory Vision Transformers (LMViT) for detecting metal surface flaws, comparing their performance with traditional CNNs, specifically ResNet18 and ResNet50, as well as other transformer-based models including Token-to-Token ViT, ViT without memory, and Parallel ViT. Leveraging a widely used steel surface defect dataset, the research applies data augmentation and t-distributed stochastic neighbor embedding (t-SNE) to enhance feature extraction and understanding. These techniques mitigated overfitting, stabilized training, and improved generalization. The LMViT model achieved a test accuracy of 97.22%, significantly outperforming ResNet18 (88.89%) and ResNet50 (88.90%), as well as Token-to-Token ViT (88.46%), ViT without memory (87.18%), and Parallel ViT (91.03%). Furthermore, LMViT exhibited superior training and validation performance, attaining a validation accuracy of 98.2% compared to 91.0% for ResNet18, 96.0% for ResNet50, and 89.12%, 87.51%, and 91.21% for Token-to-Token ViT, ViT without memory, and Parallel ViT, respectively. The findings highlight LMViT's ability to capture long-range dependencies in images, an area where CNNs struggle due to their reliance on local receptive fields and hierarchical feature extraction. The additional transformer-based models also capture complex features better than CNNs, with LMViT excelling particularly at detecting subtle and complex defects, which is critical for maintaining product quality and operational efficiency in industrial applications. For instance, the LMViT model successfully identified fine scratches and minor surface irregularities that CNNs often misclassify. This study not only demonstrates LMViT's potential for real-world defect detection but also underscores the promise of other transformer-based architectures like Token-to-Token ViT, ViT without memory, and Parallel ViT in industrial scenarios where complex spatial relationships are key. Future research may focus on enhancing LMViT's computational efficiency for deployment in real-time quality-control systems.
Abstract: In radiology, magnetic resonance imaging (MRI) is an essential diagnostic tool that provides detailed images of a patient's anatomical and physiological structures, and it is particularly effective for detecting soft-tissue anomalies. Traditionally, radiologists interpret these images manually, which can be labor-intensive and time-consuming given the vast amount of data. To address this challenge, machine learning and deep learning approaches can be used to improve the accuracy and efficiency of anomaly detection in MRI scans. This manuscript presents the use of a deep AlexNet50 model for MRI classification with discriminative learning methods. Learning proceeds in three stages: in the first stage, the whole dataset is used to learn the features; in the second stage, some layers of AlexNet50 are frozen and training continues on an augmented dataset; and in the third stage, the full AlexNet50 is fine-tuned on the augmented dataset. Three publicly available MRI classification datasets were used for analysis: the Harvard whole brain atlas (HWBA-dataset), the School of Biomedical Engineering of Southern Medical University (SMU-dataset), and the National Institute of Neuroscience and Hospitals brain MRI dataset (NINS-dataset). Various hyperparameter optimizers, including Adam, stochastic gradient descent (SGD), root-mean-square propagation (RMSprop), Adamax, and AdamW, were used to compare the performance of the learning process. The HWBA-dataset registers the maximum classification performance. We evaluated the proposed classification model using several quantitative metrics, achieving an average accuracy of 98%.
Funding: National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. RS-2023-00218176); the Soonchunhyang University Research Fund.
Abstract: Improving early diagnosis of autism spectrum disorder (ASD) in children increasingly relies on predictive models that are reliable and accessible to non-experts. This study aims to develop such models using Python-based tools to improve ASD diagnosis in clinical settings. We performed exploratory data analysis to ensure data quality and identify key patterns in pediatric ASD data. We selected the categorical boosting (CatBoost) algorithm to handle the large number of categorical variables effectively. We used the PyCaret automated machine learning (AutoML) tool to make the models user-friendly for clinicians without extensive machine learning expertise. In addition, we applied Shapley additive explanations (SHAP), an explainable artificial intelligence (XAI) technique, to improve the interpretability of the models. Models developed using CatBoost and other AI algorithms showed high accuracy in diagnosing ASD in children. SHAP provided clear insights into the influence of each variable on diagnostic outcomes, making model decisions transparent and understandable to healthcare professionals. By integrating robust machine learning methods with user-friendly tools such as PyCaret and leveraging XAI techniques such as SHAP, this study contributes to the development of reliable, interpretable, and accessible diagnostic tools for ASD. These advances hold great promise for supporting informed decision-making in clinical settings, ultimately improving early identification and intervention strategies for ASD in the pediatric population. However, the study is limited by the dataset's demographic imbalance and the lack of external clinical validation, which should be addressed in future research.
Funding: Innovative Human Resource Development for Local Intellectualization Program through the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (IITP-2025-RS-2022-00156360).
Abstract: The classification of respiratory sounds is crucial in diagnosing and monitoring respiratory diseases. However, auscultation is highly subjective, making it challenging to analyze respiratory sounds accurately. Although deep learning has been increasingly applied to this task, most existing approaches have relied primarily on supervised learning. Since supervised learning requires large amounts of labeled data, recent studies have explored self-supervised and semi-supervised methods to overcome this limitation. However, these approaches have largely assumed a closed-set setting, where the classes present in the unlabeled data are considered identical to those in the labeled data. In contrast, this study explores an open-set semi-supervised learning setting, where the unlabeled data may contain additional, unknown classes. To address this challenge, a distance-based prototype network is employed to classify respiratory sounds in an open-set setting. In the first stage, the prototype network is trained using labeled and unlabeled data to derive prototype representations of known classes. In the second stage, distances between unlabeled data and known class prototypes are computed, and samples exceeding an adaptive threshold are identified as unknown; a new prototype is then calculated for this unknown class. In the final stage, semi-supervised learning is employed to classify labeled and unlabeled data into known and unknown classes. Compared to conventional closed-set semi-supervised learning approaches, the proposed method achieved an average classification-accuracy improvement of 2%–5%. Additionally, in cases of data scarcity, utilizing unlabeled data further improved classification performance by 6%–8%. The findings of this study are expected to significantly enhance respiratory-sound classification performance in practical clinical settings.
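The distance-based, open-set decision rule described above can be sketched directly: compute per-class mean embeddings (prototypes), then assign a sample to the nearest prototype unless every distance exceeds a threshold. The paper's threshold is adaptive; here a fixed value is used for illustration, and the embedding vectors are stand-ins for learned features.

```python
import numpy as np

def prototypes(X, y):
    """Class prototypes: the mean embedding of labeled samples per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def open_set_predict(x, protos, threshold):
    """Nearest-prototype classification with an open-set reject option:
    samples farther than `threshold` from every prototype are 'unknown'."""
    dists = {c: np.linalg.norm(x - p) for c, p in protos.items()}
    c = min(dists, key=dists.get)
    return c if dists[c] <= threshold else "unknown"
```

In the second stage of the framework, the embeddings flagged "unknown" would be averaged into a new prototype, after which semi-supervised training proceeds over known plus unknown classes.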
Abstract: The smart home platform integrates with Internet of Things (IoT) devices, smartphones, and cloud servers, enabling seamless and convenient services. It gathers and manages extensive user data, including personal information, device operations, and patterns of user behavior. Such data plays an essential role in criminal investigations, highlighting the growing importance of specialized smart home forensics. Given the rapid advancement of smart home software and hardware technologies, many companies are introducing new devices and services that expand the market. Consequently, scalable and platform-specific forensic research is necessary to support efficient digital investigations across diverse smart home ecosystems. This study thoroughly examines the core components and structures of smart homes, proposing a generalized architecture that represents various operational environments. A three-stage smart home forensics framework is introduced: (1) analyzing application functions to infer relevant data, (2) extracting and processing data from interconnected devices, and (3) identifying data valuable for investigative purposes. The framework's applicability is validated using testbeds from the Samsung SmartThings and Xiaomi Mi Home platforms, offering practical insights for real-world forensic applications. The results demonstrate that the proposed forensic framework effectively acquires and classifies relevant digital evidence on smart home platforms, confirming its practical applicability in smart home forensic investigations.
Funding: Researcher Supporting Project number (RSPD2025R636), King Saud University, Riyadh, Saudi Arabia.
Abstract: Image watermarking is a powerful tool for media protection and can provide promising results when combined with other defense mechanisms. It can be used to protect the copyright of digital media by embedding a unique identifier that identifies the owner of the content, and to verify the authenticity of digital media, such as images or videos, by checking the embedded watermark information. In this paper, a mathematical chaos-based image watermarking technique is proposed using the discrete wavelet transform (DWT), a chaotic map, and the Laplacian operator. The DWT decomposes the image into its frequency components, chaos provides an extra layer of security by encrypting the watermark signal, and the Laplacian operator with optimization is applied to the mid-frequency bands to find the sharp areas in the image. The watermark is embedded by modifying the coefficients in these mid-frequency bands. The mid-sub-bands maintain the invisibility of the watermark, and chaos combined with the second-order-derivative Laplacian makes it resilient to attacks. Comprehensive experiments demonstrate that the approach is robust against common signal-processing attacks, i.e., compression, noise addition, and filtering. Moreover, it maintains image quality, as measured by the peak signal-to-noise ratio (PSNR) and the structural similarity index metric (SSIM). The highest achieved PSNR and SSIM values are 55.4 dB and 1, respectively, and normalized correlation (NC) values are almost 10%–20% higher than in comparable research. These results support copyright protection of multimedia content.
Funding: Supported by the Technology Development Program (RS-2024-00445393) funded by the Ministry of SMEs and Startups (MSS, Republic of Korea).
Abstract: This study proposes an advanced vision-based technology for detecting glass products and identifying defects in a smart glass factory production environment. Leveraging artificial intelligence (AI) and computer vision, the research aims to automate glass detection processes and maximize production efficiency. The primary focus is on developing a precise glass detection and quality management system tailored to smart manufacturing environments. The proposed system evaluates several YOLO (You Only Look Once) models for glass detection, comparing their performance to identify the most effective architecture. Input images are preprocessed using a Gaussian Mixture Model (GMM) to remove the background noise present in factory environments; this minimizes distractions caused by varying backgrounds and enables accurate glass identification and defect detection. Traditional manual inspection methods often require skilled labor, are time-intensive, and may lack consistency. In contrast, the proposed vision-based system ensures high accuracy and reliability through non-contact inspection. The system's performance was evaluated on video data collected from an actual glass factory, verifying its accuracy, reliability, and practicality in real-world production scenarios. Beyond automating glass detection and defect identification, the system integrates into manufacturing environments to support data-driven decision-making, enabling real-time monitoring, defect prediction, and improved production efficiency. Moreover, this research is expected to serve as a model for enhancing quality control and productivity across various manufacturing industries, driving innovation in smart manufacturing.
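The GMM background-removal step can be sketched without OpenCV (whose `BackgroundSubtractorMOG2` implements the full mixture model). The sketch below is a deliberate K=1 simplification of the mixture-of-Gaussians idea, one running Gaussian per pixel rather than a full mixture; the class name and the `alpha`/`k` parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class RunningGaussianBG:
    """Per-pixel running Gaussian background model (a K=1 simplification
    of the mixture-of-Gaussians idea): a pixel is foreground when it lies
    more than k standard deviations from its background mean."""

    def __init__(self, alpha=0.05, k=2.5):
        self.alpha, self.k = alpha, k     # learning rate, match threshold
        self.mean = self.var = None

    def apply(self, frame):
        f = frame.astype(float)
        if self.mean is None:             # first frame bootstraps the model
            self.mean, self.var = f.copy(), np.full(f.shape, 25.0)
            return np.zeros(f.shape, dtype=bool)
        dist = np.abs(f - self.mean)
        fg = dist > self.k * np.sqrt(self.var)
        a = self.alpha                    # update only background pixels
        self.mean = np.where(fg, self.mean, (1 - a) * self.mean + a * f)
        # variance floor avoids a degenerate zero-variance model
        self.var = np.maximum(
            np.where(fg, self.var, (1 - a) * self.var + a * dist ** 2), 1.0)
        return fg
```

Feeding the foreground mask to the detector is what suppresses the varying factory background: after a few frames of static scene, a newly appearing object (glass product or defect region) is flagged while unchanged pixels are absorbed into the background model.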
Funding: Supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (RS-2024-00460621, Developing BCI-Based Digital Health Technologies for Mental Illness and Pain Management).
Abstract: Automatic detection of leukemia, or blood cancer, is one of the most challenging tasks in the healthcare system. Analysis of white blood cells (WBCs) in microscopic slide images of blood or bone marrow plays a crucial role in early identification, assisting medical experts. For Acute Lymphocytic Leukemia (ALL), the affected blood or marrow must be analyzed by experts before the disease spreads through the body and the condition worsens. Although researchers have done considerable work in this field, few literature reviews have been published that comprehensively analyze artificial intelligence-based techniques, such as machine and deep learning, for ALL detection. This article presents a systematic review, conducted under the PRISMA guidelines, of the most recent advancements in the field. Studies gathered from online databases such as Google Scholar, ScienceDirect, and PubMed were broadly examined, and their image segmentation and detection techniques were categorized into image processing-based, traditional machine and deep learning-based, and advanced deep learning-based models. Traditional Convolutional Neural Network (CNN) models are covered first, followed by recent CNN advancements used to classify ALL into its subtypes. A critical analysis of the existing methods is provided to offer clarity on the current state of the field. Finally, the paper concludes with insights and suggestions for future research, aiming to guide new researchers in developing advanced automated systems for detecting this life-threatening disease.
Funding: Researcher Supporting Project number (RSPD2025R582), King Saud University, Riyadh, Saudi Arabia.
Abstract: Intrusion attempts against Internet of Things (IoT) devices have increased significantly in recent years, and these devices are now easy targets for hackers because of their built-in security flaws. Combining a Self-Organizing Map (SOM), which provides dimensionality reduction and inherent clustering, with Extreme Gradient Boosting (XGBoost) for multi-class classification can improve network-traffic intrusion detection. The proposed hybrid model, evaluated on the NSL-KDD dataset, outperforms the baseline models, a multilayer perceptron and a SOM-KNN (k-nearest neighbors) model, in precision, recall, and F1-score, highlighting its scalability, adaptability, and real-world applicability. The paper therefore proposes it as a highly efficient deployment strategy for resource-constrained network edges. The results reveal that precision, recall, and F1-scores rise by 10%–30% across the benign, probing, and Denial of Service (DoS) classes; in particular, the F1-scores of the DoS, probe, and benign classes improved by 7.91%, 32.62%, and 12.45%, respectively.
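The SOM-then-boosting pipeline can be sketched as follows. Assumptions beyond the abstract: a tiny hand-rolled SOM (grid size, learning rate, and neighborhood width are illustrative), scikit-learn's `GradientBoostingClassifier` as a dependency-light stand-in for XGBoost, and synthetic blobs in place of NSL-KDD feature vectors.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.ensemble import GradientBoostingClassifier

class MiniSOM:
    """Tiny Self-Organizing Map; transform() reduces each sample to the
    2-D grid coordinates of its best-matching unit (BMU)."""

    def __init__(self, rows, cols, lr=0.5, sigma=1.0, seed=0):
        self.rows, self.cols = rows, cols
        self.lr, self.sigma = lr, sigma
        self.rng = np.random.default_rng(seed)
        self.w = None
        # grid coordinates of every node, shape (rows, cols, 2)
        self.grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                         indexing="ij"), axis=-1)

    def _bmu(self, x):
        d = ((self.w - x) ** 2).sum(axis=2)
        return np.unravel_index(np.argmin(d), d.shape)

    def fit(self, X, epochs=5):
        # initialize codebook vectors from random training samples
        idx = self.rng.choice(len(X), size=self.rows * self.cols)
        self.w = X[idx].astype(float).reshape(self.rows, self.cols, -1)
        for _ in range(epochs):
            for x in X:
                r, c = self._bmu(x)
                d2 = ((self.grid - (r, c)) ** 2).sum(axis=-1)
                h = np.exp(-d2 / (2 * self.sigma ** 2))[..., None]
                self.w += self.lr * h * (x - self.w)   # neighborhood update
        return self

    def transform(self, X):
        return np.array([self._bmu(x) for x in X], dtype=float)

# Toy stand-in for NSL-KDD features: three well-separated traffic classes.
X, y = make_blobs(n_samples=300, centers=3, n_features=8,
                  cluster_std=0.5, random_state=0)
som = MiniSOM(5, 5, seed=0).fit(X, epochs=3)
Z = som.transform(X)          # 8-D features -> 2-D BMU coordinates
clf = GradientBoostingClassifier(random_state=0).fit(Z, y)
```

The design point this illustrates is why the hybrid suits constrained edges: the boosted classifier sees only the SOM's 2-D BMU coordinates rather than the raw feature vector, so both its trees and its inference cost stay small.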