This paper conducts a systematic literature review (SLR) of artificial intelligence (AI) approaches for predicting and diagnosing diabetes mellitus. After reviewing the literature published from 2015 to 2025, the paper identifies the most effective AI techniques, the most used datasets, the most widely used data preprocessing techniques, and the most common open issues. The analysis finds that convolutional neural networks (CNNs) and long short-term memory (LSTM) networks are the deep learning models that have shown the highest accuracy in diabetes prediction, and that recursive feature elimination (RFE) for feature selection and SMOTE for class balancing have significantly improved model accuracy, training time, and interpretability. Amid this technological advancement, several issues persist: data imbalance, limited applicability of techniques, computational limitations, and a lack of real-time application in healthcare environments. The review also identifies the need for robust, interpretable, and scalable AI systems capable of handling large volumes of data, including real-world data, in the healthcare industry. Furthermore, it finds that such systems should be integrated with wearable health monitoring and complemented by privacy-preserving models to ensure continuous, secure, and proactive diabetes management.
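The preprocessing pairing the review highlights is easy to illustrate. The sketch below is not taken from any of the reviewed papers; it shows a typical SMOTE-plus-RFE pipeline on synthetic tabular data, with the dataset, feature counts, and base classifier all chosen as placeholder assumptions.

```python
# Illustrative sketch: SMOTE for class balancing, then RFE for feature
# selection, on a synthetic stand-in for an imbalanced diabetes dataset.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# Synthetic, imbalanced tabular data (85% negative / 15% positive class).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           weights=[0.85, 0.15], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

# SMOTE oversamples the minority class on the training split only.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)

# RFE keeps the 8 most predictive features according to a simple base model.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=8)
X_sel = selector.fit_transform(X_bal, y_bal)

clf = LogisticRegression(max_iter=1000).fit(X_sel, y_bal)
print("Held-out accuracy:", clf.score(selector.transform(X_test), y_test))
```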
The Internet of Things (IoT) and mobile technology have significantly transformed healthcare by enabling real-time monitoring and diagnosis of patients. Recognizing Medical-Related Human Activities (MRHA) is pivotal for healthcare systems, particularly for identifying actions critical to patient well-being. However, challenges such as high computational demands, low accuracy, and limited adaptability persist in Human Motion Recognition (HMR). While some studies have integrated HMR with IoT for real-time healthcare applications, little research has focused on recognizing MRHA, which is essential for effective patient monitoring. This study proposes a novel HMR method tailored for MRHA detection, leveraging multi-stage deep learning techniques integrated with IoT. The approach employs EfficientNet to extract optimized spatial features from skeleton frame sequences using seven Mobile Inverted Bottleneck Convolution (MBConv) blocks, followed by Convolutional Long Short-Term Memory (ConvLSTM) to capture spatio-temporal patterns. A classification module with global average pooling, a fully connected layer, and a dropout layer generates the final predictions. The model is evaluated on the NTU RGB+D 120 and HMDB51 datasets, focusing on MRHA such as sneezing, falling, walking, and sitting. It achieves 94.85% accuracy for cross-subject evaluations and 96.45% for cross-view evaluations on NTU RGB+D 120, along with 89.22% accuracy on HMDB51. Additionally, the system integrates IoT capabilities using a Raspberry Pi and a GSM module, delivering real-time alerts to caregivers and patients via Twilio's SMS service. This scalable and efficient solution bridges the gap between HMR and IoT, advancing patient monitoring, improving healthcare outcomes, and reducing costs.
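As a rough illustration of the staged architecture just described, the following Keras sketch chains per-frame EfficientNet features (its backbone is built from MBConv blocks), a ConvLSTM layer over the frame sequence, and the global-average-pooling/dense/dropout classification head. Frame count, image size, layer widths, and class count are placeholder assumptions, not the paper's configuration.

```python
# Minimal Keras sketch: per-frame spatial features -> ConvLSTM -> GAP head.
import tensorflow as tf
from tensorflow.keras import layers, models

frames, height, width, n_classes = 16, 128, 128, 10  # placeholders

backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, weights=None, input_shape=(height, width, 3))

inputs = layers.Input(shape=(frames, height, width, 3))
x = layers.TimeDistributed(backbone)(inputs)                  # (T, h, w, c) spatial features
x = layers.ConvLSTM2D(64, kernel_size=3, padding="same")(x)   # spatio-temporal summary
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(128, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(n_classes, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```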
Background: Coronary artery disease (CAD) is a major global health concern requiring efficient and accurate diagnostic methods. Manual interpretation of coronary computed tomography angiography (CTA) images is time-consuming and prone to interobserver variability, underscoring the need for automated segmentation and stenosis detection tools. Methods: This study presents a hybrid multi-scale 3D segmentation framework utilizing both 3D U-Net and Enhanced 3D U-Net architectures, designed to balance computational efficiency and anatomical precision. Processed CTA images from the ImageCAS dataset underwent data standardization, normalization, and augmentation. The framework applies ensemble learning to merge coarse and fine segmentation masks, followed by advanced post-processing techniques, including connected component analysis and centerline extraction, to refine vessel delineation. Stenosis regions are detected using the Enhanced 3D U-Net and morphological operations for accurate localization. Results: The proposed pipeline achieved near-perfect segmentation accuracy (0.9993) and a Dice similarity coefficient of 0.8539 for coronary artery delineation. Precision, recall, and F1 scores for stenosis detection were 0.8418, 0.8289, and 0.8397, respectively. The dual-model approach demonstrated robust performance across varied anatomical structures and effectively localized stenotic regions, indicating clear superiority over conventional models. Conclusion: This hybrid framework enables highly reliable and automated coronary artery segmentation and stenosis detection from 3D CTA images. By reducing reliance on manual interpretation and enhancing diagnostic consistency, the proposed method holds strong potential to improve clinical workflows for CAD diagnosis and management.
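For readers unfamiliar with the headline metric, the sketch below shows how the Dice similarity coefficient reported above is typically computed on binary 3D masks; the array shapes and random masks are illustrative only.

```python
# Dice similarity coefficient on binary 3D volumes (illustrative sketch).
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2 * |P intersect T| / (|P| + |T|) for binary volumes of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy example on random 3D masks.
rng = np.random.default_rng(0)
p, t = rng.integers(0, 2, (32, 64, 64)), rng.integers(0, 2, (32, 64, 64))
print("Dice:", round(dice_coefficient(p, t), 4))
```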
Liver cancer remains a leading cause of mortality worldwide, and precise diagnostic tools are essential for effective treatment planning. Liver Tumors (LTs) vary significantly in size, shape, and location, and can present with tissues of similar intensities, making automatic segmentation and classification of LTs from abdominal tomography images crucial and challenging. This review examines recent advancements in Liver Segmentation (LS) and Tumor Segmentation (TS) algorithms, highlighting their strengths and limitations regarding precision, automation, and resilience. Performance metrics are utilized to assess key detection algorithms and analytical methods, emphasizing their effectiveness and relevance in clinical contexts. The review also addresses ongoing challenges in liver tumor segmentation and identification, such as managing high variability in patient data and ensuring robustness across different imaging conditions. It suggests directions for future research, with insights into technological advancements that can enhance surgical planning and diagnostic accuracy, by comparing popular methods. This paper contributes to a comprehensive understanding of current liver tumor detection techniques, provides a roadmap for future innovations, and aims to improve diagnostic and therapeutic outcomes for liver cancer by integrating recent progress with remaining challenges.
The explosive expansion of Internet of Things (IoT) systems has increased the imperative for strong and robust cybersecurity solutions, especially to curtail Distributed Denial of Service (DDoS) attacks, which can cripple critical infrastructure. The framework presented in this paper is a new hybrid scheme that couples deep learning-based traffic classification with blockchain-enabled mitigation to provide intelligent, decentralized, real-time DDoS countermeasures in an IoT network. The proposed model uses a Convolutional Neural Network (CNN) architecture to extract deep features, fuses them with statistical features, and trains traditional machine-learning algorithms on the combined representation, which yields more accurate detection than statistical features alone. A permissioned blockchain records threat cases immutably and automatically executes mitigation measures through smart contracts to provide transparency and resilience. When tested on two datasets, BoT-IoT and IoT-23, the framework obtains a maximum F1-score of 97.5% and a false positive rate of only 1.8%, which compares favorably to other solutions in both effectiveness and response time. Our findings support the feasibility of this method as an extensible and secure paradigm for next-generation IoT security, with practical utility in mission-critical or resource-constrained settings. The work is a substantial milestone toward autonomous and trustworthy DDoS mitigation through intelligent learning and decentralized enforcement.
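The feature-fusion step described above can be sketched compactly. In the hedged example below, a random vector stands in for the CNN embedding of each traffic flow and a few summary statistics of packet sizes stand in for the statistical features; the feature names, dimensions, and the random-forest classifier are assumptions, not the paper's exact design.

```python
# Illustrative fusion of "deep" and statistical per-flow features, then a
# conventional classifier for DDoS vs. benign traffic (toy data throughout).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_flows = 1000
deep_features = rng.normal(size=(n_flows, 64))        # stand-in CNN embedding per flow
packet_sizes = rng.poisson(500, size=(n_flows, 20))   # raw per-flow packet sizes

# Simple statistical features per flow: mean, std, min, max of packet sizes.
stats = np.stack([packet_sizes.mean(1), packet_sizes.std(1),
                  packet_sizes.min(1), packet_sizes.max(1)], axis=1)

X = np.hstack([deep_features, stats])                 # feature fusion
y = rng.integers(0, 2, n_flows)                       # 0 = benign, 1 = DDoS (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("Toy accuracy:", clf.score(X_te, y_te))
```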
Internet of Things networks often suffer from early node failures and short lifespans due to energy limits, and traditional routing methods are not enough. This work proposes a new hybrid algorithm called ACOGA, which combines Ant Colony Optimization (ACO) and the Greedy Algorithm (GA): ACO finds smart paths while the greedy component makes quick decisions, improving energy use and performance. ACOGA outperforms the Hybrid Energy-Efficient (HEE) and Adaptive Lossless Data Compression (ALDC) algorithms. After 500 rounds, only 5% of ACOGA's nodes are dead, compared to 15% for HEE and 20% for ALDC. The network using ACOGA runs for 1200 rounds before the first nodes fail, whereas HEE lasts 900 rounds and ALDC only 850. ACOGA saves at least 15% more energy by better distributing the load and achieves a 98% packet delivery rate. The method works well in heterogeneous IoT networks such as Smart Water Management Systems (SWMS), whose devices have different power levels and communication ranges. The proposed model was simulated in MATLAB, and the results show that it outperforms the existing models.
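To make the ACO half of ACOGA concrete, here is a minimal sketch of the standard pheromone-and-heuristic next-hop rule with residual node energy folded into the heuristic term; the weights, toy topology, and the way the greedy step would be combined are illustrative assumptions rather than the paper's formulation.

```python
# Energy-aware ACO next-hop selection (illustrative sketch).
import random

def choose_next_hop(current, neighbors, pheromone, energy, distance,
                    alpha=1.0, beta=2.0):
    """Pick a neighbor with probability proportional to pheromone^alpha *
    heuristic^beta, where the heuristic rewards short links and high
    residual energy at the candidate node."""
    weights = []
    for n in neighbors:
        heuristic = energy[n] / (distance[(current, n)] + 1e-9)
        weights.append((pheromone[(current, n)] ** alpha) * (heuristic ** beta))
    total = sum(weights)
    return random.choices(neighbors, weights=[w / total for w in weights])[0]

# Toy topology: node 0 choosing among neighbors 1-3.
pheromone = {(0, 1): 1.0, (0, 2): 0.6, (0, 3): 0.9}
distance = {(0, 1): 12.0, (0, 2): 5.0, (0, 3): 20.0}
energy = {1: 0.8, 2: 0.9, 3: 0.3}
print("Next hop:", choose_next_hop(0, [1, 2, 3], pheromone, energy, distance))
```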
Flying Ad Hoc Networks (FANETs), which use Unmanned Aerial Vehicles (UAVs), are developing into a critical mechanism for numerous applications, such as military operations and civilian services. The dynamic nature of FANETs, with high mobility, quick node migration, and frequent topology changes, presents substantial hurdles for routing protocol development. Over the preceding few years, researchers have found that machine learning offers productive routing solutions that accommodate the defining characteristics of FANETs, namely topology change and high mobility. This paper reviews current research on routing protocols and Machine Learning (ML) approaches applied to FANETs, emphasizing developments between 2021 and 2023. The review uses the PRISMA approach to sift through the literature, filtering results from the SCOPUS database to find 82 relevant publications. The reviewed machine learning-based routing algorithms tackle the issues of high mobility, dynamic topologies, and intermittent connectivity in FANETs. Compared with conventional routing, they provide energy-efficient and fast decision-making in real-time environments, with greater fault-tolerance capabilities. These protocols aim to increase routing efficiency, flexibility, and network stability using ML's predictive and adaptive capabilities. This comprehensive review seeks to integrate existing knowledge, offer novel integration approaches, and recommend future research topics for improving routing efficiency and flexibility in FANETs. Moreover, the study highlights emerging trends in ML integration, discusses challenges faced during the review, and outlines how these hurdles can be overcome in future research.
Cloud data centres face a growing energy-management problem due to their constant increase in size, complexity, and energy consumption. Energy management is therefore a challenging issue that is critical in cloud data centres and an important research concern for many researchers. In this paper, we propose a cuckoo search (CS)-based optimisation technique for virtual machine (VM) selection, together with a novel placement algorithm that considers the different constraints. An energy consumption model and a simulation model have been implemented for the efficient selection of VMs. The proposed model, CSOA-VM, not only lessens violations at the service level agreement (SLA) level but also minimises VM migrations. The proposed model also saves energy: the performance analysis shows an energy consumption of 1.35 kWh, an SLA violation measure of 9.2, and about 268 VM migrations. Thus, there is an improvement in energy consumption of about 1.8% and a 2.1% improvement (reduction) in SLA violations in comparison to existing techniques.
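The cuckoo-search step at the heart of the VM-selection stage can be sketched as below. This is a generic CS loop (Lévy-flight moves plus abandonment of the worst nests) applied to a toy placement objective; the fitness function, constants, and dimensions are all assumptions and do not reproduce the CSOA-VM formulation.

```python
# Generic cuckoo-search loop with Lévy flights (illustrative sketch only).
import numpy as np
from math import gamma, sin, pi

def levy_step(size, beta=1.5, rng=np.random.default_rng(0)):
    """Mantegna-style Lévy step used to perturb candidate solutions."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def toy_energy_cost(x):
    # Stand-in objective: penalize placements far from a balanced load of 0.5.
    return float(np.sum((x - 0.5) ** 2))

rng = np.random.default_rng(0)
nests = rng.random((15, 8))                  # 15 candidate placements over 8 hosts
for _ in range(100):
    # Lévy-flight exploration around each nest; keep moves that improve fitness.
    trial = np.clip(nests + 0.01 * levy_step(nests.shape), 0, 1)
    improve = [toy_energy_cost(t) < toy_energy_cost(n) for t, n in zip(trial, nests)]
    nests[improve] = trial[improve]
    # Abandon the worst 25% of nests and re-seed them randomly.
    order = np.argsort([toy_energy_cost(n) for n in nests])
    nests[order[-4:]] = rng.random((4, 8))

best = min(nests, key=toy_energy_cost)
print("Best toy cost:", round(toy_energy_cost(best), 4))
```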
Reliable Cluster Head (CH) selection-based routing protocols are necessary for increasing packet transmission efficiency with optimal path discovery that never degrades transmission reliability. In this paper, the Hybrid Golden Jackal and Improved Whale Optimization Algorithm (HGJIWOA) is proposed as an effective and optimal routing protocol that guarantees efficient routing of data packets along the route established between the CHs and the mobile sink. HGJIWOA includes a dynamic lens-imaging learning strategy and novel update rules for determining the reliable route essential for broadcasting data packets, attained through fitness-estimation-based CH selection. CH selection, achieved using the Golden Jackal Optimization Algorithm (GJOA), depends entirely on the factors of maintainability, consistency, trust, delay, and energy. The adopted GJOA plays a dominant role in determining the optimal routing path based on reduced delay and minimal distance. The Improved Whale Optimization Algorithm (IWOA) is further utilized to forward data from the chosen CHs to the base station (BS) via an optimized route selected according to energy and distance. The protocol also includes a reliable route maintenance process that aids in deciding whether data should be transmitted over the selected route or re-routed. Simulation outcomes of the proposed HGJIWOA mechanism with different numbers of sensor nodes confirmed an improved mean throughput of 18.21%, sustained residual energy of 19.64%, and a minimized end-to-end delay of 21.82%, better than the competing CH selection approaches.
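The fitness-based CH selection described above boils down to scoring candidate nodes on several criteria. The sketch below shows one plausible weighted scoring of trust, energy, consistency, delay, and distance; the weights, normalization, and field names are assumptions for exposition, not the paper's fitness measure.

```python
# Illustrative weighted fitness for cluster-head selection.
def ch_fitness(node, w_energy=0.3, w_trust=0.25, w_consistency=0.15,
               w_delay=0.15, w_distance=0.15):
    """Higher is better: reward residual energy, trust, and consistency;
    penalize expected delay and distance to the mobile sink."""
    return (w_energy * node["residual_energy"]
            + w_trust * node["trust"]
            + w_consistency * node["consistency"]
            - w_delay * node["delay"]
            - w_distance * node["distance_to_sink"])

candidates = [
    {"id": 1, "residual_energy": 0.9, "trust": 0.8, "consistency": 0.7,
     "delay": 0.2, "distance_to_sink": 0.4},
    {"id": 2, "residual_energy": 0.6, "trust": 0.9, "consistency": 0.9,
     "delay": 0.1, "distance_to_sink": 0.2},
]
best = max(candidates, key=ch_fitness)
print("Selected CH:", best["id"])
```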
During its growth stage, a plant is exposed to various diseases, and early detection of crop diseases is a major challenge in the horticulture industry. Crop infections can harm total crop yield and reduce farmers' income if not identified early. Today's accepted method involves a professional plant pathologist diagnosing the disease by visual inspection of the afflicted plant leaves. This is an excellent use case for Community Assessment and Treatment Services (CATS), because the manual disease diagnosis process is lengthy and identification accuracy is directly proportional to the pathologist's skill. An alternative to conventional Machine Learning (ML) methods, which require manual identification of parameters for exact results, is to develop a prototype that can classify without pre-processing. To automatically diagnose tomato leaf disease, this research proposes a hybrid model using a Convolutional Auto-Encoder (CAE) network and the CNN-based deep learning architecture of DenseNet. To date, none of the modern systems described in this paper combine DenseNet, CAE, and a Convolutional Neural Network (CNN) to diagnose the ailments of tomato leaves automatically. The models were trained on a dataset obtained from the PlantVillage repository consisting of 9920 tomato leaves, and the combined model achieved an accuracy of 98.35%. Unlike other approaches discussed in this paper, this hybrid strategy requires fewer training components. Therefore, the time needed to train the algorithm and to automatically detect the ailments of tomato leaves is significantly reduced.
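A minimal Keras sketch of the CAE component referenced above is given below; in the full hybrid model, the encoder's compressed representation would feed the DenseNet/CNN classifier. The image size, filter counts, and training objective are placeholder assumptions rather than the paper's configuration.

```python
# Small convolutional auto-encoder trained to reconstruct leaf images (sketch).
import tensorflow as tf
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(128, 128, 3))
x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
encoded = layers.MaxPooling2D()(x)                       # 32x32x16 bottleneck

x = layers.Conv2D(16, 3, activation="relu", padding="same")(encoded)
x = layers.UpSampling2D()(x)
x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
x = layers.UpSampling2D()(x)
decoded = layers.Conv2D(3, 3, activation="sigmoid", padding="same")(x)

autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")        # reconstruction objective
autoencoder.summary()
```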
Cloud-based setups are intertwined with the Internet of Things, and advanced technologies such as blockchain are revolutionizing conventional healthcare infrastructure. This digitization has major advantages, mainly enhancing the security barriers of the green tree infrastructure. In this study, we conducted a systematic review of over 150 articles that focused exclusively on blockchain-based healthcare systems, security vulnerabilities, cyberattacks, and system limitations. In addition, we considered several solutions proposed by thousands of researchers worldwide. Our results mostly delineate sustained threats and security concerns in blockchain-based medical health infrastructures for data management, transmission, and processing. Here, we describe 17 security threats that violate the privacy and data integrity of a system, over 21 cyber-attacks on security and Quality of Service (QoS), and several system implementation problems such as node compromise, scalability, efficiency, regulatory issues, computation speed, and power consumption. We propose a multi-layered architecture for future healthcare infrastructure, classify all threats and security concerns based on these layers, and assess the suggested solutions in terms of these contingencies. Our thorough theoretical examination of several performance criteria (including confidentiality, access control, interoperability problems, and energy efficiency), together with mathematical verification, establishes the superiority of security, privacy maintenance, reliability, and efficiency over conventional systems. We also conducted in-depth comparative studies of different interoperability parameters in the blockchain models. Our research justifies the use of various positive protocols and optimization methods to improve the quality of services in e-healthcare and to overcome problems arising from laws and ethics. Determining the theoretical aspects, their scope, and future expectations encourages the design of reliable, secure, and privacy-preserving systems.
The integration of IoT and Deep Learning (DL) has significantly advanced real-time health monitoring and predictive maintenance in prognostic and health management (PHM). Electrocardiograms (ECGs) are widely used for cardiovascular disease (CVD) diagnosis, but fluctuating signal patterns make classification challenging. Computer-assisted automated diagnostic tools that enhance ECG signal categorization using sophisticated algorithms and machine learning are helping healthcare practitioners manage greater patient populations. With this motivation, the study proposes a DL framework leveraging the PTB-XL ECG dataset to improve CVD diagnosis. Deep Transfer Learning (DTL) techniques extract features, followed by feature fusion to eliminate redundancy and retain the most informative features. Utilizing the African Vulture Optimization Algorithm (AVOA) for feature selection is more effective than standard methods, as it offers an ideal balance between exploration and exploitation that results in an optimal set of features, improving classification performance while reducing redundancy. Various machine learning classifiers, including Support Vector Machine (SVM), eXtreme Gradient Boosting (XGBoost), Adaptive Boosting (AdaBoost), and Extreme Learning Machine (ELM), are used for further classification. Additionally, an ensemble model is developed to further improve accuracy. Experimental results demonstrate that the proposed model achieves the highest accuracy of 96.31%, highlighting its effectiveness in enhancing CVD diagnosis.
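The final classification stage described above can be approximated with a soft-voting ensemble, as sketched below. Random vectors stand in for the fused, AVOA-selected ECG features; ELM is omitted because it has no standard scikit-learn implementation, and the xgboost package is assumed to be installed. This is an illustrative stand-in, not the paper's ensemble.

```python
# Soft-voting ensemble of SVM, XGBoost, and AdaBoost on stand-in features.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from xgboost import XGBClassifier  # requires the xgboost package

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 40))     # stand-in for selected ECG features
y = rng.integers(0, 2, 600)        # toy binary CVD labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ensemble = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("xgb", XGBClassifier(n_estimators=100, eval_metric="logloss")),
                ("ada", AdaBoostClassifier(n_estimators=100))],
    voting="soft")
ensemble.fit(X_tr, y_tr)
print("Toy ensemble accuracy:", ensemble.score(X_te, y_te))
```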
The Sine and Wormhole Energy Whale Optimization Algorithm (SWEWOA) is an advanced solution method for Optimal Power Flow (OPF) problems in power systems equipped with Flexible AC Transmission System (FACTS) devices, which include the Thyristor-Controlled Series Compensator (TCSC), Thyristor-Controlled Phase Shifter (TCPS), and Static Var Compensator (SVC). SWEWOA extends the Whale Optimization Algorithm (WOA) by integrating sine and wormhole-energy features, thereby improving exploration and exploitation capabilities for efficient convergence in complex non-linear OPF problems. A performance evaluation of SWEWOA on the IEEE 30-bus test system under static and dynamic loading scenarios demonstrates better results than five contemporary algorithms: Adaptive Chaotic WOA (ACWOA), WOA, Chaotic WOA (CWOA), Sine Cosine Algorithm Differential Evolution (SCADE), and Hybrid Grey Wolf Optimization (HGWO). The results show that SWEWOA delivers at least 0.9% better generation-cost reduction than the other algorithms. SWEWOA also achieves the lowest minimum power loss (P_loss,min) of all tested algorithms, leading to better system energy efficiency. Under dynamic loading, SWEWOA yields a 4.38% reduction in gross costs, which demonstrates its capability to handle different operating conditions. The algorithm achieves the top Friedman Rank Test (FRT) ranking across multiple performance metrics, which verifies its consistent reliability and strong stability under changing power demands. Repeated simulations show that SWEWOA generates mean minimum generation costs (C_gen,min) and mean minimum power losses (P_loss,min) with small deviations, indicating its capability to maintain cost-effective solutions in each simulation run. These results show that SWEWOA holds great potential as an advanced optimization solution for power system operations.
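For context, the sketch below reproduces the baseline WOA position update (encircling the incumbent best and the logarithmic-spiral move) that SWEWOA builds on; the sine and wormhole-energy modifications are not public in this abstract and are not reproduced here, and the random-agent exploration branch of full WOA is omitted for brevity.

```python
# Baseline WOA position update (simplified illustrative sketch).
import numpy as np

def woa_update(position, best, a, rng):
    """One WOA move toward the current best solution."""
    r1, r2, p = rng.random(), rng.random(), rng.random()
    A, C = 2 * a * r1 - a, 2 * r2
    if p < 0.5:
        D = np.abs(C * best - position)       # encircling-prey move
        return best - A * D
    l = rng.uniform(-1, 1)
    D = np.abs(best - position)               # spiral bubble-net move
    return D * np.exp(l) * np.cos(2 * np.pi * l) + best

rng = np.random.default_rng(0)
best = np.array([0.2, -0.1, 0.4])             # toy incumbent solution
x = rng.normal(size=3)
for t in range(50):
    a = 2 - 2 * t / 50                        # a decreases linearly from 2 to 0
    x = woa_update(x, best, a, rng)
print("Final toy position:", np.round(x, 3))
```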
Skin cancer remains a significant global health challenge, and early detection is crucial to improving patient outcomes. This study presents a novel deep learning framework that combines Convolutional Neural Networks (CNNs), Transformers, and Gated Recurrent Units (GRUs) for robust skin cancer classification. To address dataset imbalance, we employ StyleGAN3-based synthetic data augmentation alongside traditional techniques. The hybrid architecture effectively captures both local and global dependencies in dermoscopic images, while the GRU component models sequential patterns. Evaluated on the HAM10000 dataset, the proposed model achieves an accuracy of 90.61%, outperforming baseline architectures such as VGG16 and ResNet. Our system also demonstrates superior precision (91.11%), recall (95.28%), and AUC (0.97), highlighting its potential as a reliable diagnostic tool for the detection of melanoma. This work advances automated skin cancer diagnosis by addressing critical challenges related to class imbalance and limited generalization in medical imaging.
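One plausible way to wire the three components named above is sketched below in Keras: convolutional features are reshaped into a token sequence, a single self-attention (Transformer-style) block captures global dependencies, and a GRU summarizes the tokens. The layer sizes, block design, and class count are assumptions for illustration, not the paper's architecture.

```python
# Hypothetical CNN -> self-attention -> GRU hybrid for dermoscopic images.
import tensorflow as tf
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(224, 224, 3))
x = layers.Conv2D(32, 3, strides=2, activation="relu")(inputs)
x = layers.Conv2D(64, 3, strides=2, activation="relu")(x)
x = layers.Conv2D(128, 3, strides=2, activation="relu")(x)    # local CNN features
tokens = layers.Reshape((-1, 128))(x)                          # spatial positions as tokens

attn = layers.MultiHeadAttention(num_heads=4, key_dim=32)(tokens, tokens)
tokens = layers.LayerNormalization()(tokens + attn)            # global dependencies

seq = layers.GRU(64)(tokens)                                   # sequential summary of tokens
outputs = layers.Dense(7, activation="softmax")(seq)           # e.g., 7 HAM10000 classes

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```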
As quantum computing continues to advance, traditional cryptographic methods are increasingly challenged, particularly when it comes to securing critical systems like Supervisory Control and Data Acquisition (SCADA) systems. These systems are essential for monitoring and controlling industrial operations, making their security paramount. A key threat arises from Shor's algorithm, a powerful quantum computing tool that can compromise current hash functions, leading to significant concerns about data integrity and confidentiality. To tackle these issues, this article introduces a novel Quantum-Resistant Hash Algorithm (QRHA) known as the Modular Hash Learning Algorithm (MHLA). This algorithm is meticulously crafted to withstand potential quantum attacks by incorporating advanced mathematical and algorithmic techniques, enhancing its overall security framework. Our research delves into the effectiveness of MHLA in defending against both traditional and quantum-based threats, with a particular emphasis on its resilience to Shor's algorithm. The findings from our study demonstrate that MHLA significantly enhances the security of SCADA systems in the context of quantum technology. By ensuring that sensitive data remains protected and confidential, MHLA not only fortifies individual systems but also contributes to the broader effort of safeguarding industrial and infrastructure control systems against future quantum threats. Our evaluation demonstrates that MHLA improves security by 38% against quantum-attack simulations compared to traditional hash functions while maintaining a computational efficiency of O(m·n·k + v + n). The algorithm achieved a 98% success rate in detecting data tampering during integrity testing. These findings underline MHLA's effectiveness in enhancing SCADA system security amidst evolving quantum technologies. This research represents a crucial step toward developing more secure cryptographic systems that can adapt to the rapidly changing technological landscape, ultimately ensuring the reliability and integrity of critical infrastructure in an era where quantum computing poses a growing risk.
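MHLA itself is not specified in this abstract, so the sketch below only illustrates the kind of tamper-detection check used in the integrity testing reported above, using a standard SHA-256 digest from Python's hashlib as a stand-in rather than the proposed algorithm; the message format is invented.

```python
# Baseline hash-based tamper detection (stand-in for MHLA, not MHLA itself).
import hashlib

def digest(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

# A SCADA-style reading is hashed at the source and re-checked at the receiver.
reading = b"pump_3;pressure=4.21;timestamp=1718000000"
stored_digest = digest(reading)

tampered = b"pump_3;pressure=9.99;timestamp=1718000000"
print("Intact reading verified:", digest(reading) == stored_digest)      # True
print("Tampered reading detected:", digest(tampered) != stored_digest)   # True
```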
The integration of the Internet of Things (IoT) into healthcare systems improves patient care, boosts operational efficiency, and contributes to cost-effective healthcare delivery. However, overcoming several associated challenges, such as data security, interoperability, and ethical concerns, is crucial to realizing the full potential of IoT in healthcare. Real-time anomaly detection plays a key role in protecting patient data and maintaining device integrity amidst the additional security risks posed by interconnected systems. In this context, this paper presents a novel method for healthcare data privacy analysis. The technique is based on the identification of anomalies in cloud-based IoT networks and is optimized using explainable artificial intelligence. For anomaly detection, a Radial Boltzmann Gaussian Temporal Fuzzy Network (RBGTFN) performs the information privacy analysis of healthcare data, and Remora Colony Swarm Optimization is then used to optimize the network. An experimental study evaluates the model's ability to identify anomalies across a variety of healthcare data, measuring its accuracy, precision, latency, Quality of Service (QoS), and scalability. The suggested model obtained a remarkable 98% detection accuracy, 95% precision, 93% latency, 89% quality of service, and 96% scalability.
Cardiovascular diseases (CVDs) remain one of the foremost causes of death globally; hence the need for advanced, automated diagnostic solutions for early detection and intervention. Traditional auscultation of cardiovascular sounds is heavily reliant on clinical expertise and subject to high variability. To counter this limitation, this study proposes an AI-driven classification system for cardiovascular sounds in which deep learning techniques are used to automate the detection of abnormal heartbeats. We employ FastAI vision-learner-based convolutional neural networks (CNNs), including ResNet, DenseNet, VGG, ConvNeXt, SqueezeNet, and AlexNet, to classify heart sound recordings. Instead of raw waveform analysis, the proposed approach transforms preprocessed cardiovascular audio signals into spectrograms, which are well suited to capturing temporal and frequency-wise patterns. The models are trained on the PASCAL Cardiovascular Challenge dataset while taking into consideration recording variations, noise levels, and acoustic distortions. To demonstrate generalization, external validation was performed on Google's AudioSet Heartbeat Sound dataset, which is rich in cardiovascular sounds. Comparative analysis revealed that DenseNet-201, ConvNeXt Large, and ResNet-152 delivered superior performance to the other architectures, achieving an accuracy of 81.50%, a precision of 85.50%, and an F1-score of 84.50%. We also performed statistical significance testing, such as the Wilcoxon signed-rank test, to validate performance improvements over traditional classification methods. Beyond the technical contributions, the research underscores clinical integration, outlining a pathway by which the proposed system can augment conventional electronic stethoscopes and telemedicine platforms in AI-assisted diagnostic workflows. We also discuss in detail issues of computational efficiency, model interpretability, and ethical considerations, particularly concerning algorithmic bias stemming from imbalanced datasets and the need for real-time processing in clinical settings. The study describes a scalable, automated system combining deep learning, spectrogram-based feature extraction, and external validation that can assist healthcare providers in the early and accurate detection of cardiovascular disease. Such AI-driven solutions can improve access, reduce delays in diagnosis, and ultimately ease the continued global burden of heart disease.
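A hedged sketch of this pipeline is shown below: a heart-sound clip is converted into a mel-spectrogram image with librosa, and a fastai vision learner is then fine-tuned on a folder of such images. The directory layout, image size, epoch count, and the choice of ResNet-152 as the backbone are placeholders, not the paper's exact configuration.

```python
# Spectrogram conversion plus a fastai vision learner (illustrative sketch).
import librosa
import librosa.display
import matplotlib.pyplot as plt
from torchvision.models import resnet152
from fastai.vision.all import ImageDataLoaders, Resize, accuracy, vision_learner

def save_spectrogram(wav_path: str, png_path: str) -> None:
    """Render one heart-sound clip as a mel-spectrogram image."""
    y, sr = librosa.load(wav_path, sr=None)
    mel_db = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr))
    plt.figure(figsize=(3, 3))
    plt.axis("off")
    librosa.display.specshow(mel_db, sr=sr)
    plt.savefig(png_path, bbox_inches="tight", pad_inches=0)
    plt.close()

# Assumes spectrogram images are arranged as spectrograms/<class_name>/<clip>.png
dls = ImageDataLoaders.from_folder("spectrograms", valid_pct=0.2,
                                   item_tfms=Resize(224))
learn = vision_learner(dls, resnet152, metrics=accuracy)
learn.fine_tune(3)  # short fine-tuning run; epoch count is a placeholder
```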
Funding for the IoT-based MRHA recognition study above: funded by the ICT Division of the Ministry of Posts, Telecommunications, and Information Technology of Bangladesh under Grant Number 56.00.0000.052.33.005.21-7 (Tracking No. 22FS15306), with support from the University of Rajshahi.
Funding for the liver tumor segmentation review above: supported by the "Intelligent Recognition Industry Service Center" as part of the Featured Areas Research Center Program under the Higher Education Sprout Project of the Ministry of Education (MOE) in Taiwan, and by the National Science and Technology Council, Taiwan, under grants 113-2221-E-224-041 and 113-2622-E-224-002. Additionally, partial support was provided by Isuzu Optics Corporation.
Data availability for the FANET routing review above: the data that support the findings of that study are openly available in the Scopus database at www.scopus.com (accessed on 07 January 2025).
Funding for the tomato leaf disease study above: funded by UKRI EPSRC Grant EP/W020408/1, Project SPRITE+2: The Security, Privacy, Identity, and Trust Engagement Network plus (phase 2), and by PhD project RS718 on Explainable AI through the UKRI EPSRC Grant-funded Doctoral Training Centre at Swansea University.
Funding for the ECG-based CVD diagnosis study above: funded by Researchers Supporting Project Number RSPD2025R947, King Saud University, Riyadh, Saudi Arabia.
Funding for the quantum-resistant SCADA hashing study above: Princess Nourah bint Abdulrahman University Researchers Supporting Project number PNURSP2025R343, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; and the Deanship of Scientific Research at Northern Border University, Arar, Saudi Arabia, for funding this research work through project number NBU-FFR-2025-1092-10.
Funding for the IoT healthcare anomaly detection study above: funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, under grant No. RG-6-611-43; the authors therefore acknowledge with thanks DSR technical and financial support.
Funding for the cardiovascular sound classification study above: funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. G-1436-611-309.