Parkinson’s disease (PD) is a debilitating neurological disorder affecting over 10 million people worldwide. PD classification models using voice signals as input are common in the literature. It is believed that deep learning algorithms further enhance performance; nevertheless, this is challenging due to the small-scale and imbalanced nature of PD datasets. This paper proposes a convolutional neural network-based deep support vector machine (CNN-DSVM) that automates feature extraction using the CNN and extends the conventional SVM to a DSVM for better classification performance on small-scale PD datasets. A customized kernel function reduces the impact of biased classification towards the majority class (healthy candidates in our setting). An improved generative adversarial network (IGAN) was designed to generate additional training data to enhance the model’s performance. In the performance evaluation, the proposed algorithm achieves a sensitivity of 97.6% and a specificity of 97.3%. The performance comparison is evaluated from five perspectives, including comparisons with different data generation algorithms, feature extraction techniques, kernel functions, and existing works. Results reveal the effectiveness of the IGAN algorithm, which improves sensitivity and specificity by 4.05%–4.72% and 4.96%–5.86%, respectively, and of the CNN-DSVM algorithm, which improves sensitivity by 1.24%–57.4% and specificity by 1.04%–163% while reducing biased detection towards the majority class. Ablation experiments confirm the effectiveness of the individual components. Two future research directions are also suggested.
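The paper’s customized kernel and DSVM are not reproduced here, but the general pattern they build on — an SVM with a user-supplied kernel plus class weighting to counter majority-class bias — can be sketched with scikit-learn on synthetic stand-in data (all names, parameters, and data below are illustrative, not the authors’):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Imbalanced toy stand-in for a small voice-feature dataset (80% "healthy")
X, y = make_classification(n_samples=300, n_features=20, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def rbf_kernel(A, B, gamma=0.05):
    # Plain RBF written as a callable Gram-matrix function;
    # a customized kernel would modify this form
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# class_weight="balanced" penalizes errors on the minority (PD) class more heavily
clf = SVC(kernel=rbf_kernel, class_weight="balanced").fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(round(acc, 2))
```

Passing a callable as `kernel` is the standard scikit-learn hook for experimenting with custom kernel functions without reimplementing the SVM solver.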
Cardiovascular diseases (CVDs) remain one of the foremost causes of death globally; hence the need for advanced, automated diagnostic solutions for early detection and intervention. Traditional auscultation of cardiovascular sounds is heavily reliant on clinical expertise and subject to high variability. To counter this limitation, this study proposes an AI-driven classification system for cardiovascular sounds in which deep learning techniques automate the detection of abnormal heartbeats. We employ FastAI vision-learner-based convolutional neural networks (CNNs), including ResNet, DenseNet, VGG, ConvNeXt, SqueezeNet, and AlexNet, to classify heart sound recordings. Instead of raw waveform analysis, the proposed approach transforms preprocessed cardiovascular audio signals into spectrograms, which are well suited to capturing temporal and frequency-wise patterns. The models are trained on the PASCAL Cardiovascular Challenge dataset while taking into account recording variations, noise levels, and acoustic distortions. To demonstrate generalization, external validation was performed on Google’s AudioSet heartbeat sound data, a dataset rich in cardiovascular sounds. Comparative analysis revealed that DenseNet-201, ConvNeXt Large, and ResNet-152 delivered superior performance to the other architectures, achieving an accuracy of 81.50%, a precision of 85.50%, and an F1-score of 84.50%. We also performed statistical significance testing, such as the Wilcoxon signed-rank test, to validate performance improvements over traditional classification methods. Beyond the technical contributions, the research underscores clinical integration, outlining a pathway by which the proposed system can augment conventional electronic stethoscopes and telemedicine platforms in AI-assisted diagnostic workflows. We also discuss in detail issues of computational efficiency, model interpretability, and ethical considerations, particularly algorithmic bias stemming from imbalanced datasets and the need for real-time processing in clinical settings. The study describes a scalable, automated system combining deep learning, spectrogram-based feature extraction, and external validation that can assist healthcare providers in the early and accurate detection of cardiovascular disease. AI-driven solutions can be viable in improving access, reducing delays in diagnosis, and ultimately easing the continued global burden of heart disease.
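The spectrogram step this study relies on — turning a 1-D audio signal into a time-frequency grid suitable for an image CNN — can be sketched with SciPy on a synthetic signal (the sampling rate, window sizes, and the signal itself are assumptions, not the study’s actual preprocessing):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 2000  # Hz, assumed sampling rate for a short heart-sound clip
t = np.arange(0, 3.0, 1 / fs)
# Synthetic stand-in: low-frequency bursts gated at ~1.2 Hz, plus noise
sig = (np.sin(2 * np.pi * 60 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.9)
       + 0.05 * np.random.randn(t.size))

# STFT magnitude grid: rows = frequency bins, columns = time frames
f, frames, Sxx = spectrogram(sig, fs=fs, nperseg=256, noverlap=128)
log_S = 10 * np.log10(Sxx + 1e-10)  # dB scale, as typically fed to an image CNN
print(log_S.shape)
```

With `nperseg=256` the real-input STFT yields 129 frequency rows; the resulting 2-D array (or an image rendering of it) is what a vision CNN such as ResNet or DenseNet consumes.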
The integration of the Internet of Things (IoT) into healthcare systems improves patient care, boosts operational efficiency, and contributes to cost-effective healthcare delivery. However, overcoming several associated challenges, such as data security, interoperability, and ethical concerns, is crucial to realizing the full potential of IoT in healthcare. Real-time anomaly detection plays a key role in protecting patient data and maintaining device integrity amidst the additional security risks posed by interconnected systems. In this context, this paper presents a novel method for healthcare data privacy analysis. The technique is based on the identification of anomalies in cloud-based IoT networks and is optimized using explainable artificial intelligence. For anomaly detection, the Radial Boltzmann Gaussian Temporal Fuzzy Network (RBGTFN) is used in the privacy analysis of healthcare data. Remora Colony Swarm Optimization is then used to optimize the network. The model’s performance in identifying anomalies across a variety of healthcare data is assessed in an experimental study that measures accuracy, precision, latency, Quality of Service (QoS), and scalability. The proposed model obtained a remarkable 95% precision, 93% latency, 89% quality of service, 98% detection accuracy, and 96% scalability.
Detecting cyber attacks in networks connected to the Internet of Things (IoT) is of utmost importance because of the growing vulnerabilities in the smart environment. Conventional models, such as Naive Bayes and the support vector machine (SVM), as well as ensemble methods, such as Gradient Boosting and eXtreme Gradient Boosting (XGBoost), are often plagued by high computational costs, which makes real-time detection challenging. In this regard, we propose an attack detection approach that integrates the Visual Geometry Group 16 (VGG16) network, the Artificial Rabbits Optimizer (ARO), and a random forest model to increase detection accuracy and operational efficiency in IoT networks. In the proposed model, features are extracted from malware images with the help of VGG16. Prediction is then carried out by the random forest model using the features extracted by VGG16. Additionally, ARO is used to tune the hyper-parameters of the random forest model. With an accuracy of 96.36%, the proposed model outperforms the standard models in terms of accuracy, F1-score, precision, and recall. The comparative analysis highlights our strategy’s success, improving performance while maintaining a lower computational cost. This makes the method both effective and well suited to real-time applications.
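ARO is a metaheuristic specific to this paper; as a hedged stand-in, the same idea — searching the random-forest hyper-parameter space for the best cross-validated score — can be sketched with scikit-learn’s `RandomizedSearchCV`, with synthetic vectors standing in for VGG16 image embeddings (all sizes and parameter ranges below are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Stand-in for VGG16 embeddings of malware images: one feature vector per image
X, y = make_classification(n_samples=400, n_features=64, n_informative=16,
                           random_state=1)

# RandomizedSearchCV plays the role of the ARO metaheuristic here:
# both explore hyper-parameter space to maximize validation accuracy
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=1),
    param_distributions={"n_estimators": [50, 100, 200],
                         "max_depth": [4, 8, None]},
    n_iter=5, cv=3, random_state=1,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 2))
```

The tuned estimator is then available as `search.best_estimator_` for prediction on new embeddings.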
As quantum computing continues to advance, traditional cryptographic methods are increasingly challenged, particularly when it comes to securing critical systems like Supervisory Control and Data Acquisition (SCADA) systems. These systems are essential for monitoring and controlling industrial operations, making their security paramount. A key threat arises from Shor’s algorithm, a powerful quantum computing tool that can compromise current hash functions, leading to significant concerns about data integrity and confidentiality. To tackle these issues, this article introduces a novel Quantum-Resistant Hash Algorithm (QRHA) known as the Modular Hash Learning Algorithm (MHLA). This algorithm is meticulously crafted to withstand potential quantum attacks by incorporating advanced mathematical and algorithmic techniques, enhancing its overall security framework. Our research examines the effectiveness of MHLA in defending against both traditional and quantum-based threats, with a particular emphasis on its resilience to Shor’s algorithm. The findings from our study demonstrate that MHLA significantly enhances the security of SCADA systems in the context of quantum technology. By ensuring that sensitive data remains protected and confidential, MHLA not only fortifies individual systems but also contributes to the broader effort of safeguarding industrial and infrastructure control systems against future quantum threats. Our evaluation demonstrates that MHLA improves security by 38% against quantum-attack simulations compared to traditional hash functions while maintaining a computational efficiency of O(m⋅n⋅k + v + n). The algorithm achieved a 98% success rate in detecting data tampering during integrity testing. These findings underline MHLA’s effectiveness in enhancing SCADA system security amidst evolving quantum technologies. This research represents a crucial step toward developing more secure cryptographic systems that can adapt to the rapidly changing technological landscape, ultimately ensuring the reliability and integrity of critical infrastructure in an era where quantum computing poses a growing risk.
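MHLA itself is not publicly specified, so no attempt is made to reproduce it; the sketch below only illustrates the generic tamper-detection pattern the integrity test evaluates — store a keyed digest of a record, recompute it on read, and compare in constant time — using Python’s standard library (the key and record contents are invented examples):

```python
import hashlib
import hmac

def digest(payload: bytes, key: bytes = b"scada-integrity-key") -> bytes:
    # HMAC over SHA3-256: a conventional keyed-hash integrity check,
    # standing in for the paper's MHLA construction
    return hmac.new(key, payload, hashlib.sha3_256).digest()

record = b"pump_07,pressure=4.2bar,valve=open"
stored = digest(record)  # digest stored alongside (or apart from) the record

tampered = b"pump_07,pressure=9.9bar,valve=open"
print(hmac.compare_digest(stored, digest(record)))    # untouched record matches
print(hmac.compare_digest(stored, digest(tampered)))  # modified record does not
```

`hmac.compare_digest` avoids timing side channels that a plain `==` comparison would leak.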
Phishing attacks seriously threaten information privacy and security within the Internet of Things (IoT) ecosystem. Numerous phishing attack detection solutions have been developed for IoT; however, many of these are either not optimally efficient or lack the lightweight characteristics needed for practical application. This paper proposes and optimizes a lightweight deep-learning model for phishing attack detection. Our model employs a two-fold optimization approach: first, it utilizes the analysis-of-variance (ANOVA) F-test to select the optimal features for phishing detection, and second, it applies the Cuckoo Search algorithm to tune the hyperparameters (learning rate and dropout rate) of the deep learning model. Additionally, our model is trained in only five epochs, making it more lightweight than other deep learning (DL) and machine learning (ML) models. The proposed model achieved a phishing detection accuracy of 91%, with a precision of 92% for the ‘normal’ class and 91% for the ‘attack’ class. Moreover, the model’s recall and F1-score are 91% for both classes. We also compared our approach with traditional DL/ML models and past literature, demonstrating that our model is more accurate. This study enhances the security of sensitive information and IoT devices by offering a novel and effective approach to phishing detection.
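The ANOVA F-test feature-selection step has a direct scikit-learn counterpart; a minimal sketch on synthetic data follows (the number of features to keep, `k=10`, is an arbitrary choice here, not the authors’ tuned value):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# 30 URL-style features; keep the 10 with the highest ANOVA F-scores,
# i.e. those whose class-conditional means differ most relative to variance
X, y = make_classification(n_samples=500, n_features=30, n_informative=8,
                           random_state=0)
selector = SelectKBest(score_func=f_classif, k=10).fit(X, y)
X_sel = selector.transform(X)

print(X_sel.shape)                    # (500, 10)
print(selector.get_support().sum())   # 10 features retained
```

Downstream, only `X_sel` would be fed to the deep model, shrinking its input layer and training cost.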
Phishing attacks present a serious threat to enterprise systems, requiring advanced detection techniques to protect sensitive data. This study introduces a phishing email detection framework that combines Bidirectional Encoder Representations from Transformers (BERT) for feature extraction with a convolutional neural network (CNN) for classification, specifically designed for enterprise information systems. BERT’s linguistic capabilities are used to extract key features from email content, which are then processed by a CNN model optimized for phishing detection. Achieving an accuracy of 97.5%, our proposed model demonstrates strong proficiency in identifying phishing emails. This approach represents a significant advancement in applying deep learning to cybersecurity, setting a new benchmark for email security by effectively addressing the increasing complexity of phishing attacks.
In the rapidly evolving landscape of today’s digital economy, Financial Technology (Fintech) emerges as a transformative force, propelled by the dynamic synergy between Artificial Intelligence (AI) and Algorithmic Trading. Our in-depth investigation delves into the intricacies of merging Multi-Agent Reinforcement Learning (MARL) and Explainable AI (XAI) within Fintech, aiming to refine Algorithmic Trading strategies. Through meticulous examination, we uncover the nuanced interactions of AI-driven agents as they collaborate and compete within the financial realm, employing sophisticated deep learning techniques to enhance the clarity and adaptability of trading decisions. These AI-infused Fintech platforms harness collective intelligence to unearth trends, mitigate risks, and provide tailored financial guidance, fostering benefits for individuals and enterprises navigating the digital landscape. Our research holds the potential to revolutionize finance, opening doors to fresh avenues for investment and asset management in the digital age. Additionally, our statistical evaluation yields encouraging results, with metrics such as Accuracy = 0.85, Precision = 0.88, and F1 Score = 0.86, reaffirming the efficacy of our approach within Fintech and emphasizing its reliability and innovative prowess.
Phishing attacks present a persistent and evolving threat in the cybersecurity landscape, necessitating the development of more sophisticated detection methods. Traditional machine learning approaches to phishing detection have relied heavily on feature engineering and have often fallen short in adapting to the dynamically changing patterns of phishing Uniform Resource Locators (URLs). Addressing these challenges, we introduce a framework that integrates the sequential data processing strengths of a Recurrent Neural Network (RNN) with the hyperparameter optimization prowess of the Whale Optimization Algorithm (WOA). Our model capitalizes on an extensive Kaggle dataset featuring over 11,000 URLs, each delineated by 30 attributes. The WOA’s hyperparameter optimization enhances the RNN’s performance, as evidenced by a meticulous validation process. The results, encapsulated in precision, recall, and F1-score metrics, surpass baseline models, achieving an overall accuracy of 92%. This study not only demonstrates the RNN’s proficiency in learning complex patterns but also underscores the WOA’s effectiveness in refining machine learning models for the critical task of phishing detection.
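WOA is a population-based metaheuristic and considerably more elaborate than this, but the loop it accelerates — trying candidate hyperparameters and keeping the configuration with the best validation score — can be sketched as a plain random search over a small scikit-learn network (the RNN and the Kaggle data are replaced by stand-ins; every value below is illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in for the 30-attribute URL dataset (the real Kaggle data isn't bundled)
X, y = make_classification(n_samples=600, n_features=30, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

# Log-uniform random search over the learning rate: a toy substitute for
# WOA's population-based exploration of the same hyper-parameter space
rng = np.random.default_rng(0)
best_lr, best_acc = None, -1.0
for lr in 10 ** rng.uniform(-4, -1, size=5):
    net = MLPClassifier(hidden_layer_sizes=(16,), learning_rate_init=lr,
                        max_iter=200, random_state=0)
    acc = net.fit(X_tr, y_tr).score(X_va, y_va)
    if acc > best_acc:
        best_lr, best_acc = lr, acc

print(round(best_acc, 2))
```

A metaheuristic like WOA replaces the independent random draws with guided updates of a candidate population, but the evaluate-and-keep-the-best skeleton is the same.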
The visions of Industry 4.0 and 5.0 have reinforced the industrial environment and established artificial intelligence as a major facilitator. Diagnosing machine faults has become a solid foundation for automatically recognizing machine failure, so that timely maintenance can ensure safe operations. Transfer learning is a promising solution that can enhance a machine fault diagnosis model by borrowing pre-trained knowledge from a source model and applying it to a target model, which typically involves two datasets. In response to the availability of multiple datasets, this paper proposes selective and adaptive incremental transfer learning (SA-ITL), which fuses three algorithms: a hybrid selective algorithm, a transferability enhancement algorithm, and an incremental transfer learning algorithm. The selective algorithm enables selecting and ordering appropriate datasets for transfer learning and selecting useful knowledge to avoid negative transfer. The algorithm also adaptively adjusts the portion of training data to balance the learning rate and training time. The proposed algorithm is evaluated and analyzed using ten benchmark datasets. Compared with algorithms from existing works, SA-ITL improves the accuracy on all datasets. Ablation studies present the accuracy enhancements of SA-ITL’s components: the hybrid selective algorithm (1.22%–3.82%), the transferability enhancement algorithm (1.91%–4.15%), and the incremental transfer learning algorithm (0.605%–2.68%). They also show the benefits of enhancing the target model with heterogeneous image datasets, which widen the range of domain selection between source and target domains.
This study introduces a long short-term memory (LSTM)-based neural network model developed for detecting anomalous events in care-independent smart homes, focusing on the critical application of elderly fall detection. It balances the dataset using the Synthetic Minority Over-sampling Technique (SMOTE), effectively neutralizing the bias that unbalanced datasets introduce into time-series classification tasks. The proposed LSTM model is trained on the enriched dataset, capturing the temporal dependencies essential for anomaly recognition. The model demonstrated a significant improvement in anomaly detection, with an accuracy of 84%. The results, detailed in comprehensive classification reports and confusion matrices, showed the model’s proficiency in distinguishing between normal activities and falls. This study contributes to the advancement of smart home safety, presenting a robust framework for real-time anomaly monitoring.
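The imbalanced-learn library provides a production SMOTE implementation; to make the idea concrete, the core interpolation step can be sketched in a few lines of NumPy (the neighbour count `k`, sample counts, and data are arbitrary, and real usage should prefer the library over this sketch):

```python
import numpy as np

def smote_like(X_min, n_new, k=3, seed=0):
    """Simplified SMOTE: synthesize minority samples by interpolating
    between each chosen sample and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]          # skip the point itself
        j = rng.choice(nbrs)
        lam = rng.random()                     # random point along the segment
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

# Rare "fall" windows, stand-in features only
X_min = np.random.default_rng(1).normal(size=(20, 4))
X_new = smote_like(X_min, n_new=40)
print(X_new.shape)  # (40, 4)
```

Each synthetic point lies on a segment between two real minority samples, so the oversampled class fills out its own region of feature space instead of duplicating rows.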
Phishing attacks are more than two decades old and are used by attackers to steal passwords related to financial services. Since the first reported incident in 1995, their impact has kept increasing, and during COVID-19 the rise in digitization produced an exponential increase in the number of victims. Many deep learning and machine learning techniques are available to detect phishing attacks; however, most do not use efficient optimization techniques. In this context, our proposed model used random-forest-based techniques to select the best features, and then the Brown-Bear Optimization Algorithm (BBOA) was used to fine-tune the hyper-parameters of a convolutional neural network (CNN) model. To test our model, we used a dataset from Kaggle comprising more than 11,000 websites, with 30 features extracted from each website’s uniform resource locator (URL). The target variable has two classes: “Safe” and “Phishing.” Thanks to the use of BBOA, our proposed model detects malicious URLs with an accuracy of 93% and a precision of 92%. In addition, comparison with standard techniques, such as GRU (Gated Recurrent Unit), LSTM (Long Short-Term Memory), RNN (Recurrent Neural Network), ANN (Artificial Neural Network), SVM (Support Vector Machine), and LR (Logistic Regression), demonstrates the effectiveness of our proposed model, and comparison with past literature showcases its contribution and novelty.
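The random-forest feature-selection step has a standard scikit-learn expression via feature importances; a minimal sketch on synthetic data follows (the BBOA-tuned CNN downstream is out of scope here, and the `"median"` threshold is an illustrative choice, not the authors’ criterion):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# 30 URL features, mirroring the shape of the Kaggle phishing dataset
X, y = make_classification(n_samples=500, n_features=30, n_informative=10,
                           random_state=0)

# Random-forest importances rank the features; SelectFromModel keeps
# those whose importance clears the chosen threshold
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
sel = SelectFromModel(rf, prefit=True, threshold="median")
X_sel = sel.transform(X)
print(X_sel.shape)  # roughly half the features survive the median threshold
```

The reduced matrix `X_sel` is then what a downstream classifier (here, the BBOA-tuned CNN) would be trained on.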
Funding: The work described in this paper was fully supported by a grant from Hong Kong Metropolitan University (RIF/2021/05).
Funding: Funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. G-1436-611-309.
Funding: Funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, under grant No. RG-6-611-43; the authors therefore acknowledge with thanks DSR technical and financial support.
Funding: Funded by Institutional Fund Projects under grant No. IFPDP-261-22.
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project number PNURSP2025R343, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; the authors also thank the Deanship of Scientific Research at Northern Border University, Arar, Saudi Arabia, for funding this research work through project number NBU-FFR-2025-1092-10.
Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number PNURSP2024R343, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors extend their appreciation to the Deanship of Scientific Research at Northern Border University, Arar, Saudi Arabia, for funding this research work through project number NBU-FFR-2024-1092-09.
Fund: Supported by a grant from Hong Kong Metropolitan University (RD/2023/2.3) and by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R 343), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors also thank the Deanship of Scientific Research at Northern Border University, Arar, Kingdom of Saudi Arabia for funding this research work through the project number "NBU-FFR-2024-1092-09".
Abstract: Phishing attacks present a serious threat to enterprise systems, requiring advanced detection techniques to protect sensitive data. This study introduces a phishing email detection framework that combines Bidirectional Encoder Representations from Transformers (BERT) for feature extraction and a convolutional neural network (CNN) for classification, specifically designed for enterprise information systems. BERT's linguistic capabilities are used to extract key features from email content, which are then processed by a CNN model optimized for phishing detection. Achieving an accuracy of 97.5%, the proposed model demonstrates strong proficiency in identifying phishing emails. This approach represents a significant advancement in applying deep learning to cybersecurity, setting a new benchmark for email security by effectively addressing the increasing complexity of phishing attacks.
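A minimal numerical sketch of the classification stage is shown below, with random vectors standing in for BERT token embeddings; the filter count, kernel width, and 768-dimensional hidden size are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(128, 768))       # stand-in for BERT embeddings (seq_len x hidden)

# 1D convolution over the token sequence: kernel width 3, 16 filters.
W = rng.normal(scale=0.1, size=(16, 3, 768))
conv = np.array([[float(np.tensordot(emb[t:t + 3], W[f], axes=2))
                  for t in range(126)] for f in range(16)])   # (16 filters, 126 positions)

pooled = conv.max(axis=1)               # global max-pooling -> one value per filter
logits = pooled @ rng.normal(scale=0.1, size=(16, 2))
prob_phishing = np.exp(logits)[1] / np.exp(logits).sum()      # softmax over {normal, phishing}
print(pooled.shape)                     # (16,)
```

The max-pooling step is what lets a fixed-size classifier consume variable-length emails: each filter reports only its strongest activation anywhere in the sequence.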
Fund: This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah under Grant No. (IFPIP-1127-611-1443). The authors therefore acknowledge with thanks DSR's technical and financial support.
Abstract: In the rapidly evolving landscape of today's digital economy, Financial Technology (Fintech) emerges as a transformative force, propelled by the dynamic synergy between Artificial Intelligence (AI) and Algorithmic Trading. Our in-depth investigation delves into the intricacies of merging Multi-Agent Reinforcement Learning (MARL) and Explainable AI (XAI) within Fintech, aiming to refine Algorithmic Trading strategies. Through meticulous examination, we uncover the nuanced interactions of AI-driven agents as they collaborate and compete within the financial realm, employing sophisticated deep learning techniques to enhance the clarity and adaptability of trading decisions. These AI-infused Fintech platforms harness collective intelligence to unearth trends, mitigate risks, and provide tailored financial guidance, fostering benefits for individuals and enterprises navigating the digital landscape. Our research holds the potential to revolutionize finance, opening doors to fresh avenues for investment and asset management in the digital age. Additionally, our statistical evaluation yields encouraging results, with metrics such as Accuracy = 0.85, Precision = 0.88, and F1 Score = 0.86, reaffirming the efficacy of our approach within Fintech and emphasizing its reliability and innovative prowess.
Fund: Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R 343), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; and the Deanship of Scientific Research at Northern Border University, Arar, Kingdom of Saudi Arabia, for funding this research work through the project number "NBU-FFR-2024-1092-02".
Abstract: Phishing attacks present a persistent and evolving threat in the cybersecurity landscape, necessitating the development of more sophisticated detection methods. Traditional machine learning approaches to phishing detection have relied heavily on feature engineering and have often fallen short in adapting to the dynamically changing patterns of phishing Uniform Resource Locators (URLs). Addressing these challenges, we introduce a framework that integrates the sequential data processing strengths of a Recurrent Neural Network (RNN) with the hyperparameter optimization prowess of the Whale Optimization Algorithm (WOA). Our model capitalizes on an extensive Kaggle dataset featuring over 11,000 URLs, each delineated by 30 attributes. The WOA's hyperparameter optimization enhances the RNN's performance, as evidenced by a meticulous validation process. The results, encapsulated in precision, recall, and F1-score metrics, surpass baseline models, achieving an overall accuracy of 92%. This study not only demonstrates the RNN's proficiency in learning complex patterns but also underscores the WOA's effectiveness in refining machine learning models for the critical task of phishing detection.
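The WOA search itself can be sketched in a few lines. In the sketch below a toy quadratic stands in for the RNN's validation loss, and the two search dimensions play the role of hypothetical hyperparameters such as learning rate and dropout; the bounds and population settings are illustrative assumptions:

```python
import numpy as np

def woa_minimize(f, bounds, n_whales=20, n_iter=60, seed=0):
    """Minimal Whale Optimization Algorithm sketch (encircling + spiral moves)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(n_whales, len(lo)))
    best = min(X, key=f).copy()
    for t in range(n_iter):
        a = 2 - 2 * t / n_iter                      # control parameter decays 2 -> 0
        for i in range(n_whales):
            r = rng.random(len(lo))
            A, C = 2 * a * r - a, 2 * rng.random(len(lo))
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):           # exploit: encircle the best whale
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                               # explore: move relative to a random whale
                    rand = X[rng.integers(n_whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                                   # bubble-net spiral around the best whale
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best

# Toy stand-in for validation loss over (learning rate, dropout); optimum at (0.01, 0.3).
loss = lambda p: (p[0] - 0.01) ** 2 + (p[1] - 0.3) ** 2
lr, dropout = woa_minimize(loss, [(1e-4, 0.1), (0.0, 0.8)])
print(round(lr, 2), round(dropout, 2))
```

In the real pipeline each call to `f` would mean training and validating an RNN, which is why keeping the whale population and iteration count small matters.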
Abstract: The visions of Industry 4.0 and 5.0 have reinforced the industrial environment and made artificial intelligence a major facilitator. Diagnosing machine faults has become a solid foundation for automatically recognizing machine failure, so that timely maintenance can ensure safe operations. Transfer learning is a promising solution that can enhance a machine fault diagnosis model by borrowing pre-trained knowledge from the source model and applying it to the target model, which typically involves two datasets. In response to the availability of multiple datasets, this paper proposes selective and adaptive incremental transfer learning (SA-ITL), which fuses three algorithms: the hybrid selective algorithm, the transferability enhancement algorithm, and the incremental transfer learning algorithm. The selective algorithm chooses and orders appropriate datasets for transfer learning and selects useful knowledge to avoid negative transfer. The algorithm also adaptively adjusts the portion of training data to balance the learning rate and training time. The proposed algorithm is evaluated and analyzed using ten benchmark datasets. Compared with algorithms from existing works, SA-ITL improves the accuracy on all datasets. Ablation studies present the accuracy enhancements of SA-ITL's components: the hybrid selective algorithm (1.22%-3.82%), the transferability enhancement algorithm (1.91%-4.15%), and the incremental transfer learning algorithm (0.605%-2.68%). They also show the benefit of enhancing the target model with heterogeneous image datasets, which widen the range of domain selection between source and target domains.
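The selection step's details are not given in this abstract; the sketch below only illustrates the general idea of ranking candidate source datasets before incremental transfer, using a crude transferability proxy (mean-feature distance to the target) that is our assumption rather than SA-ITL's actual criterion:

```python
import numpy as np

def order_sources(target, sources):
    """Rank source datasets by a simple transferability proxy:
    smaller mean-feature distance to the target domain ranks first."""
    t_mu = target.mean(axis=0)
    dist = [np.linalg.norm(s.mean(axis=0) - t_mu) for s in sources]
    return sorted(range(len(sources)), key=lambda i: dist[i])

rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, (100, 16))                       # target-domain features
sources = [rng.normal(mu, 1.0, (100, 16)) for mu in (0.1, 2.0, 0.5)]
order = order_sources(target, sources)
print(order)  # nearest domain first: [0, 2, 1]
```

Transferring from the nearest domains first, and skipping the most distant ones entirely, is one simple way to realize the "select and order datasets to avoid negative transfer" idea described above.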
Fund: Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R 343), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors extend their appreciation to the Deanship of Scientific Research at Northern Border University, Arar, KSA for funding this research work through the Project Number "NBU-FFR-2024-1092-04".
Abstract: This study introduces a long short-term memory (LSTM)-based neural network model developed for detecting anomaly events in care-independent smart homes, focusing on the critical application of elderly fall detection. It balances the dataset using the Synthetic Minority Over-sampling Technique (SMOTE), effectively neutralizing the bias that unbalanced datasets introduce into time-series classification tasks. The proposed LSTM model is trained on the enriched dataset, capturing the temporal dependencies essential for anomaly recognition. The model demonstrated a significant improvement in anomaly detection, with an accuracy of 84%. The results, detailed in comprehensive classification and confusion matrices, show the model's proficiency in distinguishing between normal activities and falls. This study contributes to the advancement of smart home safety, presenting a robust framework for real-time anomaly monitoring.
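SMOTE's core step, interpolating synthetic minority samples between nearest neighbours, can be sketched directly; the data shapes and the fall-window framing below are illustrative, and a real pipeline would typically call a library implementation such as imbalanced-learn:

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=0):
    """Minimal SMOTE sketch: each synthetic point lies on the segment between
    a minority sample and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]               # skip the point itself
        j = rng.choice(nbrs)
        out.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.array(out)

rng = np.random.default_rng(1)
falls = rng.normal(loc=5.0, scale=0.2, size=(12, 3))   # minority "fall" feature windows
synthetic = smote(falls, n_new=88)                     # oversample toward a balanced set
print(synthetic.shape)  # (88, 3)
```

Because every synthetic point is a convex combination of two real minority samples, the oversampled class stays inside the region the real falls occupy rather than being duplicated verbatim.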
Fund: Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R 343), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors extend their appreciation to the Deanship of Scientific Research at Northern Border University, Arar, KSA for funding this research work through the project number NBU-FFR-2024-1092-18.
Abstract: Phishing attacks are more than two decades old and are used by attackers to steal passwords related to financial services. Since the first reported incident in 1995, their impact has kept increasing, and during COVID-19 the surge in digitization produced an exponential increase in the number of phishing victims. Many deep learning and machine learning techniques are available to detect phishing attacks; however, most do not use efficient optimization techniques. In this context, our proposed model uses random-forest-based techniques to select the best features, and the Brown-Bear Optimization Algorithm (BBOA) is then used to fine-tune the hyperparameters of a convolutional neural network (CNN) model. To test our model, we used a Kaggle dataset comprising 11,000+ websites, with 30 features extracted from each website's uniform resource locator (URL). The target variable has two classes: "Safe" and "Phishing." Owing to the use of BBOA, our proposed model detects malicious URLs with an accuracy of 93% and a precision of 92%. Comparisons with standard techniques, such as GRU (Gated Recurrent Unit), LSTM (Long Short-Term Memory), RNN (Recurrent Neural Network), ANN (Artificial Neural Network), SVM (Support Vector Machine), and LR (Logistic Regression), demonstrate the effectiveness of our proposed model, and comparison with past literature showcases its contribution and novelty.
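The random-forest feature selection stage can be sketched with scikit-learn; the synthetic data below stands in for the 30 URL features, and keeping the top 10 features is an illustrative choice, not the paper's reported cut-off:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-in for the 30-feature URL dataset (labels: safe vs phishing).
X, y = make_classification(n_samples=500, n_features=30, n_informative=8,
                           n_redundant=2, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranked = np.argsort(forest.feature_importances_)[::-1]   # most important first
best_k = ranked[:10]                     # keep the 10 most important features
X_reduced = X[:, best_k]                 # reduced input for the downstream CNN
print(X_reduced.shape)  # (500, 10)
```

Pruning low-importance features before the BBOA-tuned CNN both shrinks the search space for hyperparameter optimization and reduces inference cost.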