Journal Articles
12 articles found
1. A Convolutional Neural Network-Based Deep Support Vector Machine for Parkinson’s Disease Detection with Small-Scale and Imbalanced Datasets
Authors: Kwok Tai Chui, Varsha Arya, Brij B. Gupta, Miguel Torres-Ruiz, Razaz Waheeb Attar. 《Computers, Materials & Continua》, 2026, No. 1, pp. 1410-1432 (23 pages)
Parkinson’s disease (PD) is a debilitating neurological disorder affecting over 10 million people worldwide. PD classification models using voice signals as input are common in the literature. It is believed that using deep learning algorithms further enhances performance; nevertheless, this is challenging due to the small-scale and imbalanced nature of PD datasets. This paper proposes a convolutional neural network-based deep support vector machine (CNN-DSVM) that automates feature extraction using a CNN and extends the conventional SVM to a DSVM for better classification performance on small-scale PD datasets. A customized kernel function reduces the impact of biased classification towards the majority class (healthy candidates in our setting). An improved generative adversarial network (IGAN) was designed to generate additional training data to enhance the model’s performance. In the performance evaluation, the proposed algorithm achieves a sensitivity of 97.6% and a specificity of 97.3%. The performance comparison covers five perspectives, including comparisons with different data generation algorithms, feature extraction techniques, kernel functions, and existing works. Results reveal the effectiveness of the IGAN algorithm, which improves sensitivity and specificity by 4.05%–4.72% and 4.96%–5.86%, respectively, and of the CNN-DSVM algorithm, which improves sensitivity by 1.24%–57.4% and specificity by 1.04%–163% while reducing biased detection towards the majority class. Ablation experiments confirm the effectiveness of the individual components. Two future research directions are also suggested.
Keywords: convolutional neural network; data generation; deep support vector machine; feature extraction; generative artificial intelligence; imbalanced dataset; medical diagnosis; Parkinson’s disease; small-scale dataset
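The paper’s CNN-DSVM and its customized kernel are not publicly specified, but the general pipeline shape — deep-feature extraction followed by an SVM weighted against majority-class bias — can be sketched with scikit-learn. The synthetic data and the `class_weight="balanced"` setting below are illustrative stand-ins, not the authors’ method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import recall_score

# Small, imbalanced toy dataset standing in for CNN-extracted voice features
# (80% "healthy" majority, 20% "PD" minority).
X, y = make_classification(n_samples=200, n_features=20, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" plays the role the paper assigns to its customized
# kernel: penalizing errors on the minority (PD) class more heavily.
clf = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)
sensitivity = recall_score(y_te, clf.predict(X_te))  # recall on the PD class
print(round(sensitivity, 2))
```

On real small-scale datasets, the kernel choice and class weighting would be tuned jointly, which is where the paper’s DSVM extension departs from this plain SVC.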
2. Cardiovascular Sound Classification Using Neural Architectures and Deep Learning for Advancing Cardiac Wellness
Authors: Deepak Mahto, Sudhakar Kumar, Sunil K. Singh, Amit Chhabra, Irfan Ahmad Khan, Varsha Arya, Wadee Alhalabi, Brij B. Gupta, Bassma Saleh Alsulami. 《Computer Modeling in Engineering & Sciences》, 2025, No. 6, pp. 3743-3767 (25 pages)
Cardiovascular diseases (CVDs) remain one of the foremost causes of death globally; hence the need for advanced automated diagnostic solutions for early detection and intervention. Traditional auscultation of cardiovascular sounds is heavily reliant on clinical expertise and subject to high variability. To counter this limitation, this study proposes an AI-driven classification system for cardiovascular sounds in which deep learning techniques are used to automate the detection of abnormal heartbeats. We employ FastAI vision-learner-based convolutional neural networks (CNNs), including ResNet, DenseNet, VGG, ConvNeXt, SqueezeNet, and AlexNet, to classify heart sound recordings. Instead of raw waveform analysis, the proposed approach transforms preprocessed cardiovascular audio signals into spectrograms, which are suited to capturing temporal and frequency-wise patterns. The models are trained on the PASCAL Cardiovascular Challenge dataset while taking into consideration recording variations, noise levels, and acoustic distortions. To demonstrate generalization, external validation was performed on Google’s AudioSet Heartbeat Sound dataset, which is rich in cardiovascular sounds. Comparative analysis revealed that DenseNet-201, ConvNeXt Large, and ResNet-152 delivered superior performance to the other architectures, achieving an accuracy of 81.50%, a precision of 85.50%, and an F1-score of 84.50%. We also performed statistical significance testing, such as the Wilcoxon signed-rank test, to validate performance improvements over traditional classification methods. Beyond the technical contributions, the research underscores clinical integration, outlining a pathway by which the proposed system can augment conventional electronic stethoscopes and telemedicine platforms in AI-assisted diagnostic workflows. We also discuss in detail issues of computational efficiency, model interpretability, and ethical considerations, particularly algorithmic bias stemming from imbalanced datasets and the need for real-time processing in clinical settings. The study describes a scalable, automated system combining deep learning, spectrogram-based feature extraction, and external validation that can assist healthcare providers in the early and accurate detection of cardiovascular disease. AI-driven solutions can be viable for improving access, reducing delays in diagnosis, and ultimately reducing the continued global burden of heart disease.
Keywords: healthy society; cardiovascular system; spectrogram; FastAI; audio signals; computer vision; neural network
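The waveform-to-spectrogram step that feeds these CNNs can be illustrated with SciPy. The synthetic signal, sampling rate, and STFT window length below are assumptions for the sketch; the paper’s actual preprocessing parameters are not given in the abstract.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 2000  # assumed sampling rate (Hz) for a synthetic heart-sound stand-in
t = np.arange(0, 3.0, 1 / fs)
# Low-frequency "thumps" gated by a slow envelope, plus a little noise
sig = np.sin(2 * np.pi * 40 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.9) \
      + 0.05 * np.random.default_rng(0).standard_normal(t.size)

# STFT-based spectrogram: rows are frequency bins, columns are time frames
f, times, Sxx = spectrogram(sig, fs=fs, nperseg=256)
img = 10 * np.log10(Sxx + 1e-10)  # log scaling, as typically fed to a CNN
print(img.shape)
```

The resulting 2-D array is what a vision learner treats as an image, which is why frequency-domain structure becomes learnable by standard CNN architectures.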
3. Enhancing Healthcare Data Privacy in Cloud IoT Networks Using Anomaly Detection and Optimization with Explainable AI (ExAI)
Authors: Jitendra Kumar Samriya, Virendra Singh, Gourav Bathla, Meena Malik, Varsha Arya, Wadee Alhalabi, Brij B. Gupta. 《Computers, Materials & Continua》, 2025, No. 8, pp. 3893-3910 (18 pages)
The integration of the Internet of Things (IoT) into healthcare systems improves patient care, boosts operational efficiency, and contributes to cost-effective healthcare delivery. However, overcoming several associated challenges, such as data security, interoperability, and ethical concerns, is crucial to realizing the full potential of IoT in healthcare. Real-time anomaly detection plays a key role in protecting patient data and maintaining device integrity amidst the additional security risks posed by interconnected systems. In this context, this paper presents a novel method for healthcare data privacy analysis. The technique is based on the identification of anomalies in cloud-based IoT networks and is optimized using explainable artificial intelligence. For anomaly detection, the Radial Boltzmann Gaussian Temporal Fuzzy Network (RBGTFN) is used to carry out the privacy analysis of healthcare data. Remora Colony Swarm Optimization is then used to optimize the network. The model’s performance in identifying anomalies across a variety of healthcare data is evaluated in an experimental study measuring accuracy, precision, latency, Quality of Service (QoS), and scalability. The proposed model obtained a remarkable 95% precision, 93% latency, 89% quality of service, 98% detection accuracy, and 96% scalability.
Keywords: healthcare data privacy analysis; anomaly detection; cloud IoT network; explainable artificial intelligence; temporal fuzzy network
4. AI-Driven Malware Detection with VGG Feature Extraction and Artificial Rabbits Optimized Random Forest Model
Authors: Brij B. Gupta, Akshat Gaurav, Wadee Alhalabi, Varsha Arya, Shavi Bansal, Ching-Hsien Hsu. 《Computers, Materials & Continua》, 2025, No. 9, pp. 4755-4772 (18 pages)
Detecting cyber attacks in networks connected to the Internet of Things (IoT) is of utmost importance because of the growing vulnerabilities in the smart environment. Conventional models, such as Naive Bayes and support vector machines (SVM), as well as ensemble methods, such as Gradient Boosting and eXtreme Gradient Boosting (XGBoost), are often plagued by high computational costs, which makes real-time detection challenging. In this regard, we propose an attack detection approach that integrates Visual Geometry Group 16 (VGG16), the Artificial Rabbits Optimizer (ARO), and a Random Forest model to increase detection accuracy and operational efficiency in IoT networks. In the proposed model, features are extracted from malware images with the help of VGG16. The prediction process is carried out by the random forest model using the features extracted by VGG16. Additionally, ARO is used to tune the hyperparameters of the random forest model. With an accuracy of 96.36%, the proposed model outperforms the standard models in terms of accuracy, F1-score, precision, and recall. The comparative research highlights the success of our strategy, which improves performance while maintaining a lower computational cost, making the method both effective and suitable for real-time applications.
Keywords: malware detection; VGG feature extraction; artificial rabbits optimization; random forest model
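The second and third stages of this pipeline — a random forest classifying deep features, with its hyperparameters searched by an optimizer — can be sketched in scikit-learn. The synthetic feature matrix stands in for VGG16 embeddings, and `RandomizedSearchCV` is a hedged stand-in for the Artificial Rabbits Optimizer, which has no standard library implementation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Stand-in for VGG16 deep features extracted from malware images
X, y = make_classification(n_samples=300, n_features=64, random_state=1)

# RandomizedSearchCV plays the role of ARO here: both explore the random
# forest's hyperparameter space and keep the best-scoring configuration.
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=1),
    {"n_estimators": [50, 100, 200], "max_depth": [4, 8, None]},
    n_iter=4, cv=3, random_state=1,
).fit(X, y)
print(search.best_params_)
```

A metaheuristic such as ARO differs mainly in how it proposes the next candidate configuration; the fit/score loop around the forest is the same.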
5. Quantum-Resistant Cryptographic Primitives Using Modular Hash Learning Algorithms for Enhanced SCADA System Security
Authors: Sunil K. Singh, Sudhakar Kumar, Manraj Singh, Savita Gupta, Razaz Waheeb Attar, Varsha Arya, Ahmed Alhomoud, Brij B. Gupta. 《Computers, Materials & Continua》, 2025, No. 8, pp. 3927-3941 (15 pages)
As quantum computing continues to advance, traditional cryptographic methods are increasingly challenged, particularly when it comes to securing critical systems like Supervisory Control and Data Acquisition (SCADA) systems. These systems are essential for monitoring and controlling industrial operations, making their security paramount. A key threat arises from Shor’s algorithm, a powerful quantum computing tool that can compromise current hash functions, leading to significant concerns about data integrity and confidentiality. To tackle these issues, this article introduces a novel Quantum-Resistant Hash Algorithm (QRHA) known as the Modular Hash Learning Algorithm (MHLA). This algorithm is meticulously crafted to withstand potential quantum attacks by incorporating advanced mathematical and algorithmic techniques, enhancing its overall security framework. Our research delves into the effectiveness of MHLA in defending against both traditional and quantum-based threats, with a particular emphasis on its resilience to Shor’s algorithm. The findings from our study demonstrate that MHLA significantly enhances the security of SCADA systems in the context of quantum technology. By ensuring that sensitive data remains protected and confidential, MHLA not only fortifies individual systems but also contributes to the broader effort of safeguarding industrial and infrastructure control systems against future quantum threats. Our evaluation demonstrates that MHLA improves security by 38% against quantum-attack simulations compared to traditional hash functions while maintaining a computational efficiency of O(m⋅n⋅k+v+n). The algorithm achieved a 98% success rate in detecting data tampering during integrity testing. These findings underline MHLA’s effectiveness in enhancing SCADA system security amidst evolving quantum technologies. This research represents a crucial step toward developing more secure cryptographic systems that can adapt to the rapidly changing technological landscape, ultimately ensuring the reliability and integrity of critical infrastructure in an era where quantum computing poses a growing risk.
Keywords: hash functions; post-quantum cryptography; quantum-resistant hash functions; network security; supervisory control and data acquisition (SCADA)
6. Cuckoo Search-Optimized Deep CNN for Enhanced Cyber Security in IoT Networks
Authors: Brij B. Gupta, Akshat Gaurav, Varsha Arya, Razaz Waheeb Attar, Shavi Bansal, Ahmed Alhomoud, Kwok Tai Chui. 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 12, pp. 4109-4124 (16 pages)
Phishing attacks seriously threaten information privacy and security within the Internet of Things (IoT) ecosystem. Numerous phishing attack detection solutions have been developed for IoT; however, many of these are either not optimally efficient or lack the lightweight characteristics needed for practical application. This paper proposes and optimizes a lightweight deep-learning model for phishing attack detection. Our model employs a two-fold optimization approach: first, it utilizes the analysis of variance (ANOVA) F-test to select the optimal features for phishing detection, and second, it applies the Cuckoo Search algorithm to tune the hyperparameters (learning rate and dropout rate) of the deep learning model. Additionally, our model is trained in only five epochs, making it more lightweight than other deep learning (DL) and machine learning (ML) models. The proposed model achieved a phishing detection accuracy of 91%, with a precision of 92% for the ‘normal’ class and 91% for the ‘attack’ class. Moreover, the model’s recall and F1-score are 91% for both classes. We also compared our approach with traditional DL/ML models and past literature, demonstrating that our model is more accurate. This study enhances the security of sensitive information and IoT devices by offering a novel and effective approach to phishing detection.
Keywords: deep learning; phishing; Cuckoo Search; convolutional neural network (CNN); IoT; ANOVA F-test
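The first stage of the two-fold optimization — ANOVA F-test feature selection — maps directly onto scikit-learn’s `SelectKBest` with `f_classif`. The toy data and the choice of k below are assumptions for illustration; the Cuckoo Search stage is not sketched.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Toy stand-in for IoT phishing-traffic features
X, y = make_classification(n_samples=200, n_features=30, n_informative=8,
                           random_state=0)

# ANOVA F-test ranks each feature by between-class vs. within-class variance
# and keeps the k strongest, as in the paper's first optimization stage.
selector = SelectKBest(f_classif, k=10).fit(X, y)
X_sel = selector.transform(X)
print(X_sel.shape)
```

Shrinking the input this way is what lets the downstream CNN stay lightweight enough to converge in a handful of epochs.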
7. Advanced BERT and CNN-Based Computational Model for Phishing Detection in Enterprise Systems
Authors: Brij B. Gupta, Akshat Gaurav, Varsha Arya, Razaz Waheeb Attar, Shavi Bansal, Ahmed Alhomoud, Kwok Tai Chui. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, No. 12, pp. 2165-2183 (19 pages)
Phishing attacks present a serious threat to enterprise systems, requiring advanced detection techniques to protect sensitive data. This study introduces a phishing email detection framework that combines Bidirectional Encoder Representations from Transformers (BERT) for feature extraction and a CNN for classification, specifically designed for enterprise information systems. BERT’s linguistic capabilities are used to extract key features from email content, which are then processed by a convolutional neural network (CNN) model optimized for phishing detection. Achieving an accuracy of 97.5%, our proposed model demonstrates strong proficiency in identifying phishing emails. This approach represents a significant advancement in applying deep learning to cybersecurity, setting a new benchmark for email security by effectively addressing the increasing complexity of phishing attacks.
Keywords: phishing; BERT; convolutional neural networks; email security; deep learning
8. Unleashing the Power of Multi-Agent Reinforcement Learning for Algorithmic Trading in the Digital Financial Frontier and Enterprise Information Systems
Authors: Saket Sarin, Sunil K. Singh, Sudhakar Kumar, Shivam Goyal, Brij Bhooshan Gupta, Wadee Alhalabi, Varsha Arya. 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 8, pp. 3123-3138 (16 pages)
In the rapidly evolving landscape of today’s digital economy, Financial Technology (Fintech) emerges as a transformative force, propelled by the dynamic synergy between Artificial Intelligence (AI) and Algorithmic Trading. Our in-depth investigation delves into the intricacies of merging Multi-Agent Reinforcement Learning (MARL) and Explainable AI (XAI) within Fintech, aiming to refine Algorithmic Trading strategies. Through meticulous examination, we uncover the nuanced interactions of AI-driven agents as they collaborate and compete within the financial realm, employing sophisticated deep learning techniques to enhance the clarity and adaptability of trading decisions. These AI-infused Fintech platforms harness collective intelligence to unearth trends, mitigate risks, and provide tailored financial guidance, fostering benefits for individuals and enterprises navigating the digital landscape. Our research holds the potential to revolutionize finance, opening doors to fresh avenues for investment and asset management in the digital age. Additionally, our statistical evaluation yields encouraging results, with metrics such as Accuracy = 0.85, Precision = 0.88, and F1 Score = 0.86, reaffirming the efficacy of our approach within Fintech and emphasizing its reliability and innovative prowess.
Keywords: neurodynamic Fintech; multi-agent reinforcement learning; algorithmic trading; digital financial frontier
9. Optimized Phishing Detection with Recurrent Neural Network and Whale Optimizer Algorithm
Authors: Brij Bhooshan Gupta, Akshat Gaurav, Razaz Waheeb Attar, Varsha Arya, Ahmed Alhomoud, Kwok Tai Chui. 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 9, pp. 4895-4916 (22 pages)
Phishing attacks present a persistent and evolving threat in the cybersecurity landscape, necessitating the development of more sophisticated detection methods. Traditional machine learning approaches to phishing detection have relied heavily on feature engineering and have often fallen short in adapting to the dynamically changing patterns of phishing Uniform Resource Locators (URLs). Addressing these challenges, we introduce a framework that integrates the sequential data processing strengths of a Recurrent Neural Network (RNN) with the hyperparameter optimization prowess of the Whale Optimization Algorithm (WOA). Our model capitalizes on an extensive Kaggle dataset featuring over 11,000 URLs, each delineated by 30 attributes. The WOA’s hyperparameter optimization enhances the RNN’s performance, evidenced by a meticulous validation process. The results, encapsulated in precision, recall, and F1-score metrics, surpass baseline models, achieving an overall accuracy of 92%. This study not only demonstrates the RNN’s proficiency in learning complex patterns but also underscores the WOA’s effectiveness in refining machine learning models for the critical task of phishing detection.
Keywords: phishing detection; Recurrent Neural Network (RNN); Whale Optimization Algorithm (WOA); cybersecurity; machine learning; optimization
10. Selective and Adaptive Incremental Transfer Learning with Multiple Datasets for Machine Fault Diagnosis
Authors: Kwok Tai Chui, Brij B. Gupta, Varsha Arya, Miguel Torres-Ruiz. 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 1, pp. 1363-1379 (17 pages)
The visions of Industry 4.0 and 5.0 have reinforced the industrial environment and made artificial intelligence a major facilitator. Diagnosing machine faults has become a solid foundation for automatically recognizing machine failure, so that timely maintenance can ensure safe operations. Transfer learning is a promising solution that can enhance a machine fault diagnosis model by borrowing pre-trained knowledge from a source model and applying it to a target model, which typically involves two datasets. In response to the availability of multiple datasets, this paper proposes selective and adaptive incremental transfer learning (SA-ITL), which fuses three algorithms: the hybrid selective algorithm, the transferability enhancement algorithm, and the incremental transfer learning algorithm. It is a selective algorithm that enables selecting and ordering appropriate datasets for transfer learning and selecting useful knowledge to avoid negative transfer. The algorithm also adaptively adjusts the portion of training data to balance the learning rate and training time. The proposed algorithm is evaluated and analyzed using ten benchmark datasets. Compared with other algorithms from existing works, SA-ITL improves the accuracy on all datasets. Ablation studies present the accuracy enhancements of the SA-ITL components: the hybrid selective algorithm (1.22%–3.82%), the transferability enhancement algorithm (1.91%–4.15%), and the incremental transfer learning algorithm (0.605%–2.68%). These also show the benefits of enhancing the target model with heterogeneous image datasets that widen the range of domain selection between source and target domains.
Keywords: deep learning; incremental learning; machine fault diagnosis; negative transfer; transfer learning
11. LSTM Based Neural Network Model for Anomaly Event Detection in Care-Independent Smart Homes
Authors: Brij B. Gupta, Akshat Gaurav, Razaz Waheeb Attar, Varsha Arya, Ahmed Alhomoud, Kwok Tai Chui. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, No. 9, pp. 2689-2706 (18 pages)
This study introduces a long short-term memory (LSTM)-based neural network model developed for detecting anomaly events in care-independent smart homes, focusing on the critical application of elderly fall detection. It balances the dataset using the Synthetic Minority Over-sampling Technique (SMOTE), effectively neutralizing the bias that arises from the unbalanced datasets prevalent in time-series classification tasks. The proposed LSTM model is trained on the enriched dataset, capturing the temporal dependencies essential for anomaly recognition. The model demonstrated a significant improvement in anomaly detection, with an accuracy of 84%. The results, detailed in comprehensive classification and confusion matrices, showed the model’s proficiency in distinguishing between normal activities and falls. This study contributes to the advancement of smart home safety, presenting a robust framework for real-time anomaly monitoring.
Keywords: LSTM neural networks; anomaly detection; smart home; health care; elderly fall prevention
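The SMOTE rebalancing step used before training the LSTM can be illustrated with a minimal numpy version of the idea: synthesize new minority samples by interpolating between a minority sample and one of its minority-class neighbours. This is a simplified sketch (single nearest neighbour, random data), not the `imbalanced-learn` implementation the authors may have used.

```python
import numpy as np

def smote_like(X_min, n_new, rng):
    """Minimal SMOTE-style oversampling: each synthetic point lies on the
    segment between a minority sample and its nearest minority neighbour."""
    idx = rng.integers(0, len(X_min), n_new)
    out = []
    for i in idx:
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nn = np.argsort(d)[1]          # nearest neighbour other than itself
        gap = rng.random()             # random position along the segment
        out.append(X_min[i] + gap * (X_min[nn] - X_min[i]))
    return np.array(out)

rng = np.random.default_rng(0)
X_min = rng.normal(size=(20, 5))       # minority-class ("fall") feature windows
X_new = smote_like(X_min, n_new=80, rng=rng)
print(X_new.shape)                     # 80 synthetic minority samples
```

Training the LSTM on the union of real and synthetic minority windows is what keeps the classifier from defaulting to the majority "normal activity" class.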
12. A Hybrid CNN-Brown-Bear Optimization Framework for Enhanced Detection of URL Phishing Attacks
Authors: Brij B. Gupta, Akshat Gaurav, Razaz Waheeb Attar, Varsha Arya, Shavi Bansal, Ahmed Alhomoud, Kwok Tai Chui. 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 12, pp. 4853-4874 (22 pages)
Phishing attacks are more than two decades old; attackers use them to steal passwords related to financial services. Since the first reported incident in 1995, their impact has kept increasing, and during COVID-19, due to increased digitization, there was an exponential rise in the number of victims. Many deep learning and machine learning techniques are available to detect phishing attacks; however, most did not use efficient optimization techniques. In this context, our proposed model uses random forest-based techniques to select the best features, and the Brown-Bear Optimization Algorithm (BBOA) is then used to fine-tune the hyperparameters of the convolutional neural network (CNN) model. To test our model, we used a Kaggle dataset comprising more than 11,000 websites, with 30 features extracted from each website’s uniform resource locator (URL). The target variable has two classes: “Safe” and “Phishing.” Due to the use of BBOA, our proposed model detects malicious URLs with an accuracy of 93% and a precision of 92%. In addition, comparison with standard techniques, such as GRU (Gated Recurrent Unit), LSTM (Long Short-Term Memory), RNN (Recurrent Neural Network), ANN (Artificial Neural Network), SVM (Support Vector Machine), and LR (Logistic Regression), demonstrates the effectiveness of our proposed model, and comparison with past literature showcases its contribution and novelty.
Keywords: phishing attack; CNN; brown-bear optimization
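The random-forest feature-selection stage described here maps onto scikit-learn’s `SelectFromModel`, which keeps features whose importance exceeds a threshold. The synthetic 30-feature matrix is a stand-in for the Kaggle URL features, and the BBOA-tuned CNN stage is not sketched.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# Toy stand-in for the 30 URL features in the Kaggle dataset
X, y = make_classification(n_samples=300, n_features=30, n_informative=6,
                           random_state=2)

# Random-forest importances pick the strongest URL features before the
# (BBOA-tuned) CNN would see them; by default, features above the mean
# importance are kept.
selector = SelectFromModel(RandomForestClassifier(random_state=2)).fit(X, y)
X_sel = selector.transform(X)
print(X_sel.shape[1], "features kept")
```

Pruning weak URL features first shrinks the CNN’s input, which in turn shrinks the hyperparameter space the metaheuristic has to search.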