Journal Literature
8,946 articles found
A comprehensive analysis of artificial intelligence, machine learning, deep learning and computer vision in food science
Authors: Premkumar Borugadda, Hemantha Kumar Kalluri. 《Journal of Future Foods》, 2026, Issue 6, pp. 975-991 (17 pages)
Providing safe and quality food is crucial for every household and is of extreme significance in the growth of any society. It is a complex procedure that deals with all issues in the development of food processing from seed to harvest, storage, preparation, and consumption. This paper seeks to demystify the importance of artificial intelligence, machine learning (ML), deep learning (DL), and computer vision (CV) in ensuring food safety and quality. By stressing the importance of these technologies, the paper aims to give readers confidence in their potential; they are well suited to such problems and give assurance over food safety. CV is especially valuable today because it improves food processing quality and benefits both firms and researchers. Thus, at the present production stage, image processing and computer vision are incorporated into all facets of food production. In this field, DL and ML are used to identify both the type and the quality of food. Comparing data- and result-oriented perspectives reveals similarities across various approaches. The findings of this study will therefore help scholars looking for a suitable approach to assessing food quality; the paper indicates which food products have been studied by other scholars and points readers to papers for further research. DL is also being integrated into identifying the quality and safety of foods on the market. The paper closes by describing current practices and concerns in ML and DL and probable trends for future development.
Keywords: Artificial intelligence; Computer vision; Deep learning; Food quality; Food recognition; Machine learning
Intrusion Detection and Security Attacks Mitigation in Smart Cities with Integration of Human-Computer Interaction
Authors: Abeer Alnuaim. 《Computers, Materials & Continua》, 2026, Issue 1, pp. 711-743 (33 pages)
The rapid digitalization of urban infrastructure has made smart cities increasingly vulnerable to sophisticated cyber threats. In the evolving landscape of cybersecurity, the efficacy of Intrusion Detection Systems (IDS) is increasingly measured by technical performance, operational usability, and adaptability. This study introduces and rigorously evaluates a Human-Computer Interaction (HCI)-Integrated IDS, built with a Convolutional Neural Network (CNN), a CNN-Long Short-Term Memory (LSTM) hybrid, and a Random Forest (RF), against both a baseline Machine Learning (ML) model and a traditional IDS, through an extensive experimental framework encompassing many performance metrics, including detection latency, accuracy, alert prioritization, classification errors, system throughput, usability, ROC-AUC, precision-recall, confusion matrix analysis, and statistical accuracy measures. Our findings consistently demonstrate the superiority of the HCI-Integrated approach on three major datasets (CICIDS 2017, KDD Cup 1999, and UNSW-NB15). Experimental results indicate that the HCI-Integrated model outperforms its counterparts, achieving an AUC-ROC of 0.99, a precision of 0.93, and a recall of 0.96, while maintaining the lowest false positive rate (0.03) and the fastest detection time (~1.5 s). These findings validate the efficacy of incorporating HCI to enhance anomaly detection capabilities, improve responsiveness, and reduce alert fatigue in critical smart city applications. The model achieves markedly lower detection times, higher accuracy across all threat categories, reduced false positive and false negative rates, and enhanced system throughput under concurrent load. The HCI-Integrated IDS excels in alert contextualization and prioritization, offering more actionable insights while minimizing analyst fatigue. Usability feedback underscores increased analyst confidence and operational clarity, reinforcing the importance of user-centered design. These results collectively position the HCI-Integrated IDS as a highly effective, scalable, and human-aligned solution for modern threat detection environments.
Keywords: Anomaly detection; smart cities; Internet of Things (IoT); HCI; CNN; LSTM; random forest; intelligent secure solutions
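The precision, recall, and false-positive-rate figures quoted in this abstract all derive from the same confusion-matrix counts. A minimal sketch, with hypothetical counts chosen only to reproduce figures of the same magnitude (they are not the paper's data):

```python
# Toy confusion-matrix arithmetic illustrating the IDS metrics reported above.
# The counts below are hypothetical, picked to yield precision ~0.93,
# recall ~0.96, and FPR ~0.03; they are not taken from the paper.

def ids_metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # true positive rate
    fpr = fp / (fp + tn)                         # false positive rate
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, fpr, accuracy

if __name__ == "__main__":
    p, r, fpr, acc = ids_metrics(tp=960, fp=72, fn=40, tn=2328)
    print(f"precision={p:.2f} recall={r:.2f} fpr={fpr:.2f} accuracy={acc:.2f}")
```

Note that accuracy alone can look excellent while the false positive rate is still operationally painful, which is why the study reports both.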
Integration of data science with the intelligent IoT (IIoT): Current challenges and future perspectives (cited 4 times)
Authors: Inam Ullah, Deepak Adhikari, Xin Su, Francesco Palmieri, Celimuge Wu, Chang Choi. 《Digital Communications and Networks》, 2025, Issue 2, pp. 280-298 (19 pages)
The Intelligent Internet of Things (IIoT) involves real-world things that communicate or interact with each other through networking technologies, collecting data from these "things" and using intelligent approaches, such as Artificial Intelligence (AI) and machine learning, to make accurate decisions. Data science is the science of dealing with data and its relationships through intelligent approaches. Most state-of-the-art research focuses independently on either data science or the IIoT, rather than exploring their integration. To address this gap, this article provides a comprehensive survey on the advances and integration of data science with the IIoT by classifying existing IoT-based data science techniques and summarizing their various characteristics. The paper analyzes data science and big data security and privacy features, including network architecture, data protection, and continuous monitoring of data, which face challenges in various IoT-based systems. Extensive insights into IoT data security, privacy, and challenges are visualized in the context of data science for IoT. In addition, this study reveals current opportunities to enhance data science and IoT market development. The current gaps and challenges faced in the integration of data science and IoT are comprehensively presented, followed by the future outlook and possible solutions.
Keywords: Data science; Internet of Things (IoT); Big data; Communication systems; Networks; Security; Data science analytics
Forecasting Budget Estimated Using Time-Series—Case Study on College of Computer Science and Information Technology (cited 1 time)
Authors: Foriaa Ahmed Elbasheer, Samani A. Talab. 《Intelligent Information Management》, 2014, Issue 3, pp. 142-148 (7 pages)
The need for information systems in organizations and economic units is growing, as a great deal of data arises from routine processes and must be addressed to provide information useful to many users. New and distinctive management accounting systems are needed that easily meet all the financial, accounting, and management needs of institutions and individuals, while ensuring the accuracy, speed, and confidentiality of the information for which the system is designed. The paper describes a computerized system that predicts the budget for the new year from past budgets using time series analysis, keeping forecast errors to a minimum, and controls the budget during the year: it monitors expenditure, compares the plan with actual figures, calculates deviations, and measures performance through a number of budget indicators, such as the capital intensity rate, the growth rate, and the profitability ratio, giving a clear indication of whether these ratios are good or not. The system has a positive impact on information systems because it can carry out complex calculations and process paperwork faster than before, and it is highly flexible: it can make any adjustments required, helping the relevant parties control financial matters and take appropriate decisions.
Keywords: Budgets; Information; Accounting; Prediction; Time series analysis
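The core forecasting step described above, predicting next year's budget from past budgets by time series analysis, can be sketched with simple exponential smoothing. The abstract does not specify the exact method, so both the technique and the yearly figures below are illustrative assumptions:

```python
# Minimal time-series budget forecast via simple exponential smoothing.
# The smoothing method and the yearly budget figures are illustrative
# assumptions; the paper's exact model is not specified in the abstract.

def exp_smoothing_forecast(series, alpha=0.5):
    """Return the one-step-ahead forecast for the next period."""
    level = series[0]
    for value in series[1:]:
        # new level = alpha * latest observation + (1 - alpha) * old level
        level = alpha * value + (1 - alpha) * level
    return level

if __name__ == "__main__":
    past_budgets = [100.0, 110.0, 105.0, 120.0]   # hypothetical yearly budgets
    print(exp_smoothing_forecast(past_budgets))    # -> 112.5
```

The forecast can then be compared against actual spending during the year to compute the deviations the paper mentions.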
Complex adaptive systems science in the era of global sustainability crisis
Authors: Li An, B. L. Turner II, Jianguo Liu, Volker Grimm, Qi Zhang, Zhangyang Wang, Ruihong Huang. 《Geography and Sustainability》, 2025, Issue 1, pp. 14-24 (11 pages)
A significant number and range of challenges besetting sustainability can be traced to the actions and interactions of multiple autonomous agents (mostly people) and the entities they create (e.g., institutions, policies, social networks) in the corresponding social-environmental systems (SES). To address these challenges, we need to understand the decisions made and actions taken by agents, and the outcomes of their actions, including the feedbacks on the corresponding agents and environment. The science of complex adaptive systems (CAS) has significant potential to handle such challenges. We address the advantages of CAS science for sustainability by identifying the key elements and challenges in sustainability science, the generic features of CAS, and the key advances and challenges in modeling CAS. Artificial intelligence and data science combined with agent-based modeling promise to improve understanding of agents' behaviors, detect SES structures, and formulate SES mechanisms.
Keywords: Social-environmental systems; Complex adaptive systems; Sustainability science; Agent-based models; Artificial intelligence; Data science
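The agent-based modeling the abstract points to can be illustrated with a toy CAS: agents harvesting a shared, regrowing resource and adapting their effort to its state. Every rule and constant below is an illustrative assumption, not the authors' model:

```python
# Toy agent-based model of a social-environmental system: agents harvest
# a shared, regrowing resource and adapt their effort to scarcity.
# All rules and constants are illustrative assumptions.

import random

def run_abm(n_agents=10, steps=50, seed=0):
    rng = random.Random(seed)
    resource = 100.0
    efforts = [rng.uniform(0.5, 1.5) for _ in range(n_agents)]
    for _ in range(steps):
        harvest = sum(efforts)                      # agents act on the environment
        resource = max(resource - harvest, 0.0)
        resource = min(resource * 1.05 + 1.0, 200.0)  # regrowth, capped
        # feedback: agents raise effort in abundance, cut it in scarcity
        scale = 1.02 if resource > 50 else 0.9
        efforts = [e * scale for e in efforts]
    return resource, efforts

if __name__ == "__main__":
    resource, efforts = run_abm()
    print(round(resource, 1), round(sum(efforts), 2))
```

Even this tiny model exhibits the agent-environment feedback loop that makes SES outcomes hard to predict from individual rules alone.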
Prerequisite Relations among Knowledge Units: A Case Study of Computer Science Domain
Authors: Fatema Nafa, Amal Babour, Austin Melton. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2022, Issue 12, pp. 639-652 (14 pages)
The importance of prerequisites for education has recently become a promising research direction. This work proposes a statistical model for measuring dependencies between knowledge units in learning resources. Instructors are expected to present knowledge units in a semantically well-organized manner to facilitate students' understanding of the material. The proposed model reveals how the inner concepts of a knowledge unit depend on each other and on concepts outside the knowledge unit. To help capture the complexity of the inner concepts themselves, WordNet is included as an external knowledge base in this model. The goal is a model that enables instructors to evaluate whether a learning regime has hidden relationships which might hinder students' ability to understand the material. The evaluation, employing three textbooks, shows that the proposed model succeeds in discovering hidden relationships among knowledge units in learning resources and in exposing the knowledge gaps in some knowledge units.
Keywords: Knowledge graph; text mining; knowledge unit; graph mining
Laboratory or Department? Exploration and Creation in Computer Science and Technology
Authors: Ann Copestake. 《计算机教育》, 2024, Issue 3, pp. 13-16 (4 pages)
In the very beginning, the Computer Laboratory of the University of Cambridge was founded to provide a computing service for different disciplines across the university. As computer science developed into a discipline in its own right, boundaries necessarily arose between it and other disciplines, in a way that is now often detrimental to progress. It is therefore necessary to reinvigorate the relationship between computer science and other academic disciplines and to celebrate exploration and creativity in research. To do this, the structures of the academic department have to act as supporting scaffolding rather than barriers. Some examples are given of the efforts being made at the University of Cambridge to approach this problem.
Keywords: Laboratory or department; University of Cambridge; Boundaries; Exploration and creativity
E2ETCA: End-to-end training of CNN and attention ensembles for rice disease diagnosis
Authors: Md. Zasim Uddin, Md. Nadim Mahamood, Ausrukona Ray, Md. Ileas Pramanik, Fady Alnajjar, Md Atiqur Rahman Ahad. 《Journal of Integrative Agriculture》, 2026, Issue 2, pp. 756-768 (13 pages)
Rice is one of the most important staple crops globally. Rice plant diseases can severely reduce crop yields and, in extreme cases, lead to total production loss. Early diagnosis enables timely intervention, mitigates disease severity, supports effective treatment strategies, and reduces reliance on excessive pesticide use. Traditional machine learning approaches have been applied to automated rice disease diagnosis; however, these methods depend heavily on manual image preprocessing and handcrafted feature extraction, which are labor-intensive, time-consuming, and often require domain expertise. Recently, end-to-end deep learning (DL) models have been introduced for this task, but they often lack robustness and generalizability across diverse datasets. To address these limitations, we propose a novel end-to-end training framework for convolutional neural network (CNN) and attention-based model ensembles (E2ETCA). This framework integrates features from two state-of-the-art (SOTA) CNN models, Inception V3 and DenseNet-201, and an attention-based vision transformer (ViT) model. The fused features are passed through an additional fully connected layer with softmax activation for final classification. The entire process is trained end-to-end, enhancing its suitability for real-world deployment. Furthermore, we extract and analyze the learned features using a support vector machine (SVM), a traditional machine learning classifier, to provide comparative insights. We evaluate the proposed E2ETCA framework on three publicly available datasets, the Mendeley Rice Leaf Disease Image Samples dataset, the Kaggle Rice Diseases Image dataset, and the Bangladesh Rice Research Institute dataset, as well as a combined version of all three. Using standard evaluation metrics (accuracy, precision, recall, and F1-score), our framework demonstrates superior performance compared to existing SOTA methods in rice disease diagnosis, with potential applicability to other agricultural disease detection tasks.
Keywords: rice disease diagnosis; ensemble method; CNN-based model; end-to-end model; Inception model; DenseNet model; vision transformer model; attention-based model; support vector machine
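The fusion step the abstract describes, concatenating backbone features and classifying with a fully connected layer plus softmax, reduces to a few lines. The tiny random vectors below stand in for real Inception/DenseNet/ViT features, and the layer sizes are arbitrary assumptions:

```python
# Late-fusion sketch in the spirit of an ensemble like E2ETCA: feature
# vectors from several backbones are concatenated and passed through one
# fully connected layer with softmax. Vectors and weights are toy stand-ins.

import math
import random

def softmax(z):
    m = max(z)                                   # subtract max for stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def fused_classify(feature_vectors, weights, bias):
    fused = [v for vec in feature_vectors for v in vec]   # concatenation
    logits = [sum(w * x for w, x in zip(row, fused)) + b
              for row, b in zip(weights, bias)]
    return softmax(logits)

if __name__ == "__main__":
    rng = random.Random(0)
    cnn_a = [rng.random() for _ in range(4)]     # stand-in Inception features
    cnn_b = [rng.random() for _ in range(4)]     # stand-in DenseNet features
    vit = [rng.random() for _ in range(4)]       # stand-in ViT features
    n_classes, dim = 3, 12
    w = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_classes)]
    probs = fused_classify([cnn_a, cnn_b, vit], w, [0.0] * n_classes)
    print([round(p, 3) for p in probs])
```

In the paper's end-to-end setting, the backbones and this fusion layer would be trained jointly rather than with fixed weights as here.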
A Survey of Federated Learning: Advances in Architecture, Synchronization, and Security Threats
Authors: Faisal Mahmud, Fahim Mahmud, Rashedur M. Rahman. 《Computers, Materials & Continua》, 2026, Issue 3, pp. 1-87 (87 pages)
Federated Learning (FL) has become a leading decentralized solution that enables multiple clients to train a model in a collaborative environment without directly sharing raw data, making it suitable for privacy-sensitive applications such as healthcare, finance, and smart systems. As the field continues to evolve, the research landscape has become more complex and scattered, covering different system designs, training methods, and privacy techniques. This survey is organized around three core challenges: how data is distributed, how models are synchronized, and how to defend against attacks. It provides a structured and up-to-date review of FL research from 2023 to 2025, offering a unified taxonomy that categorizes works by data distribution (horizontal FL, vertical FL, federated transfer learning, and personalized FL), training synchronization (synchronous and asynchronous FL), optimization strategies, and threat models (data leakage and poisoning attacks). In particular, we summarize the latest contributions in vertical FL frameworks for secure multi-party learning, communication-efficient horizontal FL, and domain-adaptive federated transfer learning. Furthermore, we examine synchronization techniques addressing system heterogeneity, including straggler mitigation in synchronous FL and staleness management in asynchronous FL. The survey covers security threats in FL, such as gradient inversion, membership inference, and poisoning attacks, as well as defense strategies including privacy-preserving aggregation and anomaly detection. The paper concludes by outlining unresolved issues and highlighting challenges in personalized models, scalability, and real-world adoption.
Keywords: Federated learning (FL); horizontal federated learning (HFL); vertical federated learning (VFL); federated transfer learning (FTL); personalized federated learning; synchronous federated learning (SFL); asynchronous federated learning (AFL); data leakage; poisoning attacks; privacy-preserving machine learning
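Most systems in surveys like this build on federated averaging (FedAvg), in which the server combines client updates weighted by local dataset size. A minimal sketch, with plain Python lists standing in for real model weights:

```python
# Federated averaging (FedAvg) aggregation step: the server averages
# client parameter vectors weighted by each client's local dataset size.
# Models are plain lists of floats here; no real training is performed.

def fed_avg(client_params, client_sizes):
    total = sum(client_sizes)
    dim = len(client_params[0])
    agg = [0.0] * dim
    for params, size in zip(client_params, client_sizes):
        for i, p in enumerate(params):
            agg[i] += p * (size / total)         # size-weighted contribution
    return agg

if __name__ == "__main__":
    clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
    sizes = [10, 10, 20]                         # third client has more data
    print(fed_avg(clients, sizes))               # -> [3.5, 4.5]
```

The poisoning attacks the survey discusses target exactly this step: one malicious client can skew the weighted average, which motivates the robust-aggregation defenses it reviews.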
A brief review on comparative analysis of IoT-based healthcare system for breast cancer prediction
Authors: Krishna Murari, Rajiv Ranjan Suman. 《Medical Data Mining》, 2026, Issue 1, pp. 46-58 (13 pages)
The integration of machine learning (ML) technology with Internet of Things (IoT) systems is producing essential changes in healthcare operations. Healthcare IoT (H-IoT) technology lets healthcare personnel track patients around the clock and provides proactive statistical findings and precise medical diagnoses that enhance healthcare performance. This study examines how ML might support IoT-based healthcare systems, namely in the areas of prognostic systems, disease detection, patient tracking, and healthcare operations control. The study looks at the benefits and drawbacks of several machine learning techniques for H-IoT applications. It also examines fundamental problems, such as data security and cyber threats, as well as the high processing demands these systems face. Alongside this, the paper discusses the advantages of all the technologies involved, including machine learning, deep learning, and the Internet of Things, as well as the significant difficulties that arise when integrating them into healthcare forecasting.
Keywords: IoT healthcare system; machine learning; breast cancer prediction; medical data mining; security challenges
A Comparative Benchmark of Machine and Deep Learning for Cyberattack Detection in IoT Networks
Authors: Enzo Hoummady, Fehmi Jaafar. 《Computers, Materials & Continua》, 2026, Issue 4, pp. 1070-1092 (23 pages)
With the proliferation of Internet of Things (IoT) devices, securing these interconnected systems against cyberattacks has become a critical challenge. Traditional security paradigms often fail to cope with the scale and diversity of IoT network traffic. This paper presents a comparative benchmark of classic machine learning (ML) and state-of-the-art deep learning (DL) algorithms for IoT intrusion detection. Our methodology employs a two-phased approach: a preliminary pilot study using a custom-generated dataset to establish baselines, followed by a comprehensive evaluation on the large-scale CIC IoT Dataset 2023. We benchmarked algorithms including Random Forest, XGBoost, CNN, and stacked LSTM. The results indicate that while top-performing models from both categories achieve over 99% classification accuracy, this metric masks a crucial performance trade-off. We demonstrate that tree-based ML ensembles exhibit superior precision (91%) in identifying benign traffic, making them effective at reducing false positives. Conversely, DL models demonstrate superior recall (96%), making them better suited to minimizing the interruption of legitimate traffic. We conclude that the selection of an optimal model is not merely a matter of maximizing accuracy but a strategic choice dependent on the specific security priority: either minimizing false alarms or ensuring service availability. This work provides a practical framework for deploying context-aware security solutions in diverse IoT environments.
Keywords: Internet of Things; deep learning; abnormal network traffic; cyberattacks; machine learning
AI Agents in Finance and Fintech: A Scientific Review of Agent-Based Systems, Applications, and Future Horizons
Authors: Maryan Rizinski, Dimitar Trajanov. 《Computers, Materials & Continua》, 2026, Issue 1, pp. 173-206 (34 pages)
Artificial intelligence (AI) is reshaping financial systems and services, as intelligent AI agents increasingly form the foundation of autonomous, goal-driven systems capable of reasoning, learning, and action. This review synthesizes recent research and developments in the application of AI agents across core financial domains. Specifically, it covers the deployment of agent-based AI in algorithmic trading, fraud detection, credit risk assessment, robo-advisory, and regulatory compliance (RegTech). The review focuses on advanced agent-based methodologies, including reinforcement learning, multi-agent systems, and autonomous decision-making frameworks, particularly those leveraging large language models (LLMs), contrasting these with traditional AI or purely statistical models. Our primary goals are to consolidate current knowledge, identify significant trends and architectural approaches, review the practical efficiency and impact of current applications, and delineate key challenges and promising future research directions. The increasing sophistication of AI agents offers unprecedented opportunities for innovation in finance, yet presents complex technical, ethical, and regulatory challenges that demand careful consideration and proactive strategies. This review aims to provide a comprehensive understanding of this rapidly evolving landscape, highlighting the role of agent-based AI in the ongoing transformation of the financial industry, and is intended to serve financial institutions, regulators, investors, analysts, researchers, and other key stakeholders in the financial ecosystem.
Keywords: Artificial intelligence; AI agents; agentic architectures; finance; fintech; financial services
BearFusionNet: A Multi-Stream Attention-Based Deep Learning Framework with Explainable AI for Accurate Detection of Bearing Casting Defects
Authors: Md. Ehsanul Haque, Md. Nurul Absur, Fahmid Al Farid, Md Kamrul Siam, Jia Uddin, Hezerul Abdul Karim. 《Computers, Materials & Continua》, 2026, Issue 3, pp. 845-871 (27 pages)
Manual inspection of bearing casting defects is impractical and unreliable, particularly for micro-level anomalies that lead to major defects at scale. To address these challenges, we propose BearFusionNet, a multi-stream attention-based deep learning architecture that merges DenseNet201 and MobileNetV2 for feature extraction with a classification head inspired by VGG19. This hybrid design extracts rich representations at different scales, backed by a preprocessing pipeline that brings defect saliency to the fore through contrast adjustment, denoising, and edge detection. Multi-head self-attention enhances feature fusion, enabling the model to capture both large and small spatial features. BearFusionNet achieves an accuracy of 99.66% and a Cohen's kappa score of 0.9929 on Kaggle's Real-life Industrial Casting Defects dataset. McNemar's and Wilcoxon signed-rank statistical tests, as well as five-fold cross-validation, are employed to assess the robustness of the proposed model. To interpret the model, we adopt Grad-CAM visualizations, the state-of-the-art standard. Furthermore, we deploy BearFusionNet as a web-based system for near real-time inference (5-6 s per prediction), enabling quick yet accurate detection with visual explanations. Overall, BearFusionNet is an interpretable, accurate, and deployable solution for automatically detecting casting defects, a significant advance for the smart industrial environment.
Keywords: Bearing casting defects; defect classification; fault detection; quality inspection of bearings; Industry 4.0
Efficient Arabic Essay Scoring with Hybrid Models: Feature Selection, Data Optimization, and Performance Trade-Offs
Authors: Mohamed Ezz, Meshrif Alruily, Ayman Mohamed Mostafa, Alaa S. Alaerjan, Bader Aldughayfiq, Hisham Allahem, Abdulaziz Shehab. 《Computers, Materials & Continua》, 2026, Issue 1, pp. 2274-2301 (28 pages)
Automated essay scoring (AES) systems have gained significant importance in educational settings, offering a scalable, efficient, and objective method for evaluating student essays. However, developing AES systems for Arabic poses distinct challenges due to the language's complex morphology, diglossia, and the scarcity of annotated datasets. This paper presents a hybrid approach to Arabic AES, combining text-based, vector-based, and embedding-based similarity measures to improve essay scoring accuracy while minimizing the training data required. Using a large Arabic essay dataset categorized into thematic groups, the study conducted four experiments to evaluate the impact of feature selection, data size, and model performance. Experiment 1 established a baseline using a non-machine-learning approach, selecting the top-N correlated features to predict essay scores. The subsequent experiments employed 5-fold cross-validation. Experiment 2 showed that combining embedding-based, text-based, and vector-based features in a Random Forest (RF) model achieved an R2 of 88.92% and an accuracy of 83.3% within a 0.5-point tolerance. Experiment 3 further refined the feature selection process, demonstrating that 19 correlated features yielded optimal results, improving R2 to 88.95%. In Experiment 4, an optimal data-efficiency training approach was introduced, with training data portions increased from 5% to 50%. The study found that using just 10% of the data achieved near-peak performance, with an R2 of 85.49%, an effective trade-off between performance and computational cost. These findings highlight the potential of the hybrid approach for developing scalable Arabic AES systems, especially in low-resource environments, addressing linguistic challenges while ensuring efficient data usage.
Keywords: Automated essay scoring; text-based features; vector-based features; embedding-based features; feature selection; optimal data efficiency
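The top-N correlated feature selection used in Experiments 1 and 3 can be sketched as ranking features by the absolute Pearson correlation of each feature column with the essay scores and keeping the best N. The feature names and values below are hypothetical toy data:

```python
# Correlation-based feature selection sketch: rank feature columns by
# |Pearson r| against the target scores and keep the top N. The feature
# names and values are hypothetical, not the paper's data.

import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def top_n_features(feature_columns, scores, n):
    ranked = sorted(feature_columns,
                    key=lambda name: abs(pearson(feature_columns[name], scores)),
                    reverse=True)
    return ranked[:n]

if __name__ == "__main__":
    scores = [1.0, 2.0, 3.0, 4.0]
    features = {
        "text_sim":  [1.1, 1.9, 3.2, 3.9],   # strongly correlated with scores
        "len_ratio": [2.0, 2.1, 1.9, 2.2],   # weakly correlated
        "embed_sim": [0.9, 2.2, 2.8, 4.1],
    }
    print(top_n_features(features, scores, n=2))
```

Ranking by a simple correlation like this is what makes Experiment 1 a non-machine-learning baseline: no model is trained, only feature-target statistics are computed.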
Towards Decentralized IoT Security: Optimized Detection of Zero-Day Multi-Class Cyber-Attacks Using Deep Federated Learning
Authors: Misbah Anwer, Ghufran Ahmed, Maha Abdelhaq, Raed Alsaqour, Shahid Hussain, Adnan Akhunzada. 《Computers, Materials & Continua》, 2026, Issue 1, pp. 744-758 (15 pages)
The exponential growth of the Internet of Things (IoT) has introduced significant security challenges, with zero-day attacks emerging as one of the most critical and challenging threats. Traditional Machine Learning (ML) and Deep Learning (DL) techniques have demonstrated promising early detection capabilities. However, their effectiveness is limited when handling the vast volumes of IoT-generated data due to scalability constraints, high computational costs, and the costly, time-intensive process of data labeling. To address these challenges, this study proposes a Federated Learning (FL) framework that leverages collaborative and hybrid supervised learning to enhance cyber threat detection in IoT networks. By employing Deep Neural Networks (DNNs) and decentralized model training, the approach reduces computational complexity while improving detection accuracy. The proposed model demonstrates robust performance, achieving accuracies of 94.34%, 99.95%, and 87.94% on the publicly available Kitsune, Bot-IoT, and UNSW-NB15 datasets, respectively. Furthermore, its ability to detect zero-day attacks is validated through evaluations on two additional benchmark datasets, TON-IoT and IoT-23, using a Deep Federated Learning (DFL) framework, underscoring the generalization and effectiveness of the model in heterogeneous and decentralized IoT environments. Experimental results demonstrate superior performance over existing methods, establishing the proposed framework as an efficient and scalable solution for IoT security.
Keywords: Cyber-attack; intrusion detection system (IDS); deep federated learning (DFL); zero-day attack; distributed denial of service (DDoS); multi-class; Internet of Things (IoT)
Hybrid Malware Detection Model for Internet of Things Environment
16
作者 Abdul Rahaman Wahab Sait Yazeed Alkhurayyif 《Computers, Materials & Continua》 2026年第3期1867-1894,共28页
Malware poses a significant threat to the Internet of Things(IoT).It enables unauthorized access to devices in the IoT environment.The lack of unique architectural standards causes challenges in developing robust malw... Malware poses a significant threat to the Internet of Things(IoT).It enables unauthorized access to devices in the IoT environment.The lack of unique architectural standards causes challenges in developing robust malware detection(MD)models.The existing models demand substantial computational resources.This study intends to build a lightweight MD model to detect anomalies in IoT networks.The authors develop a transformation technique,converting the malware binaries into images.MobileNet V2 is fine-tuned using improved grey wolf optimization(IGWO)to extract crucial features of malicious and benign samples.The ResNeXt model is combined with the Linformer’s attention mechanism to identify Malware features.A fully connected layer is integrated with gradientweighted class activation mapping(Grad-CAM)in order to facilitate an interpretable classification model.The proposed model is evaluated using the IoT malware and the IoT-23 datasets.The model performs well on the two datasets with an accuracy of 98.94%,precision of 98.46%,recall of 98.11%,and F1-score of 98.28%on the IoT malware dataset,and an accuracy of 98.23%,precision of 96.80%,recall of 96.64%,and F1-score of 96.71%on the IoT-23 dataset,respectively.The findings indicate that the model has a high standard of classification.The lightweight architecture enables efficient deployment with an inference time of 1.42 s.Inference time has no direct impact on accuracy,precision,recall,or F1-score.However,the inference speed would warrant timely detection in latency-sensitive IoT applications.By achieving a remarkable result,the proposed study offers a comprehensive solution:a scalable,interpretable,and computationally efficient MD model for the evolving IoT landscape. 展开更多
Keywords: deep learning; malware; convolutional neural network; ResNeXt; IoT; malware image classification
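The abstract above mentions transforming malware binaries into images before feature extraction, but does not give the exact mapping. A common byte-to-grayscale scheme (one byte per pixel, rows of a fixed width, zero-padded tail) can be sketched as follows; the width of 64 and the function name are illustrative assumptions, not the paper's implementation:

```python
import math

import numpy as np


def binary_to_image(data: bytes, width: int = 64) -> np.ndarray:
    """Map raw bytes to a 2-D grayscale image: one byte becomes one pixel,
    rows have a fixed width, and the last row is zero-padded."""
    buf = np.frombuffer(data, dtype=np.uint8)
    height = math.ceil(len(buf) / width)
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[: len(buf)] = buf
    return padded.reshape(height, width)


# 256 bytes with values 0..255 -> a 16x16 grayscale ramp
img = binary_to_image(bytes(range(256)), width=16)
```

The resulting array can be fed to any image backbone (e.g., a MobileNet-style network) after resizing and channel replication.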
An Integrated DNN-FEA Approach for Inverse Identification of Passive, Heterogeneous Material Parameters of Left Ventricular Myocardium
17
Authors: Zhuofan Li, Daniel H. Pak, James S. Duncan, Liang Liang, Minliang Liu 《Computer Modeling in Engineering & Sciences》 2026, Issue 1, pp. 319-344 (26 pages)
Patient-specific finite element analysis (FEA) is a promising tool for noninvasive quantification of cardiac and vascular structural mechanics in vivo. However, inverse material property identification using FEA, which requires iteratively solving nonlinear hyperelasticity problems, is computationally expensive, which limits the ability to provide timely patient-specific insights to clinicians. In this study, we present an inverse material parameter identification strategy that integrates deep neural networks (DNNs) with FEA, namely inverse DNN-FEA. In this framework, a DNN encodes the spatial distribution of material parameters and effectively regularizes the inverse solution, which aims to reduce susceptibility to the local optima that often arise in heterogeneous nonlinear hyperelastic problems. Consequently, inverse DNN-FEA enables identification of material parameters at the element level. For validation, we applied DNN-FEA to identify four spatially varying passive Holzapfel-Ogden material parameters of the left ventricular myocardium in synthetic benchmark cases with a clinically derived geometry. To evaluate the benefit of DNN integration, a baseline FEA-only solver implemented in PyTorch was used for comparison. Results demonstrated that DNN-FEA achieved substantially lower average errors in parameter identification than FEA (case 1, DNN-FEA: 0.37%-2.15% vs. FEA: 2.64%-12.91%). The results also demonstrate that the same DNN architecture is capable of identifying a different spatial material property distribution (case 2, DNN-FEA: 0.03%-0.60% vs. FEA: 0.93%-16.25%). These findings suggest that DNN-FEA provides an accurate framework for inverse identification of heterogeneous myocardial material properties. This approach may facilitate future applications in patient-specific modeling based on in vivo clinical imaging and could be extended to other biomechanical simulation problems.
Keywords: inverse method; deep neural network; finite element analysis; left ventricular myocardium
Model Agnostic Meta Learning Ensemble Based Prediction of Motor Imagery Tasks Using EEG Signals
18
Authors: Fazal Ur Rehman, Yazeed Alkhrijah, Syed Muhammad Usman, Muhammad Irfan 《Computer Modeling in Engineering & Sciences》 2026, Issue 2, pp. 1018-1042 (25 pages)
Automated detection of Motor Imagery (MI) tasks is extremely useful for prosthetic arms and legs of stroke patients during rehabilitation. MI tasks can be predicted from Electroencephalogram (EEG) signals recorded by placing electrodes on the scalp; however, accurate prediction remains a challenge due to noise incurred during EEG recording, the need to extract a feature vector with high interclass variance, and accurate classification. The proposed method consists of preprocessing, feature extraction, and classification. First, EEG signals are denoised using a bandpass filter followed by Independent Component Analysis (ICA). Multiple channels are combined to form a single surrogate channel. Short-Time Fourier Transform (STFT) is then applied to convert time-domain EEG signals into the frequency domain. Handcrafted and automated features are extracted from the EEG signals and concatenated into a single feature vector. We propose a customized two-dimensional Convolutional Neural Network (CNN) for automated feature extraction with high interclass variance. Feature selection is performed using Particle Swarm Optimization (PSO) to obtain optimal features. The final feature vector is passed to three different classifiers: Support Vector Machine (SVM), Random Forest (RF), and Long Short-Term Memory (LSTM). The final decision is made using Model-Agnostic Meta Learning (MAML). The proposed method has been tested on two datasets, PhysioNet and BCI Competition IV-2a, and achieved better accuracy and F1 score than existing state-of-the-art methods: 96% on the PhysioNet dataset and 95.5% on the BCI Competition IV dataset 2a. We also present SHapley Additive exPlanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM) explainability techniques to enhance model interpretability in a clinical setting.
Keywords: motor imagery (MI); electroencephalogram (EEG); 2D-CNN; feature selection; explainable artificial intelligence (XAI); particle swarm optimization (PSO)
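The pipeline above converts the combined surrogate EEG channel to the frequency domain with an STFT. A minimal NumPy-only STFT (Hann-windowed frames, magnitude spectrogram) is sketched below on a synthetic 10 Hz mu-band tone; the window/hop sizes are illustrative assumptions, and the 160 Hz rate is the sampling rate of the PhysioNet motor-imagery recordings:

```python
import numpy as np


def stft(signal: np.ndarray, fs: float, win: int = 64, hop: int = 32):
    """Minimal STFT: Hann-windowed overlapping frames -> magnitude spectrogram.

    Returns (spec, freqs) where spec has shape (freq bins, frames)."""
    window = np.hanning(win)
    n_frames = 1 + (len(signal) - win) // hop
    frames = np.stack(
        [signal[i * hop : i * hop + win] * window for i in range(n_frames)]
    )
    spec = np.abs(np.fft.rfft(frames, axis=1))  # (frames, win//2 + 1)
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    return spec.T, freqs


fs = 160                         # PhysioNet MI recordings are sampled at 160 Hz
t = np.arange(2 * fs) / fs       # 2 s of signal
x = np.sin(2 * np.pi * 10.0 * t) # surrogate channel carrying a 10 Hz tone
spec, freqs = stft(x, fs)
peak_bin = int(spec.mean(axis=1).argmax())
```

With win=64 the frequency resolution is 160/64 = 2.5 Hz, so the 10 Hz tone lands exactly on bin 4; in the full method, the resulting time-frequency image is what the 2D-CNN consumes.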
Performance Analysis of Bandwidth Aware Hybrid Powered 5G Cloud Radio Access Network
19
Authors: Md. Al-Hasan, Mst. Rubina Aktar, Fahmid Al Farid, Md. Shamim Anower, Abu Saleh Musa Miah, Md. Hezerul Abdul Karim 《Computers, Materials & Continua》 2026, Issue 4, pp. 2146-2160 (15 pages)
The rapid growth in available network bandwidth has directly contributed to an exponential increase in mobile data traffic, creating significant challenges for network energy consumption. With the extraordinary growth of mobile communications, data traffic has expanded dramatically, leading to massive grid power consumption and high operating expenditure (OPEX). However, most current network designs struggle to efficiently manage massive amounts of data using little power, which degrades energy-efficiency performance. An efficient mechanism is therefore needed to reduce power consumption when processing large amounts of data in network data centers. Utilizing renewable energy sources to power the Cloud Radio Access Network (C-RAN) greatly reduces the need to purchase energy from the utility grid. In this paper, we propose a bandwidth-aware, hybrid-energy-powered C-RAN that focuses on throughput and energy efficiency (EE) by lowering grid usage. This paper examines energy efficiency, spectral efficiency (SE), and average on-grid energy consumption, addressing the major challenges posed by the temporal and spatial variability of traffic and renewable energy generation across various network setups. To assess the effectiveness of the proposed network as the transmission bandwidth is varied, a comprehensive simulation has been conducted. The numerical findings support the efficacy of the proposed approach.
Keywords: 5G; bandwidth; renewable energy; energy efficiency; spectral efficiency; C-RAN
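The abstract above relates bandwidth, spectral efficiency, and on-grid energy use. The paper's exact model is not given here, so the sketch below uses the standard Shannon relation (SE = log2(1 + SNR), throughput = bandwidth x SE) and counts EE against only the on-grid share of power after renewables; all function names and the numeric values are illustrative assumptions:

```python
import math


def spectral_efficiency(snr_db: float) -> float:
    """Shannon spectral efficiency in bit/s/Hz for a given SNR in dB."""
    return math.log2(1.0 + 10.0 ** (snr_db / 10.0))


def energy_efficiency(bandwidth_hz: float, snr_db: float,
                      draw_w: float, renewable_w: float) -> float:
    """EE in bit/J, charging only the on-grid power after renewable offset."""
    throughput = bandwidth_hz * spectral_efficiency(snr_db)  # bit/s
    on_grid = max(draw_w - renewable_w, 0.0)                 # W bought from grid
    return throughput / on_grid if on_grid > 0 else float("inf")


se = spectral_efficiency(20.0)                 # 20 dB SNR -> log2(101) bit/s/Hz
ee = energy_efficiency(20e6, 20.0, 40.0, 15.0) # 20 MHz carrier, 40 W draw, 15 W renewable
```

Widening the bandwidth raises throughput linearly for a fixed SNR, so in this simple model EE grows with bandwidth as long as the power draw stays flat, which is the trade-off the paper's simulations explore.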