Journal Articles
479 articles found
1. A Survey of Link Failure Detection and Recovery in Software-Defined Networks
Authors: Suheib Alhiyari, Siti Hafizah AB Hamid, Nur Nasuha Daud. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 103-137 (35 pages)
Software-defined networking (SDN) is an innovative paradigm that separates the control and data planes, introducing centralized network control. SDN is increasingly being adopted by carrier-grade networks, offering enhanced network management capabilities compared to those of traditional networks. However, because SDN is designed to ensure high-level service availability, it faces additional challenges. One of the most critical is ensuring efficient detection of, and recovery from, link failures in the data plane. Such failures can significantly impact network performance and lead to service outages, making resiliency a key concern for the effective adoption of SDN. Since the recovery process is intrinsically dependent on timely failure detection, this research surveys and analyzes the current literature on both failure detection and recovery approaches in SDN. The survey provides a critical comparison of existing failure detection techniques, highlighting their advantages and disadvantages. Additionally, it examines current failure recovery methods, categorized as either restoration-based or protection-based, and offers a comprehensive comparison of their strengths and limitations. Lastly, future research challenges and directions are discussed to address the shortcomings of existing failure recovery methods.
Keywords: software-defined networking; failure detection; failure recovery; restoration; protection
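As a toy illustration of the detection side this survey covers, a heartbeat-timeout monitor captures the basic mechanism behind BFD-style liveness checks. The `LinkMonitor` class and its 50 ms timeout are illustrative assumptions, not a mechanism taken from any surveyed paper; real SDN controllers typically rely on OpenFlow port-status events or dedicated BFD sessions.

```python
from dataclasses import dataclass

@dataclass
class LinkMonitor:
    """Declares a link failed when no heartbeat arrives within `timeout` seconds.

    A minimal stand-in for BFD-style detection; purely illustrative.
    """
    timeout: float
    last_seen: float = 0.0

    def heartbeat(self, now: float) -> None:
        # Record the arrival time of the latest liveness probe.
        self.last_seen = now

    def is_failed(self, now: float) -> bool:
        # The link is considered down once the silence exceeds the timeout.
        return (now - self.last_seen) > self.timeout

monitor = LinkMonitor(timeout=0.05)   # assumed 50 ms detection interval
monitor.heartbeat(now=0.00)
print(monitor.is_failed(now=0.03))    # heartbeat 30 ms ago: link still alive
print(monitor.is_failed(now=0.10))    # 100 ms of silence: failure declared
```

The detection interval directly bounds recovery latency, which is why the survey treats detection and recovery as coupled problems.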
2. Automating the Initial Development of Intent-Based Task-Oriented Dialog Systems Using Large Language Models: Experiences and Challenges
Authors: Ksenia Kharitonova, David Pérez-Fernández, Zoraida Callejas, David Griol. Computers, Materials & Continua, 2026, Issue 5, pp. 1021-1062 (42 pages)
Building reliable intent-based, task-oriented dialog systems typically requires substantial manual effort: designers must derive intents, entities, responses, and control logic from raw conversational data, then iterate until the assistant behaves consistently. This paper investigates how far large language models (LLMs) can automate this development. We use two reference corpora, Let's Go (English, public transport) and MEDIA (French, hotel booking), to prompt four LLM families (GPT-4o, Claude, Gemini, Mistral Small) and generate the core specifications required by the Rasa platform. These include intent sets with example utterances, entity definitions with slot mappings, response templates, and basic dialog flows. To structure this process, we introduce a model- and platform-agnostic pipeline with two phases. The first normalizes and validates LLM-generated artifacts, enforcing cross-file consistency and making slot usage explicit. The second uses a lightweight dialog harness that runs scripted tests and incrementally patches failure points until conversations complete reliably. Across eight projects, all models required some targeted repairs before training. After applying our pipeline, all reached ≥70% task completion (many above 84%), while NLU performance ranged from mid-0.6 to 1.0 macro-F1 depending on domain breadth. These results show that, with modest guidance, current LLMs can produce workable end-to-end dialog prototypes directly from raw transcripts. Our main contributions are: (i) a reusable bootstrap method aligned with industry domain-specific languages (DSLs), (ii) a small set of high-impact corrective patterns, and (iii) a simple but effective harness for closed-loop refinement across conversational platforms.
Keywords: task-oriented dialog systems; large language models (LLMs); Rasa; dialog automation; natural language understanding (NLU); slot filling; conversational AI; human-in-the-loop NLP
3. An Overview of Segmentation Techniques in Breast Cancer Detection: From Classical to Hybrid Model
Authors: Hanifah Rahmi Fajrin, Se Dong Min. Computers, Materials & Continua, 2026, Issue 3, pp. 230-265 (36 pages)
Accurate segmentation of breast cancer in mammogram images plays a critical role in early diagnosis and treatment planning. As research in this domain continues to expand, various segmentation techniques have been proposed across classical image processing, machine learning (ML), deep learning (DL), and hybrid/ensemble models. This study conducts a systematic literature review using the PRISMA methodology, analyzing 57 selected articles to explore how these methods have evolved and been applied. The review highlights the strengths and limitations of each approach, identifies commonly used public datasets, and observes emerging trends in model integration and clinical relevance. By synthesizing current findings, this work provides a structured overview of segmentation strategies and outlines key considerations for developing more adaptable and explainable tools for breast cancer detection. Overall, our synthesis suggests that classical and ML methods are suitable when labels and computing resources are limited, while DL models are preferable when pixel-level annotations and resources are available, and hybrid pipelines are most appropriate when fine-grained clinical precision is required.
Keywords: breast cancer; mammogram segmentation; deep learning; machine learning; hybrid model
4. SCAN: Structural Clustering with Adaptive Thresholds for Intelligent and Robust Android Malware Detection under Concept Drift
Authors: Kyoungmin Roh, Seungmin Lee, Seong-je Cho, Youngsup Hwang, Dongjae Kim. Computer Modeling in Engineering & Sciences, 2026, Issue 3, pp. 1124-1163 (40 pages)
Machine learning-based Android malware detection often suffers from concept drift, where models trained on historical data fail to generalize to evolving threats. This paper proposes SCAN (Structural Clustering with Adaptive thresholds for iNtelligent Android malware detection), a hybrid intelligent framework designed to mitigate concept drift without retraining. SCAN integrates Gaussian Mixture Model (GMM)-based clustering with cluster-wise adaptive thresholding and supervised classifiers tailored to each cluster. A key challenge in clustering-based malware detection is cluster-wise class imbalance, where clusters contain disproportionate distributions of benign and malicious samples. SCAN addresses this issue through adaptive thresholding, which dynamically adjusts the decision boundary of each cluster according to its malicious-to-benign ratio. In the final training stage, four supervised learning algorithms (Random Forest (RF), Support Vector Machine (SVM), k-NN, and XGBoost) are applied within the GMM-defined clusters. We train SCAN on Android applications collected from 2014-2017 and test it on applications from 2018-2023. Experimental results demonstrate that SCAN combined with RF consistently achieves superior performance, with both average accuracy and average F1-score exceeding 91%. These findings confirm SCAN's robustness to concept drift and highlight its potential as a sustainable and intelligent solution for long-term Android malware detection in the real world.
Keywords: Android malware detection; concept drift; intelligent hybrid framework; Gaussian mixture model (GMM); class imbalance; adaptive thresholding
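The cluster-wise adaptive thresholding idea can be sketched in a few lines. Everything here is an illustrative assumption: the nearest-centroid assignment stands in for the paper's GMM clustering, and the 0.2 scaling factor is an invented adjustment rule, not the formula SCAN actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy app feature vectors forming two behaviour clusters (a stand-in for
# the GMM clustering step; the paper fits Gaussian Mixture Models here).
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])   # 0 = benign, 1 = malicious
centroids = np.array([[0.0, 0.0], [3.0, 3.0]])
clusters = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)

# Cluster-wise adaptive thresholding: shift each cluster's decision boundary
# toward the minority class in proportion to its malicious-to-benign ratio.
thresholds = {}
for c in np.unique(clusters):
    malicious_ratio = y[clusters == c].mean()       # fraction of malware in cluster
    thresholds[c] = 0.5 - 0.2 * (malicious_ratio - 0.5)  # assumed adjustment rule
print(thresholds)   # a mostly-malicious cluster gets a lower (more sensitive) threshold
```

A per-cluster classifier would then compare its malware-probability output against `thresholds[c]` instead of a global 0.5 cut-off, which is how the imbalance within each cluster is compensated.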
5. Blockchain-Enabled AI Recommendation Systems Using IoT-Assisted Trusted Networks
Authors: Mekhled Alharbi, Khalid Haseeb, Mamoona Humayun. Computers, Materials & Continua, 2026, Issue 5, pp. 527-539 (13 pages)
The Internet of Things (IoT) and cloud computing have significantly contributed to the development of smart cities, enabling real-time monitoring, intelligent decision-making, and efficient resource management. These systems, particularly IoT networks, rely on numerous interconnected devices that handle time-sensitive data for critical applications. Related approaches have overlooked trusted communication and reliable device interaction, thereby lowering security when sharing sensitive IoT data; they also incur additional energy consumption and overhead while addressing potential threats in a dynamic environment. In this research, an Artificial Intelligence (AI)-recommended fault-tolerant framework is proposed that leverages blockchain technology to enhance device trustworthiness and ensure data privacy. In addition, the intelligence of the proposed framework enables more authentic and authorized device involvement in data routing, supporting seamless transmission in smart cities integrated with lightweight computing. To cope with dynamic network conditions, the proposed framework offers timely decision-making to ensure robust delivery of IoT-assisted services. Using simulations, the efficacy of the proposed framework is validated against existing approaches across various network metrics, demonstrating remarkable performance while achieving energy efficiency and optimizing network resources.
Keywords: artificial intelligence; blockchain; data security; IoT; recommendation systems
6. A Novel Signature-Based Secure Intrusion Detection for Smart Transportation Systems
Authors: Hanaa Nafea, Awais Qasim, Sana Abdul Sattar, Adeel Munawar, Muhammad Nadeem Ali, Byung-Seo Kim. Computers, Materials & Continua, 2026, Issue 3, pp. 1309-1324 (16 pages)
The increased connectivity and reliance on digital technologies have exposed smart transportation systems to various cyber threats, making intrusion detection a critical aspect of ensuring their secure operation. Traditional intrusion detection systems (IDS) have limitations in terms of centralized architecture, lack of transparency, and vulnerability to single points of failure. Integrating blockchain technology with signature-based intrusion detection can therefore provide a robust and decentralized solution for securing smart transportation systems. This study tackles database manipulation attacks in smart transportation networks by proposing a signature-based intrusion detection system. The introduced signature facilitates accurate detection and systematic classification of attacks, enabling categorization according to their severity levels within the transportation infrastructure. Through comparative analysis, the research demonstrates that the blockchain-based IDS outperforms traditional approaches in terms of security, resilience, and data integrity.
Keywords: smart transportation; intrusion detection; network security; blockchain; smart contract
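The core of signature-based detection is a lookup of payload fingerprints against a store of known attack patterns, each tagged with a severity level as the abstract describes. The payloads, severity labels, and `classify` helper below are hypothetical illustrations, not the paper's actual signature scheme.

```python
import hashlib

# Hypothetical signature store: SHA-256 of a known malicious payload pattern
# mapped to a severity level (illustrating the severity classification idea).
SIGNATURES = {
    hashlib.sha256(b"DROP TABLE users").hexdigest(): "critical",
    hashlib.sha256(b"' OR '1'='1").hexdigest(): "high",
}

def classify(payload: bytes) -> str:
    """Return the attack severity if the payload matches a known signature."""
    return SIGNATURES.get(hashlib.sha256(payload).hexdigest(), "benign")

print(classify(b"DROP TABLE users"))  # matches a stored signature: critical
print(classify(b"SELECT 1"))          # no match: benign
```

In the blockchain-backed variant the abstract argues for, the signature store itself would live on a tamper-evident ledger so no single node can silently alter or delete a rule.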
7. LLM-Powered Multimodal Reasoning for Fake News Detection
Authors: Md. Ahsan Habib, Md. Anwar Hussen Wadud, M.F. Mridha, Md. Jakir Hossen. Computers, Materials & Continua, 2026, Issue 4, pp. 1821-1864 (44 pages)
The problem of fake news detection (FND) is becoming increasingly important in natural language processing (NLP) because of the rapid dissemination of misleading information on the web. Large language models (LLMs) such as GPT-4.0 excel in natural language understanding tasks but can still struggle to distinguish fact from fiction, particularly when applied in the wild. A key challenge of existing FND methods is that they consider only unimodal data (e.g., images), while richer multimodal data (e.g., user behaviour, temporal dynamics), which is crucial for full-context understanding, is neglected. To overcome these limitations, we introduce M3-FND (Multimodal Misinformation Mitigation for False News Detection), a novel methodological framework that integrates LLMs with multimodal data sources to perform context-aware veracity assessments. Our method proposes a hybrid system combining image-text alignment, user credibility profiling, and temporal pattern recognition, strengthened by a feedback loop that supplies real-time correction of downstream errors. We use contextual reinforcement learning to schedule prompt updating and adjust the classifier threshold based on the latest multimodal input, enabling the model to adapt to changing misinformation strategies. M3-FND is tested on three diverse datasets, FakeNewsNet, Twitter15, and Weibo, which contain both textual and visual social media content. Experiments show that M3-FND significantly outperforms conventional and LLM-based baselines in terms of accuracy, F1-score, and AUC on all benchmarks. Our results indicate the importance of employing multimodal cues and adaptive learning for effective and timely detection of fake news.
Keywords: fake news detection; multimodal learning; large language models; prompt engineering; instruction tuning; reinforcement learning; misinformation mitigation
8. Big Data-Driven Federated Learning Model for Scalable and Privacy-Preserving Cyber Threat Detection in IoT-Enabled Healthcare Systems
Authors: Noura Mohammed Alaskar, Muzammil Hussain, Saif Jasim Almheiri, Atta-ur-Rahman, Adnan Khan, Khan M. Adnan. Computers, Materials & Continua, 2026, Issue 4, pp. 793-816 (24 pages)
The increasing number of interconnected devices and the incorporation of smart technology into contemporary healthcare systems have significantly enlarged the attack surface for cyber threats. Early detection of threats is both necessary and complex, as these interconnected healthcare settings generate enormous amounts of heterogeneous data. Traditional Intrusion Detection Systems (IDS), which are generally centralized and machine learning-based, often fail to address the rapidly changing nature of cyberattacks and are challenged by ethical concerns related to patient data privacy. Moreover, traditional AI-driven IDS usually face challenges in handling large-scale, heterogeneous healthcare data while ensuring data privacy and operational efficiency. To address these issues, emerging technologies such as Big Data Analytics (BDA) and Federated Learning (FL) provide a hybrid framework for scalable, adaptive intrusion detection in IoT-driven healthcare systems. Big data techniques enable processing of large-scale, high-dimensional healthcare data, and FL can train a model in a decentralized manner without transferring raw data, thereby maintaining privacy between institutions. This research proposes a privacy-preserving Federated Learning-based model that efficiently detects cyber threats in connected healthcare systems while ensuring distributed big data processing, privacy, and compliance with ethical regulations. To strengthen the reliability of the reported findings, the results were validated using cross-dataset testing and 95% confidence intervals derived from bootstrap analysis, confirming consistent performance across heterogeneous healthcare data distributions. The proposed global model achieves a test accuracy of 99.93% ± 0.03 (95% CI) and a miss-rate of only 0.07% ± 0.02, representing state-of-the-art performance in privacy-preserving intrusion detection. The proposed FL-driven IDS framework thus offers an efficient, privacy-preserving, and scalable solution for securing next-generation healthcare infrastructures by combining scalability, adaptability, early detection, and ethical data management.
Keywords: intrusion detection systems; cyber threat detection; explainable AI; big data analytics; federated learning
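The decentralized training the abstract relies on can be illustrated with the standard FedAvg aggregation step, in which only parameter vectors (never raw patient records) leave the clients. This is a generic sketch of FedAvg, not necessarily the paper's exact aggregation rule.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg: average client model parameters weighted by local sample counts.

    The raw data never leaves the clients; only parameter vectors are shared,
    which is the privacy property federated learning provides here.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    # Weight each client's parameters by its share of the total samples.
    return (stacked * sizes[:, None]).sum(axis=0) / sizes.sum()

# Two hypothetical hospitals with 100 and 300 local training samples.
w_a = np.array([1.0, 2.0])
w_b = np.array([3.0, 6.0])
print(fed_avg([w_a, w_b], [100, 300]))  # -> [2.5 5. ]
```

The larger hospital pulls the global model three times harder than the smaller one, which is exactly the sample-count weighting that keeps the aggregate unbiased across unevenly sized institutions.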
9. Personalized Recommendation System Using Deep Learning with Bayesian Personalized Ranking
Authors: Sophort Siet, Sony Peng, Ilkhomjon Sadriddinov, Kyuwon Park. Computers, Materials & Continua, 2026, Issue 3, pp. 1423-1443 (21 pages)
Recommendation systems have become indispensable for providing tailored suggestions and capturing evolving user preferences from interaction histories. The collaborative filtering (CF) model, which depends exclusively on user-item interactions, commonly encounters challenges, including the cold-start problem and an inability to effectively capture the sequential and temporal characteristics of user behavior. This paper introduces a personalized recommendation system that combines deep learning techniques with Bayesian Personalized Ranking (BPR) optimization to address these limitations. We employ Long Short-Term Memory (LSTM) networks to identify sequential dependencies in user behavior and incorporate an attention mechanism to improve the prioritization of relevant items, thereby enhancing recommendations based on hybrid user feedback and interaction patterns. The proposed system is empirically evaluated on publicly available movie and music datasets, and we compare its performance against standard recommendation models, including Popularity, BPR, ItemKNN, FPMC, LightGCN, GRU4Rec, NARM, SASRec, and BERT4Rec. The results demonstrate that our framework consistently performs well in terms of HitRate, NDCG, MRR, and Precision at K=100, with scores of (0.6763, 0.1892, 0.0796, 0.0068) on MovieLens-100K, (0.6826, 0.1920, 0.0813, 0.0068) on MovieLens-1M, and (0.7937, 0.3701, 0.2756, 0.0078) on Last.fm. The results show an average improvement of around 15% across all metrics compared to existing sequence models, demonstrating that our framework ranks and recommends items more accurately.
Keywords: recommendation systems; traditional collaborative filtering; Bayesian personalized ranking
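The BPR objective the abstract optimizes is the negative log-sigmoid of the score difference between an observed (positive) item and an unobserved (negative) item for the same user: loss = -mean(log σ(x_ui - x_uj)). A minimal NumPy sketch of the loss itself (the LSTM/attention scorer that produces the scores is omitted):

```python
import numpy as np

def bpr_loss(pos_scores, neg_scores):
    """Bayesian Personalized Ranking loss.

    For each (user, i+, j-) triple, maximise the probability that the
    observed item i+ outranks the unobserved item j-:
        loss = -mean(log(sigmoid(x_ui - x_uj)))
    """
    diff = np.asarray(pos_scores) - np.asarray(neg_scores)
    return float(-np.mean(np.log(1.0 / (1.0 + np.exp(-diff)))))

# Positive items scored well above negatives: loss near zero.
print(bpr_loss([5.0, 4.0], [0.0, -1.0]))
# Tied scores: loss = ln 2 ~ 0.693, the chance-level baseline.
print(bpr_loss([1.0], [1.0]))
```

Because only score differences matter, BPR optimizes ranking order directly rather than rating values, which is why it suits implicit-feedback data like plays and clicks.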
10. Empowering Edge Computing: Public Edge as a Service for Performance and Cost Optimization
Authors: Ateeqa Jalal, Umar Farooq, Ihsan Rabbi, Afzal Badshah, Aurangzeb Khan, Muhammad Mansoor Alam, Mazliham Mohd Su'ud. Computers, Materials & Continua, 2026, Issue 2, pp. 1784-1802 (19 pages)
The exponential growth of Internet of Things (IoT) devices, autonomous systems, and digital services is generating massive volumes of big data, projected to exceed 291 zettabytes by 2027. Conventional cloud computing, despite its high processing and storage capacity, suffers from increased network latency, network congestion, and high operational costs, making it unsuitable for latency-sensitive applications. Edge computing addresses these issues by processing data near the source but faces scalability challenges and an elevated Total Cost of Ownership (TCO). Hybrid solutions, such as fog computing, cloudlets, and Mobile Edge Computing (MEC), attempt to balance cost and performance; however, they still struggle with limited resource sharing and high deployment expenses. This paper proposes Public Edge as a Service (PEaaS), a novel paradigm that utilizes idle resources contributed by universities, enterprises, cellular operators, and individuals under a collaborative service model. By decentralizing computation and enabling multi-tenant resource sharing, PEaaS reduces reliance on centralized cloud infrastructure, minimizes communication costs, and enhances scalability. The proposed framework is evaluated using EdgeCloudSim under varying workloads on key metrics such as latency, communication cost, server utilization, and task failure rate. Results reveal that while the cloud's task failure rate rises sharply to 12.3% at 2000 devices, PEaaS maintains a low rate of 2.5%, closely matching edge computing. Furthermore, communication costs remain 25% lower than the cloud's, and latency remains below 0.3 even under peak load. These findings demonstrate that PEaaS achieves near-edge performance with reduced costs and enhanced scalability, offering a sustainable and economically viable solution for next-generation computing environments.
Keywords: big data; edge as a service; edge computing
11. IoT-Driven Pollution Detection System for Indoor and Outdoor Environments
Authors: Fatima Khan, Amna Khan, Tariq Ali, Tariq Shahzad, Tehseen Mazhar, Sunawar Khan, Muhammad Adnan Khan, Habib Hamam. Computers, Materials & Continua, 2026, Issue 2, pp. 1447-1473 (27 pages)
The rise in noise and air pollution poses severe risks to human health and the environment. Industrial and vehicular emissions release harmful pollutants such as CO₂, SO₂, CO, and CH₄, as well as noise, leading to significant environmental degradation. Monitoring and analyzing pollutant concentrations in real time is crucial for mitigating these risks. However, existing systems often lack the capacity to monitor both indoor and outdoor environments effectively. This study presents a low-cost, IoT-based pollution detection system that integrates gas sensors (MQ-135 and MQ-4), a noise sensor (LM393), and a humidity sensor (DHT-22), all connected to a NodeMCU (ESP8266) microcontroller. The system leverages cloud-based storage and real-time analytics to monitor harmful gas levels and sound pollution. Sensor data is processed using decision tree algorithms for classification, enabling threshold-based detection with environmental context. A Progressive Web Application (PWA) interface provides users with accessible, cross-platform visualizations. Experimental validation demonstrated the system's ability to detect pollutant concentration variations across both indoor and outdoor settings, with real-time alerts triggered when thresholds were exceeded. The collected data showed consistent classification of normal, warning, and critical states for methane, CO₂, temperature, humidity, and noise levels. These results confirm the system's reliability in dynamic environmental conditions. The proposed framework offers a scalable, energy-efficient, and user-friendly solution for pollution detection and public awareness. Future enhancements will focus on extending the sensor suite, improving machine learning accuracy, and integrating meteorological data for predictive pollution modeling.
Keywords: noise sensor; harmful air pollutants; PWA; gases and noise
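The normal/warning/critical classification the system reports can be sketched as a simple threshold ladder. The CO₂ cut-offs below are placeholder values chosen for illustration; the paper derives its actual decision boundaries from trained decision trees, not fixed constants.

```python
# Hypothetical CO2 thresholds in ppm: each entry is (upper bound, label).
# Readings above every bound fall through to "critical".
CO2_THRESHOLDS = [(1000, "normal"), (2000, "warning")]

def classify_co2(ppm: float) -> str:
    """Map a CO2 reading to a state label by walking the threshold ladder."""
    for upper, label in CO2_THRESHOLDS:
        if ppm <= upper:
            return label
    return "critical"

print(classify_co2(650))   # well-ventilated room: normal
print(classify_co2(1500))  # elevated: warning
print(classify_co2(4200))  # alert-worthy: critical
```

In the deployed system, a reading crossing into "warning" or "critical" would also trigger the real-time alert path described in the abstract.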
12. Unveiling Zero-Click Attacks: Mapping MITRE ATT&CK Framework for Enhanced Cybersecurity
Authors: Md Shohel Rana, Tonmoy Ghosh, Mohammad Nur Nobi, Anichur Rahman, Andrew H. Sung. Computers, Materials & Continua, 2026, Issue 1, pp. 29-66 (38 pages)
Zero-click attacks represent an advanced cybersecurity threat, capable of compromising devices without user interaction. High-profile examples such as Pegasus, Simjacker, Bluebugging, and Bluesnarfing exploit hidden vulnerabilities in software and communication protocols to silently gain access, exfiltrate data, and enable long-term surveillance. Their stealth and ability to evade traditional defenses make detection and mitigation highly challenging. This paper addresses these threats by systematically mapping the tactics and techniques of zero-click attacks using the MITRE ATT&CK framework, a widely adopted standard for modeling adversarial behavior. Through this mapping, we categorize real-world attack vectors and clarify how such attacks operate across the cyber kill chain. To support threat detection efforts, we propose an Active Learning-based method to efficiently label the Pegasus spyware dataset in alignment with the MITRE ATT&CK framework. This approach reduces the effort of manually annotating data while improving the quality of the labeled data, which is essential for training robust cybersecurity models. In addition, our analysis highlights the structured execution paths of zero-click attacks and reveals gaps in current defense strategies. The findings emphasize the importance of forward-looking strategies such as continuous surveillance, dynamic threat profiling, and security education. By bridging zero-click attack analysis with the MITRE ATT&CK framework and leveraging machine learning for dataset annotation, this work provides a foundation for more accurate threat detection and the development of more resilient and structured cybersecurity frameworks.
Keywords: Bluebugging; Bluesnarfing; cybersecurity; MITRE ATT&CK; Pegasus; Simjacker; zero-click attacks
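Active learning reduces annotation effort by asking humans to label only the samples the classifier is least sure about. The least-confidence query strategy below is a generic assumption for illustration; the paper does not specify which query strategy its method uses.

```python
import numpy as np

def least_confident(probabilities, k):
    """Pick the k samples the classifier is least confident about.

    These are the candidates worth routing to a human annotator for
    MITRE ATT&CK-aligned labelling; the rest can be auto-labelled.
    """
    probs = np.asarray(probabilities)
    confidence = probs.max(axis=1)      # probability of the predicted class
    return np.argsort(confidence)[:k]   # lowest confidence first

# Predicted class distributions for four unlabelled log entries.
preds = [[0.98, 0.02], [0.55, 0.45], [0.70, 0.30], [0.51, 0.49]]
print(least_confident(preds, k=2))  # -> [3 1]
```

Entries 3 and 1 sit closest to the decision boundary, so labelling them first yields the largest expected improvement per annotation, which is the effort reduction the abstract claims.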
13. Human Activity Recognition Using Weighted Average Ensemble by Selected Deep Learning Models
Authors: Waseem Akhtar, Mahwish Ilyas, Romana Aziz, Ghadah Aldehim, Tassawar Iqbal, Muhammad Ramzan. Computer Modeling in Engineering & Sciences, 2026, Issue 2, pp. 971-989 (19 pages)
Human Activity Recognition (HAR) is an active area of computer vision with great impact on healthcare, smart environments, and surveillance, as it can automatically detect human behavior. It plays a vital role in many applications, such as smart homes, healthcare, human-computer interaction, sports analysis, and especially intelligent surveillance. In this paper, we propose a robust and efficient HAR system that leverages deep learning paradigms, including pre-trained models, CNN architectures, and their weighted-average fusion. Due to the diversity of human actions, various environmental influences, and a lack of data and resources, achieving high recognition accuracy remains challenging. In this work, a weighted average ensemble technique is employed to fuse three deep learning models: EfficientNet, ResNet50, and a custom CNN. The results indicate that a weighted average ensemble strategy is a promising approach for building more effective HAR models for detecting and classifying human activities. Experiments on a benchmark dataset showed that the proposed weighted ensemble approach outperformed existing approaches in terms of accuracy and other key performance measures. The combined weighted-average ensemble of pre-trained and CNN models obtained an accuracy of 98%, compared to 97%, 96%, and 95% for the customized CNN, EfficientNet, and ResNet50 models, respectively.
Keywords: artificial intelligence; computer vision; deep learning; recognition; human activity classification; image processing
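The weighted-average fusion step can be sketched directly on per-model class probabilities. The weights and toy softmax outputs below are assumptions for illustration; the paper would tune its weights on a validation split.

```python
import numpy as np

def weighted_ensemble(prob_list, weights):
    """Fuse per-model class probabilities by a weighted average, then argmax.

    `prob_list` holds one (samples x classes) probability array per model.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                            # normalise so weights sum to 1
    fused = sum(wi * np.asarray(p) for wi, p in zip(w, prob_list))
    return fused.argmax(axis=1)                # fused class decision per sample

# Hypothetical softmax outputs of three models (e.g. EfficientNet, ResNet50,
# a custom CNN) for two frames over three activity classes.
p1 = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]]
p2 = [[0.5, 0.4, 0.1], [0.1, 0.7, 0.2]]
p3 = [[0.3, 0.4, 0.3], [0.2, 0.6, 0.2]]
print(weighted_ensemble([p1, p2, p3], weights=[0.4, 0.35, 0.25]))  # -> [0 1]
```

Giving the strongest single model the largest weight while still letting the others vote is what lets the ensemble edge past its best member (98% vs. 97% in the reported results).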
14. A Comprehensive Literature Review of AI-Driven Application Mapping and Scheduling Techniques for Network-on-Chip Systems
Authors: Naveed Ahmad, Muhammad Kaleem, Mourad Elloumi, Muhammad Azhar Mushtaq, Ahlem Fatnassi, Mohd Fazil, Anas Bilal, Abdulbasit A. Darem. Computer Modeling in Engineering & Sciences, 2026, Issue 1, pp. 118-155 (38 pages)
Network-on-Chip (NoC) systems are progressively deployed to connect massively parallel megacore systems in new computing architectures. As a result, application mapping has become an important aspect of performance and scalability, as current trends require distributing computation across network nodes. In this paper, we survey a large number of mapping and scheduling techniques designed for NoC architectures, with a particular focus on 3D systems. We take a systematic literature review approach to analyze existing methods across static, dynamic, hybrid, and machine-learning-based approaches, alongside preliminary AI-based dynamic models in recent works. We classify them along several main aspects covering power-aware mapping, fault tolerance, load balancing, and adaptive mapping for dynamic workloads. We also assess the efficacy of each method against performance parameters such as latency, throughput, response time, and error rate. Key challenges, including energy efficiency, real-time adaptability, and reinforcement learning integration, are highlighted as well. To the best of our knowledge, this is one of the few recent reviews that covers both traditional and AI-based mapping algorithms for modern NoCs and identifies open research challenges. Finally, we provide directions for future work toward improved adaptability and scalability via lightweight learned models and hierarchical mapping frameworks.
Keywords: application mapping; mapping techniques; network-on-chip; system on chip; optimisation
Optimizing UCS Prediction Models through XAI-Based Feature Selection in Soil Stabilization
15
作者 Ahmed Mohammed Awad Mohammed Omayma Husain +5 位作者 Mosab Hamdan Abdalmomen Mohammed Abdullah Ansari Atef Badr Abubakar Elsafi Abubakr Siddig 《Computer Modeling in Engineering & Sciences》 2026年第2期524-549,共26页
Unconfined Compressive Strength(UCS)is a key parameter for the assessment of the stability and performance of stabilized soils,yet traditional laboratory testing is both time and resource intensive.In this study,an in... Unconfined Compressive Strength(UCS)is a key parameter for the assessment of the stability and performance of stabilized soils,yet traditional laboratory testing is both time and resource intensive.In this study,an interpretable machine learning approach to UCS prediction is presented,pairing five models(Random Forest(RF),Gradient Boosting(GB),Extreme Gradient Boosting(XGB),CatBoost,and K-Nearest Neighbors(KNN))with SHapley Additive exPlanations(SHAP)for enhanced interpretability and to guide feature removal.A complete dataset of 12 geotechnical and chemical parameters,i.e.,Atterberg limits,compaction properties,stabilizer chemistry,dosage,curing time,was used to train and test the models.R2,RMSE,MSE,and MAE were used to assess performance.Initial results with all 12 features indicated that boosting-based models(GB,XGB,CatBoost)exhibited the highest predictive accuracy(R^(2)=0.93)with satisfactory generalization on test data,followed by RF and KNN.SHAP analysis consistently picked CaO content,curing time,stabilizer dosage,and compaction parameters as the most important features,aligning with established soil stabilization mechanisms.Models were then re-trained on the top 8 and top 5 SHAP-ranked features.Interestingly,GB,XGB,and CatBoost maintained comparable accuracy with reduced input sets,while RF was moderately sensitive and KNN was somewhat better owing to reduced dimensionality.The findings confirm that feature reduction through SHAP enables cost-effective UCS prediction through the reduction of laboratory test requirements without significant accuracy loss.The suggested hybrid approach offers an explainable,interpretable,and cost-effective tool for geotechnical engineering practice. 展开更多
Keywords: explainable AI, feature selection, machine learning, SHAP analysis, soil stabilization, unconfined compressive strength
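The SHAP ranking described above rests on Shapley values from cooperative game theory: a feature's importance is its average marginal contribution over all feature subsets. A minimal stdlib sketch of the exact computation that SHAP approximates, using a toy additive UCS model whose baseline and per-feature contributions are illustrative assumptions, not values from the paper:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley value of each feature under a set-valued model output."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = set(subset)
                # Weight of a size-k coalition in the Shapley formula.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

# Toy additive "model": predicted UCS rises with CaO content, curing time, and
# dosage (baseline 10 MPa and contributions are made-up illustrative numbers).
contrib = {"CaO": 4.0, "curing_time": 2.5, "dosage": 1.0}

def predict(feature_set):
    return 10.0 + sum(contrib[f] for f in feature_set)

phi = shapley_values(list(contrib), predict)
# For an additive model, each feature's Shapley value equals its own contribution.
```

Real SHAP implementations avoid this exponential enumeration with model-specific approximations, but the ranking they produce has the same interpretation.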
Multi-Objective Enhanced Cheetah Optimizer for Joint Optimization of Computation Offloading and Task Scheduling in Fog Computing
16
Authors: Ahmad Zia, Nazia Azim, +5 more: Bekarystankyzy Akbayan Khalid J. Alzahrani Ateeq Ur Rehman Faheem Ullah Khan Nouf Al-Kahtani Hend Khalid Alkahtani 《Computers, Materials & Continua》 2026, Issue 3, pp. 1559-1588 (30 pages)
The cloud-fog computing paradigm has emerged as a hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading, and then executing tasks efficiently, is critical to achieving a trade-off between energy consumption and transmission delay. In such a network, processing a task at fog nodes reduces transmission delay but increases energy consumption, while routing the task to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency; for instance, executing lower-priority tasks before higher-priority jobs can disturb the reliability and stability of the system. Therefore, an efficient strategy for optimal computation offloading and task scheduling is required for operational efficacy. In this paper, we introduce a multi-objective and enhanced version of the Cheetah Optimizer (CO), namely MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks and minimize two competing objectives: energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. The mathematical modelling of CO needs improvement in computation time and convergence speed; MoECO therefore increases the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step-length operator is adjusted to diversify the solutions and thus improve the exploration phase, i.e., the global search strategy. Consequently, this prevents the algorithm from getting trapped in a local optimal solution. Moreover, the interaction factor during the exploitation phase is adjusted based on the location of the prey instead of the adjacent cheetah, which increases the exploitation capability of agents, i.e., their local search capability. Furthermore, MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize the designated objectives. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between the optimization objectives compared to baseline methods.
Keywords: computation offloading, task scheduling, cheetah optimizer, fog computing, optimization, resource allocation, internet of things
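The Pareto-optimal front over energy consumption and communication delay that MoECO maintains can be illustrated with a small non-dominated filter; the (energy, delay) pairs below are made-up candidate offloading decisions, not results from the paper:

```python
def pareto_front(solutions):
    """Keep (energy, delay) pairs not strictly dominated; both objectives minimized."""
    front = []
    for s in solutions:
        # s is dominated if some o is no worse in both objectives and better in one.
        dominated = any(
            o[0] <= s[0] and o[1] <= s[1] and (o[0] < s[0] or o[1] < s[1])
            for o in solutions
        )
        if not dominated:
            front.append(s)
    return front

# Candidate schedules as (energy consumption, communication delay) pairs.
candidates = [(5, 2), (3, 4), (4, 3), (6, 6), (3, 5)]
front = pareto_front(candidates)  # (6, 6) and (3, 5) are dominated and dropped
```

The surviving pairs are the trade-off curve from which a deployment can pick its preferred energy/delay balance.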
Federated Deep Learning in Intelligent Urban Ecosystems:A Systematic Review of Advancements and Applications in Smart Cities,Homes,Buildings,and Healthcare Systems
17
Authors: Muhammad Adnan Tariq, Sunawar Khan, +5 more: Tehseen Mazhar Tariq Shahzad Sahar Arooj Khmaies Ouahada Muhammad Adnan Khan Habib Hamam 《Computer Modeling in Engineering & Sciences》 2026, Issue 3, pp. 218-267 (50 pages)
Contemporary smart cities, smart homes, smart buildings, and smart healthcare systems are the result of the explosive growth of Internet of Things (IoT) devices and deep learning. Yet centralized training paradigms face fundamental issues of data privacy, regulatory compliance, and ownership silos, alongside scaling limitations in real-life applications. Federated Deep Learning (FDL) is a privacy-by-design method that enables the distributed training of machine learning models among distributed clients without sharing raw data and is suitable for heterogeneous urban settings. This paper surveys privacy-preserving developments in FDL from 2018-2025, focusing on its use in smart cities (traffic prediction, environmental monitoring, energy grids), smart homes/buildings/IoT (non-intrusive load monitoring, HVAC optimization, anomaly detection), and healthcare applications (medical imaging, Electronic Health Records (EHR) analysis, remote monitoring). It provides a coherent taxonomy, domain pipelines, comparative analyses of privacy mechanisms (differential privacy, secure aggregation, Homomorphic Encryption (HE), Trusted Execution Environments (TEEs), blockchain-enhanced approaches, and hybrids), system structures, security/robustness defenses, deployment/Machine Learning Operations (MLOps) issues, and longstanding challenges (non-IID heterogeneity, communication efficiency, fairness, and sustainability). The contributions include structured comparisons of privacy threats, practical design advice for urban areas, identification of open problems, and a research roadmap toward 2035. The paper highlights the transformational value of FDL in building credible, scalable, and sustainable intelligent urban ecosystems and the need for further interdisciplinary research in standardization, real-world testbeds, and ethical governance.
Keywords: federated deep learning (FDL), privacy-preserving AI, smart cities, smart homes/buildings, federated healthcare, intelligent urban ecosystems, IoT
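The distributed training the review covers is typically aggregated with federated averaging (FedAvg): only model parameters, weighted by local dataset size, leave each client. A minimal sketch; the three-client weight vectors and dataset sizes are illustrative assumptions:

```python
def fedavg(client_weights, client_sizes):
    """FedAvg: dataset-size-weighted average of client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Three clients (e.g., hospitals) with different local dataset sizes;
# raw patient or sensor data never leaves the client.
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 300, 600]
global_weights = fedavg(weights, sizes)  # → [4.0, 5.0]
```

Each communication round repeats this step after local training; the privacy mechanisms surveyed (secure aggregation, differential privacy) harden exactly this exchange.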
Mining Software Repository for Cleaning Bugs Using Data Mining Technique (Cited by 1)
18
Authors: Nasir Mahmood, Yaser Hafeez, +4 more: Khalid Iqbal Shariq Hussain Muhammad Aqib Muhammad Jamal Oh-Young Song 《Computers, Materials & Continua》 SCIE EI 2021, Issue 10, pp. 873-893 (21 pages)
Despite advances in technology, software repository maintenance requires reusing data to reduce effort and complexity. However, extracting similar data during software development generates a large amount of ambiguous, irrelevant, and bug-prone data from the data residing in repositories. Thus, there is a need for a repository mining technique that predicts relevant and bug-free data. This paper proposes a fault prediction approach using a data-mining technique to find good predictors for high-quality software. To predict errors in the mined data, the Apriori algorithm was used to discover association rules, fixing confidence at more than 40% and support at a minimum of 30%. A pruning strategy was adopted based on evaluation measures. Next, rules were extracted from three projects in different domains; the extracted rules were then combined to obtain the most popular rules based on the evaluation measure values. To evaluate the proposed approach, we conducted an experimental study comparing the proposed rules with existing ones on four different industrial projects. The evaluation showed that the results of our proposal are promising. Practitioners and developers can utilize these rules for defect prediction during early software development.
Keywords: fault prediction, association rule, data mining, frequent pattern mining
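The abstract fixes confidence above 40% and support at a minimum of 30%. A stdlib sketch of association-rule mining with those thresholds; the toy bug-pattern transactions are illustrative assumptions, not data from the paper:

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.3, min_confidence=0.4):
    """Brute-force association rules (lhs, rhs, confidence) over small itemsets."""
    n = len(transactions)
    items = {i for t in transactions for i in t}

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    rules = []
    for size in (2, 3):  # pairs and triples are enough for this toy example
        for itemset in combinations(sorted(items), size):
            s = set(itemset)
            if support(s) < min_support:
                continue  # Apriori pruning: infrequent itemsets yield no rules
            for k in range(1, size):
                for lhs in combinations(itemset, k):
                    lhs_s = set(lhs)
                    conf = support(s) / support(lhs_s)
                    if conf >= min_confidence:
                        rules.append((lhs_s, s - lhs_s, conf))
    return rules

# Toy bug-report transactions: co-occurring defect categories (made-up names).
bugs = [{"null_check", "api_misuse"}, {"null_check", "api_misuse", "leak"},
        {"null_check", "leak"}, {"api_misuse"}, {"null_check"}]
rules = mine_rules(bugs)
```

A production Apriori grows candidate itemsets level by level instead of enumerating them, but the support/confidence filtering is identical.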
Recommender System for Configuration Management Process of Entrepreneurial Software Designing Firms (Cited by 1)
19
Authors: Muhammad Wajeeh Uz Zaman, Yaser Hafeez, +5 more: Shariq Hussain Haris Anwaar Shunkun Yang Sadia Ali Aaqif Afzaal Abbasi Oh-Young Song 《Computers, Materials & Continua》 SCIE EI 2021, Issue 5, pp. 2373-2391 (19 pages)
The rapid growth in software demand incentivizes software development organizations to develop exclusive software for their customers worldwide. The software development industry addresses this problem through software product line (SPL) practices that employ feature models. However, optimal feature selection based on user requirements is a challenging task. Thus, the challenges of software development must be resolved to increase satisfaction and maintain high product quality for massive customer needs within limited resources. In this work, we propose a recommender system for the development team and clients that increases productivity and quality by utilizing historical information and the prior experiences of similar developers and clients. The proposed system recommends features, with their estimated cost, for new software requirements from all over the globe according to similar developers' and clients' needs and preferences. The system guides and facilitates the development team by suggesting a list of features, code snippets, libraries, programming-language cheat sheets, and coding references from a cloud-based knowledge management repository. Similarly, a list of features is suggested to the client according to their needs and preferences. The experimental results revealed that the proposed recommender system is feasible and effective, providing better recommendations to developers and clients. It provides proper and reasonably well-estimated costs for performing development tasks effectively and increases the client's satisfaction level. The results indicate an increase in productivity, performance, and product quality, and a reduction in effort, complexity, and system failure. Therefore, our proposed system facilitates developers and clients during development by providing better recommendations in terms of solutions and anticipated costs. The resulting increase in productivity and satisfaction maximizes the benefits and usability of SPL in the modern era of technology.
Keywords: feature selection, recommender system, software reuse, configuration management
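A recommender of the kind described can rank past clients by preference similarity and surface the features they chose. A minimal cosine-similarity sketch; the client names, preference vectors, and feature names are all hypothetical, not taken from the paper:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two preference vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(new_client, history, top_n=2):
    """Collect features of the top_n historical clients most similar to new_client."""
    ranked = sorted(history.items(),
                    key=lambda kv: cosine(new_client, kv[1]["prefs"]),
                    reverse=True)
    features = []
    for _, rec in ranked[:top_n]:
        for f in rec["features"]:
            if f not in features:
                features.append(f)
    return features

# Hypothetical historical clients: binary preference vectors plus chosen features.
history = {
    "client_a": {"prefs": [1, 0, 1], "features": ["sso_login", "audit_log"]},
    "client_b": {"prefs": [0, 1, 0], "features": ["bulk_export"]},
    "client_c": {"prefs": [1, 0, 0], "features": ["sso_login", "dark_mode"]},
}
recs = recommend([1, 0, 1], history, top_n=1)
```

The same ranking could attach each historical client's recorded cost to the suggested features, giving the cost estimates the abstract mentions.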
A Novel Features Prioritization Mechanism for Controllers in Software-Defined Networking (Cited by 1)
20
Authors: Jehad Ali, Byungkyu Lee, +2 more: Jimyung Oh Jungtae Lee Byeong-hee Roh 《Computers, Materials & Continua》 SCIE EI 2021, Issue 10, pp. 267-282 (16 pages)
The controller in software-defined networking (SDN) acts as a strategic point of control for the underlying network. Multiple controllers are available, and each controller offers a number of features, such as the OpenFlow version, clustering, modularity, platform, and partnership support, which are regarded as vital when selecting among a set of controllers. As such, controller selection becomes a multi-criteria decision making (MCDM) problem with several features, and an increase in their number increases the computational complexity of the selection process. The selection of controllers based on features has been studied previously; however, the prioritization of features has received less attention, and a large number of features increases the computational complexity of the selection process. In this paper, we propose a mathematical model for feature prioritization with an analytical network process (ANP) bridge model for SDN controllers. The results indicate that a prioritized-features model leads to a reduction in the computational complexity of SDN controller selection. In addition, our model generates prioritized features for SDN controllers.
Keywords: software-defined networking, controllers, feature-based selection, quality-of-service, analytical network process, analytical hierarchy process
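The ANP model in the paper generalizes the analytical hierarchy process (AHP), whose basic step derives a priority vector from a pairwise comparison matrix. A stdlib sketch using column-normalization averaging; the 3-feature matrix comparing hypothetical controller criteria is an illustrative assumption:

```python
def ahp_priorities(pairwise):
    """Priority vector from a pairwise comparison matrix (column-normalization average)."""
    n = len(pairwise)
    col_sums = [sum(pairwise[i][j] for i in range(n)) for j in range(n)]
    # Normalize each column to sum to 1, then average across each row.
    return [
        sum(pairwise[i][j] / col_sums[j] for j in range(n)) / n
        for i in range(n)
    ]

# Illustrative comparisons among three controller features:
# OpenFlow version vs clustering vs modularity (Saaty-style 1-9 judgments).
m = [[1,     3,     5],
     [1 / 3, 1,     3],
     [1 / 5, 1 / 3, 1]]
w = ahp_priorities(m)  # priorities sum to 1; earlier features dominate here
```

ANP extends this by allowing dependence and feedback between criteria via a supermatrix, which is what lets the paper's bridge model rank interdependent controller features.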