Journal Articles
476 articles found
A Survey of Link Failure Detection and Recovery in Software-Defined Networks
1
Authors: Suheib Alhiyari, Siti Hafizah AB Hamid, Nur Nasuha Daud. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 103-137 (35 pages)
Software-defined networking (SDN) is an innovative paradigm that separates the control and data planes, introducing centralized network control. SDN is increasingly being adopted by carrier-grade networks, offering enhanced network-management capabilities compared with those of traditional networks. However, because SDN is designed to ensure high-level service availability, it faces additional challenges. One of the most critical is ensuring efficient detection of, and recovery from, link failures in the data plane. Such failures can significantly impact network performance and lead to service outages, making resiliency a key concern for the effective adoption of SDN. Since the recovery process is intrinsically dependent on timely failure detection, this research surveys and analyzes the current literature on both failure detection and recovery approaches in SDN. The survey provides a critical comparison of existing failure detection techniques, highlighting their advantages and disadvantages. Additionally, it examines current failure recovery methods, categorized as either restoration-based or protection-based, and offers a comprehensive comparison of their strengths and limitations. Lastly, future research challenges and directions are discussed to address the shortcomings of existing failure recovery methods.
Keywords: software-defined networking, failure detection, failure recovery, restoration, protection
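Timely failure detection of the kind this survey covers is often built on heartbeat timeouts (BFD-style liveness checks). A minimal sketch of the idea, not taken from the paper; link names and the interval/multiplier values are illustrative:

```python
import time

class LinkMonitor:
    """Declare a link failed when no heartbeat arrives within
    detect_interval * detect_multiplier seconds (BFD-style)."""
    def __init__(self, detect_interval=0.05, detect_multiplier=3):
        self.timeout = detect_interval * detect_multiplier
        self.last_seen = {}   # link id -> timestamp of last heartbeat

    def heartbeat(self, link, now=None):
        self.last_seen[link] = time.monotonic() if now is None else now

    def failed_links(self, now=None):
        now = time.monotonic() if now is None else now
        return [l for l, t in self.last_seen.items() if now - t > self.timeout]

mon = LinkMonitor(detect_interval=0.05, detect_multiplier=3)  # 150 ms budget
mon.heartbeat("s1-s2", now=0.0)
mon.heartbeat("s2-s3", now=0.0)
mon.heartbeat("s1-s2", now=0.1)       # s1-s2 stays alive
print(mon.failed_links(now=0.2))      # → ['s2-s3']
```

The detection budget (interval x multiplier) is the knob the surveyed schemes trade against false positives on lossy links.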
Automating the Initial Development of Intent-Based Task-Oriented Dialog Systems Using Large Language Models: Experiences and Challenges
2
Authors: Ksenia Kharitonova, David Pérez-Fernández, Zoraida Callejas, David Griol. Computers, Materials & Continua, 2026, Issue 5, pp. 1021-1062 (42 pages)
Building reliable intent-based, task-oriented dialog systems typically requires substantial manual effort: designers must derive intents, entities, responses, and control logic from raw conversational data, then iterate until the assistant behaves consistently. This paper investigates how far large language models (LLMs) can automate this development. We use two reference corpora, Let's Go (English, public transport) and MEDIA (French, hotel booking), to prompt four LLM families (GPT-4o, Claude, Gemini, Mistral Small) and generate the core specifications required by the Rasa platform. These include intent sets with example utterances, entity definitions with slot mappings, response templates, and basic dialog flows. To structure this process, we introduce a model- and platform-agnostic pipeline with two phases. The first normalizes and validates LLM-generated artifacts, enforcing cross-file consistency and making slot usage explicit. The second uses a lightweight dialog harness that runs scripted tests and incrementally patches failure points until conversations complete reliably. Across eight projects, all models required some targeted repairs before training. After applying our pipeline, all reached ≥70% task completion (many above 84%), while NLU performance ranged from mid-0.6 to 1.0 macro-F1 depending on domain breadth. These results show that, with modest guidance, current LLMs can produce workable end-to-end dialog prototypes directly from raw transcripts. Our main contributions are: (i) a reusable bootstrap method aligned with industry domain-specific languages (DSLs), (ii) a small set of high-impact corrective patterns, and (iii) a simple but effective harness for closed-loop refinement across conversational platforms.
Keywords: task-oriented dialog systems, large language models (LLMs), Rasa, dialog automation, natural language understanding (NLU), slot filling, conversational AI, human-in-the-loop, NLP
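The abstract reports NLU quality as macro-F1, i.e., the unweighted mean of per-intent F1 scores, so rare intents count as much as frequent ones. A stdlib sketch of the metric; the intent labels below are hypothetical, not from the paper's corpora:

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 over all labels seen in either list."""
    labels = set(y_true) | set(y_pred)
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

gold = ["book_room", "book_room", "greet", "greet", "cancel"]
pred = ["book_room", "greet",     "greet", "greet", "cancel"]
print(round(macro_f1(gold, pred), 3))   # → 0.822
```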
An Overview of Segmentation Techniques in Breast Cancer Detection: From Classical to Hybrid Model
3
Authors: Hanifah Rahmi Fajrin, Se Dong Min. Computers, Materials & Continua, 2026, Issue 3, pp. 230-265 (36 pages)
Accurate segmentation of breast cancer in mammogram images plays a critical role in early diagnosis and treatment planning. As research in this domain continues to expand, various segmentation techniques have been proposed across classical image processing, machine learning (ML), deep learning (DL), and hybrid/ensemble models. This study conducts a systematic literature review using the PRISMA methodology, analyzing 57 selected articles to explore how these methods have evolved and been applied. The review highlights the strengths and limitations of each approach, identifies commonly used public datasets, and observes emerging trends in model integration and clinical relevance. By synthesizing current findings, this work provides a structured overview of segmentation strategies and outlines key considerations for developing more adaptable and explainable tools for breast cancer detection. Overall, our synthesis suggests that classical and ML methods are suitable for limited labels and computing resources, while DL models are preferable when pixel-level annotations and resources are available, and hybrid pipelines are most appropriate when fine-grained clinical precision is required.
Keywords: breast cancer, mammogram, segmentation, deep learning, machine learning, hybrid model
A Comprehensive Literature Review of AI-Driven Application Mapping and Scheduling Techniques for Network-on-Chip Systems
4
Authors: Naveed Ahmad, Muhammad Kaleem, Mourad Elloumi, Muhammad Azhar Mushtaq, Ahlem Fatnassi, Mohd Fazil, Anas Bilal, Abdulbasit A. Darem. Computer Modeling in Engineering & Sciences, 2026, Issue 1, pp. 118-155 (38 pages)
Network-on-Chip (NoC) systems are progressively deployed to connect massively parallel megacore systems in new computing architectures. As a result, application mapping has become an important aspect of performance and scalability, as current trends require the distribution of computation across network nodes. In this paper, we survey a large number of mapping and scheduling techniques designed for NoC architectures, concentrating in particular on 3D systems. We take a systematic literature review approach to analyze existing methods across static, dynamic, hybrid, and machine-learning-based approaches, alongside preliminary AI-based dynamic models in recent works. We classify them along several main aspects, covering power-aware mapping, fault tolerance, load balancing, and adaptive mapping for dynamic workloads. We also assess the efficacy of each method against performance parameters such as latency, throughput, response time, and error rate. Key challenges, including energy efficiency, real-time adaptability, and reinforcement learning integration, are highlighted as well. To the best of our knowledge, this is one of the few recent reviews that covers both traditional and AI-based algorithms for mapping over a modern NoC and opens research challenges. Finally, we provide directions for future work toward improved adaptability and scalability via lightweight learned models and hierarchical mapping frameworks.
Keywords: application mapping, mapping techniques, network-on-chip, system on chip, optimisation
Multi-Objective Enhanced Cheetah Optimizer for Joint Optimization of Computation Offloading and Task Scheduling in Fog Computing
5
Authors: Ahmad Zia, Nazia Azim, Bekarystankyzy Akbayan, Khalid J. Alzahrani, Ateeq Ur Rehman, Faheem Ullah Khan, Nouf Al-Kahtani, Hend Khalid Alkahtani. Computers, Materials & Continua, 2026, Issue 3, pp. 1559-1588 (30 pages)
The cloud-fog computing paradigm has emerged as a novel hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading, and then executing tasks efficiently, is a critical issue in achieving a trade-off between energy consumption and transmission delay. In this network, a task processed at fog nodes reduces transmission delay but increases energy consumption, while routing tasks to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency. For instance, executing lower-priority tasks before higher-priority jobs can disturb the reliability and stability of the system. Therefore, an efficient strategy for optimal computation offloading and task scheduling is required for operational efficacy. In this paper, we introduce a multi-objective and enhanced version of the Cheetah Optimizer (CO), namely MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks and minimize two competing objectives: energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. The mathematical modelling of CO needs improvement in computation time and convergence speed. Therefore, MoECO is proposed to increase the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step-length operator is adjusted to diversify the solution and thus improve the exploration phase, i.e., the global search strategy. Consequently, this prevents the algorithm from getting trapped in a local optimal solution. Moreover, the interaction factor during the exploitation phase is also adjusted based on the location of the prey instead of the adjacent cheetah. This increases the exploitation capability of agents, i.e., local search capability. Furthermore, MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize the designated objectives. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between optimization objectives compared to baseline methods.
Keywords: computation offloading, task scheduling, cheetah optimizer, fog computing, optimization, resource allocation, internet of things
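The Pareto-optimal front mentioned in the abstract is the set of candidate solutions no other solution beats on both objectives at once. A minimal non-dominated filter over (energy, delay) pairs; the candidate values are illustrative, not from the paper's simulations:

```python
def dominates(q, p):
    """q dominates p if q is no worse in every objective and better in at least one."""
    return all(qi <= pi for qi, pi in zip(q, p)) and any(qi < pi for qi, pi in zip(q, p))

def pareto_front(points):
    """Keep the non-dominated points, assuming every objective is minimized."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (energy in J, delay in ms) for candidate offloading plans -- illustrative numbers
plans = [(5.0, 40.0), (3.0, 55.0), (6.0, 35.0), (4.0, 50.0), (7.0, 60.0)]
print(pareto_front(plans))   # → [(5.0, 40.0), (3.0, 55.0), (6.0, 35.0), (4.0, 50.0)]
```

Only (7.0, 60.0) is dropped: it costs more energy and more delay than (5.0, 40.0), so no trade-off justifies it.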
Federated Deep Learning in Intelligent Urban Ecosystems: A Systematic Review of Advancements and Applications in Smart Cities, Homes, Buildings, and Healthcare Systems
6
Authors: Muhammad Adnan Tariq, Sunawar Khan, Tehseen Mazhar, Tariq Shahzad, Sahar Arooj, Khmaies Ouahada, Muhammad Adnan Khan, Habib Hamam. Computer Modeling in Engineering & Sciences, 2026, Issue 3, pp. 218-267 (50 pages)
Contemporary smart cities, smart homes, smart buildings, and smart healthcare systems are the result of the explosive growth of Internet of Things (IoT) devices and deep learning. Yet centralized training paradigms face fundamental issues in data privacy, regulatory compliance, and data-ownership silos, alongside scaling limitations in real-life applications. Federated Deep Learning (FDL) is a privacy-by-design method that enables the distributed training of machine learning models among distributed clients without sharing raw data and is well suited to heterogeneous urban settings. This paper reviews privacy-preserving developments in FDL from 2018 to 2025, focusing on its usage in smart cities (traffic prediction, environmental monitoring, energy grids), smart homes/buildings/IoT (non-intrusive load monitoring, HVAC optimization, anomaly detection), and healthcare applications (medical imaging, Electronic Health Record (EHR) analysis, remote monitoring). It provides a coherent taxonomy, domain pipelines, comparative analyses of privacy mechanisms (differential privacy, secure aggregation, Homomorphic Encryption (HE), Trusted Execution Environments (TEEs), blockchain-enhanced approaches, and hybrids), system structures, security/robustness defenses, deployment and Machine Learning Operations (MLOps) issues, and longstanding challenges (non-IID heterogeneity, communication efficiency, fairness, and sustainability). Contributions include structured comparisons of privacy threats, practical design advice for urban areas, identification of open problems, and a research roadmap to 2035. The paper highlights the transformational value of FDL in building credible, scalable, and sustainable intelligent urban ecosystems and the need for further interdisciplinary research in standardization, real-world testbeds, and ethical governance.
Keywords: federated deep learning (FDL), privacy-preserving AI, smart cities, smart homes/buildings, federated healthcare, intelligent urban ecosystems, IoT
Crowdsourced Requirements Engineering Challenges and Solutions: A Software Industry Perspective (Cited: 2)
7
Authors: Huma Hayat Khan, Muhammad Noman Malik, Youseef Alotaibi, Abdulmajeed Alsufyani, Saleh Alghamdi. Computer Systems Science & Engineering (SCIE, EI), 2021, Issue 11, pp. 221-236 (16 pages)
Software crowdsourcing (SW-CS) is an evolving software development paradigm in which crowds of people are asked to solve various problems through an open call (with the encouragement of prizes for the top solutions). Because of its dynamic nature, SW-CS has been progressively accepted and adopted in the software industry. However, issues pertinent to the understanding of requirements among crowds of people and requirements engineers are yet to be clarified and explained. If the requirements are not clear to the development team, this has a significant effect on the quality of the software product. This study aims to identify the potential challenges faced by requirements engineers when conducting the SW-CS-based requirements engineering (RE) process. Moreover, solutions to overcome these challenges are also identified. Qualitative data analysis is performed on interview data collected from software industry professionals. Consequently, 20 SW-CS-based RE challenges and their subsequent proposed solutions are devised, which are further grouped under seven categories. This study benefits academicians, researchers, and practitioners by providing detailed SW-CS-based RE challenges and subsequent solutions that can guide them to understand and effectively implement RE in SW-CS.
Keywords: software crowdsourcing, requirements engineering, software industry, software development, survey, challenges
A Novel Features Prioritization Mechanism for Controllers in Software-Defined Networking (Cited: 1)
8
Authors: Jehad Ali, Byungkyu Lee, Jimyung Oh, Jungtae Lee, Byeong-hee Roh. Computers, Materials & Continua (SCIE, EI), 2021, Issue 10, pp. 267-282 (16 pages)
The controller in software-defined networking (SDN) acts as a strategic point of control for the underlying network. Multiple controllers are available, and every controller offers a number of features, such as the OpenFlow version, clustering, modularity, platform, and partnership support. These features are regarded as vital when making a selection among a set of controllers. As such, the selection of a controller becomes a multi-criteria decision making (MCDM) problem with several features, and an increase in the number of features increases the computational complexity of the selection process. The selection of controllers based on features has been studied previously; however, the prioritization of features has received less attention. In this paper, we propose a mathematical model for feature prioritization with an analytical network process (ANP) bridge model for SDN controllers. The results indicate that a prioritized feature model leads to a reduction in the computational complexity of SDN controller selection. In addition, our model generates prioritized features for SDN controllers.
Keywords: software-defined networking, controllers, feature-based selection, quality of service, analytical network process, analytical hierarchy process
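Once ANP (or AHP) has produced priority weights for the features, controller selection reduces to scoring each candidate. A weighted-sum sketch of that final step; the weights, feature scores, and controller names below are invented for illustration and are not the paper's data:

```python
# Hypothetical ANP-derived priority weights for controller features
weights = {"openflow_version": 0.35, "clustering": 0.25,
           "modularity": 0.20, "platform_support": 0.20}

# Normalized feature scores per candidate controller (illustrative)
controllers = {
    "ONOS":         {"openflow_version": 1.0, "clustering": 0.9,
                     "modularity": 0.7, "platform_support": 0.8},
    "OpenDaylight": {"openflow_version": 0.8, "clustering": 0.8,
                     "modularity": 0.9, "platform_support": 0.9},
    "Ryu":          {"openflow_version": 0.9, "clustering": 0.3,
                     "modularity": 0.6, "platform_support": 0.7},
}

def score(features):
    """Weighted sum of feature scores under the priority weights."""
    return sum(weights[f] * v for f, v in features.items())

best = max(controllers, key=lambda c: score(controllers[c]))
for c in controllers:
    print(c, round(score(controllers[c]), 3))
print("selected:", best)
```

Prioritization pays off because low-weight features can be dropped from the comparison with little effect on the ranking, which is the complexity reduction the paper reports.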
Mining Software Repository for Cleaning Bugs Using Data Mining Technique (Cited: 1)
9
Authors: Nasir Mahmood, Yaser Hafeez, Khalid Iqbal, Shariq Hussain, Muhammad Aqib, Muhammad Jamal, Oh-Young Song. Computers, Materials & Continua (SCIE, EI), 2021, Issue 10, pp. 873-893 (21 pages)
Software repository maintenance requires reusing data to reduce effort and complexity. However, extracting similar data during software development is hampered by increasing ambiguity, irrelevance, and bugs in the large amount of data residing in repositories. Thus, there is a need for a repository mining technique that predicts relevant, bug-free data. This paper proposes a fault prediction approach using a data-mining technique to find good predictors of high-quality software. To predict errors in the mined data, the Apriori algorithm was used to discover association rules, fixing confidence at more than 40% and support at a minimum of 30%. A pruning strategy was adopted based on evaluation measures. Next, rules were extracted from three projects in different domains; the extracted rules were then combined to obtain the most popular rules based on the evaluation measure values. To evaluate the proposed approach, we conducted an experimental study comparing the proposed rules with existing ones on four different industrial projects. The evaluation showed that the results of our proposal are promising. Practitioners and developers can utilize these rules for defect prediction during early software development.
Keywords: fault prediction, association rule, data mining, frequent pattern mining
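The abstract fixes confidence above 40% and support at 30% or more. A minimal stdlib sketch of Apriori-style rule mining with exactly those thresholds; the "transactions" (files touched together by bug fixes) and file names are hypothetical, not the paper's dataset:

```python
from itertools import combinations

# Toy change history: each transaction is the set of files touched by one bug fix
transactions = [
    {"parser.c", "lexer.c"}, {"parser.c", "lexer.c", "ast.c"},
    {"parser.c", "ast.c"}, {"ui.c"}, {"parser.c", "lexer.c"},
]
MIN_SUP, MIN_CONF = 0.30, 0.40   # thresholds from the abstract

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

items = {i for t in transactions for i in t}
frequent = [frozenset(s) for n in (1, 2)
            for s in combinations(sorted(items), n) if support(set(s)) >= MIN_SUP]

rules = []
for fs in frequent:
    if len(fs) < 2:
        continue
    for a in fs:                       # one-item consequents: lhs -> {a}
        lhs = fs - {a}
        conf = support(fs) / support(lhs)
        if conf > MIN_CONF:
            rules.append((set(lhs), {a}, round(conf, 2)))
print(rules)
```

For instance, every fix touching ast.c also touched parser.c, so the rule {ast.c} -> {parser.c} comes out with confidence 1.0.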
Recommender System for Configuration Management Process of Entrepreneurial Software Designing Firms (Cited: 1)
10
Authors: Muhammad Wajeeh Uz Zaman, Yaser Hafeez, Shariq Hussain, Haris Anwaar, Shunkun Yang, Sadia Ali, Aaqif Afzaal Abbasi, Oh-Young Song. Computers, Materials & Continua (SCIE, EI), 2021, Issue 5, pp. 2373-2391 (19 pages)
The rapid growth in software demand incentivizes software development organizations to develop exclusive software for their customers worldwide. The software development industry addresses this demand through software product line (SPL) practices that employ feature models. However, optimal feature selection based on user requirements is a challenging task, and the challenges of software development must be resolved to increase satisfaction and maintain high product quality for massive customer needs within limited resources. In this work, we propose a recommender system for the development team and clients that increases productivity and quality by utilizing historical information and the prior experiences of similar developers and clients. The proposed system recommends features, with their estimated cost, for new software requirements according to the needs and preferences of similar developers and clients worldwide. The system guides and facilitates the development team by suggesting a list of features, code snippets, libraries, cheat sheets of programming languages, and coding references from a cloud-based knowledge management repository. Similarly, a list of features is suggested to the client according to their needs and preferences. The experimental results revealed that the proposed recommender system is feasible and effective, providing better recommendations to developers and clients. It provides proper and reasonably well-estimated costs to perform development tasks effectively and increases the client's satisfaction level. The results indicate an increase in productivity, performance, and product quality, and a reduction in effort, complexity, and system failure. Therefore, our proposed system facilitates developers and clients during development by providing better recommendations in terms of solutions and anticipated costs. The resulting increase in productivity and satisfaction maximizes the benefits and usability of SPL in the modern era of technology.
Keywords: feature selection, recommender system, software reuse, configuration management
Towards Improving the Quality of Requirement and Testing Process in Agile Software Development: An Empirical Study (Cited: 1)
11
Authors: Irum Ilays, Yaser Hafeez, Nabil Almashfi, Sadia Ali, Mamoona Humayun, Muhammad Aqib, Ghadah Alwakid. Computers, Materials & Continua (SCIE, EI), 2024, Issue 9, pp. 3761-3784 (24 pages)
Software testing is a critical phase, and misconceptions arising from ambiguities in the requirements during specification affect the testing process, making it difficult to identify all faults in software. As requirements change continuously, irrelevancy and redundancy increase during testing. These challenges reduce fault detection capability, so the testing process must be improved based on changes in the requirements specification. In this research, we developed a model that resolves testing challenges through requirement prioritization and prediction in an agile environment. The research objective is to identify the most relevant and meaningful requirements through semantic analysis for correct change analysis. We then compute the similarity of requirements through case-based reasoning, which predicts requirements for reuse and restricts attention to error-prone requirements. Afterward, the Apriori algorithm maps requirement frequency to select relevant test cases, based on how frequently test cases are reused, to increase the fault detection rate. The proposed model was evaluated by conducting experiments. The results showed that requirement redundancy and irrelevancy improved due to semantic analysis, which correctly predicted the requirements, increasing the fault detection rate and resulting in high user satisfaction. The predicted requirements are mapped into test cases, increasing the fault detection rate after changes and achieving higher user satisfaction. The model improves the redundancy and irrelevancy of requirements by more than 90% compared with other clustering methods and the analytical hierarchical process, achieving an 80% fault detection rate at an earlier stage. Hence, it provides guidelines for practitioners and researchers in the modern era. In the future, we will provide a working prototype of this model as a proof of concept.
Keywords: requirement prediction, software testing, agile software development, semantic analysis, case-based reasoning
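Case-based reasoning of the kind described above retrieves the past requirement most similar to a new one so its test cases can be reused. A minimal retrieval step using cosine similarity over token counts (the paper does not specify its similarity measure; the requirement texts and IDs are invented):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two requirement strings as token-count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

case_base = {  # previously tested requirements (illustrative)
    "R1": "user shall reset password via email link",
    "R2": "system shall export report as pdf",
}
new_req = "user shall reset password via sms code"
best = max(case_base, key=lambda r: cosine(case_base[r], new_req))
print(best)   # → R1: its test cases are the first candidates for reuse
```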
A Tutorial on Federated Learning from Theory to Practice: Foundations, Software Frameworks, Exemplary Use Cases, and Selected Trends (Cited: 1)
12
Authors: M. Victoria Luzón, Nuria Rodríguez-Barroso, Alberto Argente-Garrido, Daniel Jiménez-López, Jose M. Moyano, Javier Del Ser, Weiping Ding, Francisco Herrera. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 4, pp. 824-850 (27 pages)
When data privacy is imposed as a necessity, federated learning (FL) emerges as a relevant artificial intelligence field for developing machine learning (ML) models in a distributed and decentralized environment. FL allows ML models to be trained on local devices without any need for centralized data transfer, thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties. This paradigm has gained momentum in the last few years, spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources. By virtue of FL, models can be learned from all such distributed data sources while preserving data privacy. The aim of this paper is to provide a practical tutorial on FL, including a short methodology and a systematic analysis of existing software frameworks. The tutorial proceeds from three complementary perspectives: i) foundations of FL, describing its main components, from key elements to FL categories; ii) implementation guidelines and exemplary case studies, systematically examining the functionalities provided by existing software frameworks for FL deployment, devising a methodology to design an FL scenario, and providing exemplary case studies with source code for different ML approaches; and iii) trends, briefly reviewing a non-exhaustive list of research directions under active investigation in the current FL landscape. The ultimate purpose of this work is to establish itself as a reference for researchers, developers, and data scientists willing to explore the capabilities of FL in practical applications.
Keywords: data privacy, distributed machine learning, federated learning, software frameworks
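The core aggregation step in most FL tutorials is Federated Averaging (FedAvg): clients train locally and the server averages their parameters, weighted by local dataset size, so raw data never leaves the device. A stdlib sketch with plain lists as weight vectors; the model values and dataset sizes are illustrative:

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg: average client model parameters weighted by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Three clients trained locally; only the parameters leave each device
local_models = [[0.2, 1.0], [0.4, 0.8], [0.6, 0.6]]
dataset_sizes = [100, 300, 600]
print([round(v, 6) for v in fed_avg(local_models, dataset_sizes)])   # → [0.5, 0.7]
```

The size weighting matters under the non-IID client data the tutorial discusses: a client with 600 samples pulls the global model harder than one with 100.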
Finding a Practical IT Solution - Open Source Accounting Software (Cited: 1)
13
Authors: Manar Abu Talib, Adel Khelifi, Osama El-Temtamy, Fatima Ismaeel, Mahra Rashed, Najah Hasan, Summaya Khaled. 《通讯和计算机(中英文版)》, 2012, Issue 4, pp. 406-413 (8 pages)
Keywords: accounting software, open source, IT, small businesses, open-source software, UAE, research paper, challenges
Software Measurement Methods: An Analysis of Two Designs (Cited: 1)
14
Authors: Jean-Marc Desharnais, Alain Abran. Journal of Software Engineering and Applications, 2012, Issue 10, pp. 797-809 (13 pages)
In software engineering, software measures are often proposed without precise identification of the measurable concepts they attempt to quantify; consequently, the numbers obtained are challenging to reproduce in different measurement contexts and to interpret, either as base measures or in combination as derived measures. The lack of consistency when using base measures in data collection can affect both data preparation and data analysis. This paper analyzes the similarities and differences across three different views of measurement methods (the ISO International Vocabulary on Metrology, ISO 15939, and ISO 25021), and uses a process proposed for the design of software measurement methods to analyze two examples of such methods selected from the literature.
Keywords: software measures, base measures, derived measures, measurement method, attributes, software quality model, metrology, software metrics
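The base/derived distinction in the abstract is concrete: a base measure is counted directly from an attribute, while a derived measure is a function of base measures. A one-step sketch using defect density (the counts are invented; the abstract names no specific measures):

```python
# Base measures: quantities counted directly (illustrative values)
base = {"defects_found": 42, "lines_of_code": 12_000}

# Derived measure: combines base measures -- defects per thousand lines (KLOC)
defect_density = base["defects_found"] / (base["lines_of_code"] / 1000)
print(defect_density)   # → 3.5
```

The paper's point is that the derived number is only reproducible if both base counts are collected the same way in every context (e.g., whether "lines of code" includes comments).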
New Theoretical Aspects of Software Engineering for Development Applications and E-Learning (Cited: 1)
15
Authors: Ekaterina Lavrischeva, Alexei Ostrovski. Journal of Software Engineering and Applications, 2013, Issue 9, pp. 34-40 (7 pages)
This paper presents new theoretical aspects of software engineering oriented toward product lines for building applied systems and software product families from ready-made reusable components under program-factory conditions. These aspects comprise new disciplines: the theory of component programming; models of system variability and interoperability; and a theory for building systems and product families from components. Principles and methods implementing these theories were realized in an instrumental and technological complex along lines of component development: assembling program factories using these lines, and e-learning of the new theories and technologies through the "Software Engineering" textbook for university students.
Keywords: software engineering theory, disciplines, technologies, interoperability, applied systems, software industry, fabrics, e-learning
Intelligent Resource Allocations for Software-Defined Mission-Critical IoT Services
16
Authors: Chaebeen Nam, Sa Math, Prohim Tam, Seokhoon Kim. Computers, Materials & Continua (SCIE, EI), 2022, Issue 11, pp. 4087-4102 (16 pages)
Heterogeneous Internet of Things (IoT) applications generate a diversity of novel applications and services in next-generation networks (NGN), where it is essential to guarantee end-to-end (E2E) communication resources for both the control plane (CP) and the data plane (DP). Likewise, heterogeneous 5th-generation (5G) communication applications, including Mobile Broadband Communications (MBBC), massive Machine-Type Communication (mMTC), and ultra-reliable low latency communications (URLLC), require intelligent Quality-of-Service (QoS) Class Identifier (QCI) handling, while the CP entities suffer from the complexity of massive heterogeneous IoT (HIoT) applications. Moreover, existing management and orchestration (MANO) models are inappropriate for resource utilization and allocation in large-scale and complicated network environments. To cope with the issues mentioned above, this paper presents software-defined mobile edge computing (SDMEC) with a lightweight machine learning (ML) algorithm, namely the support vector machine (SVM), to enable intelligent MANO for real-time, resource-constrained IoT applications that require lightweight computation models. The SVM algorithm plays an essential role in performing QCI classification, and the software-defined networking (SDN) controller allocates and configures priority resources according to the SVM classification outcomes. Thus, the combination of SVM and SDMEC conducts intelligent resource MANO for massive QCI environments and meets the perspectives of mission-critical communication with resource-constrained applications. Based on E2E experimentation metrics, the proposed scheme shows remarkable outperformance in key performance indicator (KPI) QoS, including communication reliability, latency, and communication throughput, over various powerful reference methods.
Keywords: mobile edge computing, Internet of Things, software-defined networks, traffic classification, machine learning, resource allocation
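The abstract above describes SVM-based QCI classification feeding SDN priority decisions. As an illustrative sketch only (the paper's actual features, classes, and training setup are not given here), the following pure-Python linear SVM, trained with sub-gradient descent on the hinge loss, classifies toy traffic flows into two assumed QoS classes; the feature names and labels are my assumptions, not the paper's.

```python
# Sketch of SVM-based QoS class identification (QCI) for SDN traffic
# prioritization. Pure-Python linear SVM via hinge-loss sub-gradient
# descent; flow features and class names are illustrative assumptions.

def train_linear_svm(samples, labels, lam=0.01, epochs=200, lr=0.05):
    """samples: list of feature vectors; labels: +1 / -1."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:  # inside margin or misclassified: hinge gradient
                w = [wi - lr * (lam * wi - y * xi) for wi, xi in zip(w, x)]
                b += lr * y
            else:           # correctly classified: regularization only
                w = [wi - lr * lam * wi for wi in w]
    return w, b

def classify(w, b, x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "URLLC" if score >= 0 else "mMTC"  # +1 -> latency-critical queue

# Toy flows: [normalized packet rate, normalized mean packet size]
flows  = [[0.9, 0.2], [0.8, 0.3], [0.1, 0.8], [0.2, 0.9]]
labels = [1, 1, -1, -1]  # URLLC = +1, mMTC = -1
w, b = train_linear_svm(flows, labels)
print(classify(w, b, [0.85, 0.25]))  # high-rate, small-packet flow -> URLLC
```

In the paper's setting, the predicted class would then drive which priority queue the SDN controller installs for the flow.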
Fuzzy coloured Petri nets-based method to analyse and verify the functionality of software
17
Authors: Mina Chavoshi, Seyed Morteza Babamir 《CAAI Transactions on Intelligence Technology》 SCIE EI 2023, Issue 3, pp. 863-879 (17 pages)
Some types of software systems, like event-based and non-deterministic ones, are usually specified as rules so that system behaviour can be analysed by drawing inferences from firing the rules. However, when fuzzy rules are used to specify non-deterministic behaviour and contain a large number of variables, they constitute a complex form that is difficult to understand and reason about. A solution is to visualise the system specification with the capability of automatic rule inference. In this study, by representing a high-level system specification, the authors visualise rule representation and firing using fuzzy coloured Petri nets. Several fuzzy Petri-net-based methods have already been presented, but they either do not support a large number of rules and variables or do not consider significant cases such as (a) the weight of the premise's propositions in the occurrence of the rule conclusion, (b) the weight of the conclusion's proposition, (c) threshold values for the premise's and conclusion's propositions of the rule, and (d) the certainty factor (CF) for the rule or the conclusion's proposition. By considering cases (a)-(d), a wider variety of fuzzy rules are supported. The authors applied their model to the analysis of attacks against part of a real secure water treatment system. In another real experiment, the authors applied the model to two scenarios from their previous work and analysed the results.
Keywords: fuzzy logic, software engineering, verification
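The abstract enumerates the exact ingredients of rule firing — premise weights, thresholds, and a certainty factor (CF). A minimal sketch of how one such fuzzy rule could fire (the aggregation scheme, names, and numbers below are my assumptions, not the paper's model):

```python
# Illustrative fuzzy-rule firing with the features the paper highlights:
# premise weights, thresholds, and a rule certainty factor (CF).
# Token values are fuzzy truth degrees in [0, 1].

def fire_rule(premises, weights, thresholds, cf):
    """A rule (Petri-net transition) is enabled only if every premise
    degree reaches its threshold; the conclusion degree is the
    weight-averaged premise degree scaled by the rule's CF."""
    if any(p < t for p, t in zip(premises, thresholds)):
        return None  # transition not enabled: no token produced
    total_w = sum(weights)
    conclusion = sum(p * w for p, w in zip(premises, weights)) / total_w
    return conclusion * cf

# Hypothetical rule for the water-treatment attack scenario:
# IF pressure_high (w=0.7) AND valve_closed (w=0.3) THEN attack_suspected
degree = fire_rule(premises=[0.8, 0.6], weights=[0.7, 0.3],
                   thresholds=[0.5, 0.5], cf=0.9)
print(round(degree, 3))  # (0.8*0.7 + 0.6*0.3) * 0.9 = 0.666
```

If any premise falls below its threshold, the transition does not fire and no conclusion token is produced, which mirrors case (c) in the abstract.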
Multi-Agent Deep Q-Networks for Efficient Edge Federated Learning Communications in Software-Defined IoT
18
Authors: Prohim Tam, Sa Math, Ahyoung Lee, Seokhoon Kim 《Computers, Materials & Continua》 SCIE EI 2022, Issue 5, pp. 3319-3335 (17 pages)
Federated learning (FL) activates distributed on-device computation techniques to improve model performance through the interaction of local model updates and global model distribution in the aggregation-averaging process. However, in large-scale heterogeneous Internet of Things (IoT) cellular networks, massive multi-dimensional model update iterations and resource-constrained computation are significant challenges. This paper introduces a system model that converges software-defined networking (SDN) and network functions virtualization (NFV) to enable device/resource abstraction and provide NFV-enabled edge FL (eFL) aggregation servers for advancing automation and controllability. Multi-agent deep Q-networks (MADQNs) are applied to enforce self-learning softwarization, optimize resource allocation policies, and support computation offloading decisions. With gathered network conditions and resource states, the proposed agent explores various actions to estimate the expected long-term reward in a particular state observation. In the exploration phase, optimal actions for joint resource allocation and offloading decisions in different possible states are obtained by maximum Q-value selection. An action-based virtual network function (VNF) forwarding graph (VNFFG) is orchestrated to map VNFs to an eFL aggregation server with sufficient communication and computation resources in the NFV infrastructure (NFVI). The proposed scheme identifies deficient allocation actions, modifies the VNF backup instances, and reallocates virtual resources for the exploitation phase. A deep neural network (DNN) is used as the value-function approximator, and an epsilon-greedy algorithm balances exploration and exploitation. The scheme primarily considers the criticality of FL model services and congestion states to optimize the long-term policy. Simulation results show that the proposed scheme outperforms reference schemes in Quality of Service (QoS) performance metrics, including packet drop ratio, packet drop counts, packet delivery ratio, delay, and throughput.
Keywords: deep Q-networks, federated learning, network functions virtualization, quality of service, software-defined networking
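The core loop the abstract describes — epsilon-greedy action selection, maximum-Q-value exploitation, and long-term reward estimation — can be sketched with a tabular Q-learning stand-in for the DQN (the states, actions, and reward model below are simplified assumptions, not the paper's environment):

```python
# Hedged sketch of the epsilon-greedy Q-learning loop for offloading
# decisions; a tabular stand-in for the paper's MADQN/DNN approximator.
# States, actions, and rewards are illustrative assumptions.

import random

STATES  = ["congested", "idle"]
ACTIONS = ["offload", "local"]

def reward(state, action):
    # Assumed reward model: offloading to the eFL server pays off
    # only when links are idle; local computation is a safe baseline.
    if action == "offload":
        return 1.0 if state == "idle" else -1.0
    return 0.2

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    random.seed(0)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = random.choice(STATES)
        if random.random() < eps:                     # exploration phase
            a = random.choice(ACTIONS)
        else:                                         # exploitation: max Q
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = random.choice(STATES)
        best_next = max(q[(s_next, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (reward(s, a) + gamma * best_next - q[(s, a)])
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)  # learned: offload when idle, stay local when congested
```

The paper replaces the table with a DNN value-function approximator and maps the chosen action onto a VNF forwarding graph, but the exploration/exploitation mechanics are the same.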
Intelligent Framework for Secure Transportation Systems Using Software-Defined-Internet of Vehicles
19
Authors: Mohana Priya Pitchai, Manikandan Ramachandran, Fadi Al-Turjman, Leonardo Mostarda 《Computers, Materials & Continua》 SCIE EI 2021, Issue 9, pp. 3947-3966 (20 pages)
The Internet of Things plays a predominant role in automating real-time applications. One such application is the Internet of Vehicles, which monitors roadside traffic to automate traffic rules. As vehicles are connected to the internet through wireless communication technologies, the Internet of Vehicles network infrastructure is susceptible to flooding attacks, and reconfiguring that infrastructure is difficult because network customization is not possible. As Software-Defined Networks provide a flexible programming environment for network customization, flooding-attack detection for the Internet of Vehicles is integrated on top of one. The basic methodology is crypto-fuzzy rules, in which a cryptographic standard is incorporated into traditional fuzzy rules. In this research work, an intelligent framework for secure transportation is proposed, combining the basic ideas of security attacks on the Internet of Vehicles with software-defined networking, and is intended for smart-city applications. The proposed cognitive framework integrates traditional fuzzy, crypto-fuzzy, and Restricted Boltzmann Machine algorithms to detect malicious traffic flows in the Software-Defined Internet of Vehicles. The result interpretations indicate that the intelligent framework achieves better attack detection accuracy with less delay and also prevents buffer overflow attacks. The proposed framework is not compared with existing methods; instead, it is tested with crypto and machine learning algorithms.
Keywords: Internet of Things, smart cities, software-defined network, intelligent transportation system, fuzzy inference system
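The "crypto-fuzzy" idea — a cryptographic check feeding fuzzy rules over traffic features — can be illustrated with a toy sketch. The HMAC key, membership breakpoints, and rule shapes below are all my assumptions; the paper's actual rule base and cryptographic standard are not given here.

```python
# Rough sketch of crypto-fuzzy flooding detection: fuzzy rules over
# traffic features, with a cryptographic check (a toy HMAC verification
# ratio) feeding one input. All constants are illustrative assumptions.

import hashlib
import hmac

KEY = b"shared-vehicle-key"  # hypothetical pre-shared key

def signature_ok(message, tag):
    expected = hmac.new(KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

def high(x, lo, hi):
    """Ramp membership function: 0 below lo, 1 above hi, linear between."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def flood_score(pkt_rate, unsigned_ratio):
    """Fire two fuzzy rules and aggregate with max (Mamdani-style):
    R1: IF rate high AND unsigned-ratio high -> malicious (AND = min)
    R2: IF rate high -> suspicious, discounted by 0.5"""
    rate_h  = high(pkt_rate, 100, 1000)      # packets/s, assumed breakpoints
    unsig_h = high(unsigned_ratio, 0.1, 0.5)
    return max(min(rate_h, unsig_h), 0.5 * rate_h)

msg = b"beacon:42"
tag = hmac.new(KEY, msg, hashlib.sha256).digest()
print(signature_ok(msg, tag))     # True for a correctly signed beacon
print(flood_score(1200, 0.8))     # 1.0: fast, mostly-unsigned traffic
```

A flow whose messages fail HMAC verification raises the `unsigned_ratio` input, so the cryptographic evidence and the traffic-rate evidence combine into one fuzzy maliciousness score.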
A Parallel Hybrid Testing Technique for Tri-Programming Model-Based Software Systems
20
Authors: Huda Basloom, Mohamed Dahab, Abdullah Saad AL-Ghamdi, Fathy Eassa, Ahmed Mohammed Alghamdi, Seif Haridi 《Computers, Materials & Continua》 SCIE EI 2023, Issue 2, pp. 4501-4530 (30 pages)
Recently, researchers have shown increasing interest in combining more than one programming model in systems running on high-performance computing (HPC) systems to reach exascale by applying parallelism at multiple levels. Combining different programming paradigms, such as the Message Passing Interface (MPI), Open Multi-Processing (OpenMP), and Open Accelerators (OpenACC), can increase computation speed and improve performance. During the integration of multiple models, the probability of runtime errors increases, making their detection difficult, especially in the absence of testing techniques that can detect these errors. Numerous studies have been conducted to identify such errors, but no technique exists for detecting errors in three-level programming models. Despite the increasing research that integrates the three programming models MPI, OpenMP, and OpenACC, a testing technology for detecting the runtime errors that can arise from this integration, such as deadlocks and race conditions, has not been developed. Therefore, this paper begins with a definition and explanation of the runtime errors that result from integrating the three programming models and that compilers cannot detect. For the first time, this paper presents a classification of the errors that can result from the integration of the three models. This paper also proposes a parallel hybrid testing technique for detecting runtime errors in systems built in the C++ programming language that use the triple programming models MPI, OpenMP, and OpenACC. This hybrid technique combines static and dynamic analysis, given that some errors can be detected statically, whereas others can only be detected dynamically; it can therefore detect more errors than either technique alone. The proposed static technique detects a wide range of error types in less time, whereas the portion of potential errors that may or may not occur depending on the operating environment is left to the dynamic technique, which completes the validation.
Keywords: software testing, hybrid testing technique, OpenACC, OpenMP, MPI, tri-programming model, exascale computing
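To make the static half of the hybrid approach concrete, here is a toy checker for one well-known race pattern: a shared scalar updated inside an OpenMP `parallel for` without a `reduction` clause or `atomic` directive. Real static analyzers work on an AST, not regexes, and the paper's own technique is not shown here; this heuristic sketch is entirely my assumption.

```python
# Toy static check for one OpenMP race pattern in mixed-model C/C++ code:
# a "+=" on a shared scalar inside "#pragma omp parallel for" that has
# neither a reduction clause nor a protecting atomic. Heuristic sketch
# only; production tools analyze the AST, not text.

import re

def find_omp_race_candidates(source):
    findings = []
    pragma = re.compile(r"#pragma\s+omp\s+parallel\s+for(?P<clauses>[^\n]*)")
    for m in pragma.finditer(source):
        clauses = m.group("clauses")
        # Heuristic body: text up to the first closing brace after the pragma
        body = source[m.end():].split("}", 1)[0]
        for var in re.findall(r"(\w+)\s*\+=", body):
            if "reduction" not in clauses and "#pragma omp atomic" not in body:
                findings.append(var)
    return findings

code = """
#pragma omp parallel for
for (int i = 0; i < n; i++) {
    sum += a[i];               /* race: no reduction(+:sum) clause */
}
"""
print(find_omp_race_candidates(code))  # ['sum']
```

Errors this pass cannot decide statically (e.g., deadlocks that depend on MPI message ordering at run time) are exactly what the abstract delegates to the dynamic half of the technique.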