Software-defined networking (SDN) is an innovative paradigm that separates the control and data planes, introducing centralized network control. SDN is increasingly being adopted by Carrier Grade networks, offering enhanced network management capabilities compared with those of traditional networks. However, because SDN is designed to ensure high-level service availability, it faces additional challenges. One of the most critical challenges is ensuring efficient detection of and recovery from link failures in the data plane. Such failures can significantly impact network performance and lead to service outages, making resiliency a key concern for the effective adoption of SDN. Since the recovery process is intrinsically dependent on timely failure detection, this research surveys and analyzes the current literature on both failure detection and recovery approaches in SDN. The survey provides a critical comparison of existing failure detection techniques, highlighting their advantages and disadvantages. Additionally, it examines the current failure recovery methods, categorized as either restoration-based or protection-based, and offers a comprehensive comparison of their strengths and limitations. Lastly, future research challenges and directions are discussed to address the shortcomings of existing failure recovery methods.
Despite technological advances and sustained effort, software repository maintenance requires reusing data to reduce effort and complexity. However, increasing ambiguity, irrelevance, and bugs encountered while extracting similar data during software development generate a large amount of data from the data that reside in repositories. Thus, there is a need for a repository mining technique for relevant and bug-free data prediction. This paper proposes a fault prediction approach using a data-mining technique to find good predictors of high-quality software. To predict errors in the mined data, the Apriori algorithm was used to discover association rules, fixing confidence at more than 40% and support at no less than 30%. A pruning strategy based on evaluation measures was adopted. Next, rules were extracted from three projects of different domains; the extracted rules were then combined to obtain the most popular rules based on the evaluation measure values. To evaluate the proposed approach, we conducted an experimental study comparing the proposed rules with existing ones on four different industrial projects. The evaluation showed that the results of our proposal are promising. Practitioners and developers can utilize these rules for defect prediction during early software development.
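The association rule mining step described above can be pictured with a minimal, self-contained sketch. Everything in it is a placeholder: the hypothetical module-metric "transactions" and item names are invented for illustration, and only the quoted thresholds (support of at least 30%, confidence above 40%) come from the abstract; the paper's own data and pruning strategy are not reproduced here.

```python
# Brute-force illustration of Apriori-style rule mining (no candidate pruning):
# frequent itemsets are kept if support >= 0.30, rules if confidence > 0.40.
from itertools import combinations

transactions = [                      # hypothetical per-module itemsets
    {"high_coupling", "low_cohesion", "defect"},
    {"high_coupling", "defect"},
    {"low_cohesion", "large_size"},
    {"high_coupling", "low_cohesion", "defect"},
    {"large_size", "defect"},
]
MIN_SUPPORT, MIN_CONFIDENCE = 0.30, 0.40

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

items = sorted({i for t in transactions for i in t})
frequent = [frozenset(c) for k in range(1, len(items) + 1)
            for c in combinations(items, k) if support(set(c)) >= MIN_SUPPORT]

rules = []
for itemset in frequent:
    for k in range(1, len(itemset)):
        for antecedent in map(frozenset, combinations(itemset, k)):
            consequent = itemset - antecedent
            confidence = support(itemset) / support(antecedent)
            if confidence > MIN_CONFIDENCE:
                rules.append((set(antecedent), set(consequent), round(confidence, 2)))

for a, c, conf in rules:
    print(f"{a} -> {c}  (confidence {conf})")
```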
The rapid growth in software demand incentivizes software development organizations to develop exclusive software for their customers worldwide. The software development industry addresses this demand through software product line (SPL) practices that employ feature models. However, optimal feature selection based on user requirements is a challenging task. Thus, the challenges of software development must be resolved to increase satisfaction and maintain high product quality for massive customer needs within limited resources. In this work, we propose a recommender system for the development team and clients to increase productivity and quality by utilizing historical information and the prior experience of similar developers and clients. The proposed system recommends features, with their estimated cost, for new software requirements, drawing on similar developers' and clients' needs and preferences from around the globe. The system guides and facilitates the development team by suggesting a list of features, code snippets, libraries, cheat sheets of programming languages, and coding references from a cloud-based knowledge management repository. Similarly, a list of features is suggested to the client according to their needs and preferences. The experimental results revealed that the proposed recommender system is feasible and effective, providing better recommendations to developers and clients. It provides proper and reasonably well-estimated costs to perform development tasks effectively and increases the client's satisfaction level. The results indicate an increase in productivity, performance, and product quality, and a reduction in effort, complexity, and system failure. Therefore, our proposed system facilitates developers and clients during development by providing better recommendations in terms of solutions and anticipated costs. Thus, the increase in productivity and satisfaction level maximizes the benefits and usability of SPL in the modern era of technology.
The controller in software-defined networking (SDN) acts as a strategic point of control for the underlying network. Multiple controllers are available, and every controller offers a number of features such as the OpenFlow version, clustering, modularity, platform, and partnership support. These features are regarded as vital when making a selection among a set of controllers. As such, controller selection becomes a multi-criteria decision making (MCDM) problem with several features, and an increase in the number of features increases the computational complexity of the selection process. The selection of controllers based on features has previously been studied, but the prioritization of features has received less attention. In this paper, we propose a mathematical model for feature prioritization with an analytic network process (ANP) bridge model for SDN controllers. The results indicate that a prioritized feature model leads to a reduction in the computational complexity of SDN controller selection. In addition, our model generates prioritized features for SDN controllers.
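As a rough illustration of this kind of feature prioritization, the sketch below derives a priority vector from a single pairwise-comparison matrix, which is the basic building block that AHP/ANP-style methods (including the ANP bridge model mentioned above) elaborate on. The feature names and judgement values are invented for illustration and do not come from the paper.

```python
# Priority weights from a Saaty-style reciprocal judgement matrix:
# the normalised principal eigenvector gives each feature's priority.
import numpy as np

features = ["OpenFlow version", "clustering", "modularity", "platform support"]

A = np.array([            # A[i, j]: how much more important feature i is than j
    [1,   3,   5,   2  ],
    [1/3, 1,   2,   1/2],
    [1/5, 1/2, 1,   1/3],
    [1/2, 2,   3,   1  ],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()     # normalise to sum to 1

for name, w in sorted(zip(features, weights), key=lambda p: -p[1]):
    print(f"{name}: {w:.3f}")
```

A full ANP model would additionally assemble such local priority vectors into a supermatrix to capture interdependencies among features, which this sketch does not attempt.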
In software engineering, software measures are often proposed without precise identification of the measurable concepts they attempt to quantify: consequently, the numbers obtained are challenging to reproduce in different measurement contexts and to interpret, either as base measures or in combination as derived measures. The lack of consistency when using base measures in data collection can affect both data preparation and data analysis. This paper analyzes the similarities and differences across three different views of measurement methods (ISO International Vocabulary on Metrology, ISO 15939, and ISO 25021), and uses a process proposed for the design of software measurement methods to analyze two examples of such methods selected from the literature.
This paper presents new theoretical aspects of software engineering oriented toward product lines for building applied systems and software product families from ready-made reusable components under the conditions of program factories. These aspects comprise new disciplines such as the theory of component programming; models of system variability and interoperability; and a theory for building systems and product families from components. Principles and methods for implementing these theories were realized in an instrumental and technological complex organized along lines of component development: assembling program factories using these lines, and e-learning of the new theories and technologies through the "Software Engineering" textbook used by university students.
Software crowdsourcing (SW CS) is an evolving software development paradigm in which crowds of people are asked to solve various problems through an open call (with the encouragement of prizes for the top solutions). Because of its dynamic nature, SW CS has been progressively accepted and adopted in the software industry. However, issues pertinent to the understanding of requirements among crowds of people and requirements engineers are yet to be clarified and explained. If the requirements are not clear to the development team, this has a significant effect on the quality of the software product. This study aims to identify the potential challenges faced by requirements engineers when conducting the SW CS-based requirements engineering (RE) process, and to identify solutions to overcome these challenges. Qualitative data analysis is performed on interview data collected from software industry professionals. Consequently, 20 SW CS-based RE challenges and their subsequent proposed solutions are devised, which are further grouped under seven categories. This study is beneficial for academicians, researchers, and practitioners by providing detailed SW CS-based RE challenges and subsequent solutions that can guide them to understand and effectively implement RE in SW CS.
Some types of software systems, like event-based and non-deterministic ones, are usually specified as rules so that we can analyse the system behaviour by drawing inferences from firing the rules. However, when fuzzy rules are used to specify non-deterministic behaviour and they contain a large number of variables, they constitute a complex form that is difficult to understand and infer from. A solution is to visualise the system specification with the capability of automatic rule inference. In this study, by representing a high-level system specification, the authors visualise rule representation and firing using fuzzy coloured Petri nets. Several fuzzy Petri-net-based methods have already been presented, but they either do not support a large number of rules and variables or do not consider significant cases such as (a) the weight of the premise's propositions in the occurrence of the rule conclusion, (b) the weight of the conclusion's proposition, (c) threshold values for the premise's and conclusion's propositions of the rule, and (d) the certainty factor (CF) for the rule or the conclusion's proposition. By considering cases (a)-(d), a wider variety of fuzzy rules are supported. The authors applied their model to the analysis of attacks against part of a real secure water treatment system. In another real experiment, the authors applied the model to two scenarios from their previous work and analysed the results.
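To make the ingredients in (a)-(d) concrete, the following minimal sketch fires a single weighted fuzzy rule with a premise threshold and a certainty factor. It is an illustrative simplification invented here, not the authors' coloured Petri-net semantics; all names and numbers are assumptions.

```python
# One weighted fuzzy rule: premise truth degrees are aggregated with their
# weights, compared against a firing threshold, and the certainty factor (CF)
# attenuates the truth degree attached to the conclusion proposition.

def fire_rule(premise_degrees, premise_weights, threshold, certainty_factor):
    """Return the conclusion's truth degree, or None if the rule does not fire."""
    strength = (sum(d * w for d, w in zip(premise_degrees, premise_weights))
                / sum(premise_weights))           # weighted premise aggregation
    if strength < threshold:                      # threshold on the premise side
        return None
    return strength * certainty_factor            # CF scales the conclusion

# Two premise propositions with truth degrees 0.8 and 0.6, weights 0.7 and 0.3,
# a firing threshold of 0.5, and a rule CF of 0.9.
print(fire_rule([0.8, 0.6], [0.7, 0.3], threshold=0.5, certainty_factor=0.9))  # ~0.666
```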
Heterogeneous Internet of Things (IoT) applications generate a diversity of novel applications and services in next-generation networks (NGN), making it essential to guarantee end-to-end (E2E) communication resources for both the control plane (CP) and the data plane (DP). Likewise, heterogeneous 5th generation (5G) communication applications, including Mobile Broadband Communications (MBBC), massive Machine-Type Communication (mMTC), and ultra-reliable low latency communications (URLLC), require intelligent Quality-of-Service (QoS) Class Identifier (QCI) handling, while the CP entities suffer from the complexity of massive heterogeneous IoT applications. Moreover, the existing management and orchestration (MANO) models are inappropriate for resource utilization and allocation in large-scale and complicated network environments. To cope with these issues, this paper presents a software-defined mobile edge computing (SDMEC) scheme with a lightweight machine learning (ML) algorithm, namely the support vector machine (SVM), to enable intelligent MANO for real-time and resource-constrained IoT applications that require lightweight computation models. The SVM algorithm plays an essential role in performing QCI classification, and the software-defined networking (SDN) controller allocates and configures priority resources according to the SVM classification outcomes. Thus, the combination of SVM and SDMEC conducts intelligent resource MANO for massive QCI environments and meets the perspectives of mission-critical communication with resource-constrained applications. Based on E2E experimentation metrics, the proposed scheme shows remarkable improvements in key performance indicator (KPI) QoS, including communication reliability, latency, and communication throughput, over various strong reference methods.
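The QCI classification step can be pictured with a small SVM sketch, assuming scikit-learn is available. The per-flow features, class labels, and numbers below are synthetic placeholders chosen for illustration; they are not the paper's dataset or feature set.

```python
# SVM classifier mapping simple per-flow statistics to traffic classes;
# an SDN controller could then install priority rules per predicted class.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic flows: [packet_rate_pps, mean_packet_size_bytes, latency_budget_ms]
X = np.array([
    [50,    200, 100],   # mMTC-like: sparse and delay-tolerant
    [80,    180, 150],
    [5000, 1400,  50],   # broadband-like: high throughput
    [6000, 1200,  60],
    [900,   300,   5],   # URLLC-like: tight latency budget
    [1100,  250,   4],
])
y = np.array(["mMTC", "mMTC", "MBBC", "MBBC", "URLLC", "URLLC"])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)

# Classify a new flow; on this toy data the nearest examples are URLLC-like.
print(clf.predict([[1000, 280, 6]]))
```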
There is an emerging interest in using agile methodologies in Global Software Development (GSD) to obtain the mutual benefits of both approaches. Scrum is currently favoured by many development teams as the best-known agile methodology and is considered adequate for collocated teams. At the same time, stakeholders in GSD are dispersed by geographical, temporal, and socio-cultural distances. Due to the contrasting nature of Scrum and GSD, many significant challenges arise that might restrict the use of Scrum in GSD. We conducted a Systematic Literature Review (SLR) following the Kitchenham guidelines to identify the challenges that limit the use of Scrum in GSD and to explore the mitigation strategies adopted by practitioners to resolve them. To validate our review findings, we conducted an industrial survey of 305 practitioners. The results of our study are consolidated into a research framework. The framework represents current best practices and recommendations to mitigate the identified distributed Scrum challenges and was validated by five experts in distributed Scrum. The expert review was supportive, reflecting that the framework will help stakeholders deliver sustainable products by effectively mitigating the identified challenges.
Federated learning (FL) activates distributed on-device computation techniques to improve algorithm performance through the interaction of local model updates and global model distributions in aggregation averaging processes. However, in large-scale heterogeneous Internet of Things (IoT) cellular networks, massive multi-dimensional model update iterations and resource-constrained computation are challenging aspects that need to be tackled. This paper introduces a system model that converges software-defined networking (SDN) and network functions virtualization (NFV) to enable device/resource abstractions and provide NFV-enabled edge FL (eFL) aggregation servers for advancing automation and controllability. Multi-agent deep Q-networks (MADQNs) are targeted to enforce self-learning softwarization, optimize resource allocation policies, and advocate computation offloading decisions. With gathered network conditions and resource states, the proposed agent explores various actions to estimate expected long-term rewards in a particular state observation. In the exploration phase, optimal actions for joint resource allocation and offloading decisions in different possible states are obtained by maximum Q-value selection. An action-based virtual network function (VNF) forwarding graph (VNFFG) is orchestrated to map VNFs towards eFL aggregation servers with sufficient communication and computation resources in the NFV infrastructure (NFVI). The proposed scheme indicates deficient allocation actions, modifies the VNF backup instances, and reallocates the virtual resources for the exploitation phase. A deep neural network (DNN) is used as a value function approximator, and an epsilon-greedy algorithm balances exploration and exploitation. The scheme primarily considers the criticality of FL model services and congestion states to optimize the long-term policy. Simulation results show that the proposed scheme outperforms reference schemes in terms of Quality of Service (QoS) performance metrics, including packet drop ratio, packet drop counts, packet delivery ratio, delay, and throughput.
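The epsilon-greedy Q-value selection mentioned above can be sketched as follows, assuming PyTorch is available. The state dimension, action count, and network shape are illustrative assumptions and not the paper's MADQN design.

```python
# Epsilon-greedy action selection with a small DNN as the Q-value approximator:
# with probability epsilon a random action is explored, otherwise the action
# with the maximum predicted Q-value is exploited.
import random
import torch
import torch.nn as nn

STATE_DIM = 8    # e.g. observed network conditions and resource states
N_ACTIONS = 4    # e.g. joint allocation/offloading choices

q_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS),
)

def select_action(state, epsilon):
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)                       # explore
    with torch.no_grad():
        q_values = q_net(torch.as_tensor(state, dtype=torch.float32))
        return int(torch.argmax(q_values))                       # exploit

print("chosen action:", select_action([0.2] * STATE_DIM, epsilon=0.1))
```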
The Internet of Things plays a predominant role in automating real-time applications. One such application is the Internet of Vehicles, which monitors roadside traffic for automating traffic rules. As vehicles are connected to the internet through wireless communication technologies, the Internet of Vehicles network infrastructure is susceptible to flooding attacks. Reconfiguring the network infrastructure is difficult because network customization is not possible. As Software-Defined Networks provide a flexible programming environment for network customization, flooding attack detection for the Internet of Vehicles is integrated on top of them. The basic methodology used is crypto-fuzzy rules, in which a cryptographic standard is incorporated into traditional fuzzy rules. In this research work, an intelligent framework for secure transportation is proposed, with the basic ideas of security attacks on the Internet of Vehicles integrated with software-defined networking; the framework is intended for smart city applications. The proposed cognitive framework integrates traditional fuzzy, crypto-fuzzy, and Restricted Boltzmann Machine algorithms to detect malicious traffic flows in the Software-Defined Internet of Vehicles. It is inferred from the results that the intelligent framework for a secure transportation system achieves better attack detection accuracy with less delay and also prevents buffer overflow attacks. The proposed framework is not compared with existing methods; instead, it is tested with crypto and machine learning algorithms.
Given the abstract yet practical characteristics of the course Introduction to Software Engineering, a blended flipped classroom teaching method is used in the teaching process to stimulate students' interest in learning. Taking the SPOC course "Introduction to Software Engineering" offered by Chongqing University as an example, this study uses the blended flipped classroom teaching method of "learning before teaching". Online teaching resource design, teaching process design, and assessment design were devised and put into practice. Through the practice of SPOC-based blended flipped classroom teaching, the students' autonomous learning ability improved, an effective combination of online teaching and the offline classroom was realized, and the teaching effect of the course improved.
Recently, researchers have shown increasing interest in combining more than one programming model in systems running on high performance computing systems (HPCs) to reach exascale by applying parallelism at multiple levels. Combining different programming paradigms, such as the Message Passing Interface (MPI), Open Multi-Processing (OpenMP), and Open Accelerators (OpenACC), can increase computation speed and improve performance. During the integration of multiple models, the probability of runtime errors increases, making their detection difficult, especially in the absence of testing techniques that can detect these errors. Numerous studies have been conducted to identify such errors, but no technique exists for detecting errors in three-level programming models. Despite the increasing research that integrates the three programming models MPI, OpenMP, and OpenACC, a testing technology to detect the runtime errors that can arise from this integration, such as deadlocks and race conditions, has not been developed. Therefore, this paper begins with a definition and explanation of the runtime errors resulting from integrating the three programming models that compilers cannot detect. For the first time, this paper presents a classification of the operational errors that can result from the integration of the three models. This paper also proposes a parallel hybrid testing technique for detecting runtime errors in systems built in the C++ programming language that use the triple programming models MPI, OpenMP, and OpenACC. This hybrid technique combines static and dynamic analysis, given that some errors can be detected statically whereas others can only be detected at runtime, so the combination detects more errors than either alone. The proposed static analysis detects a wide range of error types in less time, whereas the portion of potential errors that may or may not occur depending on the operating environment is left to the dynamic analysis, which completes the validation.
Software cost estimation is a crucial aspect of software project management, significantly impacting productivity and planning. This research investigates the impact of various feature selection techniques on software cost estimation accuracy using the COCOMO NASA dataset, which comprises data from 93 unique software projects with 24 attributes. By applying multiple machine learning algorithms alongside three feature selection methods, this study aims to reduce data redundancy and enhance model accuracy. Our findings reveal that the principal component analysis (PCA)-based feature selection technique achieved the highest performance, underscoring the importance of optimal feature selection in improving software cost estimation accuracy. The proposed method is shown to outperform the existing method while achieving the highest precision, accuracy, and recall rates.
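As a rough picture of PCA-based dimensionality reduction feeding a cost-estimation model, the sketch below (assuming scikit-learn) builds a StandardScaler → PCA → regression pipeline. The data matrix is randomly generated to mirror only the dataset's shape (93 projects × 24 attributes); it is not the NASA dataset, and the paper's actual learners and evaluation are not reproduced.

```python
# PCA-based feature reduction followed by a regressor for effort estimation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(93, 24))          # placeholder: 93 projects x 24 attributes
y = 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.1, size=93)  # synthetic effort

model = make_pipeline(StandardScaler(), PCA(n_components=5), LinearRegression())
model.fit(X, y)
print("R^2 on the synthetic data:", round(model.score(X, y), 3))
```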
Software testing is a critical phase, and misconceptions about ambiguities in the requirements during specification affect the testing process, making it difficult to identify all faults in software. As requirements change continuously, irrelevancy and redundancy increase during testing. Due to these challenges, fault detection capability decreases, and the testing process needs to be improved based on changes in the requirements specification. In this research, we developed a model to resolve testing challenges through requirement prioritization and prediction in an agile environment. The research objective is to identify the most relevant and meaningful requirements through semantic analysis for correct change analysis. The similarity of requirements is then computed through case-based reasoning, which predicts requirements for reuse and restricts attention to error-prone requirements. Afterward, the Apriori algorithm maps requirement frequency to select relevant test cases, based on frequently reused or unused test cases, to increase the fault detection rate. The proposed model was evaluated through experiments. The results showed that requirement redundancy and irrelevancy improved due to semantic analysis, which correctly predicted the requirements; the predicted requirements are mapped into test cases, increasing the fault detection rate after changes and achieving higher user satisfaction. The model improves the redundancy and irrelevancy of requirements by more than 90% compared with other clustering methods and the analytic hierarchy process, achieving an 80% fault detection rate at an earlier stage. Hence, it provides guidelines for practitioners and researchers. In the future, we will provide a working prototype of this model as a proof of concept.
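One simple way to realise the requirement-similarity step described above is a TF-IDF/cosine comparison of requirement texts, sketched below assuming scikit-learn. The requirement sentences are invented examples, and the paper's own semantic analysis and case-based reasoning are richer than this.

```python
# Find the past requirement most similar to a new one via TF-IDF + cosine.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_requirements = [
    "The user shall reset the password via email",
    "The system shall export reports as PDF",
    "The admin shall lock an account after failed logins",
]
new_requirement = ["Allow users to recover a forgotten password by email"]

vectorizer = TfidfVectorizer(stop_words="english")
past_vecs = vectorizer.fit_transform(past_requirements)
new_vec = vectorizer.transform(new_requirement)

scores = cosine_similarity(new_vec, past_vecs)[0]
best = int(scores.argmax())
print(f"Most similar: '{past_requirements[best]}' (score {scores[best]:.2f})")
```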
The successful execution and management of Offshore Software Maintenance Outsourcing (OSMO) can be very beneficial for OSMO vendors and OSMO clients. Although much research on software outsourcing is ongoing, most of the existing literature on offshore outsourcing deals with the outsourcing of software development only. Several frameworks have been developed to guide software system managers in offshore software outsourcing, but none of these studies delivers comprehensive guidelines for managing the whole OSMO process. There is a considerable lack of research on managing OSMO from a vendor's perspective. Therefore, to find the best practices for managing an OSMO process, it is necessary to further investigate such complex and multifaceted phenomena from the vendor's perspective. This study validated the preliminary OSMO process model via a case study research approach. The results showed that the OSMO process model is applicable in an industrial setting with few changes. The industrial data collected during the case study enabled this paper to extend the preliminary OSMO process model. The refined version of the OSMO process model has four major phases: (i) Project Assessment, (ii) SLA, (iii) Execution, and (iv) Risk.
Nowadays software plays a very important role in almost all aspects of our daily lives, which gives great importance to the study field of Software Engineering. However, most current Software Engineering graduates in Jordan lack the knowledge and skills required to join the software industry, for many reasons. This research investigates these reasons by, firstly, analyzing more than 1000 software job listings in Jordanian and Gulf-area e-recruitment services in order to discover the skills and knowledge areas most required by the software industry in Jordan and the Gulf area, and, secondly, comparing these knowledge areas and skills with those provided by the Software Engineering curricula at Jordanian universities. The awareness of Software Engineering students and academic staff of the most required knowledge areas and skills is measured using two questionnaires. Recommendations to decrease the gap between Software Engineering academia and industry were also collected from a sample of software company managers using a third questionnaire. The results of this research reveal that many important skills, such as Web application development, are very poorly covered by Software Engineering curricula, and that many Software Engineering students and academic staff are not aware of many of the skills most needed to join the industry.