Maintaining software reliability is a key prerequisite for conducting quality research. This can be achieved by building less complex applications. While developers and other experts have made significant efforts in this context, the level of reliability is still not what it should be. Therefore, further research into detailed mechanisms for evaluating and increasing software reliability is essential. A significant aspect of raising the degree of application reliability is the quantitative assessment of reliability. Multiple statistical as well as soft computing methods are available in the literature for predicting software reliability. However, none of these mechanisms is suitable for all kinds of failure datasets and applications. Hence, finding the optimal model for reliability prediction is an important concern. This paper suggests a novel method for systematically selecting the best reliability prediction model. The method combines the analytic hierarchy process (AHP), hesitant fuzzy (HF) sets, and the technique for order of preference by similarity to ideal solution (TOPSIS). In addition, a procedural sensitivity analysis was performed over different iterations of the process to validate the findings. The resulting prioritization of software reliability prediction models will help developers estimate reliability based on the software type.
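The TOPSIS step of such a method can be sketched in a few lines: candidate reliability models are scored against weighted criteria and ranked by their relative closeness to an ideal solution. The matrix, weights, and criterion directions below are hypothetical, and the AHP/hesitant-fuzzy weighting stage is omitted; this is only the final ranking skeleton, not the paper's full method.

```python
import math

def topsis(matrix, weights, benefit):
    # matrix: rows = reliability models, cols = criterion scores
    # weights: criterion weights summing to 1
    # benefit[j]: True if larger is better for criterion j
    ncols = len(weights)
    # 1. Vector-normalize each column, then apply weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[row[j] * weights[j] / norms[j] for j in range(ncols)] for row in matrix]
    # 2. Ideal best/worst value per criterion
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    # 3. Relative closeness to the ideal solution
    scores = []
    for row in v:
        d_best = math.dist(row, best)
        d_worst = math.dist(row, worst)
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Hypothetical scores for three prediction models on two criteria
# (predictive accuracy: benefit, fitting error: cost); weights assumed.
scores = topsis([[0.90, 0.12], [0.80, 0.08], [0.70, 0.20]],
                weights=[0.6, 0.4], benefit=[True, False])
best_model = max(range(3), key=lambda i: scores[i])
```

In this toy instance the second model wins: its slightly lower accuracy is outweighed by its much lower fitting error under the assumed weights.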
The ability to accurately estimate the cost needed to complete a specific project has been a challenge over the past decades. For a successful software project, accurate prediction of cost, time, and effort is an essential task. This paper presents a systematic review of models used for software cost estimation, including algorithmic, non-algorithmic, and learning-oriented methods. The models considered cover both traditional and recent approaches to software cost estimation. The main objective of this paper is to provide an overview of software cost estimation models and to summarize their strengths, weaknesses, accuracy, data requirements, and validation techniques. Our findings show that, in general, neural-network-based models outperform other cost estimation techniques. However, no single technique fits every problem, and we recommend that practitioners search for the model that best fits their needs.
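As a concrete instance of the algorithmic family of estimators surveyed here, Basic COCOMO in organic mode estimates effort and schedule from program size alone. The coefficients below are Boehm's published organic-mode values; the 32 KLOC input is an arbitrary example, not a figure from this review.

```python
def cocomo_basic(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    """Basic COCOMO with organic-mode coefficients (Boehm, 1981).

    Returns (effort in person-months, development time in months).
    """
    effort = a * kloc ** b   # person-months
    months = c * effort ** d # calendar months
    return effort, months

effort, months = cocomo_basic(32)  # a 32 KLOC organic project
```

For 32 KLOC this yields roughly 91 person-months over about 14 calendar months, illustrating why purely size-driven models are attractive early in a project but, as the review notes, rarely fit every environment.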
Recently, researchers have shown increasing interest in combining more than one programming model in systems running on high-performance computing (HPC) platforms to reach exascale by applying parallelism at multiple levels. Combining different programming paradigms, such as the Message Passing Interface (MPI), Open Multi-Processing (OpenMP), and Open Accelerators (OpenACC), can increase computation speed and improve performance. During the integration of multiple models, the probability of runtime errors increases, making their detection difficult, especially in the absence of testing techniques that can detect them. Numerous studies have been conducted to identify such errors, but no technique exists for detecting errors in three-level programming models. Despite the growing body of research that integrates the three programming models MPI, OpenMP, and OpenACC, no testing technology has been developed to detect runtime errors, such as deadlocks and race conditions, that can arise from this integration. Therefore, this paper begins with a definition and explanation of the runtime errors resulting from integrating the three programming models that compilers cannot detect. For the first time, it presents a classification of the runtime errors that can result from this integration. The paper also proposes a parallel hybrid testing technique for detecting runtime errors in systems built in the C++ programming language that use the triple programming models MPI, OpenMP, and OpenACC. The hybrid technique combines static and dynamic analysis, given that some errors can be detected statically, whereas others can only be detected dynamically; by combining the two distinct technologies, it can detect more errors. The proposed static analysis detects a wide range of error types in less time, whereas the potential errors that may or may not occur depending on the operating environment are left to the dynamic analysis, which completes the validation.
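To illustrate the static half of such a hybrid approach, the toy checker below scans C/C++ source text for an unsynchronized write inside an `omp parallel` region, a pattern that compilers accept silently. It is a deliberately naive line-level sketch (a real tool would work on an AST with a may-happen-in-parallel analysis) and is not the technique proposed in the paper.

```python
import re

def find_unguarded_writes(source: str):
    """Toy static check: report writes to variables inside an
    'omp parallel' region that are not covered by an atomic,
    critical, or reduction construct. Heavily simplified."""
    issues = []
    in_parallel = guarded = False
    for lineno, line in enumerate(source.splitlines(), 1):
        if "#pragma omp parallel" in line:
            in_parallel = True
            guarded = "reduction" in line  # reduction protects the update
        elif "#pragma omp" in line and ("atomic" in line or "critical" in line):
            guarded = True                 # the next statement is protected
            continue
        elif in_parallel and not guarded:
            # crude pattern: an assignment or compound assignment at line start
            m = re.match(r"\s*(\w+)\s*(\+=|-=|\*=|=)", line)
            if m:
                issues.append((lineno, m.group(1)))
        else:
            guarded = False
    return issues

snippet = """\
#pragma omp parallel for
for (int i = 0; i < n; i++)
    sum += a[i];
"""
issues = find_unguarded_writes(snippet)
```

On the snippet, the checker flags the racy `sum += a[i]` on line 3; adding a `reduction(+:sum)` clause or an `atomic` pragma would silence it. Errors that depend on runtime state (message ordering, device scheduling) are exactly the ones such a static pass must hand off to dynamic analysis.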
The key to software reliability is the fault-tolerant design of application software. New fault-tolerant strategies and their design methods for application software under various computer systems are introduced. The approach has such advantages as a simple hardware platform, independence from the application, and stable reliability. Lastly, some technical problems are discussed in detail.
Based on a new algorithm for computing GIS image-pixel topographic factors in the remote sensing monitoring of soil losses, software was developed for microcomputers to carry out the computation at the scale of a medium river basin (county). This paper lays its emphasis on the algorithmic skills and programming techniques as well as the application of the software.
A user-oriented computer software package consisting of three modeling codes, named DRAD, DRAA, and FDPAT, is introduced. It can be used to design three types of Cassegrain system: classical, with a shaped subreflector, and with dual shaped reflectors, and to analyse the radiation patterns of the antennas. Several mathematical models and numerical techniques are presented.
Sentiment analysis is becoming increasingly important in today’s digital age, with social media being a significant source of user-generated content. Developing sentiment lexicons that support languages other than English is a challenging task, especially for sentiment analysis of social media reviews. Most existing sentiment analysis systems focus on English, leaving a significant research gap in other languages due to limited resources and tools. This research aims to address this gap by building a sentiment lexicon for local languages, which is then used with a machine learning algorithm for efficient sentiment analysis. In the first step, a lexicon is developed that includes five languages: Urdu, Roman Urdu, Pashto, Roman Pashto, and English. Sentiment scores from SentiWordNet are associated with each word in the lexicon to produce an effective sentiment score. In the second step, a naive Bayesian algorithm is applied to the developed lexicon for efficient sentiment analysis of Roman Pashto. Both the sentiment lexicon and the sentiment analysis step were evaluated using information retrieval metrics, with an accuracy of 0.89 for the sentiment lexicon and 0.83 for the sentiment analysis. The results showcase the potential for improving software engineering tasks related to user feedback analysis and product development.
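A minimal sketch of the lexicon-scoring step: each token's polarity score is summed, SentiWordNet-style, and the sign of the total yields a label. The Roman Pashto/Urdu entries and their scores below are illustrative assumptions, not entries from the authors' lexicon, and the naive Bayesian classification stage is omitted.

```python
# Hypothetical mini-lexicon: word -> sentiment score in [-1, 1],
# in the spirit of SentiWordNet polarity scores (entries invented).
LEXICON = {
    "kha": 0.8,        # Roman Pashto: "good" (illustrative)
    "kharab": -0.8,    # "bad" (illustrative)
    "zabardast": 0.9,  # "excellent" (illustrative)
}

def score_review(tokens):
    """Sum the lexicon scores of known tokens and map the total to a
    coarse label; words absent from the lexicon contribute nothing."""
    total = sum(LEXICON.get(t.lower(), 0.0) for t in tokens)
    if total > 0:
        return "positive"
    if total < 0:
        return "negative"
    return "neutral"

label = score_review("da mobile zabardast dey".split())
```

Only one token hits the lexicon here, which is exactly the coverage problem that motivates building dedicated lexicons for low-resource languages.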
Requirements elicitation is a fundamental phase of software development in which an analyst discovers the needs of different stakeholders and transforms them into requirements. This phase is cost- and time-intensive, and a project may fail if there are excessive costs and schedule overruns. COVID-19 has affected the software industry by reducing interactions between developers and customers. Such a lack of interaction is a key reason for the failure of software projects. Projects can also fail when customers do not know precisely what they want. Furthermore, selecting an unsuitable elicitation technique can also cause project failure. The present study therefore aimed to identify which requirements elicitation technique is the most cost-effective for large-scale projects when time to market is critical or when the customer is not available. To that end, we conducted a systematic literature review on requirements elicitation techniques. Most primary studies identified introspection as the best technique, followed by survey and brainstorming. This finding suggests that introspection should be the first choice of elicitation technique, especially when the customer is not available or the project has strict time and cost constraints. Moreover, introspection should also be used as the starting point in the elicitation process of a large-scale project, and all known requirements should be elicited using this technique.
In this paper, we identify a set of factors that may be used to forecast software productivity and software development time. Software productivity was measured in function points per person-hour, and software development time was measured in the number of elapsed days. Using field data on over 130 software projects from various industries, we empirically test the impact of team size, integrated computer-aided software engineering (ICASE) tools, software development type, software development platform, and programming language type on software development productivity and development time. Our results indicate that team size, software development type, software development platform, and programming language type significantly impact software development productivity. However, only team size significantly impacts software development time. Our results suggest that effective management of software development teams, and the use of different management strategies for different software development type environments, may improve software development productivity.
The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before testing and to minimize time and cost. Software with defects negatively impacts operational costs and ultimately affects customer satisfaction. Numerous approaches exist to predict software defects; however, predicting software bugs both timely and accurately remains a major challenge. To improve timely and accurate software defect prediction, a novel technique called the Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes, namely metric (feature) selection and classification. First, SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique to identify the relevant software metrics, measuring similarity with the Dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software faults are predicted with the help of quadratic censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder–Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The results demonstrate the superior performance of the proposed SQADEN technique, with maximum accuracy, sensitivity, and specificity by 3%, 3%, 2%, and 3%, and minimum time and space by 13% and 15%, when compared with the two state-of-the-art methods.
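The Dice coefficient used above for metric similarity has a compact set formulation, 2|A ∩ B| / (|A| + |B|). A minimal sketch, with two hypothetical metric profiles:

```python
def dice(a, b):
    """Dice coefficient between two sets of items (here, tokens of two
    software-metric profiles): 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty sets are conventionally identical
    return 2 * len(a & b) / (len(a) + len(b))

# Two hypothetical metric profiles sharing two of their three items.
similarity = dice({"loc", "cc", "fanout"}, {"loc", "cc", "depth"})
```

Sharing two of three items gives a similarity of 2/3; a feature-selection step can then keep one representative from each highly similar pair of metrics.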
A mathematical model that makes use of data mining and soft computing techniques is proposed to estimate the software development effort. The proposed model works as follows. The parameters that affect the development effort are divided into groups based on the distribution of their values in the available dataset. Linguistic terms are identified for the divided groups using fuzzy functions, and the parameters are fuzzified. Associative classification is then applied to the fuzzified parameters to generate association rules, which depict the parameters influencing the software development effort. Because many parameters influence the effort, a large number of rules is generated; to reduce this complexity, the generated rules are filtered with respect to the support and confidence metrics, which measure the strength of a rule. A genetic algorithm is then employed to select a set of high-quality rules and improve the accuracy of the model. Datasets such as Nasa93, Cocomo81, Desharnais, Maxwell, and Finnish-v2 are used to evaluate the proposed model, and evaluation metrics such as Mean Magnitude of Relative Error, Mean Absolute Residuals, Shepperd and MacDonell’s Standardized Accuracy, Enhanced Standardized Accuracy, and Effect Size are adopted to substantiate the effectiveness of the proposed methods. The results indicate that the accuracy of the model is influenced by the support and confidence metrics and by the number of association rules considered for effort prediction.
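The support/confidence filtering described above can be sketched for single-item rules: a rule X → Y is kept only when its support and confidence clear the chosen thresholds. The fuzzified effort drivers and thresholds below are invented for illustration; the paper's full associative classifier and genetic-algorithm selection are not reproduced.

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.4, min_confidence=0.7):
    """Keep single-item rules lhs -> rhs whose support and confidence
    meet the thresholds; support(X) = fraction of transactions with X."""
    n = len(transactions)
    items = {i for t in transactions for i in t}

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    rules = []
    for x, y in combinations(items, 2):
        for lhs, rhs in ((x, y), (y, x)):
            sup = support({lhs, rhs})
            conf = sup / support({lhs})
            if sup >= min_support and conf >= min_confidence:
                rules.append((lhs, rhs, sup, conf))
    return rules

# Hypothetical fuzzified effort drivers, one set per project.
data = [{"high_size", "high_effort"},
        {"high_size", "high_effort"},
        {"low_size", "low_effort"},
        {"high_size", "low_effort"}]
rules = mine_rules(data)
```

Here only "high_effort → high_size" survives: "high_size → high_effort" reaches the support threshold but its confidence (2/3) falls short, which is exactly the pruning role the support and confidence metrics play in the model.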
Purpose - This research addresses the important question of determining appropriate feature selection methods for software defect prediction. The study centers on the creation of a new method that enables the identification of both positive and negative selection criteria and the handling of ambiguous information in the decision-making process. Design/methodology/approach - To do so, we develop an improved method by extending the WASPAS assessment to the context of bipolar complex fuzzy sets, which leads to the bipolar complex fuzzy WASPAS method. The approach also uses Einstein operators to increase the accuracy of aggregation and to manage complicated decision-making parameters. The methodology is designed for multi-criteria decision-making problems in which criteria have positive and negative polarities as well as other ambiguous information. Findings - The proposed methodology is shown to outperform the traditional weighted sum and weighted product models when assessing feature selection methods. Incorporating bipolar complex fuzzy sets into WASPAS improves the assessment of selection criteria by taking both positive and negative aspects into account, which contributes to more accurate feature selection for software defect prediction. We investigate a case study on the identification of feature selection techniques for software defect prediction using the bipolar complex fuzzy WASPAS methodology, and we compare the proposed methodology with several prevailing ones to reveal its advantages and requirements. Originality/value - This research offers the first integrated framework for handling bipolarity and uncertainty in feature selection for software defect prediction. The combination of Einstein operators with bipolar complex fuzzy sets improves the decision-making process, which will be useful for software engineers in selecting the best feature selection techniques. This work also helps to enhance the overall performance of software defect prediction systems.
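For reference, classical crisp WASPAS blends the weighted-sum and weighted-product scores with a parameter λ; the bipolar complex fuzzy extension proposed here replaces the crisp entries with bipolar complex fuzzy values aggregated via Einstein operators, which this sketch does not attempt. The decision matrix and weights below are hypothetical benefit-criterion scores for three feature-selection methods.

```python
def waspas(matrix, weights, lam=0.5):
    """Classical (crisp) WASPAS: the joint score of each alternative is
    lam * WSM + (1 - lam) * WPM, computed on max-normalized benefit
    criteria. Only the aggregation skeleton of the method is shown."""
    ncols = len(weights)
    col_max = [max(row[j] for row in matrix) for j in range(ncols)]
    scores = []
    for row in matrix:
        norm = [row[j] / col_max[j] for j in range(ncols)]
        wsm = sum(w * x for w, x in zip(weights, norm))   # weighted sum
        wpm = 1.0
        for w, x in zip(weights, norm):                   # weighted product
            wpm *= x ** w
        scores.append(lam * wsm + (1 - lam) * wpm)
    return scores

# Three hypothetical feature-selection methods scored on two criteria.
scores = waspas([[0.8, 0.9], [0.9, 0.6], [0.7, 0.7]], weights=[0.5, 0.5])
```

With λ = 0.5 the first method ranks highest; varying λ between 0 (pure product model) and 1 (pure sum model) is the standard sensitivity check for WASPAS rankings.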
The fast development of the Internet has opened wider space for Information and Communications Technology (ICT) and has brought a series of challenges to traditional software theories, models, approaches, and technologies. This paper looks back at some popular Internet-based computing paradigms and application schemas, and discusses the current status and future trends of software technologies for Internet computing, covering software models, software runtime supporting platforms, software development methodologies, and software quality measurement and assurance.
Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software Defined Networks (SDN) to enhance resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures. It evaluates how SDN enables adaptive, policy-based handling of distributed resources through advanced orchestration processes. Furthermore, it proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies in next-generation cloud, fog, and edge computing ecosystems, and offers guidance for addressing upcoming resource management challenges in SDN-enabled environments.
Funding (software reliability prediction study): funded by Grant No. 12-INF2970-10 from the National Science, Technology and Innovation Plan (MAARIFAH), King Abdul-Aziz City for Science and Technology (KACST), Kingdom of Saudi Arabia.
Funding (runtime error detection study): King Abdulaziz University, Deanship of Scientific Research, Grant Number KEP-PHD-20-611-42.
Funding (sentiment lexicon study): Researchers Supporting Project Number RSPD2024R576, King Saud University, Riyadh, Saudi Arabia.
Funding (requirements elicitation study): funded through research group no. RG-1441-490.
文摘Requirements elicitation is a fundamental phase of software development in which an analyst discovers the needs of different stakeholders and transforms them into requirements.This phase is cost-and time-intensive,and a project may fail if there are excessive costs and schedule overruns.COVID-19 has affected the software industry by reducing interactions between developers and customers.Such a lack of interaction is a key reason for the failure of software projects.Projects can also fail when customers do not know precisely what they want.Furthermore,selecting the unsuitable elicitation technique can also cause project failure.The present study,therefore,aimed to identify which requirements elicitation technique is the most cost-effective for large-scale projects when time to market is a critical issue or when the customer is not available.To that end,we conducted a systematic literature review on requirements elicitation techniques.Most primary studies identified introspection as the best technique,followed by survey and brainstorming.This finding suggests that introspection should be the first choice of elicitation technique,especially when the customer is not available or the project has strict time and cost constraints.Moreover,introspection should also be used as the starting point in the elicitation process of a large-scale project,and all known requirements should be elicited using this technique.
文摘In this paper, we identify a set of factors that may be used to forecast software productivity and software development time. Software productivity was measured in function points per person hours, and software development time was measured in number of elapsed days. Using field data on over 130 field software projects from various industries, we empirically test the impact of team size, integrated computer aided software engineering (ICASE) tools, software development type, software development platform, and programming language type on the software development productivity and development time. Our results indicate that team size, software development type, software development platform, and programming language type significantly impact software development productivity. However, only team size significantly impacts software development time. Our results indicate that effective management of software development teams, and using different management strategies for different software development type environments may improve software development productivity.
文摘The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before the testing and to minimize the time and cost. The software with defects negatively impacts operational costs and finally affects customer satisfaction. Numerous approaches exist to predict software defects. However, the timely and accurate software bugs are the major challenging issues. To improve the timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes namely metric or feature selection and classification. First, the SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique for identifying the relevant software metrics by measuring the similarity using the dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software fault perdition with the help of the Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder–Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with a minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. 
The analyzed results demonstrate the superior performance of our proposed SQADEN technique with maximum accuracy, sensitivity and specificity by 3%, 3%, 2% and 3% and minimum time and space by 13% and 15% when compared with the two state-of-the-art methods.
文摘A mathematical model that makes use of data mining and soft computing techniques is proposed to estimate the software development effort. The proposed model works as follows: The parameters that have impact on the development effort are divided into groups based on the distribution of their values in the available dataset. The linguistic terms are identified for the divided groups using fuzzy functions, and the parameters are fuzzified. The fuzzified parameters then adopt associative classification for generating association rules. The association rules depict the parameters influencing the software development effort. As the number of parameters that influence the effort is more, a large number of rules get generated and can reduce the complexity, the generated rules are filtered with respect to the metrics, support and confidence, which measures the strength of the rule. Genetic algorithm is then employed for selecting set of rules with high quality to improve the accuracy of the model. The datasets such as Nasa93, Cocomo81, Desharnais, Maxwell, and Finnish-v2 are used for evaluating the proposed model, and various evaluation metrics such as Mean Magnitude of Relative Error, Mean Absolute Residuals, Shepperd and MacDonell’s Standardized Accuracy, Enhanced Standardized Accuracy and Effect Size are adopted to substantiate the effectiveness of the proposed methods. The results infer that the accuracy of the model is influenced by the metrics support, confidence, and the number of association rules considered for effort prediction.
Abstract: Purpose: This research focuses on an important research question: determining appropriate feature selection methods for software defect prediction. The study centers on the creation of a new method that enables the identification of both positive and negative selection criteria and the handling of ambiguous information in the decision-making process. Design/methodology/approach: To do so, we develop an improved method by extending the WASPAS assessment to the context of bipolar complex fuzzy sets, which leads to the bipolar complex fuzzy WASPAS method. The approach also uses Einstein operators to increase the accuracy of aggregation and manage complicated decision-making parameters. The methodology is designed for multi-criteria decision-making problems where criteria have positive and negative polarities as well as other ambiguous information. Findings: The proposed methodology is shown to outperform the traditional weighted-sum and weighted-product models when assessing feature selection methods. The incorporation of bipolar complex fuzzy sets into WASPAS improves the assessment of selection criteria by taking into account both positive and negative aspects of the criteria, which contributes to more accurate feature selection for software defect prediction. We investigate a case study on the identification of feature selection techniques for software defect prediction using the bipolar complex fuzzy WASPAS methodology, and compare the proposed methodology with prevailing ones to demonstrate its advantages and the need for the proposed theory. Originality/value: This research offers the first integrated framework for handling bipolarity and uncertainty in feature selection for software defect prediction. The combination of Einstein operators with bipolar complex fuzzy sets improves the decision-making process, which will be useful for software engineers and help them select the best feature selection techniques. This work also helps to enhance the overall performance of software defect prediction systems.
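For orientation, the classic crisp WASPAS aggregation that this work extends blends a weighted-sum and a weighted-product score. The sketch below shows only that baseline; the paper's bipolar complex fuzzy version with Einstein operators replaces these arithmetic aggregations. The ratings, weights, and λ = 0.5 are hypothetical.

```python
import math

def waspas_scores(matrix, weights, lam=0.5):
    """Classic (crisp) WASPAS: convex blend of the weighted-sum model (WSM)
    and the weighted-product model (WPM).
    matrix rows = alternatives; columns = benefit criteria normalized to [0, 1]."""
    scores = []
    for row in matrix:
        wsm = sum(w * x for w, x in zip(weights, row))
        wpm = math.prod(x ** w for w, x in zip(weights, row))
        scores.append(lam * wsm + (1 - lam) * wpm)
    return scores

# Hypothetical normalized ratings of three feature-selection methods
# on three criteria (e.g. accuracy gain, stability, runtime)
methods = [[0.9, 0.6, 0.8],
           [0.7, 0.9, 0.6],
           [0.5, 0.8, 0.9]]
weights = [0.5, 0.3, 0.2]
print(waspas_scores(methods, weights))  # the first method ranks highest here
```

The extended method replaces the crisp ratings with bipolar complex fuzzy values and the sum/product with Einstein aggregation operators, but the final ranking step is analogous.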
Funding: supported by the National Basic Research Program of China (2009CB320700) and the National Natural Science Foundation of China (60821003)
Abstract: The rapid development of the Internet has opened up a wider space for Information and Communications Technology (ICT) and brought a series of challenges for traditional software theories, models, approaches, and technologies. This paper looks back at some popular Internet-based computing paradigms and application schemas, and discusses the current status and future trends of software technologies for Internet computing, including software models, runtime supporting platforms, software development methodologies, and software quality measurement and assurance.
Abstract: Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software Defined Networks (SDN) to enhance resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures. It evaluates how SDN enables adaptive, policy-based handling of distributed resources through advanced orchestration processes. Furthermore, it proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies in next-generation cloud, fog, and edge computing ecosystems, and offers essential guidance for addressing upcoming management challenges in SDN-enabled environments.