Small-angle X-ray scattering (SAXS) is an advanced technique for characterizing the particle size distribution (PSD) of nanoparticles. However, the ill-posed nature of inverse problems in SAXS data analysis often reduces the accuracy of conventional methods. This article proposes GranuSAS, a user-friendly software package for PSD analysis, which employs an algorithm that integrates truncated singular value decomposition (TSVD) with the Chahine method. The approach uses TSVD for data preprocessing, generating a set of candidate initial solutions with noise suppression. A high-quality initial solution is then selected via the L-curve method and iteratively refined by the Chahine algorithm, which enforces constraints such as non-negativity and improves physical interpretability. Most importantly, GranuSAS employs a parallel architecture that simultaneously yields inversion results from multiple shape models and, by evaluating the accuracy of each model's reconstructed scattering curve, offers a suggestion for model selection in a given material system. To systematically validate the accuracy and efficiency of the software, verification was performed on both simulated and experimental datasets. The results demonstrate that the software delivers both satisfactory accuracy and reliable computational efficiency, providing an easy-to-use and reliable tool that helps materials scientists fully exploit the potential of SAXS in nanoparticle characterization.
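As a rough illustration of the inversion pipeline described above, the sketch below pairs a TSVD starting estimate with a multiplicative Chahine-style refinement. The toy kernel, bin grid, truncation level k (which the L-curve method would choose in GranuSAS), and the Twomey-Chahine update form are all assumptions for illustration, not GranuSAS's actual code.

```python
import numpy as np

def tsvd_solution(A, b, k):
    """Truncated-SVD estimate: keep only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U[:, :k].T @ b) / s[:k]
    return Vt[:k].T @ coeffs

def chahine_refine(A, b, x0, n_iter=200):
    """Multiplicative Chahine-style update (Twomey-Chahine form):
    non-negativity is preserved because each step only rescales the
    current non-negative solution. Requires b > 0."""
    x = np.clip(x0, 1e-12, None)
    for _ in range(n_iter):
        model = A @ x
        # rescale each size bin by the kernel-weighted measured/modeled ratio
        ratio = (A.T @ (b / model)) / A.sum(axis=0)
        x *= ratio
    return x

# Minimal demo on synthetic data (hypothetical Guinier-like kernel).
rng = np.random.default_rng(0)
q = np.linspace(0.005, 0.1, 80)          # scattering vectors
r = np.linspace(1.0, 50.0, 40)           # particle radii (size bins)
A = np.exp(-(np.outer(q, r) ** 2) / 3)   # toy kernel, not a real form factor
x_true = np.exp(-((r - 20) ** 2) / 20)
b = np.clip(A @ x_true + 1e-3 * rng.standard_normal(q.size), 1e-9, None)
x0 = tsvd_solution(A, b, k=8)            # k would come from the L-curve
x = chahine_refine(A, b, x0)
```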
Test case prioritization and ranking play a crucial role in software testing by improving fault detection efficiency and ensuring software reliability. While prioritization selects the most relevant test cases for optimal coverage, ranking further refines their execution order to detect critical faults earlier. This study investigates machine learning techniques to enhance both prioritization and ranking, contributing to more effective and efficient testing processes. We first employ advanced feature engineering alongside ensemble models, including Gradient Boosting, Support Vector Machine, Random Forest, and Naive Bayes classifiers, to optimize test case prioritization, achieving an accuracy score of 0.98847 and significantly improving the Average Percentage of Faults Detected (APFD). Subsequently, we introduce a deep Q-learning framework combined with a Genetic Algorithm (GA) to refine test case ranking within priority levels. This approach achieves a rank accuracy of 0.9172, demonstrating robust performance despite the increased computational demands of specialized variation operators. Our findings highlight the effectiveness of stacked ensemble learning and reinforcement learning in optimizing test case prioritization and ranking. This integrated approach improves testing efficiency, reduces late-stage defects, and enhances overall software stability. The study provides valuable insights for AI-driven testing frameworks, paving the way for more intelligent and adaptive software quality assurance methodologies.
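The APFD metric mentioned above has a standard closed form: APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where n is the number of tests, m the number of faults, and TF_i the 1-based position of the first test revealing fault i. A minimal sketch on a hypothetical fault matrix (not the paper's data):

```python
def apfd(order, fault_matrix):
    """Average Percentage of Faults Detected for a test ordering.
    Assumes every fault is detected by at least one test (the standard
    APFD precondition).

    order: list of test indices in execution order.
    fault_matrix[t][f] is True if test t reveals fault f.
    """
    n = len(order)
    m = len(fault_matrix[0])
    first_pos = []
    for f in range(m):
        pos = next(i + 1 for i, t in enumerate(order) if fault_matrix[t][f])
        first_pos.append(pos)
    return 1 - sum(first_pos) / (n * m) + 1 / (2 * n)

# Toy example: 4 tests, 3 faults.
faults = [
    [True,  False, False],   # test 0 reveals fault 0
    [False, True,  False],   # test 1 reveals fault 1
    [False, False, True ],   # test 2 reveals fault 2
    [True,  True,  False],   # test 3 reveals faults 0 and 1
]
print(apfd([3, 2, 0, 1], faults))  # 0.79..., a good ordering scores higher
print(apfd([0, 1, 2, 3], faults))  # 0.625
```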
Satellite communication networks have been evolving from standalone networks with ad-hoc infrastructures into possibly interconnected portions of a wider Future Internet architecture. Experts on the fifth-generation (5G) standardization committees are considering satellites as a technology to integrate into the 5G environment. Software Defined Networking (SDN) is one of the paradigms of the next generation of mobile and fixed communications. It can be employed to perform different control functionalities, such as routing, because it allows traffic flows to be identified by different parameters and managed in a centralized way. A centralized set of controllers makes the decisions and sends the corresponding forwarding rules for each traffic flow to the intermediate nodes involved, which then forward the data to the destination. In integrated terrestrial-satellite networks, the time to perform this process may not be negligible due to satellite link delays. The aim of this paper is to introduce an SDN-based terrestrial-satellite network architecture and to estimate the mean time to deliver the data of a new traffic flow from source to destination, including the time required to transfer SDN control actions. The practical effect is to identify the maximum performance that can be expected.
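To see why satellite link delays matter for the flow setup time the paper estimates, here is a back-of-the-envelope model. All delay figures are illustrative assumptions (a GEO hop, nominal processing and rule-install times), not values from the paper:

```python
# Illustrative first-packet delay budget for a reactive SDN flow setup
# over a GEO satellite link (all figures are assumptions).
ONE_WAY_GEO_MS = 270.0      # ~35,786 km GEO hop, one way
CONTROLLER_PROC_MS = 2.0    # rule computation at the controller
RULE_INSTALL_MS = 5.0       # flow-mod installation per switch
PATH_SWITCHES = 3

def first_packet_delay_ms(controller_behind_satellite: bool) -> float:
    # packet-in to controller, decision, flow-mods back, then data forwarding
    control_rtt = 2 * ONE_WAY_GEO_MS if controller_behind_satellite else 2 * 1.0
    setup = control_rtt + CONTROLLER_PROC_MS + PATH_SWITCHES * RULE_INSTALL_MS
    data_path = ONE_WAY_GEO_MS  # the flow itself still crosses the satellite hop
    return setup + data_path

print(first_packet_delay_ms(True))   # control actions cross the satellite
print(first_packet_delay_ms(False))  # controller reachable terrestrially
```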
Maintaining software reliability is the key idea for conducting quality research; this can be done by building less complex applications. While developers and other experts have made significant efforts in this direction, the achieved level of reliability is not yet what it should be. Therefore, further research into more detailed mechanisms for evaluating and increasing software reliability is essential. A significant aspect of raising the degree of application reliability is the quantitative assessment of reliability. Multiple statistical as well as soft computing methods are available in the literature for predicting software reliability; however, none of these mechanisms is useful for all kinds of failure datasets and applications. Hence, finding the optimal model for reliability prediction is an important concern. This paper suggests a novel method to systematically pick the best reliability prediction model. The method combines the analytic hierarchy process (AHP), hesitant fuzzy (HF) sets, and the technique for order of preference by similarity to ideal solution (TOPSIS). In addition, a procedural sensitivity analysis across different iterations of the process was performed to validate the findings. The resulting prioritization of software reliability prediction models will help developers choose a reliability prediction model based on the software type.
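For concreteness, the core TOPSIS ranking step that the proposed AHP/HF/TOPSIS combination builds on can be sketched as follows. The decision matrix, weights, and criterion directions are hypothetical; the paper's method additionally derives weights via AHP and scores via hesitant fuzzy sets.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Plain (crisp) TOPSIS ranking.

    decision_matrix: alternatives x criteria
    benefit: True for criteria to maximize, False to minimize
    """
    X = np.asarray(decision_matrix, dtype=float)
    R = X / np.linalg.norm(X, axis=0)           # vector normalization
    V = R * weights                              # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)          # closeness: higher is better

# Hypothetical scores for 3 reliability models on 3 criteria.
scores = topsis([[0.8, 0.6, 0.3],
                 [0.6, 0.9, 0.5],
                 [0.9, 0.5, 0.2]],
                weights=[0.5, 0.3, 0.2],
                benefit=[True, True, False])
print(scores.argsort()[::-1])  # model ranking, best first
```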
This research work evaluates the performance of the Re-UCP model and compares the results with the UCP and e-UCP methods of software effort estimation. An attempt has been made to assess the accuracy of the results by using MRE (Magnitude of Relative Error), MMRE (Mean Magnitude of Relative Error), and MdMRE (Median Magnitude of Relative Error) to check the error rate, and the PRED(20) and PRED(10) methods to measure the prediction accuracy of the Re-UCP software effort estimation method. The observations drawn from the results are based on the comparison of the Re-UCP, e-UCP, and UCP models of software effort estimation.
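The error metrics named above have standard definitions: MRE = |actual - estimated| / actual per project, MMRE is its mean, MdMRE its median, and PRED(x) the fraction of projects with MRE <= x/100. A minimal sketch with hypothetical effort values (not the paper's data):

```python
import numpy as np

def mre(actual, estimated):
    """Magnitude of Relative Error per project."""
    actual = np.asarray(actual, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    return np.abs(actual - estimated) / actual

def summary(actual, estimated, levels=(0.10, 0.20)):
    e = mre(actual, estimated)
    out = {"MMRE": e.mean(), "MdMRE": np.median(e)}
    for x in levels:
        out[f"PRED({int(x * 100)})"] = (e <= x).mean()  # share within x*100%
    return out

# Hypothetical effort values (person-hours) for five projects.
print(summary(actual=[120, 300, 95, 410, 150],
              estimated=[130, 270, 100, 380, 200]))
```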
Software cost estimation is a main concern of the software industry. However, the fact is also that in today's scenario, software industries are more interested in other issues, such as new technologies in the market, shorter development times, and skill shortages; they are effectively deviating from critical issues to routine ones. Today, people expect high-quality products at very low cost, and the same is the goal of software engineering. Accuracy in software cost estimation has a direct impact on a company's reputation and also affects software investment decisions. Accurate cost estimation can minimize unnecessary costs and increase the productivity and efficiency of the company. The objective of this paper is to identify the existing methods of software cost estimation prevailing in the market and to analyze some of the important factors impacting the software cost estimation process. In order to achieve this objective, a survey was conducted to find out:
● the nature of projects that companies prefer;
● the impact of training on employees in software cost estimation;
● how many people review the estimated cost;
● how much risk buffer the company keeps for future prospects.
Software security poses substantial risks to our society because software has become part of our lives. Numerous techniques have been proposed to resolve or mitigate the impact of software security issues. Among them, software testing and analysis are two critical methods, which benefit significantly from advancements in deep learning technologies. Following the successful use of deep learning in software security, researchers have recently explored the potential of using large language models (LLMs) in this area. In this paper, we systematically review the results focusing on LLMs in software security. We analyze the topics of fuzzing, unit testing, program repair, bug reproduction, data-driven bug detection, and bug triage. We deconstruct these techniques into several stages and analyze how LLMs can be used in each stage. We also discuss future directions for using LLMs in software security, including directions for the existing uses of LLMs and extensions from conventional deep learning research.
This study presents a three-step decision-making process of knowledge management for a test organization, using process simulation and financial analysis. First, project cost assessment of the test knowledge management process is performed under different project durations and staffing levels. Two knowledge management simulation models, representing experienced personnel with knowledge sharing and inexperienced personnel with internal training respectively, are employed to contrast test personnel capability. Second, performance evaluation of the software testing process under different personnel capabilities is conducted by simulating a system test with three project metrics: duration, effort cost, and quality. Third, a comparative financial analysis is prepared to determine the best solution by return on investment, payback period, and benefit-cost ratio. The findings from the three stages are discussed to arrive at the final scenario. We provide a case study evaluating how the software testing industry can build an effective test organization with high-quality personnel for sustainable development and improvement.
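The three financial indicators used in the third step have textbook forms; the sketch below applies them to hypothetical cash flows (amounts and discount rate are assumptions, not the study's figures):

```python
def roi(total_benefit, total_cost):
    """Return on investment: net benefit relative to cost."""
    return (total_benefit - total_cost) / total_cost

def payback_period_years(initial_cost, annual_net_cash_flow):
    """Years until cumulative net cash flow repays the initial cost."""
    return initial_cost / annual_net_cash_flow

def benefit_cost_ratio(annual_benefits, annual_costs, rate):
    """Discount both streams to present value before taking the ratio."""
    def pv(flows):
        return sum(f / (1 + rate) ** t for t, f in enumerate(flows, start=1))
    return pv(annual_benefits) / pv(annual_costs)

# Hypothetical 3-year comparison of a training-based test organization.
print(roi(total_benefit=180_000, total_cost=120_000))             # 0.5
print(payback_period_years(120_000, 60_000))                      # 2.0 years
print(benefit_cost_ratio([60_000] * 3, [40_000] * 3, rate=0.08))  # > 1 is good
```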
Design architecture is the edifice that strengthens both the functionality and the security of web applications. To facilitate architectural security from the web application's design phase itself, practitioners are now adopting the novel mechanism of security tactics. To conduct research from the perspective of security tactics, the present study employs a hybrid multi-criteria decision-making approach, the fuzzy analytic hierarchy process-technique for order preference by similarity to ideal solution (fuzzy AHP-TOPSIS), for selecting and assessing multi-criteria decisions. The adopted methodology blends the fuzzy analytic hierarchy process (fuzzy AHP) with the fuzzy technique for order preference by similarity to ideal solution (fuzzy TOPSIS). To establish the efficacy of this methodology, the results obtained from the evaluation were tested on fifteen different web application projects (an online quiz competition, an entrance test, and others) of the Babasaheb Bhimrao Ambedkar University, Lucknow, India. The tabulated outcomes demonstrate that the multi-level fuzzy hybrid methodology is highly effective in providing accurate estimation for strengthening the security of web applications. The proposed study will help experts and developers develop and manage security from any web application's design phase, for better accuracy and higher security.
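As an illustration of the fuzzy AHP half of the methodology, the sketch below derives criterion weights from a triangular-fuzzy pairwise comparison matrix using Buckley's geometric-mean method. The matrix values are hypothetical, and the paper's exact aggregation and defuzzification choices may differ.

```python
import numpy as np

def fuzzy_ahp_weights(P):
    """Buckley's geometric-mean method for a triangular-fuzzy pairwise
    comparison matrix P of shape (n, n, 3), entries (l, m, u)."""
    P = np.asarray(P, dtype=float)
    r = P.prod(axis=1) ** (1.0 / P.shape[0])   # fuzzy geometric mean per row
    total = r.sum(axis=0)                       # (sum_l, sum_m, sum_u)
    # multiply by the fuzzy reciprocal (1/u, 1/m, 1/l) of the total
    w = r * (1.0 / total[::-1])
    crisp = w.mean(axis=1)                      # centroid defuzzification
    return crisp / crisp.sum()

# Hypothetical 2-criterion example: criterion 0 is "weakly more important".
one = (1, 1, 1)
P = [[one,             (1, 2, 3)],
     [(1/3, 1/2, 1),   one      ]]
print(fuzzy_ahp_weights(P))  # weights favor criterion 0
```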
Spectrum-based fault localization (SBFL) generates a ranked list of suspicious elements by using the program execution spectrum, but the excessive number of elements ranked in parallel results in low localization accuracy. Most researchers consider intra-class dependencies to improve localization accuracy. However, some studies show that inter-class method-call-type faults account for more than 20%, which means such methods still have certain limitations. To solve these problems, this paper proposes a two-phase software fault localization based on relational graph convolutional neural networks (Two-RGCNFL). In Phase 1, the method call dependence graph (MCDG) of the program is constructed, the intra-class and inter-class dependencies in the MCDG are extracted using a relational graph convolutional neural network, and a classifier is used to identify the faulty methods. The GraphSMOTE algorithm is then improved to alleviate the impact of class imbalance on classification accuracy. To address the parallel ranking of element suspiciousness values in traditional SBFL, Phase 2 uses Doc2Vec to learn static features while spectrum information serves as dynamic features, and a RankNet model based on a Siamese multi-layer perceptron is constructed to score and rank statements within the faulty method. This work conducts experiments on 5 real projects from the Defects4J benchmark. Experimental results show that, compared with the traditional SBFL technique and two baseline methods, our approach improves Top-1 accuracy by 262.86%, 29.59%, and 53.01%, respectively, which verifies the effectiveness of Two-RGCNFL. Furthermore, this work verifies the importance of inter-class dependencies through ablation experiments.
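For context, a classic SBFL suspiciousness formula such as Ochiai, sketched below on a hypothetical coverage matrix, shows how spectrum information yields a ranked list and why ties (elements ranked in parallel) arise, which motivates the paper's learning-based second phase:

```python
import math

def ochiai(ef, ep, nf):
    """Classic SBFL suspiciousness (Ochiai): ef/ep = failed/passed tests
    covering the element, nf = failed tests not covering it."""
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

# coverage[test] = set of covered statements; failed = set of failing tests
coverage = {0: {1, 2}, 1: {2, 3}, 2: {1, 3}, 3: {3}}
failed = {1, 3}
stmts = set().union(*coverage.values())
total_failed = len(failed)
scores = {}
for s in stmts:
    ef = sum(1 for t in failed if s in coverage[t])
    ep = sum(1 for t in coverage if t not in failed and s in coverage[t])
    scores[s] = ochiai(ef, ep, total_failed - ef)
# Ranked list of suspicious statements, most suspicious first.
print(sorted(scores, key=scores.get, reverse=True))
```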
Software-defined networking (SDN) is an innovative paradigm that separates the control and data planes, introducing centralized network control. SDN is increasingly being adopted by carrier-grade networks, offering enhanced network management capabilities compared with traditional networks. However, because SDN is expected to ensure high-level service availability, it faces additional challenges. One of the most critical is ensuring efficient detection of, and recovery from, link failures in the data plane. Such failures can significantly impact network performance and lead to service outages, making resiliency a key concern for the effective adoption of SDN. Since the recovery process is intrinsically dependent on timely failure detection, this research surveys and analyzes the current literature on both failure detection and recovery approaches in SDN. The survey provides a critical comparison of existing failure detection techniques, highlighting their advantages and disadvantages. Additionally, it examines current failure recovery methods, categorized as either restoration-based or protection-based, and offers a comprehensive comparison of their strengths and limitations. Lastly, future research challenges and directions are discussed to address the shortcomings of existing failure recovery methods.
Software-related security aspects are a growing and legitimate concern, especially with 5G data available right at our palms. To conduct research in this field, periodic comparative analysis is needed as new techniques emerge rapidly. The purpose of this study is to review recent developments in the field of security integration in the software development lifecycle (SDLC) by analyzing articles published in the last two decades, and to propose a way forward. This review follows Kitchenham's review protocol and is divided into three main stages: planning, execution, and analysis. From the 100 selected articles, it becomes evident that a collaborative approach is necessary for addressing critical software security risks (CSSRs) through effective risk management/estimation techniques. Quantifying risks on a numeric scale enables a comprehensive understanding of their severity, facilitating focused resource allocation and mitigation efforts. Through a comprehensive understanding of potential vulnerabilities and proactive mitigation efforts facilitated by protection poker, organizations can prioritize resources effectively to ensure the successful outcome of projects and initiatives in today's dynamic threat landscape. The review reveals that threat analysis and security testing require automated tool support in the future. Accurately estimating the effort required to prioritize potential security risks is a major challenge in software security. The accuracy of effort estimation can be further improved by exploring new techniques, particularly those involving deep learning. It is also imperative to validate these effort estimation methods to ensure all potential security threats are addressed. Another challenge is selecting the right model for each specific security threat. To achieve a comprehensive evaluation, researchers should use well-known benchmark checklists.
Link failure is a critical issue in large networks and must be effectively addressed. In software-defined networks (SDN), link failure recovery schemes can be categorized into proactive and reactive approaches. Reactive schemes have longer recovery times, while proactive schemes provide faster recovery but overwhelm switch memory with flow entries. As SDN adoption grows, ensuring efficient recovery from link failures in the data plane becomes crucial. In particular, data center networks (DCNs) demand rapid recovery and efficient resource utilization to meet carrier-grade requirements. This paper proposes an efficient Decentralized Failure Recovery (DFR) model for SDNs that meets recovery time requirements while optimizing switch memory consumption. The DFR model enables switches to autonomously reroute traffic upon link failures without involving the controller, achieving fast recovery times while minimizing memory usage. DFR employs the Fast Failover Group of the OpenFlow standard for local recovery without controller communication, and utilizes the k-shortest path algorithm to proactively install backup paths, allowing immediate local recovery without controller intervention and enhancing overall network stability and scalability. DFR also employs flow entry aggregation to reduce switch memory usage: instead of matching flow entries to the destination host's MAC address, DFR matches packets to the destination switch's MAC address, which reduces the switches' Ternary Content-Addressable Memory (TCAM) consumption. Additionally, DFR modifies Address Resolution Protocol (ARP) replies to provide source hosts with the destination switch's MAC address, facilitating flow entry aggregation without affecting normal network operations. The performance of DFR is evaluated in the network emulator Mininet 2.3.1 with Ryu 3.1 as the SDN controller. For different numbers of active flows, hosts per edge switch, and network sizes, the proposed model outperformed several failure recovery models (restoration-based, protection by flow entries, protection by group entries, and protection by VLAN tagging) in terms of recovery time, switch memory consumption, and controller overhead, measured as the number of flow entry updates needed to recover from the failure. Experimental results demonstrate that DFR achieves recovery times under 20 milliseconds, satisfying carrier-grade requirements for rapid failure recovery. Additionally, DFR reduces switch memory usage by up to 95% compared to traditional protection methods and minimizes controller load by eliminating controller intervention during failure recovery. The results underscore the efficiency and scalability of the DFR model, making it a practical solution for enhancing network resilience in SDN environments.
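The backup-path computation that DFR pre-installs can be illustrated with a standard k-shortest-paths enumeration. The sketch below uses networkx on a toy topology standing in for a DCN fabric; the paper's actual path selection and OpenFlow group programming are not shown.

```python
from itertools import islice
import networkx as nx

def k_shortest_paths(G, src, dst, k):
    """Loop-free paths in increasing length order (Yen-style enumeration
    as implemented by networkx.shortest_simple_paths)."""
    return list(islice(nx.shortest_simple_paths(G, src, dst), k))

# Hypothetical topology: a primary path plus disjoint backups that a
# DFR-style scheme would pre-install as Fast Failover buckets.
G = nx.Graph()
G.add_edges_from([("s1", "s2"), ("s2", "s4"), ("s1", "s3"),
                  ("s3", "s4"), ("s1", "s4")])
for path in k_shortest_paths(G, "s1", "s4", k=3):
    print(path)  # first entry is the primary path, the rest are backups
```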
Software defect prediction (SDP) aims to find a reliable method to predict defects in specific software projects and to help software engineers allocate limited resources toward releasing high-quality software products. Software defect prediction can be performed effectively using traditional features, but some of these features are redundant or irrelevant (their presence or absence has little effect on the prediction results). These problems can be addressed by feature selection. However, existing feature selection methods have shortcomings such as insignificant dimensionality reduction and low classification accuracy for the selected optimal feature subset. To reduce the impact of these shortcomings, this paper proposes a new feature selection method, the Cubic TraverseMa Beluga whale optimization algorithm (CTMBWO), based on the improved Beluga whale optimization algorithm (BWO). The goal of this study is to determine how well the CTMBWO can extract the features that matter most for correctly predicting software defects, improve fault prediction accuracy, reduce the number of selected features, and mitigate the risk of overfitting, thereby achieving more efficient resource utilization and better distribution of test workload. The CTMBWO comprises three main stages: preprocessing the dataset, selecting relevant features, and evaluating the classification performance of the model. The novel feature selection method can effectively improve the performance of SDP. This study performs experiments on two software defect datasets (PROMISE, NASA) and reports the method's classification performance using the evaluation metrics Accuracy, F1-score, MCC, AUC, and Recall. The results indicate that the approach presented in this paper achieves outstanding classification performance on both datasets and improves significantly over the baseline models.
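Wrapper-style feature selection of the kind CTMBWO performs typically scores a candidate feature subset by trading classification error against subset size. The sketch below shows such a fitness function; the classifier, the weighting alpha, and the synthetic data are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y, alpha=0.99):
    """Wrapper fitness common in metaheuristic feature selection:
    weighted sum of cross-validated error and subset-size ratio."""
    if not mask.any():
        return 1.0  # empty subsets are worst
    acc = cross_val_score(KNeighborsClassifier(5), X[:, mask], y, cv=5).mean()
    size_ratio = mask.sum() / mask.size
    return alpha * (1 - acc) + (1 - alpha) * size_ratio  # lower is better

# A metaheuristic like BWO would evolve a population of such masks;
# here we just score one random candidate on synthetic data.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # only features 0 and 3 matter
mask = rng.random(20) < 0.5
print(fitness(mask, X, y))
```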
The advent of parametric design has resulted in a marked increase in the complexity of buildings, and traditional construction methods struggle to meet these needs. Construction robots have therefore become a pivotal production tool. Since the arm span of a single robot usually does not exceed 3 meters, it is not sufficient for producing large-scale building components; accordingly, the robot's working range is often extended with external axes. Nevertheless, the coupled control of external axes and robots, and their kinematic solution, remain key challenges. The primary technical difficulties include customized construction robots, automatic solutions for external axes, fixed axis joints, and specific motion mode control. This paper proposes solutions to these difficulties, introduces the relevant basic concepts and algorithms in detail, and encapsulates these robotics principles and algorithmic processes into the Grasshopper plug-in commonly used by architects, forming the FURobot software platform. The platform effectively solves the above problems, lowers the threshold for architects, and improves production efficiency. The effectiveness of the algorithms and software is verified through simulation experiments.
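The coupling problem can be illustrated in miniature: with a linear external axis (a rail) and a planar two-link arm, one simple heuristic slides the rail until the target is within reach and then solves the arm joints analytically. The sketch below only illustrates the redundancy introduced by external axes; it is not FURobot's solver, and the link lengths and margin are assumptions.

```python
import math

def plan(target_x, target_y, l1=1.5, l2=1.4, rail_home=0.0):
    """Toy coupling of a linear external axis with a planar 2-link arm:
    move the rail so the target falls inside the arm's reach, then solve
    the joint angles by the law of cosines."""
    reach = l1 + l2
    rail = rail_home
    if abs(target_x - rail) > 0.9 * reach:        # keep a comfort margin
        rail = target_x - 0.9 * reach * math.copysign(1, target_x - rail)
    dx, dy = target_x - rail, target_y
    d2 = dx * dx + dy * dy
    c2 = (d2 - l1**2 - l2**2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target unreachable even after moving the rail")
    q2 = math.acos(c2)                             # elbow angle
    q1 = math.atan2(dy, dx) - math.atan2(l2 * math.sin(q2),
                                         l1 + l2 * math.cos(q2))
    return rail, q1, q2

print(plan(6.0, 1.0))  # rail translation plus the two joint angles
```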
In this work, we present a parallel implementation of radiation hydrodynamics coupled with particle transport, built on the software infrastructure JASMIN (J Adaptive Structured Meshes applications INfrastructure), which encapsulates high-performance technology for the numerical simulation of complex applications. Two serial codes, the radiation hydrodynamics code RH2D and the particle transport code Sn2D, have been integrated into RHSn2D on the JASMIN infrastructure, which can efficiently use thousands of processors to simulate complex multi-physics phenomena. Moreover, a non-conforming processors strategy protects RHSn2D against the serious load imbalance between radiation hydrodynamics and particle transport in large-scale parallel simulations. Numerical results show that RHSn2D achieves a parallel efficiency of 17.1% with 90720 cells on 8192 processors, relative to 256 processors on the same problem.
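Parallel efficiency relative to a non-trivial baseline is E = (T_ref * P_ref) / (T_P * P), so the quoted 17.1% from 256 to 8192 processors implies a relative speedup of about 5.5x. A quick check with hypothetical timings (the paper's absolute runtimes are not given here):

```python
def relative_efficiency(t_ref, p_ref, t_p, p):
    """Parallel efficiency of p processors relative to a p_ref baseline:
    E = (t_ref * p_ref) / (t_p * p)."""
    return (t_ref * p_ref) / (t_p * p)

# Illustrative timings: an efficiency of 17.1% going from 256 to 8192
# processors corresponds to a relative speedup of 0.171 * (8192/256).
t_ref = 100.0                        # hypothetical runtime on 256 procs
t_p = t_ref / (0.171 * 8192 / 256)   # implied runtime on 8192 procs
print(relative_efficiency(t_ref, 256, t_p, 8192))  # ~0.171
print(0.171 * 8192 / 256)                           # implied speedup ~5.5x
```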
Software systems play increasingly important roles in modern society, and the ability to withstand attacks is of great practical importance for crucial software systems; consequently, the structure and robustness of software systems have attracted a tremendous amount of interest in recent years. In this paper, based on the source code of Tar and MySQL, we propose an approach to generate coupled software networks and construct three kinds of directed software networks: the function call network, the weakly coupled network, and the strongly coupled network. The structural properties of these complex networks are extensively investigated. It is found that the average influence and the average dependence over all functions are the same. Moreover, eight attacking strategies and two robustness indicators (the weakly connected indicator and the strongly connected indicator) are introduced to analyze the robustness of software networks. The analysis shows that the strongly coupled network is only weakly connected rather than strongly connected. For MySQL, the high in-degree strategy outperforms other attacking strategies when the weakly connected indicator is used; on the other hand, the high out-degree strategy is a good choice when the strongly connected indicator is adopted. This work contributes to a better understanding of the structure and robustness of software networks.
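A degree-ordered attack of the kind compared in the paper can be sketched with networkx. The random digraph below is a stand-in for the extracted software networks, and the static (initial-degree) removal order is an assumption about the strategy.

```python
import networkx as nx

def attack(G, key):
    """Remove nodes of G in decreasing order of `key` and track the
    fraction of remaining nodes in the largest weakly connected component."""
    H = G.copy()
    sizes = []
    for n in sorted(G, key=key, reverse=True):
        H.remove_node(n)
        if len(H) == 0:
            sizes.append(0.0)
            break
        wcc = max(nx.weakly_connected_components(H), key=len)
        sizes.append(len(wcc) / G.number_of_nodes())
    return sizes

# Hypothetical call graph standing in for Tar/MySQL's function networks.
G = nx.gnp_random_graph(200, 0.03, seed=7, directed=True)
print(attack(G, key=G.in_degree)[:5])   # high in-degree strategy
print(attack(G, key=G.out_degree)[:5])  # high out-degree strategy
```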
Quantum software development utilizes quantum phenomena such as superposition and entanglement to address problems that are challenging for classical systems. However, it must also adhere to critical quantum constraints, notably the no-cloning theorem, which prohibits the exact duplication of unknown quantum states and has profound implications for cryptography, secure communication, and error correction. While existing quantum circuit representations implicitly honor such constraints, they lack formal mechanisms for early-stage verification in software design. Addressing this constraint at the design phase is essential to ensure the correctness and reliability of quantum software. This paper presents a formal metamodeling framework using UML-style notation and the Object Constraint Language (OCL) to systematically capture and enforce the no-cloning theorem within quantum software models. The proposed metamodel formalizes key quantum concepts, such as entanglement and teleportation, and encodes enforceable invariants that reflect core quantum mechanical laws. The framework's effectiveness is validated by analyzing two critical edge cases, conditional copying with CNOT gates and quantum teleportation, through instance model evaluations. These cases demonstrate that the metamodel can capture nuanced scenarios that are often mistaken for violations of the no-cloning theorem but are proven compliant under formal analysis; they thus serve as constructive validations of the metamodel's expressiveness and correctness in representing operations that appear to challenge the theorem but, upon rigorous analysis, comply with it. The approach supports early detection of conceptual design errors, promoting correctness prior to implementation. The framework's extensibility is demonstrated by modeling projective measurement, further reinforcing its applicability to broader quantum software engineering tasks. By integrating the rigor of metamodeling with fundamental quantum mechanical principles, this work provides a structured, model-driven approach that enables traditional software engineers to address quantum computing challenges. It offers practical insights into embedding quantum correctness at the modeling level and advances the development of reliable, error-resilient quantum software systems.
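The conditional-copying edge case can also be checked numerically: a CNOT duplicates computational basis states into a fresh |0> register but entangles superpositions, so it never violates no-cloning. The sketch below is a standalone numpy illustration, independent of the paper's UML/OCL models.

```python
import numpy as np

# CNOT with the first qubit as control, in the |q0 q1> basis ordering.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
zero = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

def try_clone(psi):
    out = CNOT @ np.kron(psi, zero)   # "copy" psi into a fresh |0>
    want = np.kron(psi, psi)          # what a true clone would look like
    return np.allclose(out, want)

print(try_clone(zero))   # True: basis states copy fine
print(try_clone(plus))   # False: CNOT|+>|0> is a Bell state, not |+>|+>
```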