Funding: Supported by the National High Technology Research and Development Program of China (No. 2008AA01A201) and the National Natural Science Foundation of China (No. 60503015, 90818016).
Abstract: In view of the flaws of component-based software (CBS) reliability modeling and analysis, namely the limited attention paid to the debugging process, too many assumptions, and the difficulty of obtaining solutions, a CBS reliability simulation process is presented that incorporates imperfect debugging and the limitation of debugging resources. Considering the effect of imperfect debugging on the fault detection and correction processes, a CBS integration testing model is sketched by a multi-queue, multichannel and finite server queuing model (MMFSQM). Compared with the parameter-based analytical method and other nonparametric approaches, the simulation approach can relax more of the usual reliability modeling assumptions and effectively expound the integration testing process of CBS. A CBS reliability process simulation procedure is then developed accordingly. The proposed simulation approach is validated to be sound and effective by simulation experiment studies and analysis.
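As a concrete, hedged illustration of the kind of process the abstract describes, the sketch below simulates fault detection as a Poisson process feeding a finite pool of debuggers, with imperfect fixes re-entering the queue. It is a toy stand-in for the paper's MMFSQM-based simulation procedure: the rates `rate_detect` and `rate_fix`, the number of `debuggers`, and the perfect-fix probability `p_perfect` are all invented placeholders.

```python
import heapq
import random

def simulate_fdp_fcp(rate_detect=0.5, rate_fix=0.3, debuggers=2,
                     p_perfect=0.9, horizon=200.0, seed=1):
    """Toy discrete-event simulation of a finite-server debugging queue:
    faults are detected as a Poisson process, wait for one of `debuggers`
    servers, and are fixed after an exponential service time; with
    probability 1 - p_perfect a fix is imperfect and the fault re-enters
    the queue (imperfect debugging under limited debugging resources)."""
    random.seed(seed)
    detected, corrected, queue = 0, 0, 0
    busy = []                                   # heap of fix-completion times
    next_detect = random.expovariate(rate_detect)
    while True:
        t_fix = busy[0] if busy else float("inf")
        t = min(next_detect, t_fix)
        if t > horizon:
            break
        if t == next_detect:                    # fault detection event (FDP)
            detected += 1
            queue += 1
            next_detect = t + random.expovariate(rate_detect)
        else:                                   # fix-completion event (FCP)
            heapq.heappop(busy)
            if random.random() < p_perfect:
                corrected += 1                  # perfect fix
            else:
                queue += 1                      # imperfect fix: fault re-queued
        while queue and len(busy) < debuggers:  # assign waiting faults to free debuggers
            queue -= 1
            heapq.heappush(busy, t + random.expovariate(rate_fix))
    return detected, corrected

print(simulate_fdp_fcp())   # cumulative faults detected vs. corrected
```

Averaging such runs over many random seeds yields the empirical detection and correction curves that a simulation-based approach compares against analytical models.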
Funding: Sponsored by the National High Technology Research and Development Program of China ("863" Program) (2009AA01Z433).
Abstract: For a more accurate and comprehensive assessment of the trustworthiness of a component-based software system, the fuzzy analytic hierarchy process is introduced to establish the analysis model. Combining qualitative and quantitative analyses, the impacts of the different types of components on overall trustworthiness are distinguished. Considering the coupling relationships between components, the system is divided into several layers from the target layer to the scheme layer, and the advantages and disadvantages of each scheme are evaluated by group decision-making; on this basis, the trustworthiness of a typical J2EE-structured component-based software system is assessed. The trustworthiness assessment model of the software components provides an effective method of operation.
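To make the hierarchy-analysis step concrete, here is a minimal sketch of classical (crisp) AHP priority weighting from a pairwise comparison matrix via Saaty's principal-eigenvector method; the paper's fuzzy AHP additionally handles fuzzy judgments and group decision-making, which are omitted here, and the 3-criterion matrix is a hypothetical example.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a positive reciprocal pairwise comparison
    matrix via the principal eigenvector, plus the consistency ratio."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)                        # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                                    # normalize weights to sum to 1
    n = pairwise.shape[0]
    ci = (vals[k].real - n) / (n - 1)               # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)   # Saaty's random index (excerpt)
    return w, ci / ri

# Hypothetical pairwise comparisons of three trustworthiness criteria.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(A)
print(w, cr)   # a consistency ratio below 0.1 is conventionally acceptable
```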
Funding: Supported by the National High Technology Research and Development Program of China (No. 2008AA01A201), the National Natural Science Foundation of China (No. 60503015), the National Key R&D Program of China (No. 2013BA17F02), and the Shandong Province Science and Technology Program of China (No. 2011GGX10108, 2010GGX10104).
Abstract: Against the deficiencies of component-based software (CBS) reliability modeling and analysis, for instance, importing too many assumptions and paying insufficient attention to the debugging process without adequately considering imperfect debugging and change-point (CP) problems, an approach to CBS reliability process analysis is proposed which incorporates imperfect debugging and CP. First, perfect/imperfect debugging and CP are reviewed. Based on queuing theory, a multi-queue, multichannel and infinite server queuing model (MMISQM) is presented to sketch the integration test process of CBS. Meanwhile, considering the effects of imperfect debugging and CP, expressions for fault detection and correction are derived based on MMISQM. Numerical results demonstrate that the proposed model can sketch the integration test process of CBS with preferable performance and outperforms other models.
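The abstract does not reproduce the derived expressions. As a hedged illustration only, a typical change-point fault-detection mean-value function in the NHPP literature takes a Goel-Okumoto-style form with one change point $\tau$:

$$
m_d(t)=
\begin{cases}
a\left(1-e^{-b_1 t}\right), & 0\le t\le\tau,\\[4pt]
a\left(1-e^{-b_1\tau-b_2(t-\tau)}\right), & t>\tau,
\end{cases}
$$

where $a$ is the expected total number of faults and $b_1$, $b_2$ are the detection rates before and after the change point; the paper's MMISQM-based expressions may differ in form.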
Funding: Supported by the National High Technology Research and Development Program of China (No. 2008AA01A201), the National Natural Science Foundation of China (No. 60503015), the National Key R&D Program of China (No. 2013BA17F02), and the Shandong Province Science and Technology Program of China (No. 2011GGX10108, 2010GGX10104).
Abstract: In view of the problems and weaknesses of component-based software (CBS) reliability modeling and analysis, and the lack of consideration for the real debugging circumstances of integration testing, a CBS reliability process analysis model is proposed incorporating debugging time delay, imperfect debugging and limited debugging resources. CBS integration testing is formulated as a multi-queue, multichannel and finite server queuing model (MMFSQM) to illustrate the fault detection process (FDP) and fault correction process (FCP). A unified FCP is sketched, given debugging delay, the diversity of fault processing and the limitations of debugging resources. Furthermore, the impacts of imperfect debugging on fault detection and correction are explicitly elaborated, and expressions for the cumulative numbers of faults detected and corrected are illustrated. Finally, the results of numerical experiments verify the effectiveness and rationality of the proposed model; by comparison, it is superior to the other models. The proposed model is closer to the real CBS testing process and facilitates software engineers' quantitative analysis, measurement and prediction of CBS reliability.
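As a hedged illustration of how debugging delay and imperfect debugging are commonly coupled into the FCP (an expository assumption, not the paper's derived formula), the expected cumulative corrections can be written as a delayed, thinned copy of the detections:

$$
m_c(t)=p\,m_d(t-\Delta),\qquad t\ge\Delta,
$$

where $m_d$ and $m_c$ are the expected cumulative numbers of faults detected and corrected, $\Delta$ is the average debugging delay, and $p\in(0,1]$ discounts for imperfect debugging; finite debugging resources make $\Delta$ grow with queue length, which is what the MMFSQM formulation captures.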
Funding: Technology Foundation of Guizhou Province, China (No. QianKeHeJZi[2015]2064), Scientific Research Foundation for Advanced Talents in Guizhou Institute of Technology and Science, China (No. XJGC20150106), and Joint Foundation of Guizhou Province, China (No. QianKeHeLHZi[2015]7105).
Abstract: Masked data are system failure data for which the exact component causing the system failure may be unknown. In this paper, a mathematical description of general masked data in software reliability engineering was presented. Furthermore, a general masked-data-based additive non-homogeneous Poisson process (NHPP) model was considered to analyze component reliability. The difficulty with the masked-data-based additive model, however, lies in estimating its parameters, so the maximum likelihood estimation procedure was derived. Finally, a numerical example was given to illustrate the applicability of the proposed model, and the immune particle swarm optimization (IPSO) algorithm was used to maximize the log-likelihood function.
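For flavor, the sketch below runs maximum likelihood estimation for a two-component additive Goel-Okumoto NHPP, whose log-likelihood for failure times $t_1,\dots,t_n$ on $[0,T]$ is $\sum_i \log\lambda(t_i) - m(T)$. A generic simplex optimizer stands in for the paper's IPSO algorithm, and the failure times and starting values are invented placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical system-level failure times observed on [0, T].
times = np.array([2.1, 5.3, 9.8, 14.2, 20.5, 27.0, 35.4, 44.9])
T = 50.0

def neg_loglik(theta):
    """Negative NHPP log-likelihood for an additive two-component
    Goel-Okumoto model: lambda(t) = sum_k a_k b_k exp(-b_k t),
    m(T) = sum_k a_k (1 - exp(-b_k T))."""
    a1, b1, a2, b2 = np.exp(theta)   # log-parameterization keeps parameters > 0
    lam = a1 * b1 * np.exp(-b1 * times) + a2 * b2 * np.exp(-b2 * times)
    m_T = a1 * (1 - np.exp(-b1 * T)) + a2 * (1 - np.exp(-b2 * T))
    return m_T - np.sum(np.log(lam))

res = minimize(neg_loglik, x0=np.log([5.0, 0.05, 5.0, 0.1]), method="Nelder-Mead")
print(np.exp(res.x))                 # estimated (a1, b1, a2, b2)
```

With masked data, the per-failure term becomes a sum of component intensities over the mask set rather than a single known component's intensity, which is what makes direct numerical maximization (and heuristics such as IPSO) attractive.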
Abstract: In a component-based software development life cycle, selection of preexisting components is an important task. Every component that has to be reused carries an associated risk of failing to meet the functional and non-functional requirements. A component's failure would lead a developer to look for some other alternative among the possible candidate combinations of COTS, in-house and engineered components. This means the design itself can readily change. The very process of designing a software system and selecting components thus appears heavily dependent on testing results. Instability of design becomes more severe still due to requirements change requests. This instability therefore has to be mitigated by proper design and testing approaches; otherwise, it may lead to exorbitantly high testing cost due to the repeated testing of various alternatives. How are these three activities interrelated: component-based software design, component selection and component-based software testing? What process model is best suited to address this concern? This work explores these questions and their implications for the nature of a process model that can be convincing in the case of component-based software development.
Abstract: Computer software has been becoming more and more complex with the development of hardware. Thus, how to efficiently develop extensible, maintainable and adaptable software has become an urgent problem. The component-based software development technique is a good method for solving this problem. In this paper, we first discuss the concept, description methods and some familiar styles of software architecture, and then analyze the merits of using software architecture to guide software development. We also present a general design method for components. Applications are finally provided.
Abstract: Component-based software reuse (CBSR) has been widely used in software development practice and has an even more brilliant future with the rapid extension of the Internet, because the World Wide Web (WWW) makes large-scale component resources from different vendors available to software developers. In this paper, an abstract component model suitable for representing components on the WWW is proposed, which plays an important role both in achieving interoperability among components and among reusable component libraries (RCLs). Some necessary changes that the WWW brings to many aspects of component management are also discussed, such as the classification of components, the corresponding searching methods, and the certification of components.
Abstract: This paper suggests a component-based software development framework for the third-party logistics (3PL) business. The framework integrates two engineering methodologies in order to identify the most reusable software components that can be used in several types of 3PL business models. UML (Unified Modeling Language) is used to design lower-level software components, and DEMO (Design and Engineering Methodology for Organization), a business engineering methodology based on communication theory, is used to identify core business processes for 3PL business models. Using these methodologies, we develop a 3PL management solution by applying the framework to a C2C type of 3PL business model, specifically the door-to-door (D2D) service.
Funding: Supported by the National High Technology Development Plan (Item No. 2001AA412250) and the Shanghai Science & Technology Development Project (Item No. 02FK04).
Abstract: A new method for designing and implementing component-based distributed and hierarchical flexible manufacturing control software is described using the component concept. The proposed method aims to improve the flexibility and reliability of the control system. After describing the concepts of component-based software and distributed object technology, the architecture of the component-based control system software is proposed on the basis of the Common Object Request Broker Architecture (CORBA). A design method for component-based distributed and hierarchical flexible manufacturing control systems is then proposed. Finally, to verify the software design method, a prototype flexible manufacturing control system has been implemented in Orbix 2.3c and VC++ 6.0 and tested in connection with the physical flexible manufacturing shop at the WuXi Professional Institute.
Funding: Supported by the National Natural Science Foundation of China (No. 61062007) and the Principal Fund Project of Tarim University, China (No. TDZKSS201115).
Abstract: According to the morphological structure characteristics of plants, a development mode for component-based virtual plant software was put forward, and the internal structure of the plant-organ components under this mode was analyzed; on that basis, the overall design mode for virtual plant software was given, and its characteristics were evaluated. Compared with traditional development modes for virtual plant software, component-based virtual plant software has significant advantages in code reuse, development efficiency and the extensibility of software functions.
Abstract: Software security poses substantial risks to our society because software has become part of our life. Numerous techniques have been proposed to resolve or mitigate the impact of software security issues. Among them, software testing and analysis are two of the critical methods, and they benefit significantly from advancements in deep learning technologies. Following the successful use of deep learning in software security, researchers have recently explored the potential of using large language models (LLMs) in this area. In this paper, we systematically review the results focusing on LLMs in software security. We analyze the topics of fuzzing, unit testing, program repair, bug reproduction, data-driven bug detection, and bug triage. We deconstruct these techniques into several stages and analyze how LLMs can be used in each stage. We also discuss future directions for using LLMs in software security, including directions for the existing uses of LLMs and extensions from conventional deep learning research.
Funding: Supported by the National Natural Science Foundation of China (No. 60973118, 60873075).
Abstract: Since most available component-based software reliability models incur high computational cost and suffer from evaluation complexity for software systems with complex structures, a component-based back-propagation reliability model (CBPRM) with low complexity for complex software system reliability evaluation is presented in this paper. The proposed model is based on artificial neural networks and component reliability sensitivity analyses. These analyses are performed dynamically and assigned to the neurons to optimize the reliability evaluation. CBPRM has linearly increasing complexity and outperforms the state-based and path-based reliability models. Another advantage of CBPRM is its robustness: it depends on the component reliabilities and the correlated sensitivities, which are independent of the software system structure. Theoretical analysis and experimental results show that the complexity of CBPRM is evidently lower than that of the contrast models and that its reliability evaluation accuracy is acceptable when the software system structure is complex.
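As a rough, hedged illustration of learning a reliability mapping with a back-propagation network (not the paper's CBPRM itself), the sketch below trains a small MLP to map component reliabilities to system reliability on synthetic data; the series-system product used as the training target, the component count, and the network sizes are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic training set: reliabilities of a hypothetical 5-component system.
# A series-system product stands in for the structure-dependent system
# reliability that the network is asked to learn.
R = rng.uniform(0.80, 0.999, size=(2000, 5))
y = R.prod(axis=1)

# Back-propagation network from component reliabilities to system reliability;
# CBPRM additionally feeds component sensitivity values into the neurons,
# which this toy omits.
net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
net.fit(R, y)
print(net.predict([[0.95, 0.99, 0.97, 0.98, 0.96]]))   # predicted system reliability
```

Because the inputs are per-component quantities rather than an explicit structural model, evaluation cost grows with the number of components instead of the number of system states or paths, which is the complexity advantage the abstract claims.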
Funding: Funded by the Youth Fund of the National Natural Science Foundation of China (Grant No. 42261070).
Abstract: Spectrum-based fault localization (SBFL) generates a ranked list of suspicious elements by using the program execution spectrum, but the excessive number of elements ranked in parallel results in low localization accuracy. Most researchers consider intra-class dependencies to improve localization accuracy; however, some studies show that inter-class method-call-type faults account for more than 20%, which means such methods still have certain limitations. To solve these problems, this paper proposes a two-phase software fault localization approach based on relational graph convolutional neural networks (Two-RGCNFL). In Phase 1, the method call dependence graph (MCDG) of the program is constructed, the intra-class and inter-class dependencies in the MCDG are extracted by a relational graph convolutional neural network, and a classifier is used to identify the faulty methods. The GraphSMOTE algorithm is then improved to alleviate the impact of class imbalance on classification accuracy. To address the parallel ranking of element suspiciousness values in traditional SBFL, in Phase 2, Doc2Vec is used to learn static features, while spectrum information serves as dynamic features. A RankNet model based on a Siamese multi-layer perceptron is constructed to score and rank statements in the faulty method. Experiments are conducted on five real projects from the Defects4J benchmark. The results show that, compared with the traditional SBFL technique and two baseline methods, our approach improves the Top-1 accuracy by 262.86%, 29.59% and 53.01%, respectively, which verifies the effectiveness of Two-RGCNFL. Furthermore, ablation experiments verify the importance of inter-class dependencies.
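For context on the suspiciousness scores that traditional SBFL ranks (and that Phase 2 replaces with a learned RankNet), here is a minimal sketch of the well-known Ochiai formula computed over a coverage spectrum; the 4-test, 3-statement spectrum is a made-up toy example.

```python
import numpy as np

def ochiai(coverage, failing):
    """Ochiai suspiciousness per program element.
    coverage: (tests x elements) 0/1 matrix; failing: bool flag per test.
    susp(e) = n_ef / sqrt(total_failing * (n_ef + n_ep))."""
    failing = np.asarray(failing, dtype=bool)
    n_ef = coverage[failing].sum(axis=0)    # failing tests covering e
    n_ep = coverage[~failing].sum(axis=0)   # passing tests covering e
    denom = np.sqrt(failing.sum() * (n_ef + n_ep))
    return np.divide(n_ef, denom, out=np.zeros_like(denom), where=denom > 0)

# Toy spectrum: 4 tests x 3 statements; the last two tests fail.
cov = np.array([[1, 0, 1],
                [1, 1, 0],
                [0, 1, 1],
                [1, 1, 1]])
print(ochiai(cov, [False, False, True, True]))   # higher score = more suspicious
```

Ties in these scores are exactly the "elements ranked in parallel" problem that motivates the learned two-phase approach.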
Abstract: Software-defined networking (SDN) is an innovative paradigm that separates the control and data planes, introducing centralized network control. SDN is increasingly being adopted by carrier-grade networks, offering enhanced network management capabilities compared with traditional networks. However, because SDN is designed to ensure high-level service availability, it faces additional challenges. One of the most critical is ensuring efficient detection of, and recovery from, link failures in the data plane. Such failures can significantly impact network performance and lead to service outages, making resiliency a key concern for the effective adoption of SDN. Since the recovery process is intrinsically dependent on timely failure detection, this research surveys and analyzes the current literature on both failure detection and recovery approaches in SDN. The survey provides a critical comparison of existing failure detection techniques, highlighting their advantages and disadvantages. Additionally, it examines current failure recovery methods, categorized as either restoration-based or protection-based, and offers a comprehensive comparison of their strengths and limitations. Lastly, future research challenges and directions are discussed to address the shortcomings of existing failure recovery methods.
Abstract: Software-related security aspects are a growing and legitimate concern, especially with 5G data available just at our palms. To conduct research in this field, periodic comparative analysis is needed as new techniques emerge rapidly. The purpose of this study is to review recent developments in security integration in the software development lifecycle (SDLC) by analyzing articles published in the last two decades, and to propose a way forward. The review follows Kitchenham's review protocol and is divided into three main stages: planning, execution, and analysis. From the 100 selected articles, it becomes evident that a collaborative approach is necessary for addressing critical software security risks (CSSRs) through effective risk management and estimation techniques. Quantifying risks on a numeric scale enables a comprehensive understanding of their severity, facilitating focused resource allocation and mitigation efforts. Through a comprehensive understanding of potential vulnerabilities and the proactive mitigation efforts facilitated by protection poker, organizations can prioritize resources effectively to ensure the successful outcome of projects and initiatives in today's dynamic threat landscape. The review reveals that automated tools for threat analysis and security testing need to be developed in the future. Accurate estimation of the effort required to prioritize potential security risks is a major challenge in software security. The accuracy of effort estimation can be further improved by exploring new techniques, particularly those involving deep learning. It is also imperative to validate these effort estimation methods to ensure all potential security threats are addressed. Another challenge is selecting the right model for each specific security threat. To achieve a comprehensive evaluation, researchers should use well-known benchmark checklists.
Abstract: Link failure is a critical issue in large networks and must be effectively addressed. In software-defined networks (SDN), link failure recovery schemes can be categorized into proactive and reactive approaches. Reactive schemes have longer recovery times, while proactive schemes provide faster recovery but overwhelm switch memory with flow entries. As SDN adoption grows, ensuring efficient recovery from link failures in the data plane becomes crucial. In particular, data center networks (DCNs) demand rapid recovery times and efficient resource utilization to meet carrier-grade requirements. This paper proposes an efficient Decentralized Failure Recovery (DFR) model for SDNs that meets recovery time requirements and optimizes switch memory consumption. The DFR model enables switches to autonomously reroute traffic upon link failures without involving the controller, achieving fast recovery times while minimizing memory usage. DFR employs the Fast Failover Group in the OpenFlow standard for local recovery without controller communication, and utilizes the k-shortest path algorithm to proactively install backup paths, allowing immediate local recovery without controller intervention and enhancing overall network stability and scalability. DFR also employs flow entry aggregation to reduce switch memory usage: instead of matching flow entries to the destination host's MAC address, DFR matches packets to the destination switch's MAC address, which reduces the switches' Ternary Content-Addressable Memory (TCAM) consumption. Additionally, DFR modifies Address Resolution Protocol (ARP) replies to provide source hosts with the destination switch's MAC address, facilitating flow entry aggregation without affecting normal network operations. The performance of DFR is evaluated with the network emulator Mininet 2.3.1 and Ryu 3.1 as the SDN controller. For different numbers of active flows, numbers of hosts per edge switch, and network sizes, the proposed model outperformed various failure recovery models (restoration-based, protection by flow entries, protection by group entries, and protection by VLAN tagging) in terms of recovery time, switch memory consumption, and controller overhead, measured as the number of flow entry updates needed to recover from the failure. Experimental results demonstrate that DFR achieves recovery times under 20 milliseconds, satisfying carrier-grade requirements for rapid failure recovery. Additionally, DFR reduces switch memory usage by up to 95% compared with traditional protection methods and minimizes controller load by eliminating the need for controller intervention during failure recovery. The results underscore the efficiency and scalability of the DFR model, making it a practical solution for enhancing network resilience in SDN environments.
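To illustrate the proactive backup-path step, the sketch below enumerates the k shortest paths between two switches with networkx; the 4-switch topology is invented, and in DFR itself the primary path and its alternatives would be installed as OpenFlow fast-failover group entries rather than printed.

```python
import itertools
import networkx as nx

def candidate_paths(g, src, dst, k=2):
    """Proactively compute the k shortest simple paths between two
    switches (Yen-style enumeration): the first is the primary route,
    the rest serve as locally installed backups."""
    return list(itertools.islice(nx.shortest_simple_paths(g, src, dst), k))

# Hypothetical 4-switch topology.
g = nx.Graph([("s1", "s2"), ("s2", "s4"), ("s1", "s3"),
              ("s3", "s4"), ("s2", "s3")])
primary, backup = candidate_paths(g, "s1", "s4", k=2)
print(primary, backup)   # two candidate routes from s1 to s4
```

With such paths precomputed, a switch on the primary route can fail over to the next hop of an alternative locally, with no controller round-trip, which is the core of the decentralized recovery idea.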
Funding: National Key R&D Program of China (Nos. 2023YFC3806900, 2022YFE0141400).
Abstract: The advent of parametric design has resulted in a marked increase in the complexity of buildings, and traditional construction methods make it difficult to meet the resulting needs. Construction robots have therefore become a pivotal production tool in this context. Since the arm span of a single robot usually does not exceed 3 meters, it cannot by itself produce large-scale building components; accordingly, the extension of the robot's working range is often achieved by external axes. Nevertheless, the coupled control of external axes and robots, and their kinematic solution, are key challenges. The primary technical difficulties include customized construction robots, automatic solutions for external axes, fixed axis joints, and specific motion mode control. This paper proposes solutions to these difficulties, introduces the relevant basic concepts and algorithms in detail, and encapsulates these robotics principles and algorithmic processes into the Grasshopper plug-in commonly used by architects, forming the FURobot software platform. The platform effectively solves the above problems, lowers the threshold for architects, and improves production efficiency. The effectiveness of the algorithms and software is verified through simulation experiments.
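As a hedged, minimal illustration of coupling an external axis with a robot's kinematic chain (the general idea behind extending the working range, not FURobot's actual solver), the sketch below chains homogeneous transforms: a linear rail moves the robot base, and the robot's own kinematics, reduced here to a toy one-joint arm, follows. All dimensions and the stub kinematics are invented.

```python
import numpy as np

def translation(x, y, z):
    """4x4 homogeneous translation."""
    t = np.eye(4)
    t[:3, 3] = (x, y, z)
    return t

def rot_z(theta):
    """4x4 homogeneous rotation about z."""
    c, s = np.cos(theta), np.sin(theta)
    r = np.eye(4)
    r[:2, :2] = [[c, -s], [s, c]]
    return r

def world_to_flange(d, q1, reach=2.8):
    """World-frame flange pose: rail carriage at offset d along x,
    robot base mounted on the carriage, then the (stub) arm chain."""
    rail = translation(d, 0.0, 0.0)                  # external linear axis
    base = translation(0.0, 0.0, 0.5)                # base height on the carriage
    arm = rot_z(q1) @ translation(reach, 0.0, 0.0)   # toy 1-DoF arm of given reach
    return rail @ base @ arm

print(world_to_flange(d=4.0, q1=np.pi / 6)[:3, 3])   # flange position in world frame
```

The coupled inverse problem, choosing the carriage offset d and the joint values together for a target pose, is the redundancy that an automatic external-axis solver has to resolve.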
Funding: National Natural Science Foundation of China (No. 12471367).
Abstract: In this work, we present a parallel implementation of radiation hydrodynamics coupled with particle transport, utilizing the software infrastructure JASMIN (J Adaptive Structured Meshes applications INfrastructure), which encapsulates high-performance technology for the numerical simulation of complex applications. Two serial codes, the radiation hydrodynamics code RH2D and the particle transport code Sn2D, have been integrated into RHSn2D on the JASMIN infrastructure, which can efficiently use thousands of processors to simulate complex multi-physics phenomena. Moreover, a non-conforming processor strategy protects RHSn2D against the serious load imbalance between radiation hydrodynamics and particle transport in large-scale parallel simulations. Numerical results show that RHSn2D achieves a parallel efficiency of 17.1% using 90720 cells on 8192 processors, relative to 256 processors on the same problem.
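Read against the 256-processor baseline, the quoted figure matches the standard relative parallel efficiency; the worked relation below is our reading of the stated numbers, not a formula from the paper:

$$
E_{256\rightarrow 8192}=\frac{256\,T_{256}}{8192\,T_{8192}}
=\frac{T_{256}}{32\,T_{8192}},\qquad
E=0.171\;\Longrightarrow\;\frac{T_{256}}{T_{8192}}\approx 5.5,
$$

i.e., a 32-fold increase in processor count yields roughly a 5.5-fold speedup on this problem.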
Abstract: Quantum software development utilizes quantum phenomena such as superposition and entanglement to address problems that are challenging for classical systems. However, it must also adhere to critical quantum constraints, notably the no-cloning theorem, which prohibits the exact duplication of unknown quantum states and has profound implications for cryptography, secure communication, and error correction. While existing quantum circuit representations implicitly honor such constraints, they lack formal mechanisms for early-stage verification in software design. Addressing this constraint at the design phase is essential to ensure the correctness and reliability of quantum software. This paper presents a formal metamodeling framework using UML-style notation and the Object Constraint Language (OCL) to systematically capture and enforce the no-cloning theorem within quantum software models. The proposed metamodel formalizes key quantum concepts, such as entanglement and teleportation, and encodes enforceable invariants that reflect core quantum mechanical laws. The framework's effectiveness is validated by analyzing two critical edge cases, conditional copying with CNOT gates and quantum teleportation, through instance model evaluations. These cases demonstrate that the metamodel can capture nuanced scenarios that are often mistaken for violations of the no-cloning theorem but are proven compliant under formal analysis, constructively validating the metamodel's expressiveness and correctness. The approach supports early detection of conceptual design errors, promoting correctness prior to implementation. The framework's extensibility is also demonstrated by modeling projective measurement, further reinforcing its applicability to broader quantum software engineering tasks. By integrating the rigor of metamodeling with fundamental quantum mechanical principles, this work provides a structured, model-driven approach that enables traditional software engineers to address quantum computing challenges. It offers practical insights into embedding quantum correctness at the modeling level and advances the development of reliable, error-resilient quantum software systems.
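For readers unfamiliar with the constraint being modeled, the no-cloning theorem admits a two-line textbook argument (standard material, not taken from the paper): a universal cloner would be a unitary $U$ with

$$
U\big(|\psi\rangle\otimes|0\rangle\big)=|\psi\rangle\otimes|\psi\rangle
\quad\text{for every state } |\psi\rangle,
$$

but unitarity preserves inner products, so any two states would have to satisfy $\langle\psi|\phi\rangle=\langle\psi|\phi\rangle^{2}$, forcing $\langle\psi|\phi\rangle\in\{0,1\}$: only identical or orthogonal states can be copied. This is precisely why a CNOT gate copies computational basis states yet fails on superpositions, the edge case the metamodel is shown to classify correctly.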