In this work, we demonstrated InSnO (ITO) TFTs passivated with SiO2 via a PECVD process compatible with large-area production for the first time. The passivated ITO TFTs with various channel thicknesses (t_ch = 4, 5, 6 nm) exhibit excellent electrical performance and superior uniformity. The reliability of the ITO TFTs was evaluated in detail under positive bias stress (PBS) before and after passivation. Compared with unpassivated devices, the passivated devices show only 50% of the threshold voltage degradation (ΔV_th) and 50% of the newly generated traps, owing to excellent isolation from the ambient atmosphere. The negligible performance degradation of the passivated ITO TFTs under negative bias stress (NBS) and negative bias temperature stress (NBTS) verifies the outstanding immunity of the SiO2 passivation layer to water vapor. Overall, the ITO TFT with a t_ch of 6 nm and SiO2 passivation exhibits the best performance in terms of electrical properties, uniformity, and reliability, making it promising for large-area production.
This paper investigates the reliability of marine internal combustion engines using an integrated approach that combines Fault Tree Analysis (FTA) and Bayesian Networks (BN). FTA provides a structured, top-down method for identifying critical failure modes and their root causes, while BN introduces flexibility in probabilistic reasoning, enabling dynamic updates based on new evidence. This dual methodology overcomes the limitations of static FTA models, offering a comprehensive framework for system reliability analysis. Critical failures, including External Leakage (ELU), Failure to Start (FTS), and Overheating (OHE), were identified as key risks. By incorporating redundancy into high-risk components such as pumps and batteries, the likelihood of these failures was significantly reduced. For instance, redundant pumps reduced the probability of ELU by 31.88%, while additional batteries decreased the occurrence of FTS by 36.45%. The results underscore the practical benefits of combining FTA and BN for enhancing system reliability, particularly in maritime applications where operational safety and efficiency are critical. This research provides valuable insights for maintenance planning and highlights the importance of redundancy in critical systems, especially as the industry transitions toward more autonomous vessels.
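As a toy illustration of the fault-tree side of this approach, the sketch below combines independent basic-event probabilities through OR and AND gates and shows how a redundant (parallel) pump lowers a leakage-style top-event probability. All component probabilities are illustrative assumptions, not values from the paper.

```python
# Series (OR-gate) and parallel (AND-gate) failure combination for an FTA,
# illustrating how redundancy lowers a top-event probability.

def or_gate(probs):
    """Top event occurs if ANY input event occurs (independent events)."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(probs):
    """Top event occurs only if ALL input events occur (independent events)."""
    p_all = 1.0
    for p in probs:
        p_all *= p
    return p_all

p_pump, p_seal = 0.05, 0.02   # hypothetical basic-event probabilities

# Single pump: leakage if the pump OR the seal fails.
p_leak_single = or_gate([p_pump, p_seal])

# Redundant pumps: the pump subsystem fails only if BOTH pumps fail.
p_leak_redundant = or_gate([and_gate([p_pump, p_pump]), p_seal])

print(p_leak_single, p_leak_redundant)
```

With these numbers, redundancy cuts the leakage probability from about 6.9% to about 2.2%, mirroring the kind of reduction the paper reports for ELU.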
To study the durability of concrete in the harsh environments of Northwest China, concrete was prepared with various durability-improving materials, including a concrete anti-erosion inhibitor (SBT-TIA), acrylate polymer (AP), and super absorbent resin (SAP). The erosion mode and internal deterioration mechanism under salt freeze-thaw cycles and dry-wet cycles were explored. The results show that the addition of enhancing materials can effectively improve the resistance of concrete to salt freezing and sulfate erosion: the relevant indexes of concrete with X-AP and T-AP improve after salt freeze-thaw cycles; concrete with SBT-TIA shows optimal sulfate corrosion resistance; and concrete with AP displays the best resistance to salt freezing. Microanalysis shows that, in concrete mixed with enhancing materials, an increasing number of cycles decreases the generation of internal hydration products and defects, improving the related indexes. Based on a Wiener-model analysis, the reliability of concrete with different lithologies and enhancing materials is improved, which may provide a reference for the application of manufactured sand concrete and enhancing materials in Northwest China, especially for studying the improvement effects and mechanisms of enhancing materials on concrete performance.
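The Wiener-model reliability mentioned above can be sketched with the textbook first-passage-time reliability function of a Wiener degradation process with drift mu, diffusion sigma, and failure threshold omega. The parameter values below are illustrative assumptions, not fitted values from the study.

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def wiener_reliability(t, mu, sigma, omega):
    """P(degradation path stays below threshold omega up to time t)
    for a Wiener process with drift mu and diffusion sigma."""
    s = sigma * math.sqrt(t)
    return (phi((omega - mu * t) / s)
            - math.exp(2.0 * mu * omega / sigma ** 2) * phi(-(omega + mu * t) / s))

# Illustrative parameters: reliability decays as freeze-thaw cycles accumulate.
mu, sigma, omega = 0.1, 0.3, 5.0
r_early = wiener_reliability(1.0, mu, sigma, omega)
r_late = wiener_reliability(100.0, mu, sigma, omega)
print(r_early, r_late)
```

As expected, reliability is near 1 at early times and decays monotonically as cycles (time) accumulate.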
Reservoir-induced landslides in China's Three Gorges Reservoir area are prone to tensile cracks under the influence of their own weight and fluctuations in water levels. The presence of cracks indicates that the tensile stress in the area has exceeded the tensile strength of the soil, leading to local instability. To explore the impact of tensile failure behavior on the stability and failure modes of reservoir landslides, the Huangtupo Riverside Slump #1 is taken as a case study. By considering local tensile failure, potential tensile cracks are incorporated into the analysis via the limit equilibrium method and reliability theory, and the reliability of landslides under different tensile failure scenarios is quantified. Strain-softening characteristics of the soil are incorporated to further analyze the failure transmission path of the landslide. Finally, the potential failure modes were validated through physical model tests. The results show that cracks developing at rear positions reduce the stability of the slope and increase the probability of instability. During the destruction process, retrogressive failures with multiple sliding surfaces are likely to occur. However, tensile failure at the forefront reduces the likelihood of an individual slide mass descending. Progressive failure produces both regular and skip transmission patterns. Additionally, cracks and water-level changes can shift the positions of the most dangerous blocks. Therefore, in practical landslide analysis and prevention, it is necessary to consider local tensile damage and identify potential tensile crack locations in advance to optimize prevention measures and accurately evaluate landslide risk.
In reliability analyses, the absence of a priori information on the most probable point of failure (MPP) may result in overlooking critical points, thereby leading to biased assessment outcomes. Moreover, second-order reliability methods exhibit limited accuracy in highly nonlinear scenarios. To overcome these challenges, a novel reliability analysis strategy based on a multimodal differential evolution algorithm and a hypersphere integration method is proposed. Initially, the penalty function method is employed to reformulate the MPP search as a constrained optimization task. Subsequently, a differential evolution algorithm incorporating a population delineation strategy is utilized to identify all MPPs. Finally, a paraboloid equation is constructed based on the curvature of the limit-state function at the MPPs, and the failure probability of the structure is calculated using the hypersphere integration method. The localization effectiveness for the MPPs is compared across multiple numerical cases and two engineering examples, with the accuracy of the computed failure probabilities compared against the first-order reliability method (FORM) and the second-order reliability method (SORM). The results indicate that the method effectively identifies the existing MPPs and achieves higher solution precision.
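A minimal sketch of the penalty-function reformulation: a small pure-Python differential evolution minimizes the squared distance to the origin plus a penalty on the limit-state value, recovering the MPP of a simple linear limit state whose exact reliability index is 3. The limit state, penalty weight, and DE settings are illustrative assumptions; the paper's multimodal population-delineation strategy is not reproduced here.

```python
import math, random

random.seed(42)

def g(u):
    # Hypothetical linear limit state in standard normal space; exact beta = 3.
    return 3.0 - (u[0] + u[1]) / math.sqrt(2.0)

def objective(u, lam=1e4):
    # Penalty reformulation: minimize ||u||^2 subject (softly) to g(u) = 0.
    return u[0] ** 2 + u[1] ** 2 + lam * g(u) ** 2

def differential_evolution(f, dim=2, pop_size=30, gens=300, F=0.7, CR=0.9,
                           lo=-5.0, hi=5.0):
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [a[k] + F * (b[k] - c[k]) if random.random() < CR else pop[i][k]
                     for k in range(dim)]
            if f(trial) < f(pop[i]):   # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

mpp = differential_evolution(objective)
beta = math.hypot(mpp[0], mpp[1])                      # reliability index
pf = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))    # FORM estimate Phi(-beta)
print(beta, pf)
```

For this linear limit state the FORM estimate is exact, so beta should approach 3 and pf should approach roughly 1.35e-3.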
Small angle X-ray scattering (SAXS) is an advanced technique for characterizing the particle size distribution (PSD) of nanoparticles. However, the ill-posed nature of the inverse problems in SAXS data analysis often reduces the accuracy of conventional methods. This article proposes a user-friendly software package for PSD analysis, GranuSAS, which employs an algorithm that integrates truncated singular value decomposition (TSVD) with the Chahine method. The approach uses TSVD for data preprocessing, generating a set of initial solutions with noise suppression. A high-quality initial solution is then selected via the L-curve method. This candidate solution is iteratively refined by the Chahine algorithm, enforcing constraints such as non-negativity and improving physical interpretability. Most importantly, GranuSAS employs a parallel architecture that simultaneously yields inversion results from multiple shape models and, by evaluating the accuracy of each model's reconstructed scattering curve, offers model-selection guidance for different material systems. To systematically validate the accuracy and efficiency of the software, verification was performed on both simulated and experimental datasets. The results demonstrate that the software delivers both satisfactory accuracy and reliable computational efficiency, providing an easy-to-use tool for researchers in materials science and helping them fully exploit the potential of SAXS in nanoparticle characterization.
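The Chahine-style refinement can be sketched as a multiplicative iteration that keeps the solution non-negative by construction. The kernel and "measured" data below are a synthetic toy, not a real SAXS form factor, and the flat initial guess stands in for the TSVD/L-curve initialization described in the abstract.

```python
# A minimal multiplicative (Chahine/Twomey-style) iteration for inverting
# g = K f with non-negativity preserved by construction.

def chahine_step(K, g, f):
    # Forward model with the current estimate.
    g_hat = [sum(K[i][j] * f[j] for j in range(len(f))) for i in range(len(g))]
    # Scale each bin by a kernel-weighted average of the ratios g / g_hat.
    new_f = []
    for j in range(len(f)):
        w = sum(K[i][j] for i in range(len(g)))
        corr = sum(K[i][j] * g[i] / g_hat[i] for i in range(len(g))) / w
        new_f.append(f[j] * corr)
    return new_f

K = [[1.0, 0.5, 0.2],
     [0.5, 1.0, 0.5],
     [0.2, 0.5, 1.0]]
f_true = [0.2, 1.0, 0.4]                       # synthetic "true" distribution
g = [sum(K[i][j] * f_true[j] for j in range(3)) for i in range(3)]

f = [1.0, 1.0, 1.0]                            # flat initial guess
for _ in range(500):
    f = chahine_step(K, g, f)
print(f)
```

Because every update is a positive rescaling, a non-negative start stays non-negative, which is exactly the physical-interpretability constraint the abstract emphasizes.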
Test case prioritization and ranking play a crucial role in software testing by improving fault detection efficiency and ensuring software reliability. While prioritization selects the most relevant test cases for optimal coverage, ranking further refines their execution order so that critical faults are detected earlier. This study investigates machine learning techniques to enhance both prioritization and ranking, contributing to more effective and efficient testing processes. We first employ advanced feature engineering alongside ensemble models, including Gradient Boosted, Support Vector Machine, Random Forest, and Naive Bayes classifiers, to optimize test case prioritization, achieving an accuracy score of 0.98847 and significantly improving the Average Percentage of Fault Detection (APFD). Subsequently, we introduce a deep Q-learning framework combined with a Genetic Algorithm (GA) to refine test case ranking within priority levels. This approach achieves a rank accuracy of 0.9172, demonstrating robust performance despite the increasing computational demands of specialized variation operators. Our findings highlight the effectiveness of stacked ensemble learning and reinforcement learning in optimizing test case prioritization and ranking. This integrated approach improves testing efficiency, reduces late-stage defects, and enhances overall software stability. The study provides valuable guidance for AI-driven testing frameworks, paving the way for more intelligent and adaptive software quality assurance methodologies.
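The APFD metric used above has a standard closed form: APFD = 1 - (sum of first-detection positions)/(n*m) + 1/(2n), where n is the number of tests and m the number of faults. A minimal sketch with an assumed toy test suite:

```python
def apfd(fault_positions, n_tests):
    """Average Percentage of Fault Detection for one test ordering.
    fault_positions: 1-based position of the first test that exposes each fault."""
    m = len(fault_positions)
    return 1.0 - sum(fault_positions) / (n_tests * m) + 1.0 / (2 * n_tests)

# Hypothetical suite of 5 tests exposing 4 faults. A good ordering finds faults early.
good = apfd([1, 1, 2, 3], n_tests=5)   # faults first exposed by tests 1, 1, 2, 3
bad = apfd([3, 4, 5, 5], n_tests=5)
print(good, bad)                        # 0.75 vs 0.25
```

Higher APFD means faults are exposed earlier in the ordering, which is the quantity the prioritization stage is trying to maximize.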
It is well recognized that Structural Health Monitoring (SHM) reliability evaluation is a key issue that needs to be urgently addressed to promote the wide application of SHM methods. However, existing studies typically transfer Non-Destructive Testing/Evaluation (NDT/E) reliability metrics to SHM without a systematic analysis of where these metrics originated, and little attention is paid to the evaluation conditions, which are essential for applying these metrics. To address this issue, a new condition control-based Dual-Reliability Evaluation (Dual-RE) method for SHM is proposed. The method is built on a systematic analysis of the whole framework of reliability evaluation, from instrumentation to NDT, with emphasis placed on evaluation condition control. Based on these analyses, and considering the special online application scenario of SHM, the proposed Dual-RE method contains two key components: Integrated Sensor-based SHM-RE (IS-SHM-RE) and Critical Service Condition-based SHM-RE (CSC-SHM-RE). IS-SHM-RE evaluates the reliability of the integrated SHM sensors and the system itself under approximate repeatability conditions, while CSC-SHM-RE assesses SHM reliability under the dominant uncertainties during service, namely intermediate conditions. To demonstrate the Dual-RE method, crack monitoring using Guided Wave-based SHM (GW-SHM) on aircraft lug structures is taken as a case study. Both crack detection and sizing performance are evaluated in terms of accuracy and uncertainty.
This paper proposes an artificial neural network (ANN) based software reliability model trained by a novel particle swarm optimization (PSO) algorithm for enhanced forecasting of software reliability. The proposed ANN is developed considering the fault generation phenomenon during software testing, with faults of different complexity levels; we demonstrate the proposed model for three types of faults residing in the software. We propose a neighborhood-based fuzzy PSO algorithm for effective learning of the proposed ANN using software failure data. The fitting and prediction performances of the neighborhood fuzzy PSO-based neural network model are compared with those of the standard PSO-based version and of existing ANN-based software reliability models from the literature on three real software failure data sets. We also compare the performance of the proposed PSO algorithm with the standard PSO algorithm in training the proposed ANN. Statistical analysis shows that the neighborhood fuzzy PSO-based neural network model has comparatively better fitting and predictive ability than the standard PSO-based model and other ANN-based software reliability models. Faster release of software is achievable by applying the proposed PSO-based neural network model during the testing period.
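A minimal global-best PSO (the standard variant, not the paper's neighborhood-based fuzzy PSO) can be sketched as follows, minimizing a sphere function as a stand-in for the ANN training loss; all hyperparameters are illustrative assumptions.

```python
import random

random.seed(0)

def sphere(x):
    """Toy objective standing in for a network's training loss."""
    return sum(v * v for v in x)

def pso(f, dim=3, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=f)[:]                # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso(sphere)
print(best, sphere(best))   # should be near the origin
```

The neighborhood/fuzzy variants of the paper change how each particle chooses its social attractor, but the velocity-position update above is the common core.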
Based on the fact that software development cost is an important factor in controlling the whole project, we discuss the relationship between software development cost and software reliability according to empirical data collected from the development process. By evolutionary modeling we obtain an empirical model of the relationship between cost and software reliability, and validate the estimated results against the empirical data.
As one of the most important indexes for evaluating the quality of software, software reliability has seen increasing development in recent years. We investigate a software reliability growth model (SRGM) whose application is to predict the occurrence of software faults based on the non-homogeneous Poisson process (NHPP). Unlike the independence assumptions in other models, we consider fault dependency. The testing faults are divided into three classes in this model: leading faults, first-step dependent faults and second-step dependent faults. The leading faults, occurring independently, follow an NHPP, while the first-step dependent faults only become detectable after the related leading faults are detected, and the second-step dependent faults can only be detected after the related first-step dependent faults are detected. The combined model is then built on the basis of the three sub-processes. Finally, an illustration based on a real dataset is presented to verify the proposed model.
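The NHPP building block underlying such models can be sketched with the classic Goel-Okumoto mean value function; the paper's leading/dependent fault decomposition is not reproduced here, and the parameters are illustrative assumptions.

```python
import math

def mean_faults(t, a, b):
    """Goel-Okumoto NHPP mean value function: expected faults detected by time t,
    with a = expected total faults and b = per-fault detection rate."""
    return a * (1.0 - math.exp(-b * t))

def reliability(x, t, a, b):
    """P(no failure in (t, t+x]) under the NHPP assumption:
    exp(-(m(t+x) - m(t)))."""
    return math.exp(-(mean_faults(t + x, a, b) - mean_faults(t, a, b)))

a, b = 100.0, 0.05   # illustrative: 100 total faults, detection rate 0.05
print(mean_faults(10, a, b), reliability(1.0, 10.0, a, b))
```

In the paper's setting, each of the three fault classes would contribute its own sub-process, with the dependent classes conditioned on detections in the class above.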
Maintaining software reliability is the key idea for conducting quality research, and it can be achieved by having less complex applications. While developers and other experts have made significant efforts in this context, the level of reliability is not yet what it should be. Therefore, further research into detailed mechanisms for evaluating and increasing software reliability is essential. A significant aspect of increasing the degree of application reliability is the quantitative assessment of reliability. There are multiple statistical as well as soft computing methods available in the literature for predicting software reliability. However, none of these mechanisms is useful for all kinds of failure datasets and applications, so finding the most suitable model for reliability prediction is an important concern. This paper suggests a novel method to systematically pick the best reliability prediction model. This method combines the analytic hierarchy process (AHP), hesitant fuzzy (HF) sets and the technique for order of preference by similarity to ideal solution (TOPSIS). In addition, using different iterations of the process, a procedural sensitivity analysis was performed to validate the findings. The findings of this prioritization of software reliability prediction models will help developers select a reliability prediction model suited to the software type.
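The TOPSIS step of such a hybrid method can be sketched as plain TOPSIS over crisp scores (the AHP-derived weighting and hesitant fuzzy extension are omitted). The decision matrix, weights, and criteria below are hypothetical.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution.
    matrix[i][j]: score of alternative i on criterion j;
    benefit[j]: True if larger is better on criterion j."""
    n, m = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply criterion weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(n))) for j in range(m)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(m)] for i in range(n)]
    ideal = [max(v[i][j] for i in range(n)) if benefit[j]
             else min(v[i][j] for i in range(n)) for j in range(m)]
    worst = [min(v[i][j] for i in range(n)) if benefit[j]
             else max(v[i][j] for i in range(n)) for j in range(m)]
    scores = []
    for i in range(n):
        d_pos = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(m)))
        d_neg = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(m)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Three hypothetical SRGMs scored on fit accuracy (benefit) and error (cost).
scores = topsis([[0.95, 0.10], [0.90, 0.05], [0.80, 0.20]],
                weights=[0.6, 0.4], benefit=[True, False])
print(scores)
```

The alternative with the highest closeness score (here the second model, whose error is lowest) would be recommended as the prediction model.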
Testing-effort (TE) and imperfect debugging (ID) in the reliability modeling process may further improve the fitting and prediction results of software reliability growth models (SRGMs). To describe the S-shaped varying trend of the TE increasing rate more accurately, two S-shaped testing-effort functions (TEFs), i.e., the delayed S-shaped TEF (DS-TEF) and the inflected S-shaped TEF (IS-TEF), are first proposed. These two TEFs are then incorporated into various types (exponential-type, delayed S-shaped and inflected S-shaped) of non-homogeneous Poisson process (NHPP) SRGMs with two forms of ID, yielding a series of new NHPP SRGMs that consider S-shaped TEFs as well as ID. Finally, these new SRGMs and several comparison NHPP SRGMs are applied to four real failure data-sets to investigate the fitting and prediction power of the new SRGMs. The experimental results show that: (i) the proposed IS-TEF is more suitable and flexible for describing the consumption of TE than previous TEFs; (ii) incorporating TEFs into the inflected S-shaped NHPP SRGM may be more effective and appropriate than doing so for the exponential-type and delayed S-shaped NHPP SRGMs; and (iii) the inflected S-shaped NHPP SRGM considering both the IS-TEF and ID yields more accurate fitting and prediction results than the other comparison NHPP SRGMs.
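One common logistic parameterization of an inflected S-shaped testing-effort function is W(t) = alpha / (1 + A*exp(-beta*t)); whether this matches the paper's exact IS-TEF form is an assumption, and the parameter values below are illustrative.

```python
import math

def inflected_s_tef(t, alpha, beta, A):
    """Cumulative testing effort W(t) = alpha / (1 + A * exp(-beta * t)):
    slow start, rapid mid-phase growth, saturation at alpha."""
    return alpha / (1.0 + A * math.exp(-beta * t))

alpha, beta, A = 100.0, 0.3, 20.0   # assumed total effort, growth rate, shape
w = [inflected_s_tef(t, alpha, beta, A) for t in range(0, 41, 10)]
print(w)
```

The sampled curve rises slowly at first, accelerates through the inflection, and saturates near alpha, which is the S-shaped effort-consumption pattern the paper argues fits real testing better than exponential forms.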
As communication technology and smart manufacturing have developed, the industrial internet of things (IIoT) has gained considerable attention from academia and industry. Wireless sensor networks (WSNs) have many advantages and broad applications in areas including environmental monitoring, which makes them a very important part of the IIoT. However, energy depletion and hardware malfunctions can lead to node failures in WSNs, and the industrial environment can impair wireless channel transmission, leading to network reliability problems; the tightly coupled control and data planes in traditional networks further increase network management cost and complexity. In this paper, we introduce a software defined network (SDN) and modify it to propose a framework called the improved software defined wireless sensor network (improved SD-WSN). The proposed framework addresses the following issues. 1) For a large-scale heterogeneous network, it solves the problem of network management and the smooth merging of a WSN into the IIoT. 2) It solves the network coverage problem, which improves network reliability. 3) It addresses node failure due to various problems, particularly those related to energy consumption. It is therefore necessary to improve the reliability of wireless sensor networks by developing schemes that reduce the energy consumption and delay time of network nodes under IIoT conditions. Experiments show that the improved approach significantly reduces the energy consumption of nodes and the delay time, thus improving the reliability of the WSN.
Several software reliability growth models (SRGMs) have been developed to monitor reliability growth during the testing phase of software development. Most existing research in the literature assumes that a similar testing effort is required for each debugging effort. In practice, however, different types of faults may require different amounts of testing effort for their detection and removal. Consequently, faults are classified into three categories on the basis of severity: simple, hard and complex. This categorization may be extended to r types of faults on the basis of severity. Although some existing research has incorporated the concept that the fault removal rate (FRR) differs for different types of faults, it assumes that the FRR remains constant during the overall testing period. On the contrary, it has been observed that as testing progresses, the FRR changes due to changing testing strategy, skill, environment and personnel resources. In this paper, a general discrete SRGM is proposed for errors of different severity in software systems using the change-point concept. The models are then formulated for two particular environments and validated on two real-life data sets. The results show a better fit and wider applicability of the proposed models to different types of failure datasets.
With the rapid progress of component technology, the software development methodology of assembling large numbers of components to design complex software systems has matured. However, accurately assessing application reliability from the system architecture together with the component reliabilities remains a knotty problem. In this paper, the defects in the formal description of software architecture and the limitations of existing model assumptions are both analyzed, and a new software reliability model called the Component Interaction Mode (CIM) is proposed. With this model, the problem that existing component-based software reliability analysis models cannot deal with component interactions involving non-independent failures and non-random control transitions is resolved. Finally, practical examples are presented to illustrate the effectiveness of the model.
According to the principle that "the failure data is the basis of software reliability analysis", we built a software reliability expert system (SRES) by adopting artificial intelligence technology. By reasoning out a conclusion from the fitting results on the failure data of a software project, the SRES can recommend to users "the most suitable model" as a software reliability measurement model. We believe that the SRES can effectively overcome the inconsistency seen in applications of software reliability models. We report investigation results on the singularity and parameter estimation methods of the experimental models in the SRES.
By decoupling the control plane and data plane, the Software-Defined Networking (SDN) approach simplifies network management and speeds up network innovation. These benefits have led not only to prototypes but also to real SDN deployments. For wide-area SDN deployments, multiple controllers are often required, and the placement of these controllers becomes a particularly important task in the SDN context. This paper studies the problem of placing controllers in SDNs so as to maximize the reliability of SDN control networks. We present a novel metric, called expected percentage of control path loss, to characterize the reliability of SDN control networks. We formulate the reliability-aware controller placement problem, prove its NP-hardness, and examine several placement algorithms that can solve it. Through extensive simulations using real topologies, we show how the number of controllers and their placement influence the reliability of SDN control networks. We also find that, through strategic controller placement, the reliability of SDN control networks can be significantly improved without introducing unacceptable switch-to-controller latencies.
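A simplified reading of the expected-percentage-of-control-path-loss metric can be sketched by averaging, over single-link failures, the fraction of switches disconnected from their controller; the paper's precise definition may weight failure scenarios differently. The five-node topology and single-controller placement are hypothetical.

```python
# Toy 5-node topology as an undirected edge list; node 0 hosts the controller.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (3, 4)]

def reachable(nodes, edges, src):
    """Iterative (stack-based) traversal over the surviving edges."""
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def expected_path_loss(nodes, edges, controller):
    """Expected fraction of switches cut off from the controller,
    averaged over equally likely single-link failures."""
    switches = [n for n in nodes if n != controller]
    loss = 0.0
    for broken in edges:
        alive = [e for e in edges if e != broken]
        ok = reachable(nodes, alive, controller)
        loss += sum(1 for s in switches if s not in ok) / len(switches)
    return loss / len(edges)

nodes = [0, 1, 2, 3, 4]
print(expected_path_loss(nodes, edges, controller=0))
```

Here only the pendant link (3, 4) disconnects a switch, so the expected loss is (1/4)/5 = 0.05; placing a second controller near node 4 would drive the metric toward zero.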
In recent decades, many software reliability growth models (SRGMs) have been proposed to help engineers and testers measure software reliability precisely. Most of them are established on the non-homogeneous Poisson process (NHPP), and it has been shown that the prediction accuracy of such models can be improved by incorporating a characterization of testing effort. However, some research indicates that the fault detection rate (FDR) is another key factor affecting final software quality. Most early NHPP-based models treat the FDR as a constant or piecewise function, which does not fit the different testing stages well. Thus, this paper first incorporates a bathtub-shaped multivariate FDR function into NHPP-based SRGMs that consider testing effort, in order to further improve performance. A new model framework is proposed, and a stepwise method is used to apply the framework to real data sets to find the optimal model. Experimental studies show that the obtained new model provides better fitting and prediction performance than other traditional SRGMs.
As web-server based business rapidly develops and becomes popularized, evaluating and improving the reliability of web servers has become extremely important. Although a large number of software reliability growth models (SRGMs), including those combined with multiple change-points (CPs), are available, these conventional SRGMs cannot be directly applied to web software reliability analysis because of the complex web operational profile. To characterize the web operational profile precisely, it should be recognized that the workload of a web server is normally non-homogeneous and is often observed with a pattern of random impulsive shocks. A web software reliability model with random impulsive shocks and its statistical analysis method are developed. In the proposed model, the web server workload is characterized by a geometric Brownian motion process. Based on a real data set from the IIS server logs of the ICRMS website (www.icrms.cn), the proposed model is demonstrated to be powerful for estimating impulsive shocks and web software reliability.
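The workload model can be sketched by simulating a geometric Brownian motion path with the standard log-Euler update; the drift and volatility values are illustrative assumptions, not estimates from the ICRMS logs, and the impulsive-shock component of the paper's model is omitted.

```python
import math, random

random.seed(7)

def gbm_path(s0, mu, sigma, dt, steps):
    """Simulate one geometric Brownian motion path via the exact update
    S_{t+dt} = S_t * exp((mu - sigma^2/2) * dt + sigma * sqrt(dt) * Z)."""
    path = [s0]
    for _ in range(steps):
        z = random.gauss(0.0, 1.0)
        path.append(path[-1]
                    * math.exp((mu - 0.5 * sigma ** 2) * dt
                               + sigma * math.sqrt(dt) * z))
    return path

# Illustrative parameters: 5%/unit-time mean drift, 20% volatility.
workload = gbm_path(s0=100.0, mu=0.05, sigma=0.2, dt=0.01, steps=1000)
print(workload[-1])
```

A GBM path stays strictly positive, which matches a request-rate workload; the paper's impulsive shocks would be layered on top of such a baseline process.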
Funding (ITO TFT study): supported in part by the National Natural Science Foundation of China (62404110, 62274033); the Natural Science Foundation of Jiangsu Province (BK20221453); the Fundamental Research Funds for the Central Universities; and the Natural Science Research Start-up Foundation for Recruiting Talents of Nanjing University of Posts and Telecommunications (NY223159).
Funding (marine engine reliability study): supported by Istanbul Technical University (Project No. 45698) and through the "Young Researchers' Career Development Project - training of doctoral students" of the Croatian Science Foundation.
Funding (concrete durability study): funded by the National Natural Science Foundation of China (No. 52178216); the Research on the Durability and Application of High-performance Concrete for Highway Engineering in the Cold and Arid Salt Areas of Northwest China (No. 2022-24); and the Construction Project of the Scientific Research Platform of Provincial Enterprises Supported by the Capital Operating Budget of Gansu Province (No. 2023GZ018).
Abstract: To study the durability of concrete in the harsh environments of Northwest China, concrete was prepared with various durability-enhancing materials, such as a concrete anti-erosion inhibitor (SBT-TIA), acrylate polymer (AP), and super absorbent resin (SAP). The erosion modes and internal deterioration mechanisms under salt freeze-thaw cycles and dry-wet cycles were explored. The results show that adding enhancing materials can effectively improve the resistance of concrete to salt freezing and sulfate erosion: the relevant indexes of concrete with X-AP and T-AP improve after salt freeze-thaw cycles; concrete with SBT-TIA shows the best sulfate corrosion resistance; and concrete with AP displays the best resistance to salt freezing. Microanalysis shows that, in concrete mixed with enhancing materials, increasing the number of cycles decreases the generation of internal hydration products and defects and improves the related indexes. Based on Wiener-model analysis, the reliability of concrete with different lithologies and enhancing materials is improved. These results may serve as a reference for the application of manufactured-sand concrete and enhancing materials in Northwest China, especially for studying the improvement effects and mechanisms of enhancing materials on concrete performance.
Funding: Supported by the Major Program of the National Natural Science Foundation of China (Grant No. 42090055), the National Key Scientific Instruments and Equipment Development Projects of China (Grant No. 41827808), and the National Natural Science Foundation of China (Grant No. 42207216).
Abstract: Reservoir-induced landslides in China's Three Gorges Reservoir area are prone to tensile cracks under the influence of their own weight and fluctuations in water levels. The presence of cracks indicates that the tensile stress in the area has exceeded the tensile strength of the soil, leading to local instability. To explore the impact of tensile failure behavior on the stability and failure modes of reservoir landslides, the Huangtupo Riverside Slump #1 is taken as a case study. By considering local tensile failure, potential tensile cracks are incorporated into the analysis via the limit equilibrium method and reliability theory. The reliability of the landslide under different tensile failure scenarios is quantified. The strain-softening characteristics of the soil are incorporated to further analyze the failure transmission path of the landslide. Finally, the potential failure modes were validated through physical model tests. The results show that cracks developing at rear positions reduce the stability of the slope and increase the probability of instability. During the failure process, retrogressive failures with multiple sliding surfaces are likely to occur. However, tensile failure at the forefront reduces the likelihood of an individual slide mass descending. Progressive failure produces both regular and skip transmission patterns. Additionally, cracks and water level changes can shift the positions of the most dangerous blocks. Therefore, in practical landslide analysis and prevention, it is necessary to consider local tensile damage and identify potential tensile crack locations in advance to optimize prevention measures and accurately evaluate landslide risk.
Funding: Supported by the National Natural Science Foundation of China (No. 52375236) and the Fundamental Research Funds for the Central Universities of China (No. 23D110316).
Abstract: In reliability analyses, the absence of a priori information on the most probable point of failure (MPP) may result in overlooking critical points, leading to biased assessment outcomes. Moreover, second-order reliability methods exhibit limited accuracy in highly nonlinear scenarios. To overcome these challenges, a novel reliability analysis strategy based on a multimodal differential evolution algorithm and a hypersphere integration method is proposed. Initially, the penalty function method is employed to reformulate the MPP search as a constrained optimization task. Subsequently, a differential evolution algorithm incorporating a population delineation strategy is used to identify all MPPs. Finally, a paraboloid equation is constructed from the curvature of the limit-state function at each MPP, and the failure probability of the structure is calculated using the hypersphere integration method. The effectiveness of MPP localization is compared across multiple numerical cases and two engineering examples, with the accuracy of the resulting failure probabilities compared against the first-order reliability method (FORM) and the second-order reliability method (SORM). The results indicate that the method effectively identifies all existing MPPs and achieves higher solution precision.
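The paper's multimodal search is beyond a short sketch, but the classical single-MPP search it generalizes can be shown compactly. Below is the standard Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration in standard normal space on a linear limit state whose reliability index is known in closed form; the limit-state function is a textbook example, not one from the paper.

```python
import math

def hlrf(g, grad, n, tol=1e-8, max_iter=100):
    """Hasofer-Lind/Rackwitz-Fiessler iteration for the MPP in standard
    normal space. Returns (u*, beta). Textbook form, for illustration."""
    u = [0.0] * n
    for _ in range(max_iter):
        gu = g(u)
        gr = grad(u)
        norm2 = sum(c * c for c in gr)
        dot = sum(c * x for c, x in zip(gr, u))
        # Project onto the limit state linearized at the current point.
        u_new = [(dot - gu) / norm2 * c for c in gr]
        if max(abs(a - b) for a, b in zip(u, u_new)) < tol:
            u = u_new
            break
        u = u_new
    beta = math.sqrt(sum(x * x for x in u))
    return u, beta

# Linear limit state g(u) = 3 - u1 - u2: exact beta = 3 / sqrt(2).
g = lambda u: 3.0 - u[0] - u[1]
grad = lambda u: [-1.0, -1.0]
u_star, beta = hlrf(g, grad, 2)
pf = 0.5 * math.erfc(beta / math.sqrt(2))  # Phi(-beta)
print(beta, pf)
```

A multimodal method must instead keep several such candidate points alive at once, which is what the population delineation strategy in the paper is for.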
Funding: Supported by the Anhui Provincial Natural Science Foundation (Grant No. 2308085MA19), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDA0410401), the National Natural Science Foundation of China (Grant No. 52202120), the National Key Research and Development Program of China (Grant No. 2023YFA1609800), and the USTC Research Funds of the Double First-Class Initiative (Grant No. YD2310002013).
Abstract: Small-angle X-ray scattering (SAXS) is an advanced technique for characterizing the particle size distribution (PSD) of nanoparticles. However, the ill-posed nature of the inverse problem in SAXS data analysis often reduces the accuracy of conventional methods. This article proposes a user-friendly software package for PSD analysis, GranuSAS, which employs an algorithm integrating truncated singular value decomposition (TSVD) with the Chahine method. The approach uses TSVD for data preprocessing, generating a set of initial solutions with noise suppression. A high-quality initial solution is then selected via the L-curve method. This candidate solution is iteratively refined by the Chahine algorithm, enforcing constraints such as non-negativity and improving physical interpretability. Most importantly, GranuSAS employs a parallel architecture that simultaneously yields inversion results from multiple shape models and, by evaluating the accuracy of each model's reconstructed scattering curve, offers a suggestion for model selection in material systems. To systematically validate the accuracy and efficiency of the software, verification was performed on both simulated and experimental datasets. The results demonstrate that the proposed software delivers satisfactory accuracy and reliable computational efficiency. It provides an easy-to-use and reliable tool for researchers in materials science, helping them fully exploit the potential of SAXS in nanoparticle characterization.
Abstract: Test case prioritization and ranking play a crucial role in software testing by improving fault detection efficiency and ensuring software reliability. While prioritization selects the most relevant test cases for optimal coverage, ranking further refines their execution order to detect critical faults earlier. This study investigates machine learning techniques to enhance both prioritization and ranking, contributing to more effective and efficient testing processes. We first employ advanced feature engineering alongside ensemble models, including gradient boosting, support vector machines, random forests, and naive Bayes classifiers, to optimize test case prioritization, achieving an accuracy of 0.98847 and significantly improving the Average Percentage of Faults Detected (APFD). Subsequently, we introduce a deep Q-learning framework combined with a genetic algorithm (GA) to refine test case ranking within priority levels. This approach achieves a rank accuracy of 0.9172, demonstrating robust performance despite the increased computational demands of specialized variation operators. Our findings highlight the effectiveness of stacked ensemble learning and reinforcement learning in optimizing test case prioritization and ranking. This integrated approach improves testing efficiency, reduces late-stage defects, and enhances overall software stability. The study provides valuable insights for AI-driven testing frameworks, paving the way for more intelligent and adaptive software quality assurance methodologies.
Funding: Supported by the National Natural Science Foundation of China (No. 52275153), the Frontier Technologies R&D Program of Jiangsu, China (No. BF2024068), the Fund of Prospective Layout of Scientific Research for Nanjing University of Aeronautics and Astronautics, China, and the Research Fund of the State Key Laboratory of Mechanics and Control for Aerospace Structures (Nanjing University of Aeronautics and Astronautics), China (Nos. MCAS-I-0425K01, MCAS-I-0423G01).
Abstract: It is well recognized that reliability evaluation of Structural Health Monitoring (SHM) is a key issue that must be addressed urgently to promote the wide application of SHM methods. However, existing studies typically transfer Non-Destructive Testing/Evaluation (NDT/E) reliability metrics to SHM without a systematic analysis of where these metrics originated, and little attention is paid to the evaluation conditions that are essential to applying them. To address this issue, a new condition-control-based Dual-Reliability Evaluation (Dual-RE) method for SHM is proposed. The method is built on a systematic analysis of the whole reliability evaluation framework, from instrumentation to NDT, with emphasis placed on evaluation condition control. Considering the special online application scenario of SHM, the proposed Dual-RE method contains two key components: Integrated Sensor-based SHM-RE (IS-SHM-RE) and Critical Service Condition-based SHM-RE (CSC-SHM-RE). IS-SHM-RE evaluates the reliability of the integrated SHM sensors and system themselves under approximate repeatability conditions, while CSC-SHM-RE assesses SHM reliability under the dominant uncertainties during service, namely intermediate conditions. To demonstrate Dual-RE, crack monitoring using Guided Wave-based SHM (GW-SHM) on aircraft lug structures is taken as a case study. Both crack detection and sizing performance are evaluated in terms of accuracy and uncertainty.
Funding: Supported by the Council of Scientific and Industrial Research of India (09/028(0947)/2015-EMR-I).
Abstract: This paper proposes an artificial neural network (ANN) based software reliability model trained by a novel particle swarm optimization (PSO) algorithm for enhanced forecasting of software reliability. The proposed ANN is developed considering the fault generation phenomenon during software testing, with faults of different complexity levels; we demonstrate the model with three types of faults residing in the software. We propose a neighborhood-based fuzzy PSO algorithm for effective learning of the proposed ANN from software failure data. The fitting and prediction performance of the neighborhood fuzzy PSO based neural network model is compared with that of the standard PSO based model and existing ANN based software reliability models on three real software failure datasets. We also compare the performance of the proposed PSO algorithm with the standard PSO algorithm in training the proposed ANN. Statistical analysis shows that the neighborhood fuzzy PSO based neural network model has better fitting and predictive ability than the standard PSO based model and other ANN based software reliability models. Faster software release is achievable by applying the proposed PSO based neural network model during the testing period.
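For readers unfamiliar with the base algorithm, a minimal global-best PSO is sketched below on a simple test function. The paper's neighborhood-based fuzzy adaptation is not reproduced here, and all hyperparameters and the objective are illustrative choices.

```python
import random

def pso(f, dim, n_particles=20, iters=200, seed=0,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Standard global-best PSO minimizing f over a box (illustrative;
    the paper's variant adds a neighborhood-based fuzzy adaptation)."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive pull (own best) + social pull (swarm best).
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f

# Sanity check on the sphere function (minimum 0 at the origin).
sphere = lambda x: sum(v * v for v in x)
best, best_f = pso(sphere, dim=3)
print(best_f)
```

In the paper's setting, `f` would be the ANN's training loss on software failure data rather than a benchmark function.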
Funding: Supported by the National Natural Science Foundation of China (60173063).
Abstract: Based on the fact that software development cost is an important factor in controlling the whole project, we discuss the relationship between software development cost and software reliability according to empirical data collected from the development process. By evolutionary modeling, we obtain an empirical model of the relationship between cost and software reliability and validate the estimated results against the empirical data.
Funding: Supported by the National Natural Science Foundation of China (No. 71671016) and the School Fund of Beijing Information Science & Technology University (No. 1935004).
Abstract: As one of the most important indexes for evaluating software quality, software reliability has seen increasing development in recent years. We investigate a software reliability growth model (SRGM) that predicts the occurrence of software faults based on a non-homogeneous Poisson process (NHPP). Unlike the independence assumptions in other models, we consider fault dependency. The testing faults are divided into three classes in this model: leading faults, first-step dependent faults, and second-step dependent faults. The leading faults occur independently and follow an NHPP, while the first-step dependent faults only become detectable after the related leading faults are detected. The second-step dependent faults can only be detected after the related first-step dependent faults are detected. The combined model is then built from the three sub-processes. Finally, an illustration based on a real dataset is presented to verify the proposed model.
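A rough sketch of how such class-wise mean value functions can be evaluated is given below. The Goel-Okumoto form for the leading faults and a delayed S-shaped form for the first-step dependent class are common textbook choices assumed here for illustration; they are not necessarily the paper's exact formulation, and the parameters are invented.

```python
import math

def m_leading(t, a, b):
    """Goel-Okumoto mean value function for independently detected
    (leading) faults: m(t) = a * (1 - exp(-b t))."""
    return a * (1.0 - math.exp(-b * t))

def m_dependent(t, a, c):
    """Illustrative first-step dependent class: detection lags the
    leading class, modeled here with a delayed S-shaped form
    m(t) = a * (1 - (1 + c t) * exp(-c t)). An assumption for
    illustration, not the paper's exact sub-process."""
    return a * (1.0 - (1.0 + c * t) * math.exp(-c * t))

a1, b = 80.0, 0.1   # leading faults: total content and detection rate
a2, c = 40.0, 0.1   # first-step dependent faults
for t in (0, 10, 50, 200):
    total = m_leading(t, a1, b) + m_dependent(t, a2, c)
    print(f"t={t:>3}: expected cumulative faults = {total:.2f}")
```

The delayed S-shape captures the lag: at equal rates, the dependent class accumulates more slowly early on, exactly because its faults only surface after their leading faults are found.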
Funding: Funded by Grant No. 12-INF2970-10 from the National Science, Technology and Innovation Plan (MAARIFAH), King Abdul-Aziz City for Science and Technology (KACST), Kingdom of Saudi Arabia.
Abstract: Maintaining software reliability is key to conducting quality research, and it can be aided by having less complex applications. While developers and other experts have made significant efforts in this context, the level of reliability is not yet what it should be. Therefore, further research into detailed mechanisms for evaluating and increasing software reliability is essential. A significant aspect of raising the degree of application reliability is the quantitative assessment of reliability. Multiple statistical as well as soft computing methods are available in the literature for predicting software reliability; however, none of these mechanisms is useful for all kinds of failure datasets and applications. Hence, finding the most suitable model for reliability prediction is an important concern. This paper suggests a novel method for selecting the best reliability prediction model: a combination of the analytic hierarchy process (AHP), hesitant fuzzy (HF) sets, and the technique for order of preference by similarity to ideal solution (TOPSIS). In addition, a procedural sensitivity analysis across different iterations of the process was performed to validate the findings. The resulting prioritization of software reliability prediction models will help developers estimate reliability according to the software type.
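The TOPSIS step of such a pipeline is easy to sketch. The example below ranks three hypothetical candidate SRGMs on invented criteria with fixed weights; the fixed weights stand in for the AHP/hesitant-fuzzy weighting stage described in the paper, and all scores are made up.

```python
import math

def topsis(matrix, weights, benefit):
    """Plain TOPSIS: vector-normalize each criterion, apply weights,
    measure distance to the ideal and anti-ideal solutions, and score
    by the closeness coefficient. (The paper combines this with AHP
    and hesitant-fuzzy weighting.)"""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m)))
             for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, anti)
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical decision matrix: 3 candidate SRGMs scored on goodness of
# fit (benefit), prediction error (cost), and model complexity (cost).
models = ["GO", "delayed-S", "logistic"]
matrix = [[0.90, 0.12, 2.0],
          [0.95, 0.08, 3.0],
          [0.93, 0.10, 4.0]]
scores = topsis(matrix, weights=[0.5, 0.3, 0.2],
                benefit=[True, False, False])
best = models[max(range(len(models)), key=lambda i: scores[i])]
print(best, [round(s, 3) for s in scores])
```

Replacing the fixed weight vector with AHP-derived (or hesitant-fuzzy aggregated) weights is what allows the selection to adapt to different software types.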
Funding: Supported by the Pre-research Foundation of the CPLA General Equipment Department.
Abstract: Incorporating testing effort (TE) and imperfect debugging (ID) in the reliability modeling process may further improve the fitting and prediction results of software reliability growth models (SRGMs). To describe the S-shaped varying trend of the TE increasing rate more accurately, two S-shaped testing-effort functions (TEFs), i.e., a delayed S-shaped TEF (DS-TEF) and an inflected S-shaped TEF (IS-TEF), are first proposed. These two TEFs are then incorporated into various types (exponential-type, delayed S-shaped, and inflected S-shaped) of non-homogeneous Poisson process (NHPP) SRGMs with two forms of ID, yielding a series of new NHPP SRGMs that consider S-shaped TEFs as well as ID. Finally, these new SRGMs and several comparison NHPP SRGMs are applied to four real failure datasets to investigate their fitting and prediction power. The experimental results show that: (i) the proposed IS-TEF is more suitable and flexible for describing the consumption of TE than previous TEFs; (ii) incorporating TEFs into the inflected S-shaped NHPP SRGM may be more effective and appropriate than into the exponential-type and delayed S-shaped NHPP SRGMs; and (iii) the inflected S-shaped NHPP SRGM considering both the IS-TEF and ID yields the most accurate fitting and prediction results among the compared NHPP SRGMs.
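The two TEF shapes can be written down directly. The forms below are the standard delayed and inflected S-shaped functions from the SRGM literature, evaluated with made-up parameters; the paper's fitted values are not reproduced, and its exact parameterization may differ.

```python
import math

def delayed_s_tef(t, alpha, beta):
    """Delayed S-shaped TEF (cumulative effort):
    W(t) = alpha * (1 - (1 + beta*t) * exp(-beta*t))."""
    return alpha * (1.0 - (1.0 + beta * t) * math.exp(-beta * t))

def inflected_s_tef(t, alpha, beta, r):
    """Inflected S-shaped TEF (cumulative effort):
    W(t) = alpha * (1 - exp(-beta*t)) / (1 + r * exp(-beta*t)),
    where r controls the inflection."""
    return alpha * (1.0 - math.exp(-beta * t)) / (1.0 + r * math.exp(-beta * t))

# Illustrative parameters: total effort alpha, rate beta, inflection r.
alpha, beta_, r = 100.0, 0.05, 5.0
ts = [0, 20, 60, 120, 300]
print([round(delayed_s_tef(t, alpha, beta_), 1) for t in ts])
print([round(inflected_s_tef(t, alpha, beta_, r), 1) for t in ts])
```

Both curves rise from 0 toward the total effort alpha; the extra shape parameter r is what gives the inflected form its flexibility in matching slow-start effort consumption.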
Funding: Supported by the National Natural Science Foundation of China (61571336), the Science and Technology Project of Henan Province, China (172102210081), and the Independent Innovation Research Foundation of Wuhan University of Technology (2016-JL-036).
Abstract: As communication technology and smart manufacturing have developed, the Industrial Internet of Things (IIoT) has gained considerable attention from academia and industry. Wireless sensor networks (WSNs) have many advantages and broad applications in many areas, including environmental monitoring, which makes them a very important part of the IIoT. However, energy depletion and hardware malfunctions can lead to node failures in WSNs, and the industrial environment can also impair wireless channel transmission, leading to network reliability problems; the tightly coupled control and data planes in traditional networks further increase network management cost and complexity. In this paper, we introduce software-defined networking (SDN) and modify it to propose a framework called the improved software-defined wireless sensor network (improved SD-WSN). The proposed framework addresses the following issues: 1) for a large-scale heterogeneous network, it solves the problem of network management and the smooth merging of a WSN into the IIoT; 2) it solves the network coverage problem, which improves network reliability; 3) it addresses node failure due to various causes, particularly energy consumption. It is therefore necessary to improve the reliability of WSNs by developing schemes that reduce the energy consumption and delay time of network nodes under IIoT conditions. Experiments show that the improved approach significantly reduces node energy consumption and delay time, thus improving the reliability of the WSN.
Abstract: Several software reliability growth models (SRGMs) have been developed to monitor reliability growth during the testing phase of software development. Most existing research assumes that a similar testing effort is required for each debugging effort. In practice, however, different types of faults may require different amounts of testing effort for their detection and removal. Consequently, faults are classified into three categories on the basis of severity: simple, hard, and complex. This categorization may be extended to r types of faults. Although some existing research has incorporated the idea that the fault removal rate (FRR) differs across fault types, it assumes that the FRR remains constant over the whole testing period. On the contrary, it has been observed that as testing progresses, the FRR changes with testing strategy, skill, environment, and personnel resources. In this paper, a general discrete SRGM is proposed for faults of different severity using the change-point concept, and models are then formulated for two particular environments. The models were validated on two real-life datasets. The results show a better fit and wider applicability of the proposed models to different types of failure datasets.
Funding: Supported by the National Natural Science Foundation of China (Nos. 60873195, 60873003, and 61070220) and the Doctoral Foundation of the Ministry of Education (No. 20090111110002).
Abstract: With the rapid progress of component technology, the software development methodology of assembling a large number of components to design complex software systems has matured. However, accurately assessing application reliability from the system architecture together with the component reliabilities remains a knotty problem. In this paper, the defects in formal descriptions of software architecture and the limitations of existing model assumptions are both analyzed, and a new software reliability model called the Component Interaction Mode (CIM) is proposed. With this model, the problem that existing component-based software reliability analysis models cannot handle component interactions that are not failure-independent and control transitions that are not random is resolved. Finally, practical examples are presented to illustrate the effectiveness of the model.
Funding: Supported by the National Natural Science Foundation of China.
Abstract: Following the principle that "failure data are the basis of software reliability analysis," we built a software reliability expert system (SRES) using artificial intelligence technology. By reasoning over the fitting results for the failure data of a software project, the SRES can recommend to users the most suitable model as the software reliability measurement model. We believe the SRES can well overcome the inconsistency seen in applications of software reliability models. We report investigation results on the singularity and parameter estimation methods of the experimental models in the SRES.
Funding: Supported in part by the National High Technology Research and Development Program (863 Program) of China under Grant Nos. 2011AA01A101, 2013AA01330, and 2013AA013303.
Abstract: By decoupling the control plane and data plane, the Software-Defined Networking (SDN) approach simplifies network management and speeds up network innovation. These benefits have led not only to prototypes but also to real SDN deployments. For wide-area SDN deployments, multiple controllers are often required, and the placement of these controllers becomes a particularly important task in the SDN context. This paper studies the problem of placing controllers in SDNs so as to maximize the reliability of SDN control networks. We present a novel metric, called the expected percentage of control path loss, to characterize the reliability of SDN control networks. We formulate the reliability-aware controller placement problem, prove its NP-hardness, and examine several placement algorithms that can solve it. Through extensive simulations using real topologies, we show how the number of controllers and their placement influence the reliability of SDN control networks. We also find that, through strategic controller placement, the reliability of SDN control networks can be significantly improved without introducing unacceptable switch-to-controller latencies.
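A simplified stand-in for such a reliability-aware placement can be sketched on a toy topology: approximate each switch's control-path loss by the failure probability of its shortest hop path to the nearest controller, assuming independent link failures, and pick the placement minimizing the average. This hop-count approximation, the link failure probability, and the six-node topology are all illustrative assumptions, not the paper's exact metric or algorithms.

```python
from collections import deque

def bfs_hops(adj, src):
    """Hop distances from src in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def expected_path_loss(adj, placement, p_link=0.01):
    """Simplified stand-in for the paper's metric: each switch uses its
    nearest controller via a shortest hop path, and the path is lost if
    any link on it fails (independent failures with probability p_link).
    Returns the average loss probability over all non-controller nodes."""
    dists = {c: bfs_hops(adj, c) for c in placement}
    switches = [n for n in adj if n not in placement]
    total = 0.0
    for s in switches:
        h = min(dists[c].get(s, float("inf")) for c in placement)
        total += 1.0 - (1.0 - p_link) ** h
    return total / len(switches)

# Toy 6-node ring-with-chord topology; search for the best single
# controller location by exhaustive enumeration.
adj = {0: [1, 5], 1: [0, 2], 2: [1, 3, 5],
       3: [2, 4], 4: [3, 5], 5: [4, 0, 2]}
best = min(adj, key=lambda c: expected_path_loss(adj, {c}))
print(best, round(expected_path_loss(adj, {best}), 5))
```

For k controllers, the same evaluator can be driven by enumeration over k-subsets on small graphs; the paper's NP-hardness result is why larger instances need heuristic placement algorithms.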
Funding: Supported by the National Natural Science Foundation of China (61070220) and the Anhui Provincial Natural Science Foundation (1408085MKL79).
Abstract: In recent decades, many software reliability growth models (SRGMs) have been proposed to help engineers and testers measure software reliability precisely. Most of them are established based on the non-homogeneous Poisson process (NHPP), and it has been shown that the prediction accuracy of such models can be improved by characterizing the testing effort. However, some research indicates that the fault detection rate (FDR) is another key factor affecting final software quality. Most early NHPP-based models treat the FDR as a constant or a piecewise function, which does not fit the different testing stages well. This paper therefore incorporates a bathtub-shaped multivariate function for the FDR into NHPP-based SRGMs that consider testing effort, in order to further improve performance. A new model framework is proposed, and a stepwise method is used to apply the framework to real datasets to find the optimal model. Experimental studies show that the resulting model provides better fitting and prediction performance than other traditional SRGMs.
Funding: Supported by the International Technology Cooperation Project of Guizhou Province (QianKeHeWaiGZi [2012] 7052) and the National Scientific Research Project for Statistics (2012LZ054).
Abstract: As web-server based business has rapidly developed and become popularized, how to evaluate and improve the reliability of web servers has become extremely important. Although a large number of software reliability growth models (SRGMs), including those combined with multiple change-points (CPs), are available, these conventional SRGMs cannot be directly applied to web software reliability analysis because of the complex web operational profile. To characterize the web operational profile precisely, it should be recognized that the workload of a web server is normally non-homogeneous and often exhibits a pattern of random impulsive shocks. A web software reliability model with random impulsive shocks and its statistical analysis method are developed. In the proposed model, the web server workload is characterized by a geometric Brownian motion process. Based on a real dataset from the IIS server logs of the ICRMS website (www.icrms.cn), the proposed model is demonstrated to be powerful for estimating impulsive shocks and web software reliability.
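A geometric Brownian motion workload trace is straightforward to simulate from its exact discretization, as sketched below; the drift and volatility parameters are invented, not fitted to the paper's IIS logs, and the impulsive-shock component of the paper's model is omitted.

```python
import math
import random

def simulate_gbm(w0, mu, sigma, dt, steps, seed=42):
    """Geometric Brownian motion path via its exact discretization:
    W_{k+1} = W_k * exp((mu - sigma^2/2) * dt + sigma * sqrt(dt) * Z),
    Z ~ N(0, 1). Illustrative workload trace; parameters are made up."""
    rng = random.Random(seed)
    w = w0
    path = [w]
    for _ in range(steps):
        z = rng.gauss(0.0, 1.0)
        w *= math.exp((mu - 0.5 * sigma * sigma) * dt
                      + sigma * math.sqrt(dt) * z)
        path.append(w)
    return path

# Ten simulated days of hourly workload, starting from 100 requests/s.
path = simulate_gbm(w0=100.0, mu=0.05, sigma=0.3, dt=1.0 / 24, steps=240)
print(min(path) > 0, round(path[-1], 2))
```

GBM keeps the workload strictly positive and makes relative (rather than absolute) fluctuations stationary, which is why it suits request-rate processes; the paper layers random impulsive shocks on top of this baseline.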