Every year, around the world, between 250,000 and 500,000 people suffer a spinal cord injury (SCI). SCI is a devastating medical condition that arises from trauma or disease-induced damage to the spinal cord, disrupting the neural connections that allow communication between the brain and the rest of the body, which results in varying degrees of motor and sensory impairment. Disconnection in the spinal tracts is an irreversible condition owing to the poor capacity for spontaneous axonal regeneration in the affected neurons.
Small angle x-ray scattering (SAXS) is an advanced technique for characterizing the particle size distribution (PSD) of nanoparticles. However, the ill-posed nature of inverse problems in SAXS data analysis often reduces the accuracy of conventional methods. This article proposes a user-friendly software package for PSD analysis, GranuSAS, built on an algorithm that integrates truncated singular value decomposition (TSVD) with the Chahine method. TSVD is used for data preprocessing, generating a set of noise-suppressed initial solutions, from which a high-quality candidate is selected via the L-curve method. This candidate is then iteratively refined by the Chahine algorithm, which enforces constraints such as non-negativity and improves physical interpretability. Most importantly, GranuSAS employs a parallel architecture that simultaneously yields inversion results from multiple shape models and, by evaluating the accuracy of each model's reconstructed scattering curve, suggests the most appropriate model for the material system. To systematically validate the accuracy and efficiency of the software, verification was performed on both simulated and experimental datasets. The results demonstrate that the software delivers satisfactory accuracy together with reliable computational efficiency, providing materials scientists with an easy-to-use, reliable tool that helps them fully exploit the potential of SAXS in nanoparticle characterization.
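The abstract names the building blocks (TSVD initialization, L-curve selection, Chahine refinement) without showing them. Below is a minimal, illustrative Python sketch of the Chahine-style multiplicative refinement step, not GranuSAS's actual code: the sphere form-factor kernel, the q/r grids, and the uniform stand-in for the TSVD/L-curve initial solution are all assumptions.

```python
import numpy as np

def sphere_kernel(q, r):
    """|F(q, r)|^2 for homogeneous spheres, a common SAXS shape model."""
    qr = np.outer(q, r)
    f = 3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr**3
    return f**2 * (r**3) ** 2  # volume-squared weighting

def chahine_refine(K, I_meas, x0, n_iter=200):
    """Chahine-style update: rescale each size bin by kernel-weighted
    measured/computed intensity ratios; positive ratios keep x >= 0."""
    x = np.clip(x0, 1e-12, None)           # the TSVD output may dip below zero
    for _ in range(n_iter):
        I_calc = K @ x
        ratio = I_meas / np.clip(I_calc, 1e-30, None)
        x *= (K.T @ ratio) / np.clip(K.T @ np.ones_like(ratio), 1e-30, None)
    return x / x.sum()

# Toy demo: recover a lognormal-like PSD from noiseless simulated data.
q = np.linspace(0.05, 1.0, 80)             # scattering vector, nm^-1 (assumed)
r = np.linspace(1.0, 50.0, 60)             # particle radius grid, nm (assumed)
K = sphere_kernel(q, r)
x_true = np.exp(-0.5 * (np.log(r / 15.0) / 0.3) ** 2)
x_true /= x_true.sum()
I_meas = K @ x_true
x0 = np.full_like(r, 1.0 / len(r))         # stand-in for the TSVD/L-curve seed
x_rec = chahine_refine(K, I_meas, x0)
print("peak radius ~", r[np.argmax(x_rec)], "nm")
```

The multiplicative form is what supplies the non-negativity constraint mentioned in the abstract: a non-negative start multiplied by positive ratios can never turn negative.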
Test case prioritization and ranking play a crucial role in software testing by improving fault detection efficiency and ensuring software reliability. While prioritization selects the most relevant test cases for optimal coverage, ranking further refines their execution order so that critical faults are detected earlier. This study investigates machine learning techniques to enhance both prioritization and ranking, contributing to more effective and efficient testing processes. We first employ advanced feature engineering alongside ensemble models, including Gradient Boosting, Support Vector Machine, Random Forest, and Naive Bayes classifiers, to optimize test case prioritization, achieving an accuracy score of 0.98847 and significantly improving the Average Percentage of Fault Detection (APFD). Subsequently, we introduce a deep Q-learning framework combined with a Genetic Algorithm (GA) to refine test case ranking within priority levels. This approach achieves a rank accuracy of 0.9172, demonstrating robust performance despite the increased computational demands of specialized variation operators. Our findings highlight the effectiveness of stacked ensemble learning and reinforcement learning in optimizing test case prioritization and ranking. The integrated approach improves testing efficiency, reduces late-stage defects, and strengthens overall software stability. The study provides valuable insights for AI-driven testing frameworks, paving the way for more intelligent and adaptive software quality assurance methodologies.
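For readers unfamiliar with the APFD metric the study optimizes, the sketch below computes it with the standard formula APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where TF_i is the position of the first test revealing fault i. The test names and fault sets are hypothetical, and penalizing undetected faults at position n+1 is one common convention, not necessarily the paper's.

```python
# Standard APFD computation over a proposed test execution order.
def apfd(order, detects, num_faults):
    n = len(order)
    first_pos = {}                          # fault -> first revealing position
    for pos, test in enumerate(order, start=1):
        for fault in detects.get(test, ()):
            first_pos.setdefault(fault, pos)
    # Undetected faults penalized at n + 1 (one common convention).
    tf_sum = sum(first_pos.get(f, n + 1) for f in range(num_faults))
    return 1.0 - tf_sum / (n * num_faults) + 1.0 / (2 * n)

# Hypothetical example: 5 tests, 3 faults; the prioritized order
# front-loads the fault-revealing tests, yielding a high APFD.
detects = {"t3": {0, 1}, "t1": {2}}
print(apfd(["t3", "t1", "t2", "t4", "t5"], detects, 3))   # ~0.833
```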
In robotics applications such as antiterrorism or combat, a motion-constrained pursuer vehicle, such as a Dubins unmanned surface vehicle (USV), must get close enough to a moving target (within a prescribed zero or positive distance) as quickly as possible, giving rise to the extended minimum-time intercept problem (EMTIP). Existing research has primarily focused on the zero-distance intercept problem (MTIP), establishing necessary or sufficient conditions for MTIP optimality and computing optimal solutions with analytic algorithms such as root finding. However, these approaches depend heavily on the properties of the analytic algorithm, making them inapplicable when the problem settings change, for example when the effective range is positive or the target motion is more complicated than uniform rectilinear motion. In this study, a high-accuracy, quality-guaranteed mixed-integer piecewise-linear program (QG-PWL) is proposed for the EMTIP. The formulation accommodates different effective interception ranges and complicated target motions (variable velocity or complicated trajectories). The accuracy and quality guarantees of QG-PWL derive from the piecewise-linearization scheme and the accompanying operational strategies developed here. The approximation error in the intercept path length is proved to be bounded by h^2/(4√2), where h is the piecewise segment length.
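The stated h^2/(4√2) bound says the path-length error shrinks quadratically with the segment length h. The following small numeric illustration (our own, not from the paper) shows that O(h^2) behavior for a piecewise-linear approximation of a circular arc, a typical Dubins path primitive: error/h^2 stays roughly constant as h halves.

```python
import numpy as np

def pwl_length_error(h, R=1.0, arc=np.pi):
    """Length lost when a circular arc is replaced by chords of length ~h."""
    n = int(np.ceil(arc * R / h))                 # number of chords
    theta = np.linspace(0.0, arc, n + 1)
    pts = np.c_[R * np.cos(theta), R * np.sin(theta)]
    chord_total = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
    return arc * R - chord_total                  # true length minus PWL length

for h in (0.2, 0.1, 0.05):
    e = pwl_length_error(h)
    print(f"h={h:.2f}  error={e:.2e}  error/h^2={e / h**2:.3f}")
# The error/h^2 column is nearly constant: quadratic convergence, consistent
# in order with (though not a proof of) the paper's h^2/(4*sqrt(2)) bound.
```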
A variety of Software Reliability Growth Models (SRGMs) have been presented in the literature. These models struggle when handling various types of projects, because the nature of each project makes it difficult to build a single model that generalizes. In this paper we propose the use of Genetic Programming (GP), an evolutionary computation approach, to handle the software reliability modeling problem. GP addresses one of the key issues in computer science, automatic programming: the goal is to create, in an automated way, a computer program that enables a computer to solve a problem. GP is used here to build an SRGM that can predict the faults accumulated during the software testing process. We evaluate the GP-developed model and compare its performance with other common growth models from the literature. Our experimental results show that the proposed GP model is superior to the Yamada S-Shaped, Generalized Poisson, NHPP, and Schneidewind reliability models.
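As context for the comparison, one of the named baselines is easy to reproduce: the sketch below fits the Yamada S-shaped mean-value function m(t) = a(1 - (1 + bt)e^(-bt)) to cumulative fault counts by least squares. The weekly fault data are invented for illustration; a GP-evolved model would be judged against the same kind of error measure.

```python
import numpy as np
from scipy.optimize import curve_fit

def yamada_s_shaped(t, a, b):
    """Yamada S-shaped SRGM: a = expected total faults, b = detection rate."""
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

# Hypothetical cumulative fault counts over 12 weeks of testing.
t = np.arange(1, 13, dtype=float)
faults = np.array([3, 8, 16, 27, 39, 50, 59, 66, 71, 74, 76, 77], float)

(a, b), _ = curve_fit(yamada_s_shaped, t, faults, p0=(80.0, 0.3))
mse = np.mean((yamada_s_shaped(t, a, b) - faults) ** 2)
print(f"a={a:.1f} expected total faults, b={b:.3f}, MSE={mse:.2f}")
```

A GP approach evolves the functional form itself rather than fixing m(t) in advance, using a fitness function such as this MSE against the observed fault data.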
The present study aims at improving the problem-solving ability of the canonical genetic programming algorithm and describes an improved genetic programming (IGP). The proposed method modifies four components: population initialization, the reproduction operator, the crossover operator, and the mutation operator. The IGP is examined in two domains, and the results suggest that it is more effective and more efficient than the canonical algorithm across those domains.
Assembly-based programming paradigms build on reusable components written in any programming language (PL), with the components' configuration data recorded in WSDL passports. The assembly method is formal and secures the cooperation of the different reusable elements (modules, objects, components, services, and so on) being developed. A formal means of creating these paradigms with the help of interfaces is presented. An IDL interface (Stub, Skeleton) contains the data and operations for transmitting data to other linked standard elements and is described in the standard IDL language. Assembly is realized by integrating the reusable elements of these paradigms on an instrumental-technological complex (ITC).
Matlab offers high performance for engineering calculation, while C# excels at interface development. Combining their advantages, hybrid programming with Matlab and C# can significantly improve the efficiency and accuracy of reliability analysis software. This paper introduces the procedure for hybrid Matlab/C# programming in reliability analysis software. Finally, a mathematical problem is tested to verify the feasibility of this programming method.
Software security poses substantial risks to our society because software has become part of our daily life. Numerous techniques have been proposed to resolve or mitigate the impact of software security issues. Among them, software testing and analysis are two critical methods, and both benefit significantly from advances in deep learning technologies. Building on the successful use of deep learning in software security, researchers have recently explored the potential of large language models (LLMs) in this area. In this paper, we systematically review the results of applying LLMs to software security. We analyze the topics of fuzzing, unit testing, program repair, bug reproduction, data-driven bug detection, and bug triage. We deconstruct these techniques into several stages and analyze how LLMs can be used in each stage. We also discuss future directions for using LLMs in software security, both for the existing applications and for extensions of conventional deep learning research.
This manuscript develops an optimal frequency-band transmission system structure for QPSK. A software programming experiment covering the complete QPSK optimal band transmission system is designed and realized in Matlab. The experimental parameters used in the design are consistent with the requirements of the actual system. The key code of the software design is given for each module of the system, and the whole system is simulated. The simulation results show that the QPSK optimal band transmission system achieves the best reception performance and realizes its intended function.
Recently, researchers have shown increasing interest in combining more than one programming model in systems running on high-performance computing (HPC) systems to reach exascale by applying parallelism at multiple levels. Combining different programming paradigms, such as the Message Passing Interface (MPI), Open Multi-Processing (OpenMP), and Open Accelerators (OpenACC), can increase computation speed and improve performance. When multiple models are integrated, however, the probability of runtime errors increases, and detecting them becomes difficult, especially in the absence of testing techniques that target such errors. Numerous studies have been conducted to identify these errors, but no technique exists for detecting errors in three-level programming models. Despite the growing body of work that integrates the three programming models MPI, OpenMP, and OpenACC, no testing technology has been developed to detect the runtime errors, such as deadlocks and race conditions, that can arise from this integration. Therefore, this paper begins by defining and explaining the runtime errors that result from integrating the three programming models and that compilers cannot detect. For the first time, it presents a classification of the runtime errors that can result from the integration of the three models. The paper also proposes a parallel hybrid testing technique for detecting runtime errors in C++ systems that use the triple programming models MPI, OpenMP, and OpenACC. The hybrid technique combines static and dynamic analysis, given that some errors can be detected statically whereas others can only be detected dynamically; combining the two distinct technologies allows it to detect more errors. The proposed static analysis detects a wide range of error types in little time, while the potential errors whose occurrence depends on the operating environment are left to the dynamic analysis, which completes the validation.
The scheduling process, which assigns tasks to members, is a difficult job in project management. It plays a prerequisite role in determining the project's quality and sometimes in winning the bidding process. This study proposes an approach based on multi-objective combinatorial optimization to perform this assignment automatically. The generated schedule directs the project to completion along the shortest critical path, at minimum cost, while maintaining quality. Several real-world business constraints, related to human resources and task similarity, are added to the optimization model alongside the traditional rules from the literature. To support the decision-maker in evaluating different decision strategies, we use compromise programming to transform the multi-objective optimization (MOP) problem into a single-objective one, and we design a genetic algorithm scheme to solve the transformed problem. The proposed method incorporates the model as a navigator for search agents in the optimal-solution search process by transferring the objective function into the agents' fitness function. The optimizer can effectively find compromise solutions whether or not the user assigns priorities to particular objectives; this is achieved through a combination of non-preference and preference approaches. The experimental results show that the proposed method works well on the tested dataset.
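The compromise-programming step mentioned above has a standard form: score each candidate schedule by its weighted L_p distance from the ideal objective vector. The sketch below illustrates that scalarization; the three objectives, the weights, and the ideal/nadir points are illustrative assumptions, not the paper's data.

```python
import numpy as np

def compromise_score(f, ideal, nadir, w, p=2):
    """Weighted L_p distance of normalized objectives from the ideal point;
    the GA can minimize this scalar as its fitness."""
    f, ideal, nadir, w = map(np.asarray, (f, ideal, nadir, w))
    d = w * (f - ideal) / (nadir - ideal)   # 0 = ideal, 1 = worst per objective
    if np.isinf(p):
        return float(np.max(d))             # Chebyshev (worst-case) variant
    return float(np.sum(d**p) ** (1.0 / p))

# Hypothetical candidate schedule: 14 days, $9200, 0.12 quality loss,
# scored against assumed ideal/nadir points and user-chosen weights.
print(compromise_score(f=[14, 9200, 0.12],
                       ideal=[10, 8000, 0.0],
                       nadir=[30, 15000, 0.5],
                       w=[0.5, 0.3, 0.2]))
```

Equal weights give the non-preference behavior described in the abstract; user-supplied weights express preferences without changing the optimizer.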
Although WEB software development presents some difficulty, once students discover the relevant programming skills they will find the fun in programming through hands-on practice, which stimulates their enthusiasm for it. This exploration of engaging programming techniques in WEB software development is aimed at developers who are new to WEB software: through hands-on practice they experience the fun of WEB programming, and by moving gradually from simple to difficult tasks they master WEB programming skills and grow more interested in programming. Adding special webpage effects enhances students' interest in programming, and focused practice on databases helps students overcome their fear of programming database connections.
We apply the simplex algorithm, a branch of linear programming, to efficiently determine the allocation of resources required to operate a company in the software development field. The main aim of applying this technique is to maximize the company's profit under certain limitations. This can be done using a trial-and-error approach; however, that tedious process can be replaced by user-level tools such as Excel, which are based on linear programming and give more accurate results. Small software companies cannot afford to hire many senior programmers to produce the required level of quality and keep up with the demand for new features; on the other hand, lowering product quality reduces the number of customers and decreases profit. Another aspect is maximizing the utilization of the hosting servers that provide services to customers, since the cost of buying and maintaining servers is extremely high. The simplex algorithm takes the specified constraints into account to compute the optimal allocation of the available resources, maximizing profit while limiting cost. This paper presents a model that uses the simplex algorithm with a set of constraints to determine how many projects of each type a company should take on in one period of time.
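As a concrete, hypothetical instance of the model described, the sketch below solves a two-project-type version with SciPy's linear-programming interface. All coefficients are invented, and the LP relaxation is shown for simplicity, although project counts would be integers in practice.

```python
from scipy.optimize import linprog

# Maximize profit 12000*x_A + 5000*x_B subject to (hypothetical) limits on
# senior-programmer hours and hosting-server capacity for one period.
c = [-12000, -5000]                 # negated: linprog minimizes by default
A_ub = [[120, 60],                  # senior-programmer hours per project
        [3, 1]]                     # servers consumed per project
b_ub = [960, 18]                    # hours and servers available

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)],
              method="highs")       # HiGHS solver includes simplex variants
print(res.x, -res.fun)              # optimal project mix and maximum profit
# Here the optimum takes 2 type-A and 12 type-B projects for $84,000.
```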
Over the last two decades, the dogma that cell fate is immutable has been increasingly challenged, with important implications for regenerative medicine. The breakthrough discovery that induced pluripotent stem cells could be generated from adult mouse fibroblasts is powerful proof that cell fate can be changed. An exciting extension of the discovery of cell fate impermanence is the direct cellular reprogramming hypothesis: that terminally differentiated cells can be reprogrammed into other adult cell fates without first passing through a stem cell state.
The brain's extracellular matrix (ECM), which is comprised of protein and glycosaminoglycan (GAG) scaffolds, constitutes 20%-40% of the human brain and is considered one of the largest influencers on brain cell functioning (Soles et al., 2023). Synthesized by neural and glial cells, the brain's ECM regulates a myriad of homeostatic cellular processes, including neuronal plasticity and firing (Miyata et al., 2012), cation buffering (Morawski et al., 2015), and glia-neuron interactions (Anderson et al., 2016). Considering the diversity of functions, dynamic remodeling of the brain's ECM indicates that this understudied medium is an active participant in both normal physiology and neurological diseases.
Spectrum-based fault localization (SBFL) generates a ranked list of suspicious elements from the program execution spectrum, but the excessive number of elements ranked in parallel results in low localization accuracy. Most researchers consider intra-class dependencies to improve localization accuracy; however, studies show that inter-class method-call faults account for more than 20% of faults, so such methods still have limitations. To solve these problems, this paper proposes a two-phase software fault localization approach based on relational graph convolutional neural networks (Two-RGCNFL). In Phase 1, the method call dependence graph (MCDG) of the program is constructed, the intra-class and inter-class dependencies in the MCDG are extracted with a relational graph convolutional neural network, and a classifier identifies the faulty methods; the GraphSMOTE algorithm is improved to alleviate the impact of class imbalance on classification accuracy. To address the parallel ranking of suspiciousness values in traditional SBFL, Phase 2 uses Doc2Vec to learn static features while spectrum information serves as dynamic features, and a RankNet model based on a siamese multi-layer perceptron scores and ranks the statements in each faulty method. Experiments on 5 real projects from the Defects4J benchmark show that, compared with the traditional SBFL technique and two baseline methods, our approach improves Top-1 accuracy by 262.86%, 29.59%, and 53.01%, respectively, which verifies the effectiveness of Two-RGCNFL. Ablation experiments further confirm the importance of inter-class dependencies.
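For contrast with the learned ranking in Phase 2, the following sketch shows a representative traditional SBFL score, Ochiai, computed from a toy spectrum. This is the kind of parallel-ranked suspiciousness list Two-RGCNFL is designed to improve on; the coverage numbers are invented.

```python
import math

def ochiai(ef, ep, nf):
    """Ochiai suspiciousness. ef/ep: failing/passing tests covering the
    element; nf: failing tests not covering it. Higher = more suspicious."""
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

# Hypothetical spectrum: (ef, ep) per statement, 4 failing tests in total.
coverage = {"s1": (3, 5), "s2": (4, 1), "s3": (0, 6)}
total_failing = 4
ranked = sorted(coverage,
                key=lambda s: ochiai(coverage[s][0], coverage[s][1],
                                     total_failing - coverage[s][0]),
                reverse=True)
print(ranked)   # statements covered by the most failing runs rank first
```

Because many statements share identical (ef, ep) counts in practice, such formulas produce the ties, elements "ranked in parallel", that motivate the learned RankNet scorer.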
Software-defined networking (SDN) is an innovative paradigm that separates the control and data planes, introducing centralized network control. SDN is increasingly being adopted by carrier-grade networks, offering enhanced network management capabilities compared with traditional networks. However, because SDN is designed to ensure high-level service availability, it faces additional challenges. One of the most critical is ensuring efficient detection of, and recovery from, link failures in the data plane. Such failures can significantly impact network performance and lead to service outages, making resiliency a key concern for the effective adoption of SDN. Since the recovery process intrinsically depends on timely failure detection, this research surveys and analyzes the current literature on both failure detection and recovery approaches in SDN. The survey provides a critical comparison of existing failure detection techniques, highlighting their advantages and disadvantages. It also examines current failure recovery methods, categorized as restoration-based or protection-based, and offers a comprehensive comparison of their strengths and limitations. Finally, future research challenges and directions are discussed to address the shortcomings of existing failure recovery methods.
Software-related security aspects are a growing and legitimate concern, especially with 5G data available at our fingertips. Research in this field requires periodic comparative analysis as new techniques emerge rapidly. The purpose of this study is to review recent developments in integrating security into the software development lifecycle (SDLC) by analyzing articles published in the last two decades, and to propose a way forward. The review follows Kitchenham's review protocol and is divided into three main stages: planning, execution, and analysis. From the 100 selected articles, it becomes evident that a collaborative approach is necessary for addressing critical software security risks (CSSRs) through effective risk management and estimation techniques. Quantifying risks on a numeric scale enables a comprehensive understanding of their severity, facilitating focused resource allocation and mitigation efforts. Through a comprehensive understanding of potential vulnerabilities and the proactive mitigation facilitated by protection poker, organizations can prioritize resources effectively to ensure the successful outcome of projects and initiatives in today's dynamic threat landscape. The review reveals that automated tools for threat analysis and security testing still need to be developed. Accurately estimating the effort required to prioritize potential security risks remains a major challenge in software security; estimation accuracy can be improved by exploring new techniques, particularly those involving deep learning, and these estimation methods must be validated to ensure all potential security threats are addressed. Another challenge is selecting the right model for each specific security threat; to achieve a comprehensive evaluation, researchers should use well-known benchmark checklists.
Link failure is a critical issue in large networks and must be effectively addressed. In software-defined networks (SDN), link failure recovery schemes are categorized into proactive and reactive approaches. Reactive schemes have longer recovery times, while proactive schemes provide faster recovery but overwhelm switch memory with flow entries. As SDN adoption grows, ensuring efficient recovery from link failures in the data plane becomes crucial; data center networks (DCNs) in particular demand rapid recovery times and efficient resource utilization to meet carrier-grade requirements. This paper proposes an efficient Decentralized Failure Recovery (DFR) model for SDNs that meets recovery time requirements while optimizing switch memory consumption. The DFR model enables switches to autonomously reroute traffic upon link failures without involving the controller, achieving fast recovery times while minimizing memory usage. DFR employs the Fast Failover Group of the OpenFlow standard for local recovery without controller communication, and it uses the k-shortest path algorithm to proactively install backup paths, allowing immediate local recovery without controller intervention and enhancing overall network stability and scalability. To reduce switch memory usage, DFR aggregates flow entries: instead of matching flow entries to the destination host's MAC address, DFR matches packets to the destination switch's MAC address, reducing the switches' Ternary Content-Addressable Memory (TCAM) consumption. Additionally, DFR modifies Address Resolution Protocol (ARP) replies to provide source hosts with the destination switch's MAC address, enabling flow entry aggregation without affecting normal network operations. The performance of DFR is evaluated with the Mininet 2.3.1 network emulator and the Ryu 3.1 SDN controller. Across different numbers of active flows, hosts per edge switch, and network sizes, the proposed model outperformed several failure recovery models (restoration-based, protection by flow entries, protection by group entries, and protection by VLAN tagging) in terms of recovery time, switch memory consumption, and controller overhead, measured as the number of flow entry updates needed to recover from the failure. Experimental results demonstrate that DFR achieves recovery times under 20 milliseconds, satisfying carrier-grade requirements for rapid failure recovery. Moreover, DFR reduces switch memory usage by up to 95% compared with traditional protection methods and minimizes controller load by eliminating controller intervention during failure recovery. The results underscore the efficiency and scalability of the DFR model, making it a practical solution for enhancing network resilience in SDN environments.
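Below is a minimal sketch of DFR's proactive path-precomputation step, under our own assumptions about the details: k shortest paths are computed per switch pair (here with networkx), the first serving as the primary route and the rest feeding OpenFlow Fast Failover group buckets, with entries keyed by destination switch to permit the aggregation described above.

```python
import itertools
import networkx as nx

def backup_paths(G, src, dst, k=2):
    """Yield the k shortest simple paths from src to dst; the first is the
    primary, the rest become Fast Failover backup buckets (assumed usage)."""
    gen = nx.shortest_simple_paths(G, src, dst)
    return list(itertools.islice(gen, k))

# Toy four-switch topology with two disjoint s1 -> s4 routes.
G = nx.Graph([("s1", "s2"), ("s2", "s4"), ("s1", "s3"), ("s3", "s4")])
print(backup_paths(G, "s1", "s4", k=2))
# Two equal-cost paths, e.g. s1-s2-s4 and s1-s3-s4: primary plus a
# disjoint backup, so s1 can fail over locally if its primary port dies.
```

Because every host behind s4 maps to the same destination-switch key, the precomputed entries are shared across those hosts, which is the source of the TCAM savings the abstract reports.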