Small-angle X-ray scattering (SAXS) is an advanced technique for characterizing the particle size distribution (PSD) of nanoparticles. However, the ill-posed nature of the inverse problem in SAXS data analysis often reduces the accuracy of conventional methods. This article presents user-friendly software for PSD analysis, GranuSAS, which employs an algorithm that integrates truncated singular value decomposition (TSVD) with the Chahine method. The approach uses TSVD for data preprocessing, generating a set of initial solutions with noise suppression. A high-quality initial solution is then selected via the L-curve method and iteratively refined by the Chahine algorithm, which enforces constraints such as non-negativity and improves physical interpretability. Most importantly, GranuSAS employs a parallel architecture that simultaneously yields inversion results from multiple shape models and, by evaluating the accuracy of each model's reconstructed scattering curve, suggests an appropriate model for the material system at hand. To systematically validate the accuracy and efficiency of the software, verification was performed on both simulated and experimental datasets. The results demonstrate that GranuSAS delivers satisfactory accuracy and reliable computational efficiency, providing an easy-to-use tool that helps materials researchers fully exploit the potential of SAXS in nanoparticle characterization.
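The abstract describes a two-stage inversion: a TSVD pseudo-inverse supplies a smoothed starting point, and a multiplicative Chahine-style iteration refines it while keeping the solution non-negative. Below is a minimal sketch of that pipeline for a spherical form-factor kernel; all function names and parameter values are illustrative rather than taken from GranuSAS, the update shown is one common multiplicative variant of the Chahine method, and the truncation rank would in practice be chosen by the L-curve criterion rather than fixed by hand.

```python
import numpy as np

def sphere_form_factor(q, r):
    """Scattering kernel: intensity of a homogeneous sphere of radius r at q."""
    qr = np.outer(q, r)
    return (3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr**3) ** 2

def tsvd_initial_solution(K, I_obs, rank):
    """TSVD pseudo-inverse: keep only the `rank` largest singular values."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    f = Vt[:rank].T @ ((U[:, :rank].T @ I_obs) / s[:rank])
    return np.clip(f, 0.0, None)          # clip negatives before refinement

def chahine_refine(K, I_obs, f0, n_iter=200):
    """Multiplicative Chahine-style update; non-negativity is preserved."""
    f = f0.copy()
    for _ in range(n_iter):
        f *= (K.T @ I_obs) / (K.T @ (K @ f) + 1e-30)
    return f

q = np.linspace(0.01, 0.5, 200)                  # scattering vector, nm^-1
r = np.linspace(1.0, 50.0, 100)                  # radius grid, nm
K = sphere_form_factor(q, r)
f_true = np.exp(-0.5 * ((r - 20.0) / 4.0) ** 2)  # synthetic Gaussian PSD
rng = np.random.default_rng(0)
I_obs = K @ f_true * (1 + 0.01 * rng.standard_normal(q.size))
f0 = tsvd_initial_solution(K, I_obs, rank=15)    # rank: L-curve choice in practice
psd = chahine_refine(K, I_obs, f0)
```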
Test case prioritization and ranking play a crucial role in software testing by improving fault detection efficiency and ensuring software reliability. While prioritization selects the most relevant test cases for optimal coverage, ranking further refines their execution order so that critical faults are detected earlier. This study investigates machine learning techniques to enhance both prioritization and ranking, contributing to more effective and efficient testing processes. We first employ advanced feature engineering alongside ensemble models, including Gradient Boosting, Support Vector Machine, Random Forest, and Naive Bayes classifiers, to optimize test case prioritization, achieving an accuracy score of 0.98847 and significantly improving the Average Percentage of Faults Detected (APFD). Subsequently, we introduce a deep Q-learning framework combined with a Genetic Algorithm (GA) to refine test case ranking within priority levels. This approach achieves a rank accuracy of 0.9172, demonstrating robust performance despite the increased computational demands of specialized variation operators. Our findings highlight the effectiveness of stacked ensemble learning and reinforcement learning in optimizing test case prioritization and ranking. The integrated approach improves testing efficiency, reduces late-stage defects, and improves overall software stability. The study provides valuable insights for AI-driven testing frameworks, paving the way for more intelligent and adaptive software quality assurance methodologies.
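The APFD metric the study optimizes rewards orderings that expose faults early: APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where n is the number of tests, m the number of faults, and TF_i the position of the first test that reveals fault i. The self-contained computation below uses invented test/fault data to show how a good ordering raises the score.

```python
def apfd(order, faults_detected_by):
    """Average Percentage of Faults Detected for one test ordering.

    order: list of test ids in execution order.
    faults_detected_by: dict mapping fault id -> set of test ids exposing it.
    """
    n = len(order)
    m = len(faults_detected_by)
    position = {t: i + 1 for i, t in enumerate(order)}   # 1-based rank
    # TF_i: rank of the first test in the order that reveals fault i
    tf = [min(position[t] for t in tests if t in position)
          for tests in faults_detected_by.values()]
    return 1.0 - sum(tf) / (n * m) + 1.0 / (2 * n)

faults = {"f1": {"t3"}, "f2": {"t1", "t4"}, "f3": {"t2"}}
print(apfd(["t3", "t1", "t2", "t4"], faults))   # 0.625: faults found early
print(apfd(["t4", "t2", "t1", "t3"], faults))   # ~0.542: reversed order scores lower
```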
This paper presents an Eulerian-Lagrangian algorithm for direct numerical simulation (DNS) of particle-laden flows. The algorithm is applicable to simulations of dilute suspensions of small inertial particles in turbulent carrier flow. The Eulerian framework numerically resolves the turbulent carrier flow using a parallelized, finite-volume DNS solver on a staggered Cartesian grid. Particles are tracked with a point-particle method using a Lagrangian particle tracking (LPT) algorithm. The proposed Eulerian-Lagrangian algorithm is validated on an inertial particle-laden turbulent channel flow for different Stokes numbers. The particle concentration profiles and higher-order statistics of the carrier and dispersed phases agree well with benchmark results. We investigate the effect of the fluid velocity interpolation and numerical integration schemes of the particle tracking algorithm on particle dispersion statistics. The suitability of fluid velocity interpolation schemes for predicting particle dispersion statistics is discussed in the framework of the particle tracking algorithm coupled to the finite-volume solver. In addition, we present the parallelization strategies implemented in the algorithm and evaluate their parallel performance.
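In the point-particle limit the dispersed phase obeys a drag law driven by the fluid velocity interpolated to the particle position, so the choice of interpolation and time-integration scheme directly shapes the dispersion statistics the paper examines. Below is a deliberately simple sketch (trilinear interpolation, explicit Euler, linear Stokes drag on a periodic uniform grid); the paper's solver is finite-volume and staggered, and higher-order schemes would replace both pieces.

```python
import numpy as np

def trilinear(field, pos, h):
    """Trilinear interpolation of a periodic, uniformly spaced 3-D field."""
    s = pos / h
    i0 = np.floor(s).astype(int)
    w = s - i0
    n = np.array(field.shape)
    val = 0.0
    for c in ((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1),
              (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)):
        wt = np.prod([w[d] if c[d] else 1.0 - w[d] for d in range(3)])
        val += wt * field[tuple((i0 + c) % n)]
    return val

def advance_particle(x, v, u_fields, h, tau_p, dt):
    """One explicit-Euler step of the point-particle Stokes-drag equations:
       dv/dt = (u(x) - v) / tau_p,   dx/dt = v."""
    u_at_p = np.array([trilinear(u, x, h) for u in u_fields])
    v_new = v + dt * (u_at_p - v) / tau_p
    return x + dt * v_new, v_new

# Toy usage: a quiescent particle relaxing toward a uniform unit x-velocity
h, n = 0.1, 16
u_fields = (np.ones((n, n, n)), np.zeros((n, n, n)), np.zeros((n, n, n)))
x, v = np.array([0.5, 0.5, 0.5]), np.zeros(3)
for _ in range(100):
    x, v = advance_particle(x, v, u_fields, h, tau_p=0.2, dt=0.01)
print(v)   # approaches [1, 0, 0] on the Stokes time scale tau_p
```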
Software security poses substantial risks to our society because software has become part of our lives. Numerous techniques have been proposed to resolve or mitigate the impact of software security issues. Among them, software testing and analysis are two of the critical methods, and both benefit significantly from advances in deep learning technologies. Following the successful use of deep learning in software security, researchers have recently explored the potential of using large language models (LLMs) in this area. In this paper, we systematically review the results on LLMs in software security. We analyze the topics of fuzzing, unit testing, program repair, bug reproduction, data-driven bug detection, and bug triage. We deconstruct these techniques into several stages and analyze how LLMs can be used in each stage. We also discuss future directions for using LLMs in software security, including directions for the existing uses of LLMs and extensions from conventional deep learning research.
Vulnerabilities are a known problem in modern Open Source Software (OSS). Most developers rely on third-party libraries to accelerate feature implementation. However, these libraries may contain vulnerabilities that attackers can exploit to propagate malicious code, posing security risks to dependent projects. Existing research addresses these challenges through Software Composition Analysis (SCA) for vulnerability detection and remediation. Nevertheless, current solutions may introduce additional issues, such as incompatibilities, dependency conflicts, and new vulnerabilities. To address this, we propose Vulnerability Scan and Protection (VulnScanPro), a robust solution for detecting and remediating vulnerabilities in Java projects. Specifically, VulnScanPro builds a fine-grained method graph to identify unreachable methods. The method graph is mapped onto the project's dependency tree, constructing a comprehensive vulnerability propagation graph that identifies unreachable vulnerable APIs and dependencies. Based on this analysis, we propose three remediation strategies: (1) removing unreachable vulnerable dependencies, thereby resolving security risks and reducing maintenance overhead; (2) upgrading vulnerable dependencies to the closest non-vulnerable versions while pinning the versions of transitive dependencies introduced by the vulnerable dependency, in order to mitigate compatibility issues and prevent the introduction of new vulnerabilities; and (3) eliminating unreachable vulnerable APIs, particularly when security patches are either incompatible or absent. Experimental results show that these strategies effectively mitigate vulnerabilities and enhance the overall security of a project.
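The core of the approach is reachability: a vulnerable API that no path from the project's entry points can reach poses no immediate exploit surface and becomes a candidate for removal rather than upgrade. A toy illustration of that classification over a method call graph is sketched below; the graph contents are invented, and VulnScanPro's actual graph is fine-grained and mapped onto the Maven dependency tree.

```python
from collections import deque

def reachable_methods(call_graph, entry_points):
    """BFS over a caller -> callees adjacency map; returns the reachable set."""
    seen, queue = set(entry_points), deque(entry_points)
    while queue:
        m = queue.popleft()
        for callee in call_graph.get(m, ()):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

call_graph = {
    "app.main":        ["lib.parse", "lib.render"],
    "lib.parse":       ["lib.unescape"],
    "lib.render":      [],
    "lib.deserialize": ["lib.unsafeReadObject"],   # vulnerable, never called
}
vulnerable_apis = {"lib.unsafeReadObject", "lib.unescape"}
reached = reachable_methods(call_graph, {"app.main"})
print(vulnerable_apis & reached)   # {'lib.unescape'}: needs upgrade or patch
print(vulnerable_apis - reached)   # {'lib.unsafeReadObject'}: candidate for removal
```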
To address the problems of lagging curricula, weak practical training, and one-dimensional evaluation in the cultivation of HarmonyOS development talent, this study constructs a “teacher-machine-student” ternary interactive teaching model based on the Congyou platform. Through a building-block curriculum system, the HarmonyOS technology stack is decoupled into dynamic capability units, and a multi-disciplinary cross-case library is jointly built with Huawei, which significantly improves the synchronization of teaching content with industrial technology. The study innovatively designs an AI-collaborative teaching system that employs knowledge graphs to plan learning paths, utilizes virtual equipment clusters to simulate development environments, and establishes a “diagnosis-feedback-enhancement” closed loop through AI-based review, thereby effectively improving students' development efficiency and code reuse rate. A three-dimensional evaluation model integrating task outcomes, process performance, and innovation is constructed, incorporating indicators such as code standardization and an innovation index to strengthen the cultivation of engineering thinking and innovative ability. Furthermore, a data-driven support platform is built to generate student competency profiles, open up the “credit-competency-certification” pathway, promote the transformation of course achievements into contributions to the Huawei ecosystem, and significantly shorten the job adaptation cycle for graduates. The results provide a replicable paradigm for cultivating domestic operating system talent.
Software-related security aspects are a growing and legitimate concern, especially with 5G data available at our fingertips. Research in this field requires periodic comparative analysis as new techniques emerge rapidly. The purpose of this study is to review recent developments in security integration in the software development lifecycle (SDLC) by analyzing articles published in the last two decades and to propose a way forward. The review follows Kitchenham's review protocol and is divided into three main stages: planning, execution, and analysis. From the 100 selected articles, it becomes evident that a collaborative approach is necessary for addressing critical software security risks (CSSRs) through effective risk management and estimation techniques. Quantifying risks on a numeric scale enables a comprehensive understanding of their severity, facilitating focused resource allocation and mitigation efforts. Through a comprehensive understanding of potential vulnerabilities and proactive mitigation facilitated by protection poker, organizations can prioritize resources effectively to ensure the successful outcome of projects and initiatives in today's dynamic threat landscape. The review reveals that automated tools for threat analysis and security testing are needed in the future. Accurately estimating the effort required to prioritize potential security risks remains a major challenge in software security. Estimation accuracy can be further improved by exploring new techniques, particularly those involving deep learning, and these estimation methods must be validated to ensure all potential security threats are addressed. Another challenge is selecting the right model for each specific security threat. To achieve a comprehensive evaluation, researchers should use well-known benchmark checklists.
Link failure is a critical issue in large networks and must be effectively addressed. In software-defined networks (SDN), link failure recovery schemes can be categorized into proactive and reactive approaches. Reactive schemes have longer recovery times, while proactive schemes provide faster recovery but overwhelm switch memory with flow entries. As SDN adoption grows, ensuring efficient recovery from link failures in the data plane becomes crucial. In particular, data center networks (DCNs) demand rapid recovery and efficient resource utilization to meet carrier-grade requirements. This paper proposes an efficient Decentralized Failure Recovery (DFR) model for SDNs that meets recovery time requirements while optimizing switch memory consumption. The DFR model enables switches to autonomously reroute traffic upon link failure without involving the controller, achieving fast recovery while minimizing memory usage. DFR employs the Fast Failover Group of the OpenFlow standard for local recovery without controller communication, and it uses the k-shortest path algorithm to proactively install backup paths, allowing immediate local recovery without controller intervention and enhancing overall network stability and scalability. DFR also applies flow entry aggregation to reduce switch memory usage: instead of matching flow entries on the destination host's MAC address, DFR matches packets on the destination switch's MAC address, reducing the switches' Ternary Content-Addressable Memory (TCAM) consumption. Additionally, DFR modifies Address Resolution Protocol (ARP) replies to provide source hosts with the destination switch's MAC address, facilitating flow entry aggregation without affecting normal network operations. The performance of DFR is evaluated with the Mininet 2.3.1 network emulator and Ryu 3.1 as the SDN controller. For different numbers of active flows, numbers of hosts per edge switch, and network sizes, the proposed model outperformed various failure recovery models (restoration-based, protection by flow entries, protection by group entries, and protection by VLAN tagging) in terms of recovery time, switch memory consumption, and controller overhead, measured as the number of flow entry updates needed to recover from a failure. Experimental results demonstrate that DFR achieves recovery times under 20 milliseconds, satisfying carrier-grade requirements for rapid failure recovery. Additionally, DFR reduces switch memory usage by up to 95% compared with traditional protection methods and minimizes controller load by eliminating controller intervention during failure recovery. These results underscore the efficiency and scalability of the DFR model, making it a practical solution for enhancing network resilience in SDN environments.
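The proactive half of DFR hinges on computing k shortest paths per switch pair ahead of time and installing them as Fast Failover group buckets, so a switch can shift traffic locally the moment a port goes down. A sketch of only the path-computation step using networkx is below; the topology and k are invented for illustration, and the flow-aggregation and ARP-rewriting parts are not shown.

```python
import itertools
import networkx as nx

def primary_and_backups(graph, src, dst, k=3):
    """Yen-style k-shortest paths: the first is the primary route, the rest are
    candidate backups to pre-install as OpenFlow Fast Failover buckets."""
    paths = nx.shortest_simple_paths(graph, src, dst, weight="weight")
    primary, *backups = itertools.islice(paths, k)
    return primary, backups

# Toy 4-switch topology; edge weights stand in for link costs.
g = nx.Graph()
g.add_weighted_edges_from([("s1", "s2", 1), ("s2", "s4", 1),
                           ("s1", "s3", 1), ("s3", "s4", 2),
                           ("s2", "s3", 1)])
primary, backups = primary_and_backups(g, "s1", "s4")
print(primary)   # ['s1', 's2', 's4']
print(backups)   # alternatives used only when a primary link goes down
```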
Software defect prediction (SDP) aims to find reliable methods for predicting defects in specific software projects, helping software engineers allocate limited resources to release high-quality software products. Defect prediction can be performed effectively using traditional features, but some of these features are redundant or irrelevant (their presence or absence has little effect on the prediction results). Such problems can be addressed with feature selection. However, existing feature selection methods have shortcomings such as weak dimensionality reduction and low classification accuracy of the selected feature subset. To reduce the impact of these shortcomings, this paper proposes a new feature selection method, the Cubic TraverseMa Beluga Whale Optimization algorithm (CTMBWO), based on an improved Beluga Whale Optimization (BWO) algorithm. The goal of this study is to determine how well CTMBWO can extract the features that matter most for correctly predicting software defects, improve fault prediction accuracy, reduce the number of selected features, and mitigate the risk of overfitting, thereby achieving more efficient resource utilization and better distribution of the testing workload. CTMBWO comprises three main stages: preprocessing the dataset, selecting relevant features, and evaluating the classification performance of the model. The novel feature selection method can effectively improve SDP performance. This study performs experiments on two software defect datasets (PROMISE, NASA) and reports classification performance using five evaluation metrics: Accuracy, F1-score, MCC, AUC, and Recall. The results indicate that the proposed approach achieves outstanding classification performance on both datasets and improves significantly over the baseline models.
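Metaheuristic feature selectors such as BWO variants typically evaluate a candidate feature mask with a wrapper fitness that trades classification error against subset size. The sketch below shows that generic fitness on synthetic data; it does not reproduce CTMBWO's specific operators, and the classifier and the alpha weight are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fs_fitness(mask, X, y, alpha=0.99):
    """Wrapper fitness used by most metaheuristic feature selectors:
    weighted sum of classification error and fraction of features kept."""
    if not mask.any():                      # empty subsets are invalid
        return 1.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=5).mean()
    return alpha * (1.0 - acc) + (1.0 - alpha) * mask.sum() / mask.size

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = (X[:, 0] + X[:, 3] > 0).astype(int)     # only features 0 and 3 matter
mask = np.zeros(20, dtype=bool)
mask[[0, 3]] = True
print(fs_fitness(mask, X, y))               # low fitness = good subset
print(fs_fitness(~mask, X, y))              # irrelevant features score far worse
```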
The advent of parametric design has markedly increased the complexity of buildings, and traditional construction methods struggle to meet the resulting needs. Construction robots have therefore become a pivotal production tool. Since the arm span of a single robot usually does not exceed 3 meters, a single robot cannot produce large-scale building components on its own; the robot's working range is therefore often extended by external axes. Nevertheless, the coupled control of external axes and robots, and their kinematic solution, are key challenges. The primary technical difficulties include customized construction robots, automatic solutions for external axes, fixed axis joints, and specific motion mode control. This paper proposes solutions to these difficulties, introduces the relevant basic concepts and algorithms in detail, and encapsulates these robotics principles and algorithmic processes into the Grasshopper plug-in commonly used by architects, forming the FURobot software platform. The platform effectively solves the above problems, lowers the entry threshold for architects, and improves production efficiency. The effectiveness of the algorithms and software is verified through simulation experiments.
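One simple way to couple an external linear axis with a fixed-reach arm is to resolve the redundancy explicitly: slide the carriage so the along-rail component of the target vanishes, then hand the residual target to the arm's own inverse kinematics. The sketch below illustrates only this decomposition idea; the reach value, rail direction, and resolution strategy are assumptions for illustration, not FURobot's published solution.

```python
import numpy as np

ROBOT_REACH = 2.8        # assumed arm reach in metres

def split_rail_and_arm(target, rail_axis=np.array([1.0, 0.0, 0.0])):
    """Decompose a world-frame target into an external linear-axis travel
    and a residual target expressed in the translated robot base frame.
    Strategy: slide the carriage so the target's along-rail offset is zero,
    leaving the arm to cover only the cross-rail component."""
    travel = float(target @ rail_axis)        # project target onto the rail
    residual = target - travel * rail_axis    # what the arm must still reach
    if np.linalg.norm(residual) > ROBOT_REACH:
        raise ValueError("target unreachable even with the external axis")
    return travel, residual

travel, local_target = split_rail_and_arm(np.array([7.5, 1.2, 1.8]))
print(travel)        # 7.5 m of rail travel
print(local_target)  # [0., 1.2, 1.8], handed to the arm's own IK
```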
In this work, we present a parallel implementation of radiation hydrodynamics coupled with particle transport, built on the software infrastructure JASMIN (J Adaptive Structured Meshes applications INfrastructure), which encapsulates high-performance technology for the numerical simulation of complex applications. Two serial codes, the radiation hydrodynamics code RH2D and the particle transport code Sn2D, have been integrated into RHSn2D on the JASMIN infrastructure, which can efficiently use thousands of processors to simulate complex multi-physics phenomena. Moreover, a non-conforming processors strategy protects RHSn2D from the serious load imbalance between radiation hydrodynamics and particle transport in large-scale parallel simulations. Numerical results show that RHSn2D achieves a parallel efficiency of 17.1% on 8192 processors, relative to a 256-processor run of the same 90720-cell problem.
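The 17.1% figure is a strong-scaling efficiency measured against the 256-processor run rather than a serial baseline: relative speedup divided by the processor-count ratio. The sketch below shows the arithmetic; the wall-clock times are invented to reproduce the quoted percentage, since the abstract does not report them.

```python
def relative_parallel_efficiency(t_base, n_base, t_scaled, n_scaled):
    """Strong-scaling efficiency of a run on n_scaled processors measured
    against a baseline run on n_base processors (not against serial)."""
    speedup = t_base / t_scaled
    return speedup / (n_scaled / n_base)

# Hypothetical wall-clock times chosen so the numbers reproduce the 17.1%
# figure quoted in the abstract; the real timings are not given there.
print(relative_parallel_efficiency(t_base=1000.0, n_base=256,
                                   t_scaled=182.7, n_scaled=8192))  # ~0.171
```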
Quantum software development utilizes quantum phenomena such as superposition and entanglement to address problems that are challenging for classical systems. However, it must also adhere to critical quantum constraints, notably the no-cloning theorem, which prohibits the exact duplication of unknown quantum states and has profound implications for cryptography, secure communication, and error correction. While existing quantum circuit representations implicitly honor such constraints, they lack formal mechanisms for early-stage verification in software design. Addressing this constraint at the design phase is essential to ensure the correctness and reliability of quantum software. This paper presents a formal metamodeling framework using UML-style notation and the Object Constraint Language (OCL) to systematically capture and enforce the no-cloning theorem within quantum software models. The proposed metamodel formalizes key quantum concepts, such as entanglement and teleportation, and encodes enforceable invariants that reflect core quantum mechanical laws. The framework's effectiveness is validated by analyzing two critical edge cases, conditional copying with CNOT gates and quantum teleportation, through instance model evaluations. These cases demonstrate that the metamodel can capture nuanced scenarios that are often mistaken for violations of the no-cloning theorem but are proven compliant under formal analysis; they thus serve as constructive validations of the metamodel's expressiveness and correctness in representing operations that appear to challenge the theorem yet, upon rigorous analysis, comply with it. The approach supports early detection of conceptual design errors, promoting correctness prior to implementation. The framework's extensibility is also demonstrated by modeling projective measurement, further reinforcing its applicability to broader quantum software engineering tasks. By integrating the rigor of metamodeling with fundamental quantum mechanical principles, this work provides a structured, model-driven approach that enables traditional software engineers to address quantum computing challenges. It offers practical insights into embedding quantum correctness at the modeling level and advances the development of reliable, error-resilient quantum software systems.
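The conditional-copying edge case mentioned above is easy to reproduce numerically: a CNOT copies computational-basis states into a |0> ancilla but turns a superposition into an entangled Bell state rather than a product of two copies, which is why the gate complies with the no-cloning theorem. A small state-vector check is below (plain NumPy; no connection to the paper's UML/OCL models).

```python
import numpy as np

# 2-qubit CNOT (control = qubit 0, target = qubit 1), basis |00>,|01>,|10>,|11>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

def try_to_copy(psi):
    """Attempt to 'copy' psi into a fresh |0> ancilla with a CNOT."""
    return CNOT @ np.kron(psi, zero)

# Basis states copy fine: CNOT |1>|0> = |11> = |1> (x) |1>
print(np.allclose(try_to_copy(one), np.kron(one, one)))        # True

# A superposition does not: the result is entangled, not psi (x) psi
plus = (zero + one) / np.sqrt(2)
print(np.allclose(try_to_copy(plus), np.kron(plus, plus)))     # False
print(try_to_copy(plus))   # (|00> + |11>)/sqrt(2): a Bell state, not a copy
```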
In recent years, with the rapid development of software systems, the continuous expansion of software scale and the increasing complexity of systems have led to a growing number of software metrics. Defect prediction methods based on software metrics rely heavily on software metric data, but redundant metric data hinders efficient defect prediction, posing serious challenges to current software defect prediction tasks. To address these issues, this paper focuses on the rational clustering of software metric data. First, multiple software projects are evaluated to determine a preset number of clusters for the software metrics, and various clustering methods are employed to cluster the metric elements. Subsequently, a co-occurrence matrix is designed to comprehensively quantify how often pairs of metrics appear in the same category. Based on the combined results, the software metric data are divided into two semantic views containing different metrics, thereby exposing the semantic information behind the software metrics. On this basis, the paper also conducts an in-depth analysis of the impact of the different semantic views of metrics on defect prediction results, as well as the performance of various classification models under these views. Experiments show that the joint use of the two semantic views can significantly improve model performance in software defect prediction, providing a new understanding and approach at the semantic-view level for defect prediction research based on software metrics.
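The co-occurrence bookkeeping can be made concrete: run several clustering algorithms over the metric vectors (columns of a project-by-metric matrix) and count how often each pair of metrics shares a cluster; splitting the metrics into two semantic views then amounts to grouping rows of the resulting matrix. The sketch below uses synthetic data and two scikit-learn clusterers; the cluster count and algorithm choices in the paper may differ.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

def cooccurrence(X_metrics, n_clusters, clusterers):
    """Count, over several clustering runs, how often each pair of metrics
    (columns of the module-by-metric matrix) lands in the same cluster."""
    n = X_metrics.shape[1]
    co = np.zeros((n, n), dtype=int)
    for make_algo in clusterers:
        labels = make_algo(n_clusters).fit_predict(X_metrics.T)  # cluster metrics
        for k in set(labels):
            members = np.flatnonzero(labels == k)
            co[np.ix_(members, members)] += 1
    return co

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 8))                            # 50 modules x 8 metrics
X[:, 4:] = X[:, :4] + 0.1 * rng.normal(size=(50, 4))    # metrics 4-7 echo 0-3
co = cooccurrence(X, n_clusters=2,
                  clusterers=[lambda k: KMeans(k, n_init=10, random_state=0),
                              lambda k: AgglomerativeClustering(k)])
print(co)   # high off-diagonal counts mark metric pairs that travel together
```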
The advent of large language models (LLMs) has made knowledge acquisition and content creation increasingly easier and cheaper, which in turn redefines learning and urges transformation in software engineering education. To guide that transformation, we need to understand the impact of LLMs on software engineering education. In this paper, we conducted a preliminary case study on three software requirements engineering classes in which students were allowed to use LLMs to assist in their projects. Based on the students' experience, performance, and feedback from a survey conducted at the end of the courses, we characterize the challenges and benefits of applying LLMs in software engineering education. This research contributes to the ongoing discourse on the integration of LLMs in education, emphasizing both their prominent potential and the need for balanced, mindful usage.
Ensuring software quality in open-source environments requires adaptive mechanisms to enhance scalability, optimize service provisioning, and improve reliability. This study presents a dynamic correlation analysis technique that enhances software quality management in open-source environments by addressing dynamic scalability, adaptive service provisioning, and software reliability. The proposed methodology integrates a scalability metric, an optimized service provisioning model, and a weighted entropy-based reliability assessment to systematically improve key performance parameters. Experimental evaluation on multiple open-source software (OSS) versions demonstrates significant improvements: scalability increased by 27.5%, service provisioning time decreased by 18.3%, and software reliability improved by 22.1% compared with baseline methods. A comparative analysis with prior work further highlights the effectiveness of the approach in ensuring adaptability, efficiency, and resilience in dynamic software ecosystems. Future work will focus on real-time monitoring and AI-driven adaptive provisioning to further enhance software quality management.
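The abstract does not give the weighted entropy formula, so the sketch below is only a generic construction consistent with its description: weight each component's failure probability by criticality, compute the entropy of the profile, and map low disorder to high reliability. Every formula and number here is an assumption for illustration, not the paper's model.

```python
import math

def weighted_entropy(p, w):
    """H_w = -sum_i w_i * p_i * ln(p_i) over a failure-probability profile."""
    return -sum(wi * pi * math.log(pi) for pi, wi in zip(p, w) if pi > 0)

def reliability_score(p, w):
    """Map weighted entropy to (0, 1]: lower disorder -> higher reliability.
    Normalised by the entropy of a uniform profile of the same size."""
    h_max = weighted_entropy([1.0 / len(p)] * len(p), w)
    return 1.0 - weighted_entropy(p, w) / h_max if h_max > 0 else 1.0

# Concentrated failure profile (predictable) vs. a diffuse one (unpredictable)
w = [0.5, 0.3, 0.2]                               # component criticality weights
print(reliability_score([0.9, 0.05, 0.05], w))    # ~0.67, well above zero
print(reliability_score([0.4, 0.3, 0.3], w))      # near 0: almost uniform
```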
The traditional software engineering talent cultivation model lacks interdisciplinary education focused on specific fields, making it difficult to cultivate engineering leaders with multidisciplinary backgrounds who can solve complex real-world problems, address the severe challenges posed by the international situation, and meet the needs of major national development strategies. To solve this problem, drawing on a decade of interdisciplinary talent cultivation at the College of Software Engineering of Sichuan University, this article proposes the “Software Engineering+” innovative talent cultivation paradigm. It analyzes the professional construction of interdisciplinary talent, the design of talent cultivation frameworks, the formulation of cultivation plans, the establishment of interdisciplinary curriculum systems, the reform of teaching modes, and the improvement of institutional systems. Scientific solutions are proposed, and five project models implemented and operated by the College of Software Engineering at Sichuan University are presented as practical examples, offering significant reference value.
With the rapid development of software engineering, traditional teaching methods are confronted with short knowledge-update cycles and the rapid emergence of new technologies. By analyzing the current mismatch between educational practice and industrial change, this study proposes an innovative teaching model, “Micro-practices”. The model quickly and flexibly integrates new knowledge and technologies into the teaching process through practical teaching projects featuring “short class time, small capacity, and a cloud environment”, meeting the different educational needs of students, teachers, and enterprises. The aim is to train innovative software engineering talent capable of meeting future challenges.
It has been shown that the age of minerals in which U±Th are a major (e.g., uraninite, pitchblende, and thorite) or minor (e.g., monazite, xenotime) component can be calculated from the concentrations of U±Th and Pb rather than from their isotopes; such ages are referred to as chemical ages. Although the equations for calculating chemical ages are well established and various computation programs have been reported, there has been no software that can both calculate the chemical ages of individual analytical points and evaluate the errors of individual ages as well as of the whole dataset. In this paper, we develop software for calculating and assessing the chemical ages of uranium minerals (CAUM), an open-source Python-based program with a friendly graphical user interface (GUI). Electron probe microanalysis (EPMA) data of uranium minerals are first imported from Excel files and used to calculate the chemical ages and associated errors of individual analytical points. The age data are then visualized to help evaluate whether the dataset comprises one or multiple populations and whether there are meaningful correlations between the chemical ages and impurities. Actions can then be taken to evaluate the errors within individual populations and the significance of the correlations. The use of the software is demonstrated with examples from published data.
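Chemical (CHIME-style) dating inverts the radiogenic-Pb growth equation: Pb(t) is the sum of contributions from 232Th, 238U, and 235U decay, and the age is the root of Pb(t) = Pb_measured. The sketch below uses the standard decay constants and isotope masses with a simple bracketing root-finder; CAUM's actual implementation additionally propagates EPMA measurement errors, which is not shown.

```python
import math
from scipy.optimize import brentq

# Decay constants (yr^-1); 0.9928/0.0072 are the atomic fractions of
# 238U/235U in natural uranium.
L238, L235, L232 = 1.55125e-10, 9.8485e-10, 4.9475e-11

def pb_from_age(t, u_ppm, th_ppm):
    """Radiogenic Pb (ppm) accumulated after t years (Montel-style CHIME)."""
    return (th_ppm / 232.04 * (math.exp(L232 * t) - 1) * 207.97
            + u_ppm / 238.03 * 0.9928 * (math.exp(L238 * t) - 1) * 205.97
            + u_ppm / 238.03 * 0.0072 * (math.exp(L235 * t) - 1) * 206.98)

def chemical_age(u_ppm, th_ppm, pb_ppm):
    """Invert pb_from_age for t by bracketing between 0 and 5 Gyr."""
    return brentq(lambda t: pb_from_age(t, u_ppm, th_ppm) - pb_ppm, 0.0, 5e9)

# Round-trip check with synthetic EPMA-style concentrations
pb = pb_from_age(3.0e8, u_ppm=5000.0, th_ppm=40000.0)
print(chemical_age(5000.0, 40000.0, pb) / 1e6, "Ma")   # ~300 Ma
```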