Software security poses substantial risks to our society because software has become part of our life. Numerous techniques have been proposed to resolve or mitigate the impact of software security issues. Among them, ...Software security poses substantial risks to our society because software has become part of our life. Numerous techniques have been proposed to resolve or mitigate the impact of software security issues. Among them, software testing and analysis are two of the critical methods, which significantly benefit from the advancements in deep learning technologies. Due to the successful use of deep learning in software security, recently,researchers have explored the potential of using large language models(LLMs) in this area. In this paper, we systematically review the results focusing on LLMs in software security. We analyze the topics of fuzzing, unit test, program repair, bug reproduction, data-driven bug detection, and bug triage. We deconstruct these techniques into several stages and analyze how LLMs can be used in the stages. We also discuss the future directions of using LLMs in software security, including the future directions for the existing use of LLMs and extensions from conventional deep learning research.展开更多
Software-related security aspects are a growing and legitimate concern,especially with 5G data available just at our palms.To conduct research in this field,periodic comparative analysis is needed with the new techniq...Software-related security aspects are a growing and legitimate concern,especially with 5G data available just at our palms.To conduct research in this field,periodic comparative analysis is needed with the new techniques coming up rapidly.The purpose of this study is to review the recent developments in the field of security integration in the software development lifecycle(SDLC)by analyzing the articles published in the last two decades and to propose a way forward.This review follows Kitchenham’s review protocol.The review has been divided into three main stages including planning,execution,and analysis.From the selected 100 articles,it becomes evident that need of a collaborative approach is necessary for addressing critical software security risks(CSSRs)through effective risk management/estimation techniques.Quantifying risks using a numeric scale enables a comprehensive understanding of their severity,facilitating focused resource allocation and mitigation efforts.Through a comprehensive understanding of potential vulnerabilities and proactive mitigation efforts facilitated by protection poker,organizations can prioritize resources effectively to ensure the successful outcome of projects and initiatives in today’s dynamic threat landscape.The review reveals that threat analysis and security testing are needed to develop automated tools for the future.Accurate estimation of effort required to prioritize potential security risks is a big challenge in software security.The accuracy of effort estimation can be further improved by exploring new techniques,particularly those involving deep learning.It is also imperative to validate these effort estimation methods to ensure all potential security threats are addressed.Another challenge is selecting the right model for each specific security threat.To achieve a comprehensive evaluation,researchers should use well-known benchmark checklists.展开更多
In recent years,with the rapid development of software systems,the continuous expansion of software scale and the increasing complexity of systems have led to the emergence of a growing number of software metrics.Defe...In recent years,with the rapid development of software systems,the continuous expansion of software scale and the increasing complexity of systems have led to the emergence of a growing number of software metrics.Defect prediction methods based on software metric elements highly rely on software metric data.However,redundant software metric data is not conducive to efficient defect prediction,posing severe challenges to current software defect prediction tasks.To address these issues,this paper focuses on the rational clustering of software metric data.Firstly,multiple software projects are evaluated to determine the preset number of clusters for software metrics,and various clustering methods are employed to cluster the metric elements.Subsequently,a co-occurrence matrix is designed to comprehensively quantify the number of times that metrics appear in the same category.Based on the comprehensive results,the software metric data are divided into two semantic views containing different metrics,thereby analyzing the semantic information behind the software metrics.On this basis,this paper also conducts an in-depth analysis of the impact of different semantic view of metrics on defect prediction results,as well as the performance of various classification models under these semantic views.Experiments show that the joint use of the two semantic views can significantly improve the performance of models in software defect prediction,providing a new understanding and approach at the semantic view level for defect prediction research based on software metrics.展开更多
Quantum software development utilizes quantum phenomena such as superposition and entanglement to address problems that are challenging for classical systems.However,it must also adhere to critical quantum constraints...Quantum software development utilizes quantum phenomena such as superposition and entanglement to address problems that are challenging for classical systems.However,it must also adhere to critical quantum constraints,notably the no-cloning theorem,which prohibits the exact duplication of unknown quantum states and has profound implications for cryptography,secure communication,and error correction.While existing quantum circuit representations implicitly honor such constraints,they lack formal mechanisms for early-stage verification in software design.Addressing this constraint at the design phase is essential to ensure the correctness and reliability of quantum software.This paper presents a formal metamodeling framework using UML-style notation and and Object Constraint Language(OCL)to systematically capture and enforce the no-cloning theorem within quantum software models.The proposed metamodel formalizes key quantum concepts—such as entanglement and teleportation—and encodes enforceable invariants that reflect core quantum mechanical laws.The framework’s effectiveness is validated by analyzing two critical edge cases—conditional copying with CNOT gates and quantum teleportation—through instance model evaluations.These cases demonstrate that the metamodel can capture nuanced scenarios that are often mistaken as violations of the no-cloning theorem but are proven compliant under formal analysis.Thus,these serve as constructive validations that demonstrate the metamodel’s expressiveness and correctness in representing operations that may appear to challenge the no-cloning theorem but,upon rigorous analysis,are shown to comply with it.The approach supports early detection of conceptual design errors,promoting correctness prior to implementation.The framework’s extensibility is also demonstrated by modeling projective measurement,further reinforcing its applicability to broader quantum software engineering tasks.By integrating the rigor of metamodeling with fundamental quantum mechanical principles,this work provides a structured,model-driven approach that enables traditional software engineers to address quantum computing challenges.It offers practical insights into embedding quantum correctness at the modeling level and advances the development of reliable,error-resilient quantum software systems.展开更多
The advent of large language models(LLMs)has made knowledge acquisition and content creation increasingly easier and cheaper,which in turn redefines learning and urges transformation in software engineering education....The advent of large language models(LLMs)has made knowledge acquisition and content creation increasingly easier and cheaper,which in turn redefines learning and urges transformation in software engineering education.To do so,there is a need to understand the impact of LLMs on software engineering education.In this paper,we conducted a preliminary case study on three software requirements engineering classes where students are allowed to use LLMs to assist in their projects.Based on the students’experience,performance,and feedback from a survey conducted at the end of the courses,we characterized the challenges and benefits of applying LLMs in software engineering education.This research contributes to the ongoing discourse on the integration of LLMs in education,emphasizing both their prominent potential and the need for balanced,mindful usage.展开更多
Spectrum-based fault localization (SBFL) generates a ranked list of suspicious elements by using the program execution spectrum, but the excessive number of elements ranked in parallel results in low localization accu...Spectrum-based fault localization (SBFL) generates a ranked list of suspicious elements by using the program execution spectrum, but the excessive number of elements ranked in parallel results in low localization accuracy. Most researchers consider intra-class dependencies to improve localization accuracy. However, some studies show that inter-class method call type faults account for more than 20%, which means such methods still have certain limitations. To solve the above problems, this paper proposes a two-phase software fault localization based on relational graph convolutional neural networks (Two-RGCNFL). Firstly, in Phase 1, the method call dependence graph (MCDG) of the program is constructed, the intra-class and inter-class dependencies in MCDG are extracted by using the relational graph convolutional neural network, and the classifier is used to identify the faulty methods. Then, the GraphSMOTE algorithm is improved to alleviate the impact of class imbalance on classification accuracy. Aiming at the problem of parallel ranking of element suspicious values in traditional SBFL technology, in Phase 2, Doc2Vec is used to learn static features, while spectrum information serves as dynamic features. A RankNet model based on siamese multi-layer perceptron is constructed to score and rank statements in the faulty method. This work conducts experiments on 5 real projects of Defects4J benchmark. Experimental results show that, compared with the traditional SBFL technique and two baseline methods, our approach improves the Top-1 accuracy by 262.86%, 29.59% and 53.01%, respectively, which verifies the effectiveness of Two-RGCNFL. Furthermore, this work verifies the importance of inter-class dependencies through ablation experiments.展开更多
Link failure is a critical issue in large networks and must be effectively addressed.In software-defined networks(SDN),link failure recovery schemes can be categorized into proactive and reactive approaches.Reactive s...Link failure is a critical issue in large networks and must be effectively addressed.In software-defined networks(SDN),link failure recovery schemes can be categorized into proactive and reactive approaches.Reactive schemes have longer recovery times while proactive schemes provide faster recovery but overwhelm the memory of switches by flow entries.As SDN adoption grows,ensuring efficient recovery from link failures in the data plane becomes crucial.In particular,data center networks(DCNs)demand rapid recovery times and efficient resource utilization to meet carrier-grade requirements.This paper proposes an efficient Decentralized Failure Recovery(DFR)model for SDNs,meeting recovery time requirements and optimizing switch memory resource consumption.The DFR model enables switches to autonomously reroute traffic upon link failures without involving the controller,achieving fast recovery times while minimizing memory usage.DFR employs the Fast Failover Group in the OpenFlow standard for local recovery without requiring controller communication and utilizes the k-shortest path algorithm to proactively install backup paths,allowing immediate local recovery without controller intervention and enhancing overall network stability and scalability.DFR employs flow entry aggregation techniques to reduce switch memory usage.Instead of matching flow entries to the destination host’s MAC address,DFR matches packets to the destination switch’s MAC address.This reduces the switches’Ternary Content-Addressable Memory(TCAM)consumption.Additionally,DFR modifies Address Resolution Protocol(ARP)replies to provide source hosts with the destination switch’s MAC address,facilitating flow entry aggregation without affecting normal network operations.The performance of DFR is evaluated through the network emulator Mininet 2.3.1 and Ryu 3.1 as SDN controller.For different number of active flows,number of hosts per edge switch,and different network sizes,the proposed model outperformed various failure recovery models:restoration-based,protection by flow entries,protection by group entries and protection by Vlan-tagging model in terms of recovery time,switch memory consumption and controller overhead which represented the number of flow entry updates to recover from the failure.Experimental results demonstrate that DFR achieves recovery times under 20 milliseconds,satisfying carrier-grade requirements for rapid failure recovery.Additionally,DFR reduces switch memory usage by up to 95%compared to traditional protection methods and minimizes controller load by eliminating the need for controller intervention during failure recovery.Theresults underscore the efficiency and scalability of the DFR model,making it a practical solution for enhancing network resilience in SDN environments.展开更多
Software-defined networking(SDN)is an innovative paradigm that separates the control and data planes,introducing centralized network control.SDN is increasingly being adopted by Carrier Grade networks,offering enhance...Software-defined networking(SDN)is an innovative paradigm that separates the control and data planes,introducing centralized network control.SDN is increasingly being adopted by Carrier Grade networks,offering enhanced networkmanagement capabilities than those of traditional networks.However,because SDN is designed to ensure high-level service availability,it faces additional challenges.One of themost critical challenges is ensuring efficient detection and recovery from link failures in the data plane.Such failures can significantly impact network performance and lead to service outages,making resiliency a key concern for the effective adoption of SDN.Since the recovery process is intrinsically dependent on timely failure detection,this research surveys and analyzes the current literature on both failure detection and recovery approaches in SDN.The survey provides a critical comparison of existing failure detection techniques,highlighting their advantages and disadvantages.Additionally,it examines the current failure recovery methods,categorized as either restoration-based or protection-based,and offers a comprehensive comparison of their strengths and limitations.Lastly,future research challenges and directions are discussed to address the shortcomings of existing failure recovery methods.展开更多
The advent of parametric design has resulted in a marked increase in the complexity of building.Unfortunately,traditional construction methods make it difficult to meet the needs.Therefore,construction robots have bec...The advent of parametric design has resulted in a marked increase in the complexity of building.Unfortunately,traditional construction methods make it difficult to meet the needs.Therefore,construction robots have become a pivotal production tool in this context.Since the arm span of a single robot usually does not exceed 3 meters,it is not competent for producing large-scale building components.Accordingly,the extension of the robot,s working range is often achieved by external axes.Nevertheless,the coupling control of external axes and robots and their kinematic solution have become key challenges.The primary technical difficulties include customized construction robots,automatic solutions for external axes,fixed axis joints,and specific motion mode control.This paper proposes solutions to these difficulties,introduces the relevant basic concepts and algorithms in detail,and encapsulates these robotics principles and algorithm processes into the Grasshopper plug-in commonly used by architects to form the FURobot software platform.This platform effectively solves the above problems,lowers the threshold for architects,and improves production efficiency.The effectiveness of the algorithm and software in this paper is verified through simulation experiments.展开更多
In this work,we present a parallel implementation of radiation hydrodynamics coupled with particle transport,utilizing software infrastructure JASMIN(J Adaptive Structured Meshes applications INfrastructure)which enca...In this work,we present a parallel implementation of radiation hydrodynamics coupled with particle transport,utilizing software infrastructure JASMIN(J Adaptive Structured Meshes applications INfrastructure)which encapsulates high-performance technology for the numerical simulation of complex applications.Two serial codes,radiation hydrodynamics RH2D and particle transport Sn2D,have been integrated into RHSn2D on JASMIN infrastructure,which can efficiently use thousands of processors to simulate the complex multi-physics phenomena.Moreover,the non-conforming processors strategy has ensured RHSn2D against the serious load imbalance between radiation hydrodynamics and particle transport for large scale parallel simulations.Numerical results show that RHSn2D achieves a parallel efficiency of 17.1%using 90720 cells on 8192 processors compared with 256 processors in the same problem.展开更多
It has been shown that the age of minerals in which U±Th are a major(e.g.,uraninite,pitchblende and thorite)or minor(e.g.,monazite,xenotime)component can be calculated from the concentrations of U±Th and Pb ...It has been shown that the age of minerals in which U±Th are a major(e.g.,uraninite,pitchblende and thorite)or minor(e.g.,monazite,xenotime)component can be calculated from the concentrations of U±Th and Pb rather than their isotopes,and such ages are referred to as chemical ages.Although equations for calculating the chemical ages have been well established and various computation programs have been reported,there is a lack of software that can not only calculate the chemical ages of individual analytical points but also provide an evaluation of the errors of individual ages as well as the whole dataset.In this paper,we develop a software for calculating and assessing the chemical ages of uranium minerals(CAUM),an open-source Python-based program with a friendly Graphical User Interface(GUI).Electron probe microanalysis(EPMA)data of uranium minerals are first imported from Excel files and used to calculate the chemical ages and associated errors of individual analytical points.The age data are then visualized to aid evaluating if the dataset comprises one or multiple populations and whether or not there are meaningful correlations between the chemical ages and impurities.Actions can then be taken to evaluate the errors within individual populations and the significance of the correlations.The use of the software is demonstrated with examples from published data.展开更多
Software defect prediction is a critical component in maintaining software quality,enabling early identification and resolution of issues that could lead to system failures and significant financial losses.With the in...Software defect prediction is a critical component in maintaining software quality,enabling early identification and resolution of issues that could lead to system failures and significant financial losses.With the increasing reliance on user-generated content,social media reviews have emerged as a valuable source of real-time feedback,offering insights into potential software defects that traditional testing methods may overlook.However,existing models face challenges like handling imbalanced data,high computational complexity,and insufficient inte-gration of contextual information from these reviews.To overcome these limitations,this paper introduces the SESDP(Sentiment Analysis-Based Early Software Defect Prediction)model.SESDP employs a Transformer-Based Multi-Task Learning approach using Robustly Optimized Bidirectional Encoder Representations from Transformers Approach(RoBERTa)to simultaneously perform sentiment analysis and defect prediction.By integrating text embedding extraction,sentiment score computation,and feature fusion,the model effectively captures both the contextual nuances and sentiment expressed in user reviews.Experimental results show that SESDP achieves superior performance with an accuracy of 96.37%,precision of 94.7%,and recall of 95.4%,particularly excelling in handling imbalanced datasets compared to baseline models.This approach offers a scalable and efficient solution for early software defect detection,enhancing proactive software quality assurance.展开更多
Starting with the goal and significance of software security testing,this paper introduces the main methods of software security testing in the open network environment,including formal security testing,white box test...Starting with the goal and significance of software security testing,this paper introduces the main methods of software security testing in the open network environment,including formal security testing,white box testing,fuzzy testing,model testing,and fault injection testing.A software security testing method based on a security target model is proposed.This paper provides new ideas for software security testing,better adapts to the open network environment,improves the efficiency and quality of testing,and builds a good software application environment.展开更多
Objective:To evaluate the performance of orthokeratology(ortho-k)lens reordering using software-designed system,so as to determine the feasibility of ortho-k lens reordering without discontinuing lens wear.Methods:Thi...Objective:To evaluate the performance of orthokeratology(ortho-k)lens reordering using software-designed system,so as to determine the feasibility of ortho-k lens reordering without discontinuing lens wear.Methods:This study is a retrospective analysis of data of ortho-k lens wearers who had a history of short-term discontinuation of lens wear.A total of 94 individuals aged over 8 years with spherical equivalent refraction ranging from-0.50 to-6.50 diopters were included.The corneal topography data at baseline(before ortho-k)and after lens wear discontinuation(cessation of ortho-k treatment)were imported separately into the lens-design software,along with corresponding refraction data.Subsequently,corneal and lens parameters were generated and compared.Intraclass correlation coefficients(ICC)were calculated,and Bland and Altman analyses were conducted.Results:All 94 children were involved in the retrospective analysis.Compared with baseline data,there was a high level of consistency between Rwo(without discontinuation)and Rwith(with discontinuation),with an ICC of 0.96(P<0.001).Furthermore,the comparison of lens parameters generated by the Easyfit software between baseline and after short-term discontinuation showed a high degree of consistency,with all of the ICC values exceeding 0.90.Similar results were obtained using the WAVE software,as both ICC values and Bland-Altman plots demonstrated a high level of consistency in lens parameters between two conditions(nearly all data points fell within the 95%LoAs).Conclusions:It is feasible to directly reorder new ortho-k lenses using software fitting approaches.However,further investigations are necessary to validate their practicability in a clinical setting.展开更多
Software defect prediction(SDP)aims to find a reliable method to predict defects in specific software projects and help software engineers allocate limited resources to release high-quality software products.Software ...Software defect prediction(SDP)aims to find a reliable method to predict defects in specific software projects and help software engineers allocate limited resources to release high-quality software products.Software defect prediction can be effectively performed using traditional features,but there are some redundant or irrelevant features in them(the presence or absence of this feature has little effect on the prediction results).These problems can be solved using feature selection.However,existing feature selection methods have shortcomings such as insignificant dimensionality reduction effect and low classification accuracy of the selected optimal feature subset.In order to reduce the impact of these shortcomings,this paper proposes a new feature selection method Cubic TraverseMa Beluga whale optimization algorithm(CTMBWO)based on the improved Beluga whale optimization algorithm(BWO).The goal of this study is to determine how well the CTMBWO can extract the features that are most important for correctly predicting software defects,improve the accuracy of fault prediction,reduce the number of the selected feature and mitigate the risk of overfitting,thereby achieving more efficient resource utilization and better distribution of test workload.The CTMBWO comprises three main stages:preprocessing the dataset,selecting relevant features,and evaluating the classification performance of the model.The novel feature selection method can effectively improve the performance of SDP.This study performs experiments on two software defect datasets(PROMISE,NASA)and shows the method’s classification performance using four detailed evaluation metrics,Accuracy,F1-score,MCC,AUC and Recall.The results indicate that the approach presented in this paper achieves outstanding classification performance on both datasets and has significant improvement over the baseline models.展开更多
To address the severe challenges posed by the international situation and meet the needs of the national major development strategies,the traditional software engineering talent cultivation model lacks interdisciplina...To address the severe challenges posed by the international situation and meet the needs of the national major development strategies,the traditional software engineering talent cultivation model lacks interdisciplinary education focused on specific fields,making it difficult to cultivate engineering leaders with multidisciplinary backgrounds who are capable of solving complex real-world problems.To solve this problem,based on the decade-long interdisciplinary talent cultivation achievements of the College of Software Engineering at Sichuan University,this article proposes the“Software Engineering+”innovative talent cultivation paradigm.It provides an analysis through professional construction of interdisciplinary talents,the design of talent cultivation frameworks,the formulation of cultivation plans,the establishment of interdisciplinary curriculum systems,the reform of teaching modes,and the improvement of institutional systems.Scientific solutions are proposed,and five project models implemented and operated by the College of Software Engineering at Sichuan University are listed as practical examples,offering significant reference value.展开更多
With the rapid development of software engineering,traditional teaching methods are confronted with the challenges of short knowledge update cycles and the rapid emergence of new technologies.By analyzing the current ...With the rapid development of software engineering,traditional teaching methods are confronted with the challenges of short knowledge update cycles and the rapid emergence of new technologies.By analyzing the current situation of the mismatch between educational practices and industrial change,this study proposes an innovative teaching model—“Micro-practices”.This model integrates new knowledge and new technologies into the teaching process quickly and flexibly through practical teaching projects with“short class time,small capacity,and cloud environment”to meet the different educational needs of students,teachers,and enterprises.The aim is to train innovative software engineering talents who can meet the challenges of the future.展开更多
Purpose:This research addresses the challenge of concept drift in AI-enabled software,particularly within autonomous vehicle systems where concept drift in object recognition(like pedestrian detection)can lead to misc...Purpose:This research addresses the challenge of concept drift in AI-enabled software,particularly within autonomous vehicle systems where concept drift in object recognition(like pedestrian detection)can lead to misclassifications and safety risks.This study introduces a proactive framework to detect early signs of domain-specific concept drift by leveraging domain analysis and natural language processing techniques.This method is designed to help maintain the relevance of domain knowledge and prevent potential failures in AI systems due to evolving concept definitions.Design/methodology/approach:The proposed framework integrates natural language processing and image analysis to continuously update and monitor key domain concepts against evolving external data sources,such as social media and news.By identifying terms and features closely associated with core concepts,the system anticipates and flags significant changes.This was tested in the automotive domain on the pedestrian concept,where the framework was evaluated for its capacity to detect shifts in the recognition of pedestrians,particularly during events like Halloween and specific car accidents.Findings:The framework demonstrated an ability to detect shifts in the domain concept of pedestrians,as evidenced by contextual changes around major events.While it successfully identified pedestrian-related drift,the system’s accuracy varied when overlapping with larger social events.The results indicate the model’s potential to foresee relevant shifts before they impact autonomous systems,although further refinement is needed to handle high-impact concurrent events.Research limitations:This study focused on detecting concept drift in the pedestrian domain within autonomous vehicles,with results varying across domains.To assess generalizability,we tested the framework for airplane-related incidents and demonstrated adaptability.However,unpredictable events and data biases from social media and news may obscure domain-specific drifts.Further evaluation across diverse applications is needed to enhance robustness in evolving AI environments.Practical implications:The proactive detection of concept drift has significant implications for AI-driven domains,especially in safety-critical applications like autonomous driving.By identifying early signs of drift,this framework provides actionable insights for AI system updates,potentially reducing misclassification risks and enhancing public safety.Moreover,it enables timely interventions,reducing costly and labor-intensive retraining requirements by focusing only on the relevant aspects of evolving concepts.This method offers a streamlined approach for maintaining AI system performance in environments where domain knowledge rapidly changes.Originality/value:This study contributes a novel domain-agnostic framework that combines natural language processing with image analysis to predict concept drift early.This unique approach,which is focused on real-time data sources,offers an effective and scalable solution for addressing the evolving nature of domain-specific concepts in AI applications.展开更多
The rapid integration of artificial intelligence(AI)into software development,driven by large language models(LLMs),is reshaping the role of programmers from traditional coders into strategic collaborators within Indu...The rapid integration of artificial intelligence(AI)into software development,driven by large language models(LLMs),is reshaping the role of programmers from traditional coders into strategic collaborators within Industry 4.0 ecosystems.This qualitative study employs a hermeneutic phenomenological approach to explore the lived experiences of Information Technology(IT)professionals as they navigate a dynamic technological landscape marked by intelligent automation,shifting professional identities,and emerging ethical concerns.Findings indicate that developers are actively adapting to AI-augmented environments by engaging in continuous upskilling,prompt engineering,interdisciplinary collaboration,and heightened ethical awareness.However,participants also voiced growing concerns about the reliability and security of AI-generated code,noting that these tools can introduce hidden vulnerabilities and reduce critical engagement due to automation bias.Many described instances of flawed logic,insecure patterns,or syntactically correct but contextually inappropriate suggestions,underscoring the need for rigorous human oversight.Additionally,the study reveals anxieties around job displacement and the gradual erosion of fundamental coding skills,particularly in environments where AI tools dominate routine development tasks.These findings highlight an urgent need for educational reforms,industry standards,and organizational policies that prioritize both technical robustness and the preservation of human expertise.As AI becomes increasingly embedded in software engineering workflows,this research offers timely insights into how developers and organizations can responsibly integrate intelligent systems to promote accountability,resilience,and innovation across the software development lifecycle.展开更多
This paper presents a case study of the collaborative integration between the School of Information and Software Engineering at the University of Electronic Science and Technology of China(UESTC)and SI-TECH,highlighti...This paper presents a case study of the collaborative integration between the School of Information and Software Engineering at the University of Electronic Science and Technology of China(UESTC)and SI-TECH,highlighting the complementary advantages of both the University and the enterprise.By jointly establishing research institutes and engaging in diversified collaborative initiatives,the University and the enterprise have embarked on a pathway of School-enterprise Integration.Through a virtuous cycle of cooperation and continuous advancement,they have explored a comprehensive talent cultivation model in“5G”software engineering innovation practices based on this integration.Furthermore,this endeavor aims to facilitate the transformation of technological achievements and provides valuable insights for fostering innovative talents in the field of electronic information through enhanced integration between the University and the enterprise.展开更多
文摘Software security poses substantial risks to our society because software has become part of our life. Numerous techniques have been proposed to resolve or mitigate the impact of software security issues. Among them, software testing and analysis are two of the critical methods, which significantly benefit from the advancements in deep learning technologies. Due to the successful use of deep learning in software security, recently,researchers have explored the potential of using large language models(LLMs) in this area. In this paper, we systematically review the results focusing on LLMs in software security. We analyze the topics of fuzzing, unit test, program repair, bug reproduction, data-driven bug detection, and bug triage. We deconstruct these techniques into several stages and analyze how LLMs can be used in the stages. We also discuss the future directions of using LLMs in software security, including the future directions for the existing use of LLMs and extensions from conventional deep learning research.
文摘Software-related security aspects are a growing and legitimate concern,especially with 5G data available just at our palms.To conduct research in this field,periodic comparative analysis is needed with the new techniques coming up rapidly.The purpose of this study is to review the recent developments in the field of security integration in the software development lifecycle(SDLC)by analyzing the articles published in the last two decades and to propose a way forward.This review follows Kitchenham’s review protocol.The review has been divided into three main stages including planning,execution,and analysis.From the selected 100 articles,it becomes evident that need of a collaborative approach is necessary for addressing critical software security risks(CSSRs)through effective risk management/estimation techniques.Quantifying risks using a numeric scale enables a comprehensive understanding of their severity,facilitating focused resource allocation and mitigation efforts.Through a comprehensive understanding of potential vulnerabilities and proactive mitigation efforts facilitated by protection poker,organizations can prioritize resources effectively to ensure the successful outcome of projects and initiatives in today’s dynamic threat landscape.The review reveals that threat analysis and security testing are needed to develop automated tools for the future.Accurate estimation of effort required to prioritize potential security risks is a big challenge in software security.The accuracy of effort estimation can be further improved by exploring new techniques,particularly those involving deep learning.It is also imperative to validate these effort estimation methods to ensure all potential security threats are addressed.Another challenge is selecting the right model for each specific security threat.To achieve a comprehensive evaluation,researchers should use well-known benchmark checklists.
基金supported by the CCF-NSFOCUS‘Kunpeng’Research Fund(CCF-NSFOCUS2024012).
文摘In recent years,with the rapid development of software systems,the continuous expansion of software scale and the increasing complexity of systems have led to the emergence of a growing number of software metrics.Defect prediction methods based on software metric elements highly rely on software metric data.However,redundant software metric data is not conducive to efficient defect prediction,posing severe challenges to current software defect prediction tasks.To address these issues,this paper focuses on the rational clustering of software metric data.Firstly,multiple software projects are evaluated to determine the preset number of clusters for software metrics,and various clustering methods are employed to cluster the metric elements.Subsequently,a co-occurrence matrix is designed to comprehensively quantify the number of times that metrics appear in the same category.Based on the comprehensive results,the software metric data are divided into two semantic views containing different metrics,thereby analyzing the semantic information behind the software metrics.On this basis,this paper also conducts an in-depth analysis of the impact of different semantic view of metrics on defect prediction results,as well as the performance of various classification models under these semantic views.Experiments show that the joint use of the two semantic views can significantly improve the performance of models in software defect prediction,providing a new understanding and approach at the semantic view level for defect prediction research based on software metrics.
文摘Quantum software development utilizes quantum phenomena such as superposition and entanglement to address problems that are challenging for classical systems.However,it must also adhere to critical quantum constraints,notably the no-cloning theorem,which prohibits the exact duplication of unknown quantum states and has profound implications for cryptography,secure communication,and error correction.While existing quantum circuit representations implicitly honor such constraints,they lack formal mechanisms for early-stage verification in software design.Addressing this constraint at the design phase is essential to ensure the correctness and reliability of quantum software.This paper presents a formal metamodeling framework using UML-style notation and and Object Constraint Language(OCL)to systematically capture and enforce the no-cloning theorem within quantum software models.The proposed metamodel formalizes key quantum concepts—such as entanglement and teleportation—and encodes enforceable invariants that reflect core quantum mechanical laws.The framework’s effectiveness is validated by analyzing two critical edge cases—conditional copying with CNOT gates and quantum teleportation—through instance model evaluations.These cases demonstrate that the metamodel can capture nuanced scenarios that are often mistaken as violations of the no-cloning theorem but are proven compliant under formal analysis.Thus,these serve as constructive validations that demonstrate the metamodel’s expressiveness and correctness in representing operations that may appear to challenge the no-cloning theorem but,upon rigorous analysis,are shown to comply with it.The approach supports early detection of conceptual design errors,promoting correctness prior to implementation.The framework’s extensibility is also demonstrated by modeling projective measurement,further reinforcing its applicability to broader quantum software engineering tasks.By integrating the rigor of metamodeling with fundamental quantum mechanical principles,this work provides a structured,model-driven approach that enables traditional software engineers to address quantum computing challenges.It offers practical insights into embedding quantum correctness at the modeling level and advances the development of reliable,error-resilient quantum software systems.
基金supported in part by the Teaching Reform Project of Chongqing University of Posts and Telecommunications,China under Grant No.XJG23234Chongqing Municipal Higher Education Teaching Reform Research Project under Grant No.203399the Doctoral Direct Train Project of Chongqing Science and Technology Bureau under Grant No.CSTB2022BSXM-JSX0007。
文摘The advent of large language models(LLMs)has made knowledge acquisition and content creation increasingly easier and cheaper,which in turn redefines learning and urges transformation in software engineering education.To do so,there is a need to understand the impact of LLMs on software engineering education.In this paper,we conducted a preliminary case study on three software requirements engineering classes where students are allowed to use LLMs to assist in their projects.Based on the students’experience,performance,and feedback from a survey conducted at the end of the courses,we characterized the challenges and benefits of applying LLMs in software engineering education.This research contributes to the ongoing discourse on the integration of LLMs in education,emphasizing both their prominent potential and the need for balanced,mindful usage.
基金funded by the Youth Fund of the National Natural Science Foundation of China(Grant No.42261070).
文摘Spectrum-based fault localization (SBFL) generates a ranked list of suspicious elements by using the program execution spectrum, but the excessive number of elements ranked in parallel results in low localization accuracy. Most researchers consider intra-class dependencies to improve localization accuracy. However, some studies show that inter-class method call type faults account for more than 20%, which means such methods still have certain limitations. To solve the above problems, this paper proposes a two-phase software fault localization based on relational graph convolutional neural networks (Two-RGCNFL). Firstly, in Phase 1, the method call dependence graph (MCDG) of the program is constructed, the intra-class and inter-class dependencies in MCDG are extracted by using the relational graph convolutional neural network, and the classifier is used to identify the faulty methods. Then, the GraphSMOTE algorithm is improved to alleviate the impact of class imbalance on classification accuracy. Aiming at the problem of parallel ranking of element suspicious values in traditional SBFL technology, in Phase 2, Doc2Vec is used to learn static features, while spectrum information serves as dynamic features. A RankNet model based on siamese multi-layer perceptron is constructed to score and rank statements in the faulty method. This work conducts experiments on 5 real projects of Defects4J benchmark. Experimental results show that, compared with the traditional SBFL technique and two baseline methods, our approach improves the Top-1 accuracy by 262.86%, 29.59% and 53.01%, respectively, which verifies the effectiveness of Two-RGCNFL. Furthermore, this work verifies the importance of inter-class dependencies through ablation experiments.
文摘Link failure is a critical issue in large networks and must be effectively addressed.In software-defined networks(SDN),link failure recovery schemes can be categorized into proactive and reactive approaches.Reactive schemes have longer recovery times while proactive schemes provide faster recovery but overwhelm the memory of switches by flow entries.As SDN adoption grows,ensuring efficient recovery from link failures in the data plane becomes crucial.In particular,data center networks(DCNs)demand rapid recovery times and efficient resource utilization to meet carrier-grade requirements.This paper proposes an efficient Decentralized Failure Recovery(DFR)model for SDNs,meeting recovery time requirements and optimizing switch memory resource consumption.The DFR model enables switches to autonomously reroute traffic upon link failures without involving the controller,achieving fast recovery times while minimizing memory usage.DFR employs the Fast Failover Group in the OpenFlow standard for local recovery without requiring controller communication and utilizes the k-shortest path algorithm to proactively install backup paths,allowing immediate local recovery without controller intervention and enhancing overall network stability and scalability.DFR employs flow entry aggregation techniques to reduce switch memory usage.Instead of matching flow entries to the destination host’s MAC address,DFR matches packets to the destination switch’s MAC address.This reduces the switches’Ternary Content-Addressable Memory(TCAM)consumption.Additionally,DFR modifies Address Resolution Protocol(ARP)replies to provide source hosts with the destination switch’s MAC address,facilitating flow entry aggregation without affecting normal network operations.The performance of DFR is evaluated through the network emulator Mininet 2.3.1 and Ryu 3.1 as SDN controller.For different number of active flows,number of hosts per edge switch,and different network sizes,the proposed model outperformed various failure recovery models:restoration-based,protection by flow entries,protection by group entries and protection by Vlan-tagging model in terms of recovery time,switch memory consumption and controller overhead which represented the number of flow entry updates to recover from the failure.Experimental results demonstrate that DFR achieves recovery times under 20 milliseconds,satisfying carrier-grade requirements for rapid failure recovery.Additionally,DFR reduces switch memory usage by up to 95%compared to traditional protection methods and minimizes controller load by eliminating the need for controller intervention during failure recovery.Theresults underscore the efficiency and scalability of the DFR model,making it a practical solution for enhancing network resilience in SDN environments.
文摘Software-defined networking(SDN)is an innovative paradigm that separates the control and data planes,introducing centralized network control.SDN is increasingly being adopted by Carrier Grade networks,offering enhanced networkmanagement capabilities than those of traditional networks.However,because SDN is designed to ensure high-level service availability,it faces additional challenges.One of themost critical challenges is ensuring efficient detection and recovery from link failures in the data plane.Such failures can significantly impact network performance and lead to service outages,making resiliency a key concern for the effective adoption of SDN.Since the recovery process is intrinsically dependent on timely failure detection,this research surveys and analyzes the current literature on both failure detection and recovery approaches in SDN.The survey provides a critical comparison of existing failure detection techniques,highlighting their advantages and disadvantages.Additionally,it examines the current failure recovery methods,categorized as either restoration-based or protection-based,and offers a comprehensive comparison of their strengths and limitations.Lastly,future research challenges and directions are discussed to address the shortcomings of existing failure recovery methods.
基金National Key R&D Program of China(Nos.2023YFC3806900,2022YFE0141400)。
文摘The advent of parametric design has resulted in a marked increase in the complexity of building.Unfortunately,traditional construction methods make it difficult to meet the needs.Therefore,construction robots have become a pivotal production tool in this context.Since the arm span of a single robot usually does not exceed 3 meters,it is not competent for producing large-scale building components.Accordingly,the extension of the robot,s working range is often achieved by external axes.Nevertheless,the coupling control of external axes and robots and their kinematic solution have become key challenges.The primary technical difficulties include customized construction robots,automatic solutions for external axes,fixed axis joints,and specific motion mode control.This paper proposes solutions to these difficulties,introduces the relevant basic concepts and algorithms in detail,and encapsulates these robotics principles and algorithm processes into the Grasshopper plug-in commonly used by architects to form the FURobot software platform.This platform effectively solves the above problems,lowers the threshold for architects,and improves production efficiency.The effectiveness of the algorithm and software in this paper is verified through simulation experiments.
基金National Natural Science Foundation of China(12471367)。
文摘In this work,we present a parallel implementation of radiation hydrodynamics coupled with particle transport,utilizing software infrastructure JASMIN(J Adaptive Structured Meshes applications INfrastructure)which encapsulates high-performance technology for the numerical simulation of complex applications.Two serial codes,radiation hydrodynamics RH2D and particle transport Sn2D,have been integrated into RHSn2D on JASMIN infrastructure,which can efficiently use thousands of processors to simulate the complex multi-physics phenomena.Moreover,the non-conforming processors strategy has ensured RHSn2D against the serious load imbalance between radiation hydrodynamics and particle transport for large scale parallel simulations.Numerical results show that RHSn2D achieves a parallel efficiency of 17.1%using 90720 cells on 8192 processors compared with 256 processors in the same problem.
基金supported by the Natural Science Foundation Program of China(42173072,41503037,U1967207)Postgraduate Innovative Cultivation Program(CDUT2023BJCX013)Uranium Resources Exploration and Exploitation Innovation Center&and Everest Scientific Research Program(CDUT).
文摘It has been shown that the age of minerals in which U±Th are a major(e.g.,uraninite,pitchblende and thorite)or minor(e.g.,monazite,xenotime)component can be calculated from the concentrations of U±Th and Pb rather than their isotopes,and such ages are referred to as chemical ages.Although equations for calculating the chemical ages have been well established and various computation programs have been reported,there is a lack of software that can not only calculate the chemical ages of individual analytical points but also provide an evaluation of the errors of individual ages as well as the whole dataset.In this paper,we develop a software for calculating and assessing the chemical ages of uranium minerals(CAUM),an open-source Python-based program with a friendly Graphical User Interface(GUI).Electron probe microanalysis(EPMA)data of uranium minerals are first imported from Excel files and used to calculate the chemical ages and associated errors of individual analytical points.The age data are then visualized to aid evaluating if the dataset comprises one or multiple populations and whether or not there are meaningful correlations between the chemical ages and impurities.Actions can then be taken to evaluate the errors within individual populations and the significance of the correlations.The use of the software is demonstrated with examples from published data.
基金funded by a grant from the Center of Excellence in Information Assurance(CoEIA),King Saud University(KSU).
文摘Software defect prediction is a critical component in maintaining software quality,enabling early identification and resolution of issues that could lead to system failures and significant financial losses.With the increasing reliance on user-generated content,social media reviews have emerged as a valuable source of real-time feedback,offering insights into potential software defects that traditional testing methods may overlook.However,existing models face challenges like handling imbalanced data,high computational complexity,and insufficient inte-gration of contextual information from these reviews.To overcome these limitations,this paper introduces the SESDP(Sentiment Analysis-Based Early Software Defect Prediction)model.SESDP employs a Transformer-Based Multi-Task Learning approach using Robustly Optimized Bidirectional Encoder Representations from Transformers Approach(RoBERTa)to simultaneously perform sentiment analysis and defect prediction.By integrating text embedding extraction,sentiment score computation,and feature fusion,the model effectively captures both the contextual nuances and sentiment expressed in user reviews.Experimental results show that SESDP achieves superior performance with an accuracy of 96.37%,precision of 94.7%,and recall of 95.4%,particularly excelling in handling imbalanced datasets compared to baseline models.This approach offers a scalable and efficient solution for early software defect detection,enhancing proactive software quality assurance.
文摘Starting with the goal and significance of software security testing,this paper introduces the main methods of software security testing in the open network environment,including formal security testing,white box testing,fuzzy testing,model testing,and fault injection testing.A software security testing method based on a security target model is proposed.This paper provides new ideas for software security testing,better adapts to the open network environment,improves the efficiency and quality of testing,and builds a good software application environment.
基金supported by the National Natural Science Foundation of China(82371089).
文摘Objective:To evaluate the performance of orthokeratology(ortho-k)lens reordering using software-designed system,so as to determine the feasibility of ortho-k lens reordering without discontinuing lens wear.Methods:This study is a retrospective analysis of data of ortho-k lens wearers who had a history of short-term discontinuation of lens wear.A total of 94 individuals aged over 8 years with spherical equivalent refraction ranging from-0.50 to-6.50 diopters were included.The corneal topography data at baseline(before ortho-k)and after lens wear discontinuation(cessation of ortho-k treatment)were imported separately into the lens-design software,along with corresponding refraction data.Subsequently,corneal and lens parameters were generated and compared.Intraclass correlation coefficients(ICC)were calculated,and Bland and Altman analyses were conducted.Results:All 94 children were involved in the retrospective analysis.Compared with baseline data,there was a high level of consistency between Rwo(without discontinuation)and Rwith(with discontinuation),with an ICC of 0.96(P<0.001).Furthermore,the comparison of lens parameters generated by the Easyfit software between baseline and after short-term discontinuation showed a high degree of consistency,with all of the ICC values exceeding 0.90.Similar results were obtained using the WAVE software,as both ICC values and Bland-Altman plots demonstrated a high level of consistency in lens parameters between two conditions(nearly all data points fell within the 95%LoAs).Conclusions:It is feasible to directly reorder new ortho-k lenses using software fitting approaches.However,further investigations are necessary to validate their practicability in a clinical setting.
文摘Software defect prediction(SDP)aims to find a reliable method to predict defects in specific software projects and help software engineers allocate limited resources to release high-quality software products.Software defect prediction can be effectively performed using traditional features,but there are some redundant or irrelevant features in them(the presence or absence of this feature has little effect on the prediction results).These problems can be solved using feature selection.However,existing feature selection methods have shortcomings such as insignificant dimensionality reduction effect and low classification accuracy of the selected optimal feature subset.In order to reduce the impact of these shortcomings,this paper proposes a new feature selection method Cubic TraverseMa Beluga whale optimization algorithm(CTMBWO)based on the improved Beluga whale optimization algorithm(BWO).The goal of this study is to determine how well the CTMBWO can extract the features that are most important for correctly predicting software defects,improve the accuracy of fault prediction,reduce the number of the selected feature and mitigate the risk of overfitting,thereby achieving more efficient resource utilization and better distribution of test workload.The CTMBWO comprises three main stages:preprocessing the dataset,selecting relevant features,and evaluating the classification performance of the model.The novel feature selection method can effectively improve the performance of SDP.This study performs experiments on two software defect datasets(PROMISE,NASA)and shows the method’s classification performance using four detailed evaluation metrics,Accuracy,F1-score,MCC,AUC and Recall.The results indicate that the approach presented in this paper achieves outstanding classification performance on both datasets and has significant improvement over the baseline models.
基金supported by the 2023 Sichuan Province Higher Education Talent Cultivation and Teaching Reform Major Project“Exploration and Practice of Interdisciplinary and Integrated Industrial Software Talent Cultivation Model”(JG2023-14)the Sichuan University Higher Education Teaching Reform Project(10th Phase)Research and Exploration of Practical Teaching Mode under the New Major Background of“Cross Disciplinary and Integration”(SCU10128)。
文摘To address the severe challenges posed by the international situation and meet the needs of the national major development strategies,the traditional software engineering talent cultivation model lacks interdisciplinary education focused on specific fields,making it difficult to cultivate engineering leaders with multidisciplinary backgrounds who are capable of solving complex real-world problems.To solve this problem,based on the decade-long interdisciplinary talent cultivation achievements of the College of Software Engineering at Sichuan University,this article proposes the“Software Engineering+”innovative talent cultivation paradigm.It provides an analysis through professional construction of interdisciplinary talents,the design of talent cultivation frameworks,the formulation of cultivation plans,the establishment of interdisciplinary curriculum systems,the reform of teaching modes,and the improvement of institutional systems.Scientific solutions are proposed,and five project models implemented and operated by the College of Software Engineering at Sichuan University are listed as practical examples,offering significant reference value.
基金funded by Universityindustry Collaborative Education Program(No.220605181024725)the Undergraduate Education and Teaching Reform Research Project of Northwestern Polytechnical University(No.22GZ13083)。
文摘With the rapid development of software engineering,traditional teaching methods are confronted with the challenges of short knowledge update cycles and the rapid emergence of new technologies.By analyzing the current situation of the mismatch between educational practices and industrial change,this study proposes an innovative teaching model—“Micro-practices”.This model integrates new knowledge and new technologies into the teaching process quickly and flexibly through practical teaching projects with“short class time,small capacity,and cloud environment”to meet the different educational needs of students,teachers,and enterprises.The aim is to train innovative software engineering talents who can meet the challenges of the future.
基金supported by U.S.Office of Naval Research(ONR)Grant number G2A62826.
文摘Purpose:This research addresses the challenge of concept drift in AI-enabled software,particularly within autonomous vehicle systems where concept drift in object recognition(like pedestrian detection)can lead to misclassifications and safety risks.This study introduces a proactive framework to detect early signs of domain-specific concept drift by leveraging domain analysis and natural language processing techniques.This method is designed to help maintain the relevance of domain knowledge and prevent potential failures in AI systems due to evolving concept definitions.Design/methodology/approach:The proposed framework integrates natural language processing and image analysis to continuously update and monitor key domain concepts against evolving external data sources,such as social media and news.By identifying terms and features closely associated with core concepts,the system anticipates and flags significant changes.This was tested in the automotive domain on the pedestrian concept,where the framework was evaluated for its capacity to detect shifts in the recognition of pedestrians,particularly during events like Halloween and specific car accidents.Findings:The framework demonstrated an ability to detect shifts in the domain concept of pedestrians,as evidenced by contextual changes around major events.While it successfully identified pedestrian-related drift,the system’s accuracy varied when overlapping with larger social events.The results indicate the model’s potential to foresee relevant shifts before they impact autonomous systems,although further refinement is needed to handle high-impact concurrent events.Research limitations:This study focused on detecting concept drift in the pedestrian domain within autonomous vehicles,with results varying across domains.To assess generalizability,we tested the framework for airplane-related incidents and demonstrated adaptability.However,unpredictable events and data biases from social media and news may obscure domain-specific drifts.Further evaluation across diverse applications is needed to enhance robustness in evolving AI environments.Practical implications:The proactive detection of concept drift has significant implications for AI-driven domains,especially in safety-critical applications like autonomous driving.By identifying early signs of drift,this framework provides actionable insights for AI system updates,potentially reducing misclassification risks and enhancing public safety.Moreover,it enables timely interventions,reducing costly and labor-intensive retraining requirements by focusing only on the relevant aspects of evolving concepts.This method offers a streamlined approach for maintaining AI system performance in environments where domain knowledge rapidly changes.Originality/value:This study contributes a novel domain-agnostic framework that combines natural language processing with image analysis to predict concept drift early.This unique approach,which is focused on real-time data sources,offers an effective and scalable solution for addressing the evolving nature of domain-specific concepts in AI applications.
文摘The rapid integration of artificial intelligence(AI)into software development,driven by large language models(LLMs),is reshaping the role of programmers from traditional coders into strategic collaborators within Industry 4.0 ecosystems.This qualitative study employs a hermeneutic phenomenological approach to explore the lived experiences of Information Technology(IT)professionals as they navigate a dynamic technological landscape marked by intelligent automation,shifting professional identities,and emerging ethical concerns.Findings indicate that developers are actively adapting to AI-augmented environments by engaging in continuous upskilling,prompt engineering,interdisciplinary collaboration,and heightened ethical awareness.However,participants also voiced growing concerns about the reliability and security of AI-generated code,noting that these tools can introduce hidden vulnerabilities and reduce critical engagement due to automation bias.Many described instances of flawed logic,insecure patterns,or syntactically correct but contextually inappropriate suggestions,underscoring the need for rigorous human oversight.Additionally,the study reveals anxieties around job displacement and the gradual erosion of fundamental coding skills,particularly in environments where AI tools dominate routine development tasks.These findings highlight an urgent need for educational reforms,industry standards,and organizational policies that prioritize both technical robustness and the preservation of human expertise.As AI becomes increasingly embedded in software engineering workflows,this research offers timely insights into how developers and organizations can responsibly integrate intelligent systems to promote accountability,resilience,and innovation across the software development lifecycle.
文摘This paper presents a case study of the collaborative integration between the School of Information and Software Engineering at the University of Electronic Science and Technology of China(UESTC)and SI-TECH,highlighting the complementary advantages of both the University and the enterprise.By jointly establishing research institutes and engaging in diversified collaborative initiatives,the University and the enterprise have embarked on a pathway of School-enterprise Integration.Through a virtuous cycle of cooperation and continuous advancement,they have explored a comprehensive talent cultivation model in“5G”software engineering innovation practices based on this integration.Furthermore,this endeavor aims to facilitate the transformation of technological achievements and provides valuable insights for fostering innovative talents in the field of electronic information through enhanced integration between the University and the enterprise.