A software package for high-speed oscilloscope-based three-dimensional bunch-by-bunch charge and position measurement is presented. The software package takes the pick-up electrode signal waveform recorded by the high-speed oscilloscope as input, and it calculates and outputs the bunch-by-bunch charge and position. In addition to enabling a three-dimensional observation of the motion of each passing bunch on all beam position monitor pick-up electrodes, it offers many additional features such as injection analysis, bunch response function reconstruction, and turn-by-turn beam analysis. The software package has an easy-to-understand graphical user interface and convenient interactive operation, and has been verified on Windows 10.
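As an orientation to the kind of processing such a package performs, the sketch below integrates each electrode waveform to estimate the bunch charge and applies the standard difference-over-sum formula for position. The four-electrode layout, the axis mapping, and the calibration constants `k_x`/`k_y` are illustrative assumptions, not the package's actual implementation.

```python
import numpy as np

def bunch_charge_and_position(v_a, v_b, v_c, v_d, dt, k_x=1.0, k_y=1.0):
    """Estimate one bunch's charge and transverse position from the four
    BPM pick-up electrode waveforms sampled by the oscilloscope.

    v_a..v_d : voltage sample arrays for electrodes A..D (V)
    dt       : oscilloscope sampling interval (s)
    k_x, k_y : position calibration constants (hypothetical values)
    """
    # The time integral of each electrode signal scales with bunch charge.
    s_a, s_b, s_c, s_d = (np.sum(v) * dt for v in (v_a, v_b, v_c, v_d))
    total = s_a + s_b + s_c + s_d
    # First-order difference-over-sum position estimate; the electrode
    # pairing assumes a diagonal button layout and is illustrative only.
    x = k_x * ((s_a + s_d) - (s_b + s_c)) / total
    y = k_y * ((s_a + s_b) - (s_c + s_d)) / total
    return total, x, y
```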
BACKGROUND: HeartModel (HM) is a fully automated adaptive quantification software that can quickly quantify left heart volume and left ventricular function. This study used HM to quantify the left ventricular end-diastolic (LVEDV) and end-systolic volumes (LVESV) of patients with dilated cardiomyopathy (DCM), coronary artery heart disease with segmental wall motion abnormality, and hypertrophic cardiomyopathy (HCM) to determine whether there were differences in the feasibility, accuracy, and repeatability of measuring the LVEDV, LVESV, LV ejection fraction (LVEF) and left atrial end-systolic volume (LAESV), and to compare these measurements with those obtained with traditional two-dimensional (2D) and three-dimensional (3D) methods. AIM: To evaluate the application value of HM in quantifying left heart chamber volume and LVEF in clinical patients. METHODS: A total of 150 subjects who underwent 2D and 3D echocardiography were divided into 4 groups: (1) 42 patients with normal heart shape and function (control group, Group A); (2) 35 patients with DCM (Group B); (3) 41 patients with LV remodeling after acute myocardial infarction (Group C); and (4) 32 patients with HCM (Group D). The LVEDV, LVESV, LVEF and LAESV obtained by HM with (HM-RE) and without regional endocardial border editing (HM-NE) were compared with those measured by traditional 2D/3D echocardiographic methods to assess the correlation, consistency, and repeatability of all methods. RESULTS: (1) The parameters measured by HM were significantly different among the groups (P<0.05 for all). Compared with Groups A, C, and D, Group B had higher LVEDV and LVESV (P<0.05 for all) and lower LVEF (P<0.05 for all); (2) HM-NE overestimated LVEDV, LVESV, and LAESV with wide biases and underestimated LVEF with a small bias; contour adjustment reduced the biases and limits of agreement (bias: LVEDV, 28.17 mL; LVESV, 14.92 mL; LAESV, 8.18 mL; LVEF, -0.04%). The correlations between HM-RE and advanced cardiac 3D quantification (3DQA) (r_s = 0.91-0.95, P<0.05 for all) were higher than those between HM-NE (r_s = 0.85-0.93, P<0.05 for all) and the traditional 2D methods. The correlations between HM-RE and 3DQA were good for Groups A, B, and C but remained weak for Group D (LVEDV and LVESV, r_s = 0.48-0.54, P<0.05 for all); and (3) The intraobserver and interobserver variability for the HM-RE measurements were low. CONCLUSION: HM can be used to quantify the LV volume and LVEF in patients with common heart diseases and sufficient image quality. HM with contour editing is highly reproducible and accurate and may be recommended for clinical practice.
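For readers unfamiliar with the agreement statistics quoted above, the bias and limits of agreement follow the usual Bland-Altman convention (assuming the standard 95% limits; the abstract does not state its exact formula):

```latex
\mathrm{bias} = \bar{d} = \frac{1}{n}\sum_{i=1}^{n}\bigl(x_i^{\mathrm{HM}} - x_i^{\mathrm{ref}}\bigr),
\qquad
\mathrm{LoA} = \bar{d} \pm 1.96\,s_d
```

where \(s_d\) is the standard deviation of the paired differences between the HM measurement and the reference method.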
Software security poses substantial risks to our society because software has become part of our life. Numerous techniques have been proposed to resolve or mitigate the impact of software security issues. Among them, software testing and analysis are two of the critical methods, which benefit significantly from advancements in deep learning technologies. Due to the successful use of deep learning in software security, researchers have recently explored the potential of using large language models (LLMs) in this area. In this paper, we systematically review the results focusing on LLMs in software security. We analyze the topics of fuzzing, unit testing, program repair, bug reproduction, data-driven bug detection, and bug triage. We deconstruct these techniques into several stages and analyze how LLMs can be used in each stage. We also discuss the future directions of using LLMs in software security, including future directions for the existing uses of LLMs and extensions from conventional deep learning research.
Three-dimensional (3D) urban structures play a critical role in informing climate mitigation strategies aimed at the built environment and facilitating sustainable urban development. Regrettably, there exists a significant gap in detailed and consistent data on 3D building space structures with global coverage due to the challenges inherent in the data collection and model calibration processes. In this study, we constructed a global urban structure (GUS-3D) dataset, including building volume, height, and footprint information, at a 500 m spatial resolution using extensive satellite observation products and numerous reference building samples. Our analysis indicated that the total volume of buildings worldwide in 2015 exceeded 1×10^12 m^3. Over the 1985 to 2015 period, we observed a slight increase in the magnitude of 3D building volume growth (i.e., it increased from 166.02 km^3 during the 1985-2000 period to 175.08 km^3 during the 2000-2015 period), while the expansion magnitudes of the two-dimensional (2D) building footprint (22.51×10^3 vs 13.29×10^3 km^2) and urban extent (157×10^3 vs 133.8×10^3 km^2) notably decreased. This trend highlights the significant increase in intensive vertical utilization of urban land. Furthermore, we identified significant heterogeneity in building space provision and inequality across cities worldwide. This inequality is particularly pronounced in many populous Asian cities, which has been overlooked in previous studies on economic inequality. The GUS-3D dataset shows great potential to deepen our understanding of the urban environment and creates new horizons for numerous 3D urban studies.
To address the problem of multi-missile cooperative interception against maneuvering targets at a prespecified impact time and desired Line-of-Sight (LOS) angles in Three-Dimensional (3D) space, this paper proposes a 3D leader-following cooperative interception guidance law. First, in the LOS direction of the leader, an impact time-controlled guidance law is derived based on the fixed-time stability theory, which enables the leader to complete the interception task at a prespecified impact time. Next, in the LOS direction of the followers, by introducing a time consensus tracking error function, a fixed-time consensus tracking guidance law is investigated to guarantee the consensus tracking convergence of the time-to-go. Then, in the direction normal to the LOS, by combining the designed global integral sliding mode surface and the second-order Sliding Mode Control (SMC) theory, an innovative 3D LOS-angle-constrained interception guidance law is developed, which eliminates the reaching phase in the traditional sliding mode guidance laws and effectively saves energy consumption. Moreover, it effectively suppresses the chattering phenomenon while avoiding the singularity issue, and compensates online for unknown interference caused by target maneuvering, making it convenient for practical engineering applications. Finally, theoretical proof analysis and multiple sets of numerical simulation results verify the effectiveness, superiority, and robustness of the investigated guidance law.
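For orientation, the fixed-time stability property invoked above is commonly established through a Lyapunov condition of the following form (a standard textbook statement, not this paper's specific derivation):

```latex
\dot{V}(x) \le -\alpha V^{p}(x) - \beta V^{q}(x),
\qquad \alpha,\beta > 0,\; 0 < p < 1 < q
```

which guarantees convergence within a settling time bounded independently of the initial condition, \(T \le \frac{1}{\alpha(1-p)} + \frac{1}{\beta(q-1)}\); a bound of this kind is what allows the impact time to be prespecified.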
Spectrum-based fault localization (SBFL) generates a ranked list of suspicious elements by using the program execution spectrum, but the excessive number of elements ranked in parallel results in low localization accuracy. Most researchers consider intra-class dependencies to improve localization accuracy. However, some studies show that inter-class method-call faults account for more than 20% of faults, which means such methods still have certain limitations. To solve the above problems, this paper proposes a two-phase software fault localization based on relational graph convolutional neural networks (Two-RGCNFL). First, in Phase 1, the method call dependence graph (MCDG) of the program is constructed, the intra-class and inter-class dependencies in the MCDG are extracted by using the relational graph convolutional neural network, and a classifier is used to identify the faulty methods. The GraphSMOTE algorithm is then improved to alleviate the impact of class imbalance on classification accuracy. To address the parallel ranking of element suspiciousness values in traditional SBFL techniques, in Phase 2, Doc2Vec is used to learn static features, while spectrum information serves as dynamic features. A RankNet model based on a Siamese multi-layer perceptron is constructed to score and rank statements in the faulty method. This work conducts experiments on five real projects from the Defects4J benchmark. Experimental results show that, compared with the traditional SBFL technique and two baseline methods, our approach improves the Top-1 accuracy by 262.86%, 29.59% and 53.01%, respectively, which verifies the effectiveness of Two-RGCNFL. Furthermore, this work verifies the importance of inter-class dependencies through ablation experiments.
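A minimal PyTorch sketch of the pairwise RankNet idea used in Phase 2: one shared MLP scores each statement's feature vector, and training optimizes the probability that the more suspicious statement of a pair is ranked first. The feature size and layer widths are hypothetical, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class RankNet(nn.Module):
    """Siamese scorer: one shared MLP scores each statement; training
    uses pairwise probabilities of correct ordering."""
    def __init__(self, n_features=64):  # hypothetical feature size
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x_i, x_j):
        # The same weights score both statements (the "Siamese" part).
        s_i, s_j = self.mlp(x_i), self.mlp(x_j)
        # P(statement i is more suspicious than statement j).
        return torch.sigmoid(s_i - s_j)

# Pairwise training step: label 1 means x_i should outrank x_j.
model = RankNet()
loss_fn = nn.BCELoss()
x_i, x_j = torch.randn(8, 64), torch.randn(8, 64)
labels = torch.ones(8, 1)
loss = loss_fn(model(x_i, x_j), labels)
loss.backward()
```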
Software-defined networking (SDN) is an innovative paradigm that separates the control and data planes, introducing centralized network control. SDN is increasingly being adopted by Carrier Grade networks, offering enhanced network management capabilities compared with those of traditional networks. However, because SDN is designed to ensure high-level service availability, it faces additional challenges. One of the most critical challenges is ensuring efficient detection of and recovery from link failures in the data plane. Such failures can significantly impact network performance and lead to service outages, making resiliency a key concern for the effective adoption of SDN. Since the recovery process is intrinsically dependent on timely failure detection, this research surveys and analyzes the current literature on both failure detection and recovery approaches in SDN. The survey provides a critical comparison of existing failure detection techniques, highlighting their advantages and disadvantages. Additionally, it examines the current failure recovery methods, categorized as either restoration-based or protection-based, and offers a comprehensive comparison of their strengths and limitations. Lastly, future research challenges and directions are discussed to address the shortcomings of existing failure recovery methods.
Software-related security aspects are a growing and legitimate concern, especially with 5G data available just at our palms. To conduct research in this field, periodic comparative analysis is needed as new techniques emerge rapidly. The purpose of this study is to review the recent developments in the field of security integration in the software development lifecycle (SDLC) by analyzing the articles published in the last two decades, and to propose a way forward. This review follows Kitchenham's review protocol and is divided into three main stages: planning, execution, and analysis. From the 100 selected articles, it becomes evident that a collaborative approach is necessary for addressing critical software security risks (CSSRs) through effective risk management/estimation techniques. Quantifying risks using a numeric scale enables a comprehensive understanding of their severity, facilitating focused resource allocation and mitigation efforts. Through a comprehensive understanding of potential vulnerabilities and proactive mitigation efforts facilitated by protection poker, organizations can prioritize resources effectively to ensure the successful outcome of projects and initiatives in today's dynamic threat landscape. The review reveals that threat analysis and security testing are needed to develop automated tools in the future. Accurate estimation of the effort required to prioritize potential security risks is a major challenge in software security. The accuracy of effort estimation can be further improved by exploring new techniques, particularly those involving deep learning. It is also imperative to validate these effort estimation methods to ensure all potential security threats are addressed. Another challenge is selecting the right model for each specific security threat. To achieve a comprehensive evaluation, researchers should use well-known benchmark checklists.
Link failure is a critical issue in large networks and must be effectively addressed. In software-defined networks (SDN), link failure recovery schemes can be categorized into proactive and reactive approaches. Reactive schemes have longer recovery times, while proactive schemes provide faster recovery but overwhelm the memory of switches with flow entries. As SDN adoption grows, ensuring efficient recovery from link failures in the data plane becomes crucial. In particular, data center networks (DCNs) demand rapid recovery times and efficient resource utilization to meet carrier-grade requirements. This paper proposes an efficient Decentralized Failure Recovery (DFR) model for SDNs that meets recovery time requirements and optimizes switch memory consumption. The DFR model enables switches to autonomously reroute traffic upon link failures without involving the controller, achieving fast recovery times while minimizing memory usage. DFR employs the Fast Failover Group in the OpenFlow standard for local recovery without requiring controller communication, and utilizes the k-shortest path algorithm to proactively install backup paths, allowing immediate local recovery without controller intervention and enhancing overall network stability and scalability. DFR employs flow entry aggregation techniques to reduce switch memory usage. Instead of matching flow entries to the destination host's MAC address, DFR matches packets to the destination switch's MAC address. This reduces the switches' Ternary Content-Addressable Memory (TCAM) consumption. Additionally, DFR modifies Address Resolution Protocol (ARP) replies to provide source hosts with the destination switch's MAC address, facilitating flow entry aggregation without affecting normal network operations. The performance of DFR is evaluated with the network emulator Mininet 2.3.1 and Ryu 3.1 as the SDN controller. For different numbers of active flows, numbers of hosts per edge switch, and network sizes, the proposed model outperformed various failure recovery models (restoration-based, protection by flow entries, protection by group entries, and protection by VLAN tagging) in terms of recovery time, switch memory consumption, and controller overhead, which represents the number of flow entry updates needed to recover from a failure. Experimental results demonstrate that DFR achieves recovery times under 20 milliseconds, satisfying carrier-grade requirements for rapid failure recovery. Additionally, DFR reduces switch memory usage by up to 95% compared to traditional protection methods and minimizes controller load by eliminating the need for controller intervention during failure recovery. The results underscore the efficiency and scalability of the DFR model, making it a practical solution for enhancing network resilience in SDN environments.
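A minimal Ryu/OpenFlow 1.3 sketch of the Fast Failover mechanism DFR builds on, assuming a running Ryu app with a connected `datapath`; the port numbers and group id are illustrative, and this is not DFR's actual code:

```python
def install_ff_group(datapath, group_id, primary_port, backup_port):
    """Install an OpenFlow 1.3 Fast Failover group: the switch forwards
    on the first bucket whose watched port is live, so traffic shifts to
    the backup path locally, without contacting the controller."""
    ofp = datapath.ofproto
    parser = datapath.ofproto_parser
    buckets = [
        # Primary path is used while its port is up...
        parser.OFPBucket(watch_port=primary_port,
                         actions=[parser.OFPActionOutput(primary_port)]),
        # ...otherwise traffic falls back to the precomputed backup path.
        parser.OFPBucket(watch_port=backup_port,
                         actions=[parser.OFPActionOutput(backup_port)]),
    ]
    req = parser.OFPGroupMod(datapath, ofp.OFPGC_ADD,
                             ofp.OFPGT_FF, group_id, buckets)
    datapath.send_msg(req)
```

Flow entries then point at the group (`OFPActionGroup(group_id)`) instead of a fixed output port, which is what makes the rerouting decision local to the switch.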
The advent of parametric design has resulted in a marked increase in the complexity of buildings. Unfortunately, traditional construction methods make it difficult to meet these needs. Therefore, construction robots have become a pivotal production tool in this context. Since the arm span of a single robot usually does not exceed 3 meters, it cannot produce large-scale building components on its own. Accordingly, the extension of the robot's working range is often achieved by external axes. Nevertheless, the coupled control of external axes and robots and their kinematic solution have become key challenges. The primary technical difficulties include customized construction robots, automatic solutions for external axes, fixed axis joints, and specific motion mode control. This paper proposes solutions to these difficulties, introduces the relevant basic concepts and algorithms in detail, and encapsulates these robotics principles and algorithm processes into the Grasshopper plug-in commonly used by architects to form the FURobot software platform. This platform effectively solves the above problems, lowers the threshold for architects, and improves production efficiency. The effectiveness of the algorithm and software in this paper is verified through simulation experiments.
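A minimal homogeneous-transform sketch of how an external linear axis extends a robot's reachable workspace: the track's motion is composed with the arm's own kinematics. The frames and parameters here are illustrative, not FURobot's actual kinematic model.

```python
import numpy as np

def translation(x=0.0, y=0.0, z=0.0):
    """4x4 homogeneous translation matrix."""
    t = np.eye(4)
    t[:3, 3] = (x, y, z)
    return t

def tool_pose_in_world(rail_position, base_to_flange):
    """Compose the external-axis motion with the robot's own kinematics.

    rail_position  : travel of the external linear track along world X (m)
    base_to_flange : 4x4 pose of the flange in the robot base frame,
                     e.g. from the arm's forward kinematics
    """
    world_to_base = translation(x=rail_position)  # the track moves the base
    return world_to_base @ base_to_flange

# A flange pose 2.5 m in front of the base stays reachable even 10 m
# down the track, because the track transform is composed on the left.
pose = tool_pose_in_world(10.0, translation(x=2.5, z=1.2))
print(pose[:3, 3])   # -> [12.5  0.   1.2]
```

The coupled-control problem the abstract mentions is the inverse of this composition: distributing a desired world-frame tool pose between the track position and the arm's joint angles.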
Liposarcoma is one of the most common soft tissue sarcomas; however, its occurrence rate is still rare compared to other cancers. Due to its rarity, in vitro experiments are an essential approach to elucidate liposarcoma pathobiology. Conventional cell culture-based research (2D cell culture) still plays a pivotal role, although several shortcomings have recently come under discussion. In vivo, mouse models are usually adopted for pre-clinical analyses with the expectation of overcoming the issues of 2D cell culture. However, they do not fully recapitulate human dedifferentiated liposarcoma (DDLPS) characteristics. Therefore, three-dimensional (3D) culture systems have become a recent research focus in the cell biology field, with the expectation of overcoming the disadvantages of both 2D cell culture and in vivo animal models and filling the gap between them. Given the rarity of liposarcoma, we believe that 3D cell culture techniques, including 3D cell cultures/co-cultures and Patient-Derived tumor Organoids (PDOs), represent a promising approach to facilitate liposarcoma investigation, elucidate its molecular mechanisms, and support effective therapy development. In this review, we first provide a general overview of 3D cell cultures compared to 2D cell cultures. We then focus on one of the recent 3D cell culture applications, Patient-Derived Organoids (PDOs), summarizing and discussing several PDO methodologies. Finally, we discuss the current and future applications of PDOs to sarcoma, particularly in the field of liposarcoma.
In this work, we present a parallel implementation of radiation hydrodynamics coupled with particle transport, utilizing the software infrastructure JASMIN (J Adaptive Structured Meshes applications INfrastructure), which encapsulates high-performance technology for the numerical simulation of complex applications. Two serial codes, the radiation hydrodynamics code RH2D and the particle transport code Sn2D, have been integrated into RHSn2D on the JASMIN infrastructure, which can efficiently use thousands of processors to simulate complex multi-physics phenomena. Moreover, a non-conforming processors strategy protects RHSn2D against the serious load imbalance between radiation hydrodynamics and particle transport in large-scale parallel simulations. Numerical results show that RHSn2D achieves a parallel efficiency of 17.1% using 90720 cells on 8192 processors, relative to 256 processors on the same problem.
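For context, a relative parallel efficiency of this kind is conventionally defined against the smaller run rather than a single processor; a minimal statement of that convention (an assumption here, since the paper's exact definition is not quoted):

```latex
E = \frac{256 \cdot T_{256}}{8192 \cdot T_{8192}} = \frac{S}{32},
\qquad S = \frac{T_{256}}{T_{8192}}
```

so an efficiency of 17.1% over a 32-fold increase in processor count corresponds to a speedup of roughly \(S \approx 0.171 \times 32 \approx 5.5\) relative to the 256-processor run.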
Software systems play increasingly important roles in modern society, and the ability to withstand attacks is of great practical importance to crucial software systems; as a result, the structure and robustness of software systems have attracted a tremendous amount of interest in recent years. In this paper, based on the source code of Tar and MySQL, we propose an approach to generate coupled software networks and construct three kinds of directed software networks: the function call network, the weakly coupled network, and the strongly coupled network. The structural properties of these complex networks are extensively investigated. It is found that the average influence and the average dependence over all functions are the same. Moreover, eight attacking strategies and two robustness indicators (the weakly connected indicator and the strongly connected indicator) are introduced to analyze the robustness of software networks. The analysis shows that the strongly coupled network is only a weakly connected network rather than a strongly connected one. For MySQL, the high in-degree strategy outperforms other attacking strategies when the weakly connected indicator is used. On the other hand, the high out-degree strategy is a good choice when the strongly connected indicator is adopted. This work contributes to a better understanding of the structure and robustness of software networks.
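A minimal networkx sketch of one such attack simulation: remove nodes of a directed call graph in order of initial in-degree (or out-degree) and track the weakly connected indicator, taken here as the relative size of the largest weakly connected component. The random toy graph stands in for a real call graph; this is not the paper's code.

```python
import networkx as nx

def robustness_curve(g, strategy="in_degree"):
    """Attack a directed network by removing nodes in order of their
    initial degree, recording the fraction of nodes left in the largest
    weakly connected component after each removal."""
    g = g.copy()
    n0 = g.number_of_nodes()
    key = g.in_degree if strategy == "in_degree" else g.out_degree
    # Attack order fixed by initial degrees (one common variant).
    order = sorted(g.nodes, key=lambda v: key(v), reverse=True)
    curve = []
    for v in order:
        g.remove_node(v)
        if g.number_of_nodes() == 0:
            curve.append(0.0)
            break
        giant = max(nx.weakly_connected_components(g), key=len)
        curve.append(len(giant) / n0)
    return curve

# Toy directed "call graph": each edge represents caller -> callee.
calls = nx.gnp_random_graph(200, 0.03, directed=True, seed=1)
print(robustness_curve(calls, "in_degree")[:5])
```

Swapping `weakly_connected_components` for `strongly_connected_components` yields the strongly connected indicator used in the out-degree comparison.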
Quantum software development utilizes quantum phenomena such as superposition and entanglement to address problems that are challenging for classical systems. However, it must also adhere to critical quantum constraints, notably the no-cloning theorem, which prohibits the exact duplication of unknown quantum states and has profound implications for cryptography, secure communication, and error correction. While existing quantum circuit representations implicitly honor such constraints, they lack formal mechanisms for early-stage verification in software design. Addressing this constraint at the design phase is essential to ensure the correctness and reliability of quantum software. This paper presents a formal metamodeling framework using UML-style notation and the Object Constraint Language (OCL) to systematically capture and enforce the no-cloning theorem within quantum software models. The proposed metamodel formalizes key quantum concepts, such as entanglement and teleportation, and encodes enforceable invariants that reflect core quantum mechanical laws. The framework's effectiveness is validated by analyzing two critical edge cases, conditional copying with CNOT gates and quantum teleportation, through instance model evaluations. These cases demonstrate that the metamodel can capture nuanced scenarios that are often mistaken for violations of the no-cloning theorem but are proven compliant under formal analysis; they thus serve as constructive validations of the metamodel's expressiveness and correctness in representing operations that may appear to challenge the theorem but, upon rigorous analysis, comply with it. The approach supports early detection of conceptual design errors, promoting correctness prior to implementation. The framework's extensibility is also demonstrated by modeling projective measurement, further reinforcing its applicability to broader quantum software engineering tasks. By integrating the rigor of metamodeling with fundamental quantum mechanical principles, this work provides a structured, model-driven approach that enables traditional software engineers to address quantum computing challenges. It offers practical insights into embedding quantum correctness at the modeling level and advances the development of reliable, error-resilient quantum software systems.
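The CNOT edge case mentioned above can be checked numerically: CNOT "copies" classical basis states but produces an entangled Bell state, not two independent copies, when the control is in superposition. A self-contained numpy sketch (generic quantum mechanics, not the paper's metamodel):

```python
import numpy as np

# CNOT in the |00>,|01>,|10>,|11> basis, control = first qubit.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)
plus = (zero + one) / np.sqrt(2)

# Basis state: CNOT|1>|0> = |1>|1>, i.e. the basis value is copied.
out_basis = CNOT @ np.kron(one, zero)
print(np.allclose(out_basis, np.kron(one, one)))    # True

# Superposition: CNOT|+>|0> is the Bell state (|00>+|11>)/sqrt(2),
# NOT the product state |+>|+> that true cloning would require.
out_super = CNOT @ np.kron(plus, zero)
print(np.allclose(out_super, np.kron(plus, plus)))  # False
```

This is exactly why conditional copying is compliant with the no-cloning theorem: no unitary duplicates an unknown superposition.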
In recent years, with the rapid development of software systems, the continuous expansion of software scale and the increasing complexity of systems have led to the emergence of a growing number of software metrics. Defect prediction methods based on software metric elements rely heavily on software metric data. However, redundant software metric data is not conducive to efficient defect prediction, posing severe challenges to current software defect prediction tasks. To address these issues, this paper focuses on the rational clustering of software metric data. First, multiple software projects are evaluated to determine the preset number of clusters for software metrics, and various clustering methods are employed to cluster the metric elements. Subsequently, a co-occurrence matrix is designed to comprehensively quantify the number of times that metrics appear in the same category. Based on the comprehensive results, the software metric data are divided into two semantic views containing different metrics, thereby analyzing the semantic information behind the software metrics. On this basis, this paper also conducts an in-depth analysis of the impact of different semantic views of metrics on defect prediction results, as well as the performance of various classification models under these semantic views. Experiments show that the joint use of the two semantic views can significantly improve the performance of models in software defect prediction, providing a new understanding and approach at the semantic-view level for defect prediction research based on software metrics.
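A minimal sketch of the co-occurrence idea: cluster the metric columns with several methods and count how often each pair of metrics lands in the same cluster. The choice of clustering algorithms and the two-cluster preset are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

def cooccurrence(X, n_clusters=2, seed=0):
    """Build a metric-metric co-occurrence matrix from several
    clusterings of the metric columns.

    X : (n_samples, n_metrics) matrix of software metric values
    """
    features = X.T  # cluster the metrics, not the modules
    labelings = [
        KMeans(n_clusters=n_clusters, n_init=10,
               random_state=seed).fit_predict(features),
        AgglomerativeClustering(n_clusters=n_clusters).fit_predict(features),
    ]
    n = features.shape[0]
    co = np.zeros((n, n), dtype=int)
    for labels in labelings:
        # co[i, j] counts the methods that group metrics i and j together.
        co += (labels[:, None] == labels[None, :]).astype(int)
    return co

rng = np.random.default_rng(0)
print(cooccurrence(rng.normal(size=(100, 6))))
```

Thresholding such a matrix (e.g., pairs that always co-occur vs. those that never do) is one natural way to split the metrics into two views.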
The development of minimally invasive surgery has transformed the management of gastrointestinal cancer. Notably, three-dimensional visualization systems have increased surgical precision. This editorial discusses a recent study by Shen and Zhang, which compared the clinical applications of naked-eye three-dimensional laparoscopic systems vs traditional optical systems in radical surgery for gastric and colorectal cancer. Both systems appeared to yield comparable surgical and oncological outcomes in terms of safety parameters, operating times, and quality of lymph node dissection. However, the spectacle-free system's technical and logistical limitations hindered its effects on the surgical team's overall competency. This editorial examines the authors' findings within the broader context of the evolution of oncologic laparoscopy, discusses the relevance of the results in light of the current literature, and proposes future research directions focused on multicenter validation, comprehensive ergonomic analysis, and technological advancements aimed at enhancing intraoperative collaboration. As technology continues to evolve, clinical implementation of new methods must be supported by robust scientific evidence and standardized criteria to ensure tangible improvements in efficiency, safety, and oncologic outcomes.
The advent of large language models (LLMs) has made knowledge acquisition and content creation increasingly easier and cheaper, which in turn redefines learning and urges transformation in software engineering education. To do so, there is a need to understand the impact of LLMs on software engineering education. In this paper, we conducted a preliminary case study on three software requirements engineering classes where students were allowed to use LLMs to assist in their projects. Based on the students' experience, performance, and feedback from a survey conducted at the end of the courses, we characterized the challenges and benefits of applying LLMs in software engineering education. This research contributes to the ongoing discourse on the integration of LLMs in education, emphasizing both their prominent potential and the need for balanced, mindful usage.
This paper presents our endeavors in developing the large-scale, ultra-high-resolution E3SM Land Model (uELM), specifically designed for exascale computers furnished with accelerators such as Nvidia GPUs. The uELM is a sophisticated code that relies substantially on High-Performance Computing (HPC) environments, necessitating particular machine and software configurations. To facilitate community-based uELM development employing GPUs, we have created a portable, standalone software environment preconfigured with uELM input datasets, simulation cases, and source code. This environment, built on Docker, encompasses all essential code, libraries, and system software for uELM development on GPUs. It also features a functional unit test framework and an offline model testbed for comprehensive numerical experiments. From a technical perspective, the paper discusses GPU-ready container generation, uELM code management, and input data distribution across computational platforms. Lastly, the paper demonstrates the use of the environment for functional unit testing, end-to-end simulation on CPUs and GPUs, and collaborative code development.
Ensuring software quality in open-source environments requires adaptive mechanisms to enhance scalability, optimize service provisioning, and improve reliability. This study presents a dynamic correlation analysis technique to enhance software quality management in open-source environments by addressing dynamic scalability, adaptive service provisioning, and software reliability. The proposed methodology integrates a scalability metric, an optimized service provisioning model, and a weighted entropy-based reliability assessment to systematically improve key performance parameters. Experimental evaluation conducted on multiple open-source software (OSS) versions demonstrates significant improvements: scalability increased by 27.5%, service provisioning time was reduced by 18.3%, and software reliability improved by 22.1% compared to baseline methods. A comparative analysis with prior works further highlights the effectiveness of this approach in ensuring adaptability, efficiency, and resilience in dynamic software ecosystems. Future work will focus on real-time monitoring and AI-driven adaptive provisioning to further enhance software quality management.
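The abstract does not define its weighted entropy measure; for orientation only, the classical weighted entropy (in the sense of Guiaşu) has the form

```latex
H_w(p) = -\sum_{i=1}^{n} w_i\, p_i \log p_i, \qquad w_i \ge 0
```

where \(p_i\) is the probability of the \(i\)-th reliability-relevant event and \(w_i\) weights its criticality; the paper's actual reliability formula may differ.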
The three-dimensional particle electrode system exhibits significant potential for application in the treatment of wastewater. Nonetheless, the development of effective granular electrodes characterized by elevated catalytic activity and minimal energy consumption continues to pose a significant challenge. In this research, fluorine-doped copper-carbon (F/Cu-GAC) particle electrodes were synthesized through an impregnation-calcination technique, utilizing granular activated carbon as the carrier and fluorine-doped modified copper oxides as the catalytic agents. The particle electrodes were subsequently utilized to promote the degradation of 2,4,6-trichlorophenol (2,4,6-TCP) in a three-dimensional electrocatalytic reactor (3DER). The F/Cu-GAC particle electrodes were polarized under the action of the electric field, which promoted a heterogeneous Fenton-like reaction in which H2O2, generated by the two-electron oxygen reduction reaction (2e-ORR) of O2, was catalytically decomposed to ·OH. The 3DER equipped with F/Cu-GAC particle electrodes showed 100% removal of 2,4,6-TCP and 79.24% removal of TOC, with a specific energy consumption (EC) of approximately 0.019 kWh/g COD after 2 h of operation. The F/Cu-GAC particle electrodes exhibited an overpotential of 0.38 V and an electrochemically active surface area (ECSA) of 715 cm^2, as determined through linear sweep voltammetry (LSV) and cyclic voltammetry (CV) assessments. These findings suggest a high level of electrocatalytic performance. Furthermore, the catalytic mechanism of the 3DER equipped with F/Cu-GAC particle electrodes was elucidated through the application of X-ray photoelectron spectroscopy (XPS), electron spin resonance (ESR), and active species capture experiments. This investigation offers a novel approach for the effective degradation of 2,4,6-TCP.
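For reference, specific energy consumption in electrochemical oxidation studies is commonly computed from the cell voltage, current, treatment time, and COD removal; one common form (an assumption here, since the abstract does not state its formula) is

```latex
EC = \frac{U\,I\,t}{\left(\mathrm{COD}_0 - \mathrm{COD}_t\right) V_s}
```

with cell voltage \(U\) (V), current \(I\) (A), electrolysis time \(t\) (h), solution volume \(V_s\) (L), and COD in g/L, which gives Wh per g of COD removed (divide by 1000 for kWh/g COD).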
文摘Software systems play increasing important roles in modern society,and the ability against attacks is of great practical importance to crucial software systems,resulting in that the structure and robustness of software systems have attracted a tremendous amount of interest in recent years.In this paper,based on the source code of Tar and MySQL,we propose an approach to generate coupled software networks and construct three kinds of directed software networks:The function call network,the weakly coupled network and the strongly coupled network.The structural properties of these complex networks are extensively investigated.It is found that the average influence and the average dependence for all functions are the same.Moreover,eight attacking strategies and two robustness indicators(the weakly connected indicator and the strongly connected indicator)are introduced to analyze the robustness of software networks.This shows that the strongly coupled network is just a weakly connected network rather than a strongly connected one.For MySQL,high in-degree strategy outperforms other attacking strategies when the weakly connected indicator is used.On the other hand,high out-degree strategy is a good choice when the strongly connected indicator is adopted.This work will highlight a better understanding of the structure and robustness of software networks.
Abstract: Quantum software development exploits quantum phenomena such as superposition and entanglement to address problems that are challenging for classical systems. However, it must also adhere to critical quantum constraints, notably the no-cloning theorem, which prohibits the exact duplication of unknown quantum states and has profound implications for cryptography, secure communication, and error correction. While existing quantum circuit representations implicitly honor such constraints, they lack formal mechanisms for early-stage verification during software design. Addressing this constraint at the design phase is essential to ensure the correctness and reliability of quantum software. This paper presents a formal metamodeling framework that uses UML-style notation and the Object Constraint Language (OCL) to systematically capture and enforce the no-cloning theorem within quantum software models. The proposed metamodel formalizes key quantum concepts, such as entanglement and teleportation, and encodes enforceable invariants that reflect core quantum mechanical laws. The framework's effectiveness is validated by analyzing two critical edge cases, conditional copying with CNOT gates and quantum teleportation, through instance model evaluations. These cases are often mistaken for violations of the no-cloning theorem, yet formal analysis proves them compliant; they therefore serve as constructive validations of the metamodel's expressiveness and correctness. The approach supports early detection of conceptual design errors, promoting correctness prior to implementation. The framework's extensibility is also demonstrated by modeling projective measurement, further reinforcing its applicability to broader quantum software engineering tasks. By integrating the rigor of metamodeling with fundamental quantum mechanical principles, this work provides a structured, model-driven approach that enables traditional software engineers to address quantum computing challenges. It offers practical insights into embedding quantum correctness at the modeling level and advances the development of reliable, error-resilient quantum software systems.
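The CNOT edge case mentioned above is easy to reproduce numerically: CNOT duplicates computational basis states, which looks like cloning, but applied to an unknown superposition it produces an entangled pair rather than two independent copies. The sketch below (plain NumPy, not the paper's metamodel) shows both behaviors:

```python
import numpy as np

# CNOT on two qubits (control = qubit 0, target = qubit 1),
# in the basis ordering |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def naive_copy(alpha: complex, beta: complex) -> np.ndarray:
    """Apply CNOT to (alpha|0> + beta|1>) tensor |0>, a naive copy attempt."""
    psi = np.kron(np.array([alpha, beta]), np.array([1, 0]))
    return CNOT @ psi

# Basis states are duplicated: |1>|0> -> |1>|1>.
print(naive_copy(0, 1))      # [0, 0, 0, 1], i.e. |11>

# A superposition is NOT copied: the output is the entangled state
# (|00> + |11>)/sqrt(2), not two independent copies of (|0>+|1>)/sqrt(2).
s = 1 / np.sqrt(2)
print(naive_copy(s, s))      # [0.707, 0, 0, 0.707]
```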
Funding: Supported by the CCF-NSFOCUS 'Kunpeng' Research Fund (CCF-NSFOCUS2024012).
Abstract: In recent years, with the rapid development of software systems, the continuous expansion of software scale and the increasing complexity of systems have led to a growing number of software metrics. Defect prediction methods based on software metrics rely heavily on software metric data, but redundant metric data hinders efficient defect prediction, posing severe challenges to current software defect prediction tasks. To address these issues, this paper focuses on the rational clustering of software metric data. First, multiple software projects are evaluated to determine a preset number of clusters for the software metrics, and several clustering methods are employed to cluster the metrics. Subsequently, a co-occurrence matrix is designed to comprehensively quantify the number of times that metrics appear in the same cluster, as sketched in the example below. Based on the combined results, the software metric data are divided into two semantic views containing different metrics, thereby exposing the semantic information behind the software metrics. On this basis, the paper also analyzes in depth the impact of the different semantic views of metrics on defect prediction results, as well as the performance of various classification models under these views. Experiments show that the joint use of the two semantic views can significantly improve model performance in software defect prediction, providing a new understanding and approach at the semantic view level for defect prediction research based on software metrics.
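As a concrete illustration of the co-occurrence step (a minimal sketch with random stand-in data; the cluster count and choice of clustering methods are assumptions, not the paper's setup), the following code clusters the columns of a module-by-metric matrix with several methods and counts how often each pair of metrics lands in the same cluster:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans, SpectralClustering

rng = np.random.default_rng(0)
X = rng.random((500, 20))               # rows: modules, columns: 20 metrics

k = 2                                   # preset number of metric clusters
methods = [
    KMeans(n_clusters=k, n_init=10, random_state=0),
    AgglomerativeClustering(n_clusters=k),
    SpectralClustering(n_clusters=k, random_state=0),
]

n_metrics = X.shape[1]
cooc = np.zeros((n_metrics, n_metrics), dtype=int)
for method in methods:
    labels = method.fit_predict(X.T)    # cluster the metrics, not the modules
    for i in range(n_metrics):
        for j in range(n_metrics):
            cooc[i, j] += int(labels[i] == labels[j])

# Metrics that share a cluster in a majority of methods join one semantic view.
in_view_a = cooc[0] > len(methods) / 2  # membership relative to metric 0
print("view A:", np.where(in_view_a)[0])
print("view B:", np.where(~in_view_a)[0])
```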
Abstract: The development of minimally invasive surgery has transformed the management of gastrointestinal cancer. Notably, three-dimensional visualization systems have increased surgical precision. This editorial discusses a recent study by Shen and Zhang, which compared the clinical applications of naked-eye three-dimensional laparoscopic systems versus traditional optical systems in radical surgery for gastric and colorectal cancer. Both systems appeared to yield comparable surgical and oncological outcomes in terms of safety parameters, operating times, and quality of lymph node dissection. However, the technical and logistical limitations of the spectacle-free system limited its contribution to the surgical team's overall competency. This editorial examines the authors' findings within the broader context of the evolution of oncologic laparoscopy, discusses the relevance of the results in light of the current literature, and proposes future research directions focused on multicenter validation, comprehensive ergonomic analysis, and technological advancements aimed at enhancing intraoperative collaboration. As technology continues to evolve, the clinical implementation of new methods must be supported by robust scientific evidence and standardized criteria to ensure tangible improvements in efficiency, safety, and oncologic outcomes.
Funding: Supported in part by the Teaching Reform Project of Chongqing University of Posts and Telecommunications, China under Grant No. XJG23234; the Chongqing Municipal Higher Education Teaching Reform Research Project under Grant No. 203399; and the Doctoral Direct Train Project of Chongqing Science and Technology Bureau under Grant No. CSTB2022BSXM-JSX0007.
Abstract: The advent of large language models (LLMs) has made knowledge acquisition and content creation ever easier and cheaper, which in turn redefines learning and urges a transformation in software engineering education. To that end, the impact of LLMs on software engineering education needs to be understood. In this paper, we conducted a preliminary case study on three software requirements engineering classes in which students were allowed to use LLMs to assist with their projects. Based on the students' experience, performance, and feedback from a survey conducted at the end of the courses, we characterize the challenges and benefits of applying LLMs in software engineering education. This research contributes to the ongoing discourse on the integration of LLMs in education, emphasizing both their prominent potential and the need for balanced, mindful usage.
Abstract: This paper presents our endeavors in developing the large-scale, ultra-high-resolution E3SM Land Model (uELM), specifically designed for exascale computers furnished with accelerators such as Nvidia GPUs. The uELM is a sophisticated code that relies substantially on High-Performance Computing (HPC) environments, necessitating particular machine and software configurations. To facilitate community-based uELM development on GPUs, we have created a portable, standalone software environment preconfigured with uELM input datasets, simulation cases, and source code. This environment, utilizing Docker, encompasses all essential code, libraries, and system software for uELM development on GPUs. It also features a functional unit test framework and an offline model testbed for comprehensive numerical experiments. From a technical perspective, the paper discusses GPU-ready container generation, uELM code management, and input data distribution across computational platforms. Lastly, the paper demonstrates the use of this environment for functional unit testing, end-to-end simulation on CPUs and GPUs, and collaborative code development.
Abstract: Ensuring software quality in open-source environments requires adaptive mechanisms to enhance scalability, optimize service provisioning, and improve reliability. This study presents a dynamic correlation analysis technique to enhance software quality management in open-source environments by addressing dynamic scalability, adaptive service provisioning, and software reliability. The proposed methodology integrates a scalability metric, an optimized service provisioning model, and a weighted entropy-based reliability assessment to systematically improve key performance parameters. Experimental evaluation conducted on multiple open-source software (OSS) versions demonstrates significant improvements: scalability increased by 27.5%, service provisioning time was reduced by 18.3%, and software reliability improved by 22.1% compared with baseline methods. A comparative analysis with prior works further highlights the effectiveness of this approach in ensuring adaptability, efficiency, and resilience in dynamic software ecosystems. Future work will focus on real-time monitoring and AI-driven adaptive provisioning to further enhance software quality management.
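The abstract does not spell out the weighted entropy-based reliability formula, so the following is only a plausible minimal sketch: it assumes a Shannon-entropy index over weighted failure categories, scaled so that a score near 1 means failures are rare or concentrated. The function name, weighting scheme, and scaling are all hypothetical.

```python
import numpy as np

def weighted_entropy_reliability(failure_probs, weights):
    """Hypothetical weighted-entropy reliability index in [0, 1]."""
    p = np.asarray(failure_probs, dtype=float)
    w = np.asarray(weights, dtype=float)
    p = p / p.sum()                       # normalize to a distribution
    w = w / w.sum()                       # normalize category weights
    log_p = np.log(p, where=p > 0, out=np.zeros_like(p))
    h = -(w * p * log_p).sum()            # weighted Shannon entropy
    h_max = w.max() * np.log(len(p))      # crude upper bound for scaling
    return 1.0 - h / h_max

# Three hypothetical failure categories with severity weights 3:2:1.
print(weighted_entropy_reliability([0.7, 0.2, 0.1], [3, 2, 1]))
```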
Funding: Supported by the Guangxi Science and Technology Major Program (No. AA23073008); the Hubei Key Laboratory of Water System Science for Sponge City Construction (Wuhan University) (No. 2023-05); and the Nanning Innovation and Entrepreneur Leading Talent Project (No. 2021001).
Abstract: The three-dimensional particle electrode system exhibits significant potential for application in wastewater treatment. Nonetheless, developing effective granular electrodes that combine high catalytic activity with low energy consumption remains a significant challenge. In this research, fluorine-doped copper-carbon (F/Cu-GAC) particle electrodes were synthesized through an impregnation-calcination technique, using granular activated carbon as the carrier and fluorine-doped modified copper oxides as the catalytic agents. The particle electrodes were then used to promote the degradation of 2,4,6-trichlorophenol (2,4,6-TCP) in a three-dimensional electrocatalytic reactor (3DER). The F/Cu-GAC particle electrodes were polarized under the electric field, which promoted a heterogeneous Fenton-like reaction in which H_(2)O_(2), generated by the two-electron oxygen reduction reaction (2e-ORR) of O_(2), was catalytically decomposed to ·OH. The 3DER equipped with F/Cu-GAC particle electrodes achieved 100% removal of 2,4,6-TCP and 79.24% removal of TOC, with a specific energy consumption (EC) of approximately 0.019 kWh/g·COD after 2 h of operation. The F/Cu-GAC particle electrodes exhibited an overpotential of 0.38 V and an electrochemically active surface area (ECSA) of 715 cm^(2), as determined through linear sweep voltammetry (LSV) and cyclic voltammetry (CV) measurements. These findings indicate a high level of electrocatalytic performance. Furthermore, the catalytic mechanism of the 3DER equipped with F/Cu-GAC particle electrodes was elucidated through X-ray photoelectron spectroscopy (XPS), electron spin resonance (ESR), and active species capture experiments. This investigation offers a novel approach for the effective degradation of 2,4,6-TCP.
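For readers interested in how a figure such as 0.019 kWh/g·COD is typically obtained, specific energy consumption is conventionally computed as EC = U·I·t / (ΔCOD·V). The sketch below uses hypothetical operating values (not reported in the abstract) chosen only to land near the paper's order of magnitude:

```python
def specific_energy_consumption(u_volts, i_amps, t_hours,
                                delta_cod_g_per_l, volume_l):
    """EC [kWh per g COD removed] = (U * I * t) / (delta_COD * V)."""
    energy_kwh = u_volts * i_amps * t_hours / 1000.0
    cod_removed_g = delta_cod_g_per_l * volume_l
    return energy_kwh / cod_removed_g

# Hypothetical operating point: 5 V cell voltage, 0.5 A current,
# 2 h run, 0.52 g/L COD removed from a 0.5 L reactor.
print(f"{specific_energy_consumption(5.0, 0.5, 2.0, 0.52, 0.5):.3f} kWh/g COD")
# -> 0.019
```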