We study the asymptotics tot the statistic of chi-square in type Ⅱ error. By the contraction principle, the large deviations and moderate deviations are obtained, and the rate function of moderate deviations can be c...We study the asymptotics tot the statistic of chi-square in type Ⅱ error. By the contraction principle, the large deviations and moderate deviations are obtained, and the rate function of moderate deviations can be calculated explicitly which is a squared function.展开更多
Modeling non coding background sequences appropriately is important for the detection of regulatory elements from DNA sequences. Based on the chi square statistic test, some explanations about why to choose higher ...Modeling non coding background sequences appropriately is important for the detection of regulatory elements from DNA sequences. Based on the chi square statistic test, some explanations about why to choose higher order Markov chain model and how to automatically select the proper order are given in this paper. The chi square test is first run on synthetic data sets to show that it can efficiently find the proper order of Markov chain. Using chi square test, distinct higher order context dependences inherent in ten sets of sequences of yeast S.cerevisiae from other literature have been found. So the Markov chain with higher order would be more suitable for modeling the non coding background sequences than an independent model.展开更多
It is well known that smart thermostats (STs) have become key devices in the implementation of smart homes;thus, they are considered as primary elements for the control of electrical energy consumption in households. ...It is well known that smart thermostats (STs) have become key devices in the implementation of smart homes;thus, they are considered as primary elements for the control of electrical energy consumption in households. Moreover, energy consumption is drastically affected when the end users select unsuitable STs or when they do not use the STs correctly. Furthermore, in future, Mexico will face serious electrical energy challenges that can be considerably resolved if the end users operate the STs in a correct manner. Hence, it is important to carry out an in-depth study and analysis on thermostats, by focusing on social aspects that influence the technological use and performance of the thermostats. This paper proposes the use of a signal detection theory (SDT), fuzzy detection theory (FDT), and chi-square (CS) test in order to understand the perceptions and beliefs of end users about the use of STs in Mexico. This paper extensively shows the perceptions and beliefs about the selected thermostats in Mexico. Besides, it presents an in-depth discussion on the cognitive perceptions and beliefs of end users. Moreover, it shows why the expectations of the end users about STs are not met. It also promotes the technological and social development of STs such that they are relatively more accepted in complex electrical grids such as smart grids.展开更多
In large sample studies where distributions may be skewed and not readily transformed to symmetry, it may be of greater interest to compare different distributions in terms of percentiles rather than means. For exampl...In large sample studies where distributions may be skewed and not readily transformed to symmetry, it may be of greater interest to compare different distributions in terms of percentiles rather than means. For example, it may be more informative to compare two or more populations with respect to their within population distributions by testing the hypothesis that their corresponding respective 10th, 50th, and 90th percentiles are equal. As a generalization of the median test, the proposed test statistic is asymptotically distributed as Chi-square with degrees of freedom dependent upon the number of percentiles tested and constraints of the null hypothesis. Results from simulation studies are used to validate the nominal 0.05 significance level under the null hypothesis, and asymptotic power properties that are suitable for testing equality of percentile profiles against selected profile discrepancies for a variety of underlying distributions. A pragmatic example is provided to illustrate the comparison of the percentile profiles for four body mass index distributions.展开更多
Zero-inflated distributions are common in statistical problems where there is interest in testing homogeneity of two or more independent groups. Often, the underlying distribution that has an inflated number of zero-v...Zero-inflated distributions are common in statistical problems where there is interest in testing homogeneity of two or more independent groups. Often, the underlying distribution that has an inflated number of zero-valued observations is asymmetric, and its functional form may not be known or easily characterized. In this case, comparisons of the groups in terms of their respective percentiles may be appropriate as these estimates are nonparametric and more robust to outliers and other irregularities. The median test is often used to compare distributions with similar but asymmetric shapes but may be uninformative when there are excess zeros or dissimilar shapes. For zero-inflated distributions, it is useful to compare the distributions with respect to their proportion of zeros, coupled with the comparison of percentile profiles for the observed non-zero values. A simple chi-square test for simultaneous testing of these two components is proposed, applicable to both continuous and discrete data. Results of simulation studies are reported to summarize empirical power under several scenarios. We give recommendations for the minimum sample size which is necessary to achieve suitable test performance in specific examples.展开更多
A new six-parameter continuous distribution called the Generalized Kumaraswamy Generalized Power Gompertz (GKGPG) distribution is proposed in this study, a graphical illustration of the probability density function an...A new six-parameter continuous distribution called the Generalized Kumaraswamy Generalized Power Gompertz (GKGPG) distribution is proposed in this study, a graphical illustration of the probability density function and cumulative distribution function is presented. The statistical features of the Generalized Kumaraswamy Generalized Power Gompertz distribution are systematically derived and adequately studied. The estimation of the model parameters in the absence of censoring and under-right censoring is performed using the method of maximum likelihood. The test statistic for right-censored data, criteria test for GKGPG distribution, estimated matrix Ŵ, Ĉ, and Ĝ, criteria test Y<sup>2</sup>n</sub>, alongside the quadratic form of the test statistic is derived. Mean simulated values of maximum likelihood estimates and their corresponding square mean errors are presented and confirmed to agree closely with the true parameter values. Simulated levels of significance for Y<sup>2</sup>n</sub> (γ) test for the GKGPG model against their theoretical values were recorded. We conclude that the null hypothesis for which simulated samples are fitted by GKGPG distribution is widely validated for the different levels of significance considered. From the summary of the results of the strength of a specific type of braided cord dataset on the GKGPG model, it is observed that the proposed GKGPG model fits the data set for a significance level ε = 0.05.展开更多
Test case prioritization and ranking play a crucial role in software testing by improving fault detection efficiency and ensuring software reliability.While prioritization selects the most relevant test cases for opti...Test case prioritization and ranking play a crucial role in software testing by improving fault detection efficiency and ensuring software reliability.While prioritization selects the most relevant test cases for optimal coverage,ranking further refines their execution order to detect critical faults earlier.This study investigates machine learning techniques to enhance both prioritization and ranking,contributing to more effective and efficient testing processes.We first employ advanced feature engineering alongside ensemble models,including Gradient Boosted,Support Vector Machines,Random Forests,and Naive Bayes classifiers to optimize test case prioritization,achieving an accuracy score of 0.98847 and significantly improving the Average Percentage of Fault Detection(APFD).Subsequently,we introduce a deep Q-learning framework combined with a Genetic Algorithm(GA)to refine test case ranking within priority levels.This approach achieves a rank accuracy of 0.9172,demonstrating robust performance despite the increasing computational demands of specialized variation operators.Our findings highlight the effectiveness of stacked ensemble learning and reinforcement learning in optimizing test case prioritization and ranking.This integrated approach improves testing efficiency,reduces late-stage defects,and improves overall software stability.The study provides valuable information for AI-driven testing frameworks,paving the way for more intelligent and adaptive software quality assurance methodologies.展开更多
With the rapid development of artificial intelligence,the intelligence level of software is increasingly improving.Intelligent software,which is widely applied in crucial fields such as autonomous driving,intelligent ...With the rapid development of artificial intelligence,the intelligence level of software is increasingly improving.Intelligent software,which is widely applied in crucial fields such as autonomous driving,intelligent customer service,and medical diagnosis,is constructed based on complex technologies like machine learning and deep learning.Its uncertain behavior and data dependence pose unprecedented challenges to software testing.However,existing software testing courses mainly focus on conventional contents and are unable to meet the requirements of intelligent software testing.Therefore,this work deeply analyzed the relevant technologies of intelligent software testing,including reliability evaluation indicator system,neuron coverage,and test case generation.It also systematically designed an intelligent software testing course,covering teaching objectives,teaching content,teaching methods,and a teaching case.Verified by the practical teaching in four classes,this course has achieved remarkable results,providing practical experience for the reform of software testing courses.展开更多
With the rapid development of Internet technology,REST APIs(Representational State Transfer Application Programming Interfaces)have become the primary communication standard in modern microservice architectures,raisin...With the rapid development of Internet technology,REST APIs(Representational State Transfer Application Programming Interfaces)have become the primary communication standard in modern microservice architectures,raising increasing concerns about their security.Existing fuzz testing methods include random or dictionary-based input generation,which often fail to ensure both syntactic and semantic correctness,and OpenAPIbased approaches,which offer better accuracy but typically lack detailed descriptions of endpoints,parameters,or data formats.To address these issues,this paper proposes the APIDocX fuzz testing framework.It introduces a crawler tailored for dynamic web pages that automatically simulates user interactions to trigger APIs,capturing and extracting parameter information from communication packets.A multi-endpoint parameter adaptation method based on improved Jaccard similarity is then used to generalize these parameters to other potential API endpoints,filling in gaps in OpenAPI specifications.Experimental results demonstrate that the extracted parameters can be generalized with 79.61%accuracy.Fuzz testing using the enriched OpenAPI documents leads to improvements in test coverage,the number of valid test cases generated,and fault detection capabilities.This approach offers an effective enhancement to automated REST API security testing.展开更多
Cloud services,favored by many enterprises due to their high flexibility and easy operation,are widely used for data storage and processing.However,the high latency,together with transmission overheads of the cloud ar...Cloud services,favored by many enterprises due to their high flexibility and easy operation,are widely used for data storage and processing.However,the high latency,together with transmission overheads of the cloud architecture,makes it difficult to quickly respond to the demands of IoT applications and local computation.To make up for these deficiencies in the cloud,fog computing has emerged as a critical role in the IoT applications.It decentralizes the computing power to various lower nodes close to data sources,so as to achieve the goal of low latency and distributed processing.With the data being frequently exchanged and shared between multiple nodes,it becomes a challenge to authorize data securely and efficiently while protecting user privacy.To address this challenge,proxy re-encryption(PRE)schemes provide a feasible way allowing an intermediary proxy node to re-encrypt ciphertext designated for different authorized data requesters without compromising any plaintext information.Since the proxy is viewed as a semi-trusted party,it should be taken to prevent malicious behaviors and reduce the risk of data leakage when implementing PRE schemes.This paper proposes a new fog-assisted identity-based PRE scheme supporting anonymous key generation,equality test,and user revocation to fulfill various IoT application requirements.Specifically,in a traditional identity-based public key architecture,the key escrow problem and the necessity of a secure channel are major security concerns.We utilize an anonymous key generation technique to solve these problems.The equality test functionality further enables a cloud server to inspect whether two candidate trapdoors contain an identical keyword.In particular,the proposed scheme realizes fine-grained user-level authorization while maintaining strong key confidentiality.To revoke an invalid user identity,we add a revocation list to the system flows to restrict access privileges without increasing additional computation cost.To ensure security,it is shown that our system meets the security notion of IND-PrID-CCA and OW-ID-CCA under the Decisional Bilinear Diffie-Hellman(DBDH)assumption.展开更多
Lateral flow immunoassays(LFIAs)are low-cost,rapid,and easy to use for pointof-care testing(POCT),but the majority of the available LFIA tests are indicative,rather than quantitative,and their sensitivity in antigen t...Lateral flow immunoassays(LFIAs)are low-cost,rapid,and easy to use for pointof-care testing(POCT),but the majority of the available LFIA tests are indicative,rather than quantitative,and their sensitivity in antigen tests are usually limited at the nanogram range,which is primarily due to the passive capillary fluidics through nitrocellulose membranes,often associated with non-specific bindings and high background noise.To overcome this challenge,we report a Beads-on-a-Tip design by replacing nitrocellulose membranes with a pipette tip loaded with magnetic beads.The beads are pre-conjugated with capture antibodies that support a typical sandwich immunoassay.This design enriches the low-abundant antigen proteins and allows an active washing process to significantly reduce non-specific bindings.To further improve the detection sensitivity,we employed upconversion nanoparticles(UCNPs)as luminescent reporters and SARS-CoV-2 spike(S)antigen as a model analyte to benchmark the performance of this design against our previously reported methods.We found that the key to enhance the immunocomplex formation and signal-to-noise ratio lay in optimizing incubation time and the UCNP-to-bead ratio.We therefore successfully demonstrated that the new method can achieve a very large dynamic range from 500 fg/mL to 10μg/mL,across over 7 digits,and a limit of detection of 706 fg/mL,nearly another order of magnitude lower than the best reported LFIA using UCNPs in COVID-19 spike antigen detection.Our system offers a promising solution for ultra-sensitive and quantitative POCT diagnostics.展开更多
Parallel machining robot is a new type of robotized equipment for high-efficiency machining structural com-ponents with complex geometries.Terminal rigidity is of great importance index for such type of equipment,whic...Parallel machining robot is a new type of robotized equipment for high-efficiency machining structural com-ponents with complex geometries.Terminal rigidity is of great importance index for such type of equipment,which affects their load capacity and working accuracy.Before a parallel machining robot can be used for heavy-load and high-efficiency machining,its terminal rigidity should be evaluated systematically.The present study is to quantitatively reveal the stiffness properties of a previously invented Z4 redundantly actuated parallel ma-chining robot(RAPMR).For this purpose,two critical issues,i.e.,stiffness modelling and index construction,are clarified to carry out stiffness evaluation of the Z4 RAPMR.Firstly,drawing on the screw theory,a semi-analytic stiffness model of the proposed RAPMR is established at a component level.Secondly,a set of virtual work-based stiffness indices is constructed to evaluate the terminal rigidity of parallel robots.Those indices have a consistent physical unit in describing linear and angular terminal rigidity.With these indices,the local and the global stiffness performance of the Z4 RAPMR are predicted.Thirdly,a laboratory prototype of the proposed RAPMR is fabricated.And the experimental test is performed to verify the correctness of the established stiffness model.The present work is expected to provide fundamental information for further light-weight design and rigidity enhancement.展开更多
The dynamic characteristics of the track system can directly affect its service performance and failure process.To explore the load characteristics and dynamic response of the track system under the dynamic loads from...The dynamic characteristics of the track system can directly affect its service performance and failure process.To explore the load characteristics and dynamic response of the track system under the dynamic loads from the rack vehicle in traction conditions,a systematic test of the track subsystem was carried out on a large-slope test line.In the test,the bending stress of the rack teeth,the wheel-rail forces,and the acceleration of crucial components in the track system were measured.Subsequently,a detailed analysis was conducted on the tested signals of the rack railway track system in the time domain and the time-frequency domains.The test results indicate that the traction force significantly affects the rack tooth bending stress and the wheel-rail forces.The vibrations of the track system under the traction conditions are mainly caused by the impacts generated from the gear-rack engagement,which are then transferred to the sleepers,the rails,and the ballast beds.Furthermore,both the maximum stress on the racks and the wheel-rail forces measured on the rails remain below their allowable values.This experimental study evaluates the load characteristics and reveals the vibration characteristics of the rack railway track system under the vehicle’s ultimate load,which is very important for the load-strengthening design of the key components such as racks and the vibration and noise reduction of the track system.展开更多
Severe failures of nonstructural components have occurred during previous earthquakes.Claddings are one of the most widely used nonstructural component and are installed in many modern buildings;therefore,an evaluatio...Severe failures of nonstructural components have occurred during previous earthquakes.Claddings are one of the most widely used nonstructural component and are installed in many modern buildings;therefore,an evaluation of their seismic performance is important and cannot be ignored.To investigate the seismic performance of large-sized high performance concrete cladding(HPCC),a series of full-scale experimental tests were conducted using a unidirectional shaking table.A steel supporting frame was used to install the HPCCs and reproduce the effects of the building under earthquake.The tests were divided into two parts:in-plane(IP)testing and out-plane(OP)testing.Three recorded accelerograms,one artificial accelerogram,and one sinusoidal accelerogram were used to conduct the shaking table tests.The results show that the maximum recorded IP responses of acceleration and interstory drift ratio were 1.04 g and 1/97,while the OP responses were 1.02 g and 1/51.The HPCCs functioned well throughout the entire experimental protocol.The fundamental frequency of the HPCCs systems rarely changed after the tests.展开更多
The stress-strain behavior of calcareous sand is significantly influencedby particle breakage(B)and initial relative density(Dri),but few constitutive models consider their combined effects.To bridge this gap,we condu...The stress-strain behavior of calcareous sand is significantly influencedby particle breakage(B)and initial relative density(Dri),but few constitutive models consider their combined effects.To bridge this gap,we conducted a series of triaxial tests on calcareous sand with varying Dri and stress paths,examining particle breakage and critical state behavior.Key findingsinclude:(1)At a constant stress ratio(η),B follows a hyperbolic relationship with mean effective stress(p'),and for a given p',B increases proportionally withη;(2)The critical state line(CSL)moves downward with increasing Dri,whereas the critical state friction angle(φcs)decreases with increasing B.Based on these findings,we propose a unifiedbreakage evolution model to quantify particle breakage in calcareous sand under various loading conditions.Integrating this model with the Normal Consolidation Line(NCL)and CSL equations,we successfully simulate the steepening of NCL and CSL slopes as B increases with the onset of particle breakage.Furthermore,we quantitatively evaluate the effect of B onφcs.Finally,within the framework of Critical State Soil Mechanics and Hypoplasticity theory,we develop a hypoplastic model incorporating B and Dri.The model is validated through strong agreement with experimental results across various initial relative densities,stress paths and drainage conditions.展开更多
X-rays are widely used in the non-destructive testing(NDT)of electrical equipment.Radio frequency(RF)electron linear accelerators can generate MeV high-energy X-rays with strong penetrating ability;however,the system ...X-rays are widely used in the non-destructive testing(NDT)of electrical equipment.Radio frequency(RF)electron linear accelerators can generate MeV high-energy X-rays with strong penetrating ability;however,the system generally has a large scale,which is not suitable for on-site testing.Compared with the S-band(S-linac)at the same stage of beam energy,the accelerator working in the X-band(X-linac)can compress the facility scale by over 2/3 in the longitudinal direction,which is convenient for the on-site NDT of electrical equipment.To address the beam quality and design complexity simultaneously,the non-dominated sorting genetic algorithmⅡ(NSGA-Ⅱ),which is a multi-objective genetic algorithm(MOGA),was developed to optimize the cavity chain design of the X-linac.Additionally,the designs of the focusing coils,electron gun,and RF couplers,which are other key components of the X-linac,were introduced in this context.In particular,the focusing coil distributions were optimized using a genetic algorithm.Furthermore,after designing such key components,PARMELA software was adopted to perform beam dynamics calculations with the optimized accelerating fields and magnetic fields.The results show that the beam performance was obtained with a capture ratio of more than 90%,an energy spread of less than 10%,and an average energy of approximately 3 MeV.The design and simulation results indicate that the proposed NSGAⅡ-based approach is feasible for X-linac accelerator design.Furthermore,it can be generalized as a universal technique for industrial electron linear accelerators provided that specific optimization objectives and constraints are set according to different application scenarios and requirements.展开更多
Underwater gas-liquid two-phase propulsion technology is an emerging propulsion method that offers high efficiency and unrestricted navigation speed.The integration of this technology into water ramjet engines can sig...Underwater gas-liquid two-phase propulsion technology is an emerging propulsion method that offers high efficiency and unrestricted navigation speed.The integration of this technology into water ramjet engines can significantly enhance propulsion efficiency and holds substantial potential for broad applications.However,forming a gas-liquid two-phase flow within the nozzle requires introducing a large amount of rammed seawater.At this time,there is a complex phase transition problem of combustion products in the combustion chamber,which makes the thermodynamic calculation for gas-liquid two-phase water ramjet engines particularly challenging.This paper proposes a thermodynamic calculation method for gas-liquid two-phase water ramjet engines,based on the energy equation for gas-liquid two-phase flow and traditional thermodynamic principles,enabling thermodynamic calculations under conditions of ultra-high water-fuel ratios.Additionally,ground ignition tests of the gas-liquid two-phase engine were conducted,yielding critical engine test parameters.The results demonstrate that the gas-liquid two-phase water ramjet engine achieves a high specific impulse,with a theoretical maximum specific impulse of up to 7000(N s)/kg.The multiphase flow effects significantly impact engine performance,with specific impulse losses reaching up to 25.86%.The error between the thrust and specific impulse in the ground test and the theoretical values is within 10%,validating the proposed thermodynamic calculation method as a reliable reference for further research on gas-liquid two-phase water ramjet engines.展开更多
In real-world autonomous driving tests,unexpected events such as pedestrians or wild animals suddenly entering the driving path can occur.Conducting actual test drives under various weather conditions may also lead to...In real-world autonomous driving tests,unexpected events such as pedestrians or wild animals suddenly entering the driving path can occur.Conducting actual test drives under various weather conditions may also lead to dangerous situations.Furthermore,autonomous vehicles may operate abnormally in bad weather due to limitations of their sensors and GPS.Driving simulators,which replicate driving conditions nearly identical to those in the real world,can drastically reduce the time and cost required for market entry validation;consequently,they have become widely used.In this paper,we design a virtual driving test environment capable of collecting and verifying SiLS data under adverse weather conditions using multi-source images.The proposed method generates a virtual testing environment that incorporates various events,including weather,time of day,and moving objects,that cannot be easily verified in real-world autonomous driving tests.By setting up scenario-based virtual environment events,multi-source image analysis and verification using real-world DCUs(Data Concentrator Units)with V2X-Car edge cloud can effectively address risk factors that may arise in real-world situations.We tested and validated the proposed method with scenarios employing V2X communication and multi-source image analysis.展开更多
基金the National Natural Science Foundation of China (10571139)
文摘We study the asymptotics tot the statistic of chi-square in type Ⅱ error. By the contraction principle, the large deviations and moderate deviations are obtained, and the rate function of moderate deviations can be calculated explicitly which is a squared function.
文摘Modeling non coding background sequences appropriately is important for the detection of regulatory elements from DNA sequences. Based on the chi square statistic test, some explanations about why to choose higher order Markov chain model and how to automatically select the proper order are given in this paper. The chi square test is first run on synthetic data sets to show that it can efficiently find the proper order of Markov chain. Using chi square test, distinct higher order context dependences inherent in ten sets of sequences of yeast S.cerevisiae from other literature have been found. So the Markov chain with higher order would be more suitable for modeling the non coding background sequences than an independent model.
文摘It is well known that smart thermostats (STs) have become key devices in the implementation of smart homes;thus, they are considered as primary elements for the control of electrical energy consumption in households. Moreover, energy consumption is drastically affected when the end users select unsuitable STs or when they do not use the STs correctly. Furthermore, in future, Mexico will face serious electrical energy challenges that can be considerably resolved if the end users operate the STs in a correct manner. Hence, it is important to carry out an in-depth study and analysis on thermostats, by focusing on social aspects that influence the technological use and performance of the thermostats. This paper proposes the use of a signal detection theory (SDT), fuzzy detection theory (FDT), and chi-square (CS) test in order to understand the perceptions and beliefs of end users about the use of STs in Mexico. This paper extensively shows the perceptions and beliefs about the selected thermostats in Mexico. Besides, it presents an in-depth discussion on the cognitive perceptions and beliefs of end users. Moreover, it shows why the expectations of the end users about STs are not met. It also promotes the technological and social development of STs such that they are relatively more accepted in complex electrical grids such as smart grids.
文摘In large sample studies where distributions may be skewed and not readily transformed to symmetry, it may be of greater interest to compare different distributions in terms of percentiles rather than means. For example, it may be more informative to compare two or more populations with respect to their within population distributions by testing the hypothesis that their corresponding respective 10th, 50th, and 90th percentiles are equal. As a generalization of the median test, the proposed test statistic is asymptotically distributed as Chi-square with degrees of freedom dependent upon the number of percentiles tested and constraints of the null hypothesis. Results from simulation studies are used to validate the nominal 0.05 significance level under the null hypothesis, and asymptotic power properties that are suitable for testing equality of percentile profiles against selected profile discrepancies for a variety of underlying distributions. A pragmatic example is provided to illustrate the comparison of the percentile profiles for four body mass index distributions.
文摘Zero-inflated distributions are common in statistical problems where there is interest in testing homogeneity of two or more independent groups. Often, the underlying distribution that has an inflated number of zero-valued observations is asymmetric, and its functional form may not be known or easily characterized. In this case, comparisons of the groups in terms of their respective percentiles may be appropriate as these estimates are nonparametric and more robust to outliers and other irregularities. The median test is often used to compare distributions with similar but asymmetric shapes but may be uninformative when there are excess zeros or dissimilar shapes. For zero-inflated distributions, it is useful to compare the distributions with respect to their proportion of zeros, coupled with the comparison of percentile profiles for the observed non-zero values. A simple chi-square test for simultaneous testing of these two components is proposed, applicable to both continuous and discrete data. Results of simulation studies are reported to summarize empirical power under several scenarios. We give recommendations for the minimum sample size which is necessary to achieve suitable test performance in specific examples.
文摘A new six-parameter continuous distribution called the Generalized Kumaraswamy Generalized Power Gompertz (GKGPG) distribution is proposed in this study, a graphical illustration of the probability density function and cumulative distribution function is presented. The statistical features of the Generalized Kumaraswamy Generalized Power Gompertz distribution are systematically derived and adequately studied. The estimation of the model parameters in the absence of censoring and under-right censoring is performed using the method of maximum likelihood. The test statistic for right-censored data, criteria test for GKGPG distribution, estimated matrix Ŵ, Ĉ, and Ĝ, criteria test Y<sup>2</sup>n</sub>, alongside the quadratic form of the test statistic is derived. Mean simulated values of maximum likelihood estimates and their corresponding square mean errors are presented and confirmed to agree closely with the true parameter values. Simulated levels of significance for Y<sup>2</sup>n</sub> (γ) test for the GKGPG model against their theoretical values were recorded. We conclude that the null hypothesis for which simulated samples are fitted by GKGPG distribution is widely validated for the different levels of significance considered. From the summary of the results of the strength of a specific type of braided cord dataset on the GKGPG model, it is observed that the proposed GKGPG model fits the data set for a significance level ε = 0.05.
文摘Test case prioritization and ranking play a crucial role in software testing by improving fault detection efficiency and ensuring software reliability.While prioritization selects the most relevant test cases for optimal coverage,ranking further refines their execution order to detect critical faults earlier.This study investigates machine learning techniques to enhance both prioritization and ranking,contributing to more effective and efficient testing processes.We first employ advanced feature engineering alongside ensemble models,including Gradient Boosted,Support Vector Machines,Random Forests,and Naive Bayes classifiers to optimize test case prioritization,achieving an accuracy score of 0.98847 and significantly improving the Average Percentage of Fault Detection(APFD).Subsequently,we introduce a deep Q-learning framework combined with a Genetic Algorithm(GA)to refine test case ranking within priority levels.This approach achieves a rank accuracy of 0.9172,demonstrating robust performance despite the increasing computational demands of specialized variation operators.Our findings highlight the effectiveness of stacked ensemble learning and reinforcement learning in optimizing test case prioritization and ranking.This integrated approach improves testing efficiency,reduces late-stage defects,and improves overall software stability.The study provides valuable information for AI-driven testing frameworks,paving the way for more intelligent and adaptive software quality assurance methodologies.
基金Computer Basic Education Teaching Research Project of Association of Fundamental Computing Education in Chinese Universities(Nos.2025-AFCEC-527 and 2024-AFCEC-088)Research on the Reform of Public Course Teaching at Nantong College of Science and Technology(No.2024JGG015).
文摘With the rapid development of artificial intelligence,the intelligence level of software is increasingly improving.Intelligent software,which is widely applied in crucial fields such as autonomous driving,intelligent customer service,and medical diagnosis,is constructed based on complex technologies like machine learning and deep learning.Its uncertain behavior and data dependence pose unprecedented challenges to software testing.However,existing software testing courses mainly focus on conventional contents and are unable to meet the requirements of intelligent software testing.Therefore,this work deeply analyzed the relevant technologies of intelligent software testing,including reliability evaluation indicator system,neuron coverage,and test case generation.It also systematically designed an intelligent software testing course,covering teaching objectives,teaching content,teaching methods,and a teaching case.Verified by the practical teaching in four classes,this course has achieved remarkable results,providing practical experience for the reform of software testing courses.
基金supported by the Open Foundation of Key Laboratory of Cyberspace Security,Ministry of Education of China(KLCS20240211)。
文摘With the rapid development of Internet technology,REST APIs(Representational State Transfer Application Programming Interfaces)have become the primary communication standard in modern microservice architectures,raising increasing concerns about their security.Existing fuzz testing methods include random or dictionary-based input generation,which often fail to ensure both syntactic and semantic correctness,and OpenAPIbased approaches,which offer better accuracy but typically lack detailed descriptions of endpoints,parameters,or data formats.To address these issues,this paper proposes the APIDocX fuzz testing framework.It introduces a crawler tailored for dynamic web pages that automatically simulates user interactions to trigger APIs,capturing and extracting parameter information from communication packets.A multi-endpoint parameter adaptation method based on improved Jaccard similarity is then used to generalize these parameters to other potential API endpoints,filling in gaps in OpenAPI specifications.Experimental results demonstrate that the extracted parameters can be generalized with 79.61%accuracy.Fuzz testing using the enriched OpenAPI documents leads to improvements in test coverage,the number of valid test cases generated,and fault detection capabilities.This approach offers an effective enhancement to automated REST API security testing.
基金supported in part by the National Science and Technology Council of Taiwan under the contract numbers NSTC 114-2221-E-019-055-MY2 and NSTC 114-2221-E-019-069.
文摘Cloud services,favored by many enterprises due to their high flexibility and easy operation,are widely used for data storage and processing.However,the high latency,together with transmission overheads of the cloud architecture,makes it difficult to quickly respond to the demands of IoT applications and local computation.To make up for these deficiencies in the cloud,fog computing has emerged as a critical role in the IoT applications.It decentralizes the computing power to various lower nodes close to data sources,so as to achieve the goal of low latency and distributed processing.With the data being frequently exchanged and shared between multiple nodes,it becomes a challenge to authorize data securely and efficiently while protecting user privacy.To address this challenge,proxy re-encryption(PRE)schemes provide a feasible way allowing an intermediary proxy node to re-encrypt ciphertext designated for different authorized data requesters without compromising any plaintext information.Since the proxy is viewed as a semi-trusted party,it should be taken to prevent malicious behaviors and reduce the risk of data leakage when implementing PRE schemes.This paper proposes a new fog-assisted identity-based PRE scheme supporting anonymous key generation,equality test,and user revocation to fulfill various IoT application requirements.Specifically,in a traditional identity-based public key architecture,the key escrow problem and the necessity of a secure channel are major security concerns.We utilize an anonymous key generation technique to solve these problems.The equality test functionality further enables a cloud server to inspect whether two candidate trapdoors contain an identical keyword.In particular,the proposed scheme realizes fine-grained user-level authorization while maintaining strong key confidentiality.To revoke an invalid user identity,we add a revocation list to the system flows to restrict access privileges without increasing additional computation cost.To ensure security,it is shown that our system meets the security notion of IND-PrID-CCA and OW-ID-CCA under the Decisional Bilinear Diffie-Hellman(DBDH)assumption.
基金financially supported by ARC Linkage project(LP210200642)ARC Center of Excellence for Quantum Biotechnology(grant no.CE230100021)+1 种基金National Health and Medical Research Council Investigator Fellowship—(grant no.APP2017499)Chan Zuckerberg Initiative Deep Tissue Imaging Phase 2(grant no.DT12-0000000182).
文摘Lateral flow immunoassays(LFIAs)are low-cost,rapid,and easy to use for pointof-care testing(POCT),but the majority of the available LFIA tests are indicative,rather than quantitative,and their sensitivity in antigen tests are usually limited at the nanogram range,which is primarily due to the passive capillary fluidics through nitrocellulose membranes,often associated with non-specific bindings and high background noise.To overcome this challenge,we report a Beads-on-a-Tip design by replacing nitrocellulose membranes with a pipette tip loaded with magnetic beads.The beads are pre-conjugated with capture antibodies that support a typical sandwich immunoassay.This design enriches the low-abundant antigen proteins and allows an active washing process to significantly reduce non-specific bindings.To further improve the detection sensitivity,we employed upconversion nanoparticles(UCNPs)as luminescent reporters and SARS-CoV-2 spike(S)antigen as a model analyte to benchmark the performance of this design against our previously reported methods.We found that the key to enhance the immunocomplex formation and signal-to-noise ratio lay in optimizing incubation time and the UCNP-to-bead ratio.We therefore successfully demonstrated that the new method can achieve a very large dynamic range from 500 fg/mL to 10μg/mL,across over 7 digits,and a limit of detection of 706 fg/mL,nearly another order of magnitude lower than the best reported LFIA using UCNPs in COVID-19 spike antigen detection.Our system offers a promising solution for ultra-sensitive and quantitative POCT diagnostics.
基金Supported by National Natural Science Foundation of China(Grant No.52375009)Fujian Provincial Young and Middle-Aged Teacher Education Research Project of China(Grant No.JAT220029).
文摘Parallel machining robot is a new type of robotized equipment for high-efficiency machining structural com-ponents with complex geometries.Terminal rigidity is of great importance index for such type of equipment,which affects their load capacity and working accuracy.Before a parallel machining robot can be used for heavy-load and high-efficiency machining,its terminal rigidity should be evaluated systematically.The present study is to quantitatively reveal the stiffness properties of a previously invented Z4 redundantly actuated parallel ma-chining robot(RAPMR).For this purpose,two critical issues,i.e.,stiffness modelling and index construction,are clarified to carry out stiffness evaluation of the Z4 RAPMR.Firstly,drawing on the screw theory,a semi-analytic stiffness model of the proposed RAPMR is established at a component level.Secondly,a set of virtual work-based stiffness indices is constructed to evaluate the terminal rigidity of parallel robots.Those indices have a consistent physical unit in describing linear and angular terminal rigidity.With these indices,the local and the global stiffness performance of the Z4 RAPMR are predicted.Thirdly,a laboratory prototype of the proposed RAPMR is fabricated.And the experimental test is performed to verify the correctness of the established stiffness model.The present work is expected to provide fundamental information for further light-weight design and rigidity enhancement.
基金supported by the National Natural Science Foundation of China(No.52388102)the Sichuan Science and Technology Program(No.2024NSFTD0011)the Fundamental Research Funds for the State Key Laboratory of Rail Transit Vehicle System of Southwest Jiaotong University(No.2023TPL-T11).
文摘The dynamic characteristics of the track system can directly affect its service performance and failure process.To explore the load characteristics and dynamic response of the track system under the dynamic loads from the rack vehicle in traction conditions,a systematic test of the track subsystem was carried out on a large-slope test line.In the test,the bending stress of the rack teeth,the wheel-rail forces,and the acceleration of crucial components in the track system were measured.Subsequently,a detailed analysis was conducted on the tested signals of the rack railway track system in the time domain and the time-frequency domains.The test results indicate that the traction force significantly affects the rack tooth bending stress and the wheel-rail forces.The vibrations of the track system under the traction conditions are mainly caused by the impacts generated from the gear-rack engagement,which are then transferred to the sleepers,the rails,and the ballast beds.Furthermore,both the maximum stress on the racks and the wheel-rail forces measured on the rails remain below their allowable values.This experimental study evaluates the load characteristics and reveals the vibration characteristics of the rack railway track system under the vehicle’s ultimate load,which is very important for the load-strengthening design of the key components such as racks and the vibration and noise reduction of the track system.
基金National Key R&D Program of China under Grant No.2024YFD1600404。
文摘Severe failures of nonstructural components have occurred during previous earthquakes.Claddings are one of the most widely used nonstructural component and are installed in many modern buildings;therefore,an evaluation of their seismic performance is important and cannot be ignored.To investigate the seismic performance of large-sized high performance concrete cladding(HPCC),a series of full-scale experimental tests were conducted using a unidirectional shaking table.A steel supporting frame was used to install the HPCCs and reproduce the effects of the building under earthquake.The tests were divided into two parts:in-plane(IP)testing and out-plane(OP)testing.Three recorded accelerograms,one artificial accelerogram,and one sinusoidal accelerogram were used to conduct the shaking table tests.The results show that the maximum recorded IP responses of acceleration and interstory drift ratio were 1.04 g and 1/97,while the OP responses were 1.02 g and 1/51.The HPCCs functioned well throughout the entire experimental protocol.The fundamental frequency of the HPCCs systems rarely changed after the tests.
基金support to this study from the National Natural Science Foundation of China,NSFC(Grant No.52278367)The Belt and Road Special Foundation of the National Key Laboratory ofWater Disaster Prevention(Grant No.2024nkms08).
文摘The stress-strain behavior of calcareous sand is significantly influencedby particle breakage(B)and initial relative density(Dri),but few constitutive models consider their combined effects.To bridge this gap,we conducted a series of triaxial tests on calcareous sand with varying Dri and stress paths,examining particle breakage and critical state behavior.Key findingsinclude:(1)At a constant stress ratio(η),B follows a hyperbolic relationship with mean effective stress(p'),and for a given p',B increases proportionally withη;(2)The critical state line(CSL)moves downward with increasing Dri,whereas the critical state friction angle(φcs)decreases with increasing B.Based on these findings,we propose a unifiedbreakage evolution model to quantify particle breakage in calcareous sand under various loading conditions.Integrating this model with the Normal Consolidation Line(NCL)and CSL equations,we successfully simulate the steepening of NCL and CSL slopes as B increases with the onset of particle breakage.Furthermore,we quantitatively evaluate the effect of B onφcs.Finally,within the framework of Critical State Soil Mechanics and Hypoplasticity theory,we develop a hypoplastic model incorporating B and Dri.The model is validated through strong agreement with experimental results across various initial relative densities,stress paths and drainage conditions.
基金supported by the National Natural Science Foundation of China(Nos.12341501 and 12575164)。
文摘X-rays are widely used in the non-destructive testing(NDT)of electrical equipment.Radio frequency(RF)electron linear accelerators can generate MeV high-energy X-rays with strong penetrating ability;however,the system generally has a large scale,which is not suitable for on-site testing.Compared with the S-band(S-linac)at the same stage of beam energy,the accelerator working in the X-band(X-linac)can compress the facility scale by over 2/3 in the longitudinal direction,which is convenient for the on-site NDT of electrical equipment.To address the beam quality and design complexity simultaneously,the non-dominated sorting genetic algorithmⅡ(NSGA-Ⅱ),which is a multi-objective genetic algorithm(MOGA),was developed to optimize the cavity chain design of the X-linac.Additionally,the designs of the focusing coils,electron gun,and RF couplers,which are other key components of the X-linac,were introduced in this context.In particular,the focusing coil distributions were optimized using a genetic algorithm.Furthermore,after designing such key components,PARMELA software was adopted to perform beam dynamics calculations with the optimized accelerating fields and magnetic fields.The results show that the beam performance was obtained with a capture ratio of more than 90%,an energy spread of less than 10%,and an average energy of approximately 3 MeV.The design and simulation results indicate that the proposed NSGAⅡ-based approach is feasible for X-linac accelerator design.Furthermore,it can be generalized as a universal technique for industrial electron linear accelerators provided that specific optimization objectives and constraints are set according to different application scenarios and requirements.
基金supported by the Stable Support Fund forBasic Disciplines,China(No.3072024WD0201)。
文摘Underwater gas-liquid two-phase propulsion technology is an emerging propulsion method that offers high efficiency and unrestricted navigation speed.The integration of this technology into water ramjet engines can significantly enhance propulsion efficiency and holds substantial potential for broad applications.However,forming a gas-liquid two-phase flow within the nozzle requires introducing a large amount of rammed seawater.At this time,there is a complex phase transition problem of combustion products in the combustion chamber,which makes the thermodynamic calculation for gas-liquid two-phase water ramjet engines particularly challenging.This paper proposes a thermodynamic calculation method for gas-liquid two-phase water ramjet engines,based on the energy equation for gas-liquid two-phase flow and traditional thermodynamic principles,enabling thermodynamic calculations under conditions of ultra-high water-fuel ratios.Additionally,ground ignition tests of the gas-liquid two-phase engine were conducted,yielding critical engine test parameters.The results demonstrate that the gas-liquid two-phase water ramjet engine achieves a high specific impulse,with a theoretical maximum specific impulse of up to 7000(N s)/kg.The multiphase flow effects significantly impact engine performance,with specific impulse losses reaching up to 25.86%.The error between the thrust and specific impulse in the ground test and the theoretical values is within 10%,validating the proposed thermodynamic calculation method as a reliable reference for further research on gas-liquid two-phase water ramjet engines.
基金supported by Institute of Information and Communications Technology Planning and Evaluation(IITP)grant funded by the Korean government(MSIT)(No.2019-0-01842,Artificial Intelligence Graduate School Program(GIST))supported by Korea Planning&Evaluation Institute of Industrial Technology(KEIT)grant funded by the Ministry of Trade,Industry&Energy(MOTIE,Republic of Korea)(RS-2025-25448249+1 种基金Automotive Industry Technology Development(R&D)Program)supported by the Regional Innovation System&Education(RISE)programthrough the(Gwangju RISE Center),funded by the Ministry of Education(MOE)and the Gwangju Metropolitan City,Republic of Korea(2025-RISE-05-001).
文摘In real-world autonomous driving tests,unexpected events such as pedestrians or wild animals suddenly entering the driving path can occur.Conducting actual test drives under various weather conditions may also lead to dangerous situations.Furthermore,autonomous vehicles may operate abnormally in bad weather due to limitations of their sensors and GPS.Driving simulators,which replicate driving conditions nearly identical to those in the real world,can drastically reduce the time and cost required for market entry validation;consequently,they have become widely used.In this paper,we design a virtual driving test environment capable of collecting and verifying SiLS data under adverse weather conditions using multi-source images.The proposed method generates a virtual testing environment that incorporates various events,including weather,time of day,and moving objects,that cannot be easily verified in real-world autonomous driving tests.By setting up scenario-based virtual environment events,multi-source image analysis and verification using real-world DCUs(Data Concentrator Units)with V2X-Car edge cloud can effectively address risk factors that may arise in real-world situations.We tested and validated the proposed method with scenarios employing V2X communication and multi-source image analysis.