We study the asymptotics of the chi-square statistic under type II error. By the contraction principle, large deviations and moderate deviations are obtained, and the rate function of the moderate deviations can be calculated explicitly; it turns out to be a quadratic function.
Appropriately modeling non-coding background sequences is important for detecting regulatory elements in DNA sequences. Based on the chi-square test, this paper explains why a higher-order Markov chain model should be chosen and how to select the proper order automatically. The chi-square test is first run on synthetic data sets to show that it can efficiently find the proper order of a Markov chain. Using the chi-square test, distinct higher-order context dependences inherent in ten sets of sequences of the yeast S. cerevisiae from the literature are found. A higher-order Markov chain is therefore more suitable than an independence model for modeling non-coding background sequences.
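The order-selection idea above can be sketched as a chi-square test of first-order dependence against independence on dinucleotide counts; a minimal illustration (the paper's procedure for higher orders is more general, and the function name is hypothetical):

```python
from collections import Counter

def chi_square_order_test(seq, alphabet="ACGT"):
    """Chi-square statistic for first-order dependence vs. independence
    in a symbol sequence (a simplified sketch of order selection)."""
    pairs = Counter(zip(seq, seq[1:]))   # observed (previous, current) counts
    row = Counter(seq[:-1])              # marginal counts of previous symbol
    col = Counter(seq[1:])               # marginal counts of current symbol
    n = len(seq) - 1                     # number of transitions
    stat = 0.0
    for a in alphabet:
        for b in alphabet:
            expected = row[a] * col[b] / n   # expected count under independence
            if expected > 0:
                stat += (pairs[(a, b)] - expected) ** 2 / expected
    df = (len(alphabet) - 1) ** 2        # degrees of freedom for a 4x4 table
    return stat, df
```

A strongly alternating sequence yields a large statistic relative to the chi-square critical value at 9 degrees of freedom, flagging first-order dependence.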
It is well known that smart thermostats (STs) have become key devices in the implementation of smart homes; thus, they are considered primary elements for controlling electrical energy consumption in households. Moreover, energy consumption is drastically affected when end users select unsuitable STs or do not use them correctly. Furthermore, Mexico will face serious electrical energy challenges in the future that can be considerably mitigated if end users operate STs correctly. Hence, it is important to carry out an in-depth study and analysis of thermostats, focusing on the social aspects that influence their technological use and performance. This paper proposes the use of signal detection theory (SDT), fuzzy detection theory (FDT), and the chi-square (CS) test to understand the perceptions and beliefs of end users about the use of STs in Mexico. The paper extensively examines the perceptions and beliefs about the selected thermostats in Mexico and presents an in-depth discussion of the cognitive perceptions and beliefs of end users. It also shows why the expectations of end users about STs are not met, and promotes the technological and social development of STs so that they become more accepted in complex electrical grids such as smart grids.
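The SDT component can be illustrated with the standard sensitivity index d′, computed from hit and false-alarm rates; a minimal sketch (the paper's fuzzy extension is not reproduced, and the function name is illustrative):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal detection theory sensitivity index d' = z(H) - z(FA),
    where z is the inverse standard normal CDF (standard SDT formula)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

A d′ of zero means responses are at chance; larger values indicate better discrimination between signal and noise.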
In large-sample studies where distributions may be skewed and not readily transformed to symmetry, it may be of greater interest to compare different distributions in terms of percentiles rather than means. For example, it may be more informative to compare two or more populations with respect to their within-population distributions by testing the hypothesis that their respective 10th, 50th, and 90th percentiles are equal. As a generalization of the median test, the proposed test statistic is asymptotically distributed as chi-square with degrees of freedom depending on the number of percentiles tested and the constraints of the null hypothesis. Results from simulation studies are used to validate the nominal 0.05 significance level under the null hypothesis and the asymptotic power properties, which are suitable for testing equality of percentile profiles against selected profile discrepancies for a variety of underlying distributions. A pragmatic example illustrates the comparison of the percentile profiles of four body mass index distributions.
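A generalized median test in this spirit can be sketched by cutting all observations at the pooled 10th, 50th, and 90th percentiles and forming a groups-by-intervals chi-square statistic; this is an assumed simplification for illustration, not the paper's exact statistic:

```python
def percentile_profile_test(groups, probs=(0.10, 0.50, 0.90)):
    """Generalized median test sketch: classify observations by pooled
    percentile intervals, then compute a contingency chi-square."""
    pooled = sorted(x for g in groups for x in g)
    cuts = [pooled[int(p * (len(pooled) - 1))] for p in probs]
    k = len(cuts) + 1                      # number of intervals
    def interval(x):                       # index of interval containing x
        return sum(x > c for c in cuts)
    counts = [[0] * k for _ in groups]
    for i, g in enumerate(groups):
        for x in g:
            counts[i][interval(x)] += 1
    n = len(pooled)
    col_tot = [sum(counts[i][j] for i in range(len(groups))) for j in range(k)]
    stat = 0.0
    for i, g in enumerate(groups):
        for j in range(k):
            e = len(g) * col_tot[j] / n    # expected count under homogeneity
            if e > 0:
                stat += (counts[i][j] - e) ** 2 / e
    df = (len(groups) - 1) * len(cuts)
    return stat, df
```

Identical groups give a statistic of zero; with two groups and three percentiles the reference distribution has three degrees of freedom.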
Zero-inflated distributions are common in statistical problems where there is interest in testing the homogeneity of two or more independent groups. Often, the underlying distribution that has an inflated number of zero-valued observations is asymmetric, and its functional form may not be known or easily characterized. In this case, comparisons of the groups in terms of their respective percentiles may be appropriate, as these estimates are nonparametric and more robust to outliers and other irregularities. The median test is often used to compare distributions with similar but asymmetric shapes, but it may be uninformative when there are excess zeros or dissimilar shapes. For zero-inflated distributions, it is useful to compare the distributions with respect to their proportion of zeros, coupled with a comparison of percentile profiles for the observed non-zero values. A simple chi-square test for simultaneous testing of these two components is proposed, applicable to both continuous and discrete data. Results of simulation studies are reported to summarize empirical power under several scenarios, and recommendations are given for the minimum sample size needed to achieve suitable test performance in specific examples.
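The zeros component of the proposed two-part statistic can be illustrated as an ordinary chi-square test of homogeneity of zero proportions; a hedged sketch (the paper combines this with a percentile-profile component for the non-zero values, which is omitted here):

```python
def zero_proportion_chisq(groups):
    """Chi-square for homogeneity of zero proportions across groups:
    a sketch of the first component of the two-part statistic."""
    zeros = [sum(1 for x in g if x == 0) for g in groups]
    sizes = [len(g) for g in groups]
    n, z = sum(sizes), sum(zeros)
    stat = 0.0
    for zi, ni in zip(zeros, sizes):
        # observed vs. expected counts of zeros and non-zeros per group
        for obs, p in ((zi, z / n), (ni - zi, 1 - z / n)):
            e = ni * p
            if e > 0:
                stat += (obs - e) ** 2 / e
    return stat, len(groups) - 1
```

Under the full proposal, this statistic would be added to the non-zero percentile component, with the degrees of freedom summed accordingly.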
A new six-parameter continuous distribution called the Generalized Kumaraswamy Generalized Power Gompertz (GKGPG) distribution is proposed in this study, and graphical illustrations of its probability density function and cumulative distribution function are presented. The statistical features of the GKGPG distribution are systematically derived and studied. The model parameters are estimated by maximum likelihood both in the absence of censoring and under right censoring. The test statistic for right-censored data, the criteria test for the GKGPG distribution, the estimated matrices Ŵ, Ĉ, and Ĝ, the criteria test Y²ₙ, and the quadratic form of the test statistic are derived. Mean simulated values of the maximum likelihood estimates and their corresponding mean squared errors are presented and confirmed to agree closely with the true parameter values. Simulated significance levels of the Y²ₙ(γ) test for the GKGPG model are compared against their theoretical values. We conclude that the null hypothesis that simulated samples are fitted by the GKGPG distribution is widely validated at the different significance levels considered. From the results on a dataset of the strength of a specific type of braided cord, the proposed GKGPG model fits the data at significance level ε = 0.05.
Test case prioritization and ranking play a crucial role in software testing by improving fault detection efficiency and ensuring software reliability. While prioritization selects the most relevant test cases for optimal coverage, ranking further refines their execution order to detect critical faults earlier. This study investigates machine learning techniques to enhance both prioritization and ranking, contributing to more effective and efficient testing processes. We first employ advanced feature engineering alongside ensemble models, including Gradient Boosted, Support Vector Machine, Random Forest, and Naive Bayes classifiers, to optimize test case prioritization, achieving an accuracy score of 0.98847 and significantly improving the Average Percentage of Fault Detection (APFD). Subsequently, we introduce a deep Q-learning framework combined with a Genetic Algorithm (GA) to refine test case ranking within priority levels. This approach achieves a rank accuracy of 0.9172, demonstrating robust performance despite the increasing computational demands of specialized variation operators. Our findings highlight the effectiveness of stacked ensemble learning and reinforcement learning in optimizing test case prioritization and ranking. This integrated approach improves testing efficiency, reduces late-stage defects, and enhances overall software stability. The study provides valuable insights for AI-driven testing frameworks, paving the way for more intelligent and adaptive software quality assurance methodologies.
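The APFD metric the study optimizes has a standard closed form, APFD = 1 − (Σᵢ TFᵢ)/(n·m) + 1/(2n), where TFᵢ is the position of the first test revealing fault i, n the number of tests, and m the number of faults; a minimal sketch (the input layout is illustrative, and every fault is assumed to be detected by some test):

```python
def apfd(order, faults_detected):
    """Average Percentage of Fault Detection for a test-case ordering.
    `order` is the execution order; `faults_detected[t]` is the set of
    faults that test t exposes (assumed to cover every fault)."""
    n = len(order)
    all_faults = set().union(*faults_detected.values())
    m = len(all_faults)
    first_pos = {}                           # fault -> first detecting position
    for pos, t in enumerate(order, start=1):
        for f in faults_detected.get(t, ()):
            first_pos.setdefault(f, pos)
    return 1 - sum(first_pos[f] for f in all_faults) / (n * m) + 1 / (2 * n)
```

Orderings that expose faults earlier score closer to 1, which is why APFD is the usual objective for prioritization.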
With the rapid development of artificial intelligence, the intelligence level of software is steadily improving. Intelligent software, which is widely applied in crucial fields such as autonomous driving, intelligent customer service, and medical diagnosis, is built on complex technologies such as machine learning and deep learning. Its uncertain behavior and data dependence pose unprecedented challenges to software testing. However, existing software testing courses mainly focus on conventional content and cannot meet the requirements of intelligent software testing. Therefore, this work deeply analyzes the relevant technologies of intelligent software testing, including the reliability evaluation indicator system, neuron coverage, and test case generation. It also systematically designs an intelligent software testing course, covering teaching objectives, teaching content, teaching methods, and a teaching case. Verified by practical teaching in four classes, this course has achieved remarkable results, providing practical experience for the reform of software testing courses.
Members of the British Textile Machinery Association (BTMA) can look back on 2025 as a year marked by notable technological advances and continued progress in global trade, despite an uncertain and volatile market. “Our members have been very active over the past 12 months, and this has resulted in new technologies for the production of technical fibres and fabrics, the introduction of AI and machine learning into process control systems, and significant advances in materials testing,” says BTMA CEO Jason Kent. “There’s real excitement about what can be achieved in 2026 as we look ahead to upcoming exhibitions such as JEC Composites in Paris in March and Techtextil in Frankfurt in April.”
With the rapid development of Internet technology, REST APIs (Representational State Transfer Application Programming Interfaces) have become the primary communication standard in modern microservice architectures, raising increasing concerns about their security. Existing fuzz testing methods include random or dictionary-based input generation, which often fails to ensure both syntactic and semantic correctness, and OpenAPI-based approaches, which offer better accuracy but typically lack detailed descriptions of endpoints, parameters, or data formats. To address these issues, this paper proposes the APIDocX fuzz testing framework. It introduces a crawler tailored for dynamic web pages that automatically simulates user interactions to trigger APIs, capturing and extracting parameter information from communication packets. A multi-endpoint parameter adaptation method based on an improved Jaccard similarity is then used to generalize these parameters to other potential API endpoints, filling gaps in OpenAPI specifications. Experimental results demonstrate that the extracted parameters can be generalized with 79.61% accuracy. Fuzz testing using the enriched OpenAPI documents improves test coverage, the number of valid test cases generated, and fault detection capability. This approach offers an effective enhancement to automated REST API security testing.
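The parameter-adaptation step can be illustrated with the plain Jaccard similarity over observed parameter-name sets; the paper's improved variant is not specified here, so this sketch uses the baseline definition together with a hypothetical adaptation rule:

```python
def jaccard(a, b):
    """Plain Jaccard similarity |A ∩ B| / |A ∪ B| over parameter-name sets
    (baseline definition; the paper's improved variant differs)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def adapt_parameters(known, candidate_endpoints, threshold=0.5):
    """Propagate captured parameters to endpoints whose observed parameter
    sets are sufficiently similar (hypothetical adaptation rule)."""
    return {ep: set(known) | set(params)
            for ep, params in candidate_endpoints.items()
            if jaccard(known, params) >= threshold}
```

Endpoints below the similarity threshold are left untouched, so unrelated endpoints do not inherit spurious parameters.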
Cloud services, favored by many enterprises for their high flexibility and ease of operation, are widely used for data storage and processing. However, the high latency and transmission overheads of the cloud architecture make it difficult to respond quickly to the demands of IoT applications and local computation. To make up for these deficiencies of the cloud, fog computing has emerged as a critical component of IoT applications. It decentralizes computing power to lower-level nodes close to data sources, so as to achieve low latency and distributed processing. With data being frequently exchanged and shared between multiple nodes, it becomes a challenge to authorize data securely and efficiently while protecting user privacy. To address this challenge, proxy re-encryption (PRE) schemes provide a feasible way to allow an intermediary proxy node to re-encrypt ciphertext designated for different authorized data requesters without compromising any plaintext information. Since the proxy is viewed as a semi-trusted party, measures should be taken to prevent malicious behavior and reduce the risk of data leakage when implementing PRE schemes. This paper proposes a new fog-assisted identity-based PRE scheme supporting anonymous key generation, equality tests, and user revocation to fulfill various IoT application requirements. Specifically, in a traditional identity-based public key architecture, the key escrow problem and the necessity of a secure channel are major security concerns; we use an anonymous key generation technique to solve these problems. The equality test functionality further enables a cloud server to check whether two candidate trapdoors contain an identical keyword. In particular, the proposed scheme achieves fine-grained user-level authorization while maintaining strong key confidentiality. To revoke an invalid user identity, we add a revocation list to the system flows that restricts access privileges without additional computation cost. For security, our system is shown to meet the IND-PrID-CCA and OW-ID-CCA security notions under the Decisional Bilinear Diffie-Hellman (DBDH) assumption.
Lateral flow immunoassays (LFIAs) are low-cost, rapid, and easy to use for point-of-care testing (POCT), but the majority of available LFIA tests are indicative rather than quantitative, and their sensitivity in antigen tests is usually limited to the nanogram range. This is primarily due to the passive capillary fluidics through nitrocellulose membranes, often associated with non-specific binding and high background noise. To overcome this challenge, we report a Beads-on-a-Tip design that replaces nitrocellulose membranes with a pipette tip loaded with magnetic beads. The beads are pre-conjugated with capture antibodies that support a typical sandwich immunoassay. This design enriches the low-abundance antigen proteins and allows an active washing process that significantly reduces non-specific binding. To further improve the detection sensitivity, we employed upconversion nanoparticles (UCNPs) as luminescent reporters and the SARS-CoV-2 spike (S) antigen as a model analyte to benchmark the performance of this design against our previously reported methods. We found that the key to enhancing immunocomplex formation and the signal-to-noise ratio lay in optimizing the incubation time and the UCNP-to-bead ratio. We successfully demonstrated that the new method achieves a very large dynamic range, from 500 fg/mL to 10 μg/mL, spanning more than seven orders of magnitude, and a limit of detection of 706 fg/mL, nearly an order of magnitude lower than the best reported UCNP-based LFIA for COVID-19 spike antigen detection. Our system offers a promising solution for ultra-sensitive, quantitative POCT diagnostics.
The authors consider the issue of hypothesis testing in varying-coefficient regression models with high-dimensional data. Utilizing kernel smoothing techniques, the authors propose a locally concerned U-statistic method to assess the overall significance of the coefficients. The authors establish that the proposed test is asymptotically normal under both the null hypothesis and local alternatives. Based on the locally concerned U-statistic, the authors further develop a globally concerned U-statistic to test whether the coefficient function is zero. A stochastic perturbation method is employed to approximate the distribution of the globally concerned test statistic. Monte Carlo simulations demonstrate the validity of the proposed test in finite samples.
To address the insufficient prediction accuracy of multi-state parameters in electro-hydraulic servo material fatigue testing machines under complex loading and nonlinear coupling conditions, this paper proposes a multivariate sequence-to-sequence prediction model integrating a Long Short-Term Memory (LSTM) encoder, a Gated Recurrent Unit (GRU) decoder, and a multi-head attention mechanism. This approach enhances prediction accuracy and robustness across different control modes and load spectra by leveraging multi-channel inputs and cross-variable feature interactions, thereby capturing both short-term high-frequency dynamics and long-term slow-drift characteristics. Experiments using long-term data from real test benches demonstrate that the model achieves a stable MSE below 0.01 on the validation set, with MAE and RMSE of approximately 0.018 and 0.052, respectively, and a coefficient of determination reaching 0.98. This significantly outperforms traditional identification methods and single RNN models. Sensitivity analysis indicates that a prediction stride of 10 achieves an optimal balance between accuracy and computational overhead. Ablation experiments validate the contribution of the multi-head attention and decoder architecture to cross-variable coupling modeling. The model can be applied to residual-driven early warning in health monitoring and to risk assessment with scheme optimization in test design. It is feasible for near-real-time deployment, providing a practical data-driven pathway for reliability assurance in advanced equipment.
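The reported MSE, MAE, RMSE, and coefficient of determination follow the standard definitions; a minimal sketch of how such validation metrics are computed (illustrative only, not the authors' code):

```python
import math

def regression_metrics(y_true, y_pred):
    """MSE, MAE, RMSE, and R² for a sequence of predictions,
    using the standard textbook definitions."""
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errs) / n
    mae = sum(abs(e) for e in errs) / n
    rmse = math.sqrt(mse)
    mean = sum(y_true) / n
    ss_tot = sum((t - mean) ** 2 for t in y_true)   # total sum of squares
    r2 = 1 - sum(e * e for e in errs) / ss_tot      # coefficient of determination
    return mse, mae, rmse, r2
```

A predictor that always outputs the sample mean scores R² = 0, which is the baseline the reported 0.98 should be read against.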
Patients affected by monogenic diseases impose a substantial burden on both themselves and their families. The primary preventive measure, invasive prenatal diagnosis, carries a risk of miscarriage and cannot be performed early in pregnancy. Hence, there is a need for non-invasive prenatal testing (NIPT) for monogenic diseases. By utilizing enriched cell-free fetal DNA (cffDNA) from maternal plasma, we refine the NIPT method, combining targeted region capture technology, haplotyping, and analysis of informative site frequency. We apply this method to 93 clinical families at genetic risk for thalassemia, encompassing various genetic variant types, to establish a workflow and evaluate its efficiency. Our approach requires only 3 ng of DNA input to generate 0.1 Gb of informative target genomic data and works with as little as 3% cffDNA. The method has a 98.16% success rate and 100% concordance with conventional invasive methods. Furthermore, we demonstrate the ability to analyze fetal genotypes as early as eight weeks of gestation. This study establishes an optimized NIPT method for the early detection of various thalassemia disorders during pregnancy, with high accuracy and potential for clinical application in prenatal diagnosis.
As deep learning (DL) models are increasingly deployed in sensitive domains (e.g., healthcare), concerns over privacy and security have intensified. Conventional penetration testing frameworks, such as OWASP and NIST, are effective for traditional networks and applications but lack the capabilities to address DL-specific threats, such as model inversion, membership inference, and adversarial attacks. This review provides a comprehensive analysis of penetration testing for the privacy of DL models, examining the shortfalls of existing frameworks, tools, and testing methodologies. Through a systematic evaluation of the existing literature and empirical analysis, we identify three major contributions: (i) a critical assessment of traditional penetration testing frameworks' inadequacies when applied to DL-specific privacy vulnerabilities, (ii) a comprehensive evaluation of state-of-the-art privacy-preserving methods and their integration with penetration testing workflows, and (iii) the development of a structured framework that combines reconnaissance, threat modeling, exploitation, and post-exploitation phases specifically tailored for DL privacy assessment. Moreover, this review evaluates popular solutions such as the IBM Adversarial Robustness Toolbox and TensorFlow Privacy, alongside privacy-preserving techniques (e.g., Differential Privacy, Homomorphic Encryption, and Federated Learning), which we systematically analyze through comparative studies of their effectiveness, computational overhead, and practical deployment constraints. While these techniques offer promising safeguards, their adoption is hindered by accuracy loss, performance overheads, and the rapid evolution of attack strategies. Our findings reveal that no single existing solution provides comprehensive protection, which leads us to propose a hybrid approach that strategically combines multiple privacy-preserving mechanisms. The findings of this survey underscore an urgent need for automated, regulation-compliant penetration testing frameworks specifically tailored to DL systems. We argue for hybrid privacy solutions that combine multiple protective mechanisms to ensure both model accuracy and privacy. Building on our analysis, we present actionable recommendations for developing adaptive penetration testing strategies that incorporate automated vulnerability assessment, continuous monitoring, and regulatory compliance verification.
Smartphone-based electrocardiograms (ECGs) are increasingly utilized for monitoring atrial fibrillation (AF) recurrence after catheter ablation (CA), referred to as smartphone AF burden (SMURDEN). SMURDEN data often exhibit complex patterns of zero AF episodes, which may arise either from true AF-free status (structural zeros) or from AF episodes missed due to intermittent monitoring (random zeros). Such a mixture of AF-free and at-risk patients can lead to zero inflation in the data. The authors propose a novel zero-inflation test for binomial regression models to identify recurrence-free AF populations. Unlike traditional approaches requiring fully specified zero-inflated models, the proposed test uses a weighted average of the discrepancies between observed and expected zero proportions, with weights determined by the binomial sizes. A closed-form test statistic is developed, and its asymptotic distribution is derived using estimating equations. Simulations demonstrate superior performance over existing methods, and real-world AF monitoring data validate the practical utility of the proposed test.
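The core quantity, a weighted average of discrepancies between observed zeros and their binomial-model expectations, can be sketched as follows (the exact weighting and studentization used in the paper are not reproduced; this is an assumed form for illustration):

```python
def zero_inflation_score(sizes, zero_flags, p_hat):
    """Weighted discrepancy between observed zero indicators and the
    binomial expectation (1 - p)^n_i, with weights proportional to the
    binomial sizes (assumed form, not the paper's exact statistic).

    sizes      -- binomial size n_i per subject (monitoring occasions)
    zero_flags -- 1 if subject i recorded zero AF episodes, else 0
    p_hat      -- fitted per-occasion AF probability
    """
    total = sum(sizes)
    return sum(n / total * (z - (1 - p_hat) ** n)
               for n, z in zip(sizes, zero_flags))
```

A score well above zero signals more zero outcomes than the binomial model predicts, i.e., possible zero inflation from a truly recurrence-free subgroup.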
To investigate the use of the three-point bending method and supplement the corresponding strength data of compacted snow for transportation-related applications in cold regions, compacted snow beams with an average density of 592 kg·m⁻³ were fabricated and tested at three distinct flexural strain rates, corresponding respectively to the ductile, transitional, and brittle behavior of compacted snow. The flexural strength, ranging from 0.518 to 0.933 MPa, peaks at the ductile-to-brittle transition, while the flexural modulus, varying between 48.97 and 287.72 MPa, increases with strain rate within the tested range. At the strain rate corresponding to brittle failure, both mechanical properties of compacted snow exhibit higher values than those of natural snow tested by the authors. Notably, the flexural strain rate at the ductile-to-brittle transition for compacted snow identified in this study is comparable to those previously reported for natural snow under uniaxial tension. Additionally, the obtained strength data are thoroughly compared with the existing literature, with detailed discussion provided. The loading rates associated with typical failure modes of compacted snow under bending, together with the obtained strength values, provide methodological guidance and reference data for future in situ testing of compacted snow structures.
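The flexural quantities reported above follow the standard three-point-bending beam formulas, σ = 3FL/(2bd²) for strength and E = FL³/(4bd³δ) for a deflection-based modulus; a minimal sketch with illustrative SI inputs (function and variable names are not from the paper):

```python
def flexural_properties(load, span, width, depth, deflection):
    """Flexural strength and secant flexural modulus of a rectangular beam
    in three-point bending (standard beam-theory formulas, SI units)."""
    strength = 3 * load * span / (2 * width * depth ** 2)               # Pa
    modulus = load * span ** 3 / (4 * width * depth ** 3 * deflection)  # Pa
    return strength, modulus
```

For example, a 100 N load on a 0.3 m span, 0.1 m × 0.1 m cross-section, and 1 mm mid-span deflection gives a strength of 45 kPa and a modulus of 6.75 MPa.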
Funding: the National Natural Science Foundation of China (10571139)
文摘Modeling non coding background sequences appropriately is important for the detection of regulatory elements from DNA sequences. Based on the chi square statistic test, some explanations about why to choose higher order Markov chain model and how to automatically select the proper order are given in this paper. The chi square test is first run on synthetic data sets to show that it can efficiently find the proper order of Markov chain. Using chi square test, distinct higher order context dependences inherent in ten sets of sequences of yeast S.cerevisiae from other literature have been found. So the Markov chain with higher order would be more suitable for modeling the non coding background sequences than an independent model.
文摘It is well known that smart thermostats (STs) have become key devices in the implementation of smart homes;thus, they are considered as primary elements for the control of electrical energy consumption in households. Moreover, energy consumption is drastically affected when the end users select unsuitable STs or when they do not use the STs correctly. Furthermore, in future, Mexico will face serious electrical energy challenges that can be considerably resolved if the end users operate the STs in a correct manner. Hence, it is important to carry out an in-depth study and analysis on thermostats, by focusing on social aspects that influence the technological use and performance of the thermostats. This paper proposes the use of a signal detection theory (SDT), fuzzy detection theory (FDT), and chi-square (CS) test in order to understand the perceptions and beliefs of end users about the use of STs in Mexico. This paper extensively shows the perceptions and beliefs about the selected thermostats in Mexico. Besides, it presents an in-depth discussion on the cognitive perceptions and beliefs of end users. Moreover, it shows why the expectations of the end users about STs are not met. It also promotes the technological and social development of STs such that they are relatively more accepted in complex electrical grids such as smart grids.
文摘In large sample studies where distributions may be skewed and not readily transformed to symmetry, it may be of greater interest to compare different distributions in terms of percentiles rather than means. For example, it may be more informative to compare two or more populations with respect to their within population distributions by testing the hypothesis that their corresponding respective 10th, 50th, and 90th percentiles are equal. As a generalization of the median test, the proposed test statistic is asymptotically distributed as Chi-square with degrees of freedom dependent upon the number of percentiles tested and constraints of the null hypothesis. Results from simulation studies are used to validate the nominal 0.05 significance level under the null hypothesis, and asymptotic power properties that are suitable for testing equality of percentile profiles against selected profile discrepancies for a variety of underlying distributions. A pragmatic example is provided to illustrate the comparison of the percentile profiles for four body mass index distributions.
文摘Zero-inflated distributions are common in statistical problems where there is interest in testing homogeneity of two or more independent groups. Often, the underlying distribution that has an inflated number of zero-valued observations is asymmetric, and its functional form may not be known or easily characterized. In this case, comparisons of the groups in terms of their respective percentiles may be appropriate as these estimates are nonparametric and more robust to outliers and other irregularities. The median test is often used to compare distributions with similar but asymmetric shapes but may be uninformative when there are excess zeros or dissimilar shapes. For zero-inflated distributions, it is useful to compare the distributions with respect to their proportion of zeros, coupled with the comparison of percentile profiles for the observed non-zero values. A simple chi-square test for simultaneous testing of these two components is proposed, applicable to both continuous and discrete data. Results of simulation studies are reported to summarize empirical power under several scenarios. We give recommendations for the minimum sample size which is necessary to achieve suitable test performance in specific examples.
文摘A new six-parameter continuous distribution called the Generalized Kumaraswamy Generalized Power Gompertz (GKGPG) distribution is proposed in this study, a graphical illustration of the probability density function and cumulative distribution function is presented. The statistical features of the Generalized Kumaraswamy Generalized Power Gompertz distribution are systematically derived and adequately studied. The estimation of the model parameters in the absence of censoring and under-right censoring is performed using the method of maximum likelihood. The test statistic for right-censored data, criteria test for GKGPG distribution, estimated matrix Ŵ, Ĉ, and Ĝ, criteria test Y<sup>2</sup>n</sub>, alongside the quadratic form of the test statistic is derived. Mean simulated values of maximum likelihood estimates and their corresponding square mean errors are presented and confirmed to agree closely with the true parameter values. Simulated levels of significance for Y<sup>2</sup>n</sub> (γ) test for the GKGPG model against their theoretical values were recorded. We conclude that the null hypothesis for which simulated samples are fitted by GKGPG distribution is widely validated for the different levels of significance considered. From the summary of the results of the strength of a specific type of braided cord dataset on the GKGPG model, it is observed that the proposed GKGPG model fits the data set for a significance level ε = 0.05.
Abstract: Test case prioritization and ranking play a crucial role in software testing by improving fault detection efficiency and ensuring software reliability. While prioritization selects the most relevant test cases for optimal coverage, ranking further refines their execution order so that critical faults are detected earlier. This study investigates machine learning techniques to enhance both prioritization and ranking, contributing to more effective and efficient testing processes. We first employ advanced feature engineering alongside ensemble models, including Gradient Boosted, Support Vector Machine, Random Forest, and Naive Bayes classifiers, to optimize test case prioritization, achieving an accuracy score of 0.98847 and significantly improving the Average Percentage of Fault Detection (APFD). Subsequently, we introduce a deep Q-learning framework combined with a Genetic Algorithm (GA) to refine test case ranking within priority levels. This approach achieves a rank accuracy of 0.9172, demonstrating robust performance despite the increased computational demands of specialized variation operators. Our findings highlight the effectiveness of stacked ensemble learning and reinforcement learning in optimizing test case prioritization and ranking. This integrated approach improves testing efficiency, reduces late-stage defects, and enhances overall software stability. The study provides valuable insights for AI-driven testing frameworks, paving the way for more intelligent and adaptive software quality assurance methodologies.
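The APFD metric mentioned above has a standard closed form: for n test cases, m faults, and TFᵢ the position of the first test that reveals fault i, APFD = 1 − (ΣTFᵢ)/(n·m) + 1/(2n). The sketch below computes it for a hypothetical fault matrix (the test ids and fault mapping are made up for illustration).

```python
def apfd(order, fault_matrix):
    """Average Percentage of Fault Detection for a test ordering.

    order: test-case ids in execution order.
    fault_matrix: dict mapping fault id -> set of test ids that detect it.
    """
    n = len(order)
    m = len(fault_matrix)
    position = {test: i + 1 for i, test in enumerate(order)}  # 1-based ranks
    # TF_i: earliest position in the ordering that reveals fault i
    tf_sum = sum(min(position[t] for t in tests)
                 for tests in fault_matrix.values())
    return 1 - tf_sum / (n * m) + 1 / (2 * n)

# Hypothetical fault matrix: which tests expose which faults
faults = {"f1": {"t3"}, "f2": {"t1", "t4"}, "f3": {"t2"}}

good = apfd(["t3", "t1", "t2", "t4"], faults)  # faults surfaced early
bad = apfd(["t4", "t2", "t1", "t3"], faults)   # f1 only found by the last test
print(f"good ordering APFD = {good:.3f}, bad ordering APFD = {bad:.3f}")
# -> good ordering APFD = 0.625, bad ordering APFD = 0.542
```

A higher APFD means faults are detected earlier in the run, which is exactly what the prioritization models in the abstract are optimizing.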
Funding: Computer Basic Education Teaching Research Project of the Association of Fundamental Computing Education in Chinese Universities (Nos. 2025-AFCEC-527 and 2024-AFCEC-088); Research on the Reform of Public Course Teaching at Nantong College of Science and Technology (No. 2024JGG015).
Abstract: With the rapid development of artificial intelligence, the intelligence level of software is steadily increasing. Intelligent software, widely applied in crucial fields such as autonomous driving, intelligent customer service, and medical diagnosis, is built on complex technologies such as machine learning and deep learning. Its uncertain behavior and data dependence pose unprecedented challenges to software testing. However, existing software testing courses mainly focus on conventional content and cannot meet the requirements of intelligent software testing. This work therefore analyzes the relevant technologies of intelligent software testing in depth, including a reliability evaluation indicator system, neuron coverage, and test case generation. It also systematically designs an intelligent software testing course covering teaching objectives, teaching content, teaching methods, and a teaching case. Verified through practical teaching in four classes, the course has achieved remarkable results, providing practical experience for the reform of software testing courses.
Abstract: Members of the British Textile Machinery Association (BTMA) can look back on 2025 as a year marked by notable technological advances and continued progress in global trade, despite an uncertain and volatile market. "Our members have been very active over the past 12 months, and this has resulted in new technologies for the production of technical fibres and fabrics, the introduction of AI and machine learning into process control systems, and significant advances in materials testing," says BTMA CEO Jason Kent. "There's real excitement about what can be achieved in 2026 as we look ahead to upcoming exhibitions such as JEC Composites in Paris in March and Techtextil in Frankfurt in April."
Funding: Supported by the Open Foundation of the Key Laboratory of Cyberspace Security, Ministry of Education of China (KLCS20240211).
Abstract: With the rapid development of Internet technology, REST APIs (Representational State Transfer Application Programming Interfaces) have become the primary communication standard in modern microservice architectures, raising increasing concerns about their security. Existing fuzz testing methods include random or dictionary-based input generation, which often fails to ensure both syntactic and semantic correctness, and OpenAPI-based approaches, which offer better accuracy but typically lack detailed descriptions of endpoints, parameters, or data formats. To address these issues, this paper proposes the APIDocX fuzz testing framework. It introduces a crawler tailored for dynamic web pages that automatically simulates user interactions to trigger APIs, capturing and extracting parameter information from communication packets. A multi-endpoint parameter adaptation method based on an improved Jaccard similarity is then used to generalize these parameters to other potential API endpoints, filling gaps in OpenAPI specifications. Experimental results demonstrate that the extracted parameters can be generalized with 79.61% accuracy. Fuzz testing using the enriched OpenAPI documents improves test coverage, the number of valid test cases generated, and fault detection capability. This approach offers an effective enhancement to automated REST API security testing.
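To make the parameter-adaptation idea concrete, the sketch below uses plain Jaccard similarity between endpoint parameter-name sets as a stand-in for the paper's improved variant (which is not specified in the abstract); the endpoint paths, parameter names, and reuse threshold are all hypothetical.

```python
def jaccard(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty parameter sets are trivially identical
    return len(a & b) / len(a | b)

# Parameters captured by the crawler from one triggered endpoint
captured = {"user_id", "token", "page", "limit"}

# Other endpoints found in the (incomplete) OpenAPI document
candidate_endpoints = {
    "/api/orders":   {"user_id", "token", "page", "size"},
    "/api/messages": {"user_id", "token", "page", "limit"},
    "/api/health":   {"verbose"},
}

THRESHOLD = 0.6  # hypothetical cut-off for reusing captured parameters
for endpoint, params in candidate_endpoints.items():
    score = jaccard(captured, params)
    verdict = "reuse captured values" if score >= THRESHOLD else "skip"
    print(f"{endpoint}: similarity {score:.2f} -> {verdict}")
```

Endpoints whose parameter sets are sufficiently similar inherit the captured values, which is how the framework fills gaps in under-specified OpenAPI documents before fuzzing.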
Funding: Supported in part by the National Science and Technology Council of Taiwan under contract numbers NSTC 114-2221-E-019-055-MY2 and NSTC 114-2221-E-019-069.
Abstract: Cloud services, favored by many enterprises for their high flexibility and ease of operation, are widely used for data storage and processing. However, the high latency and transmission overheads of the cloud architecture make it difficult to respond quickly to the demands of IoT applications and local computation. To make up for these deficiencies, fog computing has emerged to play a critical role in IoT applications. It decentralizes computing power to lower-level nodes close to the data sources, achieving low latency and distributed processing. With data frequently exchanged and shared between multiple nodes, it becomes a challenge to authorize data securely and efficiently while protecting user privacy. To address this challenge, proxy re-encryption (PRE) schemes provide a feasible way to allow an intermediary proxy node to re-encrypt ciphertext designated for different authorized data requesters without compromising any plaintext information. Since the proxy is viewed as a semi-trusted party, care should be taken to prevent malicious behavior and reduce the risk of data leakage when implementing PRE schemes. This paper proposes a new fog-assisted identity-based PRE scheme supporting anonymous key generation, equality test, and user revocation to fulfill various IoT application requirements. Specifically, in a traditional identity-based public key architecture, the key escrow problem and the necessity of a secure channel are major security concerns; we utilize an anonymous key generation technique to solve these problems. The equality test functionality further enables a cloud server to inspect whether two candidate trapdoors contain an identical keyword. In particular, the proposed scheme realizes fine-grained user-level authorization while maintaining strong key confidentiality. To revoke an invalid user identity, we add a revocation list to the system flows to restrict access privileges without additional computation cost. For security, it is shown that our system meets the security notions of IND-PrID-CCA and OW-ID-CCA under the Decisional Bilinear Diffie-Hellman (DBDH) assumption.
Funding: Financially supported by ARC Linkage project (LP210200642), the ARC Center of Excellence for Quantum Biotechnology (grant no. CE230100021), a National Health and Medical Research Council Investigator Fellowship (grant no. APP2017499), and the Chan Zuckerberg Initiative Deep Tissue Imaging Phase 2 (grant no. DT12-0000000182).
Abstract: Lateral flow immunoassays (LFIAs) are low-cost, rapid, and easy to use for point-of-care testing (POCT), but the majority of available LFIA tests are indicative rather than quantitative, and their sensitivity in antigen tests is usually limited to the nanogram range. This is primarily due to the passive capillary fluidics through nitrocellulose membranes, which are often associated with non-specific binding and high background noise. To overcome this challenge, we report a Beads-on-a-Tip design that replaces nitrocellulose membranes with a pipette tip loaded with magnetic beads. The beads are pre-conjugated with capture antibodies that support a typical sandwich immunoassay. This design enriches low-abundance antigen proteins and allows an active washing process that significantly reduces non-specific binding. To further improve detection sensitivity, we employed upconversion nanoparticles (UCNPs) as luminescent reporters and the SARS-CoV-2 spike (S) antigen as a model analyte to benchmark the performance of this design against our previously reported methods. We found that the key to enhancing immunocomplex formation and the signal-to-noise ratio lay in optimizing the incubation time and the UCNP-to-bead ratio. We successfully demonstrated that the new method achieves a very large dynamic range, from 500 fg/mL to 10 μg/mL, spanning more than seven orders of magnitude, and a limit of detection of 706 fg/mL, nearly an order of magnitude lower than the best reported LFIA using UCNPs for COVID-19 spike antigen detection. Our system offers a promising solution for ultra-sensitive and quantitative POCT diagnostics.
Funding: Supported by the National Social Science Foundation of China under Grant No. 23&ZD126, the National Science Foundation of China under Grant No. 12471256, the Natural Science Foundation of Shanxi Province under Grant No. 202203021221219, and the Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi under Grant No. 2023L164.
Abstract: The authors consider the issue of hypothesis testing in varying-coefficient regression models with high-dimensional data. Utilizing kernel smoothing techniques, the authors propose a locally concerned U-statistic method to assess the overall significance of the coefficients and establish that the proposed test is asymptotically normal under both the null hypothesis and local alternatives. Based on the locally concerned U-statistic, the authors further develop a globally concerned U-statistic to test whether the coefficient function is zero. A stochastic perturbation method is employed to approximate the distribution of the globally concerned test statistic. Monte Carlo simulations demonstrate the validity of the proposed test in finite samples.
Funding: Supported by the Natural Science Foundation of China (NSFC), Grant number 5247052693.
Abstract: To address the insufficient prediction accuracy of multi-state parameters in electro-hydraulic servo material fatigue testing machines under complex loading and nonlinear coupling conditions, this paper proposes a multivariate sequence-to-sequence prediction model integrating a Long Short-Term Memory (LSTM) encoder, a Gated Recurrent Unit (GRU) decoder, and a multi-head attention mechanism. This approach enhances prediction accuracy and robustness across different control modes and load spectra by leveraging multi-channel inputs and cross-variable feature interactions, thereby capturing both short-term high-frequency dynamics and long-term slow-drift characteristics. Experiments using long-term data from real test benches demonstrate that the model achieves a stable MSE below 0.01 on the validation set, with MAE and RMSE of approximately 0.018 and 0.052, respectively, and a coefficient of determination reaching 0.98, significantly outperforming traditional identification methods and single RNN models. Sensitivity analysis indicates that a prediction stride of 10 achieves an optimal balance between accuracy and computational overhead. Ablation experiments validate the contribution of the multi-head attention and decoder architecture to cross-variable coupling modeling. The model can be applied to residual-driven early warning in health monitoring and to risk assessment and scheme optimization in test design, and its near-real-time deployment feasibility provides a practical data-driven pathway for reliability assurance in advanced equipment.
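For readers unfamiliar with the reported error metrics, the minimal NumPy sketch below computes MSE, MAE, RMSE, and the coefficient of determination R² exactly as conventionally defined; the "state parameter" series here is synthetic, not the paper's test-bench data.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Standard point-forecast error metrics: MSE, MAE, RMSE, R^2."""
    err = y_pred - y_true
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(mse)
    ss_res = np.sum(err ** 2)                        # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    r2 = 1 - ss_res / ss_tot
    return {"mse": mse, "mae": mae, "rmse": rmse, "r2": r2}

rng = np.random.default_rng(1)
y_true = np.sin(np.linspace(0, 6, 200))        # synthetic slow-varying signal
y_pred = y_true + rng.normal(0, 0.05, 200)     # predictions with small noise
m = regression_metrics(y_true, y_pred)
print({k: round(float(v), 4) for k, v in m.items()})
```

By construction RMSE = √MSE, and R² → 1 as the residuals shrink relative to the signal variance, which is why an R² of 0.98 together with an MSE below 0.01 indicates a close tracking of the measured state parameters.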
Funding: Supported by the National Key R&D Program of China (2024YFA1802300), the Major Science and Technology Program of Hainan Province (ZDKJ2021037), the Regional Innovation and Development Joint Fund of the National Natural Science Foundation of China (U24A20677), the Hainan Province Science and Technology Special Fund (ZDYF2020117, ZDY2024SHFZ143), the Hainan Province Science and Technology Project (LCXY202102, LCYX202203, LCYX202301, LCYx202502), the Innovative Research Project for Postgraduate Students in Hainan Medical University (HYYB2021A05), the Hainan Province Clinical Medical Center, and the specific research fund of The Innovation Platform for Academicians of Hainan Province (YSPTZX202310).
Abstract: Patients affected by monogenic diseases impose a substantial burden on both themselves and their families. The primary preventive measure, invasive prenatal diagnosis, carries a risk of miscarriage and cannot be performed early in pregnancy. Hence, there is a need for non-invasive prenatal testing (NIPT) for monogenic diseases. By utilizing enriched cell-free fetal DNA (cffDNA) from maternal plasma, we refine an NIPT method that combines targeted region capture technology, haplotyping, and analysis of informative site frequency. We apply this method to 93 clinical families at genetic risk for thalassemia, encompassing various genetic variant types, to establish a workflow and evaluate its efficiency. Our approach requires only 3 ng of DNA input to generate 0.1 Gb of informative target genomic data and leverages a minimum of 3% cffDNA. The method has a 98.16% success rate and 100% concordance with conventional invasive methods. Furthermore, we demonstrate the ability to analyze fetal genotypes as early as eight weeks of gestation. This study establishes an optimized NIPT method for the early detection of various thalassemia disorders during pregnancy, with high accuracy and potential for clinical application in prenatal diagnosis.
Funding: Supported in part by the Tianjin Natural Science Foundation Project (24JCZDJC01000) and the Fundamental Research Funds for the Central Universities of China (No. 3122025091).
Abstract: As deep learning (DL) models are increasingly deployed in sensitive domains (e.g., healthcare), concerns over privacy and security have intensified. Conventional penetration testing frameworks, such as OWASP and NIST, are effective for traditional networks and applications but lack the capabilities to address DL-specific threats such as model inversion, membership inference, and adversarial attacks. This review provides a comprehensive analysis of penetration testing for the privacy of DL models, examining the shortfalls of existing frameworks, tools, and testing methodologies. Through systematic evaluation of the existing literature and empirical analysis, we identify three major contributions: (i) a critical assessment of traditional penetration testing frameworks' inadequacies when applied to DL-specific privacy vulnerabilities, (ii) a comprehensive evaluation of state-of-the-art privacy-preserving methods and their integration with penetration testing workflows, and (iii) the development of a structured framework that combines reconnaissance, threat modeling, exploitation, and post-exploitation phases specifically tailored for DL privacy assessment. Moreover, this review evaluates popular solutions such as the IBM Adversarial Robustness Toolbox and TensorFlow Privacy, alongside privacy-preserving techniques (e.g., Differential Privacy, Homomorphic Encryption, and Federated Learning), which we systematically analyze through comparative studies of their effectiveness, computational overhead, and practical deployment constraints. While these techniques offer promising safeguards, their adoption is hindered by accuracy loss, performance overheads, and the rapid evolution of attack strategies. Our findings reveal that no single existing solution provides comprehensive protection, which leads us to propose a hybrid approach that strategically combines multiple privacy-preserving mechanisms. The findings of this survey underscore an urgent need for automated, regulation-compliant penetration testing frameworks specifically tailored to DL systems. We argue for hybrid privacy solutions that combine multiple protective mechanisms to ensure both model accuracy and privacy. Building on our analysis, we present actionable recommendations for developing adaptive penetration testing strategies that incorporate automated vulnerability assessment, continuous monitoring, and regulatory compliance verification.
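One of the privacy-preserving techniques surveyed, differential privacy, is easy to illustrate with the classic Laplace mechanism: a numeric query is released with noise scaled to sensitivity/ε, trading accuracy for privacy as ε shrinks. The count, sensitivity, and ε values below are illustrative only, not drawn from the review.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value + Laplace(0, sensitivity/epsilon) noise."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

rng = np.random.default_rng(3)
true_count = 128     # e.g., patients with some condition in a dataset
sensitivity = 1.0    # a counting query changes by at most 1 per individual

errors = {}
for epsilon in (0.1, 1.0, 10.0):
    noisy = np.array([laplace_mechanism(true_count, sensitivity, epsilon, rng)
                      for _ in range(1000)])
    errors[epsilon] = float(np.mean(np.abs(noisy - true_count)))
    print(f"epsilon={epsilon:>4}: mean absolute error = {errors[epsilon]:.2f}")
```

The mean absolute error tracks the noise scale 1/ε, which is the accuracy-loss trade-off the review identifies as a barrier to adoption: strong privacy (small ε) makes released statistics markedly less precise.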
Funding: Supported by the Fundamental Research Funds for the Central Universities in UIBE under Grant No. CXTD14-05.
Abstract: Smartphone-based electrocardiograms (ECGs) are increasingly utilized for monitoring atrial fibrillation (AF) recurrence after catheter ablation (CA), referred to as smartphone AF burden (SMURDEN). SMURDEN data often exhibit complex patterns of zero AF episodes, which may arise either from a true AF-free status (structural zeros) or from AF episodes missed due to intermittent monitoring (random zeros). Such a mixture of AF-free and at-risk patients can lead to zero-inflation in the data. The authors propose a novel zero-inflation test for binomial regression models to identify recurrence-free AF populations. Unlike traditional approaches requiring fully specified zero-inflated models, the proposed test utilizes a weighted average of the discrepancies between observed and expected zero proportions, with weights determined by the binomial sizes. A closed-form test statistic is developed, and its asymptotic distribution is derived using estimating equations. Simulations demonstrate superior performance over existing methods, and real-world AF monitoring data validate the practical utility of the proposed test.
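The core quantity behind such a test can be sketched numerically: under a fitted binomial model, observation i is zero with probability (1 − p̂)^{nᵢ}, so an excess of observed zeros over this expectation signals zero-inflation. The simulation below is an illustrative approximation, not the authors' exact statistic or its asymptotic calibration; the sample sizes, rates, and structural-zero fraction are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
sizes = rng.integers(5, 20, size=300)   # binomial sizes n_i (monitoring counts)
p_true = 0.15
y = rng.binomial(sizes, p_true)

# Inject structural zeros to mimic a truly AF-free subpopulation
structural = rng.random(300) < 0.25
y[structural] = 0

p_hat = y.sum() / sizes.sum()            # pooled estimate of the event rate
expected_zero = (1 - p_hat) ** sizes     # P(Y_i = 0) under the binomial model
weights = sizes / sizes.sum()            # size-determined weights

# Weighted average of (observed zero indicator - expected zero probability)
discrepancy = np.sum(weights * ((y == 0).astype(float) - expected_zero))
print(f"p_hat = {p_hat:.3f}, weighted zero discrepancy = {discrepancy:.3f}")
```

With 25% structural zeros injected, the observed zeros systematically exceed the binomial expectation and the weighted discrepancy is positive; the paper's contribution is turning this discrepancy into a closed-form statistic with a derived asymptotic null distribution.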
Funding: Financial support from the Shanghai Science and Technology Committee (Grant No. 24DZ3100504) and the National Key Research and Development Program of China (Grant No. 2022YFC2807102).
Abstract: To investigate the use of the three-point bending method and to supplement the corresponding strength data of compacted snow for transportation-related applications in cold regions, compacted snow beams with an average density of 592 kg·m⁻³ were fabricated and tested at three distinct flexural strain rates, corresponding to the ductile, transitional, and brittle behavior of compacted snow, respectively. The flexural strength, ranging from 0.518 to 0.933 MPa, peaks at the ductile-to-brittle transition, while the flexural modulus, varying between 48.97 and 287.72 MPa, increases with strain rate within the tested range. At the strain rate corresponding to brittle failure, both mechanical properties of compacted snow exhibit higher values than those of natural snow previously tested by the authors. Notably, the flexural strain rate at the ductile-to-brittle transition for compacted snow identified in this study is comparable to those previously reported for natural snow under uniaxial tension. Additionally, the obtained strength data are thoroughly compared with the existing literature, with detailed discussion provided. The loading rates associated with typical failure modes of compacted snow under bending, together with the obtained strength values, provide methodological guidance and reference data for future in situ testing of compacted snow structures.
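The reported quantities follow from the standard three-point-bending formulas for a rectangular beam: flexural strength σ = 3FL/(2bd²) at the peak load F, and flexural modulus E = L³m/(4bd³), with m the initial slope of the load-deflection curve, L the support span, b the width, and d the depth. The beam dimensions and loads below are hypothetical, chosen only so the results land near the paper's reported ranges; they are not the snow-beam data.

```python
def flexural_strength(F, L, b, d):
    """Peak bending stress sigma = 3*F*L / (2*b*d^2), SI units (Pa)."""
    return 3 * F * L / (2 * b * d ** 2)

def flexural_modulus(slope, L, b, d):
    """Flexural modulus E = L^3 * m / (4*b*d^3), with m in N/m."""
    return L ** 3 * slope / (4 * b * d ** 3)

# Hypothetical snow beam: 0.3 m span, 0.1 m x 0.1 m cross-section
F_peak = 1500.0   # N, assumed failure load
slope = 1.5e6     # N/m, assumed initial load-deflection slope

sigma = flexural_strength(F_peak, 0.3, 0.1, 0.1)  # Pa
E = flexural_modulus(slope, 0.3, 0.1, 0.1)        # Pa
print(f"flexural strength = {sigma / 1e6:.3f} MPa, "
      f"flexural modulus = {E / 1e6:.2f} MPa")
# -> flexural strength = 0.675 MPa, flexural modulus = 101.25 MPa
```

These example values of 0.675 MPa and 101.25 MPa fall inside the strength (0.518-0.933 MPa) and modulus (48.97-287.72 MPa) ranges the study reports, showing the order of load a beam of this size would carry.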