In clinical research, subgroup analysis can help identify patient groups that respond better or worse to specific treatments, improve therapeutic effect and safety, and is of great significance in precision medicine. This article considers subgroup analysis methods for longitudinal data containing multiple covariates and biomarkers. We divide subgroups based on whether a linear combination of these biomarkers exceeds a predetermined threshold, and assess the heterogeneity of treatment effects across subgroups using the interaction between subgroups and exposure variables. Quantile regression is used to better characterize the global distribution of the response variable, and sparsity penalties are imposed to achieve variable selection of covariates and biomarkers. The effectiveness of the proposed methodology for both variable selection and parameter estimation is verified through random simulations. Finally, we demonstrate the application of the method by analyzing data from the PA.3 trial, further illustrating its practicality.
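A minimal sketch of the pinball (check) loss that underlies quantile regression as described above: minimizing the summed loss over a constant yields the corresponding sample quantile, which is why the method characterizes the response distribution beyond the mean. This is an illustration only, not the paper's penalized longitudinal estimator.

```python
def pinball(u, tau):
    """Check loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1 if u < 0 else 0))

def quantile_fit(y, tau):
    """Grid-search a constant c (over the data points) minimizing the summed loss."""
    return min(y, key=lambda c: sum(pinball(yi - c, tau) for yi in y))

y = list(range(1, 11))            # toy responses 1..10
median_like = quantile_fit(y, 0.5)  # near the sample median
upper = quantile_fit(y, 0.9)        # near the 0.9 sample quantile
```

In the full model, this loss replaces the squared error in the regression objective, and an L1-type penalty on the coefficients induces the sparsity used for variable selection.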
Laser-induced breakdown spectroscopy (LIBS) has become a widely used atomic spectroscopic technique for rapid coal analysis. However, the vast amount of spectral information in LIBS contains signal uncertainty, which can affect its quantification performance. In this work, we propose a hybrid variable selection method to improve the performance of LIBS quantification. Important variables are first identified using Pearson's correlation coefficient, mutual information, the least absolute shrinkage and selection operator (LASSO), and random forest, and are then filtered and combined with empirical variables related to fingerprint elements of coal ash content. Subsequently, these variables are fed into a partial least squares regression (PLSR). Additionally, in some models, certain variables unrelated to ash content are removed manually to study the impact of variable deselection on model performance. The proposed hybrid strategy was tested on three LIBS datasets for quantitative analysis of coal ash content and compared with the corresponding data-driven baseline method. It is significantly better than variable selection based on empirical knowledge alone and in most cases outperforms the baseline method. On all three datasets, the hybrid strategy combining empirical knowledge and data-driven algorithms achieved the lowest root mean square error of prediction (RMSEP) values of 1.605, 3.478, and 1.647, respectively, significantly lower than the values of 1.959, 3.718, and 2.181 obtained from multiple linear regression using only 12 empirical variables. The LASSO-PLSR model with empirical support and 20 selected variables exhibited a significantly improved performance after variable deselection, with RMSEP values dropping from 1.635, 3.962, and 1.647 to 1.483, 3.086, and 1.567, respectively. These results demonstrate that using empirical knowledge to support data-driven variable selection can be a viable approach to improve the accuracy and reliability of LIBS quantification.
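A minimal sketch of the first stage of such a hybrid strategy: rank spectral variables by absolute Pearson correlation with the target (here standing in for ash content) and keep the top-k before fitting a regression. The toy data and choice of k are illustrative, not the study's.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# toy "spectra": variable 0 tracks the target, variable 1 is background-like
target = [1.0, 2.0, 3.0, 4.0, 5.0]
variables = [
    [2.1, 4.0, 6.2, 7.9, 10.1],   # informative line intensity
    [0.3, 0.1, 0.4, 0.1, 0.5],    # unrelated background channel
]
ranked = sorted(range(len(variables)),
                key=lambda j: -abs(pearson(variables[j], target)))
top_k = ranked[:1]   # indices of selected variables
```

In the full pipeline these selections would be merged with the empirically chosen fingerprint-element variables and passed to a PLSR model.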
The variable selection of high-dimensional nonparametric nonlinear systems aims to select the contributing variables or to eliminate the redundant ones. For such a system, however, identifying whether a variable contributes is not easy. Therefore, based on the Fourier spectrum of the density-weighted derivative, a novel variable selection approach is developed that does not suffer from the curse of dimensionality and improves identification accuracy. Furthermore, a necessary and sufficient condition for testing whether a variable contributes is provided. The proposed approach does not require strong assumptions on the distribution, such as an elliptical distribution. A simulation study verifies the effectiveness of the novel variable selection algorithm.
Earth's internal core and crustal magnetic fields, as measured by geomagnetic satellites such as MSS-1 (Macao Science Satellite-1) and Swarm, are vital for understanding core dynamics and tectonic evolution. To model these internal magnetic fields accurately, data selection based on specific criteria is often employed to minimize the influence of rapidly changing current systems in the ionosphere and magnetosphere. However, the quantitative impact of various data selection criteria on internal geomagnetic field modeling is not well understood. This study aims to address this issue and provide a reference for constructing and applying geomagnetic field models. First, we collect the latest MSS-1 and Swarm satellite magnetic data and summarize widely used data selection criteria in geomagnetic field modeling. Second, we briefly describe the method used to co-estimate the core, crustal, and large-scale magnetospheric fields from satellite magnetic data. Finally, we conduct a series of field modeling experiments with different data selection criteria to quantitatively estimate their influence. Our numerical experiments confirm that without selecting data from dark regions and geomagnetically quiet times, the resulting internal field differences at the Earth's surface can range from tens to hundreds of nanotesla (nT). Additionally, we find that the uncertainties introduced into field models by different data selection criteria are significantly larger than the measurement accuracy of modern geomagnetic satellites. These uncertainties should be considered when using the constructed magnetic field models for scientific research and applications.
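A minimal sketch of the kind of "quiet-time, dark-region" data selection discussed above, assuming each record carries a Kp index, a Dst index, and a solar-zenith angle. The field names and thresholds here are illustrative stand-ins, not the study's exact criteria.

```python
def select_quiet_dark(records, kp_max=2.0, dst_max=30.0, sza_min=100.0):
    """Keep records from geomagnetically quiet times in dark regions."""
    return [r for r in records
            if r["kp"] <= kp_max          # low planetary activity index
            and abs(r["dst"]) <= dst_max  # no strong ring-current disturbance
            and r["sza"] >= sza_min]      # Sun well below the horizon

records = [
    {"kp": 1.0, "dst": -10.0, "sza": 120.0},  # quiet, dark -> kept
    {"kp": 4.0, "dst": -60.0, "sza": 130.0},  # disturbed   -> dropped
    {"kp": 0.7, "dst": -5.0,  "sza": 80.0},   # sunlit      -> dropped
]
kept = select_quiet_dark(records)
```

Varying the thresholds in such a filter is precisely the experiment the study quantifies: stricter criteria reduce external-field contamination but thin the data coverage.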
An internal state variable (ISV) model was established from the experimental results of hot plane strain compression (PSC) to predict microstructure evolution during hot spinning of ZK61 alloy. The effects of the internal variables were considered in the ISV model, and its parameters were optimized by a genetic algorithm. After validation, the ISV model was used to simulate the evolution of grain size (GS) and the dynamic recrystallization (DRX) fraction during hot spinning via Abaqus and its subroutine VUMAT. Comparison of the simulated and experimental results proved the application of the ISV model to be reliable. Meanwhile, the strength of the thin-walled spun ZK61 tube increased from 303 to 334 MPa due to grain refinement by DRX and texture strengthening. In addition, some ultrafine grains (0.5 μm) that play an important role in mechanical properties were formed by the proliferation, movement, and entanglement of dislocations during the spinning process.
The rapid rise of cyberattacks and the gradual failure of traditional defense systems and approaches led to the use of artificial intelligence (AI) techniques, such as machine learning (ML) and deep learning (DL), to build more efficient and reliable intrusion detection systems (IDSs). However, the advent of larger IDS datasets has negatively impacted the performance and computational complexity of AI-based IDSs. Many researchers have used data preprocessing techniques such as feature selection and normalization to overcome these issues. While most reported the success of these preprocessing techniques at a shallow level, very few studies have examined their effects on a wider scale. Furthermore, the performance of an IDS model depends not only on the preprocessing techniques but also on the dataset and the ML/DL algorithm used, which most existing studies give little emphasis. Thus, this study provides an in-depth analysis of the effects of feature selection and normalization on IDS models built using three IDS datasets, NSL-KDD, UNSW-NB15, and CSE-CIC-IDS2018, and various AI algorithms. A wrapper-based approach, which tends to give superior performance, and min-max normalization were used for feature selection and normalization, respectively. Numerous IDS models were implemented using the full and feature-selected copies of the datasets, with and without normalization. The models were evaluated using popular IDS evaluation metrics, and intra- and inter-model comparisons were performed between models and with state-of-the-art works. Random forest (RF) models performed best on the NSL-KDD and UNSW-NB15 datasets, with accuracies of 99.86% and 96.01%, respectively, whereas an artificial neural network (ANN) achieved the best accuracy of 95.43% on the CSE-CIC-IDS2018 dataset. The RF models also achieved excellent performance compared to recent works. The results show that normalization and feature selection positively affect IDS modeling. Furthermore, while feature selection benefits simpler algorithms (such as RF), normalization is more useful for complex algorithms such as ANNs and deep neural networks (DNNs), and algorithms such as Naive Bayes are unsuitable for IDS modeling. The study also found that the UNSW-NB15 and CSE-CIC-IDS2018 datasets are more complex and more suitable for building and evaluating modern-day IDSs than the NSL-KDD dataset. Our findings suggest that prioritizing robust algorithms such as RF, alongside complex models such as ANN and DNN, can significantly enhance IDS performance. These insights provide valuable guidance for managers seeking to develop more effective security measures by focusing on high detection rates and low false alert rates.
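A minimal sketch of the min-max normalization step used before training, which rescales every feature to [0, 1]. The toy feature values are illustrative and independent of any particular IDS dataset.

```python
def min_max_fit(column):
    """Learn the per-feature range on the training data."""
    return min(column), max(column)

def min_max_transform(column, lo, hi):
    """Rescale values to [0, 1]; constant features map to 0."""
    span = hi - lo
    if span == 0:
        return [0.0 for _ in column]
    return [(v - lo) / span for v in column]

durations = [0.0, 5.0, 20.0, 100.0]   # e.g., flow durations in seconds
lo, hi = min_max_fit(durations)
scaled = min_max_transform(durations, lo, hi)
```

The fit/transform split matters in practice: the range is learned on the training split only and reused on the test split, so the evaluation does not leak test-set statistics.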
In covert communications, joint jammer selection and power optimization are important for improving performance. However, existing schemes usually assume a warden with a known location and perfect channel state information (CSI), which is difficult to achieve in practice. It is therefore important to investigate covert communications against a warden with an uncertain location and imperfect CSI, which makes it difficult for legitimate transceivers to estimate the warden's detection probability. First, to remove the uncertainty caused by the unknown warden location, the optimal detection position (OPTDP) of the warden, which provides the best detection performance (i.e., the worst case for covert communication), is derived. Then, to further avoid the impractical assumption of perfect CSI, the covert throughput is maximized using only channel distribution information. Given this OPTDP-based worst case, the jammer selection, jamming power, transmission power, and transmission rate are jointly optimized to maximize the covert throughput (OPTDP-JP). To solve this coupled problem, a heuristic algorithm based on the maximum distance ratio (H-MAXDR) is proposed to provide a sub-optimal solution. First, following the analysis of the covert throughput, the node with the maximum distance ratio (i.e., the ratio of the jammer's distances to the receiver and to the warden) is selected as the friendly jammer (MAXDR). Then, the optimal transmission and jamming powers are derived, followed by the optimal transmission rate obtained via the bisection method. Numerical and simulation results show that although the warden's location is unknown, by assuming its OPTDP the proposed OPTDP-JP always satisfies the covertness constraint. In addition, with an uncertain warden and imperfect CSI, the covert throughput provided by OPTDP-JP is 80% higher than that of existing schemes when the covertness constraint is 0.9, demonstrating the effectiveness of OPTDP-JP.
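A minimal sketch of the bisection step used to pin down the transmission rate: given a monotone feasibility function g(R) that is positive while the constraints leave slack and negative beyond, bisection locates the zero crossing. The function g below is a stand-in, not the paper's actual constraint function.

```python
def bisect_rate(g, lo, hi, tol=1e-9):
    """Find R in [lo, hi] with g(R) ~ 0, assuming g(lo) > 0 > g(hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid          # still feasible: push the rate up
        else:
            hi = mid          # infeasible: back off
    return 0.5 * (lo + hi)

# stand-in feasibility function: slack runs out at R = sqrt(2)
rate = bisect_rate(lambda r: 2.0 - r * r, 0.0, 2.0)
```

Bisection is attractive here because it only needs monotonicity of the slack in the rate, not a closed-form expression for the optimum.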
The principle of genomic selection (GS) entails estimating breeding values (BVs) by summing all the SNP polygenic effects. Visible/near-infrared spectroscopy (VIS/NIRS) wavelength and abundance values can directly reflect the concentrations of chemical substances, and the measurement of meat traits by VIS/NIRS resembles the processing of genomic selection data, summing all 'polygenic effects' associated with spectral feature peaks. It is therefore meaningful to investigate incorporating VIS/NIRS information into GS models to establish an efficient, low-cost breeding model. In this study, we measured 6 meat quality traits in 359 Duroc×Landrace×Yorkshire pigs from Guangxi Zhuang Autonomous Region, China, and genotyped them with high-density SNP chips. According to the completeness of the information for the target population, we propose 4 breeding strategies for different scenarios: Ⅰ, only spectral and genotypic data exist for the target population; Ⅱ, only spectral data exist; Ⅲ, only spectral and genotypic data exist, but with different prediction processes; and Ⅳ, only spectral and phenotypic data exist. The 4 scenarios were used to evaluate genomic estimated breeding value (GEBV) accuracy when VIS/NIR spectral information is added. In 5-fold cross-validation, the genetic algorithm showed remarkable potential for preselecting feature wavelengths. The breeding efficiency of Strategies Ⅱ, Ⅲ, and Ⅳ was superior to that of traditional GS for most traits, with GEBV prediction accuracy improved by 32.2%, 40.8%, and 15.5% on average, respectively. Among them, the prediction accuracy of Strategy Ⅱ for fat (%) improved by 50.7% compared to traditional GS. The GEBV prediction accuracy of Strategy Ⅰ was nearly identical to that of traditional GS, with a fluctuation range of less than 7%. Moreover, the breeding cost of the 4 strategies was lower than that of traditional GS methods, with Strategy Ⅳ the lowest as it does not require genotyping. Our findings demonstrate that GS methods based on VIS/NIRS data have significant predictive potential and are worthy of further research, providing a valuable reference for the development of effective and affordable breeding strategies.
Large Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged to address these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, such as self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, shrinking the adapter of a larger layer preserves fine-tuning accuracy better than shrinking that of a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
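A minimal sketch of why larger layers tolerate smaller adapter ranks: a LoRA adapter on a (d_out × d_in) weight matrix adds r·(d_in + d_out) trainable parameters, so the same rank is a far smaller relative budget on a wide MLP projection than on a self-attention projection. The dimensions below are illustrative (typical of a 7B-class model), not the study's exact configuration.

```python
def lora_params(d_in, d_out, r):
    """Trainable parameters of a rank-r LoRA pair (A: r x d_in, B: d_out x r)."""
    return r * (d_in + d_out)

attn_full = 4096 * 4096      # attention projection weight count
mlp_full = 4096 * 11008      # MLP projection weight count (wider layer)

attn_lora = lora_params(4096, 4096, r=8)
mlp_lora = lora_params(4096, 11008, r=8)

attn_ratio = attn_lora / attn_full   # adapter size relative to the layer
mlp_ratio = mlp_lora / mlp_full      # smaller fraction on the wider layer
```

At equal rank, the adapter covers a smaller fraction of the wider MLP layer, consistent with the finding that such layers can absorb rank reductions with less accuracy loss.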
This article constructs statistical selection procedures for exponential populations that may differ only in their threshold parameters. The scale parameters of the populations are assumed common and known. The independent samples drawn from the populations are taken to be of the same size. The best population is defined as the one associated with the largest threshold parameter. In case more than one population shares the largest threshold, one of these is tagged at random and denoted the best. Two procedures are developed for choosing a subset of the populations such that the chosen subset contains the best population with a prescribed probability. One procedure is based on the sample minimum values drawn from the populations, and another is based on the sample means. An "Indifference Zone" (IZ) selection procedure is also developed based on the sample minimum values. The IZ procedure asserts that the population with the largest test statistic (e.g., the sample minimum) is the best population. With this approach, the sample size is chosen so as to guarantee that the probability of a correct selection is no less than a prescribed probability in the parameter region where the largest threshold is at least a prescribed amount larger than the remaining thresholds. Numerical examples are given, and the R code for all calculations is provided in the Appendices.
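A minimal sketch of a subset-selection rule of the kind described above, based on sample minima: retain every population whose sample minimum falls within a constant d of the largest observed minimum. In the article the constant would be calibrated from the known common scale, the sample size, and the prescribed probability of correct selection; here d is simply given as an assumed value.

```python
def select_subset(sample_minima, d):
    """Indices retained by the rule X_min_i >= max_j X_min_j - d."""
    best = max(sample_minima)
    return [i for i, m in enumerate(sample_minima) if m >= best - d]

# sample minima observed from three exponential populations (illustrative)
minima = [5.0, 4.9, 3.0]
subset = select_subset(minima, d=0.5)   # populations 0 and 1 survive
```

The guarantee that the retained subset contains the best population with the prescribed probability rests entirely on how d is calibrated, which is the substance of the article's derivations.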
Machine learning (ML) is increasingly applied to medical image processing with appropriate learning paradigms. Applications include analyzing images of various organs, such as the brain, lung, and eye, to identify specific flaws or diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Many extracted image features are irrelevant and increase computation time. Therefore, this article uses an analytical learning paradigm to design a congruent feature selection method that selects the most relevant image features. The learning paradigm is trained using similarity- and correlation-based features over different textural intensities and pixel distributions. Pixel similarity across the various distribution patterns with high indexes is recommended for disease diagnosis. The correlation based on intensity and distribution is then analyzed to improve feature selection congruency, and the most congruent pixels are sorted in descending order of selection, identifying better regions than the distribution alone. The learning paradigm is then trained using intensity- and region-based similarity to maximize the chances of selection, improving the probability of feature selection regardless of texture and medical image patterns. This process enhances the performance of ML applications across different medical image processing tasks. The proposed method improves accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared to other models on the selected dataset. The mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared to the same models and dataset.
Landslide susceptibility prediction (LSP) is significantly affected by uncertainty in the selection of landslide-related conditioning factors. However, most of the literature only performs comparative studies on a particular conditioning factor selection method rather than systematically studying this uncertainty. This study therefore aims to systematically explore how various commonly used conditioning factor selection methods influence LSP and, on this basis, to propose a principle with universal applicability for the optimal selection of conditioning factors. An'yuan County in southern China is taken as an example, considering 431 landslides and 29 types of conditioning factors. Five commonly used factor selection methods, namely correlation analysis (CA), linear regression (LR), principal component analysis (PCA), rough set (RS), and artificial neural network (ANN), are applied to select optimal factor combinations from the original 29 conditioning factors. The factor selection results are then used as inputs to four types of common machine learning models to construct 20 types of combined models, such as CA-multilayer perceptron and CA-random forest. Additionally, multifactor-based multilayer perceptron and random forest models, whose conditioning factors are selected according to the proposed principle of "accurate data, rich types, clear significance, feasible operation and avoiding duplication", are constructed for comparison. Finally, the LSP uncertainties are evaluated by accuracy, susceptibility index distribution, etc. Results show that: (1) multifactor-based models generally have higher LSP performance and lower uncertainties than factor selection-based models; (2) the influence of different machine learning models on LSP accuracy is greater than that of different factor selection methods. In conclusion, the commonly used conditioning factor selection methods above are not ideal for improving LSP performance and may complicate the LSP process. In contrast, a satisfactory combination of conditioning factors can be constructed according to the proposed principle.
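A minimal sketch of one step behind the "avoiding duplication" part of the principle: compute pairwise Pearson correlations among candidate conditioning factors and drop one of any pair whose absolute correlation exceeds a threshold. The factor values and threshold are illustrative, not taken from the An'yuan data.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def drop_duplicated(factors, threshold=0.9):
    """Greedily keep factors that are not highly correlated with any kept one."""
    kept = []
    for j, col in enumerate(factors):
        if all(abs(pearson(col, factors[k])) <= threshold for k in kept):
            kept.append(j)
    return kept

slope = [10.0, 20.0, 30.0, 40.0]
slope_deg = [10.1, 20.1, 30.1, 40.1]   # near-duplicate of slope
rainfall = [3.0, 1.0, 4.0, 1.0]
kept = drop_duplicated([slope, slope_deg, rainfall])   # slope_deg removed
```

Such screening addresses duplication only; the other clauses of the principle (data accuracy, type richness, clear significance, feasibility) remain expert judgments.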
We consider a single-server constant retrial queue in which a state-dependent service policy is used to control the service rate. Customers arrive according to a Poisson process, while service times and retrial times are exponentially distributed. Whenever the server is available, it admits retrial customers into service on a first-come, first-served basis. The service rate adjusts in real time based on the retrial queue length. An iterative algorithm is proposed to numerically solve the individually optimal problem in the fully observable scenario. Furthermore, we investigate the impact of parameters on the socially optimal threshold. The effectiveness of the results is illustrated by two examples.
Fiber-reinforced composites are an ideal material for the lightweight design of aerospace structures. In recent years especially, with the rapid development of composite additive manufacturing technology, the design optimization of variable-stiffness fiber-reinforced composite laminates has attracted widespread attention from academia and industry. In aerospace composite structures, numerous cutout panels and shells serve as access points for maintaining electrical, fuel, and hydraulic systems. Traditional subtractive drilling of fiber-reinforced composite laminates inevitably causes interlayer delamination, fiber fracture, and burring of the laminate. Continuous fiber additive manufacturing technology offers the potential for integrated design optimization and manufacturing with high structural performance. Considering the integration of design and manufacturability in continuous fiber additive manufacturing, this paper proposes linear and nonlinear filtering strategies based on the Normal Distribution Fiber Optimization (NDFO) material interpolation scheme to overcome the challenge that discrete fiber optimization results are difficult to apply directly to continuous fiber additive manufacturing. With minimization of structural compliance as the objective, the proposed approach provides a strategy for achieving continuity of discrete fiber paths in the variable-stiffness design optimization of composite laminates with regular and irregular holes. In the variable-stiffness design optimization model, the number of candidate fiber laying angles in the NDFO material interpolation scheme is taken as the design variable. The sensitivity of structural compliance with respect to the number of candidate fiber laying angles is obtained using the analytical sensitivity analysis method. Based on the proposed variable-stiffness design optimization method for complex perforated composite laminates, the numerical examples consider typical non-perforated and perforated composite laminates with circular, square, and irregular holes, and systematically discuss the effects of the number of candidate discrete fiber laying angles, the discrete fiber continuous filtering strategies, and the filter radius on structural compliance, continuity, and manufacturability. The optimized discrete fiber angles of variable-stiffness laminates are converted into continuous fiber laying paths using a streamlined process for continuous fiber additive manufacturing. Meanwhile, the optimized non-perforated and perforated MBB beams, after discrete-to-continuous fiber treatment, are manufactured using continuous fiber co-extrusion additive manufacturing technology to verify the effectiveness of the proposed variable-stiffness fiber optimization framework.
In this study, we examine the problem of sliced inverse regression (SIR), a widely used method for sufficient dimension reduction (SDR). SIR finds reduced-dimensional versions of multivariate predictors by replacing them with a minimally adequate collection of their linear combinations without loss of information. Recently, regularization methods have been proposed for SIR to incorporate a sparse structure of predictors for better interpretability. However, existing methods use convex relaxation to bypass the sparsity constraint, which may not lead to the best subset and tends to include irrelevant variables when predictors are correlated. In this study, we approach sparse SIR as a nonconvex optimization problem and tackle the sparsity constraint directly by establishing the optimality conditions and iteratively solving them by means of the splicing technique. Without employing convex relaxation of the sparsity or orthogonality constraints, our algorithm exhibits superior empirical merits, as evidenced by extensive numerical studies. Computationally, it is much faster than the relaxed approach for the natural sparse SIR estimator. Statistically, it surpasses existing methods in accuracy for central subspace estimation and best subset selection, and sustains high performance even with correlated predictors.
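A minimal sketch of classical (unpenalized) SIR on toy data, to show the mechanics the sparse estimator builds on: standardize the predictors, slice the observations by the response, average the predictors within slices, and take the leading eigenvector of the slice-mean covariance via power iteration. The paper's sparse, splicing-based estimator is not reproduced here.

```python
import math

def standardize(col):
    n = len(col)
    m = sum(col) / n
    s = math.sqrt(sum((v - m) ** 2 for v in col) / n)
    return [(v - m) / s for v in col]

n = 20
x1 = [float(i) for i in range(n)]
x2 = [float(i % 2) for i in range(n)]   # unrelated to the response
y = x1[:]                               # y depends on x1 only
z1, z2 = standardize(x1), standardize(x2)

# four equal-size slices, ordered by the response
order = sorted(range(n), key=lambda i: y[i])
H, size = 4, n // 4
M = [[0.0, 0.0], [0.0, 0.0]]            # weighted covariance of slice means
for h in range(H):
    idx = order[h * size:(h + 1) * size]
    m1 = sum(z1[i] for i in idx) / size
    m2 = sum(z2[i] for i in idx) / size
    w = size / n
    M[0][0] += w * m1 * m1
    M[0][1] += w * m1 * m2
    M[1][0] += w * m2 * m1
    M[1][1] += w * m2 * m2

# power iteration for the leading SIR direction
v = [1.0, 1.0]
for _ in range(200):
    w0 = M[0][0] * v[0] + M[0][1] * v[1]
    w1 = M[1][0] * v[0] + M[1][1] * v[1]
    norm = math.hypot(w0, w1)
    v = [w0 / norm, w1 / norm]
```

The recovered direction loads almost entirely on the informative predictor; the sparse variant would additionally force the irrelevant loading to exactly zero.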
The complete convergence of weighted sums of sequences of independent, identically distributed random variables under sublinear expectation space is studied. Using moment inequalities and truncation methods, we establish equivalent conditions for the complete convergence of such weighted sums. The results extend the corresponding results in probability space to sequences of independent, identically distributed random variables under sublinear expectation space.
Mitochondria play a key role in lipid metabolism, and mitochondrial DNA (mtDNA) mutations are thus considered to affect obesity susceptibility by altering oxidative phosphorylation and mitochondrial function. In this study, we investigate mtDNA variants that may affect obesity risk in 2877 Han Chinese individuals from 3 independent populations. Association analysis of 16 basal mtDNA haplogroups with body mass index, waist circumference, and waist-to-hip ratio reveals that only haplogroup M7 is significantly negatively correlated with all three adiposity-related anthropometric traits in the overall cohort, verified in a single population, the Zhengzhou population. Furthermore, sub-haplogroup analysis suggests that M7b1a1 is the haplogroup most likely associated with a decreased obesity risk, and the variant T12811C (causing Y159H in ND5) harbored in M7b1a1 may be the most likely candidate for altering mitochondrial function. Specifically, we find that proportionally more nonsynonymous mutations accumulate in M7b1a1 carriers, indicating that M7b1a1 is either under positive selection or subject to a relaxation of selective constraints. We also find that nuclear variants, especially in DACT2 and PIEZO1, may functionally interact with M7b1a1.
With the application of 2.5D Woven Variable Thickness Composites (2.5DWVTC) in aviation and other fields, strength failure in this type of composite has become a focal issue. First, a three-step modeling approach is proposed to rapidly construct full-scale meso-finite element models for Outer Reduction Yarn Woven Composites (ORYWC) and Inner Reduction Yarn Woven Composites (IRYWC). Then, six independent damage variables are identified: yarn fiber tension/compression, yarn matrix tension/compression, and resin matrix tension/compression. These variables are used to establish the constitutive equation of the woven composites, considering the coupling effects of microscopic damage. Finally, combined with the Hashin and von Mises failure criteria, the strength prediction model is implemented in ANSYS using the APDL language to simulate the strength failure process of 2.5DWVTC. The results show that the predicted stiffness and strength values of various parts of the ORYWC and IRYWC agree well with the relevant test results.
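A minimal sketch of the plane-stress Hashin fiber criteria referenced above: fiber tension combines the axial stress with in-plane shear, while fiber compression uses the axial stress alone, and failure is predicted when the index reaches 1. The strength values are illustrative, not the paper's material data.

```python
def hashin_fiber(sigma11, tau12, Xt, Xc, S12):
    """Return (failure_index, mode); failure is predicted when index >= 1."""
    if sigma11 >= 0.0:
        # fiber tension: axial stress plus in-plane shear contribution
        return (sigma11 / Xt) ** 2 + (tau12 / S12) ** 2, "fiber tension"
    # fiber compression: axial stress only
    return (sigma11 / Xc) ** 2, "fiber compression"

Xt, Xc, S12 = 2000.0, 1200.0, 80.0    # MPa, illustrative yarn strengths
idx_safe, _ = hashin_fiber(1000.0, 20.0, Xt, Xc, S12)   # below failure
idx_fail, mode = hashin_fiber(2000.0, 0.0, Xt, Xc, S12)  # at the tensile limit
```

In the full model, analogous matrix-mode expressions and the von Mises criterion for the resin pockets drive the degradation of the six damage variables during the simulation.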
In this paper,by utilizing the Marcinkiewicz-Zygmund inequality and Rosenthal-type inequality of negatively superadditive dependent(NSD)random arrays and truncated method,we investigate the complete f-moment convergen...In this paper,by utilizing the Marcinkiewicz-Zygmund inequality and Rosenthal-type inequality of negatively superadditive dependent(NSD)random arrays and truncated method,we investigate the complete f-moment convergence of NSD random variables.We establish and improve a general result on the complete f-moment convergence for Sung’s type randomly weighted sums of NSD random variables under some general assumptions.As an application,we show the complete consistency for the randomly weighted estimator in a nonparametric regression model based on NSD errors.展开更多
Non-orthogonal multiple access (NOMA) is a promising technology for next-generation wireless communication networks. Its benefits can be further enhanced through deployment in conjunction with multiple-input multiple-output (MIMO) systems. Antenna selection plays a critical role in MIMO-NOMA systems, as it can significantly reduce the cost and complexity associated with radio frequency chains. This paper considers antenna selection for downlink MIMO-NOMA networks with a multiple-antenna base station (BS) and multiple-antenna user equipments (UEs). An iterative antenna selection scheme is developed for a two-user system, and a power estimation method is proposed to determine the initial power required by this scheme. The algorithm is then extended to a general multiuser NOMA system. Numerical results demonstrate that the proposed antenna selection algorithm achieves near-optimal performance with much lower computational complexity in both two-user and multiuser scenarios.
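The complexity argument above can be made concrete by looking at the brute-force alternative the iterative scheme avoids. The sketch below is not the paper's algorithm; it is a minimal exhaustive-search baseline over illustrative per-antenna channel gains (all names and values are ours):

```python
from itertools import combinations

def exhaustive_antenna_selection(gains, n_select):
    """Brute-force benchmark: choose the antenna subset maximizing the sum of
    channel gains. An iterative scheme approximates such a search at far lower
    complexity; this sketch only shows the baseline being avoided."""
    best = max(combinations(range(len(gains)), n_select),
               key=lambda subset: sum(gains[i] for i in subset))
    return sorted(best)

gains = [0.3, 1.2, 0.7, 2.1]  # illustrative per-antenna channel gains
chosen = exhaustive_antenna_selection(gains, 2)
```

The cost of this baseline grows combinatorially with the number of antennas, which is exactly why near-optimal iterative selection matters in practice.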
Funding: Supported by the Natural Science Foundation of Fujian Province (2022J011177, 2024J01903) and the Key Project of the Fujian Provincial Education Department (JZ230054).
Abstract: In clinical research, subgroup analysis can help identify patient groups that respond better or worse to specific treatments, improving therapeutic effect and safety, and it is of great significance in precision medicine. This article considers subgroup analysis methods for longitudinal data containing multiple covariates and biomarkers. We divide subgroups based on whether a linear combination of these biomarkers exceeds a predetermined threshold, and assess the heterogeneity of treatment effects across subgroups using the interaction between subgroups and exposure variables. Quantile regression is used to better characterize the global distribution of the response variable, and sparsity penalties are imposed to achieve variable selection of covariates and biomarkers. The effectiveness of the proposed methodology for both variable selection and parameter estimation is verified through simulation studies. Finally, we demonstrate the application of the method by analyzing data from the PA.3 trial, further illustrating its practicality.
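Quantile regression, mentioned above, minimizes the check (pinball) loss rather than squared error, which is what makes it sensitive to the whole distribution instead of only the mean. A minimal self-contained illustration (function names are ours, not from the paper): the constant minimizing the pinball loss at level tau is the tau-th sample quantile, so the median minimizer is unaffected by an outlier.

```python
def pinball_loss(residuals, tau):
    """Check (pinball) loss at level tau: tau*r for r >= 0, (tau-1)*r otherwise."""
    return sum(tau * r if r >= 0 else (tau - 1) * r for r in residuals)

def best_constant(y, tau, grid):
    """Constant c in `grid` minimizing the pinball loss; for tau = 0.5 this
    recovers the sample median."""
    return min(grid, key=lambda c: pinball_loss([yi - c for yi in y], tau))

y = [1.0, 2.0, 3.0, 4.0, 100.0]
# The median (tau = 0.5) minimizer ignores the outlier, unlike the mean (22.0).
m = best_constant(y, 0.5, y)
```

In a full quantile regression the constant is replaced by a linear predictor, and a sparsity penalty on its coefficients yields the variable selection described in the abstract.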
Funding: Financial support from the National Natural Science Foundation of China (No. 62205172), the Huaneng Group Science and Technology Research Project (No. HNKJ22-H105), the Tsinghua University Initiative Scientific Research Program, and the International Joint Mission on Climate Change and Carbon Neutrality.
Abstract: Laser-induced breakdown spectroscopy (LIBS) has become a widely used atomic spectroscopic technique for rapid coal analysis. However, the vast amount of spectral information in LIBS contains signal uncertainty, which can affect its quantification performance. In this work, we propose a hybrid variable selection method to improve the performance of LIBS quantification. Important variables are first identified using Pearson's correlation coefficient, mutual information, the least absolute shrinkage and selection operator (LASSO), and random forest, and are then filtered and combined with empirical variables related to fingerprint elements of coal ash content. These variables are subsequently fed into a partial least squares regression (PLSR). Additionally, in some models, certain variables unrelated to ash content are removed manually to study the impact of variable deselection on model performance. The proposed hybrid strategy was tested on three LIBS datasets for quantitative analysis of coal ash content and compared with the corresponding data-driven baseline method. It is significantly better than variable selection based only on empirical knowledge and in most cases outperforms the baseline method. On all three datasets, the hybrid strategy combining empirical knowledge and data-driven algorithms achieved the lowest root mean square error of prediction (RMSEP) values of 1.605, 3.478, and 1.647, respectively, significantly lower than the values of 1.959, 3.718, and 2.181 obtained from multiple linear regression using only 12 empirical variables. The LASSO-PLSR model with empirical support and 20 selected variables exhibited significantly improved performance after variable deselection, with RMSEP values dropping from 1.635, 3.962, and 1.647 to 1.483, 3.086, and 1.567, respectively. These results demonstrate that using empirical knowledge to support data-driven variable selection can be a viable approach to improving the accuracy and reliability of LIBS quantification.
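The core of the hybrid strategy is a union of data-driven and knowledge-driven variable sets. The sketch below shows only that combination step on synthetic data, using a Pearson-correlation filter as the data-driven stage; the threshold, the empirical indices, and all names are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "spectra": 6 variables, only columns 0 and 1 carry signal.
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=200)

# Data-driven stage: keep variables whose |Pearson r| with y exceeds a threshold.
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
data_driven = set(np.where(np.abs(r) > 0.3)[0])

# Empirical stage: indices of lines assumed a priori relevant (hypothetical here,
# standing in for fingerprint-element lines of ash content).
empirical = {1, 5}

# Hybrid selection: the union of both variable sets, fed to the regressor.
selected = sorted(data_driven | empirical)
```

In the paper the data-driven stage also uses mutual information, LASSO, and random forest, and the combined variables go into a PLSR model rather than the toy regression implied here.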
Funding: Project supported by the National Key Research and Development Program of China (No. 2021YFB3400700) and the National Natural Science Foundation of China (Nos. 12422201, 12072188, 12121002, and 12372017).
Abstract: Variable selection for high-dimensional nonparametric nonlinear systems aims to select the contributing variables and to eliminate the redundant ones. For such a system, however, identifying whether a variable contributes is not easy. Therefore, based on the Fourier spectrum of the density-weighted derivative, a novel variable selection approach is developed, which does not suffer from the curse of dimensionality and improves identification accuracy. Furthermore, a necessary and sufficient condition for testing whether a variable contributes is provided. The proposed approach does not require strong assumptions on the distribution, such as an elliptical distribution. A simulation study verifies the effectiveness of the novel variable selection algorithm.
Funding: Supported by the National Natural Science Foundation of China (42250101) and the Macao Foundation.
Abstract: Earth's internal core and crustal magnetic fields, as measured by geomagnetic satellites such as MSS-1 (Macao Science Satellite-1) and Swarm, are vital for understanding core dynamics and tectonic evolution. To model these internal magnetic fields accurately, data selection based on specific criteria is often employed to minimize the influence of rapidly changing current systems in the ionosphere and magnetosphere. However, the quantitative impact of various data selection criteria on internal geomagnetic field modeling is not well understood. This study aims to address this issue and provide a reference for constructing and applying geomagnetic field models. First, we collect the latest MSS-1 and Swarm satellite magnetic data and summarize widely used data selection criteria in geomagnetic field modeling. Second, we briefly describe the method to co-estimate the core, crustal, and large-scale magnetospheric fields using satellite magnetic data. Finally, we conduct a series of field modeling experiments with different data selection criteria to quantitatively estimate their influence. Our numerical experiments confirm that without selecting data from dark regions and geomagnetically quiet times, the resulting internal field differences at the Earth's surface can range from tens to hundreds of nanotesla (nT). Additionally, we find that the uncertainties introduced into field models by different data selection criteria are significantly larger than the measurement accuracy of modern geomagnetic satellites. These uncertainties should be considered when using the constructed magnetic field models for scientific research and applications.
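The "dark and geomagnetically quiet" selection described above is, mechanically, a simple record filter. The sketch below shows that shape only; the field names and the thresholds (sun elevation below -10 degrees for darkness, Kp index at most 2 for quiet time) are our illustrative assumptions, not criteria taken from this study.

```python
def select_quiet_dark(records, max_kp=2.0, max_sun_elevation=-10.0):
    """Keep only records from dark regions during geomagnetically quiet times.
    Thresholds are illustrative placeholders for typical selection criteria."""
    return [rec for rec in records
            if rec["kp"] <= max_kp and rec["sun_elevation"] < max_sun_elevation]

records = [
    {"kp": 1.0, "sun_elevation": -25.0},  # dark and quiet -> kept
    {"kp": 4.0, "sun_elevation": -30.0},  # geomagnetically disturbed -> rejected
    {"kp": 0.7, "sun_elevation": 15.0},   # sunlit -> rejected
]
kept = select_quiet_dark(records)
```

The study's point is that the choice of such thresholds changes the recovered internal field by more than the satellites' measurement accuracy, so they should be reported alongside any model.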
Funding: Supported by the National Natural Science Foundation of China (No. 51905123), the Major Scientific and Technological Innovation Program of Shandong Province, China (Nos. 2020CXGC010303, 2022ZLGX04), and the Key R&D Programme of Shandong Province, China (No. 2022JMRH0308).
Abstract: An internal state variable (ISV) model was established according to the experimental results of hot plane strain compression (PSC) to predict the microstructure evolution during hot spinning of ZK61 alloy. The effects of the internal variables were considered in this ISV model, and the parameters were optimized by a genetic algorithm. After validation, the ISV model was used to simulate the evolution of grain size (GS) and the dynamic recrystallization (DRX) fraction during hot spinning via Abaqus and its subroutine VUMAT. Comparison of the simulated results with the experimental results proved the application of the ISV model to be reliable. Meanwhile, the strength of the thin-walled spun ZK61 tube increased from 303 to 334 MPa due to grain refinement by DRX and texture strengthening. Besides, some ultrafine grains (0.5 μm) that played an important role in the mechanical properties were formed due to the proliferation, movement, and entanglement of dislocations during the spinning process.
Abstract: The rapid rise of cyberattacks and the gradual failure of traditional defense systems and approaches have led to the use of artificial intelligence (AI) techniques, such as machine learning (ML) and deep learning (DL), to build more efficient and reliable intrusion detection systems (IDSs). However, the advent of larger IDS datasets has negatively impacted the performance and computational complexity of AI-based IDSs. Many researchers have used data preprocessing techniques such as feature selection and normalization to overcome these issues. While most of these researchers reported the success of such preprocessing techniques at a shallow level, very few studies have examined their effects on a wider scale. Furthermore, the performance of an IDS model depends not only on the preprocessing techniques used but also on the dataset and the ML/DL algorithm, which most existing studies give little emphasis. Thus, this study provides an in-depth analysis of the effects of feature selection and normalization on IDS models built using three IDS datasets, NSL-KDD, UNSW-NB15, and CSE-CIC-IDS2018, and various AI algorithms. A wrapper-based approach, which tends to give superior performance, and the min-max normalization method were used for feature selection and normalization, respectively. Numerous IDS models were implemented using the full and feature-selected copies of the datasets, with and without normalization. The models were evaluated using popular IDS evaluation metrics, and intra- and inter-model comparisons were performed between the models and with state-of-the-art works. Random forest (RF) models performed best on the NSL-KDD and UNSW-NB15 datasets with accuracies of 99.86% and 96.01%, respectively, whereas an artificial neural network (ANN) achieved the best accuracy of 95.43% on the CSE-CIC-IDS2018 dataset. The RF models also achieved excellent performance compared to recent works. The results show that normalization and feature selection positively affect IDS modeling. Furthermore, while feature selection benefits simpler algorithms (such as RF), normalization is more useful for complex algorithms like ANNs and deep neural networks (DNNs), and algorithms such as Naive Bayes are unsuitable for IDS modeling. The study also found that the UNSW-NB15 and CSE-CIC-IDS2018 datasets are more complex and more suitable for building and evaluating modern-day IDSs than the NSL-KDD dataset. Our findings suggest that prioritizing robust algorithms like RF, alongside complex models such as ANN and DNN, can significantly enhance IDS performance. These insights provide valuable guidance for managers to develop more effective security measures by focusing on high detection rates and low false alert rates.
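Of the preprocessing steps named above, min-max normalization is the simplest to state precisely: each feature column is rescaled to [0, 1] by its own minimum and maximum. A minimal sketch (names are ours; real pipelines must reuse the training-set minimum and maximum on the test set to avoid leakage):

```python
def min_max_normalize(column):
    """Scale a feature column to [0, 1]; a constant column maps to all zeros."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0 for _ in column]
    return [(v - lo) / (hi - lo) for v in column]

scaled = min_max_normalize([10.0, 20.0, 40.0])
```

This rescaling is what helps gradient-trained models like ANNs and DNNs, as the abstract reports, while tree ensembles such as RF are largely insensitive to it.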
Funding: Supported by the CAS Project for Young Scientists in Basic Research under Grant YSBR-035 and the Jiangsu Provincial Key Research and Development Program under Grant BE2021013-2.
Abstract: In covert communications, joint jammer selection and power optimization are important for improving performance. However, existing schemes usually assume a warden with a known location and perfect channel state information (CSI), which is difficult to achieve in practice. To be more practical, it is important to investigate covert communications against a warden with an uncertain location and imperfect CSI, which makes it difficult for legitimate transceivers to estimate the warden's detection probability. First, the uncertainty caused by the unknown warden location must be removed, so the optimal detection position (OPTDP) of the warden, which provides the best detection performance (i.e., the worst case for covert communication), is derived. Then, to avoid the impractical assumption of perfect CSI, the covert throughput is maximized using only the channel distribution information. Given this OPTDP-based worst case, the jammer selection, the jamming power, the transmission power, and the transmission rate are jointly optimized to maximize the covert throughput (OPTDP-JP). To solve this coupled problem, a heuristic algorithm based on the maximum distance ratio (H-MAXDR) is proposed to provide a sub-optimal solution. First, according to the analysis of the covert throughput, the node with the maximum distance ratio (i.e., the ratio of the jammer's distance to the receiver to its distance to the warden) is selected as the friendly jammer (MAXDR). Then, the optimal transmission and jamming powers are derived, followed by the optimal transmission rate obtained via the bisection method. Numerical and simulation results show that although the warden's location is unknown, by assuming the warden is at the OPTDP, the proposed OPTDP-JP always satisfies the covertness constraint. In addition, with an uncertain warden and imperfect CSI, the covert throughput provided by OPTDP-JP is 80% higher than that of existing schemes when the covertness constraint is 0.9, showing the effectiveness of OPTDP-JP.
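The final step of H-MAXDR, finding the optimal transmission rate by bisection, relies on the standard interval-halving search for a root of a monotone condition. The sketch below is a generic bisection routine on a toy function, not the paper's throughput expression; the function solved and all names are our illustrative choices.

```python
def bisect_root(f, lo, hi, tol=1e-10):
    """Find a root of a continuous f on [lo, hi], assuming f(lo) and f(hi)
    have opposite signs; the interval is halved until it is shorter than tol."""
    assert f(lo) * f(hi) <= 0, "root must be bracketed"
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

# Toy example: solve x^2 - 2 = 0 on [0, 2], i.e., compute sqrt(2).
root = bisect_root(lambda x: x * x - 2.0, 0.0, 2.0)
```

In the paper's setting, the bracketing function would encode the first-order condition of the covert throughput in the transmission rate, which is one-dimensional once the powers and the jammer are fixed, so bisection converges quickly.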
Funding: Supported by the National Natural Science Foundation of China (32160782 and 32060737).
Abstract: The principle of genomic selection (GS) entails estimating breeding values (BVs) by summing all the SNP polygenic effects. Visible/near-infrared spectroscopy (VIS/NIRS) wavelength and abundance values can directly reflect the concentrations of chemical substances, and the measurement of meat traits by VIS/NIRS is similar to the processing of genomic selection data in that all 'polygenic effects' associated with spectral feature peaks are summed. Therefore, it is meaningful to investigate the incorporation of VIS/NIRS information into GS models to establish an efficient and low-cost breeding model. In this study, we measured 6 meat quality traits in 359 Duroc×Landrace×Yorkshire pigs from Guangxi Zhuang Autonomous Region, China, and genotyped them with high-density SNP chips. According to the completeness of the information for the target population, we proposed 4 breeding strategies applied to different scenarios: I, only spectral and genotypic data exist for the target population; II, only spectral data exist; III, only spectral and genotypic data exist, but with a different prediction process; and IV, only spectral and phenotypic data exist. The 4 scenarios were used to evaluate the accuracy of the genomic estimated breeding value (GEBV) when VIS/NIR spectral information is added. In the 5-fold cross-validation, the genetic algorithm showed remarkable potential for the preselection of feature wavelengths. The breeding efficiency of Strategies II, III, and IV was superior to that of traditional GS for most traits, and the GEBV prediction accuracy was improved by 32.2, 40.8, and 15.5% on average, respectively. Among them, the prediction accuracy of Strategy II for fat (%) improved by as much as 50.7% compared to traditional GS. The GEBV prediction accuracy of Strategy I was nearly identical to that of traditional GS, with a fluctuation range of less than 7%. Moreover, the breeding cost of the 4 strategies was lower than that of traditional GS methods, with Strategy IV being the lowest, as it does not require genotyping. Our findings demonstrate that GS methods based on VIS/NIRS data have significant predictive potential and are worthy of further research, providing a valuable reference for the development of effective and affordable breeding strategies.
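The GS principle stated at the start, a breeding value as the sum of estimated per-SNP effects, can be sketched with a ridge-type (SNP-BLUP-style) estimator on synthetic genotypes. Everything below (dimensions, effect sizes, the shrinkage parameter) is an illustrative assumption, not data or a model from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 300
X = rng.integers(0, 3, size=(n, p)).astype(float)   # SNP genotypes coded 0/1/2
true_effects = np.zeros(p)
true_effects[:5] = [1.0, -0.8, 0.6, 0.5, -0.5]      # 5 causal SNPs (synthetic)
y = X @ true_effects + 0.5 * rng.normal(size=n)     # phenotype = genetics + noise

lam = 10.0  # ridge shrinkage parameter (illustrative value)
# Ridge / SNP-BLUP-style estimate of the per-SNP effects.
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
# GEBV: sum of all estimated SNP effects for each animal.
gebv = X @ beta_hat
```

The paper's idea is that VIS/NIRS feature peaks can play the role of the columns of X, so the same "sum of many small effects" machinery applies to spectra, with genotyping optionally dropped (Strategy IV).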
Funding: Supported by the National Key R&D Program of China (No. 2021YFB0301200) and the National Natural Science Foundation of China (No. 62025208).
Abstract: Large-scale language models (LLMs) have achieved significant breakthroughs in natural language processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-efficient fine-tuning (PEFT) techniques, such as low-rank adaptation (LoRA), and parameter quantization methods have emerged as solutions that optimize memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, shrinking the trainable parameters of a larger layer preserves fine-tuning accuracy better than doing so in a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
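The adapter rank discussed in findings (1) and (3) controls the trainable-parameter budget of a LoRA adapter: a frozen weight W is augmented by a low-rank update (alpha/r)·B·A, so only r·(d_in + d_out) parameters are trained instead of d_in·d_out. A minimal numpy sketch of that arithmetic (the dimensions and alpha are illustrative, not from the study):

```python
import numpy as np

def lora_update(W, A, B, alpha):
    """Effective weight with a rank-r LoRA adapter: W + (alpha / r) * B @ A."""
    r = A.shape[0]
    return W + (alpha / r) * (B @ A)

d_out, d_in, r = 64, 64, 4
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))   # frozen pre-trained weight
A = rng.normal(size=(r, d_in))       # trainable down-projection
B = np.zeros((d_out, r))             # trainable up-projection, zero-initialised
W_eff = lora_update(W, A, B, alpha=8.0)

full_params = d_out * d_in            # parameters if W itself were trained
adapter_params = r * (d_in + d_out)   # parameters actually trained with LoRA
```

With B initialised to zero the effective weight starts exactly at W, and lowering r shrinks `adapter_params` linearly, which is the knob whose effect on large versus small layers the study quantifies.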
Abstract: This article constructs statistical selection procedures for exponential populations that may differ only in their threshold parameters. The scale parameters of the populations are assumed common and known. The independent samples drawn from the populations are taken to be of the same size. The best population is defined as the one associated with the largest threshold parameter. In case more than one population shares the largest threshold, one of these is tagged at random and denoted the best. Two procedures are developed for choosing a subset of the populations with the property that the chosen subset contains the best population with a prescribed probability. One procedure is based on the sample minimum values drawn from the populations, and another is based on the sample means. An "Indifference Zone" (IZ) selection procedure is also developed based on the sample minimum values. The IZ procedure asserts that the population with the largest test statistic (e.g., the sample minimum) is the best population. With this approach, the sample size is chosen so as to guarantee that the probability of a correct selection is no less than a prescribed probability in the parameter region where the largest threshold is at least a prescribed amount larger than the remaining thresholds. Numerical examples are given, and the R code for all calculations is provided in the Appendices.
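The subset-selection rule based on sample minima has a simple mechanical form: keep every population whose sample minimum falls within a calibration constant of the largest observed minimum. The sketch below shows that shape only; in the article the constant is chosen from the exponential distribution theory to guarantee the prescribed coverage probability, whereas here it is just an input (and the article's own code is in R, not Python).

```python
def subset_by_minima(samples, c):
    """Keep population i if its sample minimum is within c of the largest
    sample minimum. `c` stands in for the procedure's calibration constant,
    which in the article is derived to meet the coverage guarantee."""
    minima = [min(s) for s in samples]
    best = max(minima)
    return [i for i, m in enumerate(minima) if m >= best - c]

samples = [
    [5.2, 5.9, 6.1],   # sample minimum 5.2
    [4.1, 4.4, 4.0],   # sample minimum 4.0
    [5.0, 5.6, 5.3],   # sample minimum 5.0
]
chosen = subset_by_minima(samples, c=0.5)
```

Populations 0 and 2 survive because their minima are within 0.5 of the largest minimum 5.2, while population 1 is screened out; the analogous mean-based procedure replaces `min(s)` with the sample mean.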
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Large Group Research Project under grant number RGP2/421/45; supported via funding from Prince Sattam bin Abdulaziz University, project number PSAU/2024/R/1446; supported by the Researchers Supporting Project Number (UM-DSR-IG-2023-07), Almaarefa University, Riyadh, Saudi Arabia; and supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2021R1F1A1055408).
Abstract: Machine learning (ML) is increasingly applied to medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, and eye, to identify specific flaws/diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Most of the extracted image features are irrelevant and increase computation time. Therefore, this article uses an analytical learning paradigm to design a congruent feature selection method that selects the most relevant image features. This process trains the learning paradigm using similarity- and correlation-based features over different textural intensities and pixel distributions. The similarity between pixels over the various distribution patterns with high indexes is recommended for disease diagnosis. Later, the correlation based on intensity and distribution is analyzed to improve the congruency of the feature selection. The more congruent pixels are thus sorted in descending order of selection, which identifies better regions than the distribution. The learning paradigm is then trained using intensity- and region-based similarity to maximize the chances of selection. Therefore, the probability of feature selection, regardless of the textures and medical image patterns, is improved. This process enhances the performance of ML applications for different medical image processing tasks. The proposed method improves the accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared to other models for the selected dataset. The mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared to the same models and dataset.
Funding: Funded by the Natural Science Foundation of China (Grant Nos. 42377164 and 41972280) and the Badong National Observation and Research Station of Geohazards (Grant No. BNORSG-202305).
Abstract: Landslide susceptibility prediction (LSP) is significantly affected by the uncertainty involved in selecting landslide-related conditioning factors. However, most of the literature only performs comparative studies on a certain conditioning factor selection method rather than systematically studying this uncertainty issue. This study therefore aims to systematically explore how various commonly used conditioning factor selection methods influence LSP and, on this basis, to propose a principle with universal applicability for the optimal selection of conditioning factors. An'yuan County in southern China is taken as an example, considering 431 landslides and 29 types of conditioning factors. Five commonly used factor selection methods, namely correlation analysis (CA), linear regression (LR), principal component analysis (PCA), rough set (RS), and artificial neural network (ANN), are applied to select the optimal factor combinations from the original 29 conditioning factors. The factor selection results are then used as inputs to four types of common machine learning models to construct 20 types of combined models, such as CA-multilayer perceptron and CA-random forest. Additionally, multifactor-based multilayer perceptron and random forest models, whose conditioning factors are selected according to the proposed principle of "accurate data, rich types, clear significance, feasible operation and avoiding duplication", are constructed for comparison. Finally, the LSP uncertainties are evaluated by accuracy, susceptibility index distribution, etc. The results show that: (1) multifactor-based models generally have higher LSP performance and lower uncertainties than factor-selection-based models; (2) the influence of different machine learning models on LSP accuracy is greater than that of different factor selection methods. In conclusion, the above commonly used conditioning factor selection methods are not ideal for improving LSP performance and may complicate the LSP process. In contrast, a satisfactory combination of conditioning factors can be constructed according to the proposed principle.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 11971486).
Abstract: We consider a single-server constant retrial queue in which a state-dependent service policy is used to control the service rate. Customer arrivals follow a Poisson process, while service times and retrial times are exponentially distributed. Whenever the server is available, it admits retrial customers into service on a first-come first-served basis. The service rate adjusts in real time based on the retrial queue length. An iterative algorithm is proposed to numerically solve the personal optimization problem in the fully observable scenario. Furthermore, we investigate the impact of the parameters on the socially optimal threshold. The effectiveness of the results is illustrated by two examples.
Funding: Financial support for this research was provided by the National Natural Science Foundation of China (Nos. 12272301, 12002278, U1906233), the Guangdong Basic and Applied Basic Research Foundation, China (Nos. 2023A1515011970, 2024A1515010256), the Dalian City Support for Innovation and Entrepreneurship Projects for High-Level Talents, China (2021RD16), and the Key R&D Project of CSCEC, China (No. CSCEC-2020-Z-4).
Abstract: Fiber-reinforced composites are an ideal material for the lightweight design of aerospace structures. Especially in recent years, with the rapid development of composite additive manufacturing technology, the variable-stiffness design optimization of fiber-reinforced composite laminates has attracted widespread attention from scholars and industry. In these aerospace composite structures, numerous cutout panels and shells serve as access points for maintaining electrical, fuel, and hydraulic systems. Traditional subtractive drilling of fiber-reinforced composite laminates inevitably faces the problems of interlayer delamination, fiber fracture, and burring of the laminate. Continuous fiber additive manufacturing technology offers the potential for integrated design optimization and manufacturing with high structural performance. Considering the integration of design and manufacturability in continuous fiber additive manufacturing, this paper proposes linear and nonlinear filtering strategies based on the Normal Distribution Fiber Optimization (NDFO) material interpolation scheme to overcome the challenge that discrete fiber optimization results are difficult to apply directly to continuous fiber additive manufacturing. With the minimization of structural compliance as the objective function, the proposed approach provides a strategy to achieve continuity of discrete fiber paths in the variable-stiffness design optimization of composite laminates with regular and irregular holes. In the variable-stiffness design optimization model, the number of candidate fiber laying angles in the NDFO material interpolation scheme is treated as the design variable. The sensitivity of structural compliance with respect to the number of candidate fiber laying angles is obtained using the analytical sensitivity analysis method. Based on the proposed variable-stiffness design optimization method for complex perforated composite laminates, the numerical examples consider typical non-perforated and perforated composite laminates with circular, square, and irregular holes, and systematically discuss the effects of the number of candidate discrete fiber laying angles, the discrete fiber continuous filtering strategies, and the filter radius on structural compliance, continuity, and manufacturability. The optimized discrete fiber angles of the variable-stiffness laminates are converted into continuous fiber laying paths using a streamlined process for continuous fiber additive manufacturing. Meanwhile, the optimized non-perforated and perforated MBB beams, after discrete fiber continuous treatment, are manufactured using continuous fiber co-extrusion additive manufacturing technology to verify the effectiveness of the variable-stiffness fiber optimization framework proposed in this paper.
Abstract: In this study, we examine the problem of sliced inverse regression (SIR), a widely used method for sufficient dimension reduction (SDR). It was designed to find reduced-dimensional versions of multivariate predictors by replacing them with a minimally adequate collection of their linear combinations without loss of information. Recently, regularization methods have been proposed in SIR to incorporate a sparse structure of predictors for better interpretability. However, existing methods use convex relaxation to bypass the sparsity constraint, which may not lead to the best subset and, in particular, tends to include irrelevant variables when predictors are correlated. In this study, we approach sparse SIR as a nonconvex optimization problem and directly tackle the sparsity constraint by establishing the optimality conditions and iteratively solving them by means of the splicing technique. Without employing convex relaxation on the sparsity constraint and the orthogonality constraint, our algorithm exhibits superior empirical merits, as evidenced by extensive numerical studies. Computationally, our algorithm is much faster than the relaxed approach for the natural sparse SIR estimator. Statistically, our algorithm surpasses existing methods in terms of accuracy for central subspace estimation and best subset selection, and it sustains high performance even with correlated predictors.
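For context on what is being sparsified, the classical (unregularized) SIR estimator is itself short: whiten the predictors, slice the sample on the response, and take leading eigenvectors of the between-slice covariance of the whitened slice means. The sketch below implements that baseline on synthetic data; it contains none of the paper's sparsity constraint or splicing steps, and all names are ours.

```python
import numpy as np

def sir_first_direction(X, y, n_slices=10):
    """Classical SIR: leading eigenvector of the between-slice covariance of
    the whitened predictors, mapped back to the original scale."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / n
    L = np.linalg.cholesky(np.linalg.inv(cov))  # L @ L.T = cov^{-1}
    Z = Xc @ L                                   # whitened predictors
    order = np.argsort(y)                        # slice on the response
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)                  # slice mean of Z
        M += (len(idx) / n) * np.outer(m, m)     # weighted between-slice cov
    _, V = np.linalg.eigh(M)
    beta = L @ V[:, -1]                          # back to the original scale
    return beta / np.linalg.norm(beta)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
beta_true = np.array([1.0, -1.0, 0.0, 0.0, 0.0]) / np.sqrt(2.0)
y = (X @ beta_true) ** 3 + 0.1 * rng.normal(size=500)
beta_hat = sir_first_direction(X, y)
```

The estimated direction is dense in general; the paper's contribution is to force exact sparsity on such directions without the convex relaxation that tends to admit irrelevant, correlated predictors.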
Funding: Supported by the Doctoral Scientific Research Starting Foundation of Jingdezhen Ceramic University (Grant No. 102/01003002031) and the Re-accompanying Funding Project of Academic Achievements of Jingdezhen Ceramic University (Grant Nos. 215/20506277, 215/20506341).
Abstract: The complete convergence for weighted sums of sequences of independent, identically distributed random variables under sublinear expectation space is studied. By moment inequalities and truncation methods, we establish equivalent conditions for the complete convergence of weighted sums of sequences of independent, identically distributed random variables under sublinear expectation space. The results extend the corresponding results in probability space to sequences of independent, identically distributed random variables under sublinear expectation space.
Funding: Supported by the National Natural Science Foundation of China (32270670, 32288101, 32271186, and 32200482), the National Basic Research Program of China (2015FY111700), and the CAMS Innovation Fund for Medical Sciences (2019-I2M-5-066).
Abstract: Mitochondria play a key role in lipid metabolism, and mitochondrial DNA (mtDNA) mutations are thus considered to affect obesity susceptibility by altering oxidative phosphorylation and mitochondrial function. In this study, we investigate mtDNA variants that may affect obesity risk in 2877 Han Chinese individuals from 3 independent populations. The association analysis of 16 basal mtDNA haplogroups with body mass index, waist circumference, and waist-to-hip ratio reveals that only haplogroup M7 is significantly negatively correlated with all three adiposity-related anthropometric traits in the overall cohort, as verified by the analysis of a single population, i.e., the Zhengzhou population. Furthermore, subhaplogroup analysis suggests that M7b1a1 is the haplogroup most likely associated with a decreased obesity risk, and the variant T12811C (causing Y159H in ND5) harbored in M7b1a1 may be the most likely candidate for altering mitochondrial function. Specifically, we find that proportionally more nonsynonymous mutations accumulate in M7b1a1 carriers, indicating that M7b1a1 is either under positive selection or subject to a relaxation of selective constraints. We also find that nuclear variants, especially in DACT2 and PIEZO1, may functionally interact with M7b1a1.
Funding: Supported by the National Science and Technology Major Project, China (No. 2017-IV-0007-0044), the National Natural Science Foundation of China (Nos. 52175142 and 52305170), and the Natural Science Foundation of Sichuan Province, China (No. 2022NSFSC1885).
Abstract: With the application of 2.5D Woven Variable Thickness Composites (2.5DWVTC) in aviation and other fields, strength failure in this type of composite has become a focal point. First, a three-step modeling approach is proposed to rapidly construct full-scale meso-finite-element models for Outer Reduction Yarn Woven Composites (ORYWC) and Inner Reduction Yarn Woven Composites (IRYWC). Then, six independent damage variables are identified: yarn fiber tension/compression, yarn matrix tension/compression, and resin matrix tension/compression. These variables are used to establish the constitutive equation of the woven composites, considering the coupling effects of microscopic damage. Finally, combined with the Hashin failure criterion and the von Mises failure criterion, the strength prediction model is implemented in ANSYS using the APDL language to simulate the strength failure process of 2.5DWVTC. The results show that the predicted stiffness and strength values of various parts of ORYWC and IRYWC agree well with the relevant test results.
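The Hashin criterion named above has a standard plane-stress form for the fiber modes; a minimal sketch (not the paper's APDL implementation, and with hypothetical strength parameters XT, XC, S12) might look like:

```python
def hashin_fiber_failure(s11, s12, XT, XC, S12):
    """Plane-stress Hashin fiber-mode checks.
    s11: longitudinal stress, s12: in-plane shear stress;
    XT/XC: tensile/compressive fiber strengths, S12: shear strength.
    Returns (mode, index); an index >= 1.0 indicates failure onset."""
    if s11 >= 0.0:  # fiber tension mode
        idx = (s11 / XT) ** 2 + (s12 / S12) ** 2
        return "fiber_tension", idx
    # fiber compression mode
    idx = (s11 / XC) ** 2
    return "fiber_compression", idx
```

In a finite-element implementation, such checks are evaluated per element and drive the degradation of the corresponding damage variables.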
Funding: Supported by the National Social Science Foundation of China (Grant No. 21BTJ040) and the Project for Outstanding Young People in Universities of Anhui Province (Grant Nos. 2023AH020037 and SLXY2024A001).
Abstract: In this paper, by utilizing the Marcinkiewicz–Zygmund inequality, a Rosenthal-type inequality for negatively superadditive dependent (NSD) random arrays, and a truncation method, we investigate the complete f-moment convergence of NSD random variables. We establish and improve a general result on the complete f-moment convergence for Sung's type randomly weighted sums of NSD random variables under some general assumptions. As an application, we show the complete consistency of the randomly weighted estimator in a nonparametric regression model based on NSD errors.
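As an illustration only (the paper's exact normalizations and conditions are not reproduced here), complete $f$-moment convergence of partial sums $S_n$ with positive constants $a_n, c_n$ is typically expressed in a form such as:

```latex
\sum_{n=1}^{\infty} c_n \,\mathbb{E}\!\left[ f\!\left( \Big( \frac{|S_n|}{a_n} - \varepsilon \Big)_{\!+} \right) \right] < \infty
\quad \text{for all } \varepsilon > 0,
% with f(x) = x^q recovering the classical complete q-th moment
% convergence in Chow's sense.
```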
Abstract: Non-orthogonal multiple access (NOMA) is a promising technology for next-generation wireless communication networks. The benefits of this technology can be further enhanced through deployment in conjunction with multiple-input multiple-output (MIMO) systems. Antenna selection plays a critical role in MIMO–NOMA systems, as it can significantly reduce the cost and complexity associated with radio-frequency chains. This paper considers antenna selection for downlink MIMO–NOMA networks with a multiple-antenna base station (BS) and multiple-antenna user equipments (UEs). An iterative antenna selection scheme is developed for a two-user system, and a power estimation method is proposed to determine the initial power required for this selection scheme. The proposed algorithm is then extended to a general multiuser NOMA system. Numerical results demonstrate that the proposed antenna selection algorithm achieves near-optimal performance with much lower computational complexity in both two-user and multiuser scenarios.
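To make the complexity trade-off concrete, a sketch of the exhaustive-search baseline that low-complexity iterative schemes are compared against is shown below (this is not the paper's algorithm; the real-valued channel model and the total-gain objective are simplifying assumptions):

```python
from itertools import combinations

def exhaustive_antenna_selection(H, k):
    """Brute-force baseline: pick the k BS antennas (columns of H) that
    maximize the total channel gain summed over users.
    H[u][a] is the (real-valued, for simplicity) gain from BS antenna a
    to user u. Complexity grows combinatorially in the antenna count,
    which is what iterative selection schemes aim to avoid."""
    n_ant = len(H[0])
    best_subset, best_gain = None, float("-inf")
    for subset in combinations(range(n_ant), k):
        gain = sum(H[u][a] ** 2 for u in range(len(H)) for a in subset)
        if gain > best_gain:
            best_subset, best_gain = subset, gain
    return best_subset, best_gain
```

An iterative scheme instead refines one antenna choice at a time, trading a small performance loss for polynomial complexity.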