Journal Articles
123,760 articles found
1. Assessment of molecular markers and marker-assisted selection for drought tolerance in barley (Hordeum vulgare L.) (Cited by: 2)
Authors: Akmaral Baidyussen, Gulmira Khassanova, Maral Utebayev, Satyvaldy Jatayev, Rystay Kushanova, Sholpan Khalbayeva, Aigul Amangeldiyeva, Raushan Yerzhebayeva, Kulpash Bulatova, Carly Schramm, Peter Anderson, Colin L.D. Jenkins, Kathleen L. Soole, Yuri Shavrukov
Journal of Integrative Agriculture (SCIE, CSCD), 2024, No. 1, pp. 20-38 (19 pages)
This review updates the present status of the field of molecular markers and marker-assisted selection (MAS), using the example of drought tolerance in barley. The accuracy of selected quantitative trait loci (QTLs), candidate genes and suggested markers was assessed in the barley genome cv. Morex. Six common strategies are described for molecular marker development, candidate gene identification and verification, and their possible applications in MAS to improve the grain yield and yield components in barley under drought stress. These strategies are based on the following five principles: (1) molecular markers are designated as genomic 'tags', and their 'prediction' is strongly dependent on their distance from a candidate gene on genetic or physical maps; (2) plants react differently under favourable and stressful conditions or depending on their stage of development; (3) each candidate gene must be verified by confirming its expression in the relevant conditions, e.g., drought; (4) the molecular marker identified must be validated for MAS for tolerance to drought stress and improved grain yield; and (5) the small number of molecular markers realized for MAS in breeding, from among the many studies targeting candidate genes, can be explained by the complex nature of drought stress, and multiple stress-responsive genes in each barley genotype that are expressed differentially depending on many other factors.
Keywords: barley; candidate genes; drought tolerance; gene verification via expression; grain yield; marker-assisted selection (MAS); molecular markers; quantitative trait loci (QTLs); strategy for MAS
2. Effects of feature selection and normalization on network intrusion detection (Cited by: 2)
Authors: Mubarak Albarka Umar, Zhanfang Chen, Khaled Shuaib, Yan Liu
Data Science and Management, 2025, No. 1, pp. 23-39 (17 pages)
The rapid rise of cyberattacks and the gradual failure of traditional defense systems and approaches led to the use of artificial intelligence (AI) techniques (such as machine learning (ML) and deep learning (DL)) to build more efficient and reliable intrusion detection systems (IDSs). However, the advent of larger IDS datasets has negatively impacted the performance and computational complexity of AI-based IDSs. Many researchers have used data preprocessing techniques such as feature selection and normalization to overcome such issues. While most of these researchers reported the success of these preprocessing techniques at a shallow level, very few studies have examined their effects on a wider scale. Furthermore, the performance of an IDS model depends not only on the preprocessing techniques used but also on the dataset and the ML/DL algorithm, a point most existing studies place little emphasis on. Thus, this study provides an in-depth analysis of feature selection and normalization effects on IDS models built using three IDS datasets (NSL-KDD, UNSW-NB15, and CSE-CIC-IDS2018) and various AI algorithms. A wrapper-based approach, which tends to give superior performance, and min-max normalization were used for feature selection and normalization, respectively. Numerous IDS models were implemented using the full and feature-selected copies of the datasets, with and without normalization. The models were evaluated using popular IDS evaluation metrics, and intra- and inter-model comparisons were performed between models and with state-of-the-art works. Random forest (RF) models performed better on the NSL-KDD and UNSW-NB15 datasets, with accuracies of 99.86% and 96.01%, respectively, whereas an artificial neural network (ANN) achieved the best accuracy of 95.43% on the CSE-CIC-IDS2018 dataset. The RF models also achieved excellent performance compared to recent works. The results show that normalization and feature selection positively affect IDS modeling. Furthermore, while feature selection benefits simpler algorithms (such as RF), normalization is more useful for complex algorithms like ANNs and deep neural networks (DNNs), and algorithms such as Naive Bayes are unsuitable for IDS modeling. The study also found that the UNSW-NB15 and CSE-CIC-IDS2018 datasets are more complex and more suitable for building and evaluating modern-day IDSs than the NSL-KDD dataset. Our findings suggest that prioritizing robust algorithms like RF, alongside complex models such as ANN and DNN, can significantly enhance IDS performance. These insights provide valuable guidance for managers to develop more effective security measures by focusing on high detection rates and low false alert rates.
Keywords: cybersecurity; intrusion detection system; machine learning; deep learning; feature selection; normalization
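The min-max normalization named in the abstract is simple to sketch. The function below is an illustrative stand-in, assuming the standard rescale-to-[lo, hi] definition; the packet-size values in the example are invented, not taken from the study's datasets.

```python
def min_max_normalize(values, lo=0.0, hi=1.0):
    """Rescale a numeric feature column to [lo, hi] (min-max normalization)."""
    v_min, v_max = min(values), max(values)
    span = v_max - v_min
    if span == 0:                       # constant column: map everything to lo
        return [lo for _ in values]
    return [lo + (v - v_min) * (hi - lo) / span for v in values]

# Example: a raw packet-size feature rescaled to [0, 1]
print(min_max_normalize([40, 1500, 770]))  # [0.0, 1.0, 0.5]
```

Normalizing each feature independently like this keeps gradient-based learners (ANNs, DNNs) from being dominated by large-magnitude columns, which is consistent with the abstract's finding that normalization helps complex algorithms most.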
3. Influence of different data selection criteria on internal geomagnetic field modeling (Cited by: 4)
Authors: HongBo Yao, JuYuan Xu, Yi Jiang, Qing Yan, Liang Yin, PengFei Liu
Earth and Planetary Physics, 2025, No. 3, pp. 541-549 (9 pages)
Earth's internal core and crustal magnetic fields, as measured by geomagnetic satellites like MSS-1 (Macao Science Satellite-1) and Swarm, are vital for understanding core dynamics and tectonic evolution. To model these internal magnetic fields accurately, data selection based on specific criteria is often employed to minimize the influence of rapidly changing current systems in the ionosphere and magnetosphere. However, the quantitative impact of various data selection criteria on internal geomagnetic field modeling is not well understood. This study aims to address this issue and provide a reference for constructing and applying geomagnetic field models. First, we collect the latest MSS-1 and Swarm satellite magnetic data and summarize widely used data selection criteria in geomagnetic field modeling. Second, we briefly describe the method to co-estimate the core, crustal, and large-scale magnetospheric fields using satellite magnetic data. Finally, we conduct a series of field modeling experiments with different data selection criteria to quantitatively estimate their influence. Our numerical experiments confirm that without selecting data from dark regions and geomagnetically quiet times, the resulting internal field differences at the Earth's surface can range from tens to hundreds of nanotesla (nT). Additionally, we find that the uncertainties introduced into field models by different data selection criteria are significantly larger than the measurement accuracy of modern geomagnetic satellites. These uncertainties should be considered when utilizing constructed magnetic field models for scientific research and applications.
Keywords: Macao Science Satellite-1; Swarm; geomagnetic field modeling; data selection; core field; crustal field
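The dark-region and quiet-time selection described in the abstract amounts to filtering satellite records against thresholds. The sketch below is a minimal illustration; the field names, the Kp ≤ 2 cutoff, and the Sun-elevation cutoff are illustrative assumptions, not the paper's exact criteria.

```python
def select_quiet_dark(records, kp_max=2.0, max_sun_elev_deg=-10.0):
    """Keep satellite records taken during geomagnetically quiet times (low Kp
    index) and in dark regions (Sun sufficiently below the horizon).
    Thresholds are illustrative, not the paper's exact selection criteria."""
    return [r for r in records
            if r["kp"] <= kp_max and r["sun_elev_deg"] <= max_sun_elev_deg]

records = [
    {"kp": 1.3, "sun_elev_deg": -25.0, "B_nT": 48211.0},  # quiet + dark: kept
    {"kp": 4.0, "sun_elev_deg": -30.0, "B_nT": 48050.0},  # disturbed: dropped
    {"kp": 0.7, "sun_elev_deg": 12.0,  "B_nT": 47990.0},  # sunlit: dropped
]
print(len(select_quiet_dark(records)))  # 1
```

Varying `kp_max` and the elevation cutoff is exactly the kind of criteria sweep whose effect on the fitted field the study quantifies.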
4. Joint jammer selection and power optimization in covert communications against a warden with uncertain locations (Cited by: 1)
Authors: Zhijun Han, Yiqing Zhou, Yu Zhang, Tong-Xing Zheng, Ling Liu, Jinglin Shi
Digital Communications and Networks, 2025, No. 4, pp. 1113-1123 (11 pages)
In covert communications, joint jammer selection and power optimization are important for improving performance. However, existing schemes usually assume a warden with a known location and perfect Channel State Information (CSI), which is difficult to achieve in practice. To be more practical, it is important to investigate covert communications against a warden with an uncertain location and imperfect CSI, which makes it difficult for legitimate transceivers to estimate the detection probability of the warden. First, the uncertainty caused by the unknown warden location must be removed, so the Optimal Detection Position (OPTDP) of the warden is derived, which provides the best detection performance (i.e., the worst case for covert communication). Then, to further avoid the impractical assumption of perfect CSI, the covert throughput is maximized using only the channel distribution information. Given this OPTDP-based worst case for covert communications, the jammer selection, the jamming power, the transmission power, and the transmission rate are jointly optimized to maximize the covert throughput (OPTDP-JP). To solve this coupled problem, a Heuristic algorithm based on the Maximum Distance Ratio (H-MAXDR) is proposed to provide a sub-optimal solution. First, according to the analysis of the covert throughput, the node with the maximum distance ratio (i.e., the ratio of the distances from the jammer to the receiver and from the jammer to the warden) is selected as the friendly jammer (MAXDR). Then, the optimal transmission and jamming power can be derived, followed by the optimal transmission rate obtained via the bisection method. Numerical and simulation results show that although the location of the warden is unknown, by assuming the OPTDP of the warden, the proposed OPTDP-JP can always satisfy the covertness constraint. In addition, with an uncertain warden and imperfect CSI, the covert throughput provided by OPTDP-JP is 80% higher than that of existing schemes when the covertness constraint is 0.9, showing the effectiveness of OPTDP-JP.
Keywords: covert communications; uncertain warden; jammer selection; power optimization; throughput maximization
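The abstract's final optimization step finds the transmission rate via the bisection method. A generic bisection root-finder is sketched below; the constraint function `g` is a placeholder, not the paper's actual covertness expression.

```python
def bisect_root(f, lo, hi, tol=1e-9, max_iter=200):
    """Find x in [lo, hi] with f(x) close to 0, assuming f changes sign on the
    interval. Each iteration halves the bracket around the root."""
    f_lo = f(lo)
    for _ in range(max_iter):
        mid = (lo + hi) / 2.0
        f_mid = f(mid)
        if abs(f_mid) < tol or (hi - lo) / 2.0 < tol:
            return mid
        if (f_lo < 0) == (f_mid < 0):   # root lies in the upper half
            lo, f_lo = mid, f_mid
        else:                           # root lies in the lower half
            hi = mid
    return (lo + hi) / 2.0

# Toy use: the rate R at which a placeholder margin g(R) crosses zero
g = lambda r: r * r - 2.0
print(round(bisect_root(g, 0.0, 2.0), 6))  # 1.414214
```

Bisection suits this setting because the covertness margin is typically monotone in the rate, so a sign change brackets the unique optimal rate.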
5. Genomic selection for meat quality traits based on VIS/NIR spectral information (Cited by: 1)
Authors: Xi Tang, Lei Xie, Min Yan, Longyun Li, Tianxiong Yao, Siyi Liu, Wenwu Xu, Shijun Xiao, Nengshui Ding, Zhiyan Zhang, Lusheng Huang
Journal of Integrative Agriculture, 2025, No. 1, pp. 235-245 (11 pages)
The principle of genomic selection (GS) entails estimating breeding values (BVs) by summing all the SNP polygenic effects. The visible/near-infrared spectroscopy (VIS/NIRS) wavelength and abundance values can directly reflect the concentrations of chemical substances, and the measurement of meat traits by VIS/NIRS is similar to the processing of genomic selection data by summing all 'polygenic effects' associated with spectral feature peaks. Therefore, it is meaningful to investigate the incorporation of VIS/NIRS information into GS models to establish an efficient and low-cost breeding model. In this study, we measured 6 meat quality traits in 359 Duroc×Landrace×Yorkshire pigs from Guangxi Zhuang Autonomous Region, China, and genotyped them with high-density SNP chips. According to the completeness of the information for the target population, we proposed 4 breeding strategies applied to different scenarios: I, only spectral and genotypic data exist for the target population; II, only spectral data exist for the target population; III, only spectral and genotypic data, but with different prediction processes, exist for the target population; and IV, only spectral and phenotypic data exist for the target population. The 4 scenarios were used to evaluate the genomic estimated breeding value (GEBV) accuracy by increasing the VIS/NIR spectral information. In the results of the 5-fold cross-validation, the genetic algorithm showed remarkable potential for preselection of feature wavelengths. The breeding efficiency of Strategies II, III, and IV was superior to that of traditional GS for most traits, and the GEBV prediction accuracy was improved by 32.2, 40.8 and 15.5%, respectively, on average. Among them, the prediction accuracy of Strategy II for fat (%) even improved by 50.7% compared to traditional GS. The GEBV prediction accuracy of Strategy I was nearly identical to that of traditional GS, and the fluctuation range was less than 7%. Moreover, the breeding cost of the 4 strategies was lower than that of traditional GS methods, with Strategy IV being the lowest as it did not require genotyping. Our findings demonstrate that GS methods based on VIS/NIRS data have significant predictive potential and are worthy of further research to provide a valuable reference for the development of effective and affordable breeding strategies.
Keywords: VIS/NIR; genomic selection; GEBV; machine learning; pig; meat quality
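The abstract's opening principle, a breeding value estimated by summing SNP effects, is a weighted sum over genotype codes. The sketch below illustrates that sum only; the genotype vector and effect sizes are invented toy numbers, and real GS would first estimate the effects from a training population.

```python
def gebv(genotypes, snp_effects):
    """Genomic estimated breeding value: allele counts (coded 0/1/2) weighted
    by their estimated per-SNP effects and summed over all SNPs."""
    assert len(genotypes) == len(snp_effects)
    return sum(g * b for g, b in zip(genotypes, snp_effects))

# Toy animal: 5 SNPs coded as reference-allele counts, with estimated effects
print(gebv([0, 1, 2, 1, 0], [0.5, -0.125, 0.25, 0.0, 0.5]))  # 0.375
```

The paper's analogy is that a VIS/NIR spectrum can be treated the same way, with spectral feature peaks playing the role of SNPs and their fitted weights playing the role of SNP effects.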
6. Congruent Feature Selection Method to Improve the Efficacy of Machine Learning-Based Classification in Medical Image Processing
Authors: Mohd Anjum, Naoufel Kraiem, Hong Min, Ashit Kumar Dutta, Yousef Ibrahim Daradkeh
Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, No. 1, pp. 357-384 (28 pages)
Machine learning (ML) is increasingly applied to medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, eye, etc., to identify specific flaws/diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Most of the extracted image features are irrelevant and lead to an increase in computation time. Therefore, this article uses an analytical learning paradigm to design a Congruent Feature Selection Method to select the most relevant image features. This process trains the learning paradigm using similarity and correlation-based features over different textural intensities and pixel distributions. The similarity between the pixels over the various distribution patterns with high indexes is recommended for disease diagnosis. Later, the correlation based on intensity and distribution is analyzed to improve the feature selection congruency. Therefore, the more congruent pixels are sorted in the descending order of the selection, which identifies better regions than the distribution. Now, the learning paradigm is trained using intensity and region-based similarity to maximize the chances of selection. Therefore, the probability of feature selection, regardless of the textures and medical image patterns, is improved. This process enhances the performance of ML applications for different medical image processing tasks. The proposed method improves the accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared to other models for the selected dataset. The mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared to the same models and dataset.
Keywords: computer vision; feature selection; machine learning; region detection; texture analysis; image classification; medical images
7. Selection Rules for Exponential Population Threshold Parameters
Authors: Gary C. McDonald, Jezerca Hodaj
Applied Mathematics, 2025, No. 1, pp. 1-14 (14 pages)
This article constructs statistical selection procedures for exponential populations that may differ only in their threshold parameters. The scale parameters of the populations are assumed common and known. The independent samples drawn from the populations are taken to be of the same size. The best population is defined as the one associated with the largest threshold parameter. In case more than one population shares the largest threshold, one of these is tagged at random and denoted the best. Two procedures are developed for choosing a subset of the populations having the property that the chosen subset contains the best population with a prescribed probability. One procedure is based on the sample minimum values drawn from the populations, and another is based on the sample means from the populations. An "Indifference Zone" (IZ) selection procedure is also developed based on the sample minimum values. The IZ procedure asserts that the population with the largest test statistic (e.g., the sample minimum) is the best population. With this approach, the sample size is chosen so as to guarantee that the probability of a correct selection is no less than a prescribed probability in the parameter region where the largest threshold is at least a prescribed amount larger than the remaining thresholds. Numerical examples are given, and the R code for all calculations is given in the Appendices.
Keywords: Weibull distribution; probability of correct selection; minimum statistic selection procedure; means selection procedure; subset size; indifference zone selection rule; least favorable configuration
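The subset procedure based on sample minima has a simple shape: retain every population whose sample minimum falls within a constant of the largest observed minimum. The sketch below shows only that retention rule; the constant `c` is a free parameter here, whereas in the paper it would be derived from the common scale parameter, the sample size, and the required probability of correct selection.

```python
def subset_by_minima(sample_minima, c):
    """Subset-selection rule on sample minimum values: retain every population
    (by index) whose sample minimum is within c of the largest one, so the
    subset contains the best population with the prescribed probability."""
    best = max(sample_minima)
    return [i for i, y in enumerate(sample_minima) if y >= best - c]

# Three exponential populations, one sample minimum per population
print(subset_by_minima([3.1, 2.4, 3.0], c=0.2))  # [0, 2]
```

Shrinking `c` tightens the subset but lowers the guaranteed probability of containing the best population, which is the trade-off the paper's tables quantify.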
8. Optimization method of conditioning factors selection and combination for landslide susceptibility prediction (Cited by: 1)
Authors: Faming Huang, Keji Liu, Shuihua Jiang, Filippo Catani, Weiping Liu, Xuanmei Fan, Jinsong Huang
Journal of Rock Mechanics and Geotechnical Engineering, 2025, No. 2, pp. 722-746 (25 pages)
Landslide susceptibility prediction (LSP) is significantly affected by the uncertainty involved in selecting landslide-related conditioning factors. However, most of the literature only performs comparative studies on a certain conditioning factor selection method rather than systematically studying this uncertainty issue. To fill this gap, this study aims to systematically explore the influence of various commonly used conditioning factor selection methods on LSP and, on this basis, to propose a principle with universal application for the optimal selection of conditioning factors. An'yuan County in southern China is taken as an example, considering 431 landslides and 29 types of conditioning factors. Five commonly used factor selection methods, namely correlation analysis (CA), linear regression (LR), principal component analysis (PCA), rough set (RS), and artificial neural network (ANN), are applied to select the optimal factor combinations from the original 29 conditioning factors. The factor selection results are then used as inputs to four types of common machine learning models to construct 20 types of combined models, such as CA-multilayer perceptron and CA-random forest. Additionally, multifactor-based multilayer perceptron and random forest models, which select conditioning factors based on the proposed principle of "accurate data, rich types, clear significance, feasible operation and avoiding duplication", are constructed for comparison. Finally, the LSP uncertainties are evaluated by accuracy, susceptibility index distribution, etc. Results show that: (1) multifactor-based models generally have higher LSP performance and lower uncertainties than factor selection-based models; (2) the influence of different machine learning models on LSP accuracy is greater than that of different factor selection methods. In conclusion, the above commonly used conditioning factor selection methods are not ideal for improving LSP performance and may complicate the LSP process. In contrast, a satisfactory combination of conditioning factors can be constructed according to the proposed principle.
Keywords: landslide susceptibility prediction; conditioning factor selection; support vector machine; random forest; rough set; artificial neural network
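One simple way to operationalize the "avoiding duplication" part of the proposed principle is a greedy pairwise-correlation filter over candidate factors. This is an illustrative sketch only (not one of the five methods the paper compares); the factor names, values, and the 0.9 threshold are invented.

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def drop_duplicated_factors(factors, threshold=0.9):
    """Greedy redundancy filter: keep a conditioning factor only if it is not
    highly correlated (|r| >= threshold) with any already-kept factor."""
    kept = []
    for name, column in factors.items():
        if all(abs(pearson(column, factors[k])) < threshold for k in kept):
            kept.append(name)
    return kept

factors = {
    "slope":      [10, 22, 35, 18, 27],
    "slope_copy": [10, 22, 35, 18, 27],   # duplicated information: dropped
    "rainfall":   [120, 80, 95, 140, 60],
}
print(drop_duplicated_factors(factors))  # ['slope', 'rainfall']
```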
9. A splicing algorithm for best subset selection in sliced inverse regression
Authors: Borui Tang, Jin Zhu, Tingyin Wang, Junxian Zhu
Journal of University of Science and Technology of China (PKU Core), 2025, No. 5, pp. 22-34, 21, I0001 (15 pages)
In this study, we examine the problem of sliced inverse regression (SIR), a widely used method for sufficient dimension reduction (SDR). It was designed to find reduced-dimensional versions of multivariate predictors by replacing them with a minimally adequate collection of their linear combinations without loss of information. Recently, regularization methods have been proposed in SIR to incorporate a sparse structure of predictors for better interpretability. However, existing methods consider convex relaxation to bypass the sparsity constraint, which may not lead to the best subset and particularly tends to include irrelevant variables when predictors are correlated. In this study, we approach sparse SIR as a nonconvex optimization problem and directly tackle the sparsity constraint by establishing the optimal conditions and iteratively solving them by means of the splicing technique. Without employing convex relaxation on the sparsity constraint and the orthogonality constraint, our algorithm exhibits superior empirical merits, as evidenced by extensive numerical studies. Computationally, our algorithm is much faster than the relaxed approach for the natural sparse SIR estimator. Statistically, our algorithm surpasses existing methods in terms of accuracy for central subspace estimation and best subset selection, and it sustains high performance even with correlated predictors.
Keywords: splicing technique; best subset selection; sliced inverse regression; nonconvex optimization; sparsity constraint; optimal conditions
10. Natural selection shaped the protective effect of the mtDNA lineage against obesity in Han Chinese populations
Authors: Ziwei Chen, Lu Chen, Jingze Tan, Yizhen Mao, Meng Hao, Yi Li, Yi Wang, Jinxi Li, Jiucun Wang, Li Jin, Hong-Xiang Zheng
Journal of Genetics and Genomics, 2025, No. 4, pp. 539-548 (10 pages)
Mitochondria play a key role in lipid metabolism, and mitochondrial DNA (mtDNA) mutations are thus considered to affect obesity susceptibility by altering oxidative phosphorylation and mitochondrial function. In this study, we investigate mtDNA variants that may affect obesity risk in 2877 Han Chinese individuals from 3 independent populations. The association analysis of 16 basal mtDNA haplogroups with body mass index, waist circumference, and waist-to-hip ratio reveals that only haplogroup M7 is significantly negatively correlated with all three adiposity-related anthropometric traits in the overall cohort, verified by the analysis of a single population, i.e., the Zhengzhou population. Furthermore, subhaplogroup analysis suggests that M7b1a1 is the haplogroup most likely associated with a decreased obesity risk, and the variant T12811C (causing Y159H in ND5) harbored in M7b1a1 may be the most likely candidate for altering mitochondrial function. Specifically, we find that proportionally more nonsynonymous mutations accumulate in M7b1a1 carriers, indicating that M7b1a1 is either under positive selection or subject to a relaxation of selective constraints. We also find that nuclear variants, especially in DACT2 and PIEZO1, may functionally interact with M7b1a1.
Keywords: mitochondrial DNA; obesity; association analysis; natural selection; selective pressure
11. A New Antenna Selection Scheme for MIMO-NOMA Systems with Multiple-Antenna Users
Authors: Ehsan Alemzadeh, Amir Masoud Rabiei
China Communications, 2025, No. 2, pp. 160-172 (13 pages)
Non-orthogonal multiple access (NOMA) is a promising technology for next-generation wireless communication networks. The benefits of this technology can be further enhanced through deployment in conjunction with multiple-input multiple-output (MIMO) systems. Antenna selection plays a critical role in MIMO-NOMA systems, as it has the potential to significantly reduce the cost and complexity associated with radio frequency chains. This paper considers antenna selection for downlink MIMO-NOMA networks with a multiple-antenna base station (BS) and multiple-antenna user equipments (UEs). An iterative antenna selection scheme is developed for a two-user system, and to determine the initial power required for this selection scheme, a power estimation method is also proposed. The proposed algorithm is then extended to a general multiuser NOMA system. Numerical results demonstrate that the proposed antenna selection algorithm achieves near-optimal performance with much lower computational complexity in both two-user and multiuser scenarios.
Keywords: antenna selection; MIMO; NOMA; power allocation
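The "near-optimal with lower complexity" claim is measured against exhaustive search over antenna subsets, which the toy below makes concrete. It is a simplified stand-in, not the paper's iterative scheme: per-antenna scalar gains and a sum-gain objective are invented assumptions, and the point is the combinatorial search space that a smarter scheme avoids.

```python
from itertools import combinations

def best_antennas(gains, k):
    """Exhaustive antenna selection: return the indices of the k antennas
    whose summed channel gains are largest. Cost grows as C(n, k)."""
    best_subset, best_score = None, float("-inf")
    for subset in combinations(range(len(gains)), k):
        score = sum(gains[i] for i in subset)
        if score > best_score:
            best_subset, best_score = subset, score
    return best_subset

# Per-antenna effective channel gains at the base station (invented values)
print(best_antennas([0.2, 1.7, 0.9, 1.1], k=2))  # (1, 3)
```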
12. Detecting Anomalies in FinTech: A Graph Neural Network and Feature Selection Perspective
Authors: Vinh Truong Hoang, Nghia Dinh, Viet-Tuan Le, Kiet Tran-Trung, Bay Nguyen Van, Kittikhun Meethongjan
Computers, Materials & Continua, 2026, No. 1, pp. 207-246 (40 pages)
The Financial Technology (FinTech) sector has witnessed rapid growth, resulting in increasingly complex and high-volume digital transactions. Although this expansion improves efficiency and accessibility, it also introduces significant vulnerabilities, including fraud, money laundering, and market manipulation. Traditional anomaly detection techniques often fail to capture the relational and dynamic characteristics of financial data. Graph Neural Networks (GNNs), capable of modeling intricate interdependencies among entities, have emerged as a powerful framework for detecting subtle and sophisticated anomalies. However, the high dimensionality and inherent noise of FinTech datasets demand robust feature selection strategies to improve model scalability, performance, and interpretability. This paper presents a comprehensive survey of GNN-based approaches for anomaly detection in FinTech, with an emphasis on the synergistic role of feature selection. We examine the theoretical foundations of GNNs, review state-of-the-art feature selection techniques, analyze their integration with GNNs, and categorize prevalent anomaly types in FinTech applications. In addition, we discuss practical implementation challenges, highlight representative case studies, and propose future research directions to advance the field of graph-based anomaly detection in financial systems.
Keywords: GNN; security; e-commerce; FinTech; anomaly detection; feature selection
13. Interpretable Federated Learning Model for Cyber Intrusion Detection in Smart Cities with Privacy-Preserving Feature Selection
Authors: Muhammad Sajid Farooq, Muhammad Saleem, M.A. Khan, Muhammad Farrukh Khan, Shahan Yamin Siddiqui, Muhammad Shoukat Aslam, Khan M. Adnan
Computers, Materials & Continua, 2025, No. 12, pp. 5183-5206 (24 pages)
The rapid evolution of smart cities through IoT, cloud computing, and connected infrastructures has significantly enhanced sectors such as transportation, healthcare, energy, and public safety, but it has also increased exposure to sophisticated cyber threats. The diversity of devices, high data volumes, and real-time operational demands complicate security, requiring not just robust intrusion detection but also effective feature selection for relevance and scalability. Traditional Machine Learning (ML) based Intrusion Detection Systems (IDSs) improve detection but often lack interpretability, limiting stakeholder trust and timely responses. Moreover, centralized feature selection in conventional IDSs compromises data privacy and fails to accommodate the decentralized nature of smart city infrastructures. To address these limitations, this research introduces an Interpretable Federated Learning (FL) based Cyber Intrusion Detection model tailored for smart city applications. The proposed system leverages privacy-preserving feature selection, where each client node independently identifies top-ranked features using ML models integrated with SHAP-based explainability. These local feature subsets are then aggregated at a central server to construct a global model without compromising sensitive data. Furthermore, the global model is enhanced with Explainable AI (XAI) techniques such as SHAP and LIME, offering both global interpretability and instance-level transparency for cyber threat decisions. Experimental results demonstrate that the proposed global model achieves a high detection accuracy of 98.51%, with a significantly low miss rate of 1.49%, outperforming existing models while ensuring explainability, privacy, and scalability across smart city infrastructures.
Keywords: explainable AI; SHAP; LIME; federated learning; feature selection
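The privacy-preserving aggregation step described in the abstract (clients share only locally top-ranked feature names, never raw data) can be sketched as a vote count at the server. This is an illustrative sketch, not the paper's exact aggregation rule, and the feature names are invented.

```python
from collections import Counter

def aggregate_feature_votes(client_rankings, top_k):
    """Server-side aggregation sketch: each client contributes only the names
    of its locally top-ranked features; the server keeps the top_k features
    by vote count across clients, never touching raw client data."""
    votes = Counter()
    for ranking in client_rankings:
        votes.update(ranking)
    return [name for name, _ in votes.most_common(top_k)]

clients = [
    ["pkt_rate", "dst_port", "duration"],   # client A's local top features
    ["pkt_rate", "duration", "ttl"],        # client B
    ["pkt_rate", "dst_port", "flags"],      # client C
]
print(aggregate_feature_votes(clients, top_k=3))  # 'pkt_rate' ranks first
```

Only feature identifiers cross the network, which is what keeps the selection step compatible with the federated privacy constraint.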
14. Energy Efficient VM Selection Using CSOA-VM Model in Cloud Data Centers
Authors: Mandeep Singh Devgan, Tajinder Kumar, Purushottam Sharma, Xiaochun Cheng, Shashi Bhushan, Vishal Garg
CAAI Transactions on Intelligence Technology, 2025, No. 4, pp. 1217-1234 (18 pages)
Cloud data centres face a growing energy management problem due to their constant increase in size and complexity and their enormous energy consumption. Energy management is a challenging, critical issue in cloud data centres and an important research concern. In this paper, we propose a cuckoo search (CS) based optimisation technique for virtual machine (VM) selection, together with a novel placement algorithm that considers different constraints. An energy consumption model and a simulation model have been implemented for the efficient selection of VMs. The proposed CSOA-VM model not only lessens violations at the service level agreement (SLA) level but also minimises VM migrations. The model also saves energy: the performance analysis shows an energy consumption of 1.35 kWh, an SLA violation of 9.2, and about 268 VM migrations. Thus, there is an improvement in energy consumption of about 1.8% and a 2.1% improvement (reduction) in SLA violations in comparison to existing techniques.
Keywords: cloud computing; cloud data center; energy consumption; VM selection
15. Observe natural selection by evolutionary experiments in crops
Authors: Tian Wu, Shifeng Cheng
aBIOTECH, 2025, No. 2, pp. 381-387 (7 pages)
Evolutionary experiments provide a unique lens through which to observe the impacts of natural selection on crop evolution, domestication, and adaptation through empirical evidence. Enabled by modern technologies, such as the development of large-scale, structured evolving populations, high-throughput phenotyping, and genomics-driven genetics studies, the transition from theoretical evolutionary biology to practical application is now possible for staple crops. The century-long Barley Composite Cross II (CCII) competition experiment has offered invaluable insights into the genomic and phenotypic basis of natural and artificial selection driven by environmental adaptation during crop evolution and domestication. These experiments enable scientists to measure, in real time, the evolutionary dynamics of genetic diversity, the adaptation of fitness-associated traits, and the trade-offs inherent in selective processes. Beyond advancing our understanding of evolutionary biology and agricultural practices, these studies provide critical insights into addressing global challenges, from ensuring food security to fostering resilience in human societies.
Keywords: evolutionary experiment, barley, diversity, natural selection, local adaptation
Optimizing Forecast Accuracy in Cryptocurrency Markets:Evaluating Feature Selection Techniques for Technical Indicators
16
Authors: Ahmed El Youssefi, Abdelaaziz Hessane (+1 more), Imad Zeroual, Yousef Farhaoui. Computers, Materials & Continua, 2025, Issue 5: 3411-3433 (23 pages)
This study provides a systematic investigation into the influence of feature selection methods on cryptocurrency price forecasting models employing technical indicators. In this work, over 130 technical indicators—covering momentum, volatility, volume, and trend—are subjected to three distinct feature selection approaches: mutual information (MI), recursive feature elimination (RFE), and random forest importance (RFI). By extracting an optimal set of 20 predictors, the proposed framework aims to mitigate redundancy and overfitting while enhancing interpretability. These feature subsets are integrated into support vector regression (SVR), Huber regressors, and k-nearest neighbors (KNN) models to forecast the prices of three leading cryptocurrencies—Bitcoin (BTC/USDT), Ethereum (ETH/USDT), and Binance Coin (BNB/USDT)—across horizons ranging from 1 to 20 days. Model evaluation employs the coefficient of determination (R²) and the root mean squared logarithmic error (RMSLE), alongside a walk-forward validation scheme to approximate real-world trading contexts. Empirical results indicate that incorporating momentum and volatility measures substantially improves predictive accuracy, with particularly pronounced effects observed at longer forecast windows. Moreover, indicators related to volume and trend provide incremental benefits in select market conditions. Notably, an 80%–85% reduction in the original feature set frequently maintains or enhances model performance relative to the complete indicator set. These findings highlight the critical role of targeted feature selection in addressing high-dimensional financial data challenges while preserving model robustness. This research advances the field of cryptocurrency forecasting by offering a rigorous comparison of feature selection methods and their effects on multiple digital assets and prediction horizons. The outcomes highlight the importance of dimension-reduction strategies in developing more efficient and resilient forecasting algorithms. Future efforts should incorporate high-frequency data and explore alternative selection techniques to further refine predictive accuracy in this highly volatile domain.
Keywords: cryptocurrency forecasting, technical indicators, feature selection, walk-forward validation, volatility, momentum, trend
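Of the three selection approaches named above, the mutual information filter is the simplest to sketch: score every indicator column by its MI with the target series and keep the top k. The histogram-based estimator, the bin count, and the toy data below are all assumptions for illustration, not the paper's pipeline.

```python
import math
import random
from collections import Counter

def mutual_information(x, y, bins=4):
    # histogram-based MI estimate between two equal-length numeric series
    def digitize(v):
        lo, hi = min(v), max(v)
        width = (hi - lo) / bins or 1.0  # guard against a constant column
        return [min(int((a - lo) / width), bins - 1) for a in v]
    dx, dy = digitize(x), digitize(y)
    n = len(x)
    pxy, px, py = Counter(zip(dx, dy)), Counter(dx), Counter(dy)
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def select_top_k(features, target, k=2):
    # filter-style selection: rank indicator columns by MI with the target
    scores = {name: mutual_information(col, target) for name, col in features.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

RFE and RFI, by contrast, are wrapper/embedded methods: they consult a fitted model (eliminating the weakest features iteratively, or reading a random forest's importance scores) rather than a model-free statistic like MI.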
A Method for Fast Feature Selection Utilizing Cross-Similarity within the Context of Fuzzy Relations
17
Authors: Wenchang Yu, Xiaoqin Ma (+1 more), Zheqing Zhang, Qinli Zhang. Computers, Materials & Continua, 2025, Issue 4: 1195-1218 (24 pages)
Feature selection methods rooted in rough sets confront two notable limitations: their high computational complexity and their sensitivity to noise, rendering them impractical for managing large-scale and noisy datasets. The primary issue stems from these methods' undue reliance on all samples. To overcome these challenges, we introduce the concept of cross-similarity, grounded in a robust fuzzy relation, and design a fast and robust feature selection algorithm. First, we construct a robust fuzzy relation by introducing a truncation parameter. Then, based on this fuzzy relation, we propose the concept of cross-similarity, which emphasises the sample-to-sample similarity relations that uniquely determine feature importance, rather than considering all such relations equally. After studying the manifestations and properties of cross-similarity across different fuzzy granularities, we propose a forward greedy feature selection algorithm that uses cross-similarity as the foundation for information measurement. This algorithm significantly reduces the time complexity from O(m²n²) to O(mn²). Experimental findings reveal that the average runtime of five state-of-the-art comparison algorithms is roughly 3.7 times longer than that of our algorithm, while our algorithm achieves an average accuracy that surpasses the five comparison algorithms by approximately 3.52%. This underscores the effectiveness of our approach and paves the way for applying feature selection algorithms grounded in fuzzy rough sets to large-scale gene datasets.
Keywords: fuzzy rough sets, feature selection, cross-similarity, fuzzy relations
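The abstract does not give the exact functional form of the robust fuzzy relation, so the sketch below uses one plausible choice: similarity decays linearly with distance and is truncated to zero beyond a cut-off, so distant (likely noisy) sample pairs contribute nothing. The formula and the parameter name `eps` are assumptions, not the paper's definition.

```python
def truncated_relation(a, b, eps=0.3):
    # robust fuzzy similarity on a normalised feature: decays linearly with
    # |a - b| and is cut to 0 once the distance reaches the truncation
    # parameter eps, suppressing the influence of noisy, far-apart pairs
    d = abs(a - b)
    return 0.0 if d >= eps else 1.0 - d / eps

def relation_matrix(values, eps=0.3):
    # pairwise fuzzy relation over one feature column (values scaled to [0, 1])
    return [[truncated_relation(a, b, eps) for b in values] for a in values]
```

The truncation is what makes the relation "robust": an outlier sample ends up with zero similarity to most of the data instead of weakly contaminating every pairwise entry.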
The quasi-fiducial model selection for Kriging model
18
Authors: Chen Fan, Shuqin Zhang, Xinmin Li. Statistical Theory and Related Fields, 2025, Issue 3: 285-296 (12 pages)
Kriging models are widely employed in a variety of fields due to their simplicity and flexibility. To gain more distributional information about the unknown parameters, we focus on constructing the fiducial distribution of Kriging model parameters. To address the challenge of constructing the fiducial marginal distribution for the spatially related parameter, we substitute the Bayesian posterior distribution for the fiducial distribution of this parameter and present a quasi-fiducial distribution for Kriging model parameters. A Gibbs sampling algorithm is given to draw samples from the quasi-fiducial distribution. A model selection criterion based on the quasi-fiducial distribution is then proposed. Numerical studies demonstrate that the proposed method is superior to the Lasso and Elastic Net.
Keywords: Kriging model, fiducial inference, slice sampling, model selection
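The paper's Gibbs sampler targets the quasi-fiducial distribution of the Kriging parameters, whose conditionals the abstract does not specify. The sketch below shows only the generic Gibbs mechanics on a toy target, a standard bivariate normal with correlation rho, where each full conditional is N(rho * other, 1 - rho²); everything here is a textbook illustration, not the paper's sampler.

```python
import math
import random

def gibbs_bivariate_normal(rho, n_samples=4000, burn=500, seed=7):
    # generic Gibbs sampling: alternately draw each coordinate from its full
    # conditional given the current value of the other coordinate; after a
    # burn-in period the chain's draws approximate the joint target
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)
    x = y = 0.0
    draws = []
    for t in range(burn + n_samples):
        x = rng.gauss(rho * y, sd)  # x | y ~ N(rho * y, 1 - rho^2)
        y = rng.gauss(rho * x, sd)  # y | x ~ N(rho * x, 1 - rho^2)
        if t >= burn:
            draws.append((x, y))
    return draws
```

For the Kriging case, the same loop would cycle through the trend coefficients, the variance, and the spatial-range parameter, drawing each from its conditional quasi-fiducial (or substituted posterior) distribution in turn.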
FedCW: Client Selection with Adaptive Weight in Heterogeneous Federated Learning
19
Authors: Haotian Wu, Jiaming Pei, Jinhai Li. Computers, Materials & Continua, 2026, Issue 1: 1551-1570 (20 pages)
With the increasing complexity of vehicular networks and the proliferation of connected vehicles, Federated Learning (FL) has emerged as a critical framework for decentralized model training while preserving data privacy. However, efficient client selection and adaptive weight allocation in heterogeneous and non-IID environments remain challenging. To address these issues, we propose Federated Learning with Client Selection and Adaptive Weighting (FedCW), a novel algorithm that leverages adaptive client selection and dynamic weight allocation to optimize model convergence in real-time vehicular networks. FedCW selects clients based on their Euclidean distance from the global model and dynamically adjusts aggregation weights to balance data diversity and model convergence. Experimental results show that FedCW significantly outperforms existing FL algorithms such as FedAvg, FedProx, and SCAFFOLD, particularly in non-IID settings, achieving faster convergence, higher accuracy, and reduced communication overhead. These findings demonstrate that FedCW provides an effective solution for enhancing the performance of FL in heterogeneous, edge-based computing environments.
Keywords: federated learning, non-IID, client selection, weight allocation, vehicular networks
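The abstract states that FedCW selects clients by their Euclidean distance from the global model and then weights the aggregation adaptively. The sketch below implements that selection step on flattened parameter vectors; the inverse-distance weighting rule is a hypothetical stand-in, since the paper's exact weight formula is not given in the abstract.

```python
import math

def euclidean(u, v):
    # L2 distance between two flattened model parameter vectors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def select_clients(global_model, client_models, m):
    # keep the m clients whose local models lie closest to the global model
    ranked = sorted(range(len(client_models)),
                    key=lambda i: euclidean(global_model, client_models[i]))
    return ranked[:m]

def aggregate(global_model, client_models, selected):
    # inverse-distance weights (a hypothetical adaptive rule, not the paper's):
    # closer clients contribute more to the new global model
    dists = [euclidean(global_model, client_models[i]) for i in selected]
    raw = [1.0 / (1e-8 + d) for d in dists]
    total = sum(raw)
    weights = [r / total for r in raw]
    dim = len(global_model)
    return [sum(w * client_models[i][k] for w, i in zip(weights, selected))
            for k in range(dim)]
```

In a real FL round, `client_models` would be the locally updated parameters returned by each vehicle, and the aggregated vector would become the next global model.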
Application of Multiple Correlations Analysis in Portfolio Selection
20
Authors: Ruili Sun, Junpeng Jia, Shiguo Huang. Proceedings of Business and Economic Studies, 2025, Issue 4: 305-319 (15 pages)
Portfolio selection based on the global minimum variance (GMV) model remains a significant focus in financial research. The covariance matrix, central to the GMV model, determines the portfolio weights, and its accurate estimation is key to effective strategies. Building on the decomposition form of the covariance matrix, this paper introduces semi-variance for improved measurement of asymmetric financial risk; addresses asymmetry in financial asset correlations using distance, asymmetric, and Chatterjee correlations to refine covariance matrices; and proposes three new covariance matrix models to enhance risk assessment and portfolio selection strategies. Testing with data from 30 stocks across various sectors of the Chinese market confirms the strong performance of the proposed strategies.
Keywords: portfolio selection, GMV model, semi-variance, asymmetric correlation, Chatterjee correlation
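The paper's contribution is the refined covariance estimators; the GMV rule those estimators feed into is standard and has the closed form w = Σ⁻¹1 / (1'Σ⁻¹1), i.e. solve Σx = 1 and normalise x to sum to one. The sketch below computes it with a small hand-rolled solver; the toy covariance matrix in the usage note is an assumption for illustration.

```python
def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for a small system A x = b
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gmv_weights(cov):
    # global minimum variance portfolio: w = inv(cov) @ 1, normalised to sum to 1
    ones = [1.0] * len(cov)
    x = solve(cov, ones)
    s = sum(x)
    return [v / s for v in x]
```

For a diagonal covariance such as [[0.04, 0], [0, 0.01]], the weights are inversely proportional to each asset's variance, so the less volatile asset receives the larger weight; swapping in a semi-variance-based or Chatterjee-refined covariance estimate changes only the matrix passed to `gmv_weights`, not the rule.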