Funding: Project (No. [2005]555) supported by the Hi-Tech Research and Development Program (863) of China.
Abstract: Various index structures have recently been proposed to facilitate high-dimensional KNN queries, among which the techniques of approximate vector representation and one-dimensional (1D) transformation can break the curse of dimensionality. Based on these two techniques, a novel high-dimensional index called the Bit-code and Distance based index (BD) is proposed. BD is built on a partitioning strategy optimized for high-dimensional data. Through the definitions of a bit code and a transformation function, a high-dimensional vector is first approximately represented and then transformed into a 1D value, which serves as the key managed by a B+-tree. A new KNN search algorithm is also proposed that exploits the bit code and distance to prune the search space more effectively. Results of extensive experiments on both synthetic and real data demonstrate that BD outperforms existing index structures for KNN search in high-dimensional spaces.
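As a concrete illustration of the bit-code-plus-distance idea, the following Python sketch maps each vector to a single sortable key. The key layout (bit code as partition id, distance as offset), the single reference point, and the constant C are assumptions for illustration, not the BD paper's exact scheme, and a sorted array stands in for the B+-tree.

```python
# A minimal sketch (not the paper's exact scheme) of combining a bit code
# with a distance to form a sortable 1D key, as in bit-code + 1D-transform
# indexes. The layout key = bitcode * C + distance is an assumption.
import bisect
import numpy as np

def bit_code(v, center):
    """Approximate v by one bit per dimension: is v above the center?"""
    bits = (v > center).astype(int)
    return int("".join(map(str, bits)), 2)

def to_1d_key(v, center, C=1000.0):
    """Map a vector to a single key: bit-code partition id, offset by
    the distance to the partition's reference point."""
    return bit_code(v, center) * C + np.linalg.norm(v - center)

rng = np.random.default_rng(0)
data = rng.random((10000, 8))
center = data.mean(axis=0)

# A sorted array of keys stands in for the B+-tree leaves.
keys = np.array([to_1d_key(v, center) for v in data])
order = np.argsort(keys)
keys, data = keys[order], data[order]

# Range probe around the query's own key; a true KNN search would expand
# the radius and prune whole bit-code partitions by distance bounds.
q = rng.random(8)
qk = to_1d_key(q, center)
lo = bisect.bisect_left(keys, qk - 0.3)
hi = bisect.bisect_right(keys, qk + 0.3)
print(hi - lo, "candidates instead of", len(data))
```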
Abstract: k-means is a popular clustering algorithm because of its simplicity and its scalability to large datasets. However, one of its drawbacks is the difficulty of identifying the correct k-hyperparameter value, and tuning this value correctly is critical for building effective k-means models. The traditional elbow method for identifying this value has a long-standing literature, but on certain datasets it produces smooth curves with no clearly identifiable elbow. Various internal validation indexes, proposed as a solution to this issue, may in turn be inconsistent with one another. Although techniques for handling the smooth-elbow problem exist, k-hyperparameter tuning in high-dimensional spaces remains an open research issue. In this paper, we first review the existing techniques for handling smooth elbows; the identified research gaps then guide the development of a new technique. The new technique, an ensemble of a self-adapting autoencoder and internal validation indexes, is validated on high-dimensional space clustering. The optimal k-value, tuned by this technique using a voting scheme, is a trade-off between the number of clusters visualized in the autoencoder's latent space, the k-value from the ensemble internal-validation-index score, and the k-value that makes the curvature-derivative expression f‴(k)(1 + f′(k)^2) − 3 f″(k)^2 f′(k) equal to 0 or close to 0 at the elbow, where f(k) is the elbow curve. Experimental results based on Cochran's Q test, ANOVA, and McNemar's test indicate a relatively good performance of the newly developed technique in k-hyperparameter tuning.
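For concreteness, a minimal Python sketch of the curvature-based criterion alone is given below: f(k) is the k-means inertia curve, derivatives are taken numerically, and the k that brings the expression closest to zero is reported. The voting with validation indexes and the autoencoder's latent space is omitted.

```python
# A minimal sketch of the curvature-based elbow criterion: pick the k
# where f'''(k)(1 + f'(k)^2) - 3 f''(k)^2 f'(k) is closest to zero, with
# f(k) the k-means inertia curve and derivatives via np.gradient.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=600, centers=5, n_features=20, random_state=0)
ks = np.arange(1, 11)
f = np.array([KMeans(n_clusters=k, n_init=5, random_state=0)
              .fit(X).inertia_ for k in ks])

f1 = np.gradient(f, ks)       # f'(k)
f2 = np.gradient(f1, ks)      # f''(k)
f3 = np.gradient(f2, ks)      # f'''(k)
expr = f3 * (1 + f1**2) - 3 * f2**2 * f1
# interior k whose expression is nearest zero marks the elbow candidate
k_star = ks[1:-1][np.argmin(np.abs(expr[1:-1]))]
print("elbow candidate:", k_star)
```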
Funding: Supported by the National Social Science Foundation of China under Grant No. 23&ZD126; the National Science Foundation of China under Grant No. 12471256; the Natural Science Foundation of Shanxi Province under Grant No. 202203021221219; and the Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi under Grant No. 2023L164.
Abstract: The authors consider the issue of hypothesis testing in varying-coefficient regression models with high-dimensional data. Utilizing kernel smoothing techniques, the authors propose a locally concerned U-statistic method to assess the overall significance of the coefficients. The authors establish that the proposed test is asymptotically normal under both the null hypothesis and local alternatives. Based on the locally concerned U-statistic, the authors further develop a globally concerned U-statistic to test whether the coefficient function is zero. A stochastic perturbation method is employed to approximate the distribution of the globally concerned test statistic. Monte Carlo simulations demonstrate the validity of the proposed test in finite samples.
Funding: Supported by the National Key R&D Program of China under Grant No. 2022YFA1003701 and the Open Research Fund of Yunnan Key Laboratory of Statistical Modeling and Data Analysis, Yunnan University, under Grant No. SMDAYB2023004.
Abstract: Quantile regression (QR) has become an important tool for measuring the dependence of a response variable's quantiles on a number of predictors for heterogeneous data, especially heavy-tailed data and data with outliers. However, it is quite challenging to make statistical inference on distributed high-dimensional QR with missing data, owing to the distributed nature, sparsity, and missingness of the data and the non-differentiable quantile loss function. To overcome this challenge, this paper develops a communication-efficient method to select variables and estimate parameters by utilizing a smooth function to approximate the non-differentiable quantile loss function and by incorporating the ideas of inverse probability weighting and penalty functions. The proposed approach has three merits. First, it is both computationally and communicationally efficient because only the first- and second-order information of the approximate objective function is communicated at each iteration. Second, the proposed estimators possess the oracle property after a limited number of iterations, without constraint on the number of machines. Third, the proposed method simultaneously selects variables and estimates parameters within a distributed framework, ensuring robustness to the specified response probability or propensity score function of the missing data mechanism. Simulation studies and a real example illustrate the effectiveness of the proposed methodologies.
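The following sketch illustrates the kind of smooth surrogate involved: a log-sum-exp smoothing of the check loss that converges to the exact loss as the bandwidth h shrinks. This is only one standard choice; the specific smoothing function used in the paper may differ.

```python
# A minimal sketch of replacing the non-differentiable check loss with a
# smooth surrogate so that gradient/Newton-type, communication-efficient
# updates become possible. Log-sum-exp smoothing is used for
# illustration; the paper's smoothing function may differ.
import numpy as np

def check_loss(u, tau):
    """Standard quantile (pinball) loss: max(tau*u, (tau-1)*u)."""
    return np.maximum(tau * u, (tau - 1) * u)

def smooth_check_loss(u, tau, h=0.1):
    """Smooth approximation via log-sum-exp; -> check_loss as h -> 0."""
    a, b = tau * u / h, (tau - 1) * u / h
    m = np.maximum(a, b)                      # stabilized log-sum-exp
    return h * (m + np.log(np.exp(a - m) + np.exp(b - m)))

u = np.linspace(-2, 2, 5)
print(check_loss(u, 0.7))
print(smooth_check_loss(u, 0.7, h=0.05))      # close to the exact loss
```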
Funding: Funded by the National Natural Science Foundation of China (Nos. 12402142, 11832013, and 11572134); the Natural Science Foundation of Hubei Province (No. 2024AFB235); the Hubei Provincial Department of Education Science and Technology Research Project (No. Q20221714); and the Opening Foundation of Hubei Key Laboratory of Digital Textile Equipment (Nos. DTL2023019 and DTL2022012).
Abstract: Owing to their global search capabilities and gradient-free operation, metaheuristic algorithms are applied to a wide range of optimization problems. However, their computational demands become prohibitive for high-dimensional optimization challenges. To address these challenges, this study introduces cooperative metaheuristics integrating dynamic dimension reduction (DR). Building upon particle swarm optimization (PSO) and differential evolution (DE), the cooperative methods C-PSO and C-DE are developed. In the proposed methods, a modified principal component analysis (PCA) is used to reduce the dimension of the design variables, thereby decreasing computational cost. The dynamic DR strategy periodically re-executes the modified PCA after a fixed number of iterations, so that the important dimensions are identified dynamically. Compared with a static strategy, the dynamic DR strategy identifies the important dimensions more precisely, enabling accelerated convergence toward optimal solutions. Furthermore, the influence of cumulative contribution-rate thresholds on optimization problems of different dimensions is investigated. The metaheuristic algorithms (PSO, DE) and cooperative metaheuristics (C-PSO, C-DE) are examined on 15 benchmark functions and two engineering design problems (a speed reducer and a composite pressure vessel). Comparative results demonstrate that the cooperative methods significantly outperform the standard methods in both solution accuracy and computational efficiency, reducing computational cost by at least 40%. The cooperative metaheuristics can be effectively used to tackle both high-dimensional unconstrained and constrained optimization problems.
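A minimal sketch of the dynamic dimension-reduction loop is given below: every T iterations, PCA is re-fitted on the current swarm, components are kept up to a cumulative contribution-rate threshold, and the velocity update acts in the reduced space. The constants, the 0.9 threshold, and the plain PSO update are illustrative assumptions, not the paper's exact C-PSO.

```python
# A minimal sketch of dynamic dimension reduction inside PSO: periodic
# PCA re-fitting on the swarm, with velocity updates restricted to the
# subspace reaching a cumulative contribution-rate threshold.
import numpy as np
from sklearn.decomposition import PCA

def sphere(x):                       # benchmark objective
    return np.sum(x**2, axis=-1)

rng = np.random.default_rng(1)
D, N, T = 50, 40, 10
pos = rng.uniform(-5, 5, (N, D))
vel = np.zeros((N, D))
pbest, pval = pos.copy(), sphere(pos)

for it in range(100):
    if it % T == 0:                  # periodic, dynamic DR
        pca = PCA().fit(pos)
        m = np.searchsorted(np.cumsum(pca.explained_variance_ratio_),
                            0.9) + 1
        W = pca.components_[:m]      # (m, D) projection basis
    g = pbest[np.argmin(pval)]
    r1, r2 = rng.random((2, N, m))
    # velocity update in the m-dimensional PCA subspace, mapped back to D
    v_red = (0.7 * vel @ W.T + 1.5 * r1 * ((pbest - pos) @ W.T)
             + 1.5 * r2 * ((g - pos) @ W.T))
    vel = v_red @ W
    pos = pos + vel
    val = sphere(pos)
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]

print("best value:", pval.min())
```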
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12375057, 11947301, and 12047502) and the Fundamental Research Funds for the Central Universities, China University of Geosciences (Wuhan) (Grant No. G1323523064).
Abstract: When atoms are accelerated in the vacuum, entanglement among the atoms degrades relative to the initial situation before the acceleration. In this study, we propose the view that the lost entanglement can be recovered completely when high-dimensional spacetime is exploited, provided the acceleration is not too large, since at large accelerations the entanglement loss outpaces the recovery process. We also calculate the entanglement change caused by the anti-Unruh effect and find that the lost entanglement can only be partially recovered by the anti-Unruh effect, and that the anti-Unruh effect appears only for a finite range of accelerations when the interaction time scale is approximately shorter than the reciprocal of the energy gap in two-dimensional spacetime. The limiting case of zero acceleration is also investigated, which gives an analytical interpretation of the increase or recovery of entanglement.
Funding: Supported by the National Natural Science Foundation of China (No. 61502475) and the Importation and Development of High-Caliber Talents Project of the Beijing Municipal Institutions (No. CIT&TCD201504039).
Abstract: The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that differences in sparse and noisy dimensions occupy a large proportion of the similarity, so that any two results appear almost equally dissimilar. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals; only components in the same or an adjacent interval are used to calculate the similarity. To validate this method, three data types are used and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with dimensionality and is approximately two to three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method is [0, 1] in any dimension, which makes it suitable for similarity analysis after dimensionality reduction.
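The interval-based computation can be sketched as follows; the per-dimension score on the contributing dimensions is an assumption, since the abstract does not spell out the exact normalized formula.

```python
# A minimal sketch of the net-lattice subspace similarity: each
# dimension's range is split into B intervals, and only dimensions whose
# two components fall in the same or an adjacent interval contribute.
import numpy as np

def lattice_similarity(x, y, lo, hi, B=10):
    ix = np.clip(((x - lo) / (hi - lo) * B).astype(int), 0, B - 1)
    iy = np.clip(((y - lo) / (hi - lo) * B).astype(int), 0, B - 1)
    close = np.abs(ix - iy) <= 1            # same or adjacent interval
    if not close.any():
        return 0.0
    # normalized per-dimension similarity on the contributing dimensions
    sim = 1.0 - np.abs(x[close] - y[close]) / (hi[close] - lo[close])
    return float(sim.mean())                # lies in [0, 1]

rng = np.random.default_rng(0)
data = rng.random((100, 64))
lo, hi = data.min(axis=0), data.max(axis=0)
print(lattice_similarity(data[0], data[1], lo, hi))
```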
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 61303074 and 61309013), the National Key Basic Research and Development Program ("973") of China (No. 2012CB315900), and the Programs for Science and Technology Development of Henan Province (Nos. 12210231003 and 13210231002).
Abstract: To address the issue that traditional clustering methods are not appropriate for high-dimensional data, a cuckoo search fuzzy-weighting algorithm for subspace clustering is presented on the basis of an existing soft subspace clustering algorithm. In the proposed algorithm, a novel objective function is first designed by considering the fuzzily weighted within-cluster compactness and the between-cluster separation, and by loosening the constraints on the dimension-weight matrix. Gradual membership and an improved cuckoo search, a global search strategy, are then introduced to optimize the objective function and search for subspace clusters, yielding novel learning rules for clustering. Finally, the performance of the proposed algorithm on the clustering of various low- and high-dimensional datasets is experimentally compared with that of several competitive subspace clustering algorithms. The experimental studies demonstrate that the proposed algorithm obtains better performance than most existing soft subspace clustering algorithms.
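To make the objective concrete, here is a hedged sketch of a soft-subspace objective of the stated form: fuzzily weighted within-cluster compactness penalized by between-cluster separation, with per-cluster dimension weights. The exponents, the separation term, and the balance parameter are illustrative, not the paper's exact function.

```python
# A minimal sketch of a soft-subspace clustering objective: fuzzily
# weighted within-cluster compactness minus between-cluster separation,
# with per-cluster dimension weights W. Constants are assumptions.
import numpy as np

def objective(X, centers, U, W, beta=2.0, gamma=0.1):
    """X:(n,d) data, centers:(k,d), U:(n,k) memberships, W:(k,d) weights."""
    x0 = X.mean(axis=0)                                  # global center
    J = 0.0
    for k in range(centers.shape[0]):
        within = ((X - centers[k]) ** 2) @ (W[k] ** beta)    # (n,)
        between = ((centers[k] - x0) ** 2) @ (W[k] ** beta)  # scalar
        J += np.sum(U[:, k] ** 2 * within) - gamma * between
    return J
```

A cuckoo search would then treat candidate (centers, W) pairs as nests and minimize this objective, with the memberships U updated from the weighted distances.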
Abstract: Based on the literature on library learning spaces indexed in the CNKI and Web of Science databases from 2005 to 2024, this article conducts a visual analysis using CiteSpace, Bicomb, and SPSS. The results show that research abroad is dominated by technology-driven innovation, focusing on intelligent technologies and interdisciplinary applications, while domestic research gives equal weight to theoretical innovation and practical exploration, forming localized solutions through technologies such as big data and AI. The article proposes balancing digital technology with humanistic care, organically combining an international outlook with local innovation, and deeply integrating theoretical research with practical application, so as to provide reference for research on library learning spaces in China.
Abstract: We introduce and develop a novel approach to outlier detection based on an adaptation of random subspace learning. Our proposed method handles both high-dimension, low-sample-size and traditional low-dimensional, high-sample-size datasets. Essentially, we avoid the computational bottleneck of techniques like the Minimum Covariance Determinant (MCD) by computing the needed determinants and associated measures in much lower-dimensional subspaces. Both the theoretical and the computational development of our approach reveal that it is computationally more efficient than the regularized methods in the high-dimensional, low-sample-size setting, and it often competes favorably with existing methods as far as the percentage of correctly detected outliers is concerned.
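A minimal sketch of the subspace idea using scikit-learn's MinCovDet follows: MCD is fitted in many low-dimensional random subspaces, where it is cheap and well-posed even when n < p, and the robust Mahalanobis distances are aggregated by rank. The rank aggregation is an assumption, not necessarily the authors' combination rule.

```python
# A minimal sketch of random-subspace outlier scoring: fit MCD in many
# low-dimensional random subspaces and aggregate the robust Mahalanobis
# distances, avoiding the full-dimensional determinant computation.
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))           # n < p: full-dim MCD infeasible
X[:3] += 6                               # three planted outliers

n, p = X.shape
scores = np.zeros(n)
B, d = 50, 5                             # subspaces, subspace dimension
for _ in range(B):
    S = rng.choice(p, size=d, replace=False)
    md = MinCovDet(random_state=0).fit(X[:, S]).mahalanobis(X[:, S])
    scores += md.argsort().argsort()     # accumulate distance ranks
print("top suspects:", np.argsort(-scores)[:3])
```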
Funding: Supported by the Project of the Hubei Provincial Department of Science and Technology (Grant Nos. 2022CFB957 and 2022CFB475) and the National Natural Science Foundation of China (Grant No. 11847118).
Abstract: The decoherence of high-dimensional orbital angular momentum (OAM) entanglement in the weak scintillation regime is investigated. In this study, we simulate atmospheric turbulence using multiple phase screens imprinted with anisotropic non-Kolmogorov turbulence. The entanglement negativity and the fidelity are introduced to quantify the entanglement of a high-dimensional OAM state. The numerical results indicate that the entanglement negativity and fidelity last longer for a high-dimensional OAM state when the azimuthal mode has a lower value. Additionally, the evolution of high-dimensional OAM entanglement is significantly influenced by the OAM beam parameters and the turbulence parameters. Compared with isotropic atmospheric turbulence, anisotropic turbulence has a lesser influence on high-dimensional OAM entanglement.
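For reference, the negativity N(ρ) = (||ρ^(T_A)||_1 − 1)/2 can be computed as below for a noisy three-dimensional (qutrit, as for three OAM modes) maximally entangled state; the turbulence evolution itself is not reproduced here.

```python
# A minimal sketch of the entanglement negativity used to quantify
# high-dimensional entanglement: N(rho) = (||rho^{T_A}||_1 - 1) / 2,
# evaluated on a d = 3 maximally entangled state mixed with white noise.
import numpy as np

def negativity(rho, d):
    """Negativity via the partial transpose on the first d-level system."""
    r = rho.reshape(d, d, d, d)                  # indices (i, j, i', j')
    rho_ta = r.transpose(2, 1, 0, 3).reshape(d * d, d * d)
    eigs = np.linalg.eigvalsh(rho_ta)
    return (np.sum(np.abs(eigs)) - 1) / 2

d = 3
psi = np.eye(d).reshape(-1) / np.sqrt(d)         # (|00>+|11>+|22>)/sqrt(3)
rho_ent = np.outer(psi, psi)
for eps in (0.0, 0.5, 1.0):                      # increasing white noise
    rho = (1 - eps) * rho_ent + eps * np.eye(d * d) / d**2
    print(eps, round(negativity(rho, d), 4))     # decays from (d-1)/2 to 0
```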
Funding: Supported by the National Natural Science Foundation of China (12201446), the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (22KJB110005), and the Shuangchuang Program of Jiangsu Province (JSSCBS20220898).
Abstract: It is known that monotone recurrence relations can induce a class of twist homeomorphisms on the high-dimensional cylinder, extending the class of monotone twist maps on the annulus or two-dimensional cylinder. By constructing a bounded solution of the monotone recurrence relation, the main conclusion of this paper is obtained: the induced homeomorphism has Birkhoff orbits provided there is a compact forward-invariant set. This generalizes Angenent's results in the low-dimensional case.
Funding: Supported by the Scientific Research Foundation of the Hunan Provincial Education Department under Grant 19A503; partially supported by the Hunan Provincial Exploration of Undergraduate Research Learning and Innovative Experiment Project (2018XTUSJ008) and the Hunan Provincial Natural Science Foundation of China under Grant 2015JJ2144.
Abstract: In this paper, we study the global attractor and its properties for the infinite-lattice FitzHugh-Nagumo dynamical system in a weighted space l_σ^2 × l_σ^2. We prove the existence and uniqueness of the solution to the FitzHugh-Nagumo lattice dynamical system in l_σ^2 × l_σ^2. We then obtain a bounded absorbing set, which implies the existence of a global attractor. Finally, we study the uniform boundedness and the upper semicontinuity of the global attractor.
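Although the paper's contribution is analytical, the bounded-absorbing-set behavior can be observed numerically. Below is an Euler integration of a FitzHugh-Nagumo lattice of the general kind studied; the coefficients, cubic nonlinearity, and periodic coupling are illustrative assumptions, not the paper's exact system.

```python
# A minimal sketch of a FitzHugh-Nagumo lattice system on a finite window
# of sites: trajectories started from large initial data contract into a
# bounded region, the behavior an absorbing set formalizes.
import numpy as np

N, dt, steps = 200, 0.01, 5000
rng = np.random.default_rng(0)
u = rng.uniform(-5, 5, N)                        # large initial data
v = rng.uniform(-5, 5, N)
g = 0.1 * rng.standard_normal(N)                 # external forcing g_i

for _ in range(steps):
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)  # discrete Laplacian
    du = lap + u - u**3 - v + g                   # activator equation
    dv = 0.08 * (u - 0.8 * v)                     # recovery equation
    u, v = u + dt * du, v + dt * dv

print("sup norms after transient:", np.abs(u).max(), np.abs(v).max())
```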
Funding: Supported in part by the Young Scientists Fund of the National Natural Science Foundation of China (Grant Nos. 82304253 and 82273709), the Foundation for Young Talents in Higher Education of Guangdong Province (Grant No. 2022KQNCX021), and the PhD Starting Project of Guangdong Medical University (Grant No. GDMUB2022054).
Abstract: Objective: Humans are exposed to complex mixtures of environmental chemicals and other factors that can affect their health. Analysis of these mixture exposures presents several key challenges for environmental epidemiology and risk assessment, including high dimensionality, correlated exposures, and subtle individual effects. Methods: We propose a novel statistical approach, the generalized functional linear model (GFLM), to analyze the health effects of exposure mixtures. GFLM treats the effect of mixture exposures as a smooth function by reordering exposures based on specific mechanisms and capturing internal correlations, providing meaningful estimation and interpretation. Its robustness and efficiency were evaluated under various scenarios through extensive simulation studies. Results: We applied the GFLM to two datasets from the National Health and Nutrition Examination Survey (NHANES). In the first application, we examined the effects of 37 nutrients on BMI (2011–2016 cycles). The GFLM identified a significant mixture effect, with fiber and fat emerging as the nutrients with the greatest negative and positive effects on BMI, respectively. In the second application, we investigated the association between four per- and polyfluoroalkyl substances (PFAS) and gout risk (2007–2018 cycles). Unlike traditional methods, the GFLM indicated no significant association, demonstrating its robustness to multicollinearity. Conclusion: The GFLM framework is a powerful tool for mixture exposure analysis, offering improved handling of correlated exposures and interpretable results. It demonstrates robust performance across various scenarios and real-world applications, advancing our understanding of complex environmental exposures and their health impacts in environmental epidemiology and toxicology.
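The core functional-linear-model device, regressing a scalar outcome on basis-projected, reordered exposures so that the mixture effect is a smooth coefficient function, can be sketched as follows on simulated data. The polynomial basis and the data-generating choices are illustrative and do not reproduce the paper's GFLM or the NHANES analyses.

```python
# A minimal sketch of a functional linear model for a mixture: exposures
# are ordered along an index t, the coefficient function beta(t) is
# expanded in a small basis, and the outcome is regressed on the
# basis-projected exposures.
import numpy as np

rng = np.random.default_rng(0)
n, p, K = 1000, 37, 4                      # subjects, exposures, basis size
t = np.linspace(0, 1, p)                   # mechanism-based ordering index
X = rng.normal(size=(n, p))                # reordered exposure matrix
beta_true = 1 - 4 * (t - 0.5) ** 2         # smooth true effect function
y = X @ beta_true / p + 0.02 * rng.normal(size=n)

B = np.vander(t, K, increasing=True)       # (p, K) polynomial basis
Z = X @ B / p                              # functional covariates, (n, K)
c, *_ = np.linalg.lstsq(Z, y, rcond=None)  # fitted basis coefficients
beta_hat = B @ c                           # estimated beta(t)
print("max abs error in beta(t):", np.abs(beta_hat - beta_true).max())
```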
Funding: Supported by the Fundamental Research Program of Shanxi Province (Nos. 202203021211088, 202403021212254, and 202403021221109) and the Graduate Research Innovation Project in Shanxi Province (No. 2024KY616).
Abstract: Data collected in fields such as cybersecurity and biomedicine often exhibit high dimensionality and class imbalance. To address the low classification accuracy on minority-class samples caused by numerous irrelevant and redundant features in high-dimensional imbalanced data, we propose a novel feature selection method named AMF-SGSK, based on an adaptive multi-filter and subspace-based gaining-sharing knowledge. First, a balanced dataset is obtained by random under-sampling. Second, combining the feature importance score with the AUC score for each filter method, we propose a concept called feature hardness to judge the importance of features, which adaptively selects the essential features. Finally, the optimal feature subset is obtained by gaining-sharing knowledge in multiple subspaces. This approach effectively achieves dimensionality reduction for high-dimensional imbalanced data. Experimental results on 30 benchmark imbalanced datasets show that AMF-SGSK outperforms eight commonly used algorithms, including BGWO and IG-SSO, in terms of F1-score, AUC, and G-mean. The mean values of F1-score, AUC, and G-mean for AMF-SGSK are 0.950, 0.967, and 0.965, respectively, the highest among all algorithms, and its mean G-mean exceeds those of IG-PSO, ReliefF-GWO, and BGOA by 3.72%, 11.12%, and 20.06%, respectively. Furthermore, the selected feature ratio is below 0.01 on the selected ten datasets, further demonstrating the method's overall superiority over competing approaches. AMF-SGSK adaptively removes irrelevant and redundant features and effectively improves the classification accuracy of high-dimensional imbalanced data, providing scientific and technological references for practical applications.
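The filter-plus-AUC scoring step can be sketched as follows: after random under-sampling, each feature receives a filter importance (mutual information here) and a single-feature AUC, and the two are fused into one score. The product fusion stands in for the paper's feature-hardness rule, which is not reproduced here.

```python
# A minimal sketch of fusing a filter importance score with a
# per-feature AUC after random under-sampling, in the spirit of the
# feature-hardness idea; the product combination is an assumption.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=400, n_features=100, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)

# random under-sampling to balance the classes before scoring
rng = np.random.default_rng(0)
idx1 = np.where(y == 1)[0]
idx0 = rng.choice(np.where(y == 0)[0], size=len(idx1), replace=False)
sel = np.concatenate([idx0, idx1])
Xb, yb = X[sel], y[sel]

mi = mutual_info_classif(Xb, yb, random_state=0)          # filter score
auc = np.array([max(roc_auc_score(yb, Xb[:, j]),          # per-feature AUC,
                    1 - roc_auc_score(yb, Xb[:, j]))      # direction-free
                for j in range(X.shape[1])])
hardness = mi * auc                                       # combined score
print("top features:", np.argsort(-hardness)[:8])
```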
Funding: Supported by the Natural Science Foundation of China (12461021).
Abstract: The goal of this paper is to establish the boundedness, on grand p-adic Herz spaces, of the p-adic fractional integral operator with rough kernel I_(β,Ω′)^(p) and of its commutators generated by b ∈ Λ_(γ)(Q_(p)^(n)) (0 < γ < 1) and I_(β,Ω′)^(p).
Funding: Supported by the National Natural Science Foundation of China (No. 12372045) and the National Key Research and Development Program of China (Nos. 2023YFC2205900 and 2023YFC2205901).
Abstract: This paper addresses model-free dual-arm space robot maneuvering after non-cooperative target capture under high control-quality requirements. The explicit system model is unavailable, and the maneuvering mission is disturbed by measurement noise and adversarial target behavior. To address these problems, a model-free Combined Adaptive-length Data-driven Predictive Controller (CADPC) is proposed. It consists of a separated subsystem identification method and a combined predictive control strategy. The subsystem identification method uses an adaptive data length, reducing sensitivity to undetermined measurement noise and disturbances. Based on the subsystem identification, the combined predictive controller is established, reducing the computational resources required. The stability of the CADPC is rigorously proven using the Input-to-State Stability (ISS) theorem and the small-gain theorem. Simulations demonstrate that CADPC effectively handles model-free space robot post-capture operation in the presence of significant disturbances, state measurement noise, and control input errors, achieving improved steady-state accuracy, reduced steady-state control consumption, and minimized control-input chattering.
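In the same spirit (though not the CADPC algorithm itself), the sketch below identifies a local linear subsystem by least squares over a sliding data window and applies a one-step predictive control law; the window length, cost weights, plant, and excitation dither are illustrative assumptions.

```python
# A minimal sketch of data-driven predictive control: identify a local
# linear model x_{t+1} = A x_t + B u_t over a sliding window, then pick
# u minimizing a one-step quadratic cost. Not the paper's CADPC.
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[1.0, 0.1], [0.0, 1.0]])
B_true = np.array([[0.05], [0.1]])

def plant(x, u):                                # unknown to the controller
    return A_true @ x + B_true @ u + 1e-4 * rng.standard_normal(2)

x = np.array([1.0, 0.0])
X_h, U_h, Y_h = [], [], []
W, r = 30, 0.01                                 # window length, control weight

for t in range(200):
    if len(X_h) >= 10:
        # least-squares fit of [A B] on the most recent W samples
        Phi = np.hstack([np.array(X_h[-W:]), np.array(U_h[-W:])])
        Theta, *_ = np.linalg.lstsq(Phi, np.array(Y_h[-W:]), rcond=None)
        A_hat, B_hat = Theta.T[:, :2], Theta.T[:, 2:]
        # one-step predictive law: minimize ||A x + B u||^2 + r u^2
        u = -np.linalg.solve(B_hat.T @ B_hat + r * np.eye(1),
                             B_hat.T @ (A_hat @ x))
    else:
        u = 0.2 * rng.standard_normal(1)        # initial excitation
    u = u + 0.02 * rng.standard_normal(1)       # persistent dither
    x_next = plant(x, u)
    X_h.append(x.copy()); U_h.append(u.copy()); Y_h.append(x_next.copy())
    x = x_next

print("final state norm:", np.linalg.norm(x))
```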