Nonlinear transforms have significantly advanced learned image compression (LIC), particularly through residual blocks. This transform enhances nonlinear expression ability and obtains a compact feature representation by enlarging the receptive field, which governs how the convolution process extracts features in a high dimensional feature space. However, its functionality is restricted to the spatial dimension and network depth, limiting further improvements in network performance due to insufficient information interaction and representation. Crucially, the potential of the high dimensional feature space in the channel dimension, and the exploration of network width/resolution, remain largely untapped. In this paper, we consider nonlinear transforms from the perspective of feature space, defining high-dimensional feature spaces in different dimensions and investigating their specific effects. Firstly, we introduce dimension-increasing and dimension-decreasing transforms in both the channel and spatial dimensions to obtain a high dimensional feature space and achieve better feature extraction. Secondly, we design a channel-spatial fusion residual transform (CSR), which incorporates multi-dimensional transforms for a more effective representation. Furthermore, we simplify the proposed fusion transform to obtain a slim architecture (CSR-sm), balancing network complexity and compression performance. Finally, we build the overall network with stacked CSR transforms to achieve better compression and reconstruction. Experimental results demonstrate that the proposed method achieves superior rate-distortion performance compared to existing LIC methods and traditional codecs. Specifically, it achieves a 9.38% BD-rate reduction over VVC on the Kodak dataset.
An algorithm, Clustering Algorithm Based On Sparse Feature Vector (CABOSFV), was proposed for the high dimensional clustering of binary sparse data. The algorithm compresses the data effectively by using a 'Sparse Feature Vector' tool, thus reducing the data scale enormously, and can obtain the clustering result with only one data scan. Both theoretical analysis and empirical tests showed that CABOSFV has low computational complexity. The algorithm finds clusters in high dimensional large datasets efficiently and handles noise effectively.
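The single-scan, summary-based idea can be sketched generically. The dissimilarity rule below is a simplification for illustration, not the paper's exact Sparse Feature Vector definition, and all names are hypothetical:

```python
# Rough sketch of single-scan clustering for binary sparse records, in the
# spirit of summary-vector methods such as CABOSFV. Each cluster keeps only
# a compact summary (size, union, intersection of active features), so the
# data are compressed and each record is examined once.

def cluster_sparse(records, threshold=0.5):
    """records: list of sets of active feature indices."""
    clusters = []  # each: {"n", "union", "inter", "members"}
    for i, rec in enumerate(records):
        best = None
        for c in clusters:
            union = c["union"] | rec
            inter = c["inter"] & rec
            if not union:
                d = 0.0  # two empty records are identical
            else:
                # fraction of involved features not shared by all members
                d = 1.0 - len(inter) / len(union)
            if d <= threshold and (best is None or d < best[0]):
                best = (d, c)
        if best is None:
            clusters.append({"n": 1, "union": set(rec),
                             "inter": set(rec), "members": [i]})
        else:
            c = best[1]
            c["n"] += 1
            c["union"] |= rec
            c["inter"] &= rec
            c["members"].append(i)
    return clusters
```

On four toy records with two natural groups, the summaries alone are enough to separate them in one pass.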
Information analysis of high dimensional data was carried out through similarity measure application. High dimensional data were considered as an atypical structure. Additionally, overlapped and non-overlapped data were introduced, and similarity measure analysis was illustrated and compared with the conventional similarity measure. As a result, overlapped data could be compared using the conventional similarity measure, while non-overlapped data similarity analysis provided the clue to solving the similarity of high dimensional data. The high dimensional data analysis was designed with consideration of neighborhood information, and both conservative and strict solutions were proposed. The proposed similarity measure was applied to express financial fraud across multi-dimensional datasets. In an illustrative example, financial fraud similarity with respect to age, gender, qualification, and job was presented, and with the proposed similarity measure, high dimensional personal data were evaluated for how similar they are to financial fraud. Calculation results show that actual fraud has a rather high similarity measure compared to the average, ranging from 0.0609 to 0.1667.
In this paper, the global controllability for a class of high dimensional polynomial systems is investigated and a constructive algebraic criterion algorithm for their global controllability is obtained. By the criterion algorithm, global controllability can be determined in finitely many arithmetic operations. The algorithm is imposed on the coefficients of the polynomials only, and the analysis technique is based on the Sturm Theorem in real algebraic geometry and its modern developments. Finally, the authors give some examples to show the application of the results.
To solve the high-dimensionality issue and improve accuracy in credit risk assessment, a high-dimensionality-trait-driven learning paradigm is proposed for feature extraction and classifier selection. The proposed paradigm consists of three main stages: categorization of high dimensional data, high-dimensionality-trait-driven feature extraction, and high-dimensionality-trait-driven classifier selection. In the first stage, according to the definition of high-dimensionality and the relationship between sample size and feature dimensions, the high-dimensionality traits of a credit dataset are categorized into two types: 100 < feature dimensions < sample size, and feature dimensions ≥ sample size. In the second stage, some typical feature extraction methods are tested against the two categories of high dimensionality. In the final stage, four types of classifiers are applied to evaluate credit risk under the different high-dimensionality traits. For illustration and verification, credit classification experiments are performed on two publicly available credit risk datasets, and the results show that the proposed paradigm is effective in handling high-dimensional credit classification issues and improves credit classification accuracy relative to the benchmark models listed in this study.
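The first-stage categorization rule is stated explicitly in the abstract (the threshold 100 and the sample-size comparison) and can be written down directly; the function name and labels are illustrative:

```python
# Categorize a dataset's high-dimensionality trait following the two
# types named in the abstract (threshold 100 is taken from there).
def hd_trait(n_samples, n_features):
    if n_features >= n_samples:
        return "type2: feature dimensions >= sample size"
    if 100 < n_features < n_samples:
        return "type1: 100 < feature dimensions < sample size"
    return "not high-dimensional by this definition"
```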
Markowitz portfolio theory under-estimates the risk associated with the return of a portfolio in the case of high dimensional data. El Karoui proved this mathematically in [1] and suggested improved estimators for unbiased estimation of this risk under specific model assumptions. Norm-constrained portfolios have recently been studied to keep the effective dimension low. In this paper we consider three sets of high dimensional data: the stock market prices for three countries, namely the US, UK, and India. We compare the Markowitz efficient frontier to those obtained by unbiasedness corrections and by imposing norm constraints in these real data scenarios. We also study the out-of-sample performance of the different procedures. We find that the 2-norm constrained portfolio has the best overall performance.
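A minimal sketch of the 2-norm idea (not the paper's estimator): penalizing the squared 2-norm of the weights is the Lagrangian relaxation of a 2-norm-constrained minimum-variance problem and has a ridge-like closed form; `lam = 0` recovers the classical minimum-variance weights.

```python
import numpy as np

def min_var_weights(cov, lam=0.0):
    """Minimum-variance weights under sum(w) = 1, with an optional
    2-norm (ridge) penalty lam * ||w||^2 that shrinks the weights
    toward the equal-weight portfolio."""
    n = cov.shape[0]
    m = np.linalg.solve(cov + lam * np.eye(n), np.ones(n))
    return m / m.sum()  # normalize so the weights sum to one
```

For a diagonal covariance the unconstrained weights are proportional to the inverse variances; a positive `lam` visibly reduces the weight vector's 2-norm while the budget constraint still holds.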
This paper proposes a procedure for testing the regression coefficients in high dimensional partially linear models based on the F-statistic. In the partially linear model, the authors first estimate the unknown nonlinear component by nonparametric methods and then generalize the F-statistic to test the regression coefficients under some regularity conditions. During this procedure, the estimation of the nonlinear component makes it challenging to explore the properties of the generalized F-test. The authors obtain asymptotic properties of the generalized F-test in fairly general cases, including asymptotic normality and the power of the test with p/n ∈ (0, 1), without a normality assumption. The asymptotic result is general, and by adding some constraint conditions similar conclusions can be obtained for high dimensional linear models. Through simulation studies, the authors demonstrate good finite-sample performance of the proposed test in comparison with the theoretical results. The practical utility of the method is illustrated by a real data example.
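For background, the classical F-statistic being generalized compares residual sums of squares of nested linear models; the paper's high dimensional, partially linear version is not reproduced here, and the function below is only the textbook form:

```python
import numpy as np

def f_statistic(X_full, X_reduced, y):
    """Classical F-statistic for H0: the coefficients of the columns
    in X_full but not in X_reduced are zero (nested linear models)."""
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return r @ r
    n, p_full = X_full.shape
    q = p_full - X_reduced.shape[1]        # number of restrictions
    rss_r, rss_f = rss(X_reduced), rss(X_full)
    return ((rss_r - rss_f) / q) / (rss_f / (n - p_full))
```

On a tiny hand-checkable example (intercept-only model versus intercept plus one slope), the value matches the manual computation.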
Three high dimensional spatial standardization algorithms are used for diffusion tensor image (DTI) registration, and seven methods are used to evaluate their performance. Firstly, the template used in this paper was obtained by spatial transformation of 16 subjects by means of tensor-based standardization. Then, high dimensional standardization algorithms for diffusion tensor images were performed, including a fractional anisotropy (FA) based diffeomorphic registration algorithm, an FA based elastic registration algorithm, and a tensor-based registration algorithm. Finally, seven evaluation methods, including normalized standard deviation, dyadic coherence, diffusion cross-correlation, overlap of eigenvalue-eigenvector pairs, Euclidean distance of the diffusion tensor, Euclidean distance of the deviatoric tensor, and deviatoric of tensors, were used to qualitatively compare and summarize the above standardization algorithms. Experimental results revealed that the high-dimensional tensor-based standardization algorithm performs well and can maintain the consistency of anatomical structures.
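Two of the evaluation measures listed above can be sketched generically: the Euclidean (Frobenius) distance between two 3x3 diffusion tensors, and the deviatoric (trace-free) part on which the deviatoric-based measures operate. This is a generic illustration, not the paper's exact implementation:

```python
import numpy as np

def tensor_distance(D1, D2):
    """Euclidean (Frobenius) distance between two diffusion tensors."""
    return np.linalg.norm(D1 - D2)

def deviatoric(D):
    """Trace-free part of a 3x3 diffusion tensor."""
    return D - np.trace(D) / 3.0 * np.eye(3)
```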
This paper considers tests for regression coefficients in high dimensional partially linear models. The authors first use the B-spline method to estimate the unknown smooth function so that it can be linearly expressed. Then, the authors propose an empirical likelihood method to test the regression coefficients. The authors derive the asymptotic chi-squared distribution, with two degrees of freedom, of the proposed test statistics under the null hypothesis. In addition, the method is extended to tests with nuisance parameters. Simulations show that the proposed method has good performance in controlling the type-I error rate and in power. The proposed method is also employed to analyze a Skin Cutaneous Melanoma (SKCM) dataset.
The covariance matrix plays an important role in risk management, asset pricing, and portfolio allocation. Covariance matrix estimation becomes challenging when the dimensionality is comparable to or much larger than the sample size. A widely used approach for reducing dimensionality is based on multi-factor models. Although it has been well studied and quite successful in many applications, the quality of the estimated covariance matrix is often degraded by a nontrivial amount of missing data in the factor matrix, for both technical and cost reasons. Since the factor matrix is only approximately low rank or even has full rank, existing matrix completion algorithms are not applicable. We consider a new matrix completion paradigm using the factor models directly and apply the alternating direction method of multipliers for the recovery. Numerical experiments show that the nuclear-norm matrix completion approaches are not suitable, but our proposed models and algorithms are promising.
We propose a two-step variable selection procedure for censored quantile regression with high dimensional predictors. To account for censored data in the high dimensional case, we employ effective dimension reduction and the idea of informative subsets. Under some regularity conditions, we show that our procedure enjoys model selection consistency. A simulation study and real data analysis are conducted to evaluate the finite sample performance of the proposed approach.
Clustering high dimensional data is challenging, as data dimensionality increases the distance between data points, resulting in sparse regions that degrade clustering performance. Subspace clustering is a common approach for processing high-dimensional data by finding the relevant features for each cluster in the data space. Subspace clustering methods extend traditional clustering to account for the constraints imposed by data streams. Data streams are not only high-dimensional, but also unbounded and evolving. This necessitates the development of subspace clustering algorithms that can handle high dimensionality and adapt to the unique characteristics of data streams. Although many articles have contributed to the literature review on data stream clustering, there is currently no specific review of subspace clustering algorithms for high-dimensional data streams. Therefore, this article aims to systematically review the existing literature on subspace clustering of data streams in high-dimensional streaming environments. The review follows a systematic methodological approach and includes 18 articles in the final analysis. The analysis focused on two research questions related to the general clustering process and to dealing with the unbounded and evolving characteristics of data streams. The main findings relate to six elements: clustering process, cluster search, subspace search, synopsis structure, cluster maintenance, and evaluation measures. Most algorithms use a two-phase clustering approach consisting of an initialization stage, a refinement stage, a cluster maintenance stage, and a final clustering stage. The density-based top-down subspace clustering approach is more widely used than the others because it is able to distinguish true clusters and outliers using projected microclusters. Most algorithms implicitly adapt to the evolving nature of the data stream by using a time fading function that is sensitive to outliers. Future work can focus on the clustering framework, parameter optimization, subspace search techniques, memory-efficient synopsis structures, explicit cluster change detection, and intrinsic performance metrics. This article can serve as a guide for researchers interested in high-dimensional subspace clustering methods for data streams.
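The time fading function mentioned above is commonly the exponential decay 2^(-lambda * (t - t0)), under which stale micro-clusters lose weight and can be pruned. A minimal sketch, with parameter values chosen only for illustration:

```python
def fading_weight(t_now, t_last, decay=0.25):
    """Exponential time fading: weight halves every 1/decay time units."""
    return 2.0 ** (-decay * (t_now - t_last))

def prune(microclusters, t_now, decay=0.25, min_weight=0.1):
    """microclusters: list of (id, t_last_update, count) tuples.
    Keep only micro-clusters whose faded weight is still significant."""
    return [mc for mc in microclusters
            if mc[2] * fading_weight(t_now, mc[1], decay) >= min_weight]
```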
Dynamic transient responses of a rotating twisted plate under air-blast loading and step loading, respectively, considering geometric nonlinear relationships, are investigated using classical shallow shell theory. By applying the energy principle, a novel high dimensional nonlinear dynamic system of the rotating cantilever twisted plate is derived for the first time. The use of variable mode functions, constructed from polynomial functions according to the twist angle and geometry of the plate, describes the dynamic system more accurately than the classic cantilever beam functions and free-free beam functions. Comparison studies are carried out between the present results and other literature to validate the present model, formulation, and computational procedure. The equations of motion describing the transient high dimensional nonlinear dynamic response are reduced to a four-degree-of-freedom dynamic system expressed by the out-of-plane displacement. The effects of twist angle, stagger angle, rotation speed, load intensity, and viscous damping on the nonlinear dynamic transient responses of the twisted plate are investigated. Although a homogeneous and isotropic material is applied here, the approach may also be helpful for laminated composites and functionally graded materials, as long as the equivalent material parameters are obtained.
The query space of a similarity query is usually narrowed down by pruning inactive query subspaces, which contain no query results, and keeping active query subspaces, which may contain objects corresponding to the request. However, some active query subspaces may contain no query results at all; these are called false active query subspaces. Obviously, the performance of query processing degrades in the presence of false active query subspaces. Our experiments show that this problem becomes serious when the data are high dimensional, and that the number of accesses to false active subspaces increases with dimensionality. To solve this problem, this paper proposes a space mapping approach to reducing such unnecessary accesses. A given query space can be refined by filtering within its mapped space. To this end, a mapping strategy called maxgap is proposed to improve the efficiency of the refinement processing. Based on the mapping strategy, an index structure called the MS-tree and query processing algorithms are presented in this paper. Finally, the performance of the MS-tree is compared with that of other competitors in terms of range queries on a real data set.
This paper aims to develop a new robust U-type test for high dimensional regression coefficients using an estimated U-statistic of order two and refitted cross-validation error variance estimation. It is proved that the limiting null distribution of the proposed test is normal under two kinds of ordinary models. We further study the local power of the proposed test and compare it with other competitive tests for high dimensional data. The refitted cross-validation approach is utilized to reduce the bias of the sample variance in the estimation of the test statistic. Our theoretical results indicate that the proposed test can have an even more substantial power gain than the test by Zhong and Chen (2011) when testing a hypothesis with outlying observations and heavy tailed distributions. We assess the finite-sample performance of the proposed test by examining its size and power via Monte Carlo studies. We also illustrate the application of the proposed test by an empirical analysis of a real data example.
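For background, an order-two U-statistic (the building block of the U-type test above) averages a symmetric kernel over all sample pairs; the paper's specific kernel and variance correction are not reproduced here:

```python
from itertools import combinations

def u_statistic(sample, kernel):
    """Order-two U-statistic: mean of kernel(a, b) over all pairs."""
    pairs = list(combinations(sample, 2))
    return sum(kernel(a, b) for a, b in pairs) / len(pairs)
```

With the kernel (a - b)^2 / 2 this recovers the usual unbiased sample variance, a standard sanity check.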
Nonconvex penalties, including the smoothly clipped absolute deviation (SCAD) penalty and the minimax concave penalty (MCP), enjoy the properties of unbiasedness, continuity, and sparsity, while ridge regression can deal with the collinearity problem. Combining the strengths of nonconvex penalties and ridge regression (abbreviated as NPR), we study the oracle property of the NPR estimator in high dimensional settings with highly correlated predictors, where the dimensionality of covariates pn is allowed to increase exponentially with the sample size n. Simulation studies and a real data example are presented to verify the performance of the NPR method.
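The SCAD penalty named above has a standard piecewise form (Fan and Li), linear near zero, quadratic in a transition region, and constant beyond, which is what yields sparsity near zero and unbiasedness for large coefficients; `a = 3.7` is the conventional choice:

```python
# SCAD penalty value for a single coefficient t at tuning parameter lam.
def scad(t, lam, a=3.7):
    t = abs(t)
    if t <= lam:
        return lam * t                                    # L1-like region
    if t <= a * lam:
        return (2 * a * lam * t - t * t - lam * lam) / (2 * (a - 1))
    return lam * lam * (a + 1) / 2                        # flat region
```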
Super-resolution (SR) imaging has been widely used in several fields like remote sensing and microscopy. However, it is challenging for existing SR approaches to capture SR images in a single shot, especially in dynamic imaging scenarios.
Tailoring multiple degrees of freedom (DoFs) to achieve a high-dimensional laser field is crucial for advancing optical technologies. While recent advancements have demonstrated the ability to manipulate a limited number of DoFs, most existing methods rely on bulky optical components or intricate systems that employ time-consuming iterative methods; most critically, the on-demand tailoring of multiple DoFs simultaneously through a compact, single element remains underexplored. In this study, we propose an intelligent hybrid strategy that enables the simultaneous and customizable manipulation of six DoFs: wave vector, initial phase, spatial mode, amplitude, orbital angular momentum (OAM), and spin angular momentum (SAM). Our approach has a phase-only property, which facilitates a tailoring strategy experimentally demonstrated on a compact metasurface. A fabricated sample is tailored to realize arbitrary manipulation across the six DoFs, constructing a 288-dimensional space. Notably, since the OAM eigenstates constitute an infinite-dimensional Hilbert space, this proposal can be extended to even higher dimensions. Proof-of-principle experiments confirm the effectiveness of the manipulation capability and dimensionality. We envision that this powerful tailoring ability offers immense potential for multifunctional photonic devices in both classical and quantum scenarios, and that its compactness extends the dimensional capabilities to meet on-chip integration requirements.
Dear Editor, This letter presents a novel latent factorization model for high dimensional and incomplete (HDI) tensors, namely the neural Tucker factorization (NeuTucF), which is a generic neural-network-based latent-factorization-of-tensors model under the Tucker decomposition framework.
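The Tucker model that NeuTucF builds on reconstructs a 3-way tensor from a core tensor multiplied by a factor matrix along each mode; for an HDI tensor, such a model would be fitted against only the observed entries. The sketch below shows the reconstruction step only; the neural parameterization is not reproduced:

```python
import numpy as np

def tucker_reconstruct(core, U, V, W):
    """X[i,j,k] = sum_{p,q,r} core[p,q,r] * U[i,p] * V[j,q] * W[k,r]."""
    return np.einsum('pqr,ip,jq,kr->ijk', core, U, V, W)
```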
Two-dimensional (2D) nanomaterials, including graphene, titanium carbide (MXenes), montmorillonite, and boron nitride, have excellent mechanical, electrical, and thermal properties and good biocompatibility, showing promising applications in aerospace, flexible electronics, and biomedicine [1,2]. It remains a great challenge to scalably assemble 2D nanomaterials into high-performance macroforms for realizing these commercial applications. Natural nacre is a typical 2D nanocomposite composed of 95 vol% aragonite flakes and 5 vol% biopolymer and possesses unique mechanical properties owing to its ordered layered structure and sophisticated interface interactions [3]. Inspired by the relationship between the microstructure and macro-properties of nacre, various assembly strategies have been developed to fabricate high-performance 2D nanocomposites by improving interlayer connectivity and alignment.
Funding (learned image compression study): supported by the Key Program of the National Natural Science Foundation of China (Grant No. 62031013) and the Guangdong Province Key Construction Discipline Scientific Research Capacity Improvement Project (Grant No. 2022ZDJS117).
Funding (similarity measure study): Project (RDF 11-02-03) supported by the Research Development Fund of XJTLU, China.
Funding (global controllability study): supported by the Natural Science Foundation of China under Grant Nos. 60804008, 61174048 and 11071263, the Fundamental Research Funds for the Central Universities, and the Guangdong Province Key Laboratory of Computational Science at Sun Yat-Sen University.
Funding (credit risk assessment study): partially supported by grants from the Key Program of the National Natural Science Foundation of China (NSFC Nos. 71631005 and 71731009) and the Major Program of the National Social Science Foundation of China (No. 19ZDA103).
Funding (generalized F-test study): supported by the Natural Science Foundation of China under Grant Nos. 11231010, 11471223 and 11501586, BCMIIS, and the Key Project of the Beijing Municipal Educational Commission under Grant No. KZ201410028030.
Funding: Supported by the National Key Research and Development Program of China (2016YFC0100300), the National Natural Science Foundation of China (61402371, 61771369), the Natural Science Basic Research Plan in Shaanxi Province of China (2017JM6008), and the Fundamental Research Funds for the Central Universities of China (3102017zy032, 3102018zy020).
Abstract: Three high-dimensional spatial standardization algorithms are used for diffusion tensor image (DTI) registration, and seven methods are used to evaluate their performance. First, the template used in this paper was obtained by spatial transformation of 16 subjects by means of tensor-based standardization. Then, high-dimensional standardization algorithms for diffusion tensor images were performed, including a fractional anisotropy (FA) based diffeomorphic registration algorithm, an FA-based elastic registration algorithm, and a tensor-based registration algorithm. Finally, seven evaluation methods, including normalized standard deviation, dyadic coherence, diffusion cross-correlation, overlap of eigenvalue-eigenvector pairs, Euclidean distance of the diffusion tensor, and Euclidean distance of the deviatoric tensor and deviatoric of tensors, were used to qualitatively compare and summarize the above standardization algorithms. Experimental results revealed that the high-dimensional tensor-based standardization algorithms perform well and can maintain the consistency of anatomical structures.
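As a concrete reference for one of the quantities above, fractional anisotropy has a standard closed form in terms of the diffusion tensor's eigenvalues. The following pure-Python sketch implements that formula; the function name and the example eigenvalues are illustrative, not taken from the paper.

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """FA of a diffusion tensor from its three eigenvalues.

    FA = sqrt(3/2) * ||lambda - MD|| / ||lambda||, where MD is the
    mean diffusivity; 0 for isotropic diffusion, 1 for a pure "stick".
    """
    md = (l1 + l2 + l3) / 3.0                      # mean diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    if den == 0:
        return 0.0
    return math.sqrt(1.5 * num / den)

# typical white-matter-like eigenvalues (illustrative, in mm^2/s)
fa_wm = fractional_anisotropy(1.7e-3, 2.0e-4, 2.0e-4)
```

FA is scale-invariant, so only the ratios of the eigenvalues matter; anisotropic voxels such as the example above score close to 1.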
Funding: Supported by the University of Chinese Academy of Sciences under Grant No. Y95401TXX2, Beijing Natural Science Foundation under Grant No. Z190004, and the Key Program of Joint Funds of the National Natural Science Foundation of China under Grant No. U19B2040.
Abstract: This paper considers tests for regression coefficients in high-dimensional partially linear models. The authors first use the B-spline method to estimate the unknown smooth function so that it can be expressed linearly. Then, the authors propose an empirical likelihood method to test the regression coefficients. The authors derive the asymptotic chi-squared distribution with two degrees of freedom of the proposed test statistics under the null hypothesis. In addition, the method is extended to tests with nuisance parameters. Simulations show that the proposed method has good performance in control of the type-I error rate and in power. The proposed method is also employed to analyze a dataset on Skin Cutaneous Melanoma (SKCM).
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 10971122, 11101274 and 11322109), Scientific and Technological Projects of Shandong Province (Grant No. 2009GG10001012), and the Excellent Young Scientist Foundation of Shandong Province (Grant No. BS2012SF025).
Abstract: Covariance matrices play an important role in risk management, asset pricing, and portfolio allocation. Covariance matrix estimation becomes challenging when the dimensionality is comparable to or much larger than the sample size. A widely used approach for reducing dimensionality is based on multi-factor models. Although this approach has been well studied and quite successful in many applications, the quality of the estimated covariance matrix is often degraded by a nontrivial amount of missing data in the factor matrix, for both technical and cost reasons. Since the factor matrix is only approximately low rank, or even has full rank, existing matrix completion algorithms are not applicable. We consider a new matrix completion paradigm that uses the factor models directly and apply the alternating direction method of multipliers for the recovery. Numerical experiments show that nuclear-norm matrix completion approaches are not suitable, but our proposed models and algorithms are promising.
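The idea of completing a factor matrix through its factor structure can be illustrated with a small alternating-least-squares sketch over the observed entries. ALS here is a simple stand-in for the ADMM solver described in the abstract, and the dimensions, rank, and missingness rate are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 60, 30, 3
F_true = rng.normal(size=(n, k))
L_true = rng.normal(size=(p, k))
M = F_true @ L_true.T                      # exactly rank-k factor matrix (toy case)
mask = rng.random((n, p)) > 0.3            # ~30% of entries missing

# Alternating least squares on the observed entries: fix one factor,
# solve a least-squares problem for the other, and repeat.
F = rng.normal(size=(n, k))
L = rng.normal(size=(p, k))
for _ in range(50):
    for i in range(n):                     # update row factors
        obs = mask[i]
        F[i] = np.linalg.lstsq(L[obs], M[i, obs], rcond=None)[0]
    for j in range(p):                     # update column factors
        obs = mask[:, j]
        L[j] = np.linalg.lstsq(F[obs], M[obs, j], rcond=None)[0]

M_hat = F @ L.T
rel_err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
```

In this noiseless exactly-low-rank toy setting ALS recovers the matrix to high accuracy; the paper's setting is harder precisely because the real factor matrix is only approximately low rank.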
基金supported by National Natural Science Foundation of China (Grant Nos. 11401383, 11301391 and 11271080)
Abstract: We propose a two-step variable selection procedure for censored quantile regression with high-dimensional predictors. To account for censored data in the high-dimensional case, we employ effective dimension reduction and the idea of an informative subset. Under some regularity conditions, we show that our procedure enjoys model selection consistency. A simulation study and real data analysis are conducted to evaluate the finite-sample performance of the proposed approach.
Abstract: Clustering high-dimensional data is challenging because increasing dimensionality inflates the distances between data points, producing sparse regions that degrade clustering performance. Subspace clustering is a common approach for processing high-dimensional data by finding the relevant features for each cluster in the data space. Subspace clustering methods extend traditional clustering to account for the constraints imposed by data streams: data streams are not only high-dimensional, but also unbounded and evolving. This necessitates the development of subspace clustering algorithms that can handle high dimensionality and adapt to the unique characteristics of data streams. Although many articles have contributed literature reviews on data stream clustering, there is currently no specific review of subspace clustering algorithms for high-dimensional data streams. Therefore, this article aims to systematically review the existing literature on subspace clustering of data streams in high-dimensional streaming environments. The review follows a systematic methodological approach and includes 18 articles in the final analysis. The analysis focused on two research questions related to the general clustering process and to dealing with the unbounded and evolving characteristics of data streams. The main findings relate to six elements: clustering process, cluster search, subspace search, synopsis structure, cluster maintenance, and evaluation measures. Most algorithms use a two-phase clustering approach consisting of an initialization stage, a refinement stage, a cluster maintenance stage, and a final clustering stage. The density-based top-down subspace clustering approach is more widely used than the others because it is able to distinguish true clusters from outliers using projected micro-clusters. Most algorithms adapt implicitly to the evolving nature of the data stream by using a time-fading function that is sensitive to outliers. Future work can focus on the clustering framework, parameter optimization, subspace search techniques, memory-efficient synopsis structures, explicit cluster change detection, and intrinsic performance metrics. This article can serve as a guide for researchers interested in high-dimensional subspace clustering methods for data streams.
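The time-fading mechanism mentioned in the findings is typically an exponential decay of micro-cluster weights. A minimal sketch follows; the class layout and the 2^(-λ·Δt) form follow common stream-clustering practice (e.g., DenStream-style fading), not any single surveyed algorithm.

```python
def fade(weight, dt, lam=0.01):
    """Exponential time-fading: weight decays by a factor 2^(-lam*dt)."""
    return weight * 2 ** (-lam * dt)

class MicroCluster:
    """Minimal faded micro-cluster weight summary (a toy synopsis structure)."""
    def __init__(self, lam=0.01):
        self.lam = lam
        self.weight = 0.0
        self.last_t = 0.0

    def insert(self, t):
        # fade the accumulated weight before absorbing the new point,
        # so old points contribute less as the stream evolves
        self.weight = fade(self.weight, t - self.last_t, self.lam)
        self.weight += 1.0
        self.last_t = t
```

Micro-clusters whose faded weight drops below a threshold are pruned, which is how fading implicitly handles both concept drift and outliers.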
基金support of National Natural Science Foundation of China through grant Nos.11872127,11832002 and 11732005,Fundamental Research Program of Shenzhen Municipality No.JCYJ20160608153749600 and the Project of Highlevel Innovative Team Building Plan for Beijing Municipal Colleges and Universities No.IDHT20180513 and the project of Qin Xin Talents Cultivation Program,Beijing Information Science&Technology University QXTCP A201901.
Abstract: Dynamic transient responses of a rotating twisted plate under air-blast loading and step loading, respectively, are investigated using classical shallow shell theory while accounting for geometrically nonlinear relationships. By applying the energy principle, a novel high-dimensional nonlinear dynamic system for the rotating cantilever twisted plate is derived for the first time. The use of variable mode functions built from polynomial functions, chosen according to the twist angle and geometry of the plate, describes the dynamic system more accurately than the classic cantilever beam functions or the free-free beam functions. Comparisons between the present results and other literature are carried out to validate the present model, formulation, and computational procedure. The equations of motion describing the transient high-dimensional nonlinear dynamic response are reduced to a four-degree-of-freedom dynamic system expressed in terms of the out-of-plane displacement. The effects of twist angle, stagger angle, rotation speed, load intensity, and viscous damping on the nonlinear dynamic transient responses of the twisted plate are investigated. Although a homogeneous, isotropic material is considered here, the approach may also be helpful for laminated composite and functionally graded materials, as long as the equivalent material parameters are obtained.
Funding: Supported by the National Basic Research Program of China (Grant No. 2006CB303103), the National Natural Science Foundation of China (Grant Nos. 60873011, 60802026, 60773219, 60773021), and the High Technology Program (Grant No. 2007AA01Z192).
Abstract: The query space of a similarity query is usually narrowed down by pruning inactive query subspaces, which contain no query results, and keeping active query subspaces, which may contain objects corresponding to the request. However, some active query subspaces may contain no query results at all; these are called false active query subspaces. The performance of query processing clearly degrades in the presence of false active query subspaces. Our experiments show that this problem becomes serious when the data are high-dimensional, and that the number of accesses to false active subspaces increases with the dimensionality. To solve this problem, this paper proposes a space mapping approach that reduces such unnecessary accesses. A given query space can be refined by filtering within its mapped space. To this end, a mapping strategy called maxgap is proposed to improve the efficiency of the refinement processing. Based on this mapping strategy, an index structure called the MS-tree and the corresponding query processing algorithms are presented. Finally, the performance of the MS-tree is compared with that of other competitors in terms of range queries on a real data set.
基金supported by National Natural Science Foundation of China (Grant Nos. 11071022, 11231010 and 11471223)Beijing Center for Mathematics and Information Interdisciplinary ScienceKey Project of Beijing Municipal Educational Commission (Grant No. KZ201410028030)
Abstract: This paper develops a new robust U-type test for high-dimensional regression coefficients using an estimated U-statistic of order two and refitted cross-validation error variance estimation. It is proved that the limiting null distribution of the proposed test is normal under two kinds of ordinary models. We further study the local power of the proposed test and compare it with other competitive tests for high-dimensional data. The refitted cross-validation approach is used to reduce the bias of the sample variance in the estimation of the test statistic. Our theoretical results indicate that the proposed test can have an even more substantial power gain than the test by Zhong and Chen (2011) when testing a hypothesis with outlying observations and heavy-tailed distributions. We assess the finite-sample performance of the proposed test by examining its size and power via Monte Carlo studies. We also illustrate the application of the proposed test through an empirical analysis of a real data example.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 11401340), the China Postdoctoral Science Foundation (Grant No. 2014M561892), and the Foundation of Qufu Normal University (Grant Nos. bsqd2012041, xkj201304).
Abstract: Nonconvex penalties, including the smoothly clipped absolute deviation (SCAD) penalty and the minimax concave penalty (MCP), enjoy the properties of unbiasedness, continuity, and sparsity, and ridge regression can deal with the collinearity problem. Combining the strengths of nonconvex penalties and ridge regression (abbreviated NPR), we study the oracle property of the NPR estimator in high-dimensional settings with highly correlated predictors, where the dimensionality of the covariates p_n is allowed to increase exponentially with the sample size n. Simulation studies and a real data example are presented to verify the performance of the NPR method.
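For reference, the two nonconvex penalties named in this abstract have simple closed forms. The pure-Python sketch below uses the conventional default tuning constants (a = 3.7 for SCAD, γ = 3 for MCP), which are common choices in the literature rather than values from the paper.

```python
def scad(t, lam, a=3.7):
    """Smoothly clipped absolute deviation penalty (Fan & Li, 2001).

    Linear (lasso-like) near zero, quadratic transition, then constant,
    so large coefficients are not biased toward zero.
    """
    t = abs(t)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1))
    return lam ** 2 * (a + 1) / 2

def mcp(t, lam, gamma=3.0):
    """Minimax concave penalty (Zhang, 2010).

    Concave quadratic up to gamma*lam, then constant.
    """
    t = abs(t)
    if t <= gamma * lam:
        return lam * t - t ** 2 / (2 * gamma)
    return gamma * lam ** 2 / 2
```

Both penalties flatten out for large |t|, which is the source of the unbiasedness property; the NPR method in the abstract adds a ridge term on top of one of these penalties.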
Funding: Supported by the National Natural Science Foundation of China (62201165, 62475270, 62471147) and the National Key Research and Development Program of China (2024YFF0505601, 2024YFF0505602, 2024YFF0505600).
Abstract: Super-resolution (SR) imaging has been widely used in several fields, such as remote sensing and microscopy. However, it is challenging for existing SR approaches to capture SR images in a single shot, especially in dynamic imaging scenarios.
Funding: Supported by the National Key Research and Development Program of China (2022YFB3607700), the National Natural Science Foundation of China (62350011, 62375014), the Beijing Natural Science Foundation (1232031), and the Special Fund for Basic Scientific Research of Central Universities of China (2024CX11002).
Abstract: Tailoring multiple degrees of freedom (DoFs) to achieve high-dimensional laser fields is crucial for advancing optical technologies. While recent advances have demonstrated the ability to manipulate a limited number of DoFs, most existing methods rely on bulky optical components or intricate systems that employ time-consuming iterative methods; most critically, the on-demand tailoring of multiple DoFs simultaneously through a compact, single element remains underexplored. In this study, we propose an intelligent hybrid strategy that enables the simultaneous and customizable manipulation of six DoFs: wave vector, initial phase, spatial mode, amplitude, orbital angular momentum (OAM), and spin angular momentum (SAM). Our approach exploits a phase-only property, which allows the tailoring strategy to be demonstrated experimentally on a compact metasurface. A fabricated sample is tailored to realize arbitrary manipulation across the six DoFs, constructing a 288-dimensional space. Notably, since the OAM eigenstates constitute an infinite-dimensional Hilbert space, the proposal can be further extended to even higher dimensions. Proof-of-principle experiments confirm the effectiveness of the manipulation capability and dimensionality. We envision that this powerful tailoring ability offers immense potential for multifunctional photonic devices in both classical and quantum scenarios, and that its compactness extends the dimensional capabilities available for on-chip integration.
Funding: Supported by the National Natural Science Foundation of China (62272078), the Chongqing Natural Science Foundation (CSTB2023NSCQ-LZX0069), and the Science and Technology Research Program of Chongqing Municipal Education Commission (KJQN202300210).
Abstract: Dear Editor, this letter presents a novel latent factorization model for high-dimensional and incomplete (HDI) tensors, namely neural Tucker factorization (NeuTucF), which is a generic neural-network-based latent-factorization-of-tensors model under the Tucker decomposition framework.
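The underlying idea of learning a Tucker-format factorization from an incomplete tensor can be conveyed with a small dense-gradient sketch. This is a generic gradient-descent recovery on observed entries written with NumPy einsum, not the NeuTucF architecture itself; the sizes, rank, learning rate, and initialization scale are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(2)
I, J, K, r = 10, 8, 6, 2
A = rng.normal(size=(I, r))
B = rng.normal(size=(J, r))
C = rng.normal(size=(K, r))
G = rng.normal(size=(r, r, r))
# ground-truth tensor in Tucker form: T[ijk] = sum_abc G[abc] A[ia] B[jb] C[kc]
T = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)
mask = rng.random((I, J, K)) > 0.5              # observed-entry indicator (HDI tensor)

# gradient descent on squared error over observed entries only
Ah = 0.3 * rng.normal(size=(I, r))
Bh = 0.3 * rng.normal(size=(J, r))
Ch = 0.3 * rng.normal(size=(K, r))
Gh = 0.3 * rng.normal(size=(r, r, r))
lr = 0.002
losses = []
for _ in range(300):
    pred = np.einsum('abc,ia,jb,kc->ijk', Gh, Ah, Bh, Ch)
    E = (pred - T) * mask                       # error only where observed
    losses.append(0.5 * np.sum(E ** 2))
    gA = np.einsum('ijk,abc,jb,kc->ia', E, Gh, Bh, Ch)
    gB = np.einsum('ijk,abc,ia,kc->jb', E, Gh, Ah, Ch)
    gC = np.einsum('ijk,abc,ia,jb->kc', E, Gh, Ah, Bh)
    gG = np.einsum('ijk,ia,jb,kc->abc', E, Ah, Bh, Ch)
    Ah -= lr * gA
    Bh -= lr * gB
    Ch -= lr * gC
    Gh -= lr * gG
```

A neural variant such as NeuTucF would replace this fixed multilinear interaction with learned network layers, but the observed-entries-only training loop is the same basic pattern for HDI tensors.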
Abstract: Two-dimensional (2D) nanomaterials, including graphene, titanium carbide (MXenes), montmorillonite, and boron nitride, have excellent mechanical, electrical, and thermal properties and good biocompatibility, which show promising applications in aerospace, flexible electronics, and biomedicine [1,2]. It remains a great challenge to scalably assemble 2D nanomaterials into high-performance macroforms for realizing these commercial applications. Natural nacre is a typical 2D nanocomposite composed of 95 vol% aragonite flakes and 5 vol% biopolymer, and it possesses unique mechanical properties owing to its ordered layered structure and sophisticated interface interactions [3]. Inspired by the relationship between the microstructure and macro-properties of nacre, various assembly strategies have been developed to fabricate high-performance 2D nanocomposites by improving interlayer connectivity and alignment...