The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that the data difference between sparse and noisy dimensionalities occupies a large proportion of the similarity, so that any two results appear almost equally dissimilar. A similarity measurement method for high-dimensional data based on a normalized net-lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only the components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used, and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two or three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which makes it suitable for similarity analysis after dimensionality reduction.
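As a concrete illustration, the sketch below implements an interval-based similarity in the spirit of the net-lattice idea: each dimension's range is split into k intervals, and only dimensions whose two components fall in the same or adjacent intervals contribute. The parameters (k, the per-dimension averaging, the linear closeness term) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def lattice_similarity(x, y, mins, maxs, k=10):
    """Illustrative interval-based similarity (hypothetical parameters:
    k intervals per dimension; mins/maxs give each dimension's range)."""
    span = np.where(maxs > mins, maxs - mins, 1.0)
    # Map each component onto its interval index in [0, k-1].
    ix = np.clip(((x - mins) / span * k).astype(int), 0, k - 1)
    iy = np.clip(((y - mins) / span * k).astype(int), 0, k - 1)
    # Only components in the same or an adjacent interval contribute;
    # sparse/noisy dimensions are thereby ignored.
    mask = np.abs(ix - iy) <= 1
    if not mask.any():
        return 0.0
    # Per-dimension closeness in [0, 1], averaged over all dimensions
    # so the similarity stays in [0, 1] at any dimensionality.
    closeness = np.clip(1.0 - np.abs((x - y) / span)[mask], 0.0, 1.0)
    return float(closeness.sum() / x.size)
```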
Building detection in very high resolution (VHR) images is crucial for mapping and analysing urban environments. Since buildings are elevated objects, elevation data need to be integrated with images for reliable detection. This process requires two critical steps: optical-elevation data co-registration and aboveground elevation calculation. These two steps are still challenging to some extent. Therefore, this paper introduces optical-elevation data co-registration and normalization techniques for generating a dataset that facilitates elevation-based building detection. To achieve accurate co-registration, a dense set of stereo-based elevations is generated and co-registered to the relevant image based on the corresponding image locations. To normalize these co-registered elevations, the bare-earth elevations are detected based on classification information of some terrain-level features after the image co-registration. The developed method was executed and validated. After implementation, an overall detection quality of 80% was achieved, with 94% correct detection. Together, the developed techniques successfully facilitate the incorporation of stereo-based elevations for detecting buildings in VHR remote sensing images.
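The normalization step can be pictured as subtracting an interpolated bare-earth surface from the co-registered elevations. The sketch below is a minimal illustration under the assumption that terrain-level cells have already been flagged by classification; it is not the paper's exact procedure.

```python
import numpy as np
from scipy.interpolate import griddata

def above_ground(dsm, ground_mask):
    """Interpolate a bare-earth surface from terrain-level cells and
    subtract it, yielding aboveground elevations (an nDSM) in which
    elevated objects such as buildings stand out as positive values."""
    rows, cols = np.indices(dsm.shape)
    pts = np.column_stack([rows[ground_mask], cols[ground_mask]])
    dtm = griddata(pts, dsm[ground_mask], (rows, cols), method="nearest")
    return dsm - dtm
```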
The analysis of spatially correlated binary data observed on lattices is an interesting topic that catches the attention of many scholars in different scientific fields such as epidemiology, medicine, agriculture, biology, geology and geography. To overcome the difficulties encountered when fitting the autologistic regression model to analyze such data via Bayesian and/or Markov chain Monte Carlo (MCMC) techniques, the Gaussian latent variable model has been enrolled in the methodology. However, assuming a normal distribution for the latent random variable may not be realistic: wrong normality assumptions might bias the parameter estimates and affect the accuracy of results and inferences. This calls for more flexible prior distributions for the latent variable in spatial models. A review of the recent literature in spatial statistics shows an increasing tendency toward models involving skew distributions, especially skew-normal ones. In this study, a skew-normal latent variable model was developed for the Bayesian analysis of spatially correlated binary data acquired on uncorrelated lattices. The proposed methodology was applied to inspect the spatial dependency and related factors of tooth caries occurrences in a sample of students of Yasuj University of Medical Sciences, Yasuj, Iran. The results indicated that the skew-normal latent variable model was valid and provided a criterion that fitted the caries data well.
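The skew-normal latent variable can be simulated via its standard stochastic representation, Z = δ|U₀| + √(1−δ²)·U₁ with δ = α/√(1+α²), where U₀ and U₁ are independent standard normals and α is the shape parameter. A minimal sketch:

```python
import numpy as np

def skew_normal(alpha, size, seed=0):
    """Draws from a standard skew-normal with shape parameter alpha,
    using Z = delta*|U0| + sqrt(1 - delta^2)*U1."""
    rng = np.random.default_rng(seed)
    delta = alpha / np.sqrt(1.0 + alpha**2)
    u0, u1 = rng.standard_normal((2, size))
    return delta * np.abs(u0) + np.sqrt(1.0 - delta**2) * u1
```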
In order to evaluate radiometric normalization techniques, two image normalization algorithms for absolute radiometric correction of Landsat imagery were quantitatively compared in this paper: the Illumination Correction Model proposed by Markham and Irish, and the Illumination and Atmospheric Correction Model developed by the Remote Sensing and GIS Laboratory of Utah State University. Relative noise, correlation coefficient and slope value were used as the criteria for the evaluation and comparison; they were derived from pseudo-invariant features identified from multitemporal Landsat image pairs of the Xiamen (厦门) and Fuzhou (福州) areas, both located in eastern Fujian (福建) Province, China. Compared with the unnormalized images, the radiometric differences between the normalized multitemporal images were significantly reduced when the seasons of the multitemporal images differed. However, there was no significant difference between the normalized and unnormalized images under similar seasonal conditions. Furthermore, the correction results of the two algorithms are similar when the images are relatively clear with a uniform atmospheric condition. Therefore, radiometric normalization procedures should be carried out if the multitemporal images have a significant seasonal difference.
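Although the two compared models are physically based corrections, the role of the slope and correlation criteria is easiest to see in a generic pseudo-invariant-feature (PIF) regression, sketched below under assumed inputs; this is not either of the compared algorithms.

```python
import numpy as np

def normalize_band(subject, reference, pif_mask):
    """Fit gain/offset on pseudo-invariant pixels by least squares and
    apply them to the whole subject band; the fitted slope and the
    correlation on the PIFs are the kind of quantities used as
    evaluation criteria in the comparison."""
    x = subject[pif_mask].ravel()
    y = reference[pif_mask].ravel()
    gain, offset = np.polyfit(x, y, 1)
    return gain * subject + offset
```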
The localization accuracy of magnetic field-based localization approaches is predominantly degraded by two limiting factors: smartphone heterogeneity and smaller data lengths. The use of heterogeneous smartphones cripples the performance of such approaches owing to the variability of the magnetic field data. In the same vein, smaller lengths of magnetic field data decrease the localization accuracy substantially. The current study proposes the use of multiple neural networks, namely a deep neural network (DNN), a long short-term memory network (LSTM), and a gated recurrent unit network (GRN), to perform indoor localization based on the embedded magnetic sensor of the smartphone. A voting scheme is introduced that takes the predictions of the neural networks into consideration to estimate the current location of the user. Contrary to conventional magnetic field-based localization approaches that rely on the magnetic field data intensity, this study utilizes normalized magnetic field data for this purpose. Training of the neural networks is carried out using Galaxy S8 data, while testing is performed with three devices, i.e., LG G7, Galaxy S8, and LG Q6. Experiments are performed during different times of the day to analyze the impact of time variability. Results indicate that the proposed approach minimizes the impact of smartphone variability and elevates the localization accuracy. Performance comparison with three approaches reveals that the proposed approach outperforms them in mean, 50%, and 75% error, even while using a smaller amount of magnetic field data than the other approaches.
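A simple version of such a voting scheme is sketched below: each network casts a vote for its most probable location class, with summed probabilities breaking ties. The exact scheme in the study may differ.

```python
import numpy as np

def vote(prob_dnn, prob_lstm, prob_gru):
    """Each network votes for its argmax location class; ties fall back
    to the highest summed class probability."""
    probs = np.stack([prob_dnn, prob_lstm, prob_gru])  # (3, n_classes)
    votes = probs.argmax(axis=1)
    labels, counts = np.unique(votes, return_counts=True)
    if counts.max() > 1:  # at least two networks agree
        return int(labels[counts.argmax()])
    return int(probs.sum(axis=0).argmax())
```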
In the era of digital signal processing, as in graphics and computation systems, multiplication-accumulation is one of the prime operations. A MAC unit is a vital component of digital systems such as the various Fast Fourier Transform (FFT) algorithms, convolution, image processing algorithms, etc. In the domain of digital signal processing, normalization architectures are used very widely; the main objective of normalization is to perform comparison and shift operations. In this research paper, an evolutionary approach for designing an optimized normalization algorithm is proposed using basic logical blocks such as multiplexers, adders, etc. The proposed normalization algorithm is further used in designing an 8×8 bit Signed Floating-Point Multiply-Accumulate (SFMAC) architecture. Since the SFMAC can accept an 8-bit significand and a 3-bit exponent, the input to the architecture can lie between −(7.96872)_(10) and +(7.96872)_(10). The proposed architecture is designed and implemented in Cadence Virtuoso using 90 and 130 nm technologies (the Generic Process Design Kit (GPDK) and Taiwan Semiconductor Manufacturing Company (TSMC) kits, respectively). To reduce the power consumption of the proposed normalization architecture, techniques such as "block enabling" and "clock gating" are used rigorously. According to the analysis done in Cadence, the proposed architecture uses the least amount of power compared to its current predecessors.
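Behaviourally, this kind of normalization amounts to shifting the significand left until its most significant bit is set while decrementing the exponent. The sketch below models that behaviour in software; the paper's multiplexer/adder gate-level realization is not reproduced here.

```python
def normalize(sig, exp, width=8):
    """Shift an integer significand left until its MSB is set,
    decrementing the exponent accordingly (zero maps to (0, 0))."""
    if sig == 0:
        return 0, 0
    while not (sig >> (width - 1)) & 1:
        sig = (sig << 1) & ((1 << width) - 1)
        exp -= 1
    return sig, exp
```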
Hyperspectral data are an important source for monitoring soil salt content on a large scale. However, in previous studies, barriers such as interference due to the presence of vegetation restricted the precision of soil salt content mapping. This study tested a new method for predicting soil salt content with improved precision by using Chinese hyperspectral data, the Huan Jing-Hyper Spectral Imager (HJ-HSI), in the coastal area of Rudong County, Eastern China. The vegetation-covered area and the coastal bare flat area were distinguished by using the normalized differential vegetation index at 705 nm (NDVI705). The soil salt content of each area was predicted by various algorithms. A Normal Soil Salt Content Response Index (NSSRI) was constructed from continuum-removed reflectance (CR-reflectance) at wavelengths of 908.95 nm and 687.41 nm to predict the soil salt content in the coastal bare flat area (NDVI705 < 0.2). The soil adjusted salinity index (SAVI) was applied to predict the soil salt content in the vegetation-covered area (NDVI705 ≥ 0.2). The results demonstrate that 1) the new method significantly improves the accuracy of soil salt content mapping (R2 = 0.6396, RMSE = 0.3591), and 2) HJ-HSI data can be used to map soil salt content precisely and are suitable for monitoring soil salt content on a large scale.
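The abstract does not give the NSSRI formula, so the placeholder below only shows the common normalized-difference form that such two-band indices typically take; the band positions come from the abstract, but the formula itself is an assumption.

```python
def nssri_like(cr_908, cr_687):
    """Placeholder normalized-difference index over continuum-removed
    reflectance at 908.95 nm and 687.41 nm; the actual NSSRI formula
    is not stated in the abstract."""
    return (cr_908 - cr_687) / (cr_908 + cr_687)
```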
Consider the partial linear model Y = Xβ + g(T) + e, where Y is at risk of being censored from the right, g is an unknown smoothing function on [0,1], β is a one-dimensional parameter to be estimated, and e is an unobserved error. In Refs. [1, 2], it was proved that the estimator of the asymptotic variance of β̂_n is consistent. In this paper, we establish the limit distribution and the law of the iterated logarithm for β̂_n, and obtain the convergence rates for β̂_n and the strong uniform convergence rates for ĝ_n.
Consider a semiparametric regression model Y_i=X_iβ+g(t_i)+e_i, 1 ≤ i ≤ n, where Y_i is censored on the right by another random variable C_i with known or unknown distribution G. The wavelet estimators of the parametric and nonparametric parts are given by wavelet smoothing and the synthetic data method. Under general conditions, the asymptotic normality of the wavelet estimators and the convergence rates of the wavelet estimators of the nonparametric component are investigated. A numerical example is given.
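Synthetic data methods for right-censored responses typically transform the observed response so that its conditional mean is preserved, e.g. the Koul–Susarla–Van Ryzin transformation sketched below (the specific transformation used in the paper is not stated in the abstract):

```python
import numpy as np

def synthetic_response(y_obs, delta, G):
    """Koul-Susarla-Van Ryzin synthetic data: y_obs = min(Y, C),
    delta = 1 if uncensored, G = censoring distribution function.
    E[delta*Y/(1-G(Y)) | X] equals E[Y | X], so standard estimators
    can be applied to the transformed responses."""
    surv = np.clip(1.0 - G(y_obs), 1e-12, None)
    return delta * y_obs / surv
```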
Based on 16-day composite MODIS (moderate resolution imaging spectroradiometer) NDVI (normalized difference vegetation index) time-series data from 2004, vegetation in the North Tibet Plateau was classified, and seasonal variations of pixels selected from different vegetation types were analyzed. The Savitzky-Golay filtering algorithm was applied to filter the MODIS-NDVI time-series data, and the processed time-series curves reflect the real trend of vegetation growth. The NDVI time-series curves of coniferous forest, high-cold meadow, high-cold meadow steppe and high-cold steppe all show a single peak during vegetation growth, with the maximum occurring in August. A decision-tree classification model was established according to either NDVI time-series data or land surface temperature data, and vegetation was then classified and processed through the model based on the NDVI time-series curves. An accuracy test illustrates that the classification results are of high accuracy and credibility, and the model is useful for studying climate variation and estimating vegetation production at regional and even global scales.
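A minimal sketch of the smoothing step using SciPy's Savitzky–Golay filter; the window length and polynomial order here are illustrative choices, not the study's settings:

```python
import numpy as np
from scipy.signal import savgol_filter

ndvi = np.random.rand(23)  # placeholder: 23 samples per year from 16-day composites
smoothed = savgol_filter(ndvi, window_length=7, polyorder=3)
```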
In the frame of landslide susceptibility assessment, a spectral library was created to support the identification of materials confined to a particular region using remote sensing images. This library, called the Pakistan spectral library (pklib) version 0.1, contains the analysis data of sixty rock samples taken in the Balakot region in Northern Pakistan. The spectral library is implemented as an SQLite database; its structure and naming are inspired by the convention system of the ASTER Spectral Library. The usability, application and benefit of the pklib were evaluated and depicted using two approaches, the multivariate and the spectral-based. The spectral information was used to create indices, which were applied to Landsat and ASTER data to support the spatial delineation of outcropping rock sequences in stratigraphic formations. The application of the indices introduced in this paper helps to identify spots where specific lithological characteristics occur. Especially in areas with sparse or missing detailed geological mapping, spectral discrimination via remote sensing data can speed up the survey. The library can be used not only to support the improvement of factor maps for landslide susceptibility analysis, but also to provide a geoscientific basis for further analysis of lithological spots in numerous regions of the Hindu Kush.
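A query against such an SQLite-based library might look like the following; the file, table, column and sample names are hypothetical, since the abstract only states that pklib follows the ASTER Spectral Library conventions:

```python
import sqlite3

con = sqlite3.connect("pklib_v0.1.sqlite")           # hypothetical file name
rows = con.execute(
    "SELECT wavelength, reflectance FROM spectra "   # hypothetical schema
    "WHERE sample_id = ?",
    ("balakot_rock_01",),                            # hypothetical sample id
).fetchall()
```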
Leaf normal distribution is an important structural characteristic of the forest canopy. Although terrestrial laser scanners (TLS) have potential for estimating canopy structural parameters, distinguishing between leaves and nonphotosynthetic structures to retrieve the leaf normal has been challenging. We used an approach to accurately retrieve the leaf normals of camphorwood (Cinnamomum camphora) from TLS point cloud data. First, nonphotosynthetic structures were filtered out using a curvature threshold at each point. Then, the point cloud data were segmented by a voxel method and clustered by a Gaussian mixture model in each voxel. Finally, the normal vector of each cluster was computed by principal component analysis to obtain the leaf normal distribution. We collected leaf inclination angles and estimated their distribution, which we compared with the retrieved leaf normal distribution. The correlation coefficient between measurements and retrieved results was 0.96, indicating good agreement.
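The final PCA step reduces, for each cluster, to taking the least-variance direction of the clustered points as the leaf normal, as sketched below:

```python
import numpy as np

def cluster_normal(points):
    """Leaf normal of one cluster: the least-variance direction of the
    centered points, i.e. the last right-singular vector of the SVD."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]
```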
To address the low model accuracy and the tendency to overfit during training when traditional neural networks perform image classification on the CIFAR-10 (Canadian Institute For Advanced Research) dataset, a new network construction method combining convolutional neural networks with batch normalization is proposed. The method first applies data augmentation and boundary padding to the dataset, and then improves the typical CNN (Convolutional Neural Networks) structure by removing the pooling layers from the convolutional layer groups, keeping only the convolutional and BN (Batch Normalization) layers, and moderately increasing the number of convolutional layer groups. To verify the validity and accuracy of the model, six different network structures were designed and trained. The experimental results show that, under the same number of training epochs, the recommended model-6 performs best, with a test accuracy of 90.17%, breaking the long-standing bottleneck of classic CNNs struggling to reach 90% accuracy on CIFAR-10 and providing a new solution and model reference for image classification.
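A minimal sketch of the modified convolutional group described above, with pooling removed and BN kept after each convolution; the channel widths, activation and stride-2 downsampling are illustrative assumptions, not the paper's exact configuration:

```python
import torch.nn as nn

def conv_bn_group(in_ch, out_ch):
    """A pooling-free convolutional group: Conv + BN (+ activation),
    with stride-2 downsampling standing in for the removed pooling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, stride=2),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )
```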
A kernel density estimator is proposed when the data are subject to censoring in the multivariate case. The asymptotic normality, strong convergence, and the asymptotically optimal bandwidth minimizing the mean square error of the estimator are studied.
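For orientation, the uncensored baseline estimator in one dimension is sketched below; the censored multivariate estimator of the paper would additionally reweight observations (e.g. via the censoring survival function), which is omitted here:

```python
import numpy as np

def gaussian_kde_1d(grid, data, h):
    """Uncensored Gaussian kernel density estimate on a grid with
    bandwidth h."""
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2.0 * np.pi))
```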