Radiometric normalization, as an essential step for multi-source and multi-temporal data processing, has received critical attention. The Relative Radiometric Normalization (RRN) method has been primarily used for eliminating radiometric inconsistency. The radiometric transforming relation between the subject image and the reference image is an essential aspect of RRN. Aiming at accurate modelling of the radiometric transforming relation, a learning-based nonlinear regression method, Support Vector Regression (SVR), is used to fit the complicated radiometric transforming relation for coarse-resolution-data-referenced RRN. To evaluate the effectiveness of the proposed method, a series of experiments is performed, including two synthetic-data experiments and one real-data experiment, and the proposed method is compared with methods that use linear regression, an Artificial Neural Network (ANN), or Random Forest (RF) for modelling the radiometric transforming relation. The results show that the proposed method fits the radiometric transforming relation well and can enhance RRN performance.
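A minimal sketch of the RRN workflow, using the linear-regression baseline that the abstract says the SVR method is compared against (the workflow is the same for SVR; only the fitted transform differs). All pixel values below are illustrative, not from the paper.

```python
# Relative radiometric normalization sketch: fit a transform on
# pseudo-invariant pixel pairs, then apply it to the subject image.
# Here the transform is the linear-regression baseline the paper
# compares against; the paper's own method fits an SVR instead.

def fit_linear_transform(subject, reference):
    """Least-squares fit of reference = a * subject + b."""
    n = len(subject)
    mean_x = sum(subject) / n
    mean_y = sum(reference) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(subject, reference))
    var = sum((x - mean_x) ** 2 for x in subject)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Pseudo-invariant pixel pairs: (subject DN, reference DN)
subject   = [10.0, 20.0, 30.0, 40.0]
reference = [25.0, 45.0, 65.0, 85.0]   # exactly reference = 2 * subject + 5

a, b = fit_linear_transform(subject, reference)
normalized = [a * x + b for x in subject]
print(a, b)            # -> 2.0 5.0
print(normalized[0])   # -> 25.0
```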
Building detection in very high resolution (VHR) images is crucial for mapping and analysing urban environments. Since buildings are elevated objects, elevation data need to be integrated with images for reliable detection. This process requires two critical steps: optical-elevation data co-registration and aboveground elevation calculation, both of which remain challenging to some extent. This paper therefore introduces optical-elevation data co-registration and normalization techniques for generating a dataset that facilitates elevation-based building detection. For accurate co-registration, a dense set of stereo-based elevations is generated and co-registered to the relevant image based on the corresponding image locations. To normalize these co-registered elevations, bare-earth elevations are detected from classification information of terrain-level features after image co-registration. The developed method was implemented and validated, achieving an overall detection quality of 80% with 94% correct detection. Together, the developed techniques successfully facilitate the incorporation of stereo-based elevations for detecting buildings in VHR remote sensing images.
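A hedged sketch of the elevation-normalization step described above: once bare-earth elevations are detected, subtracting them from the co-registered surface elevations yields aboveground heights, from which elevated objects such as buildings can be thresholded. The values and the 2.5 m threshold are illustrative assumptions, not figures from the paper.

```python
# Aboveground elevation = co-registered surface elevation - bare earth;
# buildings are candidate pixels whose height exceeds a threshold.

def aboveground(surface, bare_earth):
    return [s - b for s, b in zip(surface, bare_earth)]

def building_mask(heights, min_height=2.5):
    return [h >= min_height for h in heights]

surface    = [101.0, 108.5, 100.2, 112.0]   # co-registered elevations (m)
bare_earth = [100.0, 100.0, 100.0, 100.0]   # detected terrain level (m)

heights = aboveground(surface, bare_earth)
print(building_mask(heights))  # -> [False, True, False, True]
```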
To evaluate radiometric normalization techniques, two image normalization algorithms for absolute radiometric correction of Landsat imagery were quantitatively compared in this paper: the Illumination Correction Model proposed by Markham and Irish, and the Illumination and Atmospheric Correction Model developed by the Remote Sensing and GIS Laboratory of Utah State University. Relative noise, correlation coefficient, and slope value were used as the evaluation and comparison criteria, derived from pseudo-invariant features identified in multitemporal Landsat image pairs of the Xiamen (厦门) and Fuzhou (福州) areas, both located in eastern Fujian (福建) Province, China. Compared with the unnormalized images, the radiometric differences between the normalized multitemporal images were significantly reduced when the seasons of the multitemporal images differed. However, there was no significant difference between the normalized and unnormalized images under similar seasonal conditions. Furthermore, the correction results of the two algorithms are similar when the images are relatively clear with a uniform atmospheric condition. Therefore, radiometric normalization should be carried out when the multitemporal images have a significant seasonal difference.
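A sketch of two of the evaluation criteria named above, computed over pseudo-invariant feature (PIF) pixels of a multitemporal image pair: the correlation coefficient and the regression slope. The pixel values are made up for illustration; values near 1 for both criteria indicate well-matched radiometry.

```python
# Pearson correlation and least-squares slope between PIF pixel values
# of two acquisition dates, two of the criteria used in the comparison.

def correlation_and_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    r = cov / (vx * vy) ** 0.5
    slope = cov / vx
    return r, slope

date1 = [50.0, 60.0, 70.0, 80.0]
date2 = [52.0, 62.0, 72.0, 82.0]   # near-identical radiometry after correction

r, slope = correlation_and_slope(date1, date2)
print(round(r, 3), round(slope, 3))  # -> 1.0 1.0
```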
Edge detection and enhancement techniques are commonly used to recognize the edges of geologic bodies from potential-field data. We present a new edge recognition technique based on the normalized vertical derivative of the total horizontal derivative, which performs both edge detection and edge enhancement. First, we calculate the total horizontal derivative (THDR) of the potential-field data and then compute the n-order vertical derivative (VDRn) of the THDR. From the n-order vertical derivative, the peak value of the total horizontal derivative (PTHDR) is obtained using a threshold greater than 0; this PTHDR can be used for edge detection. Second, the PTHDR value is divided by the total horizontal derivative and normalized by the maximum value. Finally, we use different kinds of numerical models to verify the effectiveness and reliability of the new edge recognition technique.
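A minimal sketch of the first step described above: the total horizontal derivative (THDR) of gridded potential-field data, computed here with central finite differences. The subsequent vertical-derivative, thresholding, and normalization steps are omitted, and the 3×3 field values are illustrative only.

```python
# THDR(x, y) = sqrt((df/dx)^2 + (df/dy)^2), estimated on a regular grid
# with central differences; boundary cells are left at zero for brevity.

def thdr(grid, dx=1.0, dy=1.0):
    ny, nx = len(grid), len(grid[0])
    out = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            gx = (grid[j][i + 1] - grid[j][i - 1]) / (2 * dx)
            gy = (grid[j + 1][i] - grid[j - 1][i]) / (2 * dy)
            out[j][i] = (gx * gx + gy * gy) ** 0.5
    return out

field = [
    [0.0, 0.0, 0.0],
    [0.0, 4.0, 8.0],
    [0.0, 8.0, 16.0],
]
print(thdr(field)[1][1])  # gradient magnitude at the grid centre, sqrt(32)
```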
The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data, because the data differences in sparse and noisy dimensions occupy a large proportion of the similarity, making almost any two results appear dissimilar. A similarity measurement method for high-dimensional data based on a normalized net-lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals; only components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with dimensionality and is approximately two or three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method is [0, 1] in all dimensions, which makes it suitable for similarity analysis after dimensionality reduction.
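A hedged sketch of the net-lattice-subspace idea: each dimension's range is split into equal intervals, components are mapped to interval indices, and only components falling in the same or an adjacent interval contribute to the similarity. The interval count and the scoring scheme (1 for the same interval, 0.5 for adjacent, 0 otherwise) are illustrative choices, not the paper's exact formulation.

```python
# Map each component to an interval index, then score only same- or
# adjacent-interval components; the result is normalized into [0, 1].

def to_intervals(v, lo, hi, k):
    width = (hi - lo) / k
    return [min(int((x - lo) / width), k - 1) for x in v]

def lattice_similarity(a, b, lo=0.0, hi=1.0, k=4):
    ia, ib = to_intervals(a, lo, hi, k), to_intervals(b, lo, hi, k)
    score = 0.0
    for p, q in zip(ia, ib):
        if p == q:
            score += 1.0          # same interval: full contribution
        elif abs(p - q) == 1:
            score += 0.5          # adjacent interval: partial contribution
    return score / len(a)         # noisy, distant dimensions contribute 0

a = [0.10, 0.40, 0.90]
b = [0.15, 0.60, 0.20]
print(lattice_similarity(a, b))   # -> 0.5
```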
In the era of digital signal processing, as in graphics and computation systems, multiply-accumulate is one of the prime operations. A MAC unit is a vital component of digital systems such as Fast Fourier Transform (FFT) algorithms, convolution, and image processing algorithms. Normalization architectures are used widely in digital signal processing; the main purpose of normalization is to perform comparison and shift operations. In this paper, an evolutionary approach to designing an optimized normalization algorithm is proposed using basic logical blocks such as multiplexers and adders. The proposed normalization algorithm is then used in designing an 8×8-bit Signed Floating-Point Multiply-Accumulate (SFMAC) architecture. Since the SFMAC accepts an 8-bit significand and a 3-bit exponent, its input can range from −(7.96872)₁₀ to +(7.96872)₁₀. The proposed architecture is designed and implemented in Cadence Virtuoso using 90 nm and 130 nm technologies (Generic Process Design Kit (GPDK) and Taiwan Semiconductor Manufacturing Company (TSMC) processes, respectively). To reduce the power consumption of the proposed normalization architecture, techniques such as block enabling and clock gating are used rigorously. According to the analysis done in Cadence, the proposed architecture uses the least power compared with its predecessors.
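A behavioural sketch (a software model, not the paper's hardware design) of what a floating-point normalization block does with the compare-and-shift operations the abstract mentions: shift an 8-bit significand left until its most significant bit is 1, decrementing the exponent once per shift. The bit width and sample values are illustrative.

```python
# Normalize an 8-bit significand: repeat "compare MSB, shift left,
# decrement exponent" until the MSB is set.

def normalize(significand, exponent, width=8):
    assert 0 < significand < (1 << width)
    while not (significand >> (width - 1)) & 1:   # MSB not yet set
        significand <<= 1
        exponent -= 1
    return significand, exponent

sig, exp = normalize(0b00010110, 3)
print(bin(sig), exp)  # -> 0b10110000 0  (three shifts, exponent 3 -> 0)
```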
The analysis of spatially correlated binary data observed on lattices is an interesting topic that attracts scholars from many scientific fields, including epidemiology, medicine, agriculture, biology, geology, and geography. To overcome the difficulties encountered when fitting the autologistic regression model to such data via Bayesian and/or Markov chain Monte Carlo (MCMC) techniques, the Gaussian latent variable model has been incorporated into the methodology. However, assuming a normal distribution for the latent random variable may not be realistic; a wrong normality assumption can bias parameter estimates and affect the accuracy of results and inferences. This calls for more flexible prior distributions for the latent variable in spatial models. A review of the recent spatial statistics literature shows an increasing tendency toward models involving skew distributions, especially skew-normal ones. In this study, a skew-normal latent variable model was developed for Bayesian analysis of spatially correlated binary data acquired on uncorrelated lattices. The proposed methodology was applied to inspect the spatial dependency and related factors of dental caries occurrence in a sample of students of Yasuj University of Medical Sciences, Yasuj, Iran. The results indicated that the skew-normal latent variable model was valid and provided a decent fit to the caries data.
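A sketch of the skew-normal density underlying the latent variable model: f(x; α) = 2·φ(x)·Φ(αx), where φ and Φ are the standard normal pdf and cdf. With α = 0 it reduces to the ordinary normal density, the restrictive special case the study moves away from. This is the standard Azzalini form, shown for illustration only.

```python
# Standard skew-normal density built from the normal pdf/cdf; alpha
# controls skewness, and alpha = 0 recovers the symmetric normal case.
import math

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def skew_normal_pdf(x, alpha):
    return 2 * norm_pdf(x) * norm_cdf(alpha * x)

print(round(skew_normal_pdf(0.0, 0.0), 6))   # -> 0.398942 (normal pdf at 0)
print(skew_normal_pdf(1.0, 3.0) > skew_normal_pdf(-1.0, 3.0))  # -> True: right-skewed
```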
To address the low accuracy and the tendency to overfit of traditional neural networks for image classification on the CIFAR-10 (Canadian Institute For Advanced Research) dataset, a new network construction method combining convolutional neural networks with batch normalization is proposed. The method first applies data augmentation and border padding to the dataset, and then improves the typical CNN (Convolutional Neural Networks) structure by removing the pooling layers from the convolutional blocks, retaining only the convolutional and BN (Batch Normalization) layers, and moderately increasing the number of convolutional blocks. To verify the validity and accuracy of the model, six different network structures were designed and trained. The experimental results show that, for the same number of training epochs, the recommended model-6 performs best, with a test accuracy of 90.17%, breaking the long-standing bottleneck of classic CNNs struggling to reach 90% accuracy on CIFAR-10 and providing a new solution and model reference for image classification.
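A hedged sketch of the batch-normalization (BN) step the improved network inserts after each convolution: activations are standardized over the batch, then scaled and shifted by learnable parameters γ and β. The toy numbers below illustrate the arithmetic only; a real BN layer operates per channel over a 4-D activation tensor.

```python
# Batch-normalization forward pass for a single activation:
# y = gamma * (x - mean) / sqrt(var + eps) + beta, with mean and var
# computed over the batch.

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [gamma * (x - mean) / (var + eps) ** 0.5 + beta for x in batch]

out = batch_norm([1.0, 2.0, 3.0, 4.0])
print(round(out[0], 3), round(out[3], 3))  # -> -1.342 1.342, zero mean overall
```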
A single geophysical exploration method inevitably suffers from ambiguity (non-uniqueness) in interpretation, especially in areas with complex geological conditions. A simple combined approach is to interpret the data from different methods on the same survey line separately and then cross-validate the interpretation results through comprehensive analysis. Although this considers the data characteristics of each method, it fails to mine the deeper features at the data level, and the result is multiple data sections that are not intuitive to display. This paper therefore proposes a multi-method engineering geophysical data fusion method based on Sparse Auto Encoders (SAE). SAE is a deep network algorithm that, through continuous learning, automatically mines the deep features hidden in the data. The fused data combine the physical-property features contained in multiple kinds of geophysical data, fully exploit the geological information in the data, effectively reduce interpretation ambiguity, can be displayed more intuitively, and reflect the characteristics of geological anomaly bodies more comprehensively.
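A conceptual sketch of the fusion step: measurements from several survey methods at the same station are concatenated into one feature vector and passed through an encoder layer, whose code vector serves as the fused representation. The weights here are arbitrary untrained stand-ins and the method names are illustrative; a real SAE would learn its weights with a sparsity penalty during training.

```python
# One dense encoder layer: code_j = sigmoid(sum_i W[j][i] * x[i] + b[j]).
# The input concatenates per-method measurements at one station.
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def encode(features, weights, biases):
    return [sigmoid(sum(w * x for w, x in zip(row, features)) + b)
            for row, b in zip(weights, biases)]

resistivity = [0.2, 0.5]           # illustrative per-method measurements
seismic     = [0.8, 0.1]
features = resistivity + seismic   # concatenated multi-method input

W = [[0.5, -0.3, 0.2, 0.1],        # untrained stand-in weights
     [-0.2, 0.4, 0.1, -0.5]]
b = [0.0, 0.1]
code = encode(features, W, b)
print(len(code))  # -> 2: fused low-dimensional representation
```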
Funding (SVR-based relative radiometric normalization study): This research was funded by the National Natural Science Fund of China [grant number 41701415], the Science Fund Project of the Wuhan Institute of Technology [grant number K201724], and the Science and Technology Development Funds Project of the Department of Transportation of Hubei Province [grant number 201900001].
Funding (radiometric normalization evaluation study): This paper is supported by the National Natural Science Foundation of China (No. 40371107).
Funding (edge recognition study): Supported by the National Science and Technology Major Projects (2008ZX05025) and the Project of National Oil and Gas Resources Strategic Constituency Survey and Evaluation of the Ministry of Land and Resources, China (XQ-2007-05).
Funding (high-dimensional similarity measurement study): Supported by the National Natural Science Foundation of China (No. 61502475) and the Importation and Development of High-Caliber Talents Project of the Beijing Municipal Institutions (No. CIT&TCD201504039).
Funding (SFMAC architecture study): This work was supported by the Research Support Fund (RSF) of Symbiosis International (Deemed University), Pune, India.