In wireless communication, the problem of authenticating the transmitter's identity is challenging, especially for terminal devices in which cryptography-based security schemes are largely infeasible owing to limited resources. In this paper, a physical layer authentication scheme is proposed to detect anomalous access by attackers disguised as legitimate users. Explicitly, channel state information (CSI) is used as a fingerprint to exploit spatial discrimination among devices in the wireless network, and machine learning (ML) technology is employed to improve authentication accuracy. Considering that falsified messages are not accessible to the authenticator during the training phase, deep support vector data description (Deep SVDD) is selected to solve the one-class classification (OCC) problem. Simulation results show that the Deep SVDD-based scheme can tackle the challenges of physical layer authentication in wireless communication environments.
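The one-class decision rule described above can be sketched numerically. The block below is a minimal stand-in, not the paper's trained network: the synthetic CSI fingerprints, the fixed `embed` map, and the 95% acceptance threshold are all illustrative assumptions; a real Deep SVDD model learns the embedding weights so that legitimate samples cluster tightly around the center.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CSI fingerprints: training data comes from the legitimate
# transmitter only (the one-class setting described in the abstract).
train_csi = rng.normal(0.0, 1.0, size=(200, 16))

# Stand-in for the learned embedding phi(x). Real Deep SVDD trains these
# weights so that legitimate samples cluster tightly around the center c.
W = rng.normal(0.0, 0.3, size=(16, 8))

def embed(x):
    return np.tanh(x @ W)

center = embed(train_csi).mean(axis=0)            # hypersphere center c

def anomaly_score(x):
    """Squared distance to c in embedding space; large => likely spoofed."""
    return np.sum((embed(x) - center) ** 2, axis=-1)

# Accept a probe whose score falls below a quantile of the training scores.
threshold = np.quantile(anomaly_score(train_csi), 0.95)

probe = rng.normal(0.0, 1.0, size=(1, 16))        # a new CSI observation
accept = bool(anomaly_score(probe)[0] <= threshold)
```

The quantile threshold plays the role of the decision boundary that, in the paper, comes from the trained hypersphere radius.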
There are multiple operating modes in a real industrial process, and the collected data follow a complex multimodal distribution, so most traditional process monitoring methods are no longer applicable because they presume that the sampled data obey a single Gaussian or non-Gaussian distribution. To solve these problems, a novel weighted local standardization (WLS) strategy is proposed to standardize the multimodal data; it eliminates the multi-mode characteristics of the collected data and normalizes them into a unimodal distribution. After detailed analysis of the proposed preprocessing strategy, a new algorithm combining the WLS strategy with support vector data description (SVDD) is put forward for multi-mode process monitoring. Unlike strategies that build multiple local models, the developed method uses only a single model and requires no prior knowledge of the multi-mode process. To demonstrate the proposed method's validity, it is applied to a numerical example and the Tennessee Eastman (TE) process. The simulation results show that the WLS strategy is very effective at standardizing multimodal data, and that the WLS-SVDD monitoring method has clear advantages over traditional SVDD and over PCA combined with a local standardization strategy (LNS-PCA) in multi-mode process monitoring.
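A local standardization step of this kind can be sketched as follows. This is one plausible reading of WLS, assuming k-nearest-neighbour statistics with Gaussian distance weighting; the paper's exact weighting scheme may differ, and the two-mode data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two operating modes with different means and scales (illustrative data).
mode_a = rng.normal(0.0, 1.0, size=(150, 3))
mode_b = rng.normal(8.0, 2.0, size=(150, 3))
x = np.vstack([mode_a, mode_b])

def weighted_local_standardize(x, k=20):
    """Standardize each sample with the statistics of its k nearest
    neighbours, weighted by distance, so each mode is mapped toward a
    common unimodal distribution around zero."""
    out = np.empty_like(x)
    for i, xi in enumerate(x):
        d = np.linalg.norm(x - xi, axis=1)
        idx = np.argsort(d)[:k]                     # local neighbourhood
        w = np.exp(-d[idx] ** 2 / (d[idx].max() ** 2 + 1e-12))
        w /= w.sum()
        mu = (w[:, None] * x[idx]).sum(axis=0)      # weighted local mean
        var = (w[:, None] * (x[idx] - mu) ** 2).sum(axis=0)
        out[i] = (xi - mu) / np.sqrt(var + 1e-12)
    return out

z = weighted_local_standardize(x)
```

After this preprocessing, the bimodal gap between the two modes collapses, which is what lets a single SVDD model be trained on the standardized data.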
High compression ratio, high decoding performance, and progressive data transmission are the most important requirements of vector data compression algorithms for WebGIS. To meet these requirements, we present a new compression approach. This paper begins with the generation of multiscale data by converting float coordinates to integer coordinates. It is proved that the distance between a converted point and the original point on screen is within 2 pixels; therefore, our approach is suitable for the visualization of vector data on the client side. The integer coordinates are passed to an integer wavelet transformer, and the high-frequency coefficients produced by the transformer are encoded with canonical Huffman codes. Experimental results on river and road data demonstrate the effectiveness of the proposed approach: the compression ratio can reach 10% for river data and 20% for road data. We conclude that more attention should be paid to the correlation between curves that contain only a few points.
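The integer wavelet stage can be illustrated with the simplest reversible transform, the integer Haar via lifting. The abstract does not name the specific wavelet used, so Haar here is an assumption, and the polyline coordinates are hypothetical; what the sketch does show is the losslessness that makes integer transforms suitable for exact coordinate reconstruction.

```python
import numpy as np

def int_haar_forward(a):
    """One level of the reversible integer Haar transform (lifting scheme):
    d = odd - even, s = even + floor(d / 2). Exactly invertible in integers."""
    a = np.asarray(a, dtype=np.int64)
    even, odd = a[0::2], a[1::2]
    d = odd - even                     # high-frequency (detail) coefficients
    s = even + (d >> 1)                # low-frequency (approximation)
    return s, d

def int_haar_inverse(s, d):
    even = s - (d >> 1)
    odd = even + d
    out = np.empty(even.size + odd.size, dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out

# Hypothetical quantized x-coordinates of a polyline (integers, as in the paper).
coords = np.array([100, 102, 103, 103, 107, 110, 111, 111])
s, d = int_haar_forward(coords)
restored = int_haar_inverse(s, d)
```

The small detail coefficients `d` are what the canonical Huffman coder then compresses; smooth curves yield many near-zero details.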
Parallel vector buffer analysis approaches can be classified into two types: algorithm-oriented and data-oriented parallel strategies. These methods do not consider their applicability on existing geographic information system (GIS) platforms. To address this problem, a spatial decomposition approach for accelerating buffer analysis of vector data is proposed. The relationship between the number of vertices of each feature and the buffer analysis computing time is analyzed to generate computational intensity transformation functions (CITFs). Then, computational intensity grids (CIGs) for polylines and polygons are constructed from the respective CITFs. Using these CIGs, a spatial decomposition method for parallel buffer analysis is developed. Based on the computational intensity of the features and the sub-domains generated in the decomposition, the features within the sub-domains are evenly assigned to parallel buffer analysis tasks for load balance. Compared with typical regular domain decomposition methods, the new approach achieves a more balanced decomposition of computational intensity for parallel buffer analysis and attains near-linear speedups.
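The load-balancing idea can be sketched with a greedy longest-processing-time heuristic. This is not the paper's grid-based decomposition: the feature list is hypothetical, and vertex count is used directly as a stand-in for the computational intensity that the CITFs would estimate.

```python
import heapq

# Hypothetical features: (feature_id, vertex_count). The paper maps vertex
# counts to computational intensity via fitted CITFs; here vertex count
# itself serves as the intensity.
features = [("f0", 1200), ("f1", 300), ("f2", 900), ("f3", 900),
            ("f4", 150), ("f5", 600), ("f6", 450), ("f7", 600)]

def balance(features, n_workers):
    """Greedy longest-processing-time assignment: give the next heaviest
    feature to the currently lightest worker."""
    heap = [(0, w, []) for w in range(n_workers)]
    heapq.heapify(heap)
    for fid, cost in sorted(features, key=lambda f: -f[1]):
        load, w, assigned = heapq.heappop(heap)
        assigned.append(fid)
        heapq.heappush(heap, (load + cost, w, assigned))
    return sorted(heap)

tasks = balance(features, 3)
loads = [t[0] for t in tasks]
```

A regular domain decomposition that ignores intensity could leave one sub-domain with all the vertex-heavy features; balancing on intensity is what yields the near-linear speedups the abstract reports.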
With the rapid growth of the Internet, copyright protection problems occur frequently, and unauthorized copying and distribution of geospatial data threaten the investments of data producers. Digital watermarking is a possible solution to this issue. However, watermarking modifies the original data, causing distortion that affects accuracy, which is very important for geospatial vector data. This article provides a distortion assessment of watermarked geospatial data using wavelet-based invisible watermarking. Eight wavelets at different decomposition levels are used for accuracy evaluation with the help of error measures such as maximum error and mean square error. Normalized correlation is used as a similarity index between the original and extracted watermarks. It is observed that increasing the embedding strength increases visual degradation. The Haar wavelet outperforms the other wavelets, and the third wavelet decomposition level proves to be the optimal level for watermarking.
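The three measures named in the abstract are standard and easy to state in code. The coordinates and watermark bits below are synthetic stand-ins; the point is the definitions of maximum error, mean square error, and normalized correlation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical vertex coordinates before and after watermark embedding.
original = rng.uniform(0, 1000, size=(500, 2))
watermarked = original + rng.normal(0, 0.05, size=original.shape)

max_err = np.abs(watermarked - original).max()    # maximum coordinate error
mse = np.mean((watermarked - original) ** 2)      # mean square error

def normalized_correlation(w, w_hat):
    """Similarity index between original and extracted watermark bits."""
    w = np.asarray(w, dtype=float)
    w_hat = np.asarray(w_hat, dtype=float)
    return float(np.sum(w * w_hat) / np.sqrt(np.sum(w ** 2) * np.sum(w_hat ** 2)))

watermark = np.array([1, 0, 1, 1, 0, 1, 0, 1])
extracted = np.array([1, 0, 1, 1, 0, 1, 0, 1])    # perfect extraction
nc = normalized_correlation(watermark, extracted)
```

In the paper's experiments, these measures trade off against each other: stronger embedding raises NC robustness but also raises the coordinate error terms.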
Complex industrial processes often need multiple operating modes to meet changing production conditions, and within each mode there are discrete samples belonging to it, so it is important to consider samples that are sparse in the mode. To solve this issue, a new approach called density-based support vector data description (DBSVDD) is proposed. In this article, an algorithm combining the Gaussian mixture model (GMM) with the DBSVDD technique is proposed for process monitoring. The GMM method is used to obtain the center of each mode and to determine the number of modes. Considering the complexity of the data distribution and the discrete samples in the monitored process, DBSVDD is utilized for process monitoring. Finally, the validity and effectiveness of the DBSVDD method are illustrated through the Tennessee Eastman (TE) process.
Spatial vector data with high precision and wide coverage, such as land cover, social media, and other datasets, have exploded globally, providing a good opportunity to enhance national macroscopic decision-making, social supervision, public services, and emergency capabilities. Simultaneously, this growth brings great challenges to management technology for big spatial vector data (BSVD). In recent years, a large number of new concepts, parallel algorithms, processing tools, platforms, and applications have been proposed and developed in both academia and industry to improve the value of BSVD. To better understand BSVD and exploit its value effectively, this paper presents a review surveying recent studies and research work in data management for BSVD. We discuss and itemize this topic from three aspects corresponding to different technical levels of big spatial vector data management, aiming to help interested readers learn about the latest research advances and choose the big data technologies and approaches best suited to their system architectures. Firstly, we identify new concepts and ideas from numerous scholars of geographic information systems to delimit the scope of BSVD in the big data era. Then, we systematically summarize not only the most recent published literature but also a global view of the main spatial technologies of BSVD, including data storage and organization, spatial indexing, processing methods, and spatial analysis. Finally, based on this commentary and related work, several opportunities and challenges are listed as future research interests and directions for reference.
This study proposes a virtual globe-based vector data model named the quaternary quadrangle vector tile model (QQVTM) to better manage, visualize, and analyze massive amounts of global multi-scale vector data. The model integrates the quaternary quadrangle mesh (a discrete global grid system) with global image, terrain, and vector data. A QQVTM-based organization method is presented to organize global multi-scale vector data, including linear and polygonal vector data. In addition, tile-based reconstruction algorithms are designed to search and stitch the vector fragments scattered across tiles, so that entire vector geometries can be reconstructed and stored to support vector queries and 3D analysis of global datasets. These organized vector data are in turn visualized and queried using a geometry-based approach. Our experimental results demonstrate that the QQVTM satisfies the requirements of global vector data organization, visualization, and querying. Moreover, the QQVTM outperforms unorganized 2D vectors in rendering efficiency and the latitude-longitude-based approach in data redundancy.
Geological data are stored in vector format in geographic information systems (GIS), while other data such as remote sensing images, geographical data, and geochemical data are saved in raster format. This paper programmatically converts the vector data into 8-bit images, weighting each layer according to its importance to mineralization. With this method, the geological meaning can be conveyed through the raster images. The paper also fuses geographical and geochemical data with the programmed strata data. The results show that image fusion can express different intensities effectively and visualize structural characteristics in two dimensions. Furthermore, it can produce optimized information from multi-source data and express it more directly.
Due to the conflict between the huge volume of map data and limited network bandwidth, rapid transmission of vector map data over the Internet has become a bottleneck of spatial data delivery in web-based environments. This paper proposes an approach to organizing and transmitting multi-scale vector river network data progressively over the Internet. The approach accounts for two levels of importance, the importance of river branches and the importance of the points belonging to each branch, and forms data packages accordingly. Our experiments have shown that the proposed approach can reduce the original data by 90% while preserving the river structure well.
Multistage vector quantization (MSVQ) can achieve very low encoding and storage complexity in comparison to unstructured vector quantization. However, conventional MSVQ is suboptimal with respect to the overall performance measure. This paper proposes a new technique that designs a decoder codebook different from the encoder codebook in order to optimize overall performance. The performance improvement is achieved with no effect on encoding complexity, in either storage or time, at the cost of a modest increase in the storage complexity of the decoder.
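The two-stage structure that the paper builds on can be sketched as follows. The codebooks here are random toys, and the sketch does not implement the paper's decoder-codebook optimization; it only shows the encoder/decoder split that makes that optimization possible, since `msvq_decode` can be handed codebooks different from the ones the encoder searched.

```python
import numpy as np

rng = np.random.default_rng(3)

def nearest(codebook, x):
    """Full-search index of the codeword closest to x."""
    return int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))

# Toy two-stage MSVQ: stage 2 quantizes the residual left by stage 1.
stage1 = rng.normal(0, 1.0, size=(8, 4))    # hypothetical encoder codebooks
stage2 = rng.normal(0, 0.3, size=(8, 4))

def msvq_encode(x):
    i = nearest(stage1, x)
    j = nearest(stage2, x - stage1[i])
    return i, j

def msvq_decode(i, j, cb1=stage1, cb2=stage2):
    # The paper's idea: substitute decoder codebooks (cb1, cb2) re-optimized
    # for the joint index statistics, without touching the encoder search.
    return cb1[i] + cb2[j]

x = rng.normal(0, 1.0, size=4)
i, j = msvq_encode(x)
x_hat = msvq_decode(i, j)
```

Because decoding is just a table lookup plus an addition, swapping in re-optimized decoder tables changes neither the encoder's search time nor its storage.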
Hybrid data assimilation (DA) is seeing more use in recent hydrology and water resources research. In this study, a DA method coupling support vector machines (SVMs) and the ensemble Kalman filter (EnKF) was used to predict soil moisture in different soil layers: 0-5 cm, 30 cm, 50 cm, 100 cm, 200 cm, and 300 cm. The SVM methodology was first used to train ground measurements of soil moisture and meteorological parameters from the Meilin study area in East China to construct statistical soil moisture prediction models. Subsequent observations and their statistics were then used for prediction with two approaches: the SVM predictor alone, and the SVM-EnKF model made by coupling the SVM model with the EnKF technique. Validation results showed that the proposed SVM-EnKF model improves the prediction of soil moisture in the different layers, from the surface to the root zone.
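The EnKF half of such a coupling is a standard analysis update, sketched below with a stochastic (perturbed-observation) formulation. The three-layer state, the observation operator that sees only the top layer, and the ensemble spread are all illustrative assumptions standing in for the paper's SVM forecasts and field observations.

```python
import numpy as np

rng = np.random.default_rng(4)

def enkf_update(ensemble, obs, obs_std, H):
    """Stochastic EnKF analysis step: nudge each ensemble member toward a
    perturbed observation using the sample Kalman gain."""
    n, _ = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)             # state anomalies
    P = X.T @ X / (n - 1)                            # sample covariance
    R = np.diag(np.full(len(obs), obs_std ** 2))     # observation covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    perturbed = obs + rng.normal(0, obs_std, size=(n, len(obs)))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

# Toy state: soil moisture at 3 hypothetical depths; only the top layer
# is observed (e.g. a surface probe), as H encodes.
truth = np.array([0.30, 0.25, 0.20])
ensemble = truth + rng.normal(0, 0.05, size=(50, 3))  # forecast ensemble
H = np.array([[1.0, 0.0, 0.0]])
analysis = enkf_update(ensemble, np.array([0.32]), 0.01, H)
```

The cross-covariances in `P` are what let a surface observation correct the deeper, unobserved layers, which is the mechanism behind the root-zone improvement the abstract reports.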
Geophysical data sets are growing at an ever-increasing rate, requiring computationally efficient data selection (thinning) methods to preserve essential information. Satellites such as WindSat provide large data sets for assessing the accuracy and computational efficiency of data selection techniques. A new data thinning technique based on support vector regression (SVR) is developed and tested. To manage large online satellite data streams, observations from WindSat are formed into subsets by Voronoi tessellation, and each subset is then thinned by SVR (TSVR). Three experiments are performed. The first confirms the viability of TSVR for a relatively small sample, comparing it to several commonly used data thinning methods (random selection, averaging, and Barnes filtering) and producing a 10% thinning rate (90% data reduction), low mean absolute errors (MAE), and large correlations with the original data. A second experiment, using a larger dataset, shows TSVR retrievals with MAE < 1 m s^-1 and correlations >= 0.98; here TSVR was an order of magnitude faster than the commonly used thinning methods. A third experiment applies a two-stage pipeline to TSVR to accommodate online data. The pipeline subsets reconstruct the wind field with the same accuracy as in the second experiment and are an order of magnitude faster than the non-pipeline TSVR. Therefore, pipeline TSVR is two orders of magnitude faster than commonly used thinning methods that ingest the entire data set. This study demonstrates that TSVR pipeline thinning is an accurate and computationally efficient alternative to commonly used data selection techniques.
Support vector machines and a Kalman-like observer are used for fault detection and isolation in a variable speed horizontal-axis wind turbine composed of three blades and a full converter. The support vector approach is data-based and therefore does not rely on detailed process knowledge. It is based on structural risk minimization, which enhances generalization even with a small training data set, and it accommodates process nonlinearity through flexible kernels; in this work, a radial basis function kernel is used. Different parts of the process are investigated, including actuator and sensor faults. With duplicated sensors, sensor faults in the blade pitch positions and in the generator and rotor speeds can be detected. Stuck-measurement faults can be detected within 2 sampling periods, while the detection time for offset/scaled measurements depends on the severity of the fault and on the process dynamics when the fault occurs. The converter torque actuator fault can be detected within 2 sampling periods. Faults in the actuators of the pitch systems are harder to detect because they affect only the transitory state (which is very fast) and not the final stationary state. Therefore, two methods are considered and compared for detection and isolation of this fault, support vector machines and a Kalman-like observer, and the advantages and disadvantages of each are discussed. On one hand, support vector machine training on transitory states would require a large amount of data covering different situations, but the fault detection and isolation results are robust to variations in the input/operating point. On the other hand, the observer is model-based and therefore requires no training, and it allows identification of the fault level, which is useful for fault reconfiguration; however, observability of the system is ensured only under specific conditions related to the dynamics of the inputs and outputs. The whole fault detection and isolation scheme is evaluated on a wind turbine benchmark with a real wind speed sequence.
To improve the performance of support vector regression, a new method based on a modified kernel function is proposed. In this method, information from all samples is included in the kernel function through conformal mapping, making the kernel function data-dependent. Starting from a random initial parameter, the kernel function is modified repeatedly until a satisfactory result is achieved. Compared with the conventional model, the improved approach does not need to select the kernel function's parameters. Simulations are carried out for a one-dimensional continuous function and a case of strong earthquakes. The results show that the improved approach has better learning ability and forecasting precision than the traditional model. As the number of iterations increases, the figure of merit decreases and converges; the speed of convergence depends on the parameters used in the algorithm.
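The conformal modification of a kernel has a simple algebraic form, k~(x, y) = c(x) c(y) k(x, y), which preserves positive semidefiniteness. The sketch below uses an RBF base kernel and a sum-of-Gaussians magnification factor centred on a few anchor samples; both the anchor choice and the factor's form are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    """Base RBF kernel k(x, y)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def conformal_factor(x, anchors, tau=1.0):
    """Data-dependent magnification c(x) > 1; here a sum of Gaussians
    centred on a few anchor samples (one simple, assumed choice)."""
    return 1.0 + sum(np.exp(-np.sum((x - a) ** 2) / tau) for a in anchors)

def modified_kernel(x, y, anchors):
    # Conformal mapping of the kernel: k~(x, y) = c(x) * c(y) * k(x, y).
    return conformal_factor(x, anchors) * conformal_factor(y, anchors) * rbf(x, y)

anchors = [np.array([0.0, 0.0])]
x, y = np.array([0.1, 0.0]), np.array([0.0, 0.2])
k_plain = rbf(x, y)
k_mod = modified_kernel(x, y, anchors)
```

Iterating this construction, with the anchors or factor parameters updated from the current fit, is what lets the method refine the kernel repeatedly instead of hand-tuning kernel parameters.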
This paper studies an urban waterlog-draining decision support system based on the 4D data fusion technique. The 4D data comprise DEM, DOQ, DLG, and DRG, which together with a non-spatial fundamental database supply complete databases for waterlog forecasting and analysis. Data composition and reasoning are the two key steps of 4D data fusion. Finally, this paper presents a real case, the Ezhou Waterlog-Draining Decision Support System (EWDSS), with two application models: a DEM application model and a water generating and draining model.
Land resources face crises of misuse, especially in the intersection areas between town and country, and land control has to be enforced. This paper presents the development of a data mining method for land control. A vector-match method is proposed for data cleaning, the prerequisite of data mining; it deals with both character and numeric data by vectorizing character strings and matching numbers. A minimal decision algorithm based on rough sets is used to discover the knowledge hidden in the data warehouse. To monitor land use dynamically and accurately, it is suggested to set up a real-time land control system based on GPS, digital photogrammetry, and online data mining. Finally, the method is applied to the intersection area between town and country of Wuhan city, and a set of knowledge about land control is discovered.
Vector quantization (VQ) is an important data compression method. The key step in VQ encoding is to find the closest vector among N vectors for a given feature vector; many classical linear search algorithms take O(N) distance computations. This paper presents a quantum VQ iteration and a corresponding quantum VQ encoding algorithm that take O(sqrt(N)) steps. The unitary operation of distance computation can be performed on many vectors simultaneously because a quantum state exists in a superposition of states. The quantum VQ iteration comprises three oracles, whereas many quantum algorithms, such as Shor's factorization algorithm and Grover's algorithm, have only one oracle. An entangled state is generated and used, in contrast to Grover's algorithm, whose state is not entangled. The quantum VQ iteration is a rotation over a subspace, whereas the Grover iteration is a rotation over the global space. The quantum VQ iteration thus extends the Grover iteration to more complex searches that require more oracles, and the method is universal.
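The classical baseline that the quantum algorithm accelerates is the O(N) full search sketched below; the codebook and test vector are synthetic. The quantum construction itself (the three oracles and the subspace rotation) is not reproducible classically, so only the search being replaced is shown.

```python
import numpy as np

rng = np.random.default_rng(5)

codebook = rng.uniform(-1, 1, size=(64, 8))       # N = 64 codewords

def vq_encode(x, codebook):
    """Classical full search: O(N) distance computations per input vector.
    The quantum algorithm in the paper needs only O(sqrt(N)) oracle calls."""
    d2 = np.sum((codebook - x) ** 2, axis=1)      # squared distances to all codewords
    return int(np.argmin(d2))

def vq_decode(index, codebook):
    return codebook[index]

x = codebook[17] + 0.01                           # a vector near codeword 17
idx = vq_encode(x, codebook)
```

Every input vector pays the full N-term distance scan here, which is exactly the cost that motivates sublinear search.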
Current GIS can only handle 2-D or 2.5-D information on the earth's surface, so a new 3-D data structure and data model need to be designed for 3-D GIS. This paper analyzes diverse 3-D spatial phenomena, from mining to geology, and their complicated relations, and proposes several new kinds of spatial objects, including cross-sections, column bodies, and digital surface models, to represent special spatial phenomena such as tunnels and the irregular surfaces of an ore body. An integrated data structure combining vector, raster, and object-oriented data models is used to represent the various 3-D spatial objects and their relations. This integrated data structure and object-oriented data model can serve as the basis for designing and realizing a 3-D geographic information system.
Funding: partially supported by the National Key Research and Development Project under Grant 2020YFB1806805 and the Social Development Projects of Jiangsu Science and Technology Department under Grant No. BE2018704.
Funding: Project (61374140) supported by the National Natural Science Foundation of China.
Funding: supported by the National High-tech R&D Program of China (No. 2007AA120501).
Funding: the National Natural Science Foundation of China (Nos. 41971356, 41701446) and the National Key Research and Development Program of China (Nos. 2017YFB0503600, 2018YFB0505500, 2017YFC0602204).
Funding: National Natural Science Foundation of China (No. 61374140) and the Youth Foundation of the National Natural Science Foundation of China (No. 61403072).
Funding: this work is supported by the Strategic Priority Research Program of the Chinese Academy of Sciences [grant number XDA19020201].
Funding: the National Natural Science Foundation of China [grant numbers 41171314 and 41023001] and the Fundamental Research Funds for the Central Universities [grant number 2014619020203]. Comments from the anonymous reviewers and editor are appreciated.
Abstract: This study proposes a virtual globe-based vector data model named the quaternary quadrangle vector tile model (QQVTM) to better manage, visualize, and analyze massive amounts of global multi-scale vector data. The model integrates the quaternary quadrangle mesh (a discrete global grid system) with global image, terrain, and vector data. A QQVTM-based organization method is presented to organize global multi-scale vector data, including linear and polygonal vector data. In addition, tile-based reconstruction algorithms are designed to search and stitch the vector fragments scattered across tiles, reconstructing and storing the entire vector geometries to support vector queries and 3D analysis of global datasets. These organized vector data are in turn visualized and queried using a geometry-based approach. Our experimental results demonstrate that the QQVTM satisfies the requirements for global vector data organization, visualization, and querying. Moreover, the QQVTM outperforms unorganized 2D vectors in rendering efficiency and the latitude–longitude-based approach in data redundancy.
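The tile addressing behind such a quaternary mesh can be illustrated with a toy quadkey computation. This sketch subdivides a plain equirectangular world and is only in the spirit of the quaternary quadrangle mesh; the real QQVTM's handling of poles, multi-scale stitching, and geometry reconstruction is not reproduced here.

```python
def quad_tile_key(lon, lat, level):
    """Toy quaternary tile key: at each level the current quadrangle is
    split into four children and one base-4 digit is appended, encoding
    which child contains the point (digit = east_bit + 2 * north_bit)."""
    x0, x1, y0, y1 = -180.0, 180.0, -90.0, 90.0
    key = ""
    for _ in range(level):
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        digit = 0
        if lon >= xm:          # eastern half
            digit += 1
            x0 = xm
        else:
            x1 = xm
        if lat >= ym:          # northern half
            digit += 2
            y0 = ym
        else:
            y1 = ym
        key += str(digit)
    return key
```

Tiles sharing a key prefix nest inside the same parent quadrangle, which is what makes multi-scale organization and tile-based stitching possible.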
Abstract: Geological data are stored in vector format in a geographical information system (GIS), whereas other data such as remote sensing images, geographical data, and geochemical data are stored in raster format. This paper converts the vector data into 8-bit images programmatically, weighting each layer according to its importance to mineralization, so that the geological meaning can be conveyed through raster images. The paper also fuses geographical and geochemical data with the programmed strata data. The results show that image fusion can express different intensities effectively and visualize structural characteristics in two dimensions. Furthermore, it can produce optimized information from multi-source data and express it more directly.
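The vector-to-raster step can be sketched as burning weighted features into an 8-bit grid. Real layers would be rasterized by a GIS library (e.g. GDAL); representing each feature as an explicit list of grid cells is a simplification for illustration.

```python
import numpy as np

def burn_features(shape, features):
    """Rasterise vector features into an 8-bit image: each feature is a
    list of (row, col) cells plus a 0-255 weight reflecting its importance
    to mineralisation; overlapping features keep the higher weight."""
    img = np.zeros(shape, dtype=np.uint8)
    for cells, weight in features:
        for r, c in cells:
            img[r, c] = max(img[r, c], weight)
    return img
```

Once every layer is an 8-bit image on a common grid, the fusion with geochemical and geographical rasters described above reduces to per-pixel arithmetic.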
Abstract: Due to the conflict between the huge volume of map data and limited network bandwidth, rapid transmission of vector map data over the Internet has become a bottleneck of spatial data delivery in web-based environments. This paper proposes an approach to organizing and transmitting multi-scale vector river network data progressively over the Internet. The approach accounts for two levels of importance, i.e., the importance of river branches and the importance of the points belonging to each branch, and forms data packages accordingly. Our experiments show that the proposed approach can reduce the original data by 90% while preserving the river structure well.
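The two-level packaging idea can be sketched as follows. The importance scores themselves are assumed to be precomputed (e.g. from stroke length and point offset), which this sketch does not derive; the field names are illustrative.

```python
def progressive_packages(branches, batch=4):
    """Order vertices by (branch importance, point importance), both
    descending, then cut the resulting stream into fixed-size packages so
    the most structurally important data is transmitted first."""
    ranked = []
    for b in sorted(branches, key=lambda b: -b["importance"]):
        for pt, w in sorted(zip(b["points"], b["weights"]), key=lambda t: -t[1]):
            ranked.append((b["name"], pt))
    return [ranked[i:i + batch] for i in range(0, len(ranked), batch)]
```

A client rendering the river network can stop after any package and still hold the most important branches and vertices, which is what makes the 90% reduction quoted above tolerable.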
Abstract: Multistage Vector Quantization (MSVQ) can achieve very low encoding and storage complexity compared with unstructured vector quantization. However, conventional MSVQ is suboptimal with respect to the overall performance measure. This paper proposes a new technique that designs a decoder codebook different from the encoder codebook so as to optimize the overall performance. The performance improvement is achieved with no effect on encoding complexity, in either storage or time, at the cost of a modest increase in the decoder's storage complexity.
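The encoder/decoder split described above can be sketched with a greedy two-stage VQ. The decoder takes its own codebooks, which may differ from the encoder's, as the paper proposes; how those decoder codebooks are optimized is not shown here.

```python
import numpy as np

def msvq_encode(x, enc_books):
    """Greedy multistage VQ: each stage quantises the residual left by the
    previous stages using that stage's encoder codebook."""
    idx, r = [], x.astype(float)
    for B in enc_books:
        j = int(np.argmin(((B - r) ** 2).sum(1)))
        idx.append(j)
        r = r - B[j]
    return idx

def msvq_decode(idx, dec_books):
    """Reconstruction is the stage-wise sum of the selected codewords; the
    decoder codebooks need not equal the encoder's."""
    return sum(B[j] for B, j in zip(dec_books, idx))
```

Because only the decoder's tables change, the encoder's search cost per stage is untouched, matching the claim that encoding complexity is unaffected.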
Funding: supported by the National Basic Research Program of China (the 973 Program, Grant No. 2010CB951101) and the Program for Changjiang Scholars and Innovative Research Teams in Universities, the Ministry of Education, China (Grant No. IRT0717).
Abstract: Hybrid data assimilation (DA) is a method seeing increasing use in recent hydrology and water resources research. In this study, a DA method coupling support vector machines (SVMs) with the ensemble Kalman filter (EnKF) was used to predict soil moisture in different soil layers: 0-5 cm, 30 cm, 50 cm, 100 cm, 200 cm, and 300 cm. The SVM methodology was first trained on ground measurements of soil moisture and meteorological parameters from the Meilin study area in East China to construct statistical soil moisture prediction models. Subsequent observations and their statistics were then used for prediction with two approaches: the SVM predictor alone, and the SVM-EnKF model built by coupling the SVM model with the EnKF technique. Validation results showed that the proposed SVM-EnKF model improves the prediction of soil moisture in the different layers, from the surface to the root zone.
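The EnKF half of the coupling can be illustrated with a scalar analysis step. In this sketch the forecast ensemble would come from the SVM predictor; this is a standard stochastic EnKF update, a stand-in for the coupling described above rather than the authors' exact implementation.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, rng):
    """Scalar EnKF analysis step (observation operator H = 1): each
    forecast member is pulled toward a perturbed copy of the observation
    by the Kalman gain, shrinking the ensemble spread."""
    pf = np.var(ensemble, ddof=1)          # forecast error variance
    k = pf / (pf + obs_var)                # Kalman gain
    perturbed = obs + rng.normal(0, np.sqrt(obs_var), len(ensemble))
    return ensemble + k * (perturbed - ensemble)
```

After the update, the analysis ensemble sits near the observation with reduced variance, which is how assimilating new soil moisture observations corrects the statistical predictor's drift.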
Funding: NOAA Grant NA17RJ1227 and NSF Grant EIA-0205628 provided financial support for this work; also supported by RSF Grant 14-41-00039.
Abstract: Geophysical data sets are growing at an ever-increasing rate, requiring computationally efficient data selection (thinning) methods to preserve essential information. Satellites such as WindSat provide large data sets for assessing the accuracy and computational efficiency of data selection techniques. A new data thinning technique, based on support vector regression (SVR), is developed and tested. To manage large online satellite data streams, observations from WindSat are formed into subsets by Voronoi tessellation, and each subset is then thinned by SVR (TSVR). Three experiments are performed. The first confirms the viability of TSVR for a relatively small sample, comparing it to several commonly used data thinning methods (random selection, averaging, and Barnes filtering) and producing a 10% thinning rate (90% data reduction), low mean absolute errors (MAE), and large correlations with the original data. A second experiment, using a larger dataset, shows TSVR retrievals with MAE < 1 m s⁻¹ and correlations ≥ 0.98; TSVR was an order of magnitude faster than the commonly used thinning methods. A third experiment applies a two-stage pipeline to TSVR to accommodate online data. The pipeline subsets reconstruct the wind field with the same accuracy as in the second experiment and are an order of magnitude faster than non-pipeline TSVR. Pipeline TSVR is therefore two orders of magnitude faster than commonly used thinning methods that ingest the entire data set. This study demonstrates that pipeline TSVR thinning is an accurate and computationally efficient alternative to commonly used data selection techniques.
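The thinning idea, keeping only observations the retained set cannot already explain, can be sketched with a greedy epsilon-insensitive pass. A nearest-neighbour predictor stands in here for the fitted SVR machine (which keeps the sketch dependency-free), so this illustrates the selection logic, not the paper's TSVR.

```python
import numpy as np

def eps_thin(coords, values, eps):
    """Greedy epsilon-insensitive thinning: a point survives only if the
    points kept so far fail to reproduce its value to within eps (its
    nearest retained neighbour serves as the predictor)."""
    keep = [0]
    for i in range(1, len(coords)):
        d = np.linalg.norm(coords[keep] - coords[i], axis=1)
        nearest = keep[int(np.argmin(d))]
        if abs(values[nearest] - values[i]) > eps:
            keep.append(i)
    return keep
```

Smooth stretches of the field collapse onto a few retained points while sharp features are kept, which is the behaviour that lets TSVR discard 90% of the data yet preserve the wind field.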
Abstract: Support vector machines and a Kalman-like observer are used for fault detection and isolation in a variable-speed horizontal-axis wind turbine composed of three blades and a full converter. The support vector approach is data-based and therefore does not rely on detailed process knowledge. It is based on structural risk minimization, which enhances generalization even with a small training data set, and it accommodates process nonlinearity through flexible kernels; in this work, a radial basis function is used as the kernel. Different parts of the process are investigated, including actuator and sensor faults. With duplicated sensors, sensor faults in blade pitch positions and in generator and rotor speeds can be detected. Stuck-measurement faults can be detected within 2 sampling periods; the detection time for offset/scaled measurements depends on the severity of the fault and on the process dynamics when the fault occurs. The converter torque actuator fault can also be detected within 2 sampling periods. Faults in the pitch system actuators are harder to detect because they affect only the (very fast) transitory state and not the final stationary state. Therefore, two methods are considered and compared for fault detection and isolation of this fault: support vector machines and a Kalman-like observer. The advantages and disadvantages of each are discussed. On one hand, training support vector machines on transitory states would require a large amount of data covering different situations, but the resulting fault detection and isolation is robust to variations in the input/operating point. On the other hand, the observer is model-based and therefore requires no training, and it allows identification of the fault level, which is useful for fault reconfiguration; however, observability of the system is ensured only under specific conditions related to the dynamics of the inputs and outputs. The whole fault detection and isolation scheme is evaluated using a wind turbine benchmark with a real sequence of wind speed.
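The duplicated-sensor idea for stuck measurements can be sketched as a simple residual check. In the paper, residuals of this kind feed the SVM classifier; the tolerance and persistence values here are illustrative, with persistence set to 2 to match the 2-sampling-period detection quoted above.

```python
def detect_stuck(sensor_a, sensor_b, tol=0.1, persist=2):
    """Flag the first sample index at which two duplicated sensor channels
    disagree beyond tol for `persist` consecutive samples; return None if
    no fault is flagged."""
    run = 0
    for i, (a, b) in enumerate(zip(sensor_a, sensor_b)):
        run = run + 1 if abs(a - b) > tol else 0
        if run >= persist:
            return i
    return None
```

When one copy freezes while the true signal keeps moving, the residual between the duplicates grows and the fault is flagged two samples after onset.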
Funding: Project supported by the National Natural Science Foundation of China (No. 50578168) and the Natural Science Foundation of CQ CSTC (No. 2007BB2396).
Abstract: To improve the performance of support vector regression, a new method for modifying the kernel function is proposed. Information from all samples is incorporated into the kernel function via conformal mapping, making the kernel function data-dependent. Starting from a random initial parameter, the kernel function is modified repeatedly until a satisfactory result is achieved. Compared with the conventional model, the improved approach does not require the kernel function's parameters to be selected. Simulations are carried out for a one-dimensional continuous function and a case of strong earthquakes. The results show that the improved approach has better learning ability and forecasting precision than the traditional model. As the iteration number increases, the figure of merit decreases and converges, with the speed of convergence depending on the parameters used in the algorithm.
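A conformal modification of a kernel has the form K̃(x, z) = c(x) c(z) K(x, z), which is what makes the kernel data-dependent. In this sketch the factor c is a sum of Gaussians centred on "anchor" samples; the anchor choice and the parameter tau are illustrative, not the paper's fitted values or its iteration scheme.

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    """Base RBF kernel."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def conformal_kernel(x, z, anchors, tau=1.0, gamma=1.0):
    """Conformally modified kernel K~(x, z) = c(x) c(z) K(x, z), where the
    data-dependent factor c(u) is a sum of Gaussians centred on anchor
    samples, magnifying the feature-space metric near the data."""
    def c(u):
        return sum(np.exp(-np.sum((u - a) ** 2) / (2 * tau ** 2))
                   for a in anchors)
    return c(x) * c(z) * rbf(x, z, gamma)
```

Since c multiplies symmetrically, the modified function remains a valid (symmetric, positive) kernel, so it can drop straight into an existing SVR solver.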
Abstract: This paper studies an urban waterlog-draining decision support system based on the 4D data fusion technique. The 4D data comprise DEM, DOQ, DLG, and DRG, which together with a non-spatial fundamental database supply the complete databases for waterlog forecasting and analysis. Data composition and reasoning are the two key steps of 4D data fusion. Finally, the paper presents a real case, the Ezhou Waterlog-Draining Decision Support System (EWDSS), with two application models: a DEM application model and a water generating and draining model.
Funding: Project supported by a Research Grant of Hong Kong Polytechnic University (No. 1.34.37.9709) and the National Natural Science Foundation of China.
Abstract: Land resources are facing a crisis of misuse, especially in the intersection areas between town and country, and land control has to be enforced. This paper presents a data mining method developed for land control. A vector-match method is proposed for data cleaning, the prerequisite of data mining; it handles both character and numeric data by vectorizing character strings and matching numbers. A minimal decision algorithm based on rough sets is used to discover the knowledge hidden in the data warehouse. To monitor land use dynamically and accurately, a real-time land control system based on GPS, digital photogrammetry, and online data mining is suggested. Finally, the method is applied to the intersection area between town and country of Wuhan city, and a set of knowledge about land control is discovered.
Abstract: Vector quantization (VQ) is an important data compression method. The key step in VQ encoding is finding, for a given feature vector, the closest vector among N codebook vectors; classical linear search algorithms take O(N) distance computations. This paper presents a quantum VQ iteration and a corresponding quantum VQ encoding algorithm that take O(√N) steps. Because a quantum state exists in a superposition of states, the unitary operation for distance computation can be performed on many vectors simultaneously. The quantum VQ iteration comprises three oracles, whereas many quantum algorithms, such as Shor's factorization algorithm and Grover's algorithm, have only one. An entangled state is generated and used, whereas the state in Grover's algorithm is not entangled; and the quantum VQ iteration is a rotation over a subspace, whereas the Grover iteration is a rotation over the global space. The quantum VQ iteration thus extends the Grover iteration to more complex searches requiring more oracles, and the method is universal.
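For reference, the classical O(N) nearest-codeword search that the quantum iteration is designed to accelerate to O(√N) looks as follows; the quantum circuit itself is beyond a short sketch, so only the baseline being sped up is shown.

```python
import numpy as np

def vq_encode(x, codebook):
    """Classical VQ encoding: exhaustively compute the squared Euclidean
    distance from x to all N codewords and return the index of the closest
    one together with its distance (the O(N) linear search)."""
    d = ((codebook - x) ** 2).sum(axis=1)
    return int(np.argmin(d)), float(d.min())
```

Every codeword is touched once per query, which is exactly the cost the Grover-style superposition over codewords avoids.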
Funding: Project supported by the National Natural Science Foundation of China (No. 49871066).
Abstract: Current GIS can only handle 2-D or 2.5-D information on the earth's surface; a new 3-D data structure and data model need to be designed for 3-D GIS. This paper analyzes diverse 3-D spatial phenomena, from mines to geology, and their complicated relations, and proposes several new kinds of spatial objects, including cross-sections, column bodies, and digital surface models, to represent special spatial phenomena such as tunnels and the irregular surfaces of an ore body. An integrated data structure combining vector, raster, and object-oriented data models is used to represent the various 3-D spatial objects and their relations. This integrated data structure and object-oriented data model can serve as the basis for designing and realizing a 3-D geographic information system.