Funding: This study was partly supported by the Russian Foundation for Basic Research [Project No. 16-01-000213-a].
Abstract: In this paper, the geoecological information-modeling system (GIMS) is described as a possible improvement of the Big Data approach. The main GIMS function is the use of algorithms and models that capture the fundamental processes controlling the evolution of the climate-nature-society system (CNSS). The GIMS structure includes 24 blocks that realize a series of models and algorithms for global big data processing and analysis. The CNSS global model is the basic block of the GIMS. The operational tools of GIMS are demonstrated by examining several scenarios associated with the reconstruction of forest areas. It is shown that significant impacts on forests can lead to large-scale global climate variations.
Abstract: Privacy protection for big data linking is discussed here in relation to the Central Statistics Office (CSO) Ireland's big data linking project titled the 'Structure of Earnings Survey - Administrative Data Project' (SESADP). The result of the project was the creation of datasets and statistical outputs for the years 2011 to 2014 to meet Eurostat's annual earnings statistics requirements and the Structure of Earnings Survey (SES) Regulation. Record linking across the Census and various public sector datasets enabled the necessary information to be acquired to meet the Eurostat earnings requirements. However, the risk of statistical disclosure (i.e. identifying an individual on the dataset) is high unless privacy and confidentiality safeguards are built into the data matching process. This paper looks at the three methods of linking records on big datasets employed on the SESADP, and at how to anonymise the data to protect the identity of individuals where potentially disclosive variables exist.
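The paper does not publish its matching code, but the core idea of linking records without carrying a direct identifier into the linked file can be sketched with a keyed hash: the same identifier always yields the same pseudonym, so matching across datasets still works, while the raw value cannot be recovered without the key. All names and values below are hypothetical illustrations, not the SESADP implementation:

```python
import hashlib
import hmac

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a deterministic keyed hash.

    The same identifier always maps to the same pseudonym, so records
    can still be linked across datasets, but the original value cannot
    be recovered without the secret key.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

key = b"held-by-a-trusted-party"      # hypothetical key management
a = pseudonymise("1234567T", key)     # e.g. a record in one dataset
b = pseudonymise("1234567T", key)     # the same person in another dataset
print(a == b, a != "1234567T")        # True True: still linkable, no longer disclosive
```

In practice the key itself becomes the sensitive asset and must be held separately from the linked data, which is one reason real statistical offices combine such pseudonymisation with organisational safeguards.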
Funding: Part of the "Study on Improving the Results of Targeted Poverty Alleviation Programs in Guangxi, Guizhou and Yunnan" (15BMZ057), a 2015 general research program funded by the National Social Sciences Fund of China; "Exploring the Implementation of the Targeted Poverty Alleviation Strategy and Study on Improving the Implementation Methods in Guangxi" (XBS16035), a program funded by the Guangxi University Research Fund; and "Study on Dynamic Management Model for Targeted Poverty Alleviation in the Age of Big Data" (201610593296), a program funded by Guangxi's College Student Innovation and Entrepreneurship Training Project.
Abstract: Since the State Council issued the Action Plan on Promoting the Development of the Big Data Industry, big data-enabled information integration and processing applications have increasingly become basic strategic resources for the building of a modern governance system in China. When it comes to poverty reduction, given that we are currently at a critical stage in the battle to eradicate poverty, it is important that we apply the big data way of thinking and big data technology to the development and integration of poverty alleviation resources. This paper examines the need to apply big data technology in targeted poverty alleviation and discusses how big data technology can be integrated into targeted poverty alleviation programs and how the big data way of thinking meshes with the idea of targeted poverty alleviation. It is believed that the application of big data technology can significantly improve the results of targeted poverty alleviation programs and that the building of big data-powered poverty alleviation platforms is a new approach to implementing the targeted poverty alleviation strategy. This paper calls for changing our way of thinking regarding targeted poverty alleviation and points out directions for targeted poverty alleviation in the age of big data, with a view to promoting the extensive application of big data technology in the field of poverty reduction and improving the results of poverty alleviation and eradication programs.
Funding: Supported by the Program of Introducing Talents of Disciplines to Universities of the Ministry of Education and the State Administration of Foreign Experts Affairs of China (the 111 Project, Grant No. B08048) and the Special Basic Research Fund for Methodology in Hydrology of the Ministry of Science and Technology of China (Grant No. 2011IM011000).
Abstract: The question of how to choose a copula model that best fits a given dataset is a predominant limitation of the copula approach, and the present study aims to investigate techniques of goodness-of-fit testing for multi-dimensional copulas. A goodness-of-fit test based on Rosenblatt's transformation was mathematically expanded from two dimensions to three dimensions, and procedures for a bootstrap version of the test were provided. Through stochastic copula simulation, an empirical application to historical drought data at the Lintong Gauge Station shows that the goodness-of-fit tests perform well, revealing that both trivariate Gaussian and Student t copulas are acceptable for modeling the dependence structures of the observed drought duration, severity, and peak. Goodness-of-fit tests for multi-dimensional copulas can provide further support for, and greatly assist, the application of a wider range of copulas to describe the associations of correlated hydrological variables. However, for applications of copulas with more than three dimensions, more complicated computational effort, as well as exploration and parameterization of the corresponding copulas, is required.
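As a rough illustration of the Rosenblatt transformation on which such tests are built: for a bivariate Gaussian copula with parameter rho, the transform maps dependent uniforms to (approximately) independent Uniform(0, 1) variables whenever the hypothesized copula is correct, and this independence is what the goodness-of-fit statistic then checks. The sketch below shows only the two-dimensional transform on simulated data (the paper extends the test to three dimensions and adds a bootstrap):

```python
import numpy as np
from scipy import stats

def rosenblatt_gaussian(u, rho):
    """Rosenblatt transform for a bivariate Gaussian copula.

    Maps dependent uniforms (u1, u2) to (e1, e2); under the true
    copula the outputs are independent Uniform(0, 1) variables.
    """
    z1 = stats.norm.ppf(u[:, 0])
    z2 = stats.norm.ppf(u[:, 1])
    e1 = u[:, 0]
    # conditional CDF of Z2 given Z1 = z1 under correlation rho
    e2 = stats.norm.cdf((z2 - rho * z1) / np.sqrt(1.0 - rho ** 2))
    return np.column_stack([e1, e2])

rng = np.random.default_rng(0)
rho = 0.6
# simulate a sample from the Gaussian copula itself
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=5000)
u = stats.norm.cdf(z)
e = rosenblatt_gaussian(u, rho)
# under the correct copula the transformed margins are nearly uncorrelated
print(abs(np.corrcoef(e.T)[0, 1]))   # small, close to zero
```

A full test would compare a Cramér–von Mises-type statistic on the transformed sample against its bootstrap distribution under the fitted copula.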
Funding: Sponsored by the National Natural Science Foundation of China (41474017, 41274035), the National Key Basic Research Program of China (973 Program, 2012CB957703), the State Key Laboratory of Geodesy and Earth's Dynamics (SKLGED2013-3-2-Z, SKLGED2014-1-3-E), and the State Key Laboratory of Geo-Information Engineering (SKLGIE2014-M-1-2).
Abstract: In this paper we present a series of monthly gravity field solutions from Gravity Recovery and Climate Experiment (GRACE) range measurements using a modified short-arc approach, in which the ambiguity of the range measurements is eliminated by differencing two adjacent range measurements. The data used for developing our monthly gravity field model are the same as for the Tongji-GRACE01 model, except that range measurements replace the range-rate measurements; our model is truncated to degree and order 60 and spans January 2004 to December 2010, the same period as the Tongji-GRACE01 model. Based on comparisons of the C_(2,0), C_(2,1), S_(2,1), C_(15,15), and S_(15,15) time series, the global mass change signals, and the mass change time series in the Amazon area of our model with those of the Tongji-GRACE01 model, we conclude that our monthly gravity field model is comparable with the Tongji-GRACE01 monthly model.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 41431070, 41174016, 41274026, 41274024, 41321063), the National Key Basic Research Program of China (973 Program, 2012CB957703), the CAS/SAFEA International Partnership Program for Creative Research Teams (KZZD-EW-TZ-05), and the Chinese Academy of Sciences.
Abstract: As global warming continues, the monitoring of changes in terrestrial water storage becomes increasingly important, since it plays a critical role in understanding global change and water resource management. In North America, as elsewhere in the world, changes in water resources strongly impact agriculture and animal husbandry. From a combination of Gravity Recovery and Climate Experiment (GRACE) gravity and Global Positioning System (GPS) data, it was recently found that water storage from August 2002 to March 2011 recovered after the extreme Canadian Prairies drought between 1999 and 2005. In this paper, we use GRACE Release-5 monthly gravity data to track the water storage change from August 2002 to June 2014. In the Canadian Prairies and the Great Lakes areas, the total water storage is found to have increased during the last decade at a rate of 73.8 ± 14.5 Gt/a, which is larger than that found in the previous study due to the longer time span of GRACE observations used and the reduction of the leakage error. We also find a long-term decrease of water storage at a rate of -12.0 ± 4.2 Gt/a in the Ungava Peninsula, possibly due to permafrost degradation and less snow accumulation during winter in the region. In addition, the effect of the total mass gain in the surveyed area on present-day sea level amounts to -0.18 mm/a, and thus should be taken into account in studies of global sea level change.
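Rates such as 73.8 ± 14.5 Gt/a come from fitting a trend to a monthly mass-change series. A full GRACE analysis co-estimates seasonal terms and corrects for leakage; the sketch below shows only the basic least-squares trend and its formal 1-sigma error, on synthetic monthly data with an invented noise level:

```python
import numpy as np

def trend_with_sigma(t, y):
    """Least-squares linear trend and its 1-sigma formal error."""
    A = np.column_stack([np.ones_like(t), t])   # [intercept, slope] design
    coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)
    n, p = len(t), 2
    sigma2 = res[0] / (n - p)                   # residual variance
    cov = sigma2 * np.linalg.inv(A.T @ A)       # parameter covariance
    return coef[1], np.sqrt(cov[1, 1])          # rate and its standard error

rng = np.random.default_rng(1)
t = np.arange(0.0, 12.0, 1.0 / 12.0)            # 12 years of monthly epochs
y = 73.8 * t + rng.normal(0.0, 30.0, t.size)    # Gt, hypothetical scatter
rate, sigma = trend_with_sigma(t, y)
print(f"rate = {rate:.1f} +/- {sigma:.1f} Gt/a")
```

Lengthening the time span shrinks the slope's formal error, which is consistent with the paper's observation that a longer GRACE record yields a better-constrained rate.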
Abstract: In this study, rural poverty in Iran is investigated by applying a multidimensional approach, the association rules mining technique, and Levene, F and Tukey tests to household data from 2008. The results indicate that poverty, in its multiple dimensions, is an epidemic problem in rural Iran. The results also show that there are 11 patterns of poverty in the rural areas, including four main patterns with 99.62% coverage and seven sub-patterns with nearly 0.38% coverage. In these patterns, housing and household education are the most important dimensions of poverty, and income poverty is the least important dimension. The government's income support policy for households, in enforcing the law on targeted subsidies, cannot be regarded as a pro-poor policy; rather, it serves other political purposes.
Abstract: Challenges in Big Data analysis arise due to the way the data are recorded, maintained, processed and stored. We demonstrate that a hierarchical, multivariate, statistical machine learning algorithm, namely the Boosted Regression Tree (BRT), can address Big Data challenges to drive decision making. The challenge in this study is a lack of interoperability, since the data, a collection of GIS shapefiles, remotely sensed imagery, and aggregated and interpolated spatio-temporal information, are stored in monolithic hardware components. For the modelling process, it was necessary to create one common input file. By merging the data sources together, a structured but noisy input file, showing inconsistencies and redundancies, was created. Here, it is shown that BRT can process different data granularities, heterogeneous data and missingness. In particular, BRT has the advantage of dealing with missing data by default, by allowing a split on whether or not a value is missing as well as on what the value is. Most importantly, BRT offers a wide range of possibilities for interpreting results, and variable selection is performed automatically by considering how frequently a variable is used to define a split in the tree. A comparison with two similar regression models (Random Forests and the Least Absolute Shrinkage and Selection Operator, LASSO) shows that BRT outperforms these in this instance. BRT can also be a starting point for sophisticated hierarchical modelling in real-world scenarios. For example, a single or ensemble approach of BRT could be tested with existing models in order to improve results for a wide range of data-driven decisions and applications.
Funding: Supported by a Grant-in-Aid for Scientific Research (A) (#24240015A).
Abstract: Uncertain data are common due to the increasing usage of sensors, radio frequency identification (RFID), GPS and similar devices for data collection. The causes of uncertainty include limitations of measurements, inclusion of noise, inconsistent supply voltage, and delay or loss of data in transfer. In order to manage, query or mine such data, data uncertainty needs to be considered. Hence, this paper studies the problem of top-k distance-based outlier detection from uncertain data objects. In this work, an uncertain object is modelled by a probability density function of a Gaussian distribution. The naive approach to distance-based outlier detection makes use of a nested loop. This approach is very costly due to the expensive distance function between two uncertain objects. Therefore, a populated-cells list (PC-list) approach to outlier detection is proposed. Using the PC-list, the proposed top-k outlier detection algorithm needs to consider only a fraction of the dataset objects and hence quickly identifies candidate objects for the top-k outliers. Two approximate top-k outlier detection algorithms are presented to further increase the efficiency of the top-k outlier detection algorithm. An extensive empirical study on synthetic and real datasets is also presented to demonstrate the accuracy, efficiency and scalability of the proposed algorithms.
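The idea of confining distance computations to a small, spatially indexed candidate set can be illustrated with a plain grid: with cell side r, every neighbour within radius r of an object lies in the 3x3 block of adjacent cells, so each score touches only a fraction of the dataset. This is a simplified, certain-data sketch of cell-based candidate restriction, not the paper's PC-list algorithm for uncertain Gaussian objects:

```python
import numpy as np
from collections import defaultdict

def topk_distance_outliers(points, r, k):
    """Top-k distance-based outliers in 2-D: an object's score is its
    number of neighbours within radius r, and the k objects with the
    fewest neighbours are reported. A grid with cell side r restricts
    each distance computation to the 3x3 block of adjacent cells."""
    cells = defaultdict(list)
    for i, p in enumerate(points):
        cells[tuple(np.floor(p / r).astype(int))].append(i)

    scores = {}
    for (cx, cy), idxs in cells.items():
        # neighbours within r can only live in the 3x3 adjacent cells
        cand = [j for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                for j in cells.get((cx + dx, cy + dy), [])]
        for i in idxs:
            d = np.linalg.norm(points[cand] - points[i], axis=1)
            scores[i] = int(np.sum(d <= r)) - 1   # exclude the object itself
    return sorted(scores, key=scores.get)[:k]

rng = np.random.default_rng(3)
cluster = rng.normal(0.0, 0.3, size=(300, 2))     # one dense cluster
outliers = np.array([[5.0, 5.0], [-6.0, 4.0]])    # two injected outliers
pts = np.vstack([cluster, outliers])
top2 = topk_distance_outliers(pts, r=1.0, k=2)
print(top2)   # the two injected points, indices 300 and 301
```

The paper's PC-list additionally exploits cell population counts to skip whole cells and handles the probabilistic distance between Gaussian objects, which this certain-data sketch omits.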
Abstract: Microsoft Excel is essential for the End-User Approach (EUA), offering versatility in data organization, analysis, and visualization, as well as widespread accessibility. It fosters collaboration and informed decision-making across diverse domains. Conversely, Python is indispensable for professional programming due to its versatility, readability, extensive libraries, and robust community support. It enables efficient development, advanced data analysis, data mining, and automation, catering to diverse industries and applications. However, one primary issue when using Microsoft Excel with Python libraries is compatibility and interoperability. While Excel is a widely used tool for data storage and analysis, it may not seamlessly integrate with Python libraries, leading to challenges in reading and writing data, especially in complex or large datasets. Additionally, manipulating Excel files with Python may not always preserve formatting or formulas accurately, potentially affecting data integrity. Moreover, dependency on Excel's graphical user interface (GUI) for automation can limit scalability and reproducibility compared to Python's scripting capabilities. This paper covers an integration solution that empowers non-programmers to leverage Python's capabilities within the familiar Excel environment. This enables users to perform advanced data analysis and automation tasks without requiring extensive programming knowledge. Based on feedback solicited from non-programmers who tested the integration solution, the case study evaluates the solution's ease of implementation, performance, and compatibility with different Excel versions.
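A minimal sketch of the scripted (non-GUI) side of such an integration: pandas reads and writes .xlsx workbooks through the openpyxl engine, so an end user's spreadsheet summary task can be reproduced as a repeatable script. The file names and data below are invented for illustration, and openpyxl must be installed:

```python
import pandas as pd

# Hypothetical workbook an end user might have built in Excel.
df = pd.DataFrame({"region": ["North", "South", "North"],
                   "sales": [120, 95, 143]})
df.to_excel("sales.xlsx", index=False)          # writes via openpyxl

# The summary the user would do by hand, scripted instead:
loaded = pd.read_excel("sales.xlsx")
summary = loaded.groupby("region", as_index=False)["sales"].sum()
summary.to_excel("sales_summary.xlsx", index=False)
print(summary)
```

Note the caveat from the abstract applies here: this round trip preserves cell values but not necessarily formatting or formulas, which is one of the interoperability limits the paper discusses.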
Abstract: "I want to know God's thoughts. The rest are details." (Albert Einstein) The first references to landslides go back many ages. For centuries, mankind has tried to solve the secret of landslides and to answer the questions: Where? When? How? Where could nature have hidden the keys to the landslide secret? Perhaps the latest achievements of science and technology could bring us closer to the clues of
Funding: Supported by the Specialized Research Fund for the Doctoral Program of Higher Education (20114307120032) and the National Natural Science Foundation of China (71201167).
Abstract: A Bayesian method for estimating human error probability (HEP) is presented. The main idea of the method is to incorporate human performance data into the HEP estimation process. By integrating human performance data and prior information about human performance, a more accurate and specific HEP estimate can be achieved. For time-unrelated tasks without a rigorous time restriction, the HEP estimated by commonly used human reliability analysis (HRA) methods or by expert judgment is collected as the source of prior information. For time-related tasks with a rigorous time restriction, human error is expressed as a failure to respond; therefore, the HEP is the time curve of the non-response probability (NRP). The prior information is collected from system safety and reliability specifications or by expert judgment. The (joint) posterior distribution of the HEP or the NRP-related parameter(s) is constructed after the prior information has been collected. Based on the posterior distribution, a point or interval estimate of the HEP/NRP is obtained. Two illustrative examples demonstrate the practicality of the approach.
Funding: Supported by the Monitoring and Early Warning System for Grain Security in Henan (0613024000).
Abstract: Data nodes with heterogeneous databases in the early warning system for grain security seriously hampered effective data collection in this system. In this article, existing middleware technologies are analyzed, and an approach to solving the problem of heterogeneous data sharing through middleware is discussed. Based on this method, and according to the characteristics of the early warning system for grain security, data-sharing technologies for this system were researched and explored to solve the issues of collecting and sharing heterogeneous data.