In this review, we highlight some recent methodological and theoretical developments in estimation and testing of large panel data models with cross-sectional dependence. The paper begins with a discussion of issues of cross-sectional dependence and introduces the concepts of weak and strong cross-sectional dependence. Then, the main attention is paid to spatial and factor approaches for modeling cross-sectional dependence in both linear and nonlinear (nonparametric and semiparametric) panel data models. Finally, we conclude with some speculations on future research directions.
The data model is the core knowledge of a database course, and a deep understanding of data models is the key to mastering database design and application. The data models of NoSQL databases are categorized as key-value stores, column-oriented stores, document-oriented stores, and graph databases. This paper makes a comparative analysis of the characteristics of the relational data model and the NoSQL data models, and presents the design and implementation of the different data models through cases, so that students can master the relevant theories and application methods of each database model.
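To make the comparison concrete, here is a hypothetical sketch (all field names invented, not taken from the paper's cases) of one student record expressed in the relational model and in each of the four NoSQL families listed above:

```python
# The same "student" fact in five data models; structural differences are
# visible side by side. All identifiers here are illustrative only.
import json

relational_row = ("s001", "Alice", "Databases", 92)   # fixed schema: (id, name, course, score)

key_value = {"student:s001": '{"name": "Alice", "course": "Databases", "score": 92}'}

column_oriented = {                                   # values grouped by column family
    "name":   {"s001": "Alice"},
    "grades": {"s001": {"Databases": 92}},
}

document = {"_id": "s001",                            # nested, self-describing document
            "name": "Alice",
            "enrollments": [{"course": "Databases", "score": 92}]}

graph = {"nodes": {"s001": {"label": "Student", "name": "Alice"},
                   "c101": {"label": "Course", "title": "Databases"}},
         "edges": [("s001", "ENROLLED_IN", "c101", {"score": 92})]}

# The same fact is retrievable from every model, but the access path differs:
print(json.loads(key_value["student:s001"])["score"])   # 92
print(document["enrollments"][0]["score"])              # 92
print(graph["edges"][0][3]["score"])                    # 92
```

The relational row needs its schema to be interpreted; the key-value store is opaque to the database; the document and graph forms carry their own structure, which is the trade-off the paper asks students to analyze.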
Dear Editor, Aiming at the consensus tracking problem of a class of unknown heterogeneous nonlinear multiagent systems (MASs) with input constraints, a novel data-driven iterative learning consensus control (ILCC) protocol based on zeroing neural networks (ZNNs) is proposed. First, a dynamic linearization data model (DLDM) is acquired via dynamic linearization technology (DLT).
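As background for the dynamic linearization step, here is a minimal sketch of the standard compact-form dynamic linearization idea such data models build on (our own toy plant and parameter choices, not the paper's ILCC protocol): the unknown system is represented locally as Δy(k+1) = φ(k)Δu(k), with the pseudo partial derivative φ estimated recursively from input-output data alone.

```python
# Projection-style update of the pseudo-partial-derivative estimate phi,
# driven only by measured input/output increments (du, dy).
def estimate_phi(phi, du, dy, eta=1.0, mu=0.1):
    return phi + eta * du * (dy - phi * du) / (mu + du * du)

def plant(y, u):               # "unknown" plant; the estimator sees only I/O data
    return 0.6 * y + 0.8 * u

phi, y, u = 0.1, 0.0, 0.0
for _ in range(200):
    u_new = u + 0.1            # constant probe increment
    y_new = plant(y, u_new)
    phi = estimate_phi(phi, u_new - u, y_new - y)
    y, u = y_new, u_new

print(round(phi, 2))           # converges to the steady-state gain 0.8 / (1 - 0.6) = 2.0
```

The estimator recovers the local input-output sensitivity without knowing the plant equations, which is the sense in which such protocols are "data-driven."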
The increasing complexity of China's electricity market creates substantial challenges for settlement automation, data consistency, and operational scalability. Existing provincial settlement systems are fragmented, lack a unified data structure, and depend heavily on manual intervention to process high-frequency and retroactive transactions. To address these limitations, a graph-based unified settlement framework is proposed to enhance automation, flexibility, and adaptability in electricity market settlements. A flexible attribute-graph model is employed to represent heterogeneous multi-market data, enabling standardized integration, rapid querying, and seamless adaptation to evolving business requirements. An extensible operator library is designed to support configurable settlement rules, and a suite of modular tools, including dataset generation, formula configuration, billing templates, and task scheduling, facilitates end-to-end automated settlement processing. A robust refund-clearing mechanism is further incorporated, utilizing sandbox execution, data-version snapshots, dynamic lineage tracing, and real-time change-capture technologies to enable rapid and accurate recalculations under dynamic policy and data revisions. Case studies based on real-world data from regional Chinese markets validate the effectiveness of the proposed approach, demonstrating marked improvements in computational efficiency, system robustness, and automation. Moreover, enhanced settlement accuracy and high temporal granularity improve price-signal fidelity, promote cost-reflective tariffs, and incentivize energy-efficient and demand-responsive behavior among market participants. The method not only supports equitable and transparent market operations but also provides a generalizable, scalable foundation for modern electricity settlement platforms in increasingly complex and dynamic market environments.
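A minimal sketch of the attribute-graph idea, assuming invented node and edge attributes rather than the framework's actual schema: market entities become schema-free attribute nodes, trades become edges, and a settlement "operator" is a traversal over them.

```python
# Toy attribute graph: nodes carry free-form attribute dicts (no fixed schema),
# edges carry trade attributes, and a settlement operator walks the graph.
class AttributeGraph:
    def __init__(self):
        self.nodes, self.edges = {}, []

    def add_node(self, node_id, **attrs):
        self.nodes[node_id] = attrs          # schema-free attributes

    def add_edge(self, src, dst, **attrs):
        self.edges.append((src, dst, attrs))

def settle(graph, participant):
    """Toy settlement operator: sum energy * price over incoming trades."""
    return sum(e["mwh"] * e["price"]
               for src, dst, e in graph.edges if dst == participant)

g = AttributeGraph()
g.add_node("gen1", kind="generator")
g.add_node("retailer", kind="retailer", province="Zhejiang")
g.add_edge("gen1", "retailer", mwh=100.0, price=0.4)   # spot trade
g.add_edge("gen1", "retailer", mwh=50.0, price=0.5)    # contract trade

print(settle(g, "retailer"))   # 100*0.4 + 50*0.5 = 65.0
```

Because attributes are free-form, a new market product only adds edge attributes and a new operator; no table migration is needed, which is the flexibility argument the abstract makes.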
Structural change in panel data is a widespread phenomenon. This paper proposes a fluctuation test to detect a structural change at an unknown date in heterogeneous panel data models with or without common correlated effects. The asymptotic properties of the fluctuation statistics in the two cases are developed under the null and local alternative hypotheses. Furthermore, the consistency of the change-point estimator is proven. Monte Carlo simulation shows that the fluctuation test controls the probability of type I error in most cases, and the empirical power is high for small and moderate sample sizes. An application of the procedure to real data is presented.
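For intuition only, here is the simplest univariate analogue of such a test, a CUSUM-type fluctuation statistic that peaks near a mean break at an unknown date; the paper's statistic for heterogeneous panels with common correlated effects is substantially more involved.

```python
# CUSUM-type fluctuation statistic for a mean break at an unknown date.
import math

def fluctuation_stat(y):
    n = len(y)
    mean = sum(y) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in y) / n)
    csum, stats = 0.0, []
    for v in y:
        csum += (v - mean) / (sd * math.sqrt(n))
        stats.append(abs(csum))
    return stats                # the sup over these is the test statistic

# Series with a break at t = 50: mean jumps from 0 to 2.
y = [0.0] * 50 + [2.0] * 50
s = fluctuation_stat(y)
khat = max(range(len(s)), key=s.__getitem__)   # change-point estimator
print(khat)   # argmax lands at the break date (index 49)
```

Under no break the partial sums fluctuate around zero; a break makes them drift and peak at the break date, which is why the argmax serves as the change-point estimator.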
The parametric temporal data model captures a real-world entity in a single tuple, which reduces query-language complexity. Such a data model, however, is difficult to implement on top of conventional databases because of its unfixed attribute sizes. XML is a mature technology and can be an elegant solution to this challenge. Representing data in XML, however, raises a question about storage efficiency. The goal of this work is to provide a straightforward answer to that question. To this end, we compare three different storage models for the parametric temporal data model and show that XML is no worse than the other approaches. Furthermore, XML outperforms the other storage models under certain conditions. Therefore, our simulation results provide a positive indication that the myth about XML does not hold for the parametric temporal data model.
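A hypothetical illustration of why XML suits this model (element names invented, not the paper's schema): one tuple holds an entity's whole history, so an attribute's value repeats a variable number of times, which is awkward for fixed relational columns but natural as nested XML elements.

```python
# One employee tuple with a variable-length salary history, stored as XML.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<employee id="e7">
  <salary>
    <value from="2001" to="2003">50000</value>
    <value from="2004" to="2006">60000</value>
  </salary>
</employee>
""")

def salary_at(emp, year):
    """Walk the variable-length history to find the value valid in `year`."""
    for v in emp.find("salary"):
        if int(v.get("from")) <= year <= int(v.get("to")):
            return int(v.text)

print(salary_at(doc, 2005))   # 60000
```

Adding a third salary period changes nothing about the schema; in a relational encoding it would force either a wide sparse row or a separate history table and joins.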
This paper focuses on object-oriented data modelling in computer-aided design (CAD) databases. Starting with a discussion of the data modelling requirements of CAD applications, appropriate data modelling features are introduced. A feasible approach to selecting the “best” data model for an application is to analyze the data that have to be stored in the database. A data model is appropriate for a given task if the information of the application environment can be easily mapped to it. Thus, the data involved are analyzed, and an object-oriented data model appropriate for CAD applications is derived. Based on a review of object-oriented techniques applied in CAD, object-oriented data modelling in CAD is addressed in detail. Finally, 3D geometrical data models and an implementation using the object-oriented method are presented.
In order to find an effective way to improve the quality of school management, it is necessary to extract valuable information from students' original data and provide feedback for student management. First, some new and successful educational data mining models were analyzed and compared. These models outperform traditional models (such as the Knowledge Tracing Model) in efficiency, comprehensiveness, ease of use, stability, and so on. Then, a neural network algorithm was applied to explore the feasibility of educational data mining in student management, and the results show that it has enough predictive accuracy and reliability to be put into practice. Finally, the possibility and prospects of applying educational data mining in a teaching management system for university students were assessed.
This study demonstrates the complexity and importance of water quality as a measure of the health and sustainability of ecosystems that directly influence biodiversity, human health, and the world economy. The predictability of water quality thus plays a crucial role in managing our ecosystems to make informed decisions and, hence, ensure proper environmental management. This study addresses these challenges by proposing an effective machine learning methodology applied to the public “Water Quality” dataset. The methodology models the dataset to provide prediction and classification analysis with high values of evaluation parameters such as accuracy, sensitivity, and specificity. The proposed methodology is based on two novel approaches: (a) the SMOTE method to deal with unbalanced data and (b) skillfully applied classical machine learning models. This paper uses Random Forests, Decision Trees, XGBoost, and Support Vector Machines because they can handle large datasets, train models on skewed datasets, and provide high accuracy in water quality classification. A key contribution of this work is the use of custom sampling strategies within the SMOTE approach, which significantly enhanced performance metrics and improved class-imbalance handling. The results demonstrate significant improvements in predictive performance, achieving the highest reported metrics: accuracy (98.92% vs. 96.06%), sensitivity (98.3% vs. 71.26%), and F1 score (98.37% vs. 79.74%) using the XGBoost model. These improvements underscore the effectiveness of our custom SMOTE sampling strategies in addressing class imbalance. The findings contribute to environmental management by enabling ecology specialists to develop more accurate strategies for monitoring, assessing, and managing drinking water quality, ensuring better ecosystem and public health outcomes.
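To illustrate the oversampling idea, here is a hand-rolled minimal version of SMOTE (our own sketch, not the paper's pipeline or a library call): each synthetic minority point is interpolated between a minority sample and one of its k nearest minority neighbours, and the number of synthetic points is the "sampling strategy" knob.

```python
# Minimal SMOTE-style oversampling for 2-D points (illustrative only).
import random
random.seed(0)

def smote(minority, n_new, k=2):
    """Generate n_new synthetic points from 2-D minority samples."""
    synthetic = []
    for _ in range(n_new):
        a = random.choice(minority)
        # k nearest minority neighbours of a (brute force)
        nbrs = sorted((p for p in minority if p is not a),
                      key=lambda p: (p[0] - a[0]) ** 2 + (p[1] - a[1]) ** 2)[:k]
        b = random.choice(nbrs)
        t = random.random()                  # interpolation factor in [0, 1)
        synthetic.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new = smote(minority, n_new=4)
print(len(new))   # 4 synthetic minority samples, all inside the class region
```

Choosing `n_new` per class is the custom-sampling-strategy lever the abstract refers to: rather than always equalizing class counts, the target ratio itself can be tuned as a hyperparameter.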
Objective To study the causal relationship between R&D investment and enterprise performance of domestic pharmaceutical enterprises. Methods A panel data model was adopted for empirical analysis. Results and Conclusion Increasing the R&D investment intensity of pharmaceutical enterprises in the Yangtze River Delta and Zhejiang by 1% will increase their profit margins by 0.79% and 0.46%, respectively. Conversely, if the profit margin increases by 1%, the R&D investment intensity will increase by 0.25% and 0.19%. If the profit margin of pharmaceutical enterprises in Beijing-Tianjin-Hebei, Chengdu-Chongqing, and other regions increases by 1%, the R&D investment intensity will increase by 0.14%, 0.07%, and 0.1%, respectively, which is lower than in the Yangtze River Delta and Zhejiang. The relationship between R&D investment and enterprise performance of pharmaceutical enterprises in the Yangtze River Delta and Zhejiang is a Granger causality with a two-way positive effect. Profits and R&D investment of pharmaceutical enterprises in Beijing-Tianjin-Hebei, Chengdu-Chongqing, and other regions also exhibit Granger causality. In the Pearl River Delta, however, profits and R&D investment did not pass the stability test, so the causality between them cannot be determined.
Data-driven techniques are reshaping blast furnace iron-making process (BFIP) modeling, but their “black-box” nature often obscures interpretability and accuracy. To overcome these limitations, our mechanism and data co-driven strategy (MDCDS) enhances model transparency and molten iron quality (MIQ) prediction. By zoning the furnace and applying mechanism-based features for material and thermal trends, coupled with a novel stationary broad feature learning system (StaBFLS), interference caused by nonstationary process characteristics is mitigated and the intrinsic information embedded in the BFIP is mined. Subsequently, by integrating the stationary feature representation with mechanism features, our temporal matching broad learning system (TMBLS) aligns process and quality variables using MIQ as the target. This integration allows us to establish process monitoring statistics using both mechanism and data-driven features, as well as to detect modeling deviations. Validated against real-world BFIP data, our MDCDS model demonstrates consistent process alignment, robust feature extraction, and improved MIQ modeling, yielding better fault detection. Additionally, we offer detailed insights into the validation process, including parameter baselining and optimization.
The study aimed to develop a customized Data Governance Maturity Model (DGMM) for the Ministry of Defence (MoD) in Kenya to address data governance challenges in military settings. Current frameworks lack specific requirements for the defence industry. The model uses Key Performance Indicators (KPIs) to enhance data governance procedures. Design Science Research guided the study, using qualitative and quantitative methods to gather data from MoD personnel. Major deficiencies were found in data integration, quality control, and adherence to data security regulations. The DGMM helps the MoD improve personnel, procedures, technology, and organizational elements related to data management. The model was tested against ISO/IEC 38500 and recommended for use in other government sectors with similar data governance issues. The DGMM has the potential to enhance data management efficiency, security, and compliance in the MoD and to guide further research in military data governance.
To improve the performance of traditional map matching algorithms in freeway traffic state monitoring systems using low-logging-frequency GPS (global positioning system) probe data, a map matching algorithm based on the Oracle spatial data model is proposed. The algorithm uses the Oracle road network data model to analyze the spatial relationships between massive GPS positioning points and freeway networks, builds an N-shortest-path algorithm to efficiently find reasonable candidate routes between GPS positioning points, and uses a fuzzy logic inference system to determine the final matched traveling route. In an implementation with field data from Los Angeles, the computation speed of the algorithm is about 135 GPS positioning points per second and the accuracy is 98.9%. The results demonstrate the effectiveness and accuracy of the proposed algorithm for mapping massive GPS positioning data onto freeway networks with complex geometric characteristics.
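A toy analogue of the candidate-route step (invented mini-network, not the paper's Oracle network data model): between two consecutive GPS fixes, shortest paths over the road graph are enumerated and the N best are kept as candidates for the fuzzy inference stage. A single-shortest-path Dijkstra sketch:

```python
# Dijkstra shortest path over a tiny freeway graph
# {node: [(neighbour, length_km), ...]}; candidate routes between GPS fixes
# would be the N cheapest such paths.
import heapq

def shortest_path(graph, src, dst):
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

freeway = {"A": [("B", 2.0), ("C", 5.0)],
           "B": [("C", 1.0)],
           "C": []}
cost, route = shortest_path(freeway, "A", "C")
print(cost, route)   # 3.0 ['A', 'B', 'C']
```

With low logging frequency, consecutive fixes can be several links apart, so route enumeration between them (rather than nearest-link snapping alone) is what makes the matching robust.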
Based on a study of the basic characteristics of geological objects and the special requirements of computing 3D geological models, this paper presents an object-oriented 3D topological data model. In this model, the geological objects are divided into four object classes: point, line, area, and volume. The volume class is further divided into four subclasses: the composite volume, the complex volume, the simple volume, and the component. Twelve kinds of topological relations and the related data structures are designed for the geological objects.
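A hypothetical sketch of the four object classes and the volume subclasses described above, rendered as a Python class hierarchy (class names follow the abstract; the instance and relation shown are our own example):

```python
# Object classes of the 3D topological data model.
class GeoObject: ...
class Point(GeoObject): ...
class Line(GeoObject): ...
class Area(GeoObject): ...

# Volume subclasses: composite > complex > simple > component.
class Volume(GeoObject): ...
class Component(Volume): ...
class SimpleVolume(Volume):          # bounded by components
    def __init__(self, components): self.components = components
class ComplexVolume(Volume):         # aggregates simple volumes
    def __init__(self, simples): self.simples = simples
class CompositeVolume(Volume):       # aggregates complex volumes
    def __init__(self, complexes): self.complexes = complexes

# One topological relation as an example: a fault block (simple volume)
# "is bounded by" its component surfaces.
block = SimpleVolume(components=[Component(), Component()])
print(len(block.components), isinstance(block, Volume))   # 2 True
```

The subclass chain mirrors the aggregation hierarchy, so the twelve topological relations can be expressed as typed references between these classes.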
In the application development of databases, sharing information among different DBMSs is an important and meaningful technical subject. This paper analyzes the schema definition and physical organization of popular relational DBMSs and suggests the use of an intermediary schema. This technology provides many advantages, such as powerful extensibility and ease of integrating data conversions among different DBMSs. This paper introduces a data conversion system under the DOS and XENIX operating systems.
The concept of multilevel security (MLS) is commonly used in the study of data models for secure databases. But there are some limitations in the basic MLS model, such as inference channels. The availability and data integrity of the system are seriously constrained by the basic MLS model's “No Read Up, No Write Down” property. In order to eliminate the covert channels, polyinstantiation and cover stories are used in the new data model. The read and write rules have been redefined to improve the agility and usability of a system based on the MLS model. All the methods in the improved data model make the system more secure, agile, and usable.
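For reference, a sketch of the basic rules the abstract starts from (the Bell-LaPadula-style “No Read Up, No Write Down” property, before the paper's redefinitions); the level names and helpers are our own:

```python
# Basic MLS access rules: a subject's clearance level gates reads and writes.
LEVELS = {"unclassified": 0, "secret": 1, "top_secret": 2}

def can_read(subject, obj):    # "no read up": clearance must dominate the object
    return LEVELS[subject] >= LEVELS[obj]

def can_write(subject, obj):   # "no write down": object must dominate the clearance
    return LEVELS[subject] <= LEVELS[obj]

print(can_read("secret", "top_secret"))     # False: reading up is denied
print(can_write("secret", "unclassified"))  # False: writing down is denied
print(can_read("top_secret", "secret"))     # True
```

The write rule is what hurts usability: a high-cleared subject cannot update low data at all, which motivates the redefined rules and cover stories the paper proposes.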
A uniform metadata representation is introduced for heterogeneous databases, multimedia information, and other information sources. Some features of metadata are analyzed, and the limitations of existing metadata models are compared with the new one. The metadata model is described in XML, which is well suited to metadata denotation and exchange. Well-structured data, semi-structured data, and exterior file data without structure are all described in the metadata model. The model provides feasibility and extensibility for constructing a uniform metadata model of a data warehouse.
A multilevel secure relation hierarchical data model for multilevel secure databases is extended from the relation hierarchical data model in a single-level environment in this paper. Based on the model, an upper-lower-layer relational integrity is presented after we analyze and eliminate the covert channels caused by database integrity. Two SQL statements are extended to process polyinstantiation in the multilevel secure environment. A system based on the multilevel secure relation hierarchical data model is capable of integratively storing and manipulating complicated objects (e.g., multilevel spatial data) and conventional data (e.g., integers, real numbers, and character strings) in a multilevel secure database.
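A toy illustration of the polyinstantiation idea (our own minimal version, not the paper's extended SQL): the same primary key may carry one tuple per classification level, so a low-cleared user sees a cover story while a high-cleared user sees the real value, and no covert channel arises from key collisions.

```python
# Polyinstantiated "table": the key is (primary key, classification level).
LEVELS = {"unclassified": 0, "secret": 1}

table = {("SS-Enterprise", "unclassified"): {"mission": "exploration"},  # cover story
         ("SS-Enterprise", "secret"):       {"mission": "spying"}}

def select(key, clearance):
    """Return the highest-level tuple the user is cleared to read."""
    visible = [(LEVELS[lvl], row) for (k, lvl), row in table.items()
               if k == key and LEVELS[lvl] <= LEVELS[clearance]]
    return max(visible, key=lambda t: t[0])[1] if visible else None

print(select("SS-Enterprise", "unclassified")["mission"])  # exploration
print(select("SS-Enterprise", "secret")["mission"])        # spying
```

Without polyinstantiation, a low user inserting over an existing secret key would either be rejected (leaking the key's existence) or overwrite secret data; keeping both tuples avoids both problems.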
To better manipulate the heterogeneous and distributed data in the data grid, a dataspace management framework for grid data is proposed based on in-depth research on grid technology. Combining dataspace management technologies, such as the data model iDM and the query language iTrails, with the grid data access middleware OGSA-DAI, a grid dataspace management prototype system is built, in which tasks like data accessing, abstraction, indexing, service management, and query answering are implemented by OGSA-DAI workflows. Experimental results show that it is feasible to apply a dataspace management mechanism to the grid environment. Dataspace meets grid data management needs in that it hides the heterogeneity and distribution of grid data and can adapt to the dynamic characteristics of the grid. The proposed grid dataspace management provides a new method for grid data management.
Building model data organization is often programmed to solve a specific problem, resulting in an inability to organize indoor and outdoor 3D scenes in an integrated manner. In this paper, existing building spatial data models are studied, and the characteristics of the building information modeling standard (IFC), the city geographic modeling language (CityGML), the indoor modeling language (IndoorGML), and other models are compared and analyzed. The CityGML and IndoorGML models face challenges in satisfying diverse application scenarios and requirements due to limitations in their expressive capabilities. It is proposed to combine the semantic information of the model objects to effectively partition and organize indoor and outdoor 3D spatial model data and to construct an indoor and outdoor data organization mechanism of “chunk-layer-subobject-entrances-area-detail object.” The method is verified by proposing a 3D data organization method for indoor and outdoor space and constructing a 3D visualization system based on it.
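The proposed “chunk-layer-subobject-entrances-area-detail object” organization can be sketched as a nested tree (all instance names below are invented for illustration):

```python
# Nested tree for the six-level indoor/outdoor organization mechanism.
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                      # chunk, layer, subobject, entrance, area, detail
    name: str
    children: list = field(default_factory=list)

def count(node, kind):
    """Count nodes of a given kind anywhere under `node`."""
    return (node.kind == kind) + sum(count(c, kind) for c in node.children)

scene = Node("chunk", "campus-block", [
    Node("layer", "floor-1", [
        Node("subobject", "lobby", [
            Node("entrance", "east-door"),
            Node("area", "reception", [Node("detail", "desk")]),
        ]),
    ]),
])

print(count(scene, "entrance"), count(scene, "detail"))   # 1 1
```

Entrances sit between the outdoor chunk/layer levels and the indoor area/detail levels, which is how the hierarchy links the two scene types that existing models keep separate.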
Funding: Supported by the National Natural Science Foundation of China (71131008 (Key Project) and 71271179).
Funding: This work was partly supported by collaborative education projects of industry and academia, a 2019 Sichuan teaching reform and research project, and a 2019 teaching reform and research project of the University of Electronic Science and Technology.
Funding: Supported by the National Nature Science Foundation of China (U21A20166), the Science and Technology Development Foundation of Jilin Province (20230508095RC), the Major Science and Technology Projects of Jilin Province and Changchun City (20220301033GX), the Development and Reform Commission Foundation of Jilin Province (2023C034-3), and the Interdisciplinary Integration and Innovation Project of JLU (JLUXKJC2020202).
Funding: Funded by the Science and Technology Project of State Grid Corporation of China (5108-202355437A-3-2-ZN).
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 11801438, 12161072 and 12171388; the Natural Science Basic Research Plan in Shaanxi Province of China under Grant No. 2023-JC-YB-058; and the Innovation Capability Support Program of Shaanxi under Grant No. 2020PT-023.
Funding: Supported by the National Research Foundation in Korea through contract N-12-NM-IR05.
Funding: Sponsored by the Ability Enhancement Project of Teaching Staff in Harbin Institute of Technology (Grant No. 06).
Funding: Shenyang Pharmaceutical University Young and Middle-aged Teacher Career Development Support Plan; Public Welfare Research Fund for Scientific Undertakings of Liaoning Province in 2022 (Soft Science Research Plan) (No. 2022JH4/10100040).
Abstract: Objective To study the causal relationship between R&D investment and enterprise performance of domestic pharmaceutical enterprises. Methods A panel data model was adopted for empirical analysis. Results and Conclusion Increasing the R&D investment intensity of pharmaceutical enterprises in the Yangtze River Delta and Zhejiang by 1% will increase their profit margins by 0.79% and 0.46%, respectively. Conversely, if the profit margin increases by 1%, the R&D investment intensity will increase by 0.25% and 0.19%. If the profit margins of pharmaceutical enterprises in Beijing-Tianjin-Hebei, Chengdu-Chongqing, and other regions increase by 1%, the R&D investment intensity will increase by 0.14%, 0.07%, and 0.1%, respectively, which is lower than in the Yangtze River Delta and Zhejiang. The relationship between R&D investment and enterprise performance of pharmaceutical enterprises in the Yangtze River Delta and Zhejiang Province is one of Granger causality, showing a two-way positive effect. Profits and R&D investment of pharmaceutical enterprises in Beijing-Tianjin-Hebei, Chengdu-Chongqing, and other regions also exhibit Granger causality. In the Pearl River Delta, however, profits and R&D investment did not pass the stationarity test, so the causality between them cannot be determined.
Funding: Supported in part by the National Natural Science Foundation of China (61933015, 61703371, 62273030), the Central University Basic Research Fund of China (K20200002) (for the NGICS Platform, Zhejiang University), and the Social Development Project of Zhejiang Provincial Public Technology Research (LGF19F030004, LGG21F030015).
Abstract: Data-driven techniques are reshaping blast furnace iron-making process (BFIP) modeling, but their "black-box" nature often obscures interpretability and accuracy. To overcome these limitations, our mechanism and data co-driven strategy (MDCDS) enhances model transparency and molten iron quality (MIQ) prediction. By zoning the furnace and applying mechanism-based features for material and thermal trends, coupled with a novel stationary broad feature learning system (StaBFLS), interference caused by nonstationary process characteristics is mitigated and the intrinsic information embedded in the BFIP is mined. Subsequently, by integrating the stationary feature representation with the mechanism features, our temporal matching broad learning system (TMBLS) aligns process and quality variables using MIQ as the target. This integration allows us to establish process monitoring statistics using both mechanism and data-driven features, as well as to detect modeling deviations. Validated against real-world BFIP data, our MDCDS model demonstrates consistent process alignment, robust feature extraction, and improved MIQ modeling, yielding better fault detection. Additionally, we offer detailed insights into the validation process, including parameter baselining and optimization.
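The StaBFLS itself is not described in enough detail here to reproduce, so the sketch below shows only the generic idea it builds on: transforming a trending (nonstationary) process signal into an approximately stationary one before feeding it to a learning system, here via simple first differencing. The synthetic "furnace-like" signal is an assumption for illustration.

```python
import math

def first_difference(series):
    """Return x[t] - x[t-1]; removes the drift of a linear trend."""
    return [b - a for a, b in zip(series, series[1:])]

# A signal with a steady upward drift (slope 0.5) plus a cyclic component.
signal = [0.5 * t + math.sin(t) for t in range(50)]
diffed = first_difference(signal)

# After differencing, the series fluctuates around the drift rate (~0.5)
# instead of growing without bound.
mean_diff = sum(diffed) / len(diffed)
```

Real stationarization pipelines (including broad learning variants) use richer transforms, but the before/after contrast above is the property they all aim for.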
Abstract: The study aimed to develop a customized Data Governance Maturity Model (DGMM) for the Ministry of Defence (MoD) in Kenya to address data governance challenges in military settings. Current frameworks lack specific requirements for the defence industry. The model uses Key Performance Indicators (KPIs) to enhance data governance procedures. Design Science Research guided the study, using qualitative and quantitative methods to gather data from MoD personnel. Major deficiencies were found in data integration, quality control, and adherence to data security regulations. The DGMM helps the MoD improve personnel, procedures, technology, and organizational elements related to data management. The model was tested against ISO/IEC 38500 and recommended for use in other government sectors with similar data governance issues. The DGMM has the potential to enhance data management efficiency, security, and compliance in the MoD and to guide further research in military data governance.
Abstract: To improve the performance of traditional map matching algorithms in freeway traffic state monitoring systems that use low-logging-frequency GPS (global positioning system) probe data, a map matching algorithm based on the Oracle spatial data model is proposed. The algorithm uses the Oracle road network data model to analyze the spatial relationships between massive numbers of GPS positioning points and freeway networks, builds an N-shortest-path algorithm to efficiently find reasonable candidate routes between GPS positioning points, and uses a fuzzy logic inference system to determine the final matched traveling route. In an implementation with field data from Los Angeles, the computation speed of the algorithm is about 135 GPS positioning points per second and the accuracy is 98.9%. The results demonstrate the effectiveness and accuracy of the proposed algorithm for mapping massive GPS positioning data onto freeway networks with complex geometric characteristics.
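The paper's candidate-route step relies on Oracle's network data model and an N-shortest-path search, which are not shown here; the sketch below illustrates just the underlying single-shortest-path (Dijkstra) search between two map-matched points over a toy road graph. Node names and edge lengths are made up for illustration.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over graph: node -> list of (neighbor, length)."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, length in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + length, nxt, path + [nxt]))
    return float("inf"), []

# Toy freeway segment graph (directed edges with lengths in km).
roads = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.5)],
}
cost, route = shortest_path(roads, "A", "D")
```

An N-shortest-path variant keeps popping goal-reaching entries instead of returning at the first one, yielding several candidate routes for the fuzzy inference stage to rank.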
Abstract: Based on a study of the basic characteristics of geological objects and the special requirements of computing 3D geological models, this paper presents an object-oriented 3D topological data model. In this model, geological objects are divided into four object classes: point, line, area, and volume. The volume class is further divided into four subclasses: the composite volume, the complex volume, the simple volume, and the component. Twelve kinds of topological relations and the related data structures are designed for the geological objects.
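The four object classes and four volume subclasses map naturally onto a class hierarchy. The sketch below is a minimal rendering of that hierarchy; the paper's twelve topological relations are reduced to a single illustrative "boundary" link, and the example object names are invented.

```python
class GeoObject:
    def __init__(self, name):
        self.name = name
        self.boundary = []  # lower-dimensional objects bounding this one

# The four object classes of the model.
class Point(GeoObject): pass
class Line(GeoObject): pass
class Area(GeoObject): pass
class Volume(GeoObject): pass

# The four volume subclasses.
class Component(Volume): pass
class SimpleVolume(Volume): pass       # bounded directly by areas/components
class ComplexVolume(Volume): pass      # aggregates simple volumes
class CompositeVolume(Volume): pass    # aggregates complex volumes

# Example: a simple volume bounded by an area, itself bounded by a line.
edge = Line("fault-trace")
face = Area("stratum-top"); face.boundary.append(edge)
body = SimpleVolume("ore-body"); body.boundary.append(face)
```

Walking the boundary links downward (volume to area to line to point) is the basic traversal that the full set of topological relations generalizes.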
Abstract: In database application development, sharing information among different DBMSs is an important and meaningful technical subject. This paper analyzes the schema definition and physical organization of popular relational DBMSs and suggests the use of an intermediary schema. This technology provides many advantages, such as powerful extensibility and ease of integrating data conversions among different DBMSs. This paper introduces a data conversion system under the DOS and XENIX operating systems.
Abstract: The concept of multilevel security (MLS) is commonly used in the study of data models for secure databases. However, there are some limitations in the basic MLS model, such as inference channels. The availability and data integrity of the system are seriously constrained by the "No Read Up, No Write Down" property of the basic MLS model. In order to eliminate the covert channels, polyinstantiation and cover stories are used in the new data model. The read and write rules have been redefined to improve the agility and usability of a system based on the MLS model. Together, these methods make the improved data model more secure, agile, and usable.
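The "No Read Up, No Write Down" rules referenced here are the classic Bell-LaPadula properties, which can be stated in a few lines using integer security levels (higher = more secret). The paper's improved model relaxes these rules with polyinstantiation and cover stories; that refinement is not shown in this sketch.

```python
def can_read(subject_level, object_level):
    # No Read Up: a subject may read only objects at or below its level.
    return subject_level >= object_level

def can_write(subject_level, object_level):
    # No Write Down: a subject may write only objects at or above its
    # level, so secret information cannot leak into lower-level objects.
    return subject_level <= object_level

CONFIDENTIAL, SECRET = 1, 2
```

The usability cost the abstract mentions is visible here: a SECRET subject cannot write to a CONFIDENTIAL object at all, even legitimately, which is exactly what motivates the redefined rules.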
Abstract: A uniform metadata representation is introduced for heterogeneous databases, multimedia information, and other information sources. Some features of metadata are analyzed, and the limitations of existing metadata models are compared with the new one. The metadata model is described in XML, which is well suited for metadata denotation and exchange. Well-structured data, semi-structured data, and exterior file data without structure are all described in the metadata model. The model provides feasibility and extensibility for constructing a uniform metadata model for data warehouses.
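The paper's exact XML schema is not given in the abstract, so the snippet below only illustrates the general approach: expressing a metadata record for one heterogeneous source as XML with the standard library. The element and attribute names are assumptions, not the paper's vocabulary.

```python
import xml.etree.ElementTree as ET

# Build a metadata record describing one source in a uniform shape.
record = ET.Element("metadata")
source = ET.SubElement(record, "source", type="relational")
ET.SubElement(source, "name").text = "orders_db"
structure = ET.SubElement(record, "structure")
structure.text = "well-structured"

# Serialize for exchange between systems.
xml_text = ET.tostring(record, encoding="unicode")
```

Because every source, relational, multimedia, or plain file, is described by the same element structure, a consumer can parse records uniformly regardless of the underlying system.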
Abstract: In this paper, a multilevel secure relation hierarchical data model for multilevel secure databases is extended from the relation hierarchical data model in a single-level environment. Based on the model, an upper-lower layer relational integrity is presented after we analyze and eliminate the covert channels caused by the database integrity. Two SQL statements are extended to process polyinstantiation in the multilevel secure environment. A system based on the multilevel secure relation hierarchical data model is capable of integratively storing and manipulating complicated objects (e.g., multilevel spatial data) and conventional data (e.g., integers, real numbers, and character strings) in a multilevel secure database.
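Polyinstantiation, which the extended SQL statements here manage, means the same primary key can hold different tuples at different security levels, so a low-level user sees a cover story while a high-level user sees the real value. The sketch below models this with a dictionary keyed by (key, level); the levels and example data are illustrative assumptions.

```python
UNCLASSIFIED, SECRET = 0, 1

# Polyinstantiated rows: one logical key, one tuple per classification.
table = {
    ("flight-101", UNCLASSIFIED): {"destination": "training range"},
    ("flight-101", SECRET): {"destination": "forward base"},
}

def select(table, key, user_level):
    """Return the highest-classified row the user is cleared to read."""
    visible = [
        (level, row)
        for (k, level), row in table.items()
        if k == key and level <= user_level
    ]
    return max(visible)[1] if visible else None
```

Each clearance level gets a self-consistent view of the same key, which is how polyinstantiation closes the covert channel that a visible "row exists but is hidden" signal would otherwise open.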
Abstract: To better manipulate heterogeneous and distributed data in the data grid, a dataspace management framework for grid data is proposed based on in-depth research on grid technology. Combining dataspace management technologies, such as the data model iDM and the query language iTrails, with the grid data access middleware OGSA-DAI, a grid dataspace management prototype system is built, in which tasks such as data access, abstraction, indexing, service management, and query answering are implemented by OGSA-DAI workflows. Experimental results show that it is feasible to apply a dataspace management mechanism to the grid environment. Dataspace meets grid data management needs in that it hides the heterogeneity and distribution of grid data and can adapt to the dynamic characteristics of the grid. The proposed grid dataspace management provides a new method for grid data management.
Abstract: Building model data organization is often programmed to solve a specific problem, resulting in an inability to organize indoor and outdoor 3D scenes in an integrated manner. In this paper, existing building spatial data models are studied, and the characteristics of the building information modeling standard IFC (Industry Foundation Classes), CityGML (City Geography Markup Language), IndoorGML (Indoor Geography Markup Language), and other models are compared and analyzed. The CityGML and IndoorGML models face challenges in satisfying diverse application scenarios and requirements due to limitations in their expressive capabilities. It is proposed to use the semantic information of model objects to effectively partition and organize indoor and outdoor 3D spatial model data and to construct an indoor and outdoor data organization mechanism of "chunk-layer-subobject-entrances-area-detail object." The method is verified by proposing a 3D data organization method for indoor and outdoor space and constructing a 3D visualization system based on it.