The processing of measuring data plays an important role in reverse engineering. Based on grey system theory, we first propose some methods for processing measuring data in reverse engineering. The measured data usually contain some abnormalities. When the abnormal data are eliminated by filtering, blanks are created. Grey generation and the GM(1,1) model are used to create new data for these blanks. For an uneven data sequence created by measuring error, mean generation is used to smooth it, and then stepwise and smooth generations are used to improve the data sequence.
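As a sketch of how GM(1,1) can generate replacement values for a blank left by filtering, the fragment below fits the standard grey model to a short sequence and forecasts new points in Python. The sample sequence and function name are illustrative, not taken from the paper.

    import numpy as np

    def gm11_forecast(x0, n_ahead=1):
        # Minimal GM(1,1) sketch: fit the grey model on sequence x0,
        # then forecast n_ahead new points on the original scale.
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                             # 1-AGO accumulated sequence
        z1 = 0.5 * (x1[1:] + x1[:-1])                  # background (mean) values
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        x1_hat = lambda k: (x0[0] - b / a) * np.exp(-a * k) + b / a
        ks = np.arange(len(x0) - 1, len(x0) + n_ahead)
        return np.diff(x1_hat(ks))                     # IAGO restores the x0 scale

    print(gm11_forecast([2.87, 3.02, 3.11, 3.25, 3.36], n_ahead=2))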
A data identifier (DID) is an essential tag or label in all kinds of databases, particularly those related to integrated computational materials engineering (ICME), inheritable integrated intelligent manufacturing (I3M), and the Industrial Internet of Things. With the guidance and rapid acceleration of the development of advanced materials, as envisioned by official documents worldwide, more investigations are required to construct the relevant numerical standards for materials informatics. This work proposes a universal DID format consisting of a set of build chains, which aligns with the classical form of identifier in both international and national standards, such as ISO/IEC 29168-1:2000, GB/T 27766-2011, GA/T 543.2-2011, GM/T 0006-2012, GJB 7365-2011, SL 325-2014, SL 607-201, WS 363.2-2011, and QX/T 39-2005. Each build chain is made up of capital letters and numbers, with no symbols. Moreover, the total length of each build chain is not restricted, which follows the formation of the Universal Coded Character Set in the international standard ISO/IEC 10646. Based on these rules, the proposed DID is flexible and convenient for extending and sharing in and between various cloud-based platforms. Accordingly, classical two-dimensional (2D) codes, including the Hanxin Code, Lots Perception Matrix (LP) Code, Quick Response (QR) Code, Grid Matrix (GM) Code, and Data Matrix (DM) Code, can be constructed and precisely recognized and/or decoded by either smartphones or specific machines. By utilizing these 2D codes as the fingerprints of a set of data linked with cloud-based platforms, progress and updates in the composition-processing-structure-property-performance workflow can be tracked spontaneously, paving a path to accelerate the discovery and manufacture of advanced materials and enhance research productivity, performance, and collaboration.
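Because each build chain is restricted to capital letters and digits, with no symbols and no length limit, a minimal validity check follows directly from the stated rules. The sketch below is only an illustration; the function name and example chains are invented.

    import re

    BUILD_CHAIN = re.compile(r"[A-Z0-9]+")    # capital letters and digits only

    def is_valid_build_chain(chain: str) -> bool:
        # No length restriction, matching the proposed DID format rules.
        return BUILD_CHAIN.fullmatch(chain) is not None

    print(is_valid_build_chain("CU50NI30FE20"))   # True: letters and digits only
    print(is_valid_build_chain("cu-50/ni30"))     # False: lowercase and symbols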
The nature of measured data varies among different disciplines of the geosciences. In rock engineering, the features of the data play a leading role in determining the feasible methods for its proper manipulation. The present study focuses on resolving one of the major deficiencies of conventional neural networks (NNs) in dealing with rock engineering data. Since the samples are obtained from hundreds of meters below the surface with the utmost difficulty, the number of samples is always limited. Meanwhile, the experimental analysis of these samples may yield many repetitive values and zeros, and conventional neural networks are incapable of building robust models in the presence of such data. Moreover, these networks strongly depend on the initial weights and bias values for making reliable predictions. With this in mind, the current research introduces a novel kind of neural network processing framework for geological data that does not suffer from the limitations of conventional NNs. The introduced single-data-based feature engineering network extracts all the information wrapped in every single data point without being affected by the other points. This method, being completely different from conventional NNs, rearranges all the basic elements of the neuron model into a new structure; its mathematical formulation was therefore derived from the very beginning, and the corresponding programming codes were developed in MATLAB and Python, since they could not be found in any common programming software at the time. This new kind of network was first evaluated through computer-based simulations of rock cracks in the 3DEC environment. After the model's reliability was confirmed, it was adopted in two case studies for estimating the tensile strength and shear strength, respectively, of real rock samples: coal core samples from the southern Qinshui Basin of China, and gas hydrate-bearing sediment (GHBS) samples from the Nankai Trough of Japan. The coal samples underwent nuclear magnetic resonance (NMR) measurements and scanning electron microscopy (SEM) imaging to investigate their original micro- and macro-fractures. Once these experiments were done, the rock mechanical properties, including tensile strength, were measured using a rock mechanical test system, while the shear strength of the GHBS samples was acquired through triaxial and direct shear tests. According to the obtained results, the new network structure outperformed conventional neural networks in both the simulation-based and case study estimations of tensile and shear strength. Even though the proposed approach originally aimed at resolving the issue of a limited dataset, its unique properties can also be applied to larger datasets from other subsurface measurements.
Background knowledge is important for data mining, especially in complicated situations. Ontological engineering is the successor of knowledge engineering, and the sharable knowledge bases built on ontologies can be used to provide background knowledge to direct the data mining process. This paper gives a general introduction to the method and presents a practical analysis example using an SVM (support vector machine) as the classifier. Gene Ontology and its accompanying annotations compose a large knowledge base on which much research has been carried out. A microarray dataset is the output of a DNA chip. With the help of Gene Ontology, we present a more elaborate analysis of microarray data than previous work. The method can also be used in other fields with similar scenarios.
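As a hedged sketch of the overall scheme, the fragment below groups gene-expression columns by a stand-in Gene Ontology category before training an SVM; the random data, labels, and GO grouping are placeholders for the real annotations the paper relies on.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    expr = rng.normal(size=(60, 500))          # 60 samples x 500 genes (synthetic)
    y = rng.integers(0, 2, size=60)            # class labels (synthetic)
    go_group = rng.integers(0, 25, size=500)   # stand-in GO term per gene

    # Background-knowledge step: aggregate expression within each GO category,
    # then classify the aggregated features with an SVM.
    features = np.stack([expr[:, go_group == g].mean(axis=1) for g in range(25)],
                        axis=1)
    print(cross_val_score(SVC(kernel="rbf"), features, y, cv=5).mean())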
3D spatial data modelling and simulation are the core of 3D GIS and can be adopted in different domains. A data model based on the Quasi Tri-Prism Volume (QTPV) is proposed, and the QTPV definition and its special cases are discussed. Using the QTPV and its special cases, irregular natural geological bodies and regular subsurface engineering can be described efficiently. The proposed model is composed of five primitives and six objects. The data structures and topological relationships of the five primitives and the three objects describing stratigraphy are designed in detail. Schemes are designed for the QTPV modelling of stratigraphy and subsurface engineering according to the modelling data, and the model manipulation method of cutting a QTPV by an arbitrary plane is discussed. Using the VC++ 6.0 programming language integrated with an SQL database and the OpenGL graphics library under the Windows environment, a system prototype, 3DGeoMV, has been developed. The experimental results show that the QTPV model is feasible and efficient in modelling subsurface engineering.
The key to developing 3-D GISs is the study of 3-D data models and data structures. Several data models and data structures have been presented by scholars. Because of the complexity of 3-D spatial phenomena, there is no perfect data structure that can describe all spatial entities; every data structure has its own advantages and disadvantages, and it is difficult to design a single data structure to meet different needs. An important subject in 3-D data modelling is therefore developing a data model with integrated vector and raster data structures. A special 3-D spatial data model based on the distribution features of spatial entities should be designed. We took geological exploration engineering as the research background and designed an integrated data model whose data structures integrate vector and raster data by adopting object-oriented techniques. The research achievements are presented in this paper.
With the rapid development of science and technology, the application of intelligent technology in the field of civil engineering has become more extensive, especially in the safety evaluation and management of engineering structures. This paper discusses the role of intelligent technologies (such as artificial intelligence, the Internet of Things, BIM, and big data analysis) in the monitoring, evaluation, and maintenance of engineering structure safety. By studying the principles, application scenarios, and advantages of intelligent technology in structural safety evaluation, this paper summarizes how intelligent technology can improve engineering management efficiency and reduce safety risks, and puts forward the trends and challenges of future development.
Since the end of the previous decade, hypertext techniques have been applied in many areas. A hypertext data model with version control, applied to a digital delivery system for engineering documents named the Optical Disk based Electronic Archives Management System (ODEAMS), is presented first; it has successfully solved some problems in engineering data management. This paper then describes some details of implementing the hypertext network in ODEAMS, after introducing the requirements and characteristics of engineering data management.
The fact that most engineering applications are developed by engineers themselves rather than by computer professionals calls for data modeling methods that are powerful enough to represent complex engineering phenomena, yet simple enough to use. A data modeling method which can help engineers write C++ code of high quality is introduced.
Engineering data are separately organized, and their schemas are increasingly complex and variable. Engineering data management systems need to be able to manage unified data and to be both customizable and extensible, and their design depends heavily on the flexibility and self-description of the data model. The characteristics of engineering data and the facts of their management are analyzed. An engineering data warehouse (EDW) architecture and multi-layer metamodels are then presented, along with an approach to managing and using engineering data through meta objects. Finally, an application, a flight test EDW system (FTEDWS), is described, in which meta-objects are used to manage engineering data in the data warehouse. It shows that adopting a meta-modeling approach provides support for interchangeability and a sufficiently flexible environment in which system evolution and reusability can be handled.
Reverse engineering in the manufacturing field is a process in which digitized data are obtained from an existing object model or a part of it, and the CAD model is then reconstructed. This paper presents an RBF neural network approach to modify and fit the digitized data. The centers for the RBFs are selected using the orthogonal least squares learning algorithm. A mathematically known surface is used to generate a number of samples for training the networks. The trained networks then generate a number of new points, which are compared with the points calculated from the equations. Moreover, a series of practical digitized curves is used to test the approach. The results show that this approach is effective in modifying and fitting digitized data and in generating data points to reconstruct the surface model.
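As a sketch of the underlying idea, the fragment below fits a Gaussian RBF expansion to points sampled from a mathematically known surface. For brevity, the centers are a random subset of the data rather than being selected by orthogonal least squares as in the paper, and the test surface and width parameter are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 2))                  # digitized (x, y) points
    z = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])  # known test surface

    centers = X[rng.choice(len(X), 20, replace=False)]     # simplification: random centers
    sigma = 0.4                                            # illustrative RBF width

    def phi(pts):
        # Gaussian basis matrix: one column per center.
        d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    w, *_ = np.linalg.lstsq(phi(X), z, rcond=None)         # output-layer weights
    z_hat = phi(X) @ w
    print("RMS fitting error:", np.sqrt(np.mean((z_hat - z) ** 2)))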
The intensive and organized use of underground space can now become a core element of the sustainable approach to improving global living quality, and there is growing worldwide interest in underground construction and development. Reduced electricity consumption, effective preservation of green land, sustainable wastewater and sewage treatment, effective reversal of urban environmental degradation, and reliable critical infrastructure management can all improve the quality of life. At the same time, technological innovations such as artificial intelligence (AI), cloud computing (CC), the Internet of Things (IoT), and big data analytics (BDA) play a significant role in improving quality of life. Hence, this study aims to integrate these technological innovations into urban underground engineering to ensure a high quality of life. The study uses big data analytics to assess the status quo of foundation treatment and proposes a conceptual framework named BDA with IoT on urban underground engineering (BIoT-UUE). This framework connects hidden features with various high-level sensing sources and practical predictive model characterization to lower building costs and support productive infrastructure management, disaster preparedness, and modern smart community services. The IoT integration gives an optimal opportunity to work towards the functionality of economical and scalable "digital doubles" of hidden infrastructure, given the increasing sophistication and tooling of the underground environment. The simulation analysis shows the efficiency and cost-effectiveness of the underground engineering to be 96.54% and 97.46%, respectively.
In this paper, we adopt a novel applied approach to fault analysis based on data mining theory. In our research, global information is introduced into the electric power system, and cluster analysis techniques from data mining theory are mainly used to quickly and accurately detect fault components and fault sections, and thus accomplish fault analysis. The main technical contributions and innovations of this paper include introducing global information into electrical engineering and developing a new application of data mining to fault analysis in electrical engineering. Data mining is defined as the process of automatically extracting valid, novel, potentially useful, and ultimately comprehensible information from large databases. It has been widely utilized in both academic and applied scientific research in which the data sets are generated by experiments, and it can contribute much to the study of electrical engineering.
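As an illustration of how cluster analysis can separate faulted records from normal ones, the sketch below clusters synthetic per-component measurements with k-means and flags the minority cluster. The feature construction, cluster count, and data are assumptions, not the paper's method.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    normal = rng.normal(loc=[1.0, 0.0], scale=0.05, size=(95, 2))  # e.g. p.u. voltage, residual current
    faulty = rng.normal(loc=[0.4, 0.8], scale=0.05, size=(5, 2))
    X = np.vstack([normal, faulty])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    minority = np.argmin(np.bincount(labels))   # smaller cluster = suspected faults
    print("suspected fault records:", np.where(labels == minority)[0])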
This research describes a quantitative, rapid, and low-cost methodology for debris flow susceptibility evaluation at the basin scale using open-access data and geodatabases. The proposed approach can aid decision makers in land management and territorial planning by first screening for areas with higher debris flow susceptibility. Five environmental predisposing factors, namely bedrock lithology, fracture network, quaternary deposits, slope inclination, and hydrographic network, were selected as independent parameters, and their mutual interactions were described and quantified using the Rock Engineering System (RES) methodology. For each parameter, specific indexes were proposed, aiming to provide a final synthetic and representative index of debris flow susceptibility at the basin scale. The methodology was tested in four basins located in the Upper Susa Valley (NW Italian Alps), where debris flow events are the predominant natural hazard. The proposed matrix can represent a useful standardized tool, universally applicable since it is independent of the type and characteristics of the basin.
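As a sketch of the RES mechanics, the fragment below builds an interaction matrix over the five parameters, then derives each parameter's cause (row sum), effect (column sum), and relative weight. The coding values are invented placeholders, not those assigned in the study.

    import numpy as np

    # Interaction matrix M[i][j]: coded influence (0-4) of parameter i on j.
    params = ["lithology", "fractures", "deposits", "slope", "hydrography"]
    M = np.array([[0, 2, 3, 1, 2],
                  [3, 0, 2, 2, 3],
                  [1, 1, 0, 2, 3],
                  [2, 3, 4, 0, 3],
                  [1, 2, 3, 2, 0]])

    cause = M.sum(axis=1)     # C_i: how much parameter i influences the system
    effect = M.sum(axis=0)    # E_i: how much the system influences parameter i
    weights = (cause + effect) / (cause + effect).sum()
    for name, w in zip(params, weights):
        print(f"{name}: {w:.2%}")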
The capacity to analyze massive data has become increasingly necessary, given the high volume of data generated daily by different sources. The data sources are varied and can generate huge amounts of data, which can be processed in batch or stream settings. The stream setting corresponds to the treatment of a continuous sequence of data that arrives in real time and needs to be processed in real time. The models, tools, methods, and algorithms for generating intelligence from data streams culminate in the approaches of Data Stream Mining and Data Stream Learning. The activities of such approaches can be organized and structured according to engineering principles, giving rise to Analytical Engineering, or more specifically, Analytical Engineering for Data Streams (AEDS). This article presents the AEDS conceptual framework, composed of four pillars (Data, Model, Tool, People) and three processes (Acquisition, Retention, Review). The pillars are defined from the main components of the data stream setting, and the processes from the need to operationalize the activities of an Analytical Organization (AO) in applying the four pillars. The AEDS framework supports the projects carried out in an AO, its Analytical Projects (AP), favoring the delivery of results, or Analytical Deliverables (AD), produced by Analytical Teams (AT) to provide intelligence from stream data.
In view of the inconsistent data semantics, inconsistent data formats, and difficulty of assuring data quality between the railway engineering design phase and the construction and operation phases, as well as the difficulty of fully realizing the value of design results, this paper proposes a design and implementation scheme for a railway engineering collaborative design platform. The platform mainly includes functional modules such as metadata management, design collaboration, design delivery management, a model component library, model rendering services, and Building Information Modeling (BIM) application services. On this basis, research is conducted on multi-disciplinary parameterized collaborative design technology for railway engineering, infrastructure data management and delivery technology, and the fusion and application of multi-source design data. The platform is compared with other railway design software to further validate its advantages and advanced features. It has been widely applied in multiple railway construction projects, greatly improving design and project management efficiency.
Geotechnical seismic engineering lies at the intersection of geotechnical engineering and earthquake engineering. Owing to the dual effects of the variability of geotechnical materials and the complexity of dynamic loading, the scientific problems involved in this field face greater uncertainties. By analyzing the status quo of uncertainty analysis methods in geotechnical earthquake engineering, this paper identifies the difficulties and weak links of the various methods of non-deterministic analysis, examines their inherent weaknesses in resolving the uncertainty problems of geotechnical earthquake engineering, and provides some ideas and directions for establishing and improving non-deterministic analysis models.
This paper describes how database information and electronic 3D models are integrated to produce power plant designs more efficiently and accurately. Engineering CAD/CAE systems have evolved from strictly 3D modeling tools into spatial data management tools. This paper describes how process data, commodities, and location data are disseminated to the various project team members through a central integrated database. The database and 3D model also provide a cache of information that is valuable to the constructor and to operations and maintenance personnel.
Reliability evaluation for aircraft engines is difficult because of the scarcity of failure data, but aircraft engine data are available from a variety of sources. Data fusion serves to maximize the amount of valuable information extracted from disparate data sources in order to obtain comprehensive reliability knowledge. Considering degradation failure and catastrophic failure simultaneously, which are competing risks and can both affect reliability, a reliability evaluation model based on data fusion for aircraft engines is developed. Given the characteristics of the proposed model, reliability evaluation is more feasible than evaluation utilizing failure data alone, and more accurate than evaluation considering only a single failure mode. An example shows the effectiveness of the proposed model.
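As a sketch of the competing-risks idea, the fragment below combines a degradation survival function and a catastrophic survival function multiplicatively, assuming the two failure modes are independent. The Weibull and exponential forms and all parameter values are illustrative, not the paper's fitted model.

    import numpy as np

    def r_degradation(t, shape=2.5, scale=8000.0):
        return np.exp(-(t / scale) ** shape)    # Weibull degradation survival

    def r_catastrophic(t, rate=2e-5):
        return np.exp(-rate * t)                # exponential catastrophic survival

    t = np.array([1000.0, 4000.0, 8000.0])      # operating hours
    print(r_degradation(t) * r_catastrophic(t)) # combined reliability R(t)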