With the development of smart cities and smart technologies, parks, as functional units of the city, are facing smart transformation. The development of smart parks can help address challenges of technology integration within urban spaces and serve as testbeds for exploring smart city planning and governance models. Information models facilitate the effective integration of technology into space. Building Information Modeling (BIM) and City Information Modeling (CIM) have been widely used in urban construction. However, existing information models have limitations when applied at the park scale, so it is necessary to develop an information model suited to parks. This paper first traces the evolution of park smart transformation, reviews the global landscape of smart park development, and identifies key trends and persistent challenges. Addressing the particularities of parks, the concept of Park Information Modeling (PIM) is proposed. PIM leverages smart technologies such as artificial intelligence, digital twins, and collaborative sensing to help form a ‘space-technology-system’ smart structure, enabling systematic management of diverse park spaces, addressing the deficiency in park-level information models, and aiming to achieve scale articulation between BIM and CIM. Finally, through a detailed top-level design application case study of the Nanjing Smart Education Park in China, this paper illustrates the translation of the PIM concept into practice, showcasing its potential to provide smart management tools for park managers and enhance services for park stakeholders, although further empirical validation is required.
The management of large-scale architectural engineering projects (e.g., airports, hospitals) is plagued by information silos, cost overruns, and scheduling delays. While building information modeling (BIM) has improved 3D design coordination, its static nature limits its utility in real-time construction management and operational phases. This paper proposes a novel synergistic framework that integrates the static, deep data of BIM with the dynamic, real-time capabilities of digital twin (DT) technology. The framework establishes a closed-loop data flow from design (BIM) to construction (IoT, drones, BIM 360) to operation (DT platform). We detail the technological stack required, including IoT sensors, cloud computing, and AI-driven analytics. The application of this framework is illustrated through a simulated case study of a mega-terminal airport construction project, demonstrating potential reductions in rework by 15%, improvement in labor productivity by 10%, and enhanced predictive maintenance capabilities. This research contributes to the field of construction engineering by providing a practical model for achieving full lifecycle digitalization and intelligent project management.
This paper was motivated by existing problems of cloud data storage at Imo State University, Nigeria, such as outsourced data leading to data loss and the misuse of customer information by unauthorized users or hackers, thereby leaving customer/client data visible and unprotected. This also exposed clients/customers to enormous risk from defective equipment, bugs, faulty servers, and suspicious actions. The aim of this paper, therefore, is to analyze a secure model that uses the Unicode Transformation Format (UTF) and Base64 algorithms to store data in the cloud securely. The Object-Oriented Hypermedia Analysis and Design Methodology (OOHADM) was adopted. Python was used to develop the security model; role-based access control (RBAC) and multi-factor authentication (MFA) were integrated to enhance the security of the information system, which was developed with HTML5, JavaScript, Cascading Style Sheets (CSS) version 3, and PHP 7. The paper also discusses related concepts, including the development of cloud computing, its characteristics, cloud deployment models, and cloud service models. The results showed that the proposed enhanced security model for the information systems of a corporate platform handles multiple authorization and authentication threats: a single login page directs all login requests from the different modules to one Single Sign-On Server (SSOS), which in turn redirects authenticated users to their requested resources/modules, leveraging geo-location integration for physical location validation. The newly developed system addresses the shortcomings of the existing systems and reduces the time and resources incurred while using them.
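The abstract names UTF and Base64 as the storage transformation but does not reproduce the algorithm itself. The minimal Python sketch below (function names are illustrative, not from the paper) shows the UTF-8/Base64 round trip such a model would perform before writing a record to cloud storage; access control (RBAC), MFA, and the SSOS are assumed to be enforced elsewhere in the system.

```python
import base64

def encode_for_storage(plaintext: str) -> str:
    """Encode a UTF-8 string to Base64 text before it is written to cloud storage."""
    utf8_bytes = plaintext.encode("utf-8")                 # UTF encoding step
    return base64.b64encode(utf8_bytes).decode("ascii")    # Base64 step

def decode_from_storage(stored: str) -> str:
    """Reverse the transformation when an authenticated user retrieves the record."""
    return base64.b64decode(stored.encode("ascii")).decode("utf-8")

# Hypothetical record; the actual data fields are not given in the paper.
record = "student_id=2024-001; grade=A"
stored = encode_for_storage(record)
assert decode_from_storage(stored) == record
```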
We propose an integrated method of data-driven and mechanism models for well logging formation evaluation, explicitly focusing on predicting reservoir parameters such as porosity and water saturation. Accurately interpreting these parameters is crucial for effectively exploring and developing oil and gas. However, with the increasing complexity of geological conditions in this industry, there is a growing demand for improved accuracy in reservoir parameter prediction, leading to higher costs associated with manual interpretation. Conventional logging interpretation methods rely on empirical relationships between logging data and reservoir parameters, and they suffer from low interpretation efficiency, strong subjectivity, and applicability only under ideal conditions. The application of artificial intelligence to the interpretation of logging data provides a new solution to the problems of traditional methods and is expected to improve the accuracy and efficiency of interpretation. If large and high-quality datasets exist, data-driven models can reveal relationships of arbitrary complexity. Nevertheless, constructing sufficiently large logging datasets with reliable labels remains challenging, making it difficult to apply data-driven models effectively in logging data interpretation. Furthermore, data-driven models often act as “black boxes” without explaining their predictions or ensuring compliance with primary physical constraints. This paper proposes a machine learning method with strong physical constraints by integrating mechanism and data-driven models. Prior knowledge of logging data interpretation is embedded into machine learning through the network structure, loss function, and optimization algorithm. We employ the Physically Informed Auto-Encoder (PIAE) to predict porosity and water saturation, which can be trained without labeled reservoir parameters using self-supervised learning techniques. This approach effectively achieves automated interpretation and facilitates generalization across diverse datasets.
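The abstract describes embedding a mechanism model into a self-supervised loss so that the encoder can be trained from measured logs alone, without labeled porosity or saturation. The sketch below illustrates that idea using Archie's equation as an assumed stand-in for the physics forward model; the paper's actual mechanism model, network architecture, and parameter values are not given in the abstract.

```python
import numpy as np

# Illustrative physics ("mechanism") forward model: Archie's equation, which maps
# porosity (phi) and water saturation (sw) back to formation resistivity Rt.
def archie_resistivity(phi, sw, rw=0.05, a=1.0, m=2.0, n=2.0):
    return a * rw / (phi**m * sw**n)

def self_supervised_physics_loss(measured_rt, predicted_phi, predicted_sw):
    """Reconstruction loss: no labeled porosity/saturation needed, only measured logs."""
    reconstructed_rt = archie_resistivity(predicted_phi, predicted_sw)
    return np.mean((np.log10(reconstructed_rt) - np.log10(measured_rt)) ** 2)

# A (hypothetical) encoder would output phi and sw from the input logs;
# here we only evaluate the loss for some candidate predictions.
measured_rt = np.array([8.0, 12.0, 20.0])
loss = self_supervised_physics_loss(measured_rt,
                                    predicted_phi=np.array([0.22, 0.18, 0.15]),
                                    predicted_sw=np.array([0.55, 0.45, 0.35]))
print(f"physics-constrained reconstruction loss: {loss:.4f}")
```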
The whole-process project cost management based on building information modeling (BIM) is a new management method aiming to realize the comprehensive optimization and improvement of project cost management through the application of BIM technology. This paper summarizes and analyzes whole-process project cost management based on BIM in order to explore its application and development prospects in the construction industry. Firstly, the paper introduces the role and advantages of BIM technology in engineering cost management, including information integration, data sharing, and collaborative work. Secondly, it analyzes the key technologies and methods of whole-process project cost management based on BIM, including model construction, data management, and cost control. In addition, the paper discusses the challenges and limitations of whole-process BIM project cost management, such as inconsistent technical standards, personnel training, and the required change in mindset. Finally, the paper summarizes the advantages and development prospects of whole-process project cost management based on BIM and puts forward directions and suggestions for future research. This work can serve as a reference for construction cost management and promote innovation and development in the construction industry.
Life Cycle Cost Analysis (LCCA) provides a systematic approach to assess the total cost associated with owning, operating, and maintaining assets throughout their entire life. BIM empowers architects and designers to perform real-time evaluations to explore various design options. However, when integrated with LCCA, BIM provides a comprehensive economic perspective that helps stakeholders understand the long-term financial implications of design decisions. This study presents a methodology for developing a model that seamlessly integrates BIM and LCCA during the conceptual design stage of buildings. This integration allows for a comprehensive evaluation and analysis of the design process, ensuring that the development aligns with the principles of low carbon emissions by employing modular construction, 3D concrete printing methods, and different building design alternatives. The model considers the initial construction costs in addition to all long-term operational, maintenance, and salvage values. It combines various tools and data through different modules, including energy analysis, Life Cycle Assessment (LCA), and Life Cycle Cost Analysis (LCCA), to execute a comprehensive assessment of the financial implications of a specific design option throughout the lifecycle of building projects. The development and implementation of the model involve the creation of a new plug-in for the BIM tool (i.e., Autodesk Revit) to enhance its functionality in forecasting the life-cycle costs of buildings, generating the associated cash flows, creating scenarios, and performing sensitivity analyses automatically. The model empowers designers to evaluate and justify their initial investments while designing and selecting potential construction methods for buildings, and it enables stakeholders to make informed decisions by assessing different design alternatives based on long-term financial considerations during the early stages of design.
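An LCCA of the kind described discounts future operation, maintenance, and salvage cash flows to present value and adds them to the initial construction cost. The sketch below shows that arithmetic for comparing two design alternatives; the discount rate, horizon, and cost figures are illustrative and are not values from the study.

```python
def life_cycle_cost(initial_cost, annual_operation, annual_maintenance,
                    salvage_value, years, discount_rate):
    """Present-value life cycle cost of one design alternative."""
    pv = initial_cost
    for t in range(1, years + 1):
        pv += (annual_operation + annual_maintenance) / (1 + discount_rate) ** t
    pv -= salvage_value / (1 + discount_rate) ** years   # salvage recovered at end of life
    return pv

# Compare two hypothetical design alternatives over a 40-year horizon.
conventional = life_cycle_cost(2_000_000, 60_000, 25_000, 150_000, 40, 0.05)
modular      = life_cycle_cost(2_200_000, 45_000, 18_000, 180_000, 40, 0.05)
print(f"conventional: {conventional:,.0f}  modular: {modular:,.0f}")
```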
Accurately forecasting the operational performance of a tunnel boring machine (TBM) in advance is useful for making timely adjustments to boring parameters, thereby enhancing overall boring efficiency. In this study, we used the Informer model to predict a critical performance parameter of the TBM, namely thrust. Leveraging data from the Guangzhou Metro Line 22 project on the big data platform in China, the model's performance was validated, while data from Line 18 were used to assess its generalization capability. Results revealed that the Informer model surpasses random forest (RF), extreme gradient boosting (XGB), support vector regression (SVR), k-nearest neighbors (KNN), back propagation (BP), and long short-term memory (LSTM) models in both prediction accuracy and generalization performance. In addition, the optimal input lengths for maximizing accuracy in the single-time-step output model are within the range of 8–24, while for the multiple-time-step output model, the optimal input length is 8. Furthermore, the last predicted value in the case of multiple-time-step outputs showed the highest accuracy. It was also found that relaxing the Pearson analysis threshold to 0.95 improved the performance of the model. Finally, the prediction results were most affected by earth pressure, rotation speed, torque, boring speed, and the surrounding rock grade. The model can provide useful guidance for constructors when adjusting TBM operation parameters.
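The feature-screening step mentioned above (a Pearson correlation threshold relaxed to 0.95) can be sketched as a pairwise redundancy filter; this is one plausible reading, since the abstract does not state whether the criterion is applied between candidate features or between features and the thrust target, and the column names below are illustrative.

```python
import pandas as pd

def drop_redundant_features(df: pd.DataFrame, threshold: float = 0.95) -> pd.DataFrame:
    """Drop one feature from every pair whose absolute Pearson correlation exceeds threshold."""
    corr = df.corr(method="pearson").abs()
    to_drop = set()
    cols = corr.columns
    for i, col_i in enumerate(cols):
        for col_j in cols[i + 1:]:
            if corr.loc[col_i, col_j] > threshold and col_j not in to_drop:
                to_drop.add(col_j)          # keep the first of the pair, drop the second
    return df.drop(columns=sorted(to_drop))

# Hypothetical TBM operating records (column names are illustrative).
records = pd.DataFrame({
    "earth_pressure": [1.8, 1.9, 2.1, 2.0],
    "rotation_speed": [1.2, 1.3, 1.25, 1.28],
    "torque":         [2400, 2550, 2500, 2520],
    "boring_speed":   [45, 48, 47, 46],
})
print(drop_redundant_features(records, threshold=0.95).columns.tolist())
```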
In this paper, the estimation of average treatment effects is considered when we have model information on the conditional mean and conditional variance of the responses given the covariates. The quasi-likelihood method, adapted to treatment effects data, is developed to estimate the parameters in the conditional mean and conditional variance models. Based on the model information, we define three estimators by imputation, regression, and inverse probability weighted methods. All the estimators are shown to be asymptotically normal. Our simulation results show that, by using the model information, substantial efficiency gains are obtained compared with the existing estimators.
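The abstract does not reproduce the estimators' formulas. The sketch below shows the standard inverse probability weighted (IPW) form of an average treatment effect estimate, which is the general shape of the third estimator named above; the propensity scores are taken as given here rather than obtained from the paper's quasi-likelihood fit.

```python
import numpy as np

def ipw_ate(y, treated, propensity):
    """Inverse probability weighted estimate of the average treatment effect:
    mean( T*Y/e(X) - (1-T)*Y/(1-e(X)) )."""
    y, treated, propensity = map(np.asarray, (y, treated, propensity))
    return np.mean(treated * y / propensity - (1 - treated) * y / (1 - propensity))

# Toy data: outcomes, treatment indicators, and assumed known propensity scores.
y = np.array([3.1, 2.8, 4.0, 1.9, 3.5, 2.2])
t = np.array([1, 0, 1, 0, 1, 0])
e = np.array([0.6, 0.4, 0.7, 0.3, 0.5, 0.5])
print(f"IPW estimate of ATE: {ipw_ate(y, t, e):.3f}")
```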
Underground pipeline networks constitute a major component of urban infrastructure, and thus it is imperative to have an efficient mechanism to manage them. This study introduces a secondary development system to efficiently model underground pipeline networks using the building information modeling (BIM)-based software Revit. The system comprises separate pipe point and tubulation models. Using a Revit application programming interface (API), the spatial position and attribute data of the pipe points are extracted from a pipeline database, and the corresponding tubulation data are extracted from a tubulation database. Using the Family class of the Revit API, the pipe-point family from the self-built library is inserted at the spatial location and the attribute data are added; in the same way, all pipeline instances in the pipeline system are created. The extension and localization of the model accelerated modeling. The system was then used in a real construction project. The expansion of the model database and rapid modeling made the application of BIM technology in three-dimensional visualization of underground pipeline networks more convenient. Furthermore, it has applications in pipeline engineering construction and management.
Slope aspect is one of the indispensable internal factors influencing landslides, besides lithology, relative elevation, and slope degree. In this paper, the authors use the information value model with Geographical Information System (GIS) technology to study how slope aspect contributes to landslide growth in the Yunyang-to-Wushan segment of the Three Gorges Reservoir area, and the relationship between aspect and landslide growth is quantified. From the study of 205 landslide examples, it is found that south-facing slopes contribute most, southeast- and southwest-facing slopes contribute moderately, and the other five aspects contribute little. The results agree well with field observations. The findings of this paper can provide a solid basis for future construction in the Three Gorges Reservoir area.
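The information value model referred to above is conventionally computed, for each class of a factor such as slope aspect, as the logarithm of the ratio between the landslide density within that class and the overall landslide density. The sketch below shows that calculation; the per-aspect counts and areas are invented for illustration and are not the published figures.

```python
import math

def information_value(landslides_in_class, landslides_total, area_of_class, area_total):
    """Information value of one factor class:
    I = ln( (landslide density in the class) / (landslide density over the whole area) )."""
    class_density = landslides_in_class / area_of_class
    overall_density = landslides_total / area_total
    return math.log(class_density / overall_density)

# Hypothetical counts per aspect class (the 205 landslides are split here for
# illustration only; these are not the published figures).
aspects = {"south": (60, 210.0), "southeast": (35, 205.0), "north": (10, 200.0)}
total_landslides, total_area = 205, 1650.0
for name, (n_i, s_i) in aspects.items():
    print(name, round(information_value(n_i, total_landslides, s_i, total_area), 3))
```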
Computer vision-based inspection methods show promise for automating post-earthquake building inspections. These methods survey a building with unmanned aerial vehicles (UAVs) and automatically detect damage in the collected images. Nevertheless, assessing the damage's impact on structural safety requires localizing damage to specific building components with known design and function. This paper proposes a BIM-based automated inspection framework to provide context for visual surveys. A deep learning-based semantic segmentation algorithm is trained to automatically identify damage in images. The BIM automatically associates any identified damage with specific building components. Then, components are classified into damage states consistent with component fragility models for integration with a structural analysis. To demonstrate the framework, methods are developed to photorealistically simulate severe structural damage in a synthetic computer graphics environment. A graphics model of a real building in Urbana, Illinois, is generated to test the framework; the model is integrated with a structural analysis to apply earthquake damage in a physically realistic manner. A simulated UAV survey is flown over the graphics model and the framework is applied. The method achieves high accuracy in assigning damage states to visible structural components. This assignment enables integration with a performance-based earthquake assessment to classify building safety.
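The damage-localization step can be pictured as overlaying the segmentation output on a component-ID map projected from the BIM and binning each component by its damaged-pixel fraction. The sketch below follows that idea; the masks, component IDs, and damage-state thresholds are illustrative and do not reproduce the paper's fragility-consistent criteria.

```python
import numpy as np

def assign_damage_states(damage_mask, component_mask, thresholds=(0.02, 0.10, 0.30)):
    """Associate segmented damage pixels with BIM components and bin each component
    into a damage state (DS0-DS3) by the fraction of its pixels flagged as damaged.
    Thresholds are illustrative, not fragility values from the paper."""
    states = {}
    for comp_id in np.unique(component_mask):
        if comp_id == 0:                       # 0 = background / no component
            continue
        comp_pixels = component_mask == comp_id
        damage_ratio = damage_mask[comp_pixels].mean()
        state = int(np.searchsorted(thresholds, damage_ratio, side="right"))
        states[int(comp_id)] = (round(float(damage_ratio), 3), f"DS{state}")
    return states

# Toy 4x4 "image": binary damage segmentation and a component-ID map from the BIM projection.
damage     = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1]])
components = np.array([[1, 1, 1, 1], [1, 1, 1, 1], [2, 2, 2, 2], [2, 2, 2, 2]])
print(assign_damage_states(damage, components))
```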
There are heterogeneity problems between the CAD model and the assembly process document. In the planning stage of the assembly process, these heterogeneity problems can decrease the efficiency of information interaction. Based on a knowledge graph, this paper proposes an assembly information model (KGAM) to integrate geometric information from the CAD model with non-geometric information and semantic information from the assembly process document. KGAM describes the integrated assembly process information as a knowledge graph in the form of “entity-relationship-entity” and “entity-attribute-value” triples, which can improve the efficiency of information interaction. Taking the trial assembly stage of a certain type of aeroengine compressor rotor component as an example, KGAM is used to build its assembly process knowledge graph. The trial data show that the query and update speed for assembly attribute information is improved more than one-fold, and that for assembly semantic information is improved more than two-fold. In conclusion, KGAM can solve the heterogeneity problems between the CAD model and the assembly process document and improve information interaction efficiency.
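A knowledge graph of this kind can be held as two sets of triples, one for “entity-relationship-entity” facts and one for “entity-attribute-value” facts. The minimal sketch below shows how such facts would be stored and queried; the entity and relation names are invented for illustration and are not taken from the paper.

```python
from collections import defaultdict

class AssemblyKnowledgeGraph:
    """Minimal triple store for 'entity-relationship-entity' and 'entity-attribute-value' facts."""
    def __init__(self):
        self.relations = defaultdict(set)    # (head, relation) -> {tails}
        self.attributes = defaultdict(dict)  # entity -> {attribute: value}

    def add_relation(self, head, relation, tail):
        self.relations[(head, relation)].add(tail)

    def add_attribute(self, entity, attribute, value):
        self.attributes[entity][attribute] = value

    def query_relation(self, head, relation):
        return self.relations[(head, relation)]

    def query_attribute(self, entity, attribute):
        return self.attributes[entity].get(attribute)

kg = AssemblyKnowledgeGraph()
kg.add_relation("rotor_disk_2", "assembled_with", "blade_set_2")   # illustrative entities
kg.add_attribute("rotor_disk_2", "tightening_torque_Nm", 85)
print(kg.query_relation("rotor_disk_2", "assembled_with"))
print(kg.query_attribute("rotor_disk_2", "tightening_torque_Nm"))
```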
The vehicle routing and scheduling (VRS) problem with multiple objectives and multiple constraints is analyzed from a systems engineering perspective, considering the complexity of modern logistics in the city economy and daily life. The objectives and constraints include loading, dispatch and arrival times, transportation conditions, total cost, etc. An information model and a mathematical model are built, and a method based on knowledge and biological immunity is put forward for optimizing and evaluating routing and scheduling plans with multiple objectives and multiple constraints. The proposed model and method are illustrated in a case study concerning a transport network, and the results show that more optimized solutions can be obtained easily and that the method is efficient and feasible. Compared with the standard GA and the standard GA without time constraints, the algorithm in this paper requires less computational time, has a higher probability of reaching the optimal solution, and produces better results under multiple constraints.
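The abstract does not state the objective function. The sketch below shows one common way to score a candidate route under load and time-window constraints, which is the kind of multi-objective, multi-constraint evaluation described above; the distances, demands, capacity, and penalty weights are invented for illustration.

```python
def route_cost(route, distance, demand, capacity, time_windows, speed=1.0,
               cost_per_km=1.0, penalty=1000.0):
    """Score one candidate route: travel cost plus penalties for violating the
    vehicle capacity or a customer's latest arrival time (all figures illustrative)."""
    cost, load, clock, prev = 0.0, 0.0, 0.0, "depot"
    for stop in route:
        leg = distance[(prev, stop)]
        cost += leg * cost_per_km
        clock += leg / speed
        load += demand[stop]
        earliest, latest = time_windows[stop]
        clock = max(clock, earliest)               # wait if arriving early
        if clock > latest:
            cost += penalty                        # late-arrival penalty
        prev = stop
    if load > capacity:
        cost += penalty                            # overload penalty
    return cost

distance = {("depot", "A"): 4.0, ("A", "B"): 3.0}
print(route_cost(["A", "B"], distance, {"A": 2, "B": 3}, capacity=6,
                 time_windows={"A": (0, 5), "B": (0, 9)}))
```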
A hybrid model based on the combination of keywords and concepts is put forward. The hybrid model is built on the vector space model and a probabilistic reasoning network. It not only exploits the advantages of keyword retrieval and concept retrieval but also compensates for their shortcomings. Its parameters can be adjusted for different usages in order to obtain the best information retrieval results, as demonstrated by our experiments.
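A hybrid score of the kind described can be formed as a tunable weighted combination of a vector space (keyword) similarity and a concept-level score from the probabilistic reasoning network. The sketch below shows that combination; the weight alpha stands for the adjustable parameter the abstract alludes to, and the vectors and concept score are illustrative.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors (dicts)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = math.sqrt(sum(w * w for w in u.values())) * math.sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

def hybrid_score(query_vec, doc_vec, concept_score, alpha=0.6):
    """Blend the keyword (vector space) score with a concept-level score.
    alpha is the tunable weight; 0.6 is an illustrative setting."""
    return alpha * cosine(query_vec, doc_vec) + (1 - alpha) * concept_score

query = {"information": 1.0, "retrieval": 1.0}
doc = {"information": 0.8, "retrieval": 0.5, "model": 0.3}
print(round(hybrid_score(query, doc, concept_score=0.7, alpha=0.6), 3))
```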
Information is a frequently used concept in many fields of investigation. However, this concept is still not fully understood when it is applied, for instance, to consciousness and its informational structure. In this paper, the concept of information is followed from a philosophical to a physics perspective, showing in particular how it can be extended to matter in general and to the living in particular, as a result of the intimate interaction between matter and information, with the human body appearing as a bipolar informed-matter structure. It is detailed along this way how the concept can be related to consciousness, and an informational modeling of consciousness as an informational system of the human body is presented. Based on the anatomic architecture of the organism and on the inference of specific information concepts, it is shown that the informational system of the human body can be described by seven informational subsystems, which are reflected in consciousness as corresponding cognitive centers. These results are able to explain the main properties of consciousness, both the cognitive and extra-cognitive properties of the mind, like those observed during near-death experiences and other similar phenomena. Moreover, the results of such a modeling are compared with existing empirical concepts and models on the energetic architecture of the organism, showing their relevance for the understanding of consciousness.
In the past decades, several theoretical Maxwell's demon models have been proposed to exhibit effects such as refrigeration and doing work at the cost of information, and some experiments have been carried out to realize these effects. We propose a model with a two-level demon, information represented by a sequence of bits, and two heat reservoirs. The reservoir that the demon interacts with depends on the bit. When the temperature difference between the two heat reservoirs is large enough, the information can be erased. On the other hand, when the information is pure enough, heat transfer from one reservoir to the other can happen, resulting in the effect of refrigeration. Genuine examples of such a system are discussed.
In accordance with the requirements of expanding machine-to-machine (M2M) communication, network overlays are being deployed in several domains such as the Smart Grid. Consequently, it is predictable that the opportunities and cases for integrating data yielded by devices such as sensors will increase further. Accordingly, the importance of ontologies and information models (IM), which normalize semantics including sensor expressions, has grown, and the standards for these definitions have become more important as well. So far, there have been multiple initiatives for standardizing ontologies and IMs related to sensor expression, such as Sensor Standards Harmonization by the National Institute of Standards and Technology (NIST), the W3C Semantic Sensor Network (SSN), and the recent W3C IoT-Lite Ontology. However, there is still room to improve the current ontologies and IMs from the viewpoint of the implementing structure. This paper presents a set of IMs for abstract sensors and for the contexts of the phenomena around these sensors, from the point of view of a structure implementing a set of specified sensors. As several previous studies have pointed out, multiple aspects of the sensors should be modeled, and accordingly multiple sets of ontologies and IMs for these sensors should be defined. Our study is intended to clarify the relationship between the configurations and the physical measured quantities of the structures implementing a set of sensors, which up to now have not been generalized and have remained unformulated. Consequently, as a result of this analysis, a more generalized translator module can be implemented easily, aggregating the measured data from the sensors at the middleware level that manages these ontologies and IMs, instead of in the layer of user application programs.
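One way to make such an information model concrete is as a small set of typed records linking a sensor type to its configuration within the implementing structure and to the physical quantities it measures. The sketch below is a minimal illustration of that shape; all class and field names are invented here and are not taken from the NIST, SSN, or IoT-Lite vocabularies.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MeasuredQuantity:
    """One physical quantity an abstract sensor reports (names/fields are illustrative)."""
    name: str          # e.g. "air_temperature"
    unit: str          # e.g. "degC"

@dataclass
class SensorConfiguration:
    """Placement and context of a concrete sensor within the implementing structure."""
    structure_id: str  # e.g. "feeder_12"
    location: str      # e.g. "transformer_cabinet"

@dataclass
class AbstractSensorIM:
    """Information model tying a sensor type to its configuration and measured quantities,
    so a middleware-level translator can aggregate readings without application-specific code."""
    sensor_type: str
    configuration: SensorConfiguration
    quantities: List[MeasuredQuantity] = field(default_factory=list)

im = AbstractSensorIM(
    sensor_type="thermometer",
    configuration=SensorConfiguration(structure_id="feeder_12", location="transformer_cabinet"),
    quantities=[MeasuredQuantity(name="air_temperature", unit="degC")],
)
print(im.sensor_type, [q.name for q in im.quantities])
```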
BACKGROUND: Cerebrovascular disease (CVD) poses a serious threat to human health and safety. Thus, developing a reasonable exercise program plays an important role in the long-term recovery and prognosis of patients with CVD. Studies have shown that predictive nursing can improve the quality of care and that the information–knowledge–attitude–practice (IKAP) nursing model has a positive impact on patients who have suffered a stroke. Few studies have combined these two nursing models to treat CVD. AIM: To explore the effect of the IKAP nursing model combined with predictive nursing on the Fugl–Meyer motor function (FMA) score, Barthel index score, and disease knowledge mastery rate in patients with CVD. METHODS: A total of 140 patients with CVD treated at our hospital between December 2019 and September 2021 were randomly divided into two groups, with 70 patients in each. The control group received routine nursing, while the observation group received the IKAP nursing model combined with predictive nursing. Both groups were observed for self-care ability, motor function, and disease knowledge mastery rate after one month of nursing. RESULTS: There was no clear difference between the Barthel index and FMA scores of the two groups before nursing (P>0.05); however, their scores increased after nursing. This increase was more apparent in the observation group, and the difference was statistically significant (P<0.05). The rates of disease knowledge mastery, timely medication, appropriate exercise, and reasonable diet were significantly higher in the observation group than in the control group (P<0.05). The satisfaction rate in the observation group (97.14%) was significantly higher than that in the control group (81.43%; P<0.05). CONCLUSION: The IKAP nursing model combined with predictive nursing is more effective than routine nursing in the care of patients with CVD; it can significantly improve the Barthel index and FMA scores, with better knowledge acquisition and higher patient satisfaction, and it can be widely used in the clinical setting.
This paper analyzes the semantic structure of enterprise process metrics and gives guidelines for describing the semantics of metrics so that metric data can be collected automatically. Based on a domain ontology, a structure called the semantic tree is defined to describe the semantic relationships among measured entities, measurable attributes, and constraints, which provides a uniform method for defining the semantics of process metrics and of data elements in the enterprise information model. An algorithm to map process metrics to the enterprise information model is put forward, which can compute the query conditions needed to retrieve process metric data from enterprise information systems accurately. This algorithm has been applied in an information project.
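The mapping algorithm described above can be pictured as translating a semantic-tree description of a metric (measured entity, measurable attribute, constraints) into the query conditions used against the enterprise information system. The sketch below shows that translation for a SQL-style system; the table, column, and metric names are illustrative, not from the paper.

```python
def metric_to_query(measured_entity, measurable_attribute, constraints):
    """Turn a semantic-tree description of a metric (entity, attribute, constraints)
    into an SQL query string; table/column names here are illustrative."""
    where = " AND ".join(f"{field} {op} '{value}'" for field, op, value in constraints)
    sql = f"SELECT {measurable_attribute} FROM {measured_entity}"
    return f"{sql} WHERE {where}" if where else sql

# Metric: average cycle time of purchase orders created in 2023 for plant 'P01'.
print(metric_to_query(
    measured_entity="purchase_order",
    measurable_attribute="AVG(cycle_time_days)",
    constraints=[("plant_code", "=", "P01"), ("created_year", "=", "2023")],
))
```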