Along with the extensive use of workflows, analysis methods that verify workflow correctness are becoming increasingly important. In this paper, we exploit a Petri-net-based verification method for workflow process models that deals with the verification of workflows and finds potential errors in the process design. Additionally, an efficient verification algorithm is given.
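To make the idea concrete, here is a minimal sketch (not the paper's algorithm) of the behavioral check such verification performs: enumerate the reachable markings of a small workflow net and flag any dead marking other than the intended final one. The net, place names, and transitions are invented for illustration.

```python
# A minimal sketch: a 1-safe place/transition net with token-based firing,
# plus a brute-force reachability search that flags deadlock markings
# other than the intended final marking.

# Net structure: each transition maps to (input places, output places).
TRANSITIONS = {
    "register": ({"start"}, {"ready"}),
    "approve":  ({"ready"}, {"done"}),
    "reject":   ({"ready"}, {"start"}),   # loop back for rework
}
INITIAL = frozenset({"start"})
FINAL = frozenset({"done"})

def enabled(marking):
    """Transitions whose input places all hold a token (1-safe net)."""
    return [t for t, (pre, _) in TRANSITIONS.items() if pre <= marking]

def fire(marking, t):
    pre, post = TRANSITIONS[t]
    return frozenset((marking - pre) | post)

def find_deadlocks():
    """Explore all reachable markings; report non-final dead markings."""
    seen, stack, deadlocks = {INITIAL}, [INITIAL], []
    while stack:
        m = stack.pop()
        ts = enabled(m)
        if not ts and m != FINAL:
            deadlocks.append(m)
        for t in ts:
            nxt = fire(m, t)
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return deadlocks

print(find_deadlocks())  # [] means every dead marking is the final one
```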
Classical management accounting (MA) focuses on decision facilitating and decision influencing (Demski & Feltham, 1976). Focusing on the facilitating perspective, MA has to provide information to managers, who, depending on problem complexity, solve problems in a dyadic way. A dual process model, the heuristic systematic model (HSM), expands this so-called manager-accountant dyad and shows different cases of actual human information processing: managers and accountants process either systematically or heuristically. So far, many concepts have been designed around the normative principle of economic rationality. Consequently, recent research uses only systematic information processing, based on the model of the economic man. In this paper, a decision-behavior-oriented approach describes actual decision makers such as managers and accountants and shows new possibilities within MA. The potential of heuristic information processing is analyzed, based on the phenomenon of ecological rationality as one form of bounded rationality. Different cognitive heuristics in business economics are identified and analyzed, and the strong performance of heuristics compared with more complex calculations is shown. These findings have so far been limited to marketing and investments; significant research is still needed on the conditions of application and success factors of heuristics in business economics, and new empirical findings have to be explicitly transferred to MA.
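As one concrete example of such a cognitive heuristic, the sketch below implements take-the-best, a fast-and-frugal rule from the bounded-rationality literature: compare two options cue by cue in order of cue validity and decide on the first discriminating cue. The cue names and data are hypothetical.

```python
# A sketch of the take-the-best heuristic (fast-and-frugal inference):
# compare two options cue by cue, in order of cue validity, and decide
# on the first cue that discriminates. All cue data are made up.
CUE_ORDER = ["brand_known", "prior_profit", "analyst_cover"]  # highest validity first

def take_the_best(option_a, option_b):
    """Return the option favored by the first discriminating cue, else None."""
    for cue in CUE_ORDER:
        a, b = option_a[cue], option_b[cue]
        if a != b:                      # cue discriminates: stop searching
            return "A" if a > b else "B"
    return None                         # no cue discriminates: guess

firm_a = {"brand_known": 1, "prior_profit": 0, "analyst_cover": 1}
firm_b = {"brand_known": 1, "prior_profit": 1, "analyst_cover": 0}
print(take_the_best(firm_a, firm_b))    # "B": decided by the second cue
```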
The reservoir volumetric approach represents a widely accepted, but flawed, method of petroleum play resource calculation. In this paper, we propose a combination of techniques that can improve the applicability and quality of the resource estimation. These techniques include: 1) the use of the Multivariate Discovery Process model (MDP) to derive unbiased distribution parameters of reservoir volumetric variables and to reveal correlations among the variables; 2) the use of the Geo-anchored method to estimate simultaneously the number of oil and gas pools in the same play; and 3) the cross-validation of assessment results from different methods. These techniques are illustrated using an example of crude oil and natural gas resource assessment of the Sverdrup Basin, Canadian Archipelago. The example shows that when direct volumetric measurements of the untested prospects are not available, the MDP model can help derive unbiased estimates of the distribution parameters by using information from the discovered oil and gas accumulations. It also shows that estimating the number of oil and gas accumulations and their size ranges from a discovery process model provides an alternative and efficient approach when inadequate geological data hinder the estimation. Cross-examination of assessment results derived using different methods allows one to focus on and analyze the causes of the major differences, thus providing a more reliable assessment outcome.
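The bias the MDP model corrects for can be illustrated in a few lines: if pools are discovered with probability proportional to their size, the discovered pools overstate the true mean pool size of the play. The simulation below is a toy version of this size-biased discovery process, with invented lognormal pool sizes.

```python
# An illustrative simulation of size-biased discovery: pools are
# "discovered" with probability proportional to size, so early discoveries
# overstate the true mean pool size. Numbers are made up.
import random

random.seed(1)
true_pools = [random.lognormvariate(2.0, 1.0) for _ in range(200)]  # all pools in the play

def discover(pools, n):
    """Sample n pools without replacement, probability proportional to size."""
    remaining, found = list(pools), []
    for _ in range(n):
        total = sum(remaining)
        r, acc = random.uniform(0, total), 0.0
        for i, s in enumerate(remaining):
            acc += s
            if r <= acc:
                found.append(remaining.pop(i))
                break
    return found

found = discover(true_pools, 20)
print(f"true mean size:       {sum(true_pools)/len(true_pools):.2f}")
print(f"discovered mean size: {sum(found)/len(found):.2f}  (biased high)")
```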
Low pressure chemical vapor deposition (LPCVD) is one of the most important processes in semiconductor manufacturing. However, the spatial distribution of internal temperature and the extremely small number of samples make it hard to build a good-quality model of this batch process. Besides, due to the properties of this process, the reliability of the model must be taken into consideration when optimizing the manipulated variables (MVs). In this work, an optimal design strategy based on a self-learning Gaussian process model (GPM) is proposed to control this kind of spatial batch process. The GPM is utilized as the internal model to predict the thicknesses of thin films on all spatially distributed wafers using the limited data. Unlike conventional model-based design, the prediction uncertainties provided by the GPM are taken into consideration to guide the optimal design of the MVs, so that the design can be more prudent. Besides, the GPM is actively enhanced using as little data as possible, based on the predictive uncertainties. The effectiveness of the proposed strategy is successfully demonstrated on an LPCVD process.
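A minimal sketch of the core idea follows, assuming a generic scikit-learn Gaussian process in place of the paper's GPM and a toy one-dimensional response in place of the LPCVD thickness model: fit the GP on a few runs, then place the next experiment where the predictive standard deviation is largest, so the model "self-learns" from as little data as possible.

```python
# A minimal sketch of uncertainty-guided self-learning with a Gaussian
# process. The process function below is a stand-in, not an LPCVD model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def process(x):                      # hypothetical film-thickness response
    return np.sin(3 * x) + 0.5 * x

X = np.array([[0.1], [0.5], [0.9]])  # a few initial experiments
y = process(X).ravel()
grid = np.linspace(0, 1, 200).reshape(-1, 1)

for step in range(5):
    gpm = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X, y)
    mean, std = gpm.predict(grid, return_std=True)
    x_next = grid[np.argmax(std)]    # query where the model is least certain
    X = np.vstack([X, [x_next]])
    y = np.append(y, process(x_next))
    print(f"step {step}: queried x={x_next[0]:.3f}, max std={std.max():.4f}")
```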
Renewable energies including solar and wind are intermittent, causing difficulty in connection to conventional power grids due to the instability of their output. Compressed air energy storage (CAES) in underground caverns has been considered a potential large-scale energy storage technology. In order to explore the gas injection characteristics of an underground cavern, a detailed thermodynamic model of the system is established in the process modelling software gPROMS. The four subsystem models, i.e. the compressor, heat exchanger, underground cavern storage and expander, are connected with inlet-outlet equilibrium of flow rate, pressure and temperature to form an integrated CAES system model in gPROMS. The maximum air pressure and temperature in the cavern are examined to interrogate the critical condition of the cavern during the injection process. When analyzing the mass flow rate-pressure ratio relationship, it is found that under the specified operating conditions, an increase in mass flow rate can lead to a higher pressure ratio. Compression power demand also escalates significantly with increasing mass flow rates, underscoring the system's energy-intensive nature. Additionally, the cooler outlet energy rate progressively decreases, becoming increasingly negative as the mass flow rate increases. These insights offer critical theoretical foundations for optimizing the practical efficiency of CAES.
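The reported trend between mass flow rate and compression power can be checked with textbook ideal-gas relations. The sketch below is a back-of-the-envelope single-stage compressor calculation, not the gPROMS model; the operating values are illustrative.

```python
# Ideal-gas isentropic compression: outlet temperature grows with pressure
# ratio, and shaft power grows with mass flow rate. Values are illustrative.
cp, gamma = 1005.0, 1.4          # air: J/(kg*K), heat-capacity ratio
T_in = 293.15                    # compressor inlet temperature, K

def compressor(m_dot, pressure_ratio, eta_s=0.85):
    """Outlet temperature (K) and shaft power (W) for one compression stage."""
    T_out_ideal = T_in * pressure_ratio ** ((gamma - 1) / gamma)
    T_out = T_in + (T_out_ideal - T_in) / eta_s     # isentropic efficiency
    power = m_dot * cp * (T_out - T_in)
    return T_out, power

for m_dot in (1.0, 2.0, 4.0):                        # kg/s
    T_out, P = compressor(m_dot, pressure_ratio=8.0)
    print(f"m_dot={m_dot} kg/s -> T_out={T_out:.0f} K, power={P/1e6:.2f} MW")
```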
Building owners, designers and constructors are seeing a rapid increase in the number of sustainably designed high performance buildings. These buildings provide numerous benefits to the owners and occupants, including improved indoor air quality, energy efficiency, and environmental site standards, and ultimately enhance productivity for the building occupants. As the demand increases for higher building energy efficiency and environmental standards, the application of a set of process models will support consistency and optimization during the design process. Systems engineering process models have proven effective in taking an integrated and comprehensive view of a system while allowing for clear stakeholder engagement, requirements definition, life cycle analysis, technology insertion, and validation and verification. This paper overlays systems engineering on the sustainable design process by providing a framework for application of the Waterfall, Vee, and Spiral process models to high performance buildings. Each process model is mapped to the sustainable design process and is evaluated for its applicability to projects and building types. Adaptations of the models are provided as Green Building Process Models.
Aims: Recent mechanistic explanations for community assembly focus on the debates surrounding niche-based deterministic and dispersal-based stochastic models. This body of work has emphasized the importance of both habitat filtering and dispersal limitation, and many of these works have utilized the assumption of species spatial independence to simplify the complexity of spatial modeling in natural communities given dispersal limitation and/or habitat filtering. One potential drawback of this simplification is that it does not consider species interactions and how they may influence the spatial distribution of species and the phylogenetic and functional diversity. Here, we assess the validity of the assumption of species spatial independence using data from a subtropical forest plot in southeastern China. Methods: We use the four most commonly employed spatial statistical models (the homogeneous Poisson process for a pure random effect, the heterogeneous Poisson process for the effect of habitat heterogeneity, the homogeneous Thomas process for dispersal limitation alone, and the heterogeneous Thomas process for the joint effect of habitat heterogeneity and dispersal limitation) to investigate the contribution of different mechanisms in shaping the species, phylogenetic and functional structures of communities. Important Findings: Our evidence from species, phylogenetic and functional diversity demonstrates that the habitat filtering and/or dispersal-based models perform well and the assumption of species spatial independence is relatively valid at larger scales (50 × 50 m). Conversely, at local scales (10 × 10 and 20 × 20 m), the models often fail to predict the species, phylogenetic and functional diversity, suggesting that the assumption of species spatial independence is invalid and that biotic interactions are increasingly important at these spatial scales.
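For readers unfamiliar with these point-process models, the sketch below simulates a homogeneous Thomas process, the model used here for dispersal limitation alone: parent points follow a Poisson process and offspring scatter around each parent with a Gaussian kernel. The intensity parameters are illustrative, not values fitted to the plot.

```python
# A sketch of the homogeneous Thomas cluster process used to mimic
# dispersal limitation. Parameters are illustrative, not fitted values.
import numpy as np

rng = np.random.default_rng(0)

def thomas_process(kappa, mu, sigma, size=50.0):
    """Simulate a Thomas cluster process on a size x size plot (meters)."""
    n_parents = rng.poisson(kappa * size * size)     # Poisson number of parents
    parents = rng.uniform(0, size, (n_parents, 2))
    points = []
    for p in parents:
        n_off = rng.poisson(mu)                      # offspring per parent
        points.append(p + rng.normal(0, sigma, (n_off, 2)))
    pts = np.concatenate(points)
    inside = np.all((pts >= 0) & (pts <= size), axis=1)
    return pts[inside]                               # clip to the plot window

pts = thomas_process(kappa=0.02, mu=8, sigma=2.0)    # clustered "trees"
print(f"{len(pts)} stems simulated in a 50 x 50 m plot")
```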
To equip data-driven dynamic chemical process models with strong interpretability, we develop a light attention–convolution–gated recurrent unit (LACG) architecture with three sub-modules, namely a basic module, a brand-new light attention module, and a residue module, which are specially designed to learn the general dynamic behavior, transient disturbances, and other input factors of chemical processes, respectively. Combined with a hyperparameter optimization framework, Optuna, the effectiveness of the proposed LACG is tested by distributed control system data-driven modeling experiments on the discharge flowrate of an actual deethanization process. The LACG model provides significant advantages in prediction accuracy and model generalization compared with other models, including the feedforward neural network, convolutional neural network, long short-term memory (LSTM), and attention-LSTM. Moreover, compared with the simulation results of a deethanization model built using Aspen Plus Dynamics V12.1, the LACG parameters are demonstrated to be interpretable, and more details of the variable interactions can be observed from the model parameters than with the traditional interpretable model attention-LSTM. This contribution enriches interpretable machine learning knowledge and provides a reliable, highly accurate method for actual chemical process modeling, paving a route to intelligent manufacturing.
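The wiring of such an architecture can be sketched as follows. This is a structural illustration only: the layer sizes, the exact form of the light attention, and the omission of the residue module are our assumptions, not the published LACG design.

```python
# A structural sketch of a convolution + light attention + GRU model.
import torch
import torch.nn as nn

class LightAttention(nn.Module):
    """Scores each time step and reweights features (a 'light' attention)."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):                      # x: (batch, time, dim)
        w = torch.softmax(self.score(x), dim=1)
        return x * w                           # per-step soft weighting

class LACGSketch(nn.Module):
    def __init__(self, n_inputs, hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(n_inputs, hidden, kernel_size=3, padding=1)
        self.attn = LightAttention(hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)       # predicted discharge flowrate

    def forward(self, x):                      # x: (batch, time, n_inputs)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        h = self.attn(torch.relu(h))
        out, _ = self.gru(h)
        return self.head(out[:, -1])           # one-step-ahead prediction

model = LACGSketch(n_inputs=6)
print(model(torch.randn(4, 24, 6)).shape)      # torch.Size([4, 1])
```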
With regard to assembly-line cost control at the Dechang (HK) company, cost control of the motor-housing process deserves particular attention. Because the supply quantity per machine is large while the unit price of a motor housing is small, modeling-based cost control of the automatic production line is significant. It is found that the relevant equipment control involves the shaft and the crank linkage, which also need to be controlled in detail to capture the benefit. Only by pursuing such benefits can the main problem of a high-cost process be fundamentally resolved.
The performance and corresponding applications of polymer nanocomposites are highly dominated by the choice of base material, the type of fillers, and the processing methods. Carbon black-filled rubber composites (CRC) exemplify this, playing a crucial role in various industries. However, due to the complex interplay between these factors and the resulting properties, a simple yet accurate model that predicts the mechanical properties of CRC across different rubbers, fillers, and processing techniques is highly desired. This study aims to predict the dispersion of fillers in CRC and to forecast the resultant mechanical properties by leveraging machine learning. We selected various rubbers and carbon black fillers, conducted mixing and vulcanizing, and subsequently measured filler dispersion and tensile performance. Based on 215 experimental data points, we evaluated the performance of different machine learning models. Our findings indicate that the manually designed deep neural network (DNN) models achieved superior results, exhibiting the highest coefficient of determination (R^2) values (>0.95). Shapley additive explanations (SHAP) analysis of the DNN models revealed the intricate relationship between the properties of CRC and the process parameters. Moreover, based on the robust predictive capabilities of the DNN models, we can recommend or optimize the CRC fabrication process. This work provides valuable insights for employing machine learning to predict polymer composite material properties and to optimize the fabrication of high-performance CRC.
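The modeling-plus-attribution workflow can be sketched on synthetic data as follows, with a small scikit-learn network standing in for the paper's DNN and invented feature names; the SHAP call attributes each prediction to the input features.

```python
# A sketch of the fit-then-explain workflow on a tiny synthetic stand-in
# for the 215 experiments. Feature names and data are hypothetical.
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# columns: filler loading, mixing time, vulcanization temperature (scaled 0-1)
X = rng.uniform(size=(215, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.05, 215)  # toy "tensile strength"

dnn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
dnn.fit(X, y)
print(f"R^2 on training data: {dnn.score(X, y):.3f}")

explainer = shap.KernelExplainer(dnn.predict, X[:25])   # small background set
shap_values = explainer.shap_values(X[:5])              # attribution per feature
print(np.abs(shap_values).mean(axis=0))                 # filler loading dominates
```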
The metastable retained austenite (RA) plays a significant role in the excellent mechanical performance of quenching and partitioning (Q&P) steels, while the volume fraction of RA (V_RA) is challenging to predict directly due to the complicated relationships between the chemical composition and the process parameters (such as the quenching temperature). A Gaussian process regression model in machine learning was developed to predict V_RA, and the model accuracy was further improved by introducing a metallurgical parameter, the martensite fraction, to accurately predict V_RA in Q&P steels. The developed machine learning model combined with Bayesian global optimization can serve as an alternative selection strategy for the quenching temperature, and this strategy is very efficient, as it found the "optimum" quenching temperature with the maximum V_RA using only seven consecutive iterations. The benchmark experiment also reveals that the developed machine learning model predicts V_RA more accurately than the popular constrained carbon equilibrium thermodynamic model, and even better than a thermo-kinetic quenching-partitioning-tempering-local equilibrium model.
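A sketch of this selection strategy follows, assuming a synthetic retained-austenite response and an upper-confidence-bound rule in place of the paper's Bayesian global optimization details: a Gaussian process surrogate proposes the next quenching temperature over seven iterations, matching the iteration count quoted above.

```python
# A sketch of GP-surrogate selection of the quenching temperature (QT).
# The response curve is synthetic, not real Q&P steel data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def v_ra(qt):                        # hypothetical retained-austenite response
    return np.exp(-((qt - 270.0) / 60.0) ** 2)

grid = np.linspace(150, 400, 251).reshape(-1, 1)     # candidate QT values, deg C
X = np.array([[180.0], [350.0]])                     # two initial experiments
y = v_ra(X).ravel()

for it in range(7):                                  # seven iterations, as quoted
    gp = GaussianProcessRegressor(kernel=RBF(40.0), normalize_y=True).fit(X, y)
    mean, std = gp.predict(grid, return_std=True)
    ucb = mean + 2.0 * std                           # explore/exploit trade-off
    qt_next = grid[np.argmax(ucb)]
    X, y = np.vstack([X, [qt_next]]), np.append(y, v_ra(qt_next))
    print(f"iter {it}: try QT={qt_next[0]:.0f} C, measured V_RA={y[-1]:.3f}")
```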
Software Development Life Cycle (SDLC) is one of the major ingredients for the development of efficient software systems within a time frame and with low cost involvement. From the literature, it is evident that various kinds of process models are used by the software industries for the development of small, medium and long-term software projects, but many of them do not cover risk management. It is quite obvious that improper selection of the software development process model leads to failure of the software product, as development is a time-bound activity. In the present work, a new software development process model is proposed which covers the risks at any stage of the development of the software product. The model is named the Hemant-Vipin (HV) process model and may be helpful for software industries in developing efficient software products and delivering them to the client on time. The efficiency of the HV process model is assessed by considering various kinds of factors, such as requirement clarity, user feedback, change agility, predictability, risk identification, practical implementation, customer satisfaction, incremental development, use of ready-made components, quick design, and resource organization, and a case study shows that the presented approach covers more of these parameters than the existing process models.
To investigate the process by which information technology (IT) impacts firm competitiveness, an integrated process model of IT impacts on firm competitiveness is brought forward based on the process-oriented view, the resource-based view and the complementary resource view; it comprises an IT conversion process, an information system (IS) adoption process, an IS use process and a competition process. The application capability of IT plays the critical role, determining the efficiency and effectiveness of the aforementioned four processes. The process model of IT impacts on firm competitiveness can also be used to explain why, under what situations and how IT can generate positive organizational outcomes, and it offers theoretical bases for further empirical study.
Workflow management is an important aspect of CSCW at present. The elementary knowledge of workflow processes is introduced, the Petri-net-based process modeling methodology and basic definitions are provided, and the analysis and verification of the structural and behavioral correctness of workflow processes are discussed. Finally, an algorithm for the verification of process definitions is proposed.
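A sketch of the structural side of such verification, under the usual workflow-net definition (one source place i, one sink place o, and every node on a path from i to o), is given below; the net encoding and example are illustrative.

```python
# A sketch of the structural workflow-net check: one source place i, one
# sink place o, and every node on a path from i to o.
from collections import defaultdict

# Directed flow relation of a small net: places are plain strings,
# transitions are prefixed with "t_" purely for readability.
ARCS = [("i", "t_a"), ("t_a", "p1"), ("p1", "t_b"), ("t_b", "o")]

def reachable(start, succ):
    seen, stack = {start}, [start]
    while stack:
        for nxt in succ[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def is_workflow_net(arcs, source="i", sink="o"):
    succ, pred, nodes = defaultdict(list), defaultdict(list), set()
    for a, b in arcs:
        succ[a].append(b)
        pred[b].append(a)
        nodes |= {a, b}
    ok_source = all(pred[n] for n in nodes if n != source)   # only i lacks inputs
    ok_sink = all(succ[n] for n in nodes if n != sink)       # only o lacks outputs
    on_path = reachable(source, succ) == nodes == reachable(sink, pred)
    return ok_source and ok_sink and on_path

print(is_workflow_net(ARCS))   # True for this linear net
```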
To achieve an on-demand and dynamic composition model of inter-organizational business processes, a new approach for business process modeling and verification is introduced using the pi-calculus theory. A new business process model which is multi-role, multi-dimensional, integrated and dynamic is proposed, relying on inter-organizational collaboration. Compatible with the traditional linear sequence model, the new model is an M × N multi-dimensional mesh, and provides horizontal and vertical formal descriptions of the collaborative business process model. Finally, the pi-calculus theory is utilized to verify the deadlocks, livelocks and synchronization of the example models. The result shows that the proposed approach is efficient and applicable in inter-organizational business process modeling.
There are numerous application areas for computing the similarity between process models, including finding similar models in a repository, controlling redundancy of process models, and finding corresponding activities between a pair of process models. The similarity between two process models is computed based on the similarity of their labels, structures, and execution behaviors. Several attempts have been made to develop similarity techniques for activity labels, as well as for their execution behavior. However, a notable problem is that two process models can also be similar when there is a structural variation between them, yet neither a benchmark dataset for the structural similarity between process models nor an effective technique to compute structural similarity exists. To that end, we have developed a large collection of process models in which structural changes are handcrafted while preserving the semantics of the models. Furthermore, we have used a machine learning-based approach to compute the similarity between a pair of process models having structural and label differences. Finally, we have evaluated the proposed approach using our generated collection of process models.
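As a simple illustration of combining label and structural signals (not the paper's trained model), the sketch below scores two toy process models with a trigram-based label similarity and a Jaccard overlap of direct-flow edges; the weights and models are invented.

```python
# A sketch of feature-based similarity between two process models:
# a label score plus a crude structural score, combined with made-up weights.
def trigrams(label):
    s = f"  {label.lower()} "
    return {s[i:i + 3] for i in range(len(s) - 2)}

def label_similarity(labels_a, labels_b):
    """Mean best-match trigram Jaccard between two activity-label sets."""
    def best(l, others):
        return max(len(trigrams(l) & trigrams(o)) / len(trigrams(l) | trigrams(o))
                   for o in others)
    return (sum(best(l, labels_b) for l in labels_a) +
            sum(best(l, labels_a) for l in labels_b)) / (len(labels_a) + len(labels_b))

def edge_similarity(edges_a, edges_b):
    """Jaccard overlap of direct-flow edges (a crude structural feature)."""
    return len(edges_a & edges_b) / len(edges_a | edges_b)

model_a = (["receive order", "check stock", "ship goods"],
           {("receive order", "check stock"), ("check stock", "ship goods")})
model_b = (["receive order", "verify stock", "ship goods"],
           {("receive order", "verify stock"), ("verify stock", "ship goods")})

score = (0.6 * label_similarity(model_a[0], model_b[0])
         + 0.4 * edge_similarity(model_a[1], model_b[1]))
print(f"similarity: {score:.2f}")
```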
Accurate quantification of life-cycle greenhouse gas (GHG) footprints (GHG_fp) for a crop cultivation system is urgently needed to address the conflict between food security and global warming mitigation. In this study, the hydrobiogeochemical model CNMM-DNDC was validated with in situ observations from maize-based cultivation systems at the sites of Yongji (YJ, China), Yanting (YT, China), and Madeya (MA, Kenya), subject to temperate, subtropical, and tropical climates, respectively, and updated to enable life-cycle GHG_fp estimation. The model validation provided satisfactory simulations of multiple soil variables, crop growth, and emissions of GHGs and reactive nitrogen gases. The locally conventional management practices resulted in GHG_fp values of 0.35 (0.09–0.53 at the 95% confidence interval), 0.21 (0.01–0.73), 0.46 (0.27–0.60), and 0.54 (0.21–0.77) kg CO2e kg^-1 d.m. (d.m. for dry matter) for maize–wheat rotation at YJ and YT, and for maize–maize and maize–Tephrosia rotations at MA, respectively. YT's smallest GHG_fp was attributed to its lower off-farm GHG emissions compared with YJ, though its soil organic carbon (SOC) storage and maize yield were slightly lower than those of YJ. MA's highest SOC loss and low yield in shifting cultivation for the maize–Tephrosia rotation contributed to its highest GHG_fp. Management practices for maize cultivation at these sites could be optimized by combining synthetic and organic fertilizers while incorporating 50%–100% of crop residues. Further evaluation of the updated CNMM-DNDC is needed for different crops at site and regional scales to confirm its worldwide applicability in quantifying GHG_fp and optimizing management practices for achieving multiple sustainability goals.
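The footprint arithmetic implied by these numbers can be written down directly: life-cycle emissions (field GHG fluxes plus off-farm inputs, less any soil organic carbon gain) divided by dry-matter yield. The sketch below uses invented per-hectare values, not the paper's site results.

```python
# The GHG_fp arithmetic as a sketch: life-cycle CO2e divided by dry-matter
# yield. The example numbers are invented, not the paper's site results.
def ghg_footprint(field_co2e, off_farm_co2e, delta_soc_co2e, yield_dm):
    """kg CO2e per kg dry matter; SOC gain (positive delta) offsets emissions."""
    return (field_co2e + off_farm_co2e - delta_soc_co2e) / yield_dm

# Hypothetical maize season, per hectare: field fluxes 1500 kg CO2e,
# off-farm inputs 1200 kg CO2e, SOC gain 300 kg CO2e, yield 8000 kg d.m.
print(f"GHG_fp = {ghg_footprint(1500, 1200, 300, 8000):.2f} kg CO2e per kg d.m.")
```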
With the development of automation and informatization in the steelmaking industry, the human brain gradually fails to cope with the increasing amount of data generated during the steelmaking process. Machine learning technology provides a new method, beyond production experience and metallurgical principles, for dealing with large amounts of data. The application of machine learning in the steelmaking process has become a research hotspot in recent years. This paper provides an overview of the applications of machine learning in steelmaking process modeling involving hot metal pretreatment, primary steelmaking, secondary refining, and some other aspects. The three most frequently used machine learning algorithms in steelmaking process modeling are the artificial neural network, the support vector machine, and case-based reasoning, accounting for 56%, 14%, and 10% of applications, respectively. Data collected in steelmaking plants are frequently faulty; thus, data processing, especially data cleaning, is crucially important to the performance of machine learning models. The detection of variable importance can be used to optimize process parameters and guide production. Machine learning is used in hot metal pretreatment modeling mainly for endpoint S content prediction. The prediction of endpoint element compositions and process parameters is widely investigated in primary steelmaking. Machine learning is used in secondary refining modeling mainly for ladle furnace, Ruhrstahl–Heraeus, vacuum degassing, argon oxygen decarburization, and vacuum oxygen decarburization processes. Further development of machine learning in steelmaking process modeling can be realized through additional efforts in the construction of data platforms, the industrial transfer of research achievements to the practical steelmaking process, and the improvement of the universality of machine learning models.
Modeling and attitude control methods for a satellite with a large deployable antenna are studied in the present paper. Firstly, to reduce the model dimension, three dynamic models for the deploying process are developed, built with the methods of multi-rigid-body dynamics, hybrid coordinates and substructures. Then an attitude control method suitable for the deploying process is proposed, which can keep stability under any dynamical parameter variation. Subsequently, this attitude control is optimized to minimize attitude disturbance during the deploying process. The simulation results show that this attitude control method can keep stability and maintain proper attitude variation during the deploying process, which indicates that it is suitable for practical applications.
Many applications of principal component analysis (PCA) can be found in dimensionality reduction, but the linear PCA method is not well suited to nonlinear chemical processes. A new PCA method based on an improved input training neural network (IT-NN) is proposed for nonlinear system modelling in this paper. A momentum factor and an adaptive learning rate are introduced into the learning algorithm to improve the training speed of the IT-NN. In contrast to the auto-associative neural network (ANN), the IT-NN has fewer hidden layers and higher training speed. The effectiveness is illustrated through an experimental comparison of the IT-NN with linear PCA and the ANN. Moreover, the IT-NN is combined with an RBF neural network (RBF-NN) to model the yields of ethylene and propylene in the naphtha pyrolysis system. The illustrative example and practical application show that the IT-NN combined with the RBF-NN is an effective method for nonlinear chemical process modelling.
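The two training tweaks named above can be sketched in isolation: a momentum term that accumulates past gradients, and a learning rate that grows when the loss improves and shrinks when it worsens. The quadratic objective below is a stand-in for the IT-NN reconstruction loss.

```python
# A sketch of gradient descent with a momentum factor and a simple adaptive
# learning rate (grow on improvement, shrink on regression).
import numpy as np

def loss_and_grad(w):
    # stand-in objective: minimum at w = (1, -2)
    target = np.array([1.0, -2.0])
    return float(np.sum((w - target) ** 2)), 2.0 * (w - target)

w = np.zeros(2)
velocity = np.zeros(2)
lr, beta = 0.1, 0.9                           # learning rate, momentum factor
prev_loss = float("inf")

for step in range(50):
    loss, grad = loss_and_grad(w)
    lr *= 1.05 if loss < prev_loss else 0.5   # adaptive learning rate
    prev_loss = loss
    velocity = beta * velocity - lr * grad    # momentum accumulates gradients
    w = w + velocity

print(f"w = {np.round(w, 3)}, loss = {loss_and_grad(w)[0]:.2e}")
```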