To equip data-driven dynamic chemical process models with strong interpretability, we develop a light attention–convolution–gate recurrent unit (LACG) architecture with three sub-modules—a basic module, a brand-new light attention module, and a residue module—that are specially designed to learn the general dynamic behavior, transient disturbances, and other input factors of chemical processes, respectively. Combined with a hyperparameter optimization framework, Optuna, the effectiveness of the proposed LACG is tested by distributed control system data-driven modeling experiments on the discharge flowrate of an actual deethanization process. The LACG model provides significant advantages in prediction accuracy and model generalization compared with other models, including the feedforward neural network, convolutional neural network, long short-term memory (LSTM), and attention-LSTM. Moreover, compared with the simulation results of a deethanization model built using Aspen Plus Dynamics V12.1, the LACG parameters are demonstrated to be interpretable, and more details on the variable interactions can be observed from the model parameters in comparison with the traditional interpretable model attention-LSTM. This contribution enriches interpretable machine learning knowledge and provides a reliable method with high accuracy for actual chemical process modeling, paving a route to intelligent manufacturing.
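The abstract pairs the LACG network with Optuna for hyperparameter search. As a rough illustration of how such a search is typically wired up, the sketch below tunes a few hypothetical recurrent-model hyperparameters (hidden size, learning rate, input window length) against a placeholder validation-error function; the LACG model itself and its training loop are not reproduced here.

```python
import optuna

def train_and_validate(hidden_size, learning_rate, window):
    # Placeholder for training a recurrent model (e.g., an LSTM-like network) on
    # DCS data and returning its validation error; replace with a real routine.
    return (hidden_size - 64) ** 2 * 1e-4 + (learning_rate - 1e-3) ** 2 + abs(window - 30) * 1e-3

def objective(trial):
    hidden_size = trial.suggest_int("hidden_size", 16, 256)
    learning_rate = trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True)
    window = trial.suggest_int("window", 5, 60)   # length of the input history
    return train_and_validate(hidden_size, learning_rate, window)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```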
Clinical practice guidelines (CPGs) contain evidence-based and economically reasonable medical treatment processes. Executable medical treatment processes in healthcare information systems can assist the treatment processes. To this end, business process modeling technologies have been exploited to model medical treatment processes. However, medical treatment processes are usually flexible and knowledge-intensive. To reduce the effort in modeling, we summarize several treatment patterns (i.e., frequent behaviors in medical treatment processes in CPGs), and represent them by three process modeling languages (i.e., BPMN, DMN, and CMMN). Based on the summarized treatment patterns, we propose a pattern-based integrated framework for modeling medical treatment processes. A modeling platform is implemented to support the use of treatment patterns, by which the feasibility of our approach is validated. An empirical analysis is discussed based on the coverage rates of treatment patterns. Feedback from interviewed physicians in a Chinese hospital shows that executable medical treatment processes of CPGs provide a convenient way to obtain guidance, thus assisting daily work for medical workers.
Low pressure chemical vapor deposition (LPCVD) is one of the most important processes in semiconductor manufacturing. However, the spatial distribution of internal temperature and the extremely small number of samples make it hard to build a good-quality model of this batch process. Besides, due to the properties of this process, the reliability of the model must be taken into consideration when optimizing the manipulated variables (MVs). In this work, an optimal design strategy based on the self-learning Gaussian process model (GPM) is proposed to control this kind of spatial batch process. The GPM is utilized as the internal model to predict the thicknesses of thin films on all spatially distributed wafers using the limited data. Unlike the conventional model-based design, the uncertainties of the predictions provided by the GPM are taken into consideration to guide the optimal design of the manipulated variables so that the design can be more prudent. Besides, the GPM is also actively enhanced using as little data as possible based on the predictive uncertainties. The effectiveness of the proposed strategy is successfully demonstrated on an LPCVD process.
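The strategy described above leans on the predictive uncertainty of a Gaussian process model both to make the design cautious and to decide where new data are most valuable. A minimal sketch of that second idea, using scikit-learn's GaussianProcessRegressor on synthetic data (the kernel, inputs, and acquisition rule here are illustrative assumptions, not the paper's actual setup):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
# Synthetic stand-in for (recipe setting -> film thickness) data from a few runs
X_train = rng.uniform(0.0, 1.0, size=(8, 1))
y_train = np.sin(6.0 * X_train[:, 0]) + 0.05 * rng.standard_normal(8)

gpm = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), alpha=1e-4)
gpm.fit(X_train, y_train)

# Candidate manipulated-variable settings
X_cand = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
mean, std = gpm.predict(X_cand, return_std=True)

# Self-learning step: query the setting where the model is least certain
next_x = X_cand[np.argmax(std)]
print("most uncertain candidate:", next_x, "predictive std:", std.max())
```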
In order to study the major performance indicators of the twin-rotor piston engine (TRPE), Matlab/Simulink was used to simulate the mathematical models of its thermodynamic processes. Taking into account the characteristics of the working processes in the TRPE, the corresponding differential equations were established and then simplified using the periodic features of the TRPE. Finally, the major boundary conditions were figured out. The changing trends of the mass, pressure and temperature of the working fluid in the working chamber during a complete engine cycle were presented. The simulation results are consistent with the trends of an actual working cycle in the TRPE, which indicates that the simulation method is feasible. Once the pressure in the working chamber is calculated, all the performance parameters of the TRPE can be obtained. The major performance indicators, such as the indicated mean effective pressure, the power-to-weight ratio and the volume power, are also acquired. Compared with three different types of conventional engines, the TRPE has a higher utilization ratio of cylinder volume, a higher power-to-weight ratio and a more compact structure. This indicates that the TRPE is superior to conventional engines.
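The abstract traces pressure and temperature in a working chamber over an engine cycle by integrating the governing differential equations. As a much-simplified, hedged illustration of that kind of single-chamber calculation (a closed chamber, ideal gas, and a polytropic compression–expansion law are assumed here; the TRPE model itself also handles mass exchange and combustion, which this sketch omits):

```python
import numpy as np

n_poly = 1.35          # assumed polytropic exponent
p0, T0 = 1.0e5, 300.0  # initial pressure [Pa] and temperature [K]
V_max, V_min = 5.0e-4, 5.0e-5  # assumed chamber volume limits [m^3]

theta = np.linspace(0.0, 2.0 * np.pi, 361)                 # rotor angle over one cycle
V = V_min + 0.5 * (V_max - V_min) * (1.0 + np.cos(theta))  # assumed volume law

# Closed-chamber polytropic relations: p*V^n = const, T*V^(n-1) = const
p = p0 * (V[0] / V) ** n_poly
T = T0 * (V[0] / V) ** (n_poly - 1.0)

print(f"peak pressure {p.max()/1e5:.1f} bar, peak temperature {T.max():.0f} K")
```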
This paper focuses on resolving the identification problem of a neuro-fuzzy model (NFM) applied in batch processes. A hybrid learning algorithm is introduced to identify the proposed NFM based on the idea of an auxiliary error model and an identification principle built on the probability density function (PDF). The main contribution is that the NFM parameter updating approach is transformed into shape control of the PDF of the modeling error. More specifically, a virtual adaptive control system is constructed with the aid of the auxiliary error model, and the PDF shape control idea is then used to tune the NFM parameters so that the PDF of the modeling error is controlled to follow a targeted PDF, which is a Gaussian or uniform distribution. Examples are used to validate the applicability of the proposed method, and comparisons are made with minimum mean square error based approaches.
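The core idea above is to drive the distribution of the modeling error toward a target PDF rather than only minimizing its mean square. A rough sketch of one way to score that objective (a kernel density estimate of the error compared against a target Gaussian via a discretized Kullback–Leibler divergence; the actual paper tunes the NFM parameters through a virtual adaptive control loop, which is not reproduced here):

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

def pdf_shape_loss(errors, target_sigma=0.1, grid=np.linspace(-1.0, 1.0, 401)):
    """Discretized KL divergence between the error PDF and a zero-mean Gaussian target."""
    kde = gaussian_kde(errors)
    p = np.clip(kde(grid), 1e-12, None)                           # estimated error PDF
    q = np.clip(norm.pdf(grid, 0.0, target_sigma), 1e-12, None)   # target PDF
    p, q = p / np.trapz(p, grid), q / np.trapz(q, grid)
    return np.trapz(p * np.log(p / q), grid)

# Example: errors produced by some candidate parameter set of the model
rng = np.random.default_rng(1)
errors = 0.3 * rng.standard_normal(500) + 0.1   # stand-in modeling errors
print("shape-control loss:", pdf_shape_loss(errors))
```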
The gauge extension of the standard model with the U(1)B-L+xy symmetry predicts the existence of a light gauge boson Z′ with small couplings to ordinary fermions. We discuss its contributions to the muon anomalous magnetic moment a_μ. Taking account of the constraints on the relevant free parameters, we further calculate the contributions of the light gauge boson Z′ to the Higgs-strahlung processes e+e− → ZH and e+e− → Z′H.
Compaction processes are one of the most important parts of powder forming technology. The main applications are focused on pieces for the automotive, aeronautic, electric and electronic industries. The main goals of the compaction processes are to obtain a compact that meets the geometrical requirements, without cracks, and with a uniform distribution of density. The design of such processes consists, essentially, in determining the sequence and relative displacements of the die and punches in order to achieve these goals. A. B. Khoei presented a general framework for the finite element simulation of powder forming processes based on the following aspects: a large displacement formulation, centred on a total and updated Lagrangian formulation; an adaptive finite element strategy based on error estimates and automatic remeshing techniques; a cap model based on a hardening rule for modelling the highly non-linear behaviour of the material; and the use of an efficient contact algorithm in the context of an interface element formulation. In these references, the non-linear behaviour of the powder was adequately described by the cap plasticity model. However, it suffers from a serious deficiency when the stress point reaches a yield surface. In the flow theory of plasticity, the transition from an elastic state to an elasto-plastic state appears more or less abruptly. For powder materials it is very difficult to define the location of the yield surface, because there is no distinct transition from elastic to elastic-plastic behaviour. Results of experimental tests on some hard metal powders show that plastic effects begin immediately upon loading. In such materials the domain of the yield surface would collapse to a point, making the direction of the plastic increment indeterminate, because all directions are normal to a point. Thus, classical plasticity theory cannot deal with such materials, and an advanced constitutive theory is necessary. In the present paper, the constitutive equations of powder materials are discussed via an endochronic theory of plasticity. This theory provides a unified point of view for describing the elastic-plastic behaviour of the material, since it places no requirement for a yield surface and a 'loading function' to distinguish between loading and unloading. The endochronic theory of plasticity has been applied to a number of metallic materials, concrete and sand, but to the knowledge of the authors, no numerical scheme of the model has been applied to powder materials. In the present paper, a new approach is developed based on an endochronic rate-independent, density-dependent plasticity model for describing the isothermal deformation behaviour of metal powder at low homologous temperature. Although the concept of a yield surface has not been explicitly assumed in the endochronic theory, it is shown that the cone-cap plasticity yield surface (Fig. 1), which is the most commonly used plasticity model for describing the behaviour of powder materials, can easily be derived as a special case of the proposed endochronic theory.
Fig. 1. Trace of the cone-cap yield function on the meridian plane for different relative densities.
As large deformations are observed in the powder compaction process, a hypoelastic-plastic formulation is developed in the context of finite deformation plasticity. The constitutive equations are stated in an unrotated frame of reference, which greatly simplifies the endochronic constitutive relation in finite plasticity. The constitutive equations of the endochronic theory and their numerical integration are established, and procedures for determining the material parameters of the model are demonstrated. Finally, the numerical schemes are examined for efficiency in the modelling of a tip-shaped component, as shown in Fig. 2.
Fig. 2. A shaped tip component: a) geometry, boundary conditions and finite element mesh; b) density distribution at the final stage of compaction.
The need to transform the means of supporting project financing in commercial banks is demonstrated. The business processes of project management are analyzed and modeled using a context chart and a decomposition chart, which makes it possible to describe the main stages of project financing. Using programming tools, a business application for project management is created, which will support operational assessment in the selection of proposed projects.
Accurate quantification of life-cycle greenhouse gas (GHG) footprints (GHG_fp) for a crop cultivation system is urgently needed to address the conflict between food security and global warming mitigation. In this study, the hydrobiogeochemical model CNMM-DNDC was validated with in situ observations from maize-based cultivation systems at the sites of Yongji (YJ, China), Yanting (YT, China), and Madeya (MA, Kenya), subject to temperate, subtropical, and tropical climates, respectively, and updated to enable life-cycle GHG_fp estimation. The model validation provided satisfactory simulations of multiple soil variables, crop growth, and emissions of GHGs and reactive nitrogen gases. The locally conventional management practices resulted in GHG_fp values of 0.35 (0.09–0.53 at the 95% confidence interval), 0.21 (0.01–0.73), 0.46 (0.27–0.60), and 0.54 (0.21–0.77) kg CO₂e per kg d.m. (d.m. for dry matter) for maize–wheat rotation at YJ and YT, and for maize–maize and maize–Tephrosia rotations at MA, respectively. YT's smallest GHG_fp was attributed to its lower off-farm GHG emissions than YJ, although its soil organic carbon (SOC) storage and maize yield were slightly lower than those of YJ. MA's highest SOC loss and low yield in shifting cultivation for the maize–Tephrosia rotation contributed to its highest GHG_fp. Management practices for maize cultivation at these sites could be optimized by a combination of synthetic and organic fertilizer(s) while incorporating 50%–100% of crop residues. Further evaluation of the updated CNMM-DNDC is needed for different crops at site and regional scales to confirm its worldwide applicability in quantifying GHG_fp and optimizing management practices for achieving multiple sustainability goals.
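The footprint values above are yield-normalized life-cycle totals. A simplified bookkeeping sketch of that kind of calculation is shown below; the component terms and example numbers are illustrative assumptions, not values from the study.

```python
def ghg_footprint(soil_co2e, off_farm_co2e, delta_soc_co2e, grain_yield_dm):
    """Life-cycle GHG footprint in kg CO2e per kg dry-matter yield.

    soil_co2e      : field GHG emissions over the rotation, kg CO2e per ha
    off_farm_co2e  : emissions embodied in fertilizer, fuel, etc., kg CO2e per ha
    delta_soc_co2e : change in soil organic carbon stock, kg CO2e per ha
                     (a gain in SOC reduces the footprint)
    grain_yield_dm : dry-matter yield, kg per ha
    """
    net_emission = soil_co2e + off_farm_co2e - delta_soc_co2e
    return net_emission / grain_yield_dm

# Purely illustrative numbers
print(ghg_footprint(soil_co2e=1800.0, off_farm_co2e=2400.0,
                    delta_soc_co2e=600.0, grain_yield_dm=12000.0))  # -> 0.3 kg CO2e/kg d.m.
```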
With the rising complexity of modern civil aircraft, both academia and industry have shown strong interest in MBSE (Model-Based System Engineering). However, following the application of MBSE, the duration of the design phase exceeded expectations. This paper conducted a survey of the relevant participants involved in the design, which revealed that a lack of proper process management is a critical issue. The current MBSE methodology does not provide clear guidelines for monitoring, controlling, and managing processes, which are crucial for both efficiency and effectiveness. To address this, the present paper introduces an improved Process Model (PM) within the MBSE framework for civil aircraft design. This improved model incorporates three new Management Blocks (MBs): the Progress Management Block (PMB), the Review Management Block (RMB), and the Configuration Management Block (CMB), developed based on the Capability Maturity Model Integration (CMMI). These additions aim to streamline the design process and better align it with engineering practices. The upgraded MBSE method with the improved PM offers a more structured approach to managing complex aircraft design projects, and a case study is conducted to validate its potential to reduce timelines and enhance overall project outcomes.
The textile industry, while creating material wealth, also exerts a significant impact on the environment. Particularly in the textile manufacturing phase, which is the most energy-intensive phase of the product lifecycle, the problem of high energy usage is increasingly notable. Nevertheless, current analyses of carbon emissions in textile manufacturing emphasize the dynamic temporal characteristics while failing to adequately consider critical information such as material flows and energy consumption. A carbon emission analysis method based on a holographic process model (HPM) is proposed to address these issues. First, the system boundary of textile manufacturing is defined, and the characteristics of carbon emissions are analyzed. Next, an HPM based on the object-centric Petri net (OCPN) is constructed, and simulation experiments are conducted on three different scenarios in textile manufacturing. Subsequently, the constructed HPM is utilized to achieve a multi-perspective analysis of carbon emissions. Finally, the feasibility of the method is verified using the production data of pure cotton products from a textile manufacturing enterprise. The results indicate that this method can analyze the impact of various factors on the carbon emissions of pure cotton product production, and by applying targeted optimization strategies, carbon emissions have been reduced by nearly 20%. This contributes to propelling the textile manufacturing industry toward sustainable development.
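The HPM above attaches material and energy information to an object-centric Petri net so that carbon emissions can be traced as the process executes. The toy sketch below shows the basic mechanism with an ordinary (not object-centric) Petri net: each transition carries an assumed emission factor, and firing the net accumulates a total. Place and transition names and emission values are invented for illustration.

```python
# Toy Petri net: places hold token counts; transitions move tokens and emit CO2e.
places = {"raw_cotton": 3, "yarn": 0, "fabric": 0, "finished": 0}

transitions = {
    #  name        consumes            produces          kg CO2e per firing (assumed)
    "spinning":  ({"raw_cotton": 1},  {"yarn": 1},      2.1),
    "weaving":   ({"yarn": 1},        {"fabric": 1},    1.4),
    "finishing": ({"fabric": 1},      {"finished": 1},  0.9),
}

def enabled(pre):
    return all(places[p] >= n for p, n in pre.items())

total_co2e = 0.0
fired = True
while fired:
    fired = False
    for name, (pre, post, emission) in transitions.items():
        if enabled(pre):
            for p, n in pre.items():
                places[p] -= n
            for p, n in post.items():
                places[p] += n
            total_co2e += emission
            fired = True

print(places, f"total emissions: {total_co2e:.1f} kg CO2e")
```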
This study compared the predictive performance and processing speed of an artificial neural network (ANN) and a hybrid of a numerical reservoir simulation (NRS) and artificial neural network (NRS-ANN) in estimating the oil production rate of the ZH86 reservoir block under waterflood recovery. The historical input variables—reservoir pressure, reservoir pore volume containing hydrocarbons, reservoir pore volume containing water, and reservoir water injection rate—were used as inputs for the ANN models. To create the NRS-ANN hybrid models, 314 data sets extracted from the NRS model, which included reservoir pressure, reservoir pore volume containing hydrocarbons, reservoir pore volume containing water, and reservoir water injection rate, were used. The output of the models was the historical oil production rate (HOPR, in m³ per day) recorded from the ZH86 reservoir block. Models were developed using MATLAB R2021a and trained with 25 models in three replicate conditions (2, 4 and 6), each for 1000 epochs. A comparative analysis indicated that, for all 25 models, the ANN outperformed the NRS-ANN in terms of processing speed and prediction performance. The ANN models achieved average R² and MAE values of 0.8433 and 8.0964 m³/day, respectively, while the NRS-ANN hybrid models achieved average R² and MAE values of 0.7828 and 8.2484 m³/day, respectively. In addition, the ANN models achieved processing speeds of 49 epochs/sec, 32 epochs/sec, and 24 epochs/sec after 2, 4, and 6 replicates, respectively, whereas the NRS-ANN hybrid models achieved lower average processing speeds of 45 epochs/sec, 23 epochs/sec and 20 epochs/sec. The ANN optimal model also outperforms the NRS-ANN optimal model in terms of both processing speed and accuracy: it achieved a speed of 336.44 epochs/sec, compared with 52.16 epochs/sec for the NRS-ANN hybrid optimal model. The ANN optimal model achieved lower RMSE and MAE values of 7.9291 m³/day and 5.3855 m³/day on the validation dataset, compared with 13.6821 m³/day and 9.2047 m³/day for the hybrid NRS-ANN optimal model. The study also showed that the ANN optimal model consistently achieved higher R² values of 0.9472, 0.9284 and 0.9316 on the training, test and validation data sets, whereas the NRS-ANN hybrid optimal model yielded lower R² values of 0.8030, 0.8622 and 0.7776 for the training, testing and validation datasets. The study showed that ANN models are a more effective and reliable tool, as they balance both processing speed and accuracy in estimating the oil production rate of the ZH86 reservoir block under the waterflooding recovery method.
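The comparison above rests on three error metrics. For readers who want to reproduce that kind of scoring, a small helper is sketched below (plain NumPy; the arrays shown are dummy data, not the ZH86 measurements).

```python
import numpy as np

def regression_scores(y_true, y_pred):
    """Return R², RMSE and MAE for one set of predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    resid = y_true - y_pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean(resid ** 2))
    mae = np.mean(np.abs(resid))
    return r2, rmse, mae

# Dummy example: measured vs. predicted oil rate in m^3/day
print(regression_scores([120, 135, 128, 140], [118, 133, 131, 137]))
```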
To investigate the process of information technology (IT) impacts on firm competitiveness, an integrated process model of IT impacts on firm competitiveness is brought forward based on the process-oriented view, the resource-based view and the complementary resource view; it comprises an IT conversion process, an information system (IS) adoption process, an IS use process and a competition process. The application capability of IT plays the critical role, determining the efficiency and effectiveness of the aforementioned four processes. The process model of IT impacts on firm competitiveness can also be used to explain why, under what situations and how IT can generate positive organizational outcomes, as well as to provide theoretical bases for further empirical study.
Workflow management is an important aspect of CSCW at present. The elementary knowledge of workflow processes is introduced, a Petri net based process modeling methodology and basic definitions are provided, and the analysis and verification of the structural and behavioral correctness of workflow processes are discussed. Finally, an algorithm for the verification of process definitions is proposed.
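Verification of behavioral correctness for a workflow net is usually phrased over its reachable markings: from the initial marking, every reachable marking should still be able to complete (reach the final marking), and no transition should be dead. The sketch below does a brute-force reachability check for a tiny, invented workflow net; it illustrates the idea and is not the algorithm proposed in the paper.

```python
from collections import deque

# Tiny workflow net: transition -> (input places, output places); 1-safe net assumed
NET = {
    "register": ({"start"}, {"a"}),
    "approve":  ({"a"},     {"b"}),
    "reject":   ({"a"},     {"b"}),
    "archive":  ({"b"},     {"end"}),
}
INITIAL, FINAL = frozenset({"start"}), frozenset({"end"})

def successors(marking):
    for t, (pre, post) in NET.items():
        if pre <= marking:                      # transition enabled
            yield t, frozenset((marking - pre) | post)

def reachable(start):
    seen, queue = {start}, deque([start])
    while queue:
        m = queue.popleft()
        for _, nxt in successors(m):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

markings = reachable(INITIAL)
# Option to complete: from every reachable marking, the final marking is reachable.
ok_completion = all(FINAL in reachable(m) for m in markings)
# No dead transitions: every transition is enabled in some reachable marking.
ok_no_dead = all(any(pre <= m for m in markings) for pre, _ in NET.values())
print("option to complete:", ok_completion, "| no dead transitions:", ok_no_dead)
```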
To achieve an on-demand and dynamic composition model of inter-organizational business processes, a new approach to business process modeling and verification is introduced using the pi-calculus theory. A new business process model that is multi-role, multi-dimensional, integrated and dynamic is proposed, relying on inter-organizational collaboration. Compatible with the traditional linear sequence model, the new model is an M × N multi-dimensional mesh, and provides horizontal and vertical formal descriptions of the collaborative business process model. Finally, the pi-calculus theory is utilized to verify the deadlocks, livelocks and synchronization of the example models. The result shows that the proposed approach is efficient and applicable in inter-organizational business process modeling.
Along with the extensive use of workflow, analysis methods to verify the correctness of workflows are becoming more and more important. In this paper, we exploit a Petri net based verification method for workflow process models, which deals with the verification of workflows and finds potential errors in the process design. Additionally, an efficient verification algorithm is given.
Accurately simulating the soil nitrogen (N) cycle is crucial for assessing food security and resource utilization efficiency. The accuracy of model predictions relies heavily on model parameterization. The sensitivity and uncertainty of the simulations of the soil N cycle of the winter wheat–summer maize rotation system in the North China Plain (NCP) with respect to the parameters were analyzed. First, the N module in the Vegetation Interface Processes (VIP) model was expanded to capture the dynamics of the soil N cycle, calibrated with field measurements at three ecological stations from 2000 to 2015. Second, the Morris and Sobol algorithms were adopted to identify the sensitive parameters that impact soil nitrate stock, denitrification rate, and ammonia volatilization rate. Finally, the shuffled complex evolution algorithm developed at the University of Arizona (SCE-UA) was used to optimize the selected sensitive parameters to improve prediction accuracy. The results showed that the sensitive parameters related to soil nitrate stock included the potential nitrification rate, the Michaelis constant, the microbial C/N ratio, and the slow humus C/N ratio; the sensitive parameters related to denitrification rate were the potential denitrification rate, the Michaelis constant, and the N₂O production rate; and the sensitive parameters related to ammonia volatilization rate included the coefficient of ammonia volatilization exchange and the potential nitrification rate. Based on the optimized parameters, prediction efficiency was notably increased, with the highest coefficient of determination being approximately 0.8. Moreover, the average relative interval lengths at the 95% confidence level for soil nitrate stock, denitrification rate, and ammonia volatilization rate were 11.92, 0.008, and 4.26, respectively, and the percentages of coverage of the measured values within the 95% confidence interval were 68%, 86%, and 92%, respectively. By identifying the sensitive parameters related to soil N, the expanded VIP model optimized by the SCE-UA algorithm can effectively simulate the dynamics of soil nitrate stock, denitrification rate, and ammonia volatilization rate in the NCP.
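Morris screening of the kind used above ranks parameters by the mean and spread of their elementary effects on a model output. A minimal sketch is given below, assuming the SALib package provides the Morris sampler and analyzer; the parameter names, bounds, and the toy model are stand-ins for the VIP nitrogen-module setup, which is not reproduced here.

```python
import numpy as np
from SALib.sample.morris import sample as morris_sample
from SALib.analyze.morris import analyze as morris_analyze

# Illustrative parameters and bounds (not the VIP model's actual values)
problem = {
    "num_vars": 3,
    "names": ["potential_nitrification_rate", "michaelis_constant", "microbial_CN_ratio"],
    "bounds": [[0.1, 2.0], [0.5, 10.0], [5.0, 15.0]],
}

def toy_model(x):
    # Stand-in for a simulated soil nitrate stock responding to the parameters
    return x[0] * 20.0 + 50.0 / x[1] + 0.5 * x[2] ** 1.2

X = morris_sample(problem, N=100, num_levels=4)
Y = np.array([toy_model(row) for row in X])
Si = morris_analyze(problem, X, Y, num_levels=4)
for name, mu_star, sigma in zip(problem["names"], Si["mu_star"], Si["sigma"]):
    print(f"{name}: mu* = {mu_star:.2f}, sigma = {sigma:.2f}")
```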
Processes supported by process-aware information systems are subject to continuous and often subtle changes due to evolving operational, organizational, or regulatory factors. These changes, referred to as incremental concept drift, gradually alter the behavior or structure of processes, making their detection and localization a challenging task. Traditional process mining techniques frequently assume process stationarity and are limited in their ability to detect such drift, particularly from a control-flow perspective. The objective of this research is to develop an interpretable and robust framework capable of detecting and localizing incremental concept drift in event logs, with a specific emphasis on the structural evolution of control-flow semantics in processes. We propose DriftXMiner, a control-flow-aware hybrid framework that combines statistical, machine learning, and process model analysis techniques. The approach comprises three key components: (1) a Cumulative Drift Scanner that tracks directional statistical deviations to detect early drift signals; (2) a Temporal Clustering and Drift-Aware Forest Ensemble (DAFE) to capture distributional and classification-level changes in process behavior; and (3) Petri net-based process model reconstruction, which enables the precise localization of structural drift using transition deviation metrics and replay fitness scores. Experimental validation on the BPI Challenge 2017 event log demonstrates that DriftXMiner effectively identifies and localizes gradual and incremental process drift over time. The framework achieves a detection accuracy of 92.5%, a localization precision of 90.3%, and an F1-score of 0.91, outperforming competitive baselines such as CUSUM+Histograms and ADWIN+Alpha Miner. Visual analyses further confirm that identified drift points align with transitions in control-flow models and behavioral cluster structures. DriftXMiner offers a novel and interpretable solution for incremental concept drift detection and localization in dynamic, process-aware systems. By integrating statistical signal accumulation, temporal behavior profiling, and structural process mining, the framework enables fine-grained drift explanation and supports adaptive process intelligence in evolving environments. Its modular architecture supports extension to streaming data and real-time monitoring contexts.
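The first component above accumulates directional deviations until they cross a threshold, which is essentially a CUSUM-style test. A minimal sketch of that mechanism on a scalar process statistic (e.g., a per-window fitness or activity frequency) is given below; the thresholds, reference window, and data stream are illustrative assumptions rather than DriftXMiner's actual configuration.

```python
import numpy as np

def cusum_drift_points(values, reference_len=20, k=0.5, h=5.0):
    """Return indices where the positive/negative cumulative sums exceed threshold h.

    k is the slack (in standard deviations) subtracted before accumulating,
    so only sustained deviations from the reference mean build up.
    """
    ref = np.asarray(values[:reference_len], float)
    mu, sigma = ref.mean(), ref.std() + 1e-12
    s_pos = s_neg = 0.0
    drift_points = []
    for i, v in enumerate(values[reference_len:], start=reference_len):
        z = (v - mu) / sigma
        s_pos = max(0.0, s_pos + z - k)
        s_neg = max(0.0, s_neg - z - k)
        if s_pos > h or s_neg > h:
            drift_points.append(i)
            s_pos = s_neg = 0.0          # restart after signaling a drift
    return drift_points

# Illustrative stream: a per-window statistic that slowly drifts upward after index 60
rng = np.random.default_rng(7)
stream = np.concatenate([rng.normal(1.0, 0.1, 60),
                         rng.normal(1.0, 0.1, 60) + np.linspace(0, 0.6, 60)])
print(cusum_drift_points(stream))
```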
With the development of automation and informatization in the steelmaking industry, the human brain gradually fails to cope with the increasing amount of data generated during the steelmaking process. Machine learning technology provides a new method, beyond production experience and metallurgical principles, for dealing with large amounts of data. The application of machine learning in the steelmaking process has become a research hotspot in recent years. This paper provides an overview of the applications of machine learning in steelmaking process modeling, involving hot metal pretreatment, primary steelmaking, secondary refining, and some other aspects. The three most frequently used machine learning algorithms in steelmaking process modeling are the artificial neural network, the support vector machine, and case-based reasoning, with proportions of 56%, 14%, and 10%, respectively. Data collected in steelmaking plants are frequently faulty; thus, data processing, especially data cleaning, is crucially important to the performance of machine learning models. The detection of variable importance can be used to optimize the process parameters and guide production. Machine learning is used in hot metal pretreatment modeling mainly for endpoint S content prediction. Predictions of the endpoints of element compositions and of the process parameters are widely investigated in primary steelmaking. Machine learning is used in secondary refining modeling mainly for ladle furnaces, Ruhrstahl–Heraeus, vacuum degassing, argon oxygen decarburization, and vacuum oxygen decarburization processes. Further development of machine learning in steelmaking process modeling can be realized through additional efforts in the construction of data platforms, the industrial transformation of research achievements into the practical steelmaking process, and the improvement of the universality of machine learning models.
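Two of the practical points raised above—cleaning faulty plant data and inspecting variable importance—can be illustrated with a short scikit-learn pipeline on synthetic data; the feature names, fault simulation, and the random-forest choice are assumptions for the sketch, not recommendations from the review.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "hot_metal_temp": rng.normal(1350, 25, n),   # degC, illustrative
    "oxygen_volume": rng.normal(5200, 300, n),   # Nm3, illustrative
    "scrap_ratio": rng.uniform(0.05, 0.25, n),
})
df["endpoint_carbon"] = (0.08 - 5e-6 * df["oxygen_volume"]
                         + 0.02 * df["scrap_ratio"] + rng.normal(0, 0.002, n))

# Simple data cleaning: drop obviously faulty records and out-of-range values
df.loc[rng.choice(n, 10, replace=False), "hot_metal_temp"] = np.nan   # simulated faults
clean = df.dropna()
clean = clean[clean["hot_metal_temp"].between(1250, 1450)]

# Variable importance from a tree ensemble fitted to the cleaned data
X, y = clean.drop(columns="endpoint_carbon"), clean["endpoint_carbon"]
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(X.columns, model.feature_importances_):
    print(f"{name}: importance {imp:.2f}")
```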
The mathematical model for online control of hot rolled steel cooling on the run-out table (ROT) was analyzed, and water cooling was found to be the main cooling mode for hot rolled steel. The calculation of the drop in strip temperature due to both water cooling and air cooling is summed up to obtain the change in the heat transfer coefficient. It is found that the learning coefficient of the heat transfer coefficient is the kernel coefficient of coiler temperature control (CTC) model tuning. To decrease the deviation between the calculated steel temperature and the measured one at the coiler entrance, a laminar cooling control self-learning strategy is used. Using data acquired in the field, the results of the self-learning model applied in the field were analyzed. The results show that the self-learning function is effective.
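The self-learning strategy mentioned above adjusts a learning coefficient on the heat transfer coefficient so that the calculated coiling temperature converges to the measured one. A generic, hedged sketch of such a correction update is shown below; the update law, gain, and bounds are illustrative assumptions, not the plant's actual CTC tuning rule.

```python
def update_learning_coefficient(k_old, t_measured, t_calculated, gain=0.3,
                                k_min=0.5, k_max=1.5):
    """Nudge the heat-transfer learning coefficient so the model tracks measurements.

    If the strip arrives at the coiler hotter than calculated, cooling was
    overestimated, so the coefficient is reduced (and vice versa).
    """
    relative_error = (t_measured - t_calculated) / t_calculated
    k_new = k_old * (1.0 - gain * relative_error)
    return min(max(k_new, k_min), k_max)

# Illustrative sequence of measured vs. calculated coiling temperatures (degC)
k = 1.0
for t_meas, t_calc in [(612, 596), (605, 598), (599, 600)]:
    k = update_learning_coefficient(k, t_meas, t_calc)
    print(f"updated learning coefficient: {k:.3f}")
```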