Accurate quantification of life-cycle greenhouse gas (GHG) footprints (GHG_fp) for a crop cultivation system is urgently needed to address the conflict between food security and global warming mitigation. In this study, the hydrobiogeochemical model CNMM-DNDC was validated with in situ observations from maize-based cultivation systems at the sites of Yongji (YJ, China), Yanting (YT, China), and Madeya (MA, Kenya), subject to temperate, subtropical, and tropical climates, respectively, and updated to enable life-cycle GHG_fp estimation. The model validation provided satisfactory simulations of multiple soil variables, crop growth, and emissions of GHGs and reactive nitrogen gases. The locally conventional management practices resulted in GHG_fp values of 0.35 (0.09–0.53 at the 95% confidence interval), 0.21 (0.01–0.73), 0.46 (0.27–0.60), and 0.54 (0.21–0.77) kg CO2e kg^(-1) d.m. (d.m.: dry matter) for maize–wheat rotation at YJ and YT, and for maize–maize and maize–Tephrosia rotations at MA, respectively. YT's smallest GHG_fp was attributed to its lower off-farm GHG emissions compared with YJ, though its soil organic carbon (SOC) storage and maize yield were slightly lower than those of YJ. MA's highest SOC loss and low yield in shifting cultivation for the maize–Tephrosia rotation contributed to its highest GHG_fp. Management practices of maize cultivation at these sites could be optimized by combining synthetic and organic fertilizer(s) while incorporating 50%–100% of crop residues. Further evaluation of the updated CNMM-DNDC is needed for different crops at site and regional scales to confirm its worldwide applicability in quantifying GHG_fp and optimizing management practices for achieving multiple sustainability goals. Funding: jointly supported by the National Key R&D Program of China (Grant No. 2022YFE0209200), the National Natural Science Foundation of China (Grant Nos. U22A20562, 42330607, and 41761144054), and the National Large Scientific and Technological Infrastructure "Earth System Science Numerical Simulator Facility" (Earth-Lab, https://cstr.cn/31134.02.EL).
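For readers who want the bookkeeping behind a footprint of this kind, the sketch below applies the standard life-cycle identity (on-farm plus off-farm emissions, minus a credit for SOC gain, divided by dry-matter yield). It is a generic illustration with hypothetical inputs and assumed AR6 GWP values, not the CNMM-DNDC calculation itself.

```python
# Minimal sketch of a life-cycle GHG footprint (GHG_fp) calculation.
# All input values below are hypothetical placeholders, not data from the study.

GWP_CH4, GWP_N2O = 27.0, 273.0   # assumed 100-year GWPs (AR6 values)

def ghg_footprint(co2_kg, ch4_kg, n2o_kg, off_farm_co2e_kg,
                  delta_soc_kg_c, yield_kg_dm):
    """Net life-cycle emissions (kg CO2e) per kg dry-matter yield.

    delta_soc_kg_c: change in soil organic carbon stock (positive = gain),
    converted to CO2e with the 44/12 molar-mass ratio.
    """
    on_farm = co2_kg + ch4_kg * GWP_CH4 + n2o_kg * GWP_N2O
    soc_credit = delta_soc_kg_c * 44.0 / 12.0
    return (on_farm + off_farm_co2e_kg - soc_credit) / yield_kg_dm

# Hypothetical per-hectare example:
print(ghg_footprint(co2_kg=500.0, ch4_kg=2.0, n2o_kg=3.0,
                    off_farm_co2e_kg=1800.0, delta_soc_kg_c=300.0,
                    yield_kg_dm=8000.0))   # -> kg CO2e per kg d.m.
```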
Renewable energies, including solar and wind, are intermittent, causing difficulty in connection to conventional power grids due to instability of output duty. Compressed air energy storage (CAES) in underground caverns has been considered a potential large-scale energy storage technology. In order to explore the gas injection characteristics of an underground cavern, a detailed thermodynamic model of the system is established in the process modelling software gPROMS. The four subsystem models, i.e., the compressor, heat exchanger, underground cavern storage, and expander, are connected with inlet–outlet equilibrium of flow rate/pressure/temperature to form an integrated CAES system model in gPROMS. The maximum air pressure and temperature in the cavern are examined to interrogate the critical condition of the cavern during the injection process. When analyzing the mass flow rate–pressure ratio relationship, it is found that under specified operating conditions, an increase in mass flow rate can lead to a higher pressure ratio. Compression power demand also escalates significantly with increasing mass flow rates, underscoring the system's energy-intensive nature. Additionally, the cooler outlet energy rate progressively decreases, becoming increasingly negative as the mass flow rate increases. These insights offer critical theoretical foundations for optimizing the practical efficiency of CAES. Funding: supported by the National Natural Science Foundation of China Excellent Young Scientists Fund Program; the Deep Earth Probe and Mineral Resources Exploration National Science and Technology Major Project (Grant No. 2024ZD1004105); the Shandong Excellent Young Scientists Fund Program (Overseas) (Grant No. 2022HWYQ-020); and the Shenzhen Science and Technology Program (Grant Nos. JCYJ20220530141016036 and GJHZ20240218113359001).
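The cavern sub-model during injection reduces to lumped mass and energy balances. The Python sketch below is a stand-in for the gPROMS cavern model under simplifying assumptions (ideal gas, uniform cavern state, constant wall conductance); every parameter value is assumed for illustration.

```python
# Sketch of the cavern sub-model during air injection: lumped mass and
# energy balances for an ideal gas with convective wall heat loss.
# A stand-in for the gPROMS cavern model; all parameters are assumed.
import numpy as np
from scipy.integrate import solve_ivp

R, cv, cp = 287.0, 718.0, 1005.0   # air gas constant and heat capacities, J/(kg K)
V = 1.0e5                          # cavern volume, m^3 (assumed)
hA = 5.0e4                         # wall heat-transfer conductance, W/K (assumed)
T_wall, T_in = 310.0, 350.0        # rock and injected-air temperatures, K (assumed)
mdot = 100.0                       # injection mass flow rate, kg/s (assumed)

def rhs(t, y):
    m, T = y
    # Energy balance: d(m*cv*T)/dt = mdot*cp*T_in - hA*(T - T_wall)
    dm = mdot
    dT = (mdot * cp * T_in - hA * (T - T_wall) - cv * T * dm) / (m * cv)
    return [dm, dT]

sol = solve_ivp(rhs, (0.0, 8 * 3600.0), [V * 1.2, 300.0], max_step=60.0)
m, T = sol.y
p = m * R * T / V                  # ideal-gas cavern pressure, Pa
print(f"peak pressure {p.max()/1e6:.2f} MPa, peak temperature {T.max():.1f} K")
```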
To equip data-driven dynamic chemical process models with strong interpretability, we develop a light attention–convolution–gate recurrent unit (LACG) architecture with three sub-modules (a basic module, a brand-new light attention module, and a residue module) that are specially designed to learn the general dynamic behavior, transient disturbances, and other input factors of chemical processes, respectively. Combined with a hyperparameter optimization framework, Optuna, the effectiveness of the proposed LACG is tested by distributed control system data-driven modeling experiments on the discharge flowrate of an actual deethanization process. The LACG model provides significant advantages in prediction accuracy and model generalization compared with other models, including the feedforward neural network, convolutional neural network, long short-term memory (LSTM), and attention-LSTM. Moreover, compared with the simulation results of a deethanization model built using Aspen Plus Dynamics V12.1, the LACG parameters are demonstrated to be interpretable, and more details on the variable interactions can be observed from the model parameters in comparison with the traditional interpretable model attention-LSTM. This contribution enriches interpretable machine learning knowledge and provides a reliable method with high accuracy for actual chemical process modeling, paving a route to intelligent manufacturing. Funding: supported by the National Natural Science Foundation of China (22122802, 22278044, and 21878028), the Chongqing Science Fund for Distinguished Young Scholars (CSTB2022NSCQ-JQX0021), and the Fundamental Research Funds for the Central Universities (2022CDJXY-003).
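The abstract does not give the exact wiring of the three sub-modules, so the PyTorch sketch below only illustrates the general pattern it describes: a convolutional front end, a lightweight single-head attention over time steps, a GRU, and a residue (skip) path from the raw inputs. Layer sizes, the framework choice, and the module connections are all assumptions.

```python
# Schematic sketch of a convolution + light attention + GRU regressor,
# loosely following the LACG description; the exact wiring of the basic,
# light-attention, and residue modules in the paper is assumed here.
import torch
import torch.nn as nn

class LACGSketch(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, hidden, kernel_size=3, padding=1)
        self.attn_score = nn.Linear(hidden, 1)        # "light" single-head attention
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.residue = nn.Linear(n_features, hidden)  # skip path for raw inputs
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, seq_len, n_features)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (B, T, H)
        w = torch.softmax(self.attn_score(h), dim=1)      # attention over time
        h = h * w                                         # reweight transients
        out, _ = self.gru(h)
        out = out[:, -1] + self.residue(x[:, -1])         # residue/skip module
        return self.head(out)

model = LACGSketch(n_features=8)
y = model(torch.randn(4, 32, 8))           # 4 windows, 32 time steps, 8 tags
print(y.shape)                             # torch.Size([4, 1])
```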
Basalt fiber reinforcement in polymer matrix composites is becoming more and more popular because of its environmental friendliness and mechanical qualities that are comparable to those of synthetic fibers. A basalt-fiber-reinforced vinyl ester matrix polymer composite with nano- and micro-sized silicon carbide (SiC) filler additions spanning from 2 wt% to 10 wt% was studied for its mechanical and wear properties. The application of an Artificial Neural Network (ANN) to correlate the filler composition with optimum mechanical properties is required due to the non-linear mechanical and tribological features of composites. The filler blend and composition of the composite are optimized using the hybrid model and a Genetic Algorithm (GA) to maximize the mechanical and wear-resistance properties. The predicted and tested ANN–GA optimal values obtained for the composite combination had tensile, flexural, impact resilience, hardness, and wear properties of 202.93 MPa, 501.67 MPa, 3.460 J/s, 43 HV, and 0.196 g, respectively, for its optimum combination of filler and reinforcement. It can be noted that the nano-sized SiC filler particles enhance most of the properties of the composite, which diversifies its applications. The predicted mechanical and wear values of the developed ANN–GA model were in close agreement with the experimental values, which validates the model.
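As a sketch of that ANN-surrogate-plus-GA pattern, the code below trains a small neural regressor on toy formulation data and then runs a bare-bones genetic loop (truncation selection, Gaussian mutation) over the filler variables. The feature set, response surface, and GA settings are assumptions, not the study's data.

```python
# Sketch of the ANN + GA pattern: a neural surrogate trained on composite test
# data, then a genetic algorithm searching filler content for maximum response.
# The data here are random placeholders; the real study used measured values.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(2.0, 10.0, size=(40, 2))           # [SiC wt%, nano fraction] (assumed)
y = -(X[:, 0] - 6.0) ** 2 + 5.0 * X[:, 1] + rng.normal(0, 0.1, 40)  # toy response

ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(X, y)

pop = rng.uniform(2.0, 10.0, size=(30, 2))         # initial GA population
for _ in range(50):
    fitness = ann.predict(pop)
    parents = pop[np.argsort(fitness)[-10:]]       # truncation selection
    kids = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.3, (30, 2))
    pop = np.clip(kids, 2.0, 10.0)                 # mutation + bound handling
best = pop[np.argmax(ann.predict(pop))]
print("GA-optimal filler setting (surrogate):", best)
```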
The performance and corresponding applications of polymer nanocomposites are highly dominated by the choice of base material, type of fillers, and the processing route. Carbon black-filled rubber composites (CRC) exemplify this, playing a crucial role in various industries. However, due to the complex interplay between these factors and the resulting properties, a simple yet accurate model to predict the mechanical properties of CRC, considering different rubbers, fillers, and processing techniques, is highly desired. This study aims to predict the dispersion of fillers in CRC and forecast the resultant mechanical properties of CRC by leveraging machine learning. We selected various rubbers and carbon black fillers, conducted mixing and vulcanizing, and subsequently measured filler dispersion and tensile performance. Based on 215 experimental data points, we evaluated the performance of different machine learning models. Our findings indicate that the manually designed deep neural network (DNN) models achieved superior results, exhibiting the highest coefficient of determination (R^2) values (>0.95). Shapley additive explanations (SHAP) analysis of the DNN models revealed the intricate relationship between the properties of CRC and process parameters. Moreover, based on the robust predictive capabilities of the DNN models, we can recommend or optimize the CRC fabrication process. This work provides valuable insights for employing machine learning in predicting polymer composite material properties and optimizing the fabrication of high-performance CRC. Funding: supported by the National Key R&D Program of China (No. 2022YFB3707303) and the National Natural Science Foundation of China (No. 52293471).
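The SHAP step can be reproduced in miniature with the model-agnostic Kernel explainer from the shap library, as sketched below on a stand-in network and random data of the same size (215 points); the real feature set and trained DNN are not reproduced here.

```python
# Sketch of the SHAP step: explain a trained network's predictions of a CRC
# property in terms of formulation/process inputs. Data are placeholders;
# the paper's DNN and 215-point dataset are not reproduced here.
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.random((215, 5))                    # e.g. filler loading, mixing time, ...
y = 3 * X[:, 0] + X[:, 1] * X[:, 2] + rng.normal(0, 0.05, 215)

dnn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=4000,
                   random_state=1).fit(X, y)

# Model-agnostic Kernel SHAP against a small background sample
explainer = shap.KernelExplainer(dnn.predict, shap.sample(X, 50))
phi = explainer.shap_values(X[:5])
print(np.round(phi, 3))                     # per-feature attributions, 5 samples
```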
The metastable retained austenite (RA) plays a significant role in the excellent mechanical performance of quenching and partitioning (Q&P) steels, while the volume fraction of RA (V_RA) is challenging to predict directly due to the complicated relationships between the chemical composition and process parameters (such as the quenching temperature, Qr). A Gaussian process regression model in machine learning was developed to predict V_RA, and the model accuracy was further improved by introducing a metallurgical parameter, the martensite fraction (fo), to accurately predict V_RA in Q&P steels. The developed machine learning model combined with Bayesian global optimization can serve as another selection strategy for the quenching temperature, and this strategy is very efficient, as it found the "optimum" Qr with the maximum V_RA using only seven consecutive iterations. The benchmark experiment also reveals that the developed machine learning model predicts V_RA more accurately than the popular constrained carbon equilibrium thermodynamic model, and even better than a thermo-kinetic quenching-partitioning-tempering-local equilibrium model. Funding: the authors acknowledge financial support from the National Natural Science Foundation of China (Grant Nos. 51771114 and 51371117).
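A minimal version of that GPR-plus-Bayesian-optimization loop is sketched below: fit a Gaussian process to (Qr, V_RA) observations, then choose the next quenching temperature by expected improvement. The V_RA curve, kernel, and temperature range are synthetic stand-ins for the paper's data.

```python
# Sketch of the GPR + Bayesian-optimization loop over quenching temperature:
# fit a Gaussian process to (Qr, V_RA) data, then pick the next Qr by expected
# improvement. The objective below is a synthetic stand-in for measured V_RA.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def v_ra(qr):                               # hypothetical V_RA vs Qr curve
    return 0.15 * np.exp(-((qr - 280.0) / 40.0) ** 2)

grid = np.linspace(150.0, 400.0, 501)[:, None]
qrs = np.array([[180.0], [350.0]])          # two initial experiments
obs = v_ra(qrs).ravel()

for _ in range(7):                          # seven iterations, as in the paper
    gp = GaussianProcessRegressor(RBF(50.0) + WhiteKernel(1e-6),
                                  normalize_y=True).fit(qrs, obs)
    mu, sd = gp.predict(grid, return_std=True)
    best = obs.max()
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    nxt = grid[np.argmax(ei)]
    qrs = np.vstack([qrs, [nxt]])
    obs = np.append(obs, v_ra(nxt))

print(f"suggested Qr = {qrs[np.argmax(obs)][0]:.0f} C, V_RA = {obs.max():.3f}")
```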
Software Development Life Cycle (SDLC) is one of the major ingredients for the development of efficient software systems within a time frame and with low cost involvement. From the literature, it is evident that various kinds of process models are used by the software industry for the development of small, medium, and long-term software projects, but many of them do not cover risk management. It is quite obvious that improper selection of the software development process model leads to failure of the software product, as development is a time-bound activity. In the present work, a new software development process model is proposed which covers the risks at any stage of the development of the software product. The model is named the Hemant-Vipin (HV) process model and may be helpful for the software industry for the development of efficient software products and their timely delivery to the client. The efficiency of the HV process model is assessed by considering various factors such as requirement clarity, user feedback, change agility, predictability, risk identification, practical implementation, customer satisfaction, incremental development, use of ready-made components, quick design, and resource organization; a case study found that the presented approach covers more of these parameters than existing process models.
Cognitive biases are commonly used by attackers to manipulate users' psychology in phishing emails. This study systematically analyzes the exploitation of cognitive biases in phishing emails and addresses the following questions: (1) Which cognitive biases are frequently exploited in phishing emails? (2) How are cognitive biases exploited in phishing emails? (3) How effective are cognitive bias features in detecting phishing emails? (4) How can the exploitation of cognitive biases in phishing emails be modelled? To address these questions, this study constructed a cognitive processing model that explains how attackers manipulate users by leveraging cognitive biases at different cognitive stages. By annotating 482 phishing emails, this study identified 10 common types of cognitive biases and developed corresponding detection models to evaluate the effectiveness of these bias features in phishing email detection. The results show that models incorporating cognitive bias features significantly outperform baseline models in terms of accuracy, recall, and F1 score. This study provides crucial theoretical support for future anti-phishing methods, as a deeper understanding of cognitive biases offers key insights for designing more effective detection and prevention strategies.
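The feature-ablation comparison behind question (3) can be sketched as follows: train the same classifier with and without binary bias annotations and compare cross-validated F1. The feature names and synthetic labels are illustrative only; the paper's annotation scheme is not reproduced.

```python
# Sketch of the bias-feature detection experiment: augment baseline text
# features with binary cognitive-bias annotations and compare classifiers.
# Feature names and data are illustrative, not the paper's annotation scheme.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 482                                     # corpus size matching the study
text_feats = rng.random((n, 20))            # stand-in for lexical features
bias_feats = rng.integers(0, 2, (n, 10))    # e.g. authority, scarcity, urgency...
y = (bias_feats[:, :3].sum(1) + rng.random(n) > 1.5).astype(int)

for name, X in [("baseline", text_feats),
                ("baseline + bias", np.hstack([text_feats, bias_feats]))]:
    f1 = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=5, scoring="f1").mean()
    print(f"{name:17s} F1 = {f1:.3f}")
```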
To investigate the process of information technology (IT) impacts on firm competitiveness, an integrated process model of IT impacts on firm competitiveness is put forward based on the process-oriented view, the resource-based view, and the complementary resource view; it comprises an IT conversion process, an information system (IS) adoption process, an IS use process, and a competition process. The application capability of IT plays the critical role, determining the efficiency and effectiveness of the aforementioned four processes. The process model of IT impacts on firm competitiveness can also be used to explain why, under what situations, and how IT can generate positive organizational outcomes, and it provides theoretical bases for further empirical study. Funding: the National Natural Science Foundation of China (No. 70671024).
Workflow management is an important aspect of CSCW at present. The elementary knowledge of workflow processes is introduced, a Petri net-based process modeling methodology and basic definitions are provided, and the analysis and verification of the structural and behavioral correctness of workflow processes are discussed. Finally, an algorithm for the verification of process definitions is proposed.
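To make the verification idea concrete, the sketch below enumerates the marking graph of a small workflow net (a four-transition AND-split/AND-join net invented here for illustration) and flags any marking that enables no transition yet is not the proper final marking, i.e., a deadlock. It is a schematic reachability check, not the paper's algorithm.

```python
# Sketch of a reachability-based workflow-net check: enumerate the marking
# graph and flag markings with no enabled transition other than the proper
# final marking. The net below is a toy AND-split/AND-join example.

# Transitions as (consume, produce) multisets over places
NET = {"t1": ({"i": 1}, {"p1": 1, "p2": 1}),      # AND-split
       "t2": ({"p1": 1}, {"p1b": 1}),
       "t3": ({"p2": 1}, {"p2b": 1}),
       "t4": ({"p1b": 1, "p2b": 1}, {"o": 1})}    # AND-join
FINAL = frozenset({("o", 1)})

def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return frozenset((p, n) for p, n in m.items() if n)

seen, stack = set(), [frozenset({("i", 1)})]
while stack:
    m = stack.pop()
    if m in seen:
        continue
    seen.add(m)
    succs = [fire(m, pre, post) for pre, post in NET.values()
             if enabled(dict(m), pre)]
    if not succs and m != FINAL:
        print("deadlock at", dict(m))
    stack.extend(succs)
print("markings explored:", len(seen), "- terminates properly:", FINAL in seen)
```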
To achieve an on-demand and dynamic composition model of inter-organizational business processes, a new approach for business process modeling and verification is introduced using pi-calculus theory. A new business process model which is multi-role, multi-dimensional, integrated, and dynamic is proposed, relying on inter-organizational collaboration. Compatible with the traditional linear sequence model, the new model is an M × N multi-dimensional mesh, and it provides horizontal and vertical formal descriptions for the collaborative business process model. Finally, pi-calculus theory is utilized to verify the deadlocks, livelocks, and synchronization of the example models. The result shows that the proposed approach is efficient and applicable in inter-organizational business process modeling. Funding: the National Natural Science Foundation of China (No. 60473078).
Along with the extensive use of workflow, analysis methods to verify the correctness of workflows are becoming more and more important. In this paper, we exploit a Petri net-based verification method for workflow process models, which deals with the verification of workflows and finds potential errors in the process design. Additionally, an efficient verification algorithm is given.
Processes supported by process-aware information systems are subject to continuous and often subtle changes due to evolving operational, organizational, or regulatory factors. These changes, referred to as incremental concept drift, gradually alter the behavior or structure of processes, making their detection and localization a challenging task. Traditional process mining techniques frequently assume process stationarity and are limited in their ability to detect such drift, particularly from a control-flow perspective. The objective of this research is to develop an interpretable and robust framework capable of detecting and localizing incremental concept drift in event logs, with a specific emphasis on the structural evolution of control-flow semantics in processes. We propose DriftXMiner, a control-flow-aware hybrid framework that combines statistical, machine learning, and process model analysis techniques. The approach comprises three key components: (1) a Cumulative Drift Scanner that tracks directional statistical deviations to detect early drift signals; (2) a Temporal Clustering and Drift-Aware Forest Ensemble (DAFE) to capture distributional and classification-level changes in process behavior; and (3) Petri net-based process model reconstruction, which enables the precise localization of structural drift using transition deviation metrics and replay fitness scores. Experimental validation on the BPI Challenge 2017 event log demonstrates that DriftXMiner effectively identifies and localizes gradual and incremental process drift over time. The framework achieves a detection accuracy of 92.5%, a localization precision of 90.3%, and an F1-score of 0.91, outperforming competitive baselines such as CUSUM+Histograms and ADWIN+Alpha Miner. Visual analyses further confirm that identified drift points align with transitions in control-flow models and behavioral cluster structures. DriftXMiner offers a novel and interpretable solution for incremental concept drift detection and localization in dynamic, process-aware systems. By integrating statistical signal accumulation, temporal behavior profiling, and structural process mining, the framework enables fine-grained drift explanation and supports adaptive process intelligence in evolving environments. Its modular architecture supports extension to streaming data and real-time monitoring contexts.
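Component (1) is described as a cumulative, direction-tracking statistic, which is the classic CUSUM pattern; the sketch below applies it to a synthetic per-trace statistic with a slow drift injected halfway through. The slack and threshold parameters are assumptions, not DriftXMiner's settings.

```python
# Sketch of a cumulative (CUSUM-style) drift scanner like the first DriftXMiner
# component: accumulate directional deviations of a per-trace statistic and
# report the first index exceeding a threshold. Stream values are synthetic.
import numpy as np

def cusum_scan(x, k=0.5, h=8.0):
    """Return the index of the first detected upward/downward drift, or None.

    k: slack (drift smaller than k sigma is ignored); h: decision threshold.
    Both are expressed in units of the reference standard deviation.
    """
    mu, sigma = x[:100].mean(), x[:100].std()     # reference window
    z = (x - mu) / sigma
    up = down = 0.0
    for i, v in enumerate(z):
        up = max(0.0, up + v - k)                 # positive drift accumulator
        down = max(0.0, down - v - k)             # negative drift accumulator
        if up > h or down > h:
            return i
    return None

rng = np.random.default_rng(3)
trace_stat = np.concatenate([rng.normal(10, 1, 400),
                             rng.normal(10, 1, 400) + np.linspace(0, 2, 400)])
# Gradual drift starts at index 400; detection fires somewhere after onset.
print("drift detected at trace index:", cusum_scan(trace_stat))
```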
With growing concerns over environmental issues, ethylene manufacturing is shifting from a sole focus on economic benefits to an additional consideration of environmental impacts. The operation of the thermal cracking furnace in ethylene manufacturing determines not only the profitability of an ethylene plant but also the carbon emissions it releases. While multi-objective optimization of the thermal cracking furnace to balance profit with environmental impact is an effective solution to achieve green ethylene manufacturing, it carries a high computational demand due to the complex dynamic processes involved. In this work, artificial intelligence (AI) is applied to develop a novel hybrid model based on physically consistent machine learning (PCML). This hybrid model not only reduces the computational demand but also retains the interpretability and scalability of the model. With this hybrid model, the computational demand of the multi-objective dynamic optimization is reduced to 77 s. The optimization results show that dynamically adjusting the operating variables with coke formation can effectively improve profit and reduce CO2 emissions. In addition, the results from this study indicate that sacrificing 28.97% of the annual profit can significantly reduce the annual CO2 emissions by 42.89%. The key findings of this study highlight the great potential for green ethylene manufacturing based on AI through modeling and optimization approaches. This study will be important for industrial practitioners and policy-makers. Funding: the authors acknowledge the financial support of the National Key Research and Development Program of China (2021YFE0112800) and the EU RISE project OPTIMAL (101007963).
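The hybrid structure behind such PCML models is, at its simplest, a mechanistic baseline plus a learned residual; the sketch below shows that decomposition on synthetic time-on-stream data. The paper's physically consistent formulation is more elaborate (it embeds physical constraints in the learning), so this is only the structural idea.

```python
# Sketch of the hybrid (physics + ML) structure: a mechanistic baseline
# predicts the trend and a small regressor learns only the residual, which
# keeps the combined model interpretable. All signals here are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
t = np.linspace(0.0, 1.0, 300)[:, None]          # normalized time-on-stream
physics = 1.0 - 0.3 * t.ravel()                  # first-principles yield trend (toy)
y = physics - 0.15 * t.ravel() ** 2 + rng.normal(0, 0.01, 300)  # "plant" data

residual_model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                              random_state=6).fit(t, y - physics)
hybrid = physics + residual_model.predict(t)
print("hybrid RMSE:", float(np.sqrt(((hybrid - y) ** 2).mean())))
```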
With the development of automation and informatization in the steelmaking industry, the human brain gradually fails to cope with the increasing amount of data generated during the steelmaking process. Machine learning technology provides a new method, other than production experience and metallurgical principles, for dealing with large amounts of data. The application of machine learning in the steelmaking process has become a research hotspot in recent years. This paper provides an overview of the applications of machine learning in steelmaking process modeling involving hot metal pretreatment, primary steelmaking, secondary refining, and some other aspects. The three most frequently used machine learning algorithms in steelmaking process modeling are the artificial neural network, support vector machine, and case-based reasoning, accounting for 56%, 14%, and 10% of applications, respectively. Collected data in steelmaking plants are frequently faulty. Thus, data processing, especially data cleaning, is crucially important to the performance of machine learning models. The detection of variable importance can be used to optimize the process parameters and guide production. Machine learning is used in hot metal pretreatment modeling mainly for endpoint S content prediction. The predictions of the endpoints of element compositions and the process parameters are widely investigated in primary steelmaking. Machine learning is used in secondary refining modeling mainly for ladle furnaces, Ruhrstahl–Heraeus, vacuum degassing, argon oxygen decarburization, and vacuum oxygen decarburization processes. Further development of machine learning in steelmaking process modeling can be realized through additional efforts in the construction of the data platform, the industrial transformation of the research achievements to the practical steelmaking process, and the improvement of the universality of the machine learning models. Funding: supported by the National Natural Science Foundation of China (No. U1960202).
The mathematical model for online control of hot rolled steel cooling on the run-out table (ROT) was analyzed, and water cooling was found to be the main cooling mode for hot rolled steel. The calculations of the drop in strip temperature under both water cooling and air cooling are combined to obtain the change in the heat transfer coefficient. It is found that the learning coefficient of the heat transfer coefficient is the kernel coefficient of coiler temperature control (CTC) model tuning. To decrease the deviation between the calculated steel temperature and the measured one at the coiler entrance, a laminar cooling control self-learning strategy is used. Using data acquired in the field, the results of the self-learning model used in the field were analyzed. The results show that the self-learning function is effective. Funding: item sponsored by the National Natural Science Foundation of China (50474016).
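A minimal sketch of such a self-learning update is shown below: after each coil, the learning factor on the heat transfer coefficient is nudged in proportion to the measured-minus-calculated coiling-temperature deviation, through an assumed model sensitivity, and clamped to safe bounds. Gains, sensitivity, and bounds are all assumptions, not the paper's tuning law.

```python
# Sketch of a coil-to-coil self-learning update for the heat-transfer
# learning coefficient; the correction form, gain, sensitivity, and bounds
# are assumed for illustration and are not the CTC model's actual scheme.
def update_learning_coefficient(k_learn, t_meas, t_calc, gain=0.3,
                                sensitivity=-40.0, bounds=(0.7, 1.3)):
    """Damped correction of the heat-transfer learning factor.

    sensitivity approximates d(coiling temperature)/d(k_learn) in deg C per
    unit factor change (negative: a larger coefficient cools the strip more).
    """
    error = t_meas - t_calc            # + means the model overcools the strip
    k_new = k_learn + gain * error / sensitivity
    return min(max(k_new, bounds[0]), bounds[1])

k = 1.0
for t_meas, t_calc in [(612.0, 598.0), (607.0, 601.0), (603.0, 602.0)]:
    k = update_learning_coefficient(k, t_meas, t_calc)
    print(f"measured {t_meas:.0f} C, calculated {t_calc:.0f} C -> k = {k:.3f}")
```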
Modeling and attitude control methods for a satellite with a large deployable antenna are studied in the present paper. Firstly, for reducing the model dimension, three dynamic models for the deploying process are developed, which are built with the methods of multi-rigid-body dynamics, hybrid coordinate, and substructure. Then an attitude control method suitable for the deploying process is proposed, which can keep stability under any dynamical parameter variation. Subsequently, this attitude control is optimized to minimize attitude disturbance during the deploying process. The simulation results show that this attitude control method can keep stability and maintain proper attitude variation during the deploying process, which indicates that this attitude control method is suitable for practical applications. Funding: sponsored by the National Natural Science Foundation of China (No. 11272172).
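To illustrate why a fixed feedback law can remain stable while deployment changes the plant, the single-axis sketch below runs a PD attitude law against a time-varying inertia standing in for the unfolding antenna. The gains, inertia profile, and deployment time are assumptions, not the paper's controller.

```python
# Sketch of a PD attitude law on a single-axis model with time-varying inertia
# standing in for the deployment phase: torque u = -kp*theta - kd*omega stays
# stabilizing as the inertia J(t) grows. All parameter values are assumed.
import numpy as np
from scipy.integrate import solve_ivp

kp, kd = 40.0, 400.0                                 # PD gains (assumed)
J = lambda t: 800.0 + 1200.0 * min(t / 300.0, 1.0)   # antenna deploys over 300 s

def rhs(t, s):
    theta, omega = s
    u = -kp * theta - kd * omega                     # PD control torque, N m
    return [omega, u / J(t)]

sol = solve_ivp(rhs, (0.0, 600.0), [0.05, 0.0], max_step=0.5)
print(f"final attitude error: {abs(sol.y[0, -1]):.2e} rad")
```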
Many applications of principal component analysis (PCA) can be found in dimensionality reduction, but the linear PCA method is not well suited to nonlinear chemical processes. A new PCA method based on an improved input training neural network (IT-NN) is proposed for nonlinear system modelling in this paper. A momentum factor and an adaptive learning rate are introduced into the learning algorithm to improve the training speed of the IT-NN. In contrast to the auto-associative neural network (ANN), the IT-NN has fewer hidden layers and a higher training speed. The effectiveness is illustrated through an experimental comparison of the IT-NN with linear PCA and the ANN. Moreover, the IT-NN is combined with an RBF neural network (RBF-NN) to model the yields of ethylene and propylene in the naphtha pyrolysis system. The illustrative example and practical application show that the IT-NN combined with the RBF-NN is an effective method for nonlinear chemical process modelling. Funding: supported by the Beijing Municipal Education Commission (No. xk100100435) and the Key Research Project of Science and Technology from Sinopec (No. E03007).
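The defining trick of an input-training network is that the low-dimensional scores are themselves trainable: the decoder maps latent inputs Z to the data, and Z is updated by the same momentum gradient step as the weights, with a stepwise learning-rate decay standing in here for the adaptive rate. The sketch below is schematic (one hidden layer, toy data); sizes and gains are assumptions.

```python
# Schematic input-training NN: latent scores Z are free parameters trained
# jointly with the decoder weights using momentum; the stepwise lr decay is a
# crude stand-in for an adaptive learning rate. Sizes and gains are assumed.
import numpy as np

rng = np.random.default_rng(4)
X = rng.random((200, 6)); X -= X.mean(axis=0)   # toy process data, centered
n, d, q, h = X.shape[0], X.shape[1], 2, 8       # 2 latent scores, 8 hidden units

Z = rng.normal(0.0, 0.1, (n, q))                # trainable "inputs" (scores)
W1 = rng.normal(0.0, 0.3, (q, h))
W2 = rng.normal(0.0, 0.3, (h, d))
vZ, v1, v2 = np.zeros_like(Z), np.zeros_like(W1), np.zeros_like(W2)
lr, beta = 0.02, 0.9                            # learning rate, momentum factor

for epoch in range(3000):
    H = np.tanh(Z @ W1)                         # hidden layer
    E = H @ W2 - X                              # reconstruction error
    gW2 = H.T @ E / n                           # averaged weight gradients
    gH = (E @ W2.T) * (1.0 - H ** 2)
    gW1 = Z.T @ gH / n
    gZ = gH @ W1.T                              # per-sample score gradients
    v1 = beta * v1 - lr * gW1; W1 += v1         # momentum updates
    v2 = beta * v2 - lr * gW2; W2 += v2
    vZ = beta * vZ - lr * gZ;  Z += vZ
    if epoch % 1000 == 999:
        lr *= 0.5                               # simple learning-rate schedule
print("reconstruction MSE:", float((E ** 2).mean()))
```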
Efficient experiment design is of great significance for the validation of simulation models with high nonlinearity and large input spaces. Excessive validation experiments raise the cost, while insufficient testing increases the risk of accepting an invalid model. In this paper, an adaptive sequential experiment design method combining a global exploration criterion and a local exploitation criterion is proposed. The exploration criterion utilizes a discrepancy metric to improve the space-filling property of the design points, while the exploitation criterion employs the leave-one-out error to discover informative points. To avoid the clustering of samples in a local region, an adaptive weight updating approach is provided to maintain the balance between exploration and exploitation. Besides, a credibility distribution function characterizing the relationship between the input and result credibility is introduced to support the model validation experiment design. Finally, six benchmark problems and an engineering case are applied to examine the performance of the proposed method. The experiments indicate that the proposed method achieves satisfactory performance for function approximation in accuracy and convergence. Funding: supported by the National Natural Science Foundation of China (No. 61627810).
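The exploration/exploitation blend can be sketched with simple proxies: a minimum-distance term for space filling (in place of the paper's discrepancy metric) and a nearest-neighbor leave-one-out error for informativeness, combined with a weight that decays toward exploitation. All criteria and schedules below are simplifications of the method described.

```python
# Sketch of an adaptive sequential design rule: score candidates by a weighted
# sum of an exploration term (distance to existing points) and an exploitation
# term (surrogate leave-one-out error), shifting the weight adaptively.
import numpy as np
from scipy.spatial.distance import cdist

def f(x):                                        # black-box model under test
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

rng = np.random.default_rng(5)
X = rng.random((8, 2)); y = f(X)                 # initial design
cand = rng.random((2000, 2))                     # candidate pool
w = 0.5                                          # exploration weight

for _ in range(20):
    dc = cdist(cand, X)
    explore = dc.min(axis=1)                     # space-filling term
    # LOO error at each design point via nearest-other-point prediction
    D = cdist(X, X) + np.eye(len(X)) * 1e9
    loo = np.abs(y - y[D.argmin(axis=1)])
    exploit = loo[dc.argmin(axis=1)]             # propagate to candidates
    score = (w * explore / explore.max()
             + (1 - w) * exploit / (exploit.max() + 1e-12))
    x_new = cand[score.argmax()][None, :]
    X, y = np.vstack([X, x_new]), np.append(y, f(x_new))
    w = max(0.2, w * 0.9)                        # shift toward exploitation

print("final design size:", len(X))
```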
As a variant of process algebra, the π-calculus can describe the interactions between evolving processes. By modeling an activity as a process interacting with other processes through ports, this paper presents a new approach: representing workflow models using the π-calculus. As a result, the model can characterize the dynamic behaviors of the workflow process in terms of the labeled transition system (LTS) semantics of the π-calculus. The main advantage of the workflow model's formal semantics is that it allows for verification of the model's properties, such as deadlock freedom and normal termination. Moreover, the equivalence of workflow models can be checked through the weak bisimulation theorem in the π-calculus, thus facilitating the optimization of business processes.