Funding: Supported by the Poongsan-KAIST Future Research Center Project and by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (Grant No. 2023R1A2C2005661).
Abstract: This study presents a machine learning-based method for predicting the fragment velocity distribution of warhead fragmentation under explosive loading conditions. The fragment resultant velocities are correlated with key design parameters, including casing dimensions and detonation positions. The paper details the finite element analysis of fragmentation, the characterization of the dynamic hardening and fracture models, the generation of comprehensive datasets, and the training of the artificial neural network (ANN) model. The results show the influence of casing dimensions on fragment velocity distributions, with resultant velocity tending to increase as casing thickness decreases and as length and diameter increase. The model's predictive capability is demonstrated through accurate predictions on both the training and testing datasets, showing its potential for real-time prediction of fragmentation performance.
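The design-parameter-to-velocity mapping described above can be illustrated with a small feed-forward regressor. The sketch below is a minimal assumption-laden stand-in, not the authors' trained model: the parameter ranges, the synthetic trend coefficients, and the network size are all invented for illustration, and scikit-learn's MLPRegressor replaces whatever ANN the paper actually used.

```python
# Minimal sketch: ANN surrogate mapping warhead design parameters to fragment
# resultant velocity. Synthetic data stands in for the FEA-generated dataset.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
# Hypothetical design parameters: [thickness, length, diameter, detonation_position]
X = rng.uniform([5.0, 100.0, 50.0, 0.0], [15.0, 300.0, 120.0, 1.0], size=(n, 4))
# Placeholder trend consistent with the abstract (arbitrary coefficients): velocity
# rises as thickness drops and as length/diameter grow.
y = 1200 - 40 * X[:, 0] + 2.0 * X[:, 1] + 3.0 * X[:, 2] + 50 * X[:, 3] + rng.normal(0, 20, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
ann.fit(scaler.transform(X_train), y_train)
print("R^2 on held-out set:", ann.score(scaler.transform(X_test), y_test))
```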
Funding: Partially supported by the Construction of Collaborative Innovation Center of Beijing Academy of Agricultural and Forestry Sciences (KJCX20240406), the Beijing Natural Science Foundation (JQ24037), the National Natural Science Foundation of China (32330075), and the Earmarked Fund for China Agriculture Research System (CARS-02 and CARS-54).
Abstract: The security of the seed industry is crucial for ensuring national food security. Developed countries in Europe and America, along with international seed industry giants, have entered the Breeding 4.0 era, which integrates biotechnology, artificial intelligence (AI), and big data information technology. In contrast, China is still in a transition period between stages 2.0 and 3.0, which relies primarily on conventional selection and molecular breeding. In the context of increasingly complex international situations, accurately identifying the core issues in China's seed industry innovation and seizing the frontier of international seed technology are strategically important; these efforts are essential for ensuring food security and revitalizing the seed industry. This paper systematically analyzes the characteristics of crop breeding data from artificial selection to intelligent design breeding. It explores the applications and development trends of AI and big data in modern crop breeding from several key perspectives: high-throughput phenotype acquisition and analysis, construction of multi-omics big data databases and management systems, AI-based multi-omics integrated analysis, and the development of intelligent breeding software tools based on biological big data and AI technology. Based on an in-depth analysis of the current status and challenges of China's seed industry technology, we propose strategic goals and key tasks for a new generation of AI- and big data-driven intelligent design breeding in China. These suggestions aim to accelerate the development of an intelligence-driven crop breeding engineering system featuring large-scale gene mining, efficient gene manipulation, engineered variety design, and systematized biobreeding. This study provides a theoretical basis and practical guidance for the development of China's seed industry technology.
Funding: Supported by the National Natural Science Foundation of China (62302234).
Abstract: As the number of distributed power supplies on the user side increases, smart grids are becoming larger and more complex. These changes bring new security challenges, especially with the widespread adoption of data-driven control methods. This paper introduces a novel black-box false data injection attack (FDIA) method that exploits the measurement modules of distributed power supplies within smart grids, highlighting its effectiveness in bypassing conventional security measures. Unlike traditional methods that focus on data manipulation within communication networks, this approach directly injects false data at the point of measurement, using a generative adversarial network (GAN) to generate stealthy attack vectors. The method requires no detailed knowledge of the target system, making it practical for real-world attacks. The attack's impact on power system stability is demonstrated through experiments, highlighting the significant cybersecurity risks introduced by data-driven algorithms in smart grids.
Abstract: In the rapidly evolving landscape of digital health, the integration of data analytics and Internet health services has become a pivotal area of exploration. To meet pressing social needs, Prof. Shan Liu (Xi'an Jiaotong University) and Prof. Xing Zhang (Wuhan Textile University) have published the timely book Data-driven Internet Health Platform Service Value Co-creation through China Science Press. The book focuses on the provision of medical and health services from doctors to patients through Internet health platforms, where service value is co-created by three parties.
Abstract: The article "Data-driven soft sensors in blast furnace ironmaking: a survey," written by Yueyang LUO, Xinmin ZHANG, Manabu KANO, Long DENG, Chunjie YANG, and Zhihuan SONG, was originally published electronically on the publisher's Internet portal on Mar. 27, 2023 without open access.
Funding: Supported by the National Natural Science Foundation of China (62373224, 62333013, U23A20327) and the Natural Science Foundation of Shandong Province (ZR2024JQ021).
Abstract: Dear Editor, Health management is essential to ensure battery performance and safety, and data-driven learning systems are a promising solution for efficient state of health (SoH) estimation of lithium-ion (Li-ion) batteries. However, time-consuming signal data acquisition and the lack of model interpretability still hinder their efficient deployment. Motivated by this, this letter proposes a novel and interpretable data-driven learning strategy that combines the benefits of explainable AI and non-destructive ultrasonic detection for battery SoH estimation. Specifically, after equipping the battery with an advanced ultrasonic sensor to enable fast, real-time ultrasonic signal measurement, an interpretable data-driven learning strategy named generalized additive neural decision ensemble (GANDE) is designed to rapidly estimate battery SoH and explain the effects of the ultrasonic features of interest.
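The per-feature interpretability of a generalized-additive-style model can be conveyed with the toy sketch below. It only assumes the generic idea of one subnetwork per ultrasonic feature whose contributions are summed; the class name, layer sizes, and feature count are invented, and this is not the published GANDE architecture.

```python
# Toy generalized-additive-style network: each input feature gets its own small
# subnetwork, and the SoH estimate is the sum of per-feature contributions,
# which can be inspected individually for interpretability.
import torch
import torch.nn as nn

class AdditiveNet(nn.Module):
    def __init__(self, n_features: int, hidden: int = 16):
        super().__init__()
        self.shape_fns = nn.ModuleList(
            [nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
             for _ in range(n_features)]
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):                      # x: (batch, n_features)
        contribs = [f(x[:, i:i + 1]) for i, f in enumerate(self.shape_fns)]
        contribs = torch.cat(contribs, dim=1)  # per-feature contributions
        return contribs.sum(dim=1, keepdim=True) + self.bias, contribs

model = AdditiveNet(n_features=5)
x = torch.randn(8, 5)                          # stand-in ultrasonic features
soh_pred, feature_effects = model(x)
print(soh_pred.shape, feature_effects.shape)   # (8, 1) and (8, 5)
```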
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12072090 and 12302056), which provided funds for conducting the experiments.
Abstract: Recently, high-precision trajectory prediction of ballistic missiles in the boost phase has become a research hotspot. This paper proposes a trajectory prediction algorithm driven by data and knowledge (DKTP) to solve this problem. Firstly, the complex dynamic characteristics of a ballistic missile in the boost phase are analyzed in detail. Secondly, combining the missile dynamics model with the target gravity turning model, a knowledge-driven target three-dimensional turning (T3) model is derived. Then, a BP neural network is trained on a boost-phase trajectory database of typical scenarios to obtain a data-driven state parameter mapping (SPM) model. On this basis, an online trajectory prediction framework driven by data and knowledge is established: the SPM model predicts the target's three-dimensional turning coefficients from its current state, and the T3 model then yields the target's state at the next moment. Finally, simulation verification is carried out under various conditions. The results show that the DKTP algorithm combines the advantages of data-driven and knowledge-driven approaches, improves interpretability, and reduces uncertainty, achieving high-precision trajectory prediction of ballistic missiles in the boost phase.
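The alternation described above, a learned mapping proposing turning coefficients and a knowledge-based model propagating the state, can be sketched as a simple prediction loop. Everything below is a placeholder: the state layout, the `spm_model` stub (standing in for a trained BP network), and the `t3_step` update rule are assumptions for illustration, not the paper's SPM/T3 models.

```python
# Rough sketch of a data-plus-knowledge prediction loop: a learned model maps the
# current state to turning coefficients, and a simple physics-style step propagates it.
import numpy as np

def spm_model(state):
    """Placeholder for the learned state-parameter mapping (e.g., a trained BP net).
    Here it just returns fixed toy turning coefficients."""
    return np.array([0.01, -0.005, 0.002])

def t3_step(state, coeffs, dt=1.0):
    """Placeholder knowledge-driven step: position advances with velocity, and the
    velocity direction is nudged by the predicted turning coefficients."""
    pos, vel = state[:3], state[3:]
    vel_next = vel + coeffs * np.linalg.norm(vel) * dt
    pos_next = pos + vel_next * dt
    return np.concatenate([pos_next, vel_next])

state = np.array([0.0, 0.0, 10.0, 100.0, 0.0, 50.0])    # toy [x, y, z, vx, vy, vz]
trajectory = [state]
for _ in range(10):                                      # predict 10 steps ahead
    coeffs = spm_model(trajectory[-1])                   # data-driven part
    trajectory.append(t3_step(trajectory[-1], coeffs))   # knowledge-driven part
print(np.round(trajectory[-1], 2))
```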
Funding: Supported by the China Postdoctoral Science Foundation (Grant 2024M753544) and the Science and Technology Project of CSG (Grant GDKJXM2022106).
Abstract: Improving the computational efficiency of multi-physics simulation and constructing a real-time online simulation method is an important way to realise the virtual-real fusion of entities and data of power equipment with digital twins. In this paper, a data-driven fast calculation method is proposed for the temperature field of the resin impregnated paper (RIP) bushing used on the valve side of converter transformers, combining data dimensionality reduction with a surrogate model. After applying the finite element algorithm to obtain the temperature field distribution of the RIP bushing under different operating conditions as the input dataset, the proper orthogonal decomposition (POD) algorithm is adopted to reduce the order and obtain a low-dimensional projection of the temperature data. On this basis, the surrogate model constructs the mapping relationship between the sensor monitoring data and the low-dimensional projection, enabling fast calculation and reconstruction of the temperature field distribution. The results show that this method can effectively and quickly calculate the overall temperature field distribution of the RIP bushing: the maximum relative error and the average relative error are less than 4.5% and 0.25%, respectively, and the calculation speed is at the millisecond level, meeting the needs of the digitalisation of power equipment.
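The order-reduction-plus-surrogate idea can be illustrated with SVD-based POD and a small regressor mapping sensor readings to modal coefficients. The snapshot data, sensor layout, mode count, and choice of MLPRegressor below are assumptions made for the sketch and do not reproduce the paper's model.

```python
# Sketch: POD (via SVD) compresses temperature-field snapshots; a surrogate maps a
# few sensor readings to the POD coefficients, from which the full field is rebuilt.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_nodes, n_snapshots, n_modes, n_sensors = 500, 200, 5, 8

latent = rng.normal(size=(n_modes, n_snapshots))             # hidden modal amplitudes
spatial = rng.normal(size=(n_nodes, n_modes))
snapshots = spatial @ latent + 0.01 * rng.normal(size=(n_nodes, n_snapshots))
# snapshots stand in for FEM temperature fields (low-rank by construction).

mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)
modes = U[:, :n_modes]                                        # POD basis
coeffs = modes.T @ (snapshots - mean_field)                   # low-dimensional projection

sensor_idx = rng.choice(n_nodes, n_sensors, replace=False)    # hypothetical sensor locations
sensors = snapshots[sensor_idx, :].T                          # (snapshots, sensors)

surrogate = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
surrogate.fit(sensors, coeffs.T)                              # sensor data -> POD coefficients

reconstructed = mean_field + modes @ surrogate.predict(sensors).T
rel_err = np.linalg.norm(reconstructed - snapshots) / np.linalg.norm(snapshots)
print("relative reconstruction error:", round(float(rel_err), 4))
```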
Funding: Supported by the National Natural Science Foundation of China (62325304, U22B2046, 62073079, 62376029), the Jiangsu Provincial Scientific Research Center of Applied Mathematics (BK20233002), and the China Postdoctoral Science Foundation (2023M730255, 2024T171123).
Abstract: Dear Editor, This letter studies the bipartite consensus tracking problem for heterogeneous multi-agent systems with actuator faults and a leader's unknown time-varying control input. To handle this problem, a continuous fault-tolerant control protocol based on observer design is developed. In addition, it is strictly proved that the multi-agent system driven by the designed controllers can still achieve bipartite consensus tracking after faults occur.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52325501 and U24B2047).
Abstract: Current research on robot calibration can be roughly classified into two categories, both of which have inherent limitations. Model-based methods find it difficult to model and compensate for pose errors arising from configuration-dependent geometric and non-geometric source errors, whereas the accuracy of data-driven methods depends on a large amount of measurement data. Using a 5-DOF (degrees of freedom) hybrid machining robot as an exemplar, this study presents a model data-driven approach for the calibration of robotic manipulators. An f-DOF realistic robot containing various source errors is visualized as a 6-DOF fictitious robot having error-free parameters but erroneous actuated/virtual joint motions. The calibration process involves four steps: (1) formulating the linear map relating the pose error twist to the joint motion errors, (2) parameterizing the joint motion errors using second-order polynomials in terms of the nominal actuated joint variables, (3) identifying the polynomial coefficients using weighted least squares plus principal component analysis, and (4) compensating the compensable pose errors by updating the nominal actuated joint variables. The merit of this approach is that it enables compensation of the pose errors caused by configuration-dependent geometric and non-geometric source errors using a finite number of measurement configurations. Experimental studies on a prototype machine illustrate the effectiveness of the proposed approach.
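Steps (2) to (4) of this procedure can be conveyed on a single joint: a second-order polynomial error model is identified by weighted least squares and then used to correct the nominal command. The data, the identity weights, and the omission of the pose-twist map and the PCA step make this only a schematic of the identification idea under stated assumptions, not the paper's implementation.

```python
# Schematic of the identification and compensation steps: fit a second-order
# polynomial model of a joint's motion error via weighted least squares, then
# correct the nominal joint command.
import numpy as np

rng = np.random.default_rng(0)
q = rng.uniform(-np.pi, np.pi, 100)                   # nominal actuated joint variable
true_err = 0.002 + 0.01 * q - 0.004 * q ** 2          # hypothetical configuration-dependent error
measured_err = true_err + rng.normal(0.0, 5e-4, q.size)

Phi = np.column_stack([np.ones_like(q), q, q ** 2])   # second-order polynomial basis
W = np.eye(q.size)                                    # measurement weights (identity here)

# Weighted least squares: beta = (Phi^T W Phi)^-1 Phi^T W e
beta = np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ measured_err)
print("identified coefficients:", np.round(beta, 4))

# Compensation: subtract the predicted joint motion error from the nominal command.
q_cmd = 0.5
predicted_err = np.array([1.0, q_cmd, q_cmd ** 2]) @ beta
print("compensated command:", round(float(q_cmd - predicted_err), 4))
```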
Funding: Supported by the National Key Research and Development Program of China (No. 2022YFF0610003), the BUPT Excellent Ph.D. Students Foundation (No. CX2022218), and the Fund of Central University Basic Research Projects (No. 2023ZCTH11).
Abstract: Data trading is a crucial means of unlocking the value of Internet of Things (IoT) data. However, IoT data differs from traditional material goods because of its intangible and replicable nature, which leads to ambiguous data rights, confusing pricing, and challenges in matching. Additionally, centralized IoT data trading platforms pose risks such as privacy leakage. To address these issues, we propose a profit-driven distributed trading mechanism for IoT data. First, a blockchain-based trading architecture for IoT data, leveraging the transparent and tamper-proof features of blockchain technology, is proposed to establish trust between data owners and data requesters. Second, an IoT data registration method that encompasses both rights confirmation and pricing is designed: the rights confirmation method uses non-fungible tokens to record ownership and authenticate IoT data, and for pricing we develop an IoT data value assessment index system together with a pricing model that combines the sparrow search algorithm and a back-propagation neural network. Finally, an IoT data matching method is designed based on the Stackelberg game; a Stackelberg game model involving multiple data owners and requesters is established, and a hierarchical optimization method determines the optimal purchase strategy. The security of the mechanism is analyzed, and the performance of both the pricing and matching methods is evaluated. Experiments demonstrate that both methods outperform traditional approaches in terms of error rates and profit maximization.
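The leader-follower logic behind the Stackelberg matching step can be shown with a one-owner, one-requester toy: the requester's best response to a posted price is computed in closed form, and the owner searches for the profit-maximizing price. The logarithmic utility, the constants ALPHA and COST, and the single-pair setting are illustrative assumptions, not the paper's multi-owner model.

```python
# Toy Stackelberg pricing: the data owner (leader) posts a price; the requester
# (follower) buys the quantity maximizing its own utility; the owner picks the
# price that maximizes profit given that best response.
import numpy as np

ALPHA, COST = 10.0, 1.0        # assumed follower valuation and owner unit cost

def follower_best_response(price):
    # Follower utility: ALPHA * ln(1 + q) - price * q, maximized at q = ALPHA/price - 1.
    return max(ALPHA / price - 1.0, 0.0)

def leader_profit(price):
    q = follower_best_response(price)
    return (price - COST) * q

prices = np.linspace(1.01, 10.0, 500)
best_price = prices[np.argmax([leader_profit(p) for p in prices])]
print("optimal price:", round(float(best_price), 2),
      "quantity:", round(follower_best_response(best_price), 2),
      "profit:", round(leader_profit(best_price), 2))
```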
Funding: Funded by the Taiwan Comprehensive University System and the National Science and Technology Council of Taiwan under grant number NSTC 111-2410-H-019-006-MY3. Additionally, this work was financially supported in part by the Advanced Institute of Manufacturing with High-tech Innovations (AIM-HI) from the Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan, the National Natural Science Foundation of China (No. 62402444), and the Zhejiang Provincial Natural Science Foundation of China (No. LQ24F020012).
Abstract: AI applications have become ubiquitous, bringing significant convenience to various industries. In e-commerce, AI can enhance product recommendations for individuals and provide businesses with more accurate predictions for market strategy development. However, if the data used by AI applications is damaged or lost, the effectiveness of these applications inevitably suffers; it is therefore essential to verify the integrity of e-commerce data. Although existing Provable Data Possession (PDP) protocols can verify the integrity of cloud data, they are not suitable for e-commerce scenarios because the limited computational capabilities of edge servers cannot handle the high overhead of generating homomorphic verification tags in PDP. To address this issue, we propose PDP with Outsourced Tag Generation for AI-driven e-commerce, which outsources the computation of homomorphic verification tags to cloud servers while introducing a lightweight verification method to ensure that the tags match the uploaded data. The proposed scheme also supports dynamic operations such as adding, deleting, and modifying data, enhancing its practicality. Finally, experiments show that the additional computational overhead introduced by outsourcing homomorphic verification tags is acceptable compared with the original PDP.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42277161 and 42230709).
Abstract: Accurate identification and effective support of key blocks are crucial for ensuring the stability and safety of rock slopes. Previous studies reduced the number of structural planes and rock blocks considered, which impairs the ability to characterize complex rock slopes accurately and inhibits the identification of key blocks. In this paper, a knowledge-data dually driven paradigm for accurately identifying key blocks in complex rock slopes is proposed. The basic idea is to integrate key block theory into data-driven models, based on finely characterized structural features, to identify key blocks in complex rock slopes accurately. The proposed paradigm consists of (1) representing rock slopes as graph-structured data based on complex systems theory, (2) identifying key nodes in the graph-structured data using graph deep learning, and (3) mapping the key nodes of the graph-structured data to the corresponding key blocks in the rock slope. Verification experiments and real-case applications were conducted. The verification results demonstrate excellent model performance, strong generalization capability, and effective classification. Moreover, a real-case application is conducted on the northern slope of the Yanqianshan Iron Mine. The results show that the proposed method can accurately identify key blocks in complex rock slopes, providing a decision-making basis and rational recommendations for effective support and instability prevention, thereby helping to ensure the stability of rock engineering and the safety of life and property.
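The "rock slope as graph, key blocks as key nodes" idea can be sketched with a single graph-convolution forward pass over a toy adjacency matrix. The adjacency, the node features, the untrained weights, and the one-layer architecture are all assumptions for illustration; the paper's actual graph deep learning model is not reproduced here.

```python
# Minimal graph-convolution forward pass: nodes are rock blocks, edges encode
# adjacency along structural planes, and node scores flag candidate key blocks.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_feats = 6, 4
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 1, 0, 0],
              [1, 1, 0, 0, 1, 0],
              [0, 1, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)      # toy block adjacency
X = rng.normal(size=(n_nodes, n_feats))              # toy geometric/mechanical node features

A_hat = A + np.eye(n_nodes)                          # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt             # symmetric normalization

W = rng.normal(size=(n_feats, 1)) * 0.1              # untrained layer weights
scores = 1.0 / (1.0 + np.exp(-(A_norm @ X @ W)))     # sigmoid "key block" scores
print("node scores:", np.round(scores.ravel(), 3))
```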
Funding: Research on the Digital Transformation of the Financial Management Major and the Training Model of Outstanding Talents (2023122203988); Research on the Integration of Haikou Logistics and Manufacturing Driven by Big Data and Its Consumption Promotion Effect (HKKY2024-ZD-24).
Abstract: As the times change, China's science and technology have entered a period of rapid development, and the economic structure is changing with them; in this process, Haikou's existing logistics industry faces new impacts and challenges. For related enterprises to stand out in fierce market competition, the current state of the industry must be optimized and upgraded by promoting the integrated development of Haikou's logistics and manufacturing industries, continuously advancing the innovative application of digital technology in both industries, and forming an economic development model driven by multiple forces. Starting from the current state of logistics in Haikou, this paper analyzes the importance of integrating Haikou's logistics and manufacturing industries in the context of big data, and discusses in depth the paths toward such big data-driven integration, in the hope of contributing new strength to social and economic development.
Abstract: Missing data handling is vital for multi-sensor information fusion fault diagnosis of motors, as it prevents accuracy decay or even model failure, and promising results have been reported in several recent studies. These studies, however, have two limitations: 1) effective supervision is neglected for missing data across different fault types, and 2) imbalance in missing rates among fault types leads to inadequate learning during model training. To overcome these limitations, this paper proposes a dynamic relative advantage-driven multi-fault synergistic diagnosis method to achieve accurate fault diagnosis of motors under imbalanced missing data rates. Firstly, a cross-fault-type generalized synergistic diagnostic strategy is established based on variational information bottleneck theory, which ensures sufficient supervision when handling missing data. Then, a dynamic relative advantage assessment technique is designed to reduce the diagnostic accuracy decay caused by imbalanced missing data rates. The proposed method is validated using multi-sensor data from motor fault simulation experiments, and the experimental results demonstrate its effectiveness and superiority in improving diagnostic accuracy and generalization under imbalanced missing data rates.
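The variational information bottleneck objective named above can be illustrated as a standard VIB loss: a classification term plus a KL penalty on a stochastic latent code. The encoder, the classifier, the feature and class counts, and the beta weight below are generic placeholders rather than the paper's architecture or strategy.

```python
# Sketch of a variational information bottleneck (VIB) objective for fault
# classification: cross-entropy on the task plus a KL term compressing the latent code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBClassifier(nn.Module):
    def __init__(self, in_dim=32, latent=8, n_classes=4):
        super().__init__()
        self.encoder = nn.Linear(in_dim, 2 * latent)   # outputs mean and log-variance
        self.classifier = nn.Linear(latent, n_classes)

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.classifier(z), mu, logvar

def vib_loss(logits, labels, mu, logvar, beta=1e-3):
    ce = F.cross_entropy(logits, labels)
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1))
    return ce + beta * kl

torch.manual_seed(0)
model = VIBClassifier()
x, y = torch.randn(16, 32), torch.randint(0, 4, (16,))   # stand-in multi-sensor features
logits, mu, logvar = model(x)
print("loss:", vib_loss(logits, y, mu, logvar).item())
```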
Abstract: During urbanization, community environments encounter challenges such as data disconnection and the underutilization of small and micro spaces, and the establishment of "complete communities" requires refined governance strategies. This research develops a path for the precise establishment of community micro-gardens driven by mobile measurement. It involves collecting environmental data with mobile devices equipped with various types of sensors, generating visualization maps adjusted for spatio-temporal synchronization, and identifying environmental pain points, including areas of excessive temperature exposure and zones with elevated noise levels. On this basis, plant allocation strategies are proposed for distinct areas; for instance, a composite shade-and-cooling vegetation system is recommended for regions experiencing high temperatures, while a triple protection structure is suggested for areas affected by odor contamination. The efficacy of these strategies is demonstrated through a case study of the micro-garden transformation in the Dongjie Community of Wulituo Street, Shijingshan, Beijing. The study presents operational technical pathways and plant response solutions aimed at facilitating data-driven governance of community micro-environments.
Funding: Financially supported by the Innovative Research Group Project of the National Natural Science Foundation of China (22021004) and the Sinopec Major Science and Technology Projects (321123-1).
Abstract: The fractionating tower bottom in a fluid catalytic cracking unit (FCCU) is highly susceptible to coking due to the interplay of complex external operating conditions and internal physical properties. Consequently, quantitative risk assessment (QRA) and predictive maintenance (PdM) are essential for effectively managing coking risks influenced by multiple factors. However, the inherent uncertainties of the coking process, combined with the mixed-frequency nature of distributed control system (DCS) and laboratory information management system (LIMS) data, present significant challenges for applying data-driven methods and implementing them in industrial environments. This study proposes a hierarchical framework that integrates deep learning and fuzzy logic inference, leveraging data and domain knowledge to monitor the coking condition and inform prescriptive maintenance planning. The framework builds a multi-layer fuzzy inference system to construct a coking risk index, uses multi-label methods to select the optimal feature dataset across the reactor-regenerator and fractionation systems with coking risk factors as the label space, and designs a parallel encoder-integrated decoder architecture to address mixed-frequency data disparities and enhance adaptation by extracting operating-state and physical-property information. Additionally, triple attention mechanisms, in both the parallel and temporal modules, adaptively aggregate input information and enhance intrinsic interpretability to support disposal decision-making. The framework was applied to a 2.8-million-ton FCCU under long-period, complex operating conditions, enabling precise coking risk management at the fractionating tower bottom.
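A fuzzy inference layer for a risk index can be conveyed with a two-input, three-rule toy. The input variables (bottoms temperature and a heavy-fraction ratio), the triangular membership parameters, and the rule consequents below are invented for illustration and do not reproduce the paper's multi-layer fuzzy inference system.

```python
# Toy Mamdani-style fuzzy inference for a coking-risk index from two inputs.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def coking_risk_index(temp, residue_frac):
    # Fuzzify the (hypothetical) inputs.
    temp_high = tri(temp, 340.0, 370.0, 400.0)
    temp_low = tri(temp, 280.0, 310.0, 345.0)
    res_high = tri(residue_frac, 0.4, 0.7, 1.0)
    res_low = tri(residue_frac, 0.0, 0.2, 0.45)

    # Rule firing strengths (min as AND, max as OR).
    r_high = min(temp_high, res_high)          # high temp AND heavy residue -> high risk
    r_med = max(min(temp_high, res_low), min(temp_low, res_high))
    r_low = min(temp_low, res_low)

    # Defuzzify as a weighted average of rule consequents (0.2 / 0.5 / 0.9).
    weights = np.array([r_low, r_med, r_high])
    if weights.sum() == 0:
        return 0.0
    return float(weights @ np.array([0.2, 0.5, 0.9]) / weights.sum())

print("risk index:", round(coking_risk_index(temp=365.0, residue_frac=0.65), 3))
```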
Abstract: To improve the competitiveness of smart tourist attractions in the tourism market, this paper selects a scenic spot in Shenyang and uses big data technology to predict its passenger flow. The paper first introduces a big data-driven forecasting model of scenic-spot passenger flow: building on the traditional autoregressive integrated moving average (ARIMA) model and an artificial neural network model, it constructs a big data analysis and forecasting model. Through comparisons of data sources, model construction, passenger flow forecast accuracy, and modeling time, it confirms the advantages of big data analysis in forecasting scenic-spot passenger flow. Finally, it puts forward four commercial operation optimization strategies: adjusting ticket pricing, upgrading catering and accommodation services in scenic spots, planning and designing recreational projects, and formulating precise marketing strategies, in order to provide a reference for the future optimization and upgrading of smart tourist attractions.
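One common way to combine the two baselines mentioned is to let ARIMA capture the linear structure of the series and a small neural network model the residuals. The sketch below does that on synthetic daily counts and assumes statsmodels and scikit-learn are available; the ARIMA order, lag count, and crude residual forecast are illustrative choices, not the paper's exact model.

```python
# Hybrid sketch: ARIMA fits the linear pattern of daily visitor counts and a small
# neural network models the residuals from lagged values; forecasts are summed.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(400)
flow = 5000 + 1200 * np.sin(2 * np.pi * t / 7) + 2.0 * t + rng.normal(0, 300, t.size)

train, test = flow[:-30], flow[-30:]
arima = ARIMA(train, order=(2, 1, 2)).fit()
resid = arima.resid                                   # in-sample residuals

lags = 7                                              # lagged residuals as ANN inputs
X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
y = resid[lags:]
ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)

arima_fc = arima.forecast(steps=30)
resid_fc = ann.predict(np.tile(resid[-lags:], (30, 1)))  # crude repeated-window residual forecast
hybrid_fc = arima_fc + resid_fc
print("MAE of hybrid forecast:", round(float(np.mean(np.abs(hybrid_fc - test))), 1))
```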
Funding: Supported by the National Key Research and Development Program of China (2023YFB3307801), the National Natural Science Foundation of China (62394343, 62373155, 62073142), the Major Science and Technology Project of Xinjiang (No. 2022A01006-4), the Programme of Introducing Talents of Discipline to Universities (the 111 Project) under Grant B17017, the Fundamental Research Funds for the Central Universities, the Science Foundation of China University of Petroleum, Beijing (No. 2462024YJRC011), and the Open Research Project of the State Key Laboratory of Industrial Control Technology, China (Grant No. ICT2024B70).
Abstract: The distillation process is an important chemical process, and data-driven modelling has the potential to reduce model complexity compared with mechanistic modelling, thus improving the efficiency of process optimization and monitoring studies. However, the distillation process is highly nonlinear and has multiple uncertainty perturbation intervals, which makes accurate data-driven modelling challenging. This paper proposes a systematic data-driven modelling framework to address these problems. Firstly, data segment variance is introduced into the K-means algorithm to form K-means data interval (KMDI) clustering, which clusters the data into perturbed and steady-state intervals so that steady-state data can be extracted. Secondly, the maximal information coefficient (MIC) is employed to calculate the nonlinear correlation between variables and remove redundant features. Finally, extreme gradient boosting (XGBoost) is integrated as the base learner into adaptive boosting (AdaBoost), with an error threshold (ET) set to improve the weight-update strategy, yielding the new ensemble learning algorithm XGBoost-AdaBoost-ET. The superiority of the proposed framework is verified by applying it to a real industrial propylene distillation process.
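The ensemble idea, AdaBoost-style reweighting with XGBoost base learners gated by an error threshold, can be sketched as below. The threshold rule, hyperparameters, and synthetic data are simplified guesses at the scheme, and the xgboost package is assumed to be installed; this is not the KMDI/MIC pipeline or the paper's exact XGBoost-AdaBoost-ET algorithm.

```python
# Simplified AdaBoost-style loop with XGBoost base learners and an error threshold
# (ET): only samples whose relative error exceeds the threshold are up-weighted.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                      # stand-in steady-state process data
y = X[:, 0] * 2 + np.sin(X[:, 1]) + rng.normal(0, 0.1, 500)

n_rounds, threshold = 5, 0.05
w = np.full(len(y), 1.0 / len(y))
learners, alphas = [], []

for _ in range(n_rounds):
    model = XGBRegressor(n_estimators=50, max_depth=3, verbosity=0)
    model.fit(X, y, sample_weight=w)
    err = np.abs(model.predict(X) - y) / (np.abs(y).max() + 1e-9)   # relative error
    eps = np.clip(np.sum(w * err), 1e-9, 0.999)
    alpha = np.log((1 - eps) / eps)                                 # learner weight
    w = w * np.exp(alpha * (err > threshold))                       # ET-gated update
    w /= w.sum()
    learners.append(model)
    alphas.append(alpha)

pred = np.average([m.predict(X) for m in learners], axis=0, weights=alphas)
print("training RMSE:", round(float(np.sqrt(np.mean((pred - y) ** 2))), 4))
```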
Abstract: Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where it often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focusing on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms, namely Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship, under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the results show that the proposed model consistently outperforms baseline methods, including traditional and deep learning-based techniques. An ablation study reveals the additive value of each component of the loss function. Additionally, we assessed the downstream utility of the imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest receiver operating characteristic area under the curve scores across all scenarios. The model demonstrates strong scalability and robustness, with performance improving for larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce not only numerically accurate but also semantically useful imputations, making it a promising solution for robust data recovery in clinical applications.
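The composite objective described, a masked MSE on missing entries, a noise-aware term, and a variance penalty, can be written compactly in PyTorch. The architecture, the noise scale, the weighting coefficients, and the fully synthetic data below are illustrative assumptions rather than the study's exact formulation.

```python
# Sketch of an autoencoder imputation step with a composite loss:
# (i) masked MSE on originally missing entries, (ii) a noise-aware term that
# reconstructs clean values from corrupted inputs, (iii) a variance penalty.
import torch
import torch.nn as nn

class ImputationAE(nn.Module):
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def composite_loss(model, x_true, miss_mask, lam_noise=0.1, lam_var=0.01):
    x_in = x_true * (1 - miss_mask)                    # zero-fill missing entries
    recon = model(x_in)
    # (i) guided, masked MSE over the missing positions only
    loss_missing = ((recon - x_true) ** 2 * miss_mask).sum() / miss_mask.sum().clamp(min=1.0)
    # (ii) noise-aware regularization: reconstruct clean values from corrupted inputs
    noisy = x_in + 0.05 * torch.randn_like(x_in)
    loss_noise = ((model(noisy) - x_true) ** 2).mean()
    # (iii) variance penalty: keep reconstruction spread close to the data spread
    loss_var = (recon.var(dim=0) - x_true.var(dim=0)).abs().mean()
    return loss_missing + lam_noise * loss_noise + lam_var * loss_var

torch.manual_seed(0)
x = torch.randn(64, 10)                                # stand-in clinical features
mask = (torch.rand_like(x) < 0.3).float()              # 30% simulated missingness
model = ImputationAE(dim=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                                     # a few illustrative training steps
    opt.zero_grad()
    loss = composite_loss(model, x, mask)
    loss.backward()
    opt.step()
print("final loss:", loss.item())
```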