Funding: Supported by the National Natural Science Foundation of China (62373224, 62333013, U23A20327) and the Natural Science Foundation of Shandong Province (ZR2024JQ021).
Abstract: Dear Editor, Health management is essential to ensure battery performance and safety, and data-driven learning systems are a promising route to efficient state of health (SoH) estimation of lithium-ion (Li-ion) batteries. However, time-consuming signal acquisition and the lack of model interpretability still hinder their efficient deployment. Motivated by this, this letter proposes a novel, interpretable data-driven learning strategy that combines the benefits of explainable AI and non-destructive ultrasonic detection for battery SoH estimation. Specifically, after the battery is equipped with an advanced ultrasonic sensor to enable fast, real-time ultrasonic signal measurement, an interpretable data-driven learning strategy named generalized additive neural decision ensemble (GANDE) is designed to rapidly estimate battery SoH and explain the effects of the ultrasonic features of interest.
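The abstract does not spell out the GANDE architecture; as a rough illustration of the generalized-additive idea its name refers to, the sketch below (assumptions: PyTorch, synthetic data, and hypothetical ultrasonic features) gives each input feature its own small subnetwork and sums the per-feature contributions, so every feature's learned effect on SoH can be inspected on its own. This is not the authors' model.

```python
# Minimal generalized-additive neural sketch (illustrative, not the authors' GANDE).
# Each ultrasonic feature gets its own small subnetwork; their outputs are summed,
# so the learned shape function of every feature can be inspected individually.
import torch
import torch.nn as nn

class AdditiveNeuralSoH(nn.Module):
    def __init__(self, n_features: int, hidden: int = 16):
        super().__init__()
        # One independent shape-function network per input feature.
        self.shape_fns = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features); SoH estimate = bias + sum of per-feature contributions.
        contributions = [fn(x[:, i : i + 1]) for i, fn in enumerate(self.shape_fns)]
        return self.bias + torch.stack(contributions, dim=0).sum(dim=0)

# Toy usage with synthetic stand-ins for ultrasonic features (e.g. time-of-flight,
# amplitude attenuation -- hypothetical names, not from the letter).
torch.manual_seed(0)
X = torch.rand(256, 3)
y = 1.0 - 0.3 * X[:, :1] - 0.1 * X[:, 1:2] ** 2   # synthetic SoH target
model = AdditiveNeuralSoH(n_features=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.4f}")
```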
Abstract: High-throughput transcriptomics has evolved from bulk RNA-seq to single-cell and spatial profiling, yet its clinical translation still depends on effective integration across diverse omics and data modalities. Emerging foundation models and multimodal learning frameworks are enabling scalable and transferable representations of cellular states, while advances in interpretability and real-world data integration are bridging the gap between discovery and clinical application. This paper outlines a concise roadmap for AI-driven, transcriptome-centered multi-omics integration in precision medicine (Figure 1).
Funding: Partially supported by the National Natural Science Foundation of China (61751306, 61801208, 61671233), the Jiangsu Science Foundation (BK20170650), the Postdoctoral Science Foundation of China (BX201700118, 2017M621712), the Jiangsu Postdoctoral Science Foundation (1701118B), and the Fundamental Research Funds for the Central Universities (021014380094).
Abstract: During the past few decades, mobile wireless communications have gone through four generations of technological revolution, from 1G to 4G, and deployment of the latest 5G networks is expected to take place in 2019. One fundamental question is how we can keep pushing mobile wireless communications forward now that they have become extremely complex and sophisticated systems. We believe the answer lies in the huge volumes of data produced by the network itself, and machine learning may become the key to exploiting that information. In this paper, we explain why the conventional model-based paradigm, which has proved widely useful in pre-5G networks, can become less efficient or even impractical in future 5G and beyond mobile networks. We then explain how the data-driven paradigm, using state-of-the-art machine learning techniques, can become a promising solution. Finally, we present a typical use case of the data-driven paradigm, proactive load balancing, in which online learning is used to adjust cell configurations in advance to avoid burst congestion caused by rapid traffic changes.
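The paper's load-balancing algorithm is not described in this abstract; the following is a minimal, hedged sketch of the general pattern only: an online regressor is updated incrementally on observed per-cell traffic and queried for the next interval, and cells whose predicted load exceeds a threshold are flagged for proactive reconfiguration. The features, threshold, and synthetic traffic are illustrative assumptions.

```python
# Illustrative sketch of online learning for proactive load balancing
# (not the paper's algorithm; features, threshold, and traffic are assumptions).
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
n_cells, horizon = 4, 200
models = [SGDRegressor(learning_rate="constant", eta0=0.01) for _ in range(n_cells)]

def features(t: int) -> np.ndarray:
    # Simple time-of-day features; a real system would use richer network context.
    return np.array([[np.sin(2 * np.pi * t / 24), np.cos(2 * np.pi * t / 24), 1.0]])

for t in range(horizon):
    for c, model in enumerate(models):
        # Synthetic normalized load for cell c at time t.
        load = 0.5 + 0.4 * np.sin(2 * np.pi * (t + 3 * c) / 24) + 0.05 * rng.standard_normal()
        if t > 0:
            # Forecast the upcoming interval before observing it.
            predicted = model.predict(features(t))[0]
            if predicted > 0.8:  # assumed congestion threshold
                print(f"t={t}: cell {c} predicted load {predicted:.2f} -> reconfigure early")
        # Online update with the newly observed load.
        model.partial_fit(features(t), [load])
```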
Funding: Supported by the Poongsan-KAIST Future Research Center Project and by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (Grant No. 2023R1A2C2005661).
Abstract: This study presents a machine learning-based method for predicting the fragment velocity distribution of a warhead under explosive loading conditions. The fragment resultant velocities are correlated with key design parameters, including casing dimensions and detonation positions. The paper details the finite element analysis of fragmentation, the characterization of the dynamic hardening and fracture models, the generation of comprehensive datasets, and the training of the ANN model. The results show the influence of casing dimensions on fragment velocity distributions, with trends indicating increased resultant velocity for reduced casing thickness and increased length and diameter. The model's predictive capability is demonstrated through accurate predictions on both the training and testing datasets, showing its potential for real-time prediction of fragmentation performance.
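As a hedged sketch of the surrogate-modeling step (not the paper's model or data), the snippet below trains a small neural network regressor that maps casing thickness, length, diameter, and a normalized detonation position to a fragment resultant velocity. The synthetic target merely mimics the trends stated above, whereas the paper's datasets come from finite element simulations.

```python
# Hedged sketch: a small ANN surrogate from casing design parameters to fragment
# resultant velocity. Synthetic data only mimic the reported trends (faster with
# thinner casing, larger length/diameter); the paper uses FE simulation data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 2000
thickness = rng.uniform(2.0, 10.0, n)     # mm
length = rng.uniform(50.0, 300.0, n)      # mm
diameter = rng.uniform(40.0, 120.0, n)    # mm
det_pos = rng.uniform(0.0, 1.0, n)        # normalized detonation position
X = np.column_stack([thickness, length, diameter, det_pos])
# Synthetic velocity (m/s): decreases with thickness, grows with length/diameter.
v = (1800 - 60 * thickness + 2.0 * length + 4.0 * diameter + 100 * det_pos
     + 20 * rng.standard_normal(n))

X_tr, X_te, y_tr, y_te = train_test_split(X, v, test_size=0.2, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                                 random_state=0))
ann.fit(X_tr, y_tr)
print(f"test R^2: {ann.score(X_te, y_te):.3f}")
```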
Funding: Supported by the National Key R&D Program of China (Grant No. 2021YFC2100100), the National Natural Science Foundation of China (Grant No. 21901157), the Shanghai Science and Technology Project of China (Grant No. 21JC1403400), and the SJTU Global Strategic Partnership Fund (Grant No. 2020 SJTUHUJI).
Abstract: The application scope and future development directions of machine learning models (supervised learning, transfer learning, and unsupervised learning) that have driven energy material design are discussed.
Funding: Supported by the National Natural Science Foundation of China (U21A20166), the Science and Technology Development Foundation of Jilin Province (20230508095RC), the Major Science and Technology Projects of Jilin Province and Changchun City (20220301033GX), the Development and Reform Commission Foundation of Jilin Province (2023C034-3), and the Interdisciplinary Integration and Innovation Project of JLU (JLUXKJC2020202).
Abstract: Dear Editor, To address the consensus tracking problem of a class of unknown heterogeneous nonlinear multi-agent systems (MASs) with input constraints, a novel data-driven iterative learning consensus control (ILCC) protocol based on zeroing neural networks (ZNNs) is proposed. First, a dynamic linearization data model (DLDM) is obtained via dynamic linearization technology (DLT).
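The letter's DLDM is not reproduced in this abstract; for orientation, the compact-form dynamic linearization commonly used in data-driven control writes each agent's input-output relation as below (a standard form with assumed notation; the letter's model may instead be defined along the iteration axis):

```latex
% Compact-form dynamic linearization data model (standard form; notation assumed,
% not taken from the letter). y_i: output of agent i, u_i: constrained input,
% k: sampling/iteration index, \phi_i(k): bounded pseudo-partial derivative
% estimated from measured input-output data.
\[
  y_i(k+1) = y_i(k) + \phi_i(k)\,\Delta u_i(k), \qquad
  \Delta u_i(k) = u_i(k) - u_i(k-1).
\]
```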
Abstract: Data mining (also known as knowledge discovery in databases, KDD) is defined as the non-trivial extraction of implicit, previously unknown, and potentially useful information from data. Its aim is to discover knowledge that is of interest to users' needs, and it has proved a useful tool in many domains such as marketing and decision making. However, some basic issues of data mining are often ignored. What is data mining? What is the product of a data mining process? What are we doing in a data mining process? Are there rules we should obey in a data mining process? To discover patterns and knowledge that are really interesting and actionable in the real world, Zhang et al. proposed a domain-driven, human-machine-cooperated data mining process, and Zhao and Yao proposed an interactive user-driven classification method using the granule network.

In our work, we find that data mining is a kind of knowledge-transforming process that converts knowledge from a data format into a symbol format. Thus, no new knowledge can be generated in a data mining process; knowledge is merely transformed from a data format, which is not understandable to humans, into a symbol format, which is understandable to humans and easy to use. It is similar to translating a book from Chinese into English: the knowledge in the book should remain unchanged, and only its format changes. The knowledge in the English book should be the same as in the Chinese one; otherwise, mistakes were made in translation. Likewise, in a data mining process we transform knowledge from one format into another without producing new knowledge. The knowledge is originally stored in the data (data is one representation format of knowledge); unfortunately, we cannot read, understand, or use it in that form.

With this understanding of data mining, we propose a data-driven knowledge acquisition method based on rough sets, which also improves the performance of classical knowledge acquisition methods. In fact, we find that domain-driven data mining and user-driven data mining do not conflict with our data-driven data mining; they can be integrated into domain-oriented, data-driven data mining. The situation is analogous to views over a database: users with different views see different parts of the data, so users with different tasks or objectives may wish to discover, or can discover, different (partial) knowledge from the same database. However, all of this partial knowledge must already exist in the database. A domain-oriented, data-driven data mining method therefore helps us extract the knowledge that really exists in a database and is really interesting and actionable in the real world.
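To make the "knowledge already exists in the data" argument concrete, the sketch below computes rough-set lower and upper approximations of a decision class from a tiny decision table: certain rules can only be read off objects whose condition attributes already determine the decision, so nothing is created that the table did not already contain. This is a generic rough-set illustration, not the authors' specific acquisition algorithm.

```python
# Generic rough-set illustration (not the authors' exact algorithm): the lower
# approximation of a decision class contains exactly the objects whose condition
# attributes already determine the decision, i.e. knowledge present in the data.
from collections import defaultdict

# A tiny decision table: (condition attributes) -> decision.
table = [
    ({"outlook": "sunny", "windy": "no"}, "play"),
    ({"outlook": "sunny", "windy": "yes"}, "stay"),
    ({"outlook": "rain",  "windy": "no"}, "play"),
    ({"outlook": "rain",  "windy": "no"}, "stay"),   # conflicts with the row above
]

def approximations(table, target):
    # Group objects that are indiscernible on the condition attributes.
    blocks = defaultdict(list)
    for idx, (cond, dec) in enumerate(table):
        blocks[tuple(sorted(cond.items()))].append((idx, dec))
    lower, upper = set(), set()
    for members in blocks.values():
        decisions = {dec for _, dec in members}
        ids = {idx for idx, _ in members}
        if decisions == {target}:
            lower |= ids          # certainly in the class
        if target in decisions:
            upper |= ids          # possibly in the class
    return lower, upper

low, up = approximations(table, "play")
print("lower approximation of 'play':", sorted(low))   # -> [0]
print("upper approximation of 'play':", sorted(up))    # -> [0, 2, 3]
```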
Funding: Supported by the National Key Research and Development Program of China (2018YFB1004902) and the Natural Science Foundation of China (61772329, 61373085).
Abstract: The field of fluid simulation is developing rapidly, and data-driven methods provide many frameworks and techniques for it. This paper presents a survey of data-driven methods used in fluid simulation in computer graphics in recent years. First, we give a brief introduction to physics-based fluid simulation methods organized by their spatial discretization, including Lagrangian, Eulerian, and hybrid methods. The characteristics of these underlying structures and their inherent connection with data-driven methodologies are then analyzed. Subsequently, we review studies spanning a wide range of applications, including data-driven solvers, detail enhancement, animation synthesis, fluid control, and differentiable simulation. Finally, we discuss open issues and potential directions in data-driven fluid simulation. We conclude that fluid simulation combined with data-driven methods offers advantages over traditional methods under the same parameters, such as higher simulation efficiency, richer detail, and more varied pattern styles, and that data-driven fluid simulation is feasible and has broad prospects.
Abstract: Cloud storage is widely used by large companies to store vast amounts of data and files, offering flexibility, financial savings, and security. However, information shoplifting poses significant threats, potentially leading to poor performance and privacy breaches. Blockchain-based cognitive computing can help protect and maintain information security and privacy in cloud platforms, allowing businesses to focus on business development. To ensure data security in cloud platforms, this research proposes a blockchain-based Hybridized Data Driven Cognitive Computing (HD2C) model. The proposed HD2C framework addresses breaches of the private information of mixed Internet of Things (IoT) participants in the cloud. HD2C is developed by combining Federated Learning (FL) with a blockchain consensus algorithm to connect smart contracts with Proof of Authority. The "data island" problem can be solved by FL's emphasis on privacy and lightning-fast processing, while the blockchain provides a decentralized incentive structure that is impervious to poisoning. FL with blockchain allows quick consensus through smart member selection and verification. The HD2C paradigm significantly improves the computational processing efficiency of intelligent manufacturing. Extensive analysis of IIoT datasets confirms HD2C's superiority. Compared with other consensus algorithms, the foundational cost of the blockchain PoA is significant. The accuracy and memory-utilization evaluations indicate the overall benefits of the system: among the tested values of 0.004, 0.04, and 0.4, the value of 0.4 achieves good accuracy, and the experiments show that the number of transactions per second has minimal impact on memory requirements. The findings of this study result in a new blockchain-based IIoT framework.
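The abstract does not detail HD2C's protocol; the toy sketch below (structure and names are assumptions for illustration) only shows the two ingredients it combines: federated averaging of locally trained weights, and a Proof-of-Authority-style commit in which only a pre-approved validator may append the aggregated update to a hash-linked ledger.

```python
# Toy illustration of the two ingredients HD2C combines (assumed structure, not
# the paper's protocol): federated averaging of local model weights plus a
# Proof-of-Authority-style commit of the aggregate onto a hash-linked ledger.
import hashlib
import json
import numpy as np

AUTHORIZED_VALIDATORS = {"plant-a"}          # PoA: only known identities may commit

def local_update(weights: np.ndarray, rng) -> np.ndarray:
    # Stand-in for local training on private IIoT data (which is never shared).
    return weights - 0.1 * rng.standard_normal(weights.shape)

def fed_avg(updates: list[np.ndarray]) -> np.ndarray:
    # Federated averaging of the clients' weight vectors.
    return np.mean(updates, axis=0)

def append_block(chain: list[dict], payload: dict, validator: str) -> None:
    if validator not in AUTHORIZED_VALIDATORS:
        raise PermissionError(f"{validator} is not an authorized PoA validator")
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev_hash, "validator": validator, "payload": payload}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

rng = np.random.default_rng(0)
global_w = np.zeros(4)
chain: list[dict] = []
for rnd in range(3):
    updates = [local_update(global_w, rng) for _ in range(5)]   # 5 IIoT clients
    global_w = fed_avg(updates)
    append_block(chain, {"round": rnd, "weights": global_w.tolist()}, "plant-a")
print(f"{len(chain)} blocks committed; latest hash {chain[-1]['hash'][:12]}...")
```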
Abstract: Risk management is relevant for every project that seeks to avoid and suppress unanticipated costs, essentially calling for pre-emptive action. The current work proposes a new approach for handling risks based on predictive analytics and machine learning (ML) that can work in real time to help avoid risks and increase project adaptability. The main aim of the study is to ascertain the presence of risk in projects by using historical data from previous projects, focusing on important aspects such as time, task time, resources, and project results. The t-SNE technique is applied for feature engineering to reduce dimensionality while preserving important structural properties. Performance is analysed using recall, F1-score, accuracy, and precision. The results demonstrate that the Gradient Boosting Machine (GBM) achieves an impressive 85% accuracy, 82% precision, 85% recall, and 80% F1-score, surpassing previous models. Additionally, predictive analytics achieves a resource-utilisation efficiency of 85%, compared to 70% for traditional allocation methods, and a project cost reduction of 10%, double the 5% achieved by traditional approaches. Furthermore, the study indicates that while GBM excels in overall accuracy, Logistic Regression (LR) offers more favourable precision-recall trade-offs, highlighting the importance of model selection in project risk management.
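As a minimal sketch of the evaluation pipeline described above (synthetic data; the reported percentages come from the study's own historical project data), the snippet trains a gradient boosting classifier and a logistic regression baseline and reports accuracy, precision, recall, and F1.

```python
# Minimal sketch of the evaluation pipeline described above: gradient boosting
# vs. logistic regression on synthetic project-risk features, scored with
# accuracy / precision / recall / F1. Not the study's data or exact setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for project features (duration, task time, resources, ...).
X, y = make_classification(n_samples=1500, n_features=8, n_informative=5,
                           weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("GBM", GradientBoostingClassifier(random_state=0)),
                  ("LR", LogisticRegression(max_iter=1000))]:
    y_hat = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, y_hat):.2f} "
          f"prec={precision_score(y_te, y_hat):.2f} "
          f"rec={recall_score(y_te, y_hat):.2f} "
          f"f1={f1_score(y_te, y_hat):.2f}")
```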
Abstract: Predicting material stability is essential for accelerating the discovery of advanced materials for renewable energy, aerospace, and catalysis. Traditional approaches such as Density Functional Theory (DFT) are accurate but computationally expensive and unsuitable for high-throughput screening. This study introduces a machine learning (ML) framework trained on high-dimensional data from the Open Quantum Materials Database (OQMD) to predict formation energy, a key stability metric. Among the evaluated models, deep learning outperformed Gradient Boosting Machines and Random Forests, achieving a prediction accuracy of up to R² = 0.88. Feature-importance analysis identified thermodynamic, electronic, and structural properties as the primary drivers of stability, offering interpretable insights into material behavior. Compared to DFT, the proposed ML framework significantly reduces computational cost, enabling the rapid screening of thousands of compounds. These results highlight ML's transformative potential in materials discovery, with direct applications in energy storage, semiconductors, and catalysis.
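Since the OQMD data are not bundled here, the hedged sketch below shows only the general shape of such a framework on synthetic descriptors: regress formation energy, score with R², and inspect feature importances. The paper's best model was a deep network; gradient boosting is used here purely for brevity.

```python
# Hedged sketch of the general workflow: regress formation energy on tabular
# descriptors and report R^2. OQMD data are not bundled here, so synthetic
# descriptors stand in; the paper's best model was a deep network.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 3000
X = rng.normal(size=(n, 6))        # thermodynamic / electronic / structural stand-ins
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 2] ** 2 + 0.1 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out set: {r2_score(y_te, model.predict(X_te)):.3f}")

# Feature-importance analysis analogous to the one described in the abstract.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
print("permutation importances:", np.round(imp.importances_mean, 3))
```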