The Reliability-Based Design Optimization (RBDO) of complex engineering structures under uncertainty is high-dimensional, highly nonlinear, and time-consuming, requiring a significant amount of sampling-based simulation. In this paper, a basis-adaptive Polynomial Chaos (PC)-Kriging surrogate model is proposed to relieve the computational burden and enhance the predictive accuracy of the metamodel. The active-learning basis-adaptive PC-Kriging model is combined with a quantile-based RBDO framework. Finally, five engineering cases are implemented, including a benchmark RBDO problem, three high-dimensional explicit problems, and a high-dimensional implicit problem. Compared with Support Vector Regression (SVR), Kriging, and polynomial chaos expansion models, the results show that the proposed basis-adaptive PC-Kriging model is more accurate and efficient for RBDO problems of complex engineering structures.
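The quantile-based reformulation used in such RBDO frameworks can be illustrated with plain Monte Carlo sampling. The limit-state function `g` and its input distribution below are invented for illustration; the check shown is the generic equivalence between a quantile condition on g and a failure-probability condition, not this paper's specific implementation.

```python
import random

def empirical_quantile(samples, p):
    """p-quantile of a sample set (simple order-statistic estimate)."""
    xs = sorted(samples)
    idx = min(int(p * len(xs)), len(xs) - 1)
    return xs[idx]

# Hypothetical limit-state function: failure when g(x) <= 0.
def g(x):
    return 3.0 - x  # safety margin shrinks as x grows

random.seed(0)
x_samples = [random.gauss(1.0, 0.5) for _ in range(10_000)]
g_samples = [g(x) for x in x_samples]

target_pf = 0.01
# Quantile-based check: the target_pf-quantile of g must be positive,
# which mirrors the probabilistic constraint P[g <= 0] <= target_pf.
q = empirical_quantile(g_samples, target_pf)
pf = sum(gv <= 0 for gv in g_samples) / len(g_samples)
print(q > 0, pf <= target_pf)
```

In a full RBDO loop, this check would run inside the optimizer, with the surrogate replacing `g`.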
In materials science, a significant correlation often exists between material input parameters and the corresponding performance attributes. Nevertheless, the inherent challenges of small data obscure these statistical correlations, preventing machine learning models from capturing the underlying patterns and hampering efficient optimization of material properties. This work presents a novel active learning framework that integrates generative adversarial networks (GAN) with a directionally constrained expected absolute improvement (EAI) acquisition function to accelerate the discovery of ultra-high temperature ceramics (UHTCs) from small data. The framework employs a GAN for data augmentation, symbolic regression for feature-weight derivation, and a self-developed EAI function that incorporates input-feature importance weighting to quantify bidirectional deviations from a zero ablation rate. In only two iterations, the framework identified the optimal composition HfB2-3.52SiC-5.23TaSi2, which exhibits robust near-zero ablation rates under plasma ablation at 2500 °C for 200 s, demonstrating superior sampling efficiency compared with conventional active learning approaches. Microstructural analysis reveals that the exceptional performance stems from the formation of a highly viscous HfO2-SiO2-Ta2O5-HfSiO4-Hf3(BO3)4 oxide layer, which provides an effective oxygen barrier. This work demonstrates an efficient and universal approach for rapid materials discovery from small data.
Objective: Deep learning (DL) has become the prevailing method in chest radiograph analysis, yet its performance depends heavily on large quantities of annotated images. To mitigate the annotation cost, cold-start active learning (AL), comprising an initialization step followed by subsequent learning, selects a small subset of informative data points for labeling. Recent pretrained models, built by supervised or self-supervised learning tailored to chest radiographs, have shown broad applicability to diverse downstream tasks, but their potential in cold-start AL remains unexplored. Methods: To validate the efficacy of domain-specific pretraining, we compared two foundation models, supervised TXRV and self-supervised REMEDIS, with their general-domain counterparts pretrained on ImageNet. Model performance was evaluated at both the initialization and subsequent learning stages on two diagnostic tasks: pediatric pneumonia and COVID-19. For initialization, we assessed their integration with three strategies: diversity, uncertainty, and hybrid sampling. For subsequent learning, we focused on uncertainty sampling powered by different pretrained models. We also conducted statistical tests to compare the foundation models with their ImageNet counterparts, investigate the relationship between initialization and subsequent learning, examine one-shot initialization against the full AL process, and investigate the influence of class balance in the initialization samples on both stages. Results: First, domain-specific foundation models failed to outperform their ImageNet counterparts in six of eight experiments on informative sample selection. Both domain-specific and general pretrained models were unable to generate representations that could substitute for the original images as model inputs in seven of the eight scenarios. However, pretrained-model-based initialization surpassed random sampling, the default approach in cold-start AL. Second, initialization performance was positively correlated with subsequent learning performance, highlighting the importance of initialization strategies. Third, one-shot initialization performed comparably to the full AL process, demonstrating the potential to reduce experts' repeated waiting during AL iterations. Last, a U-shaped correlation was observed between the class balance of the initialization samples and model performance, suggesting that class balance is more strongly associated with performance at middle budget levels than at low or high budgets. Conclusions: In this study, we highlighted the limitations of medical pretraining compared with general pretraining in the context of cold-start AL. We also identified promising outcomes, including pretrained-model-based initialization, the positive influence of initialization on subsequent learning, the potential of one-shot initialization, and the influence of class balance on middle-budget AL. Researchers are encouraged to improve medical pretraining for versatile DL foundations and to explore novel AL methods.
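The diversity-sampling strategy assessed for initialization can be sketched with a generic k-center greedy selection over feature embeddings. This is one common instance of diversity sampling, not necessarily the exact variant used in the study; the toy 2-D "embeddings" are invented.

```python
def kcenter_greedy(points, k, seed_idx=0):
    """Greedy k-center selection: repeatedly pick the point farthest from
    the current selection -- a common diversity-sampling strategy for
    cold-start initialization (illustrative, not this study's exact method)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    selected = [seed_idx]
    min_d = [dist2(p, points[seed_idx]) for p in points]  # dist to nearest pick
    while len(selected) < k:
        nxt = max(range(len(points)), key=lambda i: min_d[i])
        selected.append(nxt)
        for i, p in enumerate(points):
            min_d[i] = min(min_d[i], dist2(p, points[nxt]))
    return selected

# Toy embedding space: two tight clusters far apart.
pts = [(0, 0), (0.1, 0), (0, 0.1), (10, 10), (10.1, 10)]
picked = kcenter_greedy(pts, 2)
print(picked)  # one point from each cluster
```

Uncertainty and hybrid sampling replace or mix the distance criterion with a model-confidence score.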
Human Activity Recognition (HAR) has become increasingly critical in civic surveillance, medical care monitoring, and institutional protection. Current deep learning-based approaches often suffer from excessive computational complexity, limited generalizability under varying conditions, and compromised real-time performance. To counter these issues, this paper introduces an Active Learning-aided Heuristic Deep Spatio-Textural Ensemble Learning (ALH-DSEL) framework. The model first identifies keyframes in the surveillance videos with a Multi-Constraint Active Learning (MCAL) approach, using features extracted by DenseNet121. The frames are then segmented with a Fuzzy C-Means clustering algorithm optimized by the Firefly algorithm to identify areas of interest. A deep ensemble feature extractor, comprising DenseNet121, EfficientNet-B7, MobileNet, and GLCM, extracts varied spatial and textural features. The fused features are refined through PCA and Min-Max normalization and classified by a maximum-voting ensemble of RF, AdaBoost, and XGBoost. Experimental results show that ALH-DSEL achieves higher accuracy, precision, recall, and F1-score, validating its suitability for real-time HAR in surveillance scenarios.
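The maximum-voting fusion of the RF/AdaBoost/XGBoost outputs amounts to a simple majority vote over predicted labels. A minimal sketch, with the three classifiers' predictions hard-coded as hypothetical values:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label predictions by majority (max) voting.
    `predictions` is a list of label lists, one per classifier."""
    n_samples = len(predictions[0])
    fused = []
    for i in range(n_samples):
        votes = Counter(clf_preds[i] for clf_preds in predictions)
        fused.append(votes.most_common(1)[0][0])
    return fused

# Hypothetical outputs of three classifiers (standing in for RF, AdaBoost, XGBoost)
rf  = ["walk", "run", "sit"]
ada = ["walk", "sit", "sit"]
xgb = ["run",  "run", "sit"]
fused_labels = majority_vote([rf, ada, xgb])
print(fused_labels)  # ['walk', 'run', 'sit']
```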
To capture the nonlinear dynamics and gain evolution in chirped pulse amplification (CPA) systems, the split-step Fourier method and the fourth-order Runge–Kutta method are integrated to iteratively solve the generalized nonlinear Schrödinger equation and the rate equations. However, this approach carries substantial computational demands and significant time expenditure. In the context of intelligent laser optimization and inverse design, the need for numerous simulations further exacerbates this issue, highlighting the need for fast and accurate simulation methodologies. Here, we introduce an end-to-end model augmented with active learning (E2E-AL) that generalizes well through dedicated embedding methods over the various parameters. On an identical computational platform, the artificial intelligence-driven model is 2000 times faster than the conventional simulation method. Benefiting from the active learning strategy, the E2E-AL model achieves comparable precision with only two-thirds of the training samples required without such a strategy. Furthermore, we demonstrate a multi-objective inverse design of CPA systems enabled by the E2E-AL model. The E2E-AL framework has the potential to become a standard approach for rapid and accurate modeling of ultrafast lasers and is readily extended to other complex systems.
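The split-step Fourier method referred to above alternates linear (dispersion) steps in the frequency domain with nonlinear steps in the time domain. A minimal symmetric split step for the basic nonlinear Schrödinger equation is sketched below; the sign convention, parameter values, and omission of gain and higher-order terms are simplifying assumptions, not this paper's full GNLSE/rate-equation solver.

```python
import numpy as np

def split_step(u, dz, dt, beta2=-1.0, gamma=1.0):
    """One symmetric split-step Fourier step for the basic NLSE
    du/dz = -i*beta2/2 * d^2u/dt^2 + i*gamma*|u|^2*u (illustrative signs)."""
    n = u.size
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)          # angular frequencies
    half_disp = np.exp(1j * 0.5 * beta2 * w**2 * dz / 2)
    u = np.fft.ifft(half_disp * np.fft.fft(u))       # half dispersion step
    u = u * np.exp(1j * gamma * np.abs(u)**2 * dz)   # full nonlinear step
    u = np.fft.ifft(half_disp * np.fft.fft(u))       # half dispersion step
    return u

t = np.linspace(-10, 10, 256, endpoint=False)
u0 = (1 / np.cosh(t)).astype(complex)  # sech pulse initial condition
u = u0.copy()
for _ in range(100):
    u = split_step(u, dz=0.01, dt=t[1] - t[0])
# Both sub-steps are unitary, so pulse energy is conserved.
print(np.allclose(np.sum(np.abs(u)**2), np.sum(np.abs(u0)**2)))  # True
```

The Runge–Kutta integration of the rate equations would slot in alongside the nonlinear step in a full CPA model.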
Dynamical systems often exhibit multiple attractors representing significantly different operating conditions. A global map of attraction basins can offer valuable guidance for stabilizing system states or transitioning between them. Such a map can be constructed without prior system knowledge by identifying the attractor reached from a sufficient number of points in the state space. However, determining the attractor for each initial state can be laborious. Here, we tackle the challenge of reconstructing attraction basins using as few initial points as possible. In each iteration of our approach, informative points are selected through random seeding and are driven along the current classification boundary, promoting the eventual selection of points that are both diverse and enlightening. Results across various experimental dynamical systems demonstrate that our approach requires fewer points than baseline methods while achieving comparable mapping accuracy. Additionally, the reconstructed map allows us to accurately estimate the minimum escape distance required to transition the system state to a target basin.
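Brute-force basin mapping, the baseline this approach improves on, simply integrates the dynamics from each initial state and records the attractor reached. A sketch for a toy one-dimensional bistable system (dx/dt = x − x³, attractors at ±1), which is an invented example rather than one of the paper's test systems:

```python
def attractor_of(x0, dt=0.01, steps=5000):
    """Integrate dx/dt = x - x**3 with forward Euler steps and report
    which attractor (+1 or -1) the trajectory settles into."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)
    return 1 if x > 0 else -1

# Label a grid of initial states: a brute-force basin map. The method in
# the text aims to recover the same map from far fewer sampled points.
grid = [i / 10 for i in range(-20, 21)]
basins = {x0: attractor_of(x0) for x0 in grid}
print(basins[1.5], basins[-0.5])  # 1 -1
```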
For complex engineering problems, multi-fidelity modeling has been used to achieve efficient reliability analysis by leveraging multiple information sources. However, most methods require nested training samples to capture the correlation between data of different fidelities, which may lead to a significant increase in low-fidelity samples. In addition, it is difficult to build accurate surrogate models because current methods do not fully consider the nonlinearity between samples of different fidelities. To address these problems, a novel multi-fidelity modeling method with active learning is proposed in this paper. First, a nonlinear autoregressive multi-fidelity Kriging (NAMK) model is used to build the surrogate. To avoid introducing redundant samples while updating the NAMK model, a collective learning function is then developed by combining a U-learning function, the correlation between samples of different fidelities, and the sampling cost. Furthermore, a residual model is constructed to automatically generate low-fidelity samples when high-fidelity samples are selected. The efficiency and accuracy of the proposed method are demonstrated on three numerical examples and an engineering case.
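The U-learning function that the collective learning function builds on is the standard AK-MCS-style criterion U = |μ|/σ: candidates with small U lie near the predicted limit state with an uncertain sign. A sketch with hypothetical Kriging posterior values (the correlation and cost terms of the collective function are omitted):

```python
def u_function(mu, sigma):
    """U learning function from AK-MCS-style active learning: small U means
    the sign of the surrogate prediction is uncertain, i.e. the point is
    close to the limit state and worth evaluating."""
    return abs(mu) / sigma

# Hypothetical Kriging posterior (mean, std) at four candidate points
candidates = {"a": (2.0, 0.5), "b": (0.1, 0.4), "c": (-1.5, 0.3), "d": (0.02, 0.2)}
best = min(candidates, key=lambda k: u_function(*candidates[k]))
print(best)  # the candidate whose failure/safe classification is least certain
```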
Surrogate models offer an efficient way to tackle the computationally intensive evaluation of performance functions in reliability analysis. Nevertheless, the approximations inherent in surrogate models necessitate accounting for surrogate-model uncertainty when estimating failure probabilities. This paper proposes a new reliability analysis method in which the uncertainty of the Kriging surrogate model is quantified simultaneously. The method treats surrogate-model uncertainty as an independent entity characterizing the estimation error of the failure probability. Building on the probabilistic classification function, a failure-probability uncertainty measure is proposed by integrating the difference between the traditional indicator function and the probabilistic classification function, quantifying the impact of surrogate-model uncertainty on the failure-probability estimate. Furthermore, the proposed uncertainty quantification method is applied in a newly designed reliability analysis approach, termed SUQ-MCS, which incorporates a proposed median approximation function for active learning. The proposed failure-probability uncertainty serves as the stopping criterion of this framework. Benchmarking validates the effectiveness of the proposed uncertainty quantification method, and the empirical results show the competitive performance of SUQ-MCS relative to alternative approaches.
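The contrast between the indicator function and the probabilistic classification function can be made concrete under a Gaussian (Kriging) posterior, where the probability that g(x) ≤ 0 is Φ(−μ/σ). The averaged disagreement below is a sketch of the idea described above, with invented posterior values, not the paper's exact uncertainty formula.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical Kriging posterior (mean, std) at Monte Carlo points
posterior = [(1.2, 0.1), (-0.3, 0.2), (0.05, 0.3), (2.0, 0.5), (-1.0, 0.1)]

# Probabilistic classification: probability that g(x) <= 0 under the posterior.
pi = [norm_cdf(-mu / sd) for mu, sd in posterior]
# Plug-in indicator classification based on the posterior mean alone.
ind = [1.0 if mu <= 0 else 0.0 for mu, sd in posterior]

pf_ind = sum(ind) / len(ind)   # plug-in failure-probability estimate
# One way to quantify surrogate-induced uncertainty: the average disagreement
# between the indicator and the probabilistic classification.
uncertainty = sum(abs(a - b) for a, b in zip(ind, pi)) / len(pi)
print(pf_ind, round(uncertainty, 3))
```

When the surrogate is confident everywhere (large |μ|/σ), the disagreement, and hence the uncertainty measure, shrinks toward zero.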
Machine learning combined with density functional theory (DFT) enables rapid exploration of the catalyst descriptor space, such as adsorption energies, facilitating rapid and effective catalyst screening. However, models for predicting adsorption energies on oxides are still lacking, owing to the complexity of elemental species and ambiguous coordination environments. This work proposes an active learning workflow (LeNN) founded on local electronic transfer features (e) and the principle of coordinate-rotation invariance. By accurately characterizing the electron transfer to adsorption-site atoms and their surrounding geometric structures, LeNN mitigates abrupt feature changes across element types and clarifies coordination environments. As a result, it predicts *H adsorption energies on binary oxide surfaces with a mean absolute error (MAE) below 0.18 eV. Moreover, we incorporate local coverage (θ_l) and leverage a neural network ensemble to establish an active learning workflow, attaining a prediction MAE below 0.2 eV for 5419 multi-*H adsorption structures. These findings validate the universality and capability of the proposed features in predicting *H adsorption energies on binary oxide surfaces.
The effectiveness of facial expression recognition (FER) algorithms hinges on the model's quality and the availability of a substantial amount of labeled expression data. However, labeling large datasets demands significant human, time, and financial resources. Although active learning methods have mitigated the dependency on extensive labeled data, a cold-start problem persists in small- to medium-sized expression recognition datasets, because the initial labeled data often fail to represent the full spectrum of facial expression characteristics. This paper introduces an active learning approach that integrates uncertainty estimation, aiming to improve the precision of facial expression recognition across dataset scales. The method has two primary phases. First, the model undergoes self-supervised pre-training using contrastive learning and uncertainty estimation to bolster its feature extraction capabilities. Second, the model is fine-tuned using the prior knowledge obtained in pre-training to significantly improve recognition accuracy. In the pre-training phase, the model employs contrastive learning to extract fundamental feature representations from the complete unlabeled dataset. These features are then weighted through a self-attention mechanism with rank regularization. Subsequently, data from the low-weighted set are relabeled to further refine the model's feature extraction ability. The pre-trained model is then used in active learning to select and label information-rich samples more efficiently. Experimental results demonstrate that the proposed method significantly outperforms existing approaches, improving recognition accuracy by 5.09% and 3.82% over the best existing active learning methods, Margin and Least Confidence, respectively, and by 1.61% compared with the conventional segmented active learning method.
Graph learning, when used as a semi-supervised learning (SSL) method, performs well on classification tasks with a low label rate. We provide a graph-based batch active learning pipeline for pixel/patch-neighborhood multi- or hyperspectral image segmentation. Our batch active learning approach selects a collection of unlabeled pixels that satisfy a graph local-maximum constraint on the active learning acquisition function, which determines the relative importance of each pixel to the classification. This work builds on recent advances in the design of novel active learning acquisition functions (e.g., the Model Change approach in arXiv:2110.07739) while adding important further developments, including patch-neighborhood image analysis and batch active learning methods, to further increase the accuracy and greatly increase the computational efficiency of these methods. In addition to improving accuracy, our approach can greatly reduce the number of labeled pixels needed to reach the accuracy obtained with randomly selected labeled pixels.
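The graph local-maximum constraint admits a compact sketch: a node joins the batch only if its acquisition value is no smaller than that of any graph neighbor, which keeps batch points informative yet spread across the graph. The toy path graph and acquisition scores below are invented.

```python
def local_max_batch(acquisition, neighbors):
    """Select all nodes whose acquisition value is a local maximum over
    their graph neighborhood -- a batch that is informative yet spread out."""
    batch = []
    for node, score in acquisition.items():
        if all(score >= acquisition[nb] for nb in neighbors[node]):
            batch.append(node)
    return sorted(batch)

# Toy 6-node path graph with hypothetical acquisition scores
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
acq = {0: 0.9, 1: 0.2, 2: 0.5, 3: 0.8, 4: 0.3, 5: 0.1}
batch = local_max_batch(acq, neighbors)
print(batch)  # [0, 3]
```

Adjacent high-scoring pixels thus cannot both enter the same batch, which is what prevents redundant labels within one query round.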
Active learning in semi-supervised classification introduces additional labels for unlabelled data to improve the accuracy of the underlying classifier. A challenge is identifying which points to label to best improve performance while limiting the number of new labels. "Model Change" active learning quantifies the change incurred in the classifier by introducing the additional label(s). We pair this idea with graph-based semi-supervised learning (SSL) methods that use the spectrum of the graph Laplacian matrix, which can be truncated to avoid prohibitively large computational and storage costs. We consider a family of convex loss functions for which the acquisition function can be efficiently approximated using the Laplace approximation of the posterior distribution. We show a variety of multiclass examples that illustrate improved performance over the prior state of the art.
Sampling training data is a bottleneck in the development of artificial intelligence (AI) models, owing to the processing of huge amounts of data or the difficulty of accessing data in industrial practice. Active learning (AL) approaches are useful in such a context because they maximize the performance of the trained model while minimizing the number of training samples. These smart sampling methodologies iteratively select the points that should be labeled and added to the training set based on their informativeness and pertinence, with query rules defined to judge the relevance of each data instance. In this paper, we propose an AL methodology based on a physics-based query rule. Given industrial objectives from the physical process in which the AI model is involved, the physics-based AL approach iteratively converges to the data instances fulfilling those objectives while sampling training points. The trained surrogate model is therefore accurate where the potentially interesting data instances lie from the industrial point of view, and coarse everywhere else, where the data instances are of no interest in the industrial context studied.
Support vector machines (SVMs) are a popular class of supervised learning algorithms, particularly applicable to large and high-dimensional classification problems. Like most machine learning methods for data classification and information retrieval, they require manually labeled data samples in the training stage. However, manual labeling is a time-consuming and error-prone task. One possible solution is to exploit the large number of unlabeled samples that are easily accessible via the internet. This paper presents a novel active learning method for text categorization. The main objective of active learning is to reduce the labeling effort, without compromising classification accuracy, by intelligently selecting which samples should be labeled. The proposed method selects a batch of informative samples using the posterior probabilities provided by a set of multi-class SVM classifiers, and these samples are then manually labeled by an expert. Experimental results indicate that the proposed active learning method significantly reduces the labeling effort while simultaneously enhancing the classification accuracy.
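One standard way to turn classifier posterior probabilities into a batch query, in the spirit of the method described here (though not necessarily its exact rule), is least-confidence selection: label the samples whose top posterior probability is smallest.

```python
def select_batch(posteriors, batch_size):
    """Pick the samples whose predicted class is least certain
    (smallest maximum posterior probability) -- a common way to use
    classifier posteriors for batch active learning."""
    confidence = [(max(p), i) for i, p in enumerate(posteriors)]
    confidence.sort()  # least confident first
    return [i for _, i in confidence[:batch_size]]

# Hypothetical multi-class posterior probabilities for five documents
probs = [
    [0.98, 0.01, 0.01],
    [0.40, 0.35, 0.25],
    [0.70, 0.20, 0.10],
    [0.34, 0.33, 0.33],
    [0.90, 0.05, 0.05],
]
chosen = select_batch(probs, 2)
print(chosen)  # [3, 1]
```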
This paper describes a new method for active learning in content-based image retrieval. The proposed method first uses support vector machine (SVM) classifiers to learn an initial query concept. The active learning scheme then employs a similarity measure to check the current version space and selects the images with the maximum expected information gain to solicit the user's labels. Finally, the learned query is refined based on the user's further feedback. By combining the SVM classifier with the similarity measure, the proposed method can alleviate the model bias present in each of them. Our experiments on several query concepts show that the proposed method can learn the user's query concept quickly and effectively within only a few iterations.
In this paper, we present a novel support vector machine active learning algorithm for effective 3D model retrieval based on relevance feedback. The proposed method learns from the most informative objects marked by the user and then creates a boundary separating the relevant models from the irrelevant ones. It requires only a small number of 3D models labelled by the user and can grasp the user's semantic knowledge rapidly and accurately. Experimental results showed that the proposed algorithm significantly improves retrieval effectiveness. Compared with four state-of-the-art query refinement schemes for 3D model retrieval, it provides superior retrieval performance after no more than two rounds of relevance feedback.
Active learning has been widely utilized to reduce the labeling cost of supervised learning: by selecting specific instances to train the model, performance is improved within a limited number of steps. However, little work has examined the effectiveness of active learning for text classification. In this paper, we propose a deep active learning model with bidirectional encoder representations from transformers (BERT) for text classification. BERT takes advantage of the self-attention mechanism to integrate contextual information, which helps accelerate training convergence. For the active learning process, we design an instance selection strategy based on posterior probabilities: Margin, Intra-correlation and Inter-correlation (MII). Selected instances are characterized by small margin, low intra-cohesion, and high inter-cohesion. We conduct extensive experiments and analyses: the effect of the learner is compared, and the effects of the sampling strategy on text classification are assessed on three real datasets. The results show that our method outperforms the baselines in terms of accuracy.
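The margin component of the MII strategy selects instances whose top two class probabilities are close. A sketch of that component alone, with hypothetical posteriors (the intra- and inter-correlation terms are omitted):

```python
def margin(posterior):
    """Margin between the top two class probabilities. A small margin marks
    an instance the model finds hard to separate -- the 'M' in MII; the
    intra/inter-correlation terms are not modeled in this sketch."""
    top2 = sorted(posterior, reverse=True)[:2]
    return top2[0] - top2[1]

# Hypothetical posteriors for four unlabeled texts
pool = {"t1": [0.90, 0.05, 0.05], "t2": [0.45, 0.40, 0.15],
        "t3": [0.60, 0.30, 0.10], "t4": [0.34, 0.33, 0.33]}
ranked = sorted(pool, key=lambda k: margin(pool[k]))
print(ranked[0])  # smallest margin, selected first
```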
This paper is devoted to the probabilistic stability analysis of a tunnel face excavated in a two-layer soil, where the interface of the soil layers is assumed to lie above the tunnel roof. In the framework of limit analysis, a rotational failure mechanism is adopted to describe the face failure, considering different shear strength parameters in the two layers. A surrogate Kriging model is introduced to replace the actual performance function in a Monte Carlo simulation, and an active learning function is used to train the Kriging model, ensuring an efficient tunnel face failure probability prediction without loss of accuracy. A deterministic stability analysis is given to validate the proposed tunnel face failure model. Subsequently, the number of initial sampling points, the correlation coefficient, the distribution type, and the coefficient of variation of the random variables are discussed to show their influence on the failure probability. The proposed approach is a sound alternative for tunnel face stability assessment and can provide guidance for tunnel design.
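Once a surrogate stands in for the performance function, the failure probability follows from a plain Monte Carlo loop. The linear performance function and input distributions below are invented placeholders, not the paper's tunnel-face limit-state model.

```python
import random

def g(c, phi):
    """Hypothetical performance function for a stability margin:
    positive values are stable, g <= 0 denotes failure (illustrative only)."""
    return 0.5 * c + 2.0 * phi - 30.0

random.seed(1)
n = 100_000
failures = 0
for _ in range(n):
    c = random.gauss(30.0, 5.0)    # cohesion sample (hypothetical distribution)
    phi = random.gauss(10.0, 1.5)  # friction-angle sample
    if g(c, phi) <= 0:
        failures += 1
pf = failures / n
print(pf)  # Monte Carlo estimate of the failure probability
```

In an active-learning scheme, `g` would be the trained Kriging predictor, so each of the 100,000 evaluations is cheap.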
Owing to the continuous barrage of cyber threats, there is a massive amount of cyber threat intelligence, much of it from textual sources. For its analysis, many security analysts rely on cumbersome and time-consuming manual efforts. Cybersecurity knowledge graphs play a significant role in the automatic analysis of cyber threat intelligence. As the foundation for constructing such graphs, named entity recognition (NER) is required for identifying critical threat-related elements from textual cyber threat intelligence. Recently, deep neural network-based models have attained very good results in NER, but their performance relies heavily on the amount of labeled data. Because labeled data in cybersecurity are scarce, in this paper we propose an adversarial active learning framework to effectively select informative samples for further annotation. In addition, leveraging the long short-term memory (LSTM) network and the bidirectional LSTM (BiLSTM) network, we propose a novel NER model by introducing a dynamic attention mechanism into a BiLSTM-LSTM encoder-decoder. With the selected informative samples annotated, the NER model is retrained, so its performance is incrementally enhanced at low labeling cost. Experimental results show the effectiveness of the proposed method.
Most big data analytics applied to transportation datasets suffer from being too domain-specific: they draw conclusions for a dataset based on analytics over the same dataset. As a result, models trained in one domain (e.g., taxi data) transfer poorly to a different domain (e.g., Uber data). To achieve accurate analyses in a new domain, substantial amounts of data must be available, which limits practical applications. To remedy this, we propose to use semi-supervised and active learning over big data to accomplish the domain adaptation task: selectively choosing a small number of data points from a new domain while achieving performance comparable to using all the data points. We choose the New York City (NYC) taxi and Uber transportation data as our dataset, simulating different domains with 90% as the source data domain for training and the remaining 10% as the target data domain for evaluation. We propose semi-supervised and active learning strategies and apply them to the source domain for selecting data points. Experimental results show that our adaptation achieves performance comparable to using all data points while using only a fraction of them, substantially reducing the amount of data required. Our approach has two major advantages: it can make accurate analytics and predictions when big datasets are not available, and even when they are, it chooses the most informative data points, making the process much more efficient without processing huge amounts of data.
Funding: supported by the National Key R&D Program of China (No. 2021YFB1715000) and the National Natural Science Foundation of China (No. 52375073).
Funding: supported by the Natural Science Foundation of China (grant number 52302093) and the Natural Science Foundation of Jiangxi Province (grant number 20224BAB204021).
文摘In materials science,a significant correlation often exists between material input parameters and their corresponding performance attributes.Nevertheless,the inherent challenges associated with small data obscure these statistical correlations,impeding machine learning models from effectively capturing the underlying patterns,thereby hampering efficient optimization of material properties.This work presents a novel active learning framework that integrates generative adversarial networks(GAN)with a directionally constrained expected absolute improvement(EAI)acquisition function to accelerate the discovery of ultra-high temperature ceramics(UHTCs)using small data.The framework employs GAN for data augmentation,symbolic regression for feature weight derivation,and a self-developed EAI function that incorporates input feature importance weighting to quantify bidirectional deviations from zero ablation rate.Through only two iterations,this framework successfully identified the optimal composition of HfB_(2)-3.52SiC-5.23TaSi_(2),which exhibits robust near-zero ablation rates under plasma ablation at 2500℃ for 200 s,demonstrating superior sampling efficiency compared to conventional active learning approaches.Microstructural analysis reveals that the exceptional performance stems from the formation of a highly viscous HfO_(2)-SiO_(2)-Ta_(2)O_(5)-HfSiO_(4)-Hf_(3)(BO_(3))_(4) oxide layer,which provides effective oxygen barrier protection.This work demonstrates an efficient and universal approach for rapid materials discovery using small data.
Abstract: Objective: Deep learning (DL) has become the prevailing method in chest radiograph analysis, yet its performance depends heavily on large quantities of annotated images. To mitigate the annotation cost, cold-start active learning (AL), comprising an initialization stage followed by subsequent learning, selects a small subset of informative data points for labeling. Recent pretrained models tailored to chest radiographs, whether supervised or self-supervised, have shown broad applicability to diverse downstream tasks; however, their potential in cold-start AL remains unexplored. Methods: To validate the efficacy of domain-specific pretraining, we compared two foundation models, the supervised TXRV and the self-supervised REMEDIS, with their general-domain counterparts pretrained on ImageNet. Model performance was evaluated at both the initialization and subsequent learning stages on two diagnostic tasks: pediatric pneumonia and COVID-19. For initialization, we assessed their integration with three strategies: diversity, uncertainty, and hybrid sampling. For subsequent learning, we focused on uncertainty sampling powered by different pretrained models. We also conducted statistical tests to compare the foundation models with their ImageNet counterparts, investigate the relationship between initialization and subsequent learning, examine the performance of one-shot initialization against the full AL process, and investigate the influence of class balance in the initialization samples on both stages. Results: First, domain-specific foundation models failed to outperform their ImageNet counterparts in six of eight experiments on informative sample selection. Both domain-specific and general pretrained models were unable to generate representations that could substitute for the original images as model inputs in seven of the eight scenarios. However, pretrained-model-based initialization surpassed random sampling, the default approach in cold-start AL. Second, initialization performance was positively correlated with subsequent learning performance, highlighting the importance of initialization strategies. Third, one-shot initialization performed comparably to the full AL process, demonstrating the potential to reduce experts' repeated waiting during AL iterations. Last, a U-shaped correlation was observed between the class balance of the initialization samples and model performance, suggesting that class balance is more strongly associated with performance at middle budget levels than at low or high budgets. Conclusions: In this study, we highlighted the limitations of medical pretraining compared with general pretraining in the context of cold-start AL. We also identified promising outcomes, including pretrained-model-based initialization, the positive influence of initialization on subsequent learning, the potential of one-shot initialization, and the influence of class balance in middle-budget AL. Researchers are encouraged to improve medical pretraining for versatile DL foundations and to explore novel AL methods.
Abstract: Human Activity Recognition (HAR) has become increasingly critical in civic surveillance, medical care monitoring, and institutional protection. Current deep learning-based approaches often suffer from excessive computational complexity, limited generalizability under varying conditions, and compromised real-time performance. To counter these limitations, this paper introduces an Active Learning-aided Heuristic Deep Spatio-Textural Ensemble Learning (ALH-DSEL) framework. The model first identifies keyframes in the surveillance videos with a Multi-Constraint Active Learning (MCAL) approach, using features extracted from DenseNet121. The frames are then segmented with a Fuzzy C-Means clustering algorithm optimized by the Firefly algorithm to identify areas of interest. A deep ensemble feature extractor, comprising DenseNet121, EfficientNet-B7, MobileNet, and GLCM, extracts diverse spatial and textural features. The fused features are refined through PCA and Min-Max normalization and classified by a maximum-voting ensemble of RF, AdaBoost, and XGBoost. Experimental results show that ALH-DSEL achieves higher accuracy, precision, recall, and F1-score, validating its superiority for real-time HAR in surveillance scenarios.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62227821, 62025503, and 62205199).
Abstract: To capture the nonlinear dynamics and gain evolution in chirped pulse amplification (CPA) systems, the split-step Fourier method and the fourth-order Runge-Kutta method are combined to iteratively solve the generalized nonlinear Schrödinger equation and the rate equations. However, this approach carries substantial computational demands, resulting in significant time expenditure. In the context of intelligent laser optimization and inverse design, the need for numerous simulations further exacerbates this issue, highlighting the need for fast and accurate simulation methodologies. Here, we introduce an end-to-end model augmented with active learning (E2E-AL) that generalizes well across various parameters through dedicated embedding methods. On an identical computational platform, the AI-driven model is 2000 times faster than the conventional simulation method. Benefiting from the active learning strategy, the E2E-AL model achieves good precision with only two-thirds of the training samples required without such a strategy. Furthermore, we demonstrate a multi-objective inverse design of CPA systems enabled by the E2E-AL model. The E2E-AL framework shows the potential to become a standard approach for rapid and accurate modeling of ultrafast lasers and is readily extended to other complex systems.
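As a minimal illustration of the conventional solver's core operation (the gain and rate-equation coupling described above is omitted), the following sketches one symmetric split-step Fourier step for the lossless nonlinear Schrödinger equation; the β₂ and γ values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Hedged sketch: one symmetric split-step Fourier step for the basic
# lossless NLSE (half dispersion -> full nonlinearity -> half dispersion).
# beta2 and gamma are illustrative; gain/rate equations are not modeled.

def ssfm_step(field, dt, dz, beta2=-0.02, gamma=1.3):
    w = 2.0 * np.pi * np.fft.fftfreq(field.size, d=dt)          # angular frequencies
    half_disp = np.exp(1j * (beta2 / 2.0) * w**2 * (dz / 2.0))  # half dispersion phase
    field = np.fft.ifft(half_disp * np.fft.fft(field))          # half dispersion step
    field = field * np.exp(1j * gamma * np.abs(field)**2 * dz)  # full nonlinear phase
    return np.fft.ifft(half_disp * np.fft.fft(field))           # half dispersion step

t = np.linspace(-10.0, 10.0, 1024)
dt = t[1] - t[0]
u0 = (1.0 / np.cosh(t)).astype(complex)        # sech input pulse
u1 = ssfm_step(u0, dt=dt, dz=0.01)
energy0 = float(np.sum(np.abs(u0)**2) * dt)
energy1 = float(np.sum(np.abs(u1)**2) * dt)    # conserved: both sub-steps are unitary
```

Because both the dispersion and nonlinear sub-steps are pure phase rotations, the pulse energy is conserved to machine precision, which is a useful sanity check on any such propagator.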
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. T2225022, 12350710786, 62088101, and 12161141016), the Shuguang Program of the Shanghai Education Development Foundation and Shanghai Municipal Education Commission (Grant No. 22SG21), and the Fundamental Research Funds for the Central Universities.
Abstract: Dynamical systems often exhibit multiple attractors representing significantly different operating conditions. A global map of attraction basins can offer valuable guidance for stabilizing or transitioning system states. Such a map can be constructed without prior system knowledge by identifying the attractor reached from a sufficient number of points in the state space. However, determining the attractor for each initial state can be a laborious task. Here, we tackle the challenge of reconstructing attraction basins using as few initial points as possible. In each iteration of our approach, informative points are selected through random seeding and are driven along the current classification boundary, promoting the eventual selection of points that are both diverse and enlightening. Results across various experimental dynamical systems demonstrate that our approach requires fewer points than baseline methods while achieving comparable mapping accuracy. Additionally, the reconstructed map allows us to accurately estimate the minimum escape distance required to transition the system state into a target basin.
Funding: Supported by the Major Projects of the Zhejiang Provincial Natural Science Foundation of China (No. LD22E050009), the National Natural Science Foundation of China (No. 51475425), and the College Student's Science and Technology Innovation Project of Zhejiang Province (No. 2022R403B060), China.
Abstract: For complex engineering problems, multi-fidelity modeling has been used to achieve efficient reliability analysis by leveraging multiple information sources. However, most methods require nested training samples to capture the correlation between data of different fidelities, which may lead to a significant increase in low-fidelity samples. In addition, it is difficult to build accurate surrogate models because current methods do not fully consider the nonlinearity between samples of different fidelities. To address these problems, a novel multi-fidelity modeling method with active learning is proposed in this paper. First, a nonlinear autoregressive multi-fidelity Kriging (NAMK) model is used to build the surrogate. To avoid introducing redundant samples while updating the NAMK model, a collective learning function is then developed that combines a U-learning function, the correlation between samples of different fidelities, and the sampling cost. Furthermore, a residual model is constructed to automatically generate low-fidelity samples when high-fidelity samples are selected. The efficiency and accuracy of the proposed method are demonstrated using three numerical examples and an engineering case.
Funding: Supported by the National Key Research and Development Program of China (No. 2023YFB3406900) and the National Natural Science Foundation of China (No. 52075068).
Abstract: Surrogate models offer an efficient way to tackle the computationally intensive evaluation of performance functions in reliability analysis. Nevertheless, the approximations inherent in surrogate models necessitate accounting for surrogate-model uncertainty when estimating failure probabilities. This paper proposes a new reliability analysis method in which the uncertainty of the Kriging surrogate model is quantified simultaneously. The method treats surrogate-model uncertainty as an independent entity characterizing the estimation error of the failure probability. Building upon the probabilistic classification function, a failure-probability uncertainty measure is proposed that integrates the difference between the traditional indicator function and the probabilistic classification function, quantifying the impact of surrogate-model uncertainty on the failure-probability estimate. Furthermore, the proposed uncertainty quantification method is applied to a newly designed reliability analysis approach, termed SUQ-MCS, which incorporates a proposed median approximation function for active learning; the proposed failure-probability uncertainty serves as the stopping criterion of this framework. Benchmarking validates the effectiveness of the proposed uncertainty quantification method, and the empirical results show the competitive performance of SUQ-MCS relative to alternative approaches.
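The probabilistic classification function underlying this kind of analysis can be sketched with the standard library alone; the μ/σ values below are invented for illustration, and the gap between the smoothed and indicator-based estimates stands in, loosely, for the surrogate-uncertainty signal the paper formalizes.

```python
from math import erf, sqrt

# Hedged sketch of the probabilistic classification function idea:
# pi(x) = Phi(-mu(x)/sigma(x)) is the probability that the limit state
# g(x) < 0 under the Kriging predictor. Averaging pi over Monte Carlo
# samples gives a smoothed failure-probability estimate; its gap to the
# hard-indicator estimate reflects surrogate-model uncertainty.
# The mu/sigma values below are illustrative, not from the paper.

def phi(z):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu = [1.5, -0.2, 0.05, 2.0]      # Kriging means of g at MC samples
sigma = [0.3, 0.3, 0.5, 0.1]     # Kriging standard deviations
pi = [phi(-m / s) for m, s in zip(mu, sigma)]
pf_soft = sum(pi) / len(pi)                     # smoothed estimate
pf_hard = sum(m < 0 for m in mu) / len(mu)      # indicator estimate
gap = abs(pf_soft - pf_hard)                    # surrogate-uncertainty signal
```

When the surrogate is confident everywhere (|μ| ≫ σ), the two estimates coincide and the gap vanishes, which is the intuition behind using such a quantity as a stopping criterion.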
Funding: Supported by the National Natural Science Foundation of China (No. 52488201), the Natural Science Basic Research Program of Shaanxi (No. 2024JC-YBMS-284), the Key Research and Development Program of Shaanxi (No. 2024GHYBXM-02), and the Fundamental Research Funds for the Central Universities.
Abstract: Machine learning combined with density functional theory (DFT) enables rapid exploration of the space of catalyst descriptors such as adsorption energy, facilitating rapid and effective catalyst screening. However, models for predicting adsorption energies on oxides are still lacking, owing to the complexity of elemental species and ambiguous coordination environments. This work proposes an active learning workflow (LeNN) founded on local electron-transfer features (e) and the principle of coordinate-rotation invariance. By accurately characterizing the electron transfer to adsorption-site atoms and their surrounding geometric structures, LeNN mitigates abrupt feature changes across element types and clarifies coordination environments. As a result, it predicts the *H adsorption energy on binary oxide surfaces with a mean absolute error (MAE) below 0.18 eV. Moreover, we incorporate local coverage (θ_l) and leverage a neural network ensemble to establish an active learning workflow, attaining a prediction MAE below 0.2 eV for 5419 multi-*H adsorption structures. These findings validate the universality and capability of the proposed features for predicting the *H adsorption energy on binary oxide surfaces.
Funding: Supported by the National Natural Science Foundation of China (61971078) and the Chongqing Municipal Education Commission Science and Technology Major Project (KJZDM202301901).
Abstract: The effectiveness of facial expression recognition (FER) algorithms hinges on the model's quality and the availability of a substantial amount of labeled expression data. However, labeling large datasets demands significant human, time, and financial resources. Although active learning methods have mitigated the dependency on extensive labeled data, a cold-start problem persists in small to medium-sized expression recognition datasets. This issue arises because the initial labeled data often fail to represent the full spectrum of facial expression characteristics. This paper introduces an active learning approach that integrates uncertainty estimation, aiming to improve the precision of facial expression recognition regardless of dataset scale. The method comprises two primary phases. First, the model undergoes self-supervised pre-training using contrastive learning and uncertainty estimation to bolster its feature extraction capabilities. Second, the model is fine-tuned using the prior knowledge obtained in the pre-training phase to significantly improve recognition accuracy. In the pre-training phase, the model employs contrastive learning to extract fundamental feature representations from the complete unlabeled dataset. These features are then weighted through a self-attention mechanism with rank regularization, and data from the low-weighted set are relabeled to further refine the model's feature extraction ability. The pre-trained model is then used in active learning to select and label information-rich samples more efficiently. Experimental results demonstrate that the proposed method significantly outperforms existing approaches, improving recognition accuracy by 5.09% and 3.82% over the best existing active learning methods, Margin and Least Confidence, respectively, and by 1.61% compared with the conventional segmented active learning method.
Funding: Supported by the UC-National Lab In-Residence Graduate Fellowship (Grant L21GF3606), a DOD National Defense Science and Engineering Graduate (NDSEG) Research Fellowship, the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project numbers 20170668PRD1 and 20210213ER, and the NGA under Contract No. HM04762110003.
Abstract: Graph learning, when used as a semi-supervised learning (SSL) method, performs well for classification tasks with a low label rate. We provide a graph-based batch active learning pipeline for pixel/patch-neighborhood multi- or hyperspectral image segmentation. Our batch active learning approach selects a collection of unlabeled pixels that satisfy a graph local-maximum constraint on the active learning acquisition function, which determines the relative importance of each pixel to the classification. This work builds on recent advances in the design of novel active learning acquisition functions (e.g., the Model Change approach in arXiv:2110.07739) while adding important further developments, including patch-neighborhood image analysis and batch active learning methods, to further increase the accuracy and greatly increase the computational efficiency of these methods. In addition to improvements in accuracy, our approach can greatly reduce the number of labeled pixels needed to achieve the same level of accuracy as randomly selected labeled pixels.
Funding: Supported by the DOD National Defense Science and Engineering Graduate (NDSEG) Research Fellowship and the NGA under Contract No. HM04762110003.
Abstract: Active learning in semi-supervised classification involves introducing additional labels for unlabelled data to improve the accuracy of the underlying classifier. A challenge is to identify which points to label to best improve performance while limiting the number of new labels. "Model Change" active learning quantifies the change incurred in the classifier by introducing the additional label(s). We pair this idea with graph-based semi-supervised learning (SSL) methods that use the spectrum of the graph Laplacian matrix, which can be truncated to avoid prohibitively large computational and storage costs. We consider a family of convex loss functions for which the acquisition function can be efficiently approximated using the Laplace approximation of the posterior distribution. We show a variety of multiclass examples that illustrate improved performance over the prior state of the art.
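To ground the graph-based SSL component in something concrete, here is a toy sketch of harmonic label propagation on a six-node graph; the method described above additionally truncates the Laplacian spectrum and adds the Model Change acquisition, neither of which this sketch implements.

```python
import numpy as np

# Hedged toy sketch of graph-based semi-supervised learning: the harmonic
# solution solves L_uu f_u = -L_ul y_l on the unlabeled nodes of a graph
# made of two triangles joined by a single bridge edge.

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A            # combinatorial graph Laplacian

labeled = [0, 5]
y_l = np.array([1.0, -1.0])               # node 0 -> class +1, node 5 -> class -1
unlabeled = [1, 2, 3, 4]

L_uu = L[np.ix_(unlabeled, unlabeled)]
L_ul = L[np.ix_(unlabeled, labeled)]
f_u = np.linalg.solve(L_uu, -L_ul @ y_l)  # harmonic label propagation
```

By the graph's symmetry the solution is antisymmetric: nodes in the +1 triangle get positive scores that decay toward the bridge, and the -1 triangle mirrors them.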
Abstract: The sampling of training data is a bottleneck in the development of artificial intelligence (AI) models, owing to the processing of huge amounts of data or to the difficulty of accessing data in industrial practice. Active learning (AL) approaches are useful in this context, since they maximize the performance of the trained model while minimizing the number of training samples. Such smart sampling methodologies iteratively select the points that should be labeled and added to the training set based on their informativeness and pertinence, with query rules defined to judge the relevance of a data instance. In this paper, we propose an AL methodology based on a physics-based query rule. Given industrial objectives from the physical process in which the AI model is involved, the physics-based AL approach iteratively converges to the data instances fulfilling those objectives while sampling training points. The trained surrogate model is therefore accurate where the potentially interesting data instances lie from the industrial point of view, and coarse everywhere else, where the data instances are of no interest in the industrial context studied.
Abstract: Support vector machines (SVMs) are a popular class of supervised learning algorithms, and are particularly applicable to large and high-dimensional classification problems. Like most machine learning methods for data classification and information retrieval, they require manually labeled data samples in the training stage. However, manual labeling is a time-consuming and error-prone task. One possible solution to this issue is to exploit the large number of unlabeled samples that are easily accessible via the internet. This paper presents a novel active learning method for text categorization. The main objective of active learning is to reduce the labeling effort, without compromising the accuracy of classification, by intelligently selecting which samples should be labeled. The proposed method selects a batch of informative samples using the posterior probabilities provided by a set of multi-class SVM classifiers; these samples are then manually labeled by an expert. Experimental results indicate that the proposed active learning method significantly reduces the labeling effort while simultaneously enhancing the classification accuracy.
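A minimal sketch of batch selection from multi-class posteriors such as an SVM committee would supply; predictive entropy is used here as one common informativeness measure, which may differ from the paper's exact criterion, and the posterior matrix is invented.

```python
import numpy as np

# Hedged sketch of batch active learning from class posteriors:
# pick the k samples with the highest predictive entropy for labeling.
# The posterior values are illustrative, not from any real classifier.

def select_batch(posteriors, k):
    p = np.clip(posteriors, 1e-12, 1.0)
    entropy = -np.sum(p * np.log(p), axis=1)   # per-sample uncertainty
    return np.argsort(entropy)[::-1][:k]       # most uncertain first

posteriors = np.array([
    [0.98, 0.01, 0.01],   # confident -> uninformative
    [0.34, 0.33, 0.33],   # near-uniform -> most informative
    [0.70, 0.20, 0.10],
    [0.50, 0.49, 0.01],
])
batch = select_batch(posteriors, k=2)
```

The selected indices would then be handed to the expert for labeling, and the classifiers retrained on the enlarged labeled set.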
Abstract: This paper describes a new method for active learning in content-based image retrieval. The proposed method first uses support vector machine (SVM) classifiers to learn an initial query concept. The active learning scheme then employs a similarity measure to check the current version space and selects the images with the maximum expected information gain to solicit the user's labels. Finally, the learned query is refined based on the user's further feedback. With the combination of the SVM classifier and the similarity measure, the proposed method can alleviate the model bias existing in each of them. Our experiments on several query concepts show that the proposed method can learn the user's query concept quickly and effectively within only a few iterations.
Funding: Supported by the National Basic Research Program (973) of China (No. 2004CB719401) and the National Research Foundation for the Doctoral Program of Higher Education of China (No. 20060003060).
文摘In this paper, we present a novel Support Vector Machine active learning algorithm for effective 3D model retrieval using the concept of relevance feedback. The proposed method learns from the most informative objects which are marked by the user, and then creates a boundary separating the relevant models from irrelevant ones. What it needs is only a small number of 3D models labelled by the user. It can grasp the user's semantic knowledge rapidly and accurately. Experimental results showed that the proposed algorithm significantly improves the retrieval effectiveness. Compared with four state-of-the-art query refinement schemes for 3D model retrieval, it provides superior retrieval performance after no more than two rounds of relevance feedback.
Funding: Supported by the National Natural Science Foundation of China (61402225, 61728204), Innovation Funding (NJ20160028, NT2018028, NS2018057), the Aeronautical Science Foundation of China (2016551500), the State Key Laboratory for Smart Grid Protection and Operation Control Foundation, the Science and Technology Funds from National State Grid Ltd., and the China Degree and Graduate Education Fund.
Abstract: Active learning has been widely utilized to reduce the labeling cost of supervised learning: by selecting specific instances to train the model, performance is improved within a limited number of steps. However, little work has examined the effectiveness of active learning in this setting. In this paper, we propose a deep active learning model with bidirectional encoder representations from transformers (BERT) for text classification. BERT takes advantage of the self-attention mechanism to integrate contextual information, which helps accelerate the convergence of training. For the active learning process, we design an instance selection strategy based on posterior probabilities: Margin, Intra-correlation and Inter-correlation (MII). Selected instances are characterized by a small margin, low intra-cohesion, and high inter-cohesion. We conduct extensive experiments and analyses: the effect of the learner is compared, and the effects of the sampling strategy and text classification are assessed on three real datasets. The results show that our method outperforms the baselines in terms of accuracy.
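The margin component of the MII criterion can be sketched as follows; the intra-/inter-correlation terms are omitted, and the example posteriors are invented for illustration.

```python
import numpy as np

# Hedged sketch of margin-based instance selection: margin =
# p(top class) - p(second class); small margins mark the instances
# the classifier finds hardest to separate, so they are queried first.

def margins(posteriors):
    part = np.sort(posteriors, axis=1)     # per-row ascending sort
    return part[:, -1] - part[:, -2]       # top-1 minus top-2 probability

posteriors = np.array([
    [0.90, 0.05, 0.05],   # clear-cut
    [0.40, 0.38, 0.22],   # smallest margin -> queried first
    [0.60, 0.30, 0.10],
])
order = np.argsort(margins(posteriors))    # ascending margin = query order
```

A full MII-style score would combine this margin with correlation terms among the candidate instances before choosing the batch.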
Funding: Supported by the China Scholarship Council.
文摘This paper is devoted to the probabilistic stability analysis of a tunnel face excavated in a two-layer soil. The interface of the soil layers is assumed to be positioned above the tunnel roof. In the framework of limit analysis, a rotational failure mechanism is adopted to describe the face failure considering different shear strength parameters in the two layers. The surrogate Kriging model is introduced to replace the actual performance function to perform a Monte Carlo simulation. An active learning function is used to train the Kriging model which can ensure an efficient tunnel face failure probability prediction without loss of accuracy. The deterministic stability analysis is given to validate the proposed tunnel face failure model. Subsequently, the number of initial sampling points, the correlation coefficient, the distribution type and the coefficient of variability of random variables are discussed to show their influences on the failure probability. The proposed approach is an advisable alternative for the tunnel face stability assessment and can provide guidance for tunnel design.
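A learning function commonly used to train Kriging models for failure-probability estimation (and consistent with the active learning step described above, though the paper's exact formulation is not shown here) is the U function, sketched below with invented predictions.

```python
import numpy as np

# Hedged sketch of the U learning function for Kriging-based reliability
# analysis: U = |mu| / sigma. The candidate with the smallest U is the
# most ambiguous about the sign of the limit state, so it is evaluated
# next; U >= 2 everywhere is a common stopping rule. Values are invented.

def u_function(mu, sigma):
    return np.abs(mu) / np.maximum(sigma, 1e-12)   # guard against sigma = 0

mu = np.array([2.0, -0.1, 0.5, 3.0])       # Kriging mean predictions of g
sigma = np.array([0.5, 0.4, 0.1, 1.0])     # Kriging standard deviations
U = u_function(mu, sigma)
next_idx = int(np.argmin(U))               # point to evaluate with the true model
converged = bool(np.min(U) >= 2.0)         # stop enriching once all U >= 2
```

Each iteration evaluates the true performance function at `next_idx`, refits the Kriging model, and repeats until the stopping rule holds, after which the Monte Carlo failure probability is read off the surrogate.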
Funding: Supported by the National Natural Science Foundation of China under Grant 61501515.
Abstract: Owing to the continuous barrage of cyber threats, there is a massive amount of cyber threat intelligence, much of it from textual sources, and for its analysis many security analysts rely on cumbersome and time-consuming manual effort. A cybersecurity knowledge graph plays a significant role in the automatic analysis of cyber threat intelligence, and as the foundation for constructing such a graph, named entity recognition (NER) is required to identify critical threat-related elements from textual cyber threat intelligence. Recently, deep neural network-based models have attained very good results in NER; however, their performance relies heavily on the amount of labeled data. Since labeled data in cybersecurity are scarce, in this paper we propose an adversarial active learning framework to effectively select informative samples for further annotation. In addition, leveraging the long short-term memory (LSTM) network and the bidirectional LSTM (BiLSTM) network, we propose a novel NER model that introduces a dynamic attention mechanism into the BiLSTM-LSTM encoder-decoder. With the selected informative samples annotated, the proposed NER model is retrained; as a result, its performance is incrementally enhanced at low labeling cost. Experimental results show the effectiveness of the proposed method.
Abstract: The majority of big data analytics applied to transportation datasets suffer from being too domain-specific; that is, they draw conclusions for a dataset based on analytics over that same dataset. As a result, models trained on one domain (e.g., taxi data) transfer poorly to a different domain (e.g., Uber data). To achieve accurate analyses on a new domain, substantial amounts of data must be available, which limits practical applications. To remedy this, we propose to use semi-supervised and active learning on big data to accomplish the domain adaptation task: selectively choosing a small number of data points from a new domain while achieving performance comparable to using all the data points. We choose the New York City (NYC) taxi and Uber transportation data as our dataset, simulating different domains with 90% as the source domain for training and the remaining 10% as the target domain for evaluation. We propose semi-supervised and active learning strategies and apply them to the source domain for selecting data points. Experimental results show that our adaptation achieves performance comparable to using all data points while using only a fraction of them, substantially reducing the amount of data required. Our approach has two major advantages: it can produce accurate analytics and predictions when big datasets are not available, and even when they are, it chooses the most informative data points, making the process much more efficient without processing huge amounts of data.